Installing and Deploying Taobao's Distributed Key/Value Storage Engine Tair, with a Java Client Test Example


Contents


1. Introduction

2. Installation Steps and Notes on Problems

3. Deployment and Configuration

4. Java Client Test

5. References


Disclaimer


1. The installation and deployment below were done on Linux: CentOS 6 (64-bit). Other Linux distributions may differ slightly.

2. Some posts online claim that Tair installation failures can be caused by the gcc version, i.e. that a newer gcc may not support certain features and therefore break the build. My own experiments show this claim is wrong: a Tair installation can fail for many reasons, but the gcc version is not one of them. My gcc was originally 4.4.7; when the Tair build failed I rebuilt with an older gcc (4.1.2) and hit exactly the same problem. The real cause turned out to be something else, and after fixing it the build succeeded with gcc 4.4.7.

3. Parts of the content below draw on the official Tair documentation. Please credit the original article when reposting.


Main text


1. Introduction


Tair is a distributed key/value storage engine developed in-house at Taobao. It can be used in two modes, persistent and non-persistent: non-persistent Tair can be viewed as a distributed cache, while persistent Tair stores data on disk. To cope with data loss caused by disk failure, Tair can be configured with a number of replicas per data item; it automatically places the replicas of a given piece of data on different hosts, and when one host fails and can no longer serve requests, the remaining replicas continue to provide the service.


2. Installation Steps and Notes on Problems


2.1 Installation steps

Because Tair is implemented on top of the low-level libraries tbsys and tbnet, these two dependencies must be installed before Tair itself.


2.1.1 Getting the source code

First download the source code via svn; the svn client can be installed with sudo yum install subversion.


  
  
svn checkout http://code.taobao.org/svn/tb-common-utils/trunk/ tb-common-utils # get the tbsys and tbnet source code
svn checkout http://code.taobao.org/svn/tair/trunk/ tair # get the tair source code

2.1.2 Installing dependent libraries and tools

Before building Tair or tbnet/tbsys, a few build dependencies have to be installed. It is best to check first whether they are already present; on an rpm-based OS you can check with rpm -q <package-name>.
a. Install libtool
sudo yum install libtool # also installs automake and autoconf, which libtool depends on
b. Install the boost-devel library
sudo yum install boost-devel
c. Install the zlib library
sudo yum install zlib-devel

2.1.3 Building and installing tbsys and tbnet

Tair depends on the tbsys and tbnet libraries, so these two libraries must be built and installed first.

a. Set the environment variable TBLIB_ROOT
After obtaining the source code, first set the environment variable TBLIB_ROOT to the directory you want to install into. This variable is used again later when building and installing Tair itself. For example, to install into the lib directory under the current user's home, run export TBLIB_ROOT="~/lib".
b. Build and install
Enter the source directory and run build.sh to build and install.

2.1.4 Building and installing Tair

Enter the Tair source directory and build and install in the following order:

./bootstrap.sh
./configure   # when running configure, you can use --with-boost=xxxx to specify the boost directory and --with-release=yes to build a release version
make
make install

After a successful installation, a folder named tair_bin is created under the current user's home directory; this is Tair's installation directory.

2.2 Notes on problems encountered

The installation was not entirely smooth; quite a few problems came up along the way. They are briefly recorded here for reference.

2.2.1 g++ not installed

checking for C++ compiler default output file name...
configure: error: in `/home/config_server/tair/tb-common-utils/tbnet':
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
make: *** No targets specified and no makefile found. Stop.
make: *** No rule to make target `install'. Stop.
This means gcc is installed but g++ is not. Tair is written in C++ and must be compiled with g++, so installing it with sudo yum install gcc-c++ fixes the problem.

2.2.2 Wrong header include paths

In file included from channel.cpp:16:
tbnet.h:39:19: error: tbsys.h: No such file or directory
databuffer.h: In member function 'void tbnet::DataBuffer::expand(int)':
databuffer.h:429: error: 'ERROR' was not declared in this scope
databuffer.h:429: error: 'TBSYS_LOG' was not declared in this scope
socket.h: At global scope:
socket.h:191: error: 'tbsys' has not been declared
socket.h:191: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
socket.h:191: error: expected ';' before '_dnsMutex'
channelpool.h:85: error: 'tbsys' has not been declared
channelpool.h:85: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
channelpool.h:85: error: expected ';' before '_mutex'
channelpool.h:93: error: 'atomic_t' does not name a type
channelpool.h:94: error: 'atomic_t' does not name a type
connection.h:164: error: 'tbsys' has not been declared
connection.h:164: error: ISO C++ forbids declaration of 'CThreadCond' with no type
connection.h:164: error: expected ';' before '_outputCond'
iocomponent.h:184: error: 'atomic_t' does not name a type
iocomponent.h: In member function 'int tbnet::IOComponent::addRef()':
iocomponent.h:108: error: '_refcount' was not declared in this scope
iocomponent.h:108: error: 'atomic_add_return' was not declared in this scope
iocomponent.h: In member function 'void tbnet::IOComponent::subRef()':
iocomponent.h:115: error: '_refcount' was not declared in this scope
iocomponent.h:115: error: 'atomic_dec' was not declared in this scope
iocomponent.h: In member function 'int tbnet::IOComponent::getRef()':
iocomponent.h:122: error: '_refcount' was not declared in this scope
iocomponent.h:122: error: 'atomic_read' was not declared in this scope
transport.h: At global scope:
transport.h:23: error: 'tbsys' has not been declared
transport.h:23: error: expected `{' before 'Runnable'
transport.h:23: error: invalid function declaration
packetqueuethread.h:28: error: 'tbsys' has not been declared
packetqueuethread.h:28: error: expected `{' before 'CDefaultRunnable'
packetqueuethread.h:28: error: invalid function declaration
connectionmanager.h:93: error: 'tbsys' has not been declared
connectionmanager.h:93: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
connectionmanager.h:93: error: expected ';' before '_mutex'
make[1]: *** [channel.lo] Error 1
make[1]: Leaving directory `/home/tair/tair/tb-common-utils/tbnet/src'
make: *** [install-recursive] Error 1
have installed in ~/lib
The cause is that tbnet and tbsys live in two different directories, but their source files include each other's headers without absolute or relative paths. Adding both source directories to the C++ include path environment variable fixes it.

CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tair/tair/tb-common-utils/tbsys/src:/home/tair/tair/tb-common-utils/tbnet/src
export CPLUS_INCLUDE_PATH


3. Deployment and Configuration

Running Tair requires at least one config server and one data server; the recommended setup is two config servers (a primary and a backup) plus multiple data servers. Tair has three configuration files, for the config server, the data server, and the group information respectively. Sample versions of all three are provided in the etc directory under the tair_bin installation directory; copy them to create the configuration files we need.

cp configserver.conf.default configserver.conf
cp dataserver.conf.default dataserver.conf
cp group.conf.default group.conf

My deployment environment:

[Figure: deployment environment]


Before configuring, consult the detailed descriptions of the configuration fields on the official site. Below I give my own configuration directly, with brief explanations.


3.1 Configuring the config server

#
# tair 2.3 --- configserver config
#
[public]
config_server=10.10.7.144:51980
config_server=10.10.7.144:51980

[configserver]
port=51980
log_file=/home/dataserver1/tair_bin/logs/config.log
pid_file=/home/dataserver1/tair_bin/logs/config.pid
log_level=warn
group_file=/home/dataserver1/tair_bin/etc/group.conf
data_dir=/home/dataserver1/tair_bin/data/data
dev_name=venet0:0
Notes:

(1) First configure the config server address and port. The port can stay at its default; change the address to your own. Normally there is a primary and a backup config server, but since this is only a test I use a single one here.

(2) For paths such as log_file and pid_file it is best to use absolute paths. The defaults are relative paths, and incorrect ones at that (they do not go back up to the parent directory), so they need to be changed. Note that the data and log files matter a lot: the data files are indispensable, and the log files are what give you the detailed cause when a deployment goes wrong.

(3) dev_name is important: set it to the name of the network interface you are actually using. The default is eth0; I changed it to match my own network setup (check the interface name with ifconfig).


3.2 Configuring the data server

#
#  tair 2.3 --- tairserver config
#
[public]
config_server=10.10.7.144:51980
config_server=10.10.7.144:51980

[tairserver]
#
#storage_engine:
#
# mdb
# kdb
# ldb
#
storage_engine=ldb
local_mode=0
#
#mdb_type:
# mdb
# mdb_shm
#
mdb_type=mdb_shm
#
# if you just run 1 tairserver on a computer, you may ignore this option.
# if you want to run more than 1 tairserver on a computer, each tairserver must have their own "mdb_shm_path"
#
#
mdb_shm_path=/mdb_shm_path01
#tairserver listen port
port=51910
heartbeat_port=55910
process_thread_num=16
#
#mdb size in MB
#
slab_mem_size=1024
log_file=/home/dataserver1/tair_bin/logs/server.log
pid_file=/home/dataserver1/tair_bin/logs/server.pid
log_level=warn
dev_name=venet0:0
ulog_dir=/home/dataserver1/tair_bin/data/ulog
ulog_file_number=3
ulog_file_size=64
check_expired_hour_range=2-4
check_slab_hour_range=5-7
dup_sync=1
do_rsync=0
# much resemble json format
# one local cluster config and one or multi remote cluster config.
# {local:[master_cs_addr,slave_cs_addr,group_name,timeout_ms,queue_limit],remote:[...],remote:[...]}
rsync_conf={local:[10.0.0.1:5198,10.0.0.2:5198,group_local,2000,1000],remote:[10.0.1.1:5198,10.0.1.2:5198,group_remote,2000,3000]}
# if same data can be updated in local and remote cluster, then we need care modify time to
# reserve latest update when do rsync to each other.
rsync_mtime_care=0
# rsync data directory(retry_log/fail_log..)
rsync_data_dir=/home/dataserver1/tair_bin/data/remote
# max log file size to record failed rsync data, rotate to a new file when over the limit
rsync_fail_log_size=30000000
# whether do retry when rsync failed at first time
rsync_do_retry=0
# when doing retry, size limit of retry log's memory use
rsync_retry_log_mem_size=100000000

[fdb]
# in MB
index_mmap_size=30
cache_size=256
bucket_size=10223
free_block_pool_size=8
data_dir=/home/dataserver1/tair_bin/data/fdb
fdb_name=tair_fdb

[kdb]
# in byte
map_size=10485760      # the size of the internal memory-mapped region
bucket_size=1048583    # the number of buckets of the hash table
record_align=128       # the power of the alignment of record size
data_dir=/home/dataserver1/tair_bin/data/kdb      # the directory of kdb's data

[ldb]
#### ldb manager config
## data dir prefix, db path will be data/ldbxx, "xx" means db instance index.
## so if ldb_db_instance_count = 2, then leveldb will init in
## /data/ldb1/ldb/, /data/ldb2/ldb/. We can mount each disk to
## data/ldb1, data/ldb2, so we can init each instance on each disk.
data_dir=/home/dataserver1/tair_bin/data/ldb
## leveldb instance count, buckets will be well-distributed to instances
ldb_db_instance_count=1
## whether load backup version when startup.
## backup version may be created to maintain some db data of specifid version.
ldb_load_backup_version=0
## whether support version strategy.
## if yes, put will do get operation to update existed items's meta info(version .etc),
## get unexist item is expensive for leveldb. set 0 to disable if nobody even care version stuff.
ldb_db_version_care=1
## time range to compact for gc, 1-1 means do no compaction at all
ldb_compact_gc_range = 3-6
## backgroud task check compact interval (s)
ldb_check_compact_interval = 120
## use cache count, 0 means NOT use cache,`ldb_use_cache_count should NOT be larger
## than `ldb_db_instance_count, and better to be a factor of `ldb_db_instance_count.
## each cache mdb's config depends on mdb's config item(mdb_type, slab_mem_size, etc)
ldb_use_cache_count=1
## cache stat can't report configserver, record stat locally, stat file size.
## file will be rotate when file size is over this.
ldb_cache_stat_file_size=20971520
## migrate item batch size one time (1M)
ldb_migrate_batch_size = 3145728
## migrate item batch count.
## real batch migrate items depends on the smaller size/count
ldb_migrate_batch_count = 5000
## comparator_type bitcmp by default
# ldb_comparator_type=numeric
## numeric comparator: special compare method for user_key sorting in order to reducing compact
## parameters for numeric compare. format: [meta][prefix][delimiter][number][suffix]
## skip meta size in compare
# ldb_userkey_skip_meta_size=2
## delimiter between prefix and number
# ldb_userkey_num_delimiter=:
####
## use blommfilter
ldb_use_bloomfilter=1
## use mmap to speed up random acess file(sstable),may cost much memory
ldb_use_mmap_random_access=0
## how many highest levels to limit compaction
ldb_limit_compact_level_count=0
## limit compaction ratio: allow doing one compaction every ldb_limit_compact_interval
## 0 means limit all compaction
ldb_limit_compact_count_interval=0
## limit compaction time interval
## 0 means limit all compaction
ldb_limit_compact_time_interval=0
## limit compaction time range, start == end means doing limit the whole day.
ldb_limit_compact_time_range=6-1
## limit delete obsolete files when finishing one compaction
ldb_limit_delete_obsolete_file_interval=5
## whether trigger compaction by seek
ldb_do_seek_compaction=0
## whether split mmt when compaction with user-define logic(bucket range, eg)
ldb_do_split_mmt_compaction=0

#### following config effects on FastDump ####
## when ldb_db_instance_count > 1, bucket will be sharded to instance base on config strategy.
## current supported:
##  hash : just do integer hash to bucket number then module to instance, instance's balance may be
##         not perfect in small buckets set. same bucket will be sharded to same instance
##         all the time, so data will be reused even if buckets owned by server changed(maybe cluster has changed),
##  map  : handle to get better balance among all instances. same bucket may be sharded to different instance based
##         on different buckets set(data will be migrated among instances).
ldb_bucket_index_to_instance_strategy=map
## bucket index can be updated. this is useful if the cluster wouldn't change once started
## even server down/up accidently.
ldb_bucket_index_can_update=1
## strategy map will save bucket index statistics into file, this is the file's directory
ldb_bucket_index_file_dir=/home/dataserver1/tair_bin/data/bindex
## memory usage for memtable sharded by bucket when batch-put(especially for FastDump)
ldb_max_mem_usage_for_memtable=3221225472
####

#### leveldb config (Warning: you should know what you're doing.)
## one leveldb instance max open files(actually table_cache_ capacity, consider as working set, see `ldb_table_cache_size)
ldb_max_open_files=655
## whether return fail when occure fail when init/load db, and
## if true, read data when compactiong will verify checksum
ldb_paranoid_check=0
## memtable size
ldb_write_buffer_size=67108864
## sstable size
ldb_target_file_size=8388608
## max file size in each level. level-n (n > 0): (n - 1) * 10 * ldb_base_level_size
ldb_base_level_size=134217728
## sstable's block size
# ldb_block_size=4096
## sstable cache size (override `ldb_max_open_files)
ldb_table_cache_size=1073741824
##block cache size
ldb_block_cache_size=16777216
## arena used by memtable, arena block size
#ldb_arenablock_size=4096
## key is prefix-compressed period in block,
## this is period length(how many keys will be prefix-compressed period)
# ldb_block_restart_interval=16
## specifid compression method (snappy only now)
# ldb_compression=1
## compact when sstables count in level-0 is over this trigger
ldb_l0_compaction_trigger=1
## write will slow down when sstables count in level-0 is over this trigger
## or sstables' filesize in level-0 is over trigger * ldb_write_buffer_size if ldb_l0_limit_write_with_count=0
ldb_l0_slowdown_write_trigger=32
## write will stop(wait until trigger down)
ldb_l0_stop_write_trigger=64
## when write memtable, max level to below maybe
ldb_max_memcompact_level=3
## read verify checksum
ldb_read_verify_checksums=0
## write sync log. (one write will sync log once, expensive)
ldb_write_sync=0
## bits per key when use bloom filter
#ldb_bloomfilter_bits_per_key=10
## filter data base logarithm. filterbasesize=1<<ldb_filter_base_logarithm
#ldb_filter_base_logarithm=12

This configuration file is long; the highlighted parts (shown in red in the original post) are the ones I modified, everything else uses the defaults. In particular:

(1) The config_server settings must be exactly the same as in the previous file.

(2) port and heartbeat_port are the data server's service port and heartbeat port; make sure your system lets you use them. The defaults are usually fine; I changed them only because my Linux system restricts me to ports above 30000. Adjust to your own situation.

(3) The data and log files are important; as before, absolute paths are best.


3.3 Configuring the group information

#group name
[group_1]
# data move is 1 means when some data serve down, the migrating will be start.
# default value is 0
_data_move=0
#_min_data_server_count: when data servers left in a group less than this value, config server will stop serve for this group
#default value is copy count.
_min_data_server_count=1
#_plugIns_list=libStaticPlugIn.so
_build_strategy=1 #1 normal 2 rack
_build_diff_ratio=0.6 #how much difference is allowd between different rack
# diff_ratio =  |data_sever_count_in_rack1 - data_server_count_in_rack2| / max (data_sever_count_in_rack1, data_server_count_in_rack2)
# diff_ration must less than _build_diff_ratio
_pos_mask=65535  # 65535 is 0xffff  this will be used to gernerate rack info. 64 bit serverId & _pos_mask is the rack info,
_copy_count=1
_bucket_number=1023
# accept ds strategy. 1 means accept ds automatically
_accept_strategy=1
# data center A
_server_list=10.10.7.146:51910
#_server_list=192.168.1.2:5191
#_server_list=192.168.1.3:5191
#_server_list=192.168.1.4:5191
# data center B
#_server_list=192.168.2.1:5191
#_server_list=192.168.2.2:5191
#_server_list=192.168.2.3:5191
#_server_list=192.168.2.4:5191
#quota info
_areaCapacity_list=0,1124000;

In this file I only configured the data server list; since I have a single data server, only one entry is needed.


3.4 Starting the cluster

Once installation and configuration are complete, the cluster can be started. Start the data server(s) first and then the config server(s). If you are adding a data server to an existing cluster, you can start the dataserver process first and then modify group.conf; if you modify group.conf before starting the process, you need to run touch group.conf. The scripts directory contains a helper script, tair.sh: tair.sh start_ds starts a data server and tair.sh start_cs starts a config server. The script is fairly simple and expects the configuration files to be in fixed locations with fixed names. You can also start the cluster by directly running tair_server (data server) and tair_cfg_svr (config server) from the installation directory.


After entering the tair_bin directory, start the servers in order:

sudo sbin/tair_server -f etc/dataserver.conf     # run on the data server host
sudo sbin/tair_cfg_svr -f etc/configserver.conf  # run on the config server host
After running the start commands, check on both hosts with ps aux | grep tair that the processes are up. Getting them running is only the first step; you still need to verify that the cluster actually works, using the following test:

sudo sbin/tairclient -c 10.10.7.144:51980 -g group_1
TAIR> put k1 v1
put: success
TAIR> put k2 v2
put: success
TAIR> get k2
KEY: k2, LEN: 2
Here 10.10.7.144:51980 is the config server's IP:PORT, and group_1 is the group name configured in group.conf.


3.5 Errors encountered during deployment

If startup fails or the put/get test misbehaves, check the log files, logs/config.log on the config server side and logs/server.log on the data server side; they contain the specific error messages.


3.5.1 Too many open files

[2014-07-09 10:37:24.863119] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001013.stat] failed: Too many open files
[2014-07-09 10:37:24.863132] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001014.stat] failed: Too many open files
[2014-07-09 10:37:24.863145] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001015.stat] failed: Too many open files
[2014-07-09 10:37:24.863154] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001016.stat] failed: Too many open files
[2014-07-09 10:37:24.863162] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001017.stat] failed: Too many open files
Since my storage engine is ldb, which has a configuration option ldb_max_open_files=65535 (the maximum number of files it may open, 65535 by default), and my system does not allow that many, this error appeared. You can check the per-process open-file limit with ulimit -n; it is usually 1024, far below 65535. There are two ways to fix this: lower ldb_max_open_files below 1024, or raise the system's open-file limit (the references below describe how). Since this is only a test deployment, I simply lowered ldb_max_open_files.


3.5.2 Data server problems


A misconfigured dataserver reports all kinds of errors; below are some that I ran into:


Problem 1:

TAIR> put abc a
put: unknow
TAIR> put a 11
put: unknow
TAIR> put abc 33
put: unknow
TAIR> get a
get failed: data not exists.

Problem 2:

ERROR wakeup_wait_object (../../src/common/wait_object.hpp:302) [140627106383616] [3] packet is null
Both cases are situations where the dataserver does start, but put/get operations fail and the dataserver immediately goes down. When this happens, check the logs for the specific error and correct the offending configuration.

There is also this kind of error message:

[2014-07-09 09:08:11.646430] ERROR rebuild (group_info.cpp:879) [139740048353024] can not get enough data servers. need 1 lef 0
This means the config server could not find a data server at startup; in other words, the data server must be started successfully before the config server.


3.5.3 Port problems

start tair_cfg_srv listen port 5199 error

The default port numbers do not always work; they may need to be adjusted to your system's restrictions. For example, my environment only allows ordinary users to use ports above 30000, so I could not use the defaults and had to change them.


4. Java Client Test

Tair is a distributed key/value storage system, so data is usually spread across multiple data nodes. The client has to determine which node stores a given piece of data before it can perform any operation on it.

The Tair client obtains this information by talking to the config server. The config server maintains a table that maps hash values to the nodes storing the corresponding data; when the client starts, it first contacts the config server to fetch this table.

Once it has the table, the client can serve requests: it hashes the requested key, looks up in the table which data node is responsible for that key, and then talks to that node directly to complete the user's request.
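To make the routing step concrete, here is a minimal Java sketch of the idea just described. It is purely illustrative: the names bucketCount and bucketToServer are hypothetical and are not the real tair-client internals; the bucket count and server address simply mirror the values used in this deployment (_bucket_number=1023, one data server at 10.10.7.146:51910).

import java.util.Arrays;

// Illustrative sketch only (hypothetical names, not the real tair-client internals):
// hash the key, map the hash to a bucket, then look up the data server responsible
// for that bucket in the table fetched from the config server.
public class RoutingSketch {
    public static void main(String[] args) {
        int bucketCount = 1023; // mirrors _bucket_number in group.conf
        // Table obtained from the config server: bucket index -> data server address.
        // With a single data server, every bucket maps to the same node.
        String[] bucketToServer = new String[bucketCount];
        Arrays.fill(bucketToServer, "10.10.7.146:51910");

        String key = "k3";
        int bucket = (key.hashCode() & 0x7fffffff) % bucketCount; // non-negative hash -> bucket
        String dataServer = bucketToServer[bucket];               // node that serves this key
        System.out.println("key " + key + " -> bucket " + bucket + " -> " + dataServer);
    }
}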


Tair currently provides Java and C++ clients. The Java client already has a ready-made implementation (the corresponding jar package can be downloaded), so we can simply call its wrapped interfaces; I have not yet seen a released implementation of the C++ client (you would have to build it yourself). Here I use the Java client for a simple test.


4.1 Required jar packages

Besides the packaged Tair client jar, the Java test program needs a few jars that Tair depends on, specifically the following (the version numbers need not match exactly):

commons-logging-1.1.3.jar
slf4j-api-1.7.7.jar
slf4j-log4j12-1.7.7.jar
log4j-1.2.17.jar
mina-core-1.1.7.jar
tair-client-2.3.1.jar

4.2 Java client program


First consult the description of the Java client interface in the Tair user guide; the example below is given directly and is easy to follow.


package tair.client;

import java.util.ArrayList;
import java.util.List;

import com.taobao.tair.DataEntry;
import com.taobao.tair.Result;
import com.taobao.tair.ResultCode;
import com.taobao.tair.impl.DefaultTairManager;

/**
 * @author WangJianmin
 * @date 2014-7-9
 * @description Java-client test application for tair.
 *
 */
public class TairClientTest {

    public static void main(String[] args) {

        // Create the config server list
        List<String> confServers = new ArrayList<String>();
        confServers.add("10.10.7.144:51980");
        // confServers.add("10.10.7.144:51980"); // optional

        // Create a client instance
        DefaultTairManager tairManager = new DefaultTairManager();
        tairManager.setConfigServerList(confServers);

        // Set the group name
        tairManager.setGroupName("group_1");
        // Initialize the client
        tairManager.init();

        // put 10 items
        for (int i = 0; i < 10; i++) {
            // Arguments: namespace, key, value, version, expire time
            ResultCode result = tairManager.put(0, "k" + i, "v" + i, 0, 10);
            System.out.println("put k" + i + ":" + result.isSuccess());
            if (!result.isSuccess())
                break;
        }

        // get one
        // Arguments: namespace, key
        Result<DataEntry> result = tairManager.get(0, "k3");
        System.out.println("get:" + result.isSuccess());
        if (result.isSuccess()) {
            DataEntry entry = result.getValue();
            if (entry != null) {
                // The data exists
                System.out.println("value is " + entry.getValue().toString());
            } else {
                // The data does not exist
                System.out.println("this key doesn't exist.");
            }
        } else {
            // Error handling
            System.out.println(result.getRc().getMessage());
        }
    }
}
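As a follow-up, the sketch below removes one of the keys written by the test above. It assumes the delete(namespace, key) method listed in the Tair user guide and reuses the same config server and group settings; treat it as a sketch and adjust it if your client version exposes a different signature.

package tair.client;

import java.util.ArrayList;
import java.util.List;

import com.taobao.tair.ResultCode;
import com.taobao.tair.impl.DefaultTairManager;

/**
 * Sketch: remove a key written by TairClientTest. Assumes the delete(namespace, key)
 * interface described in the Tair user guide.
 */
public class TairDeleteTest {

    public static void main(String[] args) {
        List<String> confServers = new ArrayList<String>();
        confServers.add("10.10.7.144:51980"); // same config server as above

        DefaultTairManager tairManager = new DefaultTairManager();
        tairManager.setConfigServerList(confServers);
        tairManager.setGroupName("group_1");
        tairManager.init();

        // Delete key "k3" from namespace 0; ResultCode tells whether it succeeded.
        ResultCode rc = tairManager.delete(0, "k3");
        System.out.println("delete k3:" + rc.isSuccess());
    }
}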

Output of a run:

log4j:WARN No appenders could be found for logger (com.taobao.tair.impl.ConfigServer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
put k0:true
put k1:true
put k2:true
put k3:true
put k4:true
put k5:true
put k6:true
put k7:true
put k8:true
put k9:true
get:true
value is v3

Note: if the test is not run on the config server or data server host itself, make sure the test machine can communicate with both of them (i.e. they can ping each other). Otherwise you may see an error like this:

Exception in thread "main" java.lang.RuntimeException: init config failed
    at com.taobao.tair.impl.DefaultTairManager.init(DefaultTairManager.java:80)
    at tair.client.TairClientTest.main(TairClientTest.java:27)
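When this happens, a quick way to confirm basic reachability from the test machine is a plain TCP connect to the config server's port. The minimal sketch below uses java.net.Socket; the address and port are simply the ones used in this deployment.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal reachability check: try a plain TCP connect to the config server's IP:PORT
// with a short timeout, to rule out network problems before debugging the client.
public class ConfigServerPing {
    public static void main(String[] args) {
        String host = "10.10.7.144"; // config server IP used in this deployment
        int port = 51980;            // config server port from configserver.conf
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
            System.out.println("config server " + host + ":" + port + " is reachable");
        } catch (IOException e) {
            System.out.println("cannot reach " + host + ":" + port + ": " + e.getMessage());
        }
    }
}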

I have packaged the example program, the required jars, and a Makefile (I ran the test on Linux rather than in Eclipse); download it if you need it.



5. References


1. TAIR home page

2. Tair user guide

3. Solving the "Too many open files" problem