Configuring dual-tracker load balancing for the distributed file system FastDFS V5.09
Environment
Operating system:
CentOS 7
Servers:
IP: 192.168.2.238
IP: 192.168.2.239
Packages:
fastdfs-5.09.tar.gz
fastdfs-nginx-module_v1.16.tar.gz
libfastcommon-master
nginx-1.7.0.tar.gz
ngx_http_lower_upper_case-master
Note: place all of the packages above under /home/soft and extract them there.
Dependencies:
# yum install -y gettext gettext-devel libXft libXft-devel libXpm libXpm-devel automake autoconf libXtst-devel gtk+-devel gcc gcc-c++ zlib-devel libpng-devel gtk2-devel glib-devel pcre*
I. Install FastDFS and its related modules and dependencies (192.168.2.238, 192.168.2.239)
1. Install FastDFS 5.09 (note: libfastcommon from step 3 must already be installed, otherwise ./make.sh fails with the logger.h error described in Q&A 2 below)
# cd fastdfs-5.09
# ./make.sh
# ./make.sh install
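After make.sh install completes, the server binaries, client tools and init scripts should be in place; a quick sanity check, assuming the default 5.09 install locations:
# ls /usr/bin/fdfs_trackerd /usr/bin/fdfs_storaged /usr/bin/fdfs_upload_file
# ls /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged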
2. Build and install nginx with the FastDFS and lower/upper-case modules
# cd nginx-1.7.0
# ./configure --user=nginx --group=nginx --prefix=/usr/local/nginx --add-module=/home/soft/fastdfs-nginx-module/src/ --add-module=/home/soft/ngx_http_lower_upper_case-master/
# make && make install
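To confirm that both add-on modules were compiled in, check the configure arguments recorded in the binary (nginx -V prints them to stderr):
# /usr/local/nginx/sbin/nginx -V 2>&1 | grep -o 'add-module=[^ ]*'
Both --add-module paths given above should appear in the output.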
3. Install the libfastcommon library
# cd /home/soft/libfastcommon-master/
# ./make.sh
# ./make.sh install
4. Create the data directories and symlinks
# mkdir -p /fastdfs/{tracker,storage,data1}
# ln -s /fastdfs/data M00
# ln -s /fastdfs/data1/data M01
Perform all of the steps above on both servers (192.168.2.238 and 192.168.2.239); see the layout sketch below.
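The two ln commands above create the M00 and M01 links in whatever directory they are run from. One arrangement consistent with the store paths configured later (store_path0=/fastdfs, store_path1=/fastdfs/data1) is to place each link inside the data directory it points to, once fdfs_storaged has created those directories on first start; this is a sketch, not part of the original procedure:
# ln -s /fastdfs/data /fastdfs/data/M00
# ln -s /fastdfs/data1/data /fastdfs/data1/data/M01
With url_have_group_name = true and the fastdfs-nginx-module resolving files through the store_path settings in mod_fastdfs.conf, these links mainly matter if nginx ever has to serve a file path directly.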
II. Configure the conf files
1. After installing FastDFS, the configuration files live in /etc/fdfs/:
# ll /etc/fdfs/
client.conf
client.conf.sample
http.conf
mime.types
mod_fastdfs.conf
mod_fastdfs.conf.bak
storage.conf
storage.conf.sample
storage_ids.conf
storage_ids.conf.sample
tracker.conf
tracker.conf.sample
2. On 192.168.2.238, edit tracker.conf, storage.conf, storage_ids.conf, mod_fastdfs.conf and nginx.conf as follows (only the settings themselves are listed; the explanatory comments shipped in the stock files are left unchanged):
#vim /etc/fdfs/tracker.conf
disabled=false
bind_addr=192.168.2.238
port=22124
connect_timeout=30
network_timeout=60
base_path=/fastdfs/tracker/
#base_path=/data/fdfs/
max_connections=256
accept_threads=1
work_threads=4
store_lookup=2
store_group=g1
store_server=2
store_path=0
download_server=0
reserved_storage_space = 10%
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
sync_log_buff_interval = 10
check_active_interval = 120
thread_stack_size = 64KB
storage_ip_changed_auto_adjust = true
storage_sync_file_max_delay = 86400
storage_sync_file_max_time = 300
use_trunk_file = false
slot_min_size = 256
slot_max_size = 16MB
trunk_file_size = 64MB
trunk_create_file_advance = false
trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
trunk_create_file_space_threshold = 20G
trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
trunk_compress_binlog_min_interval = 0
#use_storage_id = false
use_storage_id = true
storage_ids_filename = storage_ids.conf
id_type_in_filename = ip
store_slave_file_use_link = false
rotate_error_log = false
error_log_rotate_time=00:00
rotate_error_log_size = 0
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.server_port=8080
http.check_alive_interval=30
http.check_alive_type=tcp
http.check_alive_uri=/status.html
# vim /etc/fdfs/storage.conf
disabled=false
group_name=g1
bind_addr=
client_bind=true
port=23000
connect_timeout=30
network_timeout=60
heart_beat_interval=30
stat_report_interval=60
base_path=/fastdfs/storage/
max_connections=256
buff_size = 256KB
accept_threads=1
work_threads=4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec=50
sync_interval=0
sync_start_time=00:00
sync_end_time=23:59
write_mark_file_freq=500
store_path_count=2
store_path0=/fastdfs/
store_path1=/fastdfs/data1/
#store_path1=/data/fdfs/data1/
#store_path1=/home/yuqing/data/fdfs2
subdir_count_per_path=256
#tracker_server=192.168.2.239:22124
tracker_server=192.168.2.238:22124
tracker_server=192.168.2.239:22124
log_level=debug
run_by_group=
run_by_user=
allow_hosts=*
file_distribute_path_mode=0
file_distribute_rotate_count=100
fsync_after_written_bytes=0
sync_log_buff_interval=10
sync_binlog_buff_interval=10
sync_stat_file_interval=300
thread_stack_size=512KB
upload_priority=1
if_alias_prefix=
check_file_duplicate=0
file_signature_method=hash
key_namespace=FastDFS
keep_alive=0
##include /home/yuqing/fastdht/conf/fdht_servers.conf
use_access_log = false
rotate_access_log = false
access_log_rotate_time=00:00
rotate_error_log = false
error_log_rotate_time=00:00
rotate_access_log_size = 0
rotate_error_log_size = 0
file_sync_skip_invalid_record=false
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.domain_name=
http.server_port=81
#vim /etc/fdfs/storage_ids.conf
# <id>  <group_name>  <ip_or_hostname>
# 100001  group1  192.168.0.196
# 100002  group1  192.168.0.116
200050  g1  192.168.2.239
200051  g1  192.168.2.238
#vim /etc/fdfs/mod_fastdfs.conf
connect_timeout=2
network_timeout=30
base_path=/tmp
load_fdfs_parameters_from_tracker=false
storage_sync_file_max_delay = 86400
use_storage_id = false
storage_ids_filename = storage_ids.conf
tracker_server=192.168.2.239:22124
tracker_server=192.168.2.238:22124
storage_server_port=23000
group_name=g1
url_have_group_name = true
store_path_count=2
store_path0=/fastdfs
store_path1=/fastdfs/data1
#store_path3=/data/data1
#store_path1=/home/yuqing/fastdfs1
log_level=info
log_filename=
response_mode=proxy
if_alias_prefix=
#include http.conf
flv_support = true
flv_extension = flv
group_count = 0
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs
#vim /usr/local/nginx/conf/nginx.conf
user root;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    client_max_body_size 10m;
    sendfile on;
    keepalive_timeout 65;
    #gzip on;

    server {
        listen 81;
        server_name localhost;

        location /g1/M00 {
            root /fastdfs/data;
            ngx_fastdfs_module;
            index index.html index.htm;
            client_max_body_size 10m;
        }

        location /g1/M01 {
            root /fastdfs/data1/data;
            ngx_fastdfs_module;
            index index.html index.htm;
            client_max_body_size 10m;
        }

        # location / {
        #     lower $lower_uri "$request_uri";
        #     rewrite .* $lower_uri break;
        #     root /var/www/static;
        #     index index.html index.htm;
        # }

        #error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 8000;
        server_name _;
        location / {
            root html;
            index index.html index.htm;
        }
    }

    server {
        listen 8001;
        server_name _;
        location / {
            root html;
            index index2.html index2.htm;
        }
    }
}
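After editing nginx.conf, the syntax can be validated and the configuration (re)loaded; both are standard nginx options:
# /usr/local/nginx/sbin/nginx -t
# /usr/local/nginx/sbin/nginx -s reload
(-s reload only works once nginx is already running; the initial start is done in step 4 below.)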
3. On 192.168.2.239 the same five files are edited. storage_ids.conf, mod_fastdfs.conf and nginx.conf are identical to the 192.168.2.238 versions shown above, and storage.conf differs only in the order of its two tracker_server lines (and it drops the commented-out tracker_server line). tracker.conf differs only in the following values:
#vim /etc/fdfs/tracker.conf
bind_addr=192.168.2.239
reserved_storage_space = 6G
trunk_create_file_space_threshold = 10G
4. Start the services on 192.168.2.238 and 192.168.2.239
# /etc/init.d/fdfs_trackerd start
# /etc/init.d/fdfs_storaged start
# /usr/local/nginx/sbin/nginx
# ss -tnlp
Confirm that the tracker port 22124, the storage port 23000 and the nginx port 81 are all listening.
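Once all three daemons are up, the listening ports can be checked in one go, and fdfs_monitor (which reads the tracker addresses from /etc/fdfs/client.conf, sketched at the start of section III below) should list both trackers and show both storage servers as ACTIVE:
# ss -tnlp | grep -E '22124|23000|:81 '
# fdfs_monitor /etc/fdfs/client.conf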
III. Upload and access tests
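The fdfs_upload_file and fdfs_monitor client tools read /etc/fdfs/client.conf. A minimal sketch of the entries that matter for this setup is shown below; apart from the tracker_server lines, the values are assumptions that mirror the stock defaults:
connect_timeout=30
network_timeout=60
base_path=/tmp
tracker_server=192.168.2.238:22124
tracker_server=192.168.2.239:22124
log_level=info
http.tracker_server_port=8080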
1) Upload 239.jpg on the FastDFS server 192.168.2.239 and use the returned file ID to access it from a browser.
[root@localhost ~]# fdfs_upload_file /etc/fdfs/client.conf /root/239.jpg
g1/M00/00/32/wKgC71kaaGqAbJDqAACKBbw6aXc203.jpg
Access test:
http://192.168.2.238:81/g1/M00/00/32/wKgC71kaaGqAbJDqAACKBbw6aXc203.jpg
http://192.168.2.239:81/g1/M00/00/32/wKgC71kaaGqAbJDqAACKBbw6aXc203.jpg
Both URLs return the image correctly (see the screenshots below).
2) Upload 238.jpg on the FastDFS server 192.168.2.238 and use the returned file ID to access it from a browser.
[root@localhost ~]# fdfs_upload_file /etc/fdfs/client.conf /root/238.jpg
g1/M00/00/32/wKgC71kabh-AbUECAAC8W3Iysrc946.jpg
Access test:
http://192.168.2.238:81/g1/M00/00/32/wKgC71kabh-AbUECAAC8W3Iysrc946.jpg
http://192.168.2.239:81/g1/M00/00/32/wKgC71kabh-AbUECAAC8W3Iysrc946.jpg
Both URLs return the image correctly (see the screenshots below).
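The same check can be scripted; this sketch requests the file uploaded above from both storage servers and prints the HTTP status code returned by each (both should be 200):
# for h in 192.168.2.238 192.168.2.239; do curl -s -o /dev/null -w "$h %{http_code}\n" http://$h:81/g1/M00/00/32/wKgC71kabh-AbUECAAC8W3Iysrc946.jpg; done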
IV. Common Q&A
1. When compiling nginx with the fastdfs-nginx-module, make fails with the following error:
root/fastdfs-nginx-module/src//common.c:21:25: fatal error: fdfs_define.h: No such file or directory
#include "fdfs_define.h"
^
compilation terminated.
make[1]: *** [objs/addon/src/ngx_http_fastdfs_module.o] Error 1
Cause: the compiler cannot find the FastDFS header files. The module's config file points nginx at /usr/local/include, while FastDFS and libfastcommon install their headers under /usr/include by default.
Solution: edit the module's config file so that the include and library paths match the installation:
vim /root/fastdfs-nginx-module/src/config
CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"
CORE_LIBS="$CORE_LIBS -L/usr/lib -lfastcommon -lfdfsclient"
and create these symlinks:
#ln -s /usr/include/fast* /usr/local/include/
#ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
#ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
#ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
#ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
Then re-run ./configure, make and make install for nginx.
2. Compiling FastDFS 5.09 with ./make.sh fails with:
cc -Wall -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -g -O -DDEBUG_FLAG -c -o ../common/fdfs_global.o ../common/fdfs_global.c -I../common -I../tracker -I/usr/include/fastcommon
../common/fdfs_global.c:20:20: fatal error: logger.h: No such file or directory
#include "logger.h"
^
compilation terminated.
make: *** [../common/fdfs_global.o] Error 1
Solution: install libfastcommon first, then rebuild FastDFS:
# wget https://github.com/happyfish100/libfastcommon/archive/master.zip
# unzip master.zip
# cd libfastcommon-master/
# ./make.sh
# ./make.sh install
3. A newly added storage server fails to start
[2017-05-12 17:09:40] ERROR - file: tracker_proto.c, line: 48, server: 192.168.2.239:22124, response status 2 != 0
[2017-05-12 17:09:40] CRIT - file: storage_func.c, line: 1886, get my server id from tracker server fail, errno: 2, error info: No such file or directory
Cause: use_storage_id is enabled on the trackers, so every storage server must be registered in storage_ids.conf; a storage whose IP address is missing from that file cannot obtain its server id from the tracker.
Solution: list both storage servers in /etc/fdfs/storage_ids.conf on the trackers:
[root@localhost fdfs]# cat storage_ids.conf
# <id> <group_name> <ip_or_hostname>
# 100001 group1 192.168.0.196
# 100002 group1 192.168.0.116
200050 g1 192.168.2.239
200051 g1 192.168.2.238
4. (FastDFS v5.09) A scenario seen in production: the cluster initially has a single tracker (10.x.x.3 in this example, which is the current leader). A new tracker (10.x.x.6) is added and its tracker service started. Watching its log with # tail -f /data/tracker/logs/trackerd.log shows errors like:
[2017-05-19 23:04:33] ERROR - file: tracker_mem.c, line: 4277, get sys files from other trackers fail, errno: 2
[2017-05-19 23:04:52] ERROR - file: tracker_proto.c, line: 48, server: 10.x.x.3:22122, response status 5 != 0
[2017-05-19 23:04:52] INFO - file: tracker_mem.c, line: 4213, sys files loaded from tracker server 10.x.x.3:22122
[2017-05-19 23:04:52] ERROR - file: tracker_mem.c, line: 596, in the file "/data/tracker/data/storage_groups_new.dat", item "group_count" is not found
At the same time, the log on 10.x.x.3 (# tail -f /fastdfs/data/tracker/logs/trackerd.log) reports:
[2017-05-19 23:01:54] ERROR - file: tracker_service.c, line: 2008, client ip: 10.x.x.6, read bytes: 229 != expect bytes: 230
[2017-05-19 23:02:04] ERROR - file: tracker_service.c, line: 2008, client ip: 10.x.x.6, read bytes: 229 != expect bytes: 230
Cause: according to the author Yu Qing's source-code community and a web search, this is a known bug with multiple trackers; it is fixed in V5.10 on GitHub. From testing, it appears to be triggered when the newly added tracker sees inconsistent storage sync state while pulling the cluster's system files.
Solutions:
1) Upgrade FastDFS to V5.10 or later.
2) Restart all fdfs_storaged services first, then stop the current tracker leader, delete every *.dat file under the new tracker's data directory (/data/tracker/data/ in this example), and start the new tracker's service again. Output similar to the following may then appear (taken from an actual run of this example):
[2017-05-24 02:38:34] ERROR - file: tracker_mem.c, line: 596, in the file "/data/tracker/data/storage_groups_new.dat", item "group_count" is not found
[2017-05-24 02:38:34] ERROR - file: tracker_mem.c, line: 4277, get sys files from other trackers fail, errno: 2
[2017-05-24 02:38:42] ERROR - file: tracker_service.c, line: 883, client ip: 10.x.x.3, leader 10.x.x.3:22122 not exist
[2017-05-24 02:38:45] ERROR - file: tracker_service.c, line: 883, client ip: 10.x.x.3, leader 10.x.x.3:22122 not exist
[2017-05-24 02:38:55] ERROR - file: tracker_service.c, line: 883, client ip: 10.x.x.3, leader 10.x.x.3:22122 not exist
[2017-05-24 02:38:59] INFO - file: tracker_relationship.c, line: 383, selecting leader...
[2017-05-24 02:38:59] INFO - file: tracker_relationship.c, line: 401, I am the new tracker leader 10.x.x.6:22122   ##### at this point the new tracker has been elected as the new leader
Electing the new leader takes roughly one to two minutes; the trackers apparently wait until they have confirmed that the old leader from the previous session is really gone before choosing a new one. Once the messages above appear, start the old leader's tracker service again (10.x.x.3 in this example); normally you will then see the following log entry:
[2017-05-24 02:39:00] INFO - file: tracker_service.c, line: 969, the tracker leader is 10.x.x.6:22122
At this point the new tracker is recognized by the cluster, and load balancing and failover across the two trackers are in place.
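A sketch of the recovery sequence in solution 2), using the paths of this example; the deleted *.dat files are the tracker's cached system files, which the logs above show being re-loaded from the other tracker:
On every storage server:
# /etc/init.d/fdfs_storaged restart
On the current leader (10.x.x.3):
# /etc/init.d/fdfs_trackerd stop
On the newly added tracker (10.x.x.6):
# /etc/init.d/fdfs_trackerd stop
# rm -f /data/tracker/data/*.dat
# /etc/init.d/fdfs_trackerd start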
V. Failover drill
1. The current leader is 192.168.2.238; tailing the tracker log shows:
# tail -f /fastdfs/tracker/logs/trackerd.log
[2017-05-18 17:13:42] INFO - file: tracker_relationship.c, line: 401, I am the new tracker leader 192.168.2.238:22124
2. Simulate a failure by stopping the tracker service on 192.168.2.238:
# /etc/init.d/fdfs_trackerd stop
Stopping fdfs_trackerd (via systemctl): [ OK ]
3. Check the tracker log on 192.168.2.239: it has been promoted to the new leader. When the tracker on 192.168.2.238 is started again, it automatically comes back as the standby.
# tail -f /fastdfs/tracker/logs/trackerd.log
[2017-05-18 17:17:58] INFO - file: tracker_relationship.c, line: 401, I am the new tracker leader 192.168.2.239:22124
Start the tracker service on 192.168.2.238 again; the log shows:
# /etc/init.d/fdfs_trackerd start
# tail -f /fastdfs/tracker/logs/trackerd.log
[2017-05-18 17:20:44] INFO - FastDFS v5.09, base_path=/fastdfs/tracker, run_by_group=, run_by_user=, connect_timeout=30s, network_timeout=60s, port=22124, bind_addr=192.168.2.238, max_connections=256, accept_threads=1, work_threads=4, min_buff_size=8192, max_buff_size=131072, store_lookup=2, store_group=, store_server=2, store_path=0, reserved_storage_space=10.00%, download_server=0, allow_ip_count=-1, sync_log_buff_interval=10s, check_active_interval=120s, thread_stack_size=64 KB, storage_ip_changed_auto_adjust=1, storage_sync_file_max_delay=86400s, storage_sync_file_max_time=300s, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_compress_binlog_min_interval=0, use_storage_id=1, id_type_in_filename=ip, storage_id_count=2, rotate_error_log=0, error_log_rotate_time=00:00, rotate_error_log_size=0, log_file_keep_days=0, store_slave_file_use_link=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s
[2017-05-18 17:21:06] INFO - file: tracker_relationship.c, line: 383, selecting leader...
[2017-05-18 17:21:06] INFO - file: tracker_relationship.c, line: 422, the tracker leader 192.168.2.239:22124
4. Verify that uploads still work and that the images can still be browsed, e.g.:
http://192.168.2.239:81/g1/M00/00/32/wKgC71kabh-AbUECAAC8W3Iysrc946.jpg
For high availability it is recommended to reach the storage servers through a DNS name, e.g. file.abc.com fronted by HAProxy, which load-balances across the two backend storage servers:
acl fileserver hdr_beg(host) -i file.abc.com
use_backend file.abc.com if fileserver

backend file.abc.com
    mode http
    option forwardfor
    option httplog
    balance roundrobin
    #reqirep ^Host:\ www.1card1.cn Host:\ file.abc.com
    server db1 192.168.2.238:81 cookie files check
    server db2 192.168.2.239:81 cookie files check
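The acl and use_backend lines above belong in a frontend (or listen) section of haproxy.cfg; a minimal sketch of such a frontend, with the bind port chosen here purely as an assumption:
frontend file-in
    mode http
    bind *:80
    acl fileserver hdr_beg(host) -i file.abc.com
    use_backend file.abc.com if fileserver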
Note: for a detailed explanation of every option in the configuration files above, refer to the official FastDFS documentation.
This article was originally published on the "一万小时定律" (10,000 Hours Rule) blog; please keep this attribution: http://daisywei.blog.51cto.com/7837970/1928922