Reposted from: http://blog.csdn.net/ylqmf/article/details/7958804
Document download: http://wenku.baidu.com/view/d57d1d1e227916888486d7a9.html
MongoDB sharding cluster
Contents
MongoDB sharding cluster
Change log
Hardware
1. CPU
2. Memory
3. Disks
System settings
Access control
Software and scripts
1. Supporting packages via yum
2. MongoDB
3. V8 engine
4. GYP
Directory layout
1. MongoDB installation directory
a) /opt/soft/mongo-2.2.0
2. Data directories
3. Log files and cluster working directories
Architecture
Installation and configuration
1. Create the mongodb user
2. Create the directories
3. Prepare the software
4. Add 127.0.0.1 servername to /etc/hosts
5. Configure mongod
6. Configure mongos and the config servers
7. Test sharding
8. Test the V8 engine
Maintenance commands
Caveats
Notes
Appendix 1: the iptables script from the ops team (very nice and very powerful)
Appendix 2: test.js
Change log
Date        Author   Version  Notes
2012-09-08  袁立强   1.0      Initial draft (Xiaomi e-commerce DBA team)
2012-09-12  袁立强   1.0.1    Updated after adopting ops suggestions
Hardware
1. CPU
a) 2 CPUs, 8 cores, 16 hardware threads per server.
b) numactl is used to get more out of the multi-core architecture: the two mongod data instances on each server are bound to different CPU nodes, while the arbiter, config server, and mongos run across all cores (see the numactl check after this list).
2. Memory
a) 64 GB of RAM.
b) We are testing /etc/security/limits.conf limits to keep multiple instances from fighting over memory.
3. Disks
a) MongoDB's BSON data files are extremely disk-hungry, while the files under the journal, local, and moveChunk directories are read and written frequently but barely grow.
b) The disks holding the MongoDB data files are RAID 5, 4.2 TB.
c) The journal, local, and moveChunk directories live on the system disks, RAID 1+0, 300 GB.
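Before choosing --cpunodebind values for the mongod startup commands later on, it helps to confirm the machine's NUMA layout. A minimal check, not part of the original document (both flags are standard numactl options):
# List NUMA nodes and the CPUs/memory attached to each;
# the node numbers reported here are what --cpunodebind expects.
numactl --hardware
# Show the NUMA policy of the current shell
numactl --show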
System settings
OS: CentOS 6.0, kernel 2.6.32-220.el6.x86_64 GNU/Linux
Filesystems:
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/sdb1 on /data type ext4 (rw)
/dev/sda5 on /data1 type ext4 (rw)
Because the mongo cluster sets up TCP connections frequently, we tune the TCP stack:
cat >> /etc/sysctl.conf << EOF
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_wmem = 8192 436600 873200
net.ipv4.tcp_rmem = 32768 436600 873200
net.ipv4.tcp_mem = 94500000 91500000 92700000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_fin_timeout = 30
EOF
vi /etc/security/limits.conf
mongo soft stack 4096
mongo hard stack 10240
/sbin/sysctl -w vm.swappiness=0
/sbin/sysctl -p
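The sysctl -w call above sets vm.swappiness only for the running system; a small addition (an assumption, not in the original) is to persist it the same way as the TCP settings:
cat >> /etc/sysctl.conf << EOF
vm.swappiness = 0
EOF
/sbin/sysctl -p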
Access control
mongod's --auth option provides user management, but across the nodes of a mongodb cluster, authenticating the large number of connections causes a sharp performance drop; in extreme cases QPS falls below 1K.
We therefore compromise: iptables restricts the source addresses so that only the production web servers and MySQL servers can reach port 27001 of the mongodb cluster, and the remaining ports accept connections only from within the cluster.
See Appendix 1: the iptables script from the ops team, very nice and very powerful.
Software and scripts
1. Supporting packages via yum
a) yum -y install wget vim-enhanced subversion
b) yum -y install numactl python make pcre scons boost-devel boost-filesystem gcc gcc-c++ glibc-devel.i686 libstdc++.i686 libpcap libpcap-devel
2. MongoDB
a) wget http://downloads.mongodb.org/src/mongodb-src-r2.2.0.tar.gz
3. V8 engine
a) svn checkout http://v8.googlecode.com/svn/trunk/ v8
4. GYP
a) svn co http://gyp.googlecode.com/svn/trunk build/gyp
Directory layout
1. MongoDB installation directory
a) /opt/soft/mongo-2.2.0
2. Data directories
a) /data/mongodb/db/
config
shard11
shard32
shard20
3. Log files and cluster working directories
a) /data1/logs/mongodb/
config
shard11
shard32
shard20
Architecture
Three shards: shard1, shard2, shard3.
One primary and one secondary per shard: each shard's primary has a secondary on a different machine and an arbiter on the third, so if any one of the three servers dies, the arbiter on a surviving machine votes a secondary into primary and failover is automatic. (A sketch of the layout follows.)
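The original diagram did not survive; reconstructed from the port and directory assignments in the startup commands below, the layout is roughly:
Server 1 (10.100.2.117)     Server 2 (10.100.2.118)     Server 3 (10.100.2.119)
shard1 primary   :29001     shard1 secondary :29001     shard1 arbiter   :29001
shard2 arbiter   :29002     shard2 primary   :29002     shard2 secondary :29002
shard3 secondary :29003     shard3 arbiter   :29003     shard3 primary   :29003
config server    :29000     config server    :29000     config server    :29000
mongos           :27001     mongos           :27001     mongos           :27001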
Installation and configuration
1. Create the mongodb user
/usr/sbin/groupadd -g 1004 mongodb
/usr/sbin/useradd -g mongodb mongodb -u 1004 -s /sbin/nologin
2. Create the directories
Shard1:
mkdir -p /data/mongodb/db/shard11
mkdir -p /data/mongodb/db/shard32
mkdir -p /data/mongodb/db/shard20
mkdir -p /data/mongodb/db/config
mkdir -p /data1/logs/mongodb/shard11/moveChunk
mkdir -p /data1/logs/mongodb/shard11/local
mkdir -p /data1/logs/mongodb/shard11/journal
mkdir -p /data1/logs/mongodb/shard32/moveChunk
mkdir -p /data1/logs/mongodb/shard32/local
mkdir -p /data1/logs/mongodb/shard32/journal
mkdir -p /data1/logs/mongodb/shard20/local
mkdir -p /data1/logs/mongodb/shard20/journal
mkdir -p /data1/logs/mongodb/config/config
mkdir -p /data1/logs/mongodb/config/journal
ln -s /data1/logs/mongodb/shard11/moveChunk /data/mongodb/db/shard11/moveChunk
ln -s /data1/logs/mongodb/shard11/local /data/mongodb/db/shard11/local
ln -s /data1/logs/mongodb/shard11/journal /data/mongodb/db/shard11/journal
ln -s /data1/logs/mongodb/shard32/moveChunk /data/mongodb/db/shard32/moveChunk
ln -s /data1/logs/mongodb/shard32/local /data/mongodb/db/shard32/local
ln -s /data1/logs/mongodb/shard32/journal /data/mongodb/db/shard32/journal
ln -s /data1/logs/mongodb/shard20/local /data/mongodb/db/shard20/local
ln -s /data1/logs/mongodb/shard20/journal /data/mongodb/db/shard20/journal
ln -s /data1/logs/mongodb/config/config /data/mongodb/db/config/config
ln -s /data1/logs/mongodb/config/journal /data/mongodb/db/config/journal
chown -R mongodb:mongodb /data/mongodb/db/
chown -R mongodb:mongodb /data1/logs/mongodb/
Shard2:
mkdir -p /data/mongodb/db/shard12
mkdir -p /data/mongodb/db/shard21
mkdir -p /data/mongodb/db/shard30
mkdir -p /data/mongodb/db/config
mkdir -p /data1/logs/mongodb/shard12/moveChunk
mkdir -p /data1/logs/mongodb/shard12/local
mkdir -p /data1/logs/mongodb/shard12/journal
mkdir -p /data1/logs/mongodb/shard21/moveChunk
mkdir -p /data1/logs/mongodb/shard21/local
mkdir -p /data1/logs/mongodb/shard21/journal
mkdir -p /data1/logs/mongodb/shard30/local
mkdir -p /data1/logs/mongodb/shard30/journal
mkdir -p /data1/logs/mongodb/config/config
mkdir -p /data1/logs/mongodb/config/journal
ln -s /data1/logs/mongodb/shard12/moveChunk /data/mongodb/db/shard12/moveChunk
ln -s /data1/logs/mongodb/shard12/local /data/mongodb/db/shard12/local
ln -s /data1/logs/mongodb/shard12/journal /data/mongodb/db/shard12/journal
ln -s /data1/logs/mongodb/shard21/moveChunk /data/mongodb/db/shard21/moveChunk
ln -s /data1/logs/mongodb/shard21/local /data/mongodb/db/shard21/local
ln -s /data1/logs/mongodb/shard21/journal /data/mongodb/db/shard21/journal
ln -s /data1/logs/mongodb/shard30/local /data/mongodb/db/shard30/local
ln -s /data1/logs/mongodb/shard30/journal /data/mongodb/db/shard30/journal
ln -s /data1/logs/mongodb/config/config /data/mongodb/db/config/config
ln -s /data1/logs/mongodb/config/journal /data/mongodb/db/config/journal
chown -R mongodb:mongodb /data/mongodb/db/
chown -R mongodb:mongodb /data1/logs/mongodb/
Shard3:
mkdir -p /data/mongodb/db/shard22
mkdir -p /data/mongodb/db/shard31
mkdir -p /data/mongodb/db/shard10
mkdir -p /data/mongodb/db/config
mkdir -p /data1/logs/mongodb/shard22/moveChunk
mkdir -p /data1/logs/mongodb/shard22/local
mkdir -p /data1/logs/mongodb/shard22/journal
mkdir -p /data1/logs/mongodb/shard31/moveChunk
mkdir -p /data1/logs/mongodb/shard31/local
mkdir -p /data1/logs/mongodb/shard31/journal
mkdir -p /data1/logs/mongodb/shard10/local
mkdir -p /data1/logs/mongodb/shard10/journal
mkdir -p /data1/logs/mongodb/config/config
mkdir -p /data1/logs/mongodb/config/journal
ln -s /data1/logs/mongodb/shard22/moveChunk /data/mongodb/db/shard22/moveChunk
ln -s /data1/logs/mongodb/shard22/local /data/mongodb/db/shard22/local
ln -s /data1/logs/mongodb/shard22/journal /data/mongodb/db/shard22/journal
ln -s /data1/logs/mongodb/shard31/moveChunk /data/mongodb/db/shard31/moveChunk
ln -s /data1/logs/mongodb/shard31/local /data/mongodb/db/shard31/local
ln -s /data1/logs/mongodb/shard31/journal /data/mongodb/db/shard31/journal
ln -s /data1/logs/mongodb/shard10/local /data/mongodb/db/shard10/local
ln -s /data1/logs/mongodb/shard10/journal /data/mongodb/db/shard10/journal
ln -s /data1/logs/mongodb/config/config /data/mongodb/db/config/config
ln -s /data1/logs/mongodb/config/journal /data/mongodb/db/config/journal
chown -R mongodb:mongodb /data/mongodb/db/
chown -R mongodb:mongodb /data1/logs/mongodb/
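The three per-server blocks above differ only in the shard directory names. A hedged shell sketch, not part of the original, that produces the same layout from a per-server list; note it also creates moveChunk and local subdirectories for the arbiter and config instances, which the original skips but which is harmless:
#!/bin/sh
# DIRS varies per server, e.g. "shard11 shard32 shard20" on server 1.
DIRS="shard11 shard32 shard20"
for d in $DIRS config; do
    mkdir -p /data/mongodb/db/$d
    # keep the write-heavy subdirectories on /data1 and symlink them back
    for sub in moveChunk local journal; do
        mkdir -p /data1/logs/mongodb/$d/$sub
        ln -s /data1/logs/mongodb/$d/$sub /data/mongodb/db/$d/$sub
    done
done
chown -R mongodb:mongodb /data/mongodb/db/ /data1/logs/mongodb/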
3. Prepare the software
Run on every server:
#This step can be skipped if python, make, wget, vim, and svn are already installed
yum -y install python make wget vim-enhanced subversion
#Install the packages needed to build mongodb and the V8 engine
yum -y install numactl vim-enhanced python make pcre scons boost-devel boost-filesystem gcc gcc-c++ glibc-devel.i686 libstdc++.i686 libpcap libpcap-devel
#Create the mongo build directory
mkdir -p /home/download/mongo
#Create the mongodb install directory
mkdir -p /opt/soft/mongo-2.2.0
cd /home/download/mongo
#Download the mongo 2.2.0 source tarball
wget http://downloads.mongodb.org/src/mongodb-src-r2.2.0.tar.gz
tar zxf mongodb-src-r2.2.0.tar.gz
#Fetch the V8 engine source from Google Code
svn checkout http://v8.googlecode.com/svn/trunk/ v8
cd v8
#Fetch GYP, V8's build generator; this step can be skipped when building with scons
svn co http://gyp.googlecode.com/svn/trunk build/gyp
#setup.py build
#setup.py install
#python build/gyp_v8
#build/gyp_v8 -Dtarget_arch=x64
#make x64.release debuggersupport=off OUTDIR=foo
#Build the V8 engine
scons arch=x64 mode=release snapshot=on
cp libv8.* libv8preparser.* /usr/lib
cp -r include/* /usr/include/
#Build and install mongodb (back in the source tree)
cd /home/download/mongo/mongodb-src-r2.2.0
scons all --usev8 mode=release snapshot=on
scons --prefix=/opt/soft/mongo-2.2.0 --full --usev8 install mode=release snapshot=on
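A quick sanity check after the install (an addition to the original flow; --version, --nodb, and --eval are standard mongod/mongo flags):
#Confirm the freshly built binaries run and report version 2.2.0
/opt/soft/mongo-2.2.0/bin/mongod --version
/opt/soft/mongo-2.2.0/bin/mongo --nodb --eval 'print("shell ok")'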
4. Add 127.0.0.1 servername to /etc/hosts, otherwise configuring the replica sets later fails with:
#error
all members and seeds must be reachable to initiate set
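For example, on a host named db-117 (a placeholder hostname for illustration; use each machine's own hostname):
#/etc/hosts
127.0.0.1 localhost db-117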
5. Configure mongod
#Run the following as the mongodb user:
shard1:
#Start the shard1 primary
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard1 -port 29001 -dbpath /data/mongodb/db/shard11 -oplogSize 10240 -logpath /data1/logs/mongodb/shard11.log -logappend -fork --nohttpinterface --directoryperdb
#Start the shard3 secondary
nice -n -20 numactl --cpunodebind=1 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard3 -port 29003 -dbpath /data/mongodb/db/shard32 -oplogSize 10240 -logpath /data1/logs/mongodb/shard32.log -logappend -fork --nohttpinterface --directoryperdb
#Start the shard2 arbiter
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard2 -port 29002 -dbpath /data/mongodb/db/shard20 -oplogSize 100 -logpath /data1/logs/mongodb/shard20.log -logappend -fork --nohttpinterface --directoryperdb
#Configure the replica set
/opt/soft/mongo-2.2.0/bin/mongo -port 29001
config = {_id: 'shard1', members: [
{_id: 0, host: '10.100.2.117:29001',priority:1},
{_id: 1, host: '10.100.2.118:29001',priority:0},
{_id: 2, host: '10.100.2.119:29001',arbiterOnly:true}]};
rs.initiate(config);
shard2:
#Start the shard1 secondary
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard1 -port 29001 -dbpath /data/mongodb/db/shard12 -oplogSize 10240 -logpath /data1/logs/mongodb/shard12.log -logappend -fork --nohttpinterface --directoryperdb
#Start the shard2 primary
nice -n -20 numactl --cpunodebind=1 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard2 -port 29002 -dbpath /data/mongodb/db/shard21 -oplogSize 10240 -logpath /data1/logs/mongodb/shard21.log -logappend -fork --nohttpinterface --directoryperdb
#Start the shard3 arbiter
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard3 -port 29003 -dbpath /data/mongodb/db/shard30 -oplogSize 100 -logpath /data1/logs/mongodb/shard30.log -logappend -fork --nohttpinterface --directoryperdb
#Configure the replica set
/opt/soft/mongo-2.2.0/bin/mongo -port 29002
config = {_id: 'shard2', members: [
{_id: 0, host: '10.100.2.118:29002',priority:1},
{_id: 1, host: '10.100.2.119:29002',priority:0},
{_id: 2, host: '10.100.2.117:29002',arbiterOnly:true}]};
rs.initiate(config);
shard3:
#Start the shard2 secondary
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard2 -port 29002 -dbpath /data/mongodb/db/shard22 -oplogSize 10240 -logpath /data1/logs/mongodb/shard22.log -logappend -fork --nohttpinterface --directoryperdb
#Start the shard3 primary
nice -n -20 numactl --cpunodebind=1 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard3 -port 29003 -dbpath /data/mongodb/db/shard31 -oplogSize 10240 -logpath /data1/logs/mongodb/shard31.log -logappend -fork --nohttpinterface --directoryperdb
#Start the shard1 arbiter
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -shardsvr -replSet shard1 -port 29001 -dbpath /data/mongodb/db/shard10 -oplogSize 100 -logpath /data1/logs/mongodb/shard10.log -logappend -fork --nohttpinterface --directoryperdb
#Configure the replica set
/opt/soft/mongo-2.2.0/bin/mongo -port 29003
config = {_id: 'shard3', members: [
{_id: 0, host: '10.100.2.119:29003',priority:1},
{_id: 1, host: '10.100.2.117:29003',priority:0},
{_id: 2, host: '10.100.2.118:29003',arbiterOnly:true}]};
rs.initiate(config);
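After each rs.initiate() it is worth confirming that the set has elected a primary before moving on; a quick check with standard shell helpers (an addition to the original flow):
#In the same mongo shell, after a few seconds:
rs.status()
db.isMaster()
The members in rs.status() should report PRIMARY, SECONDARY, and ARBITER states, and db.isMaster() shows which node currently holds the primary role.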
6. Configure mongos and the config servers
Run on every server:
#Start the config server
nice -n -20 numactl --cpunodebind=0 --localalloc /opt/soft/mongo-2.2.0/bin/mongod -configsvr -dbpath /data/mongodb/db/config -port 29000 -logpath /data1/logs/mongodb/config.log -logappend -fork --nohttpinterface --directoryperdb
#Start mongos
nice -n -20 numactl --cpunodebind=1 --localalloc /opt/soft/mongo-2.2.0/bin/mongos -configdb 10.100.2.117:29000,10.100.2.118:29000,10.100.2.119:29000 -port 27001 -chunkSize 32 -logpath /data1/logs/mongodb/mongos.log -logappend -fork --nohttpinterface
#The mongod and mongos startup commands above can go into rc.local
#Configure sharding
/opt/soft/mongo-2.2.0/bin/mongo -port 27001
use admin
db.runCommand( { addshard:"shard1/10.100.2.117:29001,10.100.2.118:29001,10.100.2.119:29001",name:"s1"} );
db.runCommand( { addshard:"shard2/10.100.2.118:29002,10.100.2.119:29002,10.100.2.117:29002",name:"s2"} );
db.runCommand( { addshard:"shard3/10.100.2.119:29003,10.100.2.117:29003,10.100.2.118:29003",name:"s3"} );
#Inspect the shard configuration
db.runCommand( { listshards : 1 } )
#Enable sharding on the database
db.runCommand( { enablesharding : "xm_pulse"} );
#Shard the collection
db.runCommand( { shardcollection : "xm_pulse.tb_user",key : {user_id: 1}})
7. Test sharding
for (var i = 1; i <= 5000000; i++) db.tb_user.save({"order_id" : i, "user_id" : new Date(), "order_status" : i, "consignee" : "吴某某", "country" : i, "province" : i, "city" : i, "district" : i, "address" : "德胜门外大街129号随碟附送丁莱夫", "zipcode" : "100088", "tel" : "18688888888", "email" : "w00y3sdfdsf3@gmail.com", "best_time" : "0", "postscript" : "", "invoice_title" : "普通个人发票", "invoice_type" : i, "express_id" : i, "pay_id" : i, "pay_bank" : "CMB", "pickup_id" : 0, "goods_amount" : 100, "imprest" : i, "shipment_expense" : i, "weight" : i, "express_sn" : "", "express_update_time" : i, "add_time" : 1330394117, "p_order_id" : i, "complete_time" : i, "trade_no" : null, "notes" : "", "order_type" : 8, "zprov" : "北京", "zcity" : "北京市", "zdistrict" : "西城区", "express_name" : "顺丰(北京)", "pay_type" : "网银支付", "user_name" : "wooyee", "items" : [ ], "order_id_str" : "1120228357173102", "ext_time":i});
db.tb_user.stats() shows how the data is distributed across the shards.
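To summarize the per-shard document counts from that output, a small shell snippet (illustrative, not from the original; through mongos, stats() on a sharded collection returns a "shards" subdocument keyed by shard name):
use xm_pulse
var s = db.tb_user.stats();
for (var name in s.shards) { print(name + ": " + s.shards[name].count + " docs"); }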
8. Test the V8 engine
For comparison, download the official binary build (which uses SpiderMonkey):
http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.2.0.tgz
After installing it, run the test script with both shells to compare the JavaScript performance of the V8 and SpiderMonkey engines.
V8 engine:
/opt/soft/mongo-2.2.0/bin/mongo -port 27001 /home/mongodb/test.js
MongoDB shell version: 2.2.0
connecting to: 127.0.0.1:27001/test
begin: Thu Sep 06 2012 23:53:09 GMT+0800 (CST)
result: 867000000
end: Thu Sep 06 2012 23:53:09 GMT+0800 (CST)
total time: 75
SpiderMonkey:
/opt/soft/mongodb-linux-x86_64-2.2.0/bin/mongo -port 27001 /home/mongodb/test.js
MongoDB shell version: 2.2.0
connecting to: 127.0.0.1:27001/test
begin: Thu Sep 06 2012 23:53:18 GMT+0800 (CST)
result: 867000000
end: Thu Sep 06 2012 23:53:20 GMT+0800 (CST)
total time: 1849
Test script source: see Appendix 2 (test.js).
Maintenance commands
1. #Check the active (primary) node
db.isMaster();
2. #Add an arbiter node
rs.addArb("192.168.1.88:28802")
3. #Remove a node
rs.remove("192.168.1.88:28802")
4. #Check member status
rs.status()
5. #Check whether the server is part of a sharding cluster
db.runCommand({isdbgrid:1});
6. #Server status
db.runCommand({"serverStatus":1})
db.runCommand({ dbStats: 1, scale: 1 })
7. #List databases
db.runCommand({ listDatabases: 1 })
8. #Move a database's primary shard
db.runCommand({ moveprimary : "test", to : "shard0001" })
9. #Sharding status
db.printShardingStatus()
10. #Replication info
db.printReplicationInfo();
11. #List database commands
db.listCommands();
12. #Stop the balancer
use config
db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , true );
13. #Back up the config database
mongodump --database config
14. #Start the balancer
use config
db.settings.update( { _id: "balancer" }, { $set : { stopped: false } } , true );
15. #Restrict the balancer to a time window
use config
db.settings.update( { _id : "balancer" }, { $set : { activeWindow : { start : "6:00", stop : "23:00" } } }, true )
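To confirm the balancer settings took effect, a quick check (an addition; the balancer state lives in the config.settings collection):
use config
db.settings.find( { _id : "balancer" } )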
16. #Flush data to disk and lock the database against writes
db.fsyncLock();
17. #Unlock writes
db.fsyncUnlock();
18. #processlist
db.currentOp(true);
db.currentOp();
db.$cmd.sys.inprog.find()
db.currentOp().inprog.forEach(function(d){if(d.active && d.lockType == "write") printjson(d)})
db.currentOp().inprog.forEach(function(d){if(d.active && d.lockType == "read") printjson(d)})
db.currentOp().inprog.forEach(function(d){if(d.waitingForLock && d.lockType != "read") printjson(d)})
19. # see how to run from drivers
db.commandHelp("profile")
show profile
20. # set Profiling Level
db.setProfilingLevel(2);
{"was" : 0 , "slowms" : 100, "ok" : 1} // "was" is the old setting
db.getProfilingLevel()
2
db.setProfilingLevel(1,20) // log slow operations, slow threshold=20ms
db.getProfilingStatus() // new shell helper method as of v1.7+
{ "was" : 1, "slowms" : 20 }
21. # set Profiling Level through the command-line/config-file
$ mongod --profile=1 --slowms=15
22. #Changing the system.profile Collection Size
db.system.profile.drop()
db.createCollection("system.profile", {capped:true, size:4000000})
db.system.profile.stats()
23. #Restart profiling from scratch
turn off profiling => db.setProfilingLevel(0);
drop the collection => db.system.profile.drop()
start again profiling => db.setProfilingLevel(1); / db.setProfilingLevel(2);
24. #As an example, to see output without $cmd (command) operations, invoke:
db.system.profile.find( function() { return this.info.indexOf('$cmd')<0; } )
25. #To view operations for a particular collection
db.system.profile.find( { info: /test.foo/ } )
{"ts" : "Thu Jan 29 2009 15:19:40 GMT-0500 (EST)" , "info" : "insert test.foo" , "millis" : 0}
{"ts" : "Thu Jan 29 2009 15:19:42 GMT-0500 (EST)" , "info" : "insert test.foo" , "millis" : 0}
{"ts" : "Thu Jan 29 2009 15:19:45 GMT-0500 (EST)" , "info" : "query test.foo ntoreturn:0 reslen:102 nscanned:2 <br>query: {} nreturned:2 bytes:86" , "millis" : 0}
{"ts" : "Thu Jan 29 2009 15:21:17 GMT-0500 (EST)" , "info" : "query test.foo ntoreturn:0 reslen:36 nscanned:2 <br>query: { $not: { x: 2 } } nreturned:0 bytes:20" , "millis" : 0}
{"ts" : "Thu Jan 29 2009 15:21:27 GMT-0500 (EST)" , "info" : "query test.foo ntoreturn:0 exception bytes:53" , "millis" : 88}
26. #To view operations slower than a certain number of milliseconds:
db.system.profile.find( { millis : { $gt : 5 } } )
{"ts" : "Thu Jan 29 2009 15:21:27 GMT-0500 (EST)" , "info" : "query test.foo ntoreturn:0 exception bytes:53" , "millis" : 88}
27. #To see newest information first
db.system.profile.find().sort({$natural:-1})
28. #To view information from a certain time range
db.system.profile.find(
...{ts:{$gt:new ISODate("2011-07-12T03:00:00Z"),
... $lt:new ISODate("2011-07-12T03:40:00Z")}
...})
29. #In the next example we look at the time range, suppress the user field from the output to make it easier to read, and sort the results by how long each operation took to run.
db.system.profile.find(
...{ts:{$gt:new ISODate("2011-07-12T03:00:00Z"),
... $lt:new ISODate("2011-07-12T03:40:00Z")}
...}
...,{user:0}).sort({millis:-1})
30. #View slow queries taking more than 5 ms
db.system.profile.find( { millis : { $gt : 5 } } )
31. #View the most recent slow query
db.system.profile.find().sort({$natural:-1}).limit(1)
32. #Kill slow queries
db.currentOp().inprog.forEach(function(o) { if(o.secs_running > 1 && o.waitingForLock==true) db.killOp(o.opid) })
Caveats
1. Automatic balancing in a MongoDB sharding cluster performs very poorly and runs very slowly; the servers stay busy shuffling data for long stretches. For collections whose sharded flag is still false, split the data manually with the balancer switched off first; the splits finish faster and disturb the data less (see the sketch after this list).
2. When a replica set fails over, data on the old primary gets rolled back, so avoid triggering a failover while writing heavily.
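A hedged sketch of such a manual split through mongos (split and moveChunk are standard admin commands in 2.2; the key value 1000000 and target shard s2 are placeholders, not from the original):
#Run against mongos with the balancer stopped (maintenance command 12)
use admin
db.runCommand({ split : "xm_pulse.tb_user", middle : { user_id : 1000000 } })
db.runCommand({ moveChunk : "xm_pulse.tb_user", find : { user_id : 1000000 }, to : "s2" })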
Notes
Appendix 1: the iptables script from the ops team, very nice and very powerful.
#!/bin/sh
#
export LANG=C
#
#
/etc/init.d/iptables stop >/dev/null 2>&1
#
arptables -F
#
# reset the default policies in the filter table.
/sbin/iptables -P INPUT ACCEPT
/sbin/iptables -P FORWARD ACCEPT
/sbin/iptables -P OUTPUT ACCEPT
#
# reset the default policies in the nat table.
#
/sbin/iptables -t nat -P PREROUTING ACCEPT
/sbin/iptables -t nat -P POSTROUTING ACCEPT
/sbin/iptables -t nat -P OUTPUT ACCEPT
#
# reset the default policies in the mangle table.
#
/sbin/iptables -t mangle -P PREROUTING ACCEPT
/sbin/iptables -t mangle -P OUTPUT ACCEPT
#
# flush all the rules in the filter and nat tables.
#
/sbin/iptables -F
/sbin/iptables -t nat -F
/sbin/iptables -t mangle -F
#
# erase all chains that aren't default in the filter and nat tables.
#
/sbin/iptables -X
/sbin/iptables -t nat -X
/sbin/iptables -t mangle -X
#Zero counters in all chains
/sbin/iptables -Z
/sbin/iptables -t nat -Z
/sbin/iptables -t mangle -Z
# flush all the rules in the filter and nat tables.
#
/sbin/iptables -F
/sbin/iptables -t nat -F
/sbin/iptables -t mangle -F
#
#echo '1255350' > /proc/sys/net/ipv4/ip_conntrack_max
#
if [ "$1" = 'stop' ]
then
/etc/init.d/iptables stop >/dev/null 2>&1
#
arptables -F
exit 0
fi
#
/sbin/modprobe ip_conntrack_ftp
/sbin/modprobe ip_conntrack_pptp
/sbin/modprobe ip_conntrack_proto_sctp
/sbin/modprobe ip_nat_ftp
/sbin/modprobe ip_nat_pptp
#
/sbin/sysctl -q -w net.ipv4.ip_forward=1
#
#arp filter
test `ip addr list dev em2 | grep -c 'inet 192.168.'` -eq 0 && arptables -A IN -i em2 -d 192.168.0.0/16 -j DROP
test `ip addr list dev em2 | grep -c 'inet 10.'` -eq 0 && arptables -A IN -i em2 -d 10.0.0.0/8 -j DROP
test `ip addr list dev em2 | grep -c 'inet 172.'` -eq 0 && arptables -A IN -i em2 -d 172.16.0.0/12 -j DROP
#
/sbin/iptables -A INPUT -i em2 -m state --state RELATED,ESTABLISHED -j ACCEPT
#
#tcp accept
#
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp --dport 5666 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp --dport 199 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.116/32 --dport 27001 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.200/32 --dport 27001 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.204/32 --dport 27001 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.207/32 --dport 27001 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.117/32 --dport 29000:29003 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.118/32 --dport 29000:29003 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.119/32 --dport 29000:29003 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 180.186.32.0/24 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 58.68.247.0/27 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 58.68.235.0/27 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 58.68.235.64/26 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 211.103.219.162/32 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 59.108.40.194/32 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.237.0.0/16 --dport 22 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -s 10.100.2.0/24 --dport 22 -m state --state NEW -j ACCEPT
#
/sbin/iptables -I FORWARD -i ppp+ -t mangle -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1280
/sbin/iptables -I FORWARD -i tun+ -t mangle -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1360
#
#default block tcp
/sbin/iptables -A INPUT -i em2 -p tcp -m tcp -m state --state NEW -j DROP
#
#udp accept
#
/sbin/iptables -A INPUT -i em2 -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p udp -m udp --dport 1000:3000 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p udp -m udp --dport 5000:65535 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p udp -m udp --dport 465 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p udp -m udp --dport 123 -m state --state NEW -j ACCEPT
/sbin/iptables -A INPUT -i em2 -p udp -m udp --dport 161 -s 180.186.32.213 -m state --state NEW -j ACCEPT
#
#default block udp
/sbin/iptables -A INPUT -i em2 -p udp -m udp -m state --state NEW -j DROP
#
#
#icmp accept
#
/sbin/iptables -A INPUT -i em2 -p icmp -m icmp --icmp-type 8 -m state --state NEW -j ACCEPT
#
#default block icmp
/sbin/iptables -A INPUT -i em2 -p icmp -j DROP
#
/sbin/iptables -I POSTROUTING -t nat -o em2 -s 192.168.0.0/16 -j MASQUERADE
/sbin/iptables -I POSTROUTING -t nat -o em2 -s 172.16.0.0/12 -j MASQUERADE
/sbin/iptables -I POSTROUTING -t nat -o em2 -s 10.0.0.0/8 -j MASQUERADE
#
#temp for re-form
####
###/sbin/iptables -I POSTROUTING -t nat -o eth1 -j MASQUERADE
###/sbin/iptables -I POSTROUTING -t nat -o eth1 -s 192.168.0.0/16 -j RETURN
###/sbin/iptables -I POSTROUTING -t nat -o eth1 -s 10.2.0.0/16 -j RETURN
#############for MiVPN
#########
########tunlist=`ip route | grep 'dev tunl' | grep 'proto kernel scope link src' | awk '{ print $3 }'`
########if [ -z "$tunlist" ]
########then
######## echo "`date` INFO: IPIP tunnel no exist."
########else
######## for onetunl in $tunlist
######## do
######## if [ "$onetunl" == 'tunl1' ]
######## then
######## echo "`date` INFO: skipped IPIP tunnel $onetunl"
######## continue;
######## fi
######## onetunlip=`ip route | grep "dev $onetunl" | awk -F'src' '{ print $2 }' | awk '{ print $1 }'`
######## if [ -z "$onetunlip" ]
######## then
######## echo "`date` WARNING: IPIP tunnel $onetunl ip no found."
######## continue;
######## fi
######## stra=`echo $onetunlip | awk -F'.' '{ print $1 }'`
######## strb=`echo $onetunlip | awk -F'.' '{ print $2 }'`
######## strc=`echo $onetunlip | awk -F'.' '{ print $3 }'`
######## snatip="$stra.$strb.$strc.100-$stra.$strb.$strc.200:5000-65000"
######## /sbin/iptables -I POSTROUTING -t nat -o $onetunl -j MASQUERADE
######## /sbin/iptables -I POSTROUTING -t nat -p icmp -o $onetunl -j MASQUERADE
######## #/sbin/iptables -I POSTROUTING -t nat -o $onetunl -p udp -j RETURN
######## #/sbin/iptables -I POSTROUTING -t nat -o $onetunl -p tcp -j RETURN
######## /sbin/iptables -I POSTROUTING -t nat -o $onetunl -p udp -j SNAT --to-source $snatip
######## /sbin/iptables -I POSTROUTING -t nat -o $onetunl -p tcp -j SNAT --to-source $snatip
######## done
########fi
#########
#
Appendix 2: test.js
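The script below is a CPU-bound string benchmark: each pass builds 10,000 strings of 867 characters, sums their lengths, and repeats 100 times, so the expected result is 867000000, matching the shell output shown earlier.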
function dotest() {
var str = "xxxxxxxxxxxxxxxxx0000000000";
var data = str + str + str + str;
var data = data + data + data + data;
var max = 10000;
var arr = [];
var total = 0;
for(var a=0; a<100; a++) {
for(var i=0; i<max;i++){
arr.push( data + " . " + data);
}
for(var i=0; i<max;i++){
total += arr[i].length;
}
arr = [];
}
return total;
}
myecho = (typeof console !== 'undefined' && typeof console.log == 'function') ? console.log : print;
a = new Date();
myecho("begin:\t" + a);
myecho("result:\t" + dotest());
b = new Date();
myecho("end:\t" + b);
myecho("total time:\t" + (b - a)); 原文链接:http://blog.csdn.net/ylqmf/article/details/7958804