| Purpose | IP | Port | Notes | Install path |
| --- | --- | --- | --- | --- |
| ConfigServer | 172.16.16.120 | 30001 | | /db/configS |
| ConfigServer | 172.16.16.121 | 30001 | | /db/configS |
| ConfigServer | 172.16.16.122 | 30001 | | /db/configS |
| shard1 | 172.16.16.124 | 40001 | shard1 primary | /db/shard1 |
| shard1 | 172.16.16.125 | 40001 | shard1 secondary | /db/shard1 |
| shard1 | 172.16.16.126 | 40001 | shard1 arbiter | /db/shard1 |
| shard2 | 172.16.16.125 | 40002 | shard2 primary | /db/shard2 |
| shard2 | 172.16.16.126 | 40002 | shard2 secondary | /db/shard2 |
| shard2 | 172.16.16.131 | 40002 | shard2 arbiter | /db/shard2 |
| shard3 | 172.16.16.126 | 40003 | shard3 primary | /db/shard3 |
| shard3 | 172.16.16.131 | 40003 | shard3 secondary | /db/shard3 |
| shard3 | 172.16.16.124 | 40003 | shard3 arbiter | /db/shard3 |
| shard4 | 172.16.16.131 | 40004 | shard4 primary | /db/shard4 |
| shard4 | 172.16.16.124 | 40004 | shard4 secondary | /db/shard4 |
| shard4 | 172.16.16.125 | 40004 | shard4 arbiter | /db/shard4 |
| mongos | 172.16.16.124 | 50001 | in production, usually deployed directly on the application servers | /db/mongos |
| mongos | 172.16.16.125 | 50001 | | /db/mongos |
| mongos | 172.16.16.126 | 50001 | | /db/mongos |
| mongos | 172.16.16.131 | 50001 | | /db/mongos |
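The layout above can be captured as data; a quick Python sketch (not part of the original article) checks that each shard's primary, secondary, and arbiter sit on three distinct hosts, so no single machine failure takes out a voting majority of any replica set:

```python
# Node layout from the table above (shard4's primary taken from the
# rs.initiate commands later in this walkthrough).
shards = {
    "shard1": {"port": 40001, "primary": "172.16.16.124", "secondary": "172.16.16.125", "arbiter": "172.16.16.126"},
    "shard2": {"port": 40002, "primary": "172.16.16.125", "secondary": "172.16.16.126", "arbiter": "172.16.16.131"},
    "shard3": {"port": 40003, "primary": "172.16.16.126", "secondary": "172.16.16.131", "arbiter": "172.16.16.124"},
    "shard4": {"port": 40004, "primary": "172.16.16.131", "secondary": "172.16.16.124", "arbiter": "172.16.16.125"},
}

for name, s in shards.items():
    hosts = {s["primary"], s["secondary"], s["arbiter"]}
    # Three distinct hosts: losing any one machine leaves two voting members.
    assert len(hosts) == 3, f"{name} places two roles on one host"
```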
opt]# tar zxvf mongodb-linux-x86_64-rhel55-3.0.7.gz
opt]# mv mongodb-linux-x86_64-rhel55-3.0.7 /usr/local/mongodb
opt]# useradd mongo
opt]# passwd mongo
Changing password for user mongo.
New UNIX password:
BAD PASSWORD: it is too simplistic/systematic
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
opt]# chown -R mongo:mongo /usr/local/mongodb/
opt]# chown -R mongo:mongo /db
#mkdir -p /db/configS/data && mkdir -p /db/configS/log   (ConfigServer data and logs)
#mkdir -p /db/shard1/data && mkdir -p /db/shard1/log   (shard1 data and logs)
#mkdir -p /db/shard2/data && mkdir -p /db/shard2/log   (shard2 data and logs)
#mkdir -p /db/shard3/data && mkdir -p /db/shard3/log   (shard3 data and logs)
#mkdir -p /db/shard4/data && mkdir -p /db/shard4/log   (shard4 data and logs)
#mkdir -p /db/mongos/log   (mongos only routes requests and stores no data, so it needs just a log directory)
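The same layout can be scripted; a sketch (using a temporary directory as a stand-in for /db so it runs unprivileged):

```python
import os
import tempfile

roles = ["configS", "shard1", "shard2", "shard3", "shard4"]
# data + log for every mongod role; mongos routes only, so log only.
layout = [f"{r}/{sub}" for r in roles for sub in ("data", "log")] + ["mongos/log"]

root = tempfile.mkdtemp()  # stand-in for /db
for rel in layout:
    os.makedirs(os.path.join(root, rel), exist_ok=True)  # mirrors `mkdir -p`
```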
#vim /usr/local/mongodb/conf/configServer.conf
systemLog:
  destination: file
  path: "/db/configS/log/configServer.log"  # log location
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/db/configS/data"  # data file location
  directoryPerDB: true  # one directory per database
  engine: wiredTiger  # storage engine
  wiredTiger:  # WiredTiger engine settings
    engineConfig:
      cacheSizeGB: 6  # set to 6 GB; the default is half of physical RAM
      directoryForIndexes: true  # store indexes in per-database directories too
      journalCompressor: zlib
    collectionConfig:  # collection compression
      blockCompressor: zlib
    indexConfig:  # index compression
      prefixCompression: true
net:
  port: 30001  # all three config servers use 30001 (they run on different hosts)
processManagement:
  fork: true  # run as a daemon
sharding:
  clusterRole: configsvr  # role in the sharded cluster
Start the config server mongod on each of the three config hosts:
conf]$ /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/configServer.conf
#vim /usr/local/mongodb/conf/mongos.conf
systemLog:
  destination: file
  path: "/db/mongos/log/mongos.log"
  logAppend: true
net:
  port: 50001
sharding:
  configDB: 172.16.16.120:30001,172.16.16.121:30001,172.16.16.122:30001
processManagement:
  fork: true
Start mongos (after the config servers are up) on each of the four routing hosts:
conf]$ /usr/local/mongodb/bin/mongos -f /usr/local/mongodb/conf/mongos.conf
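In MongoDB 3.0, every mongos should be given the same `configDB` string, listing all three config servers in the same order. A trivial sketch assembling it:

```python
# The three config server hosts from the table at the top of this article.
config_servers = ["172.16.16.120", "172.16.16.121", "172.16.16.122"]
configdb = ",".join(f"{host}:30001" for host in config_servers)
```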
#vim /usr/local/mongodb/conf/shard1.conf
systemLog:
  destination: file
  path: "/db/shard1/log/shard1.log"  # log location
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/db/shard1/data"  # data file location
  directoryPerDB: true  # one directory per database
  engine: wiredTiger  # storage engine
  wiredTiger:  # WiredTiger engine settings
    engineConfig:
      cacheSizeGB: 6  # set to 6 GB; the default is half of physical RAM
      directoryForIndexes: true  # store indexes in per-database directories too
      journalCompressor: zlib
    collectionConfig:  # collection compression
      blockCompressor: zlib
    indexConfig:  # index compression
      prefixCompression: true
net:
  port: 40001
processManagement:
  fork: true  # run as a daemon
sharding:
  clusterRole: shardsvr  # role in the sharded cluster
replication:
  replSetName: shard1  # replica set name
Start the shard1 mongod on each of its three hosts:
conf]$ /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/shard1.conf
#vim /usr/local/mongodb/conf/shard2.conf
systemLog:
  destination: file
  path: "/db/shard2/log/shard2.log"  # log location
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/db/shard2/data"  # data file location
  directoryPerDB: true  # one directory per database
  engine: wiredTiger  # storage engine
  wiredTiger:  # WiredTiger engine settings
    engineConfig:
      cacheSizeGB: 6  # set to 6 GB; the default is half of physical RAM
      directoryForIndexes: true  # store indexes in per-database directories too
      journalCompressor: zlib
    collectionConfig:  # collection compression
      blockCompressor: zlib
    indexConfig:  # index compression
      prefixCompression: true
net:
  port: 40002
processManagement:
  fork: true  # run as a daemon
sharding:
  clusterRole: shardsvr  # role in the sharded cluster
replication:
  #oplogSizeMB:
  replSetName: shard2  # replica set name
Start the shard2 mongod on each of its three hosts:
conf]$ /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/shard2.conf
#vim /usr/local/mongodb/conf/shard3.conf
systemLog:
  destination: file
  path: "/db/shard3/log/shard3.log"  # log location
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/db/shard3/data"  # data file location
  directoryPerDB: true  # one directory per database
  engine: wiredTiger  # storage engine
  wiredTiger:  # WiredTiger engine settings
    engineConfig:
      cacheSizeGB: 6  # set to 6 GB; the default is half of physical RAM
      directoryForIndexes: true  # store indexes in per-database directories too
      journalCompressor: zlib
    collectionConfig:  # collection compression
      blockCompressor: zlib
    indexConfig:  # index compression
      prefixCompression: true
net:
  port: 40003
processManagement:
  fork: true  # run as a daemon
sharding:
  clusterRole: shardsvr  # role in the sharded cluster
replication:
  #oplogSizeMB:
  replSetName: shard3  # replica set name
Start the shard3 mongod on each of its three hosts:
conf]$ /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/shard3.conf
#vim /usr/local/mongodb/conf/shard4.conf
systemLog:
  destination: file
  path: "/db/shard4/log/shard4.log"  # log location
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/db/shard4/data"  # data file location
  directoryPerDB: true  # one directory per database
  engine: wiredTiger  # storage engine
  wiredTiger:  # WiredTiger engine settings
    engineConfig:
      cacheSizeGB: 6  # set to 6 GB; the default is half of physical RAM
      directoryForIndexes: true  # store indexes in per-database directories too
      journalCompressor: zlib
    collectionConfig:  # collection compression
      blockCompressor: zlib
    indexConfig:  # index compression
      prefixCompression: true
net:
  port: 40004
processManagement:
  fork: true  # run as a daemon
sharding:
  clusterRole: shardsvr  # role in the sharded cluster
replication:
  #oplogSizeMB:
  replSetName: shard4  # replica set name
Start the shard4 mongod on each of its three hosts:
conf]$ /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/conf/shard4.conf
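The four shardN.conf files above differ only in their paths, port, and replica set name. A small generator (a sketch, not part of the original setup) renders all four from one template and keeps them from drifting apart:

```python
# Template matching the shardN.conf files shown above; {name} and {port}
# are the only per-shard values.
TEMPLATE = """systemLog:
  destination: file
  path: "/db/{name}/log/{name}.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/db/{name}/data"
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 6
      directoryForIndexes: true
      journalCompressor: zlib
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  port: {port}
processManagement:
  fork: true
sharding:
  clusterRole: shardsvr
replication:
  replSetName: {name}
"""

configs = {f"shard{i}": TEMPLATE.format(name=f"shard{i}", port=40000 + i)
           for i in range(1, 5)}
```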
Initialize the shard1 replica set from its primary-to-be:
bin]$ ./mongo 172.16.16.124:40001
MongoDB shell version: 3.0.7
connecting to: 172.16.16.124:40001/test
> use admin
switched to db admin
> config = { _id:"shard1", members:[
    {_id:0,host:"172.16.16.124:40001"},
    {_id:1,host:"172.16.16.125:40001"},
    {_id:2,host:"172.16.16.126:40001",arbiterOnly:true}]
}
// output follows
{
    "_id" : "shard1",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.16.16.124:40001"
        },
        {
            "_id" : 1,
            "host" : "172.16.16.125:40001"
        },
        {
            "_id" : 2,
            "host" : "172.16.16.126:40001",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config);  // initialize the replica set
{ "ok" : 1 }
Initialize the shard2 replica set:
bin]$ ./mongo 172.16.16.125:40002
MongoDB shell version: 3.0.7
connecting to: 172.16.16.125:40002/test
> use admin
switched to db admin
> config = { _id:"shard2", members:[
    {_id:0,host:"172.16.16.125:40002"},
    {_id:1,host:"172.16.16.126:40002"},
    {_id:2,host:"172.16.16.131:40002",arbiterOnly:true}]
}
// output follows
{
    "_id" : "shard2",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.16.16.125:40002"
        },
        {
            "_id" : 1,
            "host" : "172.16.16.126:40002"
        },
        {
            "_id" : 2,
            "host" : "172.16.16.131:40002",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config);  // initialize the replica set
{ "ok" : 1 }
Initialize the shard3 replica set:
bin]$ ./mongo 172.16.16.126:40003
MongoDB shell version: 3.0.7
connecting to: 172.16.16.126:40003/test
> use admin
switched to db admin
> config = { _id:"shard3", members:[
    {_id:0,host:"172.16.16.126:40003"},
    {_id:1,host:"172.16.16.131:40003"},
    {_id:2,host:"172.16.16.124:40003",arbiterOnly:true}]
}
// output follows
{
    "_id" : "shard3",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.16.16.126:40003"
        },
        {
            "_id" : 1,
            "host" : "172.16.16.131:40003"
        },
        {
            "_id" : 2,
            "host" : "172.16.16.124:40003",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config);  // initialize the replica set
{ "ok" : 1 }
Initialize the shard4 replica set:
bin]$ ./mongo 172.16.16.131:40004
MongoDB shell version: 3.0.7
connecting to: 172.16.16.131:40004/test
> use admin
switched to db admin
> config = { _id:"shard4", members:[
    {_id:0,host:"172.16.16.131:40004"},
    {_id:1,host:"172.16.16.124:40004"},
    {_id:2,host:"172.16.16.125:40004",arbiterOnly:true}]
}
// output follows
{
    "_id" : "shard4",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.16.16.131:40004"
        },
        {
            "_id" : 1,
            "host" : "172.16.16.124:40004"
        },
        {
            "_id" : 2,
            "host" : "172.16.16.125:40004",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config);  // initialize the replica set
{ "ok" : 1 }
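All four rs.initiate() documents above follow one pattern: two data-bearing members plus an arbiter as member 2. They can be built from a single table, sketched here in Python (the helper name `rs_config` is hypothetical):

```python
ports = {"shard1": 40001, "shard2": 40002, "shard3": 40003, "shard4": 40004}
members = {
    "shard1": ["172.16.16.124", "172.16.16.125", "172.16.16.126"],
    "shard2": ["172.16.16.125", "172.16.16.126", "172.16.16.131"],
    "shard3": ["172.16.16.126", "172.16.16.131", "172.16.16.124"],
    "shard4": ["172.16.16.131", "172.16.16.124", "172.16.16.125"],
}

def rs_config(name):
    cfg = {"_id": name, "members": [
        {"_id": i, "host": f"{host}:{ports[name]}"}
        for i, host in enumerate(members[name])
    ]}
    cfg["members"][2]["arbiterOnly"] = True  # third listed host is the arbiter
    return cfg
```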
Connect to any mongos and register the four shards:
bin]$ ./mongo 172.16.16.124:50001
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"shard1/172.16.16.124:40001,172.16.16.125:40001,172.16.16.126:40001"});
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({addshard:"shard2/172.16.16.125:40002,172.16.16.126:40002,172.16.16.131:40002"});
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({addshard:"shard3/172.16.16.126:40003,172.16.16.131:40003,172.16.16.124:40003"});
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> db.runCommand({addshard:"shard4/172.16.16.131:40004,172.16.16.124:40004,172.16.16.125:40004"});
{ "shardAdded" : "shard4", "ok" : 1 }
mongos> db.runCommand( { listshards : 1 } );
{
    "shards" : [
        {
            "_id" : "shard1",
            "host" : "shard1/172.16.16.124:40001,172.16.16.125:40001"
        },
        {
            "_id" : "shard2",
            "host" : "shard2/172.16.16.125:40002,172.16.16.126:40002"
        },
        {
            "_id" : "shard3",
            "host" : "shard3/172.16.16.126:40003,172.16.16.131:40003"
        },
        {
            "_id" : "shard4",
            "host" : "shard4/172.16.16.124:40004,172.16.16.131:40004"
        }
    ],
    "ok" : 1
}
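The addshard values follow the pattern shardName/host1,host2,host3; note that listshards then reports only the data-bearing members in each "host" field and omits the arbiters. A sketch (the helper name `addshard_cmd` is hypothetical) building those command documents from the same membership table:

```python
ports = {"shard1": 40001, "shard2": 40002, "shard3": 40003, "shard4": 40004}
members = {
    "shard1": ["172.16.16.124", "172.16.16.125", "172.16.16.126"],
    "shard2": ["172.16.16.125", "172.16.16.126", "172.16.16.131"],
    "shard3": ["172.16.16.126", "172.16.16.131", "172.16.16.124"],
    "shard4": ["172.16.16.131", "172.16.16.124", "172.16.16.125"],
}

def addshard_cmd(name):
    # The arbiter (last listed host) is still named in addshard, even
    # though listshards later drops it from the reported host string.
    hosts = ",".join(f"{h}:{ports[name]}" for h in members[name])
    return {"addshard": f"{name}/{hosts}"}
```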
Insert some test data through a mongos:
bin]$ ./mongo 172.16.16.131:50001
MongoDB shell version: 3.0.7
connecting to: 172.16.16.131:50001/test
mongos> use ljaidb
switched to db ljaidb
mongos> for (var i=1;i<=10000;i++) db.ljaitable.save({"name":"ljai","age":27,"addr":"fuzhou"})
WriteResult({ "nInserted" : 1 })
mongos> db.ljaitable.stats()
{
    "sharded" : false,
    "primary" : "shard1",
    "ns" : "ljaidb.ljaitable",
    "count" : 10000,
    "size" : 670000,
    "avgObjSize" : 67,
    "storageSize" : 49152,
    "capped" : false,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        },
        ...
    },
    ...
}
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5625fc29e3c17fdff8517b73")
  }
  shards:
    { "_id" : "shard1", "host" : "shard1/172.16.16.124:40001,172.16.16.125:40001" }
    { "_id" : "shard2", "host" : "shard2/172.16.16.125:40002,172.16.16.126:40002" }
    { "_id" : "shard3", "host" : "shard3/172.16.16.126:40003,172.16.16.131:40003" }
    { "_id" : "shard4", "host" : "shard4/172.16.16.124:40004,172.16.16.131:40004" }
  balancer:
    Currently enabled: yes
    Currently running: yes
    Balancer lock taken at Tue Oct 20 2015 21:01:26 GMT+0800 (CST) by DataServer-04:50001:1445330413:1804289383:Balancer:846930886
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : false, "primary" : "shard1" }
    { "_id" : "ljaidb", "partitioned" : false, "primary" : "shard1" }
Verify on the shard primaries that the unsharded collection lives only on shard1:
bin]$ ./mongo 172.16.16.124:40001
MongoDB shell version: 3.0.7
connecting to: 172.16.16.124:40001/test
shard1:PRIMARY> show dbs
ljaidb  0.000GB
local   0.000GB
shard1:PRIMARY> use ljaidb
switched to db ljaidb
shard1:PRIMARY> show tables
ljaitable
shard1:PRIMARY> db.ljaitable.find().count()
10000
bin]$ ./mongo 172.16.16.125:40002
MongoDB shell version: 3.0.7
connecting to: 172.16.16.125:40002/test
shard2:PRIMARY> show dbs
local  0.000GB
Enable sharding on a database and collection through mongos:
bin]$ ./mongo 172.16.16.124:50001
MongoDB shell version: 3.0.7
connecting to: 172.16.16.124:50001/test
mongos> use admin
switched to db admin
mongos> db.runCommand( { enablesharding :"lymdb"});
{ "ok" : 1 }
mongos> db.runCommand( { shardcollection : "lymdb.lymtable",key : {_id: 1} } )
{ "collectionsharded" : "lymdb.lymtable", "ok" : 1 }
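The commands above shard lymdb.lymtable on a range key over _id. Because default ObjectId values increase monotonically, new inserts all target the highest chunk until the balancer migrates; a hashed shard key ({_id: "hashed"}, available since MongoDB 2.4) spreads inserts across shards instead. A sketch of the equivalent command documents (pymongo would send these via `db.command`; the hashed variant is an alternative noted here, not what this article runs):

```python
# Documents equivalent to the mongo-shell commands above.
enable_sharding = {"enablesharding": "lymdb"}
shard_range = {"shardcollection": "lymdb.lymtable", "key": {"_id": 1}}

# Alternative: hashed key to avoid a hot chunk under monotonic _id inserts.
shard_hashed = {"shardcollection": "lymdb.lymtable", "key": {"_id": "hashed"}}
```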
Java connection code:
import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;

public class TestMongoDBShards {

    public static void main(String[] args) {
        try {
            // All four mongos routers; the driver balances across them.
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            addresses.add(new ServerAddress("172.16.16.124", 50001));
            addresses.add(new ServerAddress("172.16.16.125", 50001));
            addresses.add(new ServerAddress("172.16.16.126", 50001));
            addresses.add(new ServerAddress("172.16.16.131", 50001));

            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("lymdb");
            DBCollection coll = db.getCollection("lymtable");

            // Insert one million test documents through mongos.
            for (int i = 1; i <= 1000000; i++) {
                DBObject saveData = new BasicDBObject();
                saveData.put("id", i);
                saveData.put("userName", "baiwan" + i);
                saveData.put("age", "26");
                saveData.put("gender", "m");
                coll.save(saveData);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Python connection code:
#encoding=UTF-8
import datetime

from pymongo import MongoClient

ISOTIMEFORMAT = '%Y-%m-%d %X'

def dateDiffInSeconds(date1, date2):
    timedelta = date2 - date1
    return timedelta.days * 24 * 3600 + timedelta.seconds

conn = MongoClient("172.16.16.124", 50001)
db = conn.funodb

db.funotable.drop()
date1 = datetime.datetime.now()
for i in range(0, 1000000):
    db.funotable.insert({"name": "ljai", "age": i, "addr": "fuzhou"})
c = db.funotable.find().count()
print("count is ", c)
date2 = datetime.datetime.now()
print(date1)
print(date2)
print("elapsed:", dateDiffInSeconds(date1, date2), "seconds")
conn.close()
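The dateDiffInSeconds helper above ignores microseconds; for whole-second deltas it matches timedelta.total_seconds() (available since Python 2.7). A quick standalone check:

```python
from datetime import datetime, timedelta

def dateDiffInSeconds(date1, date2):
    td = date2 - date1
    return td.days * 24 * 3600 + td.seconds

d1 = datetime(2015, 10, 20, 21, 0, 0)
d2 = d1 + timedelta(days=1, hours=2, seconds=5)
# 86400 + 7200 + 5 seconds:
assert dateDiffInSeconds(d1, d2) == int((d2 - d1).total_seconds()) == 93605
```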
mongos> db.lymtable.getShardDistribution()

Shard shard1 at shard1/172.16.16.124:40001,172.16.16.125:40001
 data : 96.46MiB docs : 1216064 chunks : 4
 estimated data per chunk : 24.11MiB
 estimated docs per chunk : 304016

Shard shard2 at shard2/172.16.16.125:40002,172.16.16.126:40002
 data : 44.9MiB docs : 565289 chunks : 4
 estimated data per chunk : 11.22MiB
 estimated docs per chunk : 141322

Shard shard3 at shard3/172.16.16.126:40003,172.16.16.131:40003
 data : 99.39MiB docs : 1259979 chunks : 4
 estimated data per chunk : 24.84MiB
 estimated docs per chunk : 314994

Shard shard4 at shard4/172.16.16.124:40004,172.16.16.131:40004
 data : 76.46MiB docs : 958668 chunks : 4
 estimated data per chunk : 19.11MiB
 estimated docs per chunk : 239667

Totals
 data : 317.22MiB docs : 4000000 chunks : 16
 Shard shard1 contains 30.4% data, 30.4% docs in cluster, avg obj size on shard : 83B
 Shard shard2 contains 14.15% data, 14.13% docs in cluster, avg obj size on shard : 83B
 Shard shard3 contains 31.33% data, 31.49% docs in cluster, avg obj size on shard : 82B
 Shard shard4 contains 24.1% data, 23.96% docs in cluster, avg obj size on shard : 83B
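The per-shard percentages in that output can be re-derived from the raw document counts it reports; the figures line up if the shell truncates (rather than rounds) to two decimals, which is an observation about this output, not documented behavior:

```python
# Document counts reported by getShardDistribution() above.
docs = {"shard1": 1216064, "shard2": 565289, "shard3": 1259979, "shard4": 958668}
total = sum(docs.values())  # 4000000, matching the Totals line

# Truncate to two decimal places, reproducing e.g. shard3's 31.49%.
pct = {name: int(10000 * n / total) / 100 for name, n in docs.items()}
```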
Building and verifying a MongoDB 3.0.7 (shard + replica) cluster

Original article: http://www.cnblogs.com/mfc-itblog/p/5230723.html