1. Environment Preparation
1.1 Host Information (for machine specs, see the Hardware and Development Standards document V1.0)
| No. | Hostname | IP |
| --- | --- | --- |
| 1 | DB_01 | 10.202.105.52 |
| 2 | DB_02 | 10.202.105.53 |
| 3 | DB_03 | 10.202.105.54 |
| 4 | CNSZ17PL0897 | 10.117.176.215 (tentative; backup/monitoring host) |
| Server 52 (DB_01) | Server 53 (DB_02) | Server 54 (DB_03) |
| --- | --- | --- |
| mongos | mongos | mongos |
| config server | config server | config server |
| shard server1 primary | shard server1 secondary | shard server1 arbiter |
| shard server2 arbiter | shard server2 primary | shard server2 secondary |
| shard server3 secondary | shard server3 arbiter | shard server3 primary |
Port allocation (ports may be changed to suit the environment):
mongos: 20000, config: 21000, shard1: 27001, shard2: 27002, shard3: 27003
1.2 Software Versions
| Component | Version | Notes |
| --- | --- | --- |
| MongoDB | 3.4.X |  |
1.3 Software Download
https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.4.7.tgz
2. Software Deployment
2.1 Software Installation
Create the mongodb user and grant it ownership of the install and data directories (on all three machines):
useradd mongodb
passwd mongodb
chown -R mongodb:mongodb /app/mongodb/
chown -R mongodb:mongodb /data/mongodb/
Then, as the mongodb user, generate the inter-node authentication keyfile on one machine, sync it to the same location on the other two machines, and give it the same permissions:
openssl rand -base64 100 > /data/keyFile/keyFilers0.key
chmod 600 /data/keyFile/keyFilers0.key
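The copy step can be sketched as a small helper function. This is a hypothetical sketch, assuming the mongodb user can ssh/scp between the hosts (the IPs are from section 1.1); the original document does not specify how the file is synced:

```shell
# Hypothetical helper: copy the keyfile generated on this node to the other
# two nodes and apply the same 600 permissions there. Assumes passwordless
# ssh/scp access as the mongodb user between the cluster hosts.
sync_keyfile() {
  key=$1
  for host in 10.202.105.53 10.202.105.54; do
    scp "$key" "mongodb@$host:$key" && \
    ssh "mongodb@$host" "chmod 600 '$key'"
  done
}
# usage: sync_keyfile /data/keyFile/keyFilers0.key
```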
1. Create directories (on all three machines):
mkdir -p /data/mongodb/conf
mkdir -p /data/mongodb/mongos/log
mkdir -p /data/mongodb/config/data
mkdir -p /data/mongodb/config/log
mkdir -p /data/mongodb/shard1/data
mkdir -p /data/mongodb/shard1/log
mkdir -p /data/mongodb/shard2/data
mkdir -p /data/mongodb/shard2/log
mkdir -p /data/mongodb/shard3/data
mkdir -p /data/mongodb/shard3/log

2. Install MongoDB (on all three machines, under /app/mongo/…):
tar -zxvf mongodb-linux-x86_64-rhel70-3.4.7.tgz

Then configure the environment variables:
vim /etc/profile
# add:
export MONGODB_HOME=…
export PATH=$MONGODB_HOME/bin:$PATH
# apply immediately:
source /etc/profile
2.2 Configuration Files
1. Edit config.conf and set the following options (on all three machines):
vi /data/mongodb/conf/config.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/config/log/configsrv.log
storage:
  dbPath: /data/mongodb/config/data
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      journalCompressor: zlib
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/config/log/configsrv.pid
net:
  bindIp: 0.0.0.0
  port: 21000
#security:
#  keyFile: /data/keyFile/keyFilers0.key
#  clusterAuthMode: keyFile
operationProfiling:
  slowOpThresholdMs: 1000
  mode: slowOp
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr
2. Edit shard1.conf and set the following options (on all three machines):
vi /data/mongodb/conf/shard1.conf

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /data/mongodb/shard1/log/shard1.log
# Where and how to store data.
storage:
  dbPath: /data/mongodb/shard1/data
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
operationProfiling:
  slowOpThresholdMs: 100
  mode: "all"
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard1/log/shard1.pid
net:
  port: 27001
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
security:
  #authorization: enabled
  #clusterAuthMode: keyFile
  #keyFile: /data/keyFile/keyFilers0.key
  javascriptEnabled: true
setParameter:
  enableLocalhostAuthBypass: false
  authenticationMechanisms: SCRAM-SHA-1
replication:
  oplogSizeMB: 10000   # oplog size
  replSetName: shard1
sharding:
  clusterRole: shardsvr   # configsvr or shardsvr
## Enterprise-Only Options
#auditLog:
#snmp:
3. Edit shard2.conf and set the following options (on all three machines):
vi /data/mongodb/conf/shard2.conf

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /data/mongodb/shard2/log/shard2.log
# Where and how to store data.
storage:
  dbPath: /data/mongodb/shard2/data
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 128
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
operationProfiling:
  slowOpThresholdMs: 100
  mode: "all"
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard2/log/shard2.pid
net:
  port: 27002
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
security:
  #authorization: enabled
  #clusterAuthMode: keyFile
  #keyFile: /data/keyFile/keyFilers0.key
  javascriptEnabled: true
setParameter:
  enableLocalhostAuthBypass: false
  authenticationMechanisms: SCRAM-SHA-1
replication:
  oplogSizeMB: 10000   # oplog size
  replSetName: shard2
sharding:
  clusterRole: shardsvr   # configsvr or shardsvr
## Enterprise-Only Options
#auditLog:
#snmp:
4. Edit shard3.conf and set the following options (on all three machines):
vi /data/mongodb/conf/shard3.conf

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /data/mongodb/shard3/log/shard3.log
# Where and how to store data.
storage:
  dbPath: /data/mongodb/shard3/data
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 128
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
operationProfiling:
  slowOpThresholdMs: 100
  mode: "all"
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard3/log/shard3.pid
net:
  port: 27003
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
security:
  #authorization: enabled
  #clusterAuthMode: keyFile
  #keyFile: /data/keyFile/keyFilers0.key
  javascriptEnabled: true
setParameter:
  enableLocalhostAuthBypass: false
  authenticationMechanisms: SCRAM-SHA-1
replication:
  oplogSizeMB: 10000   # oplog size
  replSetName: shard3
sharding:
  clusterRole: shardsvr   # configsvr or shardsvr
## Enterprise-Only Options
#auditLog:
#snmp:
5. Edit mongos.conf and set the following options (on all three machines):
vi /data/mongodb/conf/mongos.conf

systemLog:
  quiet: false
  path: /data/mongodb/mongos/log/mongos.log
  logAppend: true
  destination: file
processManagement:
  fork: true
  pidFilePath: /data/mongodb/mongos/log/mongos.pid
net:
  bindIp: 0.0.0.0
  port: 20000
  maxIncomingConnections: 20000
  wireObjectCheck: true
  ipv6: false
#security:
#  clusterAuthMode: keyFile
#  keyFile: /data/keyFile/keyFilers0.key
setParameter:
  enableLocalhostAuthBypass: false
  authenticationMechanisms: SCRAM-SHA-1
replication:
  localPingThresholdMs: 15
sharding:
  configDB: configs/10.202.105.52:21000,10.202.105.53:21000,10.202.105.54:21000
2.3 Initializing the Services and Replica Sets
1. Start the config server on all three machines and configure its replica set:
mongod -f /data/mongodb/conf/config.conf

Log in to any one of the config servers and initialize the config replica set:
# connect
mongo --port 21000

use admin
# define the config variable
config = {
  _id: "configs",
  members: [
    {_id: 0, host: "10.202.105.52:21000"},
    {_id: 1, host: "10.202.105.53:21000"},
    {_id: 2, host: "10.202.105.54:21000"}
  ]
}
# initialize the replica set
rs.initiate(config)
Here "_id": "configs" must match the replication.replSetName set in the configuration file, and the "host" entries under "members" are the IP and port of the three nodes.

2. Start the first shard replica set (shard1) on all three machines and configure it:
mongod -f /data/mongodb/conf/shard1.conf

Log in to any one of the servers and initialize the replica set:

mongo --port 27001
# use the admin database
use admin
# define the replica set config; "arbiterOnly: true" marks the third node as the arbiter
config = {
  _id: "shard1",
  members: [
    {_id: 0, host: "10.202.105.52:27001"},
    {_id: 1, host: "10.202.105.53:27001"},
    {_id: 2, host: "10.202.105.54:27001", arbiterOnly: true}
  ]
}
# initialize the replica set
rs.initiate(config);

3. Start the second shard replica set (shard2) on all three machines and configure it:
mongod -f /data/mongodb/conf/shard2.conf

Log in to any one of the servers and initialize the replica set:

mongo --port 27002
# use the admin database
use admin
# define the replica set config
config = {
  _id: "shard2",
  members: [
    {_id: 0, host: "10.202.105.53:27002"},
    {_id: 1, host: "10.202.105.54:27002"},
    {_id: 2, host: "10.202.105.52:27002", arbiterOnly: true}
  ]
}
# initialize the replica set
rs.initiate(config);
4. Start the third shard replica set (shard3) on all three machines and configure it:

mongod -f /data/mongodb/conf/shard3.conf

Log in to any one of the servers and initialize the replica set:

mongo --port 27003
# use the admin database
use admin
# define the replica set config
config = {
  _id: "shard3",
  members: [
    {_id: 0, host: "10.202.105.54:27003"},
    {_id: 1, host: "10.202.105.52:27003"},
    {_id: 2, host: "10.202.105.53:27003", arbiterOnly: true}
  ]
}
# initialize the replica set
rs.initiate(config);

5. Start the mongos server on all three machines:
mongos -f /data/mongodb/conf/mongos.conf

6. Enable sharding:
At this point the config servers, the mongos routers, and the shard servers are all running, but an application connecting to a mongos cannot use sharding yet; the shards must first be registered with the cluster for sharding to take effect.

Log in to any mongos:

mongo --port 20000
# use the admin database
use admin
# register the shard replica sets with the router
sh.addShard("shard1/10.202.105.52:27001,10.202.105.53:27001,10.202.105.54:27001")
sh.addShard("shard2/10.202.105.52:27002,10.202.105.53:27002,10.202.105.54:27002")
sh.addShard("shard3/10.202.105.52:27003,10.202.105.53:27003,10.202.105.54:27003")
# check cluster status
sh.status()

Check the listening ports with:
ss -atunlp | grep mongo
7. Test sharding:
The config service, router, shards, and replica sets are now all wired together, but the goal is for inserted data to be sharded automatically. Connect to a mongos and enable sharding for a specific database and collection.

# enable sharding for the zhaobo database
sh.enableSharding("zhaobo");
# specify the collection to shard and its shard key
sh.shardCollection("zhaobo.table", { id: "hashed" });

This shards the table collection of the zhaobo database, hashing on id so documents are distributed automatically across shard1, shard2, and shard3. Explicit configuration is required because not every MongoDB database or collection needs to be sharded.

Test the sharding configuration:

mongo 127.0.0.1:20000
# use the zhaobo database
use zhaobo
# insert test data
for (i = 1; i <= 10000; i++) { db.table.save({id: i, "message": "message" + i}); }

Then check the shard distribution with db.table.stats().
2.4 Adding Security
http://blog.csdn.net/u011191463/article/details/68485529 // guide to mongo user types
http://www.jianshu.com/p/f585f71acbf2 // user creation steps

1) On the router, switch to the admin database and create the admin account with db.createUser(), then switch to the target database and create its account the same way:
use admin
db.createUser({
  user: "admin",
  pwd: "123456",
  roles: [
    {role: "userAdminAnyDatabase", db: "admin"},
    {role: "readAnyDatabase", db: "admin"},
    {role: "dbOwner", db: "admin"},
    {role: "userAdmin", db: "admin"},
    {role: "root", db: "admin"},
    {role: "clusterMonitor", db: "admin"},
    {role: "dbAdmin", db: "admin"}
  ]
})
The username is admin and the password is 123456. Verify with db.auth("admin", "123456"); a return value of 1 means authentication succeeded.

Then switch to the zhaobo database and create its user:
use zhaobo
db.createUser({
  user: "zhaobo",
  pwd: "123456",
  roles: [
    {role: "dbOwner", db: "zhaobo"},
    {role: "userAdmin", db: "zhaobo"},
    {role: "dbAdmin", db: "zhaobo"}
  ]
})
Verify with db.auth("zhaobo", "123456"); a return value of 1 means success.

2) Connect to the config replica set:
mongo --port 21000
use config
db.createUser({
  user: "admin",
  pwd: "123456",
  roles: [
    {role: "dbAdmin", db: "config"}
  ]
})
db.auth("admin", "123456")

Connect to the primary of each shard replica set (ports 27001, 27002, 27003) and, on each primary:
use admin
db.createUser({
  user: "shard3",
  pwd: "123456",
  roles: [
    {role: "root", db: "admin"},
    {role: "clusterMonitor", db: "admin"},
    {role: "dbAdmin", db: "admin"}
  ]
})
db.auth("shard3", "123456")

3) Generate the keyfile (skip this step if it was already generated during installation):
openssl rand -base64 100 > /data/keyFile/keyFilers0.key
Copy the keyfile to each server. Note: its permissions must be set to 600.
chmod 600 /data/keyFile/keyFilers0.key

4) Kill all mongod and mongos processes, uncomment the security section in each configuration file (this document sets startup parameters via config files), then restart the shard, config, and router servers.

5) After that, the router can be accessed with the accounts and passwords configured above (verified from the Robo 3T client). When logging in locally, authenticate with db.auth("admin", "123456") and the database can be used normally.

MongoDB's startup order is: config servers first, then the shards, and mongos last.
mongod -f /data/mongodb/conf/config.conf
mongod -f /data/mongodb/conf/shard1.conf
mongod -f /data/mongodb/conf/shard2.conf
mongod -f /data/mongodb/conf/shard3.conf
mongos -f /data/mongodb/conf/mongos.conf

To shut down mongos:
use admin
db.shutdownServer()
2.5 Startup and Shutdown Operations Scripts

1. MongoDB's startup order is: config servers first, then the shards, and mongos last.
mongod -f /data/mongodb/conf/config.conf
mongod -f /data/mongodb/conf/shard1.conf
mongod -f /data/mongodb/conf/shard2.conf
mongod -f /data/mongodb/conf/shard3.conf
mongos -f /data/mongodb/conf/mongos.conf

2. To shut down, connect to each instance and run:
use admin
db.shutdownServer()
Then list the remaining mongo process PIDs:
ps -ef | grep mongo
and kill the corresponding mongod processes on each of the three machines.
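The startup and shutdown order above can be wrapped in small helper functions. This is a sketch under the paths used throughout this document; `mongod --shutdown` is used here as a cleaner alternative to killing PIDs (it reads the dbPath from the config file and performs a clean shutdown):

```shell
# Sketch of start/stop helpers following the order described above.
CONF=/data/mongodb/conf

start_cluster() {
  mongod -f "$CONF/config.conf"     # 1. config servers first
  mongod -f "$CONF/shard1.conf"     # 2. then each shard
  mongod -f "$CONF/shard2.conf"
  mongod -f "$CONF/shard3.conf"
  mongos -f "$CONF/mongos.conf"     # 3. mongos last
}

stop_cluster() {
  pkill -f "mongos -f $CONF/mongos.conf"     # stop in reverse order: mongos first
  mongod -f "$CONF/shard1.conf" --shutdown   # clean shutdown of each mongod
  mongod -f "$CONF/shard2.conf" --shutdown
  mongod -f "$CONF/shard3.conf" --shutdown
  mongod -f "$CONF/config.conf" --shutdown
}
```

Run these on each of the three machines (e.g. `start_cluster` after a reboot).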
 
        