I. MongoDB Introduction
MongoDB is a NoSQL database, written in C++ and designed for distributed deployment. MongoDB stores data as documents whose structure consists of key-value pairs. A MongoDB document is similar to a JSON object: field values may contain other documents, arrays, and arrays of documents.
JSON: JavaScript Object Notation.
JSON is a syntax for storing and exchanging text-based data, similar to XML.
JSON is smaller, faster, and easier to parse than XML.
{
  "employees": [
    { "firstName": "Bill", "lastName": "Gates" },
    { "firstName": "George", "lastName": "Bush" },
    { "firstName": "Thomas", "lastName": "Carter" }
  ]
}
The employees field is an array containing three employee records (objects).
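Because a MongoDB document maps directly onto this notation, any JSON library can consume it. A minimal Python sketch, parsing the example above with the standard library:

```python
import json

# The employees example from above, as a raw JSON string.
text = ('{ "employees": [ '
        '{ "firstName": "Bill", "lastName": "Gates" }, '
        '{ "firstName": "George", "lastName": "Bush" }, '
        '{ "firstName": "Thomas", "lastName": "Carter" } ] }')

doc = json.loads(text)          # parse JSON into native dicts/lists
print(len(doc["employees"]))    # 3
print(doc["employees"][0]["firstName"])  # Bill
```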
What is JSON?
- JSON stands for JavaScript Object Notation.
- JSON is a lightweight format for exchanging text data.
- JSON is language-independent. *
- JSON is self-describing and easy to understand.
* JSON uses JavaScript syntax to describe data objects, but JSON itself is independent of any language or platform. JSON parsers and JSON libraries exist for many different programming languages.
II. Installing MongoDB
官方文档:https://docs.mongodb.com/manual/administration/install-on-linux/
1. Create the repo file
[root@node1 ~]# vim /etc/yum.repos.d/mongodb.repo
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
2. Install MongoDB with yum
[root@node1 ~]# yum install mongodb-org -y
III. Connecting to MongoDB
Start the service:
[root@node1 ~]# systemctl start mongod
Run mongo to enter the mongo shell:
[root@node1 ~]# mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Server has startup warnings:
2018-07-29T14:37:22.419+0800 I CONTROL [initandlisten]
2018-07-29T14:37:22.419+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-07-29T14:37:22.419+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-07-29T14:37:22.419+0800 I CONTROL [initandlisten]
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten]
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten]
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-07-29T14:37:22.420+0800 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service to collect and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL created for you. Anyone you share the URL with will also be able to view this page. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
---
>
If authentication is enabled, connect with:
mongo -u username -p passwd --authenticationDatabase db --host ip --port port
Default port: 27017
Config file: /etc/mongod.conf
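Client drivers usually express the same connection parameters as a MongoDB connection URI. A small sketch of assembling one (the host, credentials, and database here are made-up placeholders; note that reserved characters in the password must be percent-encoded):

```python
from urllib.parse import quote_plus

# Hypothetical credentials and host; substitute your own values.
user, pwd = "test", "p@ss:123"
host, port, auth_db = "192.168.10.205", 27017, "admin"

# Usernames and passwords containing reserved characters such as
# '@' or ':' must be percent-encoded before going into the URI.
uri = "mongodb://%s:%s@%s:%d/?authSource=%s" % (
    quote_plus(user), quote_plus(pwd), host, port, auth_db)

print(uri)
```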
IV. MongoDB User Management
1. Switch to the admin database:
> use admin
switched to db admin
2. Create a user
The createUser command is a bit involved:
> db.createUser({user:"admin",customData:{description:"superuser"},pwd:"123456",roles:[{role:"root",db:"admin"}]})
Successfully added user: {
        "user" : "admin",
        "customData" : { "description" : "superuser" },
        "roles" : [ { "role" : "root", "db" : "admin" } ]
}
Explanation:
user: the username
customData: an optional comment/description
pwd: the password
roles: the user's roles
db: the database each role applies to
User roles:
read: read-only access to the specified database
readWrite: read and write access to the specified database
dbAdmin: allows administrative tasks in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: allows writing to the system.users collection; can create, delete, and manage users in the specified database
clusterAdmin: available only in the admin database; grants administrative rights over all sharding- and replication-related functions
readAnyDatabase: available only in the admin database; grants read access to all databases
userAdminAnyDatabase: available only in the admin database; grants userAdmin rights on all databases
dbAdminAnyDatabase: available only in the admin database; grants dbAdmin rights on all databases
root: available only in the admin database; the superuser role with full privileges
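The argument to db.createUser() is just a document. As a sketch, the same structure can be assembled and sanity-checked as a plain dict before pasting it into the shell (the username and password are placeholders; the admin-only role list mirrors the table above):

```python
# Assemble the createUser argument; every role entry pairs a role
# name with the database it applies to.
user_doc = {
    "user": "admin",
    "customData": {"description": "superuser"},  # optional comment
    "pwd": "123456",
    "roles": [{"role": "root", "db": "admin"}],
}

# Roles from the table above that are only grantable on admin.
ADMIN_ONLY = {"clusterAdmin", "readAnyDatabase",
              "userAdminAnyDatabase", "dbAdminAnyDatabase", "root"}

for r in user_doc["roles"]:
    if r["role"] in ADMIN_ONLY:
        assert r["db"] == "admin", "%s must target the admin db" % r["role"]

print("role check passed")
```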
Create a user scoped to a specific database:
> db.createUser({user:"test",pwd:"123456",roles:[{role:"read",db:"testdb"}]})
Successfully added user: {
        "user" : "test",
        "roles" : [ { "role" : "read", "db" : "testdb" } ]
}
Creating a database: switching to it with use creates it (it is persisted once data is written).
Switch to the db1 database and create user test there (with a role on db2):
> use db1
switched to db db1
> db.createUser({user:"test",pwd:"123456",roles:[{role:"readWrite",db:"db2"}]})
Successfully added user: {
        "user" : "test",
        "roles" : [ { "role" : "readWrite", "db" : "db2" } ]
}
> use db1
switched to db db1
> db.auth("test","123456")
1
3. List user information
> use admin
switched to db admin
> db.system.users.find()
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "N/+ghBCkjLHfWBfUYlqTOA==", "storedKey" : "w/iUuIVz4fhUHzNnhdGDmHvUlRA=", "serverKey" : "XosIjW/gr7yWkhCxMAcABLV2QVM=" }, "SCRAM-SHA-256" : { "iterationCount" : 15000, "salt" : "ycyB0yeZz/j1NPl/kUz6CA+4oshUc5RucpJwoA==", "storedKey" : "gMx1Ur7Xxzi3yBtkvgwY2Mn+EjBOFdaFLt1QeRD8OBY=", "serverKey" : "kTR6DklJcma9qXiaBmq9cLDSQjJRc8QiUNUxtfaVVgQ=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
4. View all users in the current database
> show users
{
        "_id" : "admin.admin",
        "user" : "admin",
        "db" : "admin",
        "customData" : { "description" : "superuser" },
        "roles" : [ { "role" : "root", "db" : "admin" } ],
        "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ]
}
5. Delete a user
Delete the admin user:
> db.dropUser('admin')
true
6. Making the created users take effect
Edit /usr/lib/systemd/system/mongod.service.
Verify the login:
1. Create the test user
> use admin
switched to db admin
> show users
{
        "_id" : "admin.test",
        "user" : "test",
        "db" : "admin",
        "roles" : [ { "role" : "read", "db" : "testdb" } ],
        "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ]
}
> db.dropUser('test')
> db.createUser({user:"test",pwd:"123456",roles:[{role:"root",db:"admin"}]})
2. Add --auth after OPTIONS=
[root@node1 ~]# vim /usr/lib/systemd/system/mongod.service
...
Environment="OPTIONS=--auth -f /etc/mongod.conf"
...
3. Restart:
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart mongod
4. Log in:
[root@node1 ~]# mongo -u test -p 123456 --authenticationDatabase admin
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Server has startup warnings:
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten]
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten]
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-07-29T16:51:06.688+0800 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service to collect and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL created for you. Anyone you share the URL with will also be able to view this page. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
---
>
V. Creating Collections and Managing Data
To simplify the following experiments, disable authenticated login.
Remove --auth from /usr/lib/systemd/system/mongod.service, then restart:
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart mongod
Create a collection:
> db.createCollection("mycol",{capped:true,autoIndexId:true,size:6142800,max:10000})
{ "note" : "the autoIndexId option is deprecated and will be removed in a future release", "ok" : 1 }
Parameters:
Syntax: db.createCollection(name, options)
name: the name of the collection
options: optional parameters, including:
capped true/false: whether to create a capped collection; a capped collection has a fixed size
autoIndexId true/false: whether to automatically create an index on _id (deprecated, as the server output above notes)
size: the maximum size of the capped collection, in bytes
max: the maximum number of documents allowed in the capped collection
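The interplay of these options can be sketched as a tiny validator: a capped collection requires size, and max is only meaningful for capped collections. This is an illustrative helper, not part of MongoDB, and the option names simply mirror the createCollection call above:

```python
def check_collection_options(opts):
    """Sanity-check a dict of createCollection-style options.

    Mirrors the rules described above: a capped collection needs a
    size in bytes, and max only applies to capped collections.
    """
    if opts.get("capped"):
        if "size" not in opts:
            return "capped collections require a size"
    elif "max" in opts:
        return "max is only valid for capped collections"
    return "ok"

print(check_collection_options(
    {"capped": True, "size": 6142800, "max": 10000}))  # ok
print(check_collection_options({"capped": True}))      # missing size
```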
View collections with show tables or show collections:
> show tables
mycol
> show collections
mycol
Insert data:
When inserting, the collection is created automatically if it does not exist.
> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
> db.Account.insert({AccountID:2,UserName:"aaa",password:"123456"})
WriteResult({ "nInserted" : 1 })
> show collections
Account
mycol
Update data:
> db.Account.update({AccountID:1},{"$set":{"Age":20}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
Query:
> db.Account.find()
{ "_id" : ObjectId("5b5d84c40a68baa2366411aa"), "UserName" : "123", "password" : "123456" }
{ "_id" : ObjectId("5b5d852b0a68baa2366411ab"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
{ "_id" : ObjectId("5b5d85390a68baa2366411ac"), "AccountID" : 2, "UserName" : "aaa", "password" : "123456" }
Query by condition:
> db.Account.find({AccountID:1})
{ "_id" : ObjectId("5b5d852b0a68baa2366411ab"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
Delete a document:
> db.Account.remove({AccountID:1})
WriteResult({ "nRemoved" : 1 })
> db.Account.find()
{ "_id" : ObjectId("5b5d84c40a68baa2366411aa"), "UserName" : "123", "password" : "123456" }
{ "_id" : ObjectId("5b5d85390a68baa2366411ac"), "AccountID" : 2, "UserName" : "aaa", "password" : "123456" }
Drop an entire collection:
> db.Account.drop()
true
> show tables
mycol
> db.mycol.drop()
true
> show tables
>
View collection stats:
> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
> db.printCollectionStats()
Account
{
        "ns" : "test.Account",
        "size" : 80,
        "count" : 1,
        "avgObjSize" : 80,
        "storageSize" : 4096,
        "capped" : false,
        "wiredTiger" : {
                "metadata" : { "formatVersion" : 1 },
...
VI. The PHP mongodb Extension
官方文档:https://docs.mongodb.com/ecosystem/drivers/php/
1. Download and install mongodb.so
Alternatively, install from the source package: http://pecl.php.net/package/mongodb
[root@node1 ~]# git clone https://github.com/mongodb/mongo-php-driver.git
[root@node1 ~]# cd mongo-php-driver/
[root@node1 mongo-php-driver]# git submodule update --init
[root@node1 mongo-php-driver]# /usr/local/php7/bin/phpize
Configuring for:
PHP Api Version: 20170718
Zend Module Api No: 20170718
Zend Extension Api No: 320170718
[root@node1 mongo-php-driver]# ./configure --with-php-config=/usr/local/php7/bin/php-config && make && make install
2. Configure mongodb.so in php.ini
[root@node1 ~]# vim /etc/php7/php.ini
extension=mongodb.so
VII. The PHP mongo Extension
Download the PHP mongo extension from: http://pecl.php.net/package/mongo
This extension does not support PHP 7.
The steps below use PHP 5.6.37.
Download and install the mongo extension module:
[root@node1 ~]# curl -O http://pecl.php.net/get/mongo-1.6.16.tgz
[root@node1 ~]# tar xf mongo-1.6.16.tgz
[root@node1 ~]# cd mongo-1.6.16
[root@node1 mongo-1.6.16]# /usr/local/php5/bin/phpize
Configuring for:
PHP Api Version: 20131106
Zend Module Api No: 20131226
Zend Extension Api No: 220131226
[root@node1 mongo-1.6.16]# ./configure --with-php-config=/usr/local/php5/bin/php-config && make && make install
...
Don't forget to run 'make test'.
Installing shared extensions: /usr/local/php5/lib/php/extensions/no-debug-non-zts-20131226/
[root@node1 mongo-1.6.16]#
2. Configure mongo.so in php.ini
# vim /usr/local/php5/etc/php.ini
extension=mongo.so
VIII. MongoDB Replica Set Overview
Early versions used master-slave replication, one master and one slave, similar to MySQL; but the slave in this architecture was read-only, and when the master went down the slave could not automatically take over as master.
Master-slave has since been retired in favor of replica sets. In this mode there is one primary and multiple read-only secondaries. Members can be assigned priorities (weights); when the primary goes down, the secondary with the highest priority is promoted to primary.
A replica set can also include an arbiter, a member that only votes in elections and stores no data.
In this architecture all reads and writes go to the primary by default; to spread the read load you must explicitly direct clients to the secondary servers.
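The failover rule described above, "the healthy member with the highest priority wins", can be sketched in a few lines. The member data here is made up for illustration:

```python
def elect_primary(members):
    """Pick the new primary: the healthy, data-bearing member with the
    highest priority. Arbiters vote but never become primary."""
    candidates = [m for m in members
                  if m["health"] == 1 and not m.get("arbiter")]
    if not candidates:
        return None
    return max(candidates, key=lambda m: m["priority"])["name"]

members = [
    {"name": "node1", "priority": 3, "health": 0},  # old primary, down
    {"name": "node2", "priority": 2, "health": 1},
    {"name": "node3", "priority": 1, "health": 1},
    {"name": "arb",   "priority": 0, "health": 1, "arbiter": True},
]
print(elect_primary(members))  # node2
```

(Real elections also weigh oplog recency and require a voting majority; this sketch only captures the priority rule.)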
IX. Building a MongoDB Replica Set
Three machines:
node1: 192.168.10.205 (primary)
node2: 192.168.10.206 (secondary)
node3: 192.168.10.200 (secondary)
Flush the firewall rules on all machines: iptables -F
Temporarily disable SELinux: setenforce 0
1. Install MongoDB on node1, node2, and node3
MongoDB repo:
# vim /etc/yum.repos.d/mongodb.repo
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
Install MongoDB with yum:
# yum install mongodb-org -y
2. Edit the configuration files
Add the following to /etc/mongod.conf on node1, node2, and node3:
replication:
  oplogSizeMB: 20
  replSetName: repl
(Mind the YAML indentation: the last two lines start with two spaces, and there is one space after each colon.)
Also set the bind IP on each node:
node1: bindIp: 127.0.0.1,192.168.10.205
node2: bindIp: 127.0.0.1,192.168.10.206
node3: bindIp: 127.0.0.1,192.168.10.200
Take node1 as an example:
[root@node1 ~]# vim /etc/mongod.conf
replication:
  oplogSizeMB: 20
  replSetName: repl
Then restart mongodb:
[root@node1 ~]# systemctl restart mongod
Modify node2 and node3 the same way (steps omitted).
3. Connect to the primary
Run mongo on the intended primary:
[root@node1 ~]# mongo
> use admin
switched to db admin
> config={_id:"repl",members:[{_id:0,host:"192.168.10.205:27017"},{_id:1,host:"192.168.10.206:27017"},{_id:2,host:"192.168.10.200:27017"}]}
{
        "_id" : "repl",
        "members" : [
                { "_id" : 0, "host" : "192.168.10.205:27017" },
                { "_id" : 1, "host" : "192.168.10.206:27017" },
                { "_id" : 2, "host" : "192.168.10.200:27017" }
        ]
}
Initialize:
> rs.initiate(config)
{
        "ok" : 1,
        "operationTime" : Timestamp(1532870458, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1532870458, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
repl:SECONDARY>
Check the status:
repl:SECONDARY> rs.status() { "set" : "repl", "date" : ISODate("2018-07-29T13:21:19.644Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) } }, "lastStableCheckpointTimestamp" : Timestamp(1532870470, 1), "members" : [ { "_id" : 0, "name" : "192.168.10.205:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 74, "optime" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-07-29T13:21:11Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1532870469, 1), "electionDate" : ISODate("2018-07-29T13:21:09Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "192.168.10.206:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-07-29T13:21:11Z"), "optimeDurableDate" : ISODate("2018-07-29T13:21:11Z"), "lastHeartbeat" : ISODate("2018-07-29T13:21:19.120Z"), "lastHeartbeatRecv" : ISODate("2018-07-29T13:21:17.833Z"), "pingMs" : NumberLong(4), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.10.205:27017", "syncSourceHost" : "192.168.10.205:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.10.200:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "ts" : Timestamp(1532870471, 1), "t" : 
NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1532870471, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-07-29T13:21:11Z"), "optimeDurableDate" : ISODate("2018-07-29T13:21:11Z"), "lastHeartbeat" : ISODate("2018-07-29T13:21:19.119Z"), "lastHeartbeatRecv" : ISODate("2018-07-29T13:21:17.826Z"), "pingMs" : NumberLong(4), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.10.205:27017", "syncSourceHost" : "192.168.10.205:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 } ], "ok" : 1, "operationTime" : Timestamp(1532870471, 1), "$clusterTime" : { "clusterTime" : Timestamp(1532870471, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } repl:PRIMARY>
The status output shows one primary and two secondaries.
X. Testing the Replica Set
1. Create a database
On the primary, create a database and a collection:
repl:PRIMARY> use mydb
switched to db mydb
repl:PRIMARY> db.acc.insert({AccountID:1,UserName:"zhangsan",password:"123456"})
WriteResult({ "nInserted" : 1 })
repl:PRIMARY> db.acc.insert({AccountID:2,UserName:"lisi",password:"123456"})
WriteResult({ "nInserted" : 1 })
repl:PRIMARY> db.acc.insert({AccountID:3,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
View the databases:
repl:PRIMARY> show tables
acc
repl:PRIMARY> show dbs;
admin 0.000GB
config 0.000GB
local 0.000GB
mydb 0.000GB
Check on a secondary:
[root@node2 ~]# mongo
repl:SECONDARY> show dbs
2018-07-29T21:30:36.503+0800 E QUERY [js] Error: listDatabases failed:{ "operationTime" : Timestamp(1532871030, 1), "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435, "codeName" : "NotMasterNoSlaveOk", "$clusterTime" : { "clusterTime" : Timestamp(1532871030, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:865:19
shellHelper@src/mongo/shell/utils.js:755:15
@(shellhelp2):1:1
repl:SECONDARY>
The command fails. Fix: run rs.slaveOk()
repl:SECONDARY> rs.slaveOk()
Check again:
repl:SECONDARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
mydb 0.000GB
The mydb database created on the primary is visible on the secondary.
2. Setting priorities
View the priorities: run rs.config(); the priority field is the weight.
The default priority is 1.
Simulate node1 going down:
[root@node1 ~]# iptables -I INPUT -p tcp --dport 27017 -j DROP
node1 was the primary; check on node2:
repl:PRIMARY> rs.status()
...
"name" : "192.168.10.205:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
...
"_id" : 1,
"name" : "192.168.10.206:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1163,
....
node2 is now the primary. After flushing node1's firewall rules, node1 rejoins but does not become primary again, because all three machines share the same priority of 1.
Change the priorities:
cfg=rs.conf()
cfg.members[0].priority=3
cfg.members[1].priority=2
cfg.members[2].priority=1
rs.reconfig(cfg)
Run the commands above on the primary:
repl:PRIMARY> cfg=rs.conf()
repl:PRIMARY> cfg.members[0].priority=3
3
repl:PRIMARY> cfg.members[1].priority=2
2
repl:PRIMARY> cfg.members[2].priority=1
1
repl:PRIMARY> rs.reconfig(cfg)
After this change, node1 becomes the primary again.
XI. MongoDB Sharding Overview
Sharding splits a database by distributing large collections across multiple servers. For example, 100 GB of data can be split into 10 pieces stored on 10 servers, so each server holds 10 GB.
A mongos process (the router) handles storage of and access to the sharded data, making mongos the core of the sharding architecture. Clients do not need to know whether sharding is in use; they simply send their reads and writes to mongos.
Although sharding spreads data across many servers, every node still needs a standby role so the data remains highly available.
When the system needs more space or resources, sharding lets us scale out on demand: just add more MongoDB servers to the sharded cluster.
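The even split described above can be illustrated with a toy hashed shard key. This is only the idea, not MongoDB's actual algorithm (real clusters split key ranges into chunks and balance them between shards):

```python
import hashlib

def shard_for(key, n_shards=3):
    """Toy hashed sharding: hash the shard key and take it modulo the
    number of shards, so documents spread roughly evenly."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_shards

# Distribute 9000 hypothetical AccountID values over 3 shards.
counts = [0, 0, 0]
for account_id in range(9000):
    counts[shard_for(account_id)] += 1

print(counts)  # roughly 3000 documents per shard
```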
Key concepts:
mongos: the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application needs no routing layer of its own; mongos is a request dispatch center that forwards each data request to the appropriate shard server. In production there are usually multiple mongos instances serving as entry points, so the cluster survives if one of them goes down.
config server: stores the configuration for all of the cluster's metadata (routing and shard information). mongos does not persist the shard and routing information itself; it only caches it in memory, while the config servers hold the actual data. mongos loads the configuration from the config servers at first startup or after a restart, and whenever the configuration changes the config servers notify all mongos instances to update their state, so mongos can keep routing accurately. In production there are multiple config servers, because they store the sharding metadata and must not lose it.
shard: a MongoDB instance that stores a portion of a collection's data. Each shard is a standalone mongod or a replica set; in production, every shard should be a replica set.
XII. Building a MongoDB Sharded Cluster
Three machines: node1, node2, node3
node1: mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary
node2: mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter
node3: mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary
Port allocation: mongos 20000, config server 21000, replica set 1: 27001, replica set 2: 27002, replica set 3: 27003
Flush the firewall rules and disable SELinux on all three machines.
1. Create the following directories on all three machines:
mkdir /data/mongodb/mongos/log -p
mkdir /data/mongodb/config/{data,log} -p
mkdir /data/mongodb/shard1/{data,log} -p
mkdir /data/mongodb/shard2/{data,log} -p
mkdir /data/mongodb/shard3/{data,log} -p
Executed as follows:
[root@node1 ~]# mkdir /data/mongodb/mongos/log -p
[root@node1 ~]# mkdir /data/mongodb/config/{data,log} -p
[root@node1 ~]# mkdir /data/mongodb/shard1/{data,log} -p
[root@node1 ~]# mkdir /data/mongodb/shard2/{data,log} -p
[root@node1 ~]# mkdir /data/mongodb/shard3/{data,log} -p
2. config server setup
(1) Create the mongodb config file on all three machines
Since MongoDB 3.4, the config servers must be deployed as a replica set.
node1:
[root@node1 ~]# mkdir /etc/mongod
[root@node1 ~]# vim /etc/mongod/config.conf
pidfilepath=/var/run/mongodb/configsrv.pid
dbpath=/data/mongodb/config/data
logpath=/data/mongodb/config/log/configsrv.log
logappend=true
bind_ip=192.168.10.205
port=21000
fork=true
configsvr=true #declare this is a config db of a cluster
replSet=configs #replica set name
maxConns=20000 #maximum number of connections
node2:
[root@node2 ~]# mkdir /etc/mongod
[root@node2 ~]# vim /etc/mongod/config.conf
pidfilepath=/var/run/mongodb/configsrv.pid
dbpath=/data/mongodb/config/data
logpath=/data/mongodb/config/log/configsrv.log
logappend=true
bind_ip=192.168.10.206
port=21000
fork=true
configsvr=true #declare this is a config db of a cluster
replSet=configs #replica set name
maxConns=20000 #maximum number of connections
node3:
[root@node3 ~]# mkdir /etc/mongod
[root@node3 ~]# vim /etc/mongod/config.conf
pidfilepath=/var/run/mongodb/configsrv.pid
dbpath=/data/mongodb/config/data
logpath=/data/mongodb/config/log/configsrv.log
logappend=true
bind_ip=192.168.10.200
port=21000
fork=true
configsvr=true #declare this is a config db of a cluster
replSet=configs #replica set name
maxConns=20000 #maximum number of connections
(2) Start the config servers
Run on all three machines: mongod -f /etc/mongod/config.conf
Example output:
[root@node1 ~]# mongod -f /etc/mongod/config.conf
2018-07-31T18:58:26.611+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 2219
child process started successfully, parent exiting
(3) Log in to port 21000 on any machine and initialize the replica set
Here we log in on node1:
[root@node1 ~]# mongo --host 192.168.10.205 --port 21000
> config={_id:"configs",members:[{_id:0,host:"192.168.10.205:21000"},{_id:1,host:"192.168.10.206:21000"},{_id:2,host:"192.168.10.200:21000"}]}
{
        "_id" : "configs",
        "members" : [
                { "_id" : 0, "host" : "192.168.10.205:21000" },
                { "_id" : 1, "host" : "192.168.10.206:21000" },
                { "_id" : 2, "host" : "192.168.10.200:21000" }
        ]
}
> rs.initiate(config)
{
        "ok" : 1,
        "operationTime" : Timestamp(1533035155, 1),
        "$gleStats" : {
                "lastOpTime" : Timestamp(1533035155, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "lastCommittedOpTime" : Timestamp(0, 0),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1533035155, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
configs:SECONDARY>
The "ok" : 1 in the output indicates the initialization succeeded.
Check the status:
configs:SECONDARY> rs.status() { "set" : "configs", "date" : ISODate("2018-07-31T11:07:27.747Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "configsvr" : true, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) } }, "lastStableCheckpointTimestamp" : Timestamp(1533035228, 1), "members" : [ { "_id" : 0, "name" : "192.168.10.205:21000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 541, "optime" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-07-31T11:07:18Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1533035166, 1), "electionDate" : ISODate("2018-07-31T11:06:06Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "192.168.10.206:21000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 92, "optime" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-07-31T11:07:18Z"), "optimeDurableDate" : ISODate("2018-07-31T11:07:18Z"), "lastHeartbeat" : ISODate("2018-07-31T11:07:26.515Z"), "lastHeartbeatRecv" : ISODate("2018-07-31T11:07:27.017Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.10.205:21000", "syncSourceHost" : "192.168.10.205:21000", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.10.200:21000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 92, "optime" : { "ts" : 
Timestamp(1533035238, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1533035238, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-07-31T11:07:18Z"), "optimeDurableDate" : ISODate("2018-07-31T11:07:18Z"), "lastHeartbeat" : ISODate("2018-07-31T11:07:26.505Z"), "lastHeartbeatRecv" : ISODate("2018-07-31T11:07:27.014Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.10.205:21000", "syncSourceHost" : "192.168.10.205:21000", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 } ], "ok" : 1, "operationTime" : Timestamp(1533035238, 1), "$gleStats" : { "lastOpTime" : Timestamp(1533035155, 1), "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1533035238, 1), "$clusterTime" : { "clusterTime" : Timestamp(1533035238, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } configs:PRIMARY>
OK, the config server setup is complete.
3. Shard setup
(1) Add the config files (on all three machines)
shard1 config:
node1:
[root@node1 ~]# vim /etc/mongod/shard1.conf
pidfilepath=/var/run/mongodb/shard1.pid
dbpath=/data/mongodb/shard1/data
logpath=/data/mongodb/shard1/log/shard1.log
logappend=true
bind_ip=192.168.10.205
port=27001
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard1 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
node2:
[root@node2 ~]# vim /etc/mongod/shard1.conf
pidfilepath=/var/run/mongodb/shard1.pid
dbpath=/data/mongodb/shard1/data
logpath=/data/mongodb/shard1/log/shard1.log
logappend=true
bind_ip=192.168.10.206
port=27001
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard1 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
node3:
[root@node3 ~]# vim /etc/mongod/shard1.conf
pidfilepath=/var/run/mongodb/shard1.pid
dbpath=/data/mongodb/shard1/data
logpath=/data/mongodb/shard1/log/shard1.log
logappend=true
bind_ip=192.168.10.200
port=27001
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard1 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
shard2 config:
node1:
[root@node1 ~]# vim /etc/mongod/shard2.conf
pidfilepath=/var/run/mongodb/shard2.pid
dbpath=/data/mongodb/shard2/data
logpath=/data/mongodb/shard2/log/shard2.log
logappend=true
bind_ip=192.168.10.205
port=27002
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard2 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
node2:
[root@node2 ~]# vim /etc/mongod/shard2.conf
pidfilepath=/var/run/mongodb/shard2.pid
dbpath=/data/mongodb/shard2/data
logpath=/data/mongodb/shard2/log/shard2.log
logappend=true
bind_ip=192.168.10.206
port=27002
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard2 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
node3:
[root@node3 ~]# vim /etc/mongod/shard2.conf
pidfilepath=/var/run/mongodb/shard2.pid
dbpath=/data/mongodb/shard2/data
logpath=/data/mongodb/shard2/log/shard2.log
logappend=true
bind_ip=192.168.10.200
port=27002
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard2 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
shard3 config:
node1:
[root@node1 ~]# vim /etc/mongod/shard3.conf
pidfilepath=/var/run/mongodb/shard3.pid
dbpath=/data/mongodb/shard3/data
logpath=/data/mongodb/shard3/log/shard3.log
logappend=true
bind_ip=192.168.10.205
port=27003
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard3 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
node2:
[root@node2 ~]# vim /etc/mongod/shard3.conf
pidfilepath=/var/run/mongodb/shard3.pid
dbpath=/data/mongodb/shard3/data
logpath=/data/mongodb/shard3/log/shard3.log
logappend=true
bind_ip=192.168.10.206
port=27003
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard3 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
node3:
[root@node3 ~]# vim /etc/mongod/shard3.conf
pidfilepath=/var/run/mongodb/shard3.pid
dbpath=/data/mongodb/shard3/data
logpath=/data/mongodb/shard3/log/shard3.log
logappend=true
bind_ip=192.168.10.200
port=27003
fork=true
#httpinterface=true #enable the web monitor
#rest=true
replSet=shard3 #replica set name
shardsvr=true #declare this is a shard db of a cluster
maxConns=20000
(2) Create the shard1, shard2, and shard3 replica sets
· Start shard1, shard2, and shard3 on all three machines
node1:
[root@node1 ~]# mongod -f /etc/mongod/shard1.conf
[root@node1 ~]# mongod -f /etc/mongod/shard2.conf
[root@node1 ~]# mongod -f /etc/mongod/shard3.conf
node2:
[root@node2 ~]# mongod -f /etc/mongod/shard1.conf
[root@node2 ~]# mongod -f /etc/mongod/shard2.conf
[root@node2 ~]# mongod -f /etc/mongod/shard3.conf
node3:
[root@node3 ~]# mongod -f /etc/mongod/shard1.conf
[root@node3 ~]# mongod -f /etc/mongod/shard2.conf
[root@node3 ~]# mongod -f /etc/mongod/shard3.conf
· The replica set config documents for the three shards:
config={_id:"shard1",members:[{_id:0,host:"192.168.10.205:27001",arbiterOnly:true},{_id:1,host:"192.168.10.206:27001"},{_id:2,host:"192.168.10.200:27001"}]}
config={_id:"shard2",members:[{_id:0,host:"192.168.10.205:27002"},{_id:1,host:"192.168.10.206:27002",arbiterOnly:true},{_id:2,host:"192.168.10.200:27002"}]}
config={_id:"shard3",members:[{_id:0,host:"192.168.10.205:27003"},{_id:1,host:"192.168.10.206:27003"},{_id:2,host:"192.168.10.200:27003",arbiterOnly:true}]}
· Create the shard1 replica set. node1 is shard1's arbiter, so do not initialize on node1; log in on node2 or node3:
[root@node2 ~]# mongo --host 192.168.10.206 --port 27001
> use admin
switched to db admin
> config={_id:"shard1",members:[{_id:0,host:"192.168.10.205:27001",arbiterOnly:true},{_id:1,host:"192.168.10.206:27001"},{_id:2,host:"192.168.10.200:27001"}]}
{
        "_id" : "shard1",
        "members" : [
                { "_id" : 0, "host" : "192.168.10.205:27001", "arbiterOnly" : true },
                { "_id" : 1, "host" : "192.168.10.206:27001" },
                { "_id" : 2, "host" : "192.168.10.200:27001" }
        ]
}
> rs.initiate(config)
{
        "ok" : 1,
        "operationTime" : Timestamp(1533039117, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1533039117, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
shard1:SECONDARY>
· Create the shard2 replica set
node2 is shard2's arbiter, so do not initialize on node2; log in on node1 or node3:
[root@node1 ~]# mongo --host 192.168.10.205 --port 27002
> use admin
switched to db admin
> config={_id:"shard2",members:[{_id:0,host:"192.168.10.205:27002"},{_id:1,host:"192.168.10.206:27002",arbiterOnly:true},{_id:2,host:"192.168.10.200:27002"}]}
{
        "_id" : "shard2",
        "members" : [
                { "_id" : 0, "host" : "192.168.10.205:27002" },
                { "_id" : 1, "host" : "192.168.10.206:27002", "arbiterOnly" : true },
                { "_id" : 2, "host" : "192.168.10.200:27002" }
        ]
}
> rs.initiate(config)
{
        "ok" : 1,
        "operationTime" : Timestamp(1533039345, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1533039345, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
shard2:SECONDARY>
· Create the shard3 replica set
node3 is shard3's arbiter, so do not initialize on node3; log in on node1 or node2:
[root@node1 ~]# mongo --host 192.168.10.206 --port 27003
> use admin
switched to db admin
> config={_id:"shard3",members:[{_id:0,host:"192.168.10.205:27003"},{_id:1,host:"192.168.10.206:27003"},{_id:2,host:"192.168.10.200:27003",arbiterOnly:true}]}
{
        "_id" : "shard3",
        "members" : [
                { "_id" : 0, "host" : "192.168.10.205:27003" },
                { "_id" : 1, "host" : "192.168.10.206:27003" },
                { "_id" : 2, "host" : "192.168.10.200:27003", "arbiterOnly" : true }
        ]
}
> rs.initiate(config)
{
        "ok" : 1,
        "operationTime" : Timestamp(1533039405, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1533039405, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
shard3:OTHER>
OK. The shard1, shard2, and shard3 replica sets are now created.
4. mongos (router) setup
(1) Configure all three machines
node1:
[root@node1 ~]# vim /etc/mongod/mongos.conf
pidfilepath=/var/run/mongodb/mongos.pid
logpath=/data/mongodb/mongos/log/mongos.log
logappend=true
bind_ip=0.0.0.0
port=20000
fork=true
configdb=configs/192.168.10.205:21000,192.168.10.206:21000,192.168.10.200:21000 #the config servers to use; only 1 or 3 are allowed, and configs is the config server replica set name
maxConns=20000
node2:
[root@node2 ~]# vim /etc/mongod/mongos.conf
pidfilepath=/var/run/mongodb/mongos.pid
logpath=/data/mongodb/mongos/log/mongos.log
logappend=true
bind_ip=0.0.0.0
port=20000
fork=true
configdb=configs/192.168.10.205:21000,192.168.10.206:21000,192.168.10.200:21000 #the config servers to use; only 1 or 3 are allowed, and configs is the config server replica set name
maxConns=20000
node3:
[root@node3 ~]# vim /etc/mongod/mongos.conf
pidfilepath=/var/run/mongodb/mongos.pid
logpath=/data/mongodb/mongos/log/mongos.log
logappend=true
bind_ip=0.0.0.0
port=20000
fork=true
configdb=configs/192.168.10.205:21000,192.168.10.206:21000,192.168.10.200:21000 #the config servers to use; only 1 or 3 are allowed, and configs is the config server replica set name
maxConns=20000
(2) Start mongos
Start mongos on all three nodes.
node1:
[root@node1 ~]# mongos -f /etc/mongod/mongos.conf 2018-07-31T20:35:09.384+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' about to fork child process, waiting until server is ready for connections. forked process: 6992 child process started successfully, parent exiting [root@node1 ~]#
node2:
[root@node2 ~]# mongos -f /etc/mongod/mongos.conf 2018-07-31T20:36:35.153+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' about to fork child process, waiting until server is ready for connections. forked process: 2130 child process started successfully, parent exiting [root@node2 ~]#
node3:
[root@node3 ~]# mongos -f /etc/mongod/mongos.conf 2018-07-31T20:36:34.240+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' about to fork child process, waiting until server is ready for connections. forked process: 3101 child process started successfully, parent exiting
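Note that starting mongos by hand like this means it will not come back after a reboot, unlike mongod, which is managed by systemctl here. One way to close that gap is a small systemd unit. The unit below is only a sketch and not part of the original setup; the paths come from the mongos.conf above:

```ini
# /etc/systemd/system/mongos.service (hypothetical unit)
[Unit]
Description=MongoDB shard router
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/mongos -f /etc/mongod/mongos.conf
PIDFile=/var/run/mongodb/mongos.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, `systemctl enable --now mongos` would start the router on every boot.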
4. Enable sharding
1. Log in on port 20000 of any node; here we use node1:
[root@node1 ~]# mongo --port 20000 MongoDB shell version v4.0.0 connecting to: mongodb://127.0.0.1:20000/ MongoDB server version: 4.0.0 Server has startup warnings: 2018-07-31T20:35:09.395+0800 I CONTROL [main] 2018-07-31T20:35:09.395+0800 I CONTROL [main] ** WARNING: Access control is not enabled for the database. 2018-07-31T20:35:09.395+0800 I CONTROL [main] ** Read and write access to data and configuration is unrestricted. 2018-07-31T20:35:09.395+0800 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended. 2018-07-31T20:35:09.395+0800 I CONTROL [main] mongos>
2. Link all the shards to the router
The commands to register the shards with mongos are as follows:
sh.addShard("shard1/192.168.10.205:27001,192.168.10.206:27001,192.168.10.200:27001")
sh.addShard("shard2/192.168.10.205:27002,192.168.10.206:27002,192.168.10.200:27002")
sh.addShard("shard3/192.168.10.205:27003,192.168.10.206:27003,192.168.10.200:27003")
Log in to node1 on port 20000 and run:
mongos> sh.addShard("shard1/192.168.10.205:27001,192.168.10.206:27001,192.168.10.200:27001") { "shardAdded" : "shard1", "ok" : 1, "operationTime" : Timestamp(1533040904, 6), "$clusterTime" : { "clusterTime" : Timestamp(1533040904, 6), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> sh.addShard("shard2/192.168.10.205:27002,192.168.10.206:27002,192.168.10.200:27002") { "shardAdded" : "shard2", "ok" : 1, "operationTime" : Timestamp(1533040916, 2), "$clusterTime" : { "clusterTime" : Timestamp(1533040916, 2), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> sh.addShard("shard3/192.168.10.205:27003,192.168.10.206:27003,192.168.10.200:27003") { "shardAdded" : "shard3", "ok" : 1, "operationTime" : Timestamp(1533040940, 6), "$clusterTime" : { "clusterTime" : Timestamp(1533040940, 6), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos>
Check the status:
mongos> sh.status() --- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5b60429f04da904f3ad80750") } shards: { "_id" : "shard1", "host" : "shard1/192.168.10.200:27001,192.168.10.206:27001", "state" : 1 } { "_id" : "shard2", "host" : "shard2/192.168.10.200:27002,192.168.10.205:27002", "state" : 1 } { "_id" : "shard3", "host" : "shard3/192.168.10.205:27003,192.168.10.206:27003", "state" : 1 } active mongoses: "4.0.0" : 3 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases: { "_id" : "config", "primary" : "config", "partitioned" : true } mongos>
OK. At this point the MongoDB sharded cluster has been set up successfully.
The mongo-related processes on the three nodes:
[root@node1 ~]# ps aux | grep mongo mongod 1560 1.8 9.6 1443544 96624 ? Sl 18:42 2:17 /usr/bin/mongod -f /etc/mongod.conf root 2219 1.9 9.6 1571168 96548 ? Sl 18:58 2:01 mongod -f /etc/mongod/config.conf root 5511 1.2 8.0 1498384 80452 ? Sl 19:57 0:33 mongod -f /etc/mongod/shard2.conf root 5609 1.2 6.1 1499360 61732 ? Sl 19:58 0:33 mongod -f /etc/mongod/shard3.conf root 6606 1.2 4.9 1125368 48984 ? Sl 20:25 0:12 mongod -f /etc/mongod/shard1.conf root 6992 0.5 1.6 260568 16840 ? Sl 20:35 0:02 mongos -f /etc/mongod/mongos.conf root 7140 0.0 2.1 784764 21756 pts/0 Sl+ 20:38 0:00 mongo --port 20000 root 7247 0.0 0.0 112704 968 pts/1 S+ 20:43 0:00 grep --color=auto mongo [root@node1 ~]# ssh 192.168.10.206 " ps aux | grep mongo" root@192.168.10.206's password: mongod 1390 2.1 9.7 1463164 97304 ? Sl 18:41 2:38 /usr/bin/mongod -f /etc/mongod.conf root 1519 1.8 10.0 1445928 99804 ? Sl 18:58 2:00 mongod -f /etc/mongod/config.conf root 1878 1.2 9.4 1502780 94164 ? Sl 20:02 0:31 mongod -f /etc/mongod/shard1.conf root 1909 0.9 7.0 1137636 70512 ? Sl 20:03 0:23 mongod -f /etc/mongod/shard2.conf root 1940 1.1 9.5 1434464 95128 ? Sl 20:03 0:29 mongod -f /etc/mongod/shard3.conf root 2130 0.4 1.6 260584 16200 ? Sl 20:36 0:02 mongos -f /etc/mongod/mongos.conf root 2242 1.0 0.1 113172 1568 ? Ss 20:45 0:00 bash -c ps aux | grep mongo root 2248 0.0 0.0 112704 944 ? S 20:45 0:00 grep mongo [root@node1 ~]# ssh 192.168.10.200 " ps aux | grep mongo" root@192.168.10.200's password: mongod 1502 2.1 11.1 1386148 110784 ? Sl 18:41 2:42 /usr/bin/mongod -f /etc/mongod.conf root 1937 2.0 10.2 1446152 102416 ? Sl 18:58 2:10 mongod -f /etc/mongod/config.conf root 2737 0.9 5.9 1136712 59688 ? Sl 20:04 0:23 mongod -f /etc/mongod/shard3.conf root 2801 1.3 7.4 1442580 74772 ? Sl 20:04 0:33 mongod -f /etc/mongod/shard2.conf root 2903 1.3 9.7 1444408 97752 ? Sl 20:04 0:32 mongod -f /etc/mongod/shard1.conf root 3101 0.4 1.7 260592 17496 ? 
Sl 20:36 0:02 mongos -f /etc/mongod/mongos.conf root 3263 0.0 0.1 113172 1556 ? Ss 20:45 0:00 bash -c ps aux | grep mongo root 3269 0.0 0.0 112704 932 ? S 20:45 0:00 grep mongo [root@node1 ~]#
13. Testing MongoDB Sharding
The sharded cluster was built in the previous section; now let's verify that it works.
Log in on port 20000 of any node:
[root@node2 ~]# mongo --port 20000 MongoDB shell version v4.0.0 connecting to: mongodb://127.0.0.1:20000/ MongoDB server version: 4.0.0 Server has startup warnings: 2018-07-31T20:36:35.176+0800 I CONTROL [main] 2018-07-31T20:36:35.176+0800 I CONTROL [main] ** WARNING: Access control is not enabled for the database. 2018-07-31T20:36:35.176+0800 I CONTROL [main] ** Read and write access to data and configuration is unrestricted. 2018-07-31T20:36:35.176+0800 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended. 2018-07-31T20:36:35.176+0800 I CONTROL [main] mongos>
Using the admin database, specify the database to shard:
db.runCommand({enablesharding:"testdb"})
or:
sh.enableSharding("testdb")
Specify the collection to shard within that database, and its shard key:
db.runCommand({shardcollection:"testdb.table1",key:{id:1}})
or:
sh.shardCollection("testdb.table1",{"id":1})
The operations look like this:
mongos> db.runCommand({enablesharding:"testdb"}) { "ok" : 1, "operationTime" : Timestamp(1533041774, 5), "$clusterTime" : { "clusterTime" : Timestamp(1533041774, 5), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> db.runCommand({shardcollection:"testdb.table1",key:{id:1}}) { "collectionsharded" : "testdb.table1", "collectionUUID" : UUID("fe557099-6b25-4982-9c14-b4346908d010"), "ok" : 1, "operationTime" : Timestamp(1533041843, 15), "$clusterTime" : { "clusterTime" : Timestamp(1533041843, 15), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos>
Switch to the testdb database and insert some test data:
mongos> use testdb switched to db testdb mongos> for(var i=1;i<=10000;i++) db.table1.save({id:i,"test1":"testval1"}) WriteResult({ "nInserted" : 1 }) mongos>
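The loop above issues 10000 separate writes, and `save()` is deprecated in newer shells. A batched alternative is sketched below; `buildDocs` is a helper name of ours, and in the mongo shell you would follow it with `db.table1.insertMany(buildDocs(10000))`:

```javascript
// Build the same 10000 test documents in memory, then insert them in one
// batch instead of one round trip per document.
function buildDocs(n) {
  var docs = [];
  for (var i = 1; i <= n; i++) {
    docs.push({ id: i, test1: "testval1" });
  }
  return docs;
}
```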
The insert succeeded; take a look at the databases:
mongos> show dbs admin 0.000GB config 0.001GB testdb 0.000GB mongos> sh.status() ... databases: { "_id" : "config", "primary" : "config", "partitioned" : true } config.system.sessions shard key: { "_id" : 1 } unique: false balancing: true chunks: shard1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "testdb", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("70f6e1c0-8943-44de-b676-d7ccfa328263"), "lastMod" : 1 } } testdb.table1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard3 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0)
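Note that in the output above all of testdb.table1 still sits in a single chunk on shard3: 10000 tiny documents are far below the default 64 MB chunk size, so nothing has been split or balanced yet. As data grows, you can watch how chunks spread by counting them per shard from the `config.chunks` collection. A small sketch (the function name is ours):

```javascript
// Count chunks per shard from documents shaped like those in config.chunks.
// In the mongo shell:
//   chunksPerShard(db.getSiblingDB("config").chunks.find().toArray())
function chunksPerShard(chunks) {
  var counts = {};
  chunks.forEach(function (c) {
    counts[c.shard] = (counts[c.shard] || 0) + 1;
  });
  return counts;
}
```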
You can create more databases:
mongos> sh.enableSharding("db2") { "ok" : 1, "operationTime" : Timestamp(1533042209, 5), "$clusterTime" : { "clusterTime" : Timestamp(1533042209, 5), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> mongos> sh.shardCollection("db2.c1",{"id":1}) { "collectionsharded" : "db2.c1", "collectionUUID" : UUID("17a79683-c9b9-4683-8b8d-7857d3b24c28"), "ok" : 1, "operationTime" : Timestamp(1533042272, 6), "$clusterTime" : { "clusterTime" : Timestamp(1533042272, 6), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos>
Check the database status again:
mongos> sh.status() ... { "_id" : "db2", "primary" : "shard2", "partitioned" : true, "version" : { "uuid" : UUID("dcadba30-dcc2-45b2-b17f-e271f9fcd582"), "lastMod" : 1 } } db2.c1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard2 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) { "_id" : "testdb", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("70f6e1c0-8943-44de-b676-d7ccfa328263"), "lastMod" : 1 } } testdb.table1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard3 1 ....
From the output above, testdb lives on shard3 and db2 on shard2.
Now create one more database, db3:
mongos> sh.enableSharding("db3") { "ok" : 1, "operationTime" : Timestamp(1533042486, 4), "$clusterTime" : { "clusterTime" : Timestamp(1533042486, 4), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> sh.shardCollection("db3.c3",{"id":1}) { "collectionsharded" : "db3.c3", "collectionUUID" : UUID("2d80c1ed-2b3e-4183-8a97-fc87a1fc694f"), "ok" : 1, "operationTime" : Timestamp(1533042500, 7), "$clusterTime" : { "clusterTime" : Timestamp(1533042500, 7), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos>
Check the status once more:
mongos> sh.status() ... { "_id" : "db2", "primary" : "shard2", "partitioned" : true, "version" : { "uuid" : UUID("dcadba30-dcc2-45b2-b17f-e271f9fcd582"), "lastMod" : 1 } } db2.c1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard2 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) { "_id" : "db3", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("a040af34-0ffe-44fd-95ea-bfe362c7f83c"), "lastMod" : 1 } } db3.c3 shard key: { "id" : 1 } unique: false balancing: true chunks: shard1 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "testdb", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("70f6e1c0-8943-44de-b676-d7ccfa328263"), "lastMod" : 1 } } testdb.table1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard3 1
You can see that testdb is on shard3, db2 on shard2, and db3 on shard1, so the databases are spread fairly evenly across the shards.
To inspect table1 itself, run db.table1.stats() (after `use testdb`).
14. MongoDB Backup and Restore
1. Back up a specific database
[root@node1 ~]# mongodump -h 127.0.0.1 --port 20000 -d testdb -o /tmp/testdbbak 2018-07-31T21:12:06.335+0800 writing testdb.table1 to 2018-07-31T21:12:06.449+0800 done dumping testdb.table1 (10000 documents) [root@node1 ~]#
Options explained:
-h, --host: the host to connect to
--port: the port to connect to (here, the mongos port)
-d: the database to back up
-o: the output directory
2. Back up all databases
[root@node1 ~]# mongodump --host 127.0.0.1 --port 20000 -o /tmp/allmongodb 2018-07-31T21:14:53.174+0800 writing admin.system.version to 2018-07-31T21:14:53.181+0800 done dumping admin.system.version (1 document) 2018-07-31T21:14:53.183+0800 writing testdb.table1 to ... 2018-07-31T21:14:53.222+0800 done dumping db3.c3 (0 documents) 2018-07-31T21:14:53.282+0800 done dumping testdb.table1 (10000 documents) [root@node1 ~]#
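In practice a full backup like this is usually wrapped in a small script that writes to a date-stamped directory and prunes old dumps. The sketch below is not part of the original setup: the paths and the 7-day retention are assumptions, and the mongodump call is skipped when the binary is absent so the script can be dry-run anywhere:

```shell
BACKUP_ROOT="${BACKUP_ROOT:-/tmp/mongobackup}"
RETENTION_DAYS=7
DEST="$BACKUP_ROOT/$(date +%Y%m%d)"
mkdir -p "$DEST"

# Dump everything through mongos (port 20000, as above); skip when
# mongodump is not installed so the script can be tested standalone.
if command -v mongodump >/dev/null 2>&1; then
    mongodump --host 127.0.0.1 --port 20000 -o "$DEST"
fi

# Remove dump directories older than the retention window.
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d \
    -mtime +"$RETENTION_DAYS" -exec rm -rf {} +

echo "backup written to $DEST"
```

Run daily from cron, this keeps one dated dump per day and deletes anything older than a week.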
3. Back up a specific collection
[root@node1 ~]# mongodump --host 127.0.0.1 --port 20000 -d db2 -c c1 -o /tmp/db2_c1 2018-07-31T21:16:37.170+0800 writing db2.c1 to 2018-07-31T21:16:37.176+0800 done dumping db2.c1 (0 documents) [root@node1 ~]#
Option: -c: the collection to back up
4. Export a collection as a JSON file
[root@node1 ~]# mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/testdb.json 2018-07-31T21:18:34.320+0800 connected to: 127.0.0.1:20000 2018-07-31T21:18:34.562+0800 exported 10000 records [root@node1 ~]#
Take a look at the JSON file just exported:
[root@node1 ~]# tail /tmp/testdb.json {"_id":{"$oid":"5b605da9aa5c263279ebee94"},"id":9991.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee95"},"id":9992.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee96"},"id":9993.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee97"},"id":9994.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee98"},"id":9995.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee99"},"id":9996.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee9a"},"id":9997.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee9b"},"id":9998.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee9c"},"id":9999.0,"test1":"testval1"} {"_id":{"$oid":"5b605da9aa5c263279ebee9d"},"id":10000.0,"test1":"testval1"} [root@node1 ~]#
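Because mongoexport writes one JSON document per line, a quick integrity check is to parse and count the lines and compare the total against the "exported 10000 records" message. A sketch (the helper name is ours; you would feed it the file's contents):

```javascript
// Parse a mongoexport file's contents (one JSON document per line) and
// return the document count; JSON.parse throws if any line is corrupt.
function countExportedDocs(text) {
  var n = 0;
  text.split("\n").forEach(function (line) {
    if (line.trim() === "") return; // tolerate the trailing newline
    JSON.parse(line);
    n++;
  });
  return n;
}
```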
5. Restore
1. Restore all databases
First drop testdb, db2, and db3:
[root@node1 ~]# mongo --port 20000 mongos> use testdb switched to db testdb mongos> db.dropDatabase() mongos> use db2 switched to db db2 mongos> db.dropDatabase() mongos> use db3 switched to db db3 mongos> db.dropDatabase()
Restore:
[root@node1 ~]# ls /tmp/allmongodb/ admin config db2 db3 testdb [root@node1 ~]# mongorestore --host 127.0.0.1 --port 20000 /tmp/allmongodb 2018-07-31T21:32:51.448+0800 preparing collections to restore from 2018-07-31T21:32:51.450+0800 Failed: cannot do a full restore on a sharded system - remove the 'config' directory from the dump directory first [root@node1 ~]#
The config database cannot be restored this way, so remove the config directory from the backup directory and restore again:
[root@node1 ~]# rm -rf /tmp/allmongodb/config/ [root@node1 ~]# ls /tmp/allmongodb/ admin db2 db3 testdb [root@node1 ~]# mongorestore --host 127.0.0.1 --port 20000 /tmp/allmongodb 2018-07-31T21:33:58.100+0800 preparing collections to restore from 2018-07-31T21:33:58.105+0800 reading metadata for testdb.table1 from /tmp/allmongodb/testdb/table1.metadata.json 2018-07-31T21:33:58.111+0800 reading metadata for db2.c1 from /tmp/allmongodb/db2/c1.metadata.json 2018-07-31T21:33:58.127+0800 reading metadata for db3.c3 from /tmp/allmongodb/db3/c3.metadata.json 2018-07-31T21:33:58.273+0800 restoring db3.c3 from /tmp/allmongodb/db3/c3.bson 2018-07-31T21:33:58.279+0800 restoring indexes for collection db3.c3 from metadata 2018-07-31T21:33:58.298+0800 restoring testdb.table1 from /tmp/allmongodb/testdb/table1.bson 2018-07-31T21:33:58.336+0800 restoring db2.c1 from /tmp/allmongodb/db2/c1.bson 2018-07-31T21:33:58.339+0800 restoring indexes for collection db2.c1 from metadata 2018-07-31T21:33:58.352+0800 finished restoring db3.c3 (0 documents) 2018-07-31T21:33:58.401+0800 finished restoring db2.c1 (0 documents) 2018-07-31T21:33:58.778+0800 restoring indexes for collection testdb.table1 from metadata 2018-07-31T21:33:58.838+0800 finished restoring testdb.table1 (10000 documents) 2018-07-31T21:33:58.838+0800 done [root@node1 ~]#
--drop: deletes the existing data before restoring; not recommended.
2. Restore a specific database
[root@node1 ~]# mongorestore --host 127.0.0.1 --port 20000 -d testdb --drop /tmp/allmongodb/testdb/ 2018-07-31T21:36:04.402+0800 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead 2018-07-31T21:36:04.403+0800 building a list of collections to restore from /tmp/allmongodb/testdb dir 2018-07-31T21:36:04.503+0800 reading metadata for testdb.table1 from /tmp/allmongodb/testdb/table1.metadata.json 2018-07-31T21:36:04.589+0800 restoring testdb.table1 from /tmp/allmongodb/testdb/table1.bson 2018-07-31T21:36:05.114+0800 restoring indexes for collection testdb.table1 from metadata 2018-07-31T21:36:05.158+0800 finished restoring testdb.table1 (10000 documents) 2018-07-31T21:36:05.158+0800 done [root@node1 ~]#
3. Restore a collection
Note: point mongorestore at the collection's backed-up .bson file, not at the backup directory.
[root@node1 ~]# mongorestore --host 127.0.0.1 --port 20000 -d testdb -c table1 /tmp/allmongodb/testdb/table1.bson 2018-07-31T21:43:55.687+0800 checking for collection data in /tmp/allmongodb/testdb/table1.bson 2018-07-31T21:43:55.692+0800 reading metadata for testdb.table1 from /tmp/allmongodb/testdb/table1.metadata.json 2018-07-31T21:43:55.797+0800 restoring testdb.table1 from /tmp/allmongodb/testdb/table1.bson 2018-07-31T21:43:56.211+0800 restoring indexes for collection testdb.table1 from metadata 2018-07-31T21:43:56.324+0800 finished restoring testdb.table1 (10000 documents) 2018-07-31T21:43:56.324+0800 done [root@node1 ~]#
-d: the database to restore into
-c: the collection to restore
4. Import a collection
Point mongoimport at the exported JSON file with the --file option:
[root@node1 ~]# mongoimport --host 127.0.0.1 --port 2000 -d testdb -c table1 --file /tmp/testdb.json 2018-07-31T21:39:29.090+0800 [........................] testdb.table1 0B/731KB (0.0%) 2018-07-31T21:39:29.607+0800 [........................] testdb.table1 0B/731KB (0.0%) 2018-07-31T21:39:29.607+0800 Failed: error connecting to db server: no reachable servers 2018-07-31T21:39:29.607+0800 imported 0 documents [root@node1 ~]#
Note: the import above failed with "no reachable servers" because the port was mistyped: 2000 instead of the mongos port 20000. The corrected command is:
mongoimport --host 127.0.0.1 --port 20000 -d testdb -c table1 --file /tmp/testdb.json