Master-slave
Master configuration file: /Users/olifer/middle/mongo/master-slave/master/mongod.conf
bind_ip = 127.0.0.1
port = 27017
dbpath = /Users/olifer/middle/mongo/master-slave/master/data/
master = true
Slave configuration file: /Users/olifer/middle/mongo/master-slave/slave/mongod.conf
bind_ip = 127.0.0.1
port = 27018
dbpath = /Users/olifer/middle/mongo/master-slave/slave/data/
slave = true
source = 127.0.0.1:27017
Start the master server. Seeing log output like the following means the configuration worked:
mongod -f ~/middle/mongo/master-slave/master/mongod.conf
2017-12-05T15:35:22.479+0800 I JOURNAL [initandlisten] journal dir=/Users/olifer/middle/mongo/master-slave/master/data/journal
2017-12-05T15:35:22.479+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2017-12-05T15:35:22.500+0800 I JOURNAL [durability] Durability thread started
2017-12-05T15:35:22.500+0800 I JOURNAL [journal writer] Journal writer thread started
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] MongoDB starting : pid=6368 port=27017 dbpath=/Users/olifer/middle/mongo/master-slave/master/data/ master=1 64-bit host=oliferdeMacBook-Pro.local
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] db version v3.0.7
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] git version: nogitversion
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] build info: Darwin yosemitevm.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] allocator: system
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] options: { config: "/Users/olifer/middle/mongo/master-slave/master/mongod.conf", master: true, net: { bindIp: "127.0.0.1", port: 27017 }, storage: { dbPath: "/Users/olifer/middle/mongo/master-slave/master/data/" } }
2017-12-05T15:35:22.505+0800 I INDEX [initandlisten] allocating new ns file /Users/olifer/middle/mongo/master-slave/master/data/local.ns, filling with zeroes...
2017-12-05T15:35:22.570+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/master/data/local.0, filling with zeroes...
2017-12-05T15:35:22.570+0800 I STORAGE [FileAllocator] creating directory /Users/olifer/middle/mongo/master-slave/master/data/_tmp
2017-12-05T15:35:22.786+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/master/data/local.0, size: 64MB, took 0.215 secs
2017-12-05T15:35:22.972+0800 I REPL [initandlisten] ******
2017-12-05T15:35:22.972+0800 I REPL [initandlisten] creating replication oplog of size: 192MB...
2017-12-05T15:35:22.972+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/master/data/local.1, filling with zeroes...
2017-12-05T15:35:24.061+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/master/data/local.1, size: 256MB, took 1.088 secs
2017-12-05T15:35:24.117+0800 I REPL [initandlisten] ******
2017-12-05T15:35:24.119+0800 I NETWORK [initandlisten] waiting for connections on port 27017
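The "creating replication oplog of size: 192MB" line is the operation log that the slave will tail to replicate changes. As a quick side check (not part of the original walkthrough), the master's shell can report the oplog's size and the time window it covers via the standard helper db.printReplicationInfo():

// Run in the mongo shell connected to the master (127.0.0.1:27017).
// Prints the configured oplog size and the time range of operations it holds.
db.printReplicationInfo()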
Start the slave server. Seeing log output like the following means the configuration worked:
mongod -f ~/middle/mongo/master-slave/slave/mongod.conf
2017-12-05T15:37:11.518+0800 I JOURNAL [initandlisten] journal dir=/Users/olifer/middle/mongo/master-slave/slave/data/journal
2017-12-05T15:37:11.518+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2017-12-05T15:37:11.535+0800 I JOURNAL [durability] Durability thread started
2017-12-05T15:37:11.535+0800 I JOURNAL [journal writer] Journal writer thread started
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] MongoDB starting : pid=7304 port=27018 dbpath=/Users/olifer/middle/mongo/master-slave/slave/data/ slave=1 64-bit host=oliferdeMacBook-Pro.local
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] db version v3.0.7
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] git version: nogitversion
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] build info: Darwin yosemitevm.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] allocator: system
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] options: { config: "/Users/olifer/middle/mongo/master-slave/slave/mongod.conf", net: { bindIp: "127.0.0.1", port: 27018 }, slave: true, source: "127.0.0.1:27017", storage: { dbPath: "/Users/olifer/middle/mongo/master-slave/slave/data/" } }
2017-12-05T15:37:11.536+0800 I INDEX [initandlisten] allocating new ns file /Users/olifer/middle/mongo/master-slave/slave/data/local.ns, filling with zeroes...
2017-12-05T15:37:11.605+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/slave/data/local.0, filling with zeroes...
2017-12-05T15:37:11.605+0800 I STORAGE [FileAllocator] creating directory /Users/olifer/middle/mongo/master-slave/slave/data/_tmp
2017-12-05T15:37:11.824+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/slave/data/local.0, size: 64MB, took 0.218 secs
2017-12-05T15:37:11.890+0800 I NETWORK [initandlisten] waiting for connections on port 27018
2017-12-05T15:37:12.894+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:17.930+0800 I REPL [replslave] repl: sleep 2 sec before next pass
2017-12-05T15:37:19.935+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:24.958+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:33.188+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:43.193+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:53.198+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
From the slave's log we can see that it keeps polling the master to stay in sync.
Connect to both servers with the mongo client:
mongo 127.0.0.1:27017  # master
mongo 127.0.0.1:27018  # slave
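Before running any commands, it is worth confirming which node is which; db.isMaster() is a standard shell helper, so a quick check from either shell looks like this:

// On 127.0.0.1:27017 this returns { "ismaster" : true, ... };
// on 127.0.0.1:27018 it returns { "ismaster" : false, ... }.
db.isMaster()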
Run the following on the master:
> db.test.insert({"name":"linyang"})
WriteResult({ "nInserted" : 1 })
> db.test.find({});
{ "_id" : ObjectId("5a264e0ecb6f7d3713c516a7"), "name" : "linyang" }
After inserting a single document on the master, the slave's log shows the corresponding sync activity:
2017-12-05T15:43:10.387+0800 I INDEX [replslave] allocating new ns file /Users/olifer/middle/mongo/master-slave/slave/data/test.ns, filling with zeroes...
2017-12-05T15:43:10.450+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, filling with zeroes...
2017-12-05T15:43:10.646+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, size: 64MB, took 0.196 secs
2017-12-05T15:43:10.702+0800 I REPL [replslave] resync: dropping database test
2017-12-05T15:43:10.709+0800 I JOURNAL [replslave] journalCleanup...
2017-12-05T15:43:10.710+0800 I JOURNAL [replslave] removeJournalFiles
2017-12-05T15:43:10.713+0800 I JOURNAL [replslave] journalCleanup...
2017-12-05T15:43:10.713+0800 I JOURNAL [replslave] removeJournalFiles
2017-12-05T15:43:10.715+0800 I REPL [replslave] resync: cloning database test to get an initial copy
2017-12-05T15:43:10.718+0800 I INDEX [replslave] allocating new ns file /Users/olifer/middle/mongo/master-slave/slave/data/test.ns, filling with zeroes...
2017-12-05T15:43:10.803+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, filling with zeroes...
2017-12-05T15:43:11.056+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, size: 64MB, took 0.252 secs
2017-12-05T15:43:11.107+0800 I INDEX [replslave] build index on: test.test properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.test" }
2017-12-05T15:43:11.107+0800 I INDEX [replslave] building index using bulk method
2017-12-05T15:43:11.107+0800 I INDEX [replslave] build index done. scanned 1 total records. 0 secs
2017-12-05T15:43:11.107+0800 I STORAGE [replslave] copying indexes for: { name: "test", options: {} }
2017-12-05T15:43:11.108+0800 I REPL [replslave] resync: done with initial clone for db: test
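Replication lag can also be measured from the slave itself; db.printSlaveReplicationInfo() is a standard shell helper that prints the sync source and how far behind the slave is:

// Run in the mongo shell connected to the slave (127.0.0.1:27018).
db.printSlaveReplicationInfo()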
Now check from the slave's client whether the data has arrived:
mongo 127.0.0.1:27018
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:27018/test
> db.test.find({});
{ "_id" : ObjectId("5a264e0ecb6f7d3713c516a7"), "name" : "linyang" }
>
The document has indeed been replicated. Next, try inserting a record on the slave:
> db.test.insert({"age":34});
WriteResult({
    "writeError" : {
        "code" : undefined,
        "errmsg" : "not master"
    }
})
>
The slave cannot accept writes. So if the master dies, and the slave cannot take writes either, is the whole MongoDB deployment dead? Of course not: MongoDB also offers replica sets.
Replica sets
We will simulate a replica set with three mongod instances.
Configuration file for the first instance: /Users/olifer/middle/mongo/replica/a/mongod.conf
dbpath = /Users/olifer/middle/mongo/replica/a/data
port = 8001
bind_ip = 127.0.0.1
replSet = child/127.0.0.1:8002
Configuration file for the second instance: /Users/olifer/middle/mongo/replica/b/mongod.conf
dbpath = /Users/olifer/middle/mongo/replica/b/data
port = 8002
bind_ip = 127.0.0.1
replSet = child/127.0.0.1:8003
Configuration file for the third instance: /Users/olifer/middle/mongo/replica/c/mongod.conf
dbpath = /Users/olifer/middle/mongo/replica/c/data
port = 8003
bind_ip = 127.0.0.1
replSet = child/127.0.0.1:8001
Start the three servers:
mongod -f ~/middle/mongo/replica/a/mongod.conf
mongod -f ~/middle/mongo/replica/b/mongod.conf
mongod -f ~/middle/mongo/replica/c/mongod.conf
Once they are up, open a shell to any one of the three and initiate the set:
mongo 127.0.0.1:8002
> config = {_id: 'child', members: [
      { "_id": 1, "host": "127.0.0.1:8001" },
      { "_id": 2, "host": "127.0.0.1:8002" },
      { "_id": 3, "host": "127.0.0.1:8003" }
  ]}
{
    "_id" : "child",
    "members" : [
        {
            "_id" : 1,
            "host" : "127.0.0.1:8001"
        },
        {
            "_id" : 2,
            "host" : "127.0.0.1:8002"
        },
        {
            "_id" : 3,
            "host" : "127.0.0.1:8003"
        }
    ]
}
> rs.initiate(config);
{ "ok" : 1 }
child:SECONDARY>
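To double-check what rs.initiate() actually stored, rs.conf() returns the replica set's current configuration document:

// Returns the document we passed to rs.initiate(), plus a version
// field that the set increments on every reconfiguration.
rs.conf()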
After initiation the shell prompt changes. Now connect to the other two nodes:
mongo 127.0.0.1:8001
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:8001/test
child:PRIMARY>

mongo 127.0.0.1:8003
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:8003/test
child:SECONDARY>
The child:PRIMARY> prompt marks the active (primary) node; the others are secondaries. Note that by default only the primary serves queries; issuing one on a secondary produces an error unless you first allow secondary reads with rs.slaveOk() (shown below). Run rs.status() from any client to inspect the state of all members:
child:PRIMARY> rs.status()
{
    "set" : "child",
    "date" : ISODate("2017-12-05T08:56:44.523Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "127.0.0.1:8001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 620,
            "optime" : Timestamp(1512463818, 1),
            "optimeDate" : ISODate("2017-12-05T08:50:18Z"),
            "electionTime" : Timestamp(1512463821, 1),
            "electionDate" : ISODate("2017-12-05T08:50:21Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "127.0.0.1:8002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 387,
            "optime" : Timestamp(1512463818, 1),
            "optimeDate" : ISODate("2017-12-05T08:50:18Z"),
            "lastHeartbeat" : ISODate("2017-12-05T08:56:43.756Z"),
            "lastHeartbeatRecv" : ISODate("2017-12-05T08:56:43.756Z"),
            "pingMs" : 0,
            "configVersion" : 1
        },
        {
            "_id" : 3,
            "name" : "127.0.0.1:8003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 387,
            "optime" : Timestamp(1512463818, 1),
            "optimeDate" : ISODate("2017-12-05T08:50:18Z"),
            "lastHeartbeat" : ISODate("2017-12-05T08:56:43.756Z"),
            "lastHeartbeatRecv" : ISODate("2017-12-05T08:56:43.756Z"),
            "pingMs" : 0,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
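The output is verbose. If you only care about each member's role, a small helper of my own (a sketch, not part of the shell API) trims it down:

// Print "host -> state" for every member of the set.
rs.status().members.forEach(function (m) {
    print(m.name + " -> " + m.stateStr);
});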
With the replica set built, let's verify it.
Insert data on the primary, then read it back from a secondary:
child:PRIMARY> db.repl.insert({"name":"123"});
WriteResult({ "nInserted" : 1 })

child:SECONDARY> db.repl.find();
{ "_id" : ObjectId("5a26602f5cefb1fdb377843b"), "name" : "123" }
Replication works as expected.
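One caveat: depending on the shell session, the find() on the secondary may be rejected with "not master and slaveOk=false". If that happens, enable secondary reads for the current connection first:

// Run in the secondary's shell; applies only to this connection.
rs.slaveOk()
db.repl.find()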
A secondary cannot accept writes:
child:SECONDARY> db.repl.insert({"age":444});
WriteResult({
    "writeError" : {
        "code" : undefined,
        "errmsg" : "not master"
    }
})
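An application therefore has to route its writes to the primary. Any member can tell you which node that currently is:

// Works from any member's shell; returns e.g. "127.0.0.1:8001".
db.isMaster().primary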
If the primary is shut down, the remaining members hold an election and choose a new primary.
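One clean way to take the primary down, rather than killing the process, is to ask it to stop from its own shell; a minimal sketch:

// Must be run against the admin database of the node being stopped.
db.getSiblingDB("admin").shutdownServer()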
After I shut down the original primary on 8001, an internal election promoted 8002 to primary, which the shell prompt on 8002 also reflects:
child:SECONDARY>
child:PRIMARY>
Now look at the latest status:
child:PRIMARY> rs.status();
{
    "set" : "child",
    "date" : ISODate("2017-12-05T09:09:31.666Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "127.0.0.1:8001",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2017-12-05T09:09:31.193Z"),
            "lastHeartbeatRecv" : ISODate("2017-12-05T09:07:12.773Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "Failed attempt to connect to 127.0.0.1:8001; couldn't connect to server 127.0.0.1:8001 (127.0.0.1), connection attempt failed",
            "configVersion" : -1
        },
        {
            "_id" : 2,
            "name" : "127.0.0.1:8002",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1308,
            "optime" : Timestamp(1512464432, 2),
            "optimeDate" : ISODate("2017-12-05T09:00:32Z"),
            "electionTime" : Timestamp(1512464835, 1),
            "electionDate" : ISODate("2017-12-05T09:07:15Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "127.0.0.1:8003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1152,
            "optime" : Timestamp(1512464432, 2),
            "optimeDate" : ISODate("2017-12-05T09:00:32Z"),
            "lastHeartbeat" : ISODate("2017-12-05T09:09:30.963Z"),
            "lastHeartbeatRecv" : ISODate("2017-12-05T09:09:30.963Z"),
            "pingMs" : 0,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
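If 8001 is now restarted with its original command (mongod -f ~/middle/mongo/replica/a/mongod.conf), it rejoins the set as a secondary and catches up from the new primary; you can confirm from any shell:

// Once 8001 is back, every member should report health: 1 and
// 8001's stateStr should read "SECONDARY" again.
rs.status().members.map(function (m) { return m.name + ": " + m.stateStr; })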
This confirms that the internal failover worked. That leaves sharding, which we will cover next time.
Author: 数齐
Link: https://www.jianshu.com/p/1f5dd7492228