Background:
In a typical MongoDB master/slave deployment or replica set, data is replicated in near real time. The flip side is that an erroneous operation on the primary is immediately propagated to every member, corrupting the data across the whole cluster. To protect against this, we can pick one mongod instance in the cluster and use it for delayed replication: when a mistake is made on the primary, that one member remains unaffected for a while, and the unaffected member can then be used to recover the data.
This is MongoDB's delayed-replication (delayed member) feature: after an operation is applied on the primary, the delayed member does not replicate it immediately, but only after a configured interval has elapsed.
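In configuration terms, a delayed member is just an ordinary member whose slaveDelay (in seconds) is non-zero. Here is a minimal sketch, assuming the member to delay sits at index 1 of the members array; the one-hour delay and the hidden flag are illustrative choices of mine, not values from the walkthrough that follows. The MongoDB documentation recommends giving a delayed member priority 0, so it can never be elected primary, and usually hidden: true as well, so clients never read its stale data; the delay should also stay well within the time window the oplog can cover.

cmh0:PRIMARY> cfg = rs.conf()
cmh0:PRIMARY> cfg.members[1].priority = 0      // must not be electable as primary
cmh0:PRIMARY> cfg.members[1].hidden = true     // optional: hide stale data from clients
cmh0:PRIMARY> cfg.members[1].slaveDelay = 3600 // illustrative: stay one hour behind the primary
cmh0:PRIMARY> rs.reconfig(cfg)

The walkthrough below does the same thing, only with a 30-second delay and without hiding the member.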
Configuration:
Take my test environment as an example; this is my MongoDB replica set:
cmh0:PRIMARY> rs.status()
{
	"set" : "cmh0",
	"date" : ISODate("2016-08-22T02:43:16.240Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 1,
			"name" : "192.168.52.128:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 82,
			"optime" : Timestamp(1470581983, 1),
			"optimeDate" : ISODate("2016-08-07T14:59:43Z"),
			"electionTime" : Timestamp(1471833721, 1),
			"electionDate" : ISODate("2016-08-22T02:42:01Z"),
			"configVersion" : 1,
			"self" : true
		},
		{
			"_id" : 2,
			"name" : "192.168.52.135:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 71,
			"optime" : Timestamp(1470581983, 1),
			"optimeDate" : ISODate("2016-08-07T14:59:43Z"),
			"lastHeartbeat" : ISODate("2016-08-22T02:43:15.138Z"),
			"lastHeartbeatRecv" : ISODate("2016-08-22T02:43:14.978Z"),
			"pingMs" : 0,
			"lastHeartbeatMessage" : "could not find member to sync from",
			"configVersion" : 1
		},
		{
			"_id" : 3,
			"name" : "192.168.52.135:27019",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 75,
			"optime" : Timestamp(1470581983, 1),
			"optimeDate" : ISODate("2016-08-07T14:59:43Z"),
			"lastHeartbeat" : ISODate("2016-08-22T02:43:15.138Z"),
			"lastHeartbeatRecv" : ISODate("2016-08-22T02:43:15.138Z"),
			"pingMs" : 0,
			"configVersion" : 1
		}
	],
	"ok" : 1
}
At this point no delayed member has been configured yet, so data is replicated in real time:
cmh0:PRIMARY> use cmhtest
switched to db cmhtest
cmh0:PRIMARY> db.cmh.insert({ "name" : "ChenMinghui" })
WriteResult({ "nInserted" : 1 })
cmh0:PRIMARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 195secs (0.05hrs)
oplog first event time:  Mon Aug 22 2016 10:51:22 GMT+0800 (CST)
oplog last event time:   Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
now:                     Mon Aug 22 2016 10:55:00 GMT+0800 (CST)
cmh0:PRIMARY> rs.printSlaveReplicationInfo()
source: 192.168.52.135:27017
	syncedTo: Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
	0 secs (0 hrs) behind the primary
source: 192.168.52.135:27019
	syncedTo: Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
	0 secs (0 hrs) behind the primary
As you can see, both secondary members replicated the data at the same moment, in real time.
Now configure 192.168.52.135:27017 as a delayed member:
cmh0:PRIMARY> cfg=rs.conf();
{
	"_id" : "cmh0",
	"version" : 1,
	"members" : [
		{
			"_id" : 1,
			"host" : "192.168.52.128:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : { },
			"slaveDelay" : 0,
			"votes" : 1
		},
		{
			"_id" : 2,
			"host" : "192.168.52.135:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : { },
			"slaveDelay" : 0,
			"votes" : 1
		},
		{
			"_id" : 3,
			"host" : "192.168.52.135:27019",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : { },
			"slaveDelay" : 0,
			"votes" : 1
		}
	],
	"settings" : {
		"chainingAllowed" : true,
		"heartbeatTimeoutSecs" : 10,
		"getLastErrorModes" : { },
		"getLastErrorDefaults" : {
			"w" : 1,
			"wtimeout" : 0
		}
	}
}
cmh0:PRIMARY> cfg.members[1].priority=0
0
cmh0:PRIMARY> cfg.members[1].slaveDelay=30
30
cmh0:PRIMARY> rs.reconfig(cfg);
{ "ok" : 1 }
cmh0:PRIMARY> rs.conf()
{
	"_id" : "cmh0",
	"version" : 2,
	"members" : [
		{
			"_id" : 1,
			"host" : "192.168.52.128:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : { },
			"slaveDelay" : 0,
			"votes" : 1
		},
		{
			"_id" : 2,
			"host" : "192.168.52.135:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 0,
			"tags" : { },
			"slaveDelay" : 30,
			"votes" : 1
		},
		{
			"_id" : 3,
			"host" : "192.168.52.135:27019",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : { },
			"slaveDelay" : 0,
			"votes" : 1
		}
	],
	"settings" : {
		"chainingAllowed" : true,
		"heartbeatTimeoutSecs" : 10,
		"getLastErrorModes" : { },
		"getLastErrorDefaults" : {
			"w" : 1,
			"wtimeout" : 0
		}
	}
}
As you can see, the 192.168.52.135:27017 member now carries "slaveDelay" : 30, meaning its replication is pushed back 30 seconds behind the primary.
You can test this yourself; the observed replication delay will be roughly 30 seconds. One caveat: the system clocks of all MongoDB hosts must agree. Otherwise delayed replication misbehaves, and the member may fail to sync even after the configured delay has passed.
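To watch the delay in action, a quick check like the following works. This is a sketch that reuses the cmhtest database and cmh collection from the example above; the inserted document is an arbitrary test value of mine:

cmh0:PRIMARY> use cmhtest
cmh0:PRIMARY> db.cmh.insert({ "name" : "DelayTest" })   // hypothetical test document
cmh0:PRIMARY> rs.printSlaveReplicationInfo()

Run immediately after the insert, rs.printSlaveReplicationInfo() should report 192.168.52.135:27017 up to about 30 seconds behind the primary while 192.168.52.135:27019 stays at 0 seconds; query the delayed member roughly 30 seconds later and the document will have arrived.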
This article is reposted from icenycmh's 51CTO blog. Original link: http://blog.51cto.com/icenycmh/1841001. Please contact the original author before republishing.