Troubleshoot Replica Sets¶
This section describes common strategies for troubleshooting replica set deployments.
Check Replica Set Status¶
To display the current state of the replica set and current state of each member, run the rs.status() method in a mongo shell connected to the replica set’s primary. For descriptions of the information displayed by rs.status(), see replSetGetStatus.
Note
The rs.status() method is a wrapper that runs the replSetGetStatus database command.
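For example, from a mongo shell connected to the primary (the one-liner for summarizing member states is just a convenience, not part of the API):

   // Display the full replica set status
   rs.status()

   // Optionally, print each member's name and current state
   rs.status().members.forEach(function (m) {
       printjson({ name: m.name, stateStr: m.stateStr });
   });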
Check the Replication Lag¶
Replication lag is a delay between an operation on the primary and the application of that operation from the oplog to the secondary. Replication lag can be a significant issue and can seriously affect MongoDB replica set deployments. Excessive replication lag makes “lagged” members ineligible to quickly become primary and increases the possibility that distributed read operations will be inconsistent.
To check the current length of replication lag:
In a mongo shell connected to the primary, call the rs.printSlaveReplicationInfo() method. The method returns the syncedTo value for each member, which shows the time when the last oplog entry was written to the secondary, as shown in the following example:
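(Hostnames and times in this output are illustrative.)

   source: m2.example.net:27017
       syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
       0 secs (0 hrs) behind the primary
   source: m3.example.net:27017
       syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
       0 secs (0 hrs) behind the primary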
A delayed member may show as 0 seconds behind the primary when the inactivity period on the primary is greater than the members[n].slaveDelay value.
Note
The rs.status() method is a wrapper around the replSetGetStatus database command.
Monitor the rate of replication by checking for non-zero or increasing oplog time values in the Replication Lag graph available in Cloud Manager and in Ops Manager.
Replication Lag Causes¶
Possible causes of replication lag include:
Network Latency
Check the network routes between the members of your set to ensure that there is no packet loss or network routing issue.
Use tools including ping to test latency between set members and traceroute to expose the routing of packets between network endpoints.
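For example, from one member's host (hostnames are illustrative):

   # Measure round-trip latency to another member
   ping -c 10 m2.example.net

   # Show the network route taken to reach that member
   traceroute m2.example.net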
Disk Throughput
If the file system and disk device on the secondary is unable to flush data to disk as quickly as the primary, then the secondary will have difficulty keeping state. Disk-related issues are incredibly prevalent on multi-tenant systems, including virtualized instances, and can be transient if the system accesses disk devices over an IP network (as is the case with Amazon’s EBS system.)
Use system-level tools to assess disk status, including iostat or vmstat.
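For example, on a Linux host (the flags shown are a common starting point, not a prescription):

   # Extended per-device I/O statistics, sampled every 2 seconds
   iostat -x 2

   # Virtual memory, I/O wait, and CPU statistics, sampled every 2 seconds
   vmstat 2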
Concurrency
In some cases, long-running operations on the primary can block replication on secondaries. For best results, configure write concern to require confirmation of replication to secondaries. This prevents write operations from returning if replication cannot keep up with the write load.
You can also use the database profiler to see if there are slow queries or long-running operations that correspond to the incidences of lag.
Appropriate Write Concern
If you are performing a large data ingestion or bulk load operation that requires a large number of writes to the primary, particularly with unacknowledged write concern, the secondaries will not be able to read the oplog fast enough to keep up with changes.
To prevent this, request acknowledged write concern (for example, { w: 1 }) after every 100, 1,000, or other interval to provide an opportunity for secondaries to catch up with the primary.
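The following mongo shell sketch illustrates the pattern; the collection name, document shape, and batch size are arbitrary:

   // Insert documents in batches, requesting acknowledgment from a majority
   // of replica set members after each batch so secondaries can catch up.
   var batch = [];
   for (var i = 0; i < 100000; i++) {
       batch.push({ _id: i, value: "example" });
       if (batch.length === 1000) {
           db.ingest.insertMany(batch, { writeConcern: { w: "majority" } });
           batch = [];
       }
   }
   if (batch.length > 0) {
       db.ingest.insertMany(batch, { writeConcern: { w: "majority" } });
   }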
Flow Control¶
Starting in MongoDB 4.2, administrators can limit the rate at which the primary applies its writes with the goal of keeping the majority committed lag under a configurable maximum value, flowControlTargetLagSeconds.
By default, flow control is enabled.
Note
For flow control to engage, the replica set/sharded cluster must have: featureCompatibilityVersion (FCV) of 4.2 and read concern majority enabled. That is, enabled flow control has no effect if FCV is not 4.2 or if read concern majority is disabled.
With flow control enabled, as the lag grows close to the flowControlTargetLagSeconds, writes on the primary must obtain tickets before taking locks to apply writes. By limiting the number of tickets issued per second, the flow control mechanism attempts to keep the lag under the target.
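For example, you can inspect or adjust the target from a mongo shell connected to the primary (the value shown is illustrative):

   // Read the current flow control target lag
   db.adminCommand( { getParameter: 1, flowControlTargetLagSeconds: 1 } )

   // Lower the target lag to 5 seconds
   db.adminCommand( { setParameter: 1, flowControlTargetLagSeconds: 5 } )

   // Flow control statistics appear in the flowControl section of serverStatus
   db.serverStatus().flowControl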
Slow Application of Oplog Entries¶
Starting in version 4.2 (also available starting in 4.0.6), secondary members of a replica set now log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms. These slow oplog entries depend only on the slow operation threshold. They do not depend on the log levels (either at the system or component level), or the profiling level, or the slow operation sample rate. The profiler does not capture slow oplog entries.
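For example, you might lower the slow operation threshold on a secondary so that more oplog applications are logged; the sketch below uses db.setProfilingLevel with level 0 so the profiler itself stays off, and the threshold value is illustrative:

   // Keep profiling off (level 0) but set the slow operation threshold to 50 ms
   db.setProfilingLevel(0, 50)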
Test Connections Between all Members¶
All members of a replica set must be able to connect to every other member of the set to support replication. Always verify connections in both “directions.” Networking topologies and firewall configurations can prevent normal and required connectivity, which can block replication.
Starting in MongoDB 3.6, MongoDB binaries, mongod and mongos, bind to localhost by default. If the net.ipv6 configuration file setting or the --ipv6 command line option is set for the binary, the binary additionally binds to the localhost IPv6 address.
Previously, starting from MongoDB 2.6, only the binaries from the official MongoDB RPM (Red Hat, CentOS, Fedora Linux, and derivatives) and DEB (Debian, Ubuntu, and derivatives) packages bind to localhost by default.
When bound only to the localhost, these MongoDB 3.6 binaries can only accept connections from clients (including the mongo shell, other members in your deployment for replica sets and sharded clusters) that are running on the same machine. Remote clients cannot connect to the binaries bound only to localhost.
To override and bind to other ip addresses, you can use the net.bindIp configuration file setting or the --bind_ip command-line option to specify a list of hostnames or ip addresses.
Warning
Before binding to a non-localhost (e.g. publicly accessible) IP address, ensure you have secured your cluster from unauthorized access. For a complete list of security recommendations, see Security Checklist. At minimum, consider enabling authentication and hardening network infrastructure.
For example, the following mongod instance binds to both the localhost and the hostname My-Example-Associated-Hostname, which is associated with the ip address 198.51.100.1:
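(A sketch of the startup command; any other options your deployment requires are omitted.)

   mongod --bind_ip localhost,My-Example-Associated-Hostname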
In order to connect to this instance, remote clients must specify the hostname or its associated ip address 198.51.100.1:
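(A sketch using the mongo shell.)

   mongo --host My-Example-Associated-Hostname

   mongo --host 198.51.100.1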
Consider the following example of a bidirectional test of networking:
Example
Given a replica set with three members running on three separate hosts:
m1.example.net
m2.example.net
m3.example.net
All three use the default port 27017.
Test the connection from m1.example.net to the other hosts with the following operation set from m1.example.net:
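(A sketch; assumes the mongo shell is installed on each host and the default port is reachable.)

   mongo --host m2.example.net --port 27017

   mongo --host m3.example.net --port 27017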
Test the connection from m2.example.net to the other two hosts with the following operation set from m2.example.net, as in:
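(Again, a sketch.)

   mongo --host m1.example.net --port 27017

   mongo --host m3.example.net --port 27017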
You have now tested the connection between m2.example.net and m1.example.net in both directions.
Test the connection from m3.example.net to the other two hosts with the following operation set from the m3.example.net host, as in:
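(Again, a sketch.)

   mongo --host m1.example.net --port 27017

   mongo --host m2.example.net --port 27017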
If any connection, in any direction fails, check your networking and firewall configuration and reconfigure your environment to allow these connections.
Socket Exceptions when Rebooting More than One Secondary¶
When you reboot members of a replica set, ensure that the set is able to elect a primary during the maintenance. This means ensuring that a majority of the set’s members[n].votes are available.
When a set’s active members can no longer form a majority, the set’s primary steps down and becomes a secondary. Starting in MongoDB 4.2, when the primary steps down, it no longer closes all client connections. In MongoDB 4.0 and earlier, when the primary steps down, it closes all client connections.
Clients cannot write to the replica set until the members elect a new primary.
Example
Given a three-member replica set where every member has one vote, the set can elect a primary if at least two members can connect to each other. If you reboot the two secondaries at once, the primary steps down and becomes a secondary. Until at least another secondary becomes available, i.e. at least one of the rebooted secondaries also becomes available, the set has no primary and cannot elect a new primary.
For more information on votes, see Replica Set Elections. For related information on connection errors, see Does TCP keepalive time affect MongoDB Deployments?.
Check the Size of the Oplog¶
A larger oplog can give a replica set a greater tolerance for lag, and make the set more resilient.
To check the size of the oplog for a given replica set member, connect to the member in a mongo shell and run the rs.printReplicationInfo() method.
The output displays the size of the oplog and the date ranges of the operations contained in the oplog. In the following example, the oplog is about 10 MB and is able to fit about 26 hours (94400 seconds) of operations:
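(Sample output; the sizes match the description above but the dates are illustrative.)

   configured oplog size:   10.10546875MB
   log length start to end: 94400 (26.22hrs)
   oplog first event time:  Mon Mar 19 2012 13:50:38 GMT-0400 (EDT)
   oplog last event time:   Wed Oct 03 2012 14:59:34 GMT-0400 (EDT)
   now:                     Wed Oct 03 2012 15:00:21 GMT-0400 (EDT)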
The oplog should be long enough to hold all transactions for the longest downtime you expect on a secondary. [1] At a minimum, an oplog should be able to hold 24 hours of operations; however, many users prefer to have 72 hours or even a week’s worth of operations.
Note
You normally want the oplog to be the same size on all members. If you resize the oplog, resize it on all members.
To change oplog size, see the Change the Size of the Oplog tutorial.
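For example, starting in MongoDB 4.0 a member's oplog can be resized in place with the replSetResizeOplog command (the size is given in megabytes; the value here is illustrative, and each member must be resized separately):

   // Run against the member whose oplog you want to resize
   db.adminCommand( { replSetResizeOplog: 1, size: 16000 } )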
[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
Oplog Entry Timestamp Error¶
Consider the following error in mongod output and logs:
Often, an incorrectly typed value in the ts field in the last oplog entry causes this error. The correct data type is Timestamp.
Check the type of the ts value using the following two queries against the oplog collection:
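(A sketch from the mongo shell; the oplog resides in the local database.)

   db = db.getSiblingDB("local")

   // Most recent oplog entry
   db.oplog.rs.find().sort({ $natural: -1 }).limit(1)

   // Most recent oplog entry whose ts field is a Timestamp (BSON type 17)
   db.oplog.rs.find({ ts: { $type: 17 } }).sort({ $natural: -1 }).limit(1)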
The first query returns the last document in the oplog, while the second returns the last document in the oplog where the ts value is a Timestamp. The $type operator allows you to select BSON type 17, which is the Timestamp data type.
If the queries don’t return the same document, then the last document in the oplog has the wrong data type in the ts field.
Example
If the first query returns this as the last oplog entry:
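(An illustrative entry; the values are hypothetical. Note that ts is stored as a plain document rather than a Timestamp.)

   {
      "ts" : { "t" : 1347982456000, "i" : 1 },
      "op" : "n",
      "ns" : "",
      "o" : { "msg" : "Reconfig set", "version" : 4 }
   }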
And the second query returns this as the last entry where ts has the Timestamp type:
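(Again illustrative; here ts has the correct Timestamp type.)

   {
      "ts" : Timestamp(1347982454000, 1),
      "op" : "n",
      "ns" : "",
      "o" : { "msg" : "Reconfig set", "version" : 3 }
   }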
Then the value for the ts field in the last oplog entry is of the wrong data type.
To set the proper type for this value and resolve this issue, use an update operation that resembles the following:
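(A sketch from the mongo shell, reusing the illustrative values above; substitute the values from your own oplog entry.)

   db = db.getSiblingDB("local")

   db.oplog.rs.update(
      { ts: { t: 1347982456000, i: 1 } },
      { $set: { ts: new Timestamp(1347982456000, 1) } }
   )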
Modify the timestamp values as needed based on your oplog entry. This operation may take some time to complete because the update must scan and pull the entire oplog into memory.