Release Notes for MongoDB 4.4¶
On this page
- Minor Releases
- Aggregation
- Replica Sets
- Sharded Clusters
- Projection
- Transactions
- Security Improvements
- Structured Logging
- Platform Support
- Mongo Shell
- Tools
- Drivers
- Indexes
- Removed Commands
- Networking
- General Improvements
- Changes Affecting Compatibility
- Upgrade Procedures
- Download
- Known Issues
- Report an Issue
Minor Releases¶
4.4.1 - Sep 9, 2020¶
Issues fixed:
- SERVER-48531: 3 way deadlock can happen between chunk splitter, prepared transactions and stepdown thread.
- SERVER-48641: Deadlock due to the MigrationDestinationManager waiting for write concern with the session checked-out
- SERVER-49546: setFCV to 4.4 should insert range deletion tasks in batches rather than one at a time
- SERVER-49694: On a sharded cluster, nearest or hedged reads may not be routed to a near shard replica.
- SERVER-50137: MongoDB crashes with Invariant failure due to oplog entries generated in 3.4
- SERVER-50140: Initial sync cannot survive unclean restart of the sync source
- SERVER-50170: Fix server selection failure on mongos
- WT-6623: Set the connection level file id in recovery file scan
- All JIRA issues closed in 4.4.1
- 4.4.1 Changelog
Aggregation¶
Union All ($unionWith Stage)¶
MongoDB 4.4 adds the $unionWith aggregation stage, providing the ability to combine pipeline results from multiple collections into a single result set. For details, see $unionWith.
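For example, a minimal sketch of combining two collections (collection and field names are illustrative):
```javascript
// Combine documents from sales2019 and sales2020, then total quantities per item.
db.sales2019.aggregate([
  { $unionWith: { coll: "sales2020" } },
  { $group: { _id: "$item", totalQuantity: { $sum: "$quantity" } } }
])
```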
Custom Aggregation Expressions¶
Starting in version 4.4, MongoDB provides the following new operators that allow users to define custom aggregation expressions:
- $accumulator
- $function
With the addition of these new operators, you can use aggregation to write custom JavaScript expressions instead of relying on mapReduce and $where.
Note
Even before version 4.4, various map-reduce expressions could also be rewritten using other aggregation pipeline operators, such as $group, $merge, etc., without requiring custom functions. For more information, see Map-Reduce to Aggregation Pipeline.
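As a brief sketch of the new custom-expression operators (collection and field names are illustrative), $function embeds a JavaScript function directly in a pipeline:
```javascript
// Concatenate two fields with a user-defined JavaScript function.
db.players.aggregate([
  {
    $addFields: {
      fullName: {
        $function: {
          body: function (first, last) { return first + " " + last; },
          args: ["$firstName", "$lastName"],
          lang: "js"
        }
      }
    }
  }
])
```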
New Aggregation Operators¶
Operator | Description |
---|---|
$accumulator | Returns the result of a user-defined accumulator operator. |
$binarySize | Returns the size of a given string or binary data value's content in bytes. |
$bsonSize | Returns the size in bytes of a given document (i.e. bsontype Object) when encoded as BSON. |
$first | Returns the first element in an array. |
$function | Defines a custom aggregation expression. |
$last | Returns the last element in an array. |
$isNumber | Returns a boolean indicating whether the specified expression resolves to a numeric BSON type. |
$replaceOne | Replaces the first instance of a matched string in a given input. |
$replaceAll | Replaces all instances of a matched string in a given input. |
General Aggregation Improvements¶
$out¶
Starting in MongoDB 4.4, $out can output to a collection in a different database. In earlier versions, $out could only output to a collection in the same database where the aggregation is run.
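A minimal sketch of cross-database output (database and collection names are illustrative):
```javascript
// Write matching documents from the current database into reporting.completedOrders.
db.orders.aggregate([
  { $match: { status: "completed" } },
  { $out: { db: "reporting", coll: "completedOrders" } }
])
```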
$indexStats¶
Starting in MongoDB 4.4 (also available starting in 4.2.4), $indexStats includes the following fields in its output:
$merge¶
Starting in MongoDB 4.4, $merge can output to the same collection that is being aggregated. You can also output to a collection which appears in other stages of the pipeline, such as $lookup.
Versions of MongoDB prior to 4.4 did not allow $merge to output to the same collection as the collection being aggregated.
Warning
When $merge outputs to the same collection that is being aggregated, documents may get updated multiple times or the operation may result in an infinite loop. This behavior occurs when the update performed by $merge changes the physical location of documents stored on disk. When the physical location of a document changes, $merge may view it as an entirely new document, resulting in additional updates. For more information on this behavior, see Halloween Problem.
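A hedged sketch of writing back into the source collection (collection and field names are illustrative):
```javascript
// Mark unprocessed events as processed and merge the results back into the same collection.
db.events.aggregate([
  { $match: { processed: false } },
  { $addFields: { processed: true } },
  { $merge: { into: "events", on: "_id", whenMatched: "replace", whenNotMatched: "discard" } }
])
```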
$planCacheStats Changes¶
Starting in version 4.4:
- The $planCacheStats stage can be run on mongos instances as well as on mongod instances. In 4.2, the $planCacheStats stage can only run on mongod instances.
- $planCacheStats includes new fields: the host field and, when run against a mongos, the shard field.
- The mongo shell provides the method PlanCache.list() as a wrapper for the $planCacheStats aggregation stage.
- MongoDB removes the planCacheListPlans and planCacheListQueryShapes commands, and the PlanCache.getPlansByQuery() and PlanCache.listQueryShapes() methods. Use $planCacheStats or PlanCache.list() instead.
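A brief sketch of the new wrapper (collection name is illustrative):
```javascript
// List plan cache entries for the orders collection via the new helper...
db.orders.getPlanCache().list()
// ...or via the aggregation stage directly.
db.orders.aggregate([ { $planCacheStats: {} } ])
```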
$collStats Changes¶
Starting in MongoDB 4.4, $collStats accepts the queryExecStats field as an argument document. Providing this field returns query execution statistics in the output, including the collectionScans field.
The collectionScans field contains an embedded document bearing the following fields:
Field Name | Description |
---|---|
total | A 64-bit integer giving the total number of queries that performed a collection scan. The total consists of queries that did and did not use a tailable cursor. |
nonTailable | A 64-bit integer giving the number of queries that performed a collection scan that did not use a tailable cursor. |
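A minimal sketch (collection name is illustrative):
```javascript
// Return query execution statistics, including collection scan counters, for orders.
db.orders.aggregate([ { $collStats: { queryExecStats: {} } } ])
```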
explain Changes¶
Starting in MongoDB 4.4, when you run the db.collection.explain().aggregate() method in executionStats and allPlansExecution modes, each pipeline stage listed in the explain output includes nReturned and executionTimeMillisEstimate.
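For example, a sketch of requesting per-stage execution statistics (collection and field names are illustrative):
```javascript
// Each pipeline stage in the output reports nReturned and executionTimeMillisEstimate.
db.orders.explain("executionStats").aggregate([
  { $match: { status: "A" } },
  { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
])
```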
Replica Sets¶
Resumable Initial Sync¶
Starting in MongoDB 4.4, a secondary performing initial sync can attempt to resume the sync process if interrupted by a transient (i.e. temporary) network error, collection drop, or collection rename. The sync source must also run MongoDB 4.4 to support resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, the secondary must restart the initial sync process as if it encountered a non-transient network error.
By default, the secondary tries to resume initial sync for 24 hours.
MongoDB 4.4 adds the
initialSyncTransientErrorRetryPeriodSeconds
server
parameter for controlling the amount of time the secondary attempts to
resume initial sync. If the secondary cannot successfully resume the
initial sync process during the configured time period, it selects a new
healthy source from the replica set and restarts the initial
synchronization process from the beginning.
Prior to MongoDB 4.4, the secondary would restart the entire initial sync if it encountered an error during the process.
Streaming Replication¶
Starting in MongoDB 4.4, sync sources send a continuous stream of oplog entries to their syncing secondaries.
Prior to MongoDB 4.4, secondaries fetched batches of oplog entries by issuing a request to their sync source and waiting for a response. This required a network roundtrip for each batch of oplog entries. MongoDB 4.4 adds the oplogFetcherUsesExhaust startup parameter for disabling streaming replication and using the older replication behavior.
For details, see Streaming Replication.
Rollback Directory¶
Starting in MongoDB 4.4, the rollback directory for a collection is named after the collection's UUID rather than the collection namespace.
For details, see Rollback Data.
Minimum Oplog Retention Period¶
Starting in MongoDB 4.4, you can specify the minimum number of hours
to preserve an oplog entry. The mongod
only removes
an oplog entry if:
- The oplog has reached the maximum configured size, and
- The oplog entry is older than the configured number of hours based on the host system clock.
By default MongoDB does not set a minimum oplog retention period and automatically truncates the oplog starting with the oldest entries to maintain the configured maximum oplog size.
To configure the minimum oplog retention period when starting the mongod, either:
- Add the storage.oplogMinRetentionHours setting to the mongod configuration file, or
- Add the --oplogMinRetentionHours command line option.
To configure the minimum oplog retention period on a running mongod, use replSetResizeOplog. Setting the minimum oplog retention period while the mongod is running overrides any values set on startup. You must update the value of the corresponding configuration file setting or command line option to persist those changes through a server restart.
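A minimal sketch of setting the retention period at runtime (the size value is illustrative):
```javascript
// Keep the oplog at a 16000 MB maximum size and retain entries for at least 24 hours.
db.adminCommand({ replSetResizeOplog: 1, size: 16000, minRetentionHours: 24 })
```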
Important
The oplog can grow without constraint so as to retain oplog entries for the configured number of hours. This may result in reduction or exhaustion of system disk space due to a combination of high write volume and large retention period.
Indexes Build Simultaneously on Data-Bearing Replica Set Members¶
Requires featureCompatibilityVersion 4.4+
Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to start index builds simultaneously across replica set members.
MongoDB 4.4 running featureCompatibilityVersion: "4.2" builds indexes on the primary before replicating the index build to secondaries.
Starting with MongoDB 4.4, index builds on a replica set or sharded
cluster build simultaneously across all data-bearing replica set
members. For sharded clusters, the index build occurs only on shards
containing data for the collection being indexed. The primary
requires a minimum number of data-bearing voting
members (i.e. commit quorum), including itself,
that must complete the build before marking the index as ready for
use.
By default, index builds use a commit quorum of all data-bearing voting
members. To start an index build with a non-default commit quorum,
MongoDB 4.4 adds the commitQuorum parameter to
createIndexes
or its shell helpers
db.collection.createIndex()
and
db.collection.createIndexes()
.
To modify the quorum required for an in-progress index
build, MongoDB 4.4 introduces the new setIndexCommitQuorum
command.
See Index Builds in Replicated Environments for more information.
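A hedged sketch of both commands (collection and index names are illustrative):
```javascript
// Start an index build that is marked ready once a majority of
// data-bearing voting members have completed it.
db.runCommand({
  createIndexes: "restaurants",
  indexes: [ { key: { borough: 1 }, name: "borough_1" } ],
  commitQuorum: "majority"
})

// Change the commit quorum of the in-progress build.
db.runCommand({
  setIndexCommitQuorum: "restaurants",
  indexNames: [ "borough_1" ],
  commitQuorum: 2
})
```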
Replica Set Reconfiguration Changes¶
Changes to replSetReconfig¶
Starting in MongoDB 4.4, the replSetReconfig
command
waits until a majority of voting members install the replica
configuration before returning success. A voting member is any
replica member where members[n].votes
is 1
, including
arbiters. First, the operation waits until the current
configuration is committed before installing the new configuration
on the primary. The operation then waits until a majority of voting
members install the new configuration before returning
successfully. See
Reconfiguration Waits Until a Majority of Members Install the Replica Configuration for more information.
replSetReconfig
waits indefinitely for a majority of
voting members to install the configuration by default. MongoDB 4.4
also adds the optional maxTimeMS parameter to
replSetReconfig
for specifying the maximum amount of
time to wait for the operation to return successfully.
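A brief sketch of a reconfiguration with a wait limit (the priority change and timeout are illustrative):
```javascript
// Bump a member's priority, waiting at most 5 seconds for the new
// configuration to be installed by a majority of voting members.
const cfg = rs.conf();
cfg.members[0].priority = 2;
db.adminCommand({ replSetReconfig: cfg, maxTimeMS: 5000 })
```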
Starting in MongoDB 4.4, the replSetReconfig
command
allows adding or removing no more than 1
voting
member at a time. To add or remove multiple
voting members, issue a series of replSetReconfig
or
rs.reconfig()
operations to add or remove one member at a
time. See Reconfiguration Can Add or Remove No More than One Voting Member at a Time for more
information.
Changes to replSetGetConfig¶
Starting in MongoDB 4.4, the replSetGetConfig command can specify a new option commitmentStatus: true when run on the primary. When run with the option, the command includes a commitmentStatus field in the output. This output field indicates whether the replica set's previous reconfig has been committed, so that the replica set is ready to be reconfigured again. For more information, see the replSetGetConfig command.
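A minimal sketch, run against the primary:
```javascript
// Includes a commitmentStatus field indicating whether the previous
// reconfiguration has been committed.
db.adminCommand({ replSetGetConfig: 1, commitmentStatus: true })
```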
Changes to Replica Configuration Document¶
Starting in MongoDB 4.4, MongoDB adds a term field to the replica set configuration document. Replica set members use term and version to achieve consensus on the "newest" replica configuration. Setting featureCompatibilityVersion (fCV) to "4.4" implicitly performs a replSetReconfig to add the term field to the configuration document and blocks until the new configuration propagates to a majority of replica set members. Similarly, downgrading to fCV "4.2" implicitly performs a reconfiguration to remove the term field.
Preferred Initial Sync Source¶
Starting in MongoDB 4.4, you can specify the preferred initial sync source using the initialSyncSourceReadPreference parameter. You can only set this parameter on mongod startup, using either the setParameter configuration file setting or the --setParameter command line option.
initialSyncSourceReadPreference supports the following read preference modes:
- primary
- primaryPreferred (Default for voting replica set members)
- secondary
- secondaryPreferred
- nearest (Default for newly added or non-voting replica set members)
If the replica set has disabled chaining, the default initialSyncSourceReadPreference read preference mode is primary.
You cannot specify a tag set or maxStalenessSeconds to initialSyncSourceReadPreference.
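Because the parameter is startup-only, a minimal sketch of setting it on the command line (other startup options omitted for brevity):
```
mongod --replSet rs0 --setParameter initialSyncSourceReadPreference=secondaryPreferred
```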
Mirrored Reads¶
Starting in version 4.4, MongoDB provides mirrored reads to pre-warm electable secondary members’ cache with the most recently accessed data. With mirrored reads, the primary can mirror a subset of operations that it receives and send them to a subset of electable secondaries. Pre-warming the cache of a secondary can help restore performance more quickly after an election.
Note
The primary’s response to the client is not affected by the mirrored reads. Mirrored reads are “fire-and-forget” operations by the primary; i.e., the primary does not await the response for the mirrored reads.
Mirrored Reads Parameter¶
MongoDB 4.4 adds the following mirrored reads parameter. You can set
the parameter at startup using the setParameter
configuration file setting or the --setParameter
command line option or at runtime with
setParameter
command:
Parameter | Description |
---|---|
mirrorReads | Specifies the sampling rate for mirrored reads as a document of the form { samplingRate: <double> }, where <double> is a value between 0.0 (no mirroring) and 1.0 (mirror all supported reads). The default is { samplingRate: 0.01 }. |
Mirrored Reads Metrics¶
The serverStatus command and its corresponding mongo shell method db.serverStatus() return mirroredReads if you explicitly specify the field's inclusion in the operation.
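For example, either form should include the mirroredReads section in the output:
```javascript
db.serverStatus({ mirroredReads: 1 })
// or the equivalent command form
db.runCommand({ serverStatus: 1, mirroredReads: 1 })
```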
Sharded Clusters¶
Refinable Shard Keys¶
Starting in 4.4, MongoDB provides the
refineCollectionShardKey
command. With the new command,
you can refine a collection’s shard key by adding a suffix field or
fields to the existing key. Refining a collection’s shard key allows
for a more fine-grained data distribution and can address situations
where the existing key has led to jumbo (i.e. indivisible)
chunks due to insufficient cardinality.
For example, you may have an existing orders
collection with the
shard key { customer_id: 1 }
. You can change the shard key by
adding a suffix order_id
field to the shard key so that {
customer_id: 1, order_id: 1 }
becomes the new shard key, allowing
data distribution by both customer_id
and order_id
fields.
To use the refineCollectionShardKey
command, the sharded
cluster must have feature compatibility version (fcv)
of 4.4
. For more information, see the
refineCollectionShardKey
command.
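A hedged sketch using the example above (namespace is illustrative; the refined key must be supported by an existing index):
```javascript
// Create an index that supports the refined key, then refine the shard key.
db.getSiblingDB("test").orders.createIndex({ customer_id: 1, order_id: 1 })
db.adminCommand({
  refineCollectionShardKey: "test.orders",
  key: { customer_id: 1, order_id: 1 }
})
```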
Note
After you refine the shard key, it may be that not all documents in the collection have the suffix field(s). To populate the missing shard key field(s), see Missing Shard Key.
Before refining the shard key, ensure that all or most documents in the collection have the suffix fields, if possible, to avoid having to populate the field afterwards.
In earlier versions, once you select a shard key, you cannot modify the shard key.
Missing Shard Keys
Hedged Reads¶
To minimize latencies, mongos instances can, by default, use hedged reads. With hedged reads, the mongos instances can route read operations to multiple members per each queried shard and return results from the first respondent per shard. By default, mongos instances support using hedged reads. To turn off a mongos instance's support for hedged reads, set the readHedgingMode parameter for the mongos.
Hedged reads are specified per operation as part of the read
preference. Non-primary
read preferences support hedged reads. Read preference
nearest
specifies hedged read by default.
For more information, see:
Hedged Read Parameters¶
Parameter | Description |
---|---|
readHedgingMode | Enables or disables a mongos instance's support for hedged reads. |
maxTimeMSForHedgedReads | Specifies the maximum time limit (in milliseconds) for the additional read sent to hedge a read operation. |
Hedged Read Option for Read Preference¶
To specify hedged read for a read preference, MongoDB 4.4 introduces the Hedged Read Option. To set using a MongoDB driver, refer to the driver read preference API documentation.
A number of mongo shell methods can accept hedge options to enable hedged reads for the specified read preference.
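A hedged sketch, assuming the cursor.readPref() helper accepts a hedge options document as its third argument (collection and field names are illustrative):
```javascript
// Explicitly enable hedging for a secondaryPreferred read.
db.orders.find({ status: "A" }).readPref("secondaryPreferred", null, { enabled: true })
```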
Hedged Read Metrics¶
serverStatus and its corresponding mongo shell method db.serverStatus() return hedgingMetrics.
balancerCollectionStatus Command¶
MongoDB 4.4 adds the new command balancerCollectionStatus and the mongo shell helper method sh.balancerCollectionStatus(), which return information about whether the chunks of a sharded collection are balanced (i.e. do not need to be moved) as of the time the command is run, or need to be moved. With the command, users can verify that initial chunk creation and migration has finished.
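A brief sketch, run from a mongos (namespace is illustrative):
```javascript
sh.balancerCollectionStatus("test.orders")
// equivalent command form
db.adminCommand({ balancerCollectionStatus: "test.orders" })
```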
Improved mongos Startup Procedure¶
Starting with MongoDB 4.4, mongos
adds the following new
default startup behavior:
- mongos will now preload a sharded cluster's routing table on startup, rather than doing so on-demand for the first incoming client connection.
- mongos will now prewarm its connection pool to shard hosts on startup, rather than doing so on-demand for incoming client connections.
This behavior results in faster servicing of initial client
connections after a mongos
instance is started or
restarted. In particular, this allows sites that employ multiple
mongos
instances to restart them as necessary, or add new
ones, without initial client requests to those instances needing to wait
on connection establishment.
Both routing table preloading and connection pool prewarming are enabled by default.
MongoDB 4.4 adds the following parameters for controlling this behavior:
- loadRoutingTableOnStartup (Default: true, enabled): Enables or disables support for preloading the routing table on startup for the mongos.
- warmMinConnectionsInShardingTaskExecutorPoolOnStartup (Default: true, enabled): Enables or disables support for prewarming the connection pool on startup for the mongos.
- warmMinConnectionsInShardingTaskExecutorPoolOnStartupWaitMS (Default: 2000, i.e. 2 seconds): Sets the timeout in milliseconds before client connections to the mongos are allowed regardless of established connection pool size.
Improved Routing Table Updates¶
Running flushRouterConfig
is no longer required after
executing the movePrimary
or dropDatabase
commands. These two commands now automatically refresh a sharded
cluster’s routing table as needed when run. Manually issuing the
flushRouterConfig
command is still recommended in the cases
described under
flushRouterConfig Considerations.
Compound Hashed Shard Keys¶
Starting in MongoDB 4.4, you can shard a collection using a compound shard key with a single hashed field. Prior to 4.4, MongoDB did not support compound shard keys with a hashed field.
Compound hashed sharding supports features like zone sharding, where the prefix (i.e. first) non-hashed field or fields support zone ranges while the hashed field supports more even distribution of the sharded data. For example, the following operation shards a collection on a compound hashed shard key that supports zoned sharding:
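A hedged sketch (database, collection, and field names are illustrative; sharding is assumed to already be enabled on the database):
```javascript
// The non-hashed prefix field supports zone ranges; the hashed suffix
// spreads documents more evenly across chunks.
sh.shardCollection("examples.deliveries", { region: 1, order_id: "hashed" })
```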
Compound hashed sharding also supports shard keys with a hashed prefix for resolving data distribution issues related to monotonically increasing fields. For example, the following operation shards a collection on a compound hashed shard key where the hashed field is the shard key prefix:
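A hedged sketch (database, collection, and field names are illustrative):
```javascript
// Hashing the monotonically increasing prefix field avoids hot-spotting
// inserts on a single shard.
sh.shardCollection("examples.metrics", { order_date: "hashed", customer_id: 1 })
```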
Chunk Migration Failover Resiliency Improvements¶
Starting in MongoDB 4.4, the following changes improve chunk migrations and orphaned document cleanup resiliency during failover:
- Chunk ranges awaiting cleanup after a chunk migration are now persisted in the config.rangeDeletions collection and replicated throughout the shard. In the event of a failover, the shard's new primary reads the documents in the config.rangeDeletions collection and resumes deleting the corresponding ranges. The document that describes a range awaiting cleanup is deleted from the config.rangeDeletions collection after the range is deleted.
- The cleanupOrphaned command no longer deletes orphaned documents from a shard. Instead, cleanupOrphaned waits for orphaned documents that are scheduled for deletion from a shard to be deleted.
Set the disableResumableRangeDeleter
parameter to true
on a shard’s primary to pause range deletion on the shard.
General Sharded Clusters Improvements¶
Index Consistency Checks¶
Starting in MongoDB 4.4, the config server primary, by default, checks for
index inconsistencies across the shards for sharded collections. The
command serverStatus
returns the field
shardedIndexConsistency
to report on index
inconsistencies when run on the config server primary.
To configure the index consistency checks, MongoDB provides the following parameters:
Parameter | Description |
---|---|
enableShardedIndexConsistencyCheck | Enable or disable the index consistency checks. |
shardedIndexConsistencyCheckIntervalMS | The interval at which the config server's primary checks the index consistency of sharded collections. |
Concurrent removeShard Operations¶
Starting in MongoDB 4.4, you can have more than one
removeShard
operation in progress.
In earlier versions, removeShard
returns an error if
another removeShard
operation is in progress.
Shard Key Limit¶
Partial Results¶
Jumbo Chunk Migration¶
For chunks that are too large to migrate, starting in MongoDB 4.4:
- A new balancer setting
attemptToBalanceJumboChunks
allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details. - The
moveChunk
command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.
Improved Catalog Cache Refresh¶
Starting in 4.4, if there is a stale chunk, the catalog cache is only refreshed when routers access a shard that previously had or currently has that chunk.
Prior to MongoDB 4.4, any stale chunk caused the entire chunk distribution
for a collection to be marked as stale and forced all
routers who contact the shard
to refresh their shard catalog cache. MongoDB 4.4 adds the
enableFinerGrainedCatalogCacheRefresh
startup parameter
for disabling catalog cache refresh for only a targeted shard and using
the older catalog cache refresh behavior. The
enableFinerGrainedCatalogCacheRefresh
parameter defaults to
true
.
Automatically Split system.sessions Collection¶
Starting in version 4.4 (and 4.2.7), MongoDB automatically splits the
system.sessions
collection into at least 1024 chunks and
distributes the chunks uniformly across shards in the cluster.
Projection¶
Starting in MongoDB 4.4, as part of making find and findAndModify projection consistent with aggregation's $project stage:
- The find and findAndModify projection can accept aggregation expressions and aggregation syntax, including the use of literals and aggregation variables. With the use of aggregation expressions and syntax, you can project new fields or project existing fields with new values.
- The find and findAndModify projection can specify embedded fields using the nested form, e.g. { field: { nestedfield: 1 } }, as well as dot notation. In earlier versions, you can only use the dot notation.
For more information, see:
db.collection.find()
db.collection.findOneAndDelete()
db.collection.findOneAndReplace()
db.collection.findOneAndUpdate()
db.collection.findAndModify()
$meta Operator¶
$meta Keyword Support
Starting in MongoDB 4.4, the $meta operator adds support for retrieving the indexKey metadata. The indexKey metadata is for debugging purposes only and not for application logic. See $meta for more information.
{ $meta: "textScore" } Usage with find()
Starting in version 4.4, MongoDB makes the following { $meta: "textScore" } changes when used with db.collection.find():
- You must specify the $text operator in the query predicate to use { $meta: "textScore" }.
- You can sort the resulting documents by their search relevance, i.e. { $meta: "textScore" }, without also having to project the textScore. In earlier versions, to include the { $meta: "textScore" } expression in the sort(), you must also include the same expression in the projection.
- If you include the { $meta: "textScore" } expression in both the projection and sort, i.e. db.collection.find(<$text search>, <projection>).sort(<sort>), the projection and sort documents can have different field names for the expression. In previous versions of MongoDB, if you include the { $meta: "textScore" } expression in both the projection and sort, you must specify the same field name in both places.
For more information, see Text Score Metadata $meta: "textScore". For examples of
"textScore"
projections and sorts, see Text Search Score Examples.
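A minimal sketch of the new sort-without-projection behavior (collection and field names are illustrative; a text index is assumed):
```javascript
// Assumes: db.articles.createIndex({ subject: "text" })
// Starting in 4.4, sorting by text score no longer requires projecting it.
db.articles.find({ $text: { $search: "coffee" } })
  .sort({ score: { $meta: "textScore" } })
```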
Transactions¶
Starting in MongoDB 4.4 with feature compatibility version
(fcv) "4.4"
, you can create collections and indexes
inside a multi-document transaction
unless the transaction is a cross-shard write transaction.
When creating a collection inside a transaction:
- You can implicitly create a collection, such as with:
- an insert operation against a non-existing collection;
- an update/findAndModify operation with
upsert: true
against a non-existing collection.
- You can explicitly create a collection using the
create
command or its helperdb.createCollection()
.
When creating an index inside a transaction:
- You can create an index on a non-existing collection. The collection is created as part of the operation.
- You can create an index on a new empty collection created earlier in the same transaction.
For more details, see Create Collections and Indexes In a Transaction.
MongoDB 4.4 adds a new parameter
shouldMultiDocTxnCreateCollectionAndIndexes
which can
enable (default) or disable collection and index creation inside a
transaction. When setting the parameter for a sharded cluster, set
the parameter on all shards.
For explicit creation of a collection or an index inside a
transaction, the transaction read concern level must be
"local"
. Explicit creation is through:
Command | Method |
---|---|
create | db.createCollection() |
createIndexes | db.collection.createIndex(), db.collection.createIndexes() |
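A hedged sketch of explicit creation inside a transaction (database, collection, and field names are illustrative):
```javascript
// Requires fCV "4.4" and a "local" transaction read concern for explicit creation.
const session = db.getMongo().startSession();
session.startTransaction({ readConcern: { level: "local" }, writeConcern: { w: "majority" } });
const sdb = session.getDatabase("mydb");
sdb.createCollection("events");                            // explicit creation in the transaction
sdb.events.insertOne({ type: "signup", ts: new Date() });  // write in the same transaction
session.commitTransaction();
session.endSession();
```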
Security Improvements¶
New Kerberos Validation Tool mongokerberos
¶
MongoDB Enterprise 4.4 provides a new mongokerberos
tool for validating your platform’s Kerberos configuration for use with
MongoDB, and for testing end-to-end client authentication through
Kerberos. When run, mongokerberos
returns a report
indicating any issues encountered, and provides potential advice for
resolving them. mongokerberos
is available in MongoDB
Enterprise only.
OCSP¶
Starting in version 4.4, MongoDB enables, by default, the use of OCSP
(Online Certificate Status Protocol) to check for certificate
revocation. The use of OCSP eliminates the need to periodically
download a Certificate Revocation List (CRL)
and restart the
mongod
/mongos
with the updated CRL.
In versions 4.0 and 4.2, the use of OCSP is available only through the
use of system certificate store
on Windows or macOS.
OCSP Stapling/Must Staple¶
As part of its OCSP support, MongoDB 4.4 supports the following on Linux:
- OCSP stapling. With OCSP stapling, mongod and mongos instances attach or “staple” the OCSP status response to their certificates when providing these certificates to clients during the TLS/SSL handshake. By including the OCSP status response with the certificates, OCSP stapling obviates the need for clients to make a separate request to retrieve the OCSP status of the provided certificates.
- OCSP must-staple extension. OCSP must-staple is an extension that can be added to the server certificate that tells the client to expect an OCSP staple when it receives a certificate during the TLS/SSL handshake.
OCSP Parameters¶
MongoDB 4.4 adds the following OCSP parameters. You can set these
parameters at startup using the setParameter
configuration
file setting or the --setParameter
command line option:
Parameter | Description |
---|---|
ocspEnabled | Enables or disables the OCSP support. |
ocspValidationRefreshPeriodSecs | Specifies the number of seconds to wait before refreshing the stapled OCSP status response. |
tlsOCSPStaplingTimeoutSecs | Specifies the maximum number of seconds the mongod/mongos instance should wait to receive the OCSP status response for its certificates. |
tlsOCSPVerifyTimeoutSecs | Specifies the maximum number of seconds that the mongod/mongos should wait for the OCSP response when verifying client certificates. |
x.509 Certificates Nearing Expiry Trigger Warnings¶
Starting in MongoDB 4.4, mongod
/mongos
logs a warning on connection if the presented x.509 certificate
expires within 30
days of the mongod/mongos
system clock.
Specifically, the following connections to a mongod
or mongos
can trigger x.509 certificate expiry warnings:
- A mongo shell or an application using a MongoDB driver establishing a TLS connection or performing x.509 client authentication with a certificate expiring in less than 30 days (i.e. the certificate specified to mongo --tlsCertificateKeyFile or tlsCertificateKeyFile).
- A mongod cluster member performing x.509 membership authentication with a certificate expiring in less than 30 days (i.e. the certificate specified to net.tls.clusterFile, net.tls.clusterCertificateSelector, mongod --tlsClusterFile, or mongod --tlsClusterCertificateSelector).
- A mongos cluster member performing x.509 membership authentication with a certificate expiring within 30 days (i.e. the certificate specified to net.tls.clusterFile, net.tls.clusterCertificateSelector, mongos --tlsClusterFile, or mongos --tlsClusterCertificateSelector).
The warning log message resembles the following:
Consider proactively renewing client x.509 certificates nearing expiration to ensure continued connectivity to the cluster.
MongoDB 4.4 adds the tlsX509ExpirationWarningThresholdDays
parameter for controlling certificate expiration warning threshold. Set
the parameter to 0
to disable the warning. For complete
documentation, see tlsX509ExpirationWarningThresholdDays
.
TLS 1.3 Support¶
On CentOS 8 and RHEL 8, MongoDB 4.4 (as well as versions 4.2, 4.0, and 3.6) supports TLS 1.3.
User to DN Mapping Exits on Network or Authentication Failures¶
A mongod
, mongos
, or
mongoldap
returns an error if one of the user to
Distinguished Name (DN) mappings cannot be evaluated due
to networking or authentication failures to the LDAP server.
The mongod
, mongos
, or
mongoldap
rejects the connection request and does not
check the remaining mappings, if any.
To specify the user to DN mapping, see:
Structured Logging¶
Starting in MongoDB 4.4, mongod
/ mongos
instances now output all log messages in structured JSON format. Log entries are written as a series
of key-value pairs, where each key indicates a log message field type,
such as “severity”, and each corresponding value records the associated
logging information for that field type, such as “informational”.
This includes log output sent to the file, syslog, and stdout
(standard out) log destinations, as
well as the output of the getLog
command.
Previously, log entries were output as plaintext.
The following log messages in JSON format indicate that a
mongod
is listening and ready for connections:
Structured logging with key-value pairs allows for efficient log analysis by automated tools or log ingestion services, and makes programmatic log parsing easier and more powerful.
When working with MongoDB structured logging, the jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq
is an open-source JSON parser, and is available for
Linux, Windows, and macOS.
For more information on structured logging, including a detailed examination of log entry components as well as command-line parsing examples, see Log Messages.
Multiple LDAP Password Support¶
Starting in MongoDB 4.4, the ldapQueryPassword
setParameter
command accepts either a string or
an array of strings. If set to an array, each password is tried
until one succeeds. This can be used to perform a rollover of the
LDAP account password without downtime for MongoDB.
Platform Support¶
Added Platforms¶
MongoDB 4.4 adds support for the following platforms:
Removed Platforms¶
MongoDB 4.4 removes support for the following platforms:
- Amazon Linux 2013.03
- RHEL / CentOS / Oracle 6 on the s390x architecture
- Windows 7 / Server 2008 R2
- Windows 8 / Server 2012
- Windows 8.1 / Server 2012 R2
- macOS 10.12
See Supported Platforms for the full list of platforms and architectures supported in MongoDB 4.4.
Mongo Shell¶
Mongo Shell Supports AWS IAM Credentials for Atlas Clusters¶
Starting in MongoDB 4.4, the mongo
shell supports
using AWS IAM credentials
to authenticate to a MongoDB Atlas cluster that
has been configured for AWS IAM authentication.
Authenticating in this manner uses the new MONGODB-AWS
authentication mechanism
, and requires that
you provide an AWS access key ID and a secret
access key, which may be specified in the connection string or on the command-line via the
--username
and --password
options.
Additionally, if you are using an AWS session token
for authentication with temporary credentials when using an AssumeRole
request, or when working with AWS resources that specify this value such
as Lambda, you may provide that session token in the connection string
using the AWS_SESSION_TOKEN
authMechanismProperties
value, or on the command-line via the --awsIamSessionToken
option.
Alternatively, if the AWS access key ID, secret access key, or
session token are defined on your platform using their respective
AWS IAM environment variables
the mongo
shell will use these environment
variable values to authenticate; you do not need to specify them in the
connection string.
See Connection String Authentication Options for usage, and Connecting to an Atlas Cluster using MONGODB-AWS for examples.
Tools¶
Migration to MongoDB Database Tools Project¶
Starting in MongoDB 4.4, the documentation for the following tools has been migrated to the MongoDB Database Tools project:
The MongoDB Database Tools use the Apache License, Version 2.0. See mongodb/mongo-tools for the source code.
Note
For documentation on previous versions of the listed tools, reference that version of the MongoDB server manual.
Quick links to older documentation:
New mongokerberos Kerberos Validation Tool¶
MongoDB Enterprise 4.4 provides a new mongokerberos
tool for validating your platform’s Kerberos configuration for use with
MongoDB, and for testing end-to-end client authentication through
Kerberos. When run, mongokerberos
returns a report
indicating any issues encountered, and provides potential advice for
resolving them. mongokerberos
is available in MongoDB
Enterprise only.
See the mongokerberos
reference page for more
information.
mongoreplay Removed from MongoDB Packaging¶
Starting in MongoDB 4.4, mongoreplay
is removed from MongoDB
packaging. mongoreplay
and its related documentation are migrated to
the mongodb-labs
github project. Projects in mongodb-labs
are experimental and not
officially supported by MongoDB.
Quick links to older documentation
MongoDB Database Tools Not Packaged with Windows MSI¶
Starting in version 4.4, the
Windows MSI installer for both
Community and Enterprise editions does not
include the MongoDB Database Tools (mongoimport
,
mongoexport
, etc). To download and install
the MongoDB Database Tools on Windows, see
Installing the MongoDB Database Tools.
If you were relying on the MongoDB 4.2 or previous MSI installer to install the Database Tools along with the MongoDB Server, you must now download the Database Tools separately.
Drivers¶
New Drivers¶
- The official MongoDB Rust driver is now available.
- The official MongoDB Swift driver is now available.
Indexes¶
Compound Hashed Indexes¶
MongoDB 4.4 adds support for creating compound indexes with a single hashed field. MongoDB 4.2 and earlier only supported single field hashed indexes.
The following operation creates a compound hashed index on country and _id:
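A hedged sketch (the collection name, and which of the two fields is hashed, are assumptions for illustration):
```javascript
db.places.createIndex({ country: 1, _id: "hashed" })
```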
Compound hashed indexes require featureCompatibilityVersion set to 4.4
.
dropIndexes Can Abort In-Progress Index Builds¶
If an index specified to dropIndexes
is still building,
dropIndexes
attempts to abort the in-progress build.
Aborting an index build has the same effect as dropping the built index.
Prior to MongoDB 4.4, dropIndexes
would return an error if
the collection had any in-progress index builds. This behavior also
applies to the shell helpers db.collection.dropIndex()
and
db.collection.dropIndexes()
.
- The indexes specified to
dropIndexes
/dropIndexes()
must be the entire set of in-progress builds associated to a given index builder, i.e. the indexes built by a singlecreateIndexes
ordb.collection.createIndexes()
operation. - The index specified to
dropIndex()
must be the only index associated to the index builder, i.e. the indexes built by a singlecreateIndexes
ordb.collection.createIndexes()
operation.
To drop a specific index out of a set of related in-progress builds,
wait until the index builds complete and specify that index to
dropIndexes
or its shell helpers.
For more complete documentation, see:
- Abort In-Progress Index Builds for the
dropIndexes
command. - Aborts In-Progress Index Builds for the
dropIndexes()
method. - Aborts In-Progress Index Builds for the
dropIndex()
method.
drop() Can Abort In-Progress Index Builds¶
Starting in MongoDB 4.4, the db.collection.drop()
method and
drop
command abort any in-progress index builds on the
target collection before dropping the collection. Prior to MongoDB
4.4, attempting to drop a collection with in-progress index builds
results in an error, and the collection is not dropped.
For replica sets or shard replica sets, aborting an index on the primary
does not simultaneously abort secondary index builds. MongoDB attempts
to abort the in-progress builds for the specified indexes on the
primary and if successful creates an associated abort
oplog entry. Secondary members with
replicated in-progress builds wait for a commit or abort oplog entry
from the primary before either committing or aborting the index build.
dropDatabase Can Abort In-Progress Index Builds¶
Starting in MongoDB 4.4, the db.dropDatabase()
method and
dropDatabase
command abort any in-progress index builds
on collections in the target database before dropping the database.
Aborting an index build has the same effect as dropping the built
index. Prior to MongoDB 4.4, attempting to drop a database that
contains a collection with an in-progress index build results in an
error, and the database is not dropped.
Deprecation of geoHaystack Index and geoSearch Command¶
MongoDB 4.4 deprecates the geoHaystack index and the
geoSearch
command. Use a 2d index
with $geoNear
or $geoWithin
instead.
Removed Commands¶
MongoDB removes the following command(s) and mongo
shell
helper(s):
Removed Command | Removed Helper | Alternatives |
---|---|---|
cloneCollection | db.cloneCollection() | |
planCacheListPlans | PlanCache.getPlansByQuery() | See also $planCacheStats Changes. |
planCacheListQueryShapes | PlanCache.listQueryShapes() | See also $planCacheStats Changes. |
Networking¶
Support for TCP Fast Open¶
Starting with MongoDB 4.4, mongod and mongos support TCP Fast Open (TFO) connections by default. TFO requires that both the client and the mongod/mongos host machines support and enable TFO:
- Windows: The following Windows operating systems support TFO:
  - Microsoft Windows Server 2016 or later.
  - Microsoft Windows 10 Update 1607 or later.
- macOS: macOS 10.11 (El Capitan) and later support TFO.
- Linux: Linux operating systems running Linux Kernel 3.7 or later can support inbound TFO connections. Linux operating systems running Linux Kernel 4.11 or later can support both inbound and outbound TFO connections. Set the value of /proc/sys/net/ipv4/tcp_fastopen to enable support for inbound and/or outbound TFO connections:
  - Set to 1 to enable only outbound TFO connections.
  - Set to 2 to enable only inbound TFO connections.
  - Set to 3 to enable inbound and outbound TFO connections.
MongoDB 4.4 adds the following parameters for controlling TFO:
Parameter | Description |
---|---|
tcpFastOpenServer | Default: true. Enables or disables support for inbound TFO connections to the mongod/mongos. |
tcpFastOpenClient | Default: true. Linux operating systems only. Enables or disables support for outbound TFO connections from the mongod/mongos. |
tcpFastOpenQueueSize | Controls the size of the queue of pending TFO connections. |
MongoDB 4.4 adds the following counters to the output of
serverStatus
and db.serverStatus()
:
Counter | Description |
---|---|
network.tcpFastOpen.kernelSetting | Linux only. Indicates kernel support for TFO. |
network.tcpFastOpen.serverSupported | Indicates operating system support for incoming TFO connections. |
network.tcpFastOpen.clientSupported | Indicates operating system support for outgoing TFO connections. |
network.tcpFastOpen.accepted | Indicates the total number of accepted incoming TFO connections to the mongod/mongos since the mongod/mongos last started. |
A complete discussion of TFO is outside the scope of this documentation. For more information on TFO, start with the following external resources:
General Improvements¶
Blocking Sort Limit Increased¶
If MongoDB cannot use an index or indexes to obtain the sort order for a
given cursor.sort()
operation, MongoDB must perform a blocking
sort on the data. A blocking sort indicates that MongoDB must consume
and process all input documents to the sort before returning results.
Blocking sorts do not block concurrent operations on the
collection or database.
Prior to MongoDB 4.4, MongoDB returned an error if a blocking sort operation required more than 32 megabytes of system memory. Starting in MongoDB 4.4, the memory limit for blocking sort operations is raised to 100 megabytes. For blocking sort operations which require more than 100 megabytes of system memory, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (new in MongoDB 4.4).
For more information on sorting and index use, see Sort and Index Use.
find Can Use Temporary Files To Support Large Non-Indexed Sorts¶
MongoDB 4.4 adds a new option allowDiskUse to the find
command. With
allowDiskUse: true, the operation can use
temporary files on disk when processing a non-indexed (“blocking”)
sort operation that exceeds the
100 megabyte memory limit. Prior to MongoDB 4.4, a find
operation with a blocking sort failed if it exceeded the memory limit
while processing the sort.
For the db.collection.find()
shell method with
cursor.sort()
, MongoDB 4.4 adds the
cursor.allowDiskUse()
cursor modifier.
allowDiskUse and
cursor.allowDiskUse()
have no effect if MongoDB can satisfy
the sort using an index, or if the
blocking sort requires less than 100 megabytes of memory.
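A minimal sketch of the new cursor modifier (collection and field names are illustrative):
```javascript
// Permit the blocking sort to spill to temporary files on disk if it
// exceeds the 100 megabyte memory limit.
db.events.find({ status: "pending" }).sort({ createdAt: 1 }).allowDiskUse()
```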
For instructions on enabling allowDiskUse for queries issued through a MongoDB driver, defer to the documentation for your preferred MongoDB 4.4-compatible driver.
Collection Namespace Limit¶
Starting in MongoDB 4.4,
- For featureCompatibilityVersion set to
"4.4"
or greater, MongoDB raises the limit on collection/view namespace to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.
) separator, and the collection/view name (e.g.<database>.<collection>
), - For featureCompatibilityVersion set to
"4.2"
or earlier, the maximum length of the collection/view namespace remains 120 bytes.
Validation Data Throughput Information¶
Starting in MongoDB 4.4:
- The
$currentOp
and thecurrentOp
command includedataThroughputAverage
anddataThroughputLastSecond
information for validate operations in progress. - The log messages for validate operations include
dataThroughputAverage
anddataThroughputLastSecond
information.
compact Behavior Change¶
Blocking¶
Starting in MongoDB 4.4, compact
only blocks the following
metadata operations:
- db.collection.drop
- db.collection.createIndex and db.collection.createIndexes
- db.collection.dropIndex and db.collection.dropIndexes
compact
does not block MongoDB CRUD Operations for the database it is
currently operating on.
Previously, compact
blocked all operations for
the database it was operating on, including MongoDB CRUD Operations, and was
therefore only appropriate for use during scheduled maintenance
periods.
force Option¶
Starting in MongoDB 4.4, the force flag forces compact to run on the primary in a replica set. Previously, the force option, when set to true, enabled compact to run on the primary in a replica set; if set to false, compact returned an error when run on a primary.
See also
mongod --repair Behavior Change¶
Starting in MongoDB 4.4, the mongod --repair
rebuilds all
indexes for the following:
- Collections with inconsistencies between the collection data and one or more indexes.
- Salvaged and modified collections.
In earlier versions of MongoDB, the mongod --repair
option
rebuilt all indexes for all collections.
serverStatus Output Change¶
Field Name Change¶
serverStatus returns flowControl.locksPerKiloOp instead of flowControl.locksPerOp.
New Fields¶
serverStatus
includes the following new fields in its
output:
- Aggregation Metrics: metrics.aggStageCounters (also available in 4.2.6+ and 4.0.19+)
- Connections Metrics
- Default Read Concern Write Concern Metrics
- Latch Metrics
- Mirrored Reads Metrics
- Query Execution Metrics
- Replication Metrics
- Network Metrics
- Security Metrics
- Sharding Metrics
replSetGetStatus Output Change¶
replSetGetStatus
returns the following new fields:
db.auth() Can Prompt for Password¶
Starting in MongoDB 4.4, the mongo
shell method
db.auth(<username>, <password>) prompts for the password if
you do not pass in the password or the passwordPrompt()
method for the <password>
.
Support for $natural Sort on Views¶
Starting in MongoDB 4.4, you can specify a $natural
sort when running a find
operation against a
view.
Support for Diagnostic Backtrace Generation¶
Starting in MongoDB 4.4, mongod
and mongos
processes running on Linux will now log a backtrace for each of their
running threads upon receipt of a SIGUSR2
signal. This backtrace can
be analyzed for diagnostic information or provided to MongoDB support as
needed. This functionality is currently available only on the x86_64
architecture. For more information on using this feature, see
Generate a Backtrace.
Container-aware FTDC Reporting¶
Starting in MongoDB 4.4, FTDC now reports utilization
data for a mongod
running in a container from the
perspective of the container, as opposed to the host operating system.
See Full Time Diagnostic Data Capture for more information.
Updated ulimit Startup Warning¶
Starting in MongoDB 4.4, mongod
will log a startup
warning if a platform’s configured ulimit
value for number of open
files is under 64000
. Previously, a warning would only be
logged if this value was under 1000
. See
Recommended ulimit Settings for more information.
New replanReason Database Profiler Output Field¶
MongoDB 4.4 adds the replanReason
field to
database profiler output
and diagnostic log messages. The
replanReason
field contains the reason the query system evicted a
cached plan.
dbStats and collStats Output¶
The dbStats command and its mongo shell helper db.stats() return:
- totalSize, which is the sum of storageSize and indexSize.
The collStats command, its mongo shell helper db.collection.stats(), and the $collStats aggregation stage return:
- totalSize, which is the sum of storageSize and totalIndexSize.
- freeStorageSize, which is the amount of storage available for reuse.
Hint Available for Additional Database Commands¶
Starting in MongoDB 4.4, the following database commands can accept
a hint
argument to specify the index to use:
- The delete command and the associated mongo shell methods db.collection.deleteOne() and db.collection.deleteMany().
- The findAndModify command and the associated mongo shell methods db.collection.findAndModify(), db.collection.findOneAndDelete(), db.collection.findOneAndReplace(), and db.collection.findOneAndUpdate().
See:
JavaScript Execution on mongos¶
Starting in MongoDB 4.4, MongoDB allows JavaScript execution on
mongos
instances. To disable JavaScript execution on a
mongos
instance:
- Set
security.javascriptEnabled
configuration option to false, or - Specify the
--noscripting
command-line option.
Earlier versions of MongoDB do not allow JavaScript execution on
mongos
instances.
Global Default Read and Write Concern¶
Requires featureCompatibilityVersion 4.4+
Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to configure global default read and write concern.
Starting in MongoDB 4.4, replica sets and sharded clusters support configuring global default read and write concern settings. Clients which do not explicitly specify a given read or write concern setting inherit the corresponding global default setting.
To configure default global default read or write concern, MongoDB adds
the setDefaultRWConcern
administrative command. For replica
sets, issue the command against the primary member. For sharded
clusters, issue the command from a mongos
.
To retrieve the global default read or write concern settings, MongoDB
adds the getDefaultRWConcern
administrative command.
Read Concern Provenance¶
Starting in MongoDB 4.4, read concern objects may include a
provenance
field, indicating where the read concern originated.
The following table shows the possible read concern provenance
values and their significance:
Provenance | Description |
---|---|
clientSupplied | The read concern was specified in the application. |
customDefault | The read concern originated from a custom defined default value. See setDefaultRWConcern. |
implicitDefault | The read concern originated from the server in absence of all other read concern specifications. |
If a read operation is logged or profiled, the operation entry contains
the read concern object, including the provenance
field.
MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.
Write Concern Provenance¶
Starting in MongoDB 4.4, write concern objects may include a
provenance
field, indicating where the write concern originated.
The following table shows the possible write concern provenance
values and their significance:
Provenance | Description |
---|---|
clientSupplied | The write concern was specified in the application. |
customDefault | The write concern originated from a custom defined default value. See setDefaultRWConcern. |
getLastErrorDefaults | The write concern originated from the replica set's settings.getLastErrorDefaults field. |
implicitDefault | The write concern originated from the server in absence of all other write concern specifications. |
If a write operation is logged or profiled, the operation entry contains
the write concern object, including the provenance
field.
MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.
currentOp Output¶
- The $currentOp aggregation stage includes dataThroughputAverage and dataThroughputLastSecond information when reporting on validate operations in progress.
- The currentOp command includes dataThroughputAverage and dataThroughputLastSecond information when reporting on validate operations in progress.
New KMIP Connection Parameters for mongod¶
MongoDB 4.4 Enterprise introduces two new configuration settings to enhance the initial connection to a KMIP server, used as part of encryption key management:
Connection Retries¶
To control the number of times the mongod
retries a
failed initial connection to the KMIP server:
- Set the
security.kmip.connectRetries
configuration option, or - Specify the
mongod --kmipConnectRetries
command-line option.
Connection Timeout¶
To control the timeout, in milliseconds, to wait for the initial response from the KMIP server before giving up, or retrying:
- Set the
security.kmip.connectTimeoutMS
configuration option, or - Specify the
mongod --kmipConnectTimeoutMS
command-line option.
These settings are available in MongoDB Enterprise only.
New Startup Option for mongod¶
The new processUmask
startup option for mongod
allows you to set permissions through umask for groups and other users when
honorSystemUmask
is set to false
.
mapReduce Ignores verbose Option¶
Starting with MongoDB 4.4, the mapReduce
command and the
db.collection.mapReduce()
shell method ignore the
verbose option.
explain Support for mapReduce¶
Starting with MongoDB 4.4, you can use the explain
command
or the db.collection.explain()
shell method to preview the
results of mapReduce
or
db.collection.mapReduce()
.
Improvements to explain Results¶
Starting in version 4.4:
- Explain results for commands run
on sharded clusters include a top-level serverInfo
object for the
mongos
in addition to theserverInfo
objects returned for each shard. This is also available in versions 4.2.2, 4.0.14, and 3.6.16. - Explain results include the
serverInfo object when
optimizedPipeline
istrue
. In previous versions of MongoDB,explain
results would occasionally not include theserverInfo
object whenoptimizedPipeline
wastrue
. This is also available in versions 4.2.2, 4.0.14, and 3.6.16. - Explain results for aggregation include
the
nReturned
andexecutionTimeMillisEstimate
fields for each pipeline stage when you rundb.collection.explain().aggregate()
method inexecutionStats
andallPlansExecution
modes.
comment Option Available to all Database Commands¶
Starting in MongoDB 4.4, all database commands support specifying a
comment
field, in the following fashion:
Example
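A minimal sketch (collection name and comment text are illustrative):
```javascript
db.runCommand({
  find: "restaurants",
  filter: { borough: "Queens" },
  comment: "monthly borough audit"
})
```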
Once set, the comment appears alongside records of the command in the following locations:
- mongod log messages, in the attr.command.cursor.comment field.
- Database profiler output, in the command.comment field.
- currentOp output, in the command.comment field.
A comment can be any valid BSON type (string, integer, object, array, etc.).
Improvements to Positional $ Operator¶
Starting in MongoDB 4.4, when using the positional $
operator, you can specify different array fields between the query
document and projection document.
For example, if you insert the following document into a collection:
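A hedged sketch (collection name and values are illustrative):
```javascript
db.test.insertOne({ _id: 1, a: [ 1, 2, 3 ], b: [ "x", "y", "z" ] })
```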
Starting in MongoDB 4.4, you can use the following query to project
only the first element from field b
for a document that matches
the query specified on field a
:
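Continuing the sketch above, query on field a while projecting the matching position from field b:
```javascript
// Matches a: 2 (position 1) and should return something like { _id: 1, b: [ "y" ] }.
db.test.find({ a: 2 }, { "b.$": 1 })
```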
Important
To ensure expected behavior, the arrays used in the query document and the projection document must be the same length. If the arrays are different lengths, the operation may error in certain scenarios.
In previous versions of MongoDB, this operation fails because the array field being limited must appear in the query document.
Changes Affecting Compatibility¶
Some changes can affect compatibility and may require user actions. For a detailed list of compatibility changes, see Compatibility Changes in MongoDB 4.4.
Upgrade Procedures¶
Feature Compatibility Version
To upgrade from a 4.2 deployment, the 4.2 deployment must have featureCompatibilityVersion set to 4.2. To check the version:
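For example, run the following against the admin database:
```javascript
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
```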
For specific details on verifying and setting the
featureCompatibilityVersion
as well as information on other
prerequisites/considerations for upgrades, refer to the individual
upgrade instructions:
If you need guidance on upgrading to 4.4, MongoDB offers major version upgrade services to help ensure a smooth transition without interruption to your MongoDB application.
Known Issues¶
In Version | Issues | Status |
---|---|---|
4.4.0 | SERVER-45042: MongoDB Server Installation MSI for both Community and Enterprise no longer contains binaries for the MongoDB Database Tools. For more information, see Tools Changes. | Unresolved |
4.4.0 | SERVER-49694: On a sharded cluster, nearest reads or hedged reads may not be routed to a near shard replica. | Unresolved |
4.4.0 | WT-6623: Set the connection level file id in recovery file scan | Unresolved |
Report an Issue¶
To report an issue, see https://github.com/mongodb/mongo/wiki/Submit-Bug-Reports for instructions on how to file a JIRA ticket for the MongoDB server or one of the related projects.