
Dianna Hohensee (DiannaHohensee)

diff --git a/src/mongo/db/s/balancer/migration_manager.cpp b/src/mongo/db/s/balancer/migration_manager.cpp
index 4af6282..59c3f76 100644
--- a/src/mongo/db/s/balancer/migration_manager.cpp
+++ b/src/mongo/db/s/balancer/migration_manager.cpp
@@ -296,9 +296,6 @@ void MigrationManager::startRecoveryAndAcquireDistLocks(OperationContext* txn) {
auto distLockManager = Grid::get(txn)->catalogClient(txn)->getDistLockManager();
- // Free any leftover locks from previous instantiations.
- distLockManager->unlockAll(txn, distLockManager->getProcessID());
        data               syspool              xdf0                 xdf1
   kps   tps  serv     kps   tps  serv     kps   tps  serv     kps   tps  serv
  7770   163   294      83     7    31       0     0     0       0     0     0
103019   841  2478       0     0     0       0     0     0       0     0     0
 17625   195   675       0     0     0       0     0     0       0     0     0
   209    17    18       0     0     0       0     0     0       0     0     0
   264    46     3       0     0     0       0     0     0       0     0     0
   590    37    10       0     0     0       0     0     0       0     0     0
   255    17     3       0     0     0       0     0     0       0     0     0
   502    67     1       0     0     0       0     0     0       0     0     0
[js_test:movechunk_interrupt_at_primary_stepdown] 2017-04-25T05:18:42.781+0000 c20512| 2017-04-25T05:18:42.780+0000 I REPL [rsSync] transition to primary complete; database writes are now permitted
[js_test:movechunk_interrupt_at_primary_stepdown] 2017-04-25T05:18:42.849+0000 c20512| 2017-04-25T05:18:42.849+0000 I COMMAND [conn13] Attempting to step down in response to replSetStepDown command
[js_test:movechunk_interrupt_at_primary_stepdown] 2017-04-25T05:18:42.850+0000 c20512| 2017-04-25T05:18:42.849+0000 I REPL [conn13] transition to SECONDARY
[js_test:movechunk_interrupt_at_primary_stepdown] 2017-04-25T05:18:42.850+0000 c20512| 2017-04-25T05:18:42.849+0000 I NETWORK [conn13] legacy transport layer closing all connections
[js_test:movechunk_interrupt_at_primary_stepdown] 2017-04-25T05:18:42.851+0000 c20512| 2017-04-25T05:18:42.849+0000 I NETWORK [conn13] Skip closing connection for connection # 16
[js_test:movechunk_interrupt_at_primary_stepdown] 2017-04-25T05:18:42.851+0000 c20512| 2017-04-25T0
[js_test:copydb_from_mongos] 2017-06-29T22:07:33.645+0000 c20261| 2017-06-29T22:07:33.645+0000 I COMMAND [conn22] Attempting to step down in response to replSetStepDown command
[js_test:copydb_from_mongos] 2017-06-29T22:07:33.647+0000 s20264| 2017-06-29T22:07:33.645+0000 I NETWORK [conn1] Marking host sles12-z-3.maristisv.build.10gen.cc:20261 as failed :: caused by :: InterruptedDueToReplStateChange: operation was interrupted
[js_test:copydb_from_mongos] 2017-06-29T22:07:33.648+0000 s20264| 2017-06-29T22:07:33.646+0000 W NETWORK [conn1] No primary detected for set test-configRS
// election takes 20 seconds
[js_test:copydb_from_mongos] 2017-06-29T22:07:33.645+0000 c20261| 2017-06-29T22:07:33.645+0000 I COMMAND [conn22] Attempting to step down in response to replSetStepDown command
[js_test:copydb_from_mongos] 2017-06-29T22:07:39.199+0000 c20261| 2017-06-29T22:07:39.199+0000 I REPL [ReplicationExecutor] Not starting an election, since we are not electable due to: Not standing for election because I am still waiting for stepdown period to end at 2017-06-29T22:07:43.645+0000 (mask 0x20)
[js_test:copydb_from_mongos] 2017-06-29T22:07:44.288+0000 c20263| 2017-06-29T22:07:44.287+0000 I REPL [ReplicationExecutor] Not starting an election, since we are not electable due to: Not standing for election again; already candidate
[js_test:copydb_from_mongos] 2017-06-29T22:07:44.289+0000 c20262| 2017-06-29T22:07:44.289+0000 I REPL [ReplicationExecutor] Not starting an election, since we are not electable due to: Not standing for election again; already candidate
[js_test:copydb_from
ContainerUniquePtrs A;
ContainerUniquePtrs B;
// Immediately-invoked lambda. Note: this only compares the overlapping
// prefix of the two containers; a size check is needed for true equality.
bool equality = [&A, &B]() {
    for (size_t i = 0; i < A.size() && i < B.size(); ++i) {
        if (A[i] != B[i])
            return false;
    }
    return true;
}();
diff --git a/src/mongo/s/shard_server_test_fixture.cpp b/src/mongo/s/shard_server_test_fixture.cpp
index ec77c5a..63173e0 100644
--- a/src/mongo/s/shard_server_test_fixture.cpp
+++ b/src/mongo/s/shard_server_test_fixture.cpp
@@ -84,6 +84,9 @@ void ShardServerTestFixture::setUp() {
uassertStatusOK(
initializeGlobalShardingStateForMongodForTest(ConnectionString(kConfigHostAndPort)));
+ // Initialize the CatalogCache so that metadata refreshes will work.
+ catalogCache()->initializeReplicaSetRole(true);
Local          Remote
-----          ------
no metadata    no metadata
no metadata    metadata
metadata       metadata
metadata       more metadata
metadata       new epoch metadata
metadata       new epoch, but a mix of old- and new-version chunks
metadata       no metadata