@CaptTofu
Created April 19, 2016 14:36
mongos> db.testData.getShardDistribution()

Shard west1 at west1/ec2-52-53-243-185.us-west-1.compute.amazonaws.com:27017,ec2-54-153-95-198.us-west-1.compute.amazonaws.com:27017,ip-172-31-28-205:27017
 data : 9.42MiB docs : 205846 chunks : 3
 estimated data per chunk : 3.14MiB
 estimated docs per chunk : 68615

Totals
 data : 9.42MiB docs : 205846 chunks : 3
 Shard west1 contains 100% data, 100% docs in cluster, avg obj size on shard : 48B
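(Note, not part of the original transcript: given the shard keys reported by sh.status() below, { "x" : 1 } for testdb.testData and { "_id" : 1 } for testdb.restaurants, the collections would typically have been sharded from mongos with the standard helpers, roughly as follows.)

// hypothetical setup sketch -- these commands do not appear in the captured session
sh.enableSharding("testdb")
sh.shardCollection("testdb.testData", { "x" : 1 })
sh.shardCollection("testdb.restaurants", { "_id" : 1 })

Every chunk still sits on west1, so the balancer should be moving some of them to west2; the migration errors in the sh.status() output below show why that has not happened yet.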
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5713c23c28b7146220118d0a")
  }
  shards:
    { "_id" : "west1", "host" : "west1/ec2-52-53-243-185.us-west-1.compute.amazonaws.com:27017,ec2-54-153-95-198.us-west-1.compute.amazonaws.com:27017,ip-172-31-28-205:27017" }
    { "_id" : "west2", "host" : "west2/ec2-54-186-125-103.us-west-2.compute.amazonaws.com:27017,ec2-54-200-15-235.us-west-2.compute.amazonaws.com:27017,ip-172-31-17-143:27017" }
  balancer:
    Currently enabled: yes
    Currently running: yes
      Balancer lock taken at Tue Apr 19 2016 14:36:00 GMT+0000 (UTC) by ip-172-31-28-205:27018:1461037929:1804289383:Balancer:846930886
    Collections with active migrations:
      testdb.restaurants started at Tue Apr 19 2016 14:36:00 GMT+0000 (UTC)
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      2 : Failed with error 'migration already in progress', from west1 to west2
      77 : Failed with error 'moveChunk could not contact to: shard west2 to start transfer :: caused by :: 10009 ReplicaSetMonitor no master found for set: west2', from west1 to west2
      1 : Failed with error 'could not acquire collection lock for testdb.restaurants to migrate chunk [{ : MinKey },{ : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in testdb.restaurants is taken.', from west1 to west2
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "temp", "partitioned" : true, "primary" : "west1" }
    { "_id" : "test", "partitioned" : false, "primary" : "west1" }
    { "_id" : "testdb", "partitioned" : true, "primary" : "west1" }
      testdb.restaurants
        shard key: { "_id" : 1 }
        chunks:
          west1  6
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("57163efe5895fd10a3e92c8c") } on : west1 Timestamp(1, 1)
        { "_id" : ObjectId("57163efe5895fd10a3e92c8c") } -->> { "_id" : ObjectId("57163efe5895fd10a3e92f72") } on : west1 Timestamp(1, 2)
        { "_id" : ObjectId("57163efe5895fd10a3e92f72") } -->> { "_id" : ObjectId("57163efe5895fd10a3e930e6") } on : west1 Timestamp(1, 3)
        { "_id" : ObjectId("57163efe5895fd10a3e930e6") } -->> { "_id" : ObjectId("57163efe5895fd10a3e9325a") } on : west1 Timestamp(1, 4)
        { "_id" : ObjectId("57163efe5895fd10a3e9325a") } -->> { "_id" : ObjectId("57163efe5895fd10a3e933ce") } on : west1 Timestamp(1, 5)
        { "_id" : ObjectId("57163efe5895fd10a3e933ce") } -->> { "_id" : { "$maxKey" : 1 } } on : west1 Timestamp(1, 6)
      testdb.testData
        shard key: { "x" : 1 }
        chunks:
          west1  3
        { "x" : { "$minKey" : 1 } } -->> { "x" : 2 } on : west1 Timestamp(1, 1)
        { "x" : 2 } -->> { "x" : 22 } on : west1 Timestamp(1, 2)
        { "x" : 22 } -->> { "x" : { "$maxKey" : 1 } } on : west1 Timestamp(1, 3)
mongos>
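The 77 failures with 'ReplicaSetMonitor no master found for set: west2' indicate that the west2 replica set had no reachable primary, so chunk migrations off west1 could never start. A minimal diagnostic sketch (the hostname is taken from the west2 shard entry above; the commands are standard shell helpers and are not part of the captured session):

// connect straight to a west2 member, bypassing mongos:
//   mongo --host ec2-54-186-125-103.us-west-2.compute.amazonaws.com --port 27017
rs.status()      // one member should report "stateStr" : "PRIMARY"
rs.isMaster()    // on the primary, "ismaster" should be true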
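Once west2 elects a primary, the balancer should clear the migration backlog on its own; a quick way to confirm from mongos (again a sketch, not part of the captured session):

sh.getBalancerState()                 // true when the balancer is enabled
sh.isBalancerRunning()                // true while a balancing round is in progress
use testdb
db.testData.getShardDistribution()    // should eventually report chunks on both west1 and west2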