MongoDB: Setting up a replica set
Some commands needed to start up a replica set in MongoDB.
The configuration file for the first node (/data/node1.conf):
storage:
  dbPath: /var/mongodb/db/node1
net:
  bindIp: 192.168.103.100,localhost
  port: 27011
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node1/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-example
Creating the keyfile and setting permissions on it:
sudo mkdir -p /var/mongodb/pki/
sudo chown vagrant:vagrant /var/mongodb/pki/
openssl rand -base64 741 > /var/mongodb/pki/m103-keyfile
chmod 400 /var/mongodb/pki/m103-keyfile
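A quick sanity check, not part of the original steps, to confirm the keyfile's owner and mode:
ls -l /var/mongodb/pki/m103-keyfile
# should show something like: -r-------- 1 vagrant vagrant ... m103-keyfile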
Creating the dbpath for node1:
mkdir -p /var/mongodb/db/node1
Starting a mongod with node1.conf:
mongod -f node1.conf
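Since fork: true is set, mongod returns control to the shell immediately; one optional way to verify the node actually came up, using the log path from node1.conf:
tail /var/mongodb/db/node1/mongod.log
# look for a line like "waiting for connections on port 27011"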
Copying node1.conf to node2.conf and node3.conf:
cp node1.conf node2.conf
cp node2.conf node3.conf
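The dbpath, port, and logpath edits below can also be scripted instead of made by hand. A sketch using sed, assuming both copies are made before either file is edited (this keeps authorization: enabled from node1.conf, which is harmless here because keyFile already implies client authorization):
sed -i 's/node1/node2/g; s/27011/27012/g' node2.conf
sed -i 's/node1/node3/g; s/27011/27013/g' node3.conf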
node2.conf, after changing the dbpath, port, and logpath:
storage:
  dbPath: /var/mongodb/db/node2
net:
  bindIp: 192.168.103.100,localhost
  port: 27012
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node2/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-example
node3.conf, after changing the dbpath, port, and logpath:
storage:
  dbPath: /var/mongodb/db/node3
net:
  bindIp: 192.168.103.100,localhost
  port: 27013
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node3/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-example
Creating the data directories for node2 and node3:
mkdir /var/mongodb/db/{node2,node3}
Starting mongod processes with node2.conf and node3.conf:
mongod -f node2.conf
mongod -f node3.conf
Connecting to node1:
mongo --port 27011
Initiating the replica set:
The rs.initiate() command initiates the replica set. It needs to be run on just one of the nodes; because we run it here on node1, we then add the other two nodes from this same node.
rs.initiate()
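Alternatively, rs.initiate() accepts a full replica set configuration document that declares every member up front. A sketch using this setup's bind address and ports:
rs.initiate({
  _id: "m103-example",
  members: [
    { _id: 0, host: "192.168.103.100:27011" },
    { _id: 1, host: "192.168.103.100:27012" },
    { _id: 2, host: "192.168.103.100:27013" }
  ]
})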
Creating a user:
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",
  roles: [
    {role: "root", db: "admin"}
  ]
})
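Because of the localhost exception, this user can be created before any authentication exists. Instead of exiting, the current session could also authenticate in place, though the gist reconnects to the full replica set below:
db.auth("m103-admin", "m103-pass")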
Exiting out of the Mongo shell and connecting to the entire replica set:
This command created our super user, m103-admin, which has the root role and authenticates against the admin database.
We now exit the mongo shell and log back in as that user.
The command below connects to the replica set: in addition to authenticating with a username and password, we prefix the host with the name of the replica set.
This tells the mongo shell to connect to the replica set as a whole, rather than to just the one node we specified.
exit
mongo --host "m103-example/192.168.103.100:27011" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
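The same connection can also be written as a connection string URI, equivalent to the --host form above (a sketch):
mongo "mongodb://m103-admin:m103-pass@192.168.103.100:27011/admin?replicaSet=m103-example"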
Getting replica set status:
rs.status()
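rs.status() returns a large document; to pull out just each member's name and state, something like this works in the mongo shell (a small convenience, not part of the original gist):
rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })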
Adding other members to replica set:
rs.add("m103.mongodb.university:27012")
rs.add("m103.mongodb.university:27013")
Getting an overview of the replica set topology:
rs.isMaster()
Stepping down the current primary:
rs.stepDown()
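rs.stepDown() optionally takes the number of seconds the stepped-down primary should remain ineligible for re-election (60 by default), for example:
rs.stepDown(120)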
Checking replica set overview after election:
rs.isMaster()
===========================================================================================================================
Some information about replica sets
===========================================================================================================================
After initiating a node and adding the other nodes to the replica set, the oplog.rs collection is created on each member.
By default, the oplog takes 5% of the available disk.
This value can also be set explicitly with the oplogSizeMB option under the replication section of the configuration file.
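In the configuration files above, that would look like this (a sketch; 1024 is an arbitrary example value in megabytes):
replication:
  replSetName: m103-example
  oplogSizeMB: 1024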
As operations such as inserts, updates, and deletes get logged into the oplog, the oplog.rs collection accumulates those statements until it reaches the configured oplog size limit.
Once that happens, the earliest operations in the oplog start to be overwritten by newer ones, since oplog.rs is a capped collection.
The time it takes to completely fill the oplog and begin overwriting the earliest statements determines the replication window.
Every node in our replica set has its own oplog.
If one of the nodes gets disconnected for some reason, the rest of the replica set keeps accumulating new writes.
When that node recovers, it checks its last oplog entry and tries to find that entry in the oplog of one of the available nodes; if the entry has already been overwritten, the node can no longer catch up via replication and needs a full resync.
This is why the size of the oplog.rs collection is an important aspect to keep in mind.
The replication window, measured in hours, shrinks as the write load grows: the heavier the load, the faster the oplog fills and the shorter the window.
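The current window can be inspected from the shell; rs.printReplicationInfo() reports the configured oplog size and the time span between the first and last oplog entries:
rs.printReplicationInfo()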