MongoDB Replica Set / docker-compose / mongoose transaction with persistent volume

This guide walks you through setting up a MongoDB replica set in a Docker environment using:

  • Docker Compose
  • MongoDB Replica Sets
  • Mongoose
  • Mongoose Transactions

Thanks to https://gist.github.com/asoorm for helping with their docker-compose file!
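
docker-compose.yml: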

services:
  mongo-setup:
    container_name: mongo-setup
    image: mongo
    restart: on-failure
    networks:
      default:
    volumes:
      - ./scripts:/scripts
    entrypoint: [ "/scripts/setup.sh" ] # Make sure this file exists (see below for the setup.sh)
    depends_on:
      - mongo1
      - mongo2
      - mongo3

  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo
    expose:
      - 27017
    ports:
      - 27017:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
    volumes:
      - <VOLUME-DIR>/mongo/data1/db:/data/db # This is where your volume will persist. e.g. VOLUME-DIR = ./volumes/mongodb
      - <VOLUME-DIR>/mongo/data1/configdb:/data/configdb

  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo
    expose:
      - 27017
    ports:
      - 27018:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
    volumes:
      - <VOLUME-DIR>/mongo/data2/db:/data/db # Note the data2, it must be different to the original set.
      - <VOLUME-DIR>/mongo/data2/configdb:/data/configdb

  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo
    expose:
      - 27017
    ports:
      - 27019:27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
    volumes:
      - <VOLUME-DIR>/mongo/data3/db:/data/db
      - <VOLUME-DIR>/mongo/data3/configdb:/data/configdb
# NOTE: This is the simplest way of achieving a replica set in MongoDB with Docker.
# However, if you would like a more automated approach, see the setup.sh file below and the mongo-setup service in the docker-compose file above, which runs that startup script.
# Run the following after bringing up docker-compose; it will instantiate the replica set.
# The _id values and hostnames can be tailored to your liking, however they MUST match the docker-compose file above.
docker-compose up -d
docker exec -it localmongo1 mongo
rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "mongo1:27017" },
      { _id : 1, host : "mongo2:27017" },
      { _id : 2, host : "mongo3:27017", arbiterOnly: true }
    ]
  }
)
exit
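
To verify the replica set, check rs.status(); once the election completes, one member should report stateStr: "PRIMARY". For example:

docker exec -it localmongo1 mongo --eval "rs.status()"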
// If on a Linux server, use the hostnames provided by the docker-compose file
// e.g. HOSTNAME = mongo1, mongo2, mongo3
// If on macOS, add the following to your /etc/hosts file
// 127.0.0.1 mongo1
// 127.0.0.1 mongo2
// 127.0.0.1 mongo3
// and use localhost as the HOSTNAME
mongoose.connect('mongodb://<HOSTNAME>:27017,<HOSTNAME>:27018,<HOSTNAME>:27019/<DBNAME>', {
  useNewUrlParser : true,
  useFindAndModify: false, // optional
  useCreateIndex  : true,
  replicaSet      : 'rs0', // We use this from the entrypoint in the docker-compose file
})
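
Note: the use* options above are for Mongoose 5. Mongoose 6+ no longer supports them (it always behaves as if useNewUrlParser and useCreateIndex are true and useFindAndModify is false), so remove them when upgrading. A minimal sketch of the same connection on a recent Mongoose, keeping the <HOSTNAME>/<DBNAME> placeholders:

const mongoose = require('mongoose');

async function main() {
  // replicaSet must match --replSet in the mongod entrypoints above
  await mongoose.connect(
    'mongodb://<HOSTNAME>:27017,<HOSTNAME>:27018,<HOSTNAME>:27019/<DBNAME>',
    { replicaSet: 'rs0' }
  );
}

main().catch(console.error);

setup.sh (the startup script used by the mongo-setup service above):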
#!/bin/bash
# MONGODB1=`ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
# MONGODB2=`ping -c 1 mongo2 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
# MONGODB3=`ping -c 1 mongo3 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`

MONGODB1=mongo1
MONGODB2=mongo2
MONGODB3=mongo3

echo "**********************************************" ${MONGODB1}
echo "Waiting for startup.."
# NOTE: this curl-based check relies on mongod's HTTP status interface, which was
# removed in MongoDB 3.6; on newer images, prefer a mongo --eval wait loop like
# the one shown in the comments below.
until curl http://${MONGODB1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

# echo curl http://${MONGODB1}:28017/serverStatus\?text\=1 2>&1 | grep uptime | head -1
# echo "Started.."

echo SETUP.sh time now: `date +"%T" `
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
  "_id": "rs0",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    { "_id": 0, "host": "${MONGODB1}:27017", "priority": 2 },
    { "_id": 1, "host": "${MONGODB2}:27017", "priority": 0 },
    { "_id": 2, "host": "${MONGODB3}:27017", "priority": 0 }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.slaveOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSlaveOk();
EOF
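
Note: the mongo-setup service runs /scripts/setup.sh as its entrypoint, so make sure the script is executable on the host (e.g. chmod +x scripts/setup.sh), otherwise the container will fail to start.

Example mongoose transaction: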
async function transaction() {
  // Start the transaction.
  const session = await ModelA.startSession();
  session.startTransaction();
  try {
    const options = { session };
    // Try to perform the operation on the Model. Note that create() with an
    // array of documents plus options resolves with an array of documents.
    const a = await ModelA.create([{ ...args }], options);
    // If the first operation succeeds, this next one will get called.
    await ModelB.create([{ ...args }], options);
    // If all succeeded with no errors, commit and end the session.
    await session.commitTransaction();
    session.endSession();
    return a;
  } catch (e) {
    // If any error occurred, the whole transaction fails and throws.
    // This undoes any changes that may have happened.
    await session.abortTransaction();
    session.endSession();
    throw e;
  }
}
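
For the same flow with less bookkeeping, recent Mongoose/driver versions expose session.withTransaction(), which commits on success, aborts on error, and retries transient transaction errors automatically. A minimal sketch, assuming the same ModelA, ModelB, and args as above:

async function transactionWithHelper() {
  const session = await ModelA.startSession();
  let a;
  try {
    // withTransaction handles commit, abort, and retry of transient errors.
    await session.withTransaction(async () => {
      [a] = await ModelA.create([{ ...args }], { session });
      await ModelB.create([{ ...args }], { session });
    });
    return a;
  } finally {
    await session.endSession();
  }
}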
@bentu-noodoe commented:

This worked for me, but only sometimes. Empirically, I find that if I have N replicas in the set, I have to wait until they are all ready before sending the JSON config to one of them. E.g., with 3 replicas defined in docker-compose.yaml with service names mongo1, mongo2, and mongo3:

# setup.sh
...
for n in $(seq 3); do
  until mongo --host "mongo${n}" --eval "print(\"waited for connection\")"; do
      echo -n .; sleep 2
  done
done
...

Thank you, this works for me.

@AhmedBHameed commented Mar 23, 2024:

[RESOLVED] I found a solution to the following issue; see the update below.

I'm using Linux. I configured the replica set successfully and even connected from a server running on the same network as the mongo containers.

However, all my attempts to connect MongoDB Compass failed:

mongodb://<USER>:<PASS>@mongo1:27017,mongo2:27018,mongo3:27019/?replicaSet=rs0 // failed
mongodb://<USER>:<PASS>@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0 // failed

Even using my local IP address failed to connect. I can see Docker accepting the connection, but it then fails, with the following log output:

mongo1  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:56344","uuid":{"uuid":{"$uuid":"17e133d8-6eb9-448a-9af2-5cb2d024766a"}},"connectionId":78,"connectionCount":11}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:58228","uuid":{"uuid":{"$uuid":"03d389c1-c3f6-474a-8b28-2e1428df858c"}},"connectionId":81,"connectionCount":12}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:40516","uuid":{"uuid":{"$uuid":"2184a73b-af1b-46c6-ab68-5284f2fc4823"}},"connectionId":92,"connectionCount":22}}
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn78","msg":"client metadata","attr":{"remote":"192.168.2.84:56344","client":"conn78","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn81","msg":"client metadata","attr":{"remote":"192.168.2.84:58228","client":"conn81","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn92","msg":"client metadata","attr":{"remote":"192.168.2.84:40516","client":"conn92","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.513+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn78","msg":"Connection ended","attr":{"remote":"192.168.2.84:56344","uuid":{"uuid":{"$uuid":"17e133d8-6eb9-448a-9af2-5cb2d024766a"}},"connectionId":78,"connectionCount":10}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.513+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn81","msg":"Connection ended","attr":{"remote":"192.168.2.84:58228","uuid":{"uuid":{"$uuid":"03d389c1-c3f6-474a-8b28-2e1428df858c"}},"connectionId":81,"connectionCount":11}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.514+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn92","msg":"Connection ended","attr":{"remote":"192.168.2.84:40516","uuid":{"uuid":{"$uuid":"2184a73b-af1b-46c6-ab68-5284f2fc4823"}},"connectionId":92,"connectionCount":21}}

Running rs.status() gives me the following:

{
  set: 'rs0',
  date: ISODate('2024-03-23T17:24:32.085Z'),
  myState: 2,
  term: Long('2'),
  syncSourceHost: 'mongo3:27019',
  syncSourceId: 2,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    lastCommittedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    appliedOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    durableOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
    lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1711214658, i: 1 }),
  electionParticipantMetrics: {
    votedForCandidate: true,
    electionTerm: Long('2'),
    lastVoteDate: ISODate('2024-03-23T17:11:28.736Z'),
    electionCandidateMemberId: 2,
    voteReason: '',
    lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1711213450, i: 1 }), t: Long('1') },
    maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1711213450, i: 1 }), t: Long('1') },
    priorityAtElection: 1,
    newTermStartDate: ISODate('2024-03-23T17:11:28.780Z'),
    newTermAppliedDate: ISODate('2024-03-23T17:11:28.802Z')
  },
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 794,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      syncSourceHost: 'mongo3:27019',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27018',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 793,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      optimeDurableDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastHeartbeat: ISODate('2024-03-23T17:24:31.652Z'),
      lastHeartbeatRecv: ISODate('2024-03-23T17:24:31.118Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo3:27019',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 2
    },
    {
      _id: 2,
      name: 'mongo3:27019',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 793,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      optimeDurableDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastHeartbeat: ISODate('2024-03-23T17:24:31.652Z'),
      lastHeartbeatRecv: ISODate('2024-03-23T17:24:31.118Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1711213888, i: 1 }),
      electionDate: ISODate('2024-03-23T17:11:28.000Z'),
      configVersion: 1,
      configTerm: 2
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1711214668, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('VClxfA6lXgDXDuYJP68mAP6veSw=', 0),
      keyId: Long('7349605288829255686')
    }
  },
  operationTime: Timestamp({ t: 1711214668, i: 1 })
}

Any idea how to use the MongoDB Compass app with Docker containers running a mongo replica set?


UPDATE:

I managed to make it work by adding IP mappings in /etc/hosts:

127.0.0.1       mongo1
127.0.0.1       mongo2
127.0.0.1       mongo3

Then the connection worked as expected.
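
An alternative to editing /etc/hosts may be to point Compass at a single member with directConnection=true, which skips replica-set topology discovery (and therefore the unresolvable mongo1/mongo2/mongo3 hostnames), e.g.:

mongodb://<USER>:<PASS>@localhost:27017/?directConnection=true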
