Configuring a mongo cluster over VPN in vagrant

Alongside the cluster itself, this Logstash pipeline tails the JSON log files of the three applications and ships them into a local Elasticsearch:

input {
  file {
    path => "/home/pete/code/hmrc/mongovagrant/code/fasttrack/logs/*"
    codec => json
  }
  file {
    path => "/home/pete/code/hmrc/mongovagrant/code/auth-provider/logs/*"
    codec => json
  }
  file {
    path => "/home/pete/code/hmrc/mongovagrant/code/fasttrack-frontend/logs/*"
    codec => json
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}
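
A minimal way to run the pipeline, assuming Logstash is installed and the config above is saved as logstash.conf (the filename is my choice here):

bin/logstash -f logstash.conf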

Reproducing the Mongo zombie connection bug

Bringing up the vagrant boxes

Install Vagrant and VirtualBox, then create a folder called mongo-vagrant.

Create a file called 'Vagrantfile' inside it with the following contents:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # set to false, if you do NOT want to check the correct VirtualBox Guest Additions version when booting this box
  #if defined?(VagrantVbguest::Middleware)
  #  config.vbguest.auto_update = true
  #end

  hosts = {
    'mongo'  => { 'ip_address' => '10.16.0.1', 'ip_address2' => '10.16.30.1' },
    'mongo2' => { 'ip_address' => '10.16.0.2', 'ip_address2' => '10.16.31.1' },
    'mongo3' => { 'ip_address' => '10.16.0.3', 'ip_address2' => '10.16.32.1' },
  }

  hosts.each do |host, host_config|
    config.vm.define host do |node|
      node.vm.box = "ubuntu/trusty64"
      node.vm.hostname = host
      node.vm.network :private_network, ip: host_config['ip_address']
      node.vm.network "private_network", ip: host_config['ip_address2'],
        virtualbox__intnet: host
      node.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--cpus", "2", "--memory", "6144"]
      end
    end
  end
end

Bring up the machines from inside mongo-vagrant by typing vagrant up. This will create three boxes: mongo, mongo2 and mongo3.

Mongo and mongo2 will be regular nodes, and mongo3 will be our arbiter.
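
A quick sanity check that all three came up (vagrant status is standard Vagrant; the output shown is indicative):

cd mongo-vagrant
vagrant up
vagrant status
# Current machine states:
#   mongo    running (virtualbox)
#   mongo2   running (virtualbox)
#   mongo3   running (virtualbox)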

Installing the strongSwan VPN

By default each mongo VM has two network interfaces. One is on subnet 10.16.0.x and will allow us to SSH into the box and communicate with the host machine. The other is on a subnet specific to the VM itself, which you can see defined in the Vagrantfile.
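
You can verify both interfaces from inside a box; on ubuntu/trusty64 the private networks typically appear as eth1 and eth2, though the names may vary:

vagrant ssh mongo
ip -4 addr show
# Expect 10.16.0.1 on one interface (the host-reachable subnet)
# and 10.16.30.1 on another (the VM-specific subnet)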

We will use the strongSwan VPN to connect the various subnets and allow communication between the nodes. SSH into the first mongo box using vagrant ssh mongo.

Install strongswan by running:

sudo apt-get install strongswan-starter

We now need to configure two VPN tunnels to the other nodes. Edit /etc/ipsec.conf so it looks like this:

# ipsec.conf - strongSwan IPsec configuration file

# basic configuration

config setup
        # strictcrlpolicy=yes
        # uniqueids = no

# Add connections here.
conn mongo-mongo2
    authby=secret
    auto=route
    keyexchange=ike
    leftid=@mongo
    leftsubnet=10.16.30.0/24
    left=10.16.0.1
    rightid=@mongo2
    right=10.16.0.2
    rightsubnet=10.16.31.0/24

conn mongo-mongo3
    authby=secret
    auto=route
    keyexchange=ike
    leftid=@mongo
    leftsubnet=10.16.30.0/24
    left=10.16.0.1
    rightid=@mongo3
    right=10.16.0.3
    rightsubnet=10.16.32.0/24

To secure our VPN we're using pre-shared keys. As this is just a local Vagrant setup we're keeping things quite simple, but do something more robust if you ever do this for real(!)

Edit /etc/ipsec.secrets to specify the keys for each connection:

# This file holds shared secrets or RSA private keys for authentication.

# RSA private key for this host, authenticating it to any other host
# which knows the public part.  Suitable public keys, for ipsec.conf, DNS,
# or configuration of other implementations, can be extracted conveniently
# with "ipsec showhostkey".
@mongo @mongo2 : PSK "vpn"
@mongo @mongo3 : PSK "vpn"

Repeat these steps for the other mongo nodes, changing the connection names and IP addresses accordingly; a sketch for mongo2 follows below. Remember each node needs a VPN tunnel to the other two.
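
For example, on mongo2 the connections in /etc/ipsec.conf are simply mirrored (derived from the pattern above):

conn mongo2-mongo
    authby=secret
    auto=route
    keyexchange=ike
    leftid=@mongo2
    leftsubnet=10.16.31.0/24
    left=10.16.0.2
    rightid=@mongo
    right=10.16.0.1
    rightsubnet=10.16.30.0/24

conn mongo2-mongo3
    authby=secret
    auto=route
    keyexchange=ike
    leftid=@mongo2
    leftsubnet=10.16.31.0/24
    left=10.16.0.2
    rightid=@mongo3
    right=10.16.0.3
    rightsubnet=10.16.32.0/24

with the matching /etc/ipsec.secrets:

@mongo2 @mongo : PSK "vpn"
@mongo2 @mongo3 : PSK "vpn"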

Restart the VPN on each box using sudo ipsec restart

After a short delay, connections should be established. You can check this by running sudo ipsec statusall. You should see output like this:

Status of IKE charon daemon (strongSwan 5.1.2, Linux 3.13.0-83-generic, x86_64):
  uptime: 55 seconds, since Mar 24 11:00:42 2016
  malloc: sbrk 1757184, mmap 0, used 341184, free 1416000
  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 0
  loaded plugins: charon test-vectors aes rc2 sha1 sha2 md4 md5 rdrand random nonce x509 revocation constraints pkcs1 pkcs7 pkcs8 pkcs12 pem openssl xcbc cmac hmac ctr ccm gcm attr kernel-netlink resolve socket-default stroke updown eap-identity addrblock
Listening IP addresses:
  10.0.2.15
  10.16.0.1
  10.16.30.1
Connections:
mongo-mongo2:  10.16.0.1...10.16.0.2  IKEv1/2
mongo-mongo2:   local:  [mongo] uses pre-shared key authentication
mongo-mongo2:   remote: [mongo2] uses pre-shared key authentication
mongo-mongo2:   child:  10.16.30.0/24 === 10.16.31.0/24 TUNNEL
mongo-mongo3:  10.16.0.1...10.16.0.3  IKEv1/2
mongo-mongo3:   local:  [mongo] uses pre-shared key authentication
mongo-mongo3:   remote: [mongo3] uses pre-shared key authentication
mongo-mongo3:   child:  10.16.30.0/24 === 10.16.32.0/24 TUNNEL
Routed Connections:
mongo-mongo3{2}:  ROUTED, TUNNEL
mongo-mongo3{2}:   10.16.30.0/24 === 10.16.32.0/24 
mongo-mongo2{1}:  ROUTED, TUNNEL
mongo-mongo2{1}:   10.16.30.0/24 === 10.16.31.0/24 
Security Associations (0 up, 0 connecting):
  none

A good way to test that the network is set up correctly is to try to ping one of the other nodes via the IP for its personal subnet:

vagrant@mongo:~$ ping 10.16.31.1
PING 10.16.31.1 (10.16.31.1) 56(84) bytes of data.
64 bytes from 10.16.31.1: icmp_seq=2 ttl=64 time=0.376 ms
64 bytes from 10.16.31.1: icmp_seq=3 ttl=64 time=0.296 ms
64 bytes from 10.16.31.1: icmp_seq=4 ttl=64 time=0.336 ms
^C
--- 10.16.31.1 ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.296/0.336/0.376/0.032 ms

Then stop the VPN using sudo ipsec stop and try again. Ping should now hang indefinitely.

vagrant@mongo:~$ sudo ipsec stop
Stopping strongSwan IPsec...

vagrant@mongo:~$ ping 10.16.31.1
PING 10.16.31.1 (10.16.31.1) 56(84) bytes of data.
^C
--- 10.16.31.1 ping statistics ---
70 packets transmitted, 0 received, 100% packet loss, time 69521ms

If ping is successful even after terminating the VPN then it may be routing through another network gateway. Try disabling the default gateway: http://ubuntuforums.org/showthread.php?t=1088474

Don't forget to start the VPN again when you're done, using sudo ipsec start.
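
Once restarted, a quick way to confirm the tunnels are routed again:

sudo ipsec start
sudo ipsec status
# Both mongo-mongo2 and mongo-mongo3 should show as ROUTED, TUNNEL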

Installing mongo

We'll be using the instructions from https://docs.mongodb.org/v3.0/installation/ to install Mongo 3.0.10 specifically, rather than a later version. Run the following commands on each mongo box:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org=3.0.10 mongodb-org-server=3.0.10 mongodb-org-shell=3.0.10 mongodb-org-mongos=3.0.10 mongodb-org-tools=3.0.10
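
Since we want to stay on 3.0.10, it's also worth holding the packages so a later apt-get upgrade doesn't replace them (the hold mechanism from the MongoDB install docs):

echo "mongodb-org hold" | sudo dpkg --set-selections
echo "mongodb-org-server hold" | sudo dpkg --set-selections
echo "mongodb-org-shell hold" | sudo dpkg --set-selections
echo "mongodb-org-mongos hold" | sudo dpkg --set-selections
echo "mongodb-org-tools hold" | sudo dpkg --set-selections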

We want to configure mongo to use a replica set, and also to bind only to the VM-specific subnet address. Edit /etc/mongod.conf on each box to look like this, making sure to change the IP according to the node:

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 10.16.30.1

replication:
  replSetName: rs0
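
The only line that changes per node is bindIp. As a sketch, the net section on the other two boxes:

# on mongo2
net:
  port: 27017
  bindIp: 10.16.31.1

# on mongo3
net:
  port: 27017
  bindIp: 10.16.32.1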

Ensure the mongod service is not already running using sudo service mongod stop, then run mongod in the background with our settings using sudo mongod -f /etc/mongod.conf --fork:

vagrant@mongo:~$ sudo mongod -f /etc/mongod.conf --fork
about to fork child process, waiting until server is ready for connections.
forked process: 2690
child process started successfully, parent exiting
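
Before going further you can check that mongod is listening only on the VM-specific address (output is indicative):

sudo netstat -plnt | grep 27017
# tcp  0  0 10.16.30.1:27017  0.0.0.0:*  LISTEN  2690/mongod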

Now access the mongo shell. You'll need to specify the host IP directly as the default is 127.0.0.1, which we are deliberately not listening on.

vagrant@mongo:~$ mongo 10.16.30.1
MongoDB shell version: 3.0.10
connecting to: 10.16.30.1/test
Server has startup warnings: 
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] 
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] 
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] 
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-24T11:16:44.888+0000 I CONTROL  [initandlisten] 
>

Initialising the replica set

From the mongo console on the first node ONLY, run rs.initiate()

> rs.initiate()
{
	"info2" : "no configuration explicitly specified -- making one",
	"me" : "10.16.30.1:27017",
	"ok" : 1
}
rs0:OTHER>
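
If you'd rather not rely on the auto-generated configuration, rs.initiate also accepts an explicit config document (a sketch, equivalent to the call above):

> rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "10.16.30.1:27017" }] })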

Now add the other nodes using their internal addresses. Note that we are adding mongo3 as an arbiter.

rs0:PRIMARY> rs.add("10.16.31.1")
{ "ok" : 1 }
rs0:PRIMARY> rs.addArb("10.16.32.1")
{ "ok" : 1 }

Finally, we can check the status of the cluster using rs.status():

rs0:PRIMARY> rs.status()
{
	"set" : "rs0",
	"date" : ISODate("2016-03-24T11:30:25.955Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "10.16.30.1:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 373,
			"optime" : Timestamp(1458818983, 1),
			"optimeDate" : ISODate("2016-03-24T11:29:43Z"),
			"electionTime" : Timestamp(1458818804, 2),
			"electionDate" : ISODate("2016-03-24T11:26:44Z"),
			"configVersion" : 3,
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "10.16.31.1:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 51,
			"optime" : Timestamp(1458818983, 1),
			"optimeDate" : ISODate("2016-03-24T11:29:43Z"),
			"lastHeartbeat" : ISODate("2016-03-24T11:30:25.356Z"),
			"lastHeartbeatRecv" : ISODate("2016-03-24T11:30:24.962Z"),
			"pingMs" : 0,
			"syncingTo" : "10.16.30.1:27017",
			"configVersion" : 3
		},
		{
			"_id" : 2,
			"name" : "10.16.32.1:27017",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 42,
			"lastHeartbeat" : ISODate("2016-03-24T11:30:25.356Z"),
			"lastHeartbeatRecv" : ISODate("2016-03-24T11:30:24.359Z"),
			"pingMs" : 0,
			"configVersion" : 3
		}
	],
	"ok" : 1
}