Codemotion Madrid

Let's see how you can use Docker to create fake Virtual Machines

VMs with Docker

If you have ever used cloud computing you will know that most virtual machines give you the ability to install software at boot using cloud-init.
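
For reference, a minimal cloud-init user-data file installing the same kind of packages might look like this (illustrative only, not part of this demo):

#cloud-config
packages:
  - openssh-server
  - supervisor
  - zip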

With Docker you can do something very similar

file snippet
vms/Dockerfiles/Dockerfile 1_dockerfile
FROM ubuntu:latest

# TARGETARCH is set automatically by BuildKit (e.g. amd64, arm64);
# it must be declared after FROM to be usable in the build stage
ARG TARGETARCH

# wget is needed by the download steps that follow
RUN apt update && apt install -y openssh-server supervisor zip wget

RUN mkdir /run/sshd

Then you can do something like the following to install some applications. We are going to install our API app, fake-service, and Consul, which we will use later.

file snippet
vms/Dockerfiles/Dockerfile 2_dockerfile
WORKDIR /tmp

# Install fake-service
RUN wget https://github.com/nicholasjackson/fake-service/releases/download/v0.25.1/fake_service_linux_amd64.zip -O ./fake-service.zip && \
  unzip ./fake-service.zip && \
  mv ./fake-service /usr/local/bin/fake-service && \
  chmod +x /usr/local/bin/fake-service

# Install Consul
RUN wget https://releases.hashicorp.com/consul/1.15.2/consul_1.15.2_linux_amd64.zip -O ./consul.zip && \
  unzip ./consul.zip && \
  mv ./consul /usr/local/bin/consul && \
  chmod +x /usr/local/bin/consul

But what about running applications? If you were dealing with a real VM you would probably use systemd. While it is possible to make systemd work in a Docker container, it is not really portable; last time I tried it you also had to run the container as privileged, which is not ideal.

The answer is to use supervisord, which runs easily in a container. As the final element of our container we just add the command to run supervisord.

CMD ["/usr/bin/supervisord"]

We can then create a supervisor config file like so:

file snippet
vms/files/supervisor.conf
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
stderr_logfile=/var/log/sshd.err.log
stdout_logfile=/var/log/sshd.out.log

[program:consul]
command=/usr/local/bin/consul agent -datacenter %(ENV_CONSUL_DATACENTER)s -config-dir /config -client 0.0.0.0 -retry-join %(ENV_CONSUL_SERVER)s -bind 0.0.0.0 -grpc-port=8502 -data-dir /etc/consul -hcl "enable_central_service_config = true"
autostart=true
autorestart=true
stderr_logfile=/var/log/consul.err.log
stdout_logfile=/var/log/consul.out.log

[program:fake-service]
command=/usr/local/bin/fake-service
autostart=true
autorestart=true
stderr_logfile=/var/log/fake-service.err.log
stdout_logfile=/var/log/fake-service.out.log

Building

Building the image is pretty straightforward; all we need to do is run the following command.

docker build -t nicholasjackson/vm:0.1.0 -f ./vms/Dockerfiles/Dockerfile ./vms
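
Since the Dockerfile declares ARG TARGETARCH, a multi-arch build with buildx is also possible; note that the hard-coded amd64 download URLs would first need to use ${TARGETARCH}. This variation is my suggestion, not part of the original demo:

docker buildx build --platform linux/amd64,linux/arm64 -t nicholasjackson/vm:0.1.0 -f ./vms/Dockerfiles/Dockerfile ./vms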

Running

So what about running the application? Well, you could use Docker Compose; a Compose file to run our application would look like this.

file snippet
vms/Docker-Compose/compose.yaml
services:
  vm1:
    image: nicholasjackson/vm:0.1.0

# but what about ssh? We can add a one-shot container to generate a key pair:
services:
  generate_ssh_key:
    image: ubuntu:latest
    command: ["ssh-keygen", "-t", "ed25519", "-C", "your_email@example.com", "-N", "", "-f", "/files/ssh_key"]
    working_dir: "/files"
    volumes:
      - ./files:/files
  vm1:
    image: nicholasjackson/vm:0.1.0

I will be honest with you: there are problems with this. The core one is that Compose has no real dependency management for one-shot setup tasks like this, so what you would really end up doing is writing a bash script around it, and that gets hard really fast.
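
To give a flavour, that glue script starts out something like this (an illustrative sketch, not part of the demo) and only grows from there:

#!/bin/bash
set -e

# generate the key pair before the VM starts
mkdir -p ./files
[ -f ./files/ssh_key ] || ssh-keygen -t ed25519 -N "" -f ./files/ssh_key

# start the VM...
docker compose up -d vm1

# ...then poll until sshd is actually accepting connections
# (the host port 2222 here is a made-up example)
until ssh -o StrictHostKeyChecking=no -p 2222 -i ./files/ssh_key root@localhost true 2>/dev/null; do
  sleep 1
done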

For this reason my buddy Erik Veld and I created Jumppad. Let's look at how we can use that, along with some more complex scenarios, so you can build reusable labs on your local machine.

Jumppad VMs

First things first: I need to create a network to run my VMs on. This is just a Docker network, but we can configure the IP range through the subnet attribute.

file snippet
main.hcl 4_network
resource "network" "vpc1" {
  subnet = "10.5.0.0/16"
}

Next let's start to define the VM. I am going to put these resources into a folder; by doing so I am creating a reusable component.

First we need to create a key for SSH access to our servers; we can do that with the certificate_ca resource.

file snippet
./vms/vm1.hcl 5_ssh_key
resource "certificate_ca" "ssh_key" {
  output = data("ssh_key")
}

And then we need to create our container. But before we do, are we sure the port we want to expose SSH on will be available?

Let's create a random port to map port 22 (SSH) to.

file snippet
./vms/vm1.hcl 6_random_port
resource "random_number" "port" {
  minimum = 10000
  maximum = 20000
}

Now let's define our container

file snippet
./vms/vm1.hcl 7_container
resource "container" "vm1" {
  network {
    id = variable.network
  }

  image {
    name = "nicholasjackson/vm:0.1.0"
  }

  volume {
    source      = "./files/supervisor.conf"
    destination = "/etc/supervisor/conf.d/ssh.conf"
  }

  volume {
    source      = data("temp")
    destination = "/init"
  }

  ## Public SSH
  port {
    host   = resource.random_number.port.value
    local  = 22
    remote = 22
  }

  environment = {
    NAME              = "API - vm1"
    MESSAGE           = "Hi I am running in a Virtual Machine"
    CONSUL_DATACENTER = "dc1"
    CONSUL_SERVER     = variable.consul_server
  }
}

We already discussed that we need to add the SSH key to the container; let's see how we can do that using the template resource.

file snippet
./vms/vm1.hcl 8_template
resource "template" "vm_init" {
  source = <<-EOF
    #! /bin/bash
    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
    echo "ssh-rsa ${resource.certificate_ca.ssh_key.public_key_ssh.contents}" >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
  EOF

  destination = "${data("temp")}/init.sh"
}

And now we have the problem of running this script; we can do that too, using the remote_exec resource.

file snippet
./vms/vm1.hcl 9_remote_exec
resource "remote_exec" "vm_init" {
  depends_on = ["resource.template.vm_init"]
  target     = resource.container.vm1.id

  command = [
    "/bin/bash",
    "/init/init.sh"
  ]
}

Running the example

Let's run the example and see it in action. Before we do, we need to add it to our main.hcl file.

file snippet
./main.hcl 10_vms_module
module "vms" {
  source = "./vms"
  variables = {
    network       = resource.network.vpc1.id
  }
}

This allows us to define a reusable component that we configure through variables. Let's look at the variables file to see how these are defined.

file snippet
./vms/variables.hcl
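
The file is not reproduced in this gist; a minimal sketch of what it might contain, based on the variables and outputs the module references elsewhere (the exact defaults are assumptions):

variable "network" {
  default = ""
}

variable "consul_server" {
  default = ""
}

output "ssh_private_key" {
  value = "${data("ssh_key")}/ssh_key.key"
}

output "ssh_addr" {
  value = "localhost"
}

output "ssh_port" {
  value = resource.random_number.port.value
}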

Before we run this: we created some output variables that contain the location of the SSH key and similar details. These are only available inside the vms module; to expose them to the main module we need to create output variables that reference them.

file snippet
./main.hcl 11_outputs
output "vm_ssh_key" {
  value = module.vms.output.ssh_private_key
}

output "vm_ssh_addr" {
  value = module.vms.output.ssh_addr
}

output "vm_ssh_port" {
  value = module.vms.output.ssh_port
}
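
The ssh_command output used below is not shown in the snippet; a sketch of how it could be composed from the same module outputs (the exact string format is an assumption):

output "ssh_command" {
  value = "ssh root@localhost -p ${module.vms.output.ssh_port} -i ${module.vms.output.ssh_private_key}"
}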

Now we can run it ...

jp up

And let's SSH into it. I can see the output variables by running the following command.

jp output

And specifically the output for the SSH command:

jp output ssh_command

Let me use this to SSH into our machine, and curl our fake-service.

ssh root@localhost -p 15458 -i /Users/nicj/.jumppad/data/ssh_key/vms/ssh_key.key
curl localhost:9091

More than one machine

So this is great, but we rarely have a server in isolation; let's create another virtual machine. I have actually already prepared this, so I am just going to rename this file and run Jumppad again.

file snippet
./vms/vm2.hcl
jp up

Awesome, but how do we get traffic to these machines?

Load balancing with Nginx

A common way is to use a load balancer, and the free, open source Nginx is perfect for this.

Let's create an Nginx load balancer, using the techniques we have learned, to expose our VMs.

The first thing we need to do is create a config file for Nginx; we can use the template resource for that.

file snippet
./nginx/nginx.hcl 12_nginx_template
resource "template" "nginx_template" {
  source = <<-EOF
    upstream backend {
      server vm1.vms.container.jumppad.dev;
      server vm2.vms.container.jumppad.dev;
    }

    server {
       listen 80;

       location / {
          proxy_pass http://backend;
       }
    }
  EOF

  destination = "${data("nginx")}/nginx_template.hcl"
}

And then we create the resource itself

file snippet
./nginx/nginx.hcl 13_nginx_resource
resource "container" "nginx" {
  network {
    id = variable.network
  }

  image {
    name = "nicholasjackson/nginx:0.1.0"
  }

  volume {
    source      = "./files/supervisor.conf"
    destination = "/etc/supervisor/conf.d/ssh.conf"
  }

  volume {
    source      = data("temp")
    destination = "/init"
  }

  volume {
    source      = resource.template.nginx_template.destination
    destination = "/etc/nginx/conf.d/load-balancer.conf.ctmpl"
  }

  ## Public HTTP
  port {
    host   = 80
    local  = 80
    remote = 80
  }

  environment = {
    NAME              = "nginx"
    CONSUL_DATACENTER = "dc1"
    CONSUL_SERVER     = variable.consul_server
  }
}

Then we add the module:

module "nginx" {
  source = "./nginx"
  variables = {
    network = resource.network.vpc1.id
    //consul_server = module.consul.output.consul_server
  }
}

Now let's run our application:

jp up

And we can test it by curling nginx

curl localhost

Making everything dynamic

So this is all well and good; however, what happens when the names of the VMs change? Do we manually update the config? In microservices we solved this problem with a dynamic service catalog, and Consul from HashiCorp is commonly used for this. Let's create a Consul server cluster and modify our applications to use it.

To save time, I have created a module, using the techniques I have just shown you, that creates a Consul cluster.

We can add this to our config like so

file snippet
./main.hcl 14_consul
module "consul" {
  source = "./consul"

  variables = {
    consul_nodes = 1
    network      = resource.network.vpc1.id
  }
}

Modifying our VMs

Now let's modify our VMs to register the service in Consul.

file snippet
./vms/vm1.hcl 15_consul_service
resource "template" "consul_config_1" {
  source = <<-EOF
    service {
      id = "api-vm1"
      name = "api"
      port = 9090
    }
  EOF

  destination = "${data("consul_config")}/service1.hcl"
}

To register this we need the Consul daemon on our box; that was actually already added in the init script.
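
Mechanically, the rendered template also has to be mounted into the agent's -config-dir; a sketch of the extra volume on the vm1 container, mirroring what the nginx container does later:

volume {
  source      = resource.template.consul_config_1.destination
  destination = "/config/service.hcl"
}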

Let's restart our applications now.

If we go to the browser we can see that the two services have been registered. Let's modify Nginx to use Consul rather than the hard-coded values.

Modifying Nginx

First, let's register nginx as a service in Consul too.

file snippet
./nginx/nginx.hcl 16_nginx_service
resource "template" "consul_config_1" {
  source = <<-EOF
    service {
      id = "nginx-1"
      name = "nginx"
      port = 80
    }
  EOF

  destination = "${data("consul_config")}/nginx.hcl"
}

Then we need to generate a config file for Consul Template.

file snippet
./nginx/nginx.hcl 17_consul_template
resource "template" "consul_template" {
  source = <<-EOF
    consul {
      address = "localhost:8500"

      retry {
        enabled  = true
        attempts = 12
        backoff  = "250ms"
      }
    }

    template {
      source      = "/etc/nginx/conf.d/load-balancer.conf.ctmpl"
      destination = "/etc/nginx/conf.d/load-balancer.conf"
      perms       = 0600
      command     = "/etc/init.d/nginx reload"
    }
  EOF

  destination = "${data("nginx")}/consul_template.hcl"
}
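
Something also has to run consul-template inside the container; presumably the nicholasjackson/nginx image does this via supervisord, with a program entry along these lines (the binary path here is an assumption):

[program:consul-template]
command=/usr/local/bin/consul-template -config /config/consul_template/consul_template.hcl
autostart=true
autorestart=true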

Now we are going to replace the static nginx config file with a dynamic config file.

file snippet
./nginx/nginx.hcl 18_nginx_template
resource "template" "nginx_template" {
  source = <<-EOF
    upstream backend {
    {{- range service "api" }}
      server {{ .Address }}:{{ .Port }};
    {{- end }}
    }

    server {
       listen 80;

       location / {
          proxy_pass http://backend;
       }
    }
  EOF

  destination = "${data("nginx")}/nginx_template.hcl"
}

Finally I am going to replace my existing resource with a new one that uses this template.

file snippet
./nginx/nginx.hcl 19_container_nginx
resource "container" "nginx" {
  network {
    id = variable.network
  }

  image {
    name = "nicholasjackson/nginx:0.1.0"
  }

  volume {
    source      = "./files/supervisor.conf"
    destination = "/etc/supervisor/conf.d/ssh.conf"
  }

  volume {
    source      = data("temp")
    destination = "/init"
  }

  volume {
    source      = resource.template.consul_config_1.destination
    destination = "/config/consul/service.hcl"
  }

  volume {
    source      = resource.template.consul_template.destination
    destination = "/config/consul_template/consul_template.hcl"
  }

  volume {
    source      = resource.template.nginx_template.destination
    destination = "/etc/nginx/conf.d/load-balancer.conf.ctmpl"
  }

  ## Public HTTP
  port {
    host   = 80
    local  = 80
    remote = 80
  }

  environment = {
    NAME              = "nginx"
    CONSUL_DATACENTER = "dc1"
    CONSUL_SERVER     = variable.consul_server
  }
}

Let's run this.

jp down
jp up

And curl our nginx

If we jump into the container we can see that the template has been rendered.

curl localhost

Dynamic servers with Nomad

What about scheduling workloads dynamically? Let's add a Nomad cluster. First we create a Consul agent config that the Nomad nodes will use to join our datacenter.

file snippet
./nomad/nomad.hcl 20_nomad_config
resource "template" "agent_config" {
  source = <<-EOF
    datacenter = "dc1"
    retry_join = ["${variable.consul_server}"]
  EOF

  destination = "${data("nomad")}/consul.hcl"
}

Then we can create our cluster

file snippet
./nomad/nomad.hcl 21_nomad_cluster
resource "nomad_cluster" "dev" {
  client_nodes = variable.client_nodes

  consul_config = resource.template.agent_config.destination

  network {
    id = variable.network
  }
}

And deploy the job to it

file snippet
./nomad/nomad.hcl 22_nomad_job
resource "nomad_job" "api" {
  cluster = resource.nomad_cluster.dev.id

  paths = ["./jobs/api.hcl"]
}
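
The job file itself is not shown in this gist; a minimal sketch of what ./jobs/api.hcl could look like, assuming fake-service runs under the Docker driver and registers itself in Consul:

job "api" {
  datacenters = ["dc1"]

  group "api" {
    network {
      # fake-service listens on 9090 by default
      port "http" {
        to = 9090
      }
    }

    # register each allocation in Consul so nginx can discover it
    service {
      name = "api"
      port = "http"
    }

    task "fake-service" {
      driver = "docker"

      config {
        image = "nicholasjackson/fake-service:v0.25.1"
        ports = ["http"]
      }
    }
  }
}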

Finally, let's add this module to our main file

file snippet
./main.hcl 23_nomad_module
module "nomad" {
  source = "./nomad"

  variables = {
    network       = resource.network.vpc1.id
    consul_server = module.consul.output.consul_server
    client_nodes  = 3
  }
}

And we can update everything:

jp up

And if we look in Consul we can see the new services.

And when we curl

curl localhost

Finally, we can destroy everything.

jp down