@d4n13lbc
Last active November 1, 2017 13:14

--- pdsh tutorial ----- create two virtual machines with the IPs 192.168.33.2 and 192.168.33.3

apt install pdsh
export PDSH_RCMD_TYPE=ssh

vi hosts   <- one target per line:
root@192.168.33.2
root@192.168.33.3

export WCOLL=/home/daniel/Escritorio/consul/hosts

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.33.2
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.33.3
ssh-add
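The per-host ssh-copy-id calls can be generalized; a minimal sketch that reads the same hosts file, assuming it contains one user@host entry per line:

```shell
# Copy the public key to every host listed in the pdsh hosts file.
while read -r host; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done < hosts
```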

apt install ssh-askpass

pdsh 'cat /proc/cpuinfo | grep bogomips'
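If you don't want to rely on WCOLL, the target list can also be given on the command line; a quick sketch using pdsh's -R (rcmd module), -l (remote user) and -w (host list) flags:

```shell
# Same query, but with the hosts named explicitly instead of via $WCOLL.
pdsh -R ssh -l root -w 192.168.33.2,192.168.33.3 'grep bogomips /proc/cpuinfo'
```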

references: http://www.linux-magazine.com/Issues/2014/166/Parallel-Shells

---- consul tutorial ------------------------------------
scp consul_0.9.0_linux_amd64.zip vagrant@192.168.33.2:/tmp

ssh vagrant@192.168.33.2
mkdir -p /tmp/consul
cd /tmp/consul
sudo yum install wget -y
wget https://releases.hashicorp.com/consul/0.9.0/consul_0.9.0_linux_amd64.zip
yum install unzip -y
unzip consul_0.9.0_linux_amd64.zip
sudo cp consul /usr/bin

agent in dev mode

install dependencies (dig):
yum install bind-utils
sudo rpm -ql bind-utils | grep -v "gz$"   <- just to check which files the package ships

install a command-line JSON processor:
yum install -y epel-release
yum install -y jq

start the agent:
consul agent -dev

list members   <- https://www.consul.io/docs/internals/gossip.html
consul members

list members using the API   <- https://www.consul.io/api/index.html
curl localhost:8500/v1/catalog/nodes | jq .

list nodes using consul's DNS interface:
dig @127.0.0.1 -p 8600 localhost.localdomain

stop the consul agent   <- https://www.consul.io/docs/internals/consensus.html
ctrl + c

registering a service

create a directory for consul's configuration:
sudo mkdir /etc/consul.d

add a service definition:
echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul.d/web.json

start the agent pointing at the configuration directory:
consul agent -dev -config-dir=/etc/consul.d

querying a service

look up the location of the web service:
dig @127.0.0.1 -p 8600 web.service.consul

get additional information (port):
dig @127.0.0.1 -p 8600 web.service.consul SRV

look up the location of the web service using the API:
curl http://localhost:8500/v1/catalog/service/web

list only healthy service instances:
curl 'http://localhost:8500/v1/health/service/web?passing'

Note: the HTTP API can also add, remove, or modify services
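A minimal sketch of that: registering and then removing a second, made-up service (web2) through the real agent endpoints /v1/agent/service/register and /v1/agent/service/deregister:

```shell
# Register a service at runtime through the agent HTTP API.
curl -X PUT -d '{"Name": "web2", "Tags": ["rails"], "Port": 8080}' \
  http://localhost:8500/v1/agent/service/register

# Remove it again by service ID (the ID defaults to the name when none is given).
curl -X PUT http://localhost:8500/v1/agent/service/deregister/web2
```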

vagrant@n1:~$ consul agent -server -bootstrap-expect=1 \
  -data-dir=/tmp/consul -node=agent-one -bind=192.168.33.2 \
  -enable-script-checks=true -config-dir=/etc/consul.d

Note: to allow external queries you must add the flag -client 192.168.33.2; this makes consul members stop working locally, so use the API instead:
curl 192.168.33.2:8500/v1/agent/members

vagrant@n2:~$ consul agent -data-dir=/tmp/consul -node=agent-two \
  -bind=192.168.33.3 -enable-script-checks=true -config-dir=/etc/consul.d

vagrant@n1:$ consul join 192.168.33.3   <- https://www.consul.io/docs/internals/gossip.html
vagrant@n1:$ consul members

querying nodes:
vagrant@n1:~$ dig @127.0.0.1 -p 8600 agent-two.node.consul

adding checks

Health checks can be added either through a configuration file or through the HTTP API

vagrant@n2:~# echo '{"check": {"name": "ping", "script": "ping -c1 google.com >/dev/null", "interval": "30s"}}' \
  > /etc/consul.d/ping.json

vagrant@n2:~# echo '{"service": {"name": "web", "tags": ["rails"], "port": 80, "check": {"script": "curl localhost >/dev/null 2>&1", "interval": "10s"}}}' \
  > /etc/consul.d/web.json

vagrant@n2:~$ consul reload

getting information about the checks

vagrant@n1:~$ curl http://localhost:8500/v1/health/state/critical
[{"Node":"agent-two","CheckID":"service:web","Name":"Service 'web' check","Status":"critical","Notes":"","Output":"","ServiceID":"web","ServiceName":"web","ServiceTags":["rails"],"CreateIndex":570,"ModifyIndex":688}]
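The raw response is easier to read through jq; a small sketch of a filter that keeps only the node and check ID, shown against a trimmed sample payload so it runs without a live agent (against the real cluster, pipe the curl above into it):

```shell
# Keep only the node name and check ID of each critical check.
echo '[{"Node":"agent-two","CheckID":"service:web","Status":"critical"}]' \
  | jq -r '.[] | "\(.Node) \(.CheckID)"'
# prints: agent-two service:web
```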

dig @127.0.0.1 -p 8600 web.service.consul should return an empty result. Here I had left web.json on the first node and it did not come back empty; I had to run consul reload to deregister the service (changing a configuration file requires a reload)

accessing externally

to reach the checks from outside the machine you must have added the flag -client 192.168.33.2, and, if you are using the official CentOS images, open the relevant ports:
vagrant@n1:$ sudo firewall-cmd --zone=public --add-port=8500/tcp --permanent
vagrant@n1:$ sudo firewall-cmd --reload

curl http://192.168.33.2:8500/v1/health/state/critical
dig @192.168.33.2 -p 8600 web.service.consul

install httpd on machine 192.168.33.3 and run the two previous commands again:
vagrant@n2:$ sudo yum install httpd
vagrant@n2:$ sudo systemctl start httpd

curl http://192.168.33.2:8500/v1/health/state/critical
dig @192.168.33.2 -p 8600 web.service.consul

KV Data

consul kv get redis/config/minconns
consul kv put redis/config/minconns 1
consul kv put redis/config/maxconns 25
consul kv put -flags=42 redis/config/users/admin abcd1234   <- look into what the flags are for
consul kv get redis/config/minconns
consul kv get -detailed redis/config/minconns
consul kv get -detailed redis/config/users/admin
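The same KV store is reachable over HTTP; a short sketch against the real /v1/kv endpoint (note that GET returns the value base64-encoded inside a JSON envelope unless ?raw is used):

```shell
# Write a key through the KV HTTP API.
curl -X PUT -d '1' http://localhost:8500/v1/kv/redis/config/minconns

# Read it back; the Value field is base64-encoded.
curl http://localhost:8500/v1/kv/redis/config/minconns | jq -r '.[0].Value' | base64 -d

# Or skip the JSON envelope entirely.
curl 'http://localhost:8500/v1/kv/redis/config/minconns?raw'
```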

getting all the keys:
consul kv get -recurse

deleting keys:
consul kv delete redis/config/minconns
consul kv delete -recurse redis

updating keys:
consul kv put foo bar
consul kv get foo
consul kv put foo zip
consul kv get foo

For atomic operations use the Check-And-Set operation (it did not work for me)
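For reference, a sketch of how the CAS flow is supposed to go with the real -cas and -modify-index flags (the index 123 is illustrative; use the one printed by the get):

```shell
# Read the key's current ModifyIndex.
consul kv get -detailed foo        # note the ModifyIndex line, e.g. 123

# This put succeeds only if the key has not been written since index 123;
# otherwise it fails, which is the check-and-set guarantee.
consul kv put -cas -modify-index=123 foo zip
```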

to enable the UI, add the -ui flag

vagrant@n1:~$ consul agent -server -bootstrap-expect=1 \
  -data-dir=/tmp/consul -node=agent-one -bind=192.168.33.2 \
  -enable-script-checks=true -config-dir=/etc/consul.d -client 192.168.33.2 -ui

http://192.168.33.2:8500/ui

consul template

...basic example

screen
vagrant@n1:~$ consul agent -server -bootstrap-expect=1 \
  -data-dir=/tmp/consul -node=agent-one -bind=192.168.33.2 \
  -enable-script-checks=true -config-dir=/etc/consul.d

----- not necessary ------
in screen: ctrl+a, c
vagrant@n1:$ cd /tmp
vagrant@n1:$ wget https://releases.hashicorp.com/consul-template/0.19.0/consul-template_0.19.0_linux_amd64.zip
vagrant@n1:$ unzip consul-template_0.19.0_linux_amd64.zip
vagrant@n1:$ sudo mv consul-template /usr/bin

vagrant@n1:~$ vi in.tpl   <- contents:
{{ key "foo" }}

vagrant@n1:~$ consul-template -template "in.tpl:out.txt" -once

in screen: ctrl+a, c
vagrant@n1:$ consul kv put foo bar
vagrant@n1:$ cat out.txt
---- not necessary ----

screen
vagrant@n2:~$ consul agent -data-dir=/tmp/consul -node=agent-two \
  -bind=192.168.33.3 -enable-script-checks=true -config-dir=/etc/consul.d

in screen: ctrl+a, c
vagrant@n2:$ cd /tmp
vagrant@n2:$ wget https://releases.hashicorp.com/consul-template/0.19.0/consul-template_0.19.0_linux_amd64.zip
vagrant@n2:$ unzip consul-template_0.19.0_linux_amd64.zip
vagrant@n2:$ sudo mv consul-template /usr/bin

vagrant@n2:~$ vi in.tpl   <- contents:
{{ key "foo" }}

vagrant@n2:~$ consul-template -template "in.tpl:out.txt" -once

a change on n1 must show up on n2   <- verify with out.txt not existing beforehand

vagrant@n1:$ consul kv put foo bar
vagrant@n2:$ cat out.txt

...httpd example

httpd must be installed

vagrant@n2:$ sudo mkdir /etc/consul-template
vagrant@n2:$ sudo cp /etc/httpd/conf/httpd.conf /etc/consul-template/httpd.tpl
vagrant@n2:~$ sudo vi /etc/consul-template/httpd.tpl   <- change the Listen line to:
Listen {{ key "service/apache/port" }}

vagrant@n2:~$ consul-template -template "/etc/consul-template/httpd.tpl:/etc/httpd/conf/httpd.conf:systemctl restart httpd"
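The -template argument is three colon-separated fields: source template, destination file, and a command to run after each render. Before wiring in the restart command it can help to preview the render; a sketch using consul-template's -dry flag, which prints the rendered output to stdout without writing the destination file or running any command:

```shell
# Render once to stdout, touching nothing on disk.
consul-template -template "/etc/consul-template/httpd.tpl" -dry -once
```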

from another machine, or from the host:
curl http://192.168.33.3/

vagrant@n1:~$ consul kv put service/apache/port 8080

from another machine, or from the host:
curl http://192.168.33.3:8080/

consul in federated mode (raft consensus)

references:
http://blog.smalleycreative.com/linux/nslookup-is-dead-long-live-dig-and-host/

https://www.consul.io/api/index.html
https://www.consul.io/docs/internals/consensus.html
https://www.consul.io/docs/agent/dns.html
https://www.consul.io/docs/internals/gossip.html
https://www.consul.io/docs/internals/architecture.html


shell history from a registrator experiment, one command per line (history numbers dropped); note that the consul-server container must be running before registrator, and that a first attempt with -advertise=127.0.0.1: (trailing colon) failed:

docker pull gliderlabs/registrator:latest
docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap -advertise=127.0.0.1
docker run -d --name=registrator --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://localhost:8500
docker ps
curl 127.0.0.1:8500/v1/catalog/services
docker run -d -P --name=redis redis:alpine
curl 127.0.0.1:8500/v1/catalog/service/redis
docker run -d -P --name=redis2 redis:alpine
docker run -d -P --name=redis3 redis:alpine
curl 127.0.0.1:8500/v1/catalog/service/redis
docker exec -it 81 /bin/sh

https://tech.bellycard.com/blog/load-balancing-docker-containers-with-nginx-and-consul-template/ https://github.com/thechane/consul/blob/master/docker-compose.yml

./consul-template -consul-addr "127.0.0.1:8500" -template "/etc/haproxy/haproxy.tpl:/etc/haproxy/haproxy.cfg:service haproxy restart"


global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
    bind *:8088
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost

{{ range service "web" }}
server {{ .Name }} {{ .Address }}:80 check {{ end }}
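With two healthy instances of the web service registered, the range above would render into server lines like these (addresses illustrative; note that .Name gives both lines the same server name — using .ID instead would make them unique):

```
    server web 192.168.33.3:80 check
    server web 192.168.33.4:80 check
```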