@ryancnelson
Last active Aug 29, 2015
Get VPN creds from Keith. This is for OpenVPN clients and is suitable for use
with Tunnelblick, or you can use the simple OpenVPN (v2) command-line client.
To get on the console:
IPs are found here:
/usr/pkg/bin/ipmitool -I lanplus -U ADMIN -P ADMIN -H [console ip] sol activate
# Manta Init
manta-init -c 10 -e -s production -r staging
# Topology
| Serial | Host | Server UUID | IP | Console IP | DC | Purpose |
|---|---|---|---|---|---|---|
| S12612523710138 | RA10138 | cf4414d0-3047-11e3-8545-002590c3f2d4 | | | staging-1 | headnode |
| S12612523714872 | RA14872 | aac3c402-3047-11e3-b451-002590c57864 | | | staging-1 | manta services / sdc HA services |
| S12612523710146 | RA10146 | 445aab6c-3048-11e3-9816-002590c3f3bc | | | staging-1 | manta compute |
| S12612523710134 | RA10134 | c9e944c4-3047-11e3-b334-002590c3f060 | | | staging-1 | provisionable |
| S12612523710111 | RA10111 | 17ca43dc-3048-11e3-9cae-002590c3f2e0 | | | staging-2 | headnode |
| S12612523710127 | RA10127 | 8f998b4e-3047-11e3-b50a-002590c3eefc | | | staging-2 | manta services / sdc HA services |
| S12612523710141 | RA10141 | cc5ce3dc-3047-11e3-8bbe-002590c3ece4 | | | staging-2 | manta compute |
| S12612523714867 | RA14867 | f2ba52d0-3047-11e3-b968-002590c3effc | | | staging-3 | headnode |
| S12612523710129 | RA10129 | cc6abe76-3047-11e3-be2c-002590c3f0e0 | | | staging-3 | manta services / sdc HA services |
| S12612523714871 | RA14871 | cbb7a548-3047-11e3-870f-002590c7c6dc | | | staging-3 | manta compute |
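A small helper makes it easy to resolve a UUID from the table back to its host, DC, and purpose when you're staring at CNAPI output. A sketch; only the staging-1 rows are embedded here, and the rest can be added the same way:

```shell
#!/bin/bash
# Resolve a (partial) server UUID from the topology table to its host,
# DC, and purpose. Only the staging-1 rows are embedded for brevity.
lookup() {
    grep "$1" <<'EOF'
S12612523710138 RA10138 cf4414d0-3047-11e3-8545-002590c3f2d4 staging-1 headnode
S12612523714872 RA14872 aac3c402-3047-11e3-b451-002590c57864 staging-1 manta services / sdc HA services
S12612523710146 RA10146 445aab6c-3048-11e3-9816-002590c3f3bc staging-1 manta compute
S12612523710134 RA10134 c9e944c4-3047-11e3-b334-002590c3f060 staging-1 provisionable
EOF
}
# Print host, DC, and the (possibly multi-word) purpose.
lookup cf4414d0 | awk '{out = $2 " " $4; for (i = 5; i <= NF; i++) out = out " " $i; print out}'
# prints: RA10138 staging-1 headnode
```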
* aac3c402-3047-11e3-b451-002590c57864 staging-1 (S12612523714872)
    * Binder #1
    * Postgres, Shard 1, #1 (marlin)
    * Postgres, Shard 2, #1 (index)
    * Postgres, Shard 3, #1 (index)
    * Moray, Shard 1, #1
    * Moray, Shard 2, #1
    * Moray, Shard 3, #1
    * Electric Moray #1
    * Webapi #1
    * Job Supervisor #1
    * Job Puller #1
    * Medusa
* 8f998b4e-3047-11e3-b50a-002590c3eefc staging-2 (S12612523710127)
    * Binder #2
    * Postgres, Shard 1, #2
    * Postgres, Shard 2, #2
    * Postgres, Shard 3, #2
    * Moray, Shard 1, #2
    * Moray, Shard 2, #2
    * Moray, Shard 3, #2
    * Authcache #1
    * Webapi #2
    * Load balancer #1
    * Ops
* cc6abe76-3047-11e3-be2c-002590c3f0e0 staging-3 (S12612523710129)
    * Binder #3
    * Postgres, Shard 1, #3
    * Postgres, Shard 2, #3
    * Postgres, Shard 3, #3
    * Electric Moray #2
    * Authcache #2
    * Load balancer #2
    * Job Supervisor #2
    * Job Puller #2
    * Madtom
    * Marlin Dashboard
* Marlin nodes (S12612523710146, S12612523710141, S12612523714871):
    * 445aab6c-3048-11e3-9816-002590c3f3bc staging-1
    * cc5ce3dc-3047-11e3-8bbe-002590c3ece4 staging-2
    * cbb7a548-3047-11e3-870f-002590c7c6dc staging-3
    * marlin-agent in GZ
    * Storage [N .. N + 2]
    * Marlin [N .. N + 127]
# Installing a new Headnode:
## Verify the host is on an external network
[root@ /]# echo 'nameserver [dns server ip]' > /etc/resolv.conf
[root@ /]# cat /etc/resolv.conf
If not, you'll need to generate the answers.json now, get it onto the node, find
the external vlan id for that host, and put the node on the external network:
nate:lab nfitch$ node ./support/genanswers.js -f foobarbaz -r staging-1 | json | pbcopy
[root@ /tmp]# echo '[paste]' > /tmp/answers.json
[root@ /tmp]# /mnt/usbkey/scripts/ /tmp/answers.json [vlan id]
## Find the latest usb-headnode, download it, and unpack it.
ops$ mls /Joyent_Dev/stor/builds/usbheadnode | sort | tail -2
ops$ mls /Joyent_Dev/stor/builds/usbheadnode/master-20140304T202356Z/usbheadnode/usb-master-20140304T202356Z-g274e533.tgz
ops$ MANTA_URL= msign /Joyent_Dev/stor/builds/usbheadnode/master-20140304T202356Z/usbheadnode/usb-master-20140304T202356Z-g274e533.tgz
ops$ mget /Joyent_Dev/stor/builds/usbheadnode/master-20140304T202356Z/md5sums.txt | grep usb-master
[root@ /tmp]# curl -k '[msign output]' > usb-master-20140304T202356Z-g274e533.tgz
[root@ /tmp]# openssl dgst -md5 usb-master-20140304T202356Z-g274e533.tgz
[root@ /tmp]# tar -xzf usb-master-20140304T202356Z-g274e533.tgz
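Comparing the digest by eye is error-prone, so the check can abort the flash automatically on a mismatch. A sketch; the file and expected digest are stand-ins for the real usb-headnode tarball and the matching line from md5sums.txt:

```shell
#!/bin/bash
# Abort unless the downloaded file's md5 matches the published digest.
# FILE and EXPECTED are stand-ins for the usb-headnode tarball and the
# digest taken from md5sums.txt.
FILE=$(mktemp)
printf 'hello\n' > "$FILE"
EXPECTED=b1946ac92492d2347c6235b4d2611184   # md5 of the stand-in contents
ACTUAL=$(openssl dgst -md5 "$FILE" | awk '{print $NF}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "md5 OK: $ACTUAL"
else
    echo "md5 mismatch: got $ACTUAL, want $EXPECTED" >&2
    exit 1
fi
```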
## Find which device the usbkey is
[root@ /mnt/usbkey]# df -h | grep usbkey
/dev/dsk/c17t0d0p0:1 3.7G 2.2G 1.5G 60% /mnt/usbkey
Change dsk to rdsk:
[root@ /mnt/usbkey]# ls /dev/rdsk/c17t0d0p0
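The substitution is mechanical, so the raw-device path can be derived from the df output instead of retyped; a sketch using the device above:

```shell
#!/bin/bash
# Turn a block-device path from df into its raw-device equivalent.
DSK=/dev/dsk/c17t0d0p0     # example device from the df output above
RDSK=${DSK/dsk/rdsk}       # replace the first "dsk" with "rdsk"
echo "$RDSK"               # prints /dev/rdsk/c17t0d0p0
```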
## Unmount the usbkey:
[root@ /mnt/usbkey]# umount -f /mnt/usbkey
## Overwrite the usbkey
[root@ /mnt/usbkey]# dd bs=1M if=/tmp/usb-master-20140304T202356Z-g274e533-4gb.img of=/dev/rdsk/c17t0d0p0
## Remount the usbkey to see that what's there is sane:
[root@ /mnt/usbkey]# mount -F pcfs /dev/dsk/$(disklist -r)p1 /mnt/usbkey
[root@ /mnt/usbkey]# cat version
[root@ /mnt/usbkey]# ls /mnt/usbkey/boot/grub/menu.lst && find /mnt/usbkey/os/ -name "boot_archive"
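Those spot checks can be rolled into one guard script. The sketch below builds a stand-in directory so it can run anywhere; on a real headnode, point MNT at /mnt/usbkey instead:

```shell
#!/bin/bash
# Sanity-check a usbkey mount: the version file, grub menu, and at least
# one boot_archive must exist. MNT is a stand-in created for illustration.
MNT=$(mktemp -d)
mkdir -p "$MNT/boot/grub" "$MNT/os/20140304T202356Z/platform/i86pc/amd64"
echo "usb-master-20140304T202356Z-g274e533" > "$MNT/version"
touch "$MNT/boot/grub/menu.lst" \
      "$MNT/os/20140304T202356Z/platform/i86pc/amd64/boot_archive"

for f in "$MNT/version" "$MNT/boot/grub/menu.lst"; do
    [ -f "$f" ] || { echo "missing $f" >&2; exit 1; }
done
find "$MNT/os" -name boot_archive | grep -q . || { echo "no boot_archive" >&2; exit 1; }
echo "usbkey looks sane: $(cat "$MNT/version")"
```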
## Generate and put the answers.json in the right place, then reboot
nate:lab nfitch$ node ./support/genanswers.js -f foobarbaz -r staging-1 | json | pbcopy
[root@ /]# echo '[...]' > /mnt/usbkey/private/answers.json
[root@ /]# reboot
## You should see multiple reboots and sdc should automagically set up.
# Slaving services on one headnode to another
## In each DC, add the routes for the other admin networks
$ sdc-napi /networks | json -Ha -c 'this.name === "admin"' uuid subnet provision_start_ip
staging-1 579a2a1c-888f-4c1b-98b5-29248f662644
staging-2 d828e85d-6fb4-42bb-8550-b0b3b0fae7b1
staging-3 32ed62bb-a4f9-4364-82fd-67aa4a1defcf
sdc-napi /networks/579a2a1c-888f-4c1b-98b5-29248f662644 -X PUT -d@- <<EOF
{
    "routes": {
        "[staging-2 admin subnet]": "[gateway ip]",
        "[staging-3 admin subnet]": "[gateway ip]"
    }
}
EOF
sdc-napi /networks/d828e85d-6fb4-42bb-8550-b0b3b0fae7b1 -X PUT -d@- <<EOF
{
    "routes": {
        "[staging-1 admin subnet]": "[gateway ip]",
        "[staging-3 admin subnet]": "[gateway ip]"
    }
}
EOF
sdc-napi /networks/32ed62bb-a4f9-4364-82fd-67aa4a1defcf -X PUT -d@- <<EOF
{
    "routes": {
        "[staging-1 admin subnet]": "[gateway ip]",
        "[staging-2 admin subnet]": "[gateway ip]"
    }
}
EOF
Now you should be able to ping between all hosts on the admin networks.
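That check is worth scripting when there are several admin networks; a minimal sketch, where HOSTS is a stand-in list (the real admin IPs come from the networks above):

```shell
#!/bin/bash
# Ping each admin-network host once and report its status.
# HOSTS is a stand-in; substitute the admin IPs of the other headnodes.
HOSTS="127.0.0.1"
for h in $HOSTS; do
    if ping -c 1 "$h" >/dev/null 2>&1; then
        echo "$h reachable"
    else
        echo "$h unreachable"
    fi
done
```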
## Make some ufds instances become slaves...
Note: Until HEAD-1926, you may have to comment out the external nic check.
Get the ip address for ufds:
[root@headnode (staging-1) /usbkey/scripts]# sdc-login ufds ifconfig
Then on the other nodes:
[root@headnode (staging-3) /opt/smartdc/bin]# sdc-ufds-m2s
Follow the prompts.
## Add all the devs to ldap
nate:~ nfitch$ scp ~/projects/manta-deployment/ufds/devs.ldif staging-1:/var/tmp/.
On the master ufds:
[root@headnode (staging-1) ~]# /opt/smartdc/bin/sdc-ldap add -c -f /var/tmp/devs.ldif
Verify that something is found in all dcs.
[root@headnode (staging-1) ~]# sdc-ldap search login=nfitch
## Make some sapi services slaves...
SAPI works by pointing the *sapi service* at a moray in a different dc. So,
first find the moray master in another dc, on the admin network:
[root@headnode (staging-1) /usbkey/scripts]# sdc-login moray ifconfig
Then update the sapis in the other two data centers:
[root@headnode (staging-3) /opt/smartdc/bin]# sapiadm update $(sdc-sapi /services?name=sapi | json -Ha uuid) metadata.MASTER_MORAY_IP=
[root@headnode (staging-3) /opt/smartdc/bin]# sapiadm update $(sdc-sapi /services?name=sapi | json -Ha uuid) metadata.MASTER_MORAY_PORT=2020
Then log into the sapis, make sure that they have a new config, and restart
[root@headnode (staging-3) /opt/smartdc/bin]# sdc-login sapi "json -f /opt/smartdc/sapi/etc/config.json moray"
[root@headnode (staging-3) /opt/smartdc/bin]# sdc-login sapi "svcadm restart sapi"
If you're paranoid, you can log into the sapi zone and check the logs for:
[2014-03-06T00:31:27.646Z] INFO: sapi/68535 on 399f1aa5-e1ff-4249-b768-15d610dbb9f0: moray: connected (tag=master_moray, client="[object MorayClient<host=>]")
# Other CNS
## Bringing them up on a new platform
PXE boot all the CNs that should attach themselves to the headnode, then verify
that they are all listed in CNAPI:
[root@headnode (staging-1) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /servers/$l | json -Ha uuid sysinfo."Serial Number"; done
## Set them up
Each of the CNs needs to be set up as an SDC node. In a new DC you can find all the CNs that need to be set up with:
[root@headnode (staging-1) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /servers/$l | json -Ha -c "headnode === false && setup === false" uuid sysinfo."Serial Number"; done
For each of those uuids, run this, where xxxxx is the last 5 digits of the serial number:
[root@headnode (staging-1) ~]# sdc-server setup 445aab6c-3048-11e3-9816-002590c3f3bc hostname=RAxxxxx
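Since the hostname is just "RA" plus the serial's last five digits, it can be derived rather than typed when scripting the setup of many CNs; a sketch using a serial from the topology table:

```shell
#!/bin/bash
# Build the RAxxxxx hostname from a serial number: "RA" plus the last
# five digits of the serial.
serial=S12612523710146       # staging-1 manta compute, from the topology table
hostname="RA${serial: -5}"   # bash substring expansion: last 5 characters
echo "$hostname"             # prints RA10146
```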
You can find the job uuids with:
[root@headnode (staging-1) ~]# sdc-workflow /jobs | json -Ha -c 'execution === "running"' uuid
Once all CNs have been setup, verify:
[root@headnode (staging-1) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /servers/$l | json -Ha uuid setup hostname; done
## Mark the Manta CNs as Manta CNs
For each server that is a manta node:
echo '{"comments": "Manta Node","traits": {"internal": "Manta Node"}}' | sdc-cnapi /servers/$SERVER_UUID -X POST -d@-
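Looping that over the three Manta compute CNs from the topology table can be sketched as a dry run; the outer echo prints each command instead of executing it, so nothing here actually hits CNAPI until you drop it:

```shell
#!/bin/bash
# Dry run: print the sdc-cnapi command that would tag each Manta CN.
# Remove the surrounding echo "..." to actually apply the trait.
payload='{"comments": "Manta Node","traits": {"internal": "Manta Node"}}'
for SERVER_UUID in \
    445aab6c-3048-11e3-9816-002590c3f3bc \
    cc5ce3dc-3047-11e3-8bbe-002590c3ece4 \
    cbb7a548-3047-11e3-870f-002590c7c6dc; do
    echo "echo '$payload' | sdc-cnapi /servers/$SERVER_UUID -X POST -d@-"
done
```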
Verify you have the right ones with:
[root@headnode (staging-1) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /servers/$l | json -Ha uuid sysinfo."Serial Number" traits; done
## Set default_console to serial
Make sure that all nodes (including the headnode) have their default console set to serial:
[root@headnode (staging-1) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do echo -n "$l "; sdc-cnapi /boot/$l | json -Ha default_console; done
If not:
[root@headnode (staging-1) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /boot/$l -d '{ "default_console": "serial" }' -X POST; done
Then reboot any that don't have the correct Boot Parameters (ttyb in our case):
[root@headnode (staging-2) ~]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /servers/$l | json -Ha uuid setup hostname default_console sysinfo."Boot Parameters".console; done
# Setup Manta
## Manta Deployment Zone
On all headnodes:
[root@headnode (staging-1) ~]# /usbkey/scripts/
If any of the deployment zones are out of date, you'll need to upgrade:
Find the latest:
[root@headnode (staging-1) ~]# updates-imgadm list | grep manta-deployment | tail -1
See what the current deployed is:
[root@headnode (staging-1) ~]# sdc-vmapi /vms | json -Ha -c 'this.alias === "manta0"' image_uuid
If they aren't the same... get the script:
ops$ MANTA_URL= msign /Joyent_Dev/stor/builds/incr-upgrade/master-20140305T215942Z/incr-upgrade/incr-upgrade-master-20140305T215942Z-g4524598.tgz
[root@headnode (staging-1) /var/tmp]# curl -k '[incr-upgrade-url]' > incr-upgrade-master-20140305T215942Z-g4524598.tgz
[root@headnode (staging-1) /var/tmp]# tar -xvf incr-upgrade-master-20140305T215942Z-g4524598.tgz
[root@headnode (staging-1) /var/tmp]# ln -s incr-upgrade-master-20140305T215942Z-g4524598 incr-upgrade
Download and upgrade:
[root@headnode (staging-1) ~]# /var/tmp/incr-upgrade/ ec2874ba-c36a-6ceb-9e23-8c29d9bb6e61
[root@headnode (staging-3) /var/tmp]# sapiadm reprovision $(vmadm lookup alias=~manta[0-9]) ec2874ba-c36a-6ceb-9e23-8c29d9bb6e61
## Manta Networking
See examples in:
Some helpful tips:
"manta_nodes" - All nodes connected to the manta network (service and compute nodes)
"marlin_nodes" - All nodes connected to the mantanat network (compute nodes)
All the networking information (vlans, ip cidrs, etc) should be provided by networking admin.
Mac mappings will also need to be provided by network admins.
"mac_mappings" - Service hosts should only have the "manta" property.
"mac_mappings" - Compute hosts should have both the "manta" and "mantanat" properties.
For staging we got the mac mappings two ways:
1. sysinfo showed that admin is on a different ixgbe than manta:
[root@headnode (staging-1) ~]# sdc-oneachnode -n 445aab6c-3048-11e3-9816-002590c3f3bc 'sysinfo | json "Network Interfaces"'
That lists the nics; we took the mac address from the ixgbe interface that wasn't admin.
2. genanswers.js:
nate:lab nfitch$ node ./support/genanswers.js -f staging-2 -r staging-2 -h S12612523710141 | json admin_nic external_nic
The nics should be different; take the 2nd nic (the external one).
One of those needs to be created for each DC. Then run each of them in the
corresponding DC:
nate:tmp nfitch$ scp -r dev:~/projects/manta-deployment/networking/configs .
nate:tmp nfitch$ scp configs/staging-1.json staging-1:/var/tmp/.
[root@headnode (staging-1) ~]# ln -f -s /zones/$(vmadm lookup alias=~manta[0-9])/root/opt/smartdc/manta-deployment/networking /var/tmp/networking
[root@headnode (staging-1) ~]# cd /var/tmp/networking
[root@headnode (staging-1) /var/tmp/networking]# ./ /var/tmp/staging-1.json | tee /var/tmp/manta-net.log
[root@headnode (staging-1) /var/tmp/networking]# sdc-cnapi /servers | json -Ha uuid | while read l; do sdc-cnapi /servers/$l | json -Ha uuid sysinfo."Network Interfaces"; done
Verify that things look ok:
[root@headnode (staging-1) ~]# sdc-napi /networks
## Manta init
You need to manta-init in the same DC where the UFDS master is. Ideally, that
would be the same place as the sapi master. Log into the manta zone in the dc
where the ufds master is:
[root@33c7aa56-92fd-403e-b16b-bff137ffc8be (staging-1:manta0) ~]# manta-init -c 10 -e -s production -r staging
Check that the application looks like it should:
[root@headnode (staging-1) ~]# sdc-sapi /applications?name=manta
Once that looks good, log into the other manta zones on the other HNs and
manta-init like the above. When the manta-init is complete, verify that the
manta application is only where the sapi-master is:
[root@headnode (staging-2) ~]# sdc-sapi /applications | json -Ha name
[root@headnode (staging-2) ~]# sdc-sapi /applications?include_master=true | json -Ha name
## Self-signed cert-land?
If manta is going to be hosted with a self-signed cert, you'll want to turn
manta into "insecure" mode:
echo '{ "metadata": { "MANTA_REJECT_UNAUTHORIZED": false, "MANTA_TLS_INSECURE": "1" } }' | sapiadm update [manta app uuid]
## Manta Deploy services
You can either deploy all the services manually by selecting the service and
server to deploy to or use manta-adm.
Follow this order for deployments:
1. nameservice
1. postgres
1. moray
1. manta-shardadm
- manta-shardadm set -m "1.moray"
- manta-shardadm set -s "1.moray"
- manta-shardadm set -i "2.moray 3.moray"
1. generate ring
- -v 10000000 -p 2020
1. electric-moray
1. storage
1. authcache
1. webapi
1. loadbalancer
- sdc-napi /nics/90e2ba4a34cd
- sdc-server update-nictags -s 8f998b4e-3047-11e3-b50a-002590c3eefc "external_nic=90:e2:ba:4a:34:cd"
1. jobsupervisor
1. jobpuller
1. medusa
1. marlin (2 each)
- manta-marlin -s <cn-uuid> (do this for each manta-cn)
1. ops
1. madtom
- /opt/smartdc/madtom/bin/
1. marlin-dashboard
- /opt/smartdc/marlin-dashboard/etc/stage-$REGION.json
- sdc-cnapi /servers/cbb7a548-3047-11e3-870f-002590c7c6dc | json -Ha sysinfo."Network Interfaces".ixgbe0."NIC Names".0 sysinfo."Network Interfaces".ixgbe0.ip4addr
- sdc-vmapi /vms/e103dd40-df0e-492d-b61f-5a80d9ecbcb5 | json -Ha nics | json -Ha nic_tag ip | grep admin
1. marlin (the rest)
- manta-adm show -j >/var/tmp/adm.json
- vi /var/tmp/adm.json  # change the number of marlin zones
- manta-adm update -n /var/tmp/adm.json
- manta-adm update /var/tmp/adm.json