mancdaz / rcpe_deploy-new
Created February 10, 2012 11:35
conceptual implementation of targeted crowbar proposals within rcpe_deploy
{
  "id": "rcb deployment setup",
  "description": "Setup for deploy script",
  "attributes": {
    "network": {
      "reserved": {
        "bastion": "172.31.0.5",
        "pxeapp": "172.31.0.6",
        "infra": "172.31.0.9",
        "infra_mac": "60:eb:69:6e:f0:31",
mancdaz / nuke-instance
Created September 19, 2012 10:39
force remove instance from nova database (including all foreign key constraints)
USE nova;
SET @uuid = '0b8f832f-f013-441a-9f37-51b68f755359';
UPDATE floating_ips SET fixed_ip_id=NULL,host=NULL WHERE fixed_ip_id IN ( SELECT id FROM fixed_ips WHERE instance_id IN ( SELECT id FROM instances WHERE uuid = @uuid ) );
UPDATE fixed_ips SET updated_at=NULL,deleted_at=NULL,deleted=0,instance_id=NULL,allocated=0,leased=0,reserved=0,virtual_interface_id=NULL,host=NULL WHERE instance_id IN ( SELECT id FROM instances WHERE uuid = @uuid );
DELETE FROM security_group_instance_association WHERE instance_id IN ( SELECT id FROM instances WHERE uuid = @uuid );
DELETE FROM instance_info_caches WHERE instance_id IN ( SELECT id FROM instances WHERE uuid = @uuid );
DELETE FROM instances WHERE uuid = @uuid;
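
A hedged usage sketch, assuming the statements above are saved as nuke-instance.sql with @uuid edited to the instance you want to purge; the credentials are illustrative:

```bash
# feed the cleanup statements to the nova database
# (the script already selects the schema with USE nova)
mysql -u root -p < nuke-instance.sql
```
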
[2013-04-04T13:39:11+00:00] WARN: Cloning resource attributes for execute[apt-get update] from prior resource (CHEF-3694)
[2013-04-04T13:39:11+00:00] WARN: Previous execute[apt-get update]: /var/chef/cache/cookbooks/apt/recipes/default.rb:29:in `from_file'
[2013-04-04T13:39:11+00:00] WARN: Current execute[apt-get update]: /var/chef/cache/cookbooks/mysql/recipes/ruby.rb:23:in `from_file'
--
[2013-04-04T13:39:29+00:00] WARN: Cloning resource attributes for service[rabbitmq-server] from prior resource (CHEF-3694)
[2013-04-04T13:39:29+00:00] WARN: Previous service[rabbitmq-server]: /var/chef/cache/cookbooks/rabbitmq/recipes/default.rb:87:in `from_file'
[2013-04-04T13:39:29+00:00] WARN: Current service[rabbitmq-server]: /var/chef/cache/cookbooks/rabbitmq-openstack/recipes/server.rb:64:in `from_file'
[2013-04-04T13:39:29+00:00] WARN: Cloning resource attributes for rabbitmq_user[guest] from prior resource (CHEF-3694)
[2013-04-04T13:39:29+00:00] WARN: Previous rabbitmq_user[guest]: /var/chef/cache/cookbooks/rabbit
default["ha"]["available_services"]["keystone-service-api"] = {
"role" => "keystone-api",
"namespace" => "keystone",
"service" => "service-api",
"service_type" => "identity",
"lb_mode" => "http",
"lb_algorithm" => "roundrobin",
"lb_options" => ["forwardfor", "httpchk", "httplog"],
"vrid" => 0,
"vip_network" => "public"
mancdaz / gist:6465064
Last active December 22, 2015 11:19
ceph/cinder/nova notes

## cinder/nova

  1. Live migration cannot work. Update: apparently it does work in Grizzly.
  • Real live migration will fail because one of nova's preflight checks tests whether /var/lib/nova/instances is on shared storage (it writes a test file to the shared location on one host, then checks whether it is visible from the other). Since RBD is not 'shared storage' in this sense, the check fails. NOT fixed in Havana, though it's noted that ceph and libvirt will play nicely together outside of nova.

  • Block migration will fail because there is no actual disk for the process to copy from one host to another.

  2. Each compute host needs to be 'manually' configured with the libvirt secret/key when using cephx (see the sketch below).
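
A minimal sketch of that manual step, assuming a cephx client named client.cinder; the UUID is an illustrative placeholder, not taken from these notes:

```bash
# register the cephx key with libvirt on each compute host
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
  --base64 "$(ceph auth get-key client.cinder)"
```
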
mancdaz / gist:6609729
Last active December 23, 2015 08:48
rados benchmarks
individual bench per OSD:
All tests run from node1
# ceph tell osd.* bench -f plain
osd.0: bench: wrote 1024 MB in blocks of 4096 KB in 20.138275 sec at 52068 KB/sec
osd.1: bench: wrote 1024 MB in blocks of 4096 KB in 21.206290 sec at 49446 KB/sec
osd.2: bench: wrote 1024 MB in blocks of 4096 KB in 21.361698 sec at 49086 KB/sec
osd.3: bench: wrote 1024 MB in blocks of 4096 KB in 20.538838 sec at 51053 KB/sec
osd.4: bench: wrote 1024 MB in blocks of 4096 KB in 21.490303 sec at 48792 KB/sec
osd.5: bench: wrote 1024 MB in blocks of 4096 KB in 21.130678 sec at 49623 KB/sec
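
For an aggregate figure to compare against the per-OSD numbers above, a pool-level benchmark is another option; a hedged sketch, with the pool name 'rbd' and the 30-second duration chosen purely for illustration:

```bash
# write benchmark, keeping the objects so a read benchmark can follow
rados bench -p rbd 30 write --no-cleanup
# sequential read benchmark against the objects just written
rados bench -p rbd 30 seq
# remove the benchmark objects afterwards
rados -p rbd cleanup
```
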
#!/bin/bash
# Tear down an ansible-lxc-rpc deployment: destroy all containers, wipe the
# OpenStack data and LXC rootfs directories on every host, remove the LXC
# logical volumes, and delete the generated inventory.
cd /root/ansible-lxc-rpc/rpc_deployment
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/setup/destroy-containers.yml
ansible hosts -m shell -a 'rm -fr /openstack'
ansible hosts -m shell -a 'rm -fr /var/lib/lxc/*'
ansible hosts -m shell -a 'lvremove lxc -f'
rm /etc/rpc_deploy/rpc_inventory.json
mancdaz / add images & security groups
Last active January 29, 2016 09:06
new build test setup
# add cirros image
cd /tmp
wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --container-format bare --disk-format qcow2 --file cirros-0.3.3-x86_64-disk.img --name cirros
# add trusty image
cd /tmp
wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
glance image-create --container-format bare --disk-format qcow2 --file trusty-server-cloudimg-amd64-disk1.img --name trusty
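
The security-group half of this gist is cut off in the preview; the following is not from the gist, just a hedged sketch of the era-typical commands for opening SSH and ping to test instances:

```bash
# allow SSH and ICMP into the default security group (nova CLI syntax)
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
```
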

# Removing a container entirely

NB: these instructions are based on a Juno install, so file names and locations contain references to rpc_*

Let's say I have 3 keystone containers and I want to remove one of them. There are essentially 3 steps (a sketch of all three follows the list):

  1. Remove it from the user config file: /etc/rpc_deploy/rpc_user_config.yml
  2. Remove the container from the inventory - if you didn't do this, the container would get recreated the next time you ran the 'setup' part of the ansible playbooks, because the inventory still holds references to it
  3. Destroy the container - this physically removes the container from the host that is running it
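
A hedged sketch of those steps for one keystone container; the container name is illustrative and the two edits are manual:

```bash
CONTAINER=aio1_keystone_container-28d6792d   # hypothetical container name

# 1. delete the container's entry from the user config file
vi /etc/rpc_deploy/rpc_user_config.yml

# 2. remove every reference to the container from the generated inventory so
#    the setup playbooks do not recreate it on the next run
vi /etc/rpc_deploy/rpc_inventory.json

# 3. on the host that runs it, stop and destroy the container
lxc-stop -n "$CONTAINER"
lxc-destroy -n "$CONTAINER"
```
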

#### Given a commit SHA, find out whether a deployed wheel contains that commit

i.e. does our deployed neutron wheel contain a commit with a SHA of 9ff5bb967105451a1c3c1a6dfdbeb0ec979afaeb?

##### 1. Find out the SHA of the commit from which our wheel was built

Log in to the neutron container of interest and get info on the wheel:

# lxc-attach -n aio1_neutron_agents_container-28d6792d
# pbr info neutron
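
A hedged sketch of the follow-up comparison, run from a clone of the neutron repo; WHEEL_SHA is a placeholder for whatever SHA the step above reports:

```bash
WHEEL_SHA=replace_with_sha_from_pbr_info   # placeholder, not a real SHA
COMMIT=9ff5bb967105451a1c3c1a6dfdbeb0ec979afaeb
# exit status 0 means the commit is an ancestor of the wheel's SHA,
# i.e. the deployed wheel contains it
if git merge-base --is-ancestor "$COMMIT" "$WHEEL_SHA"; then
    echo "deployed wheel contains $COMMIT"
else
    echo "deployed wheel does not contain $COMMIT"
fi
```
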