Extra debug output with https://github.com/cwolferh/heat-scratch/tree/debug_eager_load_raw_template
See line numbers 665 and 533 below -- it is unclear why raw_template is not eager-loaded in the second case.
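For context, eager loading here means attaching the raw_template load to the original stack query so no second lazy SELECT fires later. A minimal SQLAlchemy sketch, assuming Stack/RawTemplate models with a raw_template relationship as in heat.db.sqlalchemy.models (illustrative only, not the actual Heat query):

# Illustrative only -- not the actual Heat code. Eager-load raw_template in the
# same query so a later stack.raw_template access does not trigger a lazy load.
from sqlalchemy.orm import joinedload

def stack_get_with_template(session, models, stack_id):
    # 'models' stands in for heat.db.sqlalchemy.models; the 'raw_template'
    # relationship name is assumed from the debug branch linked above.
    return (session.query(models.Stack)
            .options(joinedload(models.Stack.raw_template))
            .filter_by(id=stack_id)
            .first())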
======================================================================
FAIL: heat.tests.convergence.test_converge.ScenarioTest.test_scenario(basic_update_delete)
tags: worker-4
----------------------------------------------------------------------
Empty attachments:
pythonlogging:'alembic'
pythonlogging:'barbicanclient'
# probably a good idea to stop h-eng (the heat-engine service) first
delete from event;
delete from stack_tag;
delete from resource_data;
delete from watch_rule;
delete from watch_data;
delete from software_config;
delete from software_deployment;
delete from snapshot;
delete from stack_lock;
# Sample ``local.conf`` for user-configurable variables in ``stack.sh``
# NOTE: Copy this file to the root DevStack directory for it to work properly.
# ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``.
# This gives it the ability to override any variables set in ``stackrc``.
# Also, most of the settings in ``stack.sh`` are written to only be set if no
# value has already been set; this lets ``local.conf`` effectively override the
# default values.
Using test_template_resource.py::TemplateResourceUpdateTest to show
how resource_properties_data rows are referenced by the events of both
deleted and non-deleted stacks. See commentary at the end.
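(As a rough illustration of what "referenced from events" means at the DB level, the sketch below lists which resource_properties_data rows a stack's events point at; the event.rsrc_prop_data_id column and table names are assumptions based on the feature under test, not verified against the final schema.)

# Rough sketch, schema names assumed (event.rsrc_prop_data_id -> resource_properties_data.id).
import sqlalchemy as sa

def referenced_rsrc_prop_data(engine, stack_id):
    query = sa.text(
        "SELECT rsrc_prop_data_id FROM event WHERE stack_id = :stack_id")
    with engine.connect() as conn:
        return {row.rsrc_prop_data_id
                for row in conn.execute(query, {"stack_id": stack_id})}

# Comparing the sets for the deleted nested stack and its replacement shows
# which property-data rows are shared between them.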
main_template = '''
HeatTemplateFormatVersion: '2012-12-12'
Resources:
  the_nested:
    Type: the.yaml
    Properties:
cwolferh / gist:b7ff12d6f6700171a112 -- osp7-ofi-pcs-resource-audit (created May 19, 2015 00:18)
puppet output Debug: try 1/4: /usr/sbin/pcs resource create memcached systemd:memcached --clone
pcmk/memcached.scenario:pcs resource create memcached systemd:memcached --clone
puppet output Debug: try 1/4: /usr/sbin/pcs resource create haproxy systemd:haproxy op monitor start-delay=10s --clone
pcmk/lb.scenario:pcs resource create lb-haproxy systemd:haproxy op monitor start-delay=10s --clone
puppet output Debug: try 1/4: /usr/sbin/pcs resource create galera galera additional_parameters='--open-files-limit=16384' enable_creation=true wsrep_cluster_address="gcomm://pcmk-c1a1,pcmk-c1a2,pcmk-c1a3" meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master
pcs resource create galera galera enable_creation=true wsrep_cluster_address="gcomm://${node_list}" additional_parameters='--open-files-limit=16384' meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master
Need to support:
# clone
pcs create ... --clone
pcs create ... --clone interleave=true
pcs create ... --clone globally-unique=true clone-max=3 interleave=true
pcs create ... --clone interleave=true --disabled --force # maybe, see compute-managed.scenario
# op
pcs create ... resource_name op monitor start-delay=10s ...
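The comparisons above were eyeballed; a throwaway sketch (not part of the audit itself) for pulling the 'pcs resource create ...' lines out of a puppet debug log and a phd scenario file and diffing them -- file paths below are placeholders:

# Throwaway helper, paths are placeholders; extracts 'pcs resource create ...'
# arguments from a file and compares two sources.
import re

PCS_RE = re.compile(r'pcs resource create\s+(.*)')

def pcs_create_cmds(path):
    with open(path) as f:
        return {m.group(1).strip() for line in f if (m := PCS_RE.search(line))}

puppet_cmds = pcs_create_cmds('puppet-debug.log')     # placeholder
scenario_cmds = pcs_create_cmds('pcmk/lb.scenario')   # example scenario file
for cmd in sorted(puppet_cmds ^ scenario_cmds):
    print('only on one side:', cmd)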
cwolferh / gist:fdaef9daba266863c36a -- pcmk_resource tests (last active August 29, 2015 14:20)
#pcmk/lb.scenario:pcs resource create lb-haproxy systemd:haproxy op monitor start-delay=10s --clone
pacemaker::resource::service {"lb-haproxy":
  service_name => 'haproxy',
  op_params    => 'monitor start-delay=10s',
  clone_params => '',
}
# results in Debug: /usr/sbin/pcs resource create lb-haproxy systemd:haproxy op monitor start-delay=10s --clone
#pcmk/lb.scenario: pcs resource create vip-${section} IPaddr2 ip=${PHD_VAR_network_internal}.${offset} nic=eth1
pacemaker::resource::ip {"ip-192.168.201.59":
  ip_address => '192.168.201.59',  # inferred from the resource title; illustrative only
}
# NOTE: In tests below c1a1 is one of 3 HA controllers and c1a5 is a nova compute node. Testing PR https://github.com/redhat-openstack/astapor/pull/489
[samurai@baremetal ha]# ## TEST manage_ceph_conf
[samurai@baremetal ha]# VMSET=fore1a vftool.bash run /mnt/vm-share/mcs/foreman/config/ha/ha-params.bash >/dev/null
Warning: Permanently added 'fore1a,192.168.7.186' (ECDSA) to the list of known hosts.
Connection to fore1a closed.
[samurai@baremetal ha]# VMSET=fore1a vftool.bash run '/mnt/vm-share/mcs/foreman/api/hosts.rb show_yaml c1a1' | grep manage_ceph
Warning: Permanently added 'fore1a,192.168.7.186' (ECDSA) to the list of known hosts.
manage_ceph_conf: false
Connection to fore1a closed.
cwolferh / ceph-firefly.repo (last active August 29, 2015 14:07)
[ceph-firefly-noarch]
name=ceph-firefly
baseurl=http://ceph.com/rpm-firefly/rhel7/noarch
gpgcheck=0
enabled=1
[ceph-firefly-x86_64]
name=ceph-firefly
baseurl=http://ceph.com/rpm-firefly/rhel7/x86_64
gpgcheck=0
enabled=1

#!/bin/bash
# On el6, to be run on the bare metal host after you have virtual networking
# set up.
yum install -y fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast
for i in `ls /sys/class/net/*br*/bridge/multicast_querier`; do echo 1 > $i; done
if [ ! -f /etc/cluster/fence_xvm.key ]; then
  # illustrative completion: generate the shared fence_xvm key (the usual approach)
  mkdir -p /etc/cluster
  dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1
fi