
@danehans
Last active September 30, 2015 22:32
heat nested templates
2015-09-30 22:20:36.763 DEBUG heat.engine.scheduler [-] Task _check_for_completion running from (pid=11440) step /opt/stack/heat/heat/engine/scheduler.py:223
2015-09-30 22:20:36.784 INFO heat.engine.environment [-] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:36.784 INFO heat.engine.environment [-] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:36.785 DEBUG heat.engine.scheduler [-] Task _check_for_completion complete from (pid=11440) step /opt/stack/heat/heat/engine/scheduler.py:229
2015-09-30 22:20:36.786 INFO heat.engine.environment [-] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:36.786 INFO heat.engine.environment [-] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:36.806 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is a0e7731beb674123a0e9e3cd6abed02f from (pid=11440) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392
2015-09-30 22:20:36.825 INFO heat.engine.service [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Updating stack swarm-eyym7hxe3abm-swarm_nodes-xfq62skdbxx2
2015-09-30 22:20:36.825 INFO heat.engine.environment [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:36.826 INFO heat.engine.environment [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:36.827 INFO heat.engine.environment [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:36.827 INFO heat.engine.environment [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:36.829 INFO heat.common.urlfetch [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Fetching data from file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:36.869 DEBUG heat.engine.parameter_groups [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] <heat.engine.hot.template.HOTemplate20150430 object at 0x7f6773c50b90> from (pid=11452) __init__ /opt/stack/heat/heat/engine/parameter_groups.py:32
2015-09-30 22:20:36.869 DEBUG heat.engine.parameter_groups [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] <heat.engine.hot.parameters.HOTParameters object at 0x7f6772fe3850> from (pid=11452) __init__ /opt/stack/heat/heat/engine/parameter_groups.py:33
2015-09-30 22:20:36.870 DEBUG heat.engine.parameter_groups [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Validating Parameter Groups. from (pid=11452) validate /opt/stack/heat/heat/engine/parameter_groups.py:44
2015-09-30 22:20:36.870 DEBUG heat.engine.parameter_groups [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] ['OS::project_id', 'OS::stack_id'] from (pid=11452) validate /opt/stack/heat/heat/engine/parameter_groups.py:45
2015-09-30 22:20:36.870 INFO heat.engine.resource [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Validating file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml "0"
2015-09-30 22:20:36.871 DEBUG heat.engine.stack [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Property error: resources[0].properties.swarm_master_ip: Value must be a string from (pid=11452) validate /opt/stack/heat/heat/engine/stack.py:640
2015-09-30 22:20:36.872 DEBUG oslo_messaging.rpc.dispatcher [req-a24e153c-85d8-462d-a462-4cbee4c2cb00 None admin] Expected exception during message handling (Property error: resources[0].properties.swarm_master_ip: Value must be a string) from (pid=11452) _dispatch_and_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py:145
2015-09-30 22:20:36.875 ERROR heat.engine.resources.stack_resource [-] update_stack
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource Traceback (most recent call last):
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/engine/resources/stack_resource.py", line 435, in update_with_template
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource args)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/rpc/client.py", line 267, in update_stack
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource args=args))
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/rpc/client.py", line 59, in call
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource return client.call(ctxt, method, **kwargs)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 403, in call
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource return self.prepare().call(ctxt, method, **kwargs)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource retry=self.retry)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource timeout=timeout, retry=retry)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource retry=retry)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 422, in _send
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource raise result
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource StackValidationFailed_Remote: Property error: resources[0].properties.swarm_master_ip: Value must be a string
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource Traceback (most recent call last):
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/common/context.py", line 305, in wrapped
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource return func(self, ctx, *args, **kwargs)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/engine/service.py", line 813, in update_stack
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource cnxt, current_stack, tmpl, params, files, args)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/engine/service.py", line 757, in _prepare_stack_updates
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource updated_stack.validate()
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource return f(*args, **kwargs)
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource File "/opt/stack/heat/heat/engine/stack.py", line 641, in validate
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource raise ex
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource StackValidationFailed: Property error: resources[0].properties.swarm_master_ip: Value must be a string
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.875 TRACE heat.engine.resources.stack_resource
2015-09-30 22:20:36.877 INFO heat.engine.resource [-] CREATE: ResourceGroup "swarm_nodes" [958d6a11-1dc6-4f4e-90d7-a0eb39871afd] Stack "swarm-eyym7hxe3abm" [a440f9b2-2abf-4612-91db-2cd2d30c2134]
2015-09-30 22:20:36.877 TRACE heat.engine.resource Traceback (most recent call last):
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resource.py", line 601, in _action_recorder
2015-09-30 22:20:36.877 TRACE heat.engine.resource yield
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resource.py", line 672, in _do_action
2015-09-30 22:20:36.877 TRACE heat.engine.resource yield self.action_handler_task(action, args=handler_args)
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/scheduler.py", line 303, in wrapper
2015-09-30 22:20:36.877 TRACE heat.engine.resource step = next(subtask)
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resource.py", line 643, in action_handler_task
2015-09-30 22:20:36.877 TRACE heat.engine.resource handler_data = handler(*args)
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resources/openstack/heat/resource_group.py", line 382, in handle_create
2015-09-30 22:20:36.877 TRACE heat.engine.resource checkers[0].start()
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/scheduler.py", line 203, in start
2015-09-30 22:20:36.877 TRACE heat.engine.resource self.step()
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/scheduler.py", line 226, in step
2015-09-30 22:20:36.877 TRACE heat.engine.resource next(self._runner)
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resources/openstack/heat/resource_group.py", line 397, in _run_to_completion
2015-09-30 22:20:36.877 TRACE heat.engine.resource timeout)
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resources/stack_resource.py", line 438, in update_with_template
2015-09-30 22:20:36.877 TRACE heat.engine.resource self.raise_local_exception(ex)
2015-09-30 22:20:36.877 TRACE heat.engine.resource File "/opt/stack/heat/heat/engine/resources/stack_resource.py", line 329, in raise_local_exception
2015-09-30 22:20:36.877 TRACE heat.engine.resource raise exception.ResourceFailure(message, self, action=self.action)
2015-09-30 22:20:36.877 TRACE heat.engine.resource ResourceFailure: resources.swarm_nodes: Property error: resources[0].properties.swarm_master_ip: Value must be a string
2015-09-30 22:20:36.877 TRACE heat.engine.resource
2015-09-30 22:20:36.934 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "swarm-eyym7hxe3abm" [a440f9b2-2abf-4612-91db-2cd2d30c2134] sleeping from (pid=11440) _sleep /opt/stack/heat/heat/engine/scheduler.py:167
2015-09-30 22:20:37.301 INFO heat.engine.environment [req-f61274c2-8bab-48e1-8fa7-4be578b9772c None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:37.301 INFO heat.engine.environment [req-f61274c2-8bab-48e1-8fa7-4be578b9772c None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:37.323 INFO heat.engine.environment [req-ec87eb9e-68a5-43a4-9cb6-726dad8be74e None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:37.324 INFO heat.engine.environment [req-ec87eb9e-68a5-43a4-9cb6-726dad8be74e None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:37.547 INFO heat.engine.environment [req-ee8b2ca8-3a36-43db-a3e7-04036239f0d1 None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:37.548 INFO heat.engine.environment [req-ee8b2ca8-3a36-43db-a3e7-04036239f0d1 None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:37.568 INFO heat.engine.environment [req-45e8c9aa-98d2-4fd5-baee-cf7b750003ba None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/master.yaml
2015-09-30 22:20:37.569 INFO heat.engine.environment [req-45e8c9aa-98d2-4fd5-baee-cf7b750003ba None admin] Registering file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml -> file:///opt/stack/magnum/magnum/templates/docker-swarm/node.yaml
2015-09-30 22:20:37.934 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "swarm-eyym7hxe3abm" [a440f9b2-2abf-4612-91db-2cd2d30c2134] running from (pid=11440) step /opt/stack/heat/heat/engine/scheduler.py:223
2015-09-30 22:20:37.935 DEBUG heat.engine.scheduler [-] Task resource_action running from (pid=11440) step /opt/stack/heat/heat/engine/scheduler.py:223
2015-09-30 22:20:37.935 DEBUG heat.engine.scheduler [-] Task resource_action complete from (pid=11440) step /opt/stack/heat/heat/engine/scheduler.py:229
2015-09-30 22:20:37.968 INFO heat.engine.stack [-] Stack CREATE FAILED (swarm-eyym7hxe3abm): Resource CREATE failed: resources.swarm_nodes: Property error: resources[0].properties.swarm_master_ip: Value must be a string
2015-09-30 22:20:38.001 DEBUG heat.engine.scheduler [-] Task stack_task from Stack "swarm-eyym7hxe3abm" [a440f9b2-2abf-4612-91db-2cd2d30c2134] complete from (pid=11440) step /opt/stack/heat/heat/engine/scheduler.py:229
2015-09-30 22:20:38.002 INFO heat.engine.service [-] Stack create failed, status FAILED
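The failure above originates in the nested node.yaml stack: its `swarm_master_ip` parameter is declared as `type: string`, but the value handed down from the parent stack resolved to something else (most likely a list). A hedged sketch of the failing wiring follows — the parent template (swarm.yaml) is not included in this gist, so the resource names below are assumptions:

```yaml
# Hypothetical parent-template excerpt (swarm.yaml is not shown in this
# gist; names are illustrative). get_attr on a ResourceGroup aggregates
# the named output across all members, so it returns a list such as
# ["10.0.0.4"] -- not a string.
swarm_nodes:
  type: OS::Heat::ResourceGroup
  properties:
    count: {get_param: number_of_nodes}
    resource_def:
      type: node.yaml
      properties:
        # node.yaml declares swarm_master_ip as type: string, so a
        # list here fails validation with "Value must be a string".
        swarm_master_ip: {get_attr: [swarm_masters, swarm_master_ip]}
```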
#heat_template_version: 2013-05-23
heat_template_version: 2015-04-30

description: >
  This is a nested stack that defines a single Swarm master. This stack is
  included by a ResourceGroup resource in the parent template
  (swarm.yaml).

parameters:

  server_image:
    type: string
    description: glance image used to boot the server

  master_flavor:
    type: string
    default: m1.small
    description: flavor to use when booting the server

  ssh_key_name:
    type: string
    description: name of ssh key to be provisioned on our server
    default: lars

  external_network:
    type: string
    description: uuid/name of a network to use for floating ip addresses

  flannel_network_cidr:
    type: string
    description: network range for flannel overlay network
    default: 10.100.0.0/16

  flannel_network_subnetlen:
    type: string
    description: size of subnet assigned to each master
    default: 24

  flannel_use_vxlan:
    type: string
    description: >
      if true use the vxlan backend, otherwise use the default
      udp backend
    default: "false"
    constraints:
      - allowed_values: ["true", "false"]

  discovery_url:
    type: string
    description: >
      Discovery URL used for bootstrapping the etcd cluster.

  # The following are all generated in the parent template.

  fixed_network:
    type: string
    description: Network from which to allocate fixed addresses.

  fixed_subnet:
    type: string
    description: Subnet from which to allocate fixed addresses.

  network_driver:
    type: string
    description: network driver to use for instantiating container networks

  wait_condition_timeout:
    type: number
    description: >
      timeout for the Wait Conditions

  etcd_pool_id:
    type: string
    description: ID of the load balancer pool of etcd server.

  etcd_server_ip:
    type: string
    description: IP address of the Etcd server.

  http_proxy:
    type: string
    description: http proxy address for docker

  https_proxy:
    type: string
    description: https proxy address for docker

  no_proxy:
    type: string
    description: no proxies for docker

  docker_volume_size:
    type: number
    description: >
      size of a cinder volume to allocate to docker for container/image
      storage

  user_token:
    type: string
    description: token used for communicating back to Magnum for TLS certs

  bay_uuid:
    type: string
    description: identifier for the bay this template is generating

  magnum_url:
    type: string
    description: endpoint to retrieve TLS certs from

  insecure:
    type: boolean
    description: whether or not to enable TLS

resources:

  master_wait_handle:
    type: OS::Heat::WaitConditionHandle

  master_wait_condition:
    type: OS::Heat::WaitCondition
    depends_on: swarm_master
    properties:
      handle: {get_resource: master_wait_handle}
      timeout: {get_param: wait_condition_timeout}

  ######################################################################
  #
  # security groups. we need to permit network traffic of various
  # sorts.
  #

  secgroup_base:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: icmp
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22

  secgroup_swarm:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 2375  # swarm agent listening port
          port_range_max: 2376  # swarm master listening port
        - protocol: tcp
          port_range_min: 2379  # etcd client communication
          port_range_max: 2380  # etcd server-to-server communication
        - protocol: udp
          port_range_min: 8285  # flannel UDP backend
          port_range_max: 8285
        - protocol: udp
          port_range_min: 8472  # flannel vxlan backend
          port_range_max: 8472

  ######################################################################
  #
  # software configs. these are components that are combined into
  # a multipart MIME user-data archive.
  #

  write_heat_params:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: {get_file: fragments/write-heat-params-master.yaml}
          params:
            "$NETWORK_DRIVER": {get_param: network_driver}
            "$FLANNEL_NETWORK_CIDR": {get_param: flannel_network_cidr}
            "$FLANNEL_NETWORK_SUBNETLEN": {get_param: flannel_network_subnetlen}
            "$FLANNEL_USE_VXLAN": {get_param: flannel_use_vxlan}
            "$ETCD_DISCOVERY_URL": {get_param: discovery_url}
            "$ETCD_SERVER_IP": {get_param: etcd_server_ip}
            "$DOCKER_VOLUME": {get_resource: docker_volume}
            "$HTTP_PROXY": {get_param: http_proxy}
            "$HTTPS_PROXY": {get_param: https_proxy}
            "$NO_PROXY": {get_param: no_proxy}
            "$SWARM_MASTER_IP": {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
            "$SWARM_NODE_IP": {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
            "$BAY_UUID": {get_param: bay_uuid}
            "$USER_TOKEN": {get_param: user_token}
            "$MAGNUM_URL": {get_param: magnum_url}
            "$INSECURE": {get_param: insecure}

  configure_swarm:
    type: "OS::Heat::SoftwareConfig"
    properties:
      group: ungrouped
      config: {get_file: fragments/configure-swarm.sh}

  remove_docker_key:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/remove-docker-key.sh}

  make_cert:
    type: "OS::Heat::SoftwareConfig"
    properties:
      group: ungrouped
      config: {get_file: fragments/make_cert.py}

  configure_docker_storage:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/configure-docker-storage.sh}

  configure_etcd:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/configure-etcd.sh}

  write_network_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/write-network-config.sh}

  network_config_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/network-config-service.sh}

  network_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/network-service.sh}

  write_docker_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/write-docker-service.sh}

  write_docker_socket:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/write-docker-socket.yaml}

  write_swarm_master_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: {get_file: fragments/write-swarm-master-service.sh}
          params:
            "$HTTP_PROXY": {get_param: http_proxy}
            "$HTTPS_PROXY": {get_param: https_proxy}
            "$NO_PROXY": {get_param: no_proxy}
            "$ETCD_SERVER_IP": {get_param: etcd_server_ip}
            "$INSECURE": {get_param: insecure}

  enable_services:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/enable-services-master.sh}

  master_wc_notify:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: |
            #!/bin/bash -v
            wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            wc_notify: {get_attr: [master_wait_handle, curl_cli]}

  disable_selinux:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/disable-selinux.sh}

  add_proxy:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/add-proxy.sh}

  swarm_master_init:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: disable_selinux}
        - config: {get_resource: remove_docker_key}
        - config: {get_resource: write_heat_params}
        - config: {get_resource: configure_swarm}
        - config: {get_resource: add_proxy}
        - config: {get_resource: make_cert}
        - config: {get_resource: configure_etcd}
        - config: {get_resource: write_network_config}
        - config: {get_resource: network_config_service}
        - config: {get_resource: network_service}
        - config: {get_resource: configure_docker_storage}
        - config: {get_resource: write_docker_service}
        - config: {get_resource: write_docker_socket}
        - config: {get_resource: write_swarm_master_service}
        - config: {get_resource: enable_services}
        - config: {get_resource: master_wc_notify}

  ######################################################################
  #
  # a single swarm master.
  #

  swarm_master:
    type: OS::Nova::Server
    properties:
      image: {get_param: server_image}
      flavor: {get_param: master_flavor}
      key_name: {get_param: ssh_key_name}
      user_data_format: RAW
      user_data: {get_resource: swarm_master_init}
      networks:
        - port: {get_resource: swarm_master_eth0}

  swarm_master_eth0:
    type: OS::Neutron::Port
    properties:
      network: {get_param: fixed_network}
      security_groups:
        - {get_resource: secgroup_base}
        - {get_resource: secgroup_swarm}
      fixed_ips:
        - subnet: {get_param: fixed_subnet}
      replacement_policy: AUTO

  swarm_master_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: external_network}
      port_id: {get_resource: swarm_master_eth0}

  etcd_pool_member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: etcd_pool_id}
      address: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}
      protocol_port: 2379

  ######################################################################
  #
  # docker storage. This allocates a cinder volume and attaches it
  # to the node.
  #

  docker_volume:
    type: OS::Cinder::Volume
    properties:
      size: {get_param: docker_volume_size}

  docker_volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: swarm_master}
      volume_id: {get_resource: docker_volume}
      mountpoint: /dev/vdb

outputs:

  swarm_master_ip:
    value: {get_attr: [swarm_master_eth0, fixed_ips, 0, ip_address]}

  swarm_master_external_ip:
    value: {get_attr: [swarm_master_floating, floating_ip_address]}
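master.yaml publishes the master's fixed IP as the `swarm_master_ip` output above. If the parent wraps master.yaml in a ResourceGroup, indexing a single member via the `resource.<index>` attribute path yields a plain string — which is what node.yaml's `swarm_master_ip` parameter expects. A hypothetical parent-side sketch (swarm.yaml is not part of this gist; names are assumptions):

```yaml
# Assumed parent-template snippet; resource names are illustrative.
swarm_nodes:
  type: OS::Heat::ResourceGroup
  properties:
    count: {get_param: number_of_nodes}
    resource_def:
      type: node.yaml
      properties:
        # resource.0.<output> addresses one group member, so this
        # resolves to a single string such as "10.0.0.4" rather than
        # the list produced by the bare get_attr form.
        swarm_master_ip: {get_attr: [swarm_masters, resource.0.swarm_master_ip]}
```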
#heat_template_version: 2013-05-23
heat_template_version: 2015-04-30

description: >
  This is a nested stack that defines a single Swarm node. This stack is
  included by an AutoScalingGroup resource in the parent template
  (swarmcluster.yaml).

parameters:

  server_image:
    type: string
    description: glance image used to boot the server

  node_flavor:
    type: string
    default: m1.small
    description: flavor to use when booting the server

  ssh_key_name:
    type: string
    description: name of ssh key to be provisioned on our server
    default: lars

  external_network:
    type: string
    description: uuid/name of a network to use for floating ip addresses

  # The following are all generated in the parent template.

  swarm_master_ip:
    type: string
    description: swarm master's ip address

  etcd_server_ip:
    type: string
    description: IP address of the Etcd server.

  fixed_network:
    type: string
    description: Network from which to allocate fixed addresses.

  fixed_subnet:
    type: string
    description: Subnet from which to allocate fixed addresses.

  network_driver:
    type: string
    description: network driver to use for instantiating container networks

  wait_condition_timeout:
    type: number
    description: >
      timeout for the Wait Conditions

  http_proxy:
    type: string
    description: http proxy address for docker

  https_proxy:
    type: string
    description: https proxy address for docker

  no_proxy:
    type: string
    description: no proxies for docker

  docker_volume_size:
    type: number
    description: >
      size of a cinder volume to allocate to docker for container/image
      storage

  user_token:
    type: string
    description: token used for communicating back to Magnum for TLS certs

  bay_uuid:
    type: string
    description: identifier for the bay this template is generating

  magnum_url:
    type: string
    description: endpoint to retrieve TLS certs from

  insecure:
    type: boolean
    description: whether or not to disable TLS

resources:

  node_wait_handle:
    type: OS::Heat::WaitConditionHandle

  node_wait_condition:
    type: OS::Heat::WaitCondition
    depends_on: swarm_node
    properties:
      handle: {get_resource: node_wait_handle}
      timeout: {get_param: wait_condition_timeout}

  secgroup_all_open:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: icmp
        - protocol: tcp
        - protocol: udp

  ######################################################################
  #
  # software configs. these are components that are combined into
  # a multipart MIME user-data archive.
  #

  write_heat_params:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: {get_file: fragments/write-heat-params.yaml}
          params:
            "$SWARM_MASTER_IP": {get_param: swarm_master_ip}
            # "$SWARM_MASTER_IP": {get_attr: [swarm_master, swarm_master_ip]}
            # "$SWARM_MASTER_IP": {get_attr: [swarm_master, swarm_master_eth0, fixed_ips, 0, ip_address]}
            "$SWARM_NODE_IP": {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
            "$ETCD_SERVER_IP": {get_param: etcd_server_ip}
            "$DOCKER_VOLUME": {get_resource: docker_volume}
            "$NETWORK_DRIVER": {get_param: network_driver}
            "$HTTP_PROXY": {get_param: http_proxy}
            "$HTTPS_PROXY": {get_param: https_proxy}
            "$NO_PROXY": {get_param: no_proxy}
            "$BAY_UUID": {get_param: bay_uuid}
            "$USER_TOKEN": {get_param: user_token}
            "$MAGNUM_URL": {get_param: magnum_url}
            "$INSECURE": {get_param: insecure}

  configure_swarm:
    type: "OS::Heat::SoftwareConfig"
    properties:
      group: ungrouped
      config: {get_file: fragments/configure-swarm.sh}

  remove_docker_key:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/remove-docker-key.sh}

  make_cert:
    type: "OS::Heat::SoftwareConfig"
    properties:
      group: ungrouped
      config: {get_file: fragments/make_cert.py}

  write_docker_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/write-docker-service.sh}

  configure_docker_storage:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/configure-docker-storage.sh}

  network_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/network-service.sh}

  write_docker_socket:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/write-docker-socket.yaml}

  write_swarm_agent_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: {get_file: fragments/write-swarm-agent-service.yaml}
          params:
            "$HTTP_PROXY": {get_param: http_proxy}
            "$HTTPS_PROXY": {get_param: https_proxy}
            "$NO_PROXY": {get_param: no_proxy}
            "$ETCD_SERVER_IP": {get_param: etcd_server_ip}
            "$SWARM_NODE_IP": {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}

  enable_services:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/enable-services-node.sh}

  node_wc_notify:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template: |
            #!/bin/bash -v
            wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            wc_notify: {get_attr: [node_wait_handle, curl_cli]}

  disable_selinux:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/disable-selinux.sh}

  add_proxy:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config: {get_file: fragments/add-proxy.sh}

  swarm_node_init:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: disable_selinux}
        - config: {get_resource: remove_docker_key}
        - config: {get_resource: write_heat_params}
        - config: {get_resource: make_cert}
        - config: {get_resource: configure_swarm}
        - config: {get_resource: add_proxy}
        - config: {get_resource: configure_docker_storage}
        - config: {get_resource: network_service}
        - config: {get_resource: write_swarm_agent_service}
        - config: {get_resource: write_docker_service}
        - config: {get_resource: write_docker_socket}
        - config: {get_resource: enable_services}
        - config: {get_resource: node_wc_notify}

  ######################################################################
  #
  # a single swarm node.
  #

  swarm_node:
    type: OS::Nova::Server
    properties:
      image: {get_param: server_image}
      flavor: {get_param: node_flavor}
      key_name: {get_param: ssh_key_name}
      user_data_format: RAW
      user_data: {get_resource: swarm_node_init}
      networks:
        - port: {get_resource: swarm_node_eth0}

  swarm_node_eth0:
    type: OS::Neutron::Port
    properties:
      network: {get_param: fixed_network}
      security_groups:
        - get_resource: secgroup_all_open
      fixed_ips:
        - subnet: {get_param: fixed_subnet}
      replacement_policy: AUTO

  swarm_node_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: external_network}
      port_id: {get_resource: swarm_node_eth0}

  ######################################################################
  #
  # docker storage. This allocates a cinder volume and attaches it
  # to the node.
  #

  docker_volume:
    type: OS::Cinder::Volume
    properties:
      size: {get_param: docker_volume_size}

  docker_volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: swarm_node}
      volume_id: {get_resource: docker_volume}
      mountpoint: /dev/vdb

outputs:

  swarm_node_ip:
    value: {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}

  swarm_node_external_ip:
    value: {get_attr: [swarm_node_floating, floating_ip_address]}

  OS::stack_id:
    value: {get_attr: [swarm_node_eth0, fixed_ips, 0, ip_address]}
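The `OS::stack_id` output above is a Heat convention for nested stacks: when a template used as a group member defines an output with this name, references to that member (for example a ResourceGroup's `refs` attribute) resolve to the output's value instead of the nested stack's UUID. Here that makes each node "look like" its fixed IP to the parent. A hypothetical parent-side use (swarm.yaml is not shown in this gist):

```yaml
# Assumed parent-template outputs section; names are illustrative.
outputs:
  swarm_nodes:
    description: private addresses of the Swarm nodes
    # "refs" returns each member's reference ID; with the OS::stack_id
    # output defined in node.yaml, that is the node's fixed IP.
    value: {get_attr: [swarm_nodes, refs]}
```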
$ heat stack-show swarm-z4lg7zmobtte
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| capabilities | [] |
| creation_time | 2015-09-30T22:29:12 |
| description | This template will boot a Swarm cluster with one or |
| | more nodes (as specified by the number_of_nodes |
| | parameter, which defaults to 1). |
| disable_rollback | True |
| id | 1ab717c9-664e-40bb-9bd3-7357e502479e |
| links | http://172.29.74.86:8004/v1/e2d247525bc147a297d985167519c95e/stacks/swarm-z4lg7zmobtte/1ab717c9-664e-40bb-9bd3-7357e502479e (self) |
| notification_topics | [] |
| outputs | [ |
| | { |
| | "output_value": [ |
| | "10.30.118.142" |
| | ], |
| | "description": "This is a list of public ip addresses of all Swarm master servers. Use these addresses to log in to the Swarm masters via ssh.\n", |
| | "output_key": "swarm_master_external" |
| | }, |
| | { |
| | "output_value": { |
| | "0": "10.0.0.4" |
| | }, |
| | "description": "This is a list of private ip addresses of all Swarm masters. Use these addresses to log in to the Swarm masters via ssh.\n", |
| | "output_key": "swarm_master" |
| | }, |
| | { |
| | "output_value": null, |
| | "description": "This is a list of the public addresses of all the Swarm nodes. Use these addresses to, e.g., log into the nodes.\n", |
| | "output_key": "swarm_nodes_external" |
| | }, |
| | { |
| | "output_value": null, |
| | "description": "This is a list of the private addresses of all the Swarm nodes.\n", |
| | "output_key": "swarm_nodes" |
| | } |
| | ] |
| parameters | { |
| | "OS::project_id": "e2d247525bc147a297d985167519c95e", |
| | "fixed_network_cidr": "10.0.0.0/24", |
| | "magnum_url": "http://172.29.74.86:9511/v1", |
| | "bay_uuid": "07f0624c-e855-4881-805e-416b94f7b007", |
| | "http_proxy": "", |
| | "user_token": "2dfd6621bd5741378fdf23ff1b2dbb07", |
| | "node_flavor": "m1.small", |
| | "wait_condition_timeout": "6000", |
| | "external_network": "public", |
| | "no_proxy": "", |
| | "https_proxy": "", |
| | "number_of_nodes": "1", |
| | "docker_volume_size": "2", |
| | "OS::stack_name": "swarm-z4lg7zmobtte", |
| | "insecure": "False", |
| | "nodes_to_remove": "", |
| | "flannel_use_vxlan": "true", |
| | "OS::stack_id": "1ab717c9-664e-40bb-9bd3-7357e502479e", |
| | "network_driver": "flannel", |
| | "master_flavor": "m1.small", |
| | "ssh_key_name": "danehans", |
| | "flannel_network_subnetlen": "26", |
| | "flannel_network_cidr": "10.1.0.0/16", |
| | "discovery_url": "https://discovery.etcd.io/83769ba634a88ff6da2ab3e674d69b44", |
| | "dns_nameserver": "172.29.74.154", |
| | "server_image": "fedora-21-atomic-3" |
| | } |
| parent | None |
| stack_name | swarm-z4lg7zmobtte |
| stack_owner | None |
| stack_status | CREATE_FAILED |
| stack_status_reason | Resource CREATE failed: resources.swarm_nodes: Property |
| | error: resources[0].properties.swarm_master_ip: Value |
| | must be a string |
| stack_user_project_id | 6d83bd7ce0264f7ea58b937a360ffdbc |
| tags | None |
| template_description | This template will boot a Swarm cluster with one or |
| | more nodes (as specified by the number_of_nodes |
| | parameter, which defaults to 1). |
| timeout_mins | None |
| updated_time | None |
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
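One plausible reading of the `Value must be a string` failure above: on an `OS::Heat::ResourceGroup`, the `attributes` pseudo-attribute returns a map keyed by member index (visible as `{"0": "10.0.0.4"}` in the `swarm_master` output), and the bare attribute form returns a list with one entry per member; `node.yaml` presumably validates `swarm_master_ip` as a string, so either aggregate form fails. A hedged sketch of the attribute forms (the dotted `resource.0` form is how ResourceGroup exposes a single member's attribute; confirm the exact syntax against your Heat release):

```yaml
# As written in the parent template: 'attributes' yields a map of
# member index -> value, e.g. {"0": "10.0.0.4"}; a map is not a string.
swarm_master_ip: {get_attr: [swarm_master, attributes, swarm_master_ip]}

# Bare group attribute: a list, one entry per member -- also not a string.
swarm_master_ip: {get_attr: [swarm_master, swarm_master_ip]}

# A single member's attribute (member 0): a plain string.
swarm_master_ip: {get_attr: [swarm_master, resource.0.swarm_master_ip]}
```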
#heat_template_version: 2013-05-23
heat_template_version: 2015-04-30
description: >
This template will boot a Swarm cluster with one or more
nodes (as specified by the number_of_nodes parameter, which
defaults to 1).
parameters:
ssh_key_name:
type: string
description: name of ssh key to be provisioned on our server
external_network:
type: string
description: uuid/name of a network to use for floating ip addresses
default: public
server_image:
type: string
description: glance image used to boot the server
master_flavor:
type: string
default: m1.small
description: flavor to use when booting the server
node_flavor:
type: string
default: m1.small
description: flavor to use when booting the server
dns_nameserver:
type: string
description: address of a dns nameserver reachable in your environment
default: 8.8.8.8
http_proxy:
type: string
description: http proxy address for docker
default: ""
https_proxy:
type: string
description: https proxy address for docker
default: ""
no_proxy:
type: string
description: no proxies for docker
default: ""
number_of_nodes:
type: string
description: how many swarm nodes to spawn
default: 1
fixed_network_cidr:
type: string
description: network range for fixed ip network
default: 10.0.0.0/24
network_driver:
type: string
description: network driver to use for instantiating container networks
default: flannel
flannel_network_cidr:
type: string
description: network range for flannel overlay network
default: 10.100.0.0/16
flannel_network_subnetlen:
type: string
description: size of subnet assigned to each node
default: 24
flannel_use_vxlan:
type: string
description: >
if true use the vxlan backend, otherwise use the default
udp backend
default: "false"
constraints:
- allowed_values: ["true", "false"]
docker_volume_size:
type: number
description: >
size of a cinder volume to allocate to docker for container/image
storage
default: 25
wait_condition_timeout:
type: number
description: >
timeout for the Wait Conditions
default: 6000
nodes_to_remove:
type: comma_delimited_list
description: >
List of nodes to be removed when doing an update. An individual node may
be referenced in several ways: (1) by resource name (e.g. ['1', '3']),
or (2) by private IP address (e.g. ['10.0.0.4', '10.0.0.6']). Note: the
list should be empty when doing a create.
default: []
discovery_url:
type: string
description: >
Discovery URL used for bootstrapping the etcd cluster.
user_token:
type: string
description: token used for communicating back to Magnum for TLS certs
bay_uuid:
type: string
description: identifier for the bay this template is generating
magnum_url:
type: string
description: endpoint to retrieve TLS certs from
insecure:
type: boolean
description: whether or not to enable TLS
default: False
resources:
######################################################################
#
# network resources. Allocate a network and router for our server.
#
fixed_network:
type: OS::Neutron::Net
fixed_subnet:
type: OS::Neutron::Subnet
properties:
cidr: {get_param: fixed_network_cidr}
network: {get_resource: fixed_network}
dns_nameservers:
- {get_param: dns_nameserver}
extrouter:
type: OS::Neutron::Router
properties:
external_gateway_info:
network: {get_param: external_network}
extrouter_inside:
type: OS::Neutron::RouterInterface
properties:
router_id: {get_resource: extrouter}
subnet: {get_resource: fixed_subnet}
######################################################################
#
# load balancers.
#
etcd_monitor:
type: OS::Neutron::HealthMonitor
properties:
type: TCP
delay: 5
max_retries: 5
timeout: 5
etcd_pool:
type: OS::Neutron::Pool
properties:
protocol: HTTP
monitors: [{get_resource: etcd_monitor}]
subnet: {get_resource: fixed_subnet}
lb_method: ROUND_ROBIN
vip:
protocol_port: 2379
######################################################################
#
# swarm masters. This is a resource group that will create
# 1 swarm master.
#
swarm_master:
type: OS::Heat::ResourceGroup
depends_on:
- extrouter_inside
properties:
resource_def:
type: master.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
master_flavor: {get_param: master_flavor}
external_network: {get_param: external_network}
wait_condition_timeout: {get_param: wait_condition_timeout}
network_driver: {get_param: network_driver}
flannel_network_cidr: {get_param: flannel_network_cidr}
flannel_network_subnetlen: {get_param: flannel_network_subnetlen}
flannel_use_vxlan: {get_param: flannel_use_vxlan}
discovery_url: {get_param: discovery_url}
fixed_network: {get_resource: fixed_network}
fixed_subnet: {get_resource: fixed_subnet}
etcd_pool_id: {get_resource: etcd_pool}
etcd_server_ip: {get_attr: [etcd_pool, vip, address]}
docker_volume_size: {get_param: docker_volume_size}
http_proxy: {get_param: http_proxy}
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
user_token: {get_param: user_token}
bay_uuid: {get_param: bay_uuid}
magnum_url: {get_param: magnum_url}
insecure: {get_param: insecure}
######################################################################
#
# swarm nodes. This is a resource group that will initially
# create <number_of_nodes> nodes, and needs to be manually scaled.
#
swarm_nodes:
type: OS::Heat::ResourceGroup
depends_on:
- extrouter_inside
- swarm_master
properties:
count: {get_param: number_of_nodes}
removal_policies: [{resource_list: {get_param: nodes_to_remove}}]
resource_def:
type: node.yaml
properties:
ssh_key_name: {get_param: ssh_key_name}
server_image: {get_param: server_image}
node_flavor: {get_param: node_flavor}
fixed_network: {get_resource: fixed_network}
fixed_subnet: {get_resource: fixed_subnet}
network_driver: {get_param: network_driver}
external_network: {get_param: external_network}
# swarm_master_ip: {get_output: swarm_master}
# swarm_master_ip: {get_attr: [swarm_master, swarm_master_ip, 0]}
# swarm_master_ip: {get_attr: [swarm_master, swarm_master_eth0, fixed_ips, 0, ip_address]}
swarm_master_ip: {get_attr: [swarm_master, attributes, swarm_master_ip]}
etcd_server_ip: {get_attr: [etcd_pool, vip, address]}
docker_volume_size: {get_param: docker_volume_size}
wait_condition_timeout: {get_param: wait_condition_timeout}
http_proxy: {get_param: http_proxy}
https_proxy: {get_param: https_proxy}
no_proxy: {get_param: no_proxy}
user_token: {get_param: user_token}
bay_uuid: {get_param: bay_uuid}
magnum_url: {get_param: magnum_url}
insecure: {get_param: insecure}
outputs:
swarm_master:
# value: {get_attr: [swarm_master, swarm_master_ip]}
value: {get_attr: [swarm_master, attributes, swarm_master_ip]}
description: >
This is a list of private ip addresses of all Swarm masters.
Use these addresses to log in to the Swarm masters via ssh.
swarm_master_external:
value: {get_attr: [swarm_master, swarm_master_external_ip]}
description: >
This is a list of public ip addresses of all Swarm master servers.
Use these addresses to log in to the Swarm masters via ssh.
swarm_nodes:
value: {get_attr: [swarm_nodes, swarm_node_ip]}
description: >
This is a list of the private addresses of all the Swarm nodes.
swarm_nodes_external:
value: {get_attr: [swarm_nodes, swarm_node_external_ip]}
description: >
This is a list of the public addresses of all the Swarm nodes. Use
these addresses to, e.g., log into the nodes.
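Given the `nodes_to_remove` parameter and the `removal_policies` wiring on `swarm_nodes`, scale-down would normally be driven through a stack update. A hedged sketch with the heat CLI of that era (the stack name and IP are taken from the output above; `--existing` reuses the deployed template and was a PATCH-update feature of python-heatclient around this time, so verify it exists in your client version):

```shell
# Remove the node at 10.0.0.4 and shrink the group by one.
# removal_policies makes the ResourceGroup delete that specific
# member rather than the highest-indexed one.
heat stack-update swarm-z4lg7zmobtte \
  --existing \
  -P nodes_to_remove=10.0.0.4 \
  -P number_of_nodes=0
```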