lbaasv2_mitaka_deploy_issue
[root@mercury-build-server kolla]# neutron ext-list
+-------------------------+-----------------------------------------------+
| alias                   | name                                          |
+-------------------------+-----------------------------------------------+
| dns-integration         | DNS Integration                               |
| ext-gw-mode             | Neutron L3 Configurable external gateway mode |
| binding                 | Port Binding                                  |
| agent                   | agent                                         |
| subnet_allocation       | Subnet Allocation                             |
| l3_agent_scheduler      | L3 Agent Scheduler                            |
| external-net            | Neutron external network                      |
| flavors                 | Neutron Service Flavors                       |
| net-mtu                 | Network MTU                                   |
| quotas                  | Quota management support                      |
| l3-ha                   | HA Router extension                           |
| provider                | Provider Network                              |
| multi-provider          | Multi Provider Network                        |
| extraroute              | Neutron Extra Route                           |
| router                  | Neutron L3 Router                             |
| extra_dhcp_opt          | Neutron Extra DHCP opts                       |
| lbaasv2                 | LoadBalancing service v2                      |
| service-type            | Neutron Service Type Management               |
| lbaas_agent_schedulerv2 | Loadbalancer Agent Scheduler V2               |
| security-group          | security-group                                |
| dhcp_agent_scheduler    | DHCP Agent Scheduler                          |
| rbac-policies           | RBAC Policies                                 |
| port-security           | Port Security                                 |
| allowed-address-pairs   | Allowed Address Pairs                         |
| dvr                     | Distributed Virtual Router                    |
+-------------------------+-----------------------------------------------+
bash-4.2$ cat /etc/neutron/lbaas_agent.ini
[DEFAULT]
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
user_group = haproxy
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
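The agent log below prints two deprecation warnings about this file: the HaproxyNSDriver module has moved into the neutron_lbaas package, and user_group is now read from the [haproxy] section instead of [DEFAULT]. A possible updated lbaas_agent.ini along those lines (a sketch based only on the warning text, not a verified Mitaka config):

```ini
# Sketch derived from the deprecation warnings in the agent log below:
# the haproxy namespace driver moved into the neutron_lbaas package,
# and user_group now belongs in the [haproxy] section.
[DEFAULT]
device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

[haproxy]
user_group = haproxy
```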
bash-4.2$ vi /var/log/neutron/lbaaas-agent.log
2016-06-06 21:09:15.819 22 INFO neutron.common.config [-] Logging enabled!
2016-06-06 21:09:15.819 22 INFO neutron.common.config [-] /usr/bin/neutron-lbaas-agent version 7.0.4
2016-06-06 21:09:15.820 22 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/lbaas_agent.ini --log-file /var/log/neutron/lbaaas-agent.log setup_logging /usr/lib/python2.7/site-packages/neutron/common/config.py:225
2016-06-06 21:09:15.825 22 WARNING neutron.services.provider_configuration [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] The configured driver neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver has been moved, automatically using neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver instead. Please update your config files, as this automatic fixup will be removed in a future release.
2016-06-06 21:09:15.828 22 DEBUG oslo_concurrency.lockutils [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:198
2016-06-06 21:09:15.829 22 DEBUG oslo_concurrency.lockutils [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:211
2016-06-06 21:09:15.829 22 DEBUG oslo_concurrency.lockutils [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:198
2016-06-06 21:09:15.830 22 DEBUG oslo_concurrency.lockutils [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:211
2016-06-06 21:09:15.830 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python2.7/site-packages/oslo_service/service.py:253
2016-06-06 21:09:15.831 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ******************************************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2216
2016-06-06 21:09:15.831 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2217
2016-06-06 21:09:15.832 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] command line args: ['--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/lbaas_agent.ini', '--log-file', '/var/log/neutron/lbaaas-agent.log'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2218
2016-06-06 21:09:15.832 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] config files: ['/etc/neutron/neutron.conf', '/etc/neutron/lbaas_agent.ini'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2219
2016-06-06 21:09:15.833 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ================================================================================ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2220
2016-06-06 21:09:15.833 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] advertise_mtu = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.834 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] allow_bulk = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.834 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] allow_overlapping_ips = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.834 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] allow_pagination = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.835 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] allow_sorting = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.835 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] api_extensions_path = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.836 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] api_paste_config = api-paste.ini log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.836 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] auth_strategy = keystone log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.837 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] backdoor_port = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.837 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] backlog = 4096 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.838 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.839 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] bind_host = 10.32.20.51 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.839 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] bind_port = 9696 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.840 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] client_socket_timeout = 900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.840 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] config_dir = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.841 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] config_file = ['/etc/neutron/neutron.conf', '/etc/neutron/lbaas_agent.ini'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.841 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] control_exchange = neutron log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.842 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.842 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] debug = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.843 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] default_ipv4_subnet_pool = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.843 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] default_ipv6_subnet_pool = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.844 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.845 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] device_driver = ['neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.845 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] dhcp_agent_notification = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.846 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.846 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] dns_domain = openstacklocal log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.847 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] force_gateway_on_subnet = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.847 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] host = mercury-control-server-1.novalocal log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.848 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.848 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.849 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.849 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ipam_driver = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.850 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] log_config_append = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.850 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.851 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] log_dir = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.851 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] log_file = /var/log/neutron/lbaaas-agent.log log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.852 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] log_format = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.852 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] log_options = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.852 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.853 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.853 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.854 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.854 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] mac_generation_retries = 16 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.855 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] max_dns_nameservers = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.855 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] max_fixed_ips_per_port = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.856 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] max_header_line = 16384 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.856 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] max_subnet_host_routes = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.856 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] network_device_mtu = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.857 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] notification_driver = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.857 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] notification_topics = ['notifications'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.858 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.858 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.859 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova_admin_auth_url = http://10.32.20.50:35357/v2.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.859 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova_admin_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.860 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova_admin_tenant_id = b3051dbd7933403ba5f9fcf67e0a48ec log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.860 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova_admin_tenant_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.861 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova_admin_username = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.861 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova_url = http://10.32.20.50:8774/v2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.862 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ovs_integration_bridge = br-int log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.862 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ovs_use_veth = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.863 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ovs_vsctl_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.863 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] pagination_max_limit = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.863 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] periodic_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.864 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] publish_errors = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.865 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] retry_until_window = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.865 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] rpc_backend = rabbit log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.866 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.866 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.867 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] send_events_interval = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.867 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] service_plugins = ['neutron.services.l3_router.l3_router_plugin.L3RouterPlugin', 'neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.868 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ssl_ca_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.868 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ssl_cert_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.869 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ssl_key_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.869 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] state_path = /var/lib/neutron log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.870 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.870 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] tcp_keepidle = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.870 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] transport_url = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.871 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] use_ssl = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.871 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] use_stderr = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.872 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] use_syslog = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.872 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] use_syslog_rfc_format = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.873 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] verbose = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.873 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] vlan_transparent = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.874 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] wsgi_keep_alive = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-06 21:09:15.874 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] haproxy.loadbalancer_state_path = /var/lib/neutron/lbaas log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.875 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] haproxy.send_gratuitous_arp = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.875 22 WARNING oslo_config.cfg [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] Option "user_group" from group "DEFAULT" is deprecated. Use option "user_group" from group "haproxy".
2016-06-06 21:09:15.876 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] haproxy.user_group = haproxy log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.876 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] OVS.ovsdb_interface = vsctl log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.877 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.877 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.878 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.fake_rabbit = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.878 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.879 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.880 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.880 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.881 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_reconnect_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.881 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_ssl_ca_certs = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.881 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_ssl_certfile = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.882 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_ssl_keyfile = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.882 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.kombu_ssl_version = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.883 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.883 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_host = localhost log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.884 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_hosts = ['10.32.20.51:5672', '10.32.20.52:5672', '10.32.20.53:5672'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.884 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.885 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_max_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.885 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.886 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_port = 5672 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.886 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.887 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.887 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.888 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_use_ssl = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.889 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_userid = guest log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.889 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rabbit_virtual_host = / log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.890 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.890 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_messaging_rabbit.send_single_reply = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.891 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] service_providers.service_provider = ['LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.891 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.892 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.892 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.893 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.893 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237
2016-06-06 21:09:15.894 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.db_max_retries
= 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.894 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.db_max_retry_interva | |
l = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.895 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.db_retry_interval | |
= 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.895 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.idle_timeout | |
= 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.896 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.max_overflow | |
= 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.896 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.max_pool_size | |
= 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.897 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.max_retries | |
= 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.897 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.min_pool_size | |
= 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.898 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.mysql_sql_mode | |
= TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.898 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.pool_timeout | |
= 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.899 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.retry_interval | |
= 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.900 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.slave_connection | |
= **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.900 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.sqlite_db | |
= log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.901 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.sqlite_synchronous | |
= True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.901 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] database.use_db_reconnect | |
= False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.902 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] AGENT.log_agent_heartbeats | |
= False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.902 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] AGENT.report_interval | |
= 30.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.903 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] AGENT.root_helper | |
= sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.903 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] AGENT.root_helper_daemon | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.904 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] AGENT.use_helper_for_ns_read | |
= True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.904 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.auth_plugin | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.905 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.auth_section | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.905 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.cafile | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.905 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.certfile | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.906 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.endpoint_type | |
= public log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.907 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.insecure | |
= False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.907 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.keyfile | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.908 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.region_name | |
= RegionOne log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.908 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] nova.timeout | |
= None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.908 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_concurrency.disable_proc | |
ess_locking = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.909 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] oslo_concurrency.lock_path | |
= /var/lib/neutron/lock log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2237 | |
2016-06-06 21:09:15.909 22 DEBUG oslo_service.service [req-3203488e-552d-4364-8633-21568da976d2 - - - - -] ***************************** | |
*************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2239 | |
2016-06-06 21:09:15.911 22 DEBUG oslo_messaging._drivers.amqp [-] Pool creating new connection create /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:103
2016-06-06 21:09:15.916 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 10.32.20.51:5672
2016-06-06 21:09:15.926 22 DEBUG neutron.common.rpc [-] Creating Consumer connection for Service n-lbaas_agent start /usr/lib/python2.7/site-packages/neutron/common/rpc.py:162
2016-06-06 21:09:15.940 22 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is eae3cc1bc8614aa8ae499d92ca4ec731 _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:392
2016-06-06 21:09:15.941 22 DEBUG oslo_messaging._drivers.amqp [-] Pool creating new connection create /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:103
2016-06-06 21:09:15.942 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 10.32.20.52:5672
2016-06-06 21:09:15.956 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 10.32.20.51:5672
2016-06-06 21:09:15.958 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 10.32.20.52:5672
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready devices
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py", line 152, in sync_state
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     ready_instances = set(self.plugin_rpc.get_ready_devices())
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py", line 36, in get_ready_devices
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     return cctxt.call(self.context, 'get_ready_devices', host=self.host)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=self.retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     timeout=timeout, retry=retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 420, in _send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     result = self._waiter.wait(msg_id, timeout)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 318, in wait
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     message = self.waiters.get(msg_id, timeout=timeout)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 223, in get
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     'to message ID %s' % msg_id)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed out waiting for a reply to message ID eae3cc1bc8614aa8ae499d92ca4ec731
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager
2016-06-06 21:10:15.976 22 DEBUG oslo_messaging._drivers.amqp [-] Pool creating new connection create /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:103
2016-06-06 21:10:15.977 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 10.32.20.53:5672
2016-06-06 21:10:15.992 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 10.32.20.53:5672
2016-06-06 21:10:16.024 22 DEBUG oslo_service.periodic_task [-] Running periodic task LbaasAgentManager.periodic_resync run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:213
2016-06-06 21:10:16.024 22 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 6b59273669bc45acb7641d615e9a85c9 _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:392
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready devices
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py", line 152, in sync_state
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     ready_instances = set(self.plugin_rpc.get_ready_devices())
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py", line 36, in get_ready_devices
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     return cctxt.call(self.context, 'get_ready_devices', host=self.host)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=self.retry)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     timeout=timeout, retry=retry)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=retry)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 420, in _send
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     result = self._waiter.wait(msg_id, timeout)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 318, in wait
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     message = self.waiters.get(msg_id, timeout=timeout)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 223, in get
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     'to message ID %s' % msg_id)
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed out waiting for a reply to message ID 6b59273669bc45acb7641d615e9a85c9
2016-06-06 21:11:16.027 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager
2016-06-06 21:11:16.029 22 DEBUG oslo_service.periodic_task [-] Running periodic task LbaasAgentManager.collect_stats run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:213
2016-06-06 21:11:16.029 22 WARNING oslo.service.loopingcall [-] Function 'neutron_lbaas.services.loadbalancer.agent.agent_manager.LbaasAgentManager.run_periodic_tasks' run outlasted interval by 50.00 sec
2016-06-06 21:11:26.030 22 DEBUG oslo_service.periodic_task [-] Running periodic task LbaasAgentManager.periodic_resync run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:213
2016-06-06 21:11:26.031 22 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 5e545b6f9ffc4a09970713ad16a55d52 _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:392
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready devices
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py", line 152, in sync_state
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     ready_instances = set(self.plugin_rpc.get_ready_devices())
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py", line 36, in get_ready_devices
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     return cctxt.call(self.context, 'get_ready_devices', host=self.host)
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=self.retry)
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     timeout=timeout, retry=retry)
2016-06-06 21:12:26.035 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
<snip>
bash-4.2$ cat /etc/neutron/neutron.conf | |
[DEFAULT] | |
verbose = True | |
debug = True | |
notify_nova_on_port_status_changes = True | |
notify_nova_on_port_data_changes = True | |
nova_url = http://10.32.20.50:8774/v2 | |
nova_admin_auth_url = http://10.32.20.50:35357/v2.0 | |
nova_admin_username = nova | |
nova_admin_password = BJFJr73tLw4CF76P | |
nova_admin_tenant_id = b3051dbd7933403ba5f9fcf67e0a48ec | |
state_path = /var/lib/neutron | |
dhcp_agents_per_network = 2 | |
bind_host = 10.32.20.51 | |
bind_port = 9696 | |
auth_strategy = keystone | |
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin | |
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2 | |
allow_overlapping_ips = True | |
router_distributed = False | |
l3_ha = True | |
min_l3_agents_per_router = 2 | |
max_l3_agents_per_router = 3 | |
# Print more verbose output (set logging level to INFO instead of default WARNING level). | |
# verbose = True | |
# =========Start Global Config Option for Distributed L3 Router=============== | |
# Setting the "router_distributed" flag to "True" will default to the creation | |
# of distributed tenant routers. The admin can override this flag by specifying | |
# the type of the router on the create request (admin-only attribute). Default | |
# value is "False" to support legacy mode (centralized) routers. | |
# | |
# router_distributed = False | |
# | |
# ===========End Global Config Option for Distributed L3 Router=============== | |
# Print debugging output (set logging level to DEBUG instead of default WARNING level). | |
# debug = False | |
# Where to store Neutron state files. This directory must be writable by the | |
# user executing the agent. | |
# state_path = /var/lib/neutron | |
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s | |
# log_date_format = %Y-%m-%d %H:%M:%S | |
# use_syslog -> syslog | |
# log_file and log_dir -> log_dir/log_file | |
# (not log_file) and log_dir -> log_dir/{binary_name}.log | |
# use_stderr -> stderr | |
# (not user_stderr) and (not log_file) -> stdout | |
# publish_errors -> notification system | |
# use_syslog = False | |
# syslog_log_facility = LOG_USER | |
# use_stderr = False | |
# log_file = | |
# log_dir = | |
# publish_errors = False | |
# Address to bind the API server to | |
# bind_host = 0.0.0.0 | |
# Port the bind the API server to | |
# bind_port = 9696 | |
# Path to the extensions. Note that this can be a colon-separated list of | |
# paths. For example: | |
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions | |
# The __path__ of neutron.extensions is appended to this, so if your | |
# extensions are in there you don't need to specify them here | |
# api_extensions_path = | |
# (StrOpt) Neutron core plugin entrypoint to be loaded from the | |
# neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the | |
# plugins included in the neutron source distribution. For compatibility with | |
# previous versions, the class name of a plugin can be specified instead of its | |
# entrypoint name. | |
# | |
# core_plugin = | |
# Example: core_plugin = ml2 | |
# (StrOpt) Neutron IPAM (IP address management) driver to be loaded from the | |
# neutron.ipam_drivers namespace. See setup.cfg for the entry point names. | |
# If ipam_driver is not set (default behavior), no ipam driver is used. | |
# Example: ipam_driver = | |
# In order to use the reference implementation of neutron ipam driver, use | |
# 'internal'. | |
# Example: ipam_driver = internal | |
# (ListOpt) List of service plugin entrypoints to be loaded from the | |
# neutron.service_plugins namespace. See setup.cfg for the entrypoint names of | |
# the plugins included in the neutron source distribution. For compatibility | |
# with previous versions, the class name of a plugin can be specified instead | |
# of its entrypoint name. | |
# | |
# service_plugins = | |
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering,qos | |
# Paste configuration file | |
# api_paste_config = /usr/share/neutron/api-paste.ini | |
# (StrOpt) Hostname to be used by the neutron server, agents and services | |
# running on this machine. All the agents and services running on this machine | |
# must use the same host value. | |
# The default value is hostname of the machine. | |
# | |
# host = | |
# The strategy to be used for auth. | |
# Supported values are 'keystone'(default), 'noauth'. | |
# auth_strategy = noauth | |
# Base MAC address. The first 3 octets will remain unchanged. If the | |
# 4h octet is not 00, it will also be used. The others will be | |
# randomly generated. | |
# 3 octet | |
# base_mac = fa:16:3e:00:00:00 | |
# 4 octet | |
# base_mac = fa:16:3e:4f:00:00 | |
# DVR Base MAC address. The first 3 octets will remain unchanged. If the | |
# 4th octet is not 00, it will also be used. The others will be randomly | |
# generated. The 'dvr_base_mac' *must* be different from 'base_mac' to | |
# avoid mixing them up with MAC's allocated for tenant ports. | |
# A 4 octet example would be dvr_base_mac = fa:16:3f:4f:00:00 | |
# The default is 3 octet | |
# dvr_base_mac = fa:16:3f:00:00:00 | |
# Maximum amount of retries to generate a unique MAC address | |
# mac_generation_retries = 16 | |
# DHCP Lease duration (in seconds). Use -1 to | |
# tell dnsmasq to use infinite lease times. | |
# dhcp_lease_duration = 86400 | |
# Domain to use for building the hostnames | |
# dns_domain = openstacklocal | |
# Allow sending resource operation notification to DHCP agent | |
# dhcp_agent_notification = True | |
# Enable or disable bulk create/update/delete operations | |
# allow_bulk = True | |
# Enable or disable pagination | |
# allow_pagination = False | |
# Enable or disable sorting | |
# allow_sorting = False | |
# Enable or disable overlapping IPs for subnets | |
# Attention: the following parameter MUST be set to False if Neutron is | |
# being used in conjunction with nova security groups | |
# allow_overlapping_ips = True | |
# Ensure that configured gateway is on subnet. For IPv6, validate only if | |
# gateway is not a link local address. Deprecated, to be removed during the | |
# K release, at which point the check will be mandatory. | |
# force_gateway_on_subnet = True | |
# Default maximum number of items returned in a single response, | |
# value == infinite and value < 0 means no max limit, and value must | |
# be greater than 0. If the number of items requested is greater than | |
# pagination_max_limit, server will just return pagination_max_limit | |
# of number of items. | |
# pagination_max_limit = -1 | |
# Maximum number of DNS nameservers per subnet | |
# max_dns_nameservers = 5 | |
# Maximum number of host routes per subnet | |
# max_subnet_host_routes = 20 | |
# Maximum number of fixed ips per port | |
# max_fixed_ips_per_port = 5 | |
# Maximum number of routes per router | |
# max_routes = 30 | |
# Default Subnet Pool to be used for IPv4 subnet-allocation. | |
# Specifies by UUID the pool to be used in case of subnet-create being called | |
# without a subnet-pool ID. The default of None means that no pool will be | |
# used unless passed explicitly to subnet create. If no pool is used, then a | |
# CIDR must be passed to create a subnet and that subnet will not be allocated | |
# from any pool; it will be considered part of the tenant's private address | |
# space. | |
# default_ipv4_subnet_pool = | |
# Default Subnet Pool to be used for IPv6 subnet-allocation. | |
# Specifies by UUID the pool to be used in case of subnet-create being | |
# called without a subnet-pool ID. Set to "prefix_delegation" | |
# to enable IPv6 Prefix Delegation in a PD-capable environment. | |
# See the description for default_ipv4_subnet_pool for more information. | |
# default_ipv6_subnet_pool = | |
# =========== items for MTU selection and advertisement ============= | |
# Advertise MTU. If True, effort is made to advertise MTU | |
# settings to VMs via network methods (ie. DHCP and RA MTU options) | |
# when the network's preferred MTU is known. | |
# advertise_mtu = False | |
# ======== end of items for MTU selection and advertisement ========= | |
# =========== items for agent management extension ============= | |
# Seconds to regard the agent as down; should be at least twice | |
# report_interval, to be sure the agent is down for good | |
# agent_down_time = 75 | |
# Agent starts with admin_state_up=False when enable_new_agents=False. | |
# In the case, user's resources will not be scheduled automatically to the | |
# agent until admin changes admin_state_up to True. | |
# enable_new_agents = True | |
# =========== end of items for agent management extension =====
# =========== items for agent scheduler extension =============
# Driver to use for scheduling network to DHCP agent
# network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
# Driver to use for scheduling router to a default L3 agent
# router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
# Driver to use for scheduling a loadbalancer pool to an lbaas agent
# loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
# (StrOpt) Representing the resource type whose load is being reported by
# the agent.
# This can be 'networks', 'subnets' or 'ports'. When specified (default is networks),
# the server will extract the particular load sent as part of the agent's
# configuration object from the agent report state, which is the number of
# resources being consumed, at every report_interval.
# dhcp_load_type can be used in combination with network_scheduler_driver =
# neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
# When the network_scheduler_driver is WeightScheduler, dhcp_load_type can
# be configured to represent the choice for the resource being balanced.
# Example: dhcp_load_type = networks
# Values:
# networks - number of networks hosted on the agent
# subnets - number of subnets associated with the networks hosted on the agent
# ports - number of ports associated with the networks hosted on the agent
# dhcp_load_type = networks
# Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
# networks to the first DHCP agent which sends a get_active_networks message to
# the neutron server
# network_auto_schedule = True
# Allow auto scheduling routers to L3 agent. It will schedule non-hosted
# routers to the first L3 agent which sends a sync_routers message to the
# neutron server
# router_auto_schedule = True
# Allow automatic rescheduling of routers from dead L3 agents with
# admin_state_up set to True to alive agents.
# allow_automatic_l3agent_failover = False
# Allow automatic removal of networks from dead DHCP agents with
# admin_state_up set to True.
# Networks can then be rescheduled if network_auto_schedule is True
# allow_automatic_dhcp_failover = True
# Number of DHCP agents scheduled to host a tenant network.
# If this number is greater than 1, the scheduler automatically
# assigns multiple DHCP agents for a given tenant network,
# providing high availability for DHCP service.
# dhcp_agents_per_network = 1
# Enable services on agents with admin_state_up False.
# If this option is False, when an agent's admin_state_up is turned to
# False, services on it will be disabled. If this option is True, services
# on agents with admin_state_up False remain available, and manual scheduling
# to such agents is possible. Agents with admin_state_up False are not
# selected for automatic scheduling regardless of this option.
# enable_services_on_agents_with_admin_state_down = False
# =========== end of items for agent scheduler extension =====
# =========== items for l3 extension ==============
# Enable high availability for virtual routers.
# l3_ha = False
#
# Maximum number of l3 agents which a HA router will be scheduled on. If it
# is set to 0 the router will be scheduled on every agent.
# max_l3_agents_per_router = 3
#
# Minimum number of l3 agents which a HA router will be scheduled on. The
# default value is 2.
# min_l3_agents_per_router = 2
#
# CIDR of the administrative network if HA mode is enabled
# l3_ha_net_cidr = 169.254.192.0/18
#
# Enable snat by default on external gateway when available
# enable_snat_by_default = True
#
# The network type to use when creating the HA network for an HA router.
# By default or if empty, the first 'tenant_network_types'
# is used. This is helpful when the VRRP traffic should use a specific
# network which is not the default one.
# ha_network_type =
# Example: ha_network_type = flat
#
# The physical network name with which the HA network can be created.
# ha_network_physical_name =
# Example: ha_network_physical_name = physnet1
# =========== end of items for l3 extension =======
# =========== items for metadata proxy configuration ==============
# User (uid or name) running metadata proxy after its initialization
# (if empty: agent effective user)
# metadata_proxy_user =
# Group (gid or name) running metadata proxy after its initialization
# (if empty: agent effective group)
# metadata_proxy_group =
# Enable/Disable log watch by metadata proxy; it should be disabled when
# metadata_proxy_user/group is not allowed to read/write its log file, and
# the 'copytruncate' logrotate option must be used if logrotate is enabled on
# metadata proxy log files. The option's default value is deduced from
# metadata_proxy_user: watch log is enabled if metadata_proxy_user is the
# agent effective user id/name.
# metadata_proxy_watch_log =
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
# =========== end of items for metadata proxy configuration ==============
# ========== items for VLAN trunking networks ==========
# Setting this flag to True will allow plugins that support it to
# create VLAN transparent networks. This flag has no effect for
# plugins that do not support VLAN transparent networks.
# vlan_transparent = False
# ========== end of items for VLAN trunking networks ==========
# =========== WSGI parameters related to the API server ==============
# Number of separate API worker processes to spawn. If not specified or < 1,
# the default value is equal to the number of CPUs available.
# api_workers = <number of CPUs>
# Number of separate RPC worker processes to spawn. If not specified or < 1,
# a single RPC worker process is spawned by the parent process.
# rpc_workers = 1
# Timeout for client connections socket operations. If an
# incoming connection is idle for this number of seconds it
# will be closed. A value of '0' means wait forever. (integer
# value)
# client_socket_timeout = 900
# wsgi keepalive option. Determines if connections are allowed to be held open
# by clients after a request is fulfilled. A value of False will ensure that
# the socket connection will be explicitly closed once a response has been
# sent to the client.
# wsgi_keep_alive = True
# Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
# starting API server. Not supported on OS X.
# tcp_keepidle = 600
# Number of seconds to keep retrying to listen
# retry_until_window = 30
# Number of backlog requests to configure the socket with.
# backlog = 4096
# Max header line to accommodate large tokens
# max_header_line = 16384
# Enable SSL on the API server
# use_ssl = False
# Certificate file to use when starting API server securely
# ssl_cert_file = /path/to/certfile
# Private key file to use when starting API server securely
# ssl_key_file = /path/to/keyfile
# CA certificate file to use when starting API server securely to
# verify connecting clients. This is an optional parameter only required if
# API clients need to authenticate to the API server using SSL certificates
# signed by a trusted CA
# ssl_ca_file = /path/to/cafile
# ======== end of WSGI parameters related to the API server ==========
# ======== neutron nova interactions ==========
# Send notification to nova when port status is active.
# notify_nova_on_port_status_changes = False
# Send notifications to nova when port data (fixed_ips/floatingips) change
# so nova can update its cache.
# notify_nova_on_port_data_changes = False
# URL for connection to nova (Only supports one nova region currently).
# nova_url = http://127.0.0.1:8774/v2
# Name of nova region to use. Useful if keystone manages more than one region
# nova_region_name =
# Username for connection to nova in admin context
# nova_admin_username =
# The uuid of the admin nova tenant
# nova_admin_tenant_id =
# The name of the admin nova tenant. If the uuid of the admin nova tenant
# is set, this is optional. Useful for cases where the uuid of the admin
# nova tenant is not available when configuration is being done.
# nova_admin_tenant_name =
# Password for connection to nova in admin context.
# nova_admin_password =
# Authorization URL for connection to nova in admin context.
# nova_admin_auth_url =
# CA file for novaclient to verify server certificates
# nova_ca_certificates_file =
# Boolean to control ignoring SSL errors on the nova url
# nova_api_insecure = False
# Number of seconds between sending events to nova if there are any events to send
# send_events_interval = 2
# ======== end of neutron nova interactions ==========
#
# Options defined in oslo.messaging
#
# Use durable queues in amqp. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
# amqp_durable_queues=false
# Auto-delete queues in amqp. (boolean value)
# amqp_auto_delete=false
# Size of RPC connection pool. (integer value)
# rpc_conn_pool_size=30
# Qpid broker hostname. (string value)
# qpid_hostname=localhost
# Qpid broker port. (integer value)
# qpid_port=5672
# Qpid HA cluster host:port pairs. (list value)
# qpid_hosts=$qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
# qpid_username=
# Password for Qpid connection. (string value)
# qpid_password=
# Space separated list of SASL mechanisms to use for auth.
# (string value)
# qpid_sasl_mechanisms=
# Seconds between connection keepalive heartbeats. (integer
# value)
# qpid_heartbeat=60
# Transport to use, either 'tcp' or 'ssl'. (string value)
# qpid_protocol=tcp
# Whether to disable the Nagle algorithm. (boolean value)
# qpid_tcp_nodelay=true
# The qpid topology version to use. Version 1 is what was
# originally used by impl_qpid. Version 2 includes some
# backwards-incompatible changes that allow broker federation
# to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break.
# (integer value)
# qpid_topology_version=1
# SSL version to use (valid only if SSL enabled). Valid values
# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some
# distributions. (string value)
# kombu_ssl_version=
# SSL key file (valid only if SSL enabled). (string value)
# kombu_ssl_keyfile=
# SSL cert file (valid only if SSL enabled). (string value)
# kombu_ssl_certfile=
# SSL certification authority file (valid only if SSL
# enabled). (string value)
# kombu_ssl_ca_certs=
# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
# kombu_reconnect_delay=1.0
# The RabbitMQ broker address where a single node is used.
# (string value)
# rabbit_host=localhost
# The RabbitMQ broker port where a single node is used.
# (integer value)
# rabbit_port=5672
# RabbitMQ HA cluster host:port pairs. (list value)
# rabbit_hosts=$rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# rabbit_use_ssl=false
# The RabbitMQ userid. (string value)
# rabbit_userid=guest
# The RabbitMQ password. (string value)
# rabbit_password=guest
# The RabbitMQ login method (string value)
# rabbit_login_method=AMQPLAIN
# The RabbitMQ virtual host. (string value)
# rabbit_virtual_host=/
# How frequently to retry connecting with RabbitMQ. (integer
# value)
# rabbit_retry_interval=1
# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
# rabbit_retry_backoff=2
# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# rabbit_max_retries=0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. (boolean
# value)
# rabbit_ha_queues=false
# If passed, use a fake RabbitMQ provider. (boolean value)
# fake_rabbit=false
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
# rpc_zmq_bind_address=*
# MatchMaker driver. (string value)
# rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
# ZeroMQ receiver listening port. (integer value)
# rpc_zmq_port=9501
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# rpc_zmq_contexts=1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
# rpc_zmq_topic_backlog=
# Directory for holding IPC sockets. (string value)
# rpc_zmq_ipc_dir=/var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
# rpc_zmq_host=oslo
# Seconds to wait before a cast expires (TTL). Only supported
# by impl_zmq. (integer value)
# rpc_cast_timeout=30
# Heartbeat frequency. (integer value)
# matchmaker_heartbeat_freq=300
# Heartbeat time-to-live. (integer value)
# matchmaker_heartbeat_ttl=600
# Size of RPC greenthread pool. (integer value)
# rpc_thread_pool_size=64
# Driver or drivers to handle sending notifications. (multi
# valued)
# notification_driver=
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# notification_topics=notifications
# Seconds to wait for a response from a call. (integer value)
# rpc_response_timeout=60
# A URL representing the messaging driver to use and its full
# configuration. If not set, we fall back to the rpc_backend
# option and driver specific configuration. (string value)
# transport_url=
# The messaging driver to use, defaults to rabbit. Other
# drivers include qpid and zmq. (string value)
# rpc_backend=rabbit
# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the
# transport_url option. (string value)
# control_exchange=openstack
[matchmaker_redis]
#
# Options defined in oslo.messaging
#
# Host to locate redis. (string value)
# host=127.0.0.1
# Use this port to connect to redis host. (integer value)
# port=6379
# Password for Redis server (optional). (string value)
# password=
[matchmaker_ring]
#
# Options defined in oslo.messaging
#
# Matchmaker ring file (JSON). (string value)
# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
# ringfile=/etc/oslo/matchmaker_ring.json
[quotas]
# Default driver to use for quota checks
# quota_driver = neutron.db.quota.driver.DbQuotaDriver
# Resource name(s) that are supported in quota features
# This option is deprecated for removal in the M release, please refrain from using it
# quota_items = network,subnet,port
# Default number of resources allowed per tenant. A negative value means
# unlimited.
# default_quota = -1
# Number of networks allowed per tenant. A negative value means unlimited.
# quota_network = 10
# Number of subnets allowed per tenant. A negative value means unlimited.
# quota_subnet = 10
# Number of ports allowed per tenant. A negative value means unlimited.
# quota_port = 50
# Number of security groups allowed per tenant. A negative value means
# unlimited.
# quota_security_group = 10
# Number of security group rules allowed per tenant. A negative value means
# unlimited.
# quota_security_group_rule = 100
# Number of vips allowed per tenant. A negative value means unlimited.
# quota_vip = 10
# Number of pools allowed per tenant. A negative value means unlimited.
# quota_pool = 10
# Number of pool members allowed per tenant. A negative value means unlimited.
# The default is unlimited because a member is not a real resource consumer
# on OpenStack. However, on the back end, a member is a resource consumer,
# and that is the reason why a quota is possible.
# quota_member = -1
# Number of health monitors allowed per tenant. A negative value means
# unlimited.
# The default is unlimited because a health monitor is not a real resource
# consumer on OpenStack. However, on the back end, a health monitor is a
# resource consumer, and that is the reason why a quota is possible.
# quota_health_monitor = -1
# Number of loadbalancers allowed per tenant. A negative value means unlimited.
# quota_loadbalancer = 10
# Number of listeners allowed per tenant. A negative value means unlimited.
# quota_listener = -1
# Number of v2 health monitors allowed per tenant. A negative value means
# unlimited. These health monitors exist under the lbaas v2 API
# quota_healthmonitor = -1
# Number of routers allowed per tenant. A negative value means unlimited.
# quota_router = 10
# Number of floating IPs allowed per tenant. A negative value means unlimited.
# quota_floatingip = 50
# Number of firewalls allowed per tenant. A negative value means unlimited.
# quota_firewall = 1
# Number of firewall policies allowed per tenant. A negative value means
# unlimited.
# quota_firewall_policy = 1
# Number of firewall rules allowed per tenant. A negative value means
# unlimited.
# quota_firewall_rule = 100
# Default number of RBAC entries allowed per tenant. A negative value means
# unlimited.
# quota_rbac_policy = 10
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
# Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the command directly
# root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
# Set to true to add comments to generated iptables rules that describe
# each rule's purpose. (System must support the iptables comments module.)
# comment_iptables_rules = True
# Root helper daemon application to use when possible.
# root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
# Use the root helper when listing the namespaces on a system. This may not
# be required depending on the security configuration. If the root helper is
# not required, set this to False for a performance improvement.
# use_helper_for_ns_read = True
# The interval to check external processes for failure in seconds (0=disabled)
# check_child_processes_interval = 60
# Action to take when an external process spawned by an agent dies
# Values:
# respawn - Respawns the external process
# exit - Exits the agent
# check_child_processes_action = respawn
# =========== items for agent management extension =============
# seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time
# report_interval = 30
# =========== end of items for agent management extension =====
[keystone_authtoken]
auth_uri = http://10.32.20.50:5000/v2.0/
identity_uri = http://10.32.20.50:35357/
admin_tenant_name = service
admin_user = neutron
admin_password = s1wht51fAIVgfvOM
[database]
connection = mysql://neutron:VFwDhvj3fxpodbTa@10.32.20.50/neutron
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql+pymysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://
# NOTE: In deployment the [database] section and its connection attribute may
# be set in the corresponding core plugin '.ini' file. However, it is suggested
# to put the [database] section and its connection attribute in this
# configuration file.
# Database engine for which script will be generated when using offline
# migration
# engine =
# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
# Database reconnection interval in seconds - if the initial connection to the
# database fails
# retry_interval = 10
# Minimum number of SQL connections to keep open in a pool
# min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool
# max_pool_size = 10
# Timeout in seconds before idle sql connections are reaped
# idle_timeout = 3600
# If set, use this value for max_overflow with sqlalchemy
# max_overflow = 20
# Verbosity of SQL debugging information. 0=None, 100=Everything
# connection_debug = 0
# Add python stack traces to SQL as comment strings
# connection_trace = False
# If set, use this value for pool_timeout with sqlalchemy
# pool_timeout = 10
[nova]
region_name = RegionOne
# Name of the plugin to load
# auth_plugin =
# Config Section from which to load plugin specific options
# auth_section =
# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# cafile =
# PEM encoded client certificate cert file
# certfile =
# Type of the nova endpoint to use. This endpoint will be looked up in the
# keystone catalog and should be one of public, internal or admin.
# endpoint_type = public
# Verify HTTPS connections.
# insecure = False
# PEM encoded client certificate key file
# keyfile =
# Name of nova region to use. Useful if keystone manages more than one region.
# region_name =
# Timeout value for http requests
# timeout =
[oslo_concurrency]
lock_path = /var/lib/neutron/lock
# Directory to use for lock files. For security, the specified directory should
# only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set.
# lock_path = $state_path/lock
# Enables or disables inter-process locks.
# disable_process_locking = False
[oslo_policy]
# The JSON file that defines policies.
# policy_file = policy.json
# Default rule. Enforced when a requested rule is not found.
# policy_default_rule = default
# Directories where policy configuration files are stored.
# They can be relative to any directory in the search path defined by the
# config_dir option, or absolute paths. The file defined by policy_file
# must exist for these directories to be searched. Missing or empty
# directories are ignored.
# policy_dirs = policy.d
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
# server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
# broadcast_prefix = broadcast
# Address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
# group_request_prefix = unicast
# Name for the AMQP container (string value)
# Deprecated group/name - [amqp1]/container_name
# container_name =
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
# idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
# trace = false
# CA certificate PEM file for verifying server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
# ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
# ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
# ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
# ssl_key_password =
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
# allow_insecure_clients = false
[oslo_messaging_qpid]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
# amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
# amqp_auto_delete = false
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
# rpc_conn_pool_size = 30
# Qpid broker hostname. (string value)
# Deprecated group/name - [DEFAULT]/qpid_hostname
# qpid_hostname = localhost
# Qpid broker port. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_port
# qpid_port = 5672
# Qpid HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/qpid_hosts
# qpid_hosts = $qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
# Deprecated group/name - [DEFAULT]/qpid_username
# qpid_username =
# Password for Qpid connection. (string value)
# Deprecated group/name - [DEFAULT]/qpid_password
# qpid_password =
# Space separated list of SASL mechanisms to use for auth. (string value)
# Deprecated group/name - [DEFAULT]/qpid_sasl_mechanisms
# qpid_sasl_mechanisms =
# Seconds between connection keepalive heartbeats. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_heartbeat
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'. (string value)
# Deprecated group/name - [DEFAULT]/qpid_protocol
# qpid_protocol = tcp
# Whether to disable the Nagle algorithm. (boolean value)
# Deprecated group/name - [DEFAULT]/qpid_tcp_nodelay
# qpid_tcp_nodelay = true
# The number of prefetched messages held by receiver. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_receiver_capacity
# qpid_receiver_capacity = 1
# The qpid topology version to use. Version 1 is what was originally used by
# impl_qpid. Version 2 includes some backwards-incompatible changes that allow
# broker federation to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_topology_version
# qpid_topology_version = 1
[oslo_messaging_rabbit]
rabbit_hosts = 10.32.20.51:5672,10.32.20.52:5672,10.32.20.53:5672
rabbit_userid = guest
rabbit_password = JLqzObrt0aDEtSWP
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
# amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
# amqp_auto_delete = false
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
# rpc_conn_pool_size = 30
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
# kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
# kombu_reconnect_delay = 1.0
# The RabbitMQ broker address where a single node is used. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# rabbit_host = localhost
# The RabbitMQ broker port where a single node is used. (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_port
# rabbit_port = 5672
# RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
# rabbit_use_ssl = false
# The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# rabbit_userid = guest
# The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
# rabbit_login_method = AMQPLAIN
# The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
# rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
# rabbit_retry_backoff = 2
# Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry
# count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# rabbit_max_retries = 0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you
# must wipe the RabbitMQ database. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
# rabbit_ha_queues = false
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
# fake_rabbit = false
[qos]
# Drivers list to use to send the update notification
# notification_drivers = message_queue
[service_providers]
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
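
For LBaaS v2, the service_provider entry above normally needs the v2 service plugin loaded in the [DEFAULT] section of neutron.conf as well. A minimal sketch, assuming the stock neutron-lbaas v2 plugin path from this era (verify against your installed python-neutron-lbaas package, since `lbaasv2` already shows in ext-list here):

```ini
[DEFAULT]
# Assumed plugin path shipped with neutron-lbaas 7.x; 'router' kept since
# the L3 extension is also in use above.
service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
```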
$ rpm -qa | grep neutron
python-neutron-7.0.4-2.el7ost.noarch
openstack-neutron-7.0.4-2.el7ost.noarch
python-neutronclient-3.1.0-1.el7ost.noarch
openstack-neutron-common-7.0.4-2.el7ost.noarch
python-neutron-lbaas-7.0.0-2.el7ost.noarch
openstack-neutron-lbaas-7.0.0-2.el7ost.noarch
openstack-neutron-ml2-7.0.4-2.el7ost.noarch
bash-4.2$ rabbitmqctl cluster_status
Cluster status of node 'rabbit@mercury-control-server-1' ...
[{nodes,[{disc,['rabbit@mercury-control-server-1',
                'rabbit@mercury-control-server-2',
                'rabbit@mercury-control-server-3']}]},
 {running_nodes,['rabbit@mercury-control-server-3',
                 'rabbit@mercury-control-server-2',
                 'rabbit@mercury-control-server-1']},
 {cluster_name,<<"rabbit@mercury-control-server-1">>},
 {partitions,[]}]
...done.
bash-4.2$ rabbitmqctl list_queues
Listing queues ...
cinder-scheduler 0
cinder-scheduler.mercury-control-server-1.novalocal 0
cinder-scheduler.mercury-control-server-2.novalocal 0
cinder-scheduler.mercury-control-server-3.novalocal 0
cinder-scheduler_fanout_3d4271f4858c472e924fd1bec95942e4 0
cinder-scheduler_fanout_6f45edf2b8b54d2e90497e655713e4d6 0
cinder-scheduler_fanout_9b8e77277bef4d439e41ecf1155058c6 0
cinder-volume 0
cinder-volume.mercury-control-server-1.novalocal 0
cinder-volume.mercury-control-server-2.novalocal 0
cinder-volume.mercury-control-server-3.novalocal 0
cinder-volume_fanout_49542fc9c15f49c3bf82e90015d06559 0
cinder-volume_fanout_9f0a4a96aeb2422294187775e1019803 0
cinder-volume_fanout_dba86d26eb124e6c9699d1a9469bf26f 0
compute 0
compute.mercury-compute-server-1.novalocal 0
compute.mercury-compute-server-2.novalocal 0
conductor 0
conductor.mercury-control-server-1.novalocal 0
conductor.mercury-control-server-2.novalocal 0
conductor.mercury-control-server-3.novalocal 0
conductor_fanout_135cf9cadce847f6bfd2df9942f1c676 0
conductor_fanout_1f420d0206bf42ad828a74e4b13ccfce 0
conductor_fanout_2012973d6a4549aab6b0239aa7003922 0
conductor_fanout_25cb87c10e4e45e987e4771e2a31c524 0
conductor_fanout_25e5e51029364b7a858fc94c5d2dd2bb 0
conductor_fanout_32ec8144b8d94667883d4264e88d55b8 0
conductor_fanout_35e801e91de042d480a936928b8f4d57 0
conductor_fanout_3e554029fa31422591818cab8c034a85 0
conductor_fanout_438a8526d4c2460d91555ca836a78aa5 0
conductor_fanout_43ab2a3761b34a43940f1f440f2673c9 0
conductor_fanout_491bb5ae7f12421d8a579f1cca0385dc 0
conductor_fanout_6f544dc7bf51457bbf26284e50415b69 0
conductor_fanout_77f69f55731e4d84820026359c32fdf2 0
conductor_fanout_9257d5862b3d431cb4a2c731e44eb400 0
conductor_fanout_92e0f1efd6a1416cade95659c0930287 0
conductor_fanout_9b06a6255e1742859ed88775da5fbcdb 0
conductor_fanout_a5a7f72e22304263bafd8b7ec7f5e371 0
conductor_fanout_c86d88bdb2534512975ec9a646ac508a 0
conductor_fanout_ccbb2ecbd85243d1ae231d62a4a100f3 0
conductor_fanout_d6a4af2910034cc7bbb7d23bbe2f3913 0
conductor_fanout_d726db92118340abad10b86f7194e070 0
conductor_fanout_e2e932ccd930494ab6d1cfb14c26f43d 0
conductor_fanout_ea8a46d6d57440d4baa19e5f3bff1b55 0
conductor_fanout_ef04a981f78d49e487acad3b7df5a20b 0
consoleauth 0
consoleauth.mercury-control-server-1.novalocal 0
consoleauth.mercury-control-server-2.novalocal 0
consoleauth.mercury-control-server-3.novalocal 0
consoleauth_fanout_1c8bfd31459146e3b98c18d4f7fb12ee 0
consoleauth_fanout_1e014668390a43febf1e1f59844545fd 0
consoleauth_fanout_66c8e82763254742b00aa5c526c9bbfb 0
dhcp_agent 0
dhcp_agent.mercury-control-server-1.novalocal 0
dhcp_agent.mercury-control-server-2.novalocal 0
dhcp_agent.mercury-control-server-3.novalocal 0
dhcp_agent_fanout_664d6601f5454f71ab1c2b7a679bd313 0
dhcp_agent_fanout_6903fce1aedd45799b49e26eb6449162 0
dhcp_agent_fanout_f7be048029a84aa48158e01f48f1274d 0
engine 0
engine.mercury-control-server-1.novalocal 0
engine.mercury-control-server-2.novalocal 0
engine.mercury-control-server-3.novalocal 0
engine_fanout_011cb7e176504b31b485d5ab6c0a5b8a 0
engine_fanout_02031a51251045ea887968e7c0ee020e 0
engine_fanout_191fb130c14a4b9f849a1c4796a22f2a 0
engine_fanout_1e49e7fc1cee4faf9757d8e6404f0347 0
engine_fanout_1f7df711f3ce4666b1ac5500b686fe06 0
engine_fanout_24238a7afb8040f48f14076784959c1c 0
engine_fanout_29643590216949f5b77ce3985fb22a00 0
engine_fanout_474e2e55665f442990bd4f5965d8a339 0
engine_fanout_59dc42eda0b84badbae3892a96867ae0 0
engine_fanout_7b088044a1124367b2a93a9ef4dc24ef 0
engine_fanout_7b5efce4f5904e5c843c711292801d02 0
engine_fanout_81c78e4f8d5f4401bda50a524682f0a8 0
engine_fanout_8207e30fb21f467196ac5dcd44c59b15 0
engine_fanout_8c84670f9315448796cb0ad5bebbfc6a 0
engine_fanout_9535587bcc4f455fa0b15f1dd621d3fc 0
engine_fanout_991986fc76414e7097382e44130a0d24 0
engine_fanout_9ecb01c1d532449db1999b1dea7f10e2 0
engine_fanout_a9883dc5f15344b98d2f192622acbbdb 0
engine_fanout_c0491dfb07094b3f8f33807c0e9ac566 0
engine_fanout_d13b352d73cb4df988efc72633e243f2 0
engine_fanout_e2c489c042c049debabf344c3fceb89a 0
engine_fanout_e2f4ebc7bc484abb8c7efb3ea1e0bbfd 0
engine_fanout_eedc0537c8a94fc7be5b265a61f2e2ef 0
engine_fanout_f9a8b3958c5f43708ff7ff031f111929 0
heat-engine-listener 0
heat-engine-listener.05567d67-9e4a-4500-93ac-189bddd94252 0
heat-engine-listener.1c0f222e-2f75-4352-98bc-f21dfdb9ad03 0
heat-engine-listener.2a8ce267-8fac-45d7-b3de-fcd9a0795734 0
heat-engine-listener.38c189a0-3f7b-4e4c-b8f3-e44b989c2c64 0
heat-engine-listener.3a95dee1-6d19-47ac-82c2-3e871b157c2a 0
heat-engine-listener.3afabcf0-3182-456b-a2e1-edf9dca0286f 0
heat-engine-listener.43e7a6bf-d8f6-41ff-867c-ad0b2baa9ea2 0
heat-engine-listener.61c80605-a456-41f5-a245-d59f1e3fb175 0
heat-engine-listener.67e639d5-15b3-49dc-a5c3-1a1046918456 0
heat-engine-listener.7bbf95c4-3e1c-4c20-a58b-d405f12c0436 0
heat-engine-listener.7ddb4db0-a663-4f03-9680-2d2bdb660fa8 0
heat-engine-listener.90c93118-40d9-4284-8117-aca3c69d4c68 0
heat-engine-listener.9719eb28-90c5-4986-b9e8-2703444ef7eb 0
heat-engine-listener.99736a5b-bce5-44c9-bd7e-0a5b5b5c531f 0
heat-engine-listener.a5b7f7c7-14be-41bd-bb4a-a48756e51714 0
heat-engine-listener.b3646a76-1e5a-4046-8f23-d648022c0501 0
heat-engine-listener.b6f1108f-2fbe-40b8-a5c4-b5dd3445ad5c 0
heat-engine-listener.c634e038-363b-47e8-88cd-c5f157a8d8df 0
heat-engine-listener.d0f57f26-2174-4119-9b91-384335247b9f 0
heat-engine-listener.d1755784-9f8d-4f23-bed1-d2e6d7624ac2 0
heat-engine-listener.e1456ff3-b02d-49e8-8a10-3d5aa2f87805 0
heat-engine-listener.e6e0dfc1-8ed1-4e74-a19a-7630b81b227e 0
heat-engine-listener.ede6c1b2-de22-4bb8-b386-3828e19e76cd 0
heat-engine-listener.ee8695a3-c065-420f-b7e5-7140e282de3e 0
heat-engine-listener_fanout_083348a9bf9f45c681b12e95fb4c71d1 0
heat-engine-listener_fanout_12353abbdd254d74b41ac6e5a5c22e28 0
heat-engine-listener_fanout_1ce536d17bea462c921d3e9aeda44b0c 0
heat-engine-listener_fanout_1dd6113fe4714db388717901a2e01fd2 0
heat-engine-listener_fanout_22484ca3cc16424f9929ff0436ebf9d1 0
heat-engine-listener_fanout_2893874da15b40a8a50cf5b058dee5a8 0
heat-engine-listener_fanout_2a3c8e4e345e41efb21ee45766dd2006 0
heat-engine-listener_fanout_2bd0fea034e64bda9d3c4103df80a0d7 0
heat-engine-listener_fanout_2c57cc33c87d4f7fb7be294e82a23447 0
heat-engine-listener_fanout_3f2eb2f4013c430eb63726719a4d0ea0 0
heat-engine-listener_fanout_4e75767cf2d24d3e9b0fe73f387e149c 0
heat-engine-listener_fanout_502d23eac9ac4bcbb136bf37f40ea436 0
heat-engine-listener_fanout_5266d79709d94c9ca54173a374379698 0
heat-engine-listener_fanout_546522533b5847168b1fd01cdd88326e 0
heat-engine-listener_fanout_7149e1ae7a4345349697cc32366aa81b 0
heat-engine-listener_fanout_94ab2562b4824e9c8cb3ecff32ea18d8 0
heat-engine-listener_fanout_9f8618b7544343f6bbe9267980b4fb31 0
heat-engine-listener_fanout_a37fe3b282e349b1bb63262911d96538 0
heat-engine-listener_fanout_bd0d40696cf449ceba6bd0326fb21d0c 0
heat-engine-listener_fanout_d3799c7ff9314176977d8609cbea6720 0
heat-engine-listener_fanout_d432de1bc6ab4da58a06a321d990952a 0
heat-engine-listener_fanout_df5e636d82ea4d1d86bd68656ffaa8d3 0
heat-engine-listener_fanout_f4069343fe7d46dd87a03eb828b49d4e 0
heat-engine-listener_fanout_fe95ee4034f94dc1acc966bff15b546e 0
l3_agent 0
l3_agent.mercury-control-server-1.novalocal 0
l3_agent.mercury-control-server-2.novalocal 0
l3_agent.mercury-control-server-3.novalocal 0
l3_agent_fanout_45d1208ae3e54423b0a2e005152cff00 0
l3_agent_fanout_9253ccd2ab87477aa26f71b112f9f760 0
l3_agent_fanout_d5996470cd4f43538f0f746d1dd402a9 0
n-lbaas_agent 0
n-lbaas_agent.mercury-control-server-1.novalocal 0
n-lbaas_agent.mercury-control-server-2.novalocal 0
n-lbaas_agent.mercury-control-server-3.novalocal 0
n-lbaas_agent_fanout_18a3b28c969148f3a008df8f3e5f5363 0
n-lbaas_agent_fanout_a7d48e8a1b27443d82ee4944bec44cf8 0
n-lbaas_agent_fanout_b5360edb19c240e79c71d60806977f66 0
n-lbaasv2-plugin 0
n-lbaasv2-plugin.mercury-control-server-1.novalocal 0
n-lbaasv2-plugin.mercury-control-server-2.novalocal 0
n-lbaasv2-plugin.mercury-control-server-3.novalocal 0
n-lbaasv2-plugin_fanout_5cbb6dd4fafc4c4784add8a20e0a28a5 0
n-lbaasv2-plugin_fanout_756ee4e4eee547528d0f6e3dde71b150 0
n-lbaasv2-plugin_fanout_7629f7bb85ce493d83c334dfcc2cd4aa 0
notifications.info 8
q-agent-notifier-network-delete 0
q-agent-notifier-network-delete.mercury-compute-server-1.novalocal 0
q-agent-notifier-network-delete.mercury-compute-server-2.novalocal 0
q-agent-notifier-network-delete.mercury-control-server-1.novalocal 0
q-agent-notifier-network-delete.mercury-control-server-2.novalocal 0
q-agent-notifier-network-delete.mercury-control-server-3.novalocal 0
q-agent-notifier-network-delete_fanout_0e555fdf2cb54b689d9714f2acaf9a35 0
q-agent-notifier-network-delete_fanout_3ce6cf1fc4af4889b273ec7d06a8b05e 0
q-agent-notifier-network-delete_fanout_496539998cbd4348922826689358e6bb 0
q-agent-notifier-network-delete_fanout_6c026836c04d4a398c6f36401bd18047 0
q-agent-notifier-network-delete_fanout_7697dad4caa547d39a2722748fd1e193 0
q-agent-notifier-port-update 0
q-agent-notifier-port-update.mercury-compute-server-1.novalocal 0
q-agent-notifier-port-update.mercury-compute-server-2.novalocal 0
q-agent-notifier-port-update.mercury-control-server-1.novalocal 0
q-agent-notifier-port-update.mercury-control-server-2.novalocal 0
q-agent-notifier-port-update.mercury-control-server-3.novalocal 0
q-agent-notifier-port-update_fanout_1224d0f46e0b4bc7803f8943b12ee05f 0
q-agent-notifier-port-update_fanout_5fca72c707ea4f16bb6441fe07ed1031 0
q-agent-notifier-port-update_fanout_b92cb8b359ea499489ee5b0573773bdc 0
q-agent-notifier-port-update_fanout_f25c7328d8da479cbd6d8e600e6a092a 0
q-agent-notifier-port-update_fanout_fdb6e8214d1c46ec990be44eb671e284 0
q-agent-notifier-security_group-update 0
q-agent-notifier-security_group-update.mercury-compute-server-1.novalocal 0
q-agent-notifier-security_group-update.mercury-compute-server-2.novalocal 0
q-agent-notifier-security_group-update.mercury-control-server-1.novalocal 0
q-agent-notifier-security_group-update.mercury-control-server-2.novalocal 0
q-agent-notifier-security_group-update.mercury-control-server-3.novalocal 0
q-agent-notifier-security_group-update_fanout_02eb31cb51284d2c9c02713687cb3b9a 0
q-agent-notifier-security_group-update_fanout_627127a3af5a426b9c0947a06595e110 0
q-agent-notifier-security_group-update_fanout_8d043c5e07314b45a5cf0aa93ab8511c 0
q-agent-notifier-security_group-update_fanout_a22d823a1e814ccaa779d38b227c0b51 0
q-agent-notifier-security_group-update_fanout_e82724a567ad41fbb8980fabfff9abec 0
q-l3-plugin 0
q-l3-plugin.mercury-control-server-1.novalocal 0
q-l3-plugin.mercury-control-server-2.novalocal 0
q-l3-plugin.mercury-control-server-3.novalocal 0
q-l3-plugin_fanout_20314ec94a514ea0bf8bb65b303f48cd 0
q-l3-plugin_fanout_c3f3c1d7d6f3498eb8a239bc8a2d184a 0
q-l3-plugin_fanout_cbfe05b5a13845d8a4417c5f073cd530 0
q-plugin 0
q-plugin.mercury-control-server-1.novalocal 0
q-plugin.mercury-control-server-2.novalocal 0
q-plugin.mercury-control-server-3.novalocal 0
q-plugin_fanout_0e9e34552e5b46698ef853eaebb28240 0
q-plugin_fanout_973883ebfdce4808b642d3f8d09e65b0 0
q-plugin_fanout_b30eb0b87bc74d2397b4c405ed5a3e6b 0
reply_11c5a0e738424717b5c4217d6bd74ef5 0
reply_144a80d0fcad4c8f998b6e440f1e757f 0
reply_1538200af8a4453fb006eb6b516efffb 0
reply_1ed8f0eeb56346b68a3929817772f882 0
reply_27014b10ce1e4b109b72924fb9262e39 0
reply_298e974973874b199b9fb92d48d6e880 0
reply_33654def69fb479889a0af1c3b7aee4b 0
reply_40173e13b6534d0e9215bb203c880f8c 0
reply_445bd650b8bc4c3898a3ea7b170300d1 0
reply_50bfeaf5026f4793a18572b03eaf605c 0
reply_681f29f1f8c84b0ca4c533f8d8adf9a8 0
reply_747fceac0a914116bf50e0b7ef665f76 0
reply_74d75cc889684f368496d48d9653c257 0
reply_81f87d38ca594db98d9bacc7cbf4f9a6 0
reply_9e4a0ae4fbb64f3ba3de4a2baaa63495 0
reply_9fe7cf2ae72f43f590aa73b138c6a7fc 0
reply_a956029a6bcd43ffa6cd2f91b18cf734 0
reply_afab45ac723d44ec9651a7ba70ba4118 0
reply_c4207d45e1f64c19b112d367e5a879e6 0
reply_cb79e67ad8f34b659fb2724d58a0316c 0
reply_ce42a8dec8304278986c93026ac0bca8 0
reply_e6046aa232404820a96e9830c5dfbf71 0
scheduler 0
scheduler.mercury-control-server-1.novalocal 0
scheduler.mercury-control-server-2.novalocal 0
scheduler.mercury-control-server-3.novalocal 0
scheduler_fanout_5d7c5d84033c42979e92a0582aa39ea2 0
scheduler_fanout_6905112eb03843189caf1effbb70474d 0
scheduler_fanout_95c7e4f248604c85b6b87054b2365216 0
...done.
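The `list_queues` output above is long enough that it helps to summarize it. A minimal sketch (`summarize_queues` is a hypothetical helper, assuming the two-column name/depth format `rabbitmqctl` emits) that groups queues by their base name:

```python
from collections import Counter

def summarize_queues(listing):
    """Group rabbitmqctl list_queues output lines by base queue name.

    Strips the per-host '.<hostname>' suffix and the '_fanout_<uuid>'
    suffix so that, e.g., 'n-lbaasv2-plugin_fanout_5cbb...' and
    'n-lbaasv2-plugin.mercury-control-server-1.novalocal' both count
    toward 'n-lbaasv2-plugin'.
    """
    counts = Counter()
    for line in listing.splitlines():
        parts = line.split()
        if len(parts) != 2 or not parts[1].isdigit():
            continue  # skip 'Listing queues ...' and '...done.'
        name = parts[0].split("_fanout_")[0].split(".")[0]
        counts[name] += 1
    return counts

sample = """Listing queues ...
n-lbaasv2-plugin 0
n-lbaasv2-plugin.mercury-control-server-1.novalocal 0
n-lbaasv2-plugin_fanout_5cbb6dd4fafc4c4784add8a20e0a28a5 0
...done."""
print(summarize_queues(sample))  # Counter({'n-lbaasv2-plugin': 3})
```

Running this over the full listing makes it easy to confirm that the `n-lbaas_agent` and `n-lbaasv2-plugin` queue families exist on all three controllers.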
# rabbitmq server logs
=INFO REPORT==== 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-3': <3038.25481.1>
=INFO REPORT==== 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-2': <3037.25635.1>
=INFO REPORT==== 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Synchronising: all slaves already synced
=INFO REPORT==== 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Synchronising: all slaves already synced
...
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-3': <3038.25503.1>
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-2': <3037.25657.1>
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-3': <3038.25506.1>
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Synchronising: all slaves already synced
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Synchronising: all slaves already synced
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-2': <3037.25660.1>
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_a6c881c1ae9c4a259b6a3f8c2f42a0b4' in vhost '/': Synchronising: all slaves already synced
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-2': <3037.25660.1>
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Synchronising: all slaves already synced
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Synchronising: 0 messages to synchronise
=INFO REPORT==== 6-Jun-2016::19:01:24 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_3d7abfb48a7a4bc8bbfb406490e9e8b6' in vhost '/': Synchronising: all slaves already synced
<snip>
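The server log above is dominated by repeated mirror/sync entries for the `n-lbaasv2-plugin_fanout_*` queues. A minimal sketch for tallying those entries per queue (`mirror_events_by_queue` is a hypothetical helper; it only assumes the `Mirrored queue '<name>' in vhost` phrasing seen in the log):

```python
import re
from collections import Counter

def mirror_events_by_queue(log_text):
    """Count RabbitMQ mirrored-queue log entries per queue name.

    Matches lines like:
      Mirrored queue '<name>' in vhost '/': Adding mirror on node ...
      Mirrored queue '<name>' in vhost '/': Synchronising: ...
    """
    return Counter(re.findall(r"Mirrored queue '([^']+)' in vhost", log_text))

sample = (
    "=INFO REPORT==== 6-Jun-2016::19:01:24 ===\n"
    "Mirrored queue 'n-lbaasv2-plugin_fanout_abc' in vhost '/': "
    "Synchronising: 0 messages to synchronise\n"
    "=INFO REPORT==== 6-Jun-2016::19:01:24 ===\n"
    "Mirrored queue 'n-lbaasv2-plugin_fanout_abc' in vhost '/': "
    "Synchronising: all slaves already synced\n"
)
print(mirror_events_by_queue(sample))  # Counter({'n-lbaasv2-plugin_fanout_abc': 2})
```

A high event count concentrated on short-lived fanout queues is a quick way to confirm that queue churn, not message backlog, is what fills the log.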