natevw/debug.log (secret gist, created February 6, 2015 17:57)
Couchbase Server admin console not available and often taking most of the CPU, even after a stop/start cycle (`tail -n 5000` of both logs)
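For context, excerpts like the ones below were presumably collected with something along these lines (a sketch only: the directory is the Couchbase default, which also appears as log_path in the config dump below, and the two filenames are an assumption, since the description just says "both logs"):

    # Assumed paths; Couchbase writes several logs under this directory by default.
    tail -n 5000 /opt/couchbase/var/lib/couchbase/logs/debug.log
    tail -n 5000 /opt/couchbase/var/lib/couchbase/logs/info.log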
%% (excerpt opens mid vbucket_map: the 5000-line tail cut off the start of this log entry)
['ns_1@127.0.0.1'],
%% ... 336 more identical ['ns_1@127.0.0.1'] entries elided ...
['ns_1@127.0.0.1']],
[{replication_topology,star},{tags,undefined},{max_slaves,10}]}]},
{read_only_user_creds,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]}|
null]},
{rest_creds,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838608}}]}|
{"admin",{password,"*****"}}]},
{rest,[{port,8091}]},
{settings,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838575}}]},
{stats,[{send_stats,false}]}]},
{memory_quota,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838539}}]}|
300]},
{alert_limits,[{max_overhead_perc,50},{max_disk_used,90}]},
{auto_failover_cfg,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{enabled,false},
{timeout,120},
{max_nodes,1},
{count,0}]},
{autocompaction,
[{database_fragmentation_threshold,{30,undefined}},
{view_fragmentation_threshold,{30,undefined}}]},
{cert_and_pkey,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838435}}]}|
{<<"-----BEGIN CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIE6G8swsVztgwCwYJKoZIhvcNAQEFMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgZmJlYzEyZTIwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIGZiZWMxMmUy\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmUPts4fhuyVK7DvebO/r\n/7yXZKkKGPdiV72cFepOUvzC5Nod+Lwh1UH4lQ431OJ9irKo4nN7KTRvkewMhVvV\ng4U8EdMGa9jlOX0ccWAaX4vKWjgAIpwMQi5ddu3fttqW33xRY5FZ0D4qkQzbOtEZ\n1oFZYN8RubfkNEn3ZfdK3ZufYxL4EiMJ1fKU7a0i33qlfizwkNx7/di+wCe0Pm9b\nIoFLuDfsM3jCIRSG4Gfrf3KdgPgJM1W3s5AmdUUTl7XBNKD8Akh09WAD9lbVYzxA\nYkDRuG+30kKwNAuAtTCXaXToiaVJYJ0uSLGdNLXvUcyQlQ6yn9wg9DTcAF2ZHJI0\nVwIDAQABozgwNjAOBgNVHQ8BAf8EBAMCAKQwEwYDVR0lBAwwCgYIKwYBBQUHAwEw\nDwYDVR0TAQH/BAUwAwEB/zALBgkqhkiG9w0BAQUDggEBABmlUxTnMTVZR+OVxrkh\nAkdohLBuxJVFV1LvMqa5b0mvPixDcpsbFoqqMe9XPRkM9J3h3tcrQqtFjC3YYaf5\ncXugn8DVHrwDzD5Hf3ivT7HYOz2he+qgfHGM2XpiIqAqzjP0yf8cim64r6+0nFcM\nxjXFkNJQuK1fmPkXU/9yfbfDh+/dYACnz+dIlzG33uYqyFa+cHJq4N+l7DE8/fRp\nLat0tBylzGktFbz9cUjgQDEl2P4rFRPBmkOS9qhpQnSax9VUkZpZFwZO0TOIsS/F\n2Icb8iE0JA1YV562PqRfJa0NE9Bohnn2H8igJr+Fa7B1P6QYeRjcW3r7Vpc+Z1K4\nZA0=\n-----END CERTIFICATE-----\n">>,
<<"*****">>}]},
{cluster_compat_version,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]},
3,0]},
{drop_request_memory_threshold_mib,undefined},
{dynamic_config_version,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{4,63581838430}}]},
3,0]},
{email_alerts,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{recipients,["root@localhost"]},
{sender,"couchbase@localhost"},
{enabled,false},
{email_server,
[{user,[]},{pass,"*****"},{host,"localhost"},{port,25},{encrypt,false}]},
{alerts,
[auto_failover_node,auto_failover_maximum_reached,
auto_failover_other_nodes_down,auto_failover_cluster_too_small,ip,disk,
overhead,ep_oom_errors,ep_item_commit_failed]}]},
{fast_warmup,
[{fast_warmup_enabled,true},
{min_memory_threshold,10},
{min_items_threshold,10}]},
{index_aware_rebalance_disabled,false},
{max_bucket_count,10},
{nodes_wanted,['ns_1@127.0.0.1']},
{otp,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838429}}]},
{cookie,mpwstvnqzckasujp}]},
{remote_clusters,[]},
{replication,[{enabled,true}]},
{replication_topology,star},
{server_groups,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]},
[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]]},
{set_view_update_daemon,
[{update_interval,5000},
{update_min_changes,5000},
{replica_update_min_changes,5000}]},
{uuid,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838479}}]}|
<<"da76cb0bd4306a69006bcf16b33f8c58">>]},
{{couchdb,max_parallel_indexers},4},
{{couchdb,max_parallel_replica_indexers},2},
{{request_limit,capi},undefined},
{{request_limit,rest},undefined},
{{node,'ns_1@127.0.0.1',capi_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
8092]},
{{node,'ns_1@127.0.0.1',compaction_daemon},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{check_interval,30},
{min_file_size,131072}]},
{{node,'ns_1@127.0.0.1',config_version},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{8,63581838428}}]}|
{3,0}]},
{{node,'ns_1@127.0.0.1',is_enterprise},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
false]},
{{node,'ns_1@127.0.0.1',isasl},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]},
{path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]},
{{node,'ns_1@127.0.0.1',membership},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
active]},
{{node,'ns_1@127.0.0.1',memcached},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{5,63581838428}}]},
{ssl_port,undefined},
{verbosity,0},
{mccouch_port,11213},
{engines,
[{membase,
[{engine,"/opt/couchbase/lib/memcached/ep.so"},
{static_config_string,
"vb0=false;waitforwarmup=false;failpartialwarmup=false"}]},
{memcached,
[{engine,"/opt/couchbase/lib/memcached/default_engine.so"},
{static_config_string,"vb0=true"}]}]},
{log_path,"/opt/couchbase/var/lib/couchbase/logs"},
{log_prefix,"memcached.log"},
{log_generations,20},
{log_cyclesize,10485760},
{log_sleeptime,19},
{log_rotation_period,39003},
{dedicated_port,11209},
{bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"},
{port,11210},
{dedicated_port,11209},
{admin_user,"_admin"},
{admin_pass,"*****"}]},
{{node,'ns_1@127.0.0.1',memcached_config},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]}|
{[{interfaces,
{ns_ports_setup,omit_missing_mcd_ports,
[{[{host,<<"*">>},{port,port},{maxconn,30000}]},
{[{host,<<"*">>},{port,dedicated_port},{maxconn,5000}]},
{[{host,<<"*">>},
{port,ssl_port},
{maxconn,30000},
{ssl,
{[{key,
<<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>},
{cert,
<<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}},
{extensions,
[{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>},
{config,<<>>}]},
{[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>},
{config,
{"cyclesize=~B;sleeptime=~B;filename=~s/~s",
[log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]},
{engine,
{[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>},
{config,
{"admin=~s;default_bucket_name=default;auto_create=false",
[admin_user]}}]}},
{verbosity,verbosity}]}]},
{{node,'ns_1@127.0.0.1',moxi},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{port,11211},
{verbosity,[]}]},
{{node,'ns_1@127.0.0.1',ns_log},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]},
{filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]},
{{node,'ns_1@127.0.0.1',port_servers},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{5,63581838428}}]},
{moxi,"/opt/couchbase/bin/moxi",
["-Z",
{"port_listen=~B,default_bucket_name=default,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200",
[port]},
"-z",
{"url=http://127.0.0.1:~B/pools/default/saslBucketsStreaming",
[{misc,this_node_rest_port,[]}]},
"-p","0","-Y","y","-O","stderr",
{"~s",[verbosity]}],
[{env,
[{"EVENT_NOSELECT","1"},
{"MOXI_SASL_PLAIN_USR",{"~s",[{ns_moxi_sup,rest_user,[]}]}},
{"MOXI_SASL_PLAIN_PWD",{"~s",[{ns_moxi_sup,rest_pass,[]}]}}]},
use_stdio,exit_status,port_server_send_eol,stderr_to_stdout,stream]},
{memcached,"/opt/couchbase/bin/memcached",
["-C","/opt/couchbase/var/lib/couchbase/config/memcached.json"],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","5"},
{"ISASL_PWFILE",{"~s",[{isasl,path}]}}]},
use_stdio,stderr_to_stdout,exit_status,port_server_send_eol,stream]}]},
{{node,'ns_1@127.0.0.1',rest},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{port,8091},
{port_meta,global}]},
{{node,'ns_1@127.0.0.1',ssl_capi_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',ssl_proxy_downstream_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',ssl_rest_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',uuid},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
<<"d8bc0a12a1e863160cb3286c78f86696">>]}]]
[ns_server:info,2015-02-06T9:33:22.552,ns_1@127.0.0.1:ns_config<0.262.0>:ns_config:load_config:916]Here's full dynamic config we loaded + static & default config:
[{{node,'ns_1@127.0.0.1',uuid},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
<<"d8bc0a12a1e863160cb3286c78f86696">>]},
{{node,'ns_1@127.0.0.1',ssl_rest_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',ssl_proxy_downstream_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',ssl_capi_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]},
{{node,'ns_1@127.0.0.1',rest},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{port,8091},
{port_meta,global}]},
{{node,'ns_1@127.0.0.1',port_servers},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{5,63581838428}}]},
{moxi,"/opt/couchbase/bin/moxi",
["-Z",
{"port_listen=~B,default_bucket_name=default,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200",
[port]},
"-z",
{"url=http://127.0.0.1:~B/pools/default/saslBucketsStreaming",
[{misc,this_node_rest_port,[]}]},
"-p","0","-Y","y","-O","stderr",
{"~s",[verbosity]}],
[{env,
[{"EVENT_NOSELECT","1"},
{"MOXI_SASL_PLAIN_USR",{"~s",[{ns_moxi_sup,rest_user,[]}]}},
{"MOXI_SASL_PLAIN_PWD",{"~s",[{ns_moxi_sup,rest_pass,[]}]}}]},
use_stdio,exit_status,port_server_send_eol,stderr_to_stdout,stream]},
{memcached,"/opt/couchbase/bin/memcached",
["-C","/opt/couchbase/var/lib/couchbase/config/memcached.json"],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","5"},
{"ISASL_PWFILE",{"~s",[{isasl,path}]}}]},
use_stdio,stderr_to_stdout,exit_status,port_server_send_eol,stream]}]},
{{node,'ns_1@127.0.0.1',ns_log},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]},
{filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]},
{{node,'ns_1@127.0.0.1',moxi},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{port,11211},
{verbosity,[]}]},
{{node,'ns_1@127.0.0.1',memcached_config},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]}|
{[{interfaces,
{ns_ports_setup,omit_missing_mcd_ports,
[{[{host,<<"*">>},{port,port},{maxconn,30000}]},
{[{host,<<"*">>},{port,dedicated_port},{maxconn,5000}]},
{[{host,<<"*">>},
{port,ssl_port},
{maxconn,30000},
{ssl,
{[{key,
<<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>},
{cert,
<<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}},
{extensions,
[{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>},
{config,<<>>}]},
{[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>},
{config,
{"cyclesize=~B;sleeptime=~B;filename=~s/~s",
[log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]},
{engine,
{[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>},
{config,
{"admin=~s;default_bucket_name=default;auto_create=false",
[admin_user]}}]}},
{verbosity,verbosity}]}]},
{{node,'ns_1@127.0.0.1',memcached},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{5,63581838428}}]},
{ssl_port,undefined},
{verbosity,0},
{mccouch_port,11213},
{engines,
[{membase,
[{engine,"/opt/couchbase/lib/memcached/ep.so"},
{static_config_string,
"vb0=false;waitforwarmup=false;failpartialwarmup=false"}]},
{memcached,
[{engine,"/opt/couchbase/lib/memcached/default_engine.so"},
{static_config_string,"vb0=true"}]}]},
{log_path,"/opt/couchbase/var/lib/couchbase/logs"},
{log_prefix,"memcached.log"},
{log_generations,20},
{log_cyclesize,10485760},
{log_sleeptime,19},
{log_rotation_period,39003},
{dedicated_port,11209},
{bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"},
{port,11210},
{dedicated_port,11209},
{admin_user,"_admin"},
{admin_pass,"*****"}]},
{{node,'ns_1@127.0.0.1',membership},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
active]},
{{node,'ns_1@127.0.0.1',isasl},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]},
{path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]},
{{node,'ns_1@127.0.0.1',is_enterprise},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
false]},
{{node,'ns_1@127.0.0.1',config_version},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{8,63581838428}}]}|
{3,0}]},
{{node,'ns_1@127.0.0.1',compaction_daemon},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{check_interval,30},
{min_file_size,131072}]},
{{node,'ns_1@127.0.0.1',capi_port},
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
8092]},
{{request_limit,rest},undefined},
{{request_limit,capi},undefined},
{{couchdb,max_parallel_replica_indexers},2},
{{couchdb,max_parallel_indexers},4},
{uuid,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838479}}]}|
<<"da76cb0bd4306a69006bcf16b33f8c58">>]},
{set_view_update_daemon,
[{update_interval,5000},
{update_min_changes,5000},
{replica_update_min_changes,5000}]},
{server_groups,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]},
[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]]},
{replication_topology,star},
{replication,[{enabled,true}]},
{remote_clusters,[]},
{otp,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838429}}]},
{cookie,mpwstvnqzckasujp}]},
{nodes_wanted,['ns_1@127.0.0.1']},
{max_bucket_count,10},
{index_aware_rebalance_disabled,false},
{fast_warmup,
[{fast_warmup_enabled,true},
{min_memory_threshold,10},
{min_items_threshold,10}]},
{email_alerts,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{recipients,["root@localhost"]},
{sender,"couchbase@localhost"},
{enabled,false},
{email_server,
[{user,[]},{pass,"*****"},{host,"localhost"},{port,25},{encrypt,false}]},
{alerts,
[auto_failover_node,auto_failover_maximum_reached,
auto_failover_other_nodes_down,auto_failover_cluster_too_small,ip,disk,
overhead,ep_oom_errors,ep_item_commit_failed]}]},
{dynamic_config_version,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{4,63581838430}}]},
3,0]},
{drop_request_memory_threshold_mib,undefined},
{cluster_compat_version,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]},
3,0]},
{cert_and_pkey,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838435}}]}|
{<<"-----BEGIN CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIE6G8swsVztgwCwYJKoZIhvcNAQEFMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgZmJlYzEyZTIwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIGZiZWMxMmUy\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmUPts4fhuyVK7DvebO/r\n/7yXZKkKGPdiV72cFepOUvzC5Nod+Lwh1UH4lQ431OJ9irKo4nN7KTRvkewMhVvV\ng4U8EdMGa9jlOX0ccWAaX4vKWjgAIpwMQi5ddu3fttqW33xRY5FZ0D4qkQzbOtEZ\n1oFZYN8RubfkNEn3ZfdK3ZufYxL4EiMJ1fKU7a0i33qlfizwkNx7/di+wCe0Pm9b\nIoFLuDfsM3jCIRSG4Gfrf3KdgPgJM1W3s5AmdUUTl7XBNKD8Akh09WAD9lbVYzxA\nYkDRuG+30kKwNAuAtTCXaXToiaVJYJ0uSLGdNLXvUcyQlQ6yn9wg9DTcAF2ZHJI0\nVwIDAQABozgwNjAOBgNVHQ8BAf8EBAMCAKQwEwYDVR0lBAwwCgYIKwYBBQUHAwEw\nDwYDVR0TAQH/BAUwAwEB/zALBgkqhkiG9w0BAQUDggEBABmlUxTnMTVZR+OVxrkh\nAkdohLBuxJVFV1LvMqa5b0mvPixDcpsbFoqqMe9XPRkM9J3h3tcrQqtFjC3YYaf5\ncXugn8DVHrwDzD5Hf3ivT7HYOz2he+qgfHGM2XpiIqAqzjP0yf8cim64r6+0nFcM\nxjXFkNJQuK1fmPkXU/9yfbfDh+/dYACnz+dIlzG33uYqyFa+cHJq4N+l7DE8/fRp\nLat0tBylzGktFbz9cUjgQDEl2P4rFRPBmkOS9qhpQnSax9VUkZpZFwZO0TOIsS/F\n2Icb8iE0JA1YV562PqRfJa0NE9Bohnn2H8igJr+Fa7B1P6QYeRjcW3r7Vpc+Z1K4\nZA0=\n-----END CERTIFICATE-----\n">>,
<<"*****">>}]},
{autocompaction,
[{database_fragmentation_threshold,{30,undefined}},
{view_fragmentation_threshold,{30,undefined}}]},
{auto_failover_cfg,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{enabled,false},
{timeout,120},
{max_nodes,1},
{count,0}]},
{alert_limits,[{max_overhead_perc,50},{max_disk_used,90}]},
{memory_quota,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838539}}]}|
300]},
{settings,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838575}}]},
{stats,[{send_stats,false}]}]},
{rest,[{port,8091}]},
{rest_creds,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838608}}]}|
{"admin",{password,"*****"}}]},
{read_only_user_creds,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]}|
null]},
{vbucket_map_history,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838625}}]},
{[['ns_1@127.0.0.1'],
%% ... 1022 identical ['ns_1@127.0.0.1'] entries elided (1024-entry single-node vbucket map) ...
['ns_1@127.0.0.1']],
[{replication_topology,star},{tags,undefined},{max_slaves,10}]}]},
{buckets,
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{4,63581838625}}]},
{configs,
[{"default",
[{repl_type,dcp},
{uuid,<<"49f5fb12ea1cc535731a4ebd04b37dc8">>},
{sasl_password,"*****"},
{num_replicas,0},
{replica_index,false},
{ram_quota,104857600},
{auth_type,sasl},
{flush_enabled,false},
{num_threads,3},
{eviction_policy,full_eviction},
{type,membase},
{num_vbuckets,1024},
{servers,['ns_1@127.0.0.1']},
{map,
[['ns_1@127.0.0.1'],
%% ... 754 more identical ['ns_1@127.0.0.1'] entries; the excerpt is cut off mid-map here by the tail truncation (num_vbuckets is 1024) ...
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1'],
['ns_1@127.0.0.1']]},
{map_opts_hash,133465355}]}]}]}]
[error_logger:info,2015-02-06T9:33:22.566,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.262.0>},
{name,ns_config},
{mfargs,
{ns_config,start_link,
["/opt/couchbase/etc/couchbase/config",
ns_config_default]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
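
(annotation) The =========PROGRESS REPORT========= blocks throughout this log are standard OTP supervisor reports: each records the child specification of a process the named supervisor just started. Restated as a plain mapping, with the values taken from the report directly above and Python used purely as notation:

    # Fields of the OTP progress report above, restated as a plain mapping.
    child_spec = {
        "pid": "<0.262.0>",          # Erlang process id assigned at start
        "name": "ns_config",         # child id within supervisor ns_config_sup
        "mfargs": ("ns_config", "start_link",
                   ["/opt/couchbase/etc/couchbase/config", "ns_config_default"]),
        "restart_type": "permanent", # the supervisor always restarts this child
        "shutdown": 1000,            # milliseconds allowed for a graceful stop
        "child_type": "worker",      # a worker process, not a nested supervisor
    }
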
[error_logger:info,2015-02-06T9:33:22.572,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.264.0>},
{name,ns_config_remote},
{mfargs,
{ns_config_replica,start_link,
[{local,ns_config_remote}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.577,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.265.0>},
{name,ns_config_log},
{mfargs,{ns_config_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.584,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.267.0>},
{name,cb_config_couch_sync},
{mfargs,{cb_config_couch_sync,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.585,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.259.0>},
{name,ns_config_sup},
{mfargs,{ns_config_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:22.587,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.269.0>},
{name,vbucket_filter_changes_registry},
{mfargs,
{ns_process_registry,start_link,
[vbucket_filter_changes_registry,
[{terminate_command,shutdown}]]}},
{restart_type,permanent},
{shutdown,100},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.606,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.271.0>},
{name,ns_disksup},
{mfa,{ns_disksup,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.609,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.272.0>},
{name,diag_handler_worker},
{mfa,{work_queue,start_link,[diag_handler_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:info,2015-02-06T9:33:22.619,ns_1@127.0.0.1:ns_server_sup<0.270.0>:dir_size:start_link:49]Starting quick version of dir_size with program name: i386-linux-godu
[error_logger:info,2015-02-06T9:33:22.624,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.273.0>},
{name,dir_size},
{mfa,{dir_size,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.629,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.274.0>},
{name,request_throttler},
{mfa,{request_throttler,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.642,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.276.0>},
{name,timer2_server},
{mfargs,{timer2,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.659,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.275.0>},
{name,ns_log},
{mfa,{ns_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.659,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.277.0>},
{name,ns_crash_log_consumer},
{mfa,{ns_log,start_link_crash_consumer,[]}},
{restart_type,{permanent,4}},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:33:22.676,ns_1@127.0.0.1:ns_config_isasl_sync<0.278.0>:ns_config_isasl_sync:init:63]isasl_sync init: ["/opt/couchbase/var/lib/couchbase/isasl.pw","_admin",
"c49e5d66261936e7c76736e477881a25"]
[ns_server:debug,2015-02-06T9:33:22.676,ns_1@127.0.0.1:ns_config_isasl_sync<0.278.0>:ns_config_isasl_sync:init:71]isasl_sync init buckets: ["default"]
[ns_server:debug,2015-02-06T9:33:22.684,ns_1@127.0.0.1:ns_config_isasl_sync<0.278.0>:ns_config_isasl_sync:writeSASLConf:143]Writing isasl passwd file: "/opt/couchbase/var/lib/couchbase/isasl.pw"
[user:info,2015-02-06T9:33:22.712,ns_1@127.0.0.1:<0.277.0>:ns_log:crash_consumption_loop:70]Port server ns_server on node 'babysitter_of_ns_1@127.0.0.1' exited with status 137. Restarting. Messages: Apache CouchDB 2.1.1r-432-gc2af28d (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
working as port
"4507": Booted. Waiting for shutdown request
[ns_server:error,2015-02-06T9:33:22.739,ns_1@127.0.0.1:ns_log<0.275.0>:ns_log:handle_cast:210]unable to notify listeners because of badarg
[ns_server:warn,2015-02-06T9:35:22.732,ns_1@127.0.0.1:ns_config_isasl_sync<0.278.0>:ns_memcached:connect:1260]Unable to connect: {error,{badmatch,{error,timeout}}}, retrying.
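(annotation) Two details above are the most diagnostic in this capture. First, an exit status of 137 is, by the usual shell convention, 128 + 9: the process was killed with SIGKILL, which on Linux commonly means the out-of-memory killer rather than a clean shutdown. Second, the timestamps jump from 9:33:22 to 9:35:22 while ns_config_isasl_sync waits out a memcached connect timeout and retries. A minimal sketch of the exit-status convention (the interpretation, not anything from ns_server):

    # Minimal sketch: decode a shell-style exit status such as the 137 above.
    import signal

    def decode_exit_status(status):
        """Statuses above 128 conventionally mean 'killed by signal status-128'."""
        if status > 128:
            return signal.Signals(status - 128).name
        return None

    print(decode_exit_status(137))  # SIGKILL -- on Linux, often the OOM killer
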
[error_logger:info,2015-02-06T9:35:23.733,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.278.0>},
{name,ns_config_isasl_sync},
{mfa,{ns_config_isasl_sync,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.733,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.283.0>},
{name,ns_log_events},
{mfa,{gen_event,start_link,[{local,ns_log_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.756,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.285.0>},
{name,ns_node_disco_events},
{mfargs,
{gen_event,start_link,
[{local,ns_node_disco_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:23.756,ns_1@127.0.0.1:ns_node_disco<0.286.0>:ns_node_disco:init:115]Initting ns_node_disco with []
[ns_server:debug,2015-02-06T9:35:23.756,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync
[user:info,2015-02-06T9:35:23.756,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_sync:130]Node 'ns_1@127.0.0.1' synchronized otp cookie mpwstvnqzckasujp from cluster
[ns_server:debug,2015-02-06T9:35:23.757,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server"
[ns_server:debug,2015-02-06T9:35:23.766,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok
[ns_server:debug,2015-02-06T9:35:23.766,ns_1@127.0.0.1:<0.287.0>:ns_node_disco:do_nodes_wanted_updated_fun:201]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: mpwstvnqzckasujp
[ns_server:debug,2015-02-06T9:35:23.790,ns_1@127.0.0.1:<0.287.0>:ns_node_disco:do_nodes_wanted_updated_fun:207]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: mpwstvnqzckasujp
[error_logger:info,2015-02-06T9:35:23.790,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.286.0>},
{name,ns_node_disco},
{mfargs,{ns_node_disco,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.815,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.289.0>},
{name,ns_node_disco_log},
{mfargs,{ns_node_disco_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.822,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.290.0>},
{name,ns_node_disco_conf_events},
{mfargs,{ns_node_disco_conf_events,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.825,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.291.0>},
{name,ns_config_rep_merger},
{mfargs,{ns_config_rep,start_link_merger,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:23.825,ns_1@127.0.0.1:ns_config_rep<0.292.0>:ns_config_rep:init:66]init pulling
[ns_server:debug,2015-02-06T9:35:23.825,ns_1@127.0.0.1:ns_config_rep<0.292.0>:ns_config_rep:init:68]init pushing
[ns_server:debug,2015-02-06T9:35:23.827,ns_1@127.0.0.1:ns_config_rep<0.292.0>:ns_config_rep:init:72]init reannouncing
[ns_server:debug,2015-02-06T9:35:23.828,ns_1@127.0.0.1:ns_config_events<0.260.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted
[ns_server:debug,2015-02-06T9:35:23.828,ns_1@127.0.0.1:ns_config_events<0.260.0>:ns_node_disco_conf_events:handle_event:50]ns_node_disco_conf_events config on otp
[ns_server:debug,2015-02-06T9:35:23.828,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync
[ns_server:debug,2015-02-06T9:35:23.829,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
alert_limits ->
[{max_overhead_perc,50},{max_disk_used,90}]
[ns_server:debug,2015-02-06T9:35:23.829,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
auto_failover_cfg ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{enabled,false},
{timeout,120},
{max_nodes,1},
{count,0}]
[ns_server:debug,2015-02-06T9:35:23.829,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server"
[ns_server:debug,2015-02-06T9:35:23.830,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
autocompaction ->
[{database_fragmentation_threshold,{30,undefined}},
{view_fragmentation_threshold,{30,undefined}}]
[ns_server:debug,2015-02-06T9:35:23.833,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
buckets ->
[[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{4,63581838625}}],
{configs,[[{map,[{0,[],['ns_1@127.0.0.1']},
{1,[],['ns_1@127.0.0.1']},
{2,[],['ns_1@127.0.0.1']},
{3,[],['ns_1@127.0.0.1']},
{4,[],['ns_1@127.0.0.1']},
{5,[],['ns_1@127.0.0.1']},
{6,[],['ns_1@127.0.0.1']},
{7,[],['ns_1@127.0.0.1']},
{8,[],['ns_1@127.0.0.1']},
{9,[],['ns_1@127.0.0.1']},
{10,[],['ns_1@127.0.0.1']},
{11,[],['ns_1@127.0.0.1']},
{12,[],['ns_1@127.0.0.1']},
{13,[],['ns_1@127.0.0.1']},
{14,[],['ns_1@127.0.0.1']},
{15,[],['ns_1@127.0.0.1']},
{16,[],['ns_1@127.0.0.1']},
{17,[],['ns_1@127.0.0.1']},
{18,[],['ns_1@127.0.0.1']},
{19,[],['ns_1@127.0.0.1']},
{20,[],['ns_1@127.0.0.1']},
{21,[],['ns_1@127.0.0.1']},
{22,[],['ns_1@127.0.0.1']},
{23,[],['ns_1@127.0.0.1']},
{24,[],['ns_1@127.0.0.1']},
{25,[],['ns_1@127.0.0.1']},
{26,[],['ns_1@127.0.0.1']},
{27,[],['ns_1@127.0.0.1']},
{28,[],['ns_1@127.0.0.1']},
{29,[],['ns_1@127.0.0.1']},
{30,[],['ns_1@127.0.0.1']},
{31,[],['ns_1@127.0.0.1']},
{32,[],['ns_1@127.0.0.1']},
{33,[],['ns_1@127.0.0.1']},
{34,[],['ns_1@127.0.0.1']},
{35,[],['ns_1@127.0.0.1']},
{36,[],['ns_1@127.0.0.1']},
{37,[],['ns_1@127.0.0.1']},
{38,[],['ns_1@127.0.0.1']},
{39,[],['ns_1@127.0.0.1']},
{40,[],['ns_1@127.0.0.1']},
{41,[],['ns_1@127.0.0.1']},
{42,[],['ns_1@127.0.0.1']},
{43,[],['ns_1@127.0.0.1']},
{44,[],['ns_1@127.0.0.1']},
{45,[],['ns_1@127.0.0.1']},
{46,[],['ns_1@127.0.0.1']},
{47,[],['ns_1@127.0.0.1']},
{48,[],['ns_1@127.0.0.1']},
{49,[],['ns_1@127.0.0.1']},
{50,[],['ns_1@127.0.0.1']},
{51,[],['ns_1@127.0.0.1']},
{52,[],['ns_1@127.0.0.1']},
{53,[],['ns_1@127.0.0.1']},
{54,[],['ns_1@127.0.0.1']},
{55,[],['ns_1@127.0.0.1']},
{56,[],['ns_1@127.0.0.1']},
{57,[],['ns_1@127.0.0.1']},
{58,[],['ns_1@127.0.0.1']},
{59,[],['ns_1@127.0.0.1']},
{60,[],['ns_1@127.0.0.1']},
{61,[],['ns_1@127.0.0.1']},
{62,[],['ns_1@127.0.0.1']},
{63,[],['ns_1@127.0.0.1']},
{64,[],['ns_1@127.0.0.1']},
{65,[],['ns_1@127.0.0.1']},
{66,[],['ns_1@127.0.0.1']},
{67,[],['ns_1@127.0.0.1']},
{68,[],['ns_1@127.0.0.1']},
{69,[],['ns_1@127.0.0.1']},
{70,[],['ns_1@127.0.0.1']},
{71,[],['ns_1@127.0.0.1']},
{72,[],['ns_1@127.0.0.1']},
{73,[],['ns_1@127.0.0.1']},
{74,[],['ns_1@127.0.0.1']},
{75,[],['ns_1@127.0.0.1']},
{76,[],['ns_1@127.0.0.1']},
{77,[],['ns_1@127.0.0.1']},
{78,[],['ns_1@127.0.0.1']},
{79,[],['ns_1@127.0.0.1']},
{80,[],['ns_1@127.0.0.1']},
{81,[],['ns_1@127.0.0.1']},
{82,[],['ns_1@127.0.0.1']},
{83,[],['ns_1@127.0.0.1']},
{84,[],['ns_1@127.0.0.1']},
{85,[],['ns_1@127.0.0.1']},
{86,[],['ns_1@127.0.0.1']},
{87,[],[...]},
{88,[],...},
{89,...},
{...}|...]},
{fastForwardMap,[]},
{repl_type,dcp},
{uuid,<<"49f5fb12ea1cc535731a4ebd04b37dc8">>},
{sasl_password,"*****"},
{num_replicas,0},
{replica_index,false},
{ram_quota,104857600},
{auth_type,sasl},
{flush_enabled,false},
{num_threads,3},
{eviction_policy,full_eviction},
{type,membase},
{num_vbuckets,1024},
{servers,['ns_1@127.0.0.1']},
{map_opts_hash,133465355}]]}]
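(annotation) The map in this buckets entry pairs each vbucket id with its server chain. Given the settings that follow it ({num_vbuckets,1024}, {num_replicas,0}, {servers,['ns_1@127.0.0.1']}), every chain degenerates to the single node, which is why the same line repeats throughout this log. A sketch of that degenerate shape, using an illustrative helper rather than Couchbase code:

    # Illustrative only: the shape a zero-replica, single-node vbucket map takes.
    def single_node_vbucket_map(node, num_vbuckets=1024):
        # chain = [active | replicas]; with num_replicas = 0 the chain is [active]
        return [[node] for _ in range(num_vbuckets)]

    vb_map = single_node_vbucket_map("ns_1@127.0.0.1")
    assert len(vb_map) == 1024
    assert all(chain == ["ns_1@127.0.0.1"] for chain in vb_map)
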
[ns_server:debug,2015-02-06T9:35:23.834,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
cert_and_pkey ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838435}}]}|
{<<"-----BEGIN CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIE6G8swsVztgwCwYJKoZIhvcNAQEFMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgZmJlYzEyZTIwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIGZiZWMxMmUy\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmUPts4fhuyVK7DvebO/r\n/7yXZKkKGPdiV72cFepOUvzC5Nod+Lwh1UH4lQ431OJ9irKo4nN7KTRvkewMhVvV\ng4U8EdMGa9jlOX0ccWAaX4vKWjgAIpw"...>>,
<<"*****">>}]
[ns_server:debug,2015-02-06T9:35:23.834,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
cluster_compat_version ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]},3,0]
[ns_server:debug,2015-02-06T9:35:23.834,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
drop_request_memory_threshold_mib ->
undefined
[ns_server:debug,2015-02-06T9:35:23.834,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
dynamic_config_version ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{4,63581838430}}]},3,0]
[ns_server:debug,2015-02-06T9:35:23.834,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
email_alerts ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{recipients,["root@localhost"]},
{sender,"couchbase@localhost"},
{enabled,false},
{email_server,[{user,[]},
{pass,"*****"},
{host,"localhost"},
{port,25},
{encrypt,false}]},
{alerts,[auto_failover_node,auto_failover_maximum_reached,
auto_failover_other_nodes_down,auto_failover_cluster_too_small,ip,
disk,overhead,ep_oom_errors,ep_item_commit_failed]}]
[ns_server:debug,2015-02-06T9:35:23.834,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
fast_warmup ->
[{fast_warmup_enabled,true},
{min_memory_threshold,10},
{min_items_threshold,10}]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
index_aware_rebalance_disabled ->
false
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
max_bucket_count ->
10
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
memory_quota ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838539}}]}|300]
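(annotation) Entries like the one above are printed as Erlang improper lists, [Metadata | Value]: the {'_vclock', ...} version-vector metadata is consed onto the stored value, so [{'_vclock', ...}|300] means the stored memory_quota value is simply 300 (megabytes). A cons-cell sketch of that pairing, with strip_vclock as an illustrative name rather than an ns_config function:

    # Sketch of Erlang's [Meta | Value] pairing, modeled as a cons cell.
    def cons(head, tail):
        return (head, tail)

    def strip_vclock(entry):
        head, tail = entry
        return tail if isinstance(head, dict) and "_vclock" in head else entry

    memory_quota = cons({"_vclock": "..."}, 300)
    print(strip_vclock(memory_quota))  # 300
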
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
nodes_wanted ->
['ns_1@127.0.0.1']
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
otp ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838429}}]},
{cookie,mpwstvnqzckasujp}]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
read_only_user_creds ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]}|null]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
remote_clusters ->
[]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
replication ->
[{enabled,true}]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
replication_topology ->
star
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
rest ->
[{port,8091}]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
rest_creds ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838608}}]}|
{"admin",{password,"*****"}}]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
server_groups ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838430}}]},
[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]]
[ns_server:debug,2015-02-06T9:35:23.835,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
set_view_update_daemon ->
[{update_interval,5000},
{update_min_changes,5000},
{replica_update_min_changes,5000}]
[ns_server:debug,2015-02-06T9:35:23.836,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
settings ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838575}}]},
{stats,[{send_stats,false}]}]
[ns_server:debug,2015-02-06T9:35:23.836,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
uuid ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838479}}]}|
<<"da76cb0bd4306a69006bcf16b33f8c58">>]
[ns_server:debug,2015-02-06T9:35:23.836,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok
[ns_server:debug,2015-02-06T9:35:23.836,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync
[ns_server:debug,2015-02-06T9:35:23.837,ns_1@127.0.0.1:<0.295.0>:ns_node_disco:do_nodes_wanted_updated_fun:201]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: mpwstvnqzckasujp
[ns_server:debug,2015-02-06T9:35:23.837,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server"
[ns_server:debug,2015-02-06T9:35:23.837,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
vbucket_map_history ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838625}}]},
{[['ns_1@127.0.0.1'],
(... ['ns_1@127.0.0.1'], repeated: 94 identical lines elided; the log itself truncates the list on the next line ...)
[...]|...],
[{replication_topology,star},{tags,undefined},{max_slaves,10}]}]
[ns_server:debug,2015-02-06T9:35:23.837,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{couchdb,max_parallel_indexers} ->
4
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:<0.295.0>:ns_node_disco:do_nodes_wanted_updated_fun:207]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: mpwstvnqzckasujp
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{couchdb,max_parallel_replica_indexers} ->
2
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{request_limit,capi} ->
undefined
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{request_limit,rest} ->
undefined
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',capi_port} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|8092]
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',compaction_daemon} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{check_interval,30},
{min_file_size,131072}]
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',config_version} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{8,63581838428}}]}|{3,0}]
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',is_enterprise} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|false]
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',isasl} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]},
{path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]
[ns_server:debug,2015-02-06T9:35:23.838,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',membership} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
active]
[ns_server:debug,2015-02-06T9:35:23.839,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',memcached} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{5,63581838428}}]},
{ssl_port,undefined},
{verbosity,0},
{mccouch_port,11213},
{engines,
[{membase,
[{engine,"/opt/couchbase/lib/memcached/ep.so"},
{static_config_string,
"vb0=false;waitforwarmup=false;failpartialwarmup=false"}]},
{memcached,
[{engine,"/opt/couchbase/lib/memcached/default_engine.so"},
{static_config_string,"vb0=true"}]}]},
{log_path,"/opt/couchbase/var/lib/couchbase/logs"},
{log_prefix,"memcached.log"},
{log_generations,20},
{log_cyclesize,10485760},
{log_sleeptime,19},
{log_rotation_period,39003},
{dedicated_port,11209},
{bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"},
{port,11210},
{dedicated_port,11209},
{admin_user,"_admin"},
{admin_pass,"*****"}]
[ns_server:debug,2015-02-06T9:35:23.839,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',memcached_config} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]}|
{[{interfaces,
{ns_ports_setup,omit_missing_mcd_ports,
[{[{host,<<"*">>},{port,port},{maxconn,30000}]},
{[{host,<<"*">>},{port,dedicated_port},{maxconn,5000}]},
{[{host,<<"*">>},
{port,ssl_port},
{maxconn,30000},
{ssl,
{[{key,
<<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>},
{cert,
<<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}},
{extensions,
[{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>},
{config,<<>>}]},
{[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>},
{config,
{"cyclesize=~B;sleeptime=~B;filename=~s/~s",
[log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]},
{engine,
{[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>},
{config,
{"admin=~s;default_bucket_name=default;auto_create=false",
[admin_user]}}]}},
{verbosity,verbosity}]}]
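(annotation) The memcached_config value above reads as a template: bare atoms such as port, dedicated_port, ssl_port and verbosity sit where concrete values belong, and the matching concrete values live in the {node,...,memcached} entry a few lines earlier (port 11210, dedicated_port 11209, verbosity 0). The memcached port server below is launched with -C .../memcached.json, so presumably the template is rendered with those values into that JSON file. A toy substitution to illustrate the idea, not ns_server's actual rendering code:

    # Toy illustration: fill template placeholders from the per-node settings.
    settings = {"port": 11210, "dedicated_port": 11209, "verbosity": 0}

    def render(value):
        if isinstance(value, str) and value in settings:   # placeholder atom
            return settings[value]
        if isinstance(value, dict):
            return {k: render(v) for k, v in value.items()}
        if isinstance(value, list):
            return [render(v) for v in value]
        return value

    template = {"interfaces": [{"host": "*", "port": "port", "maxconn": 30000},
                               {"host": "*", "port": "dedicated_port", "maxconn": 5000}],
                "verbosity": "verbosity"}
    print(render(template))  # concrete config, as written to memcached.json
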
[ns_server:debug,2015-02-06T9:35:23.839,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',moxi} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{port,11211},
{verbosity,[]}]
[ns_server:debug,2015-02-06T9:35:23.839,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',ns_log} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{2,63581838428}}]},
{filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',port_servers} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{5,63581838428}}]},
{moxi,"/opt/couchbase/bin/moxi",
["-Z",
{"port_listen=~B,default_bucket_name=default,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200",
[port]},
"-z",
{"url=http://127.0.0.1:~B/pools/default/saslBucketsStreaming",
[{misc,this_node_rest_port,[]}]},
"-p","0","-Y","y","-O","stderr",
{"~s",[verbosity]}],
[{env,[{"EVENT_NOSELECT","1"},
{"MOXI_SASL_PLAIN_USR",{"~s",[{ns_moxi_sup,rest_user,[]}]}},
{"MOXI_SASL_PLAIN_PWD",{"~s",[{ns_moxi_sup,rest_pass,[]}]}}]},
use_stdio,exit_status,port_server_send_eol,stderr_to_stdout,stream]},
{memcached,"/opt/couchbase/bin/memcached",
["-C","/opt/couchbase/var/lib/couchbase/config/memcached.json"],
[{env,[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","5"},
{"ISASL_PWFILE",{"~s",[{isasl,path}]}}]},
use_stdio,stderr_to_stdout,exit_status,port_server_send_eol,
stream]}]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',rest} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]},
{port,8091},
{port_meta,global}]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',ssl_capi_port} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',ssl_proxy_downstream_port} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',ssl_rest_port} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
undefined]
[ns_server:debug,2015-02-06T9:35:23.840,ns_1@127.0.0.1:ns_config_log<0.265.0>:ns_config_log:log_common:134]config change:
{node,'ns_1@127.0.0.1',uuid} ->
[{'_vclock',[{<<"d8bc0a12a1e863160cb3286c78f86696">>,{1,63581838428}}]}|
<<"d8bc0a12a1e863160cb3286c78f86696">>]
[ns_server:debug,2015-02-06T9:35:23.846,ns_1@127.0.0.1:ns_cookie_manager<0.257.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok
[ns_server:debug,2015-02-06T9:35:23.846,ns_1@127.0.0.1:<0.296.0>:ns_node_disco:do_nodes_wanted_updated_fun:201]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: mpwstvnqzckasujp
[ns_server:debug,2015-02-06T9:35:23.847,ns_1@127.0.0.1:<0.296.0>:ns_node_disco:do_nodes_wanted_updated_fun:207]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: mpwstvnqzckasujp
[error_logger:info,2015-02-06T9:35:23.850,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.292.0>},
{name,ns_config_rep},
{mfargs,{ns_config_rep,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:23.850,ns_1@127.0.0.1:ns_config_rep<0.292.0>:ns_config_rep:do_push_keys:319]Replicating some config keys ([alert_limits,auto_failover_cfg,autocompaction,
buckets,cert_and_pkey,cluster_compat_version,
drop_request_memory_threshold_mib,
dynamic_config_version,email_alerts,
fast_warmup,index_aware_rebalance_disabled,
max_bucket_count,memory_quota,nodes_wanted,otp,
read_only_user_creds,remote_clusters,
replication,replication_topology,rest,
rest_creds,server_groups,
set_view_update_daemon,settings,uuid,
vbucket_map_history,
{couchdb,max_parallel_indexers},
{couchdb,max_parallel_replica_indexers},
{request_limit,capi},
{request_limit,rest},
{node,'ns_1@127.0.0.1',capi_port},
{node,'ns_1@127.0.0.1',compaction_daemon},
{node,'ns_1@127.0.0.1',config_version},
{node,'ns_1@127.0.0.1',is_enterprise},
{node,'ns_1@127.0.0.1',isasl},
{node,'ns_1@127.0.0.1',membership},
{node,'ns_1@127.0.0.1',memcached},
{node,'ns_1@127.0.0.1',memcached_config},
{node,'ns_1@127.0.0.1',moxi},
{node,'ns_1@127.0.0.1',ns_log},
{node,'ns_1@127.0.0.1',port_servers},
{node,'ns_1@127.0.0.1',rest},
{node,'ns_1@127.0.0.1',ssl_capi_port},
{node,'ns_1@127.0.0.1',
ssl_proxy_downstream_port},
{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port},
{node,'ns_1@127.0.0.1',ssl_rest_port},
{node,'ns_1@127.0.0.1',uuid}]..)
[error_logger:info,2015-02-06T9:35:23.852,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.284.0>},
{name,ns_node_disco_sup},
{mfa,{ns_node_disco_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:23.879,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.298.0>},
{name,vbucket_map_mirror},
{mfa,{vbucket_map_mirror,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.887,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.300.0>},
{name,bucket_info_cache},
{mfa,{bucket_info_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.887,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.303.0>},
{name,ns_tick_event},
{mfa,{gen_event,start_link,[{local,ns_tick_event}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.888,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.304.0>},
{name,buckets_events},
{mfa,{gen_event,start_link,[{local,buckets_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:23.922,ns_1@127.0.0.1:ns_log_events<0.283.0>:ns_mail_log:init:44]ns_mail_log started up
[error_logger:info,2015-02-06T9:35:23.922,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_mail_sup}
started: [{pid,<0.306.0>},
{name,ns_mail_log},
{mfargs,{ns_mail_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.922,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.305.0>},
{name,ns_mail_sup},
{mfa,{ns_mail_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:23.922,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.307.0>},
{name,ns_stats_event},
{mfa,{gen_event,start_link,[{local,ns_stats_event}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.926,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.308.0>},
{name,samples_loader_tasks},
{mfa,{samples_loader_tasks,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.932,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_heart_sup}
started: [{pid,<0.310.0>},
{name,ns_heart},
{mfargs,{ns_heart,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.932,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_heart_sup}
started: [{pid,<0.313.0>},
{name,ns_heart_slow_updater},
{mfargs,{ns_heart,start_link_slow_updater,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.933,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.309.0>},
{name,ns_heart_sup},
{mfa,{ns_heart_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[ns_server:debug,2015-02-06T9:35:23.954,ns_1@127.0.0.1:ns_heart<0.310.0>:ns_heart:current_status_slow_inner:259]Ignoring failure to grab system stats:
{'EXIT',{noproc,{gen_server,call,
[{'stats_reader-@system','ns_1@127.0.0.1'},
{latest,"minute"}]}}}
[error_logger:info,2015-02-06T9:35:23.959,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.314.0>},
{name,ns_doctor},
{mfa,{ns_doctor,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:23.965,ns_1@127.0.0.1:ns_heart<0.310.0>:ns_heart:current_status_slow_inner:272]Ignoring failure to get stats for bucket: "default":
{'EXIT',{noproc,{gen_server,call,
[{'stats_reader-default','ns_1@127.0.0.1'},
{latest,minute}]}}}
[ns_server:info,2015-02-06T9:35:23.990,ns_1@127.0.0.1:remote_clusters_info<0.317.0>:remote_clusters_info:read_or_create_table:552]Reading remote_clusters_info content from /opt/couchbase/var/lib/couchbase/remote_clusters_cache_v3
[ns_server:debug,2015-02-06T9:35:23.998,ns_1@127.0.0.1:ns_heart<0.310.0>:ns_heart:grab_local_xdcr_replications:430]Ignoring exception getting xdcr replication infos
{exit,{noproc,{gen_server,call,[xdc_replication_sup,which_children,infinity]}},
[{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]},
{xdc_replication_sup,all_local_replication_infos,0,
[{file,"src/xdc_replication_sup.erl"},{line,56}]},
{ns_heart,grab_local_xdcr_replications,0,
[{file,"src/ns_heart.erl"},{line,409}]},
{ns_heart,current_status_slow_inner,0,
[{file,"src/ns_heart.erl"},{line,294}]},
{ns_heart,current_status_slow,1,[{file,"src/ns_heart.erl"},{line,249}]},
{ns_heart,update_current_status,1,
[{file,"src/ns_heart.erl"},{line,186}]},
{ns_heart,handle_info,2,[{file,"src/ns_heart.erl"},{line,120}]},
{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,604}]}]}
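(annotation) The {noproc, ...} exceptions here and in the next several entries follow one pattern: during startup, ns_heart (and later janitor_agent) gen_server-calls processes such as 'stats_reader-@system' and xdc_replication_sup that have not been registered yet, and each caller explicitly logs and ignores the failure. That reads as a benign boot-ordering race rather than a fault; the pattern, sketched with illustrative names:

    # Sketch of the "call a maybe-not-yet-started service, ignore noproc" pattern.
    def call_registered(registry, name, request):
        proc = registry.get(name)
        if proc is None:
            raise LookupError("noproc")      # analogue of Erlang's {noproc, ...}
        return proc(request)

    def current_status(registry):
        try:
            return call_registered(registry, "stats_reader-@system",
                                   ("latest", "minute"))
        except LookupError:
            return None   # "Ignoring failure to grab system stats"

    print(current_status({}))  # None while the stats reader is still booting
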
[ns_server:debug,2015-02-06T9:35:24.026,ns_1@127.0.0.1:ns_heart<0.310.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg
[error_logger:info,2015-02-06T9:35:24.034,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.319.0>},
{name,disk_log_sup},
{mfargs,{disk_log_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.035,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.320.0>},
{name,disk_log_server},
{mfargs,{disk_log_server,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.060,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.317.0>},
{name,remote_clusters_info},
{mfa,{remote_clusters_info,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.060,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.323.0>},
{name,master_activity_events},
{mfa,
{gen_event,start_link,
[{local,master_activity_events}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.064,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.313.0>:ns_heart:current_status_slow_inner:259]Ignoring failure to grab system stats:
{'EXIT',{noproc,{gen_server,call,
[{'stats_reader-@system','ns_1@127.0.0.1'},
{latest,"minute"}]}}}
[ns_server:debug,2015-02-06T9:35:24.065,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.313.0>:ns_heart:current_status_slow_inner:272]Ignoring failure to get stats for bucket: "default":
{'EXIT',{noproc,{gen_server,call,
[{'stats_reader-default','ns_1@127.0.0.1'},
{latest,minute}]}}}
[ns_server:debug,2015-02-06T9:35:24.067,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.313.0>:ns_heart:grab_local_xdcr_replications:430]Ignoring exception getting xdcr replication infos
{exit,{noproc,{gen_server,call,[xdc_replication_sup,which_children,infinity]}},
[{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]},
{xdc_replication_sup,all_local_replication_infos,0,
[{file,"src/xdc_replication_sup.erl"},{line,56}]},
{ns_heart,grab_local_xdcr_replications,0,
[{file,"src/ns_heart.erl"},{line,409}]},
{ns_heart,current_status_slow_inner,0,
[{file,"src/ns_heart.erl"},{line,294}]},
{ns_heart,current_status_slow,1,[{file,"src/ns_heart.erl"},{line,249}]},
{ns_heart,slow_updater_loop,0,[{file,"src/ns_heart.erl"},{line,243}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
[ns_server:debug,2015-02-06T9:35:24.067,ns_1@127.0.0.1:ns_server_sup<0.270.0>:mb_master:check_master_takeover_needed:141]Sending master node question to the following nodes: []
[ns_server:debug,2015-02-06T9:35:24.068,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.313.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg
[ns_server:debug,2015-02-06T9:35:24.068,ns_1@127.0.0.1:ns_server_sup<0.270.0>:mb_master:check_master_takeover_needed:143]Got replies: []
[ns_server:debug,2015-02-06T9:35:24.068,ns_1@127.0.0.1:ns_server_sup<0.270.0>:mb_master:check_master_takeover_needed:149]Was unable to discover master, not going to force mastership takeover
[user:info,2015-02-06T9:35:24.093,ns_1@127.0.0.1:mb_master<0.329.0>:mb_master:init:86]I'm the only node, so I'm the master.
[ns_server:debug,2015-02-06T9:35:24.107,ns_1@127.0.0.1:mb_master_sup<0.331.0>:misc:start_singleton:954]start_singleton(gen_fsm, ns_orchestrator, [], []): started as <0.332.0> on 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:35:24.107,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.332.0>},
{name,ns_orchestrator},
{mfargs,{ns_orchestrator,start_link,[]}},
{restart_type,permanent},
{shutdown,20},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.117,ns_1@127.0.0.1:mb_master_sup<0.331.0>:misc:start_singleton:954]start_singleton(gen_server, ns_tick, [], []): started as <0.334.0> on 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:35:24.117,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.334.0>},
{name,ns_tick},
{mfargs,{ns_tick,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.126,ns_1@127.0.0.1:<0.335.0>:auto_failover:init:142]init auto_failover.
[ns_server:debug,2015-02-06T9:35:24.126,ns_1@127.0.0.1:mb_master_sup<0.331.0>:misc:start_singleton:954]start_singleton(gen_server, auto_failover, [], []): started as <0.335.0> on 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:35:24.126,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.335.0>},
{name,auto_failover},
{mfargs,{auto_failover,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.127,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.329.0>},
{name,mb_master},
{mfa,{mb_master,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.127,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.336.0>},
{name,master_activity_events_ingress},
{mfa,
{gen_event,start_link,
[{local,master_activity_events_ingress}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.127,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.337.0>},
{name,master_activity_events_timestamper},
{mfa,
{master_activity_events,start_link_timestamper,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.158,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
{'EXIT',{noproc,{gen_server,call,
[{'janitor_agent-default','ns_1@127.0.0.1'},
query_vbucket_states,infinity]}}}
[ns_server:debug,2015-02-06T9:35:24.158,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:35:24.161,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.341.0>},
{name,master_activity_events_pids_watcher},
{mfa,
{master_activity_events_pids_watcher,start_link,
[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.184,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.342.0>},
{name,master_activity_events_keeper},
{mfa,{master_activity_events_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.247,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_ssl_services_sup}
started: [{pid,<0.346.0>},
{name,ns_ssl_services_setup},
{mfargs,{ns_ssl_services_setup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.281,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.345.0>},
{name,ns_ssl_services_sup},
{mfargs,{ns_ssl_services_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.288,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.348.0>},
{name,menelaus_ui_auth},
{mfargs,{menelaus_ui_auth,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.291,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.349.0>},
{name,menelaus_web_cache},
{mfargs,{menelaus_web_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.294,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.350.0>},
{name,menelaus_stats_gatherer},
{mfargs,{menelaus_stats_gatherer,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.305,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.351.0>},
{name,menelaus_web},
{mfargs,{menelaus_web,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.308,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.368.0>},
{name,menelaus_event},
{mfargs,{menelaus_event,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.314,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.333,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.370.0>},
{name,menelaus_web_alerts_srv},
{mfargs,{menelaus_web_alerts_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[user:info,2015-02-06T9:35:24.333,ns_1@127.0.0.1:ns_server_sup<0.270.0>:menelaus_sup:start_link:44]Couchbase Server has started on web port 8091 on node 'ns_1@127.0.0.1'. Version: "3.0.1-1444-rel-community".
[error_logger:info,2015-02-06T9:35:24.334,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.344.0>},
{name,menelaus},
{mfa,{menelaus_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.343,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.372.0>},
{name,mc_conn_sup},
{mfargs,{mc_conn_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,supervisor}]
[ns_server:info,2015-02-06T9:35:24.348,ns_1@127.0.0.1:<0.373.0>:mc_tcp_listener:init:24]mccouch is listening on port 11213
[error_logger:info,2015-02-06T9:35:24.348,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.373.0>},
{name,mc_tcp_listener},
{mfargs,{mc_tcp_listener,start_link,[11213]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.349,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.371.0>},
{name,mc_sup},
{mfa,{mc_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[ns_server:debug,2015-02-06T9:35:24.350,ns_1@127.0.0.1:<0.373.0>:mc_tcp_listener:accept_loop:31]Got new connection
[error_logger:info,2015-02-06T9:35:24.351,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.374.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.351,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.375.0>},
{name,ns_port_memcached_killer},
{mfa,{ns_ports_setup,start_memcached_force_killer,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[ns_server:info,2015-02-06T9:35:24.354,ns_1@127.0.0.1:<0.377.0>:ns_memcached_log_rotator:init:28]Starting log rotator on "/opt/couchbase/var/lib/couchbase/logs"/"memcached.log"* with an initial period of 39003ms
[error_logger:info,2015-02-06T9:35:24.355,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.377.0>},
{name,ns_memcached_log_rotator},
{mfa,{ns_memcached_log_rotator,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:info,2015-02-06T9:35:24.361,ns_1@127.0.0.1:<0.377.0>:ns_memcached_log_rotator:handle_info:45]Removed 1 "memcached.log" log files from "/opt/couchbase/var/lib/couchbase/logs" (retaining up to 20)
[ns_server:debug,2015-02-06T9:35:24.362,ns_1@127.0.0.1:<0.379.0>:mc_connection:handle_select_bucket:131]Got select bucket default
[ns_server:debug,2015-02-06T9:35:24.362,ns_1@127.0.0.1:<0.373.0>:mc_tcp_listener:accept_loop:33]Passed connection to mc_conn_sup: <0.379.0>
[ns_server:debug,2015-02-06T9:35:24.362,ns_1@127.0.0.1:<0.379.0>:mc_connection:handle_select_bucket:133]Sent reply on select bucket
[error_logger:info,2015-02-06T9:35:24.381,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.380.0>},
{name,memcached_clients_pool},
{mfa,{memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.443,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.382.0>},
{name,proxied_memcached_clients_pool},
{mfa,{proxied_memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.444,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.383.0>},
{name,xdc_lhttpc_pool},
{mfa,
{lhttpc_manager,start_link,
[[{name,xdc_lhttpc_pool},
{connection_timeout,120000},
{pool_size,200}]]}},
{restart_type,{permanent,1}},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.451,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.384.0>},
{name,ns_null_connection_pool},
{mfa,
{ns_null_connection_pool,start_link,
[ns_null_connection_pool]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.456,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.386.0>},
{name,xdc_stats_holder},
{mfargs,
{proc_lib,start_link,
[xdcr_sup,link_stats_holder_body,[]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.457,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.387.0>},
{name,xdc_replication_sup},
{mfargs,{xdc_replication_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.491,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.388.0>},
{name,xdc_rep_manager},
{mfargs,{xdc_rep_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,30000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.492,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.385.0>},
{name,xdcr_sup},
{mfa,{xdcr_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.512,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.396.0>},
{name,ns_memcached_sockets_pool},
{mfa,{ns_memcached_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.575,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.398.0>},
{name,xdcr_dcp_sockets_pool},
{mfa,{xdcr_dcp_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.578,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.400.0>},
{name,ns_bucket_worker},
{mfargs,{work_queue,start_link,[ns_bucket_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.581,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.402.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.581,ns_1@127.0.0.1:ns_bucket_worker<0.400.0>:ns_bucket_sup:update_childs:84]Starting new child: {{per_bucket_sup,"default"},
{single_bucket_sup,start_link,["default"]},
permanent,infinity,supervisor,
[single_bucket_sup]}
[error_logger:info,2015-02-06T9:35:24.587,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.401.0>},
{name,ns_bucket_sup},
{mfargs,{ns_bucket_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.588,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.399.0>},
{name,ns_bucket_worker_sup},
{mfa,{ns_bucket_worker_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.588,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.403.0>},
{name,system_stats_collector},
{mfa,{system_stats_collector,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.593,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.407.0>},
{name,{per_bucket_sup,"default"}},
{mfargs,{single_bucket_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.601,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.409.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.601,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.411.0>},
{name,{stats_reader,"@system"}},
{mfa,{stats_reader,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.634,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.414.0>},
{name,compaction_daemon},
{mfa,{compaction_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.904,ns_1@127.0.0.1:<0.424.0>:new_concurrency_throttle:init:113]init concurrent throttle process, pid: <0.424.0>, type: kv_throttle, # of available token: 1
[error_logger:info,2015-02-06T9:35:24.905,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.412.0>},
{name,{capi_set_view_manager,"default"}},
{mfargs,{capi_set_view_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.906,ns_1@127.0.0.1:ns_memcached-default<0.427.0>:ns_memcached:init:161]Starting ns_memcached
[ns_server:debug,2015-02-06T9:35:24.906,ns_1@127.0.0.1:<0.426.0>:capi_set_view_manager:ddoc_replicator_loop:598]doing replicate_newnodes_docs
[ns_server:debug,2015-02-06T9:35:24.907,ns_1@127.0.0.1:<0.428.0>:ns_memcached:run_connect_phase:184]Started 'connecting' phase of ns_memcached-default. Parent is <0.427.0>
[error_logger:info,2015-02-06T9:35:24.907,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.427.0>},
{name,{ns_memcached,"default"}},
{mfargs,{ns_memcached,start_link,["default"]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:24.992,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_new_daemon:process_scheduler_message:1288]Starting compaction for the following buckets:
[<<"default">>]
[error_logger:info,2015-02-06T9:35:24.992,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.416.0>},
{name,compaction_new_daemon},
{mfa,{compaction_new_daemon,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,86400000},
{child_type,worker}]
[ns_server:info,2015-02-06T9:35:24.993,ns_1@127.0.0.1:<0.429.0>:compaction_new_daemon:spawn_scheduled_kv_compactor:468]Start compaction of vbuckets for bucket default with config:
[{database_fragmentation_threshold,{30,undefined}},
{view_fragmentation_threshold,{30,undefined}}]
[ns_server:debug,2015-02-06T9:35:24.993,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_new_daemon:process_scheduler_message:1288]Starting compaction for the following buckets:
[<<"default">>]
[ns_server:debug,2015-02-06T9:35:24.994,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_new_daemon:process_scheduler_message:1288]Starting compaction for the following buckets:
[<<"default">>]
[ns_server:info,2015-02-06T9:35:24.994,ns_1@127.0.0.1:<0.430.0>:compaction_new_daemon:try_to_cleanup_indexes:564]Cleaning up indexes for bucket `default`
[ns_server:info,2015-02-06T9:35:24.995,ns_1@127.0.0.1:<0.434.0>:compaction_new_daemon:spawn_master_db_compactor:832]Start compaction of master db for bucket default with config:
[{database_fragmentation_threshold,{30,undefined}},
{view_fragmentation_threshold,{30,undefined}}]
[ns_server:debug,2015-02-06T9:35:24.996,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_new_daemon:process_compactors_exit:1329]Finished compaction iteration.
[ns_server:debug,2015-02-06T9:35:24.996,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_scheduler:schedule_next:60]Finished compaction too soon. Next run will be in 3600s
[error_logger:info,2015-02-06T9:35:25.106,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.435.0>},
{name,{ns_vbm_sup,"default"}},
{mfargs,{ns_vbm_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[ns_server:debug,2015-02-06T9:35:25.160,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
{'EXIT',{noproc,{gen_server,call,
[{'janitor_agent-default','ns_1@127.0.0.1'},
query_vbucket_states,infinity]}}}
[ns_server:info,2015-02-06T9:35:25.161,ns_1@127.0.0.1:<0.430.0>:compaction_new_daemon:spawn_scheduled_views_compactor:494]Start compaction of indexes for bucket default with config:
[{database_fragmentation_threshold,{30,undefined}},
{view_fragmentation_threshold,{30,undefined}}]
[ns_server:debug,2015-02-06T9:35:25.162,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:debug,2015-02-06T9:35:25.162,ns_1@127.0.0.1:xdc_rdoc_replication_srv<0.436.0>:xdc_rdoc_replication_srv:init:76]Loaded the following docs:
[]
[ns_server:debug,2015-02-06T9:35:25.162,ns_1@127.0.0.1:xdc_rdoc_replication_srv<0.436.0>:xdc_rdoc_replication_srv:handle_info:154]doing replicate_newnodes_docs
[error_logger:info,2015-02-06T9:35:25.163,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.436.0>},
{name,xdc_rdoc_replication_srv},
{mfa,{xdc_rdoc_replication_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:25.164,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_new_daemon:process_compactors_exit:1329]Finished compaction iteration.
[ns_server:debug,2015-02-06T9:35:25.165,ns_1@127.0.0.1:compaction_new_daemon<0.416.0>:compaction_scheduler:schedule_next:60]Finished compaction too soon. Next run will be in 29s
[error_logger:info,2015-02-06T9:35:25.171,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.439.0>},
{name,{dcp_sup,"default"}},
{mfargs,{dcp_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:25.173,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.440.0>},
{name,{replication_manager,"default"}},
{mfargs,{replication_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:info,2015-02-06T9:35:25.182,ns_1@127.0.0.1:set_view_update_daemon<0.441.0>:set_view_update_daemon:init:50]Set view update daemon, starting with the following settings:
update interval: 5000ms
minimum number of changes: 5000
[error_logger:info,2015-02-06T9:35:25.182,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.441.0>},
{name,set_view_update_daemon},
{mfa,{set_view_update_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.191,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cluster_logs_sup}
started: [{pid,<0.444.0>},
{name,ets_holder},
{mfargs,
{cluster_logs_collection_task,
start_link_ets_holder,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.191,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.443.0>},
{name,cluster_logs_sup},
{mfa,{cluster_logs_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[ns_server:debug,2015-02-06T9:35:25.192,ns_1@127.0.0.1:<0.2.0>:child_erlang:child_loop:118]"4582": Entered child_loop
[error_logger:info,2015-02-06T9:35:25.192,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.270.0>},
{name,ns_server_sup},
{mfargs,{ns_server_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:25.192,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: ns_server
started_at: 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:35:25.195,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.445.0>},
{name,{dcp_notifier,"default"}},
{mfargs,{dcp_notifier,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.209,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.447.0>},
{name,rebalance_subprocesses_registry},
{mfargs,
{ns_process_registry,start_link,
['rebalance_subprocesses_registry-default',
[{terminate_command,kill}]]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[ns_server:info,2015-02-06T9:35:25.209,ns_1@127.0.0.1:janitor_agent-default<0.448.0>:janitor_agent:read_flush_counter:1048]Loading flushseq failed: {error,enoent}. Assuming it's equal to global config.
[ns_server:info,2015-02-06T9:35:25.209,ns_1@127.0.0.1:janitor_agent-default<0.448.0>:janitor_agent:read_flush_counter_from_config:1055]Initialized flushseq 0 from bucket config
[error_logger:info,2015-02-06T9:35:25.210,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.448.0>},
{name,janitor_agent},
{mfargs,{janitor_agent,start_link,["default"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.210,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.446.0>},
{name,{janitor_agent_sup,"default"}},
{mfargs,{janitor_agent_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.237,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.449.0>},
{name,{couch_stats_reader,"default"}},
{mfargs,{couch_stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.276,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.450.0>},
{name,{stats_collector,"default"}},
{mfargs,{stats_collector,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.277,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.453.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.277,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.455.0>},
{name,{stats_reader,"default"}},
{mfargs,{stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.277,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.456.0>},
{name,{failover_safeness_level,"default"}},
{mfargs,
{failover_safeness_level,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.704,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.457.0>},
{name,{terse_bucket_info_uploader,"default"}},
{mfargs,
{terse_bucket_info_uploader,start_link,
["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:26.327,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:26.345,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:debug,2015-02-06T9:35:27.366,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:27.366,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[user:info,2015-02-06T9:35:27.369,ns_1@127.0.0.1:<0.461.0>:menelaus_web_alerts_srv:global_alert:81]Approaching full disk warning. Usage of disk "/" on node "127.0.0.1" is around 94%.
[ns_server:debug,2015-02-06T9:35:28.367,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:28.367,ns_1@127.0.0.1:<0.340.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:info,2015-02-06T9:35:29.159,ns_1@127.0.0.1:<0.333.0>:ns_janitor:cleanup_with_states:116]Bucket "default" not yet ready on ['ns_1@127.0.0.1']
[ns_server:debug,2015-02-06T9:35:34.107,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:34.107,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:debug,2015-02-06T9:35:35.108,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:35.108,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:debug,2015-02-06T9:35:36.109,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:36.109,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:debug,2015-02-06T9:35:37.110,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:37.110,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:debug,2015-02-06T9:35:38.111,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop:109]Exception from query_vbucket_states of "default":'ns_1@127.0.0.1'
warming_up
[ns_server:debug,2015-02-06T9:35:38.111,ns_1@127.0.0.1:<0.487.0>:janitor_agent:query_vbucket_states_loop_next_step:114]Waiting for "default" on 'ns_1@127.0.0.1'
[ns_server:warn,2015-02-06T9:35:38.290,ns_1@127.0.0.1:<0.428.0>:ns_memcached:connect:1260]Unable to connect: {error,{badmatch,{error,closed}}}, retrying.
[ns_server:info,2015-02-06T9:35:38.290,ns_1@127.0.0.1:<0.379.0>:mc_connection:run_loop:162]mccouch connection was normally closed
[user:debug,2015-02-06T9:35:38.291,ns_1@127.0.0.1:<0.277.0>:ns_log:crash_consumption_loop:70]Port server memcached on node 'babysitter_of_ns_1@127.0.0.1' exited with status 0. Restarting. Messages: Fri Feb 6 09:35:24.501291 PST 3: (default) Shutting down tap connections!
Fri Feb 6 09:35:24.501348 PST 3: (default) Shutting down dcp connections!
Fri Feb 6 09:35:24.501368 PST 3: (default) Stopping warmup while engine is loading data from underlying storage, shutdown = yes
Fri Feb 6 09:35:24.502937 PST 3: (default) Had to wait 1067 usec for shutdown
Fri Feb 6 09:35:24.503126 PST 3: (No Engine) Unregistering last bucket default
[user:debug,2015-02-06T9:35:38.328,ns_1@127.0.0.1:<0.277.0>:ns_log:crash_consumption_loop:70]Port server moxi on node 'babysitter_of_ns_1@127.0.0.1' exited with status 0. Restarting. Messages: WARNING: curl error: Failed to connect to 127.0.0.1 port 8091: Connection refused from: http://127.0.0.1:8091/pools/default/saslBucketsStreaming
ERROR: could not contact REST server(s): http://127.0.0.1:8091/pools/default/saslBucketsStreaming
WARNING: curl error: Failed to connect to 127.0.0.1 port 8091: Connection refused from: http://127.0.0.1:8091/pools/default/saslBucketsStreaming
ERROR: could not contact REST server(s): http://127.0.0.1:8091/pools/default/saslBucketsStreaming
EOL on stdin. Exiting
[ns_server:debug,2015-02-06T9:35:38.328,ns_1@127.0.0.1:ns_ssl_services_setup<0.346.0>:ns_ssl_services_setup:restart_xdcr_proxy:325]Xdcr proxy restart failed. But that's usually normal. {'EXIT',
{{badmatch,
{badrpc,
{'EXIT',
{shutdown,
{gen_server,call,
[ns_child_ports_sup,
which_children,
infinity]}}}}},
[{ns_ports_setup,
restart_xdcr_proxy,
0,
[{file,
"src/ns_ports_setup.erl"},
{line,51}]},
{ns_ssl_services_setup,
restart_xdcr_proxy,
0,
[{file,
"src/ns_ssl_services_setup.erl"},
{line,322}]},
{ns_ssl_services_setup,
init,1,
[{file,
"src/ns_ssl_services_setup.erl"},
{line,210}]},
{gen_server,init_it,
6,
[{file,
"gen_server.erl"},
{line,304}]},
{proc_lib,
init_p_do_apply,3,
[{file,
"proc_lib.erl"},
{line,239}]}]}}
[ns_server:debug,2015-02-06T9:35:38.329,ns_1@127.0.0.1:<0.2.0>:child_erlang:child_loop:122]"4582": Got EOL
[ns_server:info,2015-02-06T9:35:38.329,ns_1@127.0.0.1:<0.2.0>:ns_bootstrap:stop:41]Initiated server shutdown
[error_logger:info,2015-02-06T9:35:38.329,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Initiated server shutdown
[ns_server:debug,2015-02-06T9:35:38.329,ns_1@127.0.0.1:<0.442.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.441.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:38.329,ns_1@127.0.0.1:<0.417.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.416.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:38.330,ns_1@127.0.0.1:<0.431.0>:compaction_new_daemon:do_chain_compactors:600]Got exit signal from parent: {'EXIT',<0.429.0>,shutdown}
[ns_server:debug,2015-02-06T9:35:38.993,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.313.0>:ns_heart:current_status_slow_inner:259]Ignoring failure to grab system stats:
{'EXIT',{noproc,{gen_server,call,
[{'stats_reader-@system','ns_1@127.0.0.1'},
{latest,"minute"}]}}}
[ns_server:debug,2015-02-06T9:35:38.994,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.313.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg
[ns_server:info,2015-02-06T9:35:39.297,ns_1@127.0.0.1:<0.484.0>:ns_janitor:cleanup_with_states:116]Bucket "default" not yet ready on ['ns_1@127.0.0.1']
[ns_server:warn,2015-02-06T9:35:39.303,ns_1@127.0.0.1:<0.428.0>:ns_memcached:connect:1260]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying.
[ns_server:debug,2015-02-06T9:35:39.330,ns_1@127.0.0.1:<0.376.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.374.0>} exited with reason {badmatch,
{is_pid,
false,
{badrpc,
{'EXIT',
{shutdown,
{gen_server,
call,
[ns_child_ports_sup,
which_children,
infinity]}}}}}}
[ns_server:debug,2015-02-06T9:35:39.333,ns_1@127.0.0.1:<0.410.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_stats_event,<0.409.0>} exited with reason killed
[ns_server:debug,2015-02-06T9:35:39.336,ns_1@127.0.0.1:<0.406.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ale_stats_events,<0.403.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:39.336,ns_1@127.0.0.1:<0.405.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_tick_event,<0.403.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:39.337,ns_1@127.0.0.1:<0.407.0>:single_bucket_sup:top_loop:29]Delegating exit {'EXIT',<0.401.0>,shutdown} to child supervisor: <0.408.0>
[ns_server:debug,2015-02-06T9:35:39.337,ns_1@127.0.0.1:<0.458.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {bucket_info_cache_invalidations,<0.457.0>} exited with reason shutdown
[error_logger:error,2015-02-06T9:35:39.707,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: ns_ports_setup:setup_body_tramp/0
pid: <0.374.0>
registered_name: ns_ports_setup
exception error: no match of right hand side value
{is_pid,false,
{badrpc,
{'EXIT',
{shutdown,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}
in function ns_ports_setup:set_childs_and_loop/1 (src/ns_ports_setup.erl, line 59)
in call from misc:delaying_crash/2 (src/misc.erl, line 1507)
ancestors: [ns_server_sup,ns_server_cluster_sup,<0.59.0>]
messages: []
links: [<0.270.0>,<0.376.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 75113
stack_size: 27
reductions: 12412
neighbours:
[error_logger:error,2015-02-06T9:35:39.708,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_server_sup}
Context: shutdown_error
Reason: killed
Offender: [{pid,<0.409.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:warn,2015-02-06T9:35:40.325,ns_1@127.0.0.1:<0.428.0>:ns_memcached:connect:1260]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying.
[ns_server:debug,2015-02-06T9:35:40.340,ns_1@127.0.0.1:<0.454.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_stats_event,<0.453.0>} exited with reason killed
[error_logger:error,2015-02-06T9:35:40.346,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,'single_bucket_sup-default'}
Context: shutdown_error
Reason: killed
Offender: [{pid,<0.453.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:40.346,ns_1@127.0.0.1:<0.452.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.450.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.346,ns_1@127.0.0.1:<0.451.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_tick_event,<0.450.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.347,ns_1@127.0.0.1:replication_manager-default<0.440.0>:replication_manager:terminate:105]Replication manager died {shutdown,{state,"default",tap,[],undefined}}
[error_logger:error,2015-02-06T9:35:40.347,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,menelaus_sup}
Context: child_terminated
Reason: {shutdown,
{gen_server,call,
['ns_memcached-default',topkeys,180000]}}
Offender: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:40.348,ns_1@127.0.0.1:<0.413.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.412.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.348,ns_1@127.0.0.1:<0.407.0>:single_bucket_sup:top_loop:25]per-bucket supervisor for "default" died with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.348,ns_1@127.0.0.1:<0.402.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.401.0>} exited with reason shutdown
[error_logger:info,2015-02-06T9:35:40.348,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.506.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:35:40.348,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_bucket_sup}
Context: shutdown_error
Reason: normal
Offender: [{pid,<0.402.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:40.348,ns_1@127.0.0.1:<0.378.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.375.0>} exited with reason killed
[error_logger:error,2015-02-06T9:35:40.349,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_server_sup}
Context: shutdown_error
Reason: {badmatch,
{is_pid,false,
{badrpc,
{'EXIT',
{shutdown,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}}
Offender: [{pid,<0.374.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[ns_server:debug,2015-02-06T9:35:40.349,ns_1@127.0.0.1:<0.347.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.346.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.350,ns_1@127.0.0.1:<0.343.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {master_activity_events,<0.342.0>} exited with reason killed
[ns_server:info,2015-02-06T9:35:40.350,ns_1@127.0.0.1:mb_master<0.329.0>:mb_master:terminate:299]Synchronously shutting down child mb_master_sup
[ns_server:debug,2015-02-06T9:35:40.351,ns_1@127.0.0.1:<0.330.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.329.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.352,ns_1@127.0.0.1:<0.315.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.314.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.353,ns_1@127.0.0.1:<0.311.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {buckets_events,<0.310.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.353,ns_1@127.0.0.1:<0.302.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.300.0>} exited with reason killed
[ns_server:debug,2015-02-06T9:35:40.353,ns_1@127.0.0.1:<0.299.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.298.0>} exited with reason killed
[error_logger:error,2015-02-06T9:35:40.353,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: gen_event:init_it/6
pid: <0.301.0>
registered_name: bucket_info_cache_invalidations
exception exit: killed
in function gen_event:terminate_server/4 (gen_event.erl, line 320)
ancestors: [bucket_info_cache,ns_server_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 159
neighbours:
[ns_server:debug,2015-02-06T9:35:40.354,ns_1@127.0.0.1:<0.293.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events_local,<0.292.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.354,ns_1@127.0.0.1:<0.279.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.278.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.361,ns_1@127.0.0.1:<0.268.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.267.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.361,ns_1@127.0.0.1:<0.266.0>:ns_pubsub:do_subscribe_link:136]Parent process of subscription {ns_config_events,<0.265.0>} exited with reason shutdown
[ns_server:debug,2015-02-06T9:35:40.361,ns_1@127.0.0.1:ns_config<0.262.0>:ns_config:wait_saver:712]Done waiting for saver.
[error_logger:error,2015-02-06T9:35:40.364,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.238.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.238.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.236.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1128
neighbours:
[error_logger:error,2015-02-06T9:35:40.365,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.420.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.420.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.418.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1140
neighbours:
[error_logger:error,2015-02-06T9:35:40.367,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.391.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.391.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.389.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1139
neighbours:
[error_logger:error,2015-02-06T9:35:40.367,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.239.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.239.0>,<0.240.0>,nil,<<"1423244002239760">>,
<0.236.0>,<0.241.0>,
{db_header,11,1,
<<0,0,0,0,13,103,0,0,0,0,0,51,0,0,0,0,1,0,0,0,
0,0,0,0,0,0,13,69>>,
<<0,0,0,0,13,154,0,0,0,0,0,49,0,0,0,0,1>>,
nil,0,nil,nil},
1,
{btree,<0.236.0>,
{3431,
<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,13,69>>,
51},
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,2558,
true},
{btree,<0.236.0>,
{3482,<<0,0,0,0,1>>,49},
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,2558,
true},
{btree,<0.236.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
1,<<"_users">>,
"/opt/couchbase/var/lib/couchbase/data/_users.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[{user_ctx,
{user_ctx,null,[<<"_admin">>],undefined}},
sys_db]}
** Reason for termination ==
** killed
[error_logger:error,2015-02-06T9:35:40.368,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_db:init/1
pid: <0.239.0>
registered_name: []
exception exit: killed
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [couch_server,couch_primary_services,couch_server_sup,
cb_couch_sup,ns_server_cluster_sup,<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 230
neighbours:
[error_logger:error,2015-02-06T9:35:40.369,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.421.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.421.0>,<0.422.0>,nil,<<"1423244124901309">>,
<0.418.0>,<0.423.0>,
{db_header,11,0,nil,nil,nil,0,nil,nil},
0,
{btree,<0.418.0>,nil,
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,
2558,true},
{btree,<0.418.0>,nil,
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,
2558,true},
{btree,<0.418.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
0,<<"default/master">>,
"/opt/couchbase/var/lib/couchbase/data/default/master.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[]}
** Reason for termination ==
** killed
[error_logger:error,2015-02-06T9:35:40.369,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_db:init/1
pid: <0.421.0>
registered_name: []
exception exit: killed
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [couch_server,couch_primary_services,couch_server_sup,
cb_couch_sup,ns_server_cluster_sup,<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 257
neighbours:
[error_logger:error,2015-02-06T9:35:40.370,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.392.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.392.0>,<0.393.0>,nil,<<"1423244124491465">>,
<0.389.0>,<0.394.0>,
{db_header,11,0,nil,nil,nil,0,nil,nil},
0,
{btree,<0.389.0>,nil,
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,2558,
true},
{btree,<0.389.0>,nil,
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,2558,
true},
{btree,<0.389.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
0,<<"_replicator">>,
"/opt/couchbase/var/lib/couchbase/data/_replicator.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[sys_db,
{user_ctx,
{user_ctx,null,
[<<"_admin">>,<<"_replicator">>],
undefined}}]}
** Reason for termination ==
** killed
[error_logger:error,2015-02-06T9:35:40.370,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_db:init/1
pid: <0.392.0>
registered_name: []
exception exit: killed
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [couch_server,couch_primary_services,couch_server_sup,
cb_couch_sup,ns_server_cluster_sup,<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 244
neighbours:
[error_logger:info,2015-02-06T9:35:40.370,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
application: ns_server
exited: stopped
type: permanent
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.320.0>},
{name,disk_log_server},
{mfargs,{disk_log_server,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.073,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.317.0>},
{name,remote_clusters_info},
{mfa,{remote_clusters_info,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.073,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.323.0>},
{name,master_activity_events},
{mfa,
{gen_event,start_link,
[{local,master_activity_events}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.382,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.332.0>},
{name,ns_orchestrator},
{mfargs,{ns_orchestrator,start_link,[]}},
{restart_type,permanent},
{shutdown,20},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.633,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.334.0>},
{name,ns_tick},
{mfargs,{ns_tick,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.652,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.335.0>},
{name,auto_failover},
{mfargs,{auto_failover,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.654,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.329.0>},
{name,mb_master},
{mfa,{mb_master,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:24.654,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.336.0>},
{name,master_activity_events_ingress},
{mfa,
{gen_event,start_link,
[{local,master_activity_events_ingress}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.925,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.337.0>},
{name,master_activity_events_timestamper},
{mfa,
{master_activity_events,start_link_timestamper,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.932,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.341.0>},
{name,master_activity_events_pids_watcher},
{mfa,
{master_activity_events_pids_watcher,start_link,
[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:24.953,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.342.0>},
{name,master_activity_events_keeper},
{mfa,{master_activity_events_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.301,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_ssl_services_sup}
started: [{pid,<0.346.0>},
{name,ns_ssl_services_setup},
{mfargs,{ns_ssl_services_setup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.502,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.345.0>},
{name,ns_ssl_services_sup},
{mfargs,{ns_ssl_services_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:25.509,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.348.0>},
{name,menelaus_ui_auth},
{mfargs,{menelaus_ui_auth,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.511,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.349.0>},
{name,menelaus_web_cache},
{mfargs,{menelaus_web_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.514,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.350.0>},
{name,menelaus_stats_gatherer},
{mfargs,{menelaus_stats_gatherer,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.772,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.351.0>},
{name,menelaus_web},
{mfargs,{menelaus_web,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.776,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.368.0>},
{name,menelaus_event},
{mfargs,{menelaus_event,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:25.787,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.066,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.370.0>},
{name,menelaus_web_alerts_srv},
{mfargs,{menelaus_web_alerts_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.067,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.344.0>},
{name,menelaus},
{mfa,{menelaus_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:26.073,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.372.0>},
{name,mc_conn_sup},
{mfargs,{mc_conn_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:26.075,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.373.0>},
{name,mc_tcp_listener},
{mfargs,{mc_tcp_listener,start_link,[11213]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.076,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.371.0>},
{name,mc_sup},
{mfa,{mc_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:26.077,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.374.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.078,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.375.0>},
{name,ns_port_memcached_killer},
{mfa,{ns_ports_setup,start_memcached_force_killer,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.085,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.377.0>},
{name,ns_memcached_log_rotator},
{mfa,{ns_memcached_log_rotator,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.345,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.379.0>},
{name,memcached_clients_pool},
{mfa,{memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.358,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.381.0>},
{name,proxied_memcached_clients_pool},
{mfa,{proxied_memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.358,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.382.0>},
{name,xdc_lhttpc_pool},
{mfa,
{lhttpc_manager,start_link,
[[{name,xdc_lhttpc_pool},
{connection_timeout,120000},
{pool_size,200}]]}},
{restart_type,{permanent,1}},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.581,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.385.0>},
{name,ns_null_connection_pool},
{mfa,
{ns_null_connection_pool,start_link,
[ns_null_connection_pool]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.592,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.387.0>},
{name,xdc_stats_holder},
{mfargs,
{proc_lib,start_link,
[xdcr_sup,link_stats_holder_body,[]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.592,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.388.0>},
{name,xdc_replication_sup},
{mfargs,{xdc_replication_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:26.828,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.389.0>},
{name,xdc_rep_manager},
{mfargs,{xdc_rep_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,30000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:26.829,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.386.0>},
{name,xdcr_sup},
{mfa,{xdcr_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:26.839,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.397.0>},
{name,ns_memcached_sockets_pool},
{mfa,{ns_memcached_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:27.098,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.399.0>},
{name,xdcr_dcp_sockets_pool},
{mfa,{xdcr_dcp_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:27.104,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.401.0>},
{name,ns_bucket_worker},
{mfargs,{work_queue,start_link,[ns_bucket_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:27.107,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.403.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:27.108,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.402.0>},
{name,ns_bucket_sup},
{mfargs,{ns_bucket_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:27.108,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.400.0>},
{name,ns_bucket_worker_sup},
{mfa,{ns_bucket_worker_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:27.108,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.404.0>},
{name,system_stats_collector},
{mfa,{system_stats_collector,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:27.327,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.408.0>},
{name,{per_bucket_sup,"default"}},
{mfargs,{single_bucket_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:27.337,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.410.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:27.337,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.412.0>},
{name,{stats_reader,"@system"}},
{mfa,{stats_reader,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:31.622,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: ns_ports_setup:setup_body_tramp/0
pid: <0.374.0>
registered_name: ns_ports_setup
exception error: no match of right hand side value
{is_pid,false,
{badrpc,
{'EXIT',
{noproc,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}
in function ns_ports_setup:set_childs_and_loop/1 (src/ns_ports_setup.erl, line 59)
in call from misc:delaying_crash/2 (src/misc.erl, line 1507)
ancestors: [ns_server_sup,ns_server_cluster_sup,<0.60.0>]
messages: []
links: [<0.270.0>,<0.376.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 75113
stack_size: 27
reductions: 12412
neighbours:
[error_logger:info,2015-02-06T9:26:31.624,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.416.0>},
{name,compaction_daemon},
{mfa,{compaction_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:32.208,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.413.0>},
{name,{capi_set_view_manager,"default"}},
{mfargs,{capi_set_view_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:32.213,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.434.0>},
{name,{ns_memcached,"default"}},
{mfargs,{ns_memcached,start_link,["default"]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:32.500,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.442.0>},
{name,{ns_vbm_sup,"default"}},
{mfargs,{ns_vbm_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:32.513,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.429.0>},
{name,compaction_new_daemon},
{mfa,{compaction_new_daemon,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:32.767,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.449.0>},
{name,{dcp_sup,"default"}},
{mfargs,{dcp_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:32.768,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.450.0>},
{name,{replication_manager,"default"}},
{mfargs,{replication_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:32.772,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.451.0>},
{name,xdc_rdoc_replication_srv},
{mfa,{xdc_rdoc_replication_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:32.784,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.453.0>},
{name,set_view_update_daemon},
{mfa,{set_view_update_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.059,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.457.0>},
{name,{dcp_notifier,"default"}},
{mfargs,{dcp_notifier,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.062,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cluster_logs_sup}
started: [{pid,<0.460.0>},
{name,ets_holder},
{mfargs,
{cluster_logs_collection_task,
start_link_ets_holder,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.063,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.459.0>},
{name,cluster_logs_sup},
{mfa,{cluster_logs_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:33.065,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.461.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.067,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.270.0>},
{name,ns_server_sup},
{mfargs,{ns_server_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:26:33.068,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: ns_server
started_at: 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:26:33.069,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Initiated server shutdown
[error_logger:info,2015-02-06T9:26:33.069,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.464.0>},
{name,rebalance_subprocesses_registry},
{mfargs,
{ns_process_registry,start_link,
['rebalance_subprocesses_registry-default',
[{terminate_command,kill}]]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.071,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.465.0>},
{name,janitor_agent},
{mfargs,{janitor_agent,start_link,["default"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.078,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.463.0>},
{name,{janitor_agent_sup,"default"}},
{mfargs,{janitor_agent_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:33.272,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.469.0>},
{name,{couch_stats_reader,"default"}},
{mfargs,{couch_stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:35.038,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: ns_ports_setup:setup_body_tramp/0
pid: <0.461.0>
registered_name: ns_ports_setup
exception error: no match of right hand side value
{is_pid,false,
{badrpc,
{'EXIT',
{noproc,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}
in function ns_ports_setup:set_childs_and_loop/1 (src/ns_ports_setup.erl, line 59)
in call from misc:delaying_crash/2 (src/misc.erl, line 1507)
ancestors: [ns_server_sup,ns_server_cluster_sup,<0.60.0>]
messages: []
links: [<0.270.0>,<0.462.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 75113
stack_size: 27
reductions: 12388
neighbours:
[error_logger:error,2015-02-06T9:26:35.038,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_server_sup}
Context: shutdown_error
Reason: killed
Offender: [{pid,<0.410.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:35.044,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.476.0>},
{name,{stats_collector,"default"}},
{mfargs,{stats_collector,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:35.046,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.479.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:35.046,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.481.0>},
{name,{stats_reader,"default"}},
{mfargs,{stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:35.047,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.482.0>},
{name,{failover_safeness_level,"default"}},
{mfargs,
{failover_safeness_level,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:41.548,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
alarm_handler: {set,{system_memory_high_watermark,[]}}
[error_logger:info,2015-02-06T9:26:41.552,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.491.0>},
{name,{terse_bucket_info_uploader,"default"}},
{mfargs,
{terse_bucket_info_uploader,start_link,
["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:41.554,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,menelaus_sup}
Context: child_terminated
Reason: {shutdown,
{gen_server,call,
[{'stats_reader-default','ns_1@127.0.0.1'},
{latest,minute,1}]}}
Offender: [{pid,<0.370.0>},
{name,menelaus_web_alerts_srv},
{mfargs,{menelaus_web_alerts_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:44.493,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.496.0>},
{name,menelaus_web_alerts_srv},
{mfargs,{menelaus_web_alerts_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:44.504,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,'single_bucket_sup-default'}
Context: shutdown_error
Reason: killed
Offender: [{pid,<0.479.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:44.509,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,menelaus_sup}
Context: child_terminated
Reason: {shutdown,
{gen_server,call,
['ns_memcached-default',topkeys,180000]}}
Offender: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:26:44.512,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.505.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:44.513,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_bucket_sup}
Context: shutdown_error
Reason: normal
Offender: [{pid,<0.403.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:44.515,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_server_sup}
Context: shutdown_error
Reason: {badmatch,
{is_pid,false,
{badrpc,
{'EXIT',
{noproc,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}}
Offender: [{pid,<0.461.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:45.055,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server ns_heart terminating
** Last message in was beat
** When Server state == {state,undefined,<0.311.0>,
[{status_latency,21962},
{outgoing_replications_safeness_level,
[{"default",unknown}]},
{incoming_replications_conf_hashes,
[{"default",[]}]},
{local_tasks,[]},
{memory,
[{total,76514240},
{processes,30172568},
{processes_used,30141224},
{system,46341672},
{atom,537185},
{atom_used,508464},
{binary,687152},
{code,13041360},
{ets,25677872}]},
{system_memory_data,
[{system_total_memory,618852352},
{free_swap,0},
{total_swap,0},
{cached_memory,48902144},
{buffered_memory,10682368},
{free_memory,153591808},
{total_memory,618852352}]},
{node_storage_conf,
[{db_path,
"/opt/couchbase/var/lib/couchbase/data"},
{index_path,
"/opt/couchbase/var/lib/couchbase/data"}]},
{statistics,
[{wall_clock,{72225,865}},
{context_switches,{45436,0}},
{garbage_collection,{12670,66367659,0}},
{io,{{input,25096265},{output,2857973}}},
{reductions,{36662956,149621}},
{run_queue,0},
{runtime,{45290,750}},
{run_queues,{0}}]},
{system_stats,
[{cpu_utilization_rate,100.0},
{swap_total,0},
{swap_used,0},
{mem_total,618852352},
{mem_free,213340160}]},
{interesting_stats,[]},
{per_bucket_interesting_stats,[]},
{processes_stats,
[{<<"proc/(main)beam.smp/cpu_utilization">>,
0},
{<<"proc/(main)beam.smp/major_faults">>,0},
{<<"proc/(main)beam.smp/major_faults_raw">>,
37},
{<<"proc/(main)beam.smp/mem_resident">>,
123592704},
{<<"proc/(main)beam.smp/mem_share">>,
6840320},
{<<"proc/(main)beam.smp/mem_size">>,
817631232},
{<<"proc/(main)beam.smp/minor_faults">>,425},
{<<"proc/(main)beam.smp/minor_faults_raw">>,
40797},
{<<"proc/(main)beam.smp/page_faults">>,425},
{<<"proc/(main)beam.smp/page_faults_raw">>,
40834},
{<<"proc/inet_gethost/cpu_utilization">>,97},
{<<"proc/inet_gethost/major_faults">>,0},
{<<"proc/inet_gethost/major_faults_raw">>,
10076},
{<<"proc/inet_gethost/mem_resident">>,
102400},
{<<"proc/inet_gethost/mem_share">>,0},
{<<"proc/inet_gethost/mem_size">>,7557120},
{<<"proc/inet_gethost/minor_faults">>,0},
{<<"proc/inet_gethost/minor_faults_raw">>,
124179},
{<<"proc/inet_gethost/page_faults">>,0},
{<<"proc/inet_gethost/page_faults_raw">>,
134255}]},
{cluster_compatibility_version,196608},
{version,
[{lhttpc,"1.3.0"},
{os_mon,"2.2.14"},
{public_key,"0.21"},
{asn1,"2.0.4"},
{couch,"2.1.1r-432-gc2af28d"},
{kernel,"2.16.4"},
{syntax_tools,"1.6.13"},
{xmerl,"1.3.6"},
{ale,"3.0.1-1444-rel-community"},
{couch_set_view,"2.1.1r-432-gc2af28d"},
{compiler,"4.9.4"},
{inets,"5.9.8"},
{mapreduce,"1.0.0"},
{couch_index_merger,"2.1.1r-432-gc2af28d"},
{ns_server,"3.0.1-1444-rel-community"},
{oauth,"7d85d3ef"},
{crypto,"3.2"},
{ssl,"5.3.3"},
{sasl,"2.3.4"},
{couch_view_parser,"1.0.0"},
{mochiweb,"2.4.2"},
{stdlib,"1.19.4"}]},
{supported_compat_version,[3,0]},
{advertised_version,[3,0,0]},
{system_arch,"x86_64-unknown-linux-gnu"},
{wall_clock,72},
{memory_data,
{618852352,336928768,{<0.60.0>,460648}}},
{disk_data,
[{"/",8256952,93},
{"/dev",294072,1},
{"/run",60436,1},
{"/run/lock",5120,0},
{"/run/shm",302172,0},
{"/home/ubuntu/data",20642428,6},
{"/mnt/argyle-kpr",7224824,92},
{"/mnt/argyle-landsat",36124288,75},
{"/mnt/argyle-antarctica",1032088,21}]},
{meminfo,
<<"MemTotal: 604348 kB\nMemFree: 150380 kB\nBuffers: 10428 kB\nCached: 47760 kB\nSwapCached: 0 kB\nActive: 363808 kB\nInactive: 48744 kB\nActive(anon): 354420 kB\nInactive(anon): 168 kB\nActive(file): 9388 kB\nInactive(file): 48576 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 0 kB\nSwapFree: 0 kB\nDirty: 492 kB\nWriteback: 0 kB\nAnonPages: 354432 kB\nMapped: 14496 kB\nShmem: 208 kB\nSlab: 18692 kB\nSReclaimable: 10260 kB\nSUnreclaim: 8432 kB\nKernelStack: 1352 kB\nPageTables: 6100 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 302172 kB\nCommitted_AS: 632972 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 7152 kB\nVmallocChunk: 34359729008 kB\nHardwareCorrupted: 0 kB\nAnonHugePages: 0 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 637952 kB\nDirectMap2M: 0 kB\n">>}],
{1423,243593,57667}}
** Reason for termination ==
** {badarg,[{ns_heart,update_current_status,1,
[{file,"src/ns_heart.erl"},{line,192}]},
{ns_heart,handle_info,2,[{file,"src/ns_heart.erl"},{line,120}]},
{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,604}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
[error_logger:error,2015-02-06T9:26:45.058,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: ns_heart:init/1
pid: <0.310.0>
registered_name: ns_heart
exception exit: {badarg,
[{ns_heart,update_current_status,1,
[{file,"src/ns_heart.erl"},{line,192}]},
{ns_heart,handle_info,2,
[{file,"src/ns_heart.erl"},{line,120}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,604}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [ns_heart_sup,ns_server_sup,ns_server_cluster_sup,<0.60.0>]
messages: [{'EXIT',<0.309.0>,shutdown}]
links: [<0.276.0>,<0.311.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 17731
stack_size: 27
reductions: 20087
neighbours:
[error_logger:error,2015-02-06T9:26:45.058,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_heart_sup}
Context: shutdown_error
Reason: {badarg,[{ns_heart,update_current_status,1,
[{file,"src/ns_heart.erl"},{line,192}]},
{ns_heart,handle_info,2,
[{file,"src/ns_heart.erl"},{line,120}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,604}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}
Offender: [{pid,<0.310.0>},
{name,ns_heart},
{mfargs,{ns_heart,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:26:45.061,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: gen_event:init_it/6
pid: <0.301.0>
registered_name: bucket_info_cache_invalidations
exception exit: killed
in function gen_event:terminate_server/4 (gen_event.erl, line 320)
ancestors: [bucket_info_cache,ns_server_sup,ns_server_cluster_sup,
<0.60.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 159
neighbours:
[error_logger:error,2015-02-06T9:26:45.065,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.238.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.238.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.236.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.60.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1128
neighbours:
[error_logger:error,2015-02-06T9:26:45.066,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.426.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.426.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.424.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.60.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1140
neighbours:
[error_logger:error,2015-02-06T9:26:45.068,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.392.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.392.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.390.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.60.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 791
neighbours:
[error_logger:error,2015-02-06T9:26:45.069,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.239.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.239.0>,<0.240.0>,nil,<<"1423243577650732">>,
<0.236.0>,<0.241.0>,
{db_header,11,1,
<<0,0,0,0,13,103,0,0,0,0,0,51,0,0,0,0,1,0,0,0,
0,0,0,0,0,0,13,69>>,
<<0,0,0,0,13,154,0,0,0,0,0,49,0,0,0,0,1>>,
nil,0,nil,nil},
1,
{btree,<0.236.0>,
{3431,
<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,13,69>>,
51},
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,2558,
true},
{btree,<0.236.0>,
{3482,<<0,0,0,0,1>>,49},
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,2558,
true},
{btree,<0.236.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
1,<<"_users">>,
"/opt/couchbase/var/lib/couchbase/data/_users.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[{user_ctx,
{user_ctx,null,[<<"_admin">>],undefined}},
sys_db]}
** Reason for termination ==
** killed
[error_logger:error,2015-02-06T9:26:45.070,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_db:init/1
pid: <0.239.0>
registered_name: []
exception exit: killed
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [couch_server,couch_primary_services,couch_server_sup,
cb_couch_sup,ns_server_cluster_sup,<0.60.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 230
neighbours:
[error_logger:error,2015-02-06T9:26:45.071,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.427.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.427.0>,<0.428.0>,nil,<<"1423243592197575">>,
<0.424.0>,<0.431.0>,
{db_header,11,0,nil,nil,nil,0,nil,nil},
0,
{btree,<0.424.0>,nil,
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,
2558,true},
{btree,<0.424.0>,nil,
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,
2558,true},
{btree,<0.424.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
0,<<"default/master">>,
"/opt/couchbase/var/lib/couchbase/data/default/master.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[]}
** Reason for termination ==
** killed
[error_logger:error,2015-02-06T9:26:45.071,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_db:init/1
pid: <0.427.0>
registered_name: []
exception exit: killed
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [couch_server,couch_primary_services,couch_server_sup,
cb_couch_sup,ns_server_cluster_sup,<0.60.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 257
neighbours:
[error_logger:error,2015-02-06T9:26:45.072,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.393.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.393.0>,<0.394.0>,nil,<<"1423243586827701">>,
<0.390.0>,<0.395.0>,
{db_header,11,0,nil,nil,nil,0,nil,nil},
0,
{btree,<0.390.0>,nil,
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,2558,
true},
{btree,<0.390.0>,nil,
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,2558,
true},
{btree,<0.390.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
0,<<"_replicator">>,
"/opt/couchbase/var/lib/couchbase/data/_replicator.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[sys_db,
{user_ctx,
{user_ctx,null,
[<<"_admin">>,<<"_replicator">>],
undefined}}]}
** Reason for termination ==
** killed
[error_logger:info,2015-02-06T9:33:07.715,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cb_couch_sup}
started: [{pid,<0.146.0>},
{name,cb_auth_info},
{mfargs,{cb_auth_info,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.731,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,crypto_sup}
started: [{pid,<0.151.0>},
{name,crypto_server},
{mfargs,{crypto_server,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.731,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: crypto
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.741,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: asn1
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.746,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: public_key
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.758,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.158.0>},
{name,ftp_sup},
{mfargs,{ftp_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.787,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,httpc_profile_sup}
started: [{pid,<0.161.0>},
{name,httpc_manager},
{mfargs,
{httpc_manager,start_link,
[default,only_session_cookies,inets]}},
{restart_type,permanent},
{shutdown,4000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.787,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,httpc_sup}
started: [{pid,<0.160.0>},
{name,httpc_profile_sup},
{mfargs,
{httpc_profile_sup,start_link,
[[{httpc,{default,only_session_cookies}}]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.795,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,httpc_sup}
started: [{pid,<0.162.0>},
{name,httpc_handler_sup},
{mfargs,{httpc_handler_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.796,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.159.0>},
{name,httpc_sup},
{mfargs,
{httpc_sup,start_link,
[[{httpc,{default,only_session_cookies}}]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.807,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.163.0>},
{name,httpd_sup},
{mfargs,{httpd_sup,start_link,[[]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.827,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.164.0>},
{name,tftp_sup},
{mfargs,{tftp_sup,start_link,[[]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.827,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: inets
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.828,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: oauth
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.867,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ssl_sup}
started: [{pid,<0.170.0>},
{name,ssl_manager},
{mfargs,{ssl_manager,start_link,[[]]}},
{restart_type,permanent},
{shutdown,4000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.875,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ssl_sup}
started: [{pid,<0.171.0>},
{name,tls_connection},
{mfargs,{tls_connection_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,4000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.875,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: ssl
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.895,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,lhttpc_sup}
started: [{pid,<0.176.0>},
{name,lhttpc_manager},
{mfargs,
{lhttpc_manager,start_link,
[[{name,lhttpc_manager}]]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.895,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: lhttpc
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.899,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: xmerl
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.908,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: compiler
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.913,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: syntax_tools
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.913,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: mochiweb
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.916,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: couch_view_parser
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.919,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: couch_set_view
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.922,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: couch_index_merger
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.924,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: mapreduce
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:07.969,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_server_sup}
started: [{pid,<0.185.0>},
{name,couch_config},
{mfargs,
{couch_server_sup,couch_config_start_link_wrapper,
[["/opt/couchbase/etc/couchdb/default.ini",
"/opt/couchbase/etc/couchdb/default.d/capi.ini",
"/opt/couchbase/etc/couchdb/default.d/geocouch.ini",
"/opt/couchbase/etc/couchdb/local.ini"],
<0.185.0>]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.983,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.188.0>},
{name,collation_driver},
{mfargs,{couch_drv,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:07.983,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.189.0>},
{name,couch_task_events},
{mfargs,
{gen_event,start_link,[{local,couch_task_events}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.986,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.190.0>},
{name,couch_task_status},
{mfargs,{couch_task_status,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:07.995,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.191.0>},
{name,couch_file_write_guard},
{mfargs,{couch_file_write_guard,sup_start_link,[]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.005,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.192.0>},
{name,couch_server},
{mfargs,{couch_server,sup_start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.005,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.193.0>},
{name,couch_db_update_event},
{mfargs,
{gen_event,start_link,[{local,couch_db_update}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.006,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.194.0>},
{name,couch_replication_event},
{mfargs,
{gen_event,start_link,[{local,couch_replication}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.008,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.195.0>},
{name,couch_replication_supervisor},
{mfargs,{couch_rep_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.017,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.196.0>},
{name,couch_log},
{mfargs,{couch_log,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.027,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.197.0>},
{name,couch_main_index_barrier},
{mfargs,
{couch_index_barrier,start_link,
[couch_main_index_barrier,
"max_parallel_indexers"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.027,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.198.0>},
{name,couch_replica_index_barrier},
{mfargs,
{couch_index_barrier,start_link,
[couch_replica_index_barrier,
"max_parallel_replica_indexers"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.028,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.199.0>},
{name,couch_spatial_index_barrier},
{mfargs,
{couch_index_barrier,start_link,
[couch_spatial_index_barrier,
"max_parallel_spatial_indexers"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.028,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_server_sup}
started: [{pid,<0.187.0>},
{name,couch_primary_services},
{mfargs,{couch_primary_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.039,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.201.0>},
{name,couch_db_update_notifier_sup},
{mfargs,{couch_db_update_notifier_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.076,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.202.0>},
{name,spatial_view_manager_dev},
{mfargs,{couch_set_view,start_link,[dev,spatial_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.076,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.205.0>},
{name,index_merger_pool},
{mfargs,
{lhttpc_manager,start_link,
[[{connection_timeout,90000},
{pool_size,10000},
{name,couch_index_merger_connection_pool}]]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.080,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.206.0>},
{name,query_servers},
{mfargs,{couch_query_servers,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.087,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.208.0>},
{name,couch_set_view_ddoc_cache},
{mfargs,{couch_set_view_ddoc_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.091,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.210.0>},
{name,view_manager},
{mfargs,{couch_view,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.104,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.212.0>},
{name,uuids},
{mfargs,{couch_uuids,start,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.105,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.214.0>},
{name,spatial_view_manager},
{mfargs,
{couch_set_view,start_link,[prod,spatial_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.129,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.216.0>},
{name,httpd},
{mfargs,{couch_httpd,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.140,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.233.0>},
{name,set_view_manager_dev},
{mfargs,
{couch_set_view,start_link,[dev,mapreduce_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.177,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.235.0>},
{name,auth_cache},
{mfargs,{couch_auth_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.178,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.244.0>},
{name,set_view_manager},
{mfargs,
{couch_set_view,start_link,[prod,mapreduce_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.179,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_server_sup}
started: [{pid,<0.200.0>},
{name,couch_secondary_services},
{mfargs,{couch_secondary_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.179,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cb_couch_sup}
started: [{pid,<0.186.0>},
{name,couch_app},
{mfargs,
{couch_app,start,
[fake,
["/opt/couchbase/etc/couchdb/default.ini",
"/opt/couchbase/etc/couchdb/local.ini"]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.179,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.145.0>},
{name,cb_couch_sup},
{mfargs,{cb_couch_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.210,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.247.0>},
{name,timeout_diag_logger},
{mfargs,{timeout_diag_logger,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.220,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inet_gethost_native_sup}
started: [{pid,<0.250.0>},{mfa,{inet_gethost_native,init,[[]]}}]
[error_logger:info,2015-02-06T9:33:08.220,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.249.0>},
{name,inet_gethost_native_sup},
{mfargs,{inet_gethost_native,start_link,[]}},
{restart_type,temporary},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.259,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,net_sup}
started: [{pid,<0.252.0>},
{name,erl_epmd},
{mfargs,{erl_epmd,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.259,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,net_sup}
started: [{pid,<0.253.0>},
{name,auth},
{mfargs,{auth,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.260,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,net_sup}
started: [{pid,<0.254.0>},
{name,net_kernel},
{mfargs,
{net_kernel,start_link,
[['ns_1@127.0.0.1',longnames]]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.260,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_sup}
started: [{pid,<0.251.0>},
{name,net_sup_dynamic},
{mfargs,
{erl_distribution,start_link,
[['ns_1@127.0.0.1',longnames]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.271,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.248.0>},
{name,dist_manager},
{mfargs,{dist_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.273,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.257.0>},
{name,ns_cookie_manager},
{mfargs,{ns_cookie_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.283,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.258.0>},
{name,ns_cluster},
{mfargs,{ns_cluster,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.285,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.260.0>},
{name,ns_config_events},
{mfargs,
{gen_event,start_link,[{local,ns_config_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.285,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.261.0>},
{name,ns_config_events_local},
{mfargs,
{gen_event,start_link,
[{local,ns_config_events_local}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.355,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.262.0>},
{name,ns_config},
{mfargs,
{ns_config,start_link,
["/opt/couchbase/etc/couchbase/config",
ns_config_default]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.357,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.264.0>},
{name,ns_config_remote},
{mfargs,
{ns_config_replica,start_link,
[{local,ns_config_remote}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.359,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.265.0>},
{name,ns_config_log},
{mfargs,{ns_config_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.368,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.267.0>},
{name,cb_config_couch_sync},
{mfargs,{cb_config_couch_sync,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.368,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.259.0>},
{name,ns_config_sup},
{mfargs,{ns_config_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:08.370,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.269.0>},
{name,vbucket_filter_changes_registry},
{mfargs,
{ns_process_registry,start_link,
[vbucket_filter_changes_registry,
[{terminate_command,shutdown}]]}},
{restart_type,permanent},
{shutdown,100},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.397,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.271.0>},
{name,ns_disksup},
{mfa,{ns_disksup,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.398,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.272.0>},
{name,diag_handler_worker},
{mfa,{work_queue,start_link,[diag_handler_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.413,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.273.0>},
{name,dir_size},
{mfa,{dir_size,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.419,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.274.0>},
{name,request_throttler},
{mfa,{request_throttler,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.431,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.276.0>},
{name,timer2_server},
{mfargs,{timer2,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.436,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.275.0>},
{name,ns_log},
{mfa,{ns_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:08.436,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.277.0>},
{name,ns_crash_log_consumer},
{mfa,{ns_log,start_link_crash_consumer,[]}},
{restart_type,{permanent,4}},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.492,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.278.0>},
{name,ns_config_isasl_sync},
{mfa,{ns_config_isasl_sync,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.492,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.283.0>},
{name,ns_log_events},
{mfa,{gen_event,start_link,[{local,ns_log_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.499,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.285.0>},
{name,ns_node_disco_events},
{mfargs,
{gen_event,start_link,
[{local,ns_node_disco_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.519,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.286.0>},
{name,ns_node_disco},
{mfargs,{ns_node_disco,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.521,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.289.0>},
{name,ns_node_disco_log},
{mfargs,{ns_node_disco_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.522,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.290.0>},
{name,ns_node_disco_conf_events},
{mfargs,{ns_node_disco_conf_events,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.524,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.291.0>},
{name,ns_config_rep_merger},
{mfargs,{ns_config_rep,start_link_merger,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.532,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.292.0>},
{name,ns_config_rep},
{mfargs,{ns_config_rep,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.535,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.284.0>},
{name,ns_node_disco_sup},
{mfa,{ns_node_disco_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.540,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.298.0>},
{name,vbucket_map_mirror},
{mfa,{vbucket_map_mirror,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.544,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.300.0>},
{name,bucket_info_cache},
{mfa,{bucket_info_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.545,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.303.0>},
{name,ns_tick_event},
{mfa,{gen_event,start_link,[{local,ns_tick_event}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.545,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.304.0>},
{name,buckets_events},
{mfa,{gen_event,start_link,[{local,buckets_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.551,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_mail_sup}
started: [{pid,<0.306.0>},
{name,ns_mail_log},
{mfargs,{ns_mail_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.552,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.305.0>},
{name,ns_mail_sup},
{mfa,{ns_mail_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.552,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.307.0>},
{name,ns_stats_event},
{mfa,{gen_event,start_link,[{local,ns_stats_event}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.559,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.308.0>},
{name,samples_loader_tasks},
{mfa,{samples_loader_tasks,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.563,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_heart_sup}
started: [{pid,<0.310.0>},
{name,ns_heart},
{mfargs,{ns_heart,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.563,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_heart_sup}
started: [{pid,<0.313.0>},
{name,ns_heart_slow_updater},
{mfargs,{ns_heart,start_link_slow_updater,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.564,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.309.0>},
{name,ns_heart_sup},
{mfa,{ns_heart_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.575,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.314.0>},
{name,ns_doctor},
{mfa,{ns_doctor,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.625,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.319.0>},
{name,disk_log_sup},
{mfargs,{disk_log_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.625,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.320.0>},
{name,disk_log_server},
{mfargs,{disk_log_server,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.645,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.317.0>},
{name,remote_clusters_info},
{mfa,{remote_clusters_info,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.646,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.323.0>},
{name,master_activity_events},
{mfa,
{gen_event,start_link,
[{local,master_activity_events}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.666,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.332.0>},
{name,ns_orchestrator},
{mfargs,{ns_orchestrator,start_link,[]}},
{restart_type,permanent},
{shutdown,20},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.673,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.334.0>},
{name,ns_tick},
{mfargs,{ns_tick,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.687,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.338.0>},
{name,auto_failover},
{mfargs,{auto_failover,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.688,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.325.0>},
{name,mb_master},
{mfa,{mb_master,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.688,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.339.0>},
{name,master_activity_events_ingress},
{mfa,
{gen_event,start_link,
[{local,master_activity_events_ingress}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.688,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.340.0>},
{name,master_activity_events_timestamper},
{mfa,
{master_activity_events,start_link_timestamper,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.690,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.341.0>},
{name,master_activity_events_pids_watcher},
{mfa,
{master_activity_events_pids_watcher,start_link,
[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.708,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.342.0>},
{name,master_activity_events_keeper},
{mfa,{master_activity_events_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.754,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_ssl_services_sup}
started: [{pid,<0.346.0>},
{name,ns_ssl_services_setup},
{mfargs,{ns_ssl_services_setup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.772,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.345.0>},
{name,ns_ssl_services_sup},
{mfargs,{ns_ssl_services_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.774,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.348.0>},
{name,menelaus_ui_auth},
{mfargs,{menelaus_ui_auth,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.779,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.349.0>},
{name,menelaus_web_cache},
{mfargs,{menelaus_web_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.781,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.350.0>},
{name,menelaus_stats_gatherer},
{mfargs,{menelaus_stats_gatherer,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.791,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.351.0>},
{name,menelaus_web},
{mfargs,{menelaus_web,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.793,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.368.0>},
{name,menelaus_event},
{mfargs,{menelaus_event,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.796,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.799,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.370.0>},
{name,menelaus_web_alerts_srv},
{mfargs,{menelaus_web_alerts_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.800,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.344.0>},
{name,menelaus},
{mfa,{menelaus_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.818,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.372.0>},
{name,mc_conn_sup},
{mfargs,{mc_conn_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.826,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.373.0>},
{name,mc_tcp_listener},
{mfargs,{mc_tcp_listener,start_link,[11213]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.827,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.371.0>},
{name,mc_sup},
{mfa,{mc_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.828,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.374.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.829,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.375.0>},
{name,ns_port_memcached_killer},
{mfa,{ns_ports_setup,start_memcached_force_killer,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.832,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.377.0>},
{name,ns_memcached_log_rotator},
{mfa,{ns_memcached_log_rotator,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.850,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.379.0>},
{name,memcached_clients_pool},
{mfa,{memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.863,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.381.0>},
{name,proxied_memcached_clients_pool},
{mfa,{proxied_memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.864,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.382.0>},
{name,xdc_lhttpc_pool},
{mfa,
{lhttpc_manager,start_link,
[[{name,xdc_lhttpc_pool},
{connection_timeout,120000},
{pool_size,200}]]}},
{restart_type,{permanent,1}},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.875,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.383.0>},
{name,ns_null_connection_pool},
{mfa,
{ns_null_connection_pool,start_link,
[ns_null_connection_pool]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.883,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.385.0>},
{name,xdc_stats_holder},
{mfargs,
{proc_lib,start_link,
[xdcr_sup,link_stats_holder_body,[]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.884,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.386.0>},
{name,xdc_replication_sup},
{mfargs,{xdc_replication_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.900,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.387.0>},
{name,xdc_rep_manager},
{mfargs,{xdc_rep_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,30000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.901,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.384.0>},
{name,xdcr_sup},
{mfa,{xdcr_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.902,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.395.0>},
{name,ns_memcached_sockets_pool},
{mfa,{ns_memcached_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.925,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.397.0>},
{name,xdcr_dcp_sockets_pool},
{mfa,{xdcr_dcp_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.926,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.399.0>},
{name,ns_bucket_worker},
{mfargs,{work_queue,start_link,[ns_bucket_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.943,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.401.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.944,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.400.0>},
{name,ns_bucket_sup},
{mfargs,{ns_bucket_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.945,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.398.0>},
{name,ns_bucket_worker_sup},
{mfa,{ns_bucket_worker_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:09.945,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.402.0>},
{name,system_stats_collector},
{mfa,{system_stats_collector,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:09.959,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.406.0>},
{name,{per_bucket_sup,"default"}},
{mfargs,{single_bucket_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:10.004,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.408.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.020,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.413.0>},
{name,{stats_reader,"@system"}},
{mfa,{stats_reader,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.444,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.418.0>},
{name,compaction_daemon},
{mfa,{compaction_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.446,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.410.0>},
{name,{capi_set_view_manager,"default"}},
{mfargs,{capi_set_view_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.447,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.424.0>},
{name,{ns_memcached,"default"}},
{mfargs,{ns_memcached,start_link,["default"]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.457,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.428.0>},
{name,{ns_vbm_sup,"default"}},
{mfargs,{ns_vbm_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:10.496,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.430.0>},
{name,{dcp_sup,"default"}},
{mfargs,{dcp_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:10.496,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.431.0>},
{name,{replication_manager,"default"}},
{mfargs,{replication_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.513,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.426.0>},
{name,compaction_new_daemon},
{mfa,{compaction_new_daemon,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.532,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.438.0>},
{name,xdc_rdoc_replication_srv},
{mfa,{xdc_rdoc_replication_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.534,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.440.0>},
{name,{dcp_notifier,"default"}},
{mfargs,{dcp_notifier,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.539,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.441.0>},
{name,set_view_update_daemon},
{mfa,{set_view_update_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.541,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.444.0>},
{name,rebalance_subprocesses_registry},
{mfargs,
{ns_process_registry,start_link,
['rebalance_subprocesses_registry-default',
[{terminate_command,kill}]]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.547,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.445.0>},
{name,janitor_agent},
{mfargs,{janitor_agent,start_link,["default"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.547,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.443.0>},
{name,{janitor_agent_sup,"default"}},
{mfargs,{janitor_agent_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.555,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cluster_logs_sup}
started: [{pid,<0.447.0>},
{name,ets_holder},
{mfargs,
{cluster_logs_collection_task,
start_link_ets_holder,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.556,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.446.0>},
{name,cluster_logs_sup},
{mfa,{cluster_logs_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:10.556,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.270.0>},
{name,ns_server_sup},
{mfargs,{ns_server_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:10.556,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: ns_server
started_at: 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:33:10.562,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.448.0>},
{name,{couch_stats_reader,"default"}},
{mfargs,{couch_stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.604,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.449.0>},
{name,{stats_collector,"default"}},
{mfargs,{stats_collector,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.611,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.452.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.623,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.454.0>},
{name,{stats_reader,"default"}},
{mfargs,{stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:10.767,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.455.0>},
{name,{failover_safeness_level,"default"}},
{mfargs,
{failover_safeness_level,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:11.487,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.458.0>},
{name,{terse_bucket_info_uploader,"default"}},
{mfargs,
{terse_bucket_info_uploader,start_link,
["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.018,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cb_couch_sup}
started: [{pid,<0.146.0>},
{name,cb_auth_info},
{mfargs,{cb_auth_info,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.044,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,crypto_sup}
started: [{pid,<0.151.0>},
{name,crypto_server},
{mfargs,{crypto_server,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.044,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: crypto
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.096,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: asn1
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.110,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: public_key
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.139,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.158.0>},
{name,ftp_sup},
{mfargs,{ftp_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.194,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,httpc_profile_sup}
started: [{pid,<0.161.0>},
{name,httpc_manager},
{mfargs,
{httpc_manager,start_link,
[default,only_session_cookies,inets]}},
{restart_type,permanent},
{shutdown,4000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.194,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,httpc_sup}
started: [{pid,<0.160.0>},
{name,httpc_profile_sup},
{mfargs,
{httpc_profile_sup,start_link,
[[{httpc,{default,only_session_cookies}}]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.199,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,httpc_sup}
started: [{pid,<0.162.0>},
{name,httpc_handler_sup},
{mfargs,{httpc_handler_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.200,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.159.0>},
{name,httpc_sup},
{mfargs,
{httpc_sup,start_link,
[[{httpc,{default,only_session_cookies}}]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.206,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.163.0>},
{name,httpd_sup},
{mfargs,{httpd_sup,start_link,[[]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.212,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inets_sup}
started: [{pid,<0.164.0>},
{name,tftp_sup},
{mfargs,{tftp_sup,start_link,[[]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.212,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: inets
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.212,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: oauth
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.278,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ssl_sup}
started: [{pid,<0.170.0>},
{name,ssl_manager},
{mfargs,{ssl_manager,start_link,[[]]}},
{restart_type,permanent},
{shutdown,4000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.282,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ssl_sup}
started: [{pid,<0.171.0>},
{name,tls_connection},
{mfargs,{tls_connection_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,4000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.282,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: ssl
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.308,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,lhttpc_sup}
started: [{pid,<0.176.0>},
{name,lhttpc_manager},
{mfargs,
{lhttpc_manager,start_link,
[[{name,lhttpc_manager}]]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.308,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: lhttpc
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.322,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: xmerl
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.343,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: compiler
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.358,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: syntax_tools
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.358,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: mochiweb
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.362,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: couch_view_parser
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.387,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: couch_set_view
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.399,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: couch_index_merger
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.412,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: mapreduce
started_at: nonode@nohost
[error_logger:info,2015-02-06T9:33:21.541,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_server_sup}
started: [{pid,<0.185.0>},
{name,couch_config},
{mfargs,
{couch_server_sup,couch_config_start_link_wrapper,
[["/opt/couchbase/etc/couchdb/default.ini",
"/opt/couchbase/etc/couchdb/default.d/capi.ini",
"/opt/couchbase/etc/couchdb/default.d/geocouch.ini",
"/opt/couchbase/etc/couchdb/local.ini"],
<0.185.0>]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.676,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.188.0>},
{name,collation_driver},
{mfargs,{couch_drv,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.676,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.189.0>},
{name,couch_task_events},
{mfargs,
{gen_event,start_link,[{local,couch_task_events}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.681,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.190.0>},
{name,couch_task_status},
{mfargs,{couch_task_status,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.688,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.191.0>},
{name,couch_file_write_guard},
{mfargs,{couch_file_write_guard,sup_start_link,[]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.720,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.192.0>},
{name,couch_server},
{mfargs,{couch_server,sup_start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.720,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.193.0>},
{name,couch_db_update_event},
{mfargs,
{gen_event,start_link,[{local,couch_db_update}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.720,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.194.0>},
{name,couch_replication_event},
{mfargs,
{gen_event,start_link,[{local,couch_replication}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.729,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.195.0>},
{name,couch_replication_supervisor},
{mfargs,{couch_rep_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.734,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.196.0>},
{name,couch_log},
{mfargs,{couch_log,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.741,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.197.0>},
{name,couch_main_index_barrier},
{mfargs,
{couch_index_barrier,start_link,
[couch_main_index_barrier,
"max_parallel_indexers"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.742,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.198.0>},
{name,couch_replica_index_barrier},
{mfargs,
{couch_index_barrier,start_link,
[couch_replica_index_barrier,
"max_parallel_replica_indexers"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.742,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_primary_services}
started: [{pid,<0.199.0>},
{name,couch_spatial_index_barrier},
{mfargs,
{couch_index_barrier,start_link,
[couch_spatial_index_barrier,
"max_parallel_spatial_indexers"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.743,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_server_sup}
started: [{pid,<0.187.0>},
{name,couch_primary_services},
{mfargs,{couch_primary_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.763,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.201.0>},
{name,couch_db_update_notifier_sup},
{mfargs,{couch_db_update_notifier_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:21.926,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.202.0>},
{name,spatial_view_manager_dev},
{mfargs,{couch_set_view,start_link,[dev,spatial_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.926,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.205.0>},
{name,index_merger_pool},
{mfargs,
{lhttpc_manager,start_link,
[[{connection_timeout,90000},
{pool_size,10000},
{name,couch_index_merger_connection_pool}]]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.937,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.206.0>},
{name,query_servers},
{mfargs,{couch_query_servers,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.951,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.208.0>},
{name,couch_set_view_ddoc_cache},
{mfargs,{couch_set_view_ddoc_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:21.968,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.210.0>},
{name,view_manager},
{mfargs,{couch_view,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.027,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.212.0>},
{name,uuids},
{mfargs,{couch_uuids,start,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.028,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.214.0>},
{name,spatial_view_manager},
{mfargs,
{couch_set_view,start_link,[prod,spatial_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.164,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.216.0>},
{name,httpd},
{mfargs,{couch_httpd,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.176,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.233.0>},
{name,set_view_manager_dev},
{mfargs,
{couch_set_view,start_link,[dev,mapreduce_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.300,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.235.0>},
{name,auth_cache},
{mfargs,{couch_auth_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.301,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_secondary_services}
started: [{pid,<0.244.0>},
{name,set_view_manager},
{mfargs,
{couch_set_view,start_link,[prod,mapreduce_view]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.301,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,couch_server_sup}
started: [{pid,<0.200.0>},
{name,couch_secondary_services},
{mfargs,{couch_secondary_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:22.302,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cb_couch_sup}
started: [{pid,<0.186.0>},
{name,couch_app},
{mfargs,
{couch_app,start,
[fake,
["/opt/couchbase/etc/couchdb/default.ini",
"/opt/couchbase/etc/couchdb/local.ini"]]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:22.303,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.145.0>},
{name,cb_couch_sup},
{mfargs,{cb_couch_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:22.348,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.247.0>},
{name,timeout_diag_logger},
{mfargs,{timeout_diag_logger,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.357,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,inet_gethost_native_sup}
started: [{pid,<0.250.0>},{mfa,{inet_gethost_native,init,[[]]}}]
[error_logger:info,2015-02-06T9:33:22.357,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.249.0>},
{name,inet_gethost_native_sup},
{mfargs,{inet_gethost_native,start_link,[]}},
{restart_type,temporary},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.411,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,net_sup}
started: [{pid,<0.252.0>},
{name,erl_epmd},
{mfargs,{erl_epmd,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.411,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,net_sup}
started: [{pid,<0.253.0>},
{name,auth},
{mfargs,{auth,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.414,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,net_sup}
started: [{pid,<0.254.0>},
{name,net_kernel},
{mfargs,
{net_kernel,start_link,
[['ns_1@127.0.0.1',longnames]]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.415,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_sup}
started: [{pid,<0.251.0>},
{name,net_sup_dynamic},
{mfargs,
{erl_distribution,start_link,
[['ns_1@127.0.0.1',longnames]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:22.422,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.248.0>},
{name,dist_manager},
{mfargs,{dist_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.425,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.257.0>},
{name,ns_cookie_manager},
{mfargs,{ns_cookie_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.438,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.258.0>},
{name,ns_cluster},
{mfargs,{ns_cluster,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.440,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.260.0>},
{name,ns_config_events},
{mfargs,
{gen_event,start_link,[{local,ns_config_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.441,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.261.0>},
{name,ns_config_events_local},
{mfargs,
{gen_event,start_link,
[{local,ns_config_events_local}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.566,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.262.0>},
{name,ns_config},
{mfargs,
{ns_config,start_link,
["/opt/couchbase/etc/couchbase/config",
ns_config_default]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.572,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.264.0>},
{name,ns_config_remote},
{mfargs,
{ns_config_replica,start_link,
[{local,ns_config_remote}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.577,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.265.0>},
{name,ns_config_log},
{mfargs,{ns_config_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.584,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_config_sup}
started: [{pid,<0.267.0>},
{name,cb_config_couch_sync},
{mfargs,{cb_config_couch_sync,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.585,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.259.0>},
{name,ns_config_sup},
{mfargs,{ns_config_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:33:22.587,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.269.0>},
{name,vbucket_filter_changes_registry},
{mfargs,
{ns_process_registry,start_link,
[vbucket_filter_changes_registry,
[{terminate_command,shutdown}]]}},
{restart_type,permanent},
{shutdown,100},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.606,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.271.0>},
{name,ns_disksup},
{mfa,{ns_disksup,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.609,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.272.0>},
{name,diag_handler_worker},
{mfa,{work_queue,start_link,[diag_handler_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.624,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.273.0>},
{name,dir_size},
{mfa,{dir_size,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.629,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.274.0>},
{name,request_throttler},
{mfa,{request_throttler,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.642,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.276.0>},
{name,timer2_server},
{mfargs,{timer2,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.659,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.275.0>},
{name,ns_log},
{mfa,{ns_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:33:22.659,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.277.0>},
{name,ns_crash_log_consumer},
{mfa,{ns_log,start_link_crash_consumer,[]}},
{restart_type,{permanent,4}},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.733,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.278.0>},
{name,ns_config_isasl_sync},
{mfa,{ns_config_isasl_sync,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.733,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.283.0>},
{name,ns_log_events},
{mfa,{gen_event,start_link,[{local,ns_log_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.756,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.285.0>},
{name,ns_node_disco_events},
{mfargs,
{gen_event,start_link,
[{local,ns_node_disco_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.790,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.286.0>},
{name,ns_node_disco},
{mfargs,{ns_node_disco,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.815,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.289.0>},
{name,ns_node_disco_log},
{mfargs,{ns_node_disco_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.822,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.290.0>},
{name,ns_node_disco_conf_events},
{mfargs,{ns_node_disco_conf_events,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.825,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.291.0>},
{name,ns_config_rep_merger},
{mfargs,{ns_config_rep,start_link_merger,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.850,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.292.0>},
{name,ns_config_rep},
{mfargs,{ns_config_rep,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.852,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.284.0>},
{name,ns_node_disco_sup},
{mfa,{ns_node_disco_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:23.879,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.298.0>},
{name,vbucket_map_mirror},
{mfa,{vbucket_map_mirror,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.887,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.300.0>},
{name,bucket_info_cache},
{mfa,{bucket_info_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.887,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.303.0>},
{name,ns_tick_event},
{mfa,{gen_event,start_link,[{local,ns_tick_event}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.888,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.304.0>},
{name,buckets_events},
{mfa,{gen_event,start_link,[{local,buckets_events}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.922,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_mail_sup}
started: [{pid,<0.306.0>},
{name,ns_mail_log},
{mfargs,{ns_mail_log,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.922,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.305.0>},
{name,ns_mail_sup},
{mfa,{ns_mail_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:23.922,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.307.0>},
{name,ns_stats_event},
{mfa,{gen_event,start_link,[{local,ns_stats_event}]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.926,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.308.0>},
{name,samples_loader_tasks},
{mfa,{samples_loader_tasks,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.932,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_heart_sup}
started: [{pid,<0.310.0>},
{name,ns_heart},
{mfargs,{ns_heart,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.932,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_heart_sup}
started: [{pid,<0.313.0>},
{name,ns_heart_slow_updater},
{mfargs,{ns_heart,start_link_slow_updater,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:23.933,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.309.0>},
{name,ns_heart_sup},
{mfa,{ns_heart_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:23.959,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.314.0>},
{name,ns_doctor},
{mfa,{ns_doctor,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.034,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.319.0>},
{name,disk_log_sup},
{mfargs,{disk_log_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.035,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.320.0>},
{name,disk_log_server},
{mfargs,{disk_log_server,start_link,[]}},
{restart_type,permanent},
{shutdown,2000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.060,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.317.0>},
{name,remote_clusters_info},
{mfa,{remote_clusters_info,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.060,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.323.0>},
{name,master_activity_events},
{mfa,
{gen_event,start_link,
[{local,master_activity_events}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.107,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.332.0>},
{name,ns_orchestrator},
{mfargs,{ns_orchestrator,start_link,[]}},
{restart_type,permanent},
{shutdown,20},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.117,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.334.0>},
{name,ns_tick},
{mfargs,{ns_tick,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.126,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.335.0>},
{name,auto_failover},
{mfargs,{auto_failover,start_link,[]}},
{restart_type,permanent},
{shutdown,10},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.127,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.329.0>},
{name,mb_master},
{mfa,{mb_master,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.127,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.336.0>},
{name,master_activity_events_ingress},
{mfa,
{gen_event,start_link,
[{local,master_activity_events_ingress}]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.127,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.337.0>},
{name,master_activity_events_timestamper},
{mfa,
{master_activity_events,start_link_timestamper,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.161,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.341.0>},
{name,master_activity_events_pids_watcher},
{mfa,
{master_activity_events_pids_watcher,start_link,
[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.184,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.342.0>},
{name,master_activity_events_keeper},
{mfa,{master_activity_events_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.247,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_ssl_services_sup}
started: [{pid,<0.346.0>},
{name,ns_ssl_services_setup},
{mfargs,{ns_ssl_services_setup,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.281,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.345.0>},
{name,ns_ssl_services_sup},
{mfargs,{ns_ssl_services_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.288,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.348.0>},
{name,menelaus_ui_auth},
{mfargs,{menelaus_ui_auth,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.291,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.349.0>},
{name,menelaus_web_cache},
{mfargs,{menelaus_web_cache,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.294,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.350.0>},
{name,menelaus_stats_gatherer},
{mfargs,{menelaus_stats_gatherer,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.305,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.351.0>},
{name,menelaus_web},
{mfargs,{menelaus_web,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.308,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.368.0>},
{name,menelaus_event},
{mfargs,{menelaus_event,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.314,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.333,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.370.0>},
{name,menelaus_web_alerts_srv},
{mfargs,{menelaus_web_alerts_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.334,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.344.0>},
{name,menelaus},
{mfa,{menelaus_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.343,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.372.0>},
{name,mc_conn_sup},
{mfargs,{mc_conn_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.348,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mc_sup}
started: [{pid,<0.373.0>},
{name,mc_tcp_listener},
{mfargs,{mc_tcp_listener,start_link,[11213]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.349,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.371.0>},
{name,mc_sup},
{mfa,{mc_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.351,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.374.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.351,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.375.0>},
{name,ns_port_memcached_killer},
{mfa,{ns_ports_setup,start_memcached_force_killer,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.355,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.377.0>},
{name,ns_memcached_log_rotator},
{mfa,{ns_memcached_log_rotator,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.381,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.380.0>},
{name,memcached_clients_pool},
{mfa,{memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.443,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.382.0>},
{name,proxied_memcached_clients_pool},
{mfa,{proxied_memcached_clients_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.444,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.383.0>},
{name,xdc_lhttpc_pool},
{mfa,
{lhttpc_manager,start_link,
[[{name,xdc_lhttpc_pool},
{connection_timeout,120000},
{pool_size,200}]]}},
{restart_type,{permanent,1}},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.451,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.384.0>},
{name,ns_null_connection_pool},
{mfa,
{ns_null_connection_pool,start_link,
[ns_null_connection_pool]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.456,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.386.0>},
{name,xdc_stats_holder},
{mfargs,
{proc_lib,start_link,
[xdcr_sup,link_stats_holder_body,[]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.457,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.387.0>},
{name,xdc_replication_sup},
{mfargs,{xdc_replication_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.491,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,xdcr_sup}
started: [{pid,<0.388.0>},
{name,xdc_rep_manager},
{mfargs,{xdc_rep_manager,start_link,[]}},
{restart_type,permanent},
{shutdown,30000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.492,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.385.0>},
{name,xdcr_sup},
{mfa,{xdcr_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.512,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.396.0>},
{name,ns_memcached_sockets_pool},
{mfa,{ns_memcached_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.575,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.398.0>},
{name,xdcr_dcp_sockets_pool},
{mfa,{xdcr_dcp_sockets_pool,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.578,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.400.0>},
{name,ns_bucket_worker},
{mfargs,{work_queue,start_link,[ns_bucket_worker]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.581,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.402.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.587,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.401.0>},
{name,ns_bucket_sup},
{mfargs,{ns_bucket_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.588,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.399.0>},
{name,ns_bucket_worker_sup},
{mfa,{ns_bucket_worker_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.588,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.403.0>},
{name,system_stats_collector},
{mfa,{system_stats_collector,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.593,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.407.0>},
{name,{per_bucket_sup,"default"}},
{mfargs,{single_bucket_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:24.601,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.409.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.601,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.411.0>},
{name,{stats_reader,"@system"}},
{mfa,{stats_reader,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.634,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.414.0>},
{name,compaction_daemon},
{mfa,{compaction_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.905,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.412.0>},
{name,{capi_set_view_manager,"default"}},
{mfargs,{capi_set_view_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.907,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.427.0>},
{name,{ns_memcached,"default"}},
{mfargs,{ns_memcached,start_link,["default"]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:24.992,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.416.0>},
{name,compaction_new_daemon},
{mfa,{compaction_new_daemon,start_link,[]}},
{restart_type,{permanent,4}},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.106,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.435.0>},
{name,{ns_vbm_sup,"default"}},
{mfargs,{ns_vbm_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:25.163,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.436.0>},
{name,xdc_rdoc_replication_srv},
{mfa,{xdc_rdoc_replication_srv,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.171,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.439.0>},
{name,{dcp_sup,"default"}},
{mfargs,{dcp_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:25.173,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.440.0>},
{name,{replication_manager,"default"}},
{mfargs,{replication_manager,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.182,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.441.0>},
{name,set_view_update_daemon},
{mfa,{set_view_update_daemon,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.191,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,cluster_logs_sup}
started: [{pid,<0.444.0>},
{name,ets_holder},
{mfargs,
{cluster_logs_collection_task,
start_link_ets_holder,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.191,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.443.0>},
{name,cluster_logs_sup},
{mfa,{cluster_logs_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:25.192,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.270.0>},
{name,ns_server_sup},
{mfargs,{ns_server_sup,start_link,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2015-02-06T9:35:25.192,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
application: ns_server
started_at: 'ns_1@127.0.0.1'
[error_logger:info,2015-02-06T9:35:25.195,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.445.0>},
{name,{dcp_notifier,"default"}},
{mfargs,{dcp_notifier,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.209,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.447.0>},
{name,rebalance_subprocesses_registry},
{mfargs,
{ns_process_registry,start_link,
['rebalance_subprocesses_registry-default',
[{terminate_command,kill}]]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.210,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'janitor_agent_sup-default'}
started: [{pid,<0.448.0>},
{name,janitor_agent},
{mfargs,{janitor_agent,start_link,["default"]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.210,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.446.0>},
{name,{janitor_agent_sup,"default"}},
{mfargs,{janitor_agent_sup,start_link,["default"]}},
{restart_type,permanent},
{shutdown,10000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.237,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.449.0>},
{name,{couch_stats_reader,"default"}},
{mfargs,{couch_stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.276,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.450.0>},
{name,{stats_collector,"default"}},
{mfargs,{stats_collector,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.277,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.453.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.277,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.455.0>},
{name,{stats_reader,"default"}},
{mfargs,{stats_reader,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.277,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.456.0>},
{name,{failover_safeness_level,"default"}},
{mfargs,
{failover_safeness_level,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:25.704,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,'single_bucket_sup-default'}
started: [{pid,<0.457.0>},
{name,{terse_bucket_info_uploader,"default"}},
{mfargs,
{terse_bucket_info_uploader,start_link,
["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:38.329,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Initiated server shutdown
[error_logger:error,2015-02-06T9:35:39.707,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: ns_ports_setup:setup_body_tramp/0
pid: <0.374.0>
registered_name: ns_ports_setup
exception error: no match of right hand side value
{is_pid,false,
{badrpc,
{'EXIT',
{shutdown,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}
in function ns_ports_setup:set_childs_and_loop/1 (src/ns_ports_setup.erl, line 59)
in call from misc:delaying_crash/2 (src/misc.erl, line 1507)
ancestors: [ns_server_sup,ns_server_cluster_sup,<0.59.0>]
messages: []
links: [<0.270.0>,<0.376.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 75113
stack_size: 27
reductions: 12412
neighbours:
[error_logger:error,2015-02-06T9:35:39.708,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_server_sup}
Context: shutdown_error
Reason: killed
Offender: [{pid,<0.409.0>},
{name,{stats_archiver,"@system"}},
{mfa,{stats_archiver,start_link,["@system"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:35:40.346,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,'single_bucket_sup-default'}
Context: shutdown_error
Reason: killed
Offender: [{pid,<0.453.0>},
{name,{stats_archiver,"default"}},
{mfargs,{stats_archiver,start_link,["default"]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:35:40.347,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,menelaus_sup}
Context: child_terminated
Reason: {shutdown,
{gen_server,call,
['ns_memcached-default',topkeys,180000]}}
Offender: [{pid,<0.369.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:info,2015-02-06T9:35:40.348,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.506.0>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:35:40.348,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_bucket_sup}
Context: shutdown_error
Reason: normal
Offender: [{pid,<0.402.0>},
{name,buckets_observing_subscription},
{mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[error_logger:error,2015-02-06T9:35:40.349,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_server_sup}
Context: shutdown_error
Reason: {badmatch,
{is_pid,false,
{badrpc,
{'EXIT',
{shutdown,
{gen_server,call,
[ns_child_ports_sup,which_children,infinity]}}}}}}
Offender: [{pid,<0.374.0>},
{name,ns_ports_setup},
{mfa,{ns_ports_setup,start,[]}},
{restart_type,{permanent,4}},
{shutdown,brutal_kill},
{child_type,worker}]
[error_logger:error,2015-02-06T9:35:40.353,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: gen_event:init_it/6
pid: <0.301.0>
registered_name: bucket_info_cache_invalidations
exception exit: killed
in function gen_event:terminate_server/4 (gen_event.erl, line 320)
ancestors: [bucket_info_cache,ns_server_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 159
neighbours:
[error_logger:error,2015-02-06T9:35:40.364,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.238.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.238.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.236.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1128
neighbours:
[error_logger:error,2015-02-06T9:35:40.365,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.420.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.420.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.418.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1140
neighbours:
[error_logger:error,2015-02-06T9:35:40.367,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_file:spawn_writer/2
pid: <0.391.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[couch_file_write_guard,
{remove,<0.391.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from couch_file:writer_loop/4 (/home/buildbot/buildbot_slave/ubuntu-1204-x64-301-builder/build/build/couchdb/src/couchdb/couch_file.erl, line 693)
ancestors: [<0.389.0>,couch_server,couch_primary_services,
couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1139
neighbours:
[error_logger:error,2015-02-06T9:35:40.367,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.239.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.239.0>,<0.240.0>,nil,<<"1423244002239760">>,
<0.236.0>,<0.241.0>,
{db_header,11,1,
<<0,0,0,0,13,103,0,0,0,0,0,51,0,0,0,0,1,0,0,0,
0,0,0,0,0,0,13,69>>,
<<0,0,0,0,13,154,0,0,0,0,0,49,0,0,0,0,1>>,
nil,0,nil,nil},
1,
{btree,<0.236.0>,
{3431,
<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,13,69>>,
51},
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,2558,
true},
{btree,<0.236.0>,
{3482,<<0,0,0,0,1>>,49},
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,2558,
true},
{btree,<0.236.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
1,<<"_users">>,
"/opt/couchbase/var/lib/couchbase/data/_users.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[{user_ctx,
{user_ctx,null,[<<"_admin">>],undefined}},
sys_db]}
** Reason for termination ==
** killed
[error_logger:error,2015-02-06T9:35:40.368,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: couch_db:init/1
pid: <0.239.0>
registered_name: []
exception exit: killed
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [couch_server,couch_primary_services,couch_server_sup,
cb_couch_sup,ns_server_cluster_sup,<0.59.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 230
neighbours:
[error_logger:error,2015-02-06T9:35:40.369,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.421.0> terminating
** Last message in was {'EXIT',<0.192.0>,killed}
** When Server state == {db,<0.421.0>,<0.422.0>,nil,<<"1423244124901309">>,
<0.418.0>,<0.423.0>,
{db_header,11,0,nil,nil,nil,0,nil,nil},
0,
{btree,<0.418.0>,nil,
#Fun<couch_db_updater.7.78420415>,
#Fun<couch_db_updater.8.78420415>,
#Fun<couch_btree.1.39972947>,
#Fun<couch_db_updater.9.78420415>,1279,
2558,true},
{btree,<0.418.0>,nil,
#Fun<couch_db_updater.10.78420415>,
#Fun<couch_db_updater.11.78420415>,
#Fun<couch_db_updater.6.78420415>,
#Fun<couch_db_updater.12.78420415>,1279,
2558,true},
{btree,<0.418.0>,nil,identity,identity,
#Fun<couch_btree.1.39972947>,nil,1279,2558,
true},
0,<<"default/master">>,
"/opt/couchbase/var/lib/couchbase/data/default/master.couch.1",
[],nil,
{user_ctx,null,[],undefined},
nil,
[before_header,after_header,on_file_open],
[]}
** Reason for termination ==
** killed