@arunapiravi
Created April 28, 2016 21:59
./testrunner -i INI_FILE.ini skip-cleanup=True,get-cbcollect-info=True,get-logs=False,stop-on-failure=False,GROUP=P1 -t fts.moving_topology_fts.MovingTopFTS.rebalance_out_during_index_building,items=30000,cluster=D,F,F,index_replicas=1,standard_buckets=2,sasl_buckets=2,GROUP=P1
Test Input params:
{'logs_folder': '/Users/apiravi/Documents/testrunner/logs/testrunner-16-Apr-28_14-38-03/test_1', 'GROUP': 'P1', 'skip-cleanup': 'True', 'items': '30000', 'standard_buckets': '2', 'num_nodes': 4, 'cluster_name': 'INI_FILE', 'cluster': 'D,F,F', 'stop-on-failure': 'False', 'ini': 'INI_FILE.ini', 'case_number': 1, 'get-logs': 'False', 'sasl_buckets': '2', 'get-cbcollect-info': 'True', 'index_replicas': '1'}
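The `Test Input params` dict above comes from the comma-separated `key=value` list that follows the test path in the `-t` argument. A minimal sketch of that parsing (this is an illustrative helper, not testrunner's actual parser; note how a value that itself contains commas, like `cluster=D,F,F`, has to be re-joined):

```python
def parse_test_params(arg):
    """Split a testrunner-style 'k1=v1,k2=v2,...' string into a dict.

    Tokens without '=' are treated as continuations of the previous
    value, which is how 'cluster=D,F,F' survives the comma split.
    """
    params = {}
    key = None
    for token in arg.split(","):
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = value
        elif key is not None:
            # continuation of the previous value, e.g. cluster=D,F,F
            params[key] += "," + token
    return params

params = parse_test_params(
    "items=30000,cluster=D,F,F,index_replicas=1,"
    "standard_buckets=2,sasl_buckets=2,GROUP=P1"
)
print(params["cluster"])  # D,F,F
print(params["items"])    # 30000
```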
Run before suite setup for fts.moving_topology_fts.MovingTopFTS.rebalance_out_during_index_building
2016-04-28 14:38:03 | WARNING | MainProcess | MainThread | [data_helper.<module>] Can not import concurrent module. Data for each server will be loaded/retrieved sequentially
rebalance_out_during_index_building (fts.moving_topology_fts.MovingTopFTS) ... 2016-04-28 14:38:03 | INFO | MainProcess | test_thread | [fts_base.setUp] ==== FTSbasetests setup is started for test #1 rebalance_out_during_index_building ====
2016-04-28 14:38:03 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] removing nodes from cluster ...
2016-04-28 14:38:04 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] cleanup [ip:172.23.106.139 port:8091 ssh_username:root]
2016-04-28 14:38:04 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets [u'default', u'sasl_bucket_1', u'sasl_bucket_2', u'standard_bucket_1', u'standard_bucket_2'] on 172.23.106.139
2016-04-28 14:38:04 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket default ...
2016-04-28 14:38:07 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 172.23.106.139
2016-04-28 14:38:07 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2016-04-28 14:38:07 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.106.139 existing buckets : [u'sasl_bucket_1', u'sasl_bucket_2', u'standard_bucket_1', u'standard_bucket_2']
2016-04-28 14:38:07 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket sasl_bucket_1 ...
2016-04-28 14:38:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : sasl_bucket_1 from 172.23.106.139
2016-04-28 14:38:11 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2016-04-28 14:38:11 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.106.139 existing buckets : [u'sasl_bucket_2', u'standard_bucket_1', u'standard_bucket_2']
2016-04-28 14:38:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket sasl_bucket_2 ...
2016-04-28 14:38:13 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : sasl_bucket_2 from 172.23.106.139
2016-04-28 14:38:13 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2016-04-28 14:38:13 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.106.139 existing buckets : [u'standard_bucket_1', u'standard_bucket_2']
2016-04-28 14:38:13 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket standard_bucket_1 ...
2016-04-28 14:38:16 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : standard_bucket_1 from 172.23.106.139
2016-04-28 14:38:16 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2016-04-28 14:38:17 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.106.139 existing buckets : [u'standard_bucket_2']
2016-04-28 14:38:17 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket standard_bucket_2 ...
2016-04-28 14:38:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : standard_bucket_2 from 172.23.106.139
2016-04-28 14:38:19 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2016-04-28 14:38:19 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.106.139 existing buckets : []
2016-04-28 14:38:19 | INFO | MainProcess | test_thread | [cluster_helper.cleanup_cluster] rebalancing all nodes in order to remove nodes
2016-04-28 14:38:20 | INFO | MainProcess | test_thread | [rest_client.rebalance] rebalance params : password=password&ejectedNodes=ns_1%40172.23.106.175&user=Administrator&knownNodes=ns_1%40172.23.106.175%2Cns_1%40172.23.106.139
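The `rebalance params` logged above are a URL-encoded query string posted to the rebalance REST endpoint (`%40` is `@`, `%2C` is `,`). Decoding it with the standard library makes the ejected/known node lists readable:

```python
from urllib.parse import parse_qs

# Query string copied from the rebalance log line above.
query = ("password=password&ejectedNodes=ns_1%40172.23.106.175"
         "&user=Administrator"
         "&knownNodes=ns_1%40172.23.106.175%2Cns_1%40172.23.106.139")

# parse_qs percent-decodes values and returns lists; take the first entry.
params = {k: v[0] for k, v in parse_qs(query).items()}
print(params["ejectedNodes"])            # ns_1@172.23.106.175
print(params["knownNodes"].split(","))   # both cluster nodes
```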
2016-04-28 14:38:20 | INFO | MainProcess | test_thread | [rest_client.rebalance] rebalance operation started
2016-04-28 14:38:20 | INFO | MainProcess | test_thread | [rest_client._rebalance_progress] rebalance percentage : 25.00 %
2016-04-28 14:38:25 | INFO | MainProcess | test_thread | [rest_client._rebalance_progress] rebalance percentage : 25.00 %
2016-04-28 14:38:35 | INFO | MainProcess | test_thread | [rest_client.monitorRebalance] rebalance progress took 15.2256948948 seconds
2016-04-28 14:38:35 | INFO | MainProcess | test_thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance...
2016-04-28 14:38:45 | INFO | MainProcess | test_thread | [cluster_helper.cleanup_cluster] removed all the nodes from cluster associated with ip:172.23.106.139 port:8091 ssh_username:root ? [(u'ns_1@172.23.106.175', 8091)]
2016-04-28 14:38:45 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.106.139:8091
2016-04-28 14:38:45 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.106.139:8091 is running
2016-04-28 14:38:45 | INFO | MainProcess | test_thread | [fts_base.init_cluster] Initializing Cluster ...
2016-04-28 14:38:45 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=2069
2016-04-28 14:38:45 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=forestdb
2016-04-28 14:38:46 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 172.23.106.139:8091:username=Administrator&password=password&port=8091
2016-04-28 14:38:46 | INFO | MainProcess | test_thread | [fts_base.init_cluster] 172.23.106.175 will be configured with services fts
2016-04-28 14:38:46 | INFO | MainProcess | test_thread | [fts_base.init_cluster] 172.23.106.176 will be configured with services fts
2016-04-28 14:38:47 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.106.175:8091 to cluster
2016-04-28 14:38:47 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @172.23.106.175:8091 to this cluster @172.23.106.139:8091
2016-04-28 14:38:51 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.106.176:8091 to cluster
2016-04-28 14:38:51 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @172.23.106.176:8091 to this cluster @172.23.106.139:8091
2016-04-28 14:38:56 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : password=password&ejectedNodes=&user=Administrator&knownNodes=ns_1%40172.23.106.176%2Cns_1%40172.23.106.175%2Cns_1%40172.23.106.139
2016-04-28 14:38:56 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance operation started
2016-04-28 14:38:56 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 16.67 %
2016-04-28 14:39:06 | INFO | MainProcess | Cluster_Thread | [task.check] rebalancing was completed with progress: 100% in 10.1218121052 sec
2016-04-28 14:39:07 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.106.139:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=313&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2016-04-28 14:39:07 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.07 seconds to create bucket default
2016-04-28 14:39:07 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.106.139 to accept set ops
2016-04-28 14:39:09 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 default
2016-04-28 14:39:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 default
2016-04-28 14:39:36 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'default' was created with per node RAM quota: 313
2016-04-28 14:39:37 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.106.139:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=313&proxyPort=11211&authType=sasl&name=sasl_bucket_1&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=password
2016-04-28 14:39:38 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.07 seconds to create bucket sasl_bucket_1
2016-04-28 14:39:38 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.106.139:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=313&proxyPort=11211&authType=sasl&name=sasl_bucket_2&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=password
2016-04-28 14:39:38 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.06 seconds to create bucket sasl_bucket_2
2016-04-28 14:39:38 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : sasl_bucket_1 in 172.23.106.139 to accept set ops
2016-04-28 14:39:48 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 sasl_bucket_1
2016-04-28 14:39:49 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 sasl_bucket_1
2016-04-28 14:40:16 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'sasl_bucket_1' was created with per node RAM quota: 313
2016-04-28 14:40:16 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : sasl_bucket_2 in 172.23.106.139 to accept set ops
2016-04-28 14:40:17 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 sasl_bucket_2
2016-04-28 14:40:18 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 sasl_bucket_2
2016-04-28 14:40:44 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'sasl_bucket_2' was created with per node RAM quota: 313
2016-04-28 14:40:45 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.106.139:8091/pools/default/buckets with param: proxyPort=11217&bucketType=membase&authType=none&replicaIndex=1&name=standard_bucket_1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&replicaNumber=1&ramQuotaMB=313
2016-04-28 14:40:45 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.07 seconds to create bucket standard_bucket_1
2016-04-28 14:40:45 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.106.139:8091/pools/default/buckets with param: proxyPort=11218&bucketType=membase&authType=none&replicaIndex=1&name=standard_bucket_2&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&replicaNumber=1&ramQuotaMB=313
2016-04-28 14:40:45 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.06 seconds to create bucket standard_bucket_2
2016-04-28 14:40:45 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : standard_bucket_1 in 172.23.106.139 to accept set ops
2016-04-28 14:40:48 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 standard_bucket_1
2016-04-28 14:40:49 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 standard_bucket_1
2016-04-28 14:41:16 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'standard_bucket_1' was created with per node RAM quota: 313
2016-04-28 14:41:16 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : standard_bucket_2 in 172.23.106.139 to accept set ops
2016-04-28 14:41:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 standard_bucket_2
2016-04-28 14:41:17 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 standard_bucket_2
2016-04-28 14:41:44 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'standard_bucket_2' was created with per node RAM quota: 313
2016-04-28 14:41:44 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 172.23.106.139:8091: True content: "/opt/couchbase/var/lib/couchbase/logs" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2016-04-28 14:41:44 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.139 with username : root password : couchbase ssh_key:
2016-04-28 14:41:46 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.139
2016-04-28 14:41:47 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.139: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:48 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:48 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:48 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.139: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:48 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:48 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 1
2016-04-28 14:41:48 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.175 with username : root password : couchbase ssh_key:
2016-04-28 14:41:49 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.175
2016-04-28 14:41:51 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.175: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:51 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:51 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:51 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.175: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:52 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:52 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:52 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.176 with username : root password : couchbase ssh_key:
2016-04-28 14:41:53 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.176
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.176: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.176: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:55 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.107.110 with username : root password : couchbase ssh_key:
2016-04-28 14:41:57 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.107.110
2016-04-28 14:41:58 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.107.110: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.107.110: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [fts_base.__initialize_error_count_dict] {'172.23.106.139': {'crash in forestdb': 1, 'panic:': 0}, '172.23.106.176': {'crash in forestdb': 0, 'panic:': 0}, '172.23.107.110': {'crash in forestdb': 0, 'panic:': 0}, '172.23.106.175': {'crash in forestdb': 0, 'panic:': 0}}
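The dict logged by `__initialize_error_count_dict` is a per-node baseline of `panic:` and `crash in forestdb` counts taken during setup (note the pre-existing `crash in forestdb: 1` on 172.23.106.139); a later check can then flag only occurrences that appeared during the test by diffing against it. A hypothetical sketch of that comparison, not fts_base's actual code:

```python
def new_errors(baseline, current):
    """Return {node: {pattern: delta}} for counts that grew since setup."""
    diffs = {}
    for node, patterns in current.items():
        for pattern, count in patterns.items():
            delta = count - baseline.get(node, {}).get(pattern, 0)
            if delta > 0:
                diffs.setdefault(node, {})[pattern] = delta
    return diffs

# Baseline as captured in the log; 'current' simulates a later re-scan.
baseline = {"172.23.106.139": {"crash in forestdb": 1, "panic:": 0}}
current  = {"172.23.106.139": {"crash in forestdb": 1, "panic:": 2}}
print(new_errors(baseline, current))  # {'172.23.106.139': {'panic:': 2}}
```

The pre-existing forestdb crash is ignored because its count did not change; only the two new panics would be reported.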
2016-04-28 14:41:59 | INFO | MainProcess | test_thread | [fts_base.setUp] ==== FTSbasetests setup is finished for test #1 rebalance_out_during_index_building ====
2016-04-28 14:42:02 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 default
2016-04-28 14:42:05 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 sasl_bucket_1
2016-04-28 14:42:09 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 sasl_bucket_2
2016-04-28 14:42:14 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 standard_bucket_1
2016-04-28 14:42:19 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.106.139:11210 standard_bucket_2
2016-04-28 14:43:16 | INFO | MainProcess | test_thread | [fts_base.load_data] Loading phase complete!
2016-04-28 14:43:17 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ...
2016-04-28 14:43:17 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.176:8094/api/index/default_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:17 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.176:8094/api/index/default_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:17 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index default_index_1 on 172.23.106.176
2016-04-28 14:43:17 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"params": {}, "name": "default_index_1", "planParams": {"hierarchyRules": null, "nodePlanParams": null, "numReplicas": 1, "planFrozen": false, "maxPartitionsPerPIndex": 32}, "sourceName": "default", "sourceUUID": "", "sourceType": "couchbase", "type": "fulltext-index", "sourceParams": {"authUser": "default", "dataManagerSleepMaxMS": 20000, "authSaslUser": "", "clusterManagerSleepMaxMS": 20000, "authSaslPassword": "", "clusterManagerSleepInitMS": 0, "dataManagerBackoffFactor": 0, "authPassword": "", "dataManagerSleepInitMS": 0, "feedBufferAckThreshold": 0, "feedBufferSizeBytes": 0, "clusterManagerBackoffFactor": 0}, "uuid": ""}
2016-04-28 14:43:17 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index default_index_1 created
2016-04-28 14:43:17 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for default_index_1 ...
2016-04-28 14:43:17 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ...
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 32
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 32 partitions or lesser
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: pIndexes are distributed across [u'36ab1cfc131697b64da16ccaa5366c88', u'616911398aab71a3819e41cf62a7b40a', u'26465bd92b1608d86e2d9290d9111678']
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range 309-373
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 36ab1cfc131697b64da16ccaa5366c88 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 616911398aab71a3819e41cf62a7b40a houses 10 pindexes which serve 320 partitions
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 26465bd92b1608d86e2d9290d9111678 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:23 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ...
2016-04-28 14:43:23 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.176:8094/api/index/sasl_bucket_1_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:24 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.176:8094/api/index/sasl_bucket_1_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:24 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index sasl_bucket_1_index_1 on 172.23.106.176
2016-04-28 14:43:24 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"params": {}, "name": "sasl_bucket_1_index_1", "planParams": {"hierarchyRules": null, "nodePlanParams": null, "numReplicas": 1, "planFrozen": false, "maxPartitionsPerPIndex": 32}, "sourceName": "sasl_bucket_1", "sourceUUID": "", "sourceType": "couchbase", "type": "fulltext-index", "sourceParams": {"authUser": "sasl_bucket_1", "dataManagerSleepMaxMS": 20000, "authSaslUser": "", "clusterManagerSleepMaxMS": 20000, "authSaslPassword": "", "clusterManagerSleepInitMS": 0, "dataManagerBackoffFactor": 0, "authPassword": "password", "dataManagerSleepInitMS": 0, "feedBufferAckThreshold": 0, "feedBufferSizeBytes": 0, "clusterManagerBackoffFactor": 0}, "uuid": ""}
2016-04-28 14:43:24 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index sasl_bucket_1_index_1 created
2016-04-28 14:43:24 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for sasl_bucket_1_index_1 ...
2016-04-28 14:43:24 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ...
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 32
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 32 partitions or lesser
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: pIndexes are distributed across [u'36ab1cfc131697b64da16ccaa5366c88', u'616911398aab71a3819e41cf62a7b40a', u'26465bd92b1608d86e2d9290d9111678']
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range 309-373
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 36ab1cfc131697b64da16ccaa5366c88 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 616911398aab71a3819e41cf62a7b40a houses 10 pindexes which serve 320 partitions
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 26465bd92b1608d86e2d9290d9111678 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ...
2016-04-28 14:43:30 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.139:8094/api/index/sasl_bucket_2_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:30 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.139:8094/api/index/sasl_bucket_2_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index sasl_bucket_2_index_1 on 172.23.106.139
2016-04-28 14:43:30 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"params": {}, "name": "sasl_bucket_2_index_1", "planParams": {"hierarchyRules": null, "nodePlanParams": null, "numReplicas": 1, "planFrozen": false, "maxPartitionsPerPIndex": 32}, "sourceName": "sasl_bucket_2", "sourceUUID": "", "sourceType": "couchbase", "type": "fulltext-index", "sourceParams": {"authUser": "sasl_bucket_2", "dataManagerSleepMaxMS": 20000, "authSaslUser": "", "clusterManagerSleepMaxMS": 20000, "authSaslPassword": "", "clusterManagerSleepInitMS": 0, "dataManagerBackoffFactor": 0, "authPassword": "password", "dataManagerSleepInitMS": 0, "feedBufferAckThreshold": 0, "feedBufferSizeBytes": 0, "clusterManagerBackoffFactor": 0}, "uuid": ""}
2016-04-28 14:43:31 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index sasl_bucket_2_index_1 created
2016-04-28 14:43:31 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for sasl_bucket_2_index_1 ...
2016-04-28 14:43:31 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ...
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 32
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 32 partitions or lesser
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: pIndexes are distributed across [u'36ab1cfc131697b64da16ccaa5366c88', u'616911398aab71a3819e41cf62a7b40a', u'26465bd92b1608d86e2d9290d9111678']
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range 309-373
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 36ab1cfc131697b64da16ccaa5366c88 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 616911398aab71a3819e41cf62a7b40a houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 26465bd92b1608d86e2d9290d9111678 houses 10 pindexes which serve 320 partitions
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ...
2016-04-28 14:43:37 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.175:8094/api/index/standard_bucket_1_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:37 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.175:8094/api/index/standard_bucket_1_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index standard_bucket_1_index_1 on 172.23.106.175
2016-04-28 14:43:37 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"params": {}, "name": "standard_bucket_1_index_1", "planParams": {"hierarchyRules": null, "nodePlanParams": null, "numReplicas": 1, "planFrozen": false, "maxPartitionsPerPIndex": 32}, "sourceName": "standard_bucket_1", "sourceUUID": "", "sourceType": "couchbase", "type": "fulltext-index", "sourceParams": {"authUser": "standard_bucket_1", "dataManagerSleepMaxMS": 20000, "authSaslUser": "", "clusterManagerSleepMaxMS": 20000, "authSaslPassword": "", "clusterManagerSleepInitMS": 0, "dataManagerBackoffFactor": 0, "authPassword": "", "dataManagerSleepInitMS": 0, "feedBufferAckThreshold": 0, "feedBufferSizeBytes": 0, "clusterManagerBackoffFactor": 0}, "uuid": ""}
2016-04-28 14:43:38 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index standard_bucket_1_index_1 created
2016-04-28 14:43:38 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for standard_bucket_1_index_1 ...
2016-04-28 14:43:39 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ...
2016-04-28 14:43:44 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 32
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 32 partitions or lesser
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: pIndexes are distributed across [u'36ab1cfc131697b64da16ccaa5366c88', u'616911398aab71a3819e41cf62a7b40a', u'26465bd92b1608d86e2d9290d9111678']
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range 309-373
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 36ab1cfc131697b64da16ccaa5366c88 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 616911398aab71a3819e41cf62a7b40a houses 10 pindexes which serve 320 partitions
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 26465bd92b1608d86e2d9290d9111678 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ...
2016-04-28 14:43:45 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.176:8094/api/index/standard_bucket_2_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:45 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://172.23.106.176:8094/api/index/standard_bucket_2_index_1 error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index standard_bucket_2_index_1 on 172.23.106.176
2016-04-28 14:43:45 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"params": {}, "name": "standard_bucket_2_index_1", "planParams": {"hierarchyRules": null, "nodePlanParams": null, "numReplicas": 1, "planFrozen": false, "maxPartitionsPerPIndex": 32}, "sourceName": "standard_bucket_2", "sourceUUID": "", "sourceType": "couchbase", "type": "fulltext-index", "sourceParams": {"authUser": "standard_bucket_2", "dataManagerSleepMaxMS": 20000, "authSaslUser": "", "clusterManagerSleepMaxMS": 20000, "authSaslPassword": "", "clusterManagerSleepInitMS": 0, "dataManagerBackoffFactor": 0, "authPassword": "", "dataManagerSleepInitMS": 0, "feedBufferAckThreshold": 0, "feedBufferSizeBytes": 0, "clusterManagerBackoffFactor": 0}, "uuid": ""}
2016-04-28 14:43:46 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index standard_bucket_2_index_1 created
2016-04-28 14:43:46 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for standard_bucket_2_index_1 ...
2016-04-28 14:43:46 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ...
2016-04-28 14:43:52 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 32
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 32 partitions or lesser
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: pIndexes are distributed across [u'36ab1cfc131697b64da16ccaa5366c88', u'616911398aab71a3819e41cf62a7b40a', u'26465bd92b1608d86e2d9290d9111678']
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range 309-373
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 36ab1cfc131697b64da16ccaa5366c88 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 616911398aab71a3819e41cf62a7b40a houses 10 pindexes which serve 320 partitions
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node 26465bd92b1608d86e2d9290d9111678 houses 11 pindexes which serve 352 partitions
2016-04-28 14:43:53 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 10 secs. ...
2016-04-28 14:44:03 | INFO | MainProcess | test_thread | [moving_topology_fts.rebalance_out_during_index_building] Index building has begun...
2016-04-28 14:44:06 | INFO | MainProcess | test_thread | [moving_topology_fts.rebalance_out_during_index_building] Index count for default_index_1: 8186
2016-04-28 14:44:10 | INFO | MainProcess | test_thread | [moving_topology_fts.rebalance_out_during_index_building] Index count for sasl_bucket_1_index_1: 4571
2016-04-28 14:44:14 | INFO | MainProcess | test_thread | [moving_topology_fts.rebalance_out_during_index_building] Index count for sasl_bucket_2_index_1: 3016
2016-04-28 14:44:18 | INFO | MainProcess | test_thread | [moving_topology_fts.rebalance_out_during_index_building] Index count for standard_bucket_1_index_1: 1907
2016-04-28 14:44:22 | INFO | MainProcess | test_thread | [moving_topology_fts.rebalance_out_during_index_building] Index count for standard_bucket_2_index_1: 685
2016-04-28 14:44:22 | INFO | MainProcess | test_thread | [fts_base.__async_rebalance_out] Starting rebalance-out nodes:[ip:172.23.106.176 port:8091 ssh_username:root] at C1 cluster 172.23.106.139
2016-04-28 14:44:23 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : password=password&ejectedNodes=ns_1%40172.23.106.176&user=Administrator&knownNodes=ns_1%40172.23.106.176%2Cns_1%40172.23.106.175%2Cns_1%40172.23.106.139
2016-04-28 14:44:23 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance operation started
2016-04-28 14:44:23 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 0.00 %
2016-04-28 14:44:33 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 16.67 %
2016-04-28 14:44:43 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 16.67 %
2016-04-28 14:44:54 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 17.65 %
2016-04-28 14:45:04 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:45:14 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:45:24 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:45:34 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:45:44 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:45:54 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:46:04 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:46:14 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:46:24 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:46:34 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:46:44 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:46:55 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:47:05 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:47:15 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:47:25 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:47:35 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:47:45 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:47:55 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:48:05 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:48:15 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:48:25 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 19.61 %
2016-04-28 14:48:27 | INFO | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] Latest logs from UI on 172.23.106.139:
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Bucket "default" rebalance appears to be swap rebalance', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:26.607Z', u'module': u'ns_vbucket_mover', u'tstamp': 1461879866607, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Started rebalancing bucket default', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:26.058Z', u'module': u'ns_rebalancer', u'tstamp': 1461879866058, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Bucket "sasl_bucket_1" rebalance appears to be swap rebalance', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:25.947Z', u'module': u'ns_vbucket_mover', u'tstamp': 1461879865947, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Started rebalancing bucket sasl_bucket_1', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:25.556Z', u'module': u'ns_rebalancer', u'tstamp': 1461879865556, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Bucket "sasl_bucket_2" rebalance appears to be swap rebalance', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:25.448Z', u'module': u'ns_vbucket_mover', u'tstamp': 1461879865448, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Started rebalancing bucket sasl_bucket_2', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:24.945Z', u'module': u'ns_rebalancer', u'tstamp': 1461879864945, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Bucket "standard_bucket_1" rebalance appears to be swap rebalance', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:24.819Z', u'module': u'ns_vbucket_mover', u'tstamp': 1461879864819, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Started rebalancing bucket standard_bucket_1', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:24.469Z', u'module': u'ns_rebalancer', u'tstamp': 1461879864469, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Bucket "standard_bucket_2" rebalance appears to be swap rebalance', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:24.368Z', u'module': u'ns_vbucket_mover', u'tstamp': 1461879864368, u'type': u'info'}
2016-04-28 14:48:27 | ERROR | MainProcess | Cluster_Thread | [rest_client.print_UI_logs] {u'node': u'ns_1@172.23.106.139', u'code': 0, u'text': u'Started rebalancing bucket standard_bucket_2', u'shortText': u'message', u'serverTime': u'2016-04-28T14:44:24.028Z', u'module': u'ns_rebalancer', u'tstamp': 1461879864028, u'type': u'info'}
[('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py', 781, '__bootstrap', 'self.__bootstrap_inner()'), ('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py', 808, '__bootstrap_inner', 'self.run()'), ('lib/tasks/taskmanager.py', 31, 'run', 'task.step(self)'), ('lib/tasks/task.py', 74, 'step', 'self.check(task_manager)'), ('lib/tasks/task.py', 495, 'check', 'self.set_exception(RebalanceFailedException("seems like rebalance hangs. please check logs!"))'), ('lib/tasks/future.py', 264, 'set_exception', 'print traceback.extract_stack()')]
Thu Apr 28 14:48:27 2016
[('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py', 781, '__bootstrap', 'self.__bootstrap_inner()'), ('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py', 808, '__bootstrap_inner', 'self.run()'), ('./testrunner.py', 282, 'run', 'self._Thread__target(*self._Thread__args, **self._Thread__kwargs)'), ('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/runner.py', 151, 'run', 'test(result)'), ('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py', 395, '__call__', 'return self.run(*args, **kwds)'), ('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py', 331, 'run', 'testMethod()'), ('pytests/fts/moving_topology_fts.py', 40, 'rebalance_out_during_index_building', 'self._cb_cluster.rebalance_out()'), ('pytests/fts/fts_base.py', 1616, 'rebalance_out', 'task.result()'), ('lib/tasks/future.py', 160, 'result', 'return self.__get_result()'), ('lib/tasks/future.py', 111, '__get_result', 'print traceback.extract_stack()')]
ERROR
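The failure above is the task framework's stall detection firing: rebalance progress sat at 19.61 % for roughly four minutes, so the monitoring task in `lib/tasks/task.py` raised `RebalanceFailedException("seems like rebalance hangs. please check logs!")`. A minimal sketch of that polling pattern (hypothetical names and timeouts, not the actual testrunner code) looks like:

```python
import time


class RebalanceFailedException(Exception):
    """Raised when rebalance progress stops advancing."""
    pass


def wait_for_rebalance(get_progress, poll_interval=10, stall_timeout=240):
    """Poll rebalance progress until it completes or stalls.

    get_progress() returns the current percentage (100.0 when done).
    If the percentage does not increase for stall_timeout seconds,
    the rebalance is declared hung.
    """
    last_progress = -1.0
    stalled_since = time.time()
    while True:
        progress = get_progress()
        if progress >= 100.0:
            return
        if progress > last_progress:
            # Progress moved: remember it and reset the stall clock.
            last_progress = progress
            stalled_since = time.time()
        elif time.time() - stalled_since > stall_timeout:
            raise RebalanceFailedException(
                "seems like rebalance hangs. please check logs!")
        time.sleep(poll_interval)
```

In this run the progress sequence was effectively 0.00 → 16.67 → 17.65 → 19.61 and then flat, which is exactly the "stalled past the timeout" branch.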
2016-04-28 14:48:27 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 172.23.106.139:8091: True content: "/opt/couchbase/var/lib/couchbase/logs" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2016-04-28 14:48:27 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.139 with username : root password : couchbase ssh_key:
2016-04-28 14:48:28 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.139
2016-04-28 14:48:31 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.139: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'panic:' count on 172.23.106.139 :0, now :0
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.139: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 1
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'crash in forestdb' count on 172.23.106.139 :1, now :1
2016-04-28 14:48:32 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.175 with username : root password : couchbase ssh_key:
2016-04-28 14:48:34 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.175
2016-04-28 14:48:36 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.175: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:36 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:36 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:36 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'panic:' count on 172.23.106.175 :0, now :0
2016-04-28 14:48:36 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.175: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:37 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:37 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:37 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'crash in forestdb' count on 172.23.106.175 :0, now :0
2016-04-28 14:48:37 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.176 with username : root password : couchbase ssh_key:
2016-04-28 14:48:38 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.176
2016-04-28 14:48:41 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.176: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:41 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:41 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:41 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'panic:' count on 172.23.106.176 :0, now :0
2016-04-28 14:48:41 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.106.176: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:42 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:42 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:42 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'crash in forestdb' count on 172.23.106.176 :0, now :0
2016-04-28 14:48:42 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.107.110 with username : root password : couchbase ssh_key:
2016-04-28 14:48:43 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.107.110
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.107.110: zgrep "panic:" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'panic:' count on 172.23.107.110 :0, now :0
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.107.110: zgrep "crash in forestdb" "/opt/couchbase/var/lib/couchbase/logs"/fts.log* | wc -l
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] 0
2016-04-28 14:48:45 | INFO | MainProcess | test_thread | [fts_base.check_error_count_in_fts_log] Initial 'crash in forestdb' count on 172.23.107.110 :0, now :0