@alexey-milovidov
Created August 24, 2022 14:11
```
2022.08.24 14:08:36.908985 [ 2805621 ] {} <Error> ZooKeeperClient: Code: 999. Coordination::Exception: Operation timeout (no response) for request Create for path: /clickhouse/tables/7af94bd6-f5de-4f7f-bb31-c2bc6af692d3/1/part_moves_shard (Operation timeout). (KEEPER_EXCEPTION), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa360bfa in /usr/bin/clickhouse
1. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Coordination::Error, int) @ 0x168f8af5 in /usr/bin/clickhouse
2. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Coordination::Error) @ 0x168f8e47 in /usr/bin/clickhouse
3. Coordination::ZooKeeper::receiveThread() @ 0x1694bbd5 in /usr/bin/clickhouse
4. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan, std::__1::shared_ptr<DB::ZooKeeperLog>)::$_2>(Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan, std::__1::shared_ptr<DB::ZooKeeperLog>)::$_2&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x16951c2e in /usr/bin/clickhouse
5. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa42f9ac in /usr/bin/clickhouse
6. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xa432dbd in /usr/bin/clickhouse
7. ? @ 0x7f52ad12c609 in ?
8. __clone @ 0x7f52ad051163 in ?
(version 22.8.1.1231 (official build))
2022.08.24 14:08:37.024527 [ 2805097 ] {} <Error> Application: Caught exception while loading metadata: Code: 999. Coordination::Exception: Failed to create new nodes at /clickhouse/tables/7af94bd6-f5de-4f7f-bb31-c2bc6af692d3/1 (Connection loss): Cannot attach table `default`.`run_attributes_v1` from metadata file /mnt/raid/clickhouse/store/349/3491607f-e4a7-49d6-8409-28f942a2a9fc/run_attributes_v1.sql from query ATTACH TABLE default.run_attributes_v1 UUID '7af94bd6-f5de-4f7f-bb31-c2bc6af692d3' (`old_sha` String, `new_sha` String, `metric` LowCardinality(String), `metric_value` String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}') ORDER BY (old_sha, new_sha) SETTINGS index_granularity = 8192. (KEEPER_EXCEPTION), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa360bfa in /usr/bin/clickhouse
1. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Coordination::Error, int) @ 0x168f8af5 in /usr/bin/clickhouse
2. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Coordination::Error) @ 0x168f8e47 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::createNewZooKeeperNodes() @ 0x15aa5339 in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, DB::StorageReplicatedMergeTree::RenamingRestrictions) @ 0x15a946e1 in /usr/bin/clickhouse
5. DB::create(DB::StorageFactory::Arguments const&) @ 0x16063887 in /usr/bin/clickhouse
6. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x159b743e in /usr/bin/clickhouse
7. DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool) @ 0x1497f938 in /usr/bin/clickhouse
8. DB::DatabaseOrdinary::loadTableFromMetadata(std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::QualifiedTableName const&, std::__1::shared_ptr<DB::IAST> const&, bool) @ 0x14a2c404 in /usr/bin/clickhouse
9. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::TablesLoader::startLoadingIndependentTables(ThreadPoolImpl<ThreadFromGlobalPool>&, unsigned long, std::__1::shared_ptr<DB::Context>)::$_1, void ()> >(std::__1::__function::__policy_storage const*) @ 0x14a98a6b in /usr/bin/clickhouse
10. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa432108 in /usr/bin/clickhouse
11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0xa433e55 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa42f9ac in /usr/bin/clickhouse
13. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xa432dbd in /usr/bin/clickhouse
14. ? @ 0x7f52ad12c609 in ?
15. __clone @ 0x7f52ad051163 in ?
(version 22.8.1.1231 (official build))
2022.08.24 14:08:37.024611 [ 2805097 ] {} <Information> Application: Shutting down storages.
2022.08.24 14:08:37.024617 [ 2805097 ] {} <Information> Context: Shutdown disk default
2022.08.24 14:08:37.024924 [ 2805735 ] {} <Trace> SystemLog (system.query_log): Terminating
2022.08.24 14:08:37.025057 [ 2805638 ] {} <Trace> SystemLog (system.query_thread_log): Terminating
2022.08.24 14:08:37.025205 [ 2805602 ] {} <Trace> SystemLog (system.part_log): Terminating
2022.08.24 14:08:37.025306 [ 2805728 ] {} <Trace> SystemLog (system.trace_log): Terminating
2022.08.24 14:08:37.025701 [ 2805717 ] {} <Trace> SystemLog (system.crash_log): Terminating
2022.08.24 14:08:37.314812 [ 2805114 ] {} <Trace> KeeperTCPHandler: Has 1 responses in the queue
2022.08.24 14:08:37.314946 [ 2805114 ] {} <Information> KeeperTCPHandler: Got exception processing session #6: Code: 210. DB::NetException: I/O error: Broken pipe, while writing to socket ([::ffff:127.0.0.1]:40402). (NETWORK_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa360bfa in /usr/bin/clickhouse
1. DB::WriteBufferFromPocoSocket::nextImpl() @ 0x14a65e10 in /usr/bin/clickhouse
2. Coordination::ZooKeeperResponse::write(DB::WriteBuffer&) const @ 0x1692eafb in /usr/bin/clickhouse
3. DB::KeeperTCPHandler::runImpl() @ 0x1618a9aa in /usr/bin/clickhouse
4. Poco::Net::TCPServerConnection::start() @ 0x1925ad93 in /usr/bin/clickhouse
5. Poco::Net::TCPServerDispatcher::run() @ 0x1925c2ad in /usr/bin/clickhouse
6. Poco::PooledThread::run() @ 0x1941fadd in /usr/bin/clickhouse
7. Poco::ThreadImpl::runnableEntry(void*) @ 0x1941d122 in /usr/bin/clickhouse
8. ? @ 0x7f52ad12c609 in ?
9. __clone @ 0x7f52ad051163 in ?
2022.08.24 14:08:37.692722 [ 2805653 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 1 entries to flush up to offset 17
2022.08.24 14:08:37.700472 [ 2805653 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 156.78 GiB.
2022.08.24 14:08:37.702611 [ 2805653 ] {} <Trace> MergedBlockOutputStream: filled checksums 202208_3_3_0 (state Temporary)
2022.08.24 14:08:37.702947 [ 2805653 ] {} <Trace> system.metric_log (7f12799c-f233-4b2b-bea8-a89e35aec1fd): Renaming temporary part tmp_insert_202208_3_3_0 to 202208_180140_180140_0.
2022.08.24 14:08:37.703857 [ 2805653 ] {} <Trace> SystemLog (system.metric_log): Flushed system log up to offset 17
2022.08.24 14:08:37.703863 [ 2805653 ] {} <Trace> SystemLog (system.metric_log): Terminating
2022.08.24 14:08:37.704006 [ 2805708 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Terminating
2022.08.24 14:08:37.704220 [ 2805734 ] {} <Trace> SystemLog (system.opentelemetry_span_log): Terminating
2022.08.24 14:08:37.704332 [ 2805657 ] {} <Trace> SystemLog (system.query_views_log): Terminating
2022.08.24 14:08:37.704452 [ 2805669 ] {} <Trace> SystemLog (system.session_log): Terminating
2022.08.24 14:08:37.704683 [ 2805097 ] {} <Trace> default.dish (ReplicatedMergeTreeRestartingThread): Restarting thread finished
2022.08.24 14:08:37.704699 [ 2805097 ] {} <Trace> default.dish (ReplicatedMergeTreeRestartingThread): Waiting for threads to finish
2022.08.24 14:08:37.704704 [ 2805097 ] {} <Trace> default.dish (ReplicatedMergeTreeRestartingThread): Threads finished
2022.08.24 14:08:37.704708 [ 2805097 ] {} <Trace> default.dish (PartMovesBetweenShardsOrchestrator): PartMovesBetweenShardsOrchestrator thread finished
2022.08.24 14:08:37.704715 [ 2805097 ] {} <Trace> default.stock (ReplicatedMergeTreeRestartingThread): Restarting thread finished
2022.08.24 14:08:37.704717 [ 2805097 ] {} <Trace> default.stock (ReplicatedMergeTreeRestartingThread): Waiting for threads to finish
2022.08.24 14:08:37.704721 [ 2805097 ] {} <Trace> default.stock (ReplicatedMergeTreeRestartingThread): Threads finished
2022.08.24 14:08:37.704724 [ 2805097 ] {} <Trace> default.stock (PartMovesBetweenShardsOrchestrator): PartMovesBetweenShardsOrchestrator thread finished
2022.08.24 14:08:37.764919 [ 2805097 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2022.08.24 14:08:37.765133 [ 2805097 ] {} <Debug> MemoryTracker: Peak memory usage (for query): 67.22 KiB.
2022.08.24 14:08:37.765180 [ 2805097 ] {} <Debug> Application: Shut down storages.
2022.08.24 14:08:37.765183 [ 2805097 ] {} <Debug> Application: Waiting for current connections to servers for tables to finish.
2022.08.24 14:08:37.976501 [ 2805097 ] {} <Information> Application: Closed all listening sockets.
2022.08.24 14:08:37.976511 [ 2805097 ] {} <Information> Application: Closed connections to servers for tables.
2022.08.24 14:08:37.976515 [ 2805097 ] {} <Debug> KeeperDispatcher: Shutting down storage dispatcher
2022.08.24 14:08:38.130435 [ 2805097 ] {} <Information> RaftInstance: shutting down raft core
2022.08.24 14:08:38.130468 [ 2805097 ] {} <Information> RaftInstance: sent stop signal to the commit thread.
2022.08.24 14:08:38.130525 [ 2805097 ] {} <Information> RaftInstance: cancelled all schedulers.
2022.08.24 14:08:38.130531 [ 2805097 ] {} <Information> RaftInstance: commit thread stopped.
2022.08.24 14:08:38.130537 [ 2805097 ] {} <Information> RaftInstance: all pending commit elements dropped.
2022.08.24 14:08:38.130542 [ 2805097 ] {} <Information> RaftInstance: reset all pointers.
2022.08.24 14:08:38.131138 [ 2805097 ] {} <Information> RaftInstance: joined terminated commit thread.
2022.08.24 14:08:38.131213 [ 2805097 ] {} <Information> RaftInstance: sent stop signal to background append thread.
2022.08.24 14:08:38.131220 [ 2805520 ] {} <Information> RaftInstance: bg append_entries thread terminated
2022.08.24 14:08:38.131579 [ 2805097 ] {} <Information> RaftInstance: clean up auto-forwarding queue: 0 elems
2022.08.24 14:08:38.131591 [ 2805097 ] {} <Information> RaftInstance: clean up auto-forwarding clients
2022.08.24 14:08:38.131596 [ 2805097 ] {} <Information> RaftInstance: srv 2, in-use 1, idle 0
2022.08.24 14:08:38.131599 [ 2805097 ] {} <Information> RaftInstance: raft_server shutdown completed.
2022.08.24 14:08:38.131603 [ 2805097 ] {} <Information> RaftInstance: manually create a snapshot on 394024
2022.08.24 14:08:38.131607 [ 2805097 ] {} <Information> RaftInstance: creating a snapshot for index 394024
2022.08.24 14:08:38.131611 [ 2805097 ] {} <Information> RaftInstance: create snapshot idx 394024 log_term 1
2022.08.24 14:08:38.131614 [ 2805097 ] {} <Debug> KeeperStateMachine: Creating snapshot 394024
2022.08.24 14:08:38.131627 [ 2805097 ] {} <Information> KeeperStateMachine: Creating a snapshot during shutdown because 'create_snapshot_on_exit' is enabled.
2022.08.24 14:08:38.137929 [ 2805097 ] {} <Debug> KeeperStateMachine: Created persistent snapshot 394024 with path /var/lib/clickhouse/coordination/snapshots/snapshot_394024.bin.zstd
2022.08.24 14:08:38.137940 [ 2805097 ] {} <Trace> KeeperStateMachine: Clearing garbage after snapshot
2022.08.24 14:08:38.137944 [ 2805097 ] {} <Trace> KeeperStateMachine: Cleared garbage after snapshot
2022.08.24 14:08:38.137952 [ 2805097 ] {} <Information> RaftInstance: snapshot idx 394024 log_term 1 created, compact the log store if needed
2022.08.24 14:08:38.137956 [ 2805097 ] {} <Information> RaftInstance: log_store_ compact upto 294024
2022.08.24 14:08:38.137961 [ 2805097 ] {} <Information> KeeperLogStore: Compact logs up to log index 294024, our max log id is 394024
2022.08.24 14:08:38.139213 [ 2805097 ] {} <Information> KeeperLogStore: Compaction up to 294024 finished new min index 294025, new max index 394024
2022.08.24 14:08:38.139222 [ 2805097 ] {} <Information> RaftInstance: create snapshot idx 394024 log_term 1 done: 7607 us elapsed
2022.08.24 14:08:38.140009 [ 2805496 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 94
...
RaftInstance: end of asio worker thread, remaining threads: 2
2022.08.24 14:08:38.151344 [ 2805097 ] {} <Debug> KeeperDispatcher: Dispatcher shut down
2022.08.24 14:08:38.543950 [ 2805097 ] {} <Debug> Application: Destroyed global context.
2022.08.24 14:08:38.568629 [ 2805097 ] {} <Error> Application: Coordination::Exception: Failed to create new nodes at /clickhouse/tables/7af94bd6-f5de-4f7f-bb31-c2bc6af692d3/1 (Connection loss): Cannot attach table `default`.`run_attributes_v1` from metadata file /mnt/raid/clickhouse/store/349/3491607f-e4a7-49d6-8409-28f942a2a9fc/run_attributes_v1.sql from query ATTACH TABLE default.run_attributes_v1 UUID '7af94bd6-f5de-4f7f-bb31-c2bc6af692d3' (`old_sha` String, `new_sha` String, `metric` LowCardinality(String), `metric_value` String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}') ORDER BY (old_sha, new_sha) SETTINGS index_granularity = 8192
2022.08.24 14:08:38.568665 [ 2805097 ] {} <Information> Application: shutting down
2022.08.24 14:08:38.568669 [ 2805097 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2022.08.24 14:08:38.568755 [ 2805098 ] {} <Trace> BaseDaemon: Received signal -2
2022.08.24 14:08:38.568785 [ 2805098 ] {} <Information> BaseDaemon: Stop SignalListener thread
2022.08.24 14:08:38.760720 [ 2805096 ] {} <Information> Application: Child process exited normally with code 70.
```