23.3

Version 1 (old): 22.8.15.23

Version 2 (new): 23.3.8.21

Number of authors: 329

Number of commits: 10816

Number of PRs: 3151

Tables in system database

name versions ('old' = present only in 22.8, 'new' = present only in 23.3)
asynchronous_metric_log ['old']
dropped_tables ['new']
metric_log ['old']
moves ['new']
named_collections ['new']
query_cache ['new']
server_settings ['new']
trace_log ['old']
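
These one-sided entries were presumably produced by collecting system.tables from both servers and keeping the names that appear in only one version. A minimal sketch of such a comparison in ClickHouse SQL; the host names are placeholders:

```sql
-- Compare the set of system tables between the two servers.
-- 'host-22-8' and 'host-23-3' are placeholder addresses.
SELECT name, groupArray(version) AS versions
FROM
(
    SELECT name, 'old' AS version
    FROM remote('host-22-8', system.tables)
    WHERE database = 'system'
    UNION ALL
    SELECT name, 'new' AS version
    FROM remote('host-23-3', system.tables)
    WHERE database = 'system'
)
GROUP BY name
HAVING length(versions) = 1
ORDER BY name;
```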

Table structures in system database

table column versions types
asynchronous_inserts last_update ['old'] ['DateTime64(6)']
asynchronous_inserts entries.finished ['old'] ['Array(UInt8)']
asynchronous_inserts entries.exception ['old'] ['Array(String)']
asynchronous_metric_log event_date ['old'] ['Date']
asynchronous_metric_log event_time ['old'] ['DateTime']
asynchronous_metric_log metric ['old'] ['LowCardinality(String)']
asynchronous_metric_log value ['old'] ['Float64']
asynchronous_metrics description ['new'] ['String']
backups total_size ['new'] ['UInt64']
backups num_entries ['new'] ['UInt64']
backups files_read ['new'] ['UInt64']
backups bytes_read ['new'] ['UInt64']
data_skipping_indices type_full ['new'] ['String']
databases engine_full ['new'] ['String']
detached_parts bytes_on_disk ['new'] ['UInt64']
detached_parts path ['new'] ['String']
disks unreserved_space ['new'] ['UInt64']
disks is_encrypted ['new'] ['UInt8']
disks is_read_only ['new'] ['UInt8']
disks is_write_once ['new'] ['UInt8']
disks is_remote ['new'] ['UInt8']
disks is_broken ['new'] ['UInt8']
distribution_queue last_exception_time ['new'] ['DateTime']
dropped_tables index ['new'] ['UInt32']
dropped_tables database ['new'] ['String']
dropped_tables table ['new'] ['String']
dropped_tables uuid ['new'] ['UUID']
dropped_tables engine ['new'] ['String']
dropped_tables metadata_dropped_path ['new'] ['String']
dropped_tables table_dropped_time ['new'] ['DateTime']
filesystem_cache kind ['new'] ['String']
filesystem_cache unbound ['new'] ['UInt8']
formats supports_parallel_parsing ['new'] ['UInt8']
formats supports_parallel_formatting ['new'] ['UInt8']
functions description ['new'] ['String']
grants access_type ['old','new'] ...
merge_tree_settings min ['new'] ['Nullable(String)']
merge_tree_settings max ['new'] ['Nullable(String)']
merge_tree_settings readonly ['new'] ['UInt8']
models model_path ['new'] ['String']
models name ['old'] ['String']
models status ['old'] ['Enum8('NOT_LOADED' = 0, 'LOADED' = 1, 'FAILED' = 2, 'LOADING' = 3, 'FAILED_AND_RELOADING' = 4, 'LOADED_AND_RELOADING' = 5, 'NOT_EXIST' = 6)']
models origin ['old'] ['String']
models last_exception ['old'] ['String']
moves database ['new'] ['String']
moves table ['new'] ['String']
moves elapsed ['new'] ['Float64']
moves target_disk_name ['new'] ['String']
moves target_disk_path ['new'] ['String']
moves part_name ['new'] ['String']
moves part_size ['new'] ['UInt64']
moves thread_id ['new'] ['UInt64']
named_collections name ['new'] ['String']
named_collections collection ['new'] ['Map(String, String)']
parts has_lightweight_delete ['new'] ['UInt8']
parts last_removal_attemp_time ['new'] ['DateTime']
parts removal_state ['new'] ['String']
privileges privilege ['old','new'] ...
privileges level ['old','new'] ['Nullable(Enum8('GLOBAL' = 0, 'DATABASE' = 1, 'TABLE' = 2, 'DICTIONARY' = 3, 'VIEW' = 4, 'COLUMN' = 5))','Nullable(Enum8('GLOBAL' = 0, 'DATABASE' = 1, 'TABLE' = 2, 'DICTIONARY' = 3, 'VIEW' = 4, 'COLUMN' = 5, 'NAMED_COLLECTION' = 6))']
privileges parent_group ['old','new'] ...
processes query_kind ['new'] ['String']
query_cache query ['new'] ['String']
query_cache key_hash ['new'] ['UInt64']
query_cache expires_at ['new'] ['DateTime']
query_cache stale ['new'] ['UInt8']
query_cache shared ['new'] ['UInt8']
query_cache result_size ['new'] ['UInt64']
remote_data_paths size ['new'] ['UInt64']
remote_data_paths common_prefix_for_blobs ['new'] ['String']
replicated_merge_tree_settings min ['new'] ['Nullable(String)']
replicated_merge_tree_settings max ['new'] ['Nullable(String)']
replicated_merge_tree_settings readonly ['new'] ['UInt8']
replication_queue last_exception_time ['new'] ['DateTime']
server_settings name ['new'] ['String']
server_settings value ['new'] ['String']
server_settings default ['new'] ['String']
server_settings changed ['new'] ['UInt8']
server_settings description ['new'] ['String']
server_settings type ['new'] ['String']
settings default ['new'] ['String']
settings alias_for ['new'] ['String']
settings_profile_elements writability ['new'] ['Nullable(Enum8('WRITABLE' = 0, 'CONST' = 1, 'CHANGEABLE_IN_READONLY' = 2))']
settings_profile_elements readonly ['old'] ['Nullable(UInt8)']
table_functions description ['new'] ['String']
table_functions allow_readonly ['new'] ['UInt8']
tables parts ['new'] ['Nullable(UInt64)']
tables active_parts ['new'] ['Nullable(UInt64)']
tables total_marks ['new'] ['Nullable(UInt64)']
trace_log event_date ['old'] ['Date']
trace_log event_time ['old'] ['DateTime']
trace_log event_time_microseconds ['old'] ['DateTime64(6)']
trace_log timestamp_ns ['old'] ['UInt64']
trace_log revision ['old'] ['UInt32']
trace_log trace_type ['old'] ['Enum8('Real' = 0, 'CPU' = 1, 'Memory' = 2, 'MemorySample' = 3, 'MemoryPeak' = 4)']
trace_log thread_id ['old'] ['UInt64']
trace_log query_id ['old'] ['String']
trace_log trace ['old'] ['Array(UInt64)']
trace_log size ['old'] ['Int64']
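
One of the new tables above is directly actionable: system.dropped_tables pairs with the new UNDROP TABLE statement (see the allow_experimental_undrop_table_query setting and the UNDROP TABLE privilege below). A minimal sketch, assuming a hypothetical table t:

```sql
DROP TABLE t;  -- hypothetical table

-- The dropped table lingers here until it is permanently removed
-- (by default after database_atomic_delay_before_drop_table_sec):
SELECT database, table, uuid, table_dropped_time
FROM system.dropped_tables;

-- New in 23.3, gated by an experimental setting:
SET allow_experimental_undrop_table_query = 1;
UNDROP TABLE t;
```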

settings and merge_tree_settings

source name old_value new_value new_description (settings new in 23.3 show only their default value)
merge_tree_settings async_block_ids_cache_min_update_interval_ms 100 minimum interval between updates of async_block_ids_cache
merge_tree_settings clean_deleted_rows Never Whether ReplacingMergeTree cleanup of deleted rows is done automatically at each merge or manually (possible values: 'Always'/'Never' (default))
merge_tree_settings compress_marks 0 Compress marks; reduces mark file size and speeds up network transmission.
merge_tree_settings compress_primary_key 0 Compress the primary key; reduces primary key file size and speeds up network transmission.
merge_tree_settings disk Name of storage disk. Can be specified instead of storage policy.
merge_tree_settings initialization_retry_period 60 Retry period for table initialization, in seconds.
merge_tree_settings marks_compress_block_size 65536 Mark compression block size: the actual size of the block to compress.
merge_tree_settings marks_compression_codec ZSTD(3) Compression encoding used by marks, marks are small enough and cached, so the default compression is ZSTD(3).
merge_tree_settings max_avg_part_size_for_too_many_parts 10737418240 The 'too many parts' check according to 'parts_to_delay_insert' and 'parts_to_throw_insert' will be active only if the average part size (in the relevant partition) is not larger than the specified threshold. If it is larger than the specified threshold, INSERTs will be neither delayed nor rejected. This makes it possible to have hundreds of terabytes in a single table on a single server if the parts are successfully merged to larger parts. This does not affect the thresholds on inactive parts or total parts.
merge_tree_settings max_digestion_size_per_segment 268435456 Max number of bytes to digest per segment to build GIN index.
merge_tree_settings max_number_of_mutations_for_replica 0 Limit the number of part mutations per replica to the specified amount. Zero means no limit on the number of mutations per replica (the execution can still be constrained by other settings).
merge_tree_settings max_part_loading_threads 'auto(6)' 'auto(12)' The number of threads to load data parts at startup.
merge_tree_settings max_part_removal_threads 'auto(6)' 'auto(12)' The number of threads for concurrent removal of inactive data parts. One is usually enough, but in 'Google Compute Environment SSD Persistent Disks' file removal (unlink) operation is extraordinarily slow and you probably have to increase this number (recommended is up to 16).
merge_tree_settings max_replicated_merges_in_queue 16 1000 How many tasks of merging and mutating parts are allowed simultaneously in ReplicatedMergeTree queue.
merge_tree_settings max_suspicious_broken_parts 10 100 Max broken parts, if more - deny automatic deletion.
merge_tree_settings min_age_to_force_merge_on_partition_only 0 Whether min_age_to_force_merge_seconds should be applied only on the entire partition and not on subset.
merge_tree_settings min_age_to_force_merge_seconds 0 If all parts in a certain range are older than this value, range will be always eligible for merging. Set to 0 to disable.
merge_tree_settings min_delay_to_insert_ms 10 Min delay of inserting data into MergeTree table in milliseconds, if there are a lot of unmerged parts in single partition.
merge_tree_settings primary_key_compress_block_size 65536 Primary key compression block size: the actual size of the block to compress.
merge_tree_settings primary_key_compression_codec ZSTD(3) Compression codec used by the primary key; the primary key is small enough and cached, so the default compression is ZSTD(3).
merge_tree_settings replicated_deduplication_window_for_async_inserts 10000 How many last hash values of async_insert blocks should be kept in ZooKeeper (old blocks will be deleted).
merge_tree_settings replicated_deduplication_window_seconds_for_async_inserts 604800 Similar to "replicated_deduplication_window_for_async_inserts", but determines old blocks by their lifetime. Hash of an inserted block will be deleted (and the block will not be deduplicated afterwards) if it is outside of one "window". You can set a very big replicated_deduplication_window to avoid duplicating INSERTs during that period of time.
merge_tree_settings simultaneous_parts_removal_limit 0 Maximum number of parts to remove during one CleanupThread iteration (0 means unlimited).
merge_tree_settings use_async_block_ids_cache 0 Use an in-memory cache to filter duplicated async inserts based on block IDs
settings allow_aggregate_partitions_independently 0 Enable independent aggregation of partitions on separate threads when the partition key suits the group-by key. Beneficial when the number of partitions is close to the number of cores and the partitions have roughly the same size
settings allow_asynchronous_read_from_io_pool_for_merge_tree 0 Use background I/O pool to read from MergeTree tables. This setting may increase performance for I/O bound queries
settings allow_custom_error_code_in_throwif 0 Enable custom error code in function throwIf(). If true, thrown exceptions may have unexpected error codes.
settings allow_execute_multiif_columnar 1 Allow execute multiIf function columnar
settings allow_experimental_analyzer 0 Allow experimental analyzer
settings allow_experimental_annoy_index 0 Allows to use Annoy index. Disabled by default because this feature is experimental
settings allow_experimental_inverted_index 0 If it is set to true, allow to use experimental inverted index.
settings allow_experimental_lightweight_delete 0 1 Obsolete setting, does nothing.
settings allow_experimental_query_cache 0 Enable experimental query cache
settings allow_experimental_undrop_table_query 0 Allow to use undrop query to restore dropped table in a limited time
settings allow_prefetched_read_pool_for_local_filesystem 0 Prefer the prefetched thread pool if all parts are on the local filesystem
settings allow_prefetched_read_pool_for_remote_filesystem 0 Prefer the prefetched thread pool if all parts are on a remote filesystem
settings allow_suspicious_fixed_string_types 0 In CREATE TABLE statement allows creating columns of type FixedString(n) with n > 256. FixedString with length >= 256 is suspicious and most likely indicates misuse
settings alter_sync 1 Wait for actions to manipulate the partitions. 0 - do not wait, 1 - wait for execution only of itself, 2 - wait for everyone.
settings ann_index_select_query_params Parameters passed to ANN indexes in SELECT queries, the format is 'param1=x, param2=y, ...'
settings async_insert_cleanup_timeout_ms 1000 Obsolete setting, does nothing.
settings async_insert_deduplicate 0 For async INSERT queries in the replicated table, specifies that deduplication of inserted blocks should be performed
settings async_insert_max_data_size 100000 1000000 Maximum size in bytes of unparsed data collected per query before being inserted
settings async_insert_max_query_number 450 Maximum number of insert queries accumulated before the batch is inserted
settings backup_batch_size_for_keeper_multiread 10000 Maximum size of batch for multiread request to [Zoo]Keeper during backup
settings backup_keeper_max_retries 20 Max retries for keeper operations during backup
settings backup_keeper_retry_initial_backoff_ms 100 Initial backoff timeout for [Zoo]Keeper operations during backup
settings backup_keeper_retry_max_backoff_ms 5000 Max backoff timeout for [Zoo]Keeper operations during backup
settings backup_keeper_value_max_size 1048576 Maximum size of data of a [Zoo]Keeper's node during backup
settings check_referential_table_dependencies 0 Check that DDL query (such as DROP TABLE or RENAME) will not break referential dependencies
settings cluster_for_parallel_replicas default Cluster for a shard in which current server is located
settings compile_aggregate_expressions 1 0 Compile aggregate functions to native code. This feature has a bug and should not be used.
settings database_replicated_allow_replicated_engine_arguments 1 Allow to create only Replicated tables in database with engine Replicated with explicit arguments
settings dialect clickhouse Which dialect will be used to parse query
settings dictionary_use_async_executor 0 Execute a pipeline for reading from a dictionary with several threads. It's supported only by DIRECT dictionary with CLICKHOUSE source.
settings enable_extended_results_for_datetime_functions 0 Enable date functions like toLastDayOfMonth to return Date32 results (instead of Date results) for Date32/DateTime64 arguments.
settings enable_filesystem_read_prefetches_log 0 Log to system.filesystem_read_prefetches_log during query. Should be used only for testing or debugging, not recommended to be turned on by default
settings enable_lightweight_delete 1 Enable lightweight DELETE mutations for MergeTree tables.
settings enable_memory_bound_merging_of_aggregation_results 0 Enable memory bound merging strategy for aggregation. Set it to true only if all nodes of your clusters have versions >= 22.12.
settings enable_multiple_prewhere_read_steps 0 Move more conditions from WHERE to PREWHERE and do reads from disk and filtering in multiple steps if there are multiple conditions combined with AND
settings enable_reads_from_query_cache 1 Enable reading results of SELECT queries from the query cache
settings enable_software_prefetch_in_aggregation 1 Enable use of software prefetch in aggregation
settings enable_writes_to_query_cache 1 Enable storing results of SELECT queries in the query cache
settings errors_output_format CSV Method to write Errors to text output.
settings except_default_mode ALL Set default mode in EXCEPT query. Possible values: empty string, 'ALL', 'DISTINCT'. If empty, a query without a mode will throw an exception.
settings filesystem_cache_max_download_size 137438953472 Max remote filesystem cache size that can be downloaded by a single query
settings filesystem_cache_max_wait_sec 5
settings filesystem_prefetch_max_memory_usage 1073741824 Maximum memory usage for prefetches. Zero means unlimited
settings filesystem_prefetch_min_bytes_for_single_read_task 8388608 Do not parallelize a read within one file if it is smaller than this number of bytes, i.e. one reader will not receive a read task smaller than this amount. This setting is recommended to avoid latency spikes for AWS GetObject requests
settings filesystem_prefetch_step_bytes 0 Prefetch step in bytes. Zero means auto - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task
settings filesystem_prefetch_step_marks 0 Prefetch step in marks. Zero means auto - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task
settings filesystem_prefetches_limit 0 Maximum number of prefetches. Zero means unlimited. The setting filesystem_prefetch_max_memory_usage is recommended instead if you want to limit prefetching
settings final 0 Query with the FINAL modifier by default. If the engine does not support final, it does not have any effect. On queries with multiple tables final is applied only on those that support it. It also works on distributed tables
settings force_aggregate_partitions_independently 0 Force the use of optimization when it is applicable, but heuristics decided not to use it
settings force_aggregation_in_order 0 Force use of aggregation in order on remote nodes during distributed aggregation. PLEASE, NEVER CHANGE THIS SETTING VALUE MANUALLY!
settings force_grouping_standard_compatibility 1 Make the GROUPING function return 1 when an argument is not used as an aggregation key
settings format_binary_max_array_size 1073741824 The maximum allowed size for Array in RowBinary format. It prevents allocating large amount of memory in case of corrupted data. 0 means there is no limit
settings format_binary_max_string_size 1073741824 The maximum allowed size for String in RowBinary format. It prevents allocating large amount of memory in case of corrupted data. 0 means there is no limit
settings format_json_object_each_row_column_for_object_name The name of column that will be used as object names in JSONObjectEachRow format. Column type should be String
settings grace_hash_join_initial_buckets 1 Initial number of grace hash join buckets
settings grace_hash_join_max_buckets 1024 Limit on the number of grace hash join buckets
settings http_max_request_param_data_size 10485760 Limit on size of request data used as a query parameter in predefined HTTP requests.
settings http_response_buffer_size 0 The number of bytes to buffer in the server memory before sending an HTTP response to the client or flushing to disk (when http_wait_end_of_query is enabled).
settings http_wait_end_of_query 0 Enable HTTP response buffering on the server-side.
settings input_format_bson_skip_fields_with_unsupported_types_in_schema_inference 0 Skip fields with unsupported types while schema inference for format BSON.
settings input_format_csv_detect_header 1 Automatically detect header with names and types in CSV format
settings input_format_custom_detect_header 1 Automatically detect header with names and types in CustomSeparated format
settings input_format_json_defaults_for_missing_elements_in_named_tuple 1 Insert default value in named tuple element if it's missing in json object
settings input_format_json_ignore_unknown_keys_in_named_tuple 1 Ignore unknown keys in json object for named tuples
settings input_format_json_named_tuples_as_objects 1 Deserialize named tuple columns as JSON objects
settings input_format_json_read_numbers_as_strings 0 Allow to parse numbers as strings in JSON input formats
settings input_format_json_read_objects_as_strings 1 Allow to parse JSON objects as strings in JSON input formats
settings input_format_json_validate_types_from_metadata 1 For JSON/JSONCompact/JSONColumnsWithMetadata input formats this controls whether format parser should check if data types from input metadata match data types of the corresponding columns from the table
settings input_format_native_allow_types_conversion 1 Allow data types conversion in Native input format
settings input_format_parquet_max_block_size 8192 Max block size for parquet reader.
settings input_format_record_errors_file_path Path of the file used to record errors while reading text formats (CSV, TSV).
settings input_format_tsv_detect_header 1 Automatically detect header with names and types in TSV format
settings insert_keeper_fault_injection_probability 0 Approximate probability of failure for a keeper request during insert. Valid value is in interval [0.0f, 1.0f]
settings insert_keeper_fault_injection_seed 0 0 - random seed, otherwise the setting value
settings insert_keeper_max_retries 20 Max retries for keeper operations during insert
settings insert_keeper_retry_initial_backoff_ms 100 Initial backoff timeout for keeper operations during insert
settings insert_keeper_retry_max_backoff_ms 10000 Max backoff timeout for keeper operations during insert
settings intersect_default_mode ALL Set default mode in INTERSECT query. Possible values: empty string, 'ALL', 'DISTINCT'. If empty, a query without a mode will throw an exception.
settings load_marks_asynchronously 0 Load MergeTree marks asynchronously
settings materialized_views_ignore_errors 0 Allows to ignore errors for MATERIALIZED VIEW, and deliver original block to the table regardless of MVs
settings max_alter_threads 'auto(6)' 'auto(12)' Obsolete setting, does nothing.
settings max_analyze_depth 5000 Maximum number of analyses performed by interpreter.
settings max_block_size 65505 65409 Maximum block size for reading
settings max_final_threads 16 'auto(12)' The maximum number of threads to read from table with FINAL.
settings max_insert_block_size 1048545 1048449 The maximum block size for insertion, if we control the creation of blocks for insertion.
settings max_joined_block_size_rows 65505 65409 Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited.
settings max_limit_for_ann_queries 1000000 Maximum LIMIT value up to which ANN indexes are used; prevents memory overflow in search queries using indexes
settings max_number_of_partitions_for_independent_aggregation 128 Maximal number of partitions in table to apply optimization
settings max_pipeline_depth 1000 0 Obsolete setting, does nothing.
settings max_query_cache_size 137438953472
settings max_rows_in_set_to_optimize_join 100000 Maximal size of the set to filter joined tables by each other row sets before joining. 0 - disable.
settings max_size_to_preallocate_for_aggregation 10000000 100000000 For how many elements it is allowed to preallocate space in all hash tables in total before aggregation
settings max_streams_for_merge_tree_reading 0 If is not zero, limit the number of reading streams for MergeTree table.
settings max_temporary_data_on_disk_size_for_query 0 The maximum amount of data consumed by temporary files on disk in bytes for all concurrently running queries. Zero means unlimited.
settings max_temporary_data_on_disk_size_for_user 0 The maximum amount of data consumed by temporary files on disk in bytes for all concurrently running user queries. Zero means unlimited.
settings max_threads 'auto(6)' 'auto(12)' The maximum number of threads to execute the request. By default, it is determined automatically.
settings min_insert_block_size_bytes 268427520 268402944 Squash blocks passed to INSERT query to specified size in bytes, if blocks are not big enough.
settings min_insert_block_size_rows 1048545 1048449 Squash blocks passed to INSERT query to specified size in rows, if blocks are not big enough.
settings move_all_conditions_to_prewhere 0 Move all viable conditions from WHERE to PREWHERE
settings optimize_distinct_in_order 1 0 This optimization has a bug and it is disabled. Enable DISTINCT optimization if some columns in DISTINCT form a prefix of sorting. For example, prefix of sorting key in merge tree or ORDER BY statement
settings optimize_duplicate_order_by_and_distinct 1 0 Remove duplicate ORDER BY and DISTINCT if it's possible
settings optimize_monotonous_functions_in_order_by 1 0 Replace monotonous function with its argument in ORDER BY
settings optimize_rewrite_aggregate_function_with_if 1 Rewrite aggregate functions with if expression as argument when logically equivalent. For example, avg(if(cond, col, null)) can be rewritten to avgIf(cond, col)
settings optimize_rewrite_array_exists_to_has 1 Rewrite arrayExists() functions to has() when logically equivalent. For example, arrayExists(x -> x = 1, arr) can be rewritten to has(arr, 1)
settings optimize_rewrite_sum_if_to_count_if 1 0 Rewrite sumIf() and sum(if()) functions to the countIf() function when logically equivalent
settings output_format_arrow_compression_method lz4_frame Compression method for Arrow output format. Supported codecs: lz4_frame, zstd, none (uncompressed)
settings output_format_arrow_fixed_string_as_fixed_byte_array 1 Use Arrow FIXED_SIZE_BINARY type instead of Binary for FixedString columns.
settings output_format_bson_string_as_string 0 Use BSON String type instead of Binary for String columns.
settings output_format_json_quote_64bit_floats 0 Controls quoting of 64-bit float numbers in JSON output format.
settings output_format_json_quote_decimals 0 Controls quoting of decimals in JSON output format.
settings output_format_json_validate_utf8 0 Validate UTF-8 sequences in JSON output formats, doesn't impact formats JSON/JSONCompact/JSONColumnsWithMetadata, they always validate utf8
settings output_format_orc_compression_method lz4 Compression method for ORC output format. Supported codecs: lz4, snappy, zlib, zstd, none (uncompressed)
settings output_format_parquet_compression_method lz4 Compression method for Parquet output format. Supported codecs: snappy, lz4, brotli, zstd, gzip, none (uncompressed)
settings output_format_parquet_fixed_string_as_fixed_byte_array 1 Use Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary for FixedString columns.
settings output_format_parquet_version 2.latest Parquet format version for output format. Supported versions: 1.0, 2.4, 2.6 and 2.latest (default)
settings output_format_sql_insert_max_batch_size 65505 65409 The maximum number of rows in one INSERT statement.
settings parallel_replicas_custom_key Custom key assigning work to replicas when parallel replicas are used.
settings parallel_replicas_custom_key_filter_type default Type of filter to use with custom key for parallel replicas. default - use modulo operation on the custom key, range - use range filter on custom key using all possible values for the value type of custom key.
settings parallel_replicas_for_non_replicated_merge_tree 0 If true, ClickHouse will use parallel replicas algorithm also for non-replicated MergeTree tables
settings parallel_replicas_single_task_marks_count_multiplier 2 A multiplier applied when calculating the minimal number of marks to retrieve from the coordinator. Applied only to remote replicas.
settings partial_result_on_first_cancel 0 Allows query to return a partial result after cancel.
settings parts_to_delay_insert 150 0 If the destination table contains at least that many active parts in a single partition, artificially slow down insert into table.
settings parts_to_throw_insert 300 0 If more than this number of active parts is in a single partition of the destination table, throw the 'Too many parts ...' exception.
settings query_cache_min_query_duration 0 Minimum time in milliseconds for a query to run for its result to be stored in the query cache.
settings query_cache_min_query_runs 0 Minimum number of times a SELECT query must run before its result is stored in the query cache
settings query_cache_share_between_users 0 Allow other users to read entries in the query cache
settings query_cache_store_results_of_queries_with_nondeterministic_functions 0 Store results of queries with non-deterministic functions (e.g. rand(), now()) in the query cache
settings query_cache_ttl 60 After this time in seconds entries in the query cache become stale
settings query_plan_aggregation_in_order 1 Use query plan for aggregation-in-order optimisation
settings query_plan_optimize_projection 1 Use query plan for projection optimisation
settings query_plan_read_in_order 1 Use query plan for read-in-order optimisation
settings query_plan_remove_redundant_distinct 1 Remove redundant Distinct step in query plan
settings query_plan_remove_redundant_sorting 1 Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries
settings regexp_dict_allow_hyperscan 1 Allow regexp_tree dictionary using Hyperscan library.
settings regexp_dict_allow_other_sources 0 Allow regexp_tree dictionary to use sources other than yaml source.
settings reject_expensive_hyperscan_regexps 1 Reject patterns which will likely be expensive to evaluate with hyperscan (due to NFA state explosion)
settings s3_list_object_keys_size 1000 Maximum number of files that could be returned in batch by ListObject request
settings s3_max_get_burst 0 Max number of requests that can be issued simultaneously before hitting request per second limit. By default (0) equals to s3_max_get_rps
settings s3_max_get_rps 0 Limit on S3 GET request per second rate before throttling. Zero means unlimited.
settings s3_max_put_burst 0 Max number of requests that can be issued simultaneously before hitting request per second limit. By default (0) equals to s3_max_put_rps
settings s3_max_put_rps 0 Limit on S3 PUT request per second rate before throttling. Zero means unlimited.
settings s3_max_unexpected_write_error_retries 4 The maximum number of retries in case of unexpected errors during S3 write.
settings s3_max_upload_part_size 5368709120 The maximum size of part to upload during multipart upload to S3.
settings s3_throw_on_zero_files_match 0 Throw an error, when ListObjects request cannot match any files
settings s3_upload_part_size_multiply_parts_count_threshold 1000 500 Each time this number of parts has been uploaded to S3, s3_min_upload_part_size is multiplied by s3_upload_part_size_multiply_factor.
settings schema_inference_make_columns_nullable 1 If set to true, all inferred types will be Nullable in schema inference for formats without information about nullability.
settings single_join_prefer_left_table 1 For single JOIN in case of identifier ambiguity prefer left table
settings sleep_in_receive_cancel_ms 0
settings storage_file_read_method mmap Method of reading data from storage file, one of: read, pread, mmap.
settings temporary_live_view_timeout 5 1 Obsolete setting, does nothing.
settings throw_on_error_from_cache_on_write_operations 0 Ignore error from cache when caching on write operations (INSERT, merges)
settings trace_profile_events 0 Send profile events and the value of each increment to system.trace_log on every increment, with trace_type 'ProfileEvent'
settings use_query_cache 0 Enable the query cache
settings use_structure_from_insertion_table_in_table_functions 0 2 Use structure from insertion table instead of schema inference from data. Possible values: 0 - disabled, 1 - enabled, 2 - auto
settings workload default Name of workload to be used to access resources
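
A cluster of the new settings above belongs to the experimental query cache (use_query_cache, enable_reads_from_query_cache, enable_writes_to_query_cache, query_cache_*). A minimal sketch of how they fit together in 23.3:

```sql
-- The feature is gated in 23.3:
SET allow_experimental_query_cache = 1;

-- First run computes and stores the result; an identical second run
-- is served from the cache (see QueryCacheHits/QueryCacheMisses events).
SELECT sum(number) FROM numbers(100000000) SETTINGS use_query_cache = 1;
SELECT sum(number) FROM numbers(100000000) SETTINGS use_query_cache = 1;

-- Entries are visible in the new system.query_cache table:
SELECT query, result_size, stale, expires_at FROM system.query_cache;
```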

Other

source name versions
asynchronous_metrics AsynchronousHeavyMetricsCalculationTimeSpent ['new']
asynchronous_metrics AsynchronousHeavyMetricsUpdateInterval ['new']
asynchronous_metrics AsynchronousMetricsUpdateInterval ['new']
asynchronous_metrics FilesystemCacheBytes ['new']
asynchronous_metrics FilesystemCacheFiles ['new']
asynchronous_metrics NumberOfDetachedByUserParts ['new']
asynchronous_metrics NumberOfDetachedParts ['new']
errors CANNOT_PARSE_IPV4 ['new']
errors CANNOT_PARSE_IPV6 ['new']
errors INVALID_SCHEDULER_NODE ['new']
errors INVALID_STATE ['new']
errors IO_URING_INIT_FAILED ['new']
errors IO_URING_SUBMIT_ERROR ['new']
errors MIXED_ACCESS_PARAMETER_TYPES ['new']
errors NAMED_COLLECTION_ALREADY_EXISTS ['new']
errors NAMED_COLLECTION_DOESNT_EXIST ['new']
errors NAMED_COLLECTION_IS_IMMUTABLE ['new']
errors NOT_INITIALIZED ['new']
errors REPLICA_ALREADY_EXISTS ['new']
errors REPLICA_IS_ALREADY_EXIST ['old']
errors RESOURCE_ACCESS_DENIED ['new']
errors RESOURCE_NOT_FOUND ['new']
errors SIZES_OF_ARRAYS_DOESNT_MATCH ['old']
errors SIZES_OF_ARRAYS_DONT_MATCH ['new']
errors THREAD_WAS_CANCELED ['new']
errors UNKNOWN_ELEMENT_OF_ENUM ['new']
events AsyncInsertCacheHits ['new']
events AsynchronousRemoteReadWaitMicroseconds ['new']
events BackgroundLoadingMarksTasks ['new']
events DiskS3AbortMultipartUpload ['new']
events DiskS3CompleteMultipartUpload ['new']
events DiskS3CopyObject ['new']
events DiskS3CreateMultipartUpload ['new']
events DiskS3DeleteObjects ['new']
events DiskS3GetObject ['new']
events DiskS3GetObjectAttributes ['new']
events DiskS3GetRequestThrottlerCount ['new']
events DiskS3GetRequestThrottlerSleepMicroseconds ['new']
events DiskS3HeadObject ['new']
events DiskS3ListObjects ['new']
events DiskS3PutObject ['new']
events DiskS3PutRequestThrottlerCount ['new']
events DiskS3PutRequestThrottlerSleepMicroseconds ['new']
events DiskS3ReadMicroseconds ['new']
events DiskS3ReadRequestsCount ['new']
events DiskS3ReadRequestsErrors ['new']
events DiskS3ReadRequestsRedirects ['new']
events DiskS3ReadRequestsThrottling ['new']
events DiskS3UploadPart ['new']
events DiskS3UploadPartCopy ['new']
events DiskS3WriteMicroseconds ['new']
events DiskS3WriteRequestsCount ['new']
events DiskS3WriteRequestsErrors ['new']
events DiskS3WriteRequestsRedirects ['new']
events DiskS3WriteRequestsThrottling ['new']
events ExternalJoinCompressedBytes ['new']
events ExternalJoinMerge ['new']
events ExternalJoinUncompressedBytes ['new']
events ExternalJoinWritePart ['new']
events ExternalProcessingCompressedBytesTotal ['new']
events ExternalProcessingFilesTotal ['new']
events ExternalProcessingUncompressedBytesTotal ['new']
events ExternalSortCompressedBytes ['new']
events ExternalSortUncompressedBytes ['new']
events FailedAsyncInsertQuery ['new']
events FileSegmentWriteMicroseconds ['new']
events IOUringCQEsCompleted ['new']
events IOUringCQEsFailed ['new']
events IOUringSQEsResubmits ['new']
events IOUringSQEsSubmitted ['new']
events KeeperCheckRequest ['new']
events KeeperCreateRequest ['new']
events KeeperExistsRequest ['new']
events KeeperGetRequest ['new']
events KeeperListRequest ['new']
events KeeperMultiReadRequest ['new']
events KeeperMultiRequest ['new']
events KeeperRemoveRequest ['new']
events KeeperSetRequest ['new']
events LoadedMarksCount ['new']
events LoadedMarksMemoryBytes ['new']
events MemoryAllocatorPurge ['new']
events MemoryAllocatorPurgeTimeMicroseconds ['new']
events MergeTreeAllRangesAnnouncementsSent ['new']
events MergeTreeAllRangesAnnouncementsSentElapsedMicroseconds ['new']
events MergeTreePrefetchedReadPoolInit ['new']
events MergeTreeReadTaskRequestsReceived ['new']
events MergeTreeReadTaskRequestsSent ['new']
events MergeTreeReadTaskRequestsSentElapsedMicroseconds ['new']
events QueryCacheHits ['new']
events QueryCacheMisses ['new']
events ReadBufferFromS3InitMicroseconds ['new']
events ReadTaskRequestsReceived ['new']
events ReadTaskRequestsSent ['new']
events ReadTaskRequestsSentElapsedMicroseconds ['new']
events RemoteFSPrefetchedBytes ['new']
events RemoteFSUnprefetchedBytes ['new']
events RemoteReadThrottlerBytes ['new']
events RemoteReadThrottlerSleepMicroseconds ['new']
events RemoteWriteThrottlerBytes ['new']
events RemoteWriteThrottlerSleepMicroseconds ['new']
events S3AbortMultipartUpload ['new']
events S3CompleteMultipartUpload ['new']
events S3CopyObject ['new']
events S3CreateMultipartUpload ['new']
events S3DeleteObjects ['new']
events S3GetObject ['new']
events S3GetObjectAttributes ['new']
events S3GetRequestThrottlerCount ['new']
events S3GetRequestThrottlerSleepMicroseconds ['new']
events S3HeadObject ['new']
events S3ListObjects ['new']
events S3PutObject ['new']
events S3PutRequestThrottlerCount ['new']
events S3PutRequestThrottlerSleepMicroseconds ['new']
events S3UploadPart ['new']
events S3UploadPartCopy ['new']
events ServerStartupMilliseconds ['new']
events SynchronousRemoteReadWaitMicroseconds ['new']
events ThreadpoolReaderSubmit ['new']
events WaitMarksLoadMicroseconds ['new']
events WaitPrefetchTaskMicroseconds ['new']
events WriteBufferFromS3Microseconds ['new']
events WriteBufferFromS3RequestsErrors ['new']
formats BSONEachRow ['new']
formats JSONObjectEachRow ['new']
formats JSONStringEachRow ['old']
functions BLAKE3 ['new']
functions DATE_FORMAT ['new']
functions DATE_TRUNC ['new']
functions JSONArrayLength ['new']
functions JSON_ARRAY_LENGTH ['new']
functions MAP_FROM_ARRAYS ['new']
functions REGEXP_EXTRACT ['new']
functions TO_UNIXTIME ['new']
functions TimeDiff ['new']
functions ULIDStringToDateTime ['new']
functions UTCTimestamp ['new']
functions UTC_timestamp ['new']
functions accurate_Cast ['old']
functions accurate_CastOrNull ['old']
functions addInterval ['new']
functions addTupleOfIntervals ['new']
functions age ['new']
functions analysisOfVariance ['new']
functions anova ['new']
functions arrayPartialReverseSort ['new']
functions arrayPartialShuffle ['new']
functions arrayPartialSort ['new']
functions arrayShuffle ['new']
functions ascii ['new']
functions catboostEvaluate ['new']
functions concatWithSeparator ['new']
functions concatWithSeparatorAssumeInjective ['new']
functions concat_ws ['new']
functions corrMatrix ['new']
functions covarPopMatrix ['new']
functions covarSampMatrix ['new']
functions cutToFirstSignificantSubdomainCustomRFC ['new']
functions cutToFirstSignificantSubdomainCustomWithWWWRFC ['new']
functions cutToFirstSignificantSubdomainRFC ['new']
functions cutToFirstSignificantSubdomainWithWWWRFC ['new']
functions date_trunc ['old']
functions dictGetIPv4 ['new']
functions dictGetIPv4OrDefault ['new']
functions dictGetIPv6 ['new']
functions dictGetIPv6OrDefault ['new']
functions displayName ['new']
functions divideDecimal ['new']
functions domainRFC ['new']
functions domainWithoutWWWRFC ['new']
functions factorial ['new']
functions filesystemFree ['old']
functions filesystemUnreserved ['new']
functions firstSignificantSubdomainCustomRFC ['new']
functions firstSignificantSubdomainRFC ['new']
functions formatDateTimeInJodaSyntax ['new']
functions formatReadableDecimalSize ['new']
functions fromUnixTimestampInJodaSyntax ['new']
functions generateULID ['new']
functions getSubcolumn ['new']
functions groupArrayLast ['new']
functions hasTokenCaseInsensitiveOrNull ['new']
functions hasTokenOrNull ['new']
functions instr ['new']
functions mapFromArrays ['new']
functions medianInterpolatedWeighted ['new']
functions modelEvaluate ['old']
functions mortonDecode ['new']
functions mortonEncode ['new']
functions multiplyDecimal ['new']
functions nested ['new']
functions ntile ['new']
functions parseDateTime ['new']
functions parseDateTimeInJodaSyntax ['new']
functions parseDateTimeInJodaSyntaxOrNull ['new']
functions parseDateTimeInJodaSyntaxOrZero ['new']
functions parseDateTimeOrNull ['new']
functions parseDateTimeOrZero ['new']
functions pmod ['new']
functions portRFC ['new']
functions positiveModulo ['new']
functions positive_modulo ['new']
functions quantileInterpolatedWeighted ['new']
functions quantilesInterpolatedWeighted ['new']
functions randBernoulli ['new']
functions randBinomial ['new']
functions randCanonical ['new']
functions randChiSquared ['new']
functions randExponential ['new']
functions randFisherF ['new']
functions randLogNormal ['new']
functions randNegativeBinomial ['new']
functions randNormal ['new']
functions randPoisson ['new']
functions randStudentT ['new']
functions randUniform ['new']
functions regexpExtract ['new']
functions sipHash128Keyed ['new']
functions sipHash128Reference ['new']
functions sipHash128ReferenceKeyed ['new']
functions sipHash64Keyed ['new']
functions splitByAlpha ['new']
functions str_to_date ['new']
functions subtractInterval ['new']
functions subtractTupleOfIntervals ['new']
functions toDecimalString ['new']
functions toIPv4OrZero ['new']
functions toIPv6OrZero ['new']
functions topLevelDomainRFC ['new']
functions tryBase58Decode ['new']
functions tryDecrypt ['new']
functions uniqThetaIntersect ['new']
functions uniqThetaNot ['new']
functions uniqThetaUnion ['new']
functions widthBucket ['new']
functions width_bucket ['new']
functions xxh3 ['new']
licenses annoy ['new']
licenses aws-c-auth ['new']
licenses aws-c-cal ['new']
licenses aws-c-compression ['new']
licenses aws-c-http ['new']
licenses aws-c-io ['new']
licenses aws-c-mqtt ['new']
licenses aws-c-s3 ['new']
licenses aws-c-sdkutils ['new']
licenses aws-crt-cpp ['new']
licenses aws-s2n-tls ['new']
licenses corrosion ['new']
licenses crc32-s390x ['new']
licenses crc32-vpmsum ['new']
licenses google-benchmark ['new']
licenses idxd-config ['new']
licenses libcxx ['old']
licenses libcxxabi ['old']
licenses liburing ['new']
licenses llvm ['old']
licenses llvm-project ['new']
licenses morton-nd ['new']
licenses openssl ['new']
licenses poco ['old']
licenses xxHash ['new']
metrics ActiveAsyncDrainedConnections ['old']
metrics ActiveSyncDrainedConnections ['old']
metrics AggregatorThreads ['new']
metrics AggregatorThreadsActive ['new']
metrics AsyncDrainedConnections ['old']
metrics AsyncInsertCacheSize ['new']
metrics AsynchronousInsertThreads ['new']
metrics AsynchronousInsertThreadsActive ['new']
metrics BackgroundBufferFlushSchedulePoolSize ['new']
metrics BackgroundCommonPoolSize ['new']
metrics BackgroundDistributedSchedulePoolSize ['new']
metrics BackgroundFetchesPoolSize ['new']
metrics BackgroundMergesAndMutationsPoolSize ['new']
metrics BackgroundMessageBrokerSchedulePoolSize ['new']
metrics BackgroundMovePoolSize ['new']
metrics BackgroundSchedulePoolSize ['new']
metrics BackupsIOThreads ['new']
metrics BackupsIOThreadsActive ['new']
metrics BackupsThreads ['new']
metrics BackupsThreadsActive ['new']
metrics CacheDictionaryThreads ['new']
metrics CacheDictionaryThreadsActive ['new']
metrics DDLWorkerThreads ['new']
metrics DDLWorkerThreadsActive ['new']
metrics DatabaseCatalogThreads ['new']
metrics DatabaseCatalogThreadsActive ['new']
metrics DatabaseOnDiskThreads ['new']
metrics DatabaseOnDiskThreadsActive ['new']
metrics DatabaseOrdinaryThreads ['new']
metrics DatabaseOrdinaryThreadsActive ['new']
metrics DestroyAggregatesThreads ['new']
metrics DestroyAggregatesThreadsActive ['new']
metrics DiskObjectStorageAsyncThreads ['new']
metrics DiskObjectStorageAsyncThreadsActive ['new']
metrics DistributedInsertThreads ['new']
metrics DistributedInsertThreadsActive ['new']
metrics HashedDictionaryThreads ['new']
metrics HashedDictionaryThreadsActive ['new']
metrics IOPrefetchThreads ['new']
metrics IOPrefetchThreadsActive ['new']
metrics IOThreads ['new']
metrics IOThreadsActive ['new']
metrics IOUringInFlightEvents ['new']
metrics IOUringPendingEvents ['new']
metrics IOWriterThreads ['new']
metrics IOWriterThreadsActive ['new']
metrics MMappedAllocBytes ['new']
metrics MMappedAllocs ['new']
metrics MarksLoaderThreads ['new']
metrics MarksLoaderThreadsActive ['new']
metrics MergeTreeAllRangesAnnouncementsSent ['new']
metrics MergeTreeBackgroundExecutorThreads ['new']
metrics MergeTreeBackgroundExecutorThreadsActive ['new']
metrics MergeTreeDataSelectExecutorThreads ['new']
metrics MergeTreeDataSelectExecutorThreadsActive ['new']
metrics MergeTreePartsCleanerThreads ['new']
metrics MergeTreePartsCleanerThreadsActive ['new']
metrics MergeTreePartsLoaderThreads ['new']
metrics MergeTreePartsLoaderThreadsActive ['new']
metrics MergeTreeReadTaskRequestsSent ['new']
metrics Move ['new']
metrics ParallelFormattingOutputFormatThreads ['new']
metrics ParallelFormattingOutputFormatThreadsActive ['new']
metrics ParallelParsingInputFormatThreads ['new']
metrics ParallelParsingInputFormatThreadsActive ['new']
metrics ReadTaskRequestsSent ['new']
metrics RemoteRead ['new']
metrics RestartReplicaThreads ['new']
metrics RestartReplicaThreadsActive ['new']
metrics RestoreThreads ['new']
metrics RestoreThreadsActive ['new']
metrics StartupSystemTablesThreads ['new']
metrics StartupSystemTablesThreadsActive ['new']
metrics StorageDistributedThreads ['new']
metrics StorageDistributedThreadsActive ['new']
metrics StorageHiveThreads ['new']
metrics StorageHiveThreadsActive ['new']
metrics StorageS3Threads ['new']
metrics StorageS3ThreadsActive ['new']
metrics SyncDrainedConnections ['old']
metrics SystemReplicasThreads ['new']
metrics SystemReplicasThreadsActive ['new']
metrics TablesLoaderThreads ['new']
metrics TablesLoaderThreadsActive ['new']
metrics TemporaryFilesForAggregation ['new']
metrics TemporaryFilesForJoin ['new']
metrics TemporaryFilesForSort ['new']
metrics TemporaryFilesUnknown ['new']
metrics ThreadPoolFSReaderThreads ['new']
metrics ThreadPoolFSReaderThreadsActive ['new']
metrics ThreadPoolRemoteFSReaderThreads ['new']
metrics ThreadPoolRemoteFSReaderThreadsActive ['new']
metrics ThreadsInOvercommitTracker ['new']
metrics TotalTemporaryFiles ['new']
privileges ALTER NAMED COLLECTION ['new']
privileges CREATE ARBITRARY TEMPORARY TABLE ['new']
privileges CREATE NAMED COLLECTION ['new']
privileges DROP NAMED COLLECTION ['new']
privileges NAMED COLLECTION CONTROL ['new']
privileges SHOW CACHES ['old']
privileges SHOW FILESYSTEM CACHES ['new']
privileges SHOW NAMED COLLECTIONS ['new']
privileges SHOW NAMED COLLECTIONS SECRETS ['new']
privileges SYSTEM DROP QUERY CACHE ['new']
privileges SYSTEM DROP S3 CLIENT CACHE ['new']
privileges SYSTEM RELOAD USERS ['new']
privileges SYSTEM SYNC FILE CACHE ['new']
privileges SYSTEM WAIT LOADING PARTS ['new']
privileges UNDROP TABLE ['new']
table_engines DeltaLake ['new']
table_engines Hudi ['new']
table_engines Iceberg ['new']
table_engines KeeperMap ['new']
table_engines OSS ['new']
table_functions MeiliSearch ['old']
table_functions deltaLake ['new']
table_functions hudi ['new']
table_functions iceberg ['new']
table_functions meilisearch ['new']
table_functions oss ['new']
table_functions viewExplain ['new']
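
The new lakehouse integrations appear both as table engines (DeltaLake, Hudi, Iceberg) and as table functions. A hedged sketch of the table-function form, assuming it takes s3()-style arguments; the bucket URL and credentials are placeholders:

```sql
-- Read a Delta Lake table in place; path and keys are placeholders.
SELECT count()
FROM deltaLake('https://my-bucket.s3.amazonaws.com/delta_table/', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY');
```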

Removed from default configs

config.xml	/clickhouse/concurrent_threads_soft_limit	{}	0

config.xml	/clickhouse/query_masking_rules	{}
config.xml	/clickhouse/query_masking_rules/rule	{}
config.xml	/clickhouse/query_masking_rules/rule/name	{}	hide encrypt/decrypt arguments
config.xml	/clickhouse/query_masking_rules/rule/regexp	{}	((?:aes_)?(?:encrypt|decrypt)(?:_mysql)?)\\s*\\(\\s*(?:'(?:\\\\'|.)+'|.*?)\\s*\\)
config.xml	/clickhouse/query_masking_rules/rule/replace	{}	\\1(???)

users.xml	/clickhouse/profiles/default/load_balancing	{}	random

Added to default configs

config.xml	/clickhouse/concurrent_threads_soft_limit_num	{}	0
config.xml	/clickhouse/concurrent_threads_soft_limit_ratio_to_cores	{}	0

config.xml	/clickhouse/allow_plaintext_password	{}	1
config.xml	/clickhouse/allow_no_password	{}	1
config.xml	/clickhouse/allow_implicit_no_password	{}	1

config.xml	/clickhouse/access_control_improvements/settings_constraints_replace_previous	{}	false
config.xml	/clickhouse/access_control_improvements/role_cache_expiration_time_seconds	{}	600

config.xml	/clickhouse/remote_servers/parallel_replicas	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/internal_replication	{}	false
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica/host	{}	127.0.0.1
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[2]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[2]/host	{}	127.0.0.2
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[2]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[3]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[3]/host	{}	127.0.0.3
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[3]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[4]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[4]/host	{}	127.0.0.4
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[4]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[5]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[5]/host	{}	127.0.0.5
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[5]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[6]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[6]/host	{}	127.0.0.6
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[6]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[7]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[7]/host	{}	127.0.0.7
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[7]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[8]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[8]/host	{}	127.0.0.8
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[8]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[9]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[9]/host	{}	127.0.0.9
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[9]/port	{}	9000
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[10]	{}
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[10]/host	{}	127.0.0.10
config.xml	/clickhouse/remote_servers/parallel_replicas/shard/replica[10]/port	{}	9000
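
The built-in parallel_replicas cluster above (ten loopback replicas) exists to exercise the parallel-replicas settings from the settings table (cluster_for_parallel_replicas, parallel_replicas_custom_key, ...). A hedged sketch of its use; the table is hypothetical, and allow_experimental_parallel_reading_from_replicas is assumed to predate this release since it is absent from the diff:

```sql
SELECT count()
FROM my_merge_tree_table  -- hypothetical table
SETTINGS
    allow_experimental_parallel_reading_from_replicas = 1,
    max_parallel_replicas = 3,
    cluster_for_parallel_replicas = 'parallel_replicas',
    parallel_replicas_for_non_replicated_merge_tree = 1;
```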

config.xml	/clickhouse/asynchronous_insert_log	{}
config.xml	/clickhouse/asynchronous_insert_log/database	{}	system
config.xml	/clickhouse/asynchronous_insert_log/table	{}	asynchronous_insert_log
config.xml	/clickhouse/asynchronous_insert_log/flush_interval_milliseconds	{}	7500
config.xml	/clickhouse/asynchronous_insert_log/partition_by	{}	event_date
config.xml	/clickhouse/asynchronous_insert_log/ttl	{}	event_date + INTERVAL 3 DAY
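
With the config block above in place, async-insert flushes get logged to the new system.asynchronous_insert_log table; a quick way to inspect recent ones:

```sql
-- Flushed asynchronous inserts, newest first.
SELECT *
FROM system.asynchronous_insert_log
ORDER BY event_time DESC
LIMIT 10;
```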