log_min_duration_statement = 0 and I/O impact

How to estimate the I/O impact of writing Postgres logs with `log_min_duration_statement = 0`:

1. Run `select pg_stat_statements_reset();` and wait N seconds (where N >> 60 – say, 1-24 hours, covering typical busy hours). Remember when the reset happened and write the timestamp down somewhere – it will be needed later!

2. Check that `select count(*) from pg_stat_statements` is lower than `pg_stat_statements.max`. If it has reached that limit, raise `pg_stat_statements.max` and restart from step 1 (a check query is sketched right after the estimate below).

3. Get the estimate:

```sql
\set TS_PGSS_RESET 'XXXX-XX-XX XX:XX:XX'

select
  sum(calls * length(query)) as total_bytes,
  sum(calls * length(query)) / extract(epoch from now() - :'TS_PGSS_RESET') as bytes_per_sec
from pg_stat_statements;

-- this will give the number of bytes per second.
```
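For step 2, a minimal check of whether the number of tracked query groups has reached the limit (a sketch; the comparison is simply `count(*)` vs. the `pg_stat_statements.max` setting):

```sql
-- If the limit is reached, the least-used entries have been evicted
-- and the estimate will be too low.
select
  (select count(*) from pg_stat_statements) as query_groups,
  current_setting('pg_stat_statements.max')::int as pgss_max,
  (select count(*) from pg_stat_statements)
    >= current_setting('pg_stat_statements.max')::int as limit_reached;
```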

The step 3 query gives our estimate. The real load will be slightly higher due to:

  - presence of concrete parameter values (`pg_stat_statements.query` generally does not contain them),
  - additional wrapping text for each query (the log line prefix, keywords such as `LOG:  duration: XXX ms  statement: `),
  - additional messages (connections/disconnections, autovacuum, locks, temp file logging, error messages, warnings, etc.).

Also, if `pg_stat_statements.track = all`, some queries might be counted multiple times.
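To get a feel for the wrapping-text overhead, a per-call constant can be added directly to the estimate. The figure of 100 bytes below is purely an assumed placeholder for the log line prefix and the `LOG: ... statement:` keywords – measure your actual `log_line_prefix` output and adjust:

```sql
-- Same estimate as in step 3, with an assumed ~100 bytes of per-call wrapping text.
-- 100 is a placeholder, not a measured value; :'TS_PGSS_RESET' is set as in step 3.
select
  sum(calls * (length(query) + 100)) as total_bytes_with_overhead,
  sum(calls * (length(query) + 100))
    / extract(epoch from now() - :'TS_PGSS_RESET') as bytes_per_sec_with_overhead
from pg_stat_statements;
```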

Also, it is worth taking into account how the workload is usually distributed over a day or a week. For example, if bytes_per_sec averages around ~500 kB/s, then during the busiest hours spikes of many MB/s may occur, and this can have a significant impact on disk performance.
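To observe that variation instead of a single average, one option is to snapshot the counter periodically and look at the deltas. A minimal sketch, assuming a hypothetical table `pgss_snapshot` populated e.g. hourly via cron and no stats resets between snapshots:

```sql
-- Hypothetical snapshot table; the INSERT is meant to be run periodically.
create table if not exists pgss_snapshot (
  captured_at timestamptz not null default now(),
  total_bytes numeric
);

insert into pgss_snapshot (total_bytes)
select sum(calls * length(query)) from pg_stat_statements;

-- Per-interval rate between consecutive snapshots.
select
  captured_at,
  (total_bytes - lag(total_bytes) over (order by captured_at))
    / extract(epoch from captured_at - lag(captured_at) over (order by captured_at))
      as bytes_per_sec_in_interval
from pgss_snapshot
order by captured_at;
```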


Alternative query:

```sql
-- WARNING: this will be OK only if pg_stat_reset() and
-- pg_stat_statements_reset() were last invoked at the same time!

with const(stats_since, "pg_stat_statements.max") as (
  select
    (select stats_reset from pg_stat_database where datname = current_database()),
    (select setting from pg_settings where name = 'pg_stat_statements.max')
)
select
  (select stats_since from const) as stats_since,
  (select now() - stats_since from const) as stats_age,
  count(*) as query_groups,
  (select "pg_stat_statements.max" from const) as "pg_stat_statements.max",
  sum(calls * length(query)) as total_bytes,
  sum(calls * length(query)) / extract(epoch from now() - (select stats_since from const)) as bytes_per_sec
from pg_stat_statements;
```

How to reset the stats (fully) and remember the reset time in a psql variable:

```sql
select now() as "TS_PGSS_RESET"
from
  pg_stat_reset(),
  pg_stat_reset_shared('archiver') a,
  pg_stat_reset_shared('bgwriter') b,
  pg_stat_statements_reset()
  --, pg_stat_kcache_reset() -- optional, uncomment if needed
\gset

select :'TS_PGSS_RESET' as reset_timestamp;
```
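After the `\gset`, the captured timestamp can be plugged straight into the step 3 estimate query without retyping it:

```sql
select
  sum(calls * length(query)) as total_bytes,
  sum(calls * length(query)) / extract(epoch from now() - :'TS_PGSS_RESET') as bytes_per_sec
from pg_stat_statements;
```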