Explore slow database queries

Find the slow query

  • First, find the PID of the PostgreSQL backend process serving the problematic connection:
$ ps faux
postgres 10980 35.3  6.9 297584 141476 ?       Ss   14:45   5:17  \_ postgres: 9.6/main: danypr92 staging [local] idle in transaction
  • We can then look up that PID in the pg_stat_activity view (a broader query that lists every stuck session is sketched after this list):
staging=> select * from pg_stat_activity where pid = 10980;
-[ RECORD 1 ]-------------------------------------------------------------------------
datname | staging
pid     | 10980
state   | idle in transaction
query   | SELECT "a"."id" AS "id", "a"."create_uid" AS "create_uid", "a"."product" AS "product", "a"."create_date" AS "create_date", "a"."destiny" AS "destiny", "a"."date_export" AS "date_export", "a"."contract" AS "contract", "a"."untaxed_amount" AS "untaxed_amount", "a"."state" AS "state", "a"."date_event" AS "date_event", "a"."write_date" AS "write_date", "a"."tax_amount" AS "tax_amount", "a"."write_uid" AS "write_uid", "a"."dealer" AS "dealer", "a"."invoice_line" AS "invoice_line", "a"."quantity" AS "quantity", CAST(EXTRACT('EPOCH' FROM COALESCE("a"."write_date", "a"."create_date")) AS VARCHAR) AS "_timestamp" FROM "contract_invoice_line_detail" AS "a" WHERE (("a"."date_event" = '2017-10-14T13:50:46'::timestamp) AND ("a"."contract" = 5501) AND ("a"."invoice_line" = 235661)) ORDER BY "a"."id" ASC
  • Use EXPLAIN to show the planner's cost estimates for that query (a variant that measures actual execution time follows this list):
staging=> EXPLAIN(SELECT "a"."id" AS "id", "a"."create_uid" AS "create_uid", "a"."product" AS "product", "a"."create_date" AS "create_date", "a"."destiny" AS "destiny", "a"."date_export" AS "date_export", "a"."contract" AS "contract", "a"."untaxed_amount" AS "untaxed_amount", "a"."state" AS "state", "a"."date_event" AS "date_event", "a"."write_date" AS "write_date", "a"."tax_amount" AS "tax_amount", "a"."write_uid" AS "write_uid", "a"."dealer" AS "dealer", "a"."invoice_line" AS "invoice_line", "a"."quantity" AS "quantity", CAST(EXTRACT('EPOCH' FROM COALESCE("a"."write_date", "a"."create_date")) AS VARCHAR) AS "_timestamp" FROM "contract_invoice_line_detail" AS "a" WHERE (("a"."date_event" = '2017-10-14T13:50:46'::timestamp) AND ("a"."contract" = 5501) AND ("a"."invoice_line" = 235661)) ORDER BY "a"."id" ASC);
                                                                        QUERY PLAN                                                                         
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=8.46..8.46 rows=1 width=125)
   Sort Key: id
   ->  Index Scan using contract_invoice_line_detail_date_event_contract_product_idx on contract_invoice_line_detail a  (cost=0.42..8.45 rows=1 width=125)
         Index Cond: ((date_event = '2017-10-14 13:50:46'::timestamp without time zone) AND (contract = 5501))
         Filter: (invoice_line = 235661)
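
EXPLAIN on its own only reports the planner's estimates. To see actual run time and buffer usage, the same statement can be run under EXPLAIN (ANALYZE, BUFFERS); because ANALYZE really executes the statement, wrapping it in a transaction that is rolled back is a safe habit (it matters for data-modifying statements). A sketch against the same table and WHERE clause, selecting only the id column for brevity:

staging=> BEGIN;
staging=> EXPLAIN (ANALYZE, BUFFERS)
          SELECT "a"."id"
          FROM "contract_invoice_line_detail" AS "a"
          WHERE ("a"."date_event" = '2017-10-14T13:50:46'::timestamp)
            AND ("a"."contract" = 5501)
            AND ("a"."invoice_line" = 235661)
          ORDER BY "a"."id" ASC;
staging=> ROLLBACK;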

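When the problematic session is not already known, pg_stat_activity can also be filtered directly. A minimal sketch that lists sessions sitting in idle in transaction for more than five minutes (the threshold is an arbitrary choice), oldest transaction first:

staging=> SELECT pid, datname, usename, state,
              now() - xact_start AS xact_age,
              left(query, 60) AS query
          FROM pg_stat_activity
          WHERE state = 'idle in transaction'
            AND now() - xact_start > interval '5 minutes'
          ORDER BY xact_age DESC;
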
pg_stat_statements

Add this to postgresql.conf:

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

pg_stat_statements.max = 10000
pg_stat_statements.track = all

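Note that shared_preload_libraries is only read at server start, so reloading the configuration is not enough; the server has to be restarted after editing postgresql.conf. A sketch assuming a Debian/Ubuntu install of PostgreSQL 9.6 managed by systemd (adjust the service name to your cluster):

$ sudo systemctl restart postgresql@9.6-main
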
Create the extension in the database:

CREATE EXTENSION pg_stat_statements;
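
If the library was not preloaded (for example because the server has not been restarted yet), querying the view fails with an error saying that pg_stat_statements must be loaded via shared_preload_libraries, so a simple count is a quick sanity check:

SELECT count(*) FROM pg_stat_statements;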

Use the application and trigger its most expensive actions so that pg_stat_statements collects representative data.

View the metrics through the pg_stat_statements view:

postgres=# SELECT query, calls, total_time, rows, 100.0 * shared_blks_hit /
               nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
          FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;
-[ RECORD 1 ]---------------------------------------------------------------------
query       | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2;
calls       | 3000
total_time  | 9609.00100000002
rows        | 2836
hit_percent | 99.9778970000200936
-[ RECORD 2 ]---------------------------------------------------------------------
query       | UPDATE pgbench_tellers SET tbalance = tbalance + $1 WHERE tid = $2;
calls       | 3000
total_time  | 8015.156
rows        | 2990
hit_percent | 99.9731126579631345
-[ RECORD 3 ]---------------------------------------------------------------------
query       | copy pgbench_accounts from stdin
calls       | 1
total_time  | 310.624
rows        | 100000
hit_percent | 0.30395136778115501520
-[ RECORD 4 ]---------------------------------------------------------------------
query       | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2;
calls       | 3000
total_time  | 271.741999999997
rows        | 3000
hit_percent | 93.7968855088209426
-[ RECORD 5 ]---------------------------------------------------------------------
query       | alter table pgbench_accounts add primary key (aid)
calls       | 1
total_time  | 81.42
rows        | 0
hit_percent | 34.4947735191637631
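
To measure one specific action in isolation, the collected statistics can be discarded first and the action then repeated; in 9.6, pg_stat_statements_reset() can only be run by a superuser:

postgres=# SELECT pg_stat_statements_reset();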