
@kigster
Last active August 1, 2020 23:46
PostgreSQL config file for high performance writes on Joyent Cloud, used by Wanelo.com in production (4K commits/second on a dedicated 80GB instance).
max_connections = 1500 # (change requires restart)
shared_buffers = 12000MB # min 128kB, based on 80GB RAM DB
temp_buffers = 8MB # min 800kB
work_mem = 64MB # min 64kB
maintenance_work_mem = 1500MB # min 1MB
wal_level = hot_standby # minimal, archive, or hot_standby
checkpoint_segments = 256 # in logfile segments, min 1, 16MB each
checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
max_wal_senders = 6 # max number of walsender processes
wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables
hot_standby = 'on' # "on" allows queries during recovery
hot_standby_feedback = 'on' # we may want to experiment with it being 'off'
max_standby_streaming_delay = 30s # max delay before canceling queries
random_page_cost = 2.0 # same scale as above
effective_cache_size = 65824MB # based on 80GB RAM DB
# Logging
log_destination = 'syslog' # Valid values are combinations of
log_min_duration_statement = 80 # -1 is disabled, 0 logs all statements
log_line_prefix = ''
log_temp_files = 0 # log temporary files equal or larger
log_checkpoints = on # log checkpoints
log_timezone = 'PST8PDT'
# use stat_statements, and log stats into /tmp (RAM on SmartOS)
# note that the startup script must re-create /tmp/pg_stats_temp_directory upon server restart
shared_preload_libraries = '$libdir/pg_stat_statements' # (change requires restart)
stats_temp_directory = '/tmp/pg_stats_temp_directory' # log stats on RAM (/tmp) instead of hard disk
autovacuum_max_workers = 3 # max number of autovacuum subprocesses
autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
autovacuum_freeze_max_age = 1000000000 # maximum XID age before forced vacuum
vacuum_freeze_min_age = 5000000 # minimum XID age at which VACUUM freezes row versions
vacuum_freeze_table_age = 500000000 # XID age at which VACUUM scans the whole table
timezone = 'PST8PDT'
lc_messages = 'C' # locale for system error messages
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
# these are settings for high-throughput write load
wal_buffers = 32MB # shared memory for WAL data not yet flushed to disk
synchronous_commit = off # do not wait for WAL flush at commit; a crash may lose recent commits, but cannot corrupt data
wal_writer_delay = 200ms # how often the WAL writer flushes WAL to disk
commit_delay = 100 # microseconds to wait before a WAL flush, allowing group commit
commit_siblings = 5 # minimum concurrent open transactions before commit_delay applies
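
The wal_level, max_wal_senders, wal_keep_segments, and hot_standby settings above cover the primary side of streaming replication. As a rough sketch of the matching standby side, assuming the PostgreSQL 9.x era this file targets (checkpoint_segments was removed in 9.5) and placeholder host, user, and path values, the standby's recovery.conf might look like:

standby_mode = 'on' # stay in recovery and keep streaming WAL from the primary
primary_conninfo = 'host=10.0.0.1 port=5432 user=replicator' # placeholder connection info
trigger_file = '/var/pgsql/failover.trigger' # touching this file promotes the standby
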
@tschellenbach

Hey guys,

Did you consider setting max_connections = 80 and adding pgbouncer in front of your DB?
Adding pgbouncer was one of our largest jumps in performance.

Cheers,
Thierry

CTO Fashiolista

kigster commented Apr 9, 2013

We actually did switch to pgbouncer just recently. No change in performance (I wouldn't expect one unless you were hitting lock contention on the database due to too many connections), but the connection count dropped about 5X.
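
For reference, a minimal pgbouncer setup along the lines Thierry describes might look like the sketch below; the database name, file paths, and pool sizes are illustrative assumptions, not Wanelo's actual settings:

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling releases server connections back to the pool between transactions
pool_mode = transaction
; actual PostgreSQL connections per database/user pair
default_pool_size = 80
; client connections pgbouncer itself will accept
max_client_conn = 1500

Clients then connect to port 6432 instead of 5432, so PostgreSQL only ever sees the pooled server connections.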

@vventirozos

Hey Kigster,
I saw your config file and it looks fine to me; I don't think you will see better performance by changing anything. I don't know exactly what types of queries you will run, but I have a couple of "generic" recommendations that might make your everyday life a bit better.

I would set maintenance_work_mem higher, in fact much higher, especially since it looks like you can afford it; maybe something like 2GB or even more. You won't see an improvement in query performance from this setting, but you will get better times on CREATE INDEX and VACUUM, and judging by your max_connections setting you will probably need it. As I said, I don't know your business, but I would probably increase checkpoint_segments to a value close to 200-300. This really depends on your workflow, whether you are doing mass imports, mass updates, etc., but in a big installation like yours I would definitely increase it (see the sketch below).

I hope I helped a bit,
and by all means feel free to contact me if you have any questions.

Vasilis Ventirozos
OmniTI Computer Consulting Inc.
Database Administrator
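
In postgresql.conf terms, those two suggestions amount to something like the sketch below (values are illustrative, and the file above was later updated along these lines):

maintenance_work_mem = 2GB # more headroom for CREATE INDEX and VACUUM
checkpoint_segments = 256 # within the suggested 200-300 range; 16MB per segment
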

@draganHR

It's wal_writer_delay, not wall_writer_delay, right?

kigster commented Jun 10, 2020


@vventirozos thank you for your notes, I’ve updated the config to reflect them.

kigster commented Jun 10, 2020


@draganHR Yes sir, thank you for that!

@brandonros

wal_level = hot_standby

How do you handle WAN/cross data-center connections securely? Wireguard? Something Azure/AWS/GCP specific?
