# -------------------------------------------
# Simple PostgreSQL Configuration File v. 8.4
# -------------------------------------------
# This file provides a simple configuration with the most common options
# which most users need to modify for running PostgreSQL in production,
# including extensive notes on how to set each of these. If your configuration
# needs are more specific, then use the standard postgresql.conf, or add
# additional configuration options to the bottom of this file.
#
# This file is re-read when you send a SIGHUP to the server, or on a full
# restart. Note that on a SIGHUP, simply re-commenting a setting is not
# enough to reset it to its default value; the last explicit value you set will
# still be in effect.
#
# AvRAM: Several of the formulas below ask for "AvRAM", which is short for
# "Available RAM". This refers to the amount of memory which is available for
# running PostgreSQL. On a dedicated PostgreSQL server, you can use the total
# system RAM, but on shared servers you need to estimate what portion of RAM
# is usually available for PostgreSQL.
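#
# As an illustration (numbers are assumptions, not recommendations): on a
# 16GB machine that also runs an application server using about 4GB,
# AvRAM would be roughly 16GB - 4GB = 12GB.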
#
# Each setting below lists one recommended starting setting, followed by
# several alternate settings which are commented out. If multiple settings
# are uncommented, the *last* one will take effect.
# listen_addresses
# ------------------------
# listen_addresses takes a list of network interfaces the Postmaster will
# listen on. The setting below, '*', listens on all interfaces, and is only
# appropriate for development servers and initial setup. Otherwise, it
# should be restrictively set to only specific addresses. Note that most
# PostgreSQL access control settings are in the pg_hba.conf file.
listen_addresses = '*' # all interfaces
# listen_addresses = 'localhost' # unix sockets and loopback only
# listen_addresses = 'localhost,192.168.1.1' # local and one external interface
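#
# For reference, host-based access control lives in pg_hba.conf, one rule
# per line. A minimal illustrative entry (the address range and auth
# method below are placeholders, not recommendations):
# host    all    all    192.168.1.0/24    md5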
# max_connections
# ------------------------
# An integer setting a limit on the number of new connection processes which
# PostgreSQL will create. Should be set to the maximum number of connections
# which you expect to need at peak load. Note that each connection uses
# shared_buffer memory, as well as additional non-shared memory, so be careful
# not to run the system out of memory. In general, if you need more than 500
# connections, you should probably be making more use of connection pooling;
# see the illustrative pooler sketch at the end of this section.
#
# Note that by default 3 connections are reserved for superusers
# (superuser_reserved_connections) for administrative access.
max_connections = 100 # small server
# max_connections = 500 # web application database
# max_connections = 40 # data warehousing database
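#
# If you expect to need more connections than this, a connection pooler
# is usually the better answer (as noted above). A minimal illustrative
# pgbouncer.ini sketch, assuming pgbouncer is installed separately; the
# database name and pool sizes are placeholders:
# [databases]
# appdb = host=127.0.0.1 port=5432 dbname=appdb
# [pgbouncer]
# listen_port = 6432
# pool_mode = transaction
# default_pool_size = 20
# max_client_conn = 500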
# shared_buffers
# ------------------------
# A memory quantity defining PostgreSQL's "dedicated" RAM, which is used
# for connection control, active operations, and more. However, since
# PostgreSQL also needs free RAM for file system buffers, sorts and
# maintenance operations, it is not advisable to set shared_buffers to a
# majority of RAM.
#
# Note that increasing shared_buffers often requires you to increase some
# system kernel parameters, most notably SHMMAX and SHMALL. See
# Operating System Environment: Managing Kernel Resources in the PostgreSQL
# documentation for more details. Also note that shared_buffers over 2GB is
# only supported on 64-bit systems.
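#
# As an illustrative sketch only (exact values are system-dependent): on
# Linux, these limits are usually raised with sysctl, e.g. for roughly
# 8GB of shared memory on a system with 4kB pages:
#   sysctl -w kernel.shmmax=8589934592   # maximum segment size, in bytes
#   sysctl -w kernel.shmall=2097152      # total shared memory, in pages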
#
# The setting below is a formula. Calculate the resulting value, then
# uncomment it. Values should be expressed in kB, MB or GB.
# shared_buffers = ( AvRAM / 4 )
# shared_buffers = 512MB # basic 2GB web server
# shared_buffers = 8GB # 64-bit server with 32GB RAM
# work_mem
# ------------------------
# This memory quantity sets the limit for the amount of non-shared RAM
# available for each query operation, including sorts and hashes. This limit
# acts as a primitive resource control, preventing the server from going
# into swap due to overallocation. Note that this is non-shared RAM per
# *operation*, which means large complex queries can use multiple times
# this amount. Also, work_mem is allocated by powers of two, so round
# to the nearest binary step.
# The setting below is a formula. Calculate the resulting value, then
# uncomment it. Values should be expressed in kB, MB or GB. Note that
# it's seldom effective to set work_mem over 2GB, and that you cannot do
# so at all on 32-bit servers.
# Most web applications should use the formula below, because most of
# their queries require little or no work_mem.
# work_mem = ( AvRAM / max_connections ) ROUND DOWN to 2^x
# work_mem = 2MB # for 2GB server with 700 connections
# Formula for most BI/DW applications, or others running many complex
# queries:
# work_mem = ( AvRAM / ( 2 * max_connections ) ) ROUND DOWN to 2^x
# work_mem = 128MB # DW server with 32GB RAM and 40 connections
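#
# Worked example of the web formula above, using the 2GB / 700-connection
# case: 2048MB / 700 = ~2.9MB per connection, rounded DOWN to the nearest
# power of two = 2MB.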
# maintenance_work_mem
# -------------------------
# This memory value sets the limit for the amount that autovacuum,
# manual vacuum, bulk index build and other maintenance routines are
# permitted to use. Setting it to a moderately high value will increase
# the efficiency of vacuum and other operations.
# Now that there are multiple autovacuum workers, you also need to
# consider the effect of multiple allocations of this memory. If
# you have some reason to raise the number of workers, decrease
# this setting proportionally.
# The setting below is a formula. Calculate the resulting value, then
# uncomment it. Values should be expressed in kB, MB or GB.
# If you plan to run autovacuum (see below):
# maintenance_work_mem = ( AvRAM / 16 ) ROUND DOWN to 2^x
# maintenance_work_mem = 128MB #webserver with 2GB RAM
# If not running autovacuum:
# maintenance_work_mem = ( AvRAM / 8 ) ROUND DOWN to 2^x
# maintenance_work_mem = 2GB #DW server with 32GB RAM
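#
# Worked example of the autovacuum formula above, using the 2GB web
# server case: 2048MB / 16 = 128MB, which is already a power of two, so
# no further rounding is needed.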
# synchronous_commit
# -------------------------
# This boolean setting controls whether or not all of your transactions
# are guaranteed to be written to disk when they commit. If you are
# willing to lose up to roughly 0.6 seconds (three times wal_writer_delay)
# of recent transactions in the event of an unexpected
# shutdown (as many web applications are), then you can gain substantial
# performance benefits by turning off synchronous commit. For most
# applications, however, this setting is better used on a per-session
# basis.
synchronous_commit = on #most applications
# synchronous_commit = off #if speed is more important than data
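#
# Per-session example (SQL; illustrative): an application can disable
# synchronous commit only where losing that particular transaction is
# acceptable, and leave the server-wide default alone:
# SET synchronous_commit TO off;        -- rest of this session
# SET LOCAL synchronous_commit TO off;  -- current transaction only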
# wal_buffers
# -------------------------
# This memory setting defines how much buffer space is available for
# the Write Ahead Log. Set too low, it can become a bottleneck on
# inserts and updates; there is no benefit to setting it very high, however.
# As with some of the other settings above, raising it may require
# increasing some kernel parameters.
wal_buffers = 8MB
# checkpoint_segments
# -------------------------
# This integer defines the maximum number of 16MB transaction log segments
# PostgreSQL will create before forcing a checkpoint. For most
# high-volume OLTP databases and DW you will want to increase this
# setting significantly. Alternately, just wait for checkpoint
# warnings in the log before increasing this.
#
# Increasing this setting can make recovery in the event of unexpected
# shutdown take longer.
#
# Maximum disk space required is (checkpoint_segments * 2 + 1) * 16MB,
# so make sure you have that much available before setting it.
checkpoint_segments = 16 #normal small-medium database
# checkpoint_segments = 64 #high-volume OLTP database
# checkpoint_segments = 128 #heavy-ETL large database
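#
# Worked example of the disk space formula above: with the 16 segments
# set here, (16 * 2 + 1) * 16MB = 528MB; at 128 segments it grows to
# (128 * 2 + 1) * 16MB = just over 4GB.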
# autovacuum
# ---------------------------
# autovacuum turns on a maintenance daemon which runs in the background,
# periodically cleaning up your tables and indexes. The only reason to turn
# autovacuum off is for large batch loads (ETL).
autovacuum = on #most databases
# autovacuum = off #large DW
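#
# For mixed workloads, an alternative to disabling autovacuum globally is
# turning it off only for bulk-load tables via a storage parameter
# (SQL; the table name is illustrative):
# ALTER TABLE staging_events SET (autovacuum_enabled = false);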
# effective_cache_size
# --------------------------
# This memory setting tells the PostgreSQL query planner how much RAM
# is estimated to be available for caching data, in both shared_buffers and
# in the filesystem cache. This setting just helps the planner make good
# cost estimates; it does not actually allocate the memory.
# The setting below is a formula. Calculate the resulting value, then
# uncomment it.
# effective_cache_size = ( AvRAM * 0.75 )
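#
# Worked examples (illustrative): 0.75 * 2GB = 1536MB for the basic 2GB
# web server, or 0.75 * 32GB = 24GB for the 32GB server used above.
# effective_cache_size = 1536MB
# effective_cache_size = 24GB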
# default_statistics_target
# --------------------------
# This integer setting determines the histogram sample size for the
# data about table contents kept by the query planner. The default
# is fine for most databases, but often users need to increase it
# either because they're running data warehouses or because they have
# a lot of poorly planned queries.
# Note that while 10 was the default through PostgreSQL 8.3,
# 100 is now the default for 8.4 and later because it was found
# to be beneficial for most users.
# default_statistics_target = 10 #very small simple database
default_statistics_target = 100 #average moderate database
# default_statistics_target = 500 #data warehouse
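#
# Statistics can also be raised for individual problem columns instead of
# globally (SQL; table and column names are illustrative):
# ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
# ANALYZE orders;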
# constraint_exclusion
# --------------------------
# This boolean setting should be turned "on" if you plan to use table
# partitioning. Otherwise, it should be "off".
# constraint_exclusion = off #if you don't use partitioning or complex queries
constraint_exclusion = partition #most databases
# constraint_exclusion = on #if you do UNIONs and other complex queries
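#
# Illustrative sketch of the partitioning pattern this setting targets
# (SQL; names and date range are placeholders): each child table carries
# a CHECK constraint so the planner can skip partitions that cannot match.
# CREATE TABLE measurements_2010_04 (
#     CHECK ( logdate >= DATE '2010-04-01' AND logdate < DATE '2010-05-01' )
# ) INHERITS (measurements);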
# log_destination & logging settings
# --------------------------
# This ENUM value determines where PostgreSQL's logs are sent. What
# setting to use really depends on your server room setup and the
# production status and OS of your server.
#
# Note that there are several dozen settings on what and how often
# to log; these will not be covered in detail in this quick
# configuration file. Instead, several common combinations are
# given.
# Syslog setup for centralized monitoring
# log_destination = 'syslog'
# syslog_facility = 'LOCAL0' #local syslog
# Windows
# log_destination = 'eventlog'
# Private PostgreSQL Log, rotates weekly
# log_destination = 'stderr'
# logging_collector = on
# log_directory = '/path/to/log/dir'
# log_filename = 'postgresql-%a'
# log_truncate_on_rotation = on
# CSV logging for pgfouine log analyzer
# summarizes all queries slower than 100ms
# Warning: may generate large log files
# log_destination = 'csvlog'
# logging_collector = on
# log_directory = '/path/to/log/dir'
# log_min_duration_statement = 100
# log_rotation_size = 1GB
# CSV logging for collecting multiple performance statistics.
# Warning: this much logging will generate many log
# files and affect performance.
# log_destination = 'csvlog'
# logging_collector = on
# log_directory = '/path/to/log/dir'
# log_duration = on
# log_temp_files = 256kB
# log_lock_waits = on
# log_connections = on
# log_disconnections = on
# log_statement = 'all'
# log_rotation_size = 10GB