@brianc
Created October 9, 2013 21:00

Using pg.connect is the way to go in a web environment.

The PostgreSQL server can only handle 1 query at a time per connection. That means if you have 1 global new pg.Client() connected to your backend, your entire app is bottlenecked by how fast Postgres can respond to queries. It will literally line everything up, queuing each query. Yeah, it's async, so that's technically alright... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100, not sure of the right number yet).
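Roughly, the pooled version looks like this (connection string and pool size are just placeholders, using the pg API as it looked around this time):

```js
var pg = require('pg');

// Placeholder pool size; the gist suggests somewhere in the 25-100 range.
pg.defaults.poolSize = 25;

pg.connect('postgres://user:pass@localhost/mydb', function (err, client, done) {
  if (err) return console.error('could not check out a client from the pool', err);
  client.query('SELECT now()', function (err, result) {
    done(); // return the client to the pool so other requests can reuse it
    if (err) return console.error('query failed', err);
    console.log(result.rows[0].now);
  });
});
```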

new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control its life-cycle. A good example is LISTEN/NOTIFY. The listening client needs to stay around, stay connected, and not be shared so it can properly handle NOTIFY messages. Other examples would be opening up a one-off client to kill some hung stuff, or command-line scripts.
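For the LISTEN/NOTIFY case, a minimal sketch of that dedicated client might look like this (channel name and connection string are made up):

```js
var pg = require('pg');

// A dedicated, long-lived client that does nothing but listen.
var listener = new pg.Client('postgres://user:pass@localhost/mydb');

listener.connect(function (err) {
  if (err) throw err;
  // Because this client is never shared, it is always free to receive NOTIFY messages.
  listener.on('notification', function (msg) {
    console.log('NOTIFY on', msg.channel, 'payload:', msg.payload);
  });
  listener.query('LISTEN my_channel');
});
```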

@rarkins

rarkins commented Mar 25, 2014

In the case where I do want queries executed synchronously (e.g. database setup and provisioning), can I rely on this "1 query at a time per connection" property to ensure a long list of SQL commands are executed synchronously? And in such a case should I set up and then destroy a single pg.Client, or use pg.connect?
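For reference, a minimal sketch of the single-client provisioning pattern being asked about (table name and connection string are placeholders; node-postgres queues queries issued on one client and runs them in order):

```js
var pg = require('pg');

var client = new pg.Client('postgres://user:pass@localhost/mydb');

client.connect(function (err) {
  if (err) throw err;
  // Each query waits for the previous one on this client to finish.
  client.query('CREATE TABLE IF NOT EXISTS widgets (id serial PRIMARY KEY, name text)');
  client.query("INSERT INTO widgets (name) VALUES ('first')");
  client.query('SELECT count(*) FROM widgets', function (err, result) {
    if (err) throw err;
    console.log('widgets:', result.rows[0].count);
    client.end(); // tear down the one-off client once provisioning is done
  });
});
```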

@wavyx

wavyx commented Jul 13, 2014

@rarkins I would go with async.js as described here http://justinklemm.com/node-js-async-tutorial/
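A rough sketch of what that might look like with async.eachSeries (the statement list and connection string are invented for illustration):

```js
var async = require('async');
var pg = require('pg');

// Hypothetical setup statements to run strictly one after another.
var statements = [
  'CREATE TABLE IF NOT EXISTS widgets (id serial PRIMARY KEY, name text)',
  "INSERT INTO widgets (name) VALUES ('first')"
];

pg.connect('postgres://user:pass@localhost/mydb', function (err, client, done) {
  if (err) throw err;
  async.eachSeries(statements, function (sql, cb) {
    client.query(sql, cb); // wait for each statement before starting the next
  }, function (err) {
    done(); // hand the client back to the pool
    if (err) throw err;
    console.log('provisioning complete');
  });
});
```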

@2013gang

2013gang commented Sep 7, 2014

nice explanation!
