
@joshmosh
Created September 1, 2017 21:10
Proud database moments πŸ‘ πŸŽ‰

This is much faster on my local machine but the report below shows how well my database optimizations are working on AWS hardware.

Database Instance Class: db.t2.micro
Database Engine: Postgres 9.6.2

The test was not run against a trivial single-column table: the table has 3 integer columns and 10 string columns. I can't give away much more information for security reasons.

             table name       read   imported     errors      total time
-----------------------  ---------  ---------  ---------  --------------
                  fetch          0          0          0          0.008s
-----------------------  ---------  ---------  ---------  --------------
       audience_entries   25612472   25612472          0       12m0.165s
-----------------------  ---------  ---------  ---------  --------------
        Files Processed         26         26          0          0.018s
COPY Threads Completion         78         78          0       9m22.615s
-----------------------  ---------  ---------  ---------  --------------
      Total import time   25612472   25612472          0       9m24.299s
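For anyone sanity-checking these numbers, here is a small sketch (a hypothetical helper, not part of pgloader) that converts pgloader's `XmY.ZZZs` durations into seconds and estimates rough import throughput from the summary above:

```python
import re

def to_seconds(duration: str) -> float:
    """Convert a pgloader-style duration like '9m24.299s' to seconds."""
    m = re.fullmatch(r"(?:(\d+)m)?([\d.]+)s", duration)
    if not m:
        raise ValueError(f"unrecognized duration: {duration}")
    minutes = int(m.group(1) or 0)
    return minutes * 60 + float(m.group(2))

rows = 25_612_472
total = to_seconds("9m24.299s")  # pgloader's "Total import time"
print(f"{rows / total:,.0f} rows/second")  # roughly 45,000 rows/second
```

Using the "Total import time" figure rather than the per-table one gives about 45,000 rows/second on a db.t2.micro, which is the number the rest of this gist is celebrating.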

joshmosh commented Sep 1, 2017

Not sure why it shows 12m0.165s on line 5 of the output, but my shell's `time` output corresponded with the Total import time on line 10. Must be a bug in pgloader.


joshmosh commented Sep 1, 2017

The same thing happened when I imported a 33-million-row file. You can see that my shell returned 12m17s as the execution time. 🤷‍♂️

[Screenshot: interesting-pgloader-display]
