A schema contains a group of tables. A database contains a group of schemas.
In prod, we use a Cloud SQL PostgreSQL instance. The `reports` DB is for prod only. Its presence should be a gatekeeper to prevent destructive teardowns.
Everywhere else, we assume a local PostgreSQL running on port 5432 with two existing databases.
The `khan_test` DB is for integration tests. The `khan_dev` DB is for local development.
We can use the `postgres` user in all three environments, but in prod we want to vary the user to better differentiate the access-pattern metrics.
The schemas for all three should be kept the same.
We do this with migrations.
Assuming you have the golang-migrate CLI installed (e.g. `brew install golang-migrate`):
NOTE: Running migrations in prod is super-dangerous! Measure twice, cut once!
The `golang-migrate` CLI can be used to manually perform the database migrations. For instance, to get the initial schema created:
```sh
migrate -verbose -path . -database 'postgres://localhost:5432/khan_test?sslmode=disable' goto 1
```
For an existing database that already has the schemas applied, you can set the version without applying migrations via:
```sh
migrate -verbose -path . -database 'postgres://localhost:5432/khan_test?sslmode=disable' force 1
```
A complete tutorial is available here.
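Note that `?` and `&` are shell metacharacters, so it is safest to quote the `-database` URL. A minimal sketch of assembling it from parts (the variable names here are illustrative, not anything the tooling requires):

```shell
#!/bin/sh
# Build the connection URL from parts. DB is an example value; quote the
# result so the shell does not glob on '?' or split on '&'.
DB=khan_test
URL="postgres://localhost:5432/${DB}?sslmode=disable"
echo "$URL"
# Then, e.g.: migrate -verbose -path . -database "$URL" goto 1
```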
```
khan_dev=# \l
                          List of databases
   Name    |  Owner   | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+-------+-------------------
 khan_dev  | postgres | UTF8     | C       | C     |
 khan_test | postgres | UTF8     | C       | C     |
 postgres  | steve    | UTF8     | C       | C     |
```
This is an example of a script that will set up the empty databases for dev and test:
```sh
# A simple script to create the test / dev postgres databases needed for webapp
# tests and development that depend on postgres.

# Our test and dev environments assume that the user has created a postgres
# user with create db privs. This should have been done within dotfiles, but
# we try our best to do it here anyways.
if ! psql -tc "SELECT rolname from pg_catalog.pg_roles" postgres \
   | grep -c 'postgres' > /dev/null 2>&1 ; then
    psql --quiet -c "CREATE ROLE postgres LOGIN SUPERUSER;" postgres;
fi

# Create the db that we'll use in our tests.
if ! psql -U postgres -l | grep khan_test ; then
    createdb khan_test -U postgres -O postgres
fi
psql -U postgres -qc "create extension if not exists btree_gist" khan_test;
psql -U postgres -qc "create extension if not exists pg_stat_statements" khan_test;

# We'll also create the dev db, in case we run tests that use the
# dev_appserver
if ! psql -U postgres -l | grep khan_dev ; then
    createdb khan_dev -U postgres -O postgres
fi
psql -U postgres -qc "create extension if not exists btree_gist" khan_dev;
psql -U postgres -qc "create extension if not exists pg_stat_statements" khan_dev;
```
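The "create only if missing" checks in the script above hinge on grep's exit status rather than its output. A standalone illustration of the same pattern (the database names are just examples):

```shell
#!/bin/sh
# grep -q exits 0 when the name is already in the list, so the create
# branch is skipped; a non-match would take the else branch instead.
existing="khan_dev
khan_test
postgres"
if printf '%s\n' "$existing" | grep -q '^khan_test$'; then
    action="skip createdb"
else
    action="run createdb"
fi
echo "$action"
# → skip createdb
```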
Please note the naming of the files in this directory. They need to be named so that the migrations can be ordered lexicographically. This is often accomplished by using a numbered prefix like `001_something.sql`. If you added another file, you would name it `002_whatever_you_want.sql`. Another possibility is `yyyymmdd_whatever_you_want.sql`, e.g. `20200921_something.sql`, but you would need to rename all the files that already exist here to follow that convention.
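Because the files are applied in lexicographic order, zero-padding the numeric prefix keeps numeric order and sort order in agreement (without padding, `10_` would sort before `2_`). A quick illustration with made-up filenames:

```shell
#!/bin/sh
# Zero-padded prefixes sort in the intended order.
sorted=$(printf '%s\n' 010_later.up.sql 001_initial_schema.up.sql 002_add_x.up.sql | sort)
echo "$sorted"
# → 001_initial_schema.up.sql
#   002_add_x.up.sql
#   010_later.up.sql
```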
The initial migrations in `001_initial_schema.up.sql` were generated by running:

```sh
cloud_sql_proxy -dir=/Users/steve/cloudsql -instances=$DBHOST=tcp:5433 &
pg_dump -U $DBUSER -h 127.0.0.1 -p 5433 $DBNAME -s -f 001_initial_schema.up.sql
```
I deleted a few out-of-the-box functions, as well as an old deprecated table or two.
Subsequent migrations should be placed in files that sort lexicographically after the existing ones.
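One way to start the next migration is to stub out an up/down pair by hand (the name and number below are made up; pick the next unused prefix):

```shell
#!/bin/sh
# Create an empty up/down pair in a scratch directory.
# 002_add_reports_table is a hypothetical name following the
# numbered-prefix convention used in this directory.
dir=$(mktemp -d)
next="002_add_reports_table"
touch "$dir/${next}.up.sql" "$dir/${next}.down.sql"
ls "$dir"
```

If memory serves, golang-migrate can also generate such pairs for you via `migrate create -ext sql -dir . -seq <name>`; check `migrate create -help` for the flags your installed version supports.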
Assuming you have installed `go-bindata` using `go get -u github.com/go-bindata/go-bindata/...`:

```sh
cd ../..
go-bindata -pkg migrations -ignore bindata -nometadata -prefix reports/migrations/ -o ./reports/migrations/bindata.go ./reports/migrations
```
This will allow the unit and integration tests to verify that the application's schema is in sync with the code.