@slotrans
Created August 6, 2021 23:50
Building blocks for generic history-keeping in Postgres.
/*
Replace "your_schema" with whatever schema is appropriate in your environment.
It is possible to use "public"... but you shouldn't!
*/
/*
Function to stamp a "modified" timestamp. Adjust the column name to suit your environment,
but note that it is hard-coded in the function body, so it is assumed you use only _one_ such name.
Bind to a table like so:
create trigger z_stamp_modified
before insert or update
on your_schema.your_table_name
for each row
execute procedure your_schema.f_stamp_modified() ;
Remember, trigger names are table-local so you can use the same name everywhere.
Triggers within a phase are executed in order by name. The "z_" prefix is a hacky/imperfect
way of saying "this one should be last". You can use any name you like.
*/
create function your_schema.f_stamp_modified()
returns trigger
as $$
begin
    NEW.modified = now();
    return NEW;
end;
$$ language plpgsql;
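/*
A worked example (hypothetical): a tracked table with the "modified" column that
f_stamp_modified() expects, plus the trigger binding. The table name "widget" and its
columns other than "modified" are illustrative only; adapt them to your environment.
*/
create table your_schema.widget
(
  widget_id  bigserial not null primary key
, name       text not null
, created    timestamptz default now() not null
, modified   timestamptz default now() not null
);

create trigger z_stamp_modified
before insert or update
on your_schema.widget
for each row
execute procedure your_schema.f_stamp_modified() ;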
/*
Generic history table template.
You CAN use this as a single history table to record all changes to any table in your system
BUT YOU SHOULD NOT DO THAT. You absolutely want to have a separate history table for each real table.
Using table inheritance lets you pretend they're all one at query time if you wish.
Create individual history tables like so:
create table your_schema.foo_history () inherits(your_schema.base_history) ;
You will probably want to index "created" and "pk_value" (a worked example follows the base table definition below).
Field meanings...
relid:
The system OID of the tracked table. In practice this turned out to be less useful than I thought it would be, and can
probably be omitted. It costs little though, and provides some potentially interesting optionality. (Remember a relid can
be transformed into a table name by casting to regclass, e.g. relid::regclass::text).
dml_type:
I for insert, U for update, D for delete. Some folks find this cryptic but one letter is all you need.
txid:
The ID of the transaction in which a change occurred, as returned by txid_current(). Very useful for identifying
changes that were made together. Note this is the 64-bit, epoch-extended value, so unlike the internal 32-bit xid it
does not wrap around in practice.
application_name:
The application name supplied by the client; libpq-based clients such as psql read this from the PGAPPNAME environment variable.
db_session_user:
The database role that opened the session, as reported by session_user.
inet_client_addr:
The IP address of the connected client; null for connections over a Unix-domain socket.
old_row_data:
The row as it was before the change, as serialized by row_to_json(). Null for inserts.
new_row_data:
The row as it is after the change, as serialized by row_to_json(). Null for deletes.
pk_value:
The value of the affected row's primary key field. Requires passing the name of that field in the trigger binding.
*/
create table your_schema.base_history
(
hist_id bigserial not null primary key
, created timestamptz default now() not null
, relid oid
, dml_type char(1) not null
, txid bigint default txid_current() not null
, application_name text default current_setting('application_name')
, db_session_user text default session_user not null
, inet_client_addr inet default inet_client_addr()
, pg_backend_pid int default pg_backend_pid()
, old_row_data json
, new_row_data json
, pk_value bigint
);
--this check constraint is more for documentation purposes, and can be omitted if you wish
alter table your_schema.base_history
add constraint chk_basehistory_dmlrowdata
check ( (dml_type = 'I' and old_row_data is null and new_row_data is not null)
or (dml_type = 'U' and old_row_data is not null and new_row_data is not null)
or (dml_type = 'D' and old_row_data is not null and new_row_data is null )
)
;
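/*
Continuing the hypothetical "widget" example: its per-table history table, inheriting from
base_history, plus the indexes on "created" and "pk_value" suggested above. The index names
are illustrative.
*/
create table your_schema.widget_history () inherits (your_schema.base_history) ;

create index widget_history_created_idx  on your_schema.widget_history (created) ;
create index widget_history_pk_value_idx on your_schema.widget_history (pk_value) ;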
/*
Generic history trigger function.
This function takes one optional argument: the name of the tracked table's primary key field, which must be
a bigint (or castable to a bigint). If that argument is not supplied, the function will still work, but "pk_value" in the
history table will not be populated.
This can all be adapted to e.g. UUID PKs, but that is left as an exercise for the reader. If you use a naming standard
in which all PKs are named "ID", this function can be simplified considerably. I do not favor that strategy, so again
that adaptation is left as an exercise.
The names of history tables are assumed to be "_history" as a suffix to the tracked table, e.g. the history for table
"foo" is "foo_history", in the same schema. This code can easily be adapted to use a different suffix, a prefix, a different
schema, take the history table name as an argument, or whatever you like.
Bind to a table like so:
create trigger z_record_history
after insert or update or delete
on your_schema.your_table_name
for each row
execute procedure your_schema.f_generic_history('your_table_name_id') ;
*/
create function your_schema.f_generic_history()
returns trigger
as $$
declare
    cc_dml_type        constant char(1)     := substring(TG_OP from 1 for 1) ;
    vc_insert_sql      constant text        := 'insert into your_schema.'||TG_TABLE_NAME||'_history ( relid, dml_type, old_row_data, new_row_data, pk_value ) values ( $1, $2, $3, $4, $5 )' ;
    vc_pk_sql_template constant text        := 'select ($1).%I' ;
    vc_pk_column_name  constant varchar(64) := TG_ARGV[0] ;
    n_pk_value bigint := null ;
    v_message  text ;
    v_detail   text ;
    v_hint     text ;
begin
    if TG_OP = 'INSERT'
    then
        if vc_pk_column_name is not null
        then
            execute format(vc_pk_sql_template, vc_pk_column_name) into n_pk_value using NEW ;
        end if;

        execute vc_insert_sql using TG_RELID, cc_dml_type, null::json, row_to_json( NEW ), n_pk_value ;
    elsif (TG_OP = 'UPDATE' and OLD is distinct from NEW)
    then
        if vc_pk_column_name is not null
        then
            execute format(vc_pk_sql_template, vc_pk_column_name) into n_pk_value using NEW ;
        end if;

        execute vc_insert_sql using TG_RELID, cc_dml_type, row_to_json( OLD ), row_to_json( NEW ), n_pk_value ;
    elsif TG_OP = 'DELETE'
    then
        if vc_pk_column_name is not null
        then
            execute format(vc_pk_sql_template, vc_pk_column_name) into n_pk_value using OLD ;
        end if;

        execute vc_insert_sql using TG_RELID, cc_dml_type, row_to_json( OLD ), null::json, n_pk_value ;
    end if ;

    return null ;
exception
    when others then
        get stacked diagnostics v_message := MESSAGE_TEXT
                              , v_detail  := PG_EXCEPTION_DETAIL
                              , v_hint    := PG_EXCEPTION_HINT ;
        raise warning 'SQLSTATE % in f_generic_history(%) on table %.%; MESSAGE=%; DETAIL=%; HINT=%', SQLSTATE, TG_ARGV[0], TG_TABLE_SCHEMA, TG_TABLE_NAME, v_message, v_detail, v_hint ;
        return null ;
end ;
$$ language plpgsql;
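/*
Completing the hypothetical "widget" example: bind the history trigger, passing the name of
the primary key column, then query the recorded history. The literal values in the queries
below (42, 1234567) are placeholders. Because widget_history inherits from base_history,
the same rows are also visible in a query against base_history.
*/
create trigger z_record_history
after insert or update or delete
on your_schema.widget
for each row
execute procedure your_schema.f_generic_history('widget_id') ;

-- all recorded changes to one row, oldest first
select created, dml_type, txid, old_row_data, new_row_data
from your_schema.widget_history
where pk_value = 42
order by created ;

-- everything that changed in a single transaction on this table
select *
from your_schema.widget_history
where txid = 1234567 ;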