-- Based on the IETF draft: https://datatracker.ietf.org/doc/draft-peabody-dispatch-new-uuid-format/
create or replace function uuid_generate_v7()
returns uuid
as $$
begin
  -- use random v4 uuid as starting point (which has the same variant we need)
  -- then overlay timestamp
  -- then set version 7 by flipping the 2 and 1 bit in the version 4 string
  return encode(
    set_bit(
      set_bit(
        overlay(uuid_send(gen_random_uuid())
                placing substring(int8send(floor(extract(epoch from clock_timestamp()) * 1000)::bigint) from 3)
                from 1 for 6
        ),
        52, 1
      ),
      53, 1
    ),
    'hex')::uuid;
end
$$
language plpgsql
volatile;
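For readers who want to check the bit layout outside of Postgres, here is a small Python sketch of the same construction (an illustration only, not the SQL function itself): start from 16 random bytes, overlay the 48-bit Unix-millisecond timestamp, then force version 7 and the RFC 4122 variant.

```python
import os
import time
import uuid

def uuid7_sketch() -> uuid.UUID:
    """Illustrative Python analogue of the SQL above: 16 random bytes,
    48-bit Unix-ms timestamp overlaid on bytes 0-5, version 7, RFC variant."""
    b = bytearray(os.urandom(16))
    ms = int(time.time() * 1000)
    b[0:6] = ms.to_bytes(6, "big")   # overlay timestamp in the first 6 bytes
    b[6] = (b[6] & 0x0F) | 0x70      # version nibble -> 0111 (7)
    b[8] = (b[8] & 0x3F) | 0x80      # variant bits -> 10xx (RFC 4122)
    return uuid.UUID(bytes=bytes(b))
```

Values generated this way sort by creation time when compared lexicographically, which is the property the SQL function provides for primary keys.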
-- Generate a custom UUID v8 with microsecond precision
create or replace function uuid_generate_v8()
returns uuid
as $$
declare
  timestamp timestamptz;
  microseconds int;
begin
  timestamp = clock_timestamp();
  microseconds = (cast(extract(microseconds from timestamp)::int - (floor(extract(milliseconds from timestamp))::int * 1000) as double precision) * 4.096)::int;
  -- use random v4 uuid as starting point (which has the same variant we need)
  -- then overlay timestamp
  -- then set version 8 and add microseconds
  return encode(
    set_byte(
      set_byte(
        overlay(uuid_send(gen_random_uuid())
                placing substring(int8send(floor(extract(epoch from timestamp) * 1000)::bigint) from 3)
                from 1 for 6
        ),
        6, (b'1000' || (microseconds >> 8)::bit(4))::bit(8)::int
      ),
      7, microseconds::bit(8)::int
    ),
    'hex')::uuid;
end
$$
language plpgsql
volatile;
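The v8 variant packs the sub-millisecond microsecond count (0-999) into the 12 bits following the version nibble by scaling with 4.096, i.e. mapping 0-999 onto 0-4095. A tiny Python sketch of that arithmetic (note Postgres's `::int` cast rounds to nearest, so `round()` is used here rather than truncation):

```python
def scale_sub_ms_microseconds(us_within_ms: int) -> int:
    """Map 0-999 microseconds-within-a-millisecond onto the 12-bit
    range 0-4095, mirroring the `* 4.096` step in the SQL above."""
    if not 0 <= us_within_ms <= 999:
        raise ValueError("expected a sub-millisecond microsecond count")
    return int(round(us_within_ms * 4.096))
```

The high 4 bits of the result land in byte 6 next to the `1000` version nibble, and the low 8 bits fill byte 7, which is what the two `set_byte` calls do.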
PERFORMANCE: Move from pgcrypto to built-in gen_random_uuid():
Curtis Summers (https://github.com/csummers)

PERFORMANCE: Use set_bit to upgrade v4 to v7, not set_byte:
PERFORMANCE: Reduce local variable use while still being maintainable
Rolf Timmermans (https://github.com/rolftimmermans)
Copyright 2023 Kyle Hubert <kjmph@users.noreply.github.com> (https://github.com/kjmph)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Will it be possible to implement a machine ID as mentioned in the proposal?
@brsnik, the new proposal clarified this a bit. However, as this gist is focused on Postgres, distributed generation of UUIDs has lower utility. In addition, as it is outside the scope of the draft, I don't intend to try to define the proper node ID placement in the generated UUID.
@kjmph Yes, you have my full permission to freely use/modify/publish/distribute/sell the code I wrote. You can probably do something similar with the v8 implementation as well; I just didn't need it.
I'd be curious what the benchmark results are. My rough "generate a million uuids" query ran ~15-20% faster than the previous version.
Sorry, at work right now, I'll run my benchmark this evening. It was about 40% faster for v8, which is what I use. I'll get specifics later.
Yep, it was 37% faster. Updating.
Gonna try v7 for all my PKs... hope I don't regret this. Postgres does convert uuid into binary behind the scenes, correct?
Hello @nawlbergs, I'm running it as a PK. It is pleasant to have a PK with ordering and creation time built in that can be sent to a frontend (if creation time isn't sensitive). At least, I enjoy it. :) The UUID is stored as binary:
postgres=# select pg_size_pretty(sum(pg_column_size(id))) as size from (select uuid_generate_v7() as id) a;
size
----------
16 bytes
Sorry for intruding, but I cannot find any info on PostgreSQL supporting UUID v7 natively in the upcoming version 16. Do you have any idea if it's at least being discussed?
@vvitad It's still just a draft proposal. Will need to be approved and then I'm sure it'll be implemented.
@brsnik ooh, so it's an actual proposal? Could you please send me a link to an email discussion or something like that?
I added text at the top, yet here is the link: https://datatracker.ietf.org/doc/draft-peabody-dispatch-new-uuid-format/
Following my own link, I see they have updated to a new draft: https://datatracker.ietf.org/doc/draft-ietf-uuidrev-rfc4122bis/
I'll have to read more, yet looks even more promising that this will be implemented everywhere if it is extending the original RFC.
I also had a need for v7 UUIDs, so I wrote a tiny C extension to create them. It is nearly as fast as creating the native UUIDs!
pgbench --client=8 --jobs=8 --transactions=200000 --file=${TEST}.sql
-- SELECT gen_random_uuid();
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
maximum number of tries: 1
number of transactions per client: 200000
number of transactions actually processed: 1600000/1600000
number of failed transactions: 0 (0.000%)
latency average = 0.096 ms
initial connection time = 6.653 ms
tps = 83478.089558 (without initial connection time)
-- pg_uuidv7 C extension
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
maximum number of tries: 1
number of transactions per client: 200000
number of transactions actually processed: 1600000/1600000
number of failed transactions: 0 (0.000%)
latency average = 0.098 ms
initial connection time = 6.908 ms
tps = 81562.002058 (without initial connection time)
-- sql function r18
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 8
maximum number of tries: 1
number of transactions per client: 200000
number of transactions actually processed: 1600000/1600000
number of failed transactions: 0 (0.000%)
latency average = 0.123 ms
initial connection time = 7.602 ms
tps = 65269.760532 (without initial connection time)
Hi, I created a generate_uuid_v4_timestamp Postgres function:
YYYYMMDD-HHMM-4SSM-aMMM-MMRRRRRRRRRR
20230113-1405-4445-a265-710a8ea38313 (2023-01-13 14:05:44.526571)
20230113-1405-4554-a463-0547220159f4 (2023-01-13 14:05:55.446305)
20230113-1406-4020-a235-258fb5d8074e (2023-01-13 14:06:02.023525)
20230113-1406-4076-a257-239b4ba070e5 (2023-01-13 14:06:07.625723)
Y'all are amazing, love this activity.. Should there be a PG UUID extras extension? I wasn't aware there are so many of us.
Should there be a PG UUID extras extension?
That would be great!
In case anybody is interested: I made a function to extract the timestamp from the UUID.
CREATE OR REPLACE FUNCTION public.timestamp_from_uuid_v7(_uuid uuid)
RETURNS timestamp without time zone
LANGUAGE sql
IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF
AS $$
SELECT to_timestamp(('x0000' || substr(_uuid::text, 1, 8) || substr(_uuid::text, 10, 4))::bit(64)::bigint::numeric / 1000);
$$
;
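For comparison, the same extraction can be sketched in Python (illustrative only; it assumes a spec-conformant v7 UUID whose first 48 bits are the Unix timestamp in milliseconds, just as the SQL function does):

```python
import uuid
from datetime import datetime, timezone

def timestamp_from_uuid_v7(u: uuid.UUID) -> datetime:
    """Read the leading 48-bit Unix-millisecond timestamp, as the SQL
    function above does with substr()/bit(64)."""
    ms = int.from_bytes(u.bytes[:6], "big")
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
```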
@qwesda, thanks!
@qwesda
I'm fairly new to Postgres (coming in from MySQL). Can you give an example of how/when you would use that? Can you use it as part of a select query?
@nawlbergs I'm currently running some tests for a new DB, and my current plan is to have most tables set up like this:
CREATE TABLE statements.statement (
uuid_statement uuid DEFAULT public.uuid_generate_v7() PRIMARY KEY,
revision_id uuid NOT NULL DEFAULT public.uuid_generate_v7(),
name text,
currency text,
amount_in_currency_calculated numeric
[ ... ]
);
CREATE OR REPLACE FUNCTION public.trigger_set_revision_id()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
NEW.revision_id = public.uuid_generate_v7();
RETURN NEW;
END;
$$
;
CREATE TRIGGER trigger_set_revision_id
BEFORE INSERT OR UPDATE ON statements.statement
FOR EACH ROW
EXECUTE FUNCTION public.trigger_set_revision_id();
With this I have the following properties:
- uuid based primary keys, that are sortable
- a revision uuid that can be used in audit logs (also sortable) or in web apps to detect if data has changed
- created/modified timestamps (without timezone) for free
The API would then make a request something like (simplified for readability ...):
SELECT
jsonb_agg(
jsonb_build_object(
'uuid_statement', statement.uuid_statement,
'revision_id', statement.revision_id,
'created_at', public.timestamp_from_uuid_v7(statement.uuid_statement),
'modified_at', public.timestamp_from_uuid_v7(statement.revision_id),
'name', statement.name,
'currency', statement.currency,
'amount_in_currency_calculated', statement.amount_in_currency_calculated
)
ORDER BY statement.uuid_statement DESC)
FROM statements.statement;
I'm not knowledgeable enough in the theory to write such a function and be able to test/verify that it will not generate collisions. Are there any known and accepted ways to do that?
@kjmph Would it be possible for you to attach a license to this, either in the gist itself, or by copying this over to a repository with a license? Even a simple license like MIT would make it a lot easier to use this in projects that have restrictions/tracking on that aspect of code.
Also, thanks for putting this all together! I'd love for this to be a built-in Postgres function someday, but in the meantime, it's great to see the community getting ahead of things.
Would my addition of a LICENSE.txt and CONTRIBUTORS.txt suffice in the gist for now? I added them. Also, it seems GitHub wants to name this gist according to the alphabetical ordering of its files. Bleh. One day I may break this out into a repository. Thanks @DanHultonEvisort
That's perfect, thanks so much!
amazing!
Thanks for this. For those who don't mind sacrificing readability, the following implementation of uuid_generate_v7()
should be a bit faster (no benchmarks, sorry):
create or replace function uuid_generate_v7()
returns uuid
as $$
begin
return encode(overlay(
set_bit(set_bit(uuid_send(gen_random_uuid()), 53, 1), 52, 1)
placing substring(int8send((extract(epoch from clock_timestamp()) * 1000)::bigint) from 3)
from 1 for 6), 'hex')::uuid;
end
$$
language plpgsql
volatile;
So, let's see if I can spot the difference with your changes.
- There was a stray floor that still remained after earlier changes to the IETF draft, this was removed.
- All local variables are removed (forgive me, is this not optimized out? I'm not as familiar with what happens behind the scenes in PL/pgSQL).
- Rather than set the byte that contains the version number, since v4 is being upgraded, only the 2 and 1 bits are flipped from zeros to ones (thus version 7 becomes set).
Making change #1 and change #3 gets the implementation on par with your implementation without local variables on my test machine. This brought the generation from 24.59M Ops/sec to 25.41M Ops/sec. So, I don't see the need to sacrifice readability for performance here. Does this implementation look correct to you, @rolftimmermans? If so, I'll make the change to the gist and add you to contributors.
create or replace function uuid_generate_v7()
returns uuid
as $$
declare
unix_ts_ms bytea;
uuid_bytes bytea;
begin
unix_ts_ms = substring(int8send((extract(epoch from clock_timestamp()) * 1000)::bigint) from 3);
-- use random v4 uuid as starting point (which has the same variant we need)
uuid_bytes = uuid_send(gen_random_uuid());
-- overlay timestamp
uuid_bytes = overlay(uuid_bytes placing unix_ts_ms from 1 for 6);
-- set version 7 by flipping the 2 and 1 bit in the version 4 string
uuid_bytes = set_bit(set_bit(uuid_bytes, 53, 1), 52, 1);
return encode(uuid_bytes, 'hex')::uuid;
end
$$
language plpgsql
volatile;
Thanks!
Ah, my apologies; the floor wasn't left over from the earlier draft. It is there for two reasons:
- The specification states that the least significant bits MUST be used when truncating a timestamp.
- The microsecond version, labeled version 8, also relies on the timestamp precision being accurate.
Thus, we can't remove the floor for correctness reasons, unless there is a faster way to only retrieve the millisecond bits from the clock_timestamp. However, good news, it seems all the performance gains on my test machine were due to change #3, which is the set_bit instead of the set_byte calls. So, I still think that change should be accepted.
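As a side note for anyone verifying the bit positions in the set_bit change: Postgres addresses bytea bits as byte `n / 8`, bit `n % 8` counted from the least significant bit of that byte, so bits 52 and 53 land on the two low bits of the version nibble in byte 6. A small Python check of that numbering (an illustration, not the SQL itself; the 0x4C sample byte is arbitrary):

```python
def set_bytea_bit(b: bytearray, n: int) -> None:
    """Mimic Postgres set_bit(bytea, n, 1): byte n // 8, bit n % 8
    counted from the least significant bit of that byte."""
    b[n // 8] |= 1 << (n % 8)

raw = bytearray(16)
raw[6] = 0x4C            # a typical v4 byte 6: version nibble 0100, random low nibble
set_bytea_bit(raw, 52)
set_bytea_bit(raw, 53)
assert raw[6] >> 4 == 0b0111   # version nibble upgraded from 4 to 7
assert raw[6] & 0x0F == 0x0C   # random low nibble untouched
```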
Thanks @kjmph for the summary and sorry for not making this clearer in my comment. I was surprised to see that removing local variables made a small but significant difference on our production PostgreSQL 15 server, but of course YMMV.
And thanks for clearing up that removing floor() is not standards compliant. Keep the changes you like and definitely feel free to ignore the rest. :)
Thanks @rolftimmermans, it is incorporated now. Cheers!
(EDIT: So much for test machines.. The same thing happened to me on production, I see a marked difference when removing local variables. I took that change as well. Let's see what everyone else thinks).
@csummers, that's brilliant. Love it. Are you okay with me replacing the gist? I didn't expect this to be so useful, and I didn't set up a license or anything.