
Postgres UUIDv7 and per-backend monotonicity

230 points | 1 year ago | brandur.org
kingkilr 1 year ago

I would strongly implore people not to follow the example this post suggests, and not to write code that relies on this monotonicity.

The reason for this is simple: the documentation doesn't promise this property. Moreover, even if it did, the RFC for UUIDv7 doesn't promise this property. If you decide to depend on it, you're setting yourself up for a bad time when PostgreSQL decides to change their implementation strategy, or you move to a different database.

Further, the stated motivations for this, to slightly simplify testing code, are massively under-motivating. Saving a single line of code can hardly be said to be worth it, but even if it were, this is a problem far better solved by simply writing a function that will both generate the objects and sort them.

As a profession, I strongly feel we need to do a better job orienting ourselves to the reality that our code has a tendency to live for a long time, and we need to optimize not for "how quickly can I type it", but "what will this code cost over its lifetime".

throw0101c 1 year ago

> […] code that relies on this monotonicity. The reason for this is simple: the documentation doesn't promise this property. Moreover, even if it did, the RFC for UUIDv7 doesn't promise this property.

The "RFC for UUIDv7", RFC 9562, explicitly mentions monotonicity in §6.2 ("Monotonicity and Counters"):

    Monotonicity (each subsequent value being greater than the last) is 
    the backbone of time-based sortable UUIDs. Normally, time-based UUIDs 
    from this document will be monotonic due to an embedded timestamp; 
    however, implementations can guarantee additional monotonicity via 
    the concepts covered in this section.
* https://datatracker.ietf.org/doc/html/rfc9562#name-monotonic...

In the UUIDv7 definition (§5.7) it explicitly mentions the technique that Postgres employs for rand_a:

    rand_a:
        12 bits of pseudorandom data to provide uniqueness as per
        Section 6.9 and/or optional constructs to guarantee additional 
        monotonicity as per Section 6.2. Occupies bits 52 through 63 
        (octets 6-7).
* https://datatracker.ietf.org/doc/html/rfc9562#name-uuid-vers...

Note: "optional constructs to guarantee additional monotonicity". Pg makes use of that option.

stonemetal12 1 year ago

>explicitly mentions monotonicity

>optional constructs

So it is explicitly mentioned in the RFC as optional, and Pg doesn't state that they guarantee that option. The point still stands: depending on optional behavior is a recipe for failure when the option is no longer taken.

mlyle 1 year ago

It's mentioned in the RFC as being explicitly monotonic based on the time-based design.

Implementations that need monotonicity beyond the resolution of a timestamp-- like when you allocate 30 UUIDs at one instant in a batch-- can optionally use those additional bits for that purpose.

> Implementations SHOULD employ the following methods for single-node UUID implementations that require batch UUID creation or are otherwise concerned about monotonicity with high-frequency UUID generation.

(And it goes on to recommend the obvious things you'd do: use a counter in those bits when assigning a batch; use more bits of time precision; etc.)

The comment in PostgreSQL before the implementation makes it clear that they chose the third option for this in the RFC:

     * variant bits. To ensure monotonicity in scenarios of high-
     * frequency UUID generation, we employ the method "Replace
     * Leftmost Random Bits with Increased Clock Precision (Method 3)",
     * described in the RFC. ...

throw0101c 1 year ago

> So it is explicitly mentioned in the RFC as optional […]

The use of rand_a for extra monotonicity is optional. The monotonicity itself is not optional.

§5.7 states:

    Alternatively, implementations MAY fill the 74 bits, 
    jointly, with a combination of the following subfields, 
    in this order from the most significant bits to the least, 
    to guarantee additional monotonicity within a millisecond:
Guaranteeing additional monotonicity means that there is already a 'base' level of monotonicity, and there are provisions for even more ("additional") levels of it. This 'base level' is why §6.2 states:

    Monotonicity (each subsequent value being greater than the last) is 
    the backbone of time-based sortable UUIDs. Normally, time-based UUIDs 
    from this document will be monotonic due to an embedded timestamp; 
    however, implementations can guarantee additional monotonicity via 
    the concepts covered in this section.
"Backbone of time-based sortable UUIDs"; "additional monotonicity". Additional: adding to what's already there.

* https://datatracker.ietf.org/doc/html/rfc9562

Dylan16807 1 year ago

"this monotonicity" that OP suggests people not use is specifically the additional monotonicity.

Or to put it another way: OP is suggesting you don't depend on it being properly monotonic, because the default is that it is only partially monotonic.

reshlo 1 year ago

> Normally, time-based UUIDs from this document will be monotonic due to an embedded timestamp; however, implementations can guarantee additional monotonicity via the concepts covered in this section.

“Normally, I am at home because I do not have a reason to go out; however, sometimes I am at home because I am sleeping.”

Notice how this statement does not actually mean that I am always at home.

btown 1 year ago

I was recently bitten during a Postgres upgrade by the Postgres team considering it fine for statements like `select 1 group by true` to silently break in Postgres 15. See https://postgrespro.com/list/thread-id/2661353 - and this behavior remains undocumented in https://www.postgresql.org/docs/release/ . It's an absolutely incredible project, and I don't disagree with the decision to classify it as wontfix - but it's an anecdote about not relying on undefined behavior!

idconvict 1 year ago

The "optional" portion is this part of the spec, not the time part.

> implementations can guarantee additional monotonicity via the concepts covered in this section

dragonwriter 1 year ago

The “time part” is actually two different parts: the required millisecond-level ordering and the optional use of part of rand_a (which postgres does) to provide higher-resolution (nanosecond, in the postgres case) time ordering when combined with the required portion.

So, no, the “time part” of the postgres implementation is, in part, one of the options discussed in the spec, not merely the “time part” required in the spec.

arghwhat 1 year ago

Relying on an explicitly documented implementation behavior that the specification explicitly describes as an option is not an issue. Especially if the behavior is only relied on in a test, where the worst outcome is a failed testcase that is easily fixed.

Even if the behavior went away, UUIDs unlike serials can always be safely generated directly by the application just as well as they can be generated by the database.

Going straight for that would arguably be the "better" path, and allows mocking the PRNG to get sequential IDs.
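
A minimal sketch of that pattern in Python (the names here are hypothetical, just to illustrate the injection): the application accepts a UUID factory, production passes the real generator, and tests pass a deterministic one.

    import uuid
    from typing import Callable, Iterator

    def sequential_uuids() -> Iterator[uuid.UUID]:
        # Deterministic stand-in for tests: strictly increasing 128-bit IDs.
        n = 0
        while True:
            n += 1
            yield uuid.UUID(int=n)

    def make_account(new_uuid: Callable[[], uuid.UUID]) -> dict:
        # Hypothetical application code that takes the generator as input.
        return {"id": new_uuid()}

    # Production: make_account(uuid.uuid4)
    # Tests:      gen = sequential_uuids(); make_account(lambda: next(gen))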

sbuttgereit 1 year ago

Software is arbitrary. Any so-called "guarantee" is only as good as the developers and organizations maintaining a piece of software want to make it regardless of prior statements. At some point, the practical likelihood of a documented, but not guaranteed, process being violated vs. the willful abandonment of a guarantee start to look very similar.... at which point nothing saves you.

Sometimes the best you can do is recognize who you're working with today, know how they work, and be prepared for those people to be different in the future (or of a different mind) and for things to change regardless of expressed guarantees.

....unless we're talking about the laws of physics... ...that's different...

peterldowns 1 year ago

The test should do a set comparison, not an ordered list comparison, if it wants to check that the same 5 accounts were returned by the API. I think it's as simple as that.
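
In Python terms, with hypothetical helpers, roughly:

    # Order-insensitive: asserts the same 5 accounts came back.
    assert set(fetch_account_ids()) == {a.id for a in created_accounts}

    # Fragile: additionally depends on the backend's (and UUIDv7's) ordering.
    # assert fetch_account_ids() == [a.id for a in created_accounts]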

The blogpost is interesting and I appreciated learning the details of how the UUIDv7 implementation works.

vips7L 1 year ago

Don’t you think that depends on what you’re guaranteeing in your api? If you’re guaranteeing that your api returns the accounts ordered you need to test for that. But I do agree in general that using a set is the correct move.

Too 1 year ago

The test is a very strange example indeed. Is it testing the backend, the database or both? If the api was guaranteeing ordered values, pre-uuid7 the backend must have sorted them by other means before returning, making the test identical. If the backend is not guaranteeing order, that shouldn't be tested either.

sedatk 1 year ago

As a counter-argument, it will inevitably turn into a spec if it becomes widely-used enough.

What was that saying, like: “every behavior of software eventually becomes API”

tomstuart 1 year ago
sedatk 1 year ago

Yes, that one! Thanks :)

the8472 1 year ago

Consider the incentives you're setting up there. An API contract goes both ways: the vendor promises some things and not others to preserve flexibility, and the user has to abide by it to not get broken in the future. If you unilaterally ignore the contract, or even plan to do so in advance, then eventually the kindness and capacity to accommodate such abuse might run out, and they may switch to an adversarial stance. See QUIC, for example, which is a big middle finger to middle boxes.

sedatk 1 year ago

Sure, there is a risk. But, it all depends on how great and desirable the benefits are.

drbojingle 1 year ago

In enterprise land. In proof of concept land that's not quite true (but does become true if the concept works)

StackTopherFlow 1 year ago

I agree, optimizing for readability and maintainability is almost always the right choice.

paulddraper 1 year ago

> Moreover, even if it did, the RFC for UUIDv7 doesn't promise this property.

Huh?

If the docs were to guarantee it, then they guarantee it. Why are you looking for everything to be part of the UUIDv7 RFC?

Failure of logic.

fwip 1 year ago

Their next sentence explains. Other databases might not make that guarantee, including future versions of Postgres.

3eb7988a1663 1 year ago

I too am missing the win on this. It is breaking the spec, and does not seem to offer a significant advantage. In the event that you have a collection of UUIDv7s, you are only ever going to be able to rely on the millisecond precision anyway.

sbuttgereit 1 year ago

You say it's breaking the spec, but is it?

From https://www.rfc-editor.org/rfc/rfc9562.html#name-uuid-versio...:

"UUIDv7 values are created by allocating a Unix timestamp in milliseconds in the most significant 48 bits and filling the remaining 74 bits, excluding the required version and variant bits, with random bits for each new UUIDv7 generated to provide uniqueness as per Section 6.9. Alternatively, implementations MAY fill the 74 bits, jointly, with a combination of the following subfields, in this order from the most significant bits to the least, to guarantee additional monotonicity within a millisecond:

   1.  An OPTIONAL sub-millisecond timestamp fraction (12 bits at
       maximum) as per Section 6.2 (Method 3).

   2.  An OPTIONAL carefully seeded counter as per Section 6.2 (Method 1
       or 2).

   3.  Random data for each new UUIDv7 generated for any remaining
       space."
Which the referenced "method 3" is:

"Replace Leftmost Random Bits with Increased Clock Precision (Method 3):

For UUIDv7, which has millisecond timestamp precision, it is possible to use additional clock precision available on the system to substitute for up to 12 random bits immediately following the timestamp. This can provide values that are time ordered with sub-millisecond precision, using however many bits are appropriate in the implementation environment. With this method, the additional time precision bits MUST follow the timestamp as the next available bit in the rand_a field for UUIDv7."
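
To make Method 3 concrete, here is a rough Python sketch of the layout it describes (an illustration of the RFC text above, not the actual Postgres code):

    import os, time, uuid

    def uuidv7_method3() -> uuid.UUID:
        ns = time.time_ns()
        ms = ns // 1_000_000                      # 48-bit Unix timestamp (ms)
        # Method 3: scale the sub-millisecond remainder into the 12 rand_a bits.
        sub_ms = ((ns % 1_000_000) << 12) // 1_000_000
        rand_b = int.from_bytes(os.urandom(8), "big") >> 2   # 62 random bits
        value = (ms << 80) | (0x7 << 76) | (sub_ms << 64) | (0b10 << 62) | rand_b
        return uuid.UUID(int=value)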

throw0101c 1 year ago

> It is breaking the spec […]

As per a sibling comment, it is not breaking the spec. The comment in the Pg code even cites the spec that says what to do (and is quoted in the post):

     * Generate UUID version 7 per RFC 9562, with the given timestamp.
     *
     * UUID version 7 consists of a Unix timestamp in milliseconds (48
     * bits) and 74 random bits, excluding the required version and
     * variant bits. To ensure monotonicity in scenarios of high-
     * frequency UUID generation, we employ the method "Replace
     * Leftmost Random Bits with Increased Clock Precision (Method 3)",
     * described in the RFC. […]
braiamp 1 year ago

I don't think most people will heed this warning. I warned people in a programming forum that Python's ordering of dict entries by insertion time was an implementation detail, because it's not guaranteed by any PEP [0]. I could literally write a PEP-compliant Python interpreter and it could blow up someone's code because they relied on the CPython interpreter's behavior.

[0]: https://mail.python.org/pipermail/python-dev/2017-December/1...
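
For reference, the behavior in question:

    d = {}
    for key in ["b", "a", "c"]:
        d[key] = True
    print(list(d))  # ['b', 'a', 'c'] -- insertion order, not sorted order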

dragonwriter 1 year ago

> I warned people in a programming forum that Python ordering of objects by insertion time was a implementation detail, because it's not guaranteed by any PEP

PEPs do not provide a spec for Python, they neither cover the initial base language before the PEP process started, nor were all subsequent language changes made through PEPs. The closest thing Python has to a cross-implementation standard is the Python Language Reference for a particular version, treating as excluded anything explicitly noted as a CPython implementation detail. Dictionaries being insertion-ordered went from a CPython implementation detail in 3.6 to guaranteed language feature in 3.7+.

kstrauser 1 year ago

That definitely was true, and I used to jitter my code a little to deliberately find and break tests that depended on any particular ordering.

It's now explicitly documented to be true, and you can officially rely on it. From https://docs.python.org/3/library/stdtypes.html#dict:

> Changed in version 3.7: Dictionary order is guaranteed to be insertion order.

That link documents the Python language's semantics, not the behavior of any particular interpreter.

deadbabe 1 year ago

Most code does not live for a long time. Similar to how consumer products are built for planned obsolescence, code is also built with a specific lifespan in mind.

If you spend time making code bulletproof so it can run for like 100 years, you will have wasted a lot of effort for nothing when someone comes along and wipes it clean and replaces it with new code in 2 years. Requirements change, code changes, it’s the nature of business.

Remember: any fool can build a bridge that stands; it takes an engineer to make a bridge that barely stands.

agilob 1 year ago

>Most code does not live for a long time.

Sure, and here I am at a third company doing a cloud migration and changing our default DB from MySQL to SQL Server. The pain is real: the 2-year roadmap is now 5 years longer. All because some dude negotiated a discount on cloud services. And we still develop integrations that talk to systems written for DOS.

mardifoufs 1 year ago

What? Okay, so assume that most code doesn't last. That doesn't mean you should purposefully make it brittle for basically no additional benefit. If, as you say, it's about making the most with as little as possible (which is what the bridge analogy usually refers to), then surely adding a single function (to actually enforce the ordering you want) to make your code more robust is one of the best examples of that?

Pxtl 1 year ago

Uh, more people work on 20-year-old codebases than you'd think.

9dev 1 year ago

And yet these people are dwarfed by the number of developers crunching out generic line-of-business CRUD apps every day.

mmerickel 1 year ago

Remember: even if timestamps are generated using a monotonically increasing value, that does not mean the rows were committed to the database in the same order. Actually determining which rows are "new" versus "previously seen" (for things like cursor-based APIs and background job processing) is an entirely separate problem. This problem exists even with things like a serial/autoincrement primary key.

shalabhc 1 year ago

+1

What could be useful here is if Postgres provided a way to determine the latest "frozen" UUID. This could be a few ms behind the last committed UUID, but it should guarantee that no new rows will land before the frozen UUID. Then we could use a single cursor to track what's previously seen.

fngjdflmdflg 1 year ago

>The Postgres patch solves the problem by repurposing 12 bits of the UUID’s random component to increase the precision of the timestamp down to nanosecond granularity [...]

>It makes a repeated UUID between processes more likely, but there’s still 62 bits of randomness left to make use of, so collisions remain vastly unlikely.

Does it? Even though the number of random bits has decreased, the time interval to create such a duplicate has also decreased, namely to an interval of one nanosecond.

londons_explore 1 year ago

I could imagine that certain nanoseconds might be vastly more likely than other nanoseconds.

For example, imagine you have a router that sends network packets out at the start of each microsecond, synced to wall time.

Or the OS scheduler always wakes processes up on a millisecond timer tick or some polling loop.

Now, when those packets are received by a postgres server and processed, the time to do that is probably fairly consistent - meaning that X nanoseconds past the microsecond you probably get most records being created.

UltraSane 1 year ago

But only one nanosecond slower or faster and you get another set of 4.611 billion billion random IDs. I think random variations in buffer depths and CPU speeds will easily introduce hundreds of nanoseconds of timing variations. syncing any two things to less than 1 nanosecond is incredibly hard and doesn't happen by accident.

zamadatix 1 year ago

The important part is that the events in time aren't going to be as random as the actual random source. The chances of an actual collision remain low, but the distribution of events over time is a weaker (in relative terms) source of random bits compared to proper "random" sources, which won't have obvious bias at all.

mlyle 1 year ago

We're not talking about nanoseconds of real time; we're talking about nanoseconds as measured by the CPU doing the processing. Nanoseconds are not likely to be a uniform variate.

deepsun 1 year ago

The system doesn't really give you 1-nanosecond precision. It varies by OS and hardware; from what I remember you may get something like 641-nanosecond granularity. The numbers are nanoseconds, yes, but you never get "the next nanosecond": every value you get is the same mod 641. On other systems you may get granularity of 40,000 or even worse on Spectre/Meltdown-protected systems.
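
This is easy to probe on a given system; something along these lines shows the smallest timestamp increments the clock actually delivers:

    import time

    # Collect consecutive clock readings and look at the distinct step sizes.
    samples = [time.time_ns() for _ in range(100_000)]
    steps = sorted({b - a for a, b in zip(samples, samples[1:]) if b > a})
    print(steps[:10])  # smallest observed increments, in nanoseconds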

michaelt 1 year ago

Imagine if you were generating 16 UUIDs per nanosecond, every nanosecond.

According to [1], due to the birthday paradox the probability of a collision in any given nanosecond would be 3e-17, which of course sounds pretty low.

But there are 3.154e+16 nanoseconds in a year - and if you get out your high-precision calculator, it'll tell you there's a 61.41% chance of a collision in a year.

Of course you might very well say "Who needs 16 UUIDs per nanosecond anyway?"

[1] https://www.bdayprob.com/
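
A back-of-the-envelope check of that arithmetic in Python (the yearly figure moves a bit depending on how the per-nanosecond probability is rounded):

    import math

    rate = 16                          # hypothetical UUIDs per nanosecond
    p_ns = math.comb(rate, 2) / 2**62  # birthday bound within one nanosecond
    ns_per_year = 3.154e16
    p_year = 1 - math.exp(ns_per_year * math.log1p(-p_ns))
    print(p_ns, p_year)                # ~2.6e-17 and ~0.56 -- same ballpark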

Horffupolde 1 year ago

So what if there’s a collision? If the column is UNIQUE at most it’ll ROLLBACK on INSERT. 16 INSERTS per nanosecond is 16 billion TPS. At that scale you’ll have other problems.

paulddraper 1 year ago

Depends if you think sub-millisecond locality is significant.

samatman 1 year ago

I maintain that people are too eager to use UUIDv7 to begin with. It's a dessert topping and a floor wax.

Let's say you need an opaque unique handle, and a timestamp, and a monotonically increasing row ID. Common enough. Do they have to be the same thing? Should they be the same thing? Because to me that sounds like three things: an autoincrementing primary key, a UUIDv4, and a nanosecond timestamp.

Is it always ok that the 'opaque' unique ID isn't opaque at all, that it's carrying around a timestamp? Will that allow correlating things which maybe you didn't want hostiles to correlate? Are you 100% sure that you'll never want, or need, to re-timestamp data without changing its global ID?

Maybe you do need these things unnormalized and conflated. Do you though? At least ask the question.

peferron 1 year ago

You can keep all three and still use UUIDv7 as a performance improvement in certain contexts due to data locality.

fastball 1 year ago

Also if you have a nanosecond timestamp, do you actually need a monotonically increasing row ID? What for?

peferron 1 year ago

Perhaps as a tie breaker if you insert multiple rows in a table within one transaction? In this situation, the timestamp returned by e.g. `now()` refers to the start of the transaction, which can cause it to be reused multiple times.

fastball 1 year ago

I meant in a context with a random ID and a timestamp.

If you need to retrieve rows by time/order, you use the timestamp. If you need a row by ID, you use the ID.

The use-cases where you actually need to know which row was inserted first seem exceedingly rare (mostly financial / auditing applications), and even then can probably be handled with separate transactions (as you touch on).

user3939382 1 year ago

Re-timestamp would be a new one for me. What’s a conceivable use case? An NTP fault?

Dylan16807 1 year ago

> The Postgres patch solves the problem by repurposing 12 bits of the UUID’s random component to increase the precision of the timestamp down to nanosecond granularity (filling rand_a above), which in practice is too precise to contain two UUIDv7s generated in the same process.

A millisecond divided by 4096 is not a nanosecond. It's about 250 nanoseconds.
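
That is, the 12 extra bits subdivide each millisecond into 4096 steps:

    >>> 1_000_000 / 4096   # nanoseconds per step when 12 bits split 1 ms
    244.140625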

scrollaway 1 year ago

UUID7 is excellent.

I want to share a django library I wrote a little while back which allows for prefixed identity fields, in the same style as Stripe's ID fields (obj_XXXXXXXXX):

https://github.com/jleclanche/django-prefixed-identity-field...

This gives a PrefixedIdentityField(prefix="obj_"), which is backed by uuid7 and base58. In the database, the IDs are stored as UUIDs, which makes them an efficient field -- they are transformed into prefixed IDs when coming out of the database, which makes them perfect for APIs.
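
The core transformation is roughly this (a sketch of the idea, not the library's actual code; this simplified version uses the Bitcoin base58 alphabet and ignores leading-zero handling):

    import uuid

    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def to_prefixed_id(prefix: str, u: uuid.UUID) -> str:
        # Re-encode the 128-bit UUID as base58 and attach the type prefix.
        n, out = u.int, ""
        while n:
            n, r = divmod(n, 58)
            out = ALPHABET[r] + out
        return prefix + (out or ALPHABET[0])

    # e.g. to_prefixed_id("obj_", some_uuid) for the API, while the column
    # in the database stays a plain 16-byte UUID.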

(I know, no documentation .. if someone wants to use this, feel free to file issues to ask questions, I'd love to help)

dotdi 1 year ago

My org has been using ULID[0] extensively for a few years, and generally we've been quite happy with it. After initially dabbling with a few implementations, I reimplemented the spec in Kotlin, and this has been working out quite well for us. We will open-source our implementation in the coming weeks.

ULID does specifically require generated IDs to be monotonically increasing as opposed to what the RFC for UUIDv7 states, which is a big deal IMHO.

[0]: https://github.com/ulid/spec

willvarfar 1 year ago

Having used a lot of the ULID variants that the UUIDv7 spec cites as prior art, including the ULID spec you link to, I've gotta say that UUIDv7 has some real advantages.

The biggest advantage is that it is hex. Haven't yet met a database system that doesn't have functions for substr and from_hex etc, meaning you can extract the time part using vanilla sql.

ULID and others that use custom variants of base32 or base62 or whatever are just about impossible to wrangle with normal tooling.

Your future selves will thank you for being able to manipulate it in whatever database you use in the future to analyse old logs or import whatever data you generate today.

mixmastamyk 1 year ago

Aren't they stored as 16 bytes in binary? How to format it later as text is then your choice.

WorldMaker 1 year ago

It's that eternal push/pull "war" between "we need a sproc that can report this directly from the SQL server" and "please don't do things directly on the SQL server because you'll route around important application code" and "it's a feature not a bug that you can't just look things up by ID in the DB without a little bit of extra work".

I did work on a project using ULIDs in SQL Server. They were stored in uniqueidentifier fields with a complex byte swap from ULID to fake-UUID to get better storage/indexing out of SQL Server [1]. There was an attempt to use SQL functions to display/search the ULID form directly in the database, but it was never as bug-free as the C# byte-order code, so doing it directly in the DB was definitely not recommended; if a "report" was missing, it was supposed to become part of the application (which was already almost nothing but a bloated "reporting" tool) or of a related "configuration" application. It did feel more like a feature than a bug because it did keep some meddling and drama out of the DB. I also see the arguments for why, in some different types of applications, it makes debugging a lot harder, and those arguments make sense; it is definitely a trade-off to consider.

[1] The rabbit hole into SQL Server's ancient weird UUID sort order: https://devblogs.microsoft.com/oldnewthing/20190426-00/?p=10...

WorldMaker 1 year ago

Also, depending on your DB engine and DB design and storage needs it might be just fine to store ULID as `char(26)` instead of `uniqueidentifier`. It's a lot more space in bytes, but bytes can be pretty affordable and then the ULIDs are never not in their canonical Base-32 form.

I also worked on applications that used ULIDs in string form only, in NoSQL documents, string-based cache keys, and string indexes, just fine. I didn't try `char(26)` columns in a DB to see how well, for instance, SQL Server clustered and indexed them, but I've seen SQL Server do just fine with someone's wild idea of clustering a `varchar(MAX)` field, and I'm sure SQL Server can probably handle it fine on the technical side.

It's nice that you can easily convert a ULID to a 128-bit key, but you certainly don't have to. (Also, people really do like the ugly dashed hex form of UUIDs sometimes, and I've seen people store those directly as strings in places where you'd expect they should just store the 128-bit value; it goes both ways here, I suppose.)

sedatk 1 year ago

Additionally, v7 UUIDs can be generated simultaneously on the client-side by multiple threads without waiting for an oracle to release the next available ID. That's quite important for parallel processing. Otherwise, you might as well use an autoincrement BIGINT.

sedatk 1 year ago

ULID guarantees monotonicity only per process, and it requires ID generation to be serialized. I find the promise quite misleading because of that. You might as well use a wide-enough integer with the current timestamp + random as baseline for the same purpose, but I wouldn't recommend that either.

lordofgibbons 1 year ago

What benefit does this have over something like Twitter's Snowflake, which can be used to generate distributed monotonically increasing IDs without synchronization?

We've been using an implementation of it in Go for many years in production without issues.

WorldMaker 1 year ago

UUIDv7 interoperates with all the other versions of UUID. The v7 support in Postgres doesn't add a new column type, it makes the existing column type more powerful/capable. Applications that had been using UUIDv4 everywhere can get cheap Snowflake-like benefits in existing code just from switching the generator function. Most languages have a GUID or UUID class/struct that is compatibly upgradable from v4 to v7, too.

akvadrako 1 year ago

Snowflake is a 64 bit integer. It doesn't need a new column type and works everywhere.

willvarfar 1 year ago

Ordering for UUIDv7s in the same millisecond is super useful when some rows represent actions and others reactions.

I have used this guarantee for events generated on clients. It really simplifies a lot of reasoning.

wslh 1 year ago

This post makes me think (keep thinking) about whether we could use a solution I used for another project in another context: using a cryptographic Feistel network to compute UUIDs so they are reversible if you need to know the original order. Each entity uses a different key for generation, but anyone who knows the keys can recover the other party's ordering. Basically it means using an existing cryptographic function if the block size is the same, and if not, adapting it to a specific block size via a Feistel network.
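
For anyone unfamiliar with the trick, the general shape in Python looks like this (a sketch, not the project described above): a keyed Feistel permutation over a 32-bit sequence number is reversible, so a key holder can recover the original order while outsiders see opaque values.

    import hashlib

    def round_fn(half: int, key: bytes, rnd: int) -> int:
        # Keyed round function: hash (key, round, half) down to 16 bits.
        d = hashlib.sha256(key + bytes([rnd]) + half.to_bytes(2, "big")).digest()
        return int.from_bytes(d[:2], "big")

    def permute32(n: int, key: bytes, rounds: int = 4) -> int:
        left, right = n >> 16, n & 0xFFFF
        for rnd in range(rounds):
            left, right = right, left ^ round_fn(right, key, rnd)
        return (left << 16) | right

    def unpermute32(n: int, key: bytes, rounds: int = 4) -> int:
        left, right = n >> 16, n & 0xFFFF
        for rnd in reversed(range(rounds)):
            left, right = right ^ round_fn(left, key, rnd), left
        return (left << 16) | right

    assert unpermute32(permute32(12345, b"secret"), b"secret") == 12345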

Glyptodon 1 year ago

On one hand I too am looking forward to more widespread use of UUIDv7, but on the other I don't really get the problem this is solving for their spec. If you care about timestamp ordering, I don't think it makes sense to do it in a way that forces you to fake a PK whenever you insert an earlier-dated record at a later point. But I guess I'm implicitly assuming that human-meaningful dates differ from insertion times in many domains.

kagitac 1 year ago

After reading this I went ahead and added the extra 12 bits to my Go UUID library. Thanks for the write up on the PostgreSQL patch.

If anyone is interested, here is the package: https://github.com/cmackenzie1/go-uuid. It also includes a CLI similar to that of `uuidgen`, but supports version 7.

dfee 1 year ago

I have an implementation function that computes N v7 UUIDs, sorts them, and returns them. This makes testing possible.

    Collection<UUID> generate(final int count);
I also have an interface that I can back with an RNG that generates auto-incrementing values and sorts them for testing, so I get the ergonomics of ints; in production, my non-timestamp component is random.

pphysch 1 year ago

The naming of "rand_a" and "rand_b" in the spec is a bit misleading. They don't have to be generated randomly. I'm sure there's a historical reason for it.

"extra_" or "distinct_" would be a more accurate prefix for UUIDv7.

UUIDv7 is actually quite a flexible standard due to these two underspecified fields. I'm glad Postgres took advantage of that!

nikisweeting 1 year ago

I implemented this in pure Python a few days ago in case anyone finds it helpful, here it is: https://gist.github.com/pirate/7e44387c12f434a77072d50c52a3d...

My implementation supports graceful degradation between nanosecond-scale resolution, microsecond, and millisecond, by using 12 bits for each and filling up the leftmost bits of rand_a and rand_b. Not all environments provide high-resolution system clocks with no drift, so it's important to maintain monotonicity when generating IDs with a low-res timestamp as input. You still want the bits that would've held the nanosecond value to be monotonic.

Neither of the existing uuid_utils and uuid7 Python libs that can generate UUIDv7s supports this monotonicity property.

Am planning on using this for ArchiveBox append-only "snapshot" records, which are intrinsically linked to time, so it's a good use-case imo.

There's another great resource here that I think is one of the best explainers of UUIDv7: https://antonz.org/uuidv7/

Whatever you do, don't implement the cursed 36-bit whole-second based time UUIDv7 variant that you occasionally see on StackOverflow / blog posts, stick to 48!

tomComb 1 year ago

This looks great, thanks. But I think gists are better for unimportant stuff; this deserves its own repo.

nikisweeting 1 year ago

It's in the ArchiveBox git repo and I may give it its own library eventually, but for quick linking it's easier to read, and less dependent on the rest of that codebase, as a standalone script.

hardwaresofton 1 year ago

Been waiting for UUIDv7 for years -- maybe it's time to archive pg_idkit[0], or maybe instead just switch the UUIDv7 version to do the native thing rather than the Rust code.

[0]: https://github.com/VADOSWARE/pg_idkit

urronglol 1 year ago

What is a v7 UUID? Why do we need more than (1) a UUID from a random seed and (2) one derived from that and a timestamp (orderable)?

cube2222 1 year ago

UUID v7 is what you numbered #2.

For the others, it’s best to read up on Wikipedia[0]. I believe they all have their unique use-cases and tradeoffs.

E.g. including information about which node of the system generated an ID.

[0]: https://en.m.wikipedia.org/wiki/Universally_unique_identifie...

chimpontherun 1 year ago

As it is usual in many areas of human endeavor, newcomers to the field tend to criticize design decisions that were made before them, only to re-invent what was already invented.

Sometimes it leads to improvements in the field, via rejection of the accumulated legacy crud, or just simply affording a new perspective. Most other times it's a well-intentioned, but low-effort noise.

I, personally, do it myself. This is how I learn.

n2d4 1 year ago

UUID v7 is the latter, whereas v4 is the former.

All the other versions are somewhat legacy, and you shouldn't use them in new systems (besides v8, which is "custom format UUID", if you need that.)

elehack 1 year ago

UUID v5 is quite useful if you want to deterministically convert external identifiers into UUIDs: define a namespace UUID for each potential identifier source (to keep them separate), then use that to derive a v5 UUID from the external identifier. It's very useful for idempotent data imports.
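
With Python's standard library, for example (the namespace URL is a made-up placeholder):

    import uuid

    # One namespace per identifier source keeps IDs from different
    # systems distinct even if the raw external identifiers collide.
    CRM_NS = uuid.uuid5(uuid.NAMESPACE_URL, "https://example.com/crm")

    def import_id(external_id: str) -> uuid.UUID:
        return uuid.uuid5(CRM_NS, external_id)

    # Deterministic: re-importing "cust-42" always yields the same UUID.
    assert import_id("cust-42") == import_id("cust-42")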

jandrewrogers 1 year ago

Both UUIDv3 and UUIDv5 are prohibited for some use cases in some countries (including the US), which is something to be aware of. Unfortunately, no one has created an updated standard UUID that uses a hash function that is not broken. While useful it is not always an option.

mind-blight 1 year ago

A deterministic UUID based on a hash is also very useful (UUIDv5). I've used that for deduping records from multiple sources.

urronglol 1 year ago

[flagged]

purerandomness 1 year ago

It comes off as a low-effort question that seems to try to evoke a reply from a peer, while it's the kind of question that is best answered by an LLM, Google, or Wikipedia.

a3w 1 year ago

Well, an LLM could answer it, or write total bullshit. But yes, Wikipedia or other quick internet research will generally help. And exactly here, it will tell you that there are competing standards because we have competing use cases.

kraftman 1 year ago

For simple questions like this with unambiguous answers, it is statistically very unlikely that you'll get a bullshit answer from an LLM.

treve 1 year ago

I didn't downvote you, but the terseness made it for me immediately come off as a kind of criticism, e.g.: "Why would we ever need it". May not have been your intent but if it was a genuine question, form matters.

urronglol 1 year ago

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

treve 1 year ago

I'm not arguing with you, but I'm giving you feedback on your communication style, which is even with this comment still completely lacking.

mtmail 1 year ago

From the same guidelines "Please don't comment about the voting on comments. It never does any good, and it makes boring reading." treve gave insight into their thought process when they read your initial comment, and I had the same reaction. Neither treve nor I downvoted it.