So you want to understand what's going on with all these reports of "OpenZFS might be eating my data".
A bunch of software uses "where are the holes in this file" as an optimization to avoid reading large swathes of zeroes. Many filesystems implement a call like FIEMAP, which returns a list of "these regions of the file contain data", in addition to (or instead of) SEEK_HOLE/SEEK_DATA, which are extensions to lseek that tell you where the next hole or data region is, starting from a given position in the file.
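To make that concrete, here's a minimal sketch (my own illustration, not code from coreutils or any particular tool) of what using those lseek extensions looks like: it walks a file and prints the byte ranges the filesystem claims contain data.

```c
/* A minimal sketch of walking a file's data/hole map with those lseek
 * extensions.  Error handling is abbreviated; a real tool would also
 * fall back gracefully when the filesystem doesn't support
 * SEEK_HOLE/SEEK_DATA. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    off_t end = lseek(fd, 0, SEEK_END);
    off_t pos = 0;

    while (pos < end) {
        /* Start of the next region that (allegedly) contains data... */
        off_t data = lseek(fd, pos, SEEK_DATA);
        if (data < 0)
            break;                /* ENXIO: only a hole remains to EOF */

        /* ...and the hole that terminates it. */
        off_t hole = lseek(fd, data, SEEK_HOLE);
        if (hole < 0)
            hole = end;

        printf("data: %lld..%lld\n", (long long)data, (long long)hole);
        pos = hole;
    }

    close(fd);
    return 0;
}
```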
OpenZFS, back in 2013, implemented support for SEEK_HOLE/SEEK_DATA, and that went into release 0.6.2.
On ZFS, features like compression might decide that your large swathe of zeroes is actually a hole in the file and store it accordingly, so ZFS has to flush all the pending dirty data for a file out to disk before it can accurately say whether the file has holes, and where.
So ZFS implemented a check along the lines of "if this file is dirty, force a flush" whenever SEEK_HOLE/SEEK_DATA is used on it.
Unfortunately, it turns out the "is this thing dirty" check was incomplete. Sometimes it could decide a file wasn't dirty when it actually was, skip the flush, and hand back stale information about the file's contents, if you hit a very narrow window between one part of the file being flushed and another part not.
If you actually read the data, you'd get the correct contents; but if you skipped reading parts of the file entirely because you thought they were holes, then your program has an incorrect idea of what the contents were, and might write that idea out somewhere, or modify the "empty" regions and write out the result.
So if you, say, hit this while using cp on a file, you might get regions of zeroes in the destination where none existed in the source, no matter what filesystem the destination is on, as long as the source was on ZFS and you hit this bug.
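To see why that turns into zeroes on the destination, here's a hedged sketch of the kind of sparse-copy loop a cp-style tool uses (again my own illustration, not coreutils' actual code): it only reads the ranges SEEK_DATA/SEEK_HOLE report as data and leaves everything else as a hole, so if the map it gets back is stale, regions that really held data are never read and come out as zeroes on the other end.

```c
/* A hedged sketch of a sparse-copy loop.  Only the ranges that
 * SEEK_DATA/SEEK_HOLE report as data are read; everything else is
 * left as a hole in the destination.  If the source filesystem hands
 * back a stale map, regions that really held data are never read, and
 * the destination reads back as zeroes there. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int sparse_copy(int in, int out)
{
    char buf[1 << 16];
    off_t end = lseek(in, 0, SEEK_END);
    off_t pos = 0;

    while (pos < end) {
        off_t data = lseek(in, pos, SEEK_DATA);
        if (data < 0)
            break;                      /* hole runs to EOF */
        off_t hole = lseek(in, data, SEEK_HOLE);
        if (hole < 0)
            hole = end;

        /* Copy only the region the filesystem claims holds data. */
        lseek(in, data, SEEK_SET);
        lseek(out, data, SEEK_SET);
        for (off_t off = data; off < hole; ) {
            size_t want = sizeof(buf);
            if ((off_t)want > hole - off)
                want = (size_t)(hole - off);
            ssize_t n = read(in, buf, want);
            if (n <= 0)
                return -1;
            if (write(out, buf, (size_t)n) != n)
                return -1;
            off += n;
        }
        pos = hole;
    }

    /* Match the source length; skipped ranges stay holes, i.e. they
     * read back as zeroes on the destination. */
    return ftruncate(out, end);
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    return sparse_copy(in, out) == 0 ? 0 : 1;
}
```

The important part is that nothing downstream ever sees the skipped bytes: the loop trusts the reported map completely and never reads the ranges it was told were holes.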
But if you were, say, compiling something, and the linker got zeroes for some of the object files, the output might not contain literal zeroes where the bad data went in, because the linker isn't just copying what it reads; it's transforming it and writing out a result.
So you can't just say "if I have regions of zeroes, this is mangled", because plenty of files legitimately have large regions of zeroes. You also can't say "if I have no regions of zeroes, this isn't mangled", because, again, programs often read files and do things with the contents rather than just writing them back out.
This isn't just a problem with using cp, or any particular program.
The good news is that this is very hard to hit without contrived examples, which is why it went unnoticed for so long. There were a number of cases where a change made the window of incorrectness much wider; those got noticed very quickly and undone, and afterward we couldn't reproduce the problem even with the contrived examples.
It's also not very common because GNU coreutils only started using this kind of hole detection by default in 9.0+, and even that only covers the trivial case of cp; things outside coreutils that read files might be doing god knows what.
So your best bet, if you have a known-good copy of the source data (or a hash of one), is to compare your files against that. Anything else is going to be a heuristic with false positives and false negatives.
Yes, that sucks. But life is messy sometimes.
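If you do have a known-good copy, the comparison itself is trivial; cmp(1) or comparing hashes will do. For illustration, here's a rough sketch (mine, with an arbitrary buffer size) that walks a known-good copy and a suspect copy in lockstep and reports the first difference, noting whether the suspect byte is a zero, since zeroes where the good copy has data is the pattern this bug leaves behind.

```c
/* A rough sketch of the "compare against a known-good copy" approach:
 * walk both files in lockstep and report the first difference, noting
 * whether the suspect byte is a zero.  cmp(1) or comparing hashes
 * answers the yes/no question just as well; the 64 KiB buffer size is
 * an arbitrary choice. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s KNOWN_GOOD SUSPECT\n", argv[0]);
        return 2;
    }

    FILE *good = fopen(argv[1], "rb");
    FILE *susp = fopen(argv[2], "rb");
    if (!good || !susp) {
        perror("fopen");
        return 2;
    }

    char gbuf[1 << 16], sbuf[1 << 16];
    long long offset = 0;

    for (;;) {
        size_t gn = fread(gbuf, 1, sizeof(gbuf), good);
        size_t sn = fread(sbuf, 1, sizeof(sbuf), susp);
        if (gn == 0 && sn == 0) {
            printf("files match\n");
            return 0;
        }
        if (gn != sn) {
            printf("length mismatch near offset %lld\n", offset);
            return 1;
        }
        for (size_t i = 0; i < gn; i++) {
            if (gbuf[i] != sbuf[i]) {
                printf("differs at offset %lld (suspect byte is %s)\n",
                       offset + (long long)i,
                       sbuf[i] == 0 ? "zero" : "non-zero");
                return 1;
            }
        }
        offset += (long long)gn;
    }
}
```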
In all the (many, many) comments on this problem, I've yet to see a clear statement of which change introduced it.
Does anyone know?