Metadata in PDF files can be stored in at least two places:
- the Info Dictionary, a limited set of key/value pairs
- XMP packets, which contain RDF statements expressed as XML
A PDF file contains a) objects and b) pointers to those objects.
When information is added to a PDF file, it is appended to the end of the file and a pointer is added.
When information is removed from a PDF file, the pointer is removed, but the actual data may not be removed.
To remove previously-deleted data, the PDF file must be rebuilt.
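A quick way to see this append-only behaviour is a toy file. The "PDF" below is not valid PDF, it only mimics how an incremental update appends a replacement object while leaving the old bytes in place:

```shell
# Toy illustration only: create a file with a (fake) metadata object.
printf '%%PDF-1.4\n1 0 obj\n<< /Title (SecretTitle) >>\nendobj\n%%%%EOF\n' > demo.pdf
# "Remove" the title by appending an updated object, the way an incremental update would:
printf '1 0 obj\n<< >>\nendobj\n%%%%EOF\n' >> demo.pdf
# Readers follow the newest object, but the old value is still physically in the file:
grep -c 'SecretTitle' demo.pdf    # prints 1
```

Only rebuilding the file (e.g. with qpdf, below) actually drops the orphaned bytes.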
pdftk can be used to update the Info Dictionary of a PDF file. See pdftk-unset-info-dictionary-values.php
below for an example. As noted in the pdftk documentation, though, pdftk does not alter XMP metadata.
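For instance, pdftk's update_info operation takes a dump_data-style text file; giving a key an empty InfoValue blanks it (in.pdf and out.pdf are placeholder names, and this touches only the Info Dictionary, not XMP):

```shell
# Write a dump_data-style file that blanks Title and Author.
cat > blank-info.txt <<'EOF'
InfoBegin
InfoKey: Title
InfoValue:
InfoBegin
InfoKey: Author
InfoValue:
EOF
# Apply it (guarded so the snippet is harmless where pdftk is not installed).
if command -v pdftk >/dev/null; then
  pdftk in.pdf update_info blank-info.txt output out.pdf
fi
```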
exiftool can be used to read and write XMP metadata in PDF files:
- exiftool -all:all $FILE reads all the tags.
- exiftool -all:all= $FILE removes all the tags.
exiftool -all:all= also removes the pointer to the Info Dictionary, but does not completely remove the data.
qpdf can be used to linearize PDF files (qpdf --linearize $FILE $OUTFILE), which optimises them for fast web loading and removes any orphaned data.
After running qpdf, there may be new XMP metadata, as it extracts metadata from any embedded objects. To read the XMP tags of embedded objects, use exiftool -extractEmbedded -all:all $FILE.
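Putting the steps above together, a sketch of the full scrub-and-verify sequence (file names are placeholders; the guard skips the call when the tools are not installed):

```shell
# Blank the tags, rebuild the file, then verify nothing readable remains.
scrub_pdf() {
  exiftool -all:all= "$1" &&
  qpdf --linearize "$1" "${1%.pdf}-clean.pdf" &&
  exiftool -extractEmbedded -all:all "${1%.pdf}-clean.pdf"
}

if command -v exiftool >/dev/null && command -v qpdf >/dev/null; then
  scrub_pdf document.pdf
fi
```

Note that exiftool keeps a document_original backup by default; delete it too if the goal is to destroy the old metadata.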
Hi, I wonder whether exiftool is still (or ever was) a valid approach.
If I run exiftool on my file, it warns me that the tags are not really removed.
My file also grows from 542.9 kB to 543.2 kB after exiftool, and then from 543.2 kB to 544.6 kB after qpdf. So it seems more information is actually being added?
Let's see if these pdf-redact-tools do anything more. However, I definitely do not want to follow one of their approaches: stacking PNG files and calling it a PDF (that won't be searchable, has no vector-graphic figures, and is probably larger in file size...).
//edit: OK, that is actually the only approach they support, so it's not applicable for me (and shouldn't be for most people who don't want to hand out very large or poor-quality PDFs).