Your Postgres server is out of disk space, to the point that
VACUUM FULL fails on a table too large to rewrite back out to disk, even though other tables may still have space to give up.
Start with the tables that have the least "un-wasted" space, since they require the least free space to process, and hopefully the cumulative effect of releasing space from the smaller tables frees up enough room to process your larger tables as you go. If not, there's no solution available to you short of deleting data or expanding your disk.
Unfortunately, Postgres doesn't have a built-in mechanism to show waste/bloat inside its data files, but there are ways to approximate it.
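One way to get an exact per-table measurement (at the cost of a full table scan) is the `pgstattuple` contrib extension that ships with Postgres; a minimal sketch, where `some_schema.some_table` is a placeholder:

```sql
-- Requires the pgstattuple contrib extension (ships with Postgres).
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- dead_tuple_len + free_space approximates the bytes VACUUM FULL could reclaim;
-- tuple_len approximates the "un-wasted" bytes it would have to rewrite.
SELECT table_len,
       tuple_len                   AS unwasted_bytes,
       dead_tuple_len + free_space AS wasted_bytes
FROM pgstattuple('some_schema.some_table');
```

This is too slow to run across every table on a large cluster, which is why the estimation query below is useful despite being a kludge.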
This abominable kludge is the result of smushing together this query from check_postgres, which estimates "wasted space", and this query, which shows table sizes.
- This only produces a result set; you need to turn it into a series of
`VACUUM FULL schema.table` statements.
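In psql, the `\gexec` metacommand can do that conversion for you: have the query emit each statement as a string, and psql executes every row of the result. A minimal sketch, assuming the bloat query is wrapped in a view or CTE named `bloat_report` (hypothetical) exposing `schemaname`, `tablename`, and `unwasted` columns:

```sql
-- format(%I.%I) quotes the identifiers safely; \gexec then runs
-- each row of the result set as its own command.
SELECT format('VACUUM FULL %I.%I', schemaname, tablename)
FROM bloat_report          -- hypothetical: your bloat query
ORDER BY unwasted ASC      -- smallest un-wasted space first, per the strategy above
\gexec
```

Note that `\gexec` replaces the usual trailing semicolon on the SELECT.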
- For sanity's sake you probably want to raise the
`wastedbytes > 0` filter to a more reasonable threshold to avoid needlessly
`VACUUM FULL`-ing tables that won't release meaningful amounts of space.
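For example, bumping the filter to roughly 100 MB (an arbitrary figure; pick one that suits your disk) skips tables that aren't worth rewriting:

```sql
-- Hypothetical excerpt of the query's WHERE clause:
WHERE wastedbytes > 100 * 1024 * 1024  -- was: wastedbytes > 0
```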
- You can probably do math on the cumulative values of the
`wasted` column, plus the starting free space on your partition, compared with the
`unwasted` column, to pre-check whether running this script is a viable solution for you at all without actually having to run it.
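That pre-check can be sketched with a window function: walk the tables smallest-un-wasted first, add each table's released waste to the running free-space total, and flag any table whose un-wasted size exceeds the free space available when its turn comes. Here `bloat_report` and the 50 GB starting free space are assumptions to fill in from your own query and `df` output:

```sql
-- Hypothetical: bloat_report has (schemaname, tablename, unwasted, wasted), all in bytes.
WITH ordered AS (
  SELECT schemaname, tablename, unwasted, wasted,
         -- Free space available *before* processing this table: what the
         -- partition starts with, plus waste released by earlier tables.
         50 * 1024^3  -- assumed starting free space; substitute your own
           + COALESCE(SUM(wasted) OVER (ORDER BY unwasted
                                        ROWS BETWEEN UNBOUNDED PRECEDING
                                          AND 1 PRECEDING), 0) AS free_before
  FROM bloat_report
)
SELECT schemaname, tablename,
       unwasted <= free_before AS can_vacuum_full
FROM ordered
ORDER BY unwasted;
```

If every row comes back with `can_vacuum_full = true`, the smallest-first strategy should get you through the whole list; a `false` row marks the table where it would get stuck.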