Implement a capped pool that starts dropping messages once it reaches <size>. Because it never grows past that cap, it will have a much lower rate of failure under load than a database or standard queue.
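A minimal in-process sketch of the capped pool, assuming a Python implementation; the `CappedPool` name and its API are illustrative, not an existing class:

```python
import queue

class CappedPool:
    """Bounded in-memory pool that silently drops messages once it holds `size` items."""

    def __init__(self, size):
        self._q = queue.Queue(maxsize=size)

    def put(self, msg):
        """Return True if stored, False if the pool was full and msg was dropped."""
        try:
            self._q.put_nowait(msg)
            return True
        except queue.Full:
            return False

    def get(self):
        """Return the oldest message, or None if the pool is empty."""
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None
```

Dropping on `put` (rather than blocking or raising) is the point: under load the pool sheds work instead of backing pressure up into the producer.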
For example, if a burst of data comes in, we'd first put it into the pool and then fire off a job to the queue (the job would be argless and simply say "get data from the pool"; it could also eventually be replaced with a continuous processor). This guarantees that your queue isn't overflowing with large amounts of data, but rather with potential no-ops (just like our buffer implementation), and will also ensure you don't
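The enqueue-then-drain pattern above might look like the following sketch; `submit_job`, `enqueue`, and `drain_pool` are hypothetical names, and the pool is stood in for by a bounded `queue.Queue`:

```python
import queue

# Capped pool: holds at most 1000 messages, drops the rest under load.
pool = queue.Queue(maxsize=1000)

def enqueue(msg, submit_job):
    """On ingest: stash the payload in the pool, then fire an argless job.

    The job queue only ever carries tiny "drain the pool" markers,
    never the payloads themselves.
    """
    try:
        pool.put_nowait(msg)
    except queue.Full:
        return False  # pool is capped: drop rather than overflow downstream
    submit_job("drain_pool")  # argless job; payload stays out of the queue
    return True

def drain_pool(handle):
    """The argless job body: process everything currently in the pool.

    If a concurrent job already drained the pool, this is a cheap no-op,
    which is exactly why duplicate jobs on the queue are harmless.
    """
    while True:
        try:
            msg = pool.get_nowait()
        except queue.Empty:
            return
        handle(msg)
```

Because `drain_pool` takes no arguments, duplicate or stale jobs cost almost nothing, which is what keeps the queue cheap even when ingest is bursty.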