How to store and retrieve gzip-compressed objects in AWS S3
# vim: set fileencoding=utf-8 :
#
# How to store and retrieve gzip-compressed objects in AWS S3
###########################################################################
#
# Copyright 2015 Vince Veselosky and contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import absolute_import, print_function, unicode_literals
import io
from io import BytesIO
from gzip import GzipFile
import boto3

s3 = boto3.client('s3')
bucket = 'bluebucket.mindvessel.net'

# Read in some example text, as unicode. io.open decodes for us on both
# Python 2 and 3; a bare open().read().decode("utf-8") works only on Python 2.
with io.open("utext.txt", encoding="utf-8") as fi:
    text_body = fi.read()

# A GzipFile must wrap a real file or a file-like object. We do not want to
# write to disk, so we use a BytesIO as a buffer.
gz_body = BytesIO()
gz = GzipFile(None, 'wb', 9, gz_body)
gz.write(text_body.encode('utf-8'))  # convert unicode strings to bytes!
gz.close()

# GzipFile has written the compressed bytes into our gz_body.
s3.put_object(
    Bucket=bucket,
    Key='gztest.txt',  # Note: NO .gz extension!
    ContentType='text/plain',  # the original type
    ContentEncoding='gzip',  # REQUIRED, or clients will not decompress the body
    Body=gz_body.getvalue(),
)

retr = s3.get_object(Bucket=bucket, Key='gztest.txt')

# Now the fun part. Reading it back requires this little dance, because
# GzipFile insists that its underlying file-like object implement tell and
# seek, but boto3's streaming body does not.
bytestream = BytesIO(retr['Body'].read())
got_text = GzipFile(None, 'rb', fileobj=bytestream).read().decode('utf-8')
assert got_text == text_body
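As an aside (not part of the original gist), the read-back dance can also be skipped entirely by using the standard-library zlib module, which decompresses gzip-framed data directly from bytes, so no file-like wrapper with seek and tell is needed. A minimal local round trip, with no S3 involved:

```python
import zlib
from io import BytesIO
from gzip import GzipFile

# Compress exactly as the gist does: GzipFile writing into a BytesIO buffer.
buf = BytesIO()
gz = GzipFile(None, 'wb', 9, buf)
gz.write('Hello, S3!'.encode('utf-8'))
gz.close()
blob = buf.getvalue()

# wbits=16 + MAX_WBITS tells zlib to expect the gzip header and trailer,
# so it can decode the raw bytes of get_object()['Body'].read() directly.
text = zlib.decompress(blob, 16 + zlib.MAX_WBITS).decode('utf-8')
assert text == 'Hello, S3!'
```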
joelgrus commented Oct 27, 2016: thank you, I found this helpful!

fjavieralba commented Jan 3, 2017: very useful piece of code, thanks!

soulmachine commented Feb 24, 2017: you saved my day, thanks!

DaleBetts commented Dec 22, 2017: Good stuff, saved me in the world of Lambda :) thanks.

nikchanda commented Jan 16, 2018: Very helpful. Thank you!

skghosh-invn commented Mar 16, 2018: Hi Vince, can you please comment on this Stackoverflow question?

georgezoto commented Apr 2, 2018: Great code, I was looking for this online!

apoorvab08 commented Apr 11, 2018: Thanks a lot for this! Looked all over for this!! Finally got it to work!

OXPHOS commented Apr 25, 2018: Saved my day. Thanks!

urton commented Jul 27, 2018: I've been trying to read, and avoid downloading, CloudTrail logs from S3, and had nearly given up on get()['Body'].read() until you explained the 'little dance' of reading it back. THANK YOU.

mattkiz commented Aug 4, 2018: This is a good fix, but I don't think it works for multi-file archives.

silent-vim commented Aug 24, 2018: Thanks!

Markovenom commented Nov 27, 2018: Great code man, thnx!
veselosky commented Dec 1, 2015:
Browsers will honor the Content-Encoding header and decompress the content automatically. This is roughly the same as running mod_gzip in your Apache or Nginx server, except that this data is always stored compressed, whereas mod_gzip only compresses the response if the client advertises that it accepts compression. In practice, all real browsers accept it. Most programming-language HTTP libraries also handle it transparently (but not boto3, as demonstrated above).
It is worth noting that curl does not decode compressed responses unless you have specifically asked it to. I strongly recommend adding --compressed to your .curlrc, because when would you not want this?