How to store and retrieve gzip-compressed objects in AWS S3
# vim: set fileencoding=utf-8 :
#
# How to store and retrieve gzip-compressed objects in AWS S3
###########################################################################
#
# Copyright 2015 Vince Veselosky and contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import absolute_import, print_function, unicode_literals
from io import BytesIO
from gzip import GzipFile
import boto3
s3 = boto3.client('s3')
bucket = 'bluebucket.mindvessel.net'
# Read in some example text, as unicode
with open("utext.txt", "rb") as fi:
    text_body = fi.read().decode("utf-8")
# A GzipFile must wrap a real file or a file-like object. We do not want to
# write to disk, so we use a BytesIO as a buffer.
gz_body = BytesIO()
gz = GzipFile(None, 'wb', 9, gz_body)  # filename=None, mode='wb', compresslevel=9, fileobj=gz_body
gz.write(text_body.encode('utf-8')) # convert unicode strings to bytes!
gz.close()
# GzipFile has written the compressed bytes into our gz_body
s3.put_object(
    Bucket=bucket,
    Key='gztest.txt',  # Note: NO .gz extension!
    ContentType='text/plain',  # the original type
    ContentEncoding='gzip',  # MUST be set or browsers will error
    Body=gz_body.getvalue()
)
retr = s3.get_object(Bucket=bucket, Key='gztest.txt')
# Now the fun part. Reading it back requires this little dance, because
# GzipFile insists that its underlying file-like thing implement tell and
# seek, but boto3's io stream does not.
bytestream = BytesIO(retr['Body'].read())
got_text = GzipFile(None, 'rb', fileobj=bytestream).read().decode('utf-8')
assert got_text == text_body
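
On Python 3, the gzip module's convenience helpers shorten both halves of this; here is a minimal sketch of the same round trip (gzip.compress and gzip.decompress have existed since Python 3.2):

import gzip
import boto3

s3 = boto3.client('s3')
bucket = 'bluebucket.mindvessel.net'
with open('utext.txt', 'rb') as fi:
    text_body = fi.read().decode('utf-8')

# gzip.compress does the in-memory buffering for us, so no BytesIO is needed.
s3.put_object(
    Bucket=bucket,
    Key='gztest.txt',
    ContentType='text/plain',
    ContentEncoding='gzip',
    Body=gzip.compress(text_body.encode('utf-8')),
)

# gzip.decompress likewise replaces the GzipFile/BytesIO dance on the way back.
retr = s3.get_object(Bucket=bucket, Key='gztest.txt')
assert gzip.decompress(retr['Body'].read()).decode('utf-8') == text_body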
@veselosky (Owner Author) commented Dec 1, 2015

Browsers will honor the Content-Encoding header and decompress the content automatically. This is roughly the same as running mod_gzip in your Apache or Nginx server, except that this data is always compressed, whereas mod_gzip only compresses the response if the client advertises that it accepts compression. In practice, all real browsers accept it. Most programming-language HTTP libraries also handle it transparently (but not boto3, as demonstrated above).
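
For example, the requests library applies the same transparent decoding a browser would, so the body comes back as plain text; a minimal sketch, assuming the object has been made publicly readable (the URL below is hypothetical):

import requests

# requests sees 'Content-Encoding: gzip' on the response and decompresses
# automatically; resp.text is the original plain text, not gzip bytes.
resp = requests.get('https://s3.amazonaws.com/bluebucket.mindvessel.net/gztest.txt')
print(resp.text)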

It is worth noting that curl does not detect compression unless you have specifically asked it to. I strongly recommend adding --compressed to your .curlrc, because when would you not want this?

@joelgrus commented Oct 27, 2016

thank you, I found this helpful!

@fjavieralba commented Jan 3, 2017

very useful piece of code, thanks!

@soulmachine commented Feb 24, 2017

you saved my day, thanks!

@DaleBetts commented Dec 22, 2017

Good stuff, saved me in the world of Lambda :) thanks.

@nikchanda commented Jan 16, 2018

Very helpful. Thank you!

@skghosh-invn commented Mar 16, 2018

Hi Vince, can you please comment on this Stack Overflow question?

@georgezoto commented Apr 2, 2018

Great code, I was looking for this online!

@apoorvab08 commented Apr 11, 2018

Thanks a lot for this! Looked all over for this!! Finally got it to work!

@OXPHOS commented Apr 25, 2018

Saved my day. Thanks!

@urton commented Jul 27, 2018

I've been trying to read CloudTrail logs from S3 without downloading them, and had nearly given up on the get()['Body'].read() approach until you explained the 'little dance' for reading it back. THANK YOU.

@mattkiz commented Aug 4, 2018

This is a good fix, but I don't think it works for multi-file archives
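
gzip itself compresses a single stream; a multi-file bundle is usually a tar archive that was gzipped afterward. For that case, Python's tarfile module can read straight from an in-memory buffer; a minimal sketch, using a hypothetical key:

import tarfile
from io import BytesIO
import boto3

s3 = boto3.client('s3')
# 'logs.tar.gz' is a hypothetical key, for illustration only.
retr = s3.get_object(Bucket='bluebucket.mindvessel.net', Key='logs.tar.gz')
# mode 'r:gz' tells tarfile to gunzip while reading the archive members.
with tarfile.open(fileobj=BytesIO(retr['Body'].read()), mode='r:gz') as tar:
    for member in tar.getmembers():
        print(member.name)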

@silent-vim commented Aug 24, 2018

Thanks!

@Markovenom commented Nov 27, 2018

Great code man, thnx!

@n-nakamichi commented Apr 11, 2019

Thanks!
