@veselosky
Last active May 8, 2023 21:42
How to store and retrieve gzip-compressed objects in AWS S3
# vim: set fileencoding=utf-8 :
#
# How to store and retrieve gzip-compressed objects in AWS S3
###########################################################################
#
# Copyright 2015 Vince Veselosky and contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import absolute_import, print_function, unicode_literals
from io import BytesIO
from gzip import GzipFile
import boto3
s3 = boto3.client('s3')
bucket = 'bluebucket.mindvessel.net'
# Read in some example text, as unicode
with open("utext.txt") as fi:
    text_body = fi.read().decode("utf-8")
# A GzipFile must wrap a real file or a file-like object. We do not want to
# write to disk, so we use a BytesIO as a buffer.
gz_body = BytesIO()
gz = GzipFile(None, 'wb', 9, gz_body)
gz.write(text_body.encode('utf-8')) # convert unicode strings to bytes!
gz.close()
# GzipFile has written the compressed bytes into our gz_body
s3.put_object(
    Bucket=bucket,
    Key='gztest.txt',  # Note: NO .gz extension!
    ContentType='text/plain',  # the original type
    ContentEncoding='gzip',  # MUST have or browsers will error
    Body=gz_body.getvalue()
)
retr = s3.get_object(Bucket=bucket, Key='gztest.txt')
# Now the fun part. Reading it back requires this little dance, because
# GzipFile insists that its underlying file-like thing implement tell and
# seek, but boto3's io stream does not.
bytestream = BytesIO(retr['Body'].read())
got_text = GzipFile(None, 'rb', fileobj=bytestream).read().decode('utf-8')
assert got_text == text_body
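On Python 3 the same round trip no longer needs the BytesIO dance, since `gzip.compress` and `gzip.decompress` operate on bytes directly. A minimal sketch, wrapped in helper functions (the function names are mine, not part of the original gist; `s3` is a boto3 client as above):

```python
import gzip

def put_gzipped_text(s3, bucket, key, text):
    # Compress the unicode text and tag the object so browsers
    # transparently decompress it (same headers as the gist above).
    s3.put_object(
        Bucket=bucket,
        Key=key,
        ContentType='text/plain',
        ContentEncoding='gzip',
        Body=gzip.compress(text.encode('utf-8')),
    )

def get_gzipped_text(s3, bucket, key):
    # boto3's StreamingBody is not seekable, but reading it fully into
    # bytes lets gzip.decompress handle the rest in one call.
    retr = s3.get_object(Bucket=bucket, Key=key)
    return gzip.decompress(retr['Body'].read()).decode('utf-8')
```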
@joelgrus

thank you, I found this helpful!

@fjavieralba

very useful piece of code, thanks!

@soulmachine

you saved my day, thanks!

@DaleBetts

DaleBetts commented Dec 22, 2017

Good stuff, saved me in the world of Lambda :) thanks.

@nikchanda

Very helpful. Thank you!

@skghosh-invn

Hi Vince, can you please comment on this Stack Overflow question?

@georgezoto

Great code, I was looking for this online!

@apoorvab08

apoorvab08 commented Apr 11, 2018

Thanks a lot for this! Looked all over for this!! Finally got it to work!

@OXPHOS

OXPHOS commented Apr 25, 2018

Saved my day. Thanks!

@urton

urton commented Jul 27, 2018

I've been trying to read, and avoid downloading, CloudTrail logs from S3 and had nearly given up on the get()['Body'].read() class until you explained reading back the 'little dance'. THANK YOU.

@mattkiz

mattkiz commented Aug 4, 2018

This is a good fix, but I don't think it works for multi-file archives

@silent-vim

Thanks!

@Markovenom

Great code man, thanks!

@n-nakamichi

Thanks!

@Gatsby-Lee

Thank you for sharing.
I tried it with python3. Here is the code.

python3 + boto3

import gzip

retr = s3.get_object(Bucket=bucket, Key='gztest.txt')
got_text = gzip.decompress(retr['Body'].read()).decode('utf-8')
assert got_text == text_body

@murty0

murty0 commented Jul 17, 2019

The decompression works, that's all I needed! Thanks

@sanjayadroll

Thanks for this code. However in certain cases I get this error on this line

gz.write(my_csv_file.encode('utf-8')) # convert unicode strings to bytes!

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 618: ordinal not in range(128)
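For what it's worth, that traceback is a Python 2 quirk: if `my_csv_file` is already a byte string (`str` in Python 2), calling `.encode('utf-8')` on it first performs an implicit ASCII *decode*, which blows up on bytes like `0xe7`. One way around it is to decode explicitly with the file's real encoding first; a sketch (the helper name and the `latin-1` default are assumptions, not from the gist):

```python
def to_utf8_bytes(raw, source_encoding='latin-1'):
    # If we were handed bytes, decode them with their actual encoding
    # first; encoding a byte string directly triggers an implicit
    # ASCII decode on Python 2, which is where the error comes from.
    if isinstance(raw, bytes):
        raw = raw.decode(source_encoding)
    return raw.encode('utf-8')

# gz.write(to_utf8_bytes(my_csv_file))  # safe for non-ASCII input
```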

@rushabh268

Thanks!

@mayanktri007

Could you provide the same code in Java?

@RajaShyam

Thanks a lot!

@rameshka

rameshka commented Jun 3, 2020

Thanks a lot !!

@ggtools

ggtools commented May 31, 2021

Quite nice, however it has a really big issue: I have the feeling that you need to hold the compressed file in memory before sending it. Might work for something quite small, but it will definitely be a pain for very large files.

@Gatsby-Lee

Quite nice, however it has a really big issue: I have the feeling that you need to hold the compressed file in memory before sending it. Might work for something quite small, but it will definitely be a pain for very large files.

If you have a use case that needs to handle a bigger size, I think you can update LN:50,51 to stream to a file.
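One way to sketch that suggestion: compress to a temporary file on disk instead of a BytesIO, then hand the file to boto3's `upload_fileobj`, which uploads in chunks. The helper name is mine and the `ExtraArgs` values just mirror the gist's headers; this is an outline under those assumptions, not part of the original code:

```python
import gzip
import shutil
import tempfile

def gzip_to_tempfile(src_path):
    # Stream src_path through gzip into an anonymous temp file, so only
    # one buffer's worth of data is held in memory at a time.
    tmp = tempfile.TemporaryFile()
    with open(src_path, 'rb') as src, \
            gzip.GzipFile(fileobj=tmp, mode='wb') as gz:
        shutil.copyfileobj(src, gz)
    tmp.seek(0)  # rewind so the uploader reads from the start
    return tmp

# with gzip_to_tempfile('big.txt') as body:
#     s3.upload_fileobj(body, bucket, 'big.txt',
#                       ExtraArgs={'ContentType': 'text/plain',
#                                  'ContentEncoding': 'gzip'})
```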

@sgodav

sgodav commented Jun 23, 2021

Thank you so much, Saved my day !

@tinashe-wamambo

Thank you, this was very helpful after many struggles and searches

@gunnerVivek

@sanjayadroll Did you ever solve it? Looks like the source file has characters that could not be encoded ....
