@yonixw
Created May 6, 2021 19:50
lambda for easy s3 upload with curl
import json
import logging
import os

import boto3
from botocore.exceptions import ClientError

"""
Example curl (needs PUT and a file):
curl -L -v -XPUT -H "x-amz-acl: public-read" -T $FILE_PATH https://XXXXXXXXX.execute-api.REGION.amazonaws.com/default/uploadKnownS3?id=$UPLOAD_ID

Env:
$UPLOAD_ID: ["$BUCKET_NAME","$SAVE_PATH","$BUCKET_REGION"] // JSON array of strings
"""


def create_presigned_url(bucket_name, object_name, region, action="put_object", expiration=60):
    """Generate a presigned URL for uploading an S3 object.

    :param bucket_name: string
    :param object_name: string
    :param region: bucket region (e.g. "us-east-1")
    :param action: S3 client method to presign (default "put_object")
    :param expiration: time in seconds the presigned URL remains valid
    :return: presigned URL as a string, or None on error
    """
    s3_client = boto3.client('s3', region_name=region)
    try:
        response = s3_client.generate_presigned_url(
            ClientMethod=action,
            Params={
                'Bucket': bucket_name,
                'Key': object_name,
                'ACL': 'public-read'
            },
            ExpiresIn=expiration
        )
    except ClientError as e:
        logging.error(e)
        return None
    # generate_presigned_url returns the URL string directly
    return response


def lambda_handler(event, context):
    myLogID = "(" + context.log_stream_name + ", " + context.aws_request_id + ")"
    print(json.dumps({'reqID': myLogID, 'id': event["queryStringParameters"]["id"]}))

    # The env var named by ?id=... holds a JSON array: [bucket, key, region]
    myItemDataArr = json.loads(os.environ[event["queryStringParameters"]["id"]])
    print(json.dumps(myItemDataArr))

    myS3URL = create_presigned_url(myItemDataArr[0], myItemDataArr[1], myItemDataArr[2])
    print(str(myS3URL))

    # 302 redirect: curl -L follows the Location header straight to S3
    return {
        'statusCode': 302,
        'headers': {
            'location': myS3URL,
        },
        'body': ''
    }
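For reference, the environment variable looked up via `?id=...` must hold a JSON array the handler can unpack. A minimal sketch of the expected format (the variable name and bucket/key/region values below are made-up examples):

```python
import json
import os

# Hypothetical Lambda env var "myid": a JSON array of [bucket, key, region]
os.environ["myid"] = '["my-bucket", "uploads/file.bin", "us-east-1"]'

bucket, key, region = json.loads(os.environ["myid"])
print(bucket, key, region)
```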
yonixw commented May 6, 2021

If the file is larger than 10 MB (API Gateway's payload limit), you first need to get the presigned URL from the Location header:
echo $(curl -v -XPUT -H "x-amz-acl: public-read" -T "" https://XXXXXXXXX.execute-api.REGION.amazonaws.com/default/uploadKnownS3?id=$UPLOAD_ID 2>&1 | grep location: | awk '{print $3}')

and then use it instead:
curl -L -v -XPUT -H "x-amz-acl: public-read" -T $FILE_PATH "$LONG_URL"
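The same two-step flow can be scripted with only the Python standard library: send the PUT without following the redirect, then read the Location header. A sketch, assuming the same 302 behavior as above (the handler class name and API URL are placeholders, not part of the gist):

```python
import urllib.error
import urllib.request


class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes urllib raise HTTPError on the 302
    # instead of following it
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


def get_presigned_url(api_url):
    """PUT an empty body and read the presigned URL out of the
    Location header (same trick as the grep/awk pipeline above)."""
    req = urllib.request.Request(
        api_url, data=b"", method="PUT",
        headers={"x-amz-acl": "public-read"})
    opener = urllib.request.build_opener(NoRedirect)
    try:
        opener.open(req)
    except urllib.error.HTTPError as e:
        if e.code == 302:
            return e.headers["Location"]
        raise
    return None
```

With the URL in hand, the actual upload is a plain PUT of the file body to `get_presigned_url(...)`.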


S3 supports up to 5 GB per single HTTP PUT; larger files can be split first:
split -n 2/3 $FILE > $FILE.2.3      # save only the second of 3 chunks
split -b 2000m $FILE $FILE.split.   # save 2000 MB chunks

Join the parts back into one file:
cat $FILE.* > $FILE
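Where split/cat are unavailable, the same fixed-size chunking can be sketched in Python (the function names here are made up for illustration):

```python
import os


def split_file(path, chunk_bytes):
    """Rough equivalent of `split -b`: writes path.split.000, .001, ..."""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_bytes)
            if not chunk:
                break
            part = "%s.split.%03d" % (path, index)
            with open(part, "wb") as dst:
                dst.write(chunk)
            parts.append(part)
            index += 1
    return parts


def join_files(parts, out_path):
    """Rough equivalent of `cat $FILE.split.* > $FILE`."""
    with open(out_path, "wb") as dst:
        for part in parts:
            with open(part, "rb") as src:
                dst.write(src.read())
```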
