#!/bin/bash
# Copyright © 2017 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

IFS=$'\n\t'
set -euo pipefail

if [[ "$#" -ne 2 || "${1}" == '-h' || "${1}" == '--help' ]]; then
  cat >&2 <<"EOF"
gcrgc.sh cleans up tagged or untagged images pushed before the specified date
for a given repository (an image name without a tag/digest).

USAGE:
  gcrgc.sh REPOSITORY DATE

EXAMPLE:
  gcrgc.sh gcr.io/ahmet/my-app 2017-04-01

  would clean up everything under the gcr.io/ahmet/my-app repository
  pushed before 2017-04-01.
EOF
  exit 1
elif [[ "${#2}" -ne 10 ]]; then
  echo "wrong DATE format; use YYYY-MM-DD." >&2
  exit 1
fi

main() {
  local C=0
  IMAGE="${1}"
  DATE="${2}"
  for digest in $(gcloud container images list-tags "${IMAGE}" --limit=999999 --sort-by=TIMESTAMP \
    --filter="timestamp.datetime < '${DATE}'" --format='get(digest)'); do
    (
      set -x
      gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
    )
    let C=C+1
  done
  echo "Deleted ${C} images in ${IMAGE}." >&2
}

main "${1}" "${2}"
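A note on the date argument: the script only checks that DATE is 10 characters long, so a malformed string like `2017-0401x` slips through. A stricter check is a small sketch (the function name is illustrative; the calendar check assumes GNU `date`):

```shell
#!/bin/bash
# Stricter DATE validation than the 10-character length check in gcrgc.sh.
is_valid_date() {
  local d="$1"
  # Shape check: exactly YYYY-MM-DD
  [[ "$d" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}$ ]] || return 1
  # Calendar check: let GNU date(1) try to parse it
  date -d "$d" '+%Y-%m-%d' >/dev/null 2>&1
}

is_valid_date "2017-04-01" && echo "accepted"
```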
I have a little upgrade for LTS images, where you need to keep older stuff:
COUNTALL="$(gcloud container images list-tags "${IMAGE}" --limit=999999 --sort-by=TIMESTAMP | grep -v DIGEST | wc -l)"
for digest in $(gcloud container images list-tags "${IMAGE}" --limit=999999 --sort-by=TIMESTAMP \
  --filter="timestamp.datetime < '${DATE}'" --format='get(digest)'); do
  if [ $(( COUNTALL - C )) -ge 10 ]; then
    (
      set -x
      gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
    )
    let C=C+1
  else
    echo "Deleted ${C} images in ${IMAGE}." >&2
    break
  fi
done
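The guard in that loop stops deleting once the remaining image count hits the threshold; note that with `-ge 10` the last delete leaves only 9 images behind, so `-gt` is what keeps exactly 10. The bookkeeping can be tried without gcloud, using placeholder digests (a sketch; `KEEP` and the fake digest list are illustrative):

```shell
#!/bin/bash
# Simulate the keep-N deletion loop with fake digests (oldest first).
KEEP=10
DIGESTS=$(seq 1 25)                 # stand-ins for 25 image digests
COUNTALL=$(echo "$DIGESTS" | wc -l)
C=0                                 # number deleted so far
for d in $DIGESTS; do
  if [ $(( COUNTALL - C )) -gt "$KEEP" ]; then
    # the real script would run: gcloud container images delete ...
    C=$(( C + 1 ))
  else
    break
  fi
done
echo "deleted ${C}, kept $(( COUNTALL - C ))"
```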
You could define how many images should stay.
@falkvoigt can you update your script? It's currently incorrectly formatted in the gist; it does nothing if you just run it! :)
@wibobm you can pin to a commit number on gists easily.
@falkvoigt what does your change do? mind explaining?
@evaldasou I just edited his comment to correct the formatting.
@wibobm I ran into the same requirement. It's easy to accomplish by changing the filter in line 44 of the original script to:
--filter "NOT tags:* AND timestamp.datetime < '${DATE}'"
I think there's a way to delete multiple images in the same command, which can be a time saver
UPDATE:
Consider using https://github.com/sethvargo/gcr-cleaner which is a Cloud Run app that you deploy and can be triggered periodically with Cloud Scheduler to garbage collect old images.
Quick question: I have something similar that I'm going to implement, but I want this script to run inside a Pod in my k8s cluster.
So after dockerizing this script and invoking it as a CronJob in my cluster, how can I make sure that I'm authorized to run the gcloud list and delete image commands?
@jeunii You might have a look at https://github.com/sethvargo/gcr-cleaner
This one automatically scans all images from the project, no need to setup per repo as with this or gcr-cleaner: https://github.com/matti/gcr-pruner/blob/main/README.md
One more inline command
gcloud container images list-tags gcr.io/<project-name>/<image_name> --filter="timestamp.date('%Y-%m-%d', Z)<'2021-05-01'" --format="get(digest)" --limit=999999 | awk '{print "gcr.io/<project-name>/<image_name>@" $1}' | xargs gcloud container images delete --force-delete-tags --quiet
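The `awk` stage in that one-liner only prepends the repository path to each bare digest before handing the full references to `xargs`. With sample input it behaves like this (the repository name is a placeholder; `-v` passes it in instead of hard-coding it inside the awk program):

```shell
#!/bin/bash
# Prepend the repository to bare digests, as the awk stage of the one-liner does.
REPO="gcr.io/my-project/my-app"   # placeholder repository
refs=$(printf 'sha256:aaa\nsha256:bbb\n' | awk -v repo="$REPO" '{print repo "@" $1}')
echo "$refs"
```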
Delete images older than 7 days:
GCLOUD_PROJECT_ID=<project> \
CONTAINER_IMAGE_NAME=<image name> \
gcloud container images list-tags \
--project="${GCLOUD_PROJECT_ID}" \
"gcr.io/${GCLOUD_PROJECT_ID}/${CONTAINER_IMAGE_NAME}" \
--filter="timestamp.date('%Y-%m-%d', Z)<$(date --date='-7 days' +'%Y-%m-%d')" \
--format="get(digest)" --limit=999999 | awk '{print "'"gcr.io/${GCLOUD_PROJECT_ID}/${CONTAINER_IMAGE_NAME}@"'" $1}' \
| xargs -n 1 gcloud container images delete --project="${GCLOUD_PROJECT_ID}" --force-delete-tags --quiet
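The cutoff in that filter comes from `date --date='-7 days'`, which is GNU-date syntax (on BSD/macOS the equivalent would be `date -v-7d`). Because ISO `YYYY-MM-DD` dates sort lexicographically, the string comparison in the gcloud filter does the right thing:

```shell
#!/bin/bash
# Compute a YYYY-MM-DD cutoff 7 days in the past (GNU date).
cutoff=$(date --date='-7 days' +'%Y-%m-%d')
today=$(date +'%Y-%m-%d')
# ISO dates compare correctly as plain strings, e.g. "2021-04-24" < "2021-05-01".
echo "cutoff=${cutoff} today=${today}"
```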
Is there a way to check whether any pod is using the image before deleting it, rather than just checking the date?
@matti perfect! But how can I add it into this sh file before deleting the container?
by programming, sometimes you just have to learn it.
Thanks @oprudkyi, I have updated your script to what works for me today! :)
It seems --project is not allowed for me through the Cloud Shell editor, so I removed that argument.
gcloud container images list-tags "gcr.io/{Project_ID}/{Container_Image_Path}" --filter="timestamp.date('%Y-%m-%d', Z)<$(date --date='-7 days' +'%Y-%m-%d')" --format="get(digest)" --limit=999999 | awk '{print "'"gcr.io/{Project_ID}/{Container_Image_Path}@"'" $1}' | xargs -n 1 gcloud container images delete --force-delete-tags --quiet
This gcr.io/{Project_ID}/{Container_Image_Path} should be replaced with the full path located under the Container Registry.
But this is horribly slow to run.
@bbakkebo
xargs -n1 -P4
might be faster; it will run 4 processes in parallel, but the output will be unreadable.
Also, we run it inside a CI/CD pipeline after each deploy, so there are fewer images to delete (only the first run is slow).
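A harmless way to see the `-P 4` fan-out, with `echo` standing in for the gcloud delete (the digests are placeholders; `-I{}` already implies one argument per invocation):

```shell
#!/bin/bash
# Run up to 4 "deletes" in parallel; echo stands in for the gcloud command.
out=$(printf 'sha256:%s\n' a b c d e f | xargs -P 4 -I{} echo "deleting {}")
# With -P the line order may vary, but all six inputs are processed.
echo "$out"
```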
GCR.IO is deprecated.
Script to only keep the 10 latest images for every registry in a project:

#!/bin/bash
PROJECT=<PROJECT>

# Get a list of all registries in the project (assuming eu.gcr.io)
REGISTRIES=$(gcloud container images list --repository="eu.gcr.io/${PROJECT}")

# Loop over each registry
while IFS= read -r REGISTRY; do
  # Skip the header line
  if [[ "$REGISTRY" == "NAME" ]]; then
    continue
  fi
  while true; do
    # Get a list of all image digests, sorted by date (newest first)
    ALL_DIGESTS=$(gcloud container images list-tags "$REGISTRY" --format="get(digest)" --sort-by="~timestamp")
    # Reset the array and index counter
    unset DIGEST_ARRAY
    i=0
    # Process the multi-line output into an array
    while IFS= read -r line; do
      DIGEST_ARRAY[i++]="$line"
    done <<< "$ALL_DIGESTS"
    # Get the count of all digests
    DIGEST_COUNT=${#DIGEST_ARRAY[@]}
    # If there are more than 10 digests, delete the oldest ones
    if [[ $DIGEST_COUNT -gt 10 ]]; then
      for i in $(seq 10 $((DIGEST_COUNT - 1))); do
        DIGEST=${DIGEST_ARRAY[$i]}
        gcloud container images delete "${REGISTRY}@${DIGEST}" --force-delete-tags --quiet
      done
    else
      echo "There are $DIGEST_COUNT images in $REGISTRY, no images to delete."
      break
    fi
    # Sleep for a while before the next iteration (optional)
    sleep 300
  done
done <<< "$REGISTRIES"
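The array bookkeeping in that script (manual index counter, `seq` over the tail) can be condensed with `mapfile` and bash array slicing. A self-contained sketch with generated placeholder digests in place of `list-tags` output (requires bash 4+):

```shell
#!/bin/bash
# Keep the 10 newest digests; slice off the rest as deletion candidates.
mapfile -t DIGESTS < <(seq 25 -1 1 | sed 's/^/sha256:/')  # newest first
TO_DELETE=("${DIGESTS[@]:10}")   # everything past the first 10
echo "keeping 10, deleting ${#TO_DELETE[@]}"
```

Each element of `TO_DELETE` would then be fed to `gcloud container images delete` as in the script above.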
This works for Container Registry. In Artifact Registry, a cleanup policy can be configured to do the same: https://cloud.google.com/artifact-registry/docs/repositories/cleanup-policy.
Pretty sweet script. Could I bother you to make a version that does this but keeps tagged versions?