@adilsoncarvalho
Last active April 16, 2024 12:03
Bitbucket Pipelines configuration for deploying to Google Container Engine
options:
  docker: true
pipelines:
  branches:
    master:
      - step:
          image: google/cloud-sdk:latest
          name: Deploy to production
          deployment: production
          caches:
            - docker
          script:
            # SETUP
            - export IMAGE_NAME=us.gcr.io/$GCLOUD_PROJECT/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT
            - export ENVIRONMENT=production
            - echo $GCLOUD_API_KEYFILE | base64 -d > ~/.gcloud-api-key.json
            - gcloud auth activate-service-account --key-file ~/.gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            - gcloud container clusters get-credentials $GCLOUD_CLUSTER --zone=$GCLOUD_ZONE --project $GCLOUD_PROJECT
            - gcloud auth configure-docker --quiet
            # BUILD IMAGE
            - docker build . -t $IMAGE_NAME
            # PUBLISH IMAGE
            - docker push $IMAGE_NAME
            # DEPLOYMENT
            - kubectl set image deployment $BITBUCKET_REPO_SLUG-$ENVIRONMENT $BITBUCKET_REPO_SLUG=$IMAGE_NAME --record --namespace=$K8S_NAMESPACE
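
One optional step, not part of the original gist: appending a rollout check to the script would make the pipeline fail if the new image never finishes rolling out.

            # VERIFY (hypothetical extra step, not in the original gist)
            - kubectl rollout status deployment $BITBUCKET_REPO_SLUG-$ENVIRONMENT --namespace=$K8S_NAMESPACE --timeout=300s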
@loooping-old

Hey Adilson, how's it going? Man, have you ever deployed a Laravel application this way on GCE?

@brandoncollins7

How can you cache these components so gcloud and kubectl don't need to waste your minutes installing every time?

@adilsoncarvalho

@brandoncollins7 I updated my gist. A while ago I started using the Google Cloud SDK image, which has everything we need. Another improvement since then is that Bitbucket Pipelines started caching Docker as well. The Docker image build pipeline got really fast with those two changes.
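
For reference, the two pieces of the config above that do this (annotations are my reading of it):

    - step:
        image: google/cloud-sdk:latest   # ships with gcloud and kubectl preinstalled
        caches:
          - docker                       # reuses Docker layers between pipeline runs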

@adilsoncarvalho

@loooping Never Laravel, but I have done deploys like this for Yii. There's nothing special about the process. It's quite straightforward.

@Lucifer017

I have a service account key, but it gave me an error like this: "Unable to read file [/root/gcloud/mct-deployments.json]". Where did you put the key? -> gcloud auth activate-service-account --key-file ~/.gcloud-api-key.json

@hidekuro

@Lucifer017
Line 16:

- echo $GCLOUD_API_KEYFILE | base64 -d > ~/.gcloud-api-key.json

$GCLOUD_API_KEYFILE appears to be a base64-encoded keyfile provided as an environment variable in your Bitbucket repository or account settings.
See also: https://confluence.atlassian.com/bitbucket/variables-in-pipelines-794502608.html

You can generate the base64-encoded keyfile text on your local machine like this:

cat /PATH/TO/KEYFILE | base64 | tr -d '\n'

Then set the output as a repository or account environment variable for the pipeline to use.
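
A quick sanity check (my suggestion, assuming GNU base64 and python3 are available locally): run the encoded text back through a decode and confirm it is still valid JSON before saving it as a pipeline variable.

# encode, then decode again and validate that the JSON round-trips cleanly
cat /PATH/TO/KEYFILE | base64 | tr -d '\n' | base64 -d | python3 -m json.tool > /dev/null && echo 'keyfile OK'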

@martinyung

I changed line 16 to:
echo $GCLOUD_API_KEYFILE > ~/.gcloud-api-key.json
and it works.
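
Worth noting (my aside, not from the comment above): this works when the variable holds the raw, unencoded JSON. Unquoted, the shell collapses the key's newlines into spaces, which still yields valid JSON; quoting the variable preserves the file exactly:

echo "$GCLOUD_API_KEYFILE" > ~/.gcloud-api-key.json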

@ArcherEmiya05

Good day, I am learning Bitbucket Pipelines and Docker and I just have a few questions here

  • What exactly are these lines doing?
    - export IMAGE_NAME=us.gcr.io/$GCLOUD_PROJECT/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT
    - export ENVIRONMENT=production
    Are these performed by Docker?

  • How can we perform caching here? This is what I assume:

         caches:
           - docker
         script:
           - export IMAGE_NAME=us.gcr.io/$GCLOUD_PROJECT/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT
           - export ENVIRONMENT=production
           - echo $GCLOUD_KEYFILE | base64 -d > ~/.gcloud-api-key.json
           - gcloud auth activate-service-account --key-file ~/.gcloud-api-key.json
           - gcloud config set project $GCLOUD_PROJECT
         services:
           - docker

@adilsoncarvalho

@ArcherEmiya05 thanks for reaching out.

The exports define environment variables that are used later in the script. The image name always follows this pattern, and to avoid repeating myself recreating it, I decided to build it in a single place and just use it.

  • GCLOUD_PROJECT: an env var I set in my pipelines config; it holds the Google Cloud project I am working with
  • BITBUCKET_REPO_SLUG: this env var is set by Pipelines, and it holds the repository name (say I had a repo https://github.com/adilsoncarvalho/my-project; BITBUCKET_REPO_SLUG would then be set to my-project)
  • BITBUCKET_COMMIT: another env var set by Pipelines. It holds the SHA of the commit you are building. I like using that as an easy way to track exactly which point in time the image is about.

If we assume that your gcloud project is named ACORN, the repo is named MYCODE, and the commit you are building is d670460b4b4aece5915caf5c68d12f560a9fe3e4, your image will be created as us.gcr.io/ACORN/MYCODE:d670460b4b4aece5915caf5c68d12f560a9fe3e4

The ENVIRONMENT variable is then used to tell Kubernetes which environment the image should be deployed to. This is a pattern I use, but you can use different ones, like separating environments by namespace.

kubectl set image deployment $BITBUCKET_REPO_SLUG-$ENVIRONMENT $BITBUCKET_REPO_SLUG=$IMAGE_NAME --record --namespace=$K8S_NAMESPACE
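
With the ACORN/MYCODE example values above, that command expands to:

kubectl set image deployment MYCODE-production MYCODE=us.gcr.io/ACORN/MYCODE:d670460b4b4aece5915caf5c68d12f560a9fe3e4 --record --namespace=$K8S_NAMESPACE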

I hope that helped :)

@denisjo7

@adilsoncarvalho Thanks a lot! I'm just getting to know Bitbucket and had been trying for a long time to adapt my GitHub CI/CD to it, but I was running into some complications, like not being able to use stages to group and clearly separate each step.

@ArcherEmiya05

@adilsoncarvalho Thanks for the detailed explanation; this gist helps me a lot and enables me to accomplish what I wanted. However, is it necessary to do the export and environment-setting part, or can I just authenticate, select the project, and run the gcloud commands I need? Will there be any side effect if I skip that part? Also, can you point me to where I can find the pushed image, perhaps in the Cloud Console?
