Continuous Integration and Deployment using Google Cloud Build

October 19, 2020


Google Cloud Build is a great option for anyone who needs a fast and easy Continuous Integration and Continuous Deployment tool, especially if the infrastructure is already hosted on Google Cloud Platform.

This quick guide shows how to run tests and deploy an application that uses:

  • Google App Engine (python37 server)
  • Datastore (NoSQL db)
  • Google Secret Manager (store and encrypt/decrypt secrets)
  • redis (cache)
  • pytest (tests)


Setup is very easy: all you need to do is jump into the documentation and start playing with it.

First, create your GCP project and enable billing.

Go to the Cloud Build console and create a trigger.

Each trigger fires on one of these events:

  • Push to a branch
  • Push new tag
  • Pull Request (currently GitHub only)

Authorize with GitHub, select the repository, and define a regular expression matching the chosen branch or tag.

(Screenshot: Cloud Build trigger branch configuration)

There are two options for the build configuration: a cloudbuild.yaml file or a custom Dockerfile.

(Screenshot: build configuration options)

cloudbuild.yaml setup

Everything is based on Docker. You can either create a custom image or use an existing one.

Setup services

First, if the project uses a database, cache, datastore, or any other service, I recommend using docker-compose to set everything up:

version: '3'

services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    expose:
      - "6379"

  datastore:
    image: google/cloud-sdk  # assumed image; any image with the gcloud CLI works
    command: gcloud beta emulators datastore start --project test --host-port "0.0.0.0:8002" --no-store-on-disk --consistency=1
    ports:
      - "8002:8002"
    expose:
      - "8002"

networks:
  default:
    external:
      name: cloudbuild

The important part here is the cloudbuild network name: it gives every step of the build access to the services defined above. Remember to use the service name as the hostname when your code connects to these services in test mode.
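To make the same code work both locally and inside the build, one option is a small helper that switches hostnames. This is only a sketch: the CLOUDBUILD flag and the helper name are my assumptions, not part of the original setup.

```python
import os


def service_host(name: str, default: str = "localhost") -> str:
    """Return the docker-compose service name when running inside Cloud Build,
    otherwise fall back to localhost for local development."""
    # CLOUDBUILD is a hypothetical flag you would set in the build step's env.
    return name if os.environ.get("CLOUDBUILD") else default


# Inside the cloudbuild network, services are reachable by their compose names:
redis_host = service_host("redis")
datastore_host = service_host("datastore")
```

Setting `CLOUDBUILD=1` in a step's `env` list would flip every connection to the compose service names without touching the code.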

Define cloudbuild.yaml with all steps


steps:
# 1
- name: "docker/compose:1.15.0"
  id: "docker-compose"
  args: ["up", "-d"]

# 2
- name: python:3.7
  id: requirements
  entrypoint: bash
  args: ["scripts/"]
  waitFor: ['docker-compose']

# 3
- name: python:3.7
  id: tests
  entrypoint: bash
  env:
    - "PYTHONPATH=/workspace:/workspace/lib"
  args: ["scripts/"]
  waitFor: ['docker-compose', 'requirements']

# 4
- name: gcr.io/cloud-builders/gcloud  # assumed image; any image with the gcloud CLI works
  id: secrets
  entrypoint: bash
  args: ['scripts/']
  waitFor: ['tests']

# 5
- name: gcr.io/cloud-builders/gcloud  # assumed image
  id: deployment-service1  # step ids must be unique
  entrypoint: bash
  env:
    - "YAML_CONFIG=app_service1.yaml"
  args: ['scripts/']
  waitFor: ['tests', 'secrets']

# 6
- name: gcr.io/cloud-builders/gcloud  # assumed image
  id: deployment-service2  # step ids must be unique
  entrypoint: bash
  env:
    - "YAML_CONFIG=app_service2.yaml"
  args: ['scripts/']
  waitFor: ['tests', 'secrets']


#1 Run docker-compose to set up the database and cache services

#2 Install python dependencies

pip install -r requirements.txt -t /workspace/lib
pip install -r requirements-test.txt -t /workspace/lib

#3 Start tests

python -m pytest -c cloudbuild_pytest.ini -vvv

#4 Pull all the secrets to the instance

# value(name) prints bare secret names, with no table header to skip
secrets=$(gcloud secrets list --format='value(name)')

for secret in $secrets; do
  value=$(gcloud secrets versions access latest --secret="$secret")
  # do anything you want with the secrets
done

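One way to hand those secrets to the deploy step is a .env file, which python-dotenv (installed in step #5) can read. A minimal sketch; the helper name and file layout are my assumptions:

```python
def write_env_file(secrets: dict, path: str = ".env") -> None:
    """Write fetched secrets as KEY=VALUE lines so python-dotenv can load them later."""
    with open(path, "w") as fh:
        for key, value in sorted(secrets.items()):
            fh.write(f"{key}={value}\n")


# Example: persist two fetched secrets for the deploy step to pick up.
write_env_file({"DB_PASSWORD": "s3cret", "API_KEY": "abc"}, path=".env.example")
```

Because every step shares the /workspace volume, a file written here is visible to the later deployment steps.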
#5/#6 Deploy two App Engine services in parallel

pip install setuptools
pip install pyyaml python-dotenv

# render app yaml and load secrets

gcloud components install beta

gcloud app deploy --quiet queue.yaml
gcloud app deploy --quiet cron.yaml
gcloud app deploy --quiet index.yaml

gcloud beta app deploy --quiet $YAML_CONFIG --promote
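The "render app yaml and load secrets" step is not shown above; one way to sketch it, using only the standard library instead of pyyaml (the function name and layout are my assumptions), is to render an env_variables block for the service config before running gcloud app deploy:

```python
def render_env_variables(secrets: dict) -> str:
    """Render an App Engine `env_variables:` YAML block from a dict of secrets."""
    lines = ["env_variables:"]
    for key, value in sorted(secrets.items()):
        lines.append(f'  {key}: "{value}"')
    return "\n".join(lines)


# Appending the rendered block to e.g. app_service1.yaml before deployment:
# with open("app_service1.yaml", "a") as fh:
#     fh.write("\n" + render_env_variables(secrets))
```

This keeps the secrets out of the repository while still delivering them to the App Engine runtime as environment variables.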


Last but not least, permissions. There is no need to create any special service accounts or credentials; the important part happens in the IAM console.

Go to the console, find the Cloud Build service account member ([PROJECT_NUMBER]@cloudbuild.gserviceaccount.com), and edit its permissions.

Common permissions for the App Engine case:

  • App Engine Admin (app engine deployment)
  • Cloud Scheduler Admin (cron job deployment)
  • Cloud Tasks Admin (tasks job deployment)
  • Compute Instance Admin (beta) (beta deployer for redis in GCP)
  • Cloud Datastore Index Admin (index datastore)
  • Secret Manager Secret Accessor (access secrets)
  • Secret Manager Viewer (display secrets)
  • Serverless VPC Access User (Network for redis connection inside GCP)

Build history

(Screenshot: Cloud Build build history)

GitHub build status

(Screenshot: GitHub build status checks)


I'm very happy with this solution, mainly because of:

  • high security (everything stays inside the Google network)
  • access to all Google services (useful if you keep your architecture inside GCP)
  • ease and speed of use
  • Docker support
  • readable results (in the Google console and on GitHub)
  • low cost

Built with love · Piotr Rogulski