Continuous Integration and Deployment using Google Cloudbuild

October 19, 2020

GCP
Cloudbuild
Python
AppEngine
CI&CD

Cloudbuild is an amazing alternative for people who need a fast and easy Continuous Integration and Continuous Deployment tool on Google. It is very helpful if the infrastructure is hosted on Google Cloud Platform.

This quick guide will show how to run tests and deploy an application that uses:

  • Google App Engine (python37 server)
  • Datastore (NoSQL db)
  • Google Secret Manager (store and encrypt/decrypt secrets)
  • redis (cache)
  • pytest (tests)

Console

Setup is very easy: all that needs to be done is to jump into the documentation and start playing with it.

First, create your GCP project and enable billing.

Go to the Cloudbuild console and create a trigger.

Each trigger can be fired by one of these events:

  • Push to a branch
  • Push new tag
  • Pull Request (currently GitHub only)

Authorize with GitHub and select the repository, then provide a regular expression matching branches or tags for the chosen event.

cloudbuild trigger branch

There are two options for the build configuration: a yaml file or a custom Dockerfile.

build config options

cloudbuild.yaml setup

Everything is based on Docker. You can either create a custom image or use an existing one.

Set up services

First, if the project uses a database, cache, datastore, or any other service, I recommend using docker-compose to set everything up:

version: '3'

services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    expose:
      - "6379"

  datastore:
    image: gcr.io/cloud-builders/gcloud
    command: gcloud beta emulators datastore start --project test --host-port "0.0.0.0:8002" --no-store-on-disk --consistency=1
    ports:
      - "8002:8002"
    expose:
      - "8002"

networks:
  default:
    external:
      name: cloudbuild

The important part here is to use the cloudbuild network name so that the defined services are reachable in every step of the build. Remember to use the service names as hostnames in your code when connecting to these services in test mode.
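For example, test code can reach the compose services through their service names. Here is a minimal sketch, assuming the redis and google-cloud-datastore client libraries are installed:

import os

import redis
from google.cloud import datastore

# Inside the cloudbuild network the compose service names act as hostnames.
cache = redis.Redis(host="redis", port=6379)

# The Datastore client picks up the emulator address from this environment variable.
os.environ["DATASTORE_EMULATOR_HOST"] = "datastore:8002"
db = datastore.Client(project="test")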

Define cloudbuild.yaml with all the steps:

steps:

# 1
- name: "docker/compose:1.15.0"
  id: "docker-compose"
  args: ["up", "-d"]

# 2
- name: python:3.7
  id: requirements
  entrypoint: bash
  args: ["scripts/requirements.sh"]
  waitFor: ['docker-compose']

# 3
- name: python:3.7
  id: tests
  entrypoint: bash
  env:
    - "PYTHONPATH=/workspace:/workspace/lib"
  args: ["scripts/run_tests.sh"]
  waitFor: ['docker-compose', 'requirements']

# 4
- name: gcr.io/cloud-builders/gcloud
  id: secrets
  entrypoint: bash
  args: ['scripts/secrets.sh']
  waitFor: ['tests']

# 5
- name: gcr.io/cloud-builders/gcloud
  id: deployment-service1
  entrypoint: bash
  env:
  - "YAML_CONFIG=app_service1.yaml"
  args: ['scripts/deploy.sh']
  waitFor: ['tests', 'secrets']

# 6
- name: gcr.io/cloud-builders/gcloud
  id: deployment-service2
  entrypoint: bash
  env:
  - "YAML_CONFIG=app_service2.yaml"
  args: ['scripts/deploy.sh']
  waitFor: ['tests', 'secrets']

Steps

#1 Run docker-compose to set up the database and cache services

#2 Install Python dependencies

The packages are installed into /workspace/lib because the /workspace directory is preserved between build steps; this is also why the test step sets PYTHONPATH=/workspace:/workspace/lib.

pip install -r requirements.txt -t /workspace/lib
pip install -r requirements-test.txt -t /workspace/lib

#3 Run the tests

python -m pytest -c cloudbuild_pytest.ini -vvv

#4 Fetch all the secrets from Secret Manager

# list the secret names, skipping the table header row
secrets=$(gcloud secrets list --format='table(NAME)' | tail -n +2)

for secret in $secrets; do
  value=$(gcloud secrets versions access latest --secret=$secret)
  # do anything you want with the secrets
done
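If you prefer to stay in Python, the same step could use the Secret Manager client library instead of the gcloud CLI. A minimal sketch, assuming google-cloud-secret-manager is installed and the project id is passed to the step as a PROJECT_ID environment variable:

import os

from google.cloud import secretmanager

project_id = os.environ["PROJECT_ID"]  # assumption: exposed to the build step
client = secretmanager.SecretManagerServiceClient()

# Iterate over every secret in the project and read its latest version.
for secret in client.list_secrets(request={"parent": f"projects/{project_id}"}):
    name = f"{secret.name}/versions/latest"
    payload = client.access_secret_version(request={"name": name}).payload.data
    value = payload.decode("utf-8")
    # do anything you want with the secrets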

#5/#6 Deploy the two App Engine services in parallel

pip install setuptools
pip install pyyaml python-dotenv

# render app yaml and load secrets

gcloud components install beta

gcloud app deploy --quiet queue.yaml
gcloud app deploy --quiet cron.yaml
gcloud app deploy --quiet index.yaml

gcloud beta app deploy --quiet $YAML_CONFIG --promote
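
The "render app yaml and load secrets" line above is project specific. One way to do it, and the reason pyyaml and python-dotenv are installed, is a small helper script. Here is a minimal sketch, assuming the secrets step wrote the values into a .env file and that a hypothetical render_app_yaml.py is called from scripts/deploy.sh:

# render_app_yaml.py -- hypothetical helper called from scripts/deploy.sh.
# Merges values from .env (assumed to be written by the secrets step) into
# the env_variables section of the service yaml before `gcloud app deploy`.
import os

import yaml
from dotenv import dotenv_values

config_path = os.environ["YAML_CONFIG"]   # e.g. app_service1.yaml
secrets = dotenv_values(".env")           # assumption: produced by scripts/secrets.sh

with open(config_path) as fh:
    app_config = yaml.safe_load(fh)

app_config.setdefault("env_variables", {}).update(secrets)

with open(config_path, "w") as fh:
    yaml.safe_dump(app_config, fh)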

Permissions

Last but not least: permissions. There is no need to create any special service accounts or credentials; the important part happens in the IAM console.

Go to the console, find the member with the email [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com, and edit its permissions.

Common permissions for the App Engine case:

  • App Engine Admin (App Engine deployment)
  • Cloud Scheduler Admin (cron job deployment)
  • Cloud Tasks Admin (task queue deployment)
  • Compute Instance Admin (beta) (beta deployment, used for redis in GCP)
  • Cloud Datastore Index Admin (Datastore index deployment)
  • Secret Manager Secret Accessor (access secret values)
  • Secret Manager Viewer (display secrets)
  • Serverless VPC Access User (network access for the redis connection inside GCP)

Build history

build history

GitHub build status

github status

Summary

I'm very happy with this solution, mainly because of:

  • high security (everything stays inside Google's network)
  • access to all Google services (useful if you keep your architecture inside GCP)
  • ease and speed of use
  • Docker support
  • readable results (in the Google console and on GitHub)
  • low cost
