Cloud Build is a great option for anyone who needs a fast and easy Continuous Integration and Continuous Deployment tool on Google Cloud. It is especially helpful if the infrastructure is already hosted on Google Cloud Platform.
This quick guide will show how to run tests and deploy an application that uses Redis, Cloud Datastore, and App Engine.
Setup is very easy: all that needs to be done is to jump into the documentation and start playing with it.
First, create your GCP project and enable billing.
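If you prefer the command line, the same can be done with gcloud. A quick sketch, with the project ID and billing account ID as placeholders:

# create the project and link a billing account
gcloud projects create my-cloudbuild-demo
gcloud beta billing projects link my-cloudbuild-demo \
    --billing-account=0X0X0X-0X0X0X-0X0X0X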
Go to the Cloud Build console and create a trigger.
Each trigger can be fired on one of several events, such as a push to a branch, a push to a new tag, or a pull request.
Authorize with GitHub, select the repository, and define a regular expression matching the chosen event (for example a branch or tag name).
There are two options for the build configuration: a YAML file or a custom Dockerfile.
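The trigger can also be created from the command line. A sketch, assuming the GitHub repository is already connected to Cloud Build and the YAML option is used:

gcloud builds triggers create github \
    --name="deploy-on-master" \
    --repo-owner="my-org" \
    --repo-name="my-repo" \
    --branch-pattern="^master$" \
    --build-config="cloudbuild.yaml"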
Everything is based on Docker. You can either create a custom image or use an existing one.
Set up services
First, if the project uses a database, cache, datastore, or any other service, I recommend using docker-compose to set everything up:
version: '3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    expose:
      - "6379"
  datastore:
    image: gcr.io/cloud-builders/gcloud
    command: gcloud beta emulators datastore start --project test --host-port "0.0.0.0:8002" --no-store-on-disk --consistency=1
    ports:
      - "8002:8002"
    expose:
      - "8002"
networks:
  default:
    external:
      name: cloudbuild
The important part here is the cloudbuild network name: it makes the defined services reachable from every step of the build. Remember to use the service name instead of localhost when connecting to the services in test mode:

# before (local development)
redis.Redis(host='localhost', port=6379)
# inside Cloud Build
redis.Redis(host='redis', port=6379)
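A simple way to keep one codebase working both locally and inside Cloud Build is to read the host from an environment variable. A minimal sketch, assuming a REDIS_HOST variable that you export yourself in the test step:

import os

import redis

# defaults to localhost for local development;
# the Cloud Build test step exports REDIS_HOST=redis
redis_client = redis.Redis(
    host=os.environ.get("REDIS_HOST", "localhost"),
    port=6379,
)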
Define cloudbuild.yaml with all steps
steps:
# 1
- name: "docker/compose:1.15.0"
  id: "docker-compose"
  args: ["up", "-d"]
# 2
- name: python:3.7
  id: requirements
  entrypoint: bash
  args: ["scripts/requirements.sh"]
  waitFor: ['docker-compose']
# 3
- name: python:3.7
  id: tests
  entrypoint: bash
  env:
    - "PYTHONPATH=/workspace:/workspace/lib"
  args: ["scripts/run_tests.sh"]
  waitFor: ['docker-compose', 'requirements']
# 4
- name: gcr.io/cloud-builders/gcloud
  id: secrets
  entrypoint: bash
  args: ['scripts/secrets.sh']
  waitFor: ['tests']
# 5
- name: gcr.io/cloud-builders/gcloud
  id: deployment-service1  # step ids must be unique
  entrypoint: bash
  env:
    - "YAML_CONFIG=app_service1.yaml"
  args: ['scripts/deploy.sh']
  waitFor: ['tests', 'secrets']
# 6
- name: gcr.io/cloud-builders/gcloud
  id: deployment-service2
  entrypoint: bash
  env:
    - "YAML_CONFIG=app_service2.yaml"
  args: ['scripts/deploy.sh']
  waitFor: ['tests', 'secrets']
Steps
#1 Run docker-compose to set up the database and cache services
#2 Install the Python dependencies
pip install -r requirements.txt -t /workspace/lib
pip install -r requirements-test.txt -t /workspace/lib
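The -t /workspace/lib flag matters: /workspace is the only directory that persists between build steps, so installing the libraries there makes them importable later via PYTHONPATH. scripts/requirements.sh can be as small as this sketch:

#!/usr/bin/env bash
set -euo pipefail

# install into /workspace/lib so the packages survive into later steps
pip install -r requirements.txt -t /workspace/lib
pip install -r requirements-test.txt -t /workspace/lib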
#3 Run the tests
python -m pytest -c cloudbuild_pytest.ini -vvv
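Inside scripts/run_tests.sh, the only extra work is pointing the clients at the emulated services before running pytest. A sketch: DATASTORE_EMULATOR_HOST is the standard variable the Datastore client library reads, while REDIS_HOST is my own placeholder from the earlier snippet:

#!/usr/bin/env bash
set -euo pipefail

# point the google-cloud-datastore client at the emulator service
export DATASTORE_EMULATOR_HOST="datastore:8002"
export DATASTORE_PROJECT_ID="test"
# custom variable read by the application code
export REDIS_HOST="redis"

python -m pytest -c cloudbuild_pytest.ini -vvv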
Note that all the steps have a waitFor parameter: a step will not start until every step listed in its array has finished.
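If a step should instead start immediately, Cloud Build accepts the special value '-':

- name: python:3.7
  id: lint  # hypothetical extra step
  entrypoint: bash
  args: ["scripts/lint.sh"]
  waitFor: ['-']  # starts right away, in parallel with everything before it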
#4 Pull all the secrets into the build
# list the secret names without the table header
secrets=$(gcloud secrets list --format='value(name)')
for secret in $secrets; do
  value=$(gcloud secrets versions access latest --secret="$secret")
  # do anything you want with the secret value
done
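For example, the loop can append every secret to a .env file in /workspace, where the deployment steps can read it later with python-dotenv. A sketch; the file name and layout are my own convention:

#!/usr/bin/env bash
set -euo pipefail

# write every secret into /workspace/.env for later steps
secrets=$(gcloud secrets list --format='value(name)')
for secret in $secrets; do
  value=$(gcloud secrets versions access latest --secret="$secret")
  echo "${secret}=${value}" >> /workspace/.env
done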
#5/#6 Deploy the two App Engine services in parallel
pip install setuptools
pip install pyyaml python-dotenv
# render the app yaml and load the secrets
gcloud components install beta
gcloud app deploy --quiet queue.yaml
gcloud app deploy --quiet cron.yaml
gcloud app deploy --quiet index.yaml
gcloud beta app deploy --quiet "$YAML_CONFIG" --promote
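The "render the app yaml" comment hides a small templating step that injects the secrets into the service configuration. A sketch of how it might look with pyyaml and python-dotenv; the .tpl template name and the env_variables layout are assumptions:

import os

import yaml
from dotenv import dotenv_values

# load the secrets written by scripts/secrets.sh (step #4)
secrets = dotenv_values("/workspace/.env")

# read the template and inject the secrets as App Engine env_variables
config_name = os.environ["YAML_CONFIG"]
with open(f"{config_name}.tpl") as f:
    app = yaml.safe_load(f)

app.setdefault("env_variables", {}).update(secrets)

with open(config_name, "w") as f:
    yaml.safe_dump(app, f)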
Last but not least, permissions. There is no need to create any special service accounts or credentials; the important part happens in the IAM console.
Go to the console, find the member with the email [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com, and edit its permissions.
Common permissions for the App Engine case are typically App Engine Admin, Service Account User, and Secret Manager Secret Accessor (the last one is needed by the secrets step).
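The same roles can also be granted from the command line. A sketch, with PROJECT_ID and PROJECT_NUMBER as placeholders:

# grant the Cloud Build service account the roles it needs
for role in roles/appengine.appAdmin \
            roles/iam.serviceAccountUser \
            roles/secretmanager.secretAccessor; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
      --role="$role"
done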
I’m very happy with this solution, mainly because it is fast, easy to set up, and fits naturally when the infrastructure is already hosted on Google Cloud Platform.