
Chapter 10: Continuous Integration and Continuous Delivery #

author: Noah Gift

There is no shortage of options for building source code. The three main offerings for build servers are:

  1. SaaS build providers like CircleCI.
  2. Cloud build providers like AWS CodeBuild.
  3. Open-source build servers like Jenkins.

This section tackles both the concepts around CI/CD and shows many practical examples. Let’s get started.

What is Continuous Integration and Continuous Delivery and Why Do They Matter? #

Automation is the central component of both testing and deployment. Continuous Integration is the process of building and testing software. Continuous Delivery is taking this a step further and also deploying the software to a new environment.

Let’s take a look at this diagram and walk through a few of the concepts. Notice that a local developer has a few key components: Makefile, linting, formatting, reporting, and a Dockerfile. A developer will use all of these components to consistently improve the quality of their code locally. They will also make sure their code passes tests locally before pushing to source control.

[Diagram: continuous integration and delivery]

Using a Makefile or a similar tool is very important because it serves as a recipe book. You can run the same commands on the build server as you do locally, say make lint or make test. This simplifies the complexity of setting up a build server. There is nothing worse than an extremely complex local setup for testing and a wholly different, equally complex remote setup for testing.
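As a sketch, a minimal Makefile for a Python project (assuming pylint and pytest are listed in requirements.txt) could expose exactly those recipes:

install:
    pip install --upgrade pip &&\
        pip install -r requirements.txt

lint:
    pylint --disable=R,C *.py

test:
    python -m pytest -vv test_*.py

The same make install, make lint, and make test commands then run unchanged on a developer laptop and on the build server.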

The build server in this diagram can be any build server. The build server is notified of changes to source control, and a “build job” is triggered. This is the “continuous” component of a continuous integration system. With every change that is pushed to source control, the build server has the tools necessary not just to enforce a level of quality but also to improve the quality of the code.

How could the build server improve quality? The build server could automatically format code to meet a standard, perhaps with a tool like Python black. The build server could also run behavioral analytics on the code, finding hot spots such as high code churn or unusual check-ins, perhaps using a tool like devml.
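For example, black can either verify formatting or rewrite the code in place; a build job could run either mode (a sketch, assuming black is installed):

# Fail the build if any file does not meet the black standard
black --check .

# Or rewrite the code in place to meet the standard
black .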

Now let’s move on to the Continuous Delivery aspect. The critical enabler of Continuous Delivery is the cloud. Before the cloud, it was very difficult or impossible to create a new environment to test a deployment. You had to literally buy servers, and then a human was involved in provisioning the hardware. Even virtual machines did little to make deployment fully automated. Later, tools like Puppet and Chef helped automate things, but often there were still humans involved and processes that were not fully automated. The latest generation of automation works well because the entire infrastructure is code.

This process is IaC (Infrastructure as Code). The infrastructure is checked in side by side with the source code, and it can provision a new environment and configure the application. This leads to a fully automatable deployment with humans “out of the loop.” Examples of IaC tools include Terraform, Pulumi, and AWS CloudFormation. All modern systems should build in Continuous Delivery as a requirement. There are few, if any, valid reasons not to automatically test and deploy a modern software application.
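As a sketch of what IaC looks like in practice, here is a minimal Pulumi program in Python (an illustration, assuming the pulumi and pulumi-aws packages are installed and AWS credentials are configured) that declares an S3 bucket as code:

# Minimal IaC sketch using Pulumi (assumes pulumi and pulumi_aws are installed)
import pulumi
import pulumi_aws as aws

# The bucket is declared as code, so the delivery pipeline can
# provision a brand new environment automatically.
bucket = aws.s3.Bucket("app-artifacts")

# Export the bucket name so later pipeline stages can reference it.
pulumi.export("bucket_name", bucket.id)

Because this program lives in the same repository as the application, every change to the infrastructure goes through the same build and test pipeline as the application code.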

Jenkins #

Jenkins is a long-running open-source build server that has over 1,500 plugins and works on Linux, Windows, and OS X. There are many exciting ways to use Jenkins for testing. One of the easiest ways to use Jenkins is to first run it via a war file. The latest installation instructions are here. From a high level, though, the general idea is to download this war file and run it with Java.

Here is one example of how to do this on an OS X machine (although this should work just as well on a Linux environment like AWS Cloud9):

  1. cd into /tmp:
$ cd /tmp
  2. Use wget (or curl) to download the jenkins.war file:
$ wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
  3. Next, run it in foreground mode (this means watching the output and keeping the shell open) using Java:
$ java -jar jenkins.war

You will see some output like this:

Running from: /private/tmp/jenkins.war
webroot: $user.home/.jenkins
2020-02-16 21:24:41.139+0000 [id=1]    INFO    org.eclipse.jetty.util.log.Log#initialized:
  4. Go to http://localhost:8080 and unlock Jenkins by copying the password from the location shown in the output and pasting it into the web page.

[Screenshot: unlocking Jenkins with the initial admin password]
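If you prefer to grab the initial admin password from the shell, it lives in a file under the Jenkins home directory (a sketch; the path assumes the default $user.home/.jenkins webroot shown in the output above):

$ cat ~/.jenkins/secrets/initialAdminPassword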

Next, you can put in throwaway user credentials like admin or test to initially get things working.

  5. Set up a project and then make a build step.

[Screenshot: setting up a Jenkins project build step]

I typically use this style of workflow first to test out what I want to do. From here, I may decide to create a Jenkins slave or two on a different operating system, say the OS X build machine I use to compile iOS applications. This same setup style also works well when configuring Jenkins in the cloud.

CircleCI #

Set up CircleCI inside AWS Cloud9 #

The steps to set up CircleCI are outlined here, but at a high level, they are:

  1. Authenticate to CircleCI with GitHub account credentials.
  2. “Add Project.”
  3. Copy and edit .circleci/config.yml in the development environment (a minimal sketch follows this list).
  4. Build from CircleCI manually to test (afterward, it should be automatic).
  5. Integrate a build badge in the GitHub README.md.
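Here is a minimal sketch of what .circleci/config.yml can look like (an illustration, assuming a repository with a requirements.txt and a Makefile exposing install, lint, and test targets):

# Minimal .circleci/config.yml sketch (assumes a Makefile with
# install, lint, and test targets)
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/python:3.8
    steps:
      - checkout
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            make install
      - run:
          name: lint and test
          command: |
            . venv/bin/activate
            make lint
            make test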

One tricky item is that you will need to “toggle” editing hidden files if you are using a cloud editor like AWS Cloud9 or Google Cloud Shell. With AWS Cloud9, you right-click on the left pane as shown here to toggle hidden files:

[Screenshot: toggling hidden files in AWS Cloud9]

Afterward, you should be able to edit a hidden directory like .circleci.

[Screenshot: the hidden .circleci directory]

Extending a Makefile for use with Docker Containers and CircleCI #

Beyond the simple Makefile, it is also useful to extend it to do other things. An example of this is as follows:

{caption: “Example Makefile for Docker and CircleCI”}

setup:
    python3 -m venv ~/.container-revolution-devops

install:
    pip install --upgrade pip &&\
        pip install -r requirements.txt

test:
    #python -m pytest -vv --cov=myrepolib tests/*.py
    #python -m pytest --nbval notebook.ipynb

validate-circleci:
    # See https://circleci.com/docs/2.0/local-cli/#processing-a-config
    circleci config process .circleci/config.yml

run-circleci-local:
    # See https://circleci.com/docs/2.0/local-cli/#running-a-job
    circleci local execute

lint:
    hadolint demos/flask-sklearn/Dockerfile
    pylint --disable=R,C,W1203,W1202 demos/**/**.py

all: install lint test

A Dockerfile linter called hadolint checks for bugs in a Dockerfile. A local version of the CircleCI build system allows for testing in the same environment as the SaaS offering. The minimalism is still present: make install, make lint, and make test, but the lint step is more complete and authoritative with the inclusion of Dockerfile linting as well as Python linting.

Notes about installing hadolint and circleci: if you are on OS X, you can brew install hadolint; if you are on another platform, follow the instructions from the hadolint project. To install the local version of circleci on OS X or Linux, you can run curl -fLSs https://circle.ci/cli | bash or follow the official instructions for the local version of the CircleCI build system.

GCP Cloud Build #

These are the steps to set up Cloud Build Continuous Delivery on Google App Engine:

  1. Create a GitHub repo.
  2. Create a project in the GCP UI (your project name will be different) and set up the API as well.

[Screenshot: GCP project UI]

  3. Activate Cloud Shell and add SSH keys if they are not already added to GitHub: i.e., run ssh-keygen -t rsa, then upload the key to the GitHub SSH settings.

  4. Create an initial project scaffold. You will need the following files, which you can create with the following commands. Note you can copy app.yaml, main.py, main_test.py, and requirements.txt from this Google repo.

  • Makefile: touch Makefile

This step allows an easy-to-remember convention.

  • requirements.txt: touch requirements.txt

These are the packages we use.

  • app.yaml: touch app.yaml

This step is part of the IaC (Infrastructure as Code) and configures the PaaS environment for Google App Engine (a minimal sketch of this file follows the list).

  • main.py: touch main.py

This step is the logic of the Flask application.
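As a point of reference, a minimal app.yaml for the App Engine Python 3.7 standard environment can be a single line (a sketch; match the runtime to the Google sample you copy from):

# Minimal App Engine standard environment config (sketch)
runtime: python37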

  5. Run describe to verify the project is working:

gcloud projects describe $GOOGLE_CLOUD_PROJECT

The output of the command:

createTime: '2019-05-29T21:21:10.187Z'
lifecycleState: ACTIVE
name: helloml
projectId: helloml-xxxxx
projectNumber: '881692383648'
  6. You may want to verify you have the correct project and, if not, do this to switch:
gcloud config set project $GOOGLE_CLOUD_PROJECT
  7. Create an App Engine app:
gcloud app create

This step will ask for the region. Go ahead and pick us-central [12]

Creating App Engine application in project [helloml-xxx] and region [us-central]....done.
Success! The app is created. Please use `gcloud app deploy` to deploy your first app.
  8. Create and source the virtual environment:
virtualenv --python $(which python) venv
source venv/bin/activate

Double-check that it works:

which python
/home/noah_gift/python-docs-samples/appengine/standard_python37/hello_world/venv/bin/python
  9. Activate the Cloud Shell code editor.

[Screenshot: Cloud Shell code editor]

  10. Install packages:
make install

This step should install Flask and the other packages listed in requirements.txt:

Flask==1.x.x
  11. Run Flask locally.

This step runs Flask locally in the GCP shell:

python main.py
  12. Preview the running application.

[Screenshot: web preview in Cloud Shell]

  13. Update main.py:

from flask import Flask
from flask import jsonify

app = Flask(__name__)

@app.route('/')
def hello():
    """Return a friendly HTTP greeting."""
    return 'Hello I like to make AI Apps'

@app.route('/name/<value>')
def name(value):
    val = {"value": value}
    return jsonify(val)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
  14. Test out passing in parameters to exercise this function:
@app.route('/name/<value>')
def name(value):
    val = {"value": value}
    return jsonify(val)

For example, calling this route will take the word lion and pass it into the name function in Flask:

https://8080-dot-3104625-dot-devshell.appspot.com/name/lion

returns the value in the web browser:

{
value: "lion"
}
  15. Now deploy the app:
gcloud app deploy

Warning: the first deploy could take about 10 minutes. You may also need to enable the Cloud Build API.


Do you want to continue (Y/n)?  y
Beginning deployment of service [default]...
╔════════════════════════════════════════════════════════════╗
╠═ Uploading 934 files to Google Cloud Storage              ═╣

  16. Now stream the log files:
gcloud app logs tail -s default
  17. The production app is deployed and should look like this:

Setting traffic split for service [default]...done.
Deployed service [default] to [https://helloml-xxx.appspot.com]
You can stream logs from the command line by running:
  $ gcloud app logs tail -s default

  $ gcloud app browse
(venv) noah_gift@cloudshell:~/hello_world (helloml-242121)$ gcloud app
 logs tail -s default
Waiting for new log entries...
2019-05-29 22:45:02 [2019-05-29 22:45:02][INFO] Starting gunicorn 19.9.0
2019-05-29 22:45:02 [2019-05-29 22:45:02][INFO] Listening at: 0.0.0.0:8081
2019-05-29 22:45:02 [2019-05-29 22:45:02][INFO] Using worker: threads
2019-05-29 22:45:02 [2019-05-29 22:45:02] [INFO] Booting worker with pid: 25
2019-05-29 22:45:02 [2019-05-29 22:45:02] [INFO] Booting worker with pid: 27
2019-05-29 22:45:04 "GET /favicon.ico HTTP/1.1" 404
2019-05-29 22:46:25 "GET /name/usf HTTP/1.1" 200
  18. Add a new route and test it out:
@app.route('/html')
def html():
    """Returns some custom HTML"""
    return """
    <title>This is a Hello World Page</title>
    <p>Hello</p>
    <p><b>World</b></p>
    """
  19. Install pandas and return JSON results.

At this point, you may want to consider creating a Makefile and doing this:

touch Makefile
#this goes inside that file
install:
    pip install -r requirements.txt

You also may want to set up lint:

pylint --disable=R,C main.py
------------------------------------
Your code has been rated at 10.00/10

The route looks like this (add the pandas import at the top of main.py):

import pandas as pd

@app.route('/pandas')
def pandas_sugar():
    url = ("https://raw.githubusercontent.com/noahgift/"
           "sugar/master/data/education_sugar_cdc_2003.csv")
    df = pd.read_csv(url)
    return jsonify(df.to_dict())

When you call the route https://<yourapp>.appspot.com/pandas, you should get something like this:

[Screenshot: example JSON output from the pandas route]

Cloud Build Continuous Deploy #

Now, to set up Cloud Build Continuous Deploy, you can follow the guide here.

  • Create a cloudbuild.yaml file (a minimal sketch follows this list).
  • Add it to the repo and push: git add cloudbuild.yaml, git commit -m "add cloudbuild config", git push origin master.
  • Create a build trigger.
  • Push a simple change.
  • View progress on the build triggers page.
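A minimal cloudbuild.yaml that deploys to App Engine looks like the following sketch (it mirrors the example in the Google guide; the timeout is optional and accounts for slow first deploys):

steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"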

Continuous Delivery for a Hugo Static Site from Zero Using AWS CodeBuild #

Hugo is a popular static site generator. This tutorial will guide you through using AWS Cloud9 to create a Hugo website and develop against it using the cloud development environment. The final step will be to set up a continuous delivery pipeline using AWS CodeBuild.

Note these steps will be similar for other cloud environments or your OS X laptop, but this particular tutorial targets AWS Cloud9.

The steps described below are covered in detail in this screencast, HUGO CONTINUOUS DELIVERY WITH AWS:

[Video: AWS Hugo Continuous Delivery]

  • Step 1: Launch an AWS Cloud9 Environment

Use the AWS Free Tier and a Cloud9 Environment with the defaults.

  • Step 2: Download the hugo binary and put it in your Cloud9 path

Go to the latest releases of hugo at https://github.com/gohugoio/hugo/releases. Download the latest release using the wget command. It should look something like this:

wget https://github.com/gohugoio/hugo/releases/download/v0.63.0/hugo_0.63.0_Linux-32bit.tar.gz

Note that you shouldn’t just blindly cut and paste the code above! Make sure you get the latest release, or if not on Cloud9, use the appropriate version.

Now put this file in your ~/bin directory using these commands (again, make sure you use your version of Hugo here, i.e., hugo_0.99.x_Linux-32bit.tar.gz):

tar xzvf hugo_0.63.0_Linux-32bit.tar.gz
mkdir -p ~/bin
mv hugo ~/bin           #assuming you downloaded and extracted in ~/environment
which hugo              #this shows the `path` to hugo

The output of which hugo should be something like:

ec2-user:~/environment $ which hugo
~/bin/hugo

Finally, check that the version flag works as a basic sanity check. Here is what it looks like on my Cloud9 machine (your version number will likely be different):

ec2-user:~/environment $ hugo version
Hugo Static Site Generator v0.62.2-83E50184 linux/386 BuildDate: 2020-01-05T18:51:38Z

These steps should get you access to hugo, and you can run it like any other tool. If you get stuck, refer to the screencast later on and look at the quickstart guide.

  • Step 3: Make a hugo website locally and test it in Cloud9

One great thing about hugo is that it is just a Go binary. This makes it simple to both develop and deploy hugo sites. The following section is loosely based on the official hugo quickstart guide.

  1. Create a new site using the following command: hugo new site quickstart
  2. Add a theme (you could swap this part with any theme you want).
cd quickstart
git init
git submodule add https://github.com/budparr/gohugo-theme-ananke.git themes/ananke
echo 'theme = "ananke"' >> config.toml
  • Step 4: Create a post

To create a new blog post, type the following command.

hugo new posts/my-first-post.md

This post is easily editable inside of AWS Cloud9 as shown.

[Screenshot: editing a Hugo post in AWS Cloud9]

  • Step 5: Run Hugo locally in Cloud9

Up to this point, things have been relatively straightforward. In this section, we are going to run hugo as a development server. This step will require us to open up a port in the EC2 security groups, which is relatively easy to find.

  1. Open a new tab on the AWS Console, type in EC2, scroll down to security groups, and look for the security group with the same name as your AWS Cloud9 environment, as shown:

[Screenshot: security group matching the AWS Cloud9 environment]

  2. Click the edit button and open up port 8080 via a new TCP rule. You will see this change made. This step will allow us to browse to port 8080 to preview our website as we develop it locally on AWS Cloud9.

  3. Navigate back to AWS Cloud9 and run this command to find out the IP address (we will use this IP address when we run hugo).

curl ipinfo.io

You should see something like this (but with a different IP Address)

ec2-user:~/environment $ curl ipinfo.io
{
  "ip": "34.200.232.37",
  "hostname": "ec2-34-200-232-37.compute-1.amazonaws.com",
  "city": "Virginia Beach",
  "region": "Virginia",
  "country": "US",
  "loc": "36.8512,-76.1692",
  "org": "AS14618 Amazon.com, Inc.",
  "postal": "23465",
  "timezone": "America/New_York",
  "readme": "https://ipinfo.io/missingauth"
  4. Run hugo with the following options; you will need to swap in the IP address you generated earlier. Notice that the baseURL is important so you can test navigation.
hugo serve --bind=0.0.0.0 --port=8080 --baseURL=http://34.200.232.37/

If this was successful, you should get something similar to the following output.

[Screenshot: hugo development server output]

  5. Open a new tab in your browser and paste in the URL from the output. In my output, it is http://34.200.232.37:8080/, but it will be different for you.

[Screenshot: the Hugo website running locally]

If you edit the markdown file, it will render out the changes live. This step allows for an interactive development workflow.

  • Step 6: Create a static-hosted Amazon S3 website and deploy to the bucket

The next thing to do is to deploy this website directory to an AWS S3 bucket. You can follow the instructions here on how to create an S3 bucket and set it up for hosting.
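If you prefer the command line to the console, a sketch of the equivalent AWS CLI commands looks like this (the bucket name is an example; yours must be globally unique):

# Create the bucket (example name; change it to your own)
aws s3 mb s3://cloud9-hugo-duke

# Turn on static website hosting for the bucket
aws s3 website s3://cloud9-hugo-duke/ --index-document index.html --error-document index.html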

Note this also means setting a bucket policy via the bucket policy editor, as shown below. The name of your bucket WILL NOT BE cloud9-hugo-duke; you must change this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::cloud9-hugo-duke/*"
            ]
        }
    ]
}

The bucket policy editor workflow looks as follows.

[Screenshot: the bucket policy editor]

  • Step 7: Deploy the website manually before it becomes fully automated

With automation, it is essential to first manually write down the steps of a workflow before fully automating it. The subsequent items cover what is needed:

  1. The config.toml will need editing, as shown below. Note that your S3 bucket URL will be different.
baseURL = "http://cloud9-hugo-duke.s3-website-us-east-1.amazonaws.com"
languageCode = "en-us"
title = "My New Hugo Sit via AWS Cloud9"
theme = "ananke"

[[deployment.targets]]
# An arbitrary name for this target.
name = "awsbucket"
URL = "s3://cloud9-hugo-duke/?region=us-east-1" #your bucket here
  2. Now you can deploy by using the built-in hugo deploy command. The deployment command output should look like this after you run hugo deploy. You can read more about the deploy command in the official docs.
ec2-user:~/environment/quickstart (master) $ hugo deploy
Deploying to target "awsbucket" (s3://cloud9-hugo-duke/?region=us-east-1)
Identified 15 file(s) to upload, totaling 393 kB, and 0 file(s) to delete.
Success!

The contents of the AWS S3 bucket should look similar to this.

[Screenshot: AWS S3 bucket contents]

The website demonstrated in this tutorial is visible here: http://cloud9-hugo-duke.s3-website-us-east-1.amazonaws.com/

  • Step 8: Check into GitHub
  1. Create a new GitHub repo (and add a .gitignore).

[Screenshot: creating a new GitHub repo]

(Optional but recommended: add public to .gitignore.)
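The relevant .gitignore entry is just the generated output directory:

# ignore the HTML that hugo generates
public/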

  2. In AWS Cloud9, in the quickstart directory, create a Makefile with a clean command. This will rm -rf the public HTML directory that hugo creates. You don’t want to check this into source control.

[Screenshot: creating the Makefile]

clean:
    echo "deleting generated HTML"
    rm -rf public
  3. Now run make clean to delete the public directory and all of the HTML hugo generated (don’t worry, it regenerates the HTML anytime you run hugo).

  4. Add the GitHub repo as a “remote.” This step uses the name of the GitHub repository you just created. It will look something like this, with the name of your site swapped in:

git remote add origin git@github.com:<github_username>/my_hugo_site.git

My git remote add command looks like this (note I run git remote -v to verify afterwards):

ec2-user:~/environment/quickstart (master) $ git remote add origin git@github.com:noahgift/hugo-continuous-delivery-demo.git
ec2-user:~/environment/quickstart (master) $ git remote -v
origin  git@github.com:noahgift/hugo-continuous-delivery-demo.git (fetch)
origin  git@github.com:noahgift/hugo-continuous-delivery-demo.git (push)
  5. Add the source code and push it to GitHub.

Typically, I get the “lay of the land” before I commit. I do this by running git status. Here is my output. You can see that I need to add Makefile, archetypes/, config.toml, and content/.

ec2-user:~/environment/quickstart (master) $ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

        new file:   .gitmodules
        new file:   themes/ananke

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        Makefile
        archetypes/
        config.toml
        content/

I add them by typing the command git add *. You can see below that this will add all of those files and directories:

ec2-user:~/environment/quickstart (master) $ git add *
ec2-user:~/environment/quickstart (master) $ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

        new file:   .gitmodules
        new file:   Makefile
        new file:   archetypes/default.md
        new file:   config.toml
        new file:   content/posts/my-first-post.md
        new file:   themes/ananke

Now push these files by doing the following commands (note you will need to merge the files):

git pull --allow-unrelated-histories origin master
git branch --set-upstream-to=origin/master
git push

You can see what this looks like below:

[Screenshot: git push output]

The Github repo looks like this now:

[Screenshot: the GitHub repository]

NOTE: Using git can be very challenging in edge cases. If this workflow doesn’t work, you can also start over from scratch, clone your GitHub repo, and manually add hugo into it.

(Optional step: If you want to verify your hugo site, check out this project on your laptop or another AWS Cloud9 instance and run hugo.)

  • Step 9: Continuous Delivery with AWS CodeBuild

Now it is time for the final part. Let’s set up continuous delivery using AWS CodeBuild. This step will allow changes that get pushed to GitHub to deploy automatically.

  1. Go to AWS CodeBuild and create a new project. It should look like this:

[Screenshot: creating an AWS CodeBuild project]

Note: create the build in the same region where you created your bucket, i.e., N. Virginia!

  2. The source code section should look similar to this screenshot. Note the webhook; this step enables continuous delivery on changes.

[Screenshot: CodeBuild source configuration with webhook]

  3. The CodeBuild environment should look similar to this. Click the “create build” button:

[Screenshot: CodeBuild environment configuration]

  4. After you create the build, navigate to the “Build details” section and select the service role. This step is where the privileges to deploy to S3 will be set up:

[Screenshot: CodeBuild service role]

You will add an “admin” policy that looks like this:

[Screenshot: admin policy attached to the service role]

Now, in AWS Cloud9, go back and create the final step.

The following is a buildspec.yml file. You can paste it in; create the file with AWS Cloud9 by typing touch buildspec.yml and then editing it.

NOTE: Something like the following, aws s3 sync public/ s3://hugo-duke-jan23/ --region us-east-1 --delete, is an effective and explicit way to deploy if hugo deploy is not working correctly.

version: 0.2

env:
  variables:
    HUGO_VERSION: "0.63.0"

phases:
  install:
    runtime-versions:
      docker: 18
    commands:
      - cd /tmp
      - wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
      - tar -xzf hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
      - mv hugo /usr/bin/hugo
      - cd -
      - rm -rf /tmp/*
  build:
    commands:
      - rm -rf public
      - hugo
      - aws s3 sync public/ s3://hugo-duke-jan23/ --region us-east-1 --delete
  post_build:
    commands:
      - echo Build completed on `date`

Now check this file into git and push:

git add buildspec.yml
git commit -m "adding final build step"
git push

It should look like this:

[Screenshot: pushing buildspec.yml]

Now every time you make changes, it will “auto-deploy” as shown:

[Screenshot: automatic build triggered by a push]

As you create new posts, etc., it will deploy:

[Screenshot: automatic deployment of new posts]

Hugo AWS Continuous Delivery Conclusion #

Continuous Delivery is a powerful technique to master, and in this situation, it could immediately be put to use to build a portfolio website for a Data Scientist.

If you are having issues with the git workflow, you can simply create the repo first, then git clone it on Cloud9 to avoid the advanced git workflow.

  • Post Setup (Optional Advanced Configurations & Notes)

Setting up SSL for CloudFront #

Go to AWS Certificate Manager and click the Request a certificate button. First, we need to add domain names, in our case (example.com). When you enter the domain name as *.example.com, click the Add another name to this certificate button and add the bare domain example.com too. On the next step, select the DNS validation option and click the Confirm and request button in Review. To use DNS validation, you must be able to add a CNAME record to the DNS configuration for your domain. Add the CNAME record created in ACM to the DNS configuration for your domain on Route 53.

CloudFront configurations #

Create a web distribution in the CloudFront section. In the Origin Domain Name field, select the endpoint of your bucket. Select “Redirect HTTP to HTTPS” from the Viewer Protocol Policy. Add your domain names in the Alternate Domain Names field and select the SSL certificate you created in ACM. In the Default Root Object field, type index.html. Once done, proceed and create the distribution.

Integrating Route 53 with the CloudFront distribution #

Copy the domain name from the CloudFront distribution and edit the A record in your Route 53. Select Alias, and in Alias Target, enter your CloudFront domain URL, which is ******.cloudfront.net. Click Save Record Set. Now that you have created the A record, the domain name example.com will route to your CloudFront distribution. We need to create a CNAME record to point other sub-domains like www.example.com to the created A record. Click Create Record Set and enter * in the name textbox. Select CNAME from Type. In value, type the A record; in our case, it will be example.com. Click Save Record Set. Now even www.example.com will forward to example.com, which in turn will forward to the CloudFront distribution.

Building Hugo Sites Automatically Using AWS CodeBuild #

The first thing that we need is a set of instructions for building the Hugo site. Since the build server starts clean every time, this includes downloading Hugo and all the dependencies that we require. One of the options that CodeBuild has for specifying the build instructions is the buildspec.yml file.

Navigate to the CodeBuild console and create a new project using settings similar to these, or ones that meet your project’s demands:

  • Project name: somename-hugo-build-deploy
  • Source provider: GitHub
  • Repository: Use a repository in my account
  • Choose a repository: Choose your GitHub repository
  • Click the Webhook checkbox to rebuild the project every time a code change is pushed to this repository
  • Environment image: Use an image managed by AWS CodeBuild
  • Operating System: Ubuntu
  • Runtime: Base
  • Runtime version: Choose a runtime environment version
  • Buildspec name: buildspec.yml
  • Artifact type: No artifact
  • Cache: No cache
  • Service role: Create a service role in your account

Creating IAM Role #

To build the project, deploy to S3, and enable CloudFront invalidation, we need to create a dedicated IAM role. Add an IAM role and attach the CloudFrontFullAccess and AmazonS3FullAccess policies. After that, click the Add permissions button again, select “Attach existing policies directly,” and click the Create policy button. Select “JSON” and paste the following user policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::s3-<bucket-name>",
                "arn:aws:s3:::s3-<bucket-name>/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::s3-<bucket-name>",
                "arn:aws:s3:::s3-<bucket-name>/*"
            ]
        }
    ]
}

Github Actions #

One new and fun entrant into the SaaS build server world is GitHub Actions. Getting started with GitHub Actions and testing in the style we have used previously (make install, then make lint, then make test) is reasonably intuitive.

You can follow along with this entire example project by looking at the sample Github project here: https://github.com/noahgift/github-actions-pytest

First, I create a new Github Action.

[Screenshot: creating a new GitHub Action]

There are many examples of how to set up a GitHub Action; this one is a YAML config located here: https://github.com/noahgift/github-actions-pytest/blob/master/.github/workflows/pythonapp.yml

The default yaml file looks like this.

[Screenshot: the default GitHub Actions YAML file]

Let’s change this to a much simpler format that takes advantage of the Makefile.

name: Python application test with Github Actions

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.8
      uses: actions/setup-python@v1
      with:
        python-version: 3.8
    - name: Install dependencies
      run: |
        make install
    - name: Lint with pylint
      run: |
        make lint
    - name: Test with pytest
      run: |
        make test

Notice how using the Makefile simplifies the workflow. Additionally, I can run these same commands locally. Finally, the build process succeeds, and the output of each command displays.

[Screenshot: successful GitHub Actions build output]

There is a lot to like about GitHub Actions, especially if you already use GitHub.