Taking Continuous Delivery to the Next Level: GitLab CI/CD + AWS

Introducing GitLab with AWS: how to build your Continuous Delivery from scratch.

Lorenzo Cadamuro
15 min read · Apr 29, 2019

In the last article, Increase Your Productivity With Continuous Delivery: GitLab + Netlify, I talked about the benefits of Continuous Delivery and how to implement it with third-party tools — if you’re approaching CD for the first time, I suggest giving it a read before digging in.

Today, I’m going to show you how to take full control of your projects’ delivery; I’ll guide you into the rough world of AWS and show you how to connect it to GitLab’s powerful built-in CI/CD tools.

Overview

We’ll start by creating an S3 bucket on AWS to store our projects; then we’ll configure GitLab to handle deploys to the bucket. Finally, we’ll put a CloudFront distribution in front of it, mapped to our domain name, to serve our sites over HTTPS. We’ll also set up a Lambda@Edge function that resolves a dedicated subdomain for each project.

Outcome

What we’re going to achieve is a system that, for every project on GitLab, automatically sets up a continuous delivery pipeline for you. Each branch will have its own URL in this format:

  • master → [PROJECT_NAME].stage.yourdomain.com
  • other branches → [PROJECT_NAME]--[BRANCH_NAME].stage.yourdomain.com

Let’s get it started!

Amazon S3

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

In terms of implementation, S3 is organized into buckets and objects. A bucket is the container you upload objects into; in other words, it will be the place where our projects are hosted.

Create the bucket

Log in to your AWS account and go to your AWS Management Console. In “Find Services” type “S3” and open the S3 Management Console, then click on “Create Bucket”. At this point you have to enter the bucket name; for our purposes, it must be the domain name that will host your projects.

Choose this syntax: stage.[DOMAIN_NAME], e.g. stage.yourdomain.com

Why “stage”? Well, since we need custom subdomains, we have to map a wildcard domain name to our CloudFront distribution; it’s better to confine this behavior under a dedicated subdomain and leave the first-level subdomains free.

Creating the S3 Bucket

Click “Next” till the end and then “Create bucket”.
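If you’d rather use the AWS CLI, the bucket can also be created with a one-liner like this (us-east-1 is just an example region; it’s the one used in the endpoint shown later):

aws s3 mb s3://stage.yourdomain.com --region us-east-1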

By default, buckets are private containers; in order to host your static websites, we have to enable the related property.

Configure the bucket for website hosting

Open your bucket, click on the “Properties” tab and expand the “Static website hosting” card; then select “Use this bucket to host a website”, enter “index.html” as the index document and save.
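The same property can also be enabled from the AWS CLI, assuming it’s installed and configured with your credentials:

aws s3 website s3://stage.yourdomain.com/ --index-document index.html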

Either way, you now have a public endpoint to access the bucket; it should look something like this: http://stage.yourdomain.com.s3-website-us-east-1.amazonaws.com.

If you open it, you’ll notice an AccessDenied error; this is because enabling web hosting isn’t enough — you must also make the bucket accessible by changing its policy.

Change the Bucket policies

Open your bucket and click on the “Permissions” tab. Under “Public Access Settings”, click on “Edit”, clear the two options that block new public bucket policies, then save.

With these changes we can now add a new Bucket Policy and therefore make the bucket readable by the world.

Still in “Permissions”, click on “Bucket Policy”, add the following (replacing stage.yourdomain.com with your bucket name) and save.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::stage.yourdomain.com/*"
    }
  ]
}
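If you prefer to apply it from the CLI, save the policy to a file (policy.json here is just an example name) and run:

aws s3api put-bucket-policy --bucket stage.yourdomain.com --policy file://policy.json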

Now your bucket has public access; if you refresh the url, since we don’t have content yet, you should see a normal “404 Not Found” message.

Now that we have our public bucket, we can configure GitLab to deploy projects through the AWS CLI. But before doing that, we have to create an AWS user with write privileges; this user will have full access to S3 and CloudFront.

IAM

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

Create a group

Navigate to the IAM Management Console, then open “Groups” and click on “Create new group”. Enter the name “Deploy” and go to the next step; now attach “AmazonS3FullAccess” and “CloudFrontFullAccess” policies, then click next and create the group.

Create the user

Go to “Users” and click on “Add user”. Give it a descriptive name such as “GitLab-CI” and select the option “Programmatic access”.

In the second step, add the previously created group in order to give the user the right permissions.

Click next till the last step; here, in addition to the success message, you get two important pieces of information: the “Access key ID” and the “Secret access key”. This key pair will be your pass to manage your AWS services from the outside.

Write them down, and ensure nobody gains access to these keys.

Note: Once you leave this screen, you’ll no longer have access to the secret access key.

Fine, now that we’ve configured S3 and created a user to access it, we can play with GitLab.

GitLab CI/CD

GitLab offers a continuous integration service. If you add a .gitlab-ci.yml file to the root directory of your repository, and configure your GitLab project to use a Runner, then each commit or push triggers your CI pipeline.

Head over to GitLab and sign in to your account — or create a new one if you don’t have one.

Before creating a new project, I suggest starting with a new group. The keys created earlier are necessary for the deploy, and they must be stored on GitLab as environment variables; in order to avoid configuring every future project, you can put them in a group, so that all projects inside it have access to them.

If you prefer not to use a group, you can skip this step — just keep in mind that you’ll have to repeat the variable setup for each project.

Create the group

From your dashboard go to “Groups” and create a new group; choose a unique name, for example it could be your bucket name: “stage.yourdomain.com”.

Create the environment variables

Now go to your group’s Settings › CI / CD and expand the “Environment variables” panel. Add the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and fill the values with the keys you copied earlier; the AWS CLI reads these variables automatically from the job environment. If you don’t have a group, you have to repeat the same process for every project in which you need CD.

Now that we have the two variables set up, let’s create a test project.

Create a pilot project

Open your group, click on “New project” and name it pilot-project so we stay aligned. Once you have the repository, fill it with an existing app or start from this react-boilerplate.

Create the .gitlab-ci.yml

As mentioned above, GitLab CI/CD pipelines are configured using a YAML file called .gitlab-ci.yml.

In the root directory, create the .gitlab-ci.yml file and copy the content of the following Gist.
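It boils down to something like the sketch below; the walkthrough that follows explains each piece (the NVM install step, the image tags and the artifacts section here are assumptions, not necessarily the exact Gist):

variables:
  S3_BUCKET: "stage.yourdomain.com"
  BUILD_COMMAND: "npm run build"
  DIST_DIR: "./dist"

stages:
  - build
  - deploy

build:
  stage: build
  image: debian:latest
  before_script:
    # Install NVM, then the Node version declared in .nvmrc (or the stable one)
    - apt-get update -y && apt-get install -y curl git
    - curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
    - export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh"
    - if [ -f .nvmrc ]; then nvm install; else nvm install stable; fi
  script:
    - npm install
    - ${BUILD_COMMAND}
  artifacts:
    paths:
      - dist # must match DIST_DIR

deploy:
  stage: deploy
  image: python:latest
  before_script:
    # The AWS CLI reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
    - pip install awscli
  script:
    - aws s3 sync ${DIST_DIR} s3://${S3_BUCKET}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME} --delete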

If you want to learn more about this file, here’s the documentation.

What does this configuration do? Let’s see in detail.

variables:
  S3_BUCKET: "stage.yourdomain.com"
  BUILD_COMMAND: "npm run build"
  DIST_DIR: "./dist"

GitLab allows you to define variables that are then passed to the job environments. The delivery of every project is configured through the variables at the top of the file: it needs your bucket name, your build command and the directory where your application will be built.

stages:
- build
- deploy

This tells GitLab the stages to run during the process. We have two phases: the first one builds the application, whereas the second one deploys the distribution folder to S3.

build:
  stage: build
  image: debian:latest
  ...

Here we define the “build” job. As you can see, the job launches a Docker image based on Debian. Since we need Node to build our application, in before_script it installs NVM to download the right version based on the .nvmrc file — if it’s not present, the stable version is installed. Finally, in script it installs all the dependencies via npm and runs the build command.

deploy:
  stage: deploy
  image: python:latest
  ...

This is the “deploy” job; since the AWS CLI is written in Python, we need a Docker image with the right environment to execute it. When it runs, it downloads and installs the AWS tool in before_script, then in script it runs the AWS command that performs the sync.

aws s3 sync ${DIST_DIR} s3://${S3_BUCKET}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME} --delete

This command uploads our distribution folder to S3, creating the following directory structure: S3/[PROJECT_NAME]/[BRANCH_NAME]/[...]
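For the pilot project with, say, a master and a develop branch, the bucket contents would end up looking something like this:

stage.yourdomain.com/
└── pilot-project/
    ├── master/
    │   ├── index.html
    │   └── ...
    └── develop/
        ├── index.html
        └── ...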

Ok, it’s time to see it in action!

Start the pipeline

Push the file to your repository and a new pipeline will start automatically in GitLab. Go to your project’s CI / CD › Pipelines to check it.

When it ends, head over to AWS and open your bucket; you should see a copy of your application. Retrieve your S3 URL and navigate to /pilot-project/master to see it live. If you’ve correctly configured the bucket for website hosting, everything should work.

But we don’t like it, do we?

  • We want to use our domain name.
  • We want our application hosted on the root of our domain.
  • We want HTTPS.

Amazon has a powerful feature called Lambda@Edge that lets us run code when users make requests. We’re going to use it to map a custom URL to our application on S3 — in this way:

  • Master branch → pilot-project.stage.yourdomain.com
  • Develop branch → pilot-project--develop.stage.yourdomain.com

What we’re going to do now is:

  1. Create a hosted zone on Route 53 to route traffic for our domain name;
  2. Create the CloudFront Distribution;
  3. Create the Lambda@Edge function;

Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.

To let Amazon route traffic for our domain and its subdomains, we need to create a public hosted zone.

Create a hosted zone

Navigate to the Route 53 Management Console and open “Hosted zones”; then click on “Create Hosted Zone”, enter your domain name and click “Create”.

When the hosted zone is created, copy the value of the NS record.

Now go to your domain registrar. There should be a panel named “Nameservers” or something similar; to use Amazon DNS, you have to replace your domain’s nameservers with the ones you just copied.

Now that we have the hosted zone configured, we need to create a certificate to enable HTTPS.

Certificate Manager

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.

Request a certificate

Navigate to the AWS Certificate Manager, and click on “Request a certificate”; choose “Request a public certificate” and go next. On the second step enter the following domain names:

  • *.stage.[DOMAIN_NAME], e.g. *.stage.yourdomain.com
  • stage.[DOMAIN_NAME], e.g. stage.yourdomain.com

Then click “Next”. On the third step choose “DNS validation” as validation mode, click on “Review” and then “Confirm and request”.

Now open the two accordions and click on “Create record in Route 53” to create a CNAME record and allow ACM to issue certificates for your domain name.

When you’re done, click on “Continue” and wait until the status of your request becomes “Issued” — it could take a few minutes.

CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment

Create the CloudFront Distribution

Navigate to the AWS CloudFront Management Console, click on “Create Distribution” and choose a web distribution.

The “Origin Domain Name” field tells CloudFront which service to get your web content from. The dropdown lists the AWS resources associated with the current account, but we won’t take the value from there: we have to fill the field with the S3 website endpoint of our bucket (the one shown in the “Static website hosting” card), not the REST endpoint suggested by the list.

In “Viewer Protocol Policy”, select “Redirect HTTP to HTTPS” in order to force HTTPS.

Since we’ll invalidate the cache on every deployment, we can set a higher TTL; in “Object Caching” select “Customize” and change “Default TTL” to “31536000” — that’s 60 × 60 × 24 × 365 seconds, i.e. one year.

Now fill the “Alternate Domain Names (CNAMEs)” textarea with the two domains for which you requested the certificate.

In “SSL Certificate” select “Custom SSL Certificate”; clicking on the input field should bring up a dropdown listing all your certificates; select the one created before.

OK, we have completed the configuration. Click on “Create Distribution” to finish.

If everything went well, a new distribution has been created. Wait until the value of the “Status” column changes from “InProgress” to “Deployed” — it could take from 20 to 40 minutes.

Once the distribution is deployed, all that remains is to point our domain name to it; we only need to create an alias record on Route 53.

Add the alias record on Route 53

Navigate to the Route 53 Management Console and open your hosted zone. Since we need every subdomain before stage to point to our distribution, we have to set a wildcard DNS record.

A wildcard DNS record is a record in a DNS zone that will match requests for non-existent domain names. A wildcard DNS record is specified by using a “*” as the leftmost label (part) of a domain name, e.g. *.example.com.

Click on “Create Record Set” and a new form should appear on the right side. Fill the “Name” input with *.stage and enable “Alias” by clicking on “Yes”; then open the “Alias Target” dropdown and select your distribution. Click on “Create” and repeat the same process, but with the value stage.

If you’ve followed these two steps, your hosted zone should now contain two alias records pointing at the distribution, alongside the NS and SOA records.

Try the distribution by navigating to the following URL: https://stage.[YOUR_DOMAIN]/pilot-project/master

If everything works, we can go ahead with the Lambda function.

Lambda@Edge

Lambda@Edge runs your code in response to events generated by the Amazon CloudFront content delivery network (CDN). Just upload your code to AWS Lambda, which takes care of everything required to run and scale your code with high availability at an AWS location closest to your end user.

The job of our Lambda is to intercept the CloudFront origin request, triggered when our distribution requests an object from the S3 bucket, and rewrite the request path based on the current subdomain. So, if we navigate to https://project.stage.yourdomain.com, it will respond with the content of https://stage.yourdomain.com/project/master.

But before creating the function, we have to configure the CloudFront cache behavior to whitelist the Host header.

Whitelist the Host header

Navigate to the AWS CloudFront Management Console and open your distribution; click on the “Behaviors” tab and edit the behavior. Now change the value of “Cache Based on Selected Request Headers” to “Whitelist” and, under the “Whitelist Headers” section, add “Host” to the whitelisted headers.

Now click on “Yes, Edit” to save the changes.

Create the Lambda@Edge

Navigate to the Lambda Management Console and click on “Create function”. Choose a name like “subdomain-redirect”, making sure the “Runtime” field is set to “Node.js 8.10”.

Since Lambda@Edge needs a special role to operate, expand the permissions section by clicking on “Choose or create an execution role”. From the dropdown choose “Create a new role from AWS policy templates”, enter the name “LambdaEdgeRole” and, on “Policy templates”, search and add the policy “Basic Lambda@Edge permissions”.

Fine, now click on “Create Function” — all that remains now is to insert the code and connect it to CloudFront.

The function code

In “Function code” copy the content of the following Gist.
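A minimal sketch of such an origin-request handler could look like the following; it assumes the origin is the S3 website endpoint (a custom origin) and that branch folders are addressed with the double-dash scheme defined earlier, so treat the names and structure as illustrative rather than the exact Gist:

'use strict';

// Must start with a dot, e.g. ".stage.yourdomain.com"
const BASE_HOST = '.stage.yourdomain.com';

exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  const host = headers.host[0].value;

  // "pilot-project--develop.stage.yourdomain.com" -> project "pilot-project",
  // branch "develop"; no "--" means the master branch
  if (host.endsWith(BASE_HOST)) {
    const subdomain = host.slice(0, -BASE_HOST.length);
    const [project, branch = 'master'] = subdomain.split('--');
    // Rewrite the path so it points to the right folder in the bucket
    request.uri = '/' + project + '/' + branch + request.uri;
  }

  // Host is whitelisted, so send the origin's own host name to S3, not the viewer's one
  headers.host[0].value = request.origin.custom.domainName;

  callback(null, request);
};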

Change the value of the variable BASE_HOST to your domain name, making sure the first character is a dot, e.g. “.stage.yourdomain.com”; then save the changes by clicking on “Save”.

To make the function respond when a new origin request is made, we need to create a trigger.

Create the trigger

From the “Designer” section add the “CloudFront” trigger and click on “Deploy to Lambda@Edge”; select your distribution, choose “Origin request” as the CloudFront event, check “Include body” and confirm the deploy by checking the acknowledgement. Click on “Deploy” to deliver the changes and wait 20 to 40 minutes.

When the distribution is deployed, test the Lambda by navigating to the following URL: https://pilot-project.stage.[YOUR_DOMAIN]

If everything works, we can forget about AWS and move on to the final step: GitLab Environments.

GitLab Environments

GitLab CI/CD is capable of not only testing or building your projects, but also deploying them in your infrastructure, with the added benefit of giving you a way to track your deployments. In other words, you can always know what is currently being deployed or has been deployed on your servers.

GitLab comes with a powerful functionality named Environments, a web interface that keeps track of your deployments providing features such as:

  • Retrieve the url to see the environment live.
  • See the full history of your deployments.
  • Rollback the environment to a previous version.
  • Delete the environments from the server.

With Environments you have full control of your project delivery.

Configuring environments is pretty easy: all you have to do is add a few lines to your .gitlab-ci.yml.

Configuring environments

Open your pilot-project and edit the .gitlab-ci.yml file, overriding its content with the following Gist.
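Put together, the updated file looks roughly like the sketch below; the build job stays as before, and details such as when: manual and GIT_STRATEGY: none on the clean job are assumptions based on how GitLab stop jobs typically work:

variables:
  S3_BUCKET: "stage.yourdomain.com"
  BUILD_COMMAND: "npm run build"
  DIST_DIR: "./dist"
  DISTRIBUTION_ID: "YOUR_CLOUDFRONT_DISTRIBUTION_ID"

stages:
  - build
  - deploy

# ... the build job is unchanged ...

.deploy: &deploy
  stage: deploy
  image: python:latest
  before_script:
    - pip install awscli
  script:
    - aws s3 sync ${DIST_DIR} s3://${S3_BUCKET}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME} --delete
    - aws cloudfront create-invalidation --distribution-id ${DISTRIBUTION_ID} --paths "/*"

deploy:master:
  <<: *deploy
  only:
    - master
  environment:
    name: ${CI_COMMIT_REF_NAME}
    url: https://${CI_PROJECT_NAME}.${S3_BUCKET}
    on_stop: clean

deploy:branches:
  <<: *deploy
  only:
    - branches
    - tags
  except:
    - master
  environment:
    name: ${CI_COMMIT_REF_NAME}
    url: https://${CI_PROJECT_NAME}--${CI_COMMIT_REF_NAME}.${S3_BUCKET}
    on_stop: clean

clean:
  stage: deploy
  image: python:latest
  variables:
    GIT_STRATEGY: none
  before_script:
    - pip install awscli
  script:
    - aws s3 rm s3://${S3_BUCKET}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME} --recursive
    - aws cloudfront create-invalidation --distribution-id ${DISTRIBUTION_ID} --paths "/*"
  when: manual
  environment:
    name: ${CI_COMMIT_REF_NAME}
    action: stop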

Let’s see in detail what we’ve just added.

deploy:master:
  <<: *deploy
  only:
    - master
  ...
deploy:branches:
  <<: *deploy
  only:
    - branches
    - tags
  except:
    - master
  ...

The first thing you should notice is that the deploy job has been split. Since Master has a different environment configuration — because of the final url composition — we needed to separate it into two distinct jobs.

To avoid duplicating properties between these two jobs, we used a special YAML feature called an anchor (<<: *deploy) to inherit the deploy configuration.

  ...
  environment:
    name: ${CI_COMMIT_REF_NAME}
    url: https://${CI_PROJECT_NAME}.${S3_BUCKET}
    on_stop: clean

To define an environment associated with a job we use the environment keyword. The environment is configured as follows:

  • name → the environment name; it takes the branch name from which the pipeline has been launched.
  • url → it exposes buttons in various places within GitLab which, when clicked, take you to the defined URL; the value is composed so that you end up with something like https://pilot-project--develop.stage.yourdomain.com.
  • on_stop → it declares the job to run when environments are closed; in our case we run the clean job.

aws s3 rm s3://${S3_BUCKET}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME} --recursive

This is the AWS command used to delete a folder and its content from S3. When a branch is deleted, or a user stops the environment manually by clicking the stop button, an event triggers the clean job, whose task is to delete the folder related to that branch from S3.

Invalidate the cache

As you may have noticed, a new configuration variable has been added to .gitlab-ci.yml: DISTRIBUTION_ID. Since we must invalidate the CloudFront cache whenever content is updated, we need the ID of the distribution to execute the following command:

aws cloudfront create-invalidation --distribution-id ${DISTRIBUTION_ID} --paths "/*"

You can retrieve the ID on your AWS CloudFront Management Console.
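Alternatively, assuming the AWS CLI is configured locally, you can list your distributions together with their IDs and aliases:

aws cloudfront list-distributions --query "DistributionList.Items[].[Id, Aliases.Items[0]]" --output table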

Create the environments

Push the new .gitlab-ci.yml to start the pipeline and create the environment. When it ends, go to your project’s Operations › Environments to check it. Also try creating a new branch in addition to Master, like Develop, to see its corresponding environment.

Conclusion

With this configuration, you now have the ability to start a Continuous Delivery pipeline for every project that contains the .gitlab-ci.yml. Every branch will have its own environment, which means you can deploy multiple versions of your projects and use your favourite staging workflow.

I hope you found this read helpful; please share your improvements and how you implemented it within your workflow. ✌️
