
In this section we will cover the steps required to trigger automated browser-based tests with BitBucket Pipelines and Selenium whenever code is pushed. We will then cover the steps to push a newly created Docker image to AWS ECR for storage (and later deployment).

 

Create BitBucket Pipeline

In order to orchestrate your CI & CD pipelines, add a new file named bitbucket-pipelines.yml to the root directory of your repo.

When this file is included in your repo, every time a ‘push’ is made, a PR is merged, etc., BitBucket will check this file to see if any events contained within it should be triggered.

For our purposes we will have 2 pipelines: one for Continuous Integration, which will trigger automated tests when a Pull Request (PR) is made, and a second, for pushing images to ECR, which is covered further below.

Important note: Before PHPUnit can run any tests, you will need to create/copy a phpunit.xml file in the root directory. See the appendix for a working example; we also use a later version of PHPUnit: 8.4.1.
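For orientation, a heavily trimmed sketch of what such a file can look like is below. The bootstrap path and environment values are illustrative assumptions for a Drupal project with its web root at web/; the appendix has the real, working example.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative phpunit.xml; see appendix for the working version -->
<phpunit bootstrap="web/core/tests/bootstrap.php" colors="true">
  <php>
    <!-- Required by Drupal kernel/functional tests; values are assumptions -->
    <env name="SIMPLETEST_BASE_URL" value="http://localhost"/>
    <env name="SIMPLETEST_DB" value="sqlite://localhost/sites/default/files/.ht.sqlite"/>
  </php>
  <testsuites>
    <testsuite name="custom">
      <directory>web/modules/custom</directory>
    </testsuite>
  </testsuites>
</phpunit>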

Our developer workflow is as such:

When a ticket is picked up in Jira (Jira is not required for this purpose), the assigned developer will create a new branch from the ‘develop’ branch. How you create branches is not so important; what is important for this example is that each new branch name follows the pattern:

feature/sensible-name-for-branch
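For example, the assigned developer might create such a branch like this (the branch name is illustrative):

git checkout develop
git pull
git checkout -b feature/sensible-name-for-branch
git push -u origin feature/sensible-name-for-branch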

The below pipeline script will listen for any new PR where the branch label begins with:

feature/*

options:
  docker: true
  size: 2x

pipelines:
  pull-requests:
    feature/*:
      - step:
          services:
            - docker
          caches:
            - docker
          script:
            - echo `uname -s`-`uname -m`
            - curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
            - chmod +x /usr/local/bin/docker-compose
            - mv docker-compose.yml docker-compose.yml.dev
            - mv docker-compose.yml.test docker-compose.yml
            - docker-compose up -d
            - docker exec d9_base_php composer install -d app --no-interaction
            - docker exec d9_base_php mkdir app/web/sites/default/files
            - docker exec d9_base_php chmod -R 777 app/web/sites/default/files
            - docker exec d9_base_php app/vendor/bin/phpunit -c /app/phpunit.xml -v app/web/modules/custom
            - docker-compose down

A breakdown of the above config:

options - required at the start of the pipeline yml file.

docker: true

size: 2x - these relate more to the second pipeline, which requires extra processing power.

pull-requests:

   feature/*:

This tells BitBucket to trigger this pipeline when a PR is created from any branch whose name begins with ‘feature/’.

script

What appears below the script section is a list of commands, just as if you were manually setting up a new Linux host that contained only your code.

The first thing we do is download and install ‘docker-compose’; secondly, we swap the docker-compose files (I will explain below why we use different files for local vs. deployment).

We then launch our containers and run ‘composer install’ (as we don’t commit dependencies to our repo).

After the basic setup, a working version of your code now exists within this BitBucket virtual machine. You can’t access it remotely, and it won’t include your database; all Drupal automated tests are run against a clean DB, so your DB is not required.

The next step is to run your tests; in this example we are only running tests that exist in custom modules. Changing the path to /app/web would run all core and contrib tests too, but be cautious, as this would take a very long time, and BitBucket will start to charge you if you use more than 1 hour of pipeline computation within 1 month.
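Going the other way, if even the full custom-module run grows too slow, you can narrow the final phpunit command to a single module; ‘my_module’ below is a hypothetical name:

docker exec d9_base_php app/vendor/bin/phpunit -c /app/phpunit.xml -v app/web/modules/custom/my_module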

To test if the above works, follow the instructions to create a new branch, push any changes, then in BitBucket create a new PR against the develop branch.

If you then click ‘Pipelines’ in the sidebar you should see the status of any old or current pipelines being run.

[Screenshot: BitBucket pipeline progress]

If any pipeline fails, BitBucket will notify you via email.

As alluded to above, we use a slightly modified version of docker-compose.yml for running tests/deployment.

I created a new Docker image, which is a clone of:

webdevops/php-nginx:7.4

called:

adamclareyuk/drupal-nginx-php-ssh:latest

It is a simple substitution which comes with a Drupal nginx server config file baked in (see appendix), as well as SSH installed. Having SSH installed and accessible is not strictly necessary, but I find it incredibly useful for the dev server to have SSH so that the container can be remotely accessed for debugging purposes.
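To make the swap concrete, a minimal sketch of what docker-compose.yml.test can look like is below. The service name, document root, volume and port are assumptions; your real file will depend on your project:

# Hypothetical sketch of docker-compose.yml.test
version: '3'
services:
  php:
    image: adamclareyuk/drupal-nginx-php-ssh:latest
    container_name: d9_base_php
    environment:
      - WEB_DOCUMENT_ROOT=/app/web
    volumes:
      - ./:/app
    ports:
      - "80:80"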

So to recap: by this point you should have a local Drupal development environment set up and working within minutes, and you should also have a pipeline configured to run automated tests when Pull Requests are created.

Create Elastic Container Registry (ECR)

When you have developed your app to the point where you want to deploy it, you first need somewhere to deploy it to. AWS offers ECR, which is similar to Docker Hub: a simple repository for images. ECR integrates seamlessly with ECS for new task deployments.

In AWS console, select ECR.

Give your repo a sensible name.

[Screenshot: Create AWS ECR repository]

That’s all that is required to create the repository, but in order for our BitBucket pipeline to push new images to it, we need to grant permissions.
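If you prefer the AWS CLI to the console, creating the repository is a one-liner; the repository name and region here are placeholders:

aws ecr create-repository --repository-name my-drupal-app --region eu-west-2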

In AWS console, select IAM.

Here we create a new user; let’s call the user ‘ECRuser’.

Select only ‘Programmatic access’.

Important note: You will need to keep a copy of the API credentials for this user; they will be needed later.
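The same user can be created from the AWS CLI if you prefer; ‘create-access-key’ prints the credentials mentioned above, so capture its output somewhere safe:

aws iam create-user --user-name ECRuser
aws iam create-access-key --user-name ECRuser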

We will create 2 permissions policies for this user. The first we will call ‘ECRReadWrite’:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:PutImageTagMutability",
        "ecr:StartImageScan",
        "ecr:GetDownloadUrlForLayer",
        "ecr:PutImageScanningConfiguration",
        "ecr:GetAuthorizationToken",
        "ecr:UploadLayerPart",
        "ecr:ListImages",
        "ecr:PutImage",
        "ecs:RegisterTaskDefinition",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecs:DescribeServices",
        "ecr:StartLifecyclePolicyPreview",
        "ecr:InitiateLayerUpload",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "*"
    }
  ]
}

 

The second we’ll call ‘DeployECS’, and it contains the policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecs:UpdateService",
        "ecs:RegisterTaskDefinition"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::*:role/ecsTaskExecutionRole"
    }
  ]
}
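If you saved the two policy documents above to local JSON files (the filenames below are placeholders), they can be attached to the user as inline policies via the CLI rather than the console:

aws iam put-user-policy --user-name ECRuser --policy-name ECRReadWrite --policy-document file://ecr-readwrite.json
aws iam put-user-policy --user-name ECRuser --policy-name DeployECS --policy-document file://deploy-ecs.json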

 

With our new user and its 2 policies attached, we will update our BitBucket pipeline to push our new images to ECR.

In our bitbucket-pipelines.yml file we will have 2 pipelines; the first is ‘pull-requests:’ (see above). The second pipeline is below:

branches:
  develop:
    - step:
        name: Push to ECR
        image: atlassian/pipelines-awscli
        services:
          - docker
        caches:
          - docker
        script:
          - docker run -d -p 80:80 -e WEB_DOCUMENT_ROOT=/var/www/html/app/web -e SSH_PASSWORD=${SSH_PASSWORD} -v $BITBUCKET_CLONE_DIR/:/app --name example_name adamclareyuk/drupal-nginx-php-ssh
          - docker exec example_name cp -r /app /var/www/html
          - docker exec example_name cp /var/www/html/app/web/sites/default/settings.php.dev /var/www/html/app/web/sites/default/settings.php
          - docker exec example_name cp /var/www/html/app/nginx/php.ini /usr/local/etc/php/conf.d/99-docker.ini
          - docker exec example_name cp /app/nginx/vhost.dev.conf /etc/nginx/conf.d/10-docker.conf
          - docker exec example_name chmod +x /var/www/html/app/setpermissions.sh
          - docker exec example_name composer install -d /var/www/html/app --no-interaction
          - docker exec example_name bash /var/www/html/app/setpermissions.sh --drupal_path=/var/www/html/app/web --drupal_user=application --httpd_group=www-data
          - docker commit example_name example_image_name
          - pipe: atlassian/aws-ecr-push-image:1.2.0
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_KEY}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET}
              AWS_DEFAULT_REGION: ${AWS_REGION}
              IMAGE_NAME: "example_image_name"

definitions:
  services:
    docker:
      memory: 4096

There will be 2 steps in this second pipeline: a create & push step, then a deploy step. We will complete the deploy step when we have working infrastructure.

Important note: If you do intend to use SSH, you will need to add an additional BitBucket variable (see below) called ‘SSH_PASSWORD’, set to a strong password for the root login.

In this step we are using a docker container created by Atlassian called:

atlassian/pipelines-awscli

This container makes it easier to integrate with AWS ECR. 

Up until now, we have been mounting our codebase within the container we have been using for development/testing. However, in order to push a new image that has the code ‘baked’ into it, we need to copy the mounted code directory to a directory within the container.

example_name is what I’ve called the container here, but you should use something sensible.

We have a specially configured settings.php file in our repo [see appendix] that we copy into this deployable container. We also have a custom nginx vhost file and php.ini config we copy into the container [see appendix].

setpermissions.sh is a bash script we use to ensure all file permissions are set throughout [see appendix].
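The real script is in the appendix; as a rough sketch of the idea only (flag parsing omitted, paths and users assumed from the arguments passed above), it does something along these lines:

# Illustrative sketch only - see appendix for the actual setpermissions.sh
drupal_path=/var/www/html/app/web
chown -R application:www-data "$drupal_path"
find "$drupal_path" -type d -exec chmod 750 {} \;
find "$drupal_path" -type f -exec chmod 640 {} \;
chmod -R 770 "$drupal_path"/sites/default/files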

Next we see a ‘pipe’ sub-process. After we have committed our new image containing our codebase and customised config, the atlassian/aws-ecr-push-image:1.2.0 pipe will then push this image to our ECR repo.
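Under the hood this is roughly equivalent to logging in, tagging and pushing by hand, sketched below assuming a recent AWS CLI and the repository variables described next; the registry host placeholder is the AWS_URI without the repository path:

aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker tag example_image_name "$AWS_URI":latest
docker push "$AWS_URI":latest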

This pipe requires 4 variables. You could hard-code them in this file but, as we should all know, you should never save credentials in version control. So instead, BitBucket provides secured ‘Repository variables’.

Within your BitBucket repository dashboard click ‘Repository settings’ in the sidebar, then click ‘Repository variables’ in the following sidebar.

Here we create our 4 variables.

AWS_SECRET: From ECRuser credentials 
AWS_KEY: From ECRuser credentials 
AWS_URI: ECR repository URI
AWS_REGION: ECR region

Assuming all is well, now when you push or merge to the ‘develop’ branch, your image should be pushed to and stored in ECR. You can verify this by clicking on the repo within AWS.

You should see something like:

[Screenshot: View new AWS ECR repository]
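Alternatively, you can verify from the command line; the repository name and region below are placeholders for whatever you created earlier:

aws ecr describe-images --repository-name my-drupal-app --region eu-west-2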

 

Previous section: Local setup

Next section: Setup AWS Infrastructure
