
This is the most verbose section of this tutorial. Conceptually there is not a lot to do compared to other forms of infrastructure setup; however, there are a lot of little steps to complete. You should be able to step through each one quickly, making this as pain-free as possible.


Create Fargate service

So we have our local environment set up, we can manage commits and run automated tests, and we have pushed our updated images to ECR.

Before we can deploy our application we first need somewhere to deploy it to. The following steps will cover the creation of the entire AWS infrastructure step-by-step.

We start with the Fargate service, which will host our deployable Docker image containing the codebase for our application. Using Fargate instead of standard EC2 instances means we don’t need to set up or maintain any EC2 instances, which is the raison d'être of serverless design. It’s surprisingly simple, so let’s begin.

Elastic Container Service (ECS) is where the magic happens.

When you create your first cluster, you will be guided through the initial steps to create your cluster, service and task definitions.

As we will be deploying from our custom Docker image, we select ‘Custom’ from the options and click ‘Configure’.

Create AWS ECS cluster

The next form is where we define the spec of the tasks that will run your images.

Give the container a name, then paste in the URI of your deployed image in ECR. As we push subsequent images, the newest image will always be tagged ‘latest’, so we can use a single URI to always locate the most recent image.

Important note: Make sure you set up lifecycle rules in ECR to delete older images, otherwise you will be charged for their storage space.
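As a sketch, an ECR lifecycle policy that keeps only the five most recent images could look like the following (the retention count is an arbitrary example):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire all but the 5 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 5
      },
      "action": { "type": "expire" }
    }
  ]
}
```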

As we are running Drupal, we give it 4GB of memory because... well it’s Drupal.

In ‘Advanced settings’, you can tweak as necessary but for this example:

CPU units: 1024

Annoyingly, at this point ECS won’t let us attach the EFS volume yet, so we leave this section blank for now.

After submitting this form, you should see something like this:

Create AWS Cluster

Click ‘Edit’ next to ‘Task definition’.

Set ‘Task memory’ and ‘Task CPU’ to match what we set previously and save.
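For reference, in the task definition JSON those values correspond to the fragment below (Fargate expects CPU and memory as strings, and 1024 CPU units is a valid pairing with 4 GB):

```json
{
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "4096"
}
```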

Set CPU and memory

Click ‘Next’, then choose ‘Application Load Balancer’

Application load balancer

On the next step, give your new cluster a sensible name.

Next review the configuration and click ‘Create’

It will take a little while to spin up the relevant components.

Creating AWS Fargate cluster

Your Cluster, Service and first Task should now be running.

Running AWS Fargate cluster

Here we will take a detour to create the EFS filesystem that we will then mount. 


Create Elastic File Storage (EFS) mount

Important note: Remember that whenever a new deployment takes place, the previous Tasks (containers) will be destroyed, so anything stored inside them will be lost. That’s why we mount EFS. S3 or another storage service could be used, but in this example we use EFS.

In the AWS console, navigate to ‘EFS’.

Create a new filesystem with a sensible name.

Click on your new filesystem, then select the ‘Access point’ tab, then ‘Create access point’.

Important note: Ensure you select the VPC created by the ECS cluster.

For our purpose, the root directory will be ‘/files’

For ‘POSIX user’ and ‘Root directory creation permissions’ set:

Owner id = 1000

Group id = 82


Permissions = 0777

Then finish by clicking ‘Create access point’.
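For reference, the same access point settings expressed in the shape the EFS CreateAccessPoint API expects (the filesystem ID is a placeholder; GID 82 is the www-data group in Alpine-based PHP images):

```json
{
  "FileSystemId": "fs-XXXXXXXX",
  "PosixUser": { "Uid": 1000, "Gid": 82 },
  "RootDirectory": {
    "Path": "/files",
    "CreationInfo": {
      "OwnerUid": 1000,
      "OwnerGid": 82,
      "Permissions": "0777"
    }
  }
}
```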

In the ‘Network’ tab, create a new mount target.

Select an availability zone, a subnet id and the security group created by the ECS cluster.

Repeat this step for each availability zone/subnet id.

From your ECS Service page, click the Tasks tab, then click the link in the Task Definition column.

Click ‘Create new revision’

In the ‘Volumes’ section click ‘Add volume’.

Choose ‘EFS’ for the Volume type; you should then see more fields where you can select the volume you created earlier, as well as its access point.

Add volume to AWS cluster

Click ‘Add’ then ‘Create’

Now that we have added the volume, we need to create a new Task Definition revision where we can then specify the mount point. I realise this process is convoluted; maybe there is a more straightforward way of doing it all in one go, but I haven’t found it.

On the new Task Definition form, click the container definition link

Attached volume


Select your volume in ‘Mount points’.

Container path: /var/www/html/app/web/sites/default/files

Click ‘Update’ then ‘Create’
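Put together, the relevant parts of the resulting task definition should end up looking roughly like this (the volume name, container name and IDs are placeholders for your own values):

```json
{
  "volumes": [
    {
      "name": "efs-files",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-XXXXXXXX",
        "transitEncryption": "ENABLED",
        "authorizationConfig": { "accessPointId": "fsap-XXXXXXXX" }
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "drupal",
      "mountPoints": [
        {
          "sourceVolume": "efs-files",
          "containerPath": "/var/www/html/app/web/sites/default/files"
        }
      ]
    }
  ]
}
```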

One ‘gotcha’ that will prevent you from triggering deployments is the service’s platform version.

On the service page, click ‘Update’ and make sure the platform version is 1.4.0 (EFS volumes require Fargate platform version 1.4.0 or later).

Update cluster

Edit or leave the remaining field values and save.

At this point, it will fail to run any tasks because it can’t connect to the EFS filesystem. You will need to update the security group of the ECS Service to allow connections to/from EFS.

For the ECS Security group, click ‘Edit inbound rules’.

Add a new rule with port 2049, where the ‘Source’ is your ECS security group.
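The same rule can be added from the AWS CLI; a sketch, where both security group IDs are placeholders (NFS, which EFS uses, runs on TCP port 2049):

```
# Allow NFS (port 2049) traffic from the ECS service's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol tcp \
  --port 2049 \
  --source-group sg-XXXXXXXX
```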

You should now have a working ‘server’ serving Drupal with a persistent file store. As a quick sanity check, click on the currently running Task (or run a new Task if there isn’t one). You will see the public IP of this instance listed. Put this IP into your browser and you should see the Drupal error “The provided host name is not valid for this server.” If you do, then great: it means Drupal is operational and correctly blocking page requests because the IP address isn’t listed as a ‘trusted_host_pattern’ in the settings.php file.
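The check behind that error lives in settings.php; once you have a domain pointed at the site, you allow it with something like the following (the domain is a placeholder):

```php
$settings['trusted_host_patterns'] = [
  '^www\.example\.com$',
  '^example\.com$',
];
```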

So although we have the working codebase, we currently don’t have a DB to connect to. Let’s do that next.


Create Aurora RDS Database

From the AWS console, select ‘RDS’.

Click ‘Create database’

Select ‘Standard create’ and ‘Amazon Aurora’

Leave Edition as ‘Amazon Aurora with MySQL compatibility’.

Capacity type should be ‘Serverless’

Version should be the latest 5.7

Give this new DB a sensible name, and set the admin username and password.

Capacity settings can be whatever you feel necessary.

Ensure that the same VPC as your ECS Cluster is selected.

In the ‘Additional connectivity configuration’ section, choose the Security group used by your ECS Cluster.

Once your new DB has been created, copy the ‘Endpoint’ URI as you will need this in your Drupal database settings.

If you are using an existing database that was created locally, at this point you will want to import a copy of that DB into your new Serverless Aurora instance, then, back in your local codebase, edit your settings.php with your new DB username, password and host (the Endpoint).
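As a sketch, the Drupal database settings in settings.php would look like this, with every value below a placeholder to swap for your own:

```php
$databases['default']['default'] = [
  'driver' => 'mysql',
  'database' => 'drupal',
  'username' => 'admin',
  'password' => 'your-password',
  // The 'Endpoint' URI copied from the RDS console.
  'host' => 'your-cluster.cluster-xxxxxxxx.eu-west-1.rds.amazonaws.com',
  'port' => '3306',
  'prefix' => '',
];
```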

If you are installing a new site straight from your deployed infrastructure, then you won’t have a local settings.php file. In which case, you will need to set your DB credentials while installing (later).


Create Route 53 (Routing)

We’re nearly there! We just need to configure routing so that we can access the new site online.

If you haven’t already, you will need to create a Hosted Zone with a designated domain for this site. For more information on this see [].

Now, we’ll assume you have a Hosted Zone setup for the next step.

Click on your Hosted Zone and then click ‘Create record’

There are several options, but for this example we will choose ‘Simple routing’

AWS Route 53 simple routing

Click ‘Define simple record’ next.

Add a Record name, then ensure you select:

Value/route traffic to: Alias to Application and Classic Load balancer

Region = [the region of your ECS Cluster]

Then select the ALB that was created with your Cluster.

Finish by clicking ‘Define simple record’ then ‘Create records’
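For reference, the equivalent record expressed as a Route 53 change batch (the domain, ALB DNS name and the ALB’s canonical hosted zone ID are all placeholders):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZXXXXXXXXXXXXX",
          "DNSName": "my-alb-123456789.eu-west-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```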

At this point you might think that going to your new domain would show a working site… it probably won’t. I get a 503 error when I try to access it. Here are some troubleshooting tips.

You may find that Tasks are started, then a few minutes later they are shut down. There are logs you can view in the ECS dashboard, but they can often be vague. One definite problem will be the automated health checks from your ALB: it will ping your target group and always receive a 400 error, and after several failed attempts it will shut down the Task.

The Docker image we use for deployments comes with a Drupal-specific vhost config; within that configuration file [see appendix] I’ve added a separate location block:

location = /health/ { return 200; }

Because of the way the ALB makes its requests and the way nginx expects headers and such, the health check will always receive a 400 response. To get around this, we create a location that always returns 200 when pinged.

To update your health checker to ping this route instead, in the AWS console, select the ‘EC2’ service.

In the sidebar, select ‘Target groups’, select your target group, then click ‘Edit’ by ‘Health check settings’.

Change ‘Health check path’ to: /health/

To ensure a speedy transition from the old container to the new, set the following advanced settings:

Health settings

Also, if your Tasks have been getting shut down frequently, it could be that there is no ‘Target’ currently registered in your ‘Target group’.

AWS Target group

If this is the case, then update the service and ‘Force new deployment’ after saving, or ‘Run new task’.

When you see a registered target here, try accessing your domain again. If it’s a new install you should see the Drupal install screen; if not, you might see the error “The provided host name is not valid for this server.” unless you have already set this.


Previous section: BitBucket Pipelines

Next section: Finalise deployment steps


Copyright © City Web Consultants Ltd. 2017. All Rights Reserved