Discovering Docker for AWS

This article is the first in a series dedicated to Docker for AWS.
A couple of days ago, I was happy to receive an invitation to the Docker for AWS beta.


Working in a startup that has just started to benefit from the Amazon AWS Activate program makes this a great opportunity to test and play with one of the big features announced during DockerCon 2016.

Let’s start discovering all of this.

Methods to create a swarm on AWS

Prior to Docker 1.12, to create a swarm on AWS you needed to instantiate VMs on EC2, install your swarm on them (manually or with a configuration management tool), and set up all the necessary services (VPC, ELB, security groups, …).
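For instance, the manual route typically looked like this; a rough sketch, where the region, instance type, and node names are arbitrary placeholders:

```shell
# Provision an EC2 host that will act as the (legacy) Swarm master.
# <token> is a discovery token obtained beforehand
# (e.g. with `docker run --rm swarm create`).
docker-machine create --driver amazonec2 \
  --amazonec2-region eu-west-1 \
  --amazonec2-instance-type t2.micro \
  --swarm --swarm-master \
  --swarm-discovery "token://<token>" \
  swarm-master

# Each additional node repeats the command without --swarm-master,
# and the VPC, ELB, and security groups still have to be set up separately.
```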

Docker for AWS brings a solution closely integrated with the AWS services, and all the setup is done for us out of the box. A couple of clicks and a secure swarm cluster using AWS services is configured for us. Let’s see this in action.

Just a couple of minutes to get a working swarm

Clicking the Launch Stack button in the email above redirects to the AWS CloudFormation service with a pre-populated template field. The template describes all the AWS services that will be instantiated with our swarm.

Of course, Amazon describes the template and stack much better than I do:

A stack is a collection of AWS resources that you can manage as a single unit.
In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks.
All the resources in a stack are defined by the stack's AWS CloudFormation template.
A stack, for instance, can include all the resources required to run a web application, such as a web server, a database, and networking rules.
If you no longer require that web application, you can simply delete the stack, and all of its related resources are deleted.


Nothing to change during this first step, as I need to use the predefined template.

The next step lets us set the size of our swarm (number of manager and worker nodes) and select the type of node to use. Let’s go with 3 manager nodes and 5 worker nodes on t2.micro instances.
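As an aside, the same stack could also be created from the command line with the AWS CLI. This is a hedged sketch: the template URL would come from the invitation email, and the parameter keys are assumptions mirroring the console form fields:

```shell
# Hypothetical sketch: parameter keys mirror the console form and may
# differ in the real template.
aws cloudformation create-stack \
  --stack-name docker-for-aws-test \
  --template-url "<template-url-from-the-invitation-email>" \
  --parameters \
      ParameterKey=ManagerSize,ParameterValue=3 \
      ParameterKey=ClusterSize,ParameterValue=5 \
      ParameterKey=InstanceType,ParameterValue=t2.micro \
  --capabilities CAPABILITY_IAM
```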



The next step lets us add tags to the stack we are creating. I’ll leave all of this empty for now, as I just want to test the creation of the swarm. I’ll come back to the configuration options in detail in a future article.


The last step is a review of all that we entered so far. One click away from having our stack created… exciting 🙂


We are then presented with the following screen:


After a couple of refreshes, we see the status of our stack, currently being created.


Judging by the list of events, a lot is happening behind the scenes.


A couple of minutes later, there it is: our stack is created. Great!


Let’s now have a look at what has been created for us.

Check the status of our swarm

Once the creation of our stack is complete, the command to ssh into the swarm is provided in the Outputs tab. Let’s connect to our beast and list the nodes.


Our 8 nodes are there:

  • 1 manager node elected as the Leader
  • 2 manager nodes with state ‘Reachable’
  • 5 worker nodes
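For illustration, the node listing looks roughly like this (the IDs and hostnames below are placeholders, not the real ones from this stack):

```shell
docker node ls
# ID            HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
# 0abc... *     ip-172-31-0-10   Ready   Active        Leader
# 1def...       ip-172-31-0-11   Ready   Active        Reachable
# 2ghi...       ip-172-31-0-12   Ready   Active        Reachable
# 3jkl...       ip-172-31-0-13   Ready   Active
# ...           (plus 4 more worker nodes)
```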

The ssh command also gives us the endpoint to contact our cluster.

We will come back to this endpoint later on.

Note: the stack will be deleted at the end of this article; all the IPs, DNS entries, etc. are temporary.

Let’s have a look, for example, at the security groups created for us… there is a lot going on here (please note that the last two in the list were not created by Docker for AWS; they are security groups remaining from previous testing).


Deploying our first bundle

Let’s use our swarm and deploy a bundle on it. In a previous article we created a DAB (Distributed Application Bundle) from a docker-compose.yml file. We will use this bundle (demoswarm.dab) and deploy it on our cluster.

Note: as a reminder, this bundle references an application that allows creating and retrieving simple message objects. It uses a web front-end, a Node.js API, a Redis session store, and a MongoDB data store. I strongly encourage you to take a quick look at that article to better understand what follows.

The thing is, we have our demoswarm.dab locally (on the development machine); how can we deploy it on the swarm without scp-ing it onto one of the swarm’s machines?

If we had used docker-machine to create our Docker hosts, it would be easy to issue the following command to make our local Docker client talk to the Docker daemon running on our swarm.

eval $(docker-machine env manager)

But docker-machine is not in the picture here, so let’s use an ssh tunnel to redirect local commands to the Docker host running on our swarm (thanks, Arun Gupta, for this tip).

In a first terminal:

ssh -i .ssh/devops-key-pair-eu-ireland.pem -NL localhost:2375:/var/run/docker.sock docker@<manager-public-IP>

What does that mean? Basically, it enables local port forwarding: all commands sent locally to localhost on port 2375 will be forwarded to the socket /var/run/docker.sock on the remote host (our swarm).
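To make the flags explicit, here is a small sketch that only assembles the command string so each part can be annotated; the key file and manager address passed in are hypothetical placeholders:

```shell
# Build the tunnel command (a documentation helper, not part of the setup).
make_tunnel_cmd() {
  key="$1"     # path to the key pair selected when creating the stack
  manager="$2" # docker@<public IP of a manager node>
  # -N: do not execute a remote command, only forward ports
  # -L: bind local port 2375 and forward it to the remote Docker socket
  echo "ssh -i $key -NL localhost:2375:/var/run/docker.sock $manager"
}

make_tunnel_cmd mykey.pem docker@203.0.113.10
# -> ssh -i mykey.pem -NL localhost:2375:/var/run/docker.sock docker@203.0.113.10
```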

In a second terminal, set the DOCKER_HOST environment variable:

export DOCKER_HOST=localhost:2375

Setting DOCKER_HOST to localhost:2375 tells the local Docker client to target this host (which is more a logical host than a real one) instead of the default one.

We can now send commands to the Docker daemon running on our AWS swarm.

docker deploy demoswarm

After a couple of minutes, our services are up and running.

$ docker service ls
ID            NAME           REPLICAS  IMAGE                                                                                  COMMAND
34fcpt5s2nqu  demoswarm_api  1/1       lucj/demo-api@sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973
cx5cbclw9aia  demoswarm_db   1/1       mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967
eakv4cy3jwdc  demoswarm_kv   1/1       redis@sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c
ezxf1eul37bq  demoswarm_www  1/1       lucj/demo-www@sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6

Let’s update the published ports of the api and www services (thanks to Michael Friss, @mfriss, for the command) so that the outside world can access the web part on port 80 and the api part on port 8000.

$ docker service update -p "8000:80" demoswarm_api

$ docker service update -p "80:80" demoswarm_www
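To double-check the result, we can inspect a service and verify its published ports (run through the same tunnel as the other commands):

```shell
# Show the service configuration, including the published ports
docker service inspect --pretty demoswarm_www
```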

In doing so, we can see that the ELB is updated automatically.


At the same time, we can see there is another ELB configured for us on port 22 for ssh access.

Test the application

Let’s start by creating a message through the api and making sure it can be retrieved. Note that the endpoint used for the test is the one we saw earlier.

$ curl -XPOST
{
  "msg": "PlayingWithDocker4AWS",
  "createdAt": "2016-07-10T14:50:22.578Z",
  "updatedAt": "2016-07-10T14:50:22.578Z",
  "id": "578260ae35c9940e00ff764e"
}

$ curl
[
  {
    "msg": "PlayingWithDocker4AWS",
    "createdAt": "2016-07-10T14:50:22.578Z",
    "updatedAt": "2016-07-10T14:50:22.578Z",
    "id": "578260ae35c9940e00ff764e"
  }
]
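For reference, the full shape of these calls would look something like the following; the endpoint is a placeholder for the ELB address from the Outputs tab, and the /message path is an assumption based on the application described in the earlier article:

```shell
# Placeholder endpoint: replace with the ELB DNS name of your stack.
API="http://<elb-dns-name>:8000"

# Create a message (the /message path is assumed for illustration)
curl -XPOST "$API/message" -d "msg=PlayingWithDocker4AWS"

# Retrieve the stored messages
curl "$API/message"
```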

This part seems ok. Let’s move to the web interface now.


The message created using the api is correctly shown on the web interface. We can create another one directly from the interface and see that it is retrieved correctly.


Delete the stack

Let’s now delete this stack.




In this article we saw how Docker for AWS makes the creation of a swarm, using swarm mode, very easy. We have only covered the basics here, but it seems a really promising product. We’ll go into more detail in future articles.

I would love to hear your feedback / comments on this topic.
