Deploy a multi-service application with swarm mode

Purpose

In the previous article we saw how to deploy a service using the services API available in swarm mode. The setup of the initial swarm was done here.

The service deployed was a very basic Node.js application that returns the name of a random city of the world and the IP of the container handling the request. The only purpose of this service was to test the services API.
We will now deploy a whole application made up of several services, as represented below.

demo application

Details of the application

This application is made up of 4 services:

  • a www front-end to create and list messages
  • a redis store to handle web sessions
  • an api back-end to handle requests received from www or mobile clients
  • a mongodb data store

Creating an overlay network

Creating a user-defined overlay network and running all the services in this network enables them to communicate with each other using their names.


$ docker-machine ssh manager1 docker network create -d overlay appnet
94v022kyqa18mj52oaogxnhxn
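
If we want to double-check the network, the usual network commands can be run through the same ssh wrapper (just a quick sanity check, the output is not reproduced here):


$ docker-machine ssh manager1 docker network ls
$ docker-machine ssh manager1 docker network inspect appnet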

Creating redis kv store for session storage

As said above, this service will run on the newly created appnet network. It uses the lightweight redis:3.0.7-alpine public image.


$ docker-machine ssh manager1 docker service create --name redis --network appnet redis:3.0.7-alpine
f4h2u1kqhfcl6cq7hn71sipwe

Let’s verify the only instance of the redis service is up and running.

Note: in a production configuration we would consider a master/slave redis setup.


$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS  IMAGE               COMMAND
f4h2u1kqhfcl  redis  1/1       redis:3.0.7-alpine

$ docker-machine ssh manager1 docker service tasks redis
ID                         NAME     SERVICE  IMAGE               LAST STATE          DESIRED STATE  NODE
e979wno6b8fnu18jplk8b59e1  redis.1  redis    redis:3.0.7-alpine  Running 59 seconds  Running        manager2

Our redis service is up and deployed on manager2.
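
As a side note, swarm chose the target node on its own here. If we wanted to influence the scheduling (to keep redis on a given machine for instance), the service could have been created with a placement constraint; a quick sketch of what that could look like:


$ docker-machine ssh manager1 docker service create --name redis --network appnet --constraint node.hostname==manager2 redis:3.0.7-alpine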

Creating mongodb data store

This service will run on the appnet network. It uses the mongo:3.2 public image.


$ docker-machine ssh manager1 docker service create --name mongo --network appnet mongo:3.2
cdbzit448i8g7y4g4he7k2xkn

Is our service working fine?


$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS  IMAGE               COMMAND
cdbzit448i8g  mongo  1/1       mongo:3.2
f4h2u1kqhfcl  redis  1/1       redis:3.0.7-alpine

$ docker-machine ssh manager1 docker service tasks mongo
ID                         NAME     SERVICE  IMAGE      LAST STATE         DESIRED STATE  NODE
06ifu9dqc9zgwdz0r9t4zu8d2  mongo.1  mongo    mongo:3.2  Running 3 minutes  Running        manager1

Yes, and it’s deployed on manager1.
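
One thing worth noting: in this demo the mongo data only lives inside the container’s filesystem. In a real deployment we would probably attach a volume so the data survives the container; a sketch of what the creation command could look like (the volume name mongo-data is arbitrary):


$ docker-machine ssh manager1 docker service create --name mongo --network appnet --mount type=volume,source=mongo-data,target=/data/db mongo:3.2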

Creating the api service

The api service is based on the public image lucj/demo-api:1.0. It’s a Node.js / Sails.js application that handles POST and GET requests on a message model (code is available here).

Let’s create this service and we’ll show how it works later on.


$ docker-machine ssh manager1 docker service create --name api --replicas 5 -p 8000:80/tcp --network appnet --env "MONGO_URL=mongodb://mongo/demo" lucj/demo-api:1.0
d5bbxdirag8zjo072ai9iemef

Several things to note here:

  • 5 replicas (tasks, containers) will be deployed
  • port 8000 will be exposed to the outside of the cluster
  • MONGO_URL environment variable is provided (we will double-check these settings just below)
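
As announced, docker service inspect can print these settings back in a readable form once the service is created (output not reproduced here):


$ docker-machine ssh manager1 docker service inspect --pretty api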

Let’s see how our list of services evolves.


$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS  IMAGE               COMMAND
cdbzit448i8g  mongo  1/1       mongo:3.2
d5bbxdirag8z  api    5/5       lucj/demo-api:1.0
f4h2u1kqhfcl  redis  1/1       redis:3.0.7-alpine

$ docker-machine ssh manager1 docker service tasks api
ID                         NAME   SERVICE  IMAGE              LAST STATE         DESIRED STATE  NODE
cv3nac5ro9k3zowupr5toz5zj  api.1  api      lucj/demo-api:1.0  Running 3 minutes  Running        worker1
9g1dn2xwof1cspba198stcn2h  api.2  api      lucj/demo-api:1.0  Running 3 minutes  Running        worker2
0rl3gd09x1fqr4lck0xl5hhnh  api.3  api      lucj/demo-api:1.0  Running 3 minutes  Running        worker2
6ph3t3kbshfvoeq9y5cqkysml  api.4  api      lucj/demo-api:1.0  Running 3 minutes  Running        worker1
0qbre935zia8l15do0m0hzlmv  api.5  api      lucj/demo-api:1.0  Running 3 minutes  Running        manager1

5 tasks of the api service are running. Can we access it? Let’s try.


$ curl -XPOST http://192.168.99.100:8000/message?msg=hello
{
  "msg": "hello",
  "createdAt": "2016-07-07T20:44:55.535Z",
  "updatedAt": "2016-07-07T20:44:55.535Z",
  "id": "577ebf47e8458e0e001c557c"
}

$ curl http://192.168.99.100:8000/message
[
  {
    "msg": "hello",
    "createdAt": "2016-07-07T20:44:55.535Z",
    "updatedAt": "2016-07-07T20:44:55.535Z",
    "id": "577ebf47e8458e0e001c557c"
  }
]
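
Note that we did not have to target a node that actually runs an api task: with swarm mode’s routing mesh, port 8000 is published on every node of the cluster and the request is redirected to one of the running tasks. Just as a sketch, the same request sent to another node would return the same list (the IP below is an assumption about the address docker-machine gave to another node of this setup):


$ curl http://192.168.99.101:8000/message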

We have created a message and retrieved it, so everything seems fine. Out of curiosity, let’s check in mongo that the message is really there.

First, where is the single mongo container running?


$ docker-machine ssh manager1 docker service tasks mongo
ID                         NAME     SERVICE  IMAGE      LAST STATE          DESIRED STATE  NODE
06ifu9dqc9zgwdz0r9t4zu8d2  mongo.1  mongo    mongo:3.2  Running 52 minutes  Running        manager1

Let’s ssh into manager1 and check the mongo container.


$ docker-machine ssh manager1
docker@manager1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
61b437459508        lucj/demo-api:1.0   "npm start"              10 minutes ago      Up 10 minutes       80/tcp              api.5.0qbre935zia8l15do0m0hzlmv
0d1a71a90b91        mongo:3.2           "/entrypoint.sh mongo"   54 minutes ago      Up 54 minutes       27017/tcp           mongo.1.06ifu9dqc9zgwdz0r9t4zu8d2

mongo is running in container 0d1a71a90b91; let’s list the messages stored.


docker@manager1:~$ docker exec -ti 0d1a71a90b91 mongo demo --eval 'db.message.find()'
MongoDB shell version: 3.2.7
connecting to: demo
{ "_id" : ObjectId("577eba82debbaa0e00aa0bd9"), "msg" : "hello", "createdAt" : ISODate("2016-07-07T20:44:55.535Z"), "updatedAt" : ISODate("2016-07-07T20:44:55.535Z") }
docker@manager1:~$

Great, the link between api and mongo is working fine.

Creating the www service

The www service is based on the public image lucj/demo-www:1.0. It’s a Node.js / Sails.js application that displays a single page to create and list messages (code is available here).

Let’s create this service and we’ll show how it works later on.


$ docker-machine ssh manager1 docker service create --name www --replicas 5 -p 80:80/tcp --env "API=http://api" --env "KV_STORE=redis" --network appnet lucj/demo-www:1.0
1ax9mz3xs94zcrgfat2abeqh5

Several things to note here:

  • 5 replicas (tasks, containers) will be deployed
  • port 80 will be exposed to the outside of the cluster
  • API and KV_STORE environment variables are provided (more on these just below)
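
The API=http://api value deserves a word of explanation: since www and api are both attached to the appnet network, swarm’s embedded DNS resolves the name api to a virtual IP that load-balances across the api tasks, so the front-end never needs to know where those tasks actually run. Out of curiosity, this can be verified from inside any container attached to appnet, along those lines (a sketch: it assumes the image ships a shell and getent, and <container_id> is one of the containers listed by docker ps on that node):


docker@manager1:~$ docker exec -ti <container_id> getent hosts api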

Let’s inspect our list of services now.


$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS  IMAGE               COMMAND
1ax9mz3xs94z  www    5/5       lucj/demo-www:1.0
cdbzit448i8g  mongo  1/1       mongo:3.2
d5bbxdirag8z  api    5/5       lucj/demo-api:1.0
f4h2u1kqhfcl  redis  1/1       redis:3.0.7-alpine

$ docker-machine ssh manager1 docker service tasks www
ID                         NAME   SERVICE  IMAGE              LAST STATE              DESIRED STATE  NODE
elpenhksevn5pj1yyscu14xhb  www.1  www      lucj/demo-www:1.0  Running About a minute  Running        worker2
edstur5xr620rw59za7fszygw  www.2  www      lucj/demo-www:1.0  Running About a minute  Running        manager1
ef0ewv2x774k6embim2betsdn  www.3  www      lucj/demo-www:1.0  Running About a minute  Running        worker1
63hdmq8j5c7badqgxbrfni198  www.4  www      lucj/demo-www:1.0  Running About a minute  Running        manager2
3hkksttqrgj3jlsbjm3wnhgze  www.5  www      lucj/demo-www:1.0  Running About a minute  Running        manager2
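
Before moving to the browser, a quick curl against one of the nodes confirms that the front-end answers on port 80 (only the command is shown here; it simply returns the HTTP headers of the single page served by www):


$ curl -I http://192.168.99.100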

Everything seems to be working as expected. Let’s test the web interface now.

multi-service-swarm-web1

Let’s play a little bit with the interface and see that new messages get created.

multi-service-swarm-web2

Check the KV store

Just out of curiosity, we’ll make sure a session has been added to our redis kv store.

Note: as our application is made with Sails.js, we have configured the session to be stored in redis through the KV_STORE environment variable (config/sessions.js file).

Where is the only task of our redis service running?


$ docker-machine ssh manager1 docker service tasks redis
ID                         NAME     SERVICE  IMAGE               LAST STATE             DESIRED STATE  NODE
e979wno6b8fnu18jplk8b59e1  redis.1  redis    redis:3.0.7-alpine  Running About an hour  Running        manager2

SSHing onto manager2, let’s check what is inside redis.


docker@manager2:~$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES
bef327c64b4c        lucj/demo-www:1.0    "npm start"              10 minutes ago      Up 10 minutes       80/tcp              www.4.6d3ummjdirg30aau9ke3hfjyf
04f47393db62        redis:3.0.7-alpine   "docker-entrypoint.sh"   About an hour ago   Up About an hour    6379/tcp            redis.1.e979wno6b8fnu18jplk8b59e1

docker@manager2:~$ docker exec -ti 04f47393db62 sh
/data # redis-cli
127.0.0.1:6379> keys *
1) "sess:VWScwQTUjnOl2SzYgHiX0qKD2UCVfpCt"

We can see that the connection between our front-end and the redis kv store is working correctly.

Summary

In this quite long article we have deployed a multi-service application with swarm mode. Obviously we will not issue all these commands manually each time, but IMHO it’s a great exercise to get used to them and to understand the underlying mechanisms. We’ll see how to do the same thing using a Distributed Application Bundle in the next article.

Feel free to provide some comments / feedback.

7 thoughts on "Deploy a multi-service application with swarm mode"

    • Hello Ashish,
      when creating the mongo and redis services, the --replicas option is not provided, so only one replica of each service is instantiated. "--replicas 5" is used for the www and api services though.
      Does that make sense?
      Thanks,
      Luc

  1. Hi Luc, thanks for this excellent tutorial series.

    I can see that when you’re in a task, you can address other tasks on the same network by name, e.g. (from your example) the "api" in "http://api" references the api service rather than any specific node.

    But if you’re outside the network, you have to use a node IP address, e.g. http://192.168.88.100 rather than http://api. I appreciate that the service doesn’t have to be running on that particular node, but if the node you’ve used is taken out of the swarm for some reason, then all your requests will fail to connect with a timeout. Naturally.

    I think I could use an externally hosted load balancer (e.g. nginx in reverse proxy mode) to reduce the risk of this failure, but do you know if Docker Swarm provides a more robust way of addressing services from outside the swarm?

    Thanks again –

    Francis.

    • Hi Francis, thanks for your comment.
      I think, to prevent the api from being unavailable in case a node goes down, I would first consider having several instances of the api running on the swarm, so losing one instance will have no impact.
      If the api also needs to be directly available to the outside world (and not only through the web front-end), I would tell swarm to expose its port on each node. Does that make sense?
      Thanks a lot.
      Luc

      • Hi Luc,

        Thanks for the reply.

        Just to check that I understand correctly:

        If I have a service (e.g. web) which is callable from outside the Swarm, then it can be called via a node address and its published port.

        As long as that node is still an active part of the Swarm, and there is at least one task running that service on the Swarm, I should be able to connect to the service from outside the Swarm.

        But if for some reason I have to "stop" that node, then I can no longer reach the service from outside the Swarm, however many instances may be running inside the Swarm.

        To prevent that Single Point of Failure, I could configure a load balancer to call different Swarm nodes if the default node stops.

        I just want to make sure I’m not missing something that is already in Swarm, and that would solve this problem.

        Thanks again!

        Francis.

        • Hello Francis,
          first of all, sorry I did not see your comment before 🙁
          You’re right, you could set up a load balancer in front of all your swarm nodes so that if a node goes down, a request towards a service will be sent to an active node and then rerouted internally (via the routing mesh) to a node that actually runs a task for this service.
          This is the way Docker for AWS implements it (hope this can help: https://medium.com/lucjuggery/docker-for-aws-zoom-on-elb-8236ce0590e#).
          Once again, sorry for the late response.
          Luc
