Create a swarm cluster with Docker 1.12 swarm mode

Swarm made easy

One of the biggest changes presented during DockerCon 2016 (gosh… I wish I had been there) is the swarm mode of Engine 1.12. What does that mean? That you can create a swarm cluster out of the box if you have the Docker daemon 1.12 running.

A command as simple as:

$ docker swarm init

is enough to create a Swarm. A Swarm with a single management node but still a Swarm.


$ docker node ls
ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
7sytb3zk0yswdfky6mbh7nzk2 *  moby      Accepted    Ready   Active        Leader

Let’s go multi nodes

A Swarm with a single node is of limited use; let's create one with 2 manager nodes and 2 worker nodes.
We will first start by creating 4 Docker hosts. Docker Machine is a great tool for this purpose, so let's use it.


$ docker-machine create --driver virtualbox manager1
$ docker-machine create --driver virtualbox manager2
$ docker-machine create --driver virtualbox worker1
$ docker-machine create --driver virtualbox worker2

If you do not have any other hosts created with Machine, you should have an output pretty similar to the following one when listing all the machines.


$ docker-machine ls
NAME              ACTIVE   DRIVER         STATE     URL                         SWARM   DOCKER        ERRORS
manager1          -        virtualbox     Running   tcp://192.168.99.100:2376           v1.12.0-rc3
manager2          -        virtualbox     Running   tcp://192.168.99.101:2376           v1.12.0-rc3
worker1           -        virtualbox     Running   tcp://192.168.99.102:2376           v1.12.0-rc3
worker2           -        virtualbox     Running   tcp://192.168.99.103:2376           v1.12.0-rc3

Init the swarm

We saw earlier the simplest possible command to create a Swarm with 1.12 (reminder: docker swarm init), but here we'll use several additional options to enable the hosts of the cluster to communicate with each other and to join the cluster without approval.


$ MANAGER1_IP=$(docker-machine ip manager1)
$ docker-machine ssh manager1 docker swarm init --auto-accept manager --auto-accept worker --listen-addr $MANAGER1_IP:2377
No --secret provided. Generated random secret:
	aedac6jbd08g804jkjgyf6mtu

Swarm initialized: current node (dj0s04kxqo6bacuvexend0s8v) is now a manager.

To add a worker to this swarm, run the following command:
	docker swarm join --secret aedac6jbd08g804jkjgyf6mtu \
	--ca-hash sha256:2da129878f18fea3216348b54eddb81cd1642bc3e5fbc7ab35259bb0f852a975 \
	192.168.99.100:2377

note: --listen-addr is the address at which the node will be reachable by the other nodes of the Swarm

note: Docker 1.12 rc4 introduced the generation of a secret and a CA hash when the swarm is initialized. These parameters will need to be provided when additional nodes (manager or worker) join the cluster; we'll see that shortly.

Let's first save the secret and CA hash in environment variables; it will make them easier to use later on.

 
$ SECRET="aedac6jbd08g804jkjgyf6mtu"
$ CA_HASH="sha256:2da129878f18fea3216348b54eddb81cd1642bc3e5fbc7ab35259bb0f852a975"
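Rather than copy-pasting, both values can also be parsed out of the captured init output. A minimal sketch, assuming the rc4 output format shown above (the sample output below is hard-coded for illustration; in practice you would capture it with `INIT_OUTPUT=$(docker-machine ssh manager1 docker swarm init ...)`):

```shell
#!/bin/sh
# Sample init output, hard-coded here for illustration; it mirrors the
# rc4 format shown above.
INIT_OUTPUT='No --secret provided. Generated random secret:
	aedac6jbd08g804jkjgyf6mtu

To add a worker to this swarm, run the following command:
	docker swarm join --secret aedac6jbd08g804jkjgyf6mtu \
	--ca-hash sha256:2da129878f18fea3216348b54eddb81cd1642bc3e5fbc7ab35259bb0f852a975 \
	192.168.99.100:2377'

# The secret is printed alone on the line following "Generated random secret:"
SECRET=$(printf '%s\n' "$INIT_OUTPUT" | grep -A1 'Generated random secret:' | sed -n 2p | tr -d '[:space:]')

# The CA hash is the token following --ca-hash in the sample join command
CA_HASH=$(printf '%s\n' "$INIT_OUTPUT" | awk '/--ca-hash/ {print $2}')

echo "SECRET=$SECRET"
echo "CA_HASH=$CA_HASH"
```

This parsing is an assumption based on the exact rc4 output format; if a later release changes the wording, the grep/awk patterns would need adjusting.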

Add the second manager

Several options need to be provided to the docker swarm command:

* join: to indicate a new node will be added to the Swarm
* --manager: to indicate the nature of the node (manager vs worker)
* --secret
* --ca-hash
* --listen-addr: the address at which the newly added node will be reachable by the other nodes of the Swarm
* the last parameter is the address of the first manager (the node this command is sent to)


$ MANAGER2_IP=$(docker-machine ip manager2)
$ docker-machine ssh manager2 docker swarm join --manager --secret $SECRET --ca-hash $CA_HASH --listen-addr $MANAGER2_IP:2377 $MANAGER1_IP:2377
This node joined a Swarm as a manager.

Note: as the --auto-accept manager option was provided during the Swarm initialization, the second manager is automatically accepted. Without this option, it needs to be accepted by the first manager.
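For reference, the manual flow could look something like the sketch below. It assumes the docker node accept subcommand available in the 1.12 release candidates, and reuses the manager2 node ID from the listing further down as a stand-in:

```shell
# List the nodes from a manager; a node waiting for approval shows up
# with a Pending membership
docker-machine ssh manager1 docker node ls

# Accept the pending node by its ID (ID shown here is just an example)
docker-machine ssh manager1 docker node accept 109a5ufy8e3ey17unqa16wbj7
```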

Add the workers

Worker nodes are added in the cluster in pretty much the same way:


$ WORKER1_IP=$(docker-machine ip worker1)
$ docker-machine ssh worker1 docker swarm join --secret $SECRET --ca-hash $CA_HASH --listen-addr $WORKER1_IP:2377 $MANAGER1_IP:2377
This node joined a Swarm as a worker. 
$ WORKER2_IP=$(docker-machine ip worker2)
$ docker-machine ssh worker2 docker swarm join --secret $SECRET --ca-hash $CA_HASH --listen-addr $WORKER2_IP:2377 $MANAGER1_IP:2377
This node joined a Swarm as a worker. 

Note: as the --auto-accept worker option was provided during the Swarm initialization, the workers are automatically accepted. Without this option, they need to be accepted by a manager.

What does our Swarm look like?

Let's check:


$ docker-machine ssh manager1 docker node ls
ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
109a5ufy8e3ey17unqa16wbj7    manager2  Accepted    Ready   Active        Reachable
4chbn8uphm1tidr93s64zknbq *  manager1  Accepted    Ready   Active        Leader
8nw7g1q0ehwq1jrvid1axtg5n    worker2   Accepted    Ready   Active
8rrdjg4uf9jcj0ma2uy8rkw5v    worker1   Accepted    Ready   Active

Every node now belongs to the Swarm and is in the Ready status. Manager1 is the leader. Everything looks good.

What makes it so special?

This Swarm is TLS secured, with automated certificate rotation out of the box.
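The rotation period can even be tuned; a sketch assuming the --cert-expiry flag of docker swarm update in 1.12 (the 48h value is just an example):

```shell
# Shorten the certificate rotation period to 48 hours (run from a manager)
docker-machine ssh manager1 docker swarm update --cert-expiry 48h0m0s
```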

[image: swarm 1.12]

Also, there is no need to use a KV Store anymore (Consul, Zookeeper, etcd), everything is handled for us.

[image: swarm 1.12 kv]

Let’s use it now

What is the purpose of a Swarm without anything running on it…? Of course, we'll deploy some services on our Swarm, but that is for a next article.
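As a small teaser, deploying a service on a running Swarm is a one-liner; a sketch assuming the docker service subcommands of 1.12 (the service name web and the nginx image are arbitrary choices for illustration):

```shell
# Create a replicated service on the swarm (run from a manager)
docker-machine ssh manager1 docker service create --name web --replicas 2 --publish 80:80 nginx

# Check that the replicas are running
docker-machine ssh manager1 docker service ls
```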

Copy and paste

Below is a small shell script that creates the Docker hosts and deploys a Swarm. Of course, feel free to change the number of manager / worker nodes.

Note: the setup with 2 manager nodes and 2 worker nodes was used for the example. For a production cluster, we would probably go with 3 manager nodes and 5 worker nodes (thanks Stefan for pointing this out).


# Define the number of managers/workers
MANAGER=3
WORKER=5

# Create the Docker hosts
for i in $(seq 1 $MANAGER); do docker-machine create --driver virtualbox manager$i; done
for i in $(seq 1 $WORKER); do docker-machine create --driver virtualbox worker$i; done

# Init the swarm and capture the output (rc4 prints the generated secret and ca hash)
INIT_OUTPUT=$(docker-machine ssh manager1 docker swarm init --auto-accept manager --auto-accept worker --listen-addr $(docker-machine ip manager1):2377)
printf '%s\n' "$INIT_OUTPUT"

# Extract the secret (printed alone on the line after "Generated random secret:")
# and the ca hash (the token following --ca-hash) from the init output
SECRET=$(printf '%s\n' "$INIT_OUTPUT" | grep -A1 'Generated random secret:' | sed -n 2p | tr -d '[:space:]')
CA_HASH=$(printf '%s\n' "$INIT_OUTPUT" | awk '/--ca-hash/ {print $2}')

# Add additional manager(s)
for i in $(seq 2 $MANAGER); do docker-machine ssh manager$i docker swarm join --manager --secret $SECRET --ca-hash $CA_HASH --listen-addr $(docker-machine ip manager$i):2377 $(docker-machine ip manager1):2377; done

# Add workers
for i in $(seq 1 $WORKER); do docker-machine ssh worker$i docker swarm join --secret $SECRET --ca-hash $CA_HASH --listen-addr $(docker-machine ip worker$i):2377 $(docker-machine ip manager1):2377; done

22 thoughts on “Create a swarm cluster with Docker 1.12 swarm mode”

  1. Very good article. Can you specify that manager nodes are not to accept tasks and services? In my deployment, I have a separate utility cluster of three servers that provides all sorts of support services (discovery, K/V, distributed file system, etc.) to my worker cloud. I’d like them to run the swarm managers too, but not accept worker tasks.

  2. Can docker swarm be used to configure high availability for a worker image/container?
    Scenario:
    Two nodes, Node1 and Node2. Node1 is the manager and Node2 is the worker.
    On Node1/Manager, the following cmd is run:
    ==>docker service create --name ubuntuHA --network dockernet --replicas 2 ubuntu:latest

    This cmd creates a replica of ubuntu:latest on both node1 and node2.

    My questions:
    1) How can I start that service in /bin/bash for ubuntuHA on Node1? If the containers are started and ‘touch test.txt’ is run and creates a test.txt file on Node1, is the test.txt file replicated to all the containers on Node2?
    2) If there is any error on node1/node2, how can I get the latest good file?

    Actually, I was trying (if possible) to use docker swarm for redundancy purposes, so that the same image/container can be hosted in a different docker host environment.

    • If 2 containers are running for your ubuntuHA service, by default, there is nothing that enables file sharing between them. You would need to mount a volume to enable file sharing, for instance a distributed filesystem that would be available to each node.

      • Thank you for your insight. So with docker tools, how can I create two containers on different docker hosts in sync (file sync, mysql database sync) with each other, e.g. like Veritas software does with a master/slave server architecture?

        • Be careful: files and data should not be created within a container.
          A container should be seen as stateless; data needs to be kept externally. Of course you could create containers for a MySQL engine (I’m not very familiar with the HA architecture of MySQL though), but each one should rely on a volume to read / write the data.
          Regarding files, I think the best way is to store them on a distributed filesystem that is available to each Docker host and mounted as a volume in the containers. If a container fails and a new one is run, the new one will have access to the files through the volume.

  3. Hi Luc,

    When I executed this:
    docker-machine ssh manager2 docker swarm join --manager --listen-addr $MANAGER2_IP:2377 $MANAGER1_IP:2377

    This error appeared:
    Error response from daemon: rpc error: code = 3 desc = A valid secret token is necessary to join this cluster: invalid exit status 1

    Can you help me?

    Thank you

    • Hi Ricardo,
      Thanks for your comment.
      I wrote this article using rc3; it seems rc4 now generates a token that needs to be provided when joining nodes.
      Laxman gives an example of the command to use with rc4 (see his comment below). I’ll update the article to be compliant with this new version 🙂
      Thanks.

  4. Hi Luc,
    I saw today that with the v1.12.0-rc4 release, the docker swarm init command automatically generates a secret token and CA hash that are required when other nodes join the cluster. It seems these are required parameters now, so you may need to update the script accordingly.

    docker-machine ssh manager1 docker swarm init --auto-accept manager --auto-accept worker --listen-addr $MANAGER1_IP:2377
    No --secret provided. Generated random secret:
    ekfi9qnfow6l81xdg5n3uq3sz

    Swarm initialized: current node (b4sq2l5ohxt5ivqck60xl8sib) is now a manager.

    To add a worker to this swarm, run the following command:
    docker swarm join --secret ekfi9qnfow6l81xdg5n3uq3sz \
    --ca-hash sha256:f8e9a464fe17753fb2d6b90f2a70f8687de3984faaf165c2e049d021ad2c6008 \
    192.168.99.100:2377

  5. I believe the number of manager nodes should be 3, 5, 7… (odd, likely single digit) because consensus is achieved with Raft, which requires a majority of the nodes of the cluster; single digit because of scalability problems with more nodes using Raft. Probably a good idea to update the script to reflect this…

    • Totally agree with you, I’d definitely go for 3-5 for a production swarm.
      The 2-2 choice is just for the example.
      Thanks for the feedback.

      ps: Update done! 🙂
