My first try with DAB (aka Distributed Application Bundle)

Purpose

Docker 1.12 introduced the notion of Distributed Application Bundle. As a very happy user of Docker Compose, I was wondering how Compose and DAB were related. Will DAB replace Compose? It does not seem so. In fact, a DAB file can be created from a Compose file and then deployed on a cluster.
Let's see how this works.
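
At a high level, the flow explored below boils down to two commands, both detailed later in this article:


$ docker-compose bundle      # turn the Compose file into a bundle (.dab file)
$ docker deploy demoswarm    # deploy the bundle as a stack of services on a swarm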

The application

We will use the following setup for our application:

[Figure: architecture of the Compose application]

Basically, the application displays a web page where we can create messages and see all the messages created so far. Behind the scenes, the web part sends creation and list requests to the underlying api, which inserts or extracts data from the MongoDB data store. The web part also uses a Redis key/value store to persist the sessions.

Our compose file

The application is defined in the following docker-compose.yml file.


version: '2'
services:

  # Data store
  db:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
    expose:
      - 27017

  # Session store
  kv:
    image: redis:3.0.7-alpine
    volumes:
      - redis-data:/data
    expose:
      - 6379

  # back-end
  api:
    image: lucj/demo-api:1.0
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db/demo
    ports:
      - "8000:80"

  # front-end
  www:
    image: lucj/demo-www:1.0
    depends_on:
      - kv
      - api
    environment:
      - API=http://api
      - REDIS_HOST=redis
    ports:
      - "80:80"

volumes:
  mongo-data:
  redis-data:

Run the application


$ docker-compose up
Creating network "demoswarm_default" with the default driver
Creating volume "demoswarm_redis-data" with default driver
Creating volume "demoswarm_mongo-data" with default driver
Pulling db (mongo:3.2)...
3.2: Pulling from library/mongo
8ceedfe606fc: Pull complete
de56a622d4ac: Pull complete
6f6965220a2d: Pull complete
290580b9cb91: Pull complete
74518025c1d4: Pull complete
3be42c3d566b: Pull complete
1f3f56933a51: Pull complete
9a2e0c784afa: Pull complete
a600166aa315: Pull complete
Digest: sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967
Status: Downloaded newer image for mongo:3.2
Pulling api (lucj/demo-api:1.0)...
1.0: Pulling from lucj/demo-api
d0ca440e8637: Pull complete
a99b075f7f3d: Pull complete
8dba5681f14c: Pull complete
f839806d2789: Pull complete
14a6ee38597d: Pull complete
f4e4306d1e46: Pull complete
Digest: sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973
Status: Downloaded newer image for lucj/demo-api:1.0
Pulling kv (redis:3.0.7-alpine)...
3.0.7-alpine: Pulling from library/redis
e110a4a17941: Pull complete
bccbb6980b59: Pull complete
0e8c804c1644: Pull complete
5c31210b0294: Pull complete
3df9e211e6a7: Pull complete
bed268e92669: Pull complete
5192b2b8af32: Pull complete
Digest: sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c
Status: Downloaded newer image for redis:3.0.7-alpine
Pulling www (lucj/demo-www:1.0)...
1.0: Pulling from lucj/demo-www
d0ca440e8637: Already exists
a99b075f7f3d: Already exists
fcfc91f79ed7: Pull complete
995bd8994d0a: Pull complete
eff612dbefa6: Pull complete
f9aa869c7e1c: Pull complete
Digest: sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6
Status: Downloaded newer image for lucj/demo-www:1.0
Creating demoswarm_db_1
Creating demoswarm_kv_1
Creating demoswarm_api_1
Creating demoswarm_www_1
Attaching to demoswarm_kv_1, demoswarm_db_1, demoswarm_api_1, demoswarm_www_1
kv_1   | 1:C 08 Jul 09:08:43.959 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
kv_1   |                 _._
kv_1   |            _.-``__ ''-._
kv_1   |       _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit
kv_1   |   .-`` .-```.  ```\/    _.,_ ''-._
kv_1   |  (    '      ,       .-`  | `,    )     Running in standalone mode
kv_1   |  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
kv_1   |  |    `-._   `._    /     _.-'    |     PID: 1
kv_1   |   `-._    `-._  `-./  _.-'    _.-'
kv_1   |  |`-._`-._    `-.__.-'    _.-'_.-'|
kv_1   |  |    `-._`-._        _.-'_.-'    |           http://redis.io
kv_1   |   `-._    `-._`-.__.-'_.-'    _.-'
kv_1   |  |`-._`-._    `-.__.-'    _.-'_.-'|
kv_1   |  |    `-._`-._        _.-'_.-'    |
kv_1   |   `-._    `-._`-.__.-'_.-'    _.-'
kv_1   |       `-._    `-.__.-'    _.-'
kv_1   |           `-._        _.-'
kv_1   |               `-.__.-'
kv_1   |
kv_1   | 1:M 08 Jul 09:08:43.961 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
kv_1   | 1:M 08 Jul 09:08:43.961 # Server started, Redis version 3.0.7
kv_1   | 1:M 08 Jul 09:08:43.961 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
kv_1   | 1:M 08 Jul 09:08:43.961 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
kv_1   | 1:M 08 Jul 09:08:43.961 * The server is now ready to accept connections on port 6379
api_1  |
api_1  | > messageApp@0.0.0 start /app
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=91d8e22da7db
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] db version v3.2.7
api_1  | > node app.js
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] git version: 4249c1d2b5999ebbf1fdf3bc0e0e3b3ff5c0aaf2
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] allocator: tcmalloc
api_1  |
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] modules: none
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] build environment:
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten]     distmod: debian71
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten]     distarch: x86_64
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten]     target_arch: x86_64
db_1   | 2016-07-08T09:08:43.977+0000 I CONTROL  [initandlisten] options: {}
db_1   | 2016-07-08T09:08:43.981+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten]
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten]
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
db_1   | 2016-07-08T09:08:44.028+0000 I CONTROL  [initandlisten]
db_1   | 2016-07-08T09:08:44.029+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1   | 2016-07-08T09:08:44.029+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
db_1   | 2016-07-08T09:08:44.036+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
www_1  |
www_1  | > www@0.0.0 start /app
www_1  | > node app.js
www_1  |
db_1   | 2016-07-08T09:08:46.508+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:51768 #1 (1 connection now open)
db_1   | 2016-07-08T09:08:46.532+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:51770 #2 (2 connections now open)
db_1   | 2016-07-08T09:08:46.536+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:51772 #3 (3 connections now open)
db_1   | 2016-07-08T09:08:46.538+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:51774 #4 (4 connections now open)
db_1   | 2016-07-08T09:08:46.539+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:51776 #5 (5 connections now open)
db_1   | 2016-07-08T09:08:46.540+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:51778 #6 (6 connections now open)
db_1   | 2016-07-08T09:08:46.551+0000 I NETWORK  [conn1] end connection 172.18.0.4:51768 (5 connections now open)
api_1  | info:
api_1  | info:                .-..-.
api_1  | info:
api_1  | info:    Sails              <|    .-..-.
api_1  | info:    v0.12.3             |\
api_1  | info:                       /|.\
api_1  | info:                      / || \
api_1  | info:                    ,'  |'  \
api_1  | info:                 .-'.-==|/_--'
api_1  | info:                 `--'-------'
api_1  | info:    __---___--___---___--___---___--___
api_1  | info:  ____---___--___---___--___---___--___-__
api_1  | info:
api_1  | info: Server lifted in `/app`
api_1  | info: To see your app, visit http://localhost
api_1  | info: To shut down Sails, press  + C at any time.
api_1  |
api_1  | debug: -------------------------------------------------------
api_1  | debug: :: Fri Jul 08 2016 09:08:46 GMT+0000 (UTC)
api_1  |
api_1  | debug: Environment : development
api_1  | debug: Port        : 80
api_1  | debug: -------------------------------------------------------
www_1  | info:
www_1  | info:                .-..-.
www_1  | info:
www_1  | info:    Sails              <|    .-..-.
www_1  | info:    v0.12.3             |\
www_1  | info:                       /|.\
www_1  | info:                      / || \
www_1  | info:                    ,'  |'  \
www_1  | info:                 .-'.-==|/_--'
www_1  | info:                 `--'-------'
www_1  | info:    __---___--___---___--___---___--___
www_1  | info:  ____---___--___---___--___---___--___-__
www_1  | info:
www_1  | info: Server lifted in `/app`
www_1  | info: To see your app, visit http://localhost
www_1  | info: To shut down Sails, press  + C at any time.
www_1  |
www_1  | debug: -------------------------------------------------------
www_1  |
www_1  | debug: :: Fri Jul 08 2016 09:08:47 GMT+0000 (UTC)
www_1  | debug: Environment : development
www_1  | debug: Port        : 80
www_1  | debug: -------------------------------------------------------

I left in all the logs emitted by Compose so we can see exactly what is going on:

  • default network is created
  • specified volumes are created
  • images are pulled
  • services are started
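
If you want to double-check the resources Compose created, a few standard commands help (the project name demoswarm comes from the name of the application folder; output omitted):


$ docker network ls    # shows the demoswarm_default network
$ docker volume ls     # shows the demoswarm_mongo-data and demoswarm_redis-data volumes
$ docker-compose ps    # shows the four running containers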

Both our api and www services are Node.js applications based on the great Sails.js framework.

See our application in action

In docker-compose.yml, two port mappings are specified:

  • api exposes port 8000 to the outside
  • www exposes port 80 to the outside

Let’s test the api first.


$ curl http://localhost:8000/message
[]

$ curl -XPOST http://localhost:8000/message?msg=hello
{
  "msg": "hello",
  "createdAt": "2016-07-08T09:30:42.629Z",
  "updatedAt": "2016-07-08T09:30:42.629Z",
  "id": "577f72c2ce34d610005ac7c6"
}

$ curl http://localhost:8000/message
[
  {
    "msg": "hello",
    "createdAt": "2016-07-08T09:30:42.629Z",
    "updatedAt": "2016-07-08T09:30:42.629Z",
    "id": "577f72c2ce34d610005ac7c6"
  }
]

The first request shows that there is no message in the system yet. We then create a message and request the list once again: happily, the newly created message is there.

Let’s have a look at the web interface then.

[Screenshot: the web interface, showing the message created above]

The message created above is there. Good news!

Note: I'm using localhost in the requests because I'm running Docker for Mac, which exposes the Docker host on localhost.
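
If you are using Docker Machine instead, you would target the VM's IP; a hypothetical equivalent, assuming a machine named default:


$ curl http://$(docker-machine ip default):8000/message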

Create a bundle

Now that we have seen our application working fine, let's see where DAB (Distributed Application Bundle) enters the picture.

A bundle can be created from a Compose file; let's try it.


$ docker-compose bundle
WARNING: Compose needs to pull the image for 'db' in order to create a bundle. This may result in a more recent image being used. It is recommended that you use an image tagged with a specific version to minimize the potential differences.
Pulling db (mongo:3.2)...
3.2: Pulling from library/mongo
Digest: sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967
Status: Image is up to date for mongo:3.2
WARNING: Compose needs to pull the image for 'api' in order to create a bundle. This may result in a more recent image being used. It is recommended that you use an image tagged with a specific version to minimize the potential differences.
Pulling api (lucj/demo-api:1.0)...
1.0: Pulling from lucj/demo-api
Digest: sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973
Status: Image is up to date for lucj/demo-api:1.0
WARNING: Compose needs to pull the image for 'kv' in order to create a bundle. This may result in a more recent image being used. It is recommended that you use an image tagged with a specific version to minimize the potential differences.
Pulling kv (redis:3.0.7-alpine)...
3.0.7-alpine: Pulling from library/redis
Digest: sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c
Status: Image is up to date for redis:3.0.7-alpine
WARNING: Compose needs to pull the image for 'www' in order to create a bundle. This may result in a more recent image being used. It is recommended that you use an image tagged with a specific version to minimize the potential differences.
Pulling www (lucj/demo-www:1.0)...
1.0: Pulling from lucj/demo-www
Digest: sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6
Status: Image is up to date for lucj/demo-www:1.0
WARNING: Unsupported top level key 'volumes' - ignoring
WARNING: Unsupported key 'depends_on' in services.www - ignoring
WARNING: Unsupported key 'depends_on' in services.api - ignoring
WARNING: Unsupported key 'volumes' in services.db - ignoring
WARNING: Unsupported key 'volumes' in services.kv - ignoring
Wrote bundle to demoswarm.dsb

Several warnings, and a new file, demoswarm.dsb, is generated. Let's see what it looks like.


$ cat demoswarm.dsb
{
  "services": {
    "api": {
      "Env": [
        "MONGO_URL=mongodb://db/demo"
      ],
      "Image": "lucj/demo-api@sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 80,
          "Protocol": "tcp"
        }
      ]
    },
    "db": {
      "Image": "mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 27017,
          "Protocol": "tcp"
        }
      ]
    },
    "kv": {
      "Image": "redis@sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 6379,
          "Protocol": "tcp"
        }
      ]
    },
    "www": {
      "Env": [
        "REDIS_HOST=redis",
        "API=http://api"
      ],
      "Image": "lucj/demo-www@sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 80,
          "Protocol": "tcp"
        }
      ]
    }
  },
  "version": "0.1"
}

Although this file is in JSON format, it shares a lot of similarities with the content of our original docker-compose.yml file. One big noticeable thing here: each image is now referenced by its sha256 digest. This ensures the bundle will always deploy the same version of the images, even if new images with the same tag are pushed later on.
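
As a side note, this digest notation can also be used directly in a Compose file if we want the same guarantee without going through a bundle. A minimal sketch, reusing the digest reported for mongo:3.2 above:


  db:
    # pin the image by digest instead of by tag
    image: mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967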

Another important thing should catch our attention: there is no port mapping for the api and www services, even though there are some in our original Docker Compose file.


  ...
  api:
    image: lucj/demo-api:1.0
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db/demo
    ports:
      - "8000:80"

  # front-end
  www:
    image: lucj/demo-www:1.0
    depends_on:
      - kv
      - api
    environment:
      - API=http://api
      - REDIS_HOST=redis
    ports:
      - "80:80"
   ...
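
If a fixed published port is really needed, a possible workaround, once the bundle has been deployed (see the next sections), is to add the mapping on the running service afterwards. A hedged sketch, assuming the service names we will see later on:


$ docker service update --publish-add 8000:80 demoswarm_api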

Note: On Docker's site, we are told that DAB is not tied to Docker / Docker Compose and can even be used to package non-containerized applications. We'll come back to this in a future article.

Setting up a swarm using swarm mode

In a previous article we saw how to create a swarm using Docker 1.12 swarm mode. Make sure you follow that procedure so you end up with a running swarm of 2 manager nodes and 2 worker nodes. We'll then be able to test the deployment of our fresh bundle.
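
As a quick reminder, with the final Docker 1.12 syntax the setup boils down to something like this (assuming docker-machine created VMs named manager1, manager2, worker1 and worker2; token value elided):


# on the first manager
$ eval $(docker-machine env manager1)
$ docker swarm init --advertise-addr $(docker-machine ip manager1)

# on every other node, using the worker or manager token
# printed by 'docker swarm join-token worker|manager' on manager1
$ docker swarm join --token <TOKEN> $(docker-machine ip manager1):2377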

Run this bundle against a swarm

There is currently a GitHub issue regarding the extension of the generated bundle: it should be .dab instead of .dsb. We will simply rename our file while waiting for this issue to be fixed in a future release.


$ mv demoswarm.dsb demoswarm.dab

Make sure the Docker client points towards one of the swarm's managers:


$ eval $(docker-machine env manager1)
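
A quick way to verify we are indeed talking to a manager (only managers can list the nodes):


$ docker node ls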

Let's now test the deployment of our bundle.


$ docker deploy demoswarm
Loading bundle from demoswarm.dab
Creating network demoswarm_default
Creating service demoswarm_kv
Creating service demoswarm_www
Creating service demoswarm_api
Creating service demoswarm_db

We are told the services have been created; let's check with the service API.


$ docker service ls
ID            NAME           REPLICAS  IMAGE                                                                                  COMMAND
4oxgvht8no5u  demoswarm_db   1/1       mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967
8szvr95b0eqa  demoswarm_api  1/1       lucj/demo-api@sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973
9s72v3tghku4  demoswarm_kv   1/1       redis@sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c
d8c99f79qqme  demoswarm_www  1/1       lucj/demo-www@sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6

All seems good. Just a quick check to see where the tasks are deployed (as a reminder, the tasks of a service are the instances running for this service).


$ docker service tasks demoswarm_db
ID                         NAME            SERVICE       IMAGE                                                                          LAST STATE         DESIRED STATE  NODE
bzr5o0qagr3pd0tqe1ojipzsb  demoswarm_db.1  demoswarm_db  mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967  Running 6 minutes  Running        worker2

$ docker service tasks demoswarm_api
ID                         NAME             SERVICE        IMAGE                                                                                  LAST STATE         DESIRED STATE  NODE
2ocx1l5niz9el4qcirddcfs6d  demoswarm_api.1  demoswarm_api  lucj/demo-api@sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973  Running 5 minutes  Running        manager2

$ docker service tasks demoswarm_kv
ID                         NAME            SERVICE       IMAGE                                                                          LAST STATE         DESIRED STATE  NODE
3ytq2etjaaab0625pdr7ddrx2  demoswarm_kv.1  demoswarm_kv  redis@sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c  Running 6 minutes  Running        manager1

$ docker service tasks demoswarm_www
ID                         NAME             SERVICE        IMAGE                                                                                  LAST STATE         DESIRED STATE  NODE
e4g81kjh684nkbwrvsmxgbrwl  demoswarm_www.1  demoswarm_www  lucj/demo-www@sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6  Running 6 minutes  Running        worker1

All the tasks are in the running state. That is cool. Also, as we did not specify a number of replicas, each service only has one task running.
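
If we wanted more instances of a service, scaling is a one-liner; for example:


$ docker service scale demoswarm_www=3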

Test our application

When we created the bundle, we saw there was no port mapping for the api and www services. So how are our services exposed to the outside? Let's use the service API to find out.

$ docker service inspect -f '{{ json .Endpoint.Ports }}' demoswarm_api | python -m json.tool
[
    {
        "Protocol": "tcp",
        "PublishedPort": 30002,
        "TargetPort": 80
    }
]

$ docker service inspect -f '{{ json .Endpoint.Ports }}' demoswarm_www | python -m json.tool
[
    {
        "Protocol": "tcp",
        "PublishedPort": 30001,
        "TargetPort": 80
    }
]

Let's test the api first; it exposes port 30002 to the outside world.

Note: services can be accessed from the outside through any of the cluster nodes (this is the routing mesh of swarm mode).


$ curl -XPOST http://192.168.99.100:30002/message?msg=hi
{
  "msg": "hi",
  "createdAt": "2016-07-08T10:53:59.388Z",
  "updatedAt": "2016-07-08T10:53:59.388Z",
  "id": "577f8647da08850e00804f99"
}

$ curl http://192.168.99.100:30002/message
[
  {
    "msg": "hi",
    "createdAt": "2016-07-08T10:53:59.388Z",
    "updatedAt": "2016-07-08T10:53:59.388Z",
    "id": "577f8647da08850e00804f99"
  }
]

Great, this is working fine.
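
As a sanity check on the routing mesh, the same request should also work through any other node's IP; for instance, assuming the docker-machine names from the swarm setup:


$ curl http://$(docker-machine ip worker1):30002/message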

Last step: making sure the web front-end is working and can retrieve the message created above through the api (remember, the front-end targets the api). The web part exposes port 30001 to the outside world.

[Screenshot: the web front-end, on port 30001, displaying the message]

Everything seems to be good.

Introducing the stack

When we deployed our bundle, a stack was created. It can be manipulated through the stack API.


$ docker stack --help

Usage:	docker stack COMMAND

Manage Docker stacks

Options:
      --help   Print usage

Commands:
  config      Print the stack configuration
  deploy      Create and update a stack from a Distributed Application Bundle (DAB)
  rm          Remove the stack
  tasks       List the tasks in the stack

Run 'docker stack COMMAND --help' for more information on a command.

Our stack is named demoswarm (based on the name of the application folder). Let's inspect its configuration first.


$ docker stack config demoswarm
{
    "Version": "0.1",
    "Services": {
        "api": {
            "Image": "lucj/demo-api@sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973",
            "Env": [
                "MONGO_URL=mongodb://db/demo"
            ],
            "Ports": [
                {
                    "Protocol": "tcp",
                    "Port": 80
                }
            ],
            "Networks": [
                "default"
            ]
        },
        "db": {
            "Image": "mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967",
            "Ports": [
                {
                    "Protocol": "tcp",
                    "Port": 27017
                }
            ],
            "Networks": [
                "default"
            ]
        },
        "kv": {
            "Image": "redis@sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c",
            "Ports": [
                {
                    "Protocol": "tcp",
                    "Port": 6379
                }
            ],
            "Networks": [
                "default"
            ]
        },
        "www": {
            "Image": "lucj/demo-www@sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6",
            "Env": [
                "REDIS_HOST=redis",
                "API=http://api"
            ],
            "Ports": [
                {
                    "Protocol": "tcp",
                    "Port": 80
                }
            ],
            "Networks": [
                "default"
            ]
        }
    }
}

Logically, this is almost exactly the content of our demoswarm.dab file.
We can also check the status of our tasks.


$ docker stack tasks demoswarm
ID                         NAME             SERVICE        IMAGE                                                                                  LAST STATE          DESIRED STATE  NODE
2ocx1l5niz9el4qcirddcfs6d  demoswarm_api.1  demoswarm_api  lucj/demo-api@sha256:7959369d7eec7d56ea69883947341e0ef3354da5d4c57c1e11ec94d59c85d973  Running 44 minutes  Running        manager2
bzr5o0qagr3pd0tqe1ojipzsb  demoswarm_db.1   demoswarm_db   mongo@sha256:5b9a35332e2e538d721e83dab78041fe251b89d97b16990a45489189ea423967          Running 45 minutes  Running        worker2
e4g81kjh684nkbwrvsmxgbrwl  demoswarm_www.1  demoswarm_www  lucj/demo-www@sha256:30179d726e18e51869e4fb9447817485afa894efbb41f26d87ef512d0c7597d6  Running 45 minutes  Running        worker1
3ytq2etjaaab0625pdr7ddrx2  demoswarm_kv.1   demoswarm_kv   redis@sha256:215fa745788070c35e9494ee894b9de19b208f46adca076bbed5b1b963e7ec9c          Running 45 minutes  Running        manager1
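
When we are done, the whole stack can be removed in a single command, as listed in the help above:


$ docker stack rm demoswarm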

Summary

When I started using Docker Cloud several months ago, I was surprised to see the notion of stack where I only expected to be able to run a Compose file as is. It seems DAB comes to fill that gap and makes it possible to create stacks (lists of services) directly.
DAB is a much more general concept, though; I definitely need to dig further to better understand all of this.

Feel free to comment and give feedback.

8 thoughts on "My first try with DAB (aka Distributed Application Bundle)"

  1. Hey Luc, thanks for the write-up, excellent work!

    As you noted, ports and volumes are not supported by DAB yet, so I'm wondering what is going to happen with `mongo-data:/data/db` when you fire up the mongodb container?

    Obviously it needs 'mongo-data' in place to get mongodb started, and the 'docker service update' hack happens after the container is running, so what gives?

    • Hello Jeremy,
      thanks for your comment.
      You're right: both 'volumes' keys are ignored when building the DAB

      WARNING: Unsupported top level key 'volumes' - ignoring
      WARNING: Unsupported key 'volumes' in services.db - ignoring

      so when mongod starts, it will store its data in the container's filesystem without relying on an external (safer and persistent) volume.

      Definitely interested in this topic; I'll probably write a more detailed article soon.
      Thanks,
      Luc

      note: btw, I'm moving this blog to https://medium.com/lucjuggery if you're interested 🙂

  2. # Data store
    db:
      image: mongo:3.2
      volumes:
        - mongo-data:/data/db
      expose:
        - 27017

    Also, have you verified that the volumes mapping appears in the .dab file? It seems not.

    • Hello Michael, thanks for your comment.
      Something I'm not really sure about: the docker-compose file contains the ports exposed to the outside (by the api and the www services), but when the bundle is created, those ports are not referenced in the .dab file. I guess this is done on purpose, so the mapping is not done at the release (.dab file) level but at the execution environment level. Is that correct?

      • This is because when you create the .dab file you don't know whether the ports specified in your docker-compose file will be available on the destination host.
        You might not even know beforehand where exactly the dab will be deployed.
        Making sure you do not pin down specific port numbers, but rather rely on a different mechanism to expose the service (e.g. jwilder's docker proxy), guarantees the ability to deploy in any environment.

        Thank you, Luc, for your great post!
