What is a Docker Swarm?
A Docker Swarm is a group of either physical or virtual machines that are running the Docker application and that have been configured to join together in a cluster. Once a group of machines has been clustered together, you can still run the Docker commands that you’re used to, but they will now be carried out by the machines in your cluster. The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster are referred to as nodes.
Docker Swarm uses overlay networks. The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely when encryption is enabled. Docker transparently handles routing of each packet to and from the correct Docker daemon host and the correct destination container.
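A quick way to see which overlay networks already exist on a host is to filter the network list by driver; this is just a sanity check, not a required step:

docker network ls --filter driver=overlay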
Docker swarm network
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
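You can confirm both networks exist on any swarm node by inspecting them (the output is verbose JSON, omitted here):

docker network inspect ingress
docker network inspect docker_gwbridge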
Let’s do an example and see how service discovery works in swarm. I have 4 EC2 instances: 1 as the Docker Swarm manager and 3 as Docker Swarm workers.
Initialize docker swarm manager
docker swarm init
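If the manager host has more than one network interface, swarm needs to be told which IP to advertise to the other nodes via --advertise-addr; the IP below is this example’s manager IP:

docker swarm init --advertise-addr 172.31.22.94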
Add the other 3 nodes as docker swarm workers
Run the command below on the manager to get the actual join command, then run that join command on the 3 nodes you want to add as workers.
docker swarm join-token worker
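The command above prints a ready-to-run join command that looks like the following (the token and manager IP are placeholders; use the values printed on your manager):

docker swarm join --token <worker-token> 172.31.22.94:2377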
Check if all nodes are added to the swarm cluster
docker node ls (on the swarm master)
To avoid any container creation on the docker swarm master, I have set the availability of the manager node to Drain. Use the command below to set the availability of a node to Drain.
docker node update --availability drain ip-172-31-22-94
To verify that the node’s availability is now Drain:
docker node inspect ip-172-31-22-94 | grep -i Availability
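If you later want the manager to accept tasks again, set its availability back to active:

docker node update --availability active ip-172-31-22-94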
Create a custom overlay network
docker network create --subnet 10.10.0.0/16 --driver overlay useast1overlay
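To confirm the network was created with the requested subnet (run this on the manager; overlay networks only appear on a worker once a task uses them there):

docker network inspect useast1overlay | grep -i subnet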
Create swarm services on our custom network
docker service create --name myapache --replicas 2 --network useast1overlay --publish 8080:80 httpd
docker service create --name myrabbitmq --replicas 1 --network useast1overlay --publish 15672:15672 --publish 5672:5672 rabbitmq:latest
docker service create --name myredis --replicas 2 --network useast1overlay --publish 6379:6379 redis:latest
Here, I have created 3 services: myapache with 2 replicas, myrabbitmq with 1 replica, and myredis with 2 replicas. The scheduler picked 2 nodes from the swarm cluster for the myapache service and launched the containers on them, which you can verify as shown below.
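The service list and per-task placement can be checked from the manager (node names and task IDs will differ in your cluster):

docker service ls
docker service ps myapache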
Let’s log in to one of the containers and dig our service.
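One way to do that, assuming the Debian-based httpd image where dig is not preinstalled (run the exec on the node hosting the container; the container ID comes from docker ps on that node):

docker exec -it 50be79089b49 bash
# inside the container: install dig, then query swarm's embedded DNS
apt-get update && apt-get install -y dnsutils
dig myapache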
As we can see, the service myapache resolves properly. But notice that the service name resolves to the single IP 10.10.0.24. Let’s log in to each myapache container and check the IPs assigned to each of the containers.
root@50be79089b49:/# hostname -I
10.0.0.26 172.18.0.6 10.10.0.26
root@5b77d3615f69:/# hostname -I
10.0.0.25 172.18.0.3 10.10.0.25
Both containers have the IPs 10.10.0.25 and 10.10.0.26. When you inspect the service myapache, you will see that these two containers already sit behind a VIP load balancer, and the IP 10.10.0.24 is that VIP.
docker service inspect myapache (check the Virtual IPs section).
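To pull out just the VIP section, you can filter the inspect output with a format template:

docker service inspect --format '{{json .Endpoint.VirtualIPs}}' myapache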
Summary
When you create a swarm service, Docker attaches it to an overlay network (the ingress network by default, or the user-defined one you specify) and automatically load balances the service on top of it using a VIP. So the IP you see when resolving the myapache service is the load balancer VIP of that service; by default, all requests for the service go to the VIP first and are then routed to the individual containers.
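As a point of contrast, swarm also supports DNS round-robin instead of a VIP: with --endpoint-mode dnsrr, the service name resolves directly to the individual container IPs. A sketch (the service name here is hypothetical, and dnsrr cannot be combined with ingress-mode --publish):

docker service create --name myapache-dnsrr --endpoint-mode dnsrr --replicas 2 --network useast1overlay httpd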