Containerization has revolutionized software deployment, and Docker Swarm provides a powerful solution for orchestrating and managing containerized applications. In this guide, we will take you through the step-by-step process of setting up and effectively using Docker Swarm on three virtual machines, and then extend it to include a load balancing proxy using Nginx. By the end of this tutorial, you’ll be equipped with the knowledge to harness the benefits of container orchestration and load balancing for your projects.
![](https://tomsitcafe.com/wp-content/uploads/2023/08/1200px-docker_small_logo.jpg.png?w=656)
Don’t forget to join my Discord: https://discord.gg/YbSYGsQYES
Introduction to Docker Swarm
Docker Swarm is a native clustering and orchestration tool for Docker containers. With Docker Swarm, we can manage a cluster of Docker nodes collectively, enabling load balancing, service discovery, and fault tolerance. This orchestration solution simplifies the deployment and scaling of applications across multiple machines.
Setting Up Virtual Machines
Our first step is to create three virtual machines. Utilize platforms like VMware, VirtualBox, or a cloud provider for this purpose. Ensure that the virtual machines can communicate with each other over the network.
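Swarm nodes communicate over a handful of well-known ports, so these must be open between the VMs. As a sketch, assuming Ubuntu VMs with `ufw` as the firewall (adjust for your own environment):

```shell
# Open the ports Docker Swarm uses between nodes (ufw assumed):
sudo ufw allow 2377/tcp   # cluster management traffic (manager node)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic
```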
Installing Docker on Virtual Machines
Begin by logging into each virtual machine and installing Docker using the appropriate package manager for the operating system.
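As one option on Ubuntu or Debian, you could use Docker's official convenience script (your distribution's package manager works just as well):

```shell
# Install Docker via the official convenience script (Ubuntu/Debian example):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Start Docker and enable it on boot:
sudo systemctl enable --now docker
# Verify the installation:
docker --version
```

Run this on all three virtual machines before proceeding.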
Initializing the Swarm
Select one of the virtual machines to act as the manager node and issue the following command:

```shell
docker swarm init --advertise-addr <MANAGER_NODE_IP>
```

As an example:

```shell
docker swarm init --advertise-addr 192.168.1.42
```

```
Swarm initialized: current node (379nhgrc0k21mdrbowfmqxvm5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0p9dy7kllmrob6sjfy7xqujwah4aw299k0pintpvl1j5ix4m55-5zn6rtueqaaycfs56110s2j1f 192.168.1.42:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
Note the `docker swarm join` command printed in the output — you will run it on each worker node in a moment.
We can check the status of the swarm with the `docker info` command:

```
Swarm: active
  NodeID: 379nhgrc0k21mdrbowfmqxvm5
  Is Manager: true
  ClusterID: yer9zsh6b9tojeruio00vm0ev
  Managers: 1
  Nodes: 2
(...)
```
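For a per-node view, `docker node ls` on the manager lists every member of the swarm:

```shell
# On the manager: list all nodes with their status, availability,
# and manager role.
docker node ls
```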
Joining Nodes to the Swarm
On the remaining virtual machines, run the provided join command to incorporate them as worker nodes in the swarm:

```shell
docker swarm join --token <TOKEN> <MANAGER_NODE_IP>:<PORT>
```

As an example:

```shell
docker swarm join --token SWMTKN-1-0p9dy7kllmrob6sjfy7xqujwah4aw299k0pintpvl1j5ix4m55-5zn6rtueqaaycfs56110s2j1f 192.168.1.42:2377
```

```
This node joined a swarm as a worker.
```

Repeat this on each worker VM; once the command succeeds, the node is part of the swarm.
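If you misplace the join command, you can reprint it on the manager at any time:

```shell
# Run on the manager node to reprint the worker join command:
docker swarm join-token worker
# Or, for a manager join token:
docker swarm join-token manager
```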
Deploying Services on the Swarm
With the swarm set up, it’s time to deploy services. Create a Docker Compose file, such as `docker-compose.yml`, to define your services, networks, and volumes. Here’s a simple example:
```yaml
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "80"
    deploy:
      replicas: 3
```
Deploy the stack using the following command:
```shell
docker stack deploy -c docker-compose.yml myapp
```
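After deploying, it's worth confirming that all replicas came up (service and stack names here follow the example above):

```shell
# List the services in the stack and check the replica count (e.g. 3/3):
docker stack services myapp
# Show which node each replica (task) is running on:
docker service ps myapp_web
# Scale the web service up or down at any time:
docker service scale myapp_web=5
```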
Adding Load Balancing with Nginx Proxy
To balance the load across the Nginx instances, we’ll use another service to act as a reverse proxy. Create a new Docker Compose file, `docker-compose-proxy.yml`, for the Nginx proxy. Two details matter here: `depends_on` is ignored by `docker stack deploy` (and the `web` service lives in a separate stack anyway), and the proxy must share a network with the `web` service — so we attach it to the `myapp_default` overlay network that the first stack created:

```yaml
version: '3'
services:
  proxy:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - myapp_default
networks:
  myapp_default:
    external: true
```
In the `nginx.conf` file, configure the load balancing:
```
events {}

http {
    upstream webapp {
        # In swarm mode, per-container names like myapp_web_1 do not
        # exist; tasks.myapp_web resolves via swarm DNS to the IPs of
        # all web replicas, which nginx then balances across.
        server tasks.myapp_web:80;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://webapp;
        }
    }
}
```

Note that the `events {}` block is required for nginx to start, even when empty, and that nginx resolves `tasks.myapp_web` when the configuration is loaded — if you rescale the web service, restart the proxy to pick up the new task list.
Deploying the Nginx Proxy
Deploy the Nginx proxy service using the following command:
```shell
docker stack deploy -c docker-compose-proxy.yml proxy
```
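Once both stacks are running, you can verify the setup end-to-end from any machine that can reach the swarm (the IP below is from our example; adjust to yours):

```shell
# Each request goes through the proxy to one of the web replicas;
# expect an HTTP 200 with the default nginx welcome page.
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.42/
# Repeat a few times to exercise the round-robin balancing:
for i in 1 2 3; do curl -s -I http://192.168.1.42/ | head -n 1; done
```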
Managing the Swarm
Effectively monitor your swarm using Docker commands like `docker service ls`, `docker node ls`, and `docker stack ps <STACK_NAME>`.
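Beyond listing, day-two operations such as rolling updates and node maintenance follow the same CLI pattern (the image tag below is illustrative):

```shell
# Roll out a new image version across the replicas (tag is illustrative):
docker service update --image nginx:1.25 myapp_web
# Temporarily remove a node from scheduling for maintenance:
docker node update --availability drain <NODE_ID>
# Return it to service afterwards:
docker node update --availability active <NODE_ID>
```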
Conclusion
Incorporating a load balancing proxy into your Docker Swarm setup using Nginx enhances the scalability and reliability of your containerized applications. By following this extended guide, you’ve learned how to create a Docker Swarm on three virtual machines, deploy services, scale applications, implement load balancing, and manage the swarm effectively. This orchestration solution combined with a load balancing proxy empowers you to build resilient and scalable applications effortlessly.
Take your container orchestration skills to the next level by incorporating load balancing into your Docker Swarm setup. Your applications will be well-equipped to handle increasing traffic demands and deliver a seamless user experience.