Preparation
For preparing the Raspberry Pi itself, any standard OS setup guide will do.
How my Docker swarm is configured:
- rpi-01 as manager
- rpi-02 as manager
- rpi-03 as manager; with three managers, the swarm keeps quorum as long as more than 50% of managers (i.e. two of three) stay online
- All nodes connected via Wi-Fi, with ZeroTier installed
- UFW installed and configured for basic use (note that Docker bypasses UFW by writing rules directly to the underlying iptables)
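The quorum rule above follows from Raft consensus: a swarm with N managers stays writable only while a strict majority of managers is reachable, so it tolerates floor((N-1)/2) failures. A quick sketch of the math:

```shell
# Fault tolerance of a Raft manager quorum: floor((N - 1) / 2)
# With 3 managers, losing one node is fine; losing two freezes the swarm.
for managers in 1 3 5; do
  echo "$managers manager(s) tolerate $(( (managers - 1) / 2 )) failure(s)"
done
```

This is why odd manager counts are preferred: going from 3 to 4 managers adds no extra fault tolerance.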
Installation
Install Docker
Do the following on all nodes
# Download and run install script from docker
curl -sSL https://get.docker.com | sh
# Post-install: allow your user to run docker without sudo
sudo usermod -aG docker [yourusername]
# Log out and back in (or run `newgrp docker`) for the group change to take effect
# Verify with
groups [yourusername]
Install Docker-Compose
Do the following on all nodes
# Install build dependencies, then docker-compose via pip
sudo apt-get install -y libffi-dev libssl-dev python3 python3-dev python3-pip
sudo pip3 install docker-compose
Configure Docker UFW
Do the following on all nodes
# Modify /etc/ufw/after.rules
sudo sed -i '$ a \
# BEGIN UFW AND DOCKER\
*filter\
:ufw-user-forward - [0:0]\
:ufw-docker-logging-deny - [0:0]\
:DOCKER-USER - [0:0]\
-A DOCKER-USER -j ufw-user-forward\
\
-A DOCKER-USER -j RETURN -s 10.0.0.0/8\
-A DOCKER-USER -j RETURN -s 172.16.0.0/12\
-A DOCKER-USER -j RETURN -s 192.168.0.0/16\
\
-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN\
\
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16\
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8\
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12\
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 192.168.0.0/16\
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 10.0.0.0/8\
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 172.16.0.0/12\
\
-A DOCKER-USER -j RETURN\
\
-A ufw-docker-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW DOCKER BLOCK] "\
-A ufw-docker-logging-deny -j DROP\
\
COMMIT\
# END UFW AND DOCKER' /etc/ufw/after.rules
# Restart UFW
sudo systemctl restart ufw
These rules come from the ufw-docker project; see it for the rationale behind each chain.
Configuration
Create Swarm
Do the following on the first manager node
# Initialize docker swarm from manager
# Advertise using Zerotier network address
docker swarm init --advertise-addr [10.x.x.x]
# Take note of the output command
# docker swarm join --token SWMTKN-#-##################################################-######################### 10.x.x.x:2377
# Confirm swarm has one manager node
# Current node will have asterisk on the ID column
docker node ls
Get join-token
There is no need to memorize the join token; it can always be retrieved from a manager node.
# To get worker join token
docker swarm join-token worker
# To get manager join token
docker swarm join-token manager
Introduce another manager node
On each additional node that will serve as a manager, run the following command
# Join token can be obtained from section above
docker swarm join --token SWMTKN-#-##################################################-######################### 10.x.x.x:2377
Join worker node
Run the following command on each worker node
# Join token can be obtained from section above
docker swarm join --token SWMTKN-#-##################################################-######################### 10.x.x.x:2377
Configure UFW
Here we open the swarm ports, plus a service port range between nodes on the ZeroTier subnet. Because only 50000-59999 is opened between nodes, services should publish their ports within this range. Run the following on ALL nodes
# Allow docker swarm management and routing mesh
sudo ufw allow 2376/tcp comment "Docker - Client Communication (TLS)"
sudo ufw allow 2377/tcp comment "Docker - Swarm Cluster Management"
sudo ufw allow 7946 comment "Docker - Node Discovery (TCP and UDP)"
sudo ufw allow 4789/udp comment "Docker - Overlay Network (VXLAN)"
sudo ufw allow 9001/tcp comment "Portainer Agent"
# Allow node-to-node traffic on the 50000:59999 service port range
sudo ufw allow from 10.x.x.0/24 to 10.x.x.0/24 port 50000:59999 proto tcp
sudo ufw allow from 10.x.x.0/24 to 10.x.x.0/24 port 50000:59999 proto udp
Virtual Router Redundancy Protocol (VRRP)
Let’s configure some automatic failover.
For the moment, we can point a web browser (or port forwarding) at any of the three RPi nodes (e.g. http://rpi-01:80) to reach any service deployed in the swarm. However, if that particular node goes offline, we lose access to the service even though the swarm itself is still healthy.
Virtual Router Redundancy Protocol lets several hosts share a "virtual IP address" that we can use to access the swarm; whichever live node has the highest priority answers for it. We'll use keepalived to configure VRRP.
In the example below, we'll assign the virtual IP from outside the DHCP pool to avoid address conflicts.
# Install on all manager nodes
sudo apt-get install keepalived
# Create /etc/keepalived/keepalived.conf with the content below
# Change eth0 to your network interface (e.g. wlan0)
# virtual_router_id must be identical on all nodes in the cluster
# Give each node a different priority (e.g. primary 250, secondary 200, tertiary 150), range 1-255
# On the secondary/tertiary nodes, also set state to BACKUP
# auth_pass must be identical on all nodes (keepalived only uses the first 8 characters)
sudo touch /etc/keepalived/keepalived.conf
sudo sed -i '$ a \
vrrp_instance VI_1 { \
state MASTER \
interface eth0 \
virtual_router_id 51 \
priority 250 \
advert_int 1 \
\
authentication { \
auth_type PASS \
auth_pass <PASSWORD> \
} \
\
virtual_ipaddress { \
10.255.x.x \
} \
}' /etc/keepalived/keepalived.conf
# Enable service and start keepalived
sudo systemctl enable --now keepalived
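For illustration, a secondary node's /etc/keepalived/keepalived.conf would differ only in state and priority; the interface name, router ID, password placeholder, and virtual IP below are assumptions carried over from the primary example and must match your own values:

```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1

    authentication {
        auth_type PASS
        auth_pass <PASSWORD>
    }

    virtual_ipaddress {
        10.255.x.x
    }
}
```

If the primary goes down, the secondary (next-highest priority) claims the virtual IP within a few advert intervals, and traffic to the VIP continues to reach the swarm.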
Run Container
First Container
We’ll run the Docker Swarm visualizer from the alexellis2/visualizer-arm image on Docker Hub.
Run the following from manager node
docker service create \
--name swarm-vis \
--publish 58080:8080/tcp \
--constraint node.role==manager \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
alexellis2/visualizer-arm:latest
The command above will take some time on first run while the image downloads. Once running, the visualizer is reachable on port 58080 of any swarm node, thanks to the routing mesh.
Check services
Run the following on any manager node. If the REPLICAS column shows 1/1, the service is up and running on some node.
# To check running services
docker service ls
# To check specific service served by which node
docker service ps [servicename]
Install Portainer
We’ll use Portainer to manage our swarm.
Create Portainer data directory
We are using GlusterFS for persistent storage; the Gluster volume is mounted at /mnt/gfsvolume1 on every node.
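The Gluster setup itself is outside the scope of this article, but for context, each node typically mounts the volume via an /etc/fstab entry along these lines (the volume name gfsvolume1 and mount options here are assumptions; adjust to your own setup):

```
# /etc/fstab — mount the Gluster volume at boot, after the network is up
localhost:/gfsvolume1  /mnt/gfsvolume1  glusterfs  defaults,_netdev  0  0
```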
On any gluster node, run the following command
# Create portainer data directory
sudo mkdir -p /mnt/gfsvolume1/portainer/data
# Create portainer yaml file
nano /mnt/gfsvolume1/portainer/portainer-swarm-compose.yml
Add the following into the yaml file
version: '3.2'

services:
  agent:
    image: portainer/agent
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  portainer:
    image: portainer/portainer-ce
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "59999:9443"
      - "50000:9000"
      - "59998:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /mnt/gfsvolume1/portainer/data:/data
    networks:
      - agent_network
    environment:
      - "TZ=Asia/Singapore"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true
Now we are ready to deploy the stack. On a manager node:
# Deploy docker stack for portainer
docker stack deploy portainer -c /mnt/gfsvolume1/portainer/portainer-swarm-compose.yml
# Monitor deploy status
docker service ls
If everything is okay, Portainer's web UI will be reachable at http://10.x.x.x:50000 (plain HTTP, mapped to container port 9000) or https://10.x.x.x:59999 (HTTPS, mapped to container port 9443).