Setup K3s Multi-Master HA Cluster with WireGuard VPN
27 Nov 2025 - 10 min read
K3s is a lightweight Kubernetes distribution that’s perfect for edge computing, IoT, and resource-constrained environments. In this guide, we’ll set up a highly available (HA) K3s cluster with 3 master nodes using WireGuard VPN for secure communication.
The unique challenge we’re solving: only node1 has a public IP, while node2 and node3 are behind NAT. We’ll use WireGuard in a hub-and-spoke topology to connect all nodes securely.
Table of Contents
- Architecture Overview
- Prerequisites
- Step 1: Install WireGuard
- Step 2: Configure WireGuard VPN
- Step 3: Install K3s Master Nodes
- Step 4: Verify Cluster
- Step 5: Configure kubectl Access
- Troubleshooting
- Security Best Practices
- Advanced Configuration
- Monitoring
- Conclusion
Architecture Overview
Our architecture consists of three main components working together to create a robust, secure Kubernetes cluster:
Network Topology
The cluster uses a hub-and-spoke VPN topology where node1 (with public IP) acts as the central hub, and node2 & node3 (behind NAT) connect to it as spokes.
```mermaid
graph TD
    Internet[Public Internet]
    Internet -->|Port 6443| Node1
    Node1[Node1 - Hub Master<br/>Public IP<br/>VPN: 10.10.10.1]
    Node2[Node2 - Spoke Master<br/>Behind NAT<br/>VPN: 10.10.10.2]
    Node3[Node3 - Spoke Master<br/>Behind NAT<br/>VPN: 10.10.10.3]
    Node2 -->|WireGuard<br/>UDP:51820| Node1
    Node3 -->|WireGuard<br/>UDP:51820| Node1
    Node2 -.->|Via Hub| Node3
    style Node1 fill:#4285f4,color:#fff,stroke:#1565c0,stroke-width:3px
    style Node2 fill:#34a853,color:#fff
    style Node3 fill:#34a853,color:#fff
    style Internet fill:#ffa726,color:#fff
```
K3s Cluster Architecture
```mermaid
graph TD
    Node1[Node1<br/>K3s API Server<br/>etcd Member<br/>10.10.10.1]
    Node2[Node2<br/>K3s API Server<br/>etcd Member<br/>10.10.10.2]
    Node3[Node3<br/>K3s API Server<br/>etcd Member<br/>10.10.10.3]
    VPN[WireGuard VPN Network<br/>10.10.10.0/24<br/>Encrypted Tunnel]
    Node1 <-->|VPN Traffic| VPN
    Node2 <-->|VPN Traffic| VPN
    Node3 <-->|VPN Traffic| VPN
    Node1 <-.->|etcd Raft<br/>Port 2379-2380| Node2
    Node2 <-.->|etcd Raft<br/>Port 2379-2380| Node3
    Node3 <-.->|etcd Raft<br/>Port 2379-2380| Node1
    style Node1 fill:#ff9800,color:#fff,stroke:#e65100,stroke-width:3px
    style Node2 fill:#ff9800,color:#fff
    style Node3 fill:#ff9800,color:#fff
    style VPN fill:#2196f3,color:#fff
```
Data Flow
```mermaid
graph LR
    Client[kubectl Client]
    subgraph Hub[Node1 - Hub]
        API1[API Server<br/>:6443]
        ETCD1[etcd]
    end
    subgraph Spoke1[Node2 - Spoke]
        API2[API Server<br/>:6443]
        ETCD2[etcd]
    end
    subgraph Spoke2[Node3 - Spoke]
        API3[API Server<br/>:6443]
        ETCD3[etcd]
    end
    Client -->|1. API Request| API1
    API1 -->|2. Write to etcd| ETCD1
    ETCD1 -->|3. Replicate| ETCD2
    ETCD1 -->|3. Replicate| ETCD3
    ETCD2 -.->|4. Consensus| ETCD3
    ETCD3 -.->|5. Confirm| API1
    API1 -->|6. Response| Client
    style Client fill:#9c27b0,color:#fff
    style API1 fill:#4caf50,color:#fff
    style ETCD1 fill:#f44336,color:#fff
    style ETCD2 fill:#f44336,color:#fff
    style ETCD3 fill:#f44336,color:#fff
```
Setup Flow
```mermaid
sequenceDiagram
    participant Node1
    participant Node2
    participant Node3
    Note over Node1,Node3: 1. WireGuard Setup
    Node2->>Node1: Connect VPN
    Node3->>Node1: Connect VPN
    Note over Node1,Node3: 2. K3s Installation
    Node1->>Node1: k3s server --cluster-init
    Node2->>Node1: k3s server --server https://10.10.10.1:6443
    Node3->>Node1: k3s server --server https://10.10.10.1:6443
    Note over Node1,Node3: 3. Cluster Ready
    Node1-->>Node3: HA Cluster Formed ✓
```
High Availability Features
| Component | HA Mechanism | Failure Tolerance |
|---|---|---|
| etcd | Raft consensus (3 nodes) | 1 node failure |
| API Server | Active-active (3 instances) | 2 node failures (writes still require etcd quorum) |
| Scheduler | Leader election | Automatic failover |
| Controller Manager | Leader election | Automatic failover |
| WireGuard VPN | Hub-and-spoke | Hub must be available |
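The etcd tolerance in the table follows directly from Raft quorum arithmetic: an n-member cluster needs floor(n/2)+1 live members for quorum, so it survives floor((n-1)/2) failures. A quick shell sketch of the math:

```bash
# Raft quorum arithmetic: an n-member etcd cluster needs floor(n/2)+1
# members for quorum, so it tolerates floor((n-1)/2) member failures.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( (n - 1) / 2 ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

This is why three masters tolerate exactly one node loss, and why an even-sized cluster adds no extra tolerance over the next-smaller odd size.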
Key Features
- High Availability: 3 master nodes with embedded etcd providing quorum-based redundancy
- Secure Communication: All nodes communicate via encrypted WireGuard VPN tunnels
- NAT-Friendly: Works even when only one node has a public IP address
- Production-Ready: Suitable for production workloads with automatic failover
- Lightweight: K3s binary < 70MB, minimal resource footprint
- Edge Optimized: Perfect for edge computing and distributed deployments
Prerequisites
Before starting, ensure you have:
| Requirement | Details |
|---|---|
| 3 servers (physical or virtual machines) | node1: public IP required (e.g., 203.0.113.10)<br>node2: private/NAT IP (e.g., 192.168.1.102)<br>node3: private/NAT IP (e.g., 192.168.1.103) |
| Operating system | Ubuntu 22.04/20.04 or CentOS 8/9 |
| Root access | User with sudo privileges |
| Network requirements | Port 51820 (UDP) - WireGuard<br>Port 6443 (TCP) - Kubernetes API server<br>Ports 2379-2380 (TCP) - etcd<br>Port 10250 (TCP) - kubelet |
Step 1: Install WireGuard
WireGuard is a modern, fast VPN that will create a secure tunnel between all nodes.
Install on All Nodes
Run on all three nodes:
On Ubuntu/Debian:

```bash
sudo apt update
sudo apt install wireguard -y
```

On CentOS/RHEL:

```bash
sudo dnf install epel-release -y
sudo dnf install wireguard-tools -y
```

Generate Key Pairs
On each node, generate a private and public key:
```bash
sudo su
cd /etc/wireguard
wg genkey | tee privatekey | wg pubkey > publickey
```

This creates two files:

- `privatekey` - Keep this secret!
- `publickey` - Share this with other nodes
Important: Save these keys securely. You’ll need:
- Each node’s own private key
- All other nodes’ public keys
Step 2: Configure WireGuard VPN
We’ll use a hub-and-spoke topology where node1 acts as the central hub.
VPN IP Assignment
- node1: `10.10.10.1/24` (Hub)
- node2: `10.10.10.2/24` (Spoke)
- node3: `10.10.10.3/24` (Spoke)
Node1 Configuration (Hub)
Create /etc/wireguard/wg0.conf on node1:
```ini
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = <node1-private-key>

# Peer: node2
[Peer]
PublicKey = <node2-public-key>
AllowedIPs = 10.10.10.2/32

# Peer: node3
[Peer]
PublicKey = <node3-public-key>
AllowedIPs = 10.10.10.3/32
```

Note: Replace `<node1-private-key>`, `<node2-public-key>`, and `<node3-public-key>` with the actual keys.
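To avoid copy/paste errors, the hub config can also be assembled with a small heredoc script. A sketch, assuming the key files from Step 1 were collected on node1 (`node2.pub` and `node3.pub` are assumed filenames for the copied public keys; placeholders are used when a file is missing):

```bash
# Assemble node1's wg0.conf from the keys generated in Step 1.
# Falls back to placeholder strings when a key file is absent.
NODE1_KEY=$(cat privatekey 2>/dev/null || echo '<node1-private-key>')
NODE2_PUB=$(cat node2.pub  2>/dev/null || echo '<node2-public-key>')
NODE3_PUB=$(cat node3.pub  2>/dev/null || echo '<node3-public-key>')

cat > wg0.conf <<EOF
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = $NODE1_KEY

# Peer: node2
[Peer]
PublicKey = $NODE2_PUB
AllowedIPs = 10.10.10.2/32

# Peer: node3
[Peer]
PublicKey = $NODE3_PUB
AllowedIPs = 10.10.10.3/32
EOF
```

Review the generated file, then move it to `/etc/wireguard/wg0.conf`.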
Node2 Configuration (Spoke)
Create /etc/wireguard/wg0.conf on node2:
```ini
[Interface]
Address = 10.10.10.2/24
PrivateKey = <node2-private-key>
# If node2 is not behind NAT, uncomment the following line
# ListenPort = 51820

[Peer]
PublicKey = <node1-public-key>
AllowedIPs = 10.10.10.1/32, 10.10.10.0/24
Endpoint = <node1-public-ip>:51820
PersistentKeepalive = 25
```

Key parameters:

- `AllowedIPs`: includes both node1's IP and the entire VPN subnet, so spoke-to-spoke traffic routes via the hub
- `Endpoint`: node1's public IP address
- `PersistentKeepalive`: sends a keepalive every 25 seconds to keep the NAT mapping open
Node3 Configuration (Spoke)
Create /etc/wireguard/wg0.conf on node3:
```ini
[Interface]
Address = 10.10.10.3/24
PrivateKey = <node3-private-key>
# If node3 is not behind NAT, uncomment the following line
# ListenPort = 51820

[Peer]
PublicKey = <node1-public-key>
AllowedIPs = 10.10.10.1/32, 10.10.10.0/24
Endpoint = <node1-public-ip>:51820
PersistentKeepalive = 25
```

Enable WireGuard
On all three nodes:
```bash
# Start the WireGuard interface
sudo wg-quick up wg0

# Enable on boot
sudo systemctl enable wg-quick@wg0

# Check status
sudo wg show
```

Test VPN Connectivity
From node1, ping other nodes:
```bash
ping -c 3 10.10.10.2
ping -c 3 10.10.10.3
```

From node2 or node3:

```bash
ping -c 3 10.10.10.1
ping -c 3 10.10.10.2   # from node3
ping -c 3 10.10.10.3   # from node2
```

All pings should succeed. If not, check:
- Firewall rules (allow UDP 51820)
- WireGuard configuration
- Public key accuracy
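To run these checks in one pass, a small loop can sweep the whole VPN; this is an illustrative helper (adjust the address list to your subnet), using the Linux iputils `-W` per-ping timeout:

```bash
# Sweep every VPN address and report reachability (1 ping, 2 s timeout each).
for ip in 10.10.10.1 10.10.10.2 10.10.10.3; do
  if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
    echo "reachable   $ip"
  else
    echo "UNREACHABLE $ip"
  fi
done
```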
If pings still fail, you may need to allow additional traffic on the WireGuard interface:
```bash
sudo ufw allow in on wg0 to any port 6443 proto tcp
sudo ufw allow in on wg0 to any port 2379:2380 proto tcp
sudo ufw reload
```

Step 3: Install K3s Master Nodes
Now we’ll install K3s on all nodes, creating a highly available cluster using the VPN network.
Install on Node1 (First Master)
Node1 will initialize the cluster:
```bash
curl -sfL https://get.k3s.io | sh -s - server \
  --cluster-init \
  --node-ip 10.10.10.1 \
  --advertise-address 10.10.10.1 \
  --tls-san 10.10.10.1
```

Parameter explanation:

- `--cluster-init`: initialize an HA cluster with embedded etcd
- `--node-ip`: use the VPN IP for node communication
- `--advertise-address`: advertise this IP to other nodes
- `--tls-san`: add the VPN IP to the TLS certificate
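Because K3s binds to the VPN IP, the k3s service must start after wg0 on boot; otherwise the API server can come up before the tunnel exists. A minimal systemd drop-in sketch enforces the ordering (the drop-in filename is an assumption; any `.conf` under `k3s.service.d/` works):

```ini
# /etc/systemd/system/k3s.service.d/10-wait-for-wg0.conf
[Unit]
After=wg-quick@wg0.service
Requires=wg-quick@wg0.service
```

Run `sudo systemctl daemon-reload` after creating it. The same drop-in applies on node2 and node3.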
Get the Cluster Token
After installation, retrieve the token for joining other nodes:
```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```

Save this token - you'll need it for node2 and node3.

Example output:

```
K107c4a8e9b7f4d8a9c6b5e4f3d2c1b0a9::server:a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
```

Install on Node2 (Second Master)
```bash
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://10.10.10.1:6443 \
  --token <node-token> \
  --node-ip 10.10.10.2 \
  --advertise-address 10.10.10.2 \
  --tls-san 10.10.10.1
```

Replace `<node-token>` with the token from node1.
Install on Node3 (Third Master)
```bash
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://10.10.10.1:6443 \
  --token <node-token> \
  --node-ip 10.10.10.3 \
  --advertise-address 10.10.10.3 \
  --tls-san 10.10.10.1
```

Wait for Cluster Initialization
The cluster needs a few minutes to initialize. Monitor progress on node1:
```bash
# Watch node status
sudo k3s kubectl get nodes -w

# Check pods in all namespaces
sudo k3s kubectl get pods -A
```

Step 4: Verify Cluster
Check Node Status
On node1:
```bash
sudo k3s kubectl get nodes
```

Expected output:

```
NAME    STATUS   ROLES                       AGE   VERSION
node1   Ready    control-plane,etcd,master   5m    v1.28.5+k3s1
node2   Ready    control-plane,etcd,master   3m    v1.28.5+k3s1
node3   Ready    control-plane,etcd,master   2m    v1.28.5+k3s1
```

All nodes should show `Ready` status with `control-plane,etcd,master` roles.
Check etcd Members
Verify all three nodes are part of the etcd cluster. K3s labels etcd members with a node role, so you can list them directly:

```bash
sudo k3s kubectl get nodes -l node-role.kubernetes.io/etcd=true
```

All three nodes should be listed. Note that embedded etcd runs inside the k3s process itself, so there is no separate etcd container to look for.
Test Pod Deployment
Deploy a test application:
```bash
sudo k3s kubectl create deployment nginx --image=nginx
sudo k3s kubectl expose deployment nginx --port=80 --type=NodePort
sudo k3s kubectl get pods,svc
```

Step 5: Configure kubectl Access
For easier cluster management, set up kubectl access from your local machine.
Export kubeconfig from Node1
On node1:
```bash
sudo cat /etc/rancher/k3s/k3s.yaml
```

Copy the output to your local machine at `~/.kube/config` and modify the server URL:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://10.10.10.1:6443  # change from 127.0.0.1 to the VPN IP
  name: default
# ... rest of the config
```

Note: You'll need VPN access to 10.10.10.1 from your local machine, or use the public IP of node1.
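The URL edit can also be scripted. A sketch using sed against a demo fragment; the same substitution applies to the real `k3s.yaml` copied from node1:

```bash
# Demo: rewrite the API server address from localhost to the hub's VPN IP.
# For the real file: sed 's|https://127.0.0.1:6443|https://10.10.10.1:6443|' k3s.yaml > ~/.kube/config
printf 'server: https://127.0.0.1:6443\n' > k3s-demo.yaml
sed 's|https://127.0.0.1:6443|https://10.10.10.1:6443|' k3s-demo.yaml
# prints: server: https://10.10.10.1:6443
```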
Alternative: Use Node1’s Public IP
For remote access without VPN:
```bash
# On node1, reinstall with the public IP as an additional TLS SAN
curl -sfL https://get.k3s.io | sh -s - server \
  --cluster-init \
  --node-ip 10.10.10.1 \
  --advertise-address 10.10.10.1 \
  --tls-san 10.10.10.1 \
  --tls-san <node1-public-ip>
```

Then in kubeconfig, use `https://<node1-public-ip>:6443`.
Troubleshooting
Nodes Not Ready
Check logs on the problematic node:
```bash
sudo journalctl -u k3s -f
```

Common issues:
- Network connectivity via VPN
- Firewall blocking required ports
- Insufficient resources (CPU, memory)
WireGuard Not Connecting
Check WireGuard status:
```bash
sudo wg show
```

Verify the handshake:

```bash
sudo wg show wg0 latest-handshakes
```

If the handshake timestamp is old or missing:
- Check firewall (allow UDP 51820)
- Verify public keys are correct
- Ensure node1’s public IP is accessible
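The `latest-handshakes` output is just a peer key and a Unix timestamp per line, so staleness is easy to check mechanically. A small illustrative helper (the function name and 180-second threshold are assumptions), shown here against fabricated data:

```bash
# Flag WireGuard peers whose last handshake is stale.
# Input format (from `wg show wg0 latest-handshakes`): <peer-pubkey><TAB><unix-time>
check_handshakes() {
  now=$(date +%s)
  max_age=${1:-180}   # seconds before a peer counts as stale
  while read -r peer ts; do
    [ -n "$peer" ] || continue
    if [ "$ts" -eq 0 ]; then
      echo "STALE $peer never-handshaked"
    elif [ $(( now - ts )) -gt "$max_age" ]; then
      echo "STALE $peer $(( now - ts ))s"
    else
      echo "OK    $peer $(( now - ts ))s"
    fi
  done
}

# Demo with fabricated data; on the hub you would run:
#   sudo wg show wg0 latest-handshakes | check_handshakes
printf 'peerA\t%s\npeerB\t0\n' "$(date +%s)" | check_handshakes
```

A timestamp of 0 means the peer has never completed a handshake, which usually points at a wrong public key or a blocked UDP 51820.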
etcd Issues
Check etcd member health:
With K3s embedded etcd there is no etcd pod to exec into; the member runs inside the k3s process itself. Check etcd health through the API server instead:

```bash
sudo k3s kubectl get --raw /healthz/etcd
```

This should return `ok`. For deeper inspection, check the k3s logs for etcd messages:

```bash
sudo journalctl -u k3s | grep -i etcd | tail -n 20
```

Security Best Practices
1. Firewall Configuration
On node1 (public-facing):
```bash
# Allow WireGuard
sudo ufw allow 51820/udp

# Allow the K3s API only from the VPN
sudo ufw allow from 10.10.10.0/24 to any port 6443

# Enable the firewall
sudo ufw enable
```

On node2 and node3:

```bash
# Only allow traffic from the VPN subnet
sudo ufw default deny incoming
sudo ufw allow from 10.10.10.0/24
sudo ufw enable
```

2. Regular Updates
Keep both WireGuard and K3s updated:
```bash
# Update K3s
curl -sfL https://get.k3s.io | sh -

# Update WireGuard
sudo apt update && sudo apt upgrade wireguard -y
```

3. Backup etcd
Regular etcd backups are crucial:
```bash
# Create a backup
sudo k3s etcd-snapshot save

# List snapshots
sudo k3s etcd-snapshot list
```
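Snapshots can also be taken on a schedule. A cron sketch (the cron.d path and snapshot name are assumptions; recent K3s releases can also schedule snapshots natively via the `--etcd-snapshot-schedule-cron` server flag):

```
# /etc/cron.d/k3s-etcd-snapshot - nightly snapshot at 02:30
30 2 * * * root /usr/local/bin/k3s etcd-snapshot save --name nightly
```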
```bash
# Restore from a snapshot
sudo k3s server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/snapshot-name
```

Advanced Configuration
Load Balancer for API Server
For production, use a load balancer in front of all master nodes:
```
# Example HAProxy config
frontend k3s_api
    bind *:6443
    mode tcp
    default_backend k3s_masters

backend k3s_masters
    mode tcp
    balance roundrobin
    server node1 10.10.10.1:6443 check
    server node2 10.10.10.2:6443 check
    server node3 10.10.10.3:6443 check
```

Custom CNI (Optional)
K3s uses Flannel by default. For advanced networking, disable it and install your own:
```bash
curl -sfL https://get.k3s.io | sh -s - server \
  --cluster-init \
  --flannel-backend=none \
  --disable-network-policy \
  --node-ip 10.10.10.1
```

Then install Calico, Cilium, or another CNI.
Monitoring
Install Metrics Server
```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```

View Resource Usage

```bash
kubectl top nodes
kubectl top pods -A
```

Conclusion
You now have a production-ready, highly available K3s cluster with:
- 3 master nodes for redundancy
- Embedded etcd for distributed consensus
- WireGuard VPN for secure communication
- NAT traversal using hub-and-spoke topology
This setup is ideal for:
- Edge computing scenarios
- Multi-site deployments
- Development/staging environments
- Small to medium production workloads
The cluster can survive the loss of one master node and continues operating normally. You can add worker nodes using the same WireGuard VPN network, and scale horizontally as needed.
Next Steps
- Set up persistent storage with Longhorn or Rook
- Configure Ingress Controller (Traefik is included by default)
- Implement monitoring with Prometheus and Grafana
- Set up automated backups
- Configure RBAC and security policies
Happy clustering!