Networking Guide
Network architecture and configuration guides
Network Architecture
Physical Network
Each cluster node must have at least one network interface card (NIC) with a default route configured. If the node lacks a pre-configured default route, establish one before installation.
K3s requires a default route to auto-detect the node’s primary IP and for kube-proxy ClusterIP routing to function properly. If no default route exists, create a dummy interface as a workaround:
```shell
ip link add dummy0 type dummy
ip link set dummy0 up
ip addr add 203.0.113.254/31 dev dummy0
ip route add default via 203.0.113.255 dev dummy0 metric 1000
```
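To confirm the node now has a default route (whether from a real gateway or the dummy interface above), you can inspect the routing table, for example:

```shell
# K3s auto-detects the node's primary IP from this route
ip route show default
```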
Overlay Network
Kubernetes creates virtual network interfaces for pods that are typically not associated with any specific firewalld zone. The cluster uses the following network ranges:
| Network | CIDR | Purpose |
|---|---|---|
| Pod | 10.42.0.0/16 | Inter-pod communication |
| Service | 10.43.0.0/16 | Kubernetes service discovery |
Firewall rules should target the primary physical interface; overlay network traffic is handled by Flannel VXLAN.
Port Requirements
Inter-Node Communication
The following ports must be permitted between all cluster nodes for Kubernetes and cluster infrastructure:
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 2379-2380 | TCP | Server nodes | Server nodes | etcd cluster communication |
| 6443 | TCP | All nodes | Server nodes | Kubernetes API server |
| 8472 | UDP | All nodes | All nodes | Flannel VXLAN overlay network |
| 10250 | TCP | All nodes | All nodes | Kubelet metrics and management |
| 5001 | TCP | All nodes | Server nodes | Spegel registry mirror |
| 9500-9503 | TCP | All nodes | All nodes | Longhorn management API |
| 8500-8504 | TCP | All nodes | All nodes | Longhorn agent communication |
| 10000-30000 | TCP | All nodes | All nodes | Longhorn data replication |
| 3260 | TCP | All nodes | All nodes | Longhorn iSCSI |
| 2049 | TCP | All nodes | All nodes | Longhorn RWX (NFS) |
Application Services Ports
The following ports must be accessible for application services within the cluster:
| Port | Protocol | Service |
|---|---|---|
| 6379 | TCP | Redis |
| 9093 | TCP | Alertmanager |
| 9095 | TCP | Kafka |
| 8086 | TCP | Telegraf (InfluxDB v2 listener) |
External Access Ports
The following ports must be accessible from external clients to cluster nodes:
| Port | Protocol | Service |
|---|---|---|
| 80 | TCP | HTTP ingress (Optional, redirects to HTTPS) |
| 443 | TCP | HTTPS ingress (Required, all services) |
| 9095 | TCP | Kafka (external client connections) |
| 6379 | TCP | Redis (external client connections) |
| 8125 | TCP/UDP | Telegraf (metrics collection) |
Network Configuration Guides
Deployment Type
Choose the guide that matches your deployment architecture:
| Guide | Description | Who Should Use This |
|---|---|---|
| Configuring Segregated Networks | Multi-NIC deployments with air-gapped cluster backplane | Most users - If you have separate interfaces for cluster traffic and external internet access |
| Shared Interface Setup | Single-NIC deployments where all traffic shares one interface | Users with a single network interface for both cluster traffic and external access |
Not sure which to use? If you have separate interfaces for cluster communication and external access, start with Configuring Segregated Networks. Use the shared interface guide only if your hardware is limited to a single NIC.
1 - Shared Interface Network Setup
Network configuration for standard single-NIC deployments where all traffic shares a single interface.
Overview
This guide covers network configuration for standard single-NIC deployments. In this architecture, all traffic—including internal cluster communication (East-West) and external internet access (North-South)—is routed through a single network interface.
Security Warning: Because all traffic shares the same interface and firewall zone, there is no physical or logical isolation between cluster management traffic and public-facing service traffic. For production environments requiring security isolation, see Configuring Segregated Networks.
Note: The installer script automatically detects if firewalld is enabled. If so, it will verify that the required inter-node ports are open through the firewall in the default zone before proceeding. If any required ports are missing, the installer will report an error and exit. Application service ports (such as Kafka, VictoriaMetrics, and Telegraf) are not checked by the installer as they are configurable.
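If you want to pre-check the inter-node ports yourself before running the installer, a rough equivalent of its check (a sketch, assuming firewalld is running and `public` is the default zone) is:

```shell
# Report any required inter-node port not yet open in the public zone
for p in 2379-2380/tcp 6443/tcp 8472/udp 10250/tcp 5001/tcp \
         9500-9503/tcp 8500-8504/tcp 10000-30000/tcp 3260/tcp 2049/tcp; do
  firewall-cmd --zone=public --query-port="$p" >/dev/null 2>&1 || echo "missing: $p"
done
```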
For network architecture, port requirements, and general information, see the Network Architecture Overview section in the main Networking Guide.
Firewall Configuration
Assign Interface to Default Zone
Assign your primary network interface to the default zone:
```shell
firewall-cmd --permanent --zone=public --change-interface=<interface>
firewall-cmd --reload
```
Replace <interface> with your actual interface name (e.g., eth0).
In a shared interface setup, you must manually configure firewall rules for both internal cluster traffic and external access, as K3s does not automatically manage the public zone.
```shell
# 1. Allow pod and service networks (Internal CIDRs)
firewall-cmd --permanent --zone=public --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=public --add-source=10.43.0.0/16

# 2. Kubernetes and Cluster Infrastructure (East-West Traffic)
# These ports must be opened manually for the cluster to function on a single interface.
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
firewall-cmd --permanent --zone=public --add-port=6443/tcp
firewall-cmd --permanent --zone=public --add-port=8472/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=5001/tcp
firewall-cmd --permanent --zone=public --add-port=9500-9503/tcp
firewall-cmd --permanent --zone=public --add-port=8500-8504/tcp
firewall-cmd --permanent --zone=public --add-port=10000-30000/tcp
firewall-cmd --permanent --zone=public --add-port=3260/tcp
firewall-cmd --permanent --zone=public --add-port=2049/tcp

# 3. External Access Ports (North-South Traffic)
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=9095/tcp
firewall-cmd --permanent --zone=public --add-port=6379/tcp
firewall-cmd --permanent --zone=public --add-port=8125/tcp
firewall-cmd --permanent --zone=public --add-port=8125/udp

# Apply changes
firewall-cmd --reload
```
Verification
Verify all port rules are applied:
```shell
firewall-cmd --zone=public --list-all
```
Expected output:
```
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 10.42.0.0/16 10.43.0.0/16
  services: dhcpv6-client ssh
  ports: 2379-2380/tcp 6443/tcp 8472/udp 10250/tcp 5001/tcp 9500-9503/tcp 8500-8504/tcp 10000-30000/tcp 3260/tcp 2049/tcp 80/tcp 443/tcp 9095/tcp 6379/tcp 8125/tcp 8125/udp
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
```
Note: Additional interfaces may appear in the zone (e.g., eth0 eth1) if firewalld auto-assigned them based on network configuration. This is expected and does not affect functionality.
Verify the interface is correctly assigned to the public zone:
```shell
firewall-cmd --get-active-zones
```
Expected output will show eth0 listed under the public zone:
```
public (active)
  interfaces: eth0
```
Troubleshooting
Troubleshooting
Nodes Cannot Communicate
Verify firewall rules allow inter-node traffic in the public zone:
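One way to check is to list what the zone currently allows and compare it against the inter-node port table above, for example:

```shell
# List the ports and source CIDRs currently allowed in the public zone
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-sources
```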
Test basic connectivity between nodes:
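For example (192.0.2.10 is a placeholder; substitute another node's IP, and note that `nc` may need to be installed separately):

```shell
# ICMP reachability to a peer node
ping -c 3 192.0.2.10

# TCP reachability to the Kubernetes API server port on a server node
nc -zv 192.0.2.10 6443
```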
Post-Installation Troubleshooting
Once the cluster is installed, if you encounter issues with pod-to-pod communication or service access, verify the following:
- Flannel Interface: Ensure the `flannel.1` interface is up and has the correct IP address.
- Network Routes: Verify that the pod and service CIDR routes are present in the routing table.
- Firewall Rules: Ensure all required Kubernetes and cluster ports are allowed in the `public` zone.
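The interface and route checks can be run with standard iproute2 commands, for example:

```shell
# Flannel VXLAN interface should be UP with an address from the pod CIDR
ip addr show flannel.1

# Pod network (10.42.0.0/16) routes should appear in the routing table
ip route | grep 10.42
```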
For detailed troubleshooting of Kubernetes-specific components (like Ingress or Pod connectivity), please refer to the Kubernetes Troubleshooting Guide.
2 - Configuring Segregated Networks
Multi-NIC deployment guide for air-gapped or segregated network setups
Overview
This guide covers configuring a cluster with separate interfaces for internal cluster communication and external internet access (also known as segregated or dual-homed deployments). In this setup, eth1 handles the internal cluster traffic (pod-to-pod, control plane) while eth0 provides public internet access.
Security Benefit: This configuration provides physical isolation between East-West (cluster) and North-South (external) traffic. The trusted zone allows unrestricted internal communication, while the public zone handles external access with controlled port exposure.
When configuring segregated networks with K3s, proper interface binding is essential. K3s uses the --flannel-iface flag to ensure pod traffic stays on the private network, and the --node-external-ip flag to advertise the public address for external access.
Important: K3s manages pod masquerading and service routing automatically. You only need to configure firewalld zones correctly and pass the proper flags to the K3s installer.
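As a sketch, a server install on this topology might pass the flags like so (the interface name `eth1` and the public IP 198.51.100.10 are examples; adjust them to your environment):

```shell
# Example K3s server install: bind Flannel to the internal NIC and
# advertise the public IP for external access
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --flannel-iface=eth1 \
  --node-external-ip=198.51.100.10" sh -
```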
Complete, step-by-step instructions follow.
Prerequisites
Before starting, ensure:
- Operating system is installed and updated on all nodes
- Network connectivity between nodes is available
- SSH access is configured for all cluster nodes
This guide configures separate zones for internal cluster traffic and external access.
Assign Interfaces to Zones
The internal interface is placed in the trusted zone so that pod-to-pod and control plane traffic flows unrestricted:
```shell
# Assign eth0 (external/internet) to public zone
firewall-cmd --permanent --zone=public --change-interface=eth0

# Assign eth1 (internal/cluster) to trusted zone
firewall-cmd --permanent --zone=trusted --change-interface=eth1

# Allow pod and service CIDRs in trusted zone (required for pod communication)
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16

# Reload firewall
firewall-cmd --reload
```
Open the necessary ports on the public zone for external access:
```shell
# External access ports
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=9095/tcp
firewall-cmd --permanent --zone=public --add-port=6379/tcp
firewall-cmd --permanent --zone=public --add-port=8125/tcp
firewall-cmd --permanent --zone=public --add-port=8125/udp

# Apply changes
firewall-cmd --reload
```
Note: K3s automatically creates iptables rules for internal cluster ports (6443, 10250, 2379-2380, 8472, 5001, 9500-9503, 8500-8504, 10000-30000, 3260, 2049) when using --flannel-iface=eth1. Pod and service CIDRs (10.42.0.0/16 and 10.43.0.0/16) are already allowed in the trusted zone via the --add-source commands above.
Verify Zone Configuration
```shell
firewall-cmd --zone=public --list-all
firewall-cmd --zone=trusted --list-all
```
Expected output for public zone:
```
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 eth2
  sources:
  services: dhcpv6-client ssh cockpit
  ports: 80/tcp 443/tcp 9095/tcp 6379/tcp 8125/tcp 8125/udp
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
```
Expected output for trusted zone:
```
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: eth1
  sources: 10.42.0.0/16 10.43.0.0/16
  services: ssh mdns
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
```
Note: Additional interfaces may appear in a zone (e.g., eth0 eth2) if firewalld auto-assigned them based on network configuration. This is expected and does not affect functionality.
Single-NIC Alternative
If you only have a single network interface, see the Shared Interface Setup guide instead. This guide is specifically for multi-NIC deployments with separate interfaces for cluster and external traffic.
Troubleshooting
Verify Zone Configuration
If pods cannot communicate with services, verify the trusted zone has the correct sources configured:
```shell
firewall-cmd --zone=trusted --list-all
```
Expected output:
```
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: eth1
  sources: 10.42.0.0/16 10.43.0.0/16
  services: ssh mdns
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
```
Ensure both 10.42.0.0/16 (pod network) and 10.43.0.0/16 (service network) are listed under sources. If missing, re-run:
```shell
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
firewall-cmd --reload
```