2.5. Planning the network¶
The network configuration for Acronis Cyber Infrastructure depends on the services you plan to deploy.
If you want to use only the Backup Gateway and storage services, configure two networks: one for internal and one for external traffic.
If you plan to deploy the compute service on top of the storage cluster, you can create a minimum network configuration for evaluation purposes, or expand it to an advanced network configuration, which is recommended for production.
The minimum configuration includes two networks: one for internal and one for external traffic.
The recommended configuration expands to five networks connected to the following logical network interfaces:
- One bonded connection for internal management and storage traffic
- One bonded connection with four VLANs over it:
  - For overlay network traffic between VMs
  - For management via the admin and self-service panels, compute API, SSH, and SNMP, as well as for public export of iSCSI, NFS, S3, and Backup Gateway data
  - For external VM traffic
  - For pulling VM backups by third-party backup management systems
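As an illustration of this layout, the following sketch creates the two bonds and the four VLAN interfaces with NetworkManager on a single node. It is a minimal sketch only: the interface names (eth0 to eth3), the 802.3ad bond mode, and the VLAN IDs are assumptions for the example, so substitute the values from your own network plan.

```python
import subprocess

def nmcli(*args: str) -> None:
    """Run an nmcli connection command, raising an error if it fails."""
    subprocess.run(["nmcli", "connection", *args], check=True)

# Bond for internal management and storage traffic
# (interface names are placeholders).
nmcli("add", "type", "bond", "con-name", "bond0", "ifname", "bond0",
      "bond.options", "mode=802.3ad")
for port in ("eth0", "eth1"):
    nmcli("add", "type", "ethernet", "con-name", f"bond0-{port}",
          "ifname", port, "master", "bond0")

# Bond that carries the four VLANs.
nmcli("add", "type", "bond", "con-name", "bond1", "ifname", "bond1",
      "bond.options", "mode=802.3ad")
for port in ("eth2", "eth3"):
    nmcli("add", "type", "ethernet", "con-name", f"bond1-{port}",
          "ifname", port, "master", "bond1")

# One VLAN-tagged logical interface per traffic group (IDs are placeholders):
# overlay, management, external VM traffic, VM backups.
for vlan_id in (100, 101, 102, 103):
    nmcli("add", "type", "vlan", "con-name", f"bond1.{vlan_id}",
          "dev", "bond1", "id", str(vlan_id))
```

After the connections exist, the respective traffic types still need to be assigned to these networks, as described later in this section.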
2.5.1. General network requirements¶
- Internal storage traffic must be separated from other traffic types.
- The network for internal traffic can be non-routable, but must provide a bandwidth of at least 10 Gbit/s.
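To get a rough idea of whether a link meets the 10 Gbit/s requirement, you can stream data between two nodes and measure the rate. This is only a sanity-check sketch, assuming Python is available on both nodes and that the test port (5201, chosen arbitrarily here) is open between them; use a dedicated tool such as iperf3 for real measurements.

```python
import socket
import time

PORT = 5201                 # arbitrary test port; open it between the nodes first
CHUNK = b"\0" * (1 << 20)   # 1 MiB payload
SECONDS = 5                 # measurement window

def receiver() -> None:
    """Run on node A: accept one connection and discard incoming data."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 20):
                pass

def sender(host: str) -> None:
    """Run on node B: stream zeros to node A and report the achieved rate."""
    with socket.create_connection((host, PORT)) as sock:
        sent, start = 0, time.monotonic()
        while time.monotonic() - start < SECONDS:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
        gbits = sent * 8 / (time.monotonic() - start) / 1e9
        print(f"~{gbits:.2f} Gbit/s")
```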
2.5.2. Network limitations¶
- Nodes are added to clusters by their IP addresses, not FQDNs. Changing the IP address of a node in the cluster will remove that node from the cluster. If you plan to use DHCP in a cluster, make sure that IP addresses are bound to the MAC addresses of the nodes’ network interfaces.
- Each node must have Internet access so that updates can be installed.
- The MTU value is set to 1500 by default. Refer to Step 2: Configuring the network for information on setting an optimal MTU value; see also the path MTU probing sketch after this list.
- Network time synchronization is required for correct statistics. It is enabled by default via the chronyd service. If you want to use ntpdate or ntpd, stop and disable chronyd first.
- The Internal management traffic type is assigned automatically during installation and cannot be changed in the admin panel later.
- Even though the management node can be accessed from a web browser by the host name, you still need to specify its IP address, not the host name, during installation.
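To check which MTU actually fits through the path between two nodes, you can probe it with ICMP packets that have the Don't Fragment flag set. A minimal sketch, assuming Linux iputils ping and a hypothetical peer name storage-node-2:

```python
import subprocess

def max_payload(host: str, lo: int = 1200, hi: int = 9000) -> int:
    """Binary-search the largest ICMP payload that passes with DF set."""
    def passes(size: int) -> bool:
        return subprocess.run(
            ["ping", "-M", "do", "-c", "1", "-W", "1", "-s", str(size), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if passes(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Path MTU = ICMP payload + 8 bytes ICMP header + 20 bytes IPv4 header.
print(max_payload("storage-node-2") + 28)
```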
2.5.3. Per-node network requirements and recommendations¶
All network interfaces on a node must be connected to different subnets. A network interface can be a VLAN-tagged logical interface, an untagged bond, or an Ethernet link.
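One way to verify the different-subnets rule on a node is to group its interfaces by IPv4 subnet and flag duplicates. A minimal sketch, assuming a Linux node with the iproute2 ip tool (used here for its JSON output):

```python
import ipaddress
import json
import subprocess
from collections import defaultdict

# Group this node's interfaces by IPv4 subnet using iproute2's JSON output.
by_subnet = defaultdict(list)
for iface in json.loads(subprocess.check_output(["ip", "-j", "addr"])):
    for addr in iface.get("addr_info", []):
        if addr.get("family") == "inet":
            net = ipaddress.ip_interface(
                f'{addr["local"]}/{addr["prefixlen"]}').network
            by_subnet[net].append(iface["ifname"])

for net, ifaces in sorted(by_subnet.items(), key=lambda kv: str(kv[0])):
    flag = "" if len(ifaces) == 1 else "  <-- more than one interface!"
    print(f"{net}  {', '.join(ifaces)}{flag}")
```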
Even though cluster nodes have the necessary iptables rules configured, we recommend using an external firewall for untrusted public networks, such as the Internet.

Ports that will be opened on cluster nodes depend on the services that will run on the node and the traffic types associated with them. Before enabling a specific service on a cluster node, you need to assign the respective traffic type to a network this node is connected to. Assigning a traffic type to a network configures a firewall on the nodes connected to this network, opens specific ports on the node network interfaces, and sets the necessary iptables rules.

The table below lists all the required ports and the services associated with them:
| Service | Traffic type | Port | Description |
|---|---|---|---|
| Web control panel | Admin panel* | TCP 8888 | External access to the admin panel. |
| | Self-service panel | TCP 8800 | External access to the self-service panel. |
| Management | Internal management | Any available port | Internal cluster management and transfers of node monitoring data to the admin panel. |
| Metadata service | Storage | Any available port | Internal communication between MDS services, as well as with chunk services and clients. |
| Chunk service | Storage | Any available port | Internal communication with MDS services and clients. |
| Client | Storage | Any available port | Internal communication with MDS and chunk services. |
| Backup Gateway | ABGW public | TCP 44445 | External data exchange with Acronis Backup agents and Acronis Cyber Backup Cloud. |
| | ABGW private | Any available port | Internal management of and data exchange between multiple Backup Gateway services. |
| iSCSI | iSCSI | TCP 3260 | External data exchange with the iSCSI access point. |
| S3 | S3 public | TCP 80, 443 | External data exchange with the S3 access point. |
| | OSTOR private | Any available port | Internal data exchange between multiple S3 services. |
| NFS | NFS | TCP/UDP 111, 892, 2049 | External data exchange with the NFS access point. |
| | OSTOR private | Any available port | Internal data exchange between multiple NFS services. |
| Compute | Compute API* | TCP 5000 | Identity API v3 |
| | | TCP 6080 | noVNC Websocket Proxy |
| | | TCP 8004 | Orchestration Service API v1 |
| | | TCP 8041 | Gnocchi API (billing metering service) |
| | | TCP 8774 | Compute API |
| | | TCP 8776 | Block Storage API v3 |
| | | TCP 8780 | Placement API |
| | | TCP 9292 | Image Service API v2 |
| | | TCP 9313 | Key Manager API v1 |
| | | TCP 9513 | Container Infrastructure Management API (Kubernetes service) |
| | | TCP 9696 | Networking API v2 |
| | | TCP 9888 | Octavia API v2 (load balancer service) |
| | VM private | UDP 4789 | Network traffic between VMs in compute virtual networks. |
| | | TCP 5900-5999 | VNC console traffic. |
| | VM backups | TCP 49300-65535 | External access to NBD endpoints. |
| SSH | SSH | TCP 22 | Remote access to nodes via SSH. |
| SNMP | SNMP* | UDP 161 | External access to storage cluster monitoring statistics via the SNMP protocol. |

The Compute API ports provide external access to the standard OpenStack API endpoints named in the Description column.

* Ports for these traffic types must only be open on management nodes.
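Once the traffic types are assigned, you can verify that the externally exposed ports are reachable by probing them over TCP. A minimal sketch: the node address is a placeholder, the port selection is taken from the table above (trim it to the services you actually deploy), and this covers TCP only; UDP ports such as SNMP 161 need a different probe.

```python
import socket

NODE = "192.168.1.10"   # placeholder; use your node's address

# External TCP ports from the table above; trim to the services you deploy.
CHECKS = {
    "Admin panel": 8888,
    "Self-service panel": 8800,
    "ABGW public": 44445,
    "iSCSI": 3260,
    "S3 public (HTTPS)": 443,
    "Compute API (Identity)": 5000,
}

def tcp_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in CHECKS.items():
    state = "open" if tcp_open(NODE, port) else "closed or filtered"
    print(f"{name:24} TCP {port:<6} {state}")
```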
2.5.4. Kubernetes network requirements¶
To be able to deploy Kubernetes clusters in the compute cluster and work with them, make sure your network configuration allows the compute and Kubernetes services to send the following network requests:
- The request to bootstrap the etcd cluster in the public discovery service - from all management nodes to https://discovery.etcd.io via the public network.
- The request to obtain the “kubeconfig” file - from all management nodes via the public network:
- If high availability (HA) for the master VM is enabled, the request is sent to the public or floating IP address of the load balancer VM associated with Kubernetes API on port 6443.
- If HA for the master VM is disabled, the request is sent to the public or floating IP address of the Kubernetes master VM on port 6443.
- Requests from Kubernetes master VMs to the compute APIs (the Compute API traffic type) via the network with the VM public traffic type (via a publicly available VM network interface or a virtual router with enabled SNAT). By default, the compute API is exposed via the IP address of the management node (or to its virtual IP address if high availability is enabled). But you can also access the compute API via a DNS name (refer to Setting a DNS name for the compute API).
- The request to update the etcd cluster member state in the public discovery service - from Kubernetes master VMs to https://discovery.etcd.io via the network with the VM public traffic type (via a publicly available VM network interface or a virtual router with enabled SNAT).
- The request to download container images from the public Docker Hub repository - from Kubernetes master VMs to https://registry-1.docker.io via the network with the VM public traffic type (via a publicly available VM network interface or a virtual router with enabled SNAT).
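To confirm that a management node can actually reach the public discovery service, you can issue the bootstrap request by hand. A minimal sketch: the /new endpoint used here is the standard etcd discovery bootstrap call, and size=3 is only an example cluster size.

```python
import urllib.request

# Verify outbound HTTPS access to the public etcd discovery service.
URL = "https://discovery.etcd.io/new?size=3"

with urllib.request.urlopen(URL, timeout=10) as resp:
    print("Discovery service reachable, got:", resp.read().decode())
```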
It is also required that the network where you create a Kubernetes cluster does not overlap with these default networks:
- 10.100.0.0/24—Used for pod-level networking
- 10.254.0.0/16—Used for allocating Kubernetes cluster IP addresses
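You can check a candidate network against these reserved ranges with Python's ipaddress module. A minimal sketch; the example CIDRs in the calls are placeholders:

```python
import ipaddress

RESERVED = (
    ipaddress.ip_network("10.100.0.0/24"),  # pod-level networking
    ipaddress.ip_network("10.254.0.0/16"),  # Kubernetes cluster IP addresses
)

def conflicts(cidr: str) -> list:
    """Return the reserved networks that the given network overlaps with."""
    net = ipaddress.ip_network(cidr)
    return [reserved for reserved in RESERVED if net.overlaps(reserved)]

print(conflicts("10.254.10.0/24"))    # overlaps: [IPv4Network('10.254.0.0/16')]
print(conflicts("192.168.100.0/24"))  # no overlap: []
```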
2.5.5. Network recommendations for clients¶
The following table lists the maximum network performance a client can expect from the specified storage network interface. We recommend using 10 Gbps or faster network hardware between any two cluster nodes and minimizing network latency, especially if SSD disks are used.
Storage network interface | Node max. I/O | VM max. I/O (replication) | VM max. I/O (erasure coding) |
---|---|---|---|
1 Gbps | 100 MB/s | 100 MB/s | 70 MB/s |
2 x 1 Gbps | ~175 MB/s | 100 MB/s | ~130 MB/s |
3 x 1 Gbps | ~250 MB/s | 100 MB/s | ~180 MB/s |
10 Gbps | 1 GB/s | 1 GB/s | 700 MB/s |
2 x 10 Gbps | 1.75 GB/s | 1 GB/s | 1.3 GB/s |
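For context on these figures, the raw link capacity converts to bytes per second as shown below; the gap between raw capacity and the table's values reflects protocol, replication, and erasure-coding overhead. Illustrative arithmetic only:

```python
# Raw capacity of each link in MB/s (1 Gbit/s = 125 MB/s); the table's lower
# figures reflect protocol, replication, and erasure-coding overhead.
for name, gbps in [("1 Gbps", 1), ("2 x 1 Gbps", 2),
                   ("10 Gbps", 10), ("2 x 10 Gbps", 20)]:
    print(f"{name:11}  raw ≈ {gbps * 125:.0f} MB/s")
```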