2.3. Planning node hardware configurations¶
Acronis Cyber Infrastructure works on top of commodity hardware, so you can create a cluster from regular servers, disks, and network cards. Still, to achieve optimal performance, a number of requirements must be met and a number of recommendations should be followed.
Note
If you are unsure of what hardware to choose, consult your sales representative. You can also use the online hardware calculator. If you want to avoid the hassle of testing, installing, and configuring hardware and/or software, consider using Acronis Appliance. Out of the box, you will get an enterprise-grade, fault-tolerant, five-node infrastructure solution with great storage performance that runs in a 3U form factor.
2.3.1. Hardware limits¶
The following table lists the current hardware limits for Acronis Cyber Infrastructure servers:
Hardware | Theoretical | Certified |
---|---|---|
RAM | 64 TB | 1 TB |
CPU | 5120 logical CPUs | 384 logical CPUs |
A logical CPU is a core (thread) in a multicore (multithreading) processor.
2.3.2. Hardware requirements¶
The following table lists the minimum and recommended disk requirements according to the disk roles (refer to Storage architecture overview):
Disk role | Quantity | Minimum | Recommended |
---|---|---|---|
System | One disk per node | 100 GB SATA/SAS HDD | 250 GB SATA/SAS SSD |
Metadata | One disk per node; five disks recommended for one cluster | 100 GB enterprise-grade SSD with power loss protection, 1 DWPD endurance minimum | |
Cache | Optional; one SSD disk per 4-12 HDDs | 100+ GB enterprise-grade SSD with power loss protection and 75 MB/s sequential write performance per serviced HDD; 1 DWPD endurance minimum, 10 DWPD recommended | |
Storage | Optional; at least one per cluster | 100 GB minimum, 16 TB maximum recommended; SATA/SAS HDD or SATA/SAS/NVMe SSD (enterprise-grade with power loss protection, 1 DWPD endurance minimum) | |
The following table lists the amount of RAM and CPU cores that will be reserved on one node, according to the services you will use:
Service | RAM | CPU cores* |
---|---|---|
System | 6 GB | 2 cores |
Storage services: each disk with Storage role or Cache role (any size)** | 1 GB | 0.2 cores |
Compute service*** | 8 GB | 3 cores |
Load balancer service*** | 1 GB | 1 core |
Kubernetes service*** | 2 GB | 2 cores |
S3 | 4.5 GB | 3 cores |
Backup Gateway**** | 1 GB | 2 cores |
NFS (service) | 4 GB | 2 cores |
NFS (each share) | 0.5 GB | 0.5 cores |
iSCSI (service) | 1 GB | 1 core |
iSCSI (each volume) | 0.1 GB | 0.5 cores |
* 64-bit x86 AMD-V or Intel VT processors with hardware virtualization extensions enabled. For Intel processors, enable “unrestricted guest” and VT-x with Extended Page Tables in BIOS. It is recommended to have the same CPU models on each node to avoid VM live migration issues. A CPU core here is a physical core in a multicore processor (hyperthreading is not taken into account).
** For clusters larger than 1 PB of physical space, please add an additional 0.5 GB of RAM per Metadata service.
*** The compute, load balancer, and Kubernetes service requirements only refer to the management node.
**** When working with public clouds and NFS, Backup Gateway consumes as much RAM and CPU as with local storage.
As for the network, at least 2 x 10 GbE interfaces are recommended for internal and external traffic; 25 GbE, 40 GbE, and 100 GbE are even better. Bonding is recommended. For external traffic, you can start with 1 GbE links, but they may limit cluster throughput under modern workloads.
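If you manage bonds at the operating system level, the following generic Linux sketch (using NetworkManager) shows the idea. The interface names eth0 and eth1 and the 802.3ad (LACP) mode are placeholders only; the actual networking workflow in Acronis Cyber Infrastructure may differ, so treat this as an illustration rather than the product procedure. Note that 802.3ad also requires LACP to be configured on the switch.

# nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
# nmcli con add type bond-slave ifname eth0 master bond0
# nmcli con add type bond-slave ifname eth1 master bond0
# nmcli con up bond0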
Let’s consider some examples and calculate the requirements for particular cases.
If you have 1 node (1 system disk and 4 storage disks) and want to use it for Backup Gateway, see the table below for the calculations.
Service | Each node |
---|---|
System | 6 GB, 2 cores |
Storage services | 4 storage disks with 1 GB and 0.2 cores, that is 4 GB and 0.8 cores |
Backup Gateway | 1 GB, 2 cores |
Total | 11 GB of RAM and 4.8 cores |

If you have 3 nodes (1 system disk and 4 storage disks each) and want to use them for the compute service, see the table below for the calculations.

Service | Management node | Each secondary node |
---|---|---|
System | 6 GB, 2 cores | 6 GB, 2 cores |
Storage services | 4 storage disks with 1 GB and 0.2 cores, that is 4 GB and 0.8 cores | 4 storage disks with 1 GB and 0.2 cores, that is 4 GB and 0.8 cores |
Compute | 8 GB, 3 cores | |
Load balancer | 1 GB, 1 core | |
Kubernetes | 2 GB, 2 cores | |
Total | 21 GB of RAM and 8.8 cores | 10 GB of RAM and 2.8 cores |

If you have 5 nodes (1 system+storage disk and 10 storage disks each) and want to use them for Backup Gateway, see the table below for the calculations. Note that if the compute cluster is not deployed, the requirements are the same for the management and the secondary nodes.

Service | Each node |
---|---|
System | 6 GB, 2 cores |
Storage services | 11 storage disks with 1 GB and 0.2 cores, that is 11 GB and 2.2 cores |
Backup Gateway | 1 GB, 2 cores |
Total | 18 GB of RAM and 6.2 cores |

If you have 10 nodes (1 system disk, 1 cache disk, and 3 storage disks each) and want to use them for the compute service, see the table below for the calculations. Note that three nodes are used for management node high availability, and each of them meets the requirements for the management node.

Service | Each management node | Each secondary node |
---|---|---|
System | 6 GB, 2 cores | 6 GB, 2 cores |
Storage services | 3 storage + 1 cache disks with 1 GB and 0.2 cores, that is 4 GB and 0.8 cores | 3 storage + 1 cache disks with 1 GB and 0.2 cores, that is 4 GB and 0.8 cores |
Compute | 8 GB, 3 cores | |
Load balancer | 1 GB, 1 core | |
Kubernetes | 2 GB, 2 cores | |
Total | 21 GB of RAM and 8.8 cores | 10 GB of RAM and 2.8 cores |
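For example, the totals for the single-node Backup Gateway case above can be reproduced with any calculator, here with the standard bc utility (the per-service values are taken from the reservation table):

# echo "6 + 4*1 + 1" | bc      # RAM in GB: system (6) + four storage disks (4 x 1) + Backup Gateway (1) = 11
# echo "2 + 4*0.2 + 2" | bc    # CPU cores: system (2) + four storage disks (4 x 0.2) + Backup Gateway (2) = 4.8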
In general, the more resources you provide to your cluster, the better it works: all extra RAM is used to cache disk reads, and extra CPU cores improve performance and reduce latency.
2.3.3. Hardware recommendations¶
In general, Acronis Cyber Infrastructure works on the same servers and components that are recommended for Red Hat Enterprise Linux 7, including AMD EPYC processors.
The following recommendations further explain the benefits added by specific hardware in the hardware requirements table. Use them to configure your cluster in an optimal way.
2.3.3.1. Storage cluster composition recommendations¶
Designing an efficient storage cluster means finding a compromise between performance and cost that suits your purposes. When planning, keep in mind that a cluster with many nodes and few disks per node offers higher performance, while a cluster with the minimum number of nodes (3) and a lot of disks per node is cheaper. See the following table for more details.
Design considerations | Minimum nodes (3), many disks per node | Many nodes, few disks per node (all-flash configuration) |
---|---|---|
Optimization | Lower cost. | Higher performance. |
Free disk space to reserve | More space to reserve for cluster rebuilding, as fewer healthy nodes will have to store the data from a failed node. | Less space to reserve for cluster rebuilding, as more healthy nodes will have to store the data from a failed node. |
Redundancy | Fewer erasure coding choices. | More erasure coding choices. |
Cluster balance and rebuilding performance | Worse balance and slower rebuilding. | Better balance and faster rebuilding. |
Network capacity | More network bandwidth required to maintain cluster performance during rebuilding. | Less network bandwidth required to maintain cluster performance during rebuilding. |
Favorable data type | Cold data (e.g., backups). | Hot data (e.g., virtual environments). |
Sample server configuration | Supermicro SSG-6047R-E1R36L (Intel Xeon E5-2620 v1/v2 CPU, 32 GB RAM, 36 x 12 TB HDDs, a 500 GB system disk). | Supermicro SYS-2028TP-HC0R-SIOM (4 x Intel E5-2620 v4 CPUs, 4 x 16 GB RAM, 24 x 1.9 TB Samsung PM1643 SSDs). |
Take note of the following:
- These considerations only apply if the failure domain is the host.
- The speed of rebuilding in the replication mode does not depend on the number of nodes in the cluster.
- Acronis Cyber Infrastructure supports hundreds of disks per node. If you plan to use more than 36 disks per node, contact our sales engineers who will help you design a more efficient cluster.
2.3.3.2. General hardware recommendations¶
- At least five nodes are required for a production environment. This is to ensure that the cluster can survive failure of two nodes without data loss.
- One of the strongest features of Acronis Cyber Infrastructure is scalability: the bigger the cluster, the better it performs. For improved resilience, performance, and fault tolerance, it is recommended to create production clusters from at least ten nodes.
- Even though a cluster can be created on top of varied hardware, using nodes with similar hardware will yield better cluster performance, capacity, and overall balance.
- Any cluster infrastructure must be tested extensively before it is deployed to production. Such common points of failure as SSD drives and network adapter bonds must always be thoroughly verified.
- It is not recommended for production to run Acronis Cyber Infrastructure on top of SAN/NAS hardware that has its own redundancy mechanisms. Doing so may negatively affect performance and data availability.
- To achieve the best performance, keep at least 20 percent of the cluster capacity free.
- During disaster recovery, Acronis Cyber Infrastructure may need additional disk space for replication. Make sure to reserve at least as much space as any single storage node has.
- It is recommended to have the same CPU models on each node to avoid VM live migration issues. For more details, refer to the Administrator Command Line Guide.
- If you plan to use Backup Gateway to store backups in the cloud, make sure the local storage cluster has plenty of logical space for staging (keeping backups locally before sending them to the cloud). For example, if you perform backups daily, provide enough space for at least 1.5 days’ worth of backups. For more details, refer to the Administrator Guide.
- It is recommended to use UEFI instead of BIOS if your hardware supports it, particularly if you use NVMe drives.
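For example, on a node that is already running Linux, you can check which firmware mode it booted in by looking for the EFI variables directory (a generic Linux check, not specific to Acronis Cyber Infrastructure):

# [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"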
2.3.3.3. Storage hardware recommendations¶
- It is possible to use disks of different sizes in the same cluster. However, keep in mind that, given the same IOPS, smaller disks offer higher performance per terabyte of data compared to bigger disks. It is recommended to group disks with the same IOPS per terabyte in the same tier.
- Using the recommended SSD models may help you avoid loss of data. Not all SSD drives can withstand enterprise workloads and may break down in the first months of operation, resulting in TCO spikes.
- SSD memory cells can withstand a limited number of rewrites. An SSD drive should be viewed as a consumable that you will need to replace after a certain time. Consumer-grade SSD drives can withstand a very low number of rewrites (so low, in fact, that these numbers are not shown in their technical specifications). SSD drives intended for storage clusters must offer at least 1 DWPD endurance (10 DWPD is recommended). The higher the endurance, the less often SSDs will need to be replaced, and this will improve TCO.
- Many consumer-grade SSD drives can ignore disk flushes and falsely report to operating systems that data was written while it, in fact, was not. Examples of such drives include OCZ Vertex 3, Intel 520, Intel X25-E, and Intel X-25-M G2. These drives are known to be unsafe in terms of data commits, they should not be used with databases, and they may easily corrupt the file system in case of a power failure. For these reasons, use enterprise-grade SSD drives that obey the flush rules (for more information, refer to http://www.postgresql.org/docs/current/static/wal-reliability.html). Enterprise-grade SSD drives that operate correctly usually have the power loss protection property in their technical specification. Some of the market names for this technology are Enhanced Power Loss Data Protection (Intel), Cache Power Protection (Samsung), Power-Failure Support (Kingston), and Complete Power Fail Protection (OCZ).
- It is highly recommended to check the data flushing capabilities of your disks as explained in Checking disk data flushing capabilities.
- Consumer-grade SSD drives usually have unstable performance and are not suited to sustained enterprise workloads. For this reason, pay attention to sustained load tests when choosing SSDs.
- Performance of SSD disks may depend on their size. Lower-capacity drives (100 to 400 GB) may perform much slower (sometimes up to ten times slower) than higher-capacity ones (1.9 to 3.8 TB). Check the drive performance and endurance specifications before purchasing hardware.
- Using NVMe or SAS SSDs for write caching improves random I/O performance and is highly recommended for all workloads with heavy random access (for example, iSCSI volumes). In turn, SATA disks are best suited for SSD-only configurations but not write caching.
- Using shingled magnetic recording (SMR) HDDs is strongly not recommended, even for backup scenarios. Such disks have unpredictable latency, which may lead to unexpected temporary service outages and sudden performance degradations.
- Running metadata services on SSDs improves cluster performance. To also minimize CAPEX, the same SSDs can be used for write caching.
- If capacity is the main goal and you need to store infrequently accessed data, choose SATA disks over SAS ones. If performance is the main goal, choose NVMe or SAS disks over SATA ones.
- The more disks per node, the lower the CAPEX. As an example, a cluster created from ten nodes with two disks in each will be less expensive than a cluster created from twenty nodes with one disk in each.
- Using SATA HDDs with one SSD for caching is more cost effective than using only SAS HDDs without such an SSD.
- Create hardware or software RAID1 volumes for system disks, using RAID or HBA controllers respectively, to ensure their high performance and availability (see the software RAID sketch after this list).
- Use HBA controllers, as they are less expensive and easier to manage than RAID controllers.
- Disable all RAID controller caches for SSD drives. Modern SSDs have good performance that can be reduced by a RAID controller’s write and read cache. It is recommended to disable caching for SSD drives and leave it enabled only for HDD drives.
- If you use RAID controllers, do not create RAID volumes from HDDs intended for storage. Each storage HDD needs to be recognized by Acronis Cyber Infrastructure as a separate device.
- If you use RAID controllers with caching, equip them with backup battery units (BBUs), to protect against cache loss during power outages.
- Disk block size (for example, 512b or 4K) is not important and has no effect on performance.
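To illustrate the software RAID1 recommendation for system disks from the list above, here is a generic Linux sketch using mdadm. The device names /dev/sdX and /dev/sdY are placeholders, and in practice the mirror is usually created during operating system installation rather than on a node that is already running workloads:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
# mdadm --detail /dev/md0      # verify that both members are active and in sync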
2.3.3.4. Network hardware recommendations¶
- Use separate networks (and, ideally, separate network adapters) for internal and public traffic. Doing so will prevent public traffic from affecting cluster I/O performance and also prevent possible denial-of-service attacks from the outside.
- Network latency dramatically reduces cluster performance. Use quality network equipment with low latency links. Do not use consumer-grade network switches.
- Do not use desktop network adapters like Intel EXPI9301CTBLK or Realtek 8129 as they are not designed for heavy load and may not support full-duplex links. Also use non-blocking Ethernet switches.
- To avoid intrusions, Acronis Cyber Infrastructure should be on a dedicated internal network inaccessible from outside.
- Use one 1 Gbit/s link for every two HDDs on the node (rounded up). For one or two HDDs on a node, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that 1 Gbit/s Ethernet networks can deliver 110-120 MB/s of throughput, which is close to the sequential I/O performance of a single disk. Since several disks on a server can deliver higher throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck.
- For maximum sequential I/O performance, use one 1 Gbit/s link per hard drive, or one 10 Gbit/s link per node. Even though I/O operations are most often random in real-life scenarios, sequential I/O is important in backup scenarios.
- For maximum overall performance, use one 10 Gbit/s link per node (or two bonded for high network availability).
- It is not recommended to configure 1 Gbit/s network adapters to use non-default MTUs (for example, 9000-byte jumbo frames). Such settings require additional configuration of switches and often lead to human error. 10+ Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance (see the example after this list).
- The currently supported Fibre Channel host bus adapters (HBAs) are QLogic QLE2562-CK and QLogic ISP2532.
- It is recommended to use Mellanox ConnectX-4 and ConnectX-5 InfiniBand adapters. Mellanox ConnectX-2 and ConnectX-3 cards are not supported.
- Adapters using the BNX2X driver, such as Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet / HPE FlexFabric 10Gb 2-port 536FLB Adapter, are not recommended. They limit MTU to 3616, which affects the cluster performance.
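As an illustration of the jumbo frame recommendation above, a 10+ Gbit/s interface can be switched to a 9000-byte MTU with standard Linux tools. The interface name eth0 is a placeholder, and the same MTU must also be allowed on all switch ports along the path:

# ip link set dev eth0 mtu 9000
# ip link show dev eth0 | grep -o 'mtu [0-9]*'      # confirm the new MTU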
2.3.4. Hardware and software limitations¶
Hardware limitations:
- Each management node must have at least two disks (one for system and metadata, one for storage).
- Each secondary node must have at least three disks (one for system, one for metadata, one for storage).
- Three servers are required to test all of the product features.
- The system disk must have at least 100 GB of space.
- The admin panel requires a Full HD monitor to be displayed correctly.
- The maximum supported physical partition size is 254 TiB.
Software limitations:
- One node can be a part of only one cluster.
- Only one S3 cluster can be created on top of a storage cluster.
- Only predefined redundancy modes are available in the admin panel.
- Thin provisioning is always enabled for all data and cannot be configured otherwise.
- The admin panel has been tested to work at resolutions 1280x720 and higher in the following web browsers: latest Firefox, Chrome, Safari.
For network limitations, refer to Network limitations.
2.3.5. Minimum storage configuration¶
The minimum configuration described in this table will let you evaluate the features of the storage cluster. It is not meant for production.
Node # | 1st disk role | 2nd disk role | 3rd+ disk roles | Access points |
---|---|---|---|---|
1 | System | Metadata | Storage | iSCSI, S3 private, S3 public, NFS, Backup Gateway |
2 | System | Metadata | Storage | iSCSI, S3 private, S3 public, NFS, Backup Gateway |
3 | System | Metadata | Storage | iSCSI, S3 private, S3 public, NFS, Backup Gateway |
3 nodes in total | | 3 MDSs in total | 3+ CSs in total | Access point services run on three nodes in total. |
Note
SSD disks can be assigned System, Metadata, and Cache roles at the same time, freeing up more disks for the storage role.
Even though three nodes are recommended for the minimum configuration, you can start evaluating Acronis Cyber Infrastructure with just one node and add more nodes later. At the very least, a storage cluster must have one metadata service and one chunk service running. A single-node installation will let you evaluate services such as iSCSI, Backup Gateway, etc. However, such a configuration will have two key limitations:
- Just one MDS will be a single point of failure. If it fails, the entire cluster will stop working.
- Just one CS will be able to store just one chunk replica. If it fails, the data will be lost.
Important
If you deploy Acronis Cyber Infrastructure on a single node, you must take care of making its storage persistent and redundant, to avoid data loss. If the node is physical, it must have multiple disks so you can replicate the data among them. If the node is a virtual machine, make sure that this VM is made highly available by the solution it runs on.
Note
Backup Gateway works with the local object storage in the staging mode. It means that the data to be replicated, migrated, or uploaded to a public cloud is first stored locally and only then sent to the destination. It is vital that the local object storage is persistent and redundant so the local data does not get lost. There are multiple ways to ensure the persistence and redundancy of the local storage. You can deploy your Backup Gateway on multiple nodes and select a good redundancy mode. If your gateway is deployed on a single node in Acronis Cyber Infrastructure, you can make its storage redundant by replicating it among multiple local disks. If your entire Acronis Cyber Infrastructure installation is deployed in a single virtual machine with the sole purpose of creating a gateway, make sure this VM is made highly available by the solution it runs on.
2.3.6. Recommended storage configuration¶
It is recommended to have at least five metadata services to ensure that the cluster can survive simultaneous failure of two nodes without data loss. The following configuration will help you create clusters for production environments:
Node # | 1st disk role | 2nd disk role | 3rd+ disk roles | Access points |
---|---|---|---|---|
Nodes 1 to 5 | System | SSD; metadata, cache | Storage | iSCSI, S3 private, S3 public, Backup Gateway |
Nodes 6+ | System | SSD; cache | Storage | iSCSI, S3 private, Backup Gateway |
5+ nodes in total | | 5 MDSs in total | 5+ CSs in total | All nodes run required access points. |
A production-ready cluster can be created from just five nodes by using the recommended hardware. However, it is recommended to enter production with at least ten nodes if you are aiming to achieve significant performance advantages over direct-attached storage (DAS) or improved recovery times.
Following are a number of more specific configuration examples that can be used in production. Each configuration can be extended by adding chunk servers and nodes.
2.3.6.1. HDD only¶
This basic configuration requires a dedicated disk for each metadata server.
Nodes 1-5 (base) | | | Nodes 6+ (extension) | | |
---|---|---|---|---|---|
Disk # | Disk type | Disk roles | Disk # | Disk type | Disk roles |
1 | HDD | System | 1 | HDD | System |
2 | HDD | MDS | 2 | HDD | CS |
3 | HDD | CS | 3 | HDD | CS |
… | … | … | … | … | … |
N | HDD | CS | N | HDD | CS |
2.3.6.2. HDD + system SSD (no cache)¶
This configuration is good for creating capacity-oriented clusters.
Nodes 1-5 (base) | | | Nodes 6+ (extension) | | |
---|---|---|---|---|---|
Disk # | Disk type | Disk roles | Disk # | Disk type | Disk roles |
1 | SSD | System, MDS | 1 | SSD | System |
2 | HDD | CS | 2 | HDD | CS |
3 | HDD | CS | 3 | HDD | CS |
… | … | … | … | … | … |
N | HDD | CS | N | HDD | CS |
2.3.6.3. HDD + SSD¶
This configuration is good for creating performance-oriented clusters.
Nodes 1-5 (base) | | | Nodes 6+ (extension) | | |
---|---|---|---|---|---|
Disk # | Disk type | Disk roles | Disk # | Disk type | Disk roles |
1 | HDD | System | 1 | HDD | System |
2 | SSD | MDS, cache | 2 | SSD | Cache |
3 | HDD | CS | 3 | HDD | CS |
… | … | … | … | … | … |
N | HDD | CS | N | HDD | CS |
2.3.6.4. SSD only¶
This configuration does not require SSDs for cache.
When choosing hardware for this configuration, keep the following in mind:
- Each Acronis Cyber Infrastructure client will be able to obtain up to about 40K sustainable IOPS (read + write) from the cluster.
- If you use the erasure coding redundancy scheme, each erasure coding file, for example, a single VM HDD disk, will get up to 2K sustainable IOPS. That is, a user working inside a VM will have up to 2K sustainable IOPS per virtual HDD at their disposal. Multiple VMs on a node can utilize more IOPS, up to the client’s limit.
- In this configuration, network latency defines more than half of overall performance, so make sure that the network latency is minimal. One recommendation is to have one 10Gbps switch between any two nodes in the cluster.
Nodes 1-5 (base) | | | Nodes 6+ (extension) | | |
---|---|---|---|---|---|
Disk # | Disk type | Disk roles | Disk # | Disk type | Disk roles |
1 | SSD | System, MDS | 1 | SSD | System |
2 | SSD | CS | 2 | SSD | CS |
3 | SSD | CS | 3 | SSD | CS |
… | … | … | … | … | … |
N | SSD | CS | N | SSD | CS |
2.3.6.5. HDD + SSD (no cache), 2 tiers¶
In this configuration example, tier 1 is for HDDs without cache and tier 2 is for SSDs. Tier 1 can store cold data (for example, backups), tier 2 can store hot data (for example, high-performance virtual machines).
Nodes 1-5 (base):
Disk # | Disk type | Disk roles | Tier |
---|---|---|---|
1 | SSD | System, MDS | |
2 | SSD | CS | 2 |
3 | HDD | CS | 1 |
… | … | … | … |
N | HDD/SSD | CS | 1/2 |
Nodes 6+ (extension):
Disk # | Disk type | Disk roles | Tier |
---|---|---|---|
1 | SSD | System | |
2 | SSD | CS | 2 |
3 | HDD | CS | 1 |
… | … | … | … |
N | HDD/SSD | CS | 1/2 |
2.3.6.6. HDD + SSD, 3 tiers¶
In this configuration example, tier 1 is for HDDs without cache, tier 2 is for HDDs with cache, and tier 3 is for SSDs. Tier 1 can store cold data (for example, backups), tier 2 can store regular virtual machines, and tier 3 can store high-performance virtual machines.
Nodes 1-5 (base):
Disk # | Disk type | Disk roles | Tier |
---|---|---|---|
1 | HDD/SSD | System | |
2 | SSD | MDS, T2 cache | |
3 | HDD | CS | 1 |
4 | HDD | CS | 2 |
5 | SSD | CS | 3 |
… | … | … | … |
N | HDD/SSD | CS | 1/2/3 |
Nodes 6+ (extension):
Disk # | Disk type | Disk roles | Tier |
---|---|---|---|
1 | HDD/SSD | System | |
2 | SSD | T2 cache | |
3 | HDD | CS | 1 |
4 | HDD | CS | 2 |
5 | SSD | CS | 3 |
… | … | … | … |
N | HDD/SSD | CS | 1/2/3 |
2.3.7. Raw disk space considerations¶
When planning the infrastructure, keep in mind the following to avoid confusion:
- The capacity of HDDs and SSDs is measured and specified with decimal, not binary prefixes, so “TB” in disk specifications usually means “terabyte.” The operating system, however, displays drive capacity using binary prefixes, meaning that “TB” stands for “tebibyte,” a noticeably larger unit. As a result, disks may show a capacity smaller than the one marketed by the vendor. For example, a disk with 6 TB in its specifications may be shown to have 5.45 TB of actual disk space in Acronis Cyber Infrastructure.
- 5 percent of disk space is reserved for emergency needs.
Therefore, if you add a 6 TB disk to a cluster, the available physical space should increase by about 5.2 TB.
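For example, the 6 TB figure can be reproduced with the standard bc utility: convert decimal terabytes to binary tebibytes, then subtract the 5 percent reserve:

# echo "scale=2; 6 * 10^12 / 2^40" | bc     # about 5.45 TiB shown by the operating system
# echo "scale=2; 5.45 * 0.95" | bc          # about 5.2 TB of usable space after the 5% reserve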
2.3.8. Checking disk data flushing capabilities¶
It is highly recommended to ensure that all storage devices you plan to include in your cluster can flush data from cache to disk if the power goes out unexpectedly. This way, you can identify devices that may lose data in case of a power failure.
Acronis Cyber Infrastructure ships with the vstorage-hwflush-check tool that checks how a storage device flushes data to disk in emergencies. The tool is implemented as a client/server utility:
- The client continuously writes blocks of data to the storage device. When a data block is written, the client increases a special counter and sends it to the server that keeps it.
- The server keeps track of counters incoming from the client and always knows the next counter number. If the server receives a counter smaller than the one it has (for example, because the power has failed and the storage device has not flushed the cached data to disk), the server reports an error.
To check that a storage device can successfully flush data to disk when power fails, follow the procedure below:
On one node, run the server:
# vstorage-hwflush-check -l
On a different node that hosts the storage device you want to test, run the client. For example:
# vstorage-hwflush-check -s vstorage1.example.com -d /vstorage/stor1-ssd/test -t 50
where:
- vstorage1.example.com is the host name of the server.
- /vstorage/stor1-ssd/test is the directory to use for data flushing tests. During execution, the client creates a file in this directory and writes data blocks to it.
- 50 is the number of threads for the client to write data to disk. Each thread has its own file and counter. You can increase the number of threads (max. 200) to test your system in more stressful conditions.

You can also specify other options when running the client. For more information on available options, refer to the vstorage-hwflush-check manual page.
Wait for at least 10-15 seconds, cut power from the client node (either press the Power button or pull the power cord out), and then power it on again.
Restart the client:
# vstorage-hwflush-check -s vstorage1.example.com -d /vstorage/stor1-ssd/test -t 50
Once launched, the client will read all previously written data, determine the version of data on the disk, and restart the test from the last valid counter. It will then send this valid counter to the server, and the server will compare it with the latest counter it has. You may see output like this:
id<N>:<counter_on_disk> -> <counter_on_server>
which means one of the following:
- If the counter on the disk is lower than the counter on the server, the storage device has failed to flush the data to the disk. Avoid using this storage device in production, especially for CS or journals, as you risk losing data.
- If the counter on the disk is higher than the counter on the server, the storage device has flushed the data to the disk but the client has failed to report it to the server. The network may be too slow or the storage device may be too fast for the set number of load threads, so consider increasing it. This storage device can be used in production.
- If both counters are equal, the storage device has flushed the data to the disk and the client has reported it to the server. This storage device can be used in production.
To be on the safe side, repeat the procedure several times. Once you have checked your first storage device, continue with all of the remaining devices you plan to use in the cluster: SSD disks used for CS journaling, disks used for MDS journals, and disks used for chunk servers.