Cloud storage: How to measure its performance

About 50% of enterprise data is now stored in the cloud, and the figure is higher still once private and hybrid clouds are counted. Cloud storage offers a great deal of flexibility and can be quite economical. Businesses can opt for the major public cloud providers, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, or for local cloud providers, which are often more specialized. But how do you measure the effectiveness of cloud storage services? When storage sits on premises, there are plenty of well-established ways to track performance. In the cloud, the picture gets murkier.

Part of the complication is the sheer multitude of choices. Cloud storage comes in a variety of formats, capacities, and performance levels. It can be file, block, or object storage; hard-drive-based systems; storage attached to virtual machines; NVMe; solid-state drives; even tape; or technology that runs on site “like the cloud”.

Comparing and monitoring storage instances is therefore harder in the cloud than on premises. In addition to traditional storage performance metrics, such as IOPS and throughput, IT professionals evaluating cloud systems must weigh other criteria, such as cost, service availability, and even security.

Classic storage metrics

Classic metrics obviously apply to cloud storage, but they are harder to interpret there. Enterprise storage systems are rated on two main measures of “speed”. Transfer rate (throughput) is the rate at which data is moved to and from the storage media, expressed in bytes per second. IOPS measures the number of read and write (input/output) operations performed per second.
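
The two figures are linked by the I/O block size: throughput is simply IOPS multiplied by the size of each operation. A quick back-of-the-envelope conversion (the block sizes here are illustrative assumptions; match them to your workload):

```python
# Throughput = IOPS x block size. Illustrative figures only.

def throughput_mb_s(iops: int, block_size_bytes: int) -> float:
    """Convert an IOPS figure into MB/s for a given I/O block size."""
    return iops * block_size_bytes / 1_000_000

# 10,000 IOPS of 4 KiB operations is only ~41 MB/s...
print(throughput_mb_s(10_000, 4096))       # ~41.0
# ...while 10,000 IOPS of 1 MiB operations is ~10.5 GB/s.
print(throughput_mb_s(10_000, 1_048_576))  # ~10485.8
```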

For both measurements, hardware manufacturers distinguish between write and read speeds: writes are generally faster because data is first committed to a cache on the drive’s circuit board. That cache, DRAM, is much faster than the storage medium itself (a magnetic platter swept by a read/write head, or NAND flash).

Manufacturers of hard disk drives, SSDs, and disk arrays also distinguish between sequential and random reads/writes. The differences stem in particular from the movement of the read/write heads across the platters of a mechanical drive and, on SSDs, from the need to erase existing data before writing. Random read/write performance is usually the best indicator of real-world performance.
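
To see the gap for yourself, a crude micro-benchmark can time sequential versus random 4 KiB reads on a local file. This is a minimal sketch for a Unix-like system (the file name and sizes are assumptions, and both the OS page cache and the sparse test file will flatter the numbers; fill the file with real data and drop caches for honest results):

```python
import os
import random
import time

PATH = "testfile.bin"   # hypothetical test file
BLOCK = 4096            # 4 KiB per read
COUNT = 10_000          # reads to time per pattern

# Create a 1 GiB test file if needed (sparse: reads of it never touch
# the disk itself, so replace with real data for meaningful numbers).
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.truncate(1 << 30)

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

def reads_per_second(offsets) -> float:
    """Time COUNT pread() calls at the given offsets."""
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)
    return COUNT / (time.perf_counter() - start)

sequential = reads_per_second(range(0, COUNT * BLOCK, BLOCK))
random_ = reads_per_second(
    random.randrange(0, size - BLOCK) // BLOCK * BLOCK for _ in range(COUNT)
)
os.close(fd)
print(f"sequential: {sequential:,.0f} reads/s, random: {random_:,.0f} reads/s")
```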

Hard disk drive manufacturers also quote revolutions per minute (rpm) for mechanical drives: usually 5,400 or 7,200 rpm for consumer models, and sometimes 10,000 or 15,000 rpm for professional ones. Rotational speed does not directly determine throughput. In fact, at a given recording density, the faster a disk spins, the lower its capacity tends to be, though data spread over a larger surface area can be written more reliably. Rotational speed does, however, go hand in hand with access speed (and therefore IOPS), because the sector being sought comes around under the head sooner.
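
The arithmetic behind that access-speed link is simple: on average, the head must wait half a revolution for the right sector to come around, so average rotational latency is 60 / (2 × rpm) seconds:

```python
# Average rotational latency = half a revolution = 60 / (2 * rpm) seconds.
for rpm in (5_400, 7_200, 10_000, 15_000):
    latency_ms = 60 / (2 * rpm) * 1000
    print(f"{rpm:>6} rpm -> {latency_ms:.2f} ms average rotational latency")
# 5,400 rpm waits ~5.56 ms on average; 15,000 rpm only ~2.00 ms.
```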

The higher the IOPS figure, the faster the system can access data. Mechanical hard drives typically deliver 50 to 200 IOPS. SSDs are dramatically faster, at 25,000 IOPS or more.

In practice, the differences shrink once you account for everything operating around the disks: processing in the storage controller, data transmission over the network, RAID configurations, and caching.

Latency is the time taken to complete each I/O request. For a mechanical hard disk, that is typically between 10 and 20 milliseconds; for SSDs, it drops to fractions of a millisecond. Response time is often the most important metric when deciding which storage best suits an application.
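
Because averages hide outliers, response time is usually reported as percentiles rather than a single number. A minimal sketch of that bookkeeping, here fed with simulated timings (the distribution is a stand-in; in practice you would time real I/O requests):

```python
import random
import statistics

def sample_latency_ms() -> float:
    # Stand-in for timing one real I/O request; simulated values only.
    return random.lognormvariate(1.0, 0.5)

samples = sorted(sample_latency_ms() for _ in range(10_000))
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]
print(f"median {p50:.2f} ms, p99 {p99:.2f} ms, "
      f"mean {statistics.mean(samples):.2f} ms")
```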

Cloud metrics

Translating traditional storage metrics to the cloud is rarely straightforward. Cloud storage buyers usually have no way of knowing how their capacity is physically provisioned: the exact mix of SSDs, mechanical drives, tape, or optical devices is entirely up to the provider and its service tiers.

Large-scale cloud providers typically combine multiple storage types and rely on caching and load balancing, which makes performance data for any individual device less meaningful.

Cloud providers also offer different storage formats: mainly block, file, and object, which makes comparisons between metrics more difficult.

Measurements will also vary with the class of storage the company is purchasing. The hyperscalers now offer multiple storage tiers at different price and performance points.

Let’s not forget the service-oriented offerings, such as data backup, restore, and archiving, which come with their own metrics, notably the recovery time objective (RTO) and the recovery point objective (RPO), the latter determined by how frequently backups are taken.
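
The RPO falls straight out of the backup schedule: in the worst case, you lose everything written since the last completed backup. A toy illustration of that reasoning (the times are made up):

```python
from datetime import datetime, timedelta

backup_interval = timedelta(hours=6)          # assumed backup frequency
last_backup = datetime(2024, 1, 1, 0, 0)      # hypothetical timestamps
failure_time = datetime(2024, 1, 1, 5, 30)

# Worst-case data loss equals the backup interval; this incident's loss:
print(f"worst-case RPO: {backup_interval}")
print(f"data lost in this incident: {failure_time - last_backup}")
```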

Block storage, the basic building block, remains the easiest to compare, at least among the big cloud providers. For its GCP platform, for example, Google advertises the maximum sustained IOPS and maximum sustained throughput (in MB/s) supported in block mode, broken down into read and write IOPS and throughput per gigabyte of data and per instance. But as Google points out: “Hard disk performance (IOPS and throughput) depends on disk size, the number of instance vCPUs, and I/O block size, among other factors.” Google also provides a useful comparison of its infrastructure against a 7,200-rpm physical disk.
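
Because performance scales with provisioned size, capacity planning often runs backwards: pick the IOPS you need, then derive the disk size. A sketch of that arithmetic, with illustrative constants loosely modeled on documented persistent-disk behavior (check Google’s current documentation for the real per-GB rates and caps, which vary by disk type):

```python
IOPS_PER_GB = 30     # assumed read IOPS per provisioned GB
IOPS_CAP = 15_000    # assumed per-disk ceiling

def disk_size_for_iops(target_iops: int) -> int:
    """Smallest disk size (GB) reaching the target IOPS under these assumptions."""
    if target_iops > IOPS_CAP:
        raise ValueError("target exceeds the per-disk cap; spread across disks")
    return -(-target_iops // IOPS_PER_GB)   # ceiling division

print(disk_size_for_iops(9_000))   # -> 300 GB under these assumptions
```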

Microsoft publishes a guide for customers who want to monitor their use of its Blob object storage service. It offers a good first look at storage performance metrics in the Azure world.
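
Those Blob metrics can also be pulled programmatically. A minimal sketch using the azure-monitor-query package (the resource ID is a placeholder; Transactions, SuccessE2ELatency, and Availability are standard Azure Storage metric names, but verify them against Microsoft’s current documentation):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID for a storage account's blob service.
RESOURCE = ("/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
            "Microsoft.Storage/storageAccounts/<account>/blobServices/default")

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    RESOURCE,
    metric_names=["Transactions", "SuccessE2ELatency", "Availability"],
    timespan=timedelta(hours=1),
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average or point.total)
```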

AWS offers similar guidance for its Elastic Block Store (EBS) service. Buyers can read up on the different storage tiers, from high-performance SSDs to cold HDD storage.
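
EBS volumes also report raw counters to CloudWatch, so observed IOPS can be derived from VolumeReadOps (or VolumeWriteOps) summed over a period. A minimal boto3 sketch (the volume ID is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,                  # 5-minute buckets
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    iops = point["Sum"] / 300    # operations per bucket -> per second
    print(point["Timestamp"], f"{iops:.1f} read IOPS")
```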

Cost, availability and other useful metrics

Since cloud storage is pay-as-you-go, cost is always a major consideration. Again, the major cloud providers offer different tiers depending on the required performance and budget.

AWS, for example, offers general-purpose SSD volumes (gp2 and gp3), performance-oriented provisioned-IOPS volumes (io1 and io2), and throughput-optimized HDD volumes (st1) aimed at “large, sequential workloads” (read: big data).

Cost itself is more than a single metric. Beyond the price per gigabyte or per instance, there are charges for moving data in and, above all, for moving it out again (retrieval, or egress). Some very cheap long-term storage offers can thus become quite expensive when the time comes to get your data back.
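
The trap is easy to quantify: compare the monthly storage bill with the one-off egress bill for pulling the same data back. A toy calculation with made-up prices (real rates vary by provider, tier, and region):

```python
# Illustrative prices only; substitute your provider's real rates.
STORAGE_PER_GB_MONTH = 0.004   # e.g. a cheap archive tier
EGRESS_PER_GB = 0.09           # data-transfer-out (retrieval) price

def total_cost(gb: float, months: int, restores: int) -> float:
    """Cost of storing `gb` for `months` and restoring it `restores` times."""
    return gb * (STORAGE_PER_GB_MONTH * months + EGRESS_PER_GB * restores)

# 50 TB kept for a year and restored once: ~$2,400 of storage
# but ~$4,500 of egress under these assumed prices.
print(f"${total_cost(50_000, 12, 1):,.0f}")   # -> $6,900
```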

Another metric to consider is usable capacity: how much of the purchased storage is actually available to the application using it, and at what point filling it starts to hurt the application’s real performance. Here too, the numbers may differ from those of on-premises technology.

IT managers will also watch service availability. The reliability of storage components and subsystems has traditionally been rated by mean time between failures (MTBF) or, more recently for SSDs, by TBW (terabytes written over the drive’s lifetime).
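
A TBW rating translates directly into an expected lifetime once you know the daily write volume. A quick estimate (both figures are assumptions):

```python
TBW = 600                # assumed endurance rating: 600 terabytes written
daily_writes_tb = 0.25   # assumed workload: 250 GB written per day

years = TBW / (daily_writes_tb * 365)
print(f"~{years:.1f} years before the TBW rating is exhausted")
# -> ~6.6 years under these assumptions
```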

For cloud provisioning at scale, though, availability remains the most common and useful metric. Cloud providers increasingly express availability, or uptime, in “nines”, just as data centers and carriers do: “five nines” means available 99.999% of the time. That is typically the highest service level (SLA) on offer, and the most expensive.
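
Each extra nine divides the permitted downtime by ten, which is what drives the price. The arithmetic:

```python
# Allowed downtime per year for a given number of nines of availability.
for nines in range(2, 6):
    availability = 1 - 10 ** -nines
    downtime_min = (1 - availability) * 365 * 24 * 60
    print(f"{availability:.3%} -> {downtime_min:,.1f} min of downtime per year")
# "Five nines" (99.999%) allows barely five minutes per year.
```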

That is not all. Cloud storage buyers will also need to weigh geographic location, redundancy, data protection, compliance, security, and even the financial strength of the provider. These criteria are not performance measures per se, but if a service provider fails, you risk not being able to use its service at all.
