Slower Disk Drives Could Slash Data Centre Power

Disks that slow down when their data goes ‘cold’ could cut power in data centres, says a Facebook engineer.

A simple change letting hard drives spin at a slower rate could allow many data centres to significantly reduce their power consumption, according to a Facebook engineer.
Writing on the blog of the Open Compute Project, an initiative created by Facebook in April to share custom data centre designs for improving efficiency, engineer Eran Tal noted that many data centres have tens or hundreds of thousands of “cold” hard drives, which contain data that must be retained but is rarely accessed.

‘Cold’ drives

“While the servers are considered cold because they are rarely utilized, their hard drives are usually spinning at full speed although they are not serving data”, Tal wrote. “The drives must keep rotating in case a user request actually requires retrieving data from disk, as spinning up a disk from sleep can take up to 30 seconds. In RAID configurations this time can be even longer if the HDDs in the RAID volume are staggered in their spin-up to protect the power supply. Obviously, these latencies would translate into unacceptable wait times for a user who wishes to view a standard-resolution photo or a spreadsheet.”

Tal argued that reducing disk drive rotation speed by half would save roughly 3 to 5 watts per drive.
“Data centres today can have up to tens and even hundreds of thousands of cold drives, so the power savings impact at the data centre level can be quite significant, on the order of hundreds of kilowatts, maybe even a megawatt”, Tal wrote. “The reduced HDD bandwidth due to lower RPM would likely still be more than sufficient for most cold use cases, as a data rate of several (perhaps several dozen) MB/s should still be possible. In most cases a user is requesting less than a few MBs of data, meaning that they will likely not notice the added service time for their request due to the reduced-speed HDDs. What is critical is that the latency response time of the HDD isn’t higher than 100 ms, in order not to degrade the user experience.”
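Tal’s fleet-level figures are easy to sanity-check: at roughly 3 to 5 watts saved per drive, the totals he quotes follow directly from fleet size. A quick back-of-the-envelope calculation (the fleet sizes are illustrative points within the “tens and even hundreds of thousands” range he describes):

```python
# Sanity check of the data-centre-level savings Tal describes.
# Per-drive saving from halving rotation speed: roughly 3-5 W (per the article).
SAVINGS_PER_DRIVE_W = (3, 5)
FLEET_SIZES = (10_000, 100_000, 300_000)  # illustrative cold-drive fleet sizes

for drives in FLEET_SIZES:
    low_kw = drives * SAVINGS_PER_DRIVE_W[0] / 1000
    high_kw = drives * SAVINGS_PER_DRIVE_W[1] / 1000
    print(f"{drives:>7} drives: {low_kw:,.0f}-{high_kw:,.0f} kW saved")
```

At 100,000 drives the saving is 300–500 kW, and a fleet of a few hundred thousand cold drives reaches the megawatt scale Tal mentions.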

The tricky part is that drives aren’t “cold” from the beginning, but progress gradually into that state, and may come out of it again in some cases.

“Copying over the data to a low-bandwidth system requires too much overhead and would be slow (since the target is low bandwidth), and as a result isn’t a standard mode of operation for most providers”, Tal wrote.

Dynamic RPM

The solution Tal suggests is having drives that can operate either at full or reduced speed, with the ability to toggle between the two.
“The transition between these states can be long (like 15 seconds), as this would likely be a one-time event, triggered by an entity capable of determining that the box is no longer hot”, he wrote.

Tal estimated that for an enterprise SATA 7200 RPM drive of 3TB or larger, switching down to 3600 RPM, the toggle time would be about 15 seconds and idle power consumption would drop from 7W to 3W. Normal latency would rise from about 10 ms to something under 100 ms, Tal said.
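The “entity capable of determining that the box is no longer hot” could be a simple controller that tracks when each drive was last accessed and toggles its speed once it has been idle long enough. The sketch below is purely hypothetical: the `set_rpm()` interface and the one-day coldness threshold are illustrative assumptions, not a real HDD command set; only the RPM and wattage figures come from the article.

```python
import time

# Illustrative figures from the article: full speed 7200 RPM (~7 W idle),
# reduced speed 3600 RPM (~3 W idle), ~15 s to toggle between states.
FULL_RPM, LOW_RPM = 7200, 3600
COLD_AFTER_S = 24 * 3600  # hypothetical threshold: idle for a day => "cold"

class DriveSpeedController:
    """Toggle a drive between full and reduced speed based on access recency."""

    def __init__(self, drive):
        self.drive = drive            # object exposing a hypothetical set_rpm()
        self.last_access = time.time()
        self.rpm = FULL_RPM

    def on_access(self):
        """Record an access; spin back up if the drive had been slowed."""
        self.last_access = time.time()
        if self.rpm != FULL_RPM:
            self.drive.set_rpm(FULL_RPM)   # ~15 s one-time transition
            self.rpm = FULL_RPM

    def maintain(self):
        """Periodic check: slow a drive down once it has gone cold."""
        idle = time.time() - self.last_access
        if idle > COLD_AFTER_S and self.rpm != LOW_RPM:
            self.drive.set_rpm(LOW_RPM)    # saves roughly 3-5 W per drive
            self.rpm = LOW_RPM
```

Because the transition is a one-time event per drive, the 15-second toggle cost is amortised over the long cold period, while a spun-down drive can still serve requests within the sub-100 ms latency budget Tal requires.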

The idea is still only a concept, but some companies are already looking at implementing it in their products, according to users commenting on the Open Compute Project blog.
For instance, Western Digital is already shipping drives with low-RPM standby modes, according to a user identifying himself as Brandon Smith, an employee of Western Digital.

Implementations

However, Western Digital’s implementation requires the disk to spin back up to its normal rate before delivering the data, a process which takes “a few hundred milliseconds”, according to Smith.

The idea may need other tweaks before it can realistically work with hard-disk technology, Smith said. “A spindle motor designed to spin at 7200 RPM will not spin efficiently or consistently at 3600 RPM”, he wrote. “4500 to 5000 RPM is a more realistic number.”

Users noted that solid-state disks don’t have the idling or latency problems of standard disks, but SSDs are not yet widely used in data centres due to their higher cost.
“HDDs are the way to go, but we need to be realistic about the time it will take to access the data if we want to save energy”, Smith wrote.

Researchers have been developing the concept of multiple-speed disk drives for some time. Research published by the IEEE in 2003, for instance, already noted that efficiency was becoming more important than performance for hard disk implementations in some cases, and looked at possible solutions for the dynamic RPM question.

Source: www.eweekeurope.co.uk