It’s a common misconception that to be green and optimized you must practice energy avoidance while boosting storage capacity. It’s also a myth that optimized green storage isn’t appropriate for environments that want to boost productivity, enhance quality of service, reduce response time or improve performance. The reality is that there are optimization models that yield productive, environmental and economic benefits.
By moving heavily accessed files or data — essentially consolidating I/Os to faster, yet more heavily used solid-state drives (SSDs) or 15,000 RPM (15K) SAS and Fibre Channel disks — overall net capacity utilization can go up without impacting quality of service.
Specifically, using a mix of technologies aligned to meet specific tasks provides a balance of performance, availability, capacity and energy use. It can save money, provide for growth and make room for an additional storage system in the same footprint (floor space, power and cooling, and operating costs).
When optimizing storage, it’s helpful to establish a performance and capacity baseline by which to measure improvement. This will provide insight into how resources are used to deliver a given level of service.
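As a minimal sketch of what such a baseline might look like, the following summarizes periodic IOPS samples (the collector and the numbers are hypothetical; real baselines would also track capacity, latency and throughput):

```python
# A minimal baseline sketch, assuming you already gather periodic IOPS
# samples (e.g. hourly from a monitoring tool); all figures are illustrative.

def baseline(samples):
    """Summarize IOPS samples into average and peak values that later
    optimization work can be measured against."""
    avg = sum(samples) / len(samples)
    peak = max(samples)
    return {"avg_iops": avg, "peak_iops": peak}

iops_samples = [1200, 1500, 900, 2100, 1800]  # hypothetical collector output
print(baseline(iops_samples))  # {'avg_iops': 1500.0, 'peak_iops': 2100}
```

With a baseline recorded, any change — consolidation, tiering, RAID changes — can be judged by whether the same service level is delivered with fewer resources.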
It’s also good practice to align the applicable RAID-level configuration to meet specific application QoS requirements. Leveraging tiered storage mediums by aligning performance, availability, capacity, energy (PACE) and economic points to a given level-of-service requirement is another effective means of storage optimization.
It’s important to understand the difference between energy avoidance and energy effectiveness, which are often thought to be the same in the context of being green. Energy avoidance is, as its name implies, the process of avoiding work to eliminate the need for energy, which for inactive data has advantages.
But not all storage or data is inactive. That’s where the need for energy effectiveness comes in. Energy effectiveness is getting more work done per watt of energy. For example, intelligent power management (IPM), which is also referred to as second-generation MAID, uses disk or drive spin-down as a way to align energy usage to work being done without penalizing performance. There are those who think that the concept of massive arrays of idle disks as a technology is dead, but many vendors use IPM-based approaches to balance energy with performance needs.
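The spin-down behavior behind IPM can be sketched as a simple idle-timeout policy (a toy model with a hypothetical timeout; real implementations live in array firmware and also manage partial spin-down states):

```python
# A toy sketch of an IPM-style spin-down policy: a drive spins down after
# an idle timeout and wakes on the next I/O. The 300-second timeout is an
# assumption for illustration, not a vendor default.

class Drive:
    def __init__(self, idle_timeout=300):
        self.idle_timeout = idle_timeout  # seconds of idleness before spin-down
        self.last_io = 0.0
        self.spinning = True

    def tick(self, now):
        # Spin down only when no I/O has arrived within the timeout window.
        if self.spinning and now - self.last_io >= self.idle_timeout:
            self.spinning = False

    def io(self, now):
        # An I/O request wakes the drive (paying a one-time spin-up penalty).
        self.spinning = True
        self.last_io = now

d = Drive()
d.io(0)
d.tick(100)   # recent I/O: still spinning
print(d.spinning)   # True
d.tick(400)   # idle past the timeout: spun down to save energy
print(d.spinning)   # False
```

The energy saved depends on how long data actually sits idle, which is why the technique suits lower tiers rather than active primary storage.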
Data center managers often view archiving as necessary for records management and regulatory compliance. But the reality is that it’s also a time-tested technique for reducing the data footprint. Look into archiving database, e-mail and general-purpose file or home directory data.
Another time-tested data footprint reduction technique is compression. Compression can be done in-line for real-time access to online high-performance storage, for streaming backup or for network movement of data in post-processing scenarios.
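The space savings from compression are easy to demonstrate with a standard library codec (a sketch; the ratio achieved depends entirely on how compressible the data is, and the sample data here is deliberately repetitive):

```python
import zlib

# A minimal in-line compression sketch using zlib; the repetitive sample
# data is an assumption chosen to compress well.
data = b"storage optimization " * 1000
compressed = zlib.compress(data, level=6)

ratio = len(data) / len(compressed)
restored = zlib.decompress(compressed)
print(restored == data)        # True -- lossless: reads return the original data
print(f"{ratio:.1f}:1 ratio")  # highly repetitive data compresses dramatically
```

In-line compression trades CPU cycles for capacity; post-processing defers that cost to off-peak hours.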
Another emerging technique for reducing the data footprint is data deduplication, which trades time (performance) for maximized storage capacity (space). A common measurement is the dedupe ratio: the amount by which data is reduced. But for some applications, it’s more important to ensure that the data is protected over a specified period of time. When moving data over local, metro or wide area networks, bandwidth optimization techniques can let you either reduce the amount of network resources needed or enable more data to be moved in a given amount of time.
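The dedupe ratio can be illustrated with a toy fixed-block deduplicator (a simplified sketch; production systems typically use variable-length chunking and persistent fingerprint indexes):

```python
import hashlib

# A simplified fixed-block deduplication sketch: identical 4 KB chunks are
# stored once, and the dedupe ratio is logical bytes / physically stored bytes.

def dedupe_ratio(data, block=4096):
    seen = set()
    stored = 0
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:      # only previously unseen chunks consume capacity
            seen.add(digest)
            stored += len(chunk)
    return len(data) / stored

data = b"A" * 4096 * 10            # ten identical logical blocks
print(dedupe_ratio(data))          # 10.0 -- ten blocks stored as one
```

The hashing and lookups in the loop are exactly where deduplication spends the time it trades for space.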
Thin provisioning for applications that are non-performance-sensitive, such as sparsely populated databases or file systems, is another technique for maximizing storage capacity and reducing the data footprint.
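The same idea appears at the file level as sparse files, which make a convenient sketch of thin provisioning: a large logical size is promised up front, but physical blocks are consumed only as data is written (assuming the underlying file system supports sparse files):

```python
import os
import tempfile

# A thin-provisioning sketch via a sparse file: the file reports a 1 GiB
# logical size while consuming physical blocks only for written regions
# (behavior assumes a sparse-file-capable file system).

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(1 << 30)        # "provision" 1 GiB of logical size, writing nothing
    f.write(b"actual data")    # only written regions consume real blocks

st = os.stat(f.name)
print(st.st_size)              # 1073741824 -- the size applications see
print(st.st_blocks * 512)      # physical bytes allocated, typically far smaller
os.remove(f.name)
```

As with thin provisioning in arrays, the risk is overcommitment: the promised capacity must be monitored against what is physically available.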
Finally, don’t forget to apply the correct RAID level to balance performance, availability, capacity, energy and economies.
In general, you want to consolidate underutilized storage onto higher-capacity storage mediums while keeping performance and availability in perspective. In other words, align the applicable tiered storage medium — including large-capacity Serial Attached SCSI or Serial ATA disk drives and magnetic tape — to reduce cost per capacity while boosting capacity per watt of energy consumed.
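A tiering decision like this can be sketched as a simple policy function (the access-frequency thresholds and tier names here are illustrative assumptions, not recommendations):

```python
# A hypothetical tiering policy: place data on the cheapest tier whose
# performance still fits its access profile. Thresholds are illustrative.

def choose_tier(accesses_per_day):
    if accesses_per_day > 1000:
        return "ssd"         # hot: consolidate I/Os on fast media
    if accesses_per_day > 10:
        return "15k-sas"     # warm: fast spinning disk
    if accesses_per_day > 0:
        return "sata"        # cool: high capacity, low cost per gigabyte
    return "tape"            # inactive: archive, near-zero energy use

print(choose_tier(5000))     # ssd
print(choose_tier(2))        # sata
```

The point of the sketch is the shape of the decision: each step down the list trades performance for lower cost per capacity and better capacity per watt.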
If your aim is to boost performance, the best approach is to use fewer but faster devices, such as 15K SAS and Fibre Channel hard disk drives (HDDs), and DRAM or flash-based solid-state drives. To boost storage capacity, increase storage utilization by placing less frequently accessed data on larger-capacity, slower, lower-cost SAS and SATA 1-terabyte or 2-terabyte disk drives or tape.
For storage capacity comparisons, look at capacity per watt; for performance comparisons, look at I/O operations per second (IOPS) per watt — essentially a measure of activity per watt. Likewise, from a cost perspective, look at cost per IOPS or cost per unit of work performed and data moved.
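These metrics amount to simple ratios, worked through below with hypothetical device figures (the capacities, wattages, IOPS and prices are illustrative, not vendor data):

```python
# Worked examples of the metrics above; all device figures are illustrative.

def capacity_per_watt(terabytes, watts):
    return terabytes / watts          # space efficiency

def iops_per_watt(iops, watts):
    return iops / watts               # activity per watt

def cost_per_iops(price, iops):
    return price / iops               # cost of delivered performance

# e.g. a hypothetical high-capacity SATA drive vs. a hypothetical SSD
print(capacity_per_watt(2.0, 8.0))    # 0.25 TB per watt
print(iops_per_watt(75000, 6.0))      # 12500.0 IOPS per watt
print(cost_per_iops(400.0, 75000))    # dollars per IOPS
```

Comparing devices on these ratios, rather than on raw capacity or raw speed, is what lets capacity-oriented and performance-oriented tiers each be judged fairly.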
If you strike a balance between fast 15K SAS and Fibre Channel HDDs and ultra-high-performance yet higher-cost SSDs, you can maximize effectiveness from a performance, availability, capacity and energy perspective. And, again, don’t forget to apply the applicable RAID level to meet specific performance needs — for example, RAID 1 for write-intensive, highly available applications.
With storage, more is not always better. Why? Because more disks, controllers, processors, cache memory or fast devices (including SSDs) do not guarantee better performance. If an SSD is in your future, look for solutions with demonstrated performance (including particularly low latency) to achieve efficient and economical storage.
There are many shades of green — and no silver bullet for optimizing storage. In the quest to reduce the data footprint and energy use, organizations often fail to consider all storage optimization and management options. By using a combination of techniques, net performance, capacity and feature functionality can be increased, while floor space, power, cooling and footprint can be reduced.