The struggle to manage ever-escalating volumes of data has become a significant challenge for today’s CIOs and IT professionals. Historically, the standard method of connecting hosts to storage devices has been the direct, one-to-one SCSI attachment.
Yet, as more and more storage devices and servers are added to meet growing demands, a direct-attached storage (DAS) environment can result in a proliferation of appliances — creating an enormous management burden for administrators, inefficient utilization of resources and severely limited data-sharing capabilities.
Thankfully, organizations are finding much-needed relief in storage virtualization — the pooling of physical storage from multiple network storage devices into what appears to users as a single device. Managed from a single console, storage virtualization removes the complexity and dramatically reduces the time involved in overseeing traditional direct-attached storage solutions.
“It’s definitely a topic that most of my medium-to-large organizations are interested in,” acknowledges Anil Desai, an independent consultant based in Austin. “Once you move into an environment with higher storage requirements, it becomes too difficult to manage direct-attached storage. Plus, with DAS, you are dealing with issues such as wasted space, inefficient disaster recovery backups and other general IT management issues.”
Conversely, storage virtualization provides easier manageability, scalability from both a capacity and performance standpoint and increased storage efficiency. The technique also helps slash power and cooling costs while providing the enterprise with flexibility to respond to changing business requirements.
Virtualization gives IT managers the tools needed to keep pace with the explosion in data, most notably among medium-to-large organizations. Data stores are expanding exponentially, in part because organizations now track and retain far more information than ever before.
“Because of legal and regulatory compliance issues, the enterprise must store more data for longer periods of time,” Desai points out. “They need the additional capacity and performance that storage virtualization offers.”
“Storage technology has had to evolve rapidly to keep pace with the 24x7 demand for data access and reliability, largely influenced by Internet computing,” adds Jonathan Siegal, senior director of product marketing for EMC’s Unified Storage Division.
Another driving factor is an increase in static data, according to Siegal. Consisting predominantly of images and video, static data is often linked to supporting online catalogs, as well as corporate content such as medical imaging, geophysical and spatial imaging, movies, video surveillance and digitization of archived material.
“These trends continue to create data at a rapid pace with no end in sight,” Siegal says. “The newest phrase for this in terms of managing it all is ‘Big Data.’ This requires massive-scale storage and the file/indexing/compression and deduplication software to cope with and optimize the never-ending growth.”
Administrators who are grappling with managing these massive amounts of data will welcome the ability to easily tap into large pools of information through virtualization. “You can provision storage almost instantly and reallocate it as needed,” Desai notes.
The ease of completing backups also makes virtualization a boon for administrators, and can be the key to efficient, reliable disaster recovery initiatives, as well. “You can back up all storage centrally, rather than running individual backups on 20 servers,” Desai points out.
Implementing a virtualized storage environment also furnishes IT managers with a host of valuable tools. Exceptional efficiency can be gained from thin provisioning, a method of optimizing the available space in storage area networks (SANs). Thin provisioning operates by allocating disk storage space in a flexible manner among multiple users, based on the minimum space required by each user at any given time.
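The mechanics of thin provisioning can be illustrated with a short sketch: each volume advertises its full virtual capacity up front, but physical blocks are drawn from a shared pool only when data is actually written. The class and names below are purely illustrative, not any vendor’s API.

```python
class ThinPool:
    """A toy thin-provisioning pool: capacity is promised virtually,
    but physical blocks are consumed only on first write."""

    def __init__(self, physical_blocks):
        self.free = physical_blocks          # blocks actually on disk
        self.volumes = {}

    def create_volume(self, name, virtual_blocks):
        # Advertise full capacity without reserving any physical space.
        self.volumes[name] = {"virtual": virtual_blocks, "mapped": {}}

    def write(self, name, block):
        vol = self.volumes[name]
        if block not in vol["mapped"]:       # allocate on first write only
            if self.free == 0:
                raise RuntimeError("pool exhausted: add physical storage")
            self.free -= 1
            vol["mapped"][block] = object()  # stand-in for a physical block

pool = ThinPool(physical_blocks=100)
pool.create_volume("vm1", virtual_blocks=500)  # oversubscribed on purpose
pool.create_volume("vm2", virtual_blocks=500)
pool.write("vm1", 0)
pool.write("vm1", 0)   # rewriting a mapped block consumes nothing new
pool.write("vm2", 7)
print(pool.free)       # 98 physical blocks still free
```

The oversubscription in the example (1,000 virtual blocks promised against 100 physical) is the whole point of the technique: capacity is purchased as it is actually consumed, not as it is promised.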
Another popular technique is data deduplication, which is a method of reducing storage needs by eliminating redundant data. Only one unique instance of the data is actually retained on storage media, with redundant data replaced with a pointer to the unique data copy.
Data deduplication can help organizations keep more money in their coffers by lowering storage requirements. The more efficient use of disk space also allows for longer disk retention periods, which provides better recovery time objectives and reduces the need for tape backups. The technique also decreases the amount of data sent across a wide area network for remote backups, replication and disaster recovery.
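The pointer mechanism behind deduplication can be sketched in a few lines: chunks are keyed by a content hash, each unique chunk is stored once, and files become lists of hashes pointing back into that store. Chunk size and class names here are illustrative assumptions.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: one copy per unique chunk,
    files hold hash pointers into the chunk store."""

    def __init__(self):
        self.chunks = {}   # content hash -> chunk data (stored once)
        self.files = {}    # filename -> list of chunk hashes (pointers)

    def put(self, name, data, chunk_size=4):
        hashes = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)   # retain only one instance
            hashes.append(h)
        self.files[name] = hashes

    def get(self, name):
        # Reassemble the file by following the pointers.
        return b"".join(self.chunks[h] for h in self.files[name])

store = DedupStore()
store.put("a.bin", b"AAAABBBBAAAA")  # first and third chunks are identical
store.put("b.bin", b"AAAACCCC")
print(len(store.chunks))             # 3 unique chunks stored, not 5
```

Five chunks were written across the two files, but only three land on “disk”; the duplicate “AAAA” chunks are reduced to pointers, which is exactly the space savings the technique delivers at scale.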
Yet another advantage facilitated through virtualization is storage tiering: the assignment of different categories of data to different types of storage media in order to reduce total storage costs. Categories may be based on levels of protection needed, performance requirements, frequency of use or other considerations.
“A properly optimized tiered-storage environment can minimize the inherent trade-offs between performance, storage efficiency and cost,” Siegal reports.
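A tiering policy of the kind described above can be reduced to a simple rule: place each dataset on the cheapest tier that still meets its access-frequency requirement. The tier names and thresholds below are made-up assumptions for illustration, not any product’s defaults.

```python
TIERS = [
    # (tier name, minimum accesses/day to justify the tier), fastest first
    ("ssd",     100),
    ("sas",      10),
    ("nearline",  0),
]

def place(accesses_per_day):
    """Return the cheapest tier whose threshold the workload still meets."""
    for name, threshold in TIERS:
        if accesses_per_day >= threshold:
            return name

datasets = {"orders_db": 5000, "hr_docs": 25, "2009_archive": 0}
placement = {name: place(rate) for name, rate in datasets.items()}
print(placement)
# {'orders_db': 'ssd', 'hr_docs': 'sas', '2009_archive': 'nearline'}
```

Real automated-tiering features work on much finer-grained heat statistics, but the trade-off is the same: hot data earns fast, expensive media, while cold data migrates to cheap capacity.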
The two major approaches to achieving storage virtualization are SANs and network-attached storage (NAS). “Both are networking technology implementations enabling the storage, movement and sharing of data across a network environment,” Siegal explains.
SAN devices use block-level data access and generally rely on iSCSI, Fibre Channel, or FC over Ethernet protocols, while NAS provides file-based access much like a standard file server, and most often uses Server Message Block or related file protocols.
SAN solutions offer extensive scalability, data isolation and high availability. Because a SAN can accommodate hundreds of disks, it does not limit a company to a handful of attached storage devices. Another advantage is the SAN’s ability to be configured into zones. For example, both UNIX and Windows servers can connect to the same SAN, but the data that each can access is different.
NAS systems provide their own lineup of benefits, including faster data access, easier administration and simplified configuration. By removing the file server function from overloaded general-purpose servers via a specialized, high-performance network-attached file server, NAS offers IT administrators a dependable, expandable and easy-to-install option to alleviate server storage overload.
Beyond the ease of management and increase in efficiency, storage virtualization can also positively influence an organization’s bottom line. Whether the enterprise uses return on investment or total cost of ownership to calculate potential savings, Siegal cautions that it should compile all of the related expenditures for the storage hardware and software deployment, as well as costs related to their choices that are beyond the storage itself. These include personnel and management costs; data center and operational costs such as maintenance, power and cooling; and elements such as networking costs that are affected by choices in storage technology.
“Many organizations today have short time horizons for measuring returns on investment, often three years or less,” Siegal says. “When comparing alternative solutions over these short time horizons, total cost of ownership is often the simplest calculation and provides an answer in purely monetary terms, which is often the most meaningful.”
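A three-year TCO comparison of the sort Siegal describes is straightforward arithmetic across the cost categories he lists: acquisition costs for hardware and software plus recurring personnel, management, power and cooling costs. Every figure below is invented for the example.

```python
def tco(hardware, software, annual_admin, annual_power_cooling, years=3):
    """Total cost of ownership: one-time acquisition costs plus
    recurring operational costs over the measurement horizon."""
    return hardware + software + years * (annual_admin + annual_power_cooling)

# Hypothetical figures: DAS is cheaper to buy but costlier to run.
das = tco(hardware=120_000, software=10_000,
          annual_admin=60_000, annual_power_cooling=18_000)
san = tco(hardware=180_000, software=40_000,
          annual_admin=25_000, annual_power_cooling=12_000)
print(das, san)  # 364000 331000
```

Under these assumed numbers the pricier SAN still wins on three-year TCO, which is why Siegal stresses compiling every related expenditure rather than comparing purchase prices alone.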
One measure that has helped make the implementation of storage virtualization more efficient and affordable is iSCSI, an IP-based storage networking standard for linking data storage facilities. The popularity of iSCSI has grown within the entry networked storage market based on the ubiquitous nature of the underlying Ethernet technology, as well as lower associated costs for cabling, server connections and switch ports compared with Fibre Channel.
“This simplifies the decision for organizations that may never have adopted Fibre Channel and who prefer to not add another technology to their environment,” Siegal points out.
“The real benefit is that iSCSI can run over existing infrastructure, over copper or Ethernet connections,” Desai says. “There are really no disadvantages to iSCSI. It’s much easier to manage traffic on a familiar network, and it generally has widespread support. It’s readily available and easy to set up, which is a key factor for many businesses.”
Looking toward the future of storage virtualization, Desai predicts improvements in techniques like deduplication, as well as more efficient methods of compression and more automated tiering processes. “Existing technologies will be improved, more efficient and more readily available,” he says.