Tronair has been on a growth curve for more than a decade. Today, the manufacturer of aircraft ground-support systems based in Holland, Ohio, sells more than 1,000 products and has recently opened offices in England, Italy and Thailand.
But the continued expansion greatly taxed Tronair’s computing resources. The company’s 4-year-old Windows Small Business Server — acting as domain controller, Exchange server, application server and print server — became a single point of failure. Maintenance was nearly impossible, and malfunctions were devastating.
Clearly, the SBS system’s roles had to be revamped, but simply adding more servers wasn’t the answer. The distributed server system — buying a separate server for each major task — was outmoded; exponential increases in processing power made it possible to do much more with much less.
Tronair faced two choices: It could go with the traditional approach of buying a server for each task (with a few extra to fill the gaps), ignoring for now the limitations down the road; or, it could invest in virtualization software, which had the potential to consolidate the company’s servers, increase utilization rates, and greatly reduce energy consumption and cooling costs while maintaining maximum uptime.
The IT team at Tronair recognized early on that the Small Business Server was never intended to handle a company the size that Tronair had become. The single server was overburdened while newer servers were mostly sitting idle.
The technology was aging; the operating systems, bloated with patches, were becoming error-prone; overall performance was declining while demand for resources was increasing; critical systems were expected to remain available longer to serve the company's new global locations; and there was insufficient redundancy to handle failures. All of it added up to one massive disaster-recovery headache.
Capturing a snapshot for disaster recovery took hours to complete, even with advanced tools from Symantec. While the images were being taken, certain services had to be shut off, which meant anyone overseas who needed to use the system was out of luck. If the SBS server were ever to need reimaging, calculating the loss to the company would be painful.
The final straw came almost a year after the release of Windows Server 2003 Service Pack 2. The IT team had researched the installation thoroughly and documented every known issue. A block of downtime was scheduled late on a Sunday night to minimize disruption. The server was backed up and ready, and confidence was high. Then, more than halfway through, an error message popped up saying the installation had failed. It was the last time Tronair would suffer through such humiliation and headache. Something had to be done.
If they followed the standard path — getting new, resource-laden servers; spending days, if not weeks, configuring and tweaking; divvying up the workload of the old servers onto the new; upgrading with all necessary patches and packs; migrating data and applications; reconfiguring workstations to point to the new servers, then saying a prayer — it would buy them only another three to five years. There had to be a better way.
Enter VMware Infrastructure. Imagine a system that can increase the overall performance of each server and the infrastructure as a whole by optimizing server resources (using 60 percent to 80 percent of resources instead of wasting more than 70 percent). Imagine a system that needs less physical rack space, saves on energy and heat, automatically handles server fail-over and load balancing, and is designed with disaster recovery in mind. And finally, imagine a system that can run not only the Exchange servers the IT team needs, but also the Linux servers the IT team wants.
If you are considering server virtualization, which features are most important to you?
37% Not considering virtualization
22% Server failover or disaster recovery
12% Better processor utilization
12% Reduced server maintenance
11% Better on-the-fly resource allocation
6% Reduced power consumption
VMware Infrastructure replaces multiple physical servers by converting them into virtual machines (VMs) on one or more ESX host servers: physical servers running a thin, Linux-based operating system that creates the virtual environment. The hosts share a storage area network (SAN) that holds the virtual machine files and supporting configuration data. The infrastructure's features, licensing and configuration are coordinated by another computer running VMware's VirtualCenter (VC), which is accessible from any computer on the network through a small fat client known as the Virtual Infrastructure Client (VIC).
This design is simple, but incredibly powerful. The key to understanding it is to know how all PCs operate.
All software is stored long-term on hard drives, but it must be loaded into RAM to run. So why not load an entire server, hardware, BIOS and all, into RAM, and move the hard drives somewhere more than one server can reach them? That is essentially what ESX does. Because the servers are virtual, several of them can run simultaneously on one or more ESX hosts equipped with enough processing power and RAM to go around. Shared storage on the SAN, accessible by all of the ESX servers, replaces the local hard drives, and one more system acts as the controller. This configuration can run multiple virtual servers, each running several applications, which makes far more efficient use of hardware and eliminates wasted resources.
The number of ESX servers (at least two are recommended) and the processing power and RAM they need depend on how many VMs the infrastructure will host. Configure every physical ESX server as identically as possible. A comprehensive map of the resources used in the existing environment is critical to planning this correctly. Tronair gathered information from Performance Monitor counter logs collected over a few weeks, took a quick inventory of system resources, then extrapolated from a best guess at company growth.
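The sizing exercise boils down to summing peak demand, padding for growth, and dividing by per-host capacity while leaving utilization headroom. A rough sketch of that math follows; every workload figure, the growth factor and the host specification below are illustrative assumptions, not Tronair's actual numbers:

```python
import math

# Hypothetical peak demands gathered from performance counter logs.
workloads = [
    {"name": "exchange", "cpu_mhz": 2400, "ram_gb": 4},
    {"name": "dc",       "cpu_mhz": 800,  "ram_gb": 2},
    {"name": "app",      "cpu_mhz": 1600, "ram_gb": 4},
    {"name": "print",    "cpu_mhz": 400,  "ram_gb": 1},
    {"name": "linux",    "cpu_mhz": 1200, "ram_gb": 2},
]

GROWTH = 1.5        # best guess at demand growth over the hardware's life
TARGET_UTIL = 0.70  # aim for 60 to 80 percent utilization, never 100

# Capacity of one candidate ESX host (assumed: 2 sockets x 4 cores
# x 2,330 MHz, 32 GB of RAM).
host_cpu_mhz = 2 * 4 * 2330
host_ram_gb = 32

# Total demand, inflated for growth and divided by target utilization.
need_cpu = sum(w["cpu_mhz"] for w in workloads) * GROWTH / TARGET_UTIL
need_ram = sum(w["ram_gb"] for w in workloads) * GROWTH / TARGET_UTIL

# Hosts needed is driven by whichever resource runs out first,
# with a floor of two hosts for fail-over.
hosts = max(math.ceil(need_cpu / host_cpu_mhz),
            math.ceil(need_ram / host_ram_gb),
            2)

print(hosts)  # 2
```

With these made-up numbers a single host could carry the load, so the two-host minimum for fail-over ends up dictating the count, which matches the article's "at least two are recommended" guidance.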
The VC gives the administrator an interface for controlling the entire infrastructure centrally. It can run on any system; an old server displaced by the implementation will do nicely. The VC provides tools to monitor and control each ESX server individually, or the enterprise as a whole, broken down into individually manageable logical units called Data Centers.
Because the virtual machines exist in RAM and are loaded from shared storage, they can be migrated between ESX servers while running, without anyone noticing. This requires at least two ESX hosts and one VC to control the process. To demonstrate, Tronair created the VC server on a VM and commanded it to migrate itself (remember, it controls this process) while staying connected through both the console software and a Remote Desktop session, and pinging it continuously to boot. Only one packet of data was lost: a stellar performance, tantamount to pulling oneself up by one's own bootstraps. The benefit is easy to see when an ESX host needs to be taken down for maintenance.
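The availability check Tronair ran amounts to comparing packets sent against packets received during the migration window. A minimal sketch of that measurement, assuming the Linux `ping` summary format (the sample statistics line is invented for illustration):

```python
import re

def packet_loss(ping_stats: str) -> float:
    """Parse the transmitted/received counts from a Linux `ping`
    statistics line and return the loss as a percentage."""
    m = re.search(r"(\d+) packets transmitted, (\d+) received", ping_stats)
    sent, received = int(m.group(1)), int(m.group(2))
    return 100.0 * (sent - received) / sent

# Hypothetical summary from pinging a VM while it migrated itself:
stats = "1024 packets transmitted, 1023 received, 0% packet loss"
print(round(packet_loss(stats), 2))  # one dropped packet -> 0.1
```

One lost packet out of roughly a thousand rounds to a tenth of a percent, which is why a single dropped ping during a live migration counts as a stellar result.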
An ESX host server can be taken offline at any time; simply migrate the VMs to another ESX host and shut the first host down. There will never again be a service interruption or weekend work hours for planned maintenance.
The VM files can be backed up, moved to any other similar ESX server infrastructure with adequate resources and powered up again. The VMs themselves are hardware independent. All similar ESX servers create uniform VMs in RAM, so driver issues are eliminated. A virtual machine created in ESX 3, for example, can be run on any other ESX 3 infrastructure on Earth so long as it has adequate resources. There is no major reconfiguration needed. Plus, any good SAN will take snapshots of the files it stores, which means additional disaster-recovery software is no longer needed.
When you take into consideration all the benefits the ESX servers offer, the total cost of ownership (TCO) is very low. VMware boasts cost savings of more than $3,000 annually for every workload virtualized.
The major costs for a changeover are the VMware Infrastructure software and the SAN. A small business could expect to spend about $10,000 to $50,000 on hardware (depending on the SAN), about $14,000 on software and about $16,000 in professional services. Extra money will be needed for training, tech support and warranties.
But money is saved by removing old, unneeded servers and discontinuing their expensive warranties. Before implementing virtualization, Tronair used five physical servers and three network-attached storage devices. When the project was completed, the company had only three physical servers and one SAN. Calculated at an average cost of $3,000 per server and $200 per extended warranty, Tronair’s removal of two servers (the hardware and warranties) and two storage devices (the warranties only) would save $6,800. Tronair added five virtual servers for which no physical hardware or warranty was needed, saving an estimated $16,000. All told, that’s more than $22,000 saved simply by implementing VMware.
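The arithmetic behind that savings figure is simple enough to verify, using the per-device costs quoted above:

```python
SERVER_COST = 3000    # average hardware cost per physical server
WARRANTY_COST = 200   # extended warranty per device

# Two physical servers retired: hardware plus warranty savings.
retired_servers = 2 * (SERVER_COST + WARRANTY_COST)    # 6,400
# Two storage devices retired: warranty savings only.
retired_storage = 2 * WARRANTY_COST                    # 400
# Five new servers deployed as VMs instead of physical boxes,
# avoiding both hardware and warranty costs.
avoided_purchases = 5 * (SERVER_COST + WARRANTY_COST)  # 16,000

total = retired_servers + retired_storage + avoided_purchases
print(total)  # 22800, the article's "more than $22,000"
```

The first two lines of savings sum to the $6,800 cited for the retired equipment, and the avoided purchases account for the $16,000.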
VMware provides a TCO/ROI calculator on its Web site (http://roitco.vmware.com/vmw/).
Jeremy Dotson is a LAN administrator for Tronair (www.tronair.com), a manufacturer of aircraft ground-support equipment in Holland, Ohio.