When it comes to transporting customers to airports or special events, EmpireCLS Worldwide Chauffeured Services is all about riding in style. Serving 650 cities worldwide, it bills itself as the largest ground transportation company in the world.
And now, EmpireCLS is expanding its business by taking its proprietary dispatch and reservation systems to the cloud and offering them in a software as a service (SaaS) model for other ground transportation companies.
So what approach does a top-shelf company take when it comes to data centers? It counts every penny, of course. Case in point: EmpireCLS completely refreshes its servers every two to three years to take advantage of the latest technology innovations, which helps the company run more efficiently and significantly reduces overall IT costs.
“We’re finding that if we don’t do a refresh at least every three years it costs us more money,” says CIO Alan Bourassa. Through a combination of faster processors and virtualization technology, EmpireCLS not only brings in new hardware, but also consolidates the hardware by a factor of four with every refresh cycle. That means an average 33 percent reduction in power costs each cycle, plus additional savings from needing fewer network switches and less LAN cabling.
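The arithmetic behind Bourassa's refresh strategy can be sketched in a few lines. All of the figures below are hypothetical placeholders (EmpireCLS's actual server counts and wattages aren't published); only the 4:1 consolidation ratio comes from the article.

```python
# Back-of-envelope model of one refresh cycle's power savings.
# Server counts and wattages are assumed for illustration; only the
# 4:1 consolidation ratio comes from the article.

OLD_SERVERS = 40        # fleet size before the refresh (assumed)
CONSOLIDATION = 4       # each new server replaces four old ones
OLD_WATTS = 450         # avg draw per legacy server (assumed)
NEW_WATTS = 1200        # avg draw per denser new server (assumed)

new_servers = OLD_SERVERS // CONSOLIDATION
old_power = OLD_SERVERS * OLD_WATTS     # 18,000 W
new_power = new_servers * NEW_WATTS     # 12,000 W
reduction = 1 - new_power / old_power

print(f"{new_servers} servers, power down {reduction:.0%}")
# prints "10 servers, power down 33%"
```

With these assumed inputs, the model lands on the same roughly one-third power reduction the article cites, even though each new server draws more than an individual old one.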
Citing similar results, server experts say the time may be right to take a fresh look at server environments and how often they’re refreshed.
“It’s all about boosting productivity, removing complexity and reducing cost,” says Greg Schulz, senior analyst for the Server and StorageIO Group, a consulting firm. “The key is the ability to promote effectiveness as well as efficiency, thanks to what today’s servers can offer.”
Why consider a server refresh now? The incentive is being able to do more to support the mission-critical and everyday goals of the organization. “When we talk to IT managers, it’s not just about doing more of the same activities more efficiently — they’re also being challenged to do new things,” says Alex Yost, IBM vice president and business line executive.
“How do you achieve that goal?” he asks. “The entities that want to do more than just keep the lights on have focused on innovation in the way they operate their data centers.”
That includes taking advantage of servers that run the fastest processors and greatest numbers of cores, such as the Intel Xeon processor E7 and Intel Xeon processor E5 families. With high-end processors, organizations get more work done — perform more financial transactions throughout the day, for instance, or analyze trends that can offer managers the latest data.
But heightened processing power can also have a ripple effect throughout the IT department. For example, IT administrators can take advantage of that extra power to consolidate servers and drive up utilization rates.
“Many people try to keep their servers running for five to seven years — until they die,” Bourassa says. “What they don’t realize is that considering the added costs to run those servers over seven years, it would have been better to buy a new server in year three.”
A server refresh also provides the opportunity to capitalize on the more extensive use of 64-bit technology by today’s operating systems and applications. Solutions such as Microsoft SharePoint and SQL Server, Big Data systems and streaming video applications all require large blocks of memory to run successfully.
A move to the new Microsoft Windows Server 2012 is another reason to upgrade the hardware environment. The latest version of the widely used Windows server platform offers a number of tools that work hand in hand with the productivity-boosting goals of the latest hardware.
“The operating system addresses common challenges people encounter when they are managing servers, while also increasing reliability and availability,” says Anil Desai, an independent IT consultant. For example, Windows Server 2012 now offers two tools for managing servers, both physical and virtual. “The operating system is designed from the ground up for the management of multiple servers,” Desai says.
Another driving force behind server modernization: increasing demand for greater collaboration. For example, the widespread adoption of mobile devices, including tablets and smartphones, is spurring demand for streaming video in meetings and pre-recorded web content. “All of these video assets demand more in performance and I/O than earlier systems have been able to support,” Yost says.
He adds that overall server performance depends on more than just processors: The solid-state storage resources in high-end machines further speed I/O capabilities.
Some IT managers are using the increased power and flexibility of high-end servers to refine their approaches to virtualization, Schulz says. Over the past several years, IT departments have capitalized on virtualization to consolidate hardware in an effort to boost utilization rates and reduce the number of boxes — saving on power costs, capital expenses and licensing fees in the process.
“Consolidation is relatively easy, low-hanging fruit,” Schulz says. “If you are looking at virtualization only in that context, you are missing out on other big opportunities.”
Rather than focusing on how many virtual machines can be packaged into each physical unit, organizations today are exploring how to use physical machines more effectively. For example, an IT department may invest in the latest servers to provide optimum performance for mission-critical applications during normal business hours.
But after hours, instead of powering down the machines, they can take advantage of virtualization to dynamically allocate other tasks to the equipment, such as running reports and analytics programs or performing backups. “Virtualization becomes an even more effective productivity aid,” Schulz says.
There’s a lot to like about the sophisticated new capabilities of the latest server platforms, but this may not be enough to convince budget-conscious CFOs and other senior executives to shorten established refresh schedules and invest in new servers.
So before they take a proposal to the C-suite, IT managers should spend time outlining the business benefits of a server refresh. In many cases, the most compelling argument may be a combination of hard numbers and enlightening anecdotes.
“Take a holistic view of your data center operations and how they connect to the rest of your business,” Yost advises. This includes not only the servers, but how this environment impacts other areas, such as storage and networking resources.
When all the components are in sync, IT services can be delivered via private clouds, for example, which can significantly reduce the time it takes to roll out new computing capabilities to business units.
“Putting more resources into innovation, rather than in managing and maintaining the existing data center, makes a big difference for entities we’ve talked to,” Yost says.
To bolster this vision for innovation, IT managers should also develop some concrete financial assessments to demonstrate the value of an accelerated server refresh. Energy savings is one area to research.
“You may end up using the same amount of energy when you replace servers, but you may boost the processing capability by 20 to 40 percent or more. This means you can do more work with that amount of energy,” Schulz says.
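Schulz's point is really a work-per-energy calculation. A minimal sketch, using illustrative numbers (the only figure from the article is the 20 to 40 percent processing uplift; the midpoint is assumed here):

```python
# Work-per-energy arithmetic behind the quote above.
# Power budget and throughput are assumed for illustration.

power_kw = 12.0             # rack power budget, unchanged by the refresh
old_work = 1_000            # transactions/sec on old servers (assumed)
new_work = old_work * 1.30  # 30% uplift, midpoint of the 20-40% range

print(f"work per kW: {old_work / power_kw:.0f} -> {new_work / power_kw:.0f}")
# prints "work per kW: 83 -> 108"
```

Same energy bill, measurably more work delivered per kilowatt, which is the case Schulz suggests making to the budget holders.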
In some cases, the energy draw of high-end servers may actually decline compared with older units, thanks to new power-saving features. Intelligent Power Technology, which is available in Intel Xeon processor E7 and E5 CPUs, makes it possible to slow or idle processor cores when they’re not being used by an application or business process.
Other financial benefits may materialize simply from being able to pack more equipment into a data center without having to expand its physical dimensions.
“If you can accommodate more business needs in the same data center, that can save millions of dollars, and that’s a very compelling story for looking at new technology,” Yost says.
The data may be compelling, but CIOs shouldn’t assume a cost-conscious senior executive will automatically approve the regular capital expenditures (CAPEX) required for server refreshes.
To bolster his case, Bourassa creates a spreadsheet that clearly spells out how many servers he plans to buy in a given year and the spending this represents. Then, because each new server will do the work of four existing devices, he can show reduced costs for electricity bills and networking equipment.
“My spreadsheet shows that our capital expense is X and we’re going to save Y through consolidation, therefore here’s the saving to the company for its investment in servers,” Bourassa says. “We prove the savings to the finance people because they can see where the reductions are coming from.”
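The spreadsheet logic Bourassa describes reduces to a simple capex-versus-savings comparison. A sketch with hypothetical dollar figures (his actual numbers aren't disclosed):

```python
# Sketch of the "capital expense is X, savings are Y" spreadsheet
# Bourassa describes. All dollar amounts are assumed for illustration.

capex = 120_000           # X: cost of the new servers (assumed)
power_savings = 45_000    # annual electricity reduction (assumed)
network_savings = 15_000  # fewer switches, less cabling (assumed)
years = 3                 # length of the refresh cycle

total_savings = (power_savings + network_savings) * years  # Y
net = total_savings - capex

print(f"net benefit over {years} years: ${net:,}")
# prints "net benefit over 3 years: $60,000"
```

The persuasive detail for finance, per Bourassa, is that each savings line traces to a concrete reduction (electricity, switches, cabling) rather than a vague efficiency claim.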
Another argument for accelerating the refresh cycle is the risk associated with delaying modernization. In an analysis titled Server Refresh Cycles: The Costs of Extending Life Cycles, Randy Perry, IDC vice president of business value consulting, found that while longer refresh cycles may postpone some capital expenses, the strategy can be counterproductive.
“In many cases, extending the life of servers too long can lead to an increase in operational expenses that could pay for investments in new technology,” he writes. “Before simply delaying new hardware purchases, organizations need to assess the impact on both capital budgets and operational budgets.”
According to Perry, a number of IDC studies have shown steep declines in the availability and reliability of most x86 servers once they have been in operation for about three and a half years. Organizations that push refresh cycles to five years increase the failure rate by 85 percent, on average, and see 21 percent more downtime than with equipment that’s three years old, he reports.
And when organizations update their software platforms but not their hardware environments, they may encounter incompatibilities that create patching and maintenance challenges — which means IT will have to spend more time maintaining the equipment just to keep the lights on, he concludes.
Some IT departments use a methodology known as total value proposition (TVP) to demonstrate the ROI of a server refresh. Proponents examine the processes that will see the greatest impact from a change in server technology and rate each on a scale of one to five.
Low scores go to processes whose costs will decline significantly, while higher scores indicate added expenses. Dividing the sum of the scores by the number of processes evaluated yields an average; the lower that average, the stronger the indication that a refresh will deliver relative value.
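The TVP calculation described above is just an average of per-process impact scores. A minimal sketch, with hypothetical scores:

```python
# Minimal sketch of total value proposition (TVP) scoring as described
# above: each affected process gets a 1-5 impact score (low = costs
# drop, high = costs rise), and the average indicates relative value.

def tvp_score(scores):
    """Return the average impact score across the evaluated processes."""
    return sum(scores) / len(scores)

# Hypothetical scores for five processes affected by a refresh
scores = [1, 2, 1, 3, 2]
avg = tvp_score(scores)

print(f"TVP average: {avg:.1f}")
# prints "TVP average: 1.8"
```

An average well below the midpoint of three, as in this example, suggests the refresh mostly reduces costs across the processes evaluated.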
While formulas like these can be broad indicators of potential financial benefits, they’re no substitute for a detailed analysis of the organization’s underlying needs and challenges.
“Every data center environment is different, and each one has unique constraints,” Yost says. “In some cases, the constraint may be that the organization is running an application it wrote 20 years ago and it’s not going to be rewritten at this time. In other cases, the constraint may be limits on capital or available power. So tools like TVP can be helpful, but it is absolutely essential to determine what your biggest limitations are, and your biggest organizational challenges, and then make sure to solve them.”
Getting the go-ahead to invest in a server refresh kicks off a series of important moves. Chief among them: a plan to ensure that deploying the new servers keeps business disruptions to a minimum. This is another area in which virtualization shows its value, especially if the organization has already combined server and storage virtualization.
“We migrate virtual servers in real time from the old hardware to the new without any disruption to the production environment,” Bourassa says. “The business people don’t even know the machine has been moved.”
Schulz points out a final step for getting the most out of a server investment: Don’t overlook the useful life left in the decommissioned servers. They may not be as powerful and sophisticated as the latest units that now run first-tier applications, but they may find a niche running other systems.
“By moving these servers to another computing pool, you may be able to extend their lives for another 18 months to two years,” Schulz says.