In its first 10 years, Arsalon Technologies’ staff of two grew tenfold. As its headcount expanded, so did the Lenexa, Kan., web/business hosting provider’s footprint. By summer 2010, it had begun building out the IT infrastructure in its third Kansas City–area data center.
Soon after, growing pains started to kick in. By 2012, “our clients’ storage needs were growing exponentially, which forced us to look at a larger storage platform,” explains Arsalon Director of Engineering Brad Hajek.
He and his team were surprised to find themselves shopping to replace systems they had put in place just two years earlier. Instead of trying to predict the future, this time around they looked for a storage solution that could expand alongside the company. “We didn’t want to have to go through this every year or two,” says Hajek.
That’s the thought process behind one of today’s biggest trends in data centers, says David Cappuccio, managing vice president and chief of research at Gartner. To meet unprecedented and unpredictable storage and processing demands, companies are turning to modular design techniques in their data centers — or even modular data centers themselves. This could mean constructing one small building and leaving space for an adjacent one in a few years; building a large data center but equipping only a small portion of the space and filling it as needed; or going the modular equipment route, which is what Arsalon did.
After researching comparable products from various vendors, Arsalon settled on the FlexPod platform, which combines a NetApp storage system with Cisco Systems servers tied to the company’s VMware infrastructure. Aside from the price, which was at the lower end of the spectrum, Hajek and his team were attracted to FlexPod’s tiered fee structure.
“We don’t have to buy it all up front and hope we use it. But at the same time, we never have to worry about whether we have enough storage available,” says Hajek. “We get the latest, greatest, high-end storage platform and all the benefits that come with it, but we don’t have to foot the bill for this large SAN that who knows if we’re going to have to use it.”
The challenge for Fairport, N.Y.-based ConServe, an accounts receivable management company, is to run an enterprise-level data center with the resources of a midsize company. “We’re not very big, but we have a lot of data and a lot of processes going on behind the scenes that allow us to meet our business goals,” says ConServe Network Operations Manager Justin Spooner.
ConServe typically is not compensated unless the accounts it works on its clients’ behalf are resolved. So it performs extensive portfolio analysis to make sure it is focusing the right efforts and strategies on successfully closing accounts.
Five years ago, when ConServe won a new contract from the U.S. Department of Education and saw its headcount jump from about 200 to 500, it moved from a data center with physical servers, applications and storage to centralized virtual servers.
Last year, ConServe was ready to upgrade its storage, but with a limited IT staff, Spooner was concerned about the time and expertise it would take to integrate a new system with the VMware infrastructure.
“We need enterprise features, but I’ve got a seven-person IT infrastructure team to support that entire architecture,” he says.
That’s what made EMC’s VSPEX platform so appealing. EMC configured and tested the modular system, which manages data across tiers — keeping archived data on slower, less expensive storage and active data on faster, more expensive storage — to be interoperable with VMware for virtualization, saving his team that integration work. They simply followed EMC’s technical documentation to complete the configuration quickly and easily.
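The tiering logic described above can be sketched in a few lines. This is a hypothetical illustration of the general idea — classify data by how recently it was accessed — not EMC’s actual policy; the function name and 90-day threshold are assumptions for the example.

```python
from datetime import datetime, timedelta

# Assumed cutoff: data untouched for 90 days counts as "archived."
ARCHIVE_AFTER = timedelta(days=90)

def choose_tier(last_accessed: datetime, now: datetime) -> str:
    """Return which storage tier a piece of data belongs on."""
    if now - last_accessed > ARCHIVE_AFTER:
        return "archive"  # slower, less expensive storage
    return "active"       # faster, more expensive storage

now = datetime(2013, 6, 1)
print(choose_tier(datetime(2013, 1, 1), now))   # → archive
print(choose_tier(datetime(2013, 5, 20), now))  # → active
```

In a real tiered array this decision runs automatically and continuously; the point of systems like VSPEX is that the policy — and its integration with the virtualization layer — comes pre-built and pre-tested.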
“That was a huge benefit for us,” says Spooner. “Without having a team of engineers on staff, that’s the nice thing that the EMC solution brought to us: a storage solution that’s been tested with virtualization and backup solutions as well as core switching solutions. It’s almost a data center in a box.”
Modular data centers not only save money on equipment and construction; they also conserve energy, which accounts for a large portion of a data center’s operating budget.
“Data centers take a lot of power to run, so getting the most efficiency out of your power infrastructure and your building infrastructure is very important,” says Hajek.
He views energy savings goals as an important component of data center planning and predicts that more companies will look to alternative energy solutions in the years to come. “Maybe solar becomes more attractive, or even wind,” he says.
Cappuccio is seeing more companies use outside air or liquid cooling in their data centers. The downside to liquid cooling is that it’s more complex than forced-air systems, and it poses the slight risk of getting liquid on the equipment. But the advantage is that it uses an absolute minimum of electricity. The energy savings from cooling can be in the 40 to 50 percent range, he says. “It’s huge.”
A related trend is building data centers using power and cooling zones, Cappuccio adds. For instance, 85 percent of the data center may be built for low-density equipment, while 15 percent of the floor holds the higher-density machines. That high-density zone is optimized for high-efficiency cooling, says Cappuccio, “so you’re not designing the whole floor for ultimate capacity.”
As senior manager of enterprise architecture at the Government Employees Health Association (GEHA), Brenden Bryan worked to shrink the organization’s data center footprint while significantly boosting its capacity.
The Lee’s Summit, Mo., company, which provides health insurance to federal government employees, was replacing a mainframe core with a distributed architecture when Bryan came on board in 2010. “I knew I had to have a new system in place that was going to be able to run that,” explains Bryan. “So we took a greenfield approach and built out a new core network, new storage and new compute, basically setting the foundation for a private cloud.”
They put in new blade servers, NetApp storage and Brocade VDX/VCS Ethernet fabric and virtualized the entire infrastructure — email, web servers and the database and production environments.
GEHA went from 350 physical servers, only a few of which were providing virtual services, to 136 VMware hypervisors hosting well over 1,000 virtual machines. “There are cost savings to be had with that kind of physical reduction,” Bryan says. “It was a drastic reduction in data center floor space usage. Plus, we were able to basically increase performance since we built a very dense, high-speed network and compute infrastructure. Performance went through the roof.”
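The figures GEHA reported make for a quick back-of-envelope calculation. Using 1,000 as a floor for “well over 1,000” virtual machines:

```python
# Consolidation math from the figures in the article:
# 350 physical servers replaced by 136 hypervisors running 1,000+ VMs.
physical_before = 350
hypervisors_after = 136
virtual_machines = 1000  # conservative floor

reduction_pct = (physical_before - hypervisors_after) / physical_before * 100
vms_per_host = virtual_machines / hypervisors_after

print(f"Physical footprint reduced by {reduction_pct:.0f}%")  # ≈ 61%
print(f"Average density: {vms_per_host:.1f} VMs per host")    # ≈ 7.4
```

A roughly 61 percent cut in physical boxes, at better than seven VMs per host, is what drives the floor-space and cost savings Bryan describes.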
But the real difference came last year, when GEHA put in a storage array built from flash memory and put its SQL production database tier on it.
“It’s extremely fast,” Bryan says.
His advice for those looking to modernize their data centers: “Take a real hard look at flash. I think it’s going to revolutionize data center design in the same way that virtualization did.”
With a virtual infrastructure, underperforming storage can negate the gains from a fast network, but Bryan likens the difference in speed with flash to going from dial-up Internet to broadband. “It’s a game changer,” he says.