A typical large organization runs several centralized data centers, sometimes located on different continents to serve global operations. Over time, these hubs tend to take on more data and push more applications out to far-flung users.
In addition, as enterprises consolidate to take advantage of data center capacity, many are migrating to virtualized environments. Keep in mind that while this shrinks the physical footprint, it also adds to network traffic.
In such a scenario, it can prove expensive to maintain effective replication of the data and provide adequate failover for applications. The result can leave the organization exposed without a viable disaster recovery approach, notes Nik Rouda, director of marketing, solutions and verticals at Riverbed Technology.
“We have seen an increased use of networks for disaster recovery in the past,” he adds. “But it has been very expensive to perform backup and replication across the WAN.”
On an optimized network, the cost of disaster recovery drops dramatically: moving data becomes simpler and cheaper, which puts robust DR strategies within reach of more budgets.
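As a rough illustration of that calculus, the sketch below uses assumed figures (a 2 TB nightly replication job, a 100 Mbps WAN link, and a hypothetical 5:1 data-reduction ratio from deduplication and compression, none of which come from the article) to compare transfer times with and without optimization:

```python
# Illustrative back-of-envelope arithmetic (all figures are assumed, not from
# the article): how data reduction on an optimized WAN shrinks a nightly
# replication job's transfer time.

def transfer_hours(data_gb, link_mbps, reduction_ratio):
    """Hours to replicate data_gb over a link, after dedup/compression."""
    effective_gb = data_gb / reduction_ratio   # data actually sent on the wire
    gigabits = effective_gb * 8                # gigabytes -> gigabits
    seconds = gigabits * 1000 / link_mbps      # gigabits -> megabits, / Mbps
    return seconds / 3600

# 2 TB nightly delta over a 100 Mbps WAN link:
baseline = transfer_hours(2000, 100, 1)    # no optimization
optimized = transfer_hours(2000, 100, 5)   # assumed 5:1 reduction

print(f"unoptimized: {baseline:.1f} h, optimized: {optimized:.1f} h")
```

Under these assumptions, the same replication window shrinks from roughly two days to well under a shift, which is the difference between a DR plan that fits the calendar and one that does not.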
But beyond cost factors, WAN optimization also alleviates the technical constraint of moving massive amounts of information over long distances. “The challenge is to keep the data centers in sync with one another,” according to Mark Urban, senior director of product marketing at Blue Coat Systems. “Firms also want to be able to constantly move vast amounts of data between them.”
Center-to-center backup and replication thus becomes practical, and the freed bandwidth creates the opportunity to build high-availability networks and resources. “More data can be protected,” Rouda says. “What’s more, backups can be more frequent and data recovered much faster, which makes recovering from a disaster easier and cheaper.”
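Rouda's point about backup frequency can be put in back-of-envelope terms. The sketch below assumes a hypothetical change rate of 10 GB per hour (a made-up figure for illustration) and compares how much unprotected data sits at risk under nightly versus hourly replication:

```python
# Illustrative arithmetic (the change rate and intervals are assumed, not
# from the article): more frequent backups shrink the window of data that
# would be lost if a disaster struck just before the next replication run.

def data_at_risk_gb(change_rate_gb_per_hour, backup_interval_hours):
    """Worst-case unreplicated data accumulated between backups."""
    return change_rate_gb_per_hour * backup_interval_hours

nightly = data_at_risk_gb(10, 24)  # one backup per day
hourly = data_at_risk_gb(10, 1)    # hourly, enabled by freed WAN bandwidth

print(f"nightly: up to {nightly} GB at risk; hourly: up to {hourly} GB")
```

The interval between backups is, in effect, the recovery point objective: shorten it and the worst-case data loss shrinks in direct proportion.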
The need to recover quickly is not up for debate, Rouda says. He points to the recent spate of outages that several household-brand companies have experienced. “There have been some big outages recently, and companies need to understand the broader range of solutions and the need for comprehensive protection.” WAN optimization offers a big step toward that protection.
One concern many companies have about WAN optimization is security. Won’t pushing more data across the network create new security demands? At a minimum, bumping up the speed and volume of traffic means the ability of existing security solutions to handle the increased load should be assessed.
“Security people are seeing the effect of the ‘tyranny of the numbers,’” says Mark Kadrich, an independent consultant and former head of The Security Consortium. “More traffic means there is more to inspect in order to find that golden nugget telling us there is a breach.”
As the focus of network infrastructure shifts from the LAN to the WAN, with large optimized connections and features such as load balancing layered on top, it becomes harder for security tools to perform well.
“Many companies are operating at near capacity,” Kadrich says. “In addition, there is a trade-off between increasing network loads by several orders of magnitude and the need to inspect traffic.”
This means companies must bring both the network and security members of their IT teams into the optimization process. “It comes down to how well we distribute resources,” Kadrich says.
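Kadrich's point about distributing resources can be sketched as simple capacity arithmetic. The throughput and traffic figures below are assumed purely for illustration:

```python
# Illustrative sketch (all capacities assumed): when an optimized WAN carries
# far more traffic than a single inspection appliance can examine, the load
# has to be spread across additional sensors.

def sensors_needed(traffic_mbps, sensor_capacity_mbps):
    """Minimum number of inspection sensors to cover the offered load."""
    return -(-traffic_mbps // sensor_capacity_mbps)  # ceiling division

before = sensors_needed(800, 1000)   # pre-optimization load, one 1 Gbps sensor
after = sensors_needed(8000, 1000)   # a tenfold jump after WAN optimization

print(f"sensors needed: before={before}, after={after}")
```

The arithmetic is trivial, but the budgeting consequence is not: every multiple of traffic that optimization adds must be matched by inspection capacity somewhere, or traffic goes uninspected.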
Despite the concerns, there is a major advantage for the corporate infrastructure in optimizing WAN traffic and creating a matching information assurance strategy, Kadrich acknowledges. Specifically, it strengthens a business’s disaster recovery capabilities.