A user calls in to the help desk, complaining of poor application performance. The application is critical to her productivity. A ticket is opened, and it’s noted that she is calling from one of the organization’s branch locations. Eventually a network engineer gets involved and notes that someone at the branch is transferring a large data set via the File Transfer Protocol from the main office to the branch. Or perhaps it’s a new marketing flyer, several gigabytes in size, that’s being reviewed across the link by the branch manager.
Problems like these occur every day in organizations. While the cost of network bandwidth is decreasing, it is still a recurring expense, particularly above T1 speeds. Most of the time the network connections aren’t saturated, so buying bigger pipes to accommodate the exception would not be an efficient use of funds. What, then, can you do to help users share data across your existing WAN links?
The first item on the agenda is to understand your organization’s priorities and existing processes, and learn what the users experience firsthand. What applications are critical or — just as important — sensitive to network disruption and bandwidth limitations? Is there a daily pattern to the utilization? Many organizations see a double-hump pattern: a bump in utilization between 8 and 10 a.m., then a drop-off before lunch, followed by another bump after lunch and finally tapering off after 4 p.m.
You also need to see for yourself what your users experience. One quick and easy way to do this is to set up a spare workstation at a branch, connect a remote desktop (RDP) session to it and then actually work from it. It’s not completely true to the experience — the RDP session itself depends on the available bandwidth — but you’ll get a good idea of what’s happening. End users often can’t articulate in technical terms what they experience, but walking in their shoes for a week or two will really open your eyes.
Armed with a good understanding of your organization and a sense of what needs to be accomplished, how do you achieve your goals? Here are just a few ideas to help get you started:
1. Implement a solid Quality of Service policy.
A QoS policy is simply a list of prioritized applications and how much bandwidth they are guaranteed. QoS is most often implemented in your network infrastructure, on the Cisco routers on either end of the branch connection. Typically, a policy refers to a set of IP addresses on both ends (and possibly a port number) to define the clients and the server application to which they are connecting.
For example, consider a QoS policy guaranteeing 5 percent of the bandwidth from the clients in a branch office to a Citrix server. This policy would prevent clients from being dropped during high utilization — the main complaint among Citrix users. When no clients are connected, or they aren’t using their 5 percent of the bandwidth, other protocols and connections can use it. If the clients need the bandwidth, the router guarantees them that 5 percent before they have to compete with other traffic.
In general, it is better to apply a policy that guarantees a specific service level than to “throttle” traffic. I’ve seen a number of organizations restrict an application to a certain level (for example, allowing only 30 percent bandwidth for web surfing), only to see the other 70 percent of the bandwidth go unused. I’ve also seen organizations “reserve” bandwidth, which means they dedicate a certain fraction of the pipe to an application. Again, if no one uses it, it goes to waste.
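The difference between guaranteeing and reserving bandwidth can be made concrete with a small sketch. The function below is a toy model (not router code, and the class names are made up): each traffic class is promised a minimum fraction of the link, but any unused guarantee is returned to the pool rather than wasted, which is the behavior described above.

```python
def allocate(capacity, classes):
    """Sketch of guarantee-based sharing on a link of the given capacity.

    classes: dict of name -> {"demand": bps wanted, "guarantee": fraction}
    Returns: dict of name -> bps granted.
    """
    # Pass 1: honor each guarantee, capped by what the class actually wants,
    # so an idle class's guarantee is not wasted.
    granted = {n: min(c["demand"], c["guarantee"] * capacity)
               for n, c in classes.items()}
    leftover = capacity - sum(granted.values())
    # Pass 2: split the leftover among classes that still want more,
    # in proportion to their unmet demand.
    unmet = {n: classes[n]["demand"] - granted[n] for n in classes}
    total_unmet = sum(unmet.values())
    if leftover > 0 and total_unmet > 0:
        for n in classes:
            granted[n] += min(unmet[n], leftover * unmet[n] / total_unmet)
    return granted
```

With a hard "reserve" instead, pass 1 would hand each class its full guaranteed slice regardless of demand, and the idle Citrix slice would simply go unused — exactly the waste the guarantee approach avoids.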
2. Deploy e-mail attachment policies or use Cached Mode.
If your mail server is centralized, e-mail attachments across WAN links can be the kiss of death.
The worst case is when someone in a branch office sends a large attachment (greater than 10 megabytes) to several users. This can bog down the network and be a drain on all e-mail traffic.
One way to combat this is to deploy attachment size limits: Keep attachments under 10MB. That will keep hanging e-mail clients to a minimum. You could also deploy Outlook Cached Mode, which lets the e-mail client synchronize a local copy of the mailbox, much like a smartphone does. Large attachments still take a long time to arrive, but they won’t hang the client while they are being transmitted. Cached Mode does come with its own issues, though, such as delayed Global Address List updates and some e-discovery pitfalls.
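To illustrate the size-limit idea, here is a minimal sketch using Python’s standard `email` library to flag oversized attachments in a raw message. It is an illustration of the policy check, not a drop-in mail server plug-in; where and how you enforce the limit depends on your mail platform.

```python
import email
from email import policy

TEN_MB = 10 * 1024 * 1024  # the 10MB cap suggested above

def oversized_attachments(raw_message, limit=TEN_MB):
    """Return (filename, size) for every attachment over the limit."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    offenders = []
    for part in msg.iter_attachments():
        payload = part.get_payload(decode=True) or b""  # decoded size, not base64 size
        if len(payload) > limit:
            offenders.append((part.get_filename(), len(payload)))
    return offenders
```

A message whose attachments all come back empty from this check would pass the policy; anything returned is a candidate for rejection or for redirecting the sender to a file share.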
3. Use Windows Distributed File System.
If you limit attachment sizes, within a week someone will come up with a reason for needing to send a 50MB attachment to 16 people in other branches. If you’re going to take away a medium, you must provide another that’s equally easy to use.
Windows Distributed File System could be part of that plan. DFS allows you to keep multiple copies of a file system, which it replicates automatically using DFS Replication (or, on older servers, the File Replication Service). In addition, based on the sites and subnets defined in Active Directory, it directs each client to the nearest available replica — ideally one in the client’s own branch. There is one catch: If multiple users write to the same file before it replicates, the last change overwrites all the others.
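The site-aware referral logic is worth seeing in miniature. The sketch below models how a client gets pointed at the replica in its own site; the subnets, site names and server paths are invented for the example, and in a real deployment this mapping lives in Active Directory’s sites and subnets, not in code.

```python
import ipaddress

# Hypothetical sites-and-subnets data, standing in for what AD stores.
SUBNET_TO_SITE = {
    "10.1.0.0/16": "MainOffice",
    "10.2.0.0/16": "Branch-East",
}
SITE_REPLICAS = {
    "MainOffice": r"\\hq-fs1\marketing",
    "Branch-East": r"\\east-fs1\marketing",
}
FALLBACK = r"\\hq-fs1\marketing"  # used when the client's site has no replica

def nearest_replica(client_ip):
    """Refer the client to the replica in its own site, DFS-style."""
    ip = ipaddress.ip_address(client_ip)
    for subnet, site in SUBNET_TO_SITE.items():
        if ip in ipaddress.ip_network(subnet) and site in SITE_REPLICAS:
            return SITE_REPLICAS[site]
    return FALLBACK
```

The payoff is that a branch user opening the shared folder reads from a server on the local LAN, and only the replication traffic — not every file open — crosses the WAN link.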
4. Virtualize on all fronts.
First, companies virtualized servers and applications; now the focus is shifting to workstations and desktops. Virtualizing a remote office’s desktops has several benefits, chief among them that applications and their data stay in the data center, so only screen updates, keystrokes and mouse clicks cross the WAN link.
Virtualization can be accomplished in many ways. For example, users could connect to Windows Terminal Services or a Citrix XenApp server. Or, if you’re feeling adventurous, you could jump into the world of Virtual Desktop Infrastructure (VDI). VDI places a simple thin client on the user’s desktop, while the entire operating system — applications and all — runs on processors in the data center.
5. Invest in WAN optimization solutions.
These products generally use caching to reduce bandwidth utilization. In their simplest form, they cache data and, if the same data are re-sent, the devices are smart enough to serve up the cached bits instead of sending a new copy.
For example, consider a 30MB report published on your company’s intranet. The first person in a branch to click on the link has to wait while all those data make their way down a T1 pipe: about a two- to three-minute wait. The local network device stores a copy in cache at the branch; the next user at the branch who clicks on the link will actually get it — and get it quickly — from the cache rather than from the main office web server.
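The caching behavior described above can be modeled in a few lines. This is a toy stand-in for a caching appliance — real WAN optimizers also deduplicate at the byte level and compress — but it captures why the second reader gets the report fast: the object crosses the T1 only once.

```python
class BranchCache:
    """Toy model of a caching WAN device sitting at a branch office."""

    def __init__(self, fetch_over_wan):
        self.fetch = fetch_over_wan  # callable: url -> bytes (the slow WAN path)
        self.cache = {}              # url -> locally cached bytes
        self.wan_bytes = 0           # bytes that actually crossed the link

    def get(self, url):
        if url in self.cache:
            return self.cache[url]   # cache hit: served locally, zero WAN traffic
        data = self.fetch(url)       # cache miss: pull it over the WAN once
        self.wan_bytes += len(data)
        self.cache[url] = data
        return data
```

After the first user’s request populates the cache, every subsequent `get` of the same URL is a LAN-speed lookup, and `wan_bytes` — the metric the monthly circuit bill cares about — stays flat.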