If your company is like most small businesses, you have one or two backup drives and a tape rotation that was put in place several years ago. Instead, think of your backup strategy as contingency planning: business continuity in the event of a disaster (including server failure) and ease of data restoration. To that end, techniques such as centralizing data, establishing a recovery flow chart, determining restore parameters and using different types of backup can ensure that your backup strategy accomplishes what you need.
Centralize Your Data: If your data isn’t centrally located, returning to your current state after a hardware failure or natural disaster will take much longer. Think about the impact on your business if you lost critical data. A great deal of valuable data often resides on PCs and not in centralized locations. If you haven’t already done so, the first step in disaster planning is to centralize vital information.
Lightning never strikes the same place twice. That’s what Gregg Skala, IT director at Neuco, would like to think. The Downers Grove, Ill.-based distributor of heating and air-conditioning controls experienced the fallout of a lightning strike this July. The resulting electrical surge damaged two dumb terminals, a few printers and numerous warehouse computers. While Skala needed to purchase replacement machines, the company’s data wasn’t scorched, because of solid backup procedures, such as centralizing data and establishing redundancy on key systems.
“Our main system with all important business data is on one system,” he says. “For the most part, our PC-based data resides on a shared network file.”
To centralize, redirect all end-user documents from My Documents folders to secure individual network shares. On a smaller network with just a handful of users, set up a share on the server with a subdirectory for each user; then right-click each PC's My Documents folder and redirect it to that user's network share. If you have a group of users who mostly share documents, redirect all of them to a common share so that they see one another's documents by default.
In Windows, Group Policy can automate this redirection, pointing desktops and My Documents folders on individual PCs to network shares so they are backed up centrally.
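Once folders are redirected, the remaining local documents still need to be moved to the share. A minimal sketch of that one-time consolidation step, in Python, is below; the paths and the per-user layout are assumptions for illustration, not part of any particular product.

```python
import shutil
from pathlib import Path

def centralize_documents(local_docs: Path, share_root: Path, user: str) -> int:
    """Copy a user's local documents into a per-user folder on a network share.

    A file already on the share is overwritten only when the local copy is
    newer. Returns the number of files copied.
    """
    dest_root = share_root / user
    copied = 0
    for src in local_docs.rglob("*"):
        if not src.is_file():
            continue
        dest = dest_root / src.relative_to(local_docs)
        dest.parent.mkdir(parents=True, exist_ok=True)
        # copy2 preserves timestamps, so an unchanged file is skipped next run
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            shutil.copy2(src, dest)
            copied += 1
    return copied
```

In practice you would run a script like this once per PC (for example, against `\\server\users`), then rely on the redirected folders from that point on.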
Create a Disaster Flow Chart: Now that you have centralized everything, you need to think about how to store and back up your data and consider an approach that will best suit your business model. You also need to think about where you are geographically, what common threats are experienced in your locale and how your data would survive those threats.
“You never want ownership to feel like they are not informed,” warns Skala. “We started out with our worst-case scenario” when Neuco developed its disaster recovery plans. Neuco decided that a tornado wiping out the business was the worst case, and planned backward to understand what it would take to process accounts receivable. The planners prioritized process by process until they understood how to re-create the business and its key systems.
Suppose there was a fire and the sprinklers went off and soaked the servers, or fire overtook the building. Imagine a flood or a gas explosion. How would your business continue, and what’s the probability of your data surviving? These are hard questions and require a lot of “what if” planning.
A basic flow-chart plan is a good way to start your planning. Think of the scenarios you might encounter and how each would affect your IT environment.
Establish Your Tolerance Level: Once you have determined what can go wrong, figure out how much that would cost per hour, per day, per week or whatever other timeframe works. This can be a difficult calculation, but understanding exactly how your business operates and relies upon various business elements can help. For example, if you work for a hospital, imagine what would happen if record-keeping systems failed and no patient records could be accessed for four hours. Determine how many records are accessed during an average four-hour period; now determine what would happen if those records weren’t available.
“You need to decide what your comfort level is,” says Skala. “How much of your infrastructure are you willing to rebuild? Our comfort level is nightly. We don’t want to have to re-create more than that. Our business relies on being open to ship our products, so being down one day will cost us.”
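The tolerance calculation above reduces to simple arithmetic once you estimate two inputs. The sketch below uses hypothetical numbers (the daily revenue and labor figures are invented for illustration, not drawn from Neuco):

```python
def downtime_cost(revenue_per_day: float, days_down: float,
                  rebuild_labor_cost: float) -> float:
    """Rough cost of an outage: revenue lost while down plus labor to rebuild."""
    return revenue_per_day * days_down + rebuild_labor_cost

# Hypothetical example: a distributor shipping $50,000/day, down two days,
# plus $3,000 of staff time re-entering the orders lost since the last backup.
cost = downtime_cost(50_000, 2, 3_000)  # $103,000
```

Running the numbers for a few scenarios makes it much easier to justify (or rule out) a more expensive backup system.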
Select Backup Options: Choices include drive mirroring, tape and disks. Drive mirroring uses two similar drives that read and write the same data. If one drive goes out, you have no interruption and usually not much performance degradation. You usually hear an audible alarm or get a system message if there is an error in the mirror.
A striped array uses three or more drives. In a striped array, the data is written across several drives along with parity information (a checksum). If a drive goes out, the system can switch to a hot spare and rebuild the lost data from the parity. The advantage of a striped array is that your data stays live through a single drive failure. The disadvantage is that the array alone is not a safety net: if multiple drives fail, or the array itself is destroyed, recovery will take much longer, which is why most organizations also back up to tape.
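The rebuild described above can be illustrated with XOR parity, the scheme used by RAID 5-style arrays. This is a simplified sketch of the idea, not an implementation of any vendor's controller:

```python
def parity_block(blocks: list[bytes]) -> bytes:
    """XOR equal-sized data blocks together to form a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Reconstruct a failed drive's block from the survivors plus parity.

    XOR is its own inverse, so XOR-ing the surviving blocks with the parity
    block yields exactly the missing block.
    """
    return parity_block(surviving + [parity])
```

Because any single block can be recovered this way, the array survives one drive failure; lose two drives at once and the arithmetic no longer has enough information, which is the "no safety net" limitation noted above.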
The type of tape drive you choose should be based on the volume of your data. If you have a small amount of data, a DAT (DDS3) drive can typically hold up to 24GB. If you have a large amount of data, a DLT system, and possibly a tape changer, may be in order; these tapes top out at 80GB each. Remember, tapes are sensitive to environmental variables such as temperature, humidity and magnetic fields.
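A quick sizing check, using the capacities cited above, shows how the drive choice follows from data volume (the 60GB figure is an arbitrary example):

```python
import math

def tapes_needed(data_gb: float, tape_capacity_gb: float) -> int:
    """Tapes required for one full backup, ignoring spanning overhead."""
    return math.ceil(data_gb / tape_capacity_gb)

# 60GB of data: three DAT (DDS3) tapes at 24GB each, or one 80GB DLT tape.
dat_count = tapes_needed(60, 24)
dlt_count = tapes_needed(60, 80)
```

If a nightly full backup needs more than one tape and nobody is on site to swap them, that alone argues for a changer or a larger format.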
Another option is to duplicate your servers with their arrays by clustering. With clustering, two essentially independent servers run with identical hardware configurations, which duplicate each other’s stored data. These servers can be in the same room, or across the building from each other, or across town, or even in different states.
Jeff Bobst, a network administrator at Mesa, Ariz.-based Acoustic Technologies, images his data and uses “live state” recovery services that create a snapshot of the server and hard drive. The company keeps daily, weekly and monthly archives. In case of failure, Bobst can restore the images to one of two secondary servers, which are typically idle and are identical to production servers.
“We have other levels of redundancy,” he says. “If a whole server goes down, we keep a spare server in the background.”
Instead of buying tape, Bobst purchases 400GB hard drives to store archives and keeps those replaceable drives offsite. “It’s inexpensive and easy to restore,” Bobst says. “And we can fit more data onto the hard drive than we can on tape.”
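A daily/weekly/monthly archive scheme like Bobst's is usually implemented as a rotation rule that decides which old snapshots to keep. The sketch below is one common way to express such a rule (keep recent dailies, Sunday weeklies, first-of-month monthlies); the specific retention windows are assumptions, not Acoustic Technologies' actual policy:

```python
from datetime import date

def keep_archive(archive_date: date, today: date,
                 daily: int = 7, weekly: int = 4, monthly: int = 12) -> bool:
    """Decide whether an archive survives a daily/weekly/monthly rotation.

    Keeps every archive from the last `daily` days, Sunday archives for
    `weekly` weeks, and first-of-month archives for `monthly` months.
    """
    age = (today - archive_date).days
    if age < daily:
        return True                                  # recent daily
    if archive_date.weekday() == 6 and age < weekly * 7:
        return True                                  # weekly (Sunday)
    if archive_date.day == 1 and age < monthly * 31:
        return True                                  # monthly (1st of month)
    return False
```

Running this over the archive set after each backup keeps storage bounded while preserving progressively older restore points.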
Disk-based incremental backup systems, specially configured servers with very large disk capacity, can automatically back up all data that has been added, changed or even deleted. For security reasons, consider locating these units in a different area from your servers, such as a different room, a different floor or a remote office. Disk-based backup can be used instead of, or in addition to, clustering.
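The core of any incremental scheme is detecting what has been added, changed or deleted since the last run. A minimal sketch of that detection step, comparing content hashes between two directory snapshots (a simplification of what commercial products do):

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map each file's relative path to an MD5 digest of its contents."""
    return {
        str(p.relative_to(root)): hashlib.md5(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def incremental_diff(previous: dict[str, str], current: dict[str, str]):
    """Return (added, changed, deleted) path sets between two snapshots."""
    added = set(current) - set(previous)
    deleted = set(previous) - set(current)
    changed = {p for p in set(current) & set(previous)
               if current[p] != previous[p]}
    return added, changed, deleted
```

Only the files in `added` and `changed` need to be copied to the backup unit, which is why incremental runs are so much faster than full ones; recording `deleted` as well lets you restore the directory to its exact prior state.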
Even if you do all this, you can still consider offsite vendors for business-continuation services in the event of a serious disaster. It depends on how you view potential data loss and how much you want to spend to keep your data live and your restore times to a minimum. As a rule, the more reliable the system you devote to keeping your data live, the more expensive it will be.