Why Putting Numbers to Your Risks Is Important

A quantitative risk assessment can make risk-management decisions easier.

“So, are you ready to make this investment in the security of your data?”

As the salesperson pushes across the table a proposal for a new fire-suppression system to protect your data center, you question whether the $50,000 investment is warranted.

Many technology professionals base risk-mitigation decisions on instinct and anecdote, sometimes leading to overly conservative approaches because of a fear of underinvesting in critical technology protection. The use of quantitative risk-management techniques can provide a dollars-and-cents basis for making these decisions and explaining the rationale to nontechnical managers.

Step 1: Identifying and Valuing Assets

The first step in the quantitative risk-assessment process is to identify and place values on the assets being considered. It’s important to express value in terms of dollars and cents, so this process works best when considering “hard” assets, such as servers, buildings and network gear. Qualitative approaches that use subjective terms, such as “low,” “medium” and “high,” are more appropriate for expressing intangible values, such as the value of information and reputation.

There are three major approaches to determining quantitative values for assets:

1. Purchase price is the most straightforward method and the easiest data to compile. Simply gather the invoices for all of the assets, and use that data to assign values. Although this is simple, it neglects two important considerations: the fact that equipment tends to become cheaper over time and the fact that a business’s equipment loses value as it progresses through its useful life.

2. Replacement cost compensates for the weaknesses of purchase price by using the cost that would be incurred to replace the assets. This is a favorite approach of many disaster planners, because it approximates the true cost of recovery.

3. Depreciated value is an accounting concept that attempts to convey that most assets decrease in value over time. It spreads the value of the asset over its useful life and then deducts a portion of it each year. For example, depreciating a $10,000 server using a five-year useful life would result in subtracting $2,000 of its value every year. Although the depreciated-value approach does incorporate information about the age of assets, most disaster recovery planners dislike it because of the limited market (and appetite) for used replacement equipment.
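The straight-line depreciation described above is easy to express in code. This is a minimal sketch; the function name and signature are illustrative, not from the article.

```python
def straight_line_depreciated_value(purchase_price: float,
                                    useful_life_years: int,
                                    age_years: int) -> float:
    """Spread the asset's value evenly over its useful life and
    deduct one year's share for each year of age, flooring at zero."""
    annual_depreciation = purchase_price / useful_life_years
    return max(purchase_price - annual_depreciation * age_years, 0.0)

# The $10,000 server from the example, three years into a five-year life:
print(straight_line_depreciated_value(10_000, 5, 3))  # 4000.0
```

After five years the asset is fully depreciated, which is exactly why disaster recovery planners distrust this figure: a zero book value says nothing about what replacement would actually cost.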

Asset identification and valuation provides one major component of the data required for a risk assessment.

Step 2: Identifying Risks and Exposure

The second major category of information to collect concerns the specific risks facing assets. The risks to consider will depend upon the physical, geographic and political climate. For example, businesses in Florida must keep hurricanes top-of-mind, while earthquakes are the greater risk to California-based operations.

When identifying risks, it’s important to be exhaustive, yet reasonable. The goal should be to enumerate every risk that could foreseeably affect IT operations. Don’t go off the deep end worrying about bizarre scenarios, but consider the likelihood of a risk materializing.

After identifying the risks that might affect operations, the next step is to assign an exposure factor (EF) to each risk/asset pair in the environment. Exposure factor, expressed as a percentage of asset value, is meant to approximate the percentage of the asset that will be damaged if the risk materializes. For example, if company leadership believes that a data-center fire could destroy three-quarters of the data center's asset value, the exposure factor is 75 percent.

The last piece of information needed about each risk is its annualized rate of occurrence (ARO). This is, quite simply, the number of times per year that the risk is expected to occur. In most cases, this is expressed as a decimal. For example, a business identified by the Federal Emergency Management Agency as lying in a 100-year flood plain should expect a flood, on average, once every 100 years. Translating this to a decimal value, they would expect 0.01 floods to occur per year, giving an ARO value of 0.01.
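Converting a recurrence interval into an ARO is a one-line calculation. A minimal sketch (the function name is an assumption for illustration):

```python
def aro_from_interval(years_between_occurrences: float) -> float:
    """ARO for a risk expected, on average, once every N years."""
    return 1.0 / years_between_occurrences

print(aro_from_interval(100))  # 0.01 -- the 100-year flood
print(aro_from_interval(0.5))  # 2.0  -- an event expected twice a year
```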

Step 3: Loss Expectancy: Crunching the Numbers

At this point in the risk-assessment process, you should have identified the most important assets, assigned values to them, listed the risks facing operations, and determined the exposure to each of those risks. The final concept, loss expectancy, pulls all of these values together. It's necessary to calculate two separate loss expectancies, which build upon each other.

The first of these values is the single loss expectancy (SLE). This is the dollar value of the damage expected to occur each time the risk materializes and is calculated by multiplying the asset value by the exposure factor. If evaluating the risk of fire in a data center with an asset value of $10 million and an exposure factor of 75 percent, the single loss expectancy is $7.5 million. The result? Expect to incur $7.5 million of damage each time fire strikes your data center.

The second value, the annualized loss expectancy (ALE), brings probability into the equation and is the amount of damage expected in a typical year. The ALE is calculated by multiplying the SLE by the ARO. In the data-center fire example, if experts determine there is a 0.01 ARO, the ALE is then $75,000. This value simply annualizes the cost of a rare event for planning purposes.
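The two formulas chain together directly: SLE is asset value times exposure factor, and ALE is SLE times ARO. A minimal sketch using the article's data-center fire numbers (function names are illustrative):

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x exposure factor (EF as a fraction, e.g. 0.75)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return sle * aro

# $10 million data center, 75 percent exposure to fire, 0.01 ARO:
sle = single_loss_expectancy(10_000_000, 0.75)  # $7.5 million per fire
ale = annualized_loss_expectancy(sle, 0.01)     # $75,000 per year
```

Note that the ALE is a planning figure, not a prediction: the business expects one $7.5 million fire per century, smoothed into $75,000 of expected loss per year.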

Step 4: Risk-Mitigation Decisions

After calculating the ALEs for the risk/asset pairs, begin to use this information to make intelligent risk-mitigation decisions. Think back to the fire-suppression salesperson. If his $50,000 system would reduce by 50 percent either the risk of fire or the amount of damage caused by a fire, the $75,000 ALE indicates that the system would pay for itself in less than two years. That's how quantitative risk assessment adds science to risk-management decision making.
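The payback reasoning above can be sketched as a simple comparison of the control's cost against the annual reduction in expected loss. This is an illustrative sketch, not a formula from the article; the function name is an assumption.

```python
def payback_years(control_cost: float,
                  current_ale: float,
                  reduced_ale: float) -> float:
    """Years for a risk-mitigation control to pay for itself
    through reduced annualized loss expectancy."""
    annual_savings = current_ale - reduced_ale
    return control_cost / annual_savings

# $50,000 fire-suppression system halves the $75,000 ALE:
print(payback_years(50_000, 75_000, 37_500))  # ~1.33 years
```

Halving either the ARO or the exposure factor halves the ALE, which is why the salesperson's claim can be evaluated either way.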

About the Author

Mike Chapple

Mike Chapple is an IT professional and assistant professor of computer applications at the University of Notre Dame. He is a frequent contributor to BizTech magazine, SearchSecurity and About.com as well as the author of over a dozen books including the CISSP Study Guide, Information Security Illuminated and SQL Server 2008 for Dummies.