Total computer security is impossible. No matter how much money you spend on fancy technology, how many training courses your staff attend or how many consultants you employ, you will still be vulnerable. Spending more, and spending wisely, can reduce your exposure, but it can never eliminate it altogether. So how much money and time does it make sense to spend on security? And what is the best way to spend them?
There are no simple answers. It is all a matter of striking an appropriate balance between cost and risk — and what is appropriate for one organisation might be wrong for another. Computer security, when you get down to it, is really about risk management. Before you can take any decisions about security spending, policy or management, the first thing you have to do is make a hard-headed risk assessment.
First, try to imagine all of the possible ways in which security could be breached. This is called “threat modelling”, and is more difficult than it seems. Mr Schneier, the security guru, illustrates this point by asking people to imagine trying to eat at a pancake restaurant without paying. The obvious options are to grab the pancakes and run, or to pay with a fake credit card or counterfeit cash. But a would-be thief could devise more creative attacks.
He could, for example, invent some story to persuade another customer who had already paid for his meal to leave, and then eat his pancakes. He could impersonate a cook, a waiter, a manager, a celebrity or even the restaurant owner, all of whom might be entitled to free pancakes. He might forge a coupon for free pancakes. Or he might set off the fire alarm and grab some pancakes amid the ensuing chaos. Clearly, keeping an eye on the pancakes and securing the restaurant’s payment system is not enough. Threat modelling alerts you to the whole range of possible attacks.
The next step is to determine how much to worry about each kind of attack. This involves estimating the expected loss associated with it, and the expected number of incidents per year. Multiply the two together, and the result is the “annual loss expectancy”, which tells you how seriously to take the risk. Some incidents might cause massive losses, but be very rare; others will be more common, but involve smaller losses.
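The calculation above is simple multiplication. As a hypothetical worked sketch (the figures are illustrative assumptions, not taken from the text):

```python
# Annual loss expectancy (ALE): expected loss per incident multiplied
# by the expected number of incidents per year. All figures below are
# hypothetical, for illustration only.

def annual_loss_expectancy(expected_loss_per_incident, incidents_per_year):
    """ALE = expected loss per incident x expected incidents per year."""
    return expected_loss_per_incident * incidents_per_year

# A rare but catastrophic attack...
rare_attack = annual_loss_expectancy(500_000, 0.1)   # $50,000 a year
# ...can carry the same annual risk as a common, low-cost one.
common_attack = annual_loss_expectancy(5_000, 10)    # $50,000 a year

print(rare_attack, common_attack)
```

Note that the two very different attack profiles produce the same annual figure, which is precisely why the measure is useful for comparing unlike risks.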
The final step is to work out the cost of defending against that attack. There are various ways to handle risk: mitigation (in the form of preventive technology and policies), outsourcing (passing the risk to someone else) and insurance (transferring the remaining risk to an insurer).
Suppose you are concerned about the risk of your website being attacked. You can mitigate that risk by installing a firewall. You can outsource it by paying a web-hosting firm to maintain the website on your behalf, including looking after security for you. And you can buy an insurance policy that, in the event of an attack, will pay for the cost of cleaning things up and compensate you for the loss of revenue. There are costs associated with each of these courses of action. To determine whether a particular security measure is appropriate, you have to compare the expected loss from each attack with the cost of the defence against it.
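The comparison described above can be sketched as a simple decision rule: a defence is worth paying for when its annual cost is less than the reduction in annual loss expectancy it delivers. The function and figures below are hypothetical illustrations, not a prescription.

```python
# Hypothetical sketch of the cost-vs-risk comparison: a defence pays
# for itself when it costs less than the risk reduction it buys.

def defence_is_worthwhile(ale_before, ale_after, annual_cost_of_defence):
    """Compare the reduction in annual loss expectancy with the
    annual cost of achieving it."""
    risk_reduction = ale_before - ale_after
    return annual_cost_of_defence < risk_reduction

# A firewall costing $20,000 a year that cuts an e-commerce site's
# ALE from $200,000 to $40,000 is clearly justified...
print(defence_is_worthwhile(200_000, 40_000, 20_000))     # True
# ...whereas $5m-a-year eye-scanners deployed against $100,000
# of fare-dodging are not.
print(defence_is_worthwhile(100_000, 10_000, 5_000_000))  # False
```

The second case corresponds to the "do nothing at all" (or do something much cheaper) response: when every available defence costs more than the loss it prevents, accepting the risk is the rational choice.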
Firewalls make sense for large e-commerce websites, for example, because the cost of buying and maintaining a firewall is small compared with the revenue that would be lost if the site were shut down by an intruder, however briefly. But installing biometric eye-scanners at every turnstile on a city’s public-transport system would be overkill, because fare-dodging can be mitigated with far cheaper technology. By contrast, in high-security environments such as military facilities or intelligence organisations, where a security breach would have serious consequences, the use of expensive security technology may be justified. In some situations, however, the right response may be to do nothing at all.
That different organisations have different security needs is explicitly recognised in ISO 17799, an international standard for “best practices in information security” that was introduced by the International Organisation for Standardisation in 2000. Risk analysis is a basic requirement of the standard, as is the establishment of a security policy. But, says Geoff Davies of i-Sec, a British security consultancy, “an industrial firm and a bank with ISO 17799 certification will have totally different systems.” The standard does not specify particular technological or procedural approaches to security, but concentrates on broadly defined ends rather than specific means. The standard’s flexibility is controversial, however. Critics believe future versions of the standard should be more prescriptive and more specific about what constitutes “best practice”. Still, even in its current form, ISO 17799 is better than nothing. Many multinational companies have already embraced it to demonstrate their commitment to security. And in several Asian countries, companies that want to do business with the government electronically must conform to the standard.
Just as different organisations require different levels of protection, they will also respond to an attack in different ways. A large company, for example, may find it useful to have a dedicated security-response team. Scott Charney at Microsoft says that when an attack occurs, one of the things the team has to decide is whether to give priority to remediation or to investigation. Blocking the attack will alert the attacker, which may make collecting evidence against him difficult; but allowing the attacker to continue so that he can be identified may cause damage. Which is more appropriate depends on the context. In a military setting, tracking down the attacker is crucial; for a dotcom under attack by a teenager, blocking the attack makes more sense. Another difficult choice, says Mr Charney, is whether to bring in the police. Internal investigations allow an organisation to maintain control and keep things quiet, but law-enforcement agencies have broader powers.
For small and medium-sized companies, a sensible choice may be “managed security monitoring” (MSM). Firms that offer this service install “sentry” software and machines on clients’ networks which relay a stream of messages to a central secure operations centre. Human operators watch for anomalous behaviour and raise the alarm if they detect anything suspicious. Using highly trained specialists to look out for trouble has the advantage that each operator can watch many networks at once, and can thus spot trends that would otherwise go unnoticed.
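The core of what those operators watch for is deviation from a network's normal behaviour. A toy sketch of the kind of check a monitoring centre might automate (purely illustrative; real MSM systems are far more sophisticated) flags a sentry whose event rate jumps well above its recent baseline:

```python
# Illustrative anomaly check: flag a network whose current event count
# is far above its historical baseline. A simple z-score threshold is
# an assumption made for this sketch, not a description of any
# particular vendor's product.

from statistics import mean, stdev

def is_anomalous(baseline_counts, current_count, threshold=3.0):
    """Flag when the current count exceeds the historical mean by
    more than `threshold` standard deviations."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return sigma > 0 and current_count > mu + threshold * sigma

# Daily event counts from a sentry during a normal week.
normal_week = [120, 115, 130, 125, 118, 122, 127]
print(is_anomalous(normal_week, 124))   # False: within the usual range
print(is_anomalous(normal_week, 400))   # True: raise the alarm
```

An operator watching many networks at once would see such alarms in aggregate, which is how the same anomaly cropping up on several clients' networks can reveal a trend that no single client would spot.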
Risk analysis, and balancing cost and risk, is something the insurance industry has been doing for centuries. The industry is now showing increased interest in offering cover for computer-related risks. In the past, computer risks were included in general insurance policies, but were specifically excluded in the run-up to the year 2000 to avoid “millennium bug” liabilities. Now insurers are offering new products to protect companies against new risks. Because of the Internet, “the landscape has changed,” says David O’Neill, vice-president of e-Business Solutions at Zurich North America, which acts as a matchmaker between customers and underwriters. Greater connectivity means firms are now exposed to risks that were never contemplated by traditional insurance policies, he says.
Mr O’Neill can arrange insurance against a range of risks, including data theft, virus attacks or intrusions by malicious hackers, and loss of income owing to a security breach or network failure. Companies can also take out insurance against having to pay damages if confidential financial or medical data are accidentally or maliciously released. Because no two networks or businesses are alike, each policy is prepared individually.
Such cyber-insurance is, however, still very much in its infancy. The main problem is that the complexity of computer networks makes it very difficult to quantify risk accurately. By comparison, working out the likelihood that a 45-year-old smoker will have a heart attack in the next 12 months is a piece of cake. One reason for the lack of data, says Mr Charney, is that most security breaches are not detected or reported. But this will change. “When a company asks for $1m in damages after a virus outbreak, the insurer will say, ‘Prove it’,” he explains. “Firms will have to substantiate it, and we will get some data.”
Mr Schneier predicts that insurance companies will start to specify what kinds of computer equipment companies should use, or charge lower premiums to insure more secure operating systems or hardware. Already, firms that use the monitoring service provided by his company, Counterpane Internet Security, enjoy a 20-40% reduction in their premiums for cyber-insurance. But Mr Anderson at Cambridge University thinks the need for cyber-insurance is overblown. “Insurers are having a hard time, so they are turning e-risks into a new pot of gold,” he says.
Most organisations already have the expertise required to handle computer security in a sensible way. Usually, however, this risk-management expertise is found not in the systems department but in the finance department. “Chief information officers, chief financial officers and other executives already know how to do risk analysis,” says Mr Davies. The systems department, on the other hand, does not; instead, it tends to be seduced by siren songs about technological fixes.
This survey has consistently argued that enthusiasm for technological solutions can go too far. In two areas in particular, security technology could end up doing more harm than good. First, some measures introduced in the name of improving security may have the side-effect of needlessly infringing civil liberties. Face-scanning systems at airports are a good example. They are almost useless at spotting terrorists, but civil-rights advocates worry about “function creep”, in which such systems are installed for one purpose and then used for another.
Similarly, new legislation has been proposed that would allow far more widespread wire-tapping and interception of Internet communications to combat terrorism. But would it actually improve security? “Broad surveillance is generally the sign of a badly designed system of security,” says Mr Schneier. He notes that the failure to predict the September 11th attacks was one of data sharing and interpretation, not data collection. Too much eavesdropping might actually exacerbate the problem, because there would be more data to sift. It would be better to step up intelligence gathering by humans.
The second area where security technology could do more harm than good is in the world of business. Technology introduced to improve security often seems to have the side-effect of reinforcing the market dominance of the firm pushing it. “Information-security technologies are more and more used in struggles between one company and another,” says Mr Anderson. “Vendors will build in things that they claim are security mechanisms but are actually there for anti-competitive reasons.”
One current, and highly controversial, example is Palladium, Microsoft’s proposed technology for fencing off secure areas inside a computer. It might be very useful for stopping viruses; but it might also enable Microsoft to gain control of the standard for the delivery of digital music and movies.
Security, in sum, depends on balancing cost and risk through the appropriate use of both technology and policy. The tricky part is defining what “appropriate” means in a particular context. It will always be a balancing act. Too little can be dangerous and costly — but so can too much.