Security engineering: broken promises

On the face of it, the field of information security appears to be a mature, well-defined, and accomplished branch of computer science.
Resident experts eagerly assert the importance of their area of expertise by pointing to large sets of neatly cataloged security flaws, invariably attributed to security-illiterate developers; while their fellow theoreticians note how all these problems would have been prevented by adhering to this year’s hottest security methodology. A commercial industry thrives in the vicinity, offering various non-binding security assurances to everyone, from casual computer users to giant international corporations.

Yet, for several decades, we have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and save for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else’s code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.

So, let’s have a look at some of the most alluring approaches to assuring information security - and try to figure out why they fail to make a difference to regular users and businesses alike.

Flirting with formal solutions

Perhaps the most obvious and clever tool for building secure programs would be simply to algorithmically prove they behave just the right way. This is a simple premise that, intuitively, should be within the realm of possibility - so why hasn’t this approach netted us much? Well, let’s start with the adjective “secure” itself: what is it supposed to convey, precisely? Security seems like a simple and intuitive concept, but in the world of computing, it escapes all attempts to usefully specify it. Sure, we can restate the problem in catchy, yet largely unhelpful ways – but you know we have a problem when one of the definitions most frequently cited by practitioners is:

“A system is secure if it behaves precisely in the manner intended – and does nothing more.”

This definition (originally attributed to Ivan Arce) is neat, and vaguely outlines an abstract goal – but it says very little about how to achieve it. It could be computer science - but in terms of specificity, it could just as easily be a passage from a Victor Hugo poem:

“Love is a portion of the soul itself, and it is of the same nature as the celestial breathing of the atmosphere of paradise.”

Now, one could argue that practitioners are not the ones to be asked for nuanced definitions - but ask the same question of a group of academics, and they will deliver roughly the same answer. The following common academic definition traces back to the Bell-La Padula security model, published in the early seventies (one of about a dozen attempts to formalize the requirements for secure systems, in this particular case in terms of a finite state machine – and one of the most notable ones):

“A system is secure if and only if it starts in a secure state and cannot enter an insecure state.”
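
To give a taste of what such formalism looks like up close, the two best-known mandatory access rules of the Bell-La Padula model – the “simple security property” (no reading up) and the “*-property” (no writing down) – can be captured in a few lines. The sketch below is a minimal illustration, assuming that clearances and classifications reduce to plain integer levels (real formulations also track compartments and categories):

```python
# A minimal sketch of the two core Bell-La Padula access rules, assuming that
# clearances and classifications reduce to simple integer levels, e.g.
# 0 = unclassified, 1 = confidential, 2 = secret, 3 = top secret.

def may_read(subject_level: int, object_level: int) -> bool:
    """Simple security property ("no read up"): a subject may only read
    objects at or below its own clearance level."""
    return subject_level >= object_level

def may_write(subject_level: int, object_level: int) -> bool:
    """*-property ("no write down"): a subject may only write to objects at
    or above its own level, so secrets cannot trickle down to lower levels."""
    return object_level >= subject_level

# Illustration: a "secret" process (level 2) may read a public file (level 0),
# but may not write what it learned back into that public file.
assert may_read(2, 0) and not may_write(2, 0)
```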

Definitions along these lines are fundamentally true, of course, and may serve as a basis for dissertations, perhaps a couple of government grants; but in practice, any models built on top of them are bound to be nearly useless for generalized, real-world software engineering. There are at least three reasons for this:

  • There is no way to define desirable behavior of a sufficiently complex computer system: no single authority can spell out what the “intended manner” or “secure states” are supposed to be for an operating system or a web browser. The interests of users, system owners, data providers, business process owners, and software and hardware vendors tend to differ quite significantly and shift rapidly – if all the stakeholders are capable of and willing to clearly and honestly disclose them to begin with. To add insult to injury, sociology and game theory suggest that computing a simple sum of these particular interests may not actually result in a satisfactory outcome; this dilemma, known as “the tragedy of the commons”, is central to many disputes over the future of the Internet.
  • Wishful thinking does not automatically map to formal constraints: even if a perfect high-level agreement on how the system should behave can be reached in a subset of cases, it is nearly impossible to formalize many expectations as a set of permissible inputs, program states, and state transitions – a prerequisite for almost every type of formal analysis. Quite simply, intuitive concepts such as “I do not want my mail to be read by others” do not translate to mathematical models particularly well - and vice versa. Several exotic approaches exist that allow such vague requirements to be at least partly formalized, but they place heavy constraints on software engineering processes, and often result in rulesets and models far more complicated than the validated algorithms themselves – in turn, likely needing their own correctness to be proven… yup, recursively.
  • Software behavior is very hard to conclusively analyze: static analysis of computer programs to prove that they will always behave in accordance with a detailed specification is a task that nobody has managed to believably demonstrate in complex, real-world scenarios (although, as usual, limited success in highly constrained settings or with very narrow goals is possible). Many cases are likely to be impossible to solve in practice (due to computational complexity) – or may even turn out to be completely undecidable, courtesy of the halting problem (a sketch of that argument follows this list).
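
To make that last point a bit more tangible, here is the classic reduction argument in the shape of code: if a perfect, fully general “is this program secure?” decider existed, it could be turned into a halting-problem oracle, which is provably impossible. Everything below – is_secure, run_program, leak_all_the_secrets – is hypothetical and named purely for illustration:

```python
# Hypothetical, fully general decider we would love to have: returns True
# if and only if `source` can never reach an insecure state on any input.
# By Rice's theorem, no total, always-correct implementation can exist.
def is_secure(source: str) -> bool:
    raise NotImplementedError

# Reduction sketch: with a perfect is_secure() in hand, the halting problem
# would fall as well - which is known to be impossible.
def halts(program: str, program_input: str) -> bool:
    # Build a wrapper that first runs the program under test and, only if
    # that run ever finishes, does something unambiguously insecure.
    wrapper = (
        f"run_program({program!r}, {program_input!r})  # may loop forever\n"
        "leak_all_the_secrets()  # reached only if the run above halts\n"
    )
    # The wrapper is insecure exactly when the wrapped program halts, so a
    # perfect is_secure() would double as a halting-problem oracle.
    return not is_secure(wrapper)
```
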
Perhaps more frustrating than the vagueness and uselessness of these early definitions is that, as decades fly by, little or no progress is made on coming up with something better; in fact, a fairly recent academic paper released in 2001 by the Naval Research Laboratory backtracks on some of the earlier work and arrives at a much more casual, enumerative definition of software security – one that is explicitly acknowledged to be imperfect and incomplete:

“A system is secure if it adequately protects information that it processes against unauthorized disclosure, unauthorized modification, and unauthorized withholding (also called denial of service). We say ‘adequately’ because no practical system can achieve these goals without qualification; security is inherently relative.”

The paper also provides a retrospective assessment of earlier efforts, and the unacceptable sacrifices made to preserve the theoretical purity of said models:

“Experience has shown that, on one hand, the axioms of the Bell-La Padula model are overly restrictive: they disallow operations that users require in practical applications. On the other hand, trusted subjects, which are the mechanism provided to overcome some of these restrictions, are not restricted enough. [...] Consequently, developers have had to develop ad hoc specifications for the desired behavior of trusted processes in each individual system.”

In the end, regardless of the number of elegant, competing models introduced, all attempts to understand and evaluate the security of real-world software using algorithmic foundations seem bound to fail. This leaves developers and security experts with no way to make authoritative statements about the quality of the code they produce. So, what are we left with?

Risk management

In the absence of formal assurances and provable metrics, and given the frightening prevalence of security flaws in key software relied upon by modern societies, businesses flock to another catchy concept: risk management. The idea, applied successfully to the insurance business (as of this writing, with perhaps a bit less to show for it in the financial world), simply states that system owners should learn to live with vulnerabilities that would not be cost-effective to address, and divert resources to cases where the odds are less acceptable, as indicated by the following formula:

risk = probability of an event * maximum loss

The doctrine says that if having some unimportant workstation compromised every year is not going to cost the company more than $1,000 in lost productivity, maybe they should just budget this much and move on – rather than spending $10,000 on additional security measures or contingency and monitoring plans. The money would be better allocated to isolating, securing, and monitoring that mission-critical mainframe that churns billing records for all customers instead.
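
Expressed as a back-of-the-envelope calculation, that reasoning looks more or less like the sketch below; the dollar amounts and the one-incident-per-year rate are simply the hypothetical figures from the example above:

```python
# Back-of-the-envelope expected-loss comparison for the workstation example;
# all numbers are the hypothetical figures from the paragraph above.

incidents_per_year = 1.0       # expected compromises of the workstation per year
loss_per_incident = 1_000.0    # lost productivity per compromise, in dollars
mitigation_cost = 10_000.0     # yearly cost of extra controls and monitoring

# risk = probability of an event * maximum loss
expected_annual_loss = incidents_per_year * loss_per_incident

if mitigation_cost > expected_annual_loss:
    print(f"Accept the risk: budget ${expected_annual_loss:,.0f} a year and move on.")
else:
    print(f"Mitigate: ${mitigation_cost:,.0f} is cheaper than the expected loss.")
```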

Prioritization of security efforts is a prudent step, naturally. The problem is that when risk management is done strictly by the numbers, it does deceptively little to actually understand, contain, and manage real-world problems. Instead, it introduces a dangerous fallacy: that structured inadequacy is almost as good as adequacy, and that underfunded security efforts plus risk management are about as good as properly funded security work.

Guess what? No dice:

  • In interconnected systems, losses are not capped, and not tied to an asset: strict risk management depends on the ability to estimate the typical and maximum cost associated with a compromise of a resource. Unfortunately, the only way to do so is to overlook the fact that many of the most spectacular security breaches in history started in relatively unimportant and neglected entry points, followed by complex access escalation paths, eventually resulting in near-complete compromise of critical infrastructure (regardless of any superficial compartmentalization in place). In by-the-numbers risk management, the initial entry point would realistically be assigned a low weight, as having little value compared to other nodes; and the internal escalation path to more sensitive resources would likewise be downplayed as having a low probability of ever being abused.
  • Statistical forecasting does not tell you much about your individual risks: just because, on average, people in the city are more likely to be hit by lightning than mauled by a bear does not really mean you should bolt a lightning rod to your hat but then bathe in honey. The likelihood of a compromise associated with a particular component is, on an individual scale, largely irrelevant: security incidents are nearly certain, but out of thousands of exposed non-trivial resources, any one could be used as an attack vector, and none of them is likely to see a volume of events that would make statistical analysis meaningful within the scope of the enterprise.
  • Security is not a sound insurance scheme: an insurance company can use statistical data to offset capped claims they might need to pay across a large populace with the premiums collected from every participant; and to estimate reserves needed to deal with random events, such as sudden, localized surges in the number of claims, up to a chosen level of event probability. In such a setting, formal risk management works pretty well. In contrast, in information security, there is nothing contributed by healthy assets to directly offset the impact of a compromise, and there is an insufficient number of events to model their distribution with any degree of certainty; plus, there is no way to reliably limit the maximum per-incident loss incurred.

Enlightenment through taxonomy

The two schools of thought discussed previously have something in common – both assume that it is possible to define security as a set of computable goals, and that the resulting unified theory of a secure system or a model of acceptable risk would then elegantly trickle down, resulting in an optimal set of low-level actions needed to achieve perfection in application design.

There is also the opposite approach preached by some practitioners – owing less to philosophy and more to the natural sciences: that much like Charles Darwin back in the day, by gathering sufficient amounts of low-level, experimental data, we would be able to observe, reconstruct, and document increasingly more sophisticated laws, until some sort of a unified model of secure computing is organically arrived at.

This latter world view brings us projects like the Department of Homeland Security-funded Common Weakness Enumeration (CWE). In the organization’s own words, the goal of CWE is to develop a unified “Vulnerability Theory”; to “improve the research, modeling, and classification of software flaws”; and “provide a common language of discourse for discussing, finding and dealing with the causes of software security vulnerabilities”. A typical, delightfully baroque example of the resulting taxonomy may be:

Improper Enforcement of Message or Data Structure → Failure to Sanitize Data into a Different Plane → Improper Control of Resource Identifiers → Insufficient Filtering of File and Other Resource Names for Executable Content.

Today, there are about 800 names in this dictionary; most of them as discourse-enabling as the one quoted here.

A slightly different school of naturalist thought is manifested in projects such as the Common Vulnerability Scoring System (CVSS), a business-backed collaboration aiming to strictly quantify known security problems in terms of a set of basic, machine-readable parameters. A real-world example of the resulting vulnerability descriptor may be:

AV:LN / AC:L / Au:M / C:C / I:N / A:P / E:F / RL:T / RC:UR / CDP:MH / TD:H / CR:M / IR:L / AR:M

Given this 14-dimensional vector, organizations and researchers are expected to transform it in a carefully chosen, use-specific manner – and arrive at some sort of an objective, verifiable conclusion about the significance of the underlying bug (say, “42”), precluding the need to judge the nature of security flaws more subjectively.
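
For the curious, the arithmetic behind the “base” portion of such a vector (the first six metrics) is sketched below, following the published CVSS version 2 base equations; the temporal and environmental adjustments are omitted, and the vector plugged in at the end is an illustrative stand-in rather than the exact example quoted above:

```python
# A minimal sketch of the CVSS v2 base-score arithmetic (the first six metrics
# of a vector like the one above); temporal and environmental adjustments are
# left out, and the example vector at the end is purely illustrative.

ACCESS_VECTOR  = {"L": 0.395, "A": 0.646, "N": 1.0}    # local / adjacent / network
ACCESS_COMPLEX = {"H": 0.35, "M": 0.61, "L": 0.71}     # high / medium / low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}    # multiple / single / none
IMPACT         = {"N": 0.0, "P": 0.275, "C": 0.66}     # none / partial / complete

def base_score(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEX[ac] * AUTHENTICATION[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Illustrative base vector resembling the example: AV:N / AC:L / Au:M / C:C / I:N / A:P
print(base_score("N", "L", "M", "C", "N", "P"))   # prints 6.8
```

Whether the output reads “6.8” or “42”, the precision of the arithmetic says nothing about the subjective judgment already baked into the inputs.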

I may be poking gentle fun at their expense - but rest assured, I do not mean to belittle CWE or CVSS: both projects serve worthwhile goals, most notably giving a more formal dimension to the risk management strategies implemented by large organizations (any general criticisms of certain approaches to risk management aside). Having said that, neither of them has yielded a grand theory of secure software yet - and I doubt such a framework is within sight.

Original article by:

Michal Zalewski
www.zdnet.com/blog/security/security-engineering-broken-promises/6503