CYBER SECURITY: BEYOND THE MAGINOT LINE

 

Statement of

 

Wm. A. Wulf,  Ph.D.

President, National Academy of Engineering

and

AT&T Professor of Engineering and Applied Science, University of Virginia

 

before the

 

House Science Committee

U.S. House of Representatives

 

OCTOBER 10, 2001

 

 

Good morning, Mr. Chairman and members of the committee.  I am Wm. A. Wulf, president of the National Academy of Engineering and AT&T Professor of Engineering and Applied Science in the Department of Computer Science at the University of Virginia.  I appreciate the opportunity to testify today on cyber security.

 

A few words about my background will provide a context for my remarks.  I was a professor at Carnegie Mellon University (CMU) for 13 years (from 1968 to 1980); and during that time computer security was one of the areas of my research.  I left CMU in 1980 to found and run a software company and subsequently served as an assistant director of the National Science Foundation (NSF).  In 1991, I returned to academia at the University of Virginia, where after a time I resumed my research on computer security.  The gap of more than 15 years between my first and second exposures to the state of the art in computer security gave me a different perspective than I would have had if I had stayed in the field.

 

In a report by the National Research Council[1], a committee of experts concluded that the immediate vulnerabilities of government computer systems could be ameliorated by rigorous implementation of industrial “best practices.”  I agree with that assessment.

 

I am troubled, however, by a deeper problem.  We have virtually no research base on which to build truly secure systems and only a tiny cadre of academic, long-term, basic researchers who are thinking deeply about these problems.  The immediate problems of cyber systems can be patched by implementing “best practices,” but not the fundamental problems.  Well-funded, long-term basic research on computer security is crucial to our national security.

 

For historical reasons, no federal funding agency has assumed responsibility for supporting basic research in this area -- not the Defense Advanced Research Projects Agency (DARPA), not the National Science Foundation (NSF), not the Department of Energy (DoE), not the National Security Agency (NSA).  Because no funding agency feels it “owns” this problem, only relatively small, sporadic research projects have been funded, and no one has questioned the underlying assumptions about cyber security that were established in the 1960s mainframe environment.

 

In my view, the little research that is being done is focused on answering the wrong question!  When funds are scarce, researchers become very conservative, and bold challenges to the conventional wisdom are not likely to pass peer review.  As a result, incrementalism has become the norm.  Unfortunately, in this context, the right answer to the wrong question is worse than useless because it leads to counterproductive efforts and a false sense of security.

 

I should point out that researchers in this area might disagree with my assessment of the problem.  As I said, the research community in this area is very small and very conservative -- and some will surely not like my implicit challenge to their life’s work.  However, I believe it is imperative that we reassess our approach to this urgent problem.

 

My analysis can be broken down into four areas:

1.  The need for a new “model” of the threat to replace the “Maginot Line” model

2.  The need for a new definition of cyber security

3.  The need for “active defense”

4.  The need for coordination with the legal and regulatory systems

 

The Maginot Line Model

 

Most research on cyber security is based on the assumption that the “thing” we need to protect is “inside” the system.  Therefore, we have tried to develop “firewalls” and other such mechanisms to keep outside attackers from penetrating our defenses and gaining access to, or taking control of, that “thing.”  This model of computer security -- I call it the Maginot Line model -- has been used since the first mainframe operating systems were built in the 1960s.  Unfortunately, it is dangerously flawed.

 

First, like the Maginot Line, it is fragile.  In WWII, France fell in 35 days because of its reliance on this model.  No matter how formidable the defenses, the attacker can make an end run around them, and once inside, the entire system is compromised.  The Maginot Line model is especially inappropriate in a networked environment, which does not have an “inside” or “outside” defined by the hardware.  Many attempts have been made to simulate such a boundary in a networked environment, especially through various cryptographic techniques, but so far none of these has worked.

 

Second, the Maginot Line model fails to recognize that many security flaws are “designed in.”  In other words, a system may fail by performing exactly as specified.  Flaws are not always “bugs” or errors -- they can also result when a system behaves as designed, but in ways the designers did not anticipate.  In 1993, the Naval Research Laboratory did an analysis of some 50 security flaws and found that nearly half of them (22) were part of the requirements or specifications.  It is impossible to defend or provide a firewall against security flaws that were conceived of as perfectly legitimate -- that were, in fact, considered requirements of correct system behavior!
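To illustrate the point, consider the short Python sketch below.  It is a toy example of my own, not drawn from the Naval Research Laboratory study: a login routine that does exactly what its (hypothetical) specification asks -- report whether the account exists and whether the password matched.  The specification itself is the flaw, because it lets an outsider enumerate valid usernames.

USERS = {"alice": "correct-horse", "bob": "battery-staple"}  # hypothetical accounts

def login(username: str, password: str) -> str:
    if username not in USERS:
        return "unknown user"        # specified behavior -- but it leaks which accounts exist
    if USERS[username] != password:
        return "wrong password"      # specified behavior -- confirms the account is valid
    return "welcome"

print(login("mallory", "x"))   # "unknown user"
print(login("alice", "x"))     # "wrong password" -- an attacker now knows "alice" is a real account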

 

In the 1990s, I did research on cryptographic protocols -- very short pieces (10 to 20 lines) of code that use cryptographic techniques to perform certain functions, such as establishing the identity of participants in a network transaction.  These protocols are the principal technique for creating software simulations of an “inside” and “outside” in networked systems, in the spirit of the Maginot Line model.  Even when these protocols have been mathematically proven to be correct, they can be, and have been, compromised by the clever manipulation of a feature critical to their “correct” operation.  If we cannot recognize the flawed specification of a 10-line program, it is highly unlikely we will be able to recognize flaws in programs with millions of lines of code.
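As a concrete, if simplified, illustration -- a Python sketch of my own, not one of the protocols from that research -- the exchange below authenticates each command with a keyed hash exactly as its toy specification requires.  Because the specification never demands freshness (a nonce or timestamp), an eavesdropper can simply replay a captured message and be accepted; the cryptography is never broken.

import hmac, hashlib

SHARED_KEY = b"example-shared-secret"   # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    # Client side: authenticate a command exactly as the toy specification requires.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify_and_execute(message: bytes, tag: bytes) -> bool:
    # Server side: accept any command whose keyed hash verifies -- the entire specification.
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"transfer $100 to account 42"
tag = sign(msg)
print(verify_and_execute(msg, tag))   # True -- the legitimate transaction

# Replay: the attacker simply resends the captured message and tag.
print(verify_and_execute(msg, tag))   # True again -- the flaw is in the specification, not the code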

 

Third, the Maginot Line cannot protect against insider attacks.  No one has ever compromised the CIA by mounting a frontal assault on its external fence in Virginia.  But security breaches have been made by employees inside the fence.  The analogy to computer systems is clear.  If we only direct our defenses outward, we ignore our greatest vulnerability, the legitimate insider.

 

Fourth, one need not “penetrate” a system to do major damage.  This was demonstrated by the distributed denial-of-service attacks on Yahoo and others last year, which showed that expected behavior can be disrupted or prevented without any form of penetration.  Simply by flooding the targeted systems with false requests for service, the attackers made it impossible for those systems to respond to legitimate requests.  We can be grateful that so far these denial-of-service attacks have been against Internet sites and not against 911 services in major cities.
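A minimal Python sketch -- my own illustration, not a description of those attacks -- shows why flooding alone denies service: a server with a bounded backlog must start turning work away once bogus requests fill the queue, even though nothing has been penetrated.

from collections import deque

QUEUE_CAPACITY = 100            # hypothetical limit on the server's pending-request backlog
backlog = deque()

def submit(request: str) -> bool:
    # Accept a request only if the backlog has room; otherwise drop it.
    if len(backlog) >= QUEUE_CAPACITY:
        return False            # dropped -- service denied
    backlog.append(request)
    return True

# The attacker floods the backlog with cheap, well-formed but useless requests.
for i in range(QUEUE_CAPACITY):
    submit(f"bogus request {i}")

# A legitimate user is now turned away without any defense having been breached.
print(submit("legitimate request"))   # False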

 

Finally, the Maginot Line model has never worked!  Every system ever built to protect a Maginot Line-type system has been compromised -- including the systems I built in the 1970s.  After 40 years of trying to develop a foolproof system, it’s time we realized that we are not likely to succeed.  It’s time to change the flawed inside-outside model of security.

 

This is not the place to espouse alternatives, but I’ll mention a few just to show that alternatives exist.  DARPA has a program to investigate models based on biological immune responses.  Other models could distribute the responsibility for defining and enforcing security to every object in the system, so that the compromise of one object would be just that -- a compromise of one object and not a compromise of the whole system.  The point is that there are much more robust models on which we might build the architecture of a secure cyberspace.
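A minimal Python sketch of that second idea, under my own assumptions rather than any particular research program: each object carries and enforces its own access policy, so a failure in one object's guard exposes only that object, not everything behind a shared perimeter.

class GuardedRecord:
    # Each object holds its own policy; there is no global "inside" to breach.
    def __init__(self, owner: str, data: str):
        self._owner = owner
        self._data = data

    def read(self, principal: str) -> str:
        if principal != self._owner:
            raise PermissionError(f"{principal} may not read {self._owner}'s record")
        return self._data

alice_record = GuardedRecord("alice", "allergies: penicillin")
bob_record = GuardedRecord("bob", "allergies: none")

print(alice_record.read("alice"))          # allowed by alice's own guard
try:
    alice_record.read("mallory")           # refused by the object itself
except PermissionError as error:
    print(error)
# Even if bob_record's guard were subverted, alice_record would still enforce its own policy.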

 

Definition of Security

 

The military definition of security emphasizes protecting access to sensitive information.  This is the basis of the compartmentalized, layered {confidential, secret, top secret} classification of information.  The slightly broader definition of security used in the research community includes two other notions: integrity and denial of service.

 

Integrity implies that information in the system cannot be modified by an attacker.  In some cases -- medical records, for instance -- integrity is much more important than secrecy.  We may not like other people seeing our medical records, but we may die if someone alters our allergy profile.
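A small Python sketch of my own (not part of the testimony's examples) makes the distinction concrete: the record below is readable by anyone -- secrecy is not the goal -- but a keyed hash over its contents lets the system detect whether the allergy profile has been altered.

import hmac, hashlib

INTEGRITY_KEY = b"hypothetical-records-key"   # assumed key held by the records system

def protect(record: str) -> bytes:
    # Compute an integrity tag over the stored record.
    return hmac.new(INTEGRITY_KEY, record.encode(), hashlib.sha256).digest()

def is_intact(record: str, tag: bytes) -> bool:
    return hmac.compare_digest(protect(record), tag)

record = "patient: J. Doe; allergies: penicillin"
tag = protect(record)

print(is_intact(record, tag))            # True -- unmodified
tampered = record.replace("penicillin", "none")
print(is_intact(tampered, tag))          # False -- the alteration is detected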

 

Denial of service is just what it says -- the attacker does not necessarily access or modify information in a system but does deny its users a service provided by that system.  In the case of logistical operations, for instance, the ability to flood a communication channel with traffic can cripple an operation.  Several years ago, for example, the Joint Chiefs of Staff asked a small team to see whether they could disrupt a major multi-service military exercise called Eligible Receiver; in fact, the team caused the exercise to be cancelled, in part by using denial-of-service techniques.  This relatively unsophisticated form of attack could also be used against phone systems (military base exchanges, 911, etc.), financial systems, and, of course, Internet hosts.

 

In fact, a practical definition of security is more complex than privacy, integrity, and denial of service.  A proper definition will differ for each kind of object -- credit card, medical record, tank, aircraft flight plan, student examination, and so forth.  The notion of restricting access to a credit card to individuals with, say, secret clearance is nonsensical.  Other factors, such as the timing, or at least the temporal order, of operations, correlative operations on related objects, and so on, are essential to the security of real-world information.  (An example often cited is that the best way to anticipate major U.S. military operations is to count the pizza deliveries to the Pentagon.)

 

The military concept of sensitive but unclassified information has a counterpart in spades in the cyber world.  Indeed, the line between sensitive and nonsensitive information is often blurred in cyberspace.  In principle, one must consider how any piece of information might be combined with innumerable other pieces of information and used in some way to compromise our interests.  The vast amount of information available on the Internet and the speed of modern computers make it impossible to anticipate how information will be combined or what inferences will be drawn from such combinations.

 

A simple model of “penetration” does not reflect any of these dimensions of realistic security concerns.  Hence, an analysis of the vulnerability of a system in terms of how it can be “attacked” under the inside-outside Maginot Line model is unlikely to reveal its true vulnerabilities.

 

Active Defense

 

Based on my experience over the past 30 years, passive defense alone will not work, especially if one holds to the Maginot Line model.  Effective cyber security must include some kind of active response -- some threat, some cost higher than the attacker is willing to pay -- to complement passive defense.  Our current computer security is primarily passive (although there are a few laws against crimes using a computer).

 

Our ability to identify and respond to an attack, in the cyber world or the physical world, can be improved substantially, but these approaches are not being aggressively pursued.  Much better models of passive defense are also possible -- especially models, such as the immune system response model, that distribute the responsibility for protection and defense rather than concentrating it at the Maginot Line.

 

Developing an active defense will not be easy.  The practical and legal implications of active defense have not been determined, and the opportunities for mistakes are legion.  The international implications are especially troublesome.  It is difficult, sometimes impossible, to pinpoint the physical location of an attacker.  If the attacker is in another country, could a countermeasure by a U.S. government computer be considered an act of war?  Resolving this issue and related issues will require a thoughtful approach and careful international diplomacy.

 

Precisely because these issues have not been thought about in depth, we desperately need long-term basic scholarship in this area.

 

Coordination with the Legal and Regulatory System

Any plan of action must begin with a dialog on legal issues.  I am not a legal expert, but there are two kinds of issues I think should be addressed soon: (1) issues raised in cyberspace that do not have counterparts in the physical world; and (2) issues raised by place-based assumptions in current law.

 

The first category includes everything from new forms of intellectual property (databases, for example) to new forms of crime (spamming, for example).  Issues of particular interest to this discussion are the rights and limitations on active countermeasures to intrusions (indeed, what constitutes an intrusion).  Issues raised by place-based assumptions in current law include many basic questions.  How does the concept of jurisdiction apply in cyberspace?  For tax purposes (sales taxes in the United States and value-added taxes in Europe), where does a cyberspace transaction take place?  Where do you draw the line between national security and law enforcement?  How do you apply the principle of posse comitatis?

 

Not all of these issues are immediately and obviously related to cyberspace protection.  But cyberspace protection is a “wedge” issue that forces us to rethink some fundamental questions about the role of government, the relationship between the public and private sectors, the balance between privacy and public safety, and the definition of security.

 

Addressing the Problem

In 1998, the Presidential Commission on Critical Infrastructure Protection released a report focused on vulnerabilities to cyber attack in the military, law enforcement, commerce, indeed virtually every aspect of life in the United States.  I was hopeful that the report would lead to the creation of a serious, long-term research program.  Unfortunately, it hasn’t.

 

In our typical fashion, research since then has focused on solving short-term problems.  NSA, for example, has recently established a number of (unfunded) centers of excellence and earmarked an institute for information infrastructure protection at Dartmouth.  All of these are focused on near-term problems.

 

Although industrial best practices will plug the most obvious holes in any computing system, in the long run we must develop a conceptual foundation that includes a strong research base and a cadre of committed researchers to address these issues.  This will require that a single agency, with enough resources to fund a long-term, stable research program, be assigned responsibility for coordinating the development of a fundamental science base and a community of researchers.

As a former assistant director of NSF, I have been both the source and the target of requests for research funds.  I hope my remarks today will not be interpreted as “more of the same.”  I believe the United States is extremely vulnerable to cyber terrorism.  Unlike the situation in the 1940s when the country was attacked, we have no pool of scientists and engineers today to fill the breach.  We must do everything we can to create that pool as quickly as possible -- and, unfortunately, it may not be quickly enough.

 

I believe that ensuring stable, long-term funding, at whatever level will be most effective, is the most important change we can make immediately.  Academics build their careers by establishing their reputations among their colleagues over a long period of time.  Attracting the brightest minds to this critical field will require reasonable assurances that they can continue to work in the field.

 

Thank you for the opportunity to testify on this critical matter.



[1] Realizing the Potential of C4I: Fundamental Challenges, National Research Council, 1999.

