Oversight hearing on
Technology -- Essential Yet Vulnerable:
How Prepared Are We for Attacks?
September 26, 2001
Software Engineering Institute
Carnegie Mellon University
Subcommittee on Government Efficiency, Financial Management
and Intergovernmental Relations
Mr. Chairman and Members of the Subcommittee: My
name is Rich Pethia. I am the director of the CERT® Centers, which include
the CERT Coordination Center (CERT/CC) and CERT Analysis Center (CERT/AC).
Thank you for the opportunity to testify on computer security issues that
affect the government. Today I will discuss the vulnerability of information
technology on the Internet, including information about the Nimda worm,
and how prepared I believe the nation is for cyber attacks such as Nimda.
My perspective comes from the work we do at the
CERT Centers, which are part of the Survivable Systems Initiative of the
Software Engineering Institute, a federally funded research and development
center operated by Carnegie Mellon University. We have 13 years of experience
with computer and network security. The CERT/CC was established in 1988,
after an Internet "worm" became the first Internet security
incident to make headline news, acting as a wake-up call for network security.
In response, the CERT/CC was established at the SEI. The center was activated
in just two weeks, and we have worked hard to maintain our ability to
react quickly. The CERT/CC staff has handled well over 63,000 incidents
and cataloged more than 3,700 computer vulnerabilities.
The CERT Analysis Center, established just last
year, addresses the threat posed by rapidly evolving, technologically
advanced forms of cyber attacks. Working with sponsors and associates,
the CERT Analysis Center collects and analyzes information assurance data
to develop detection and mitigation strategies that provide high-leverage
solutions to information assurance problems, including countermeasures
for new vulnerabilities and emerging threats. The CERT Analysis Center
builds upon the work of the CERT Coordination Center.
The CERT Centers are now recognized by both government
and industry as a neutral, authoritative source of data and expertise
on information assurance. In addition to handling reports of computer
security breaches and vulnerabilities in network-related technology, we
identify preventive security practices, conduct research, and provide
training to system administrators, managers, and incident response teams.
More details about our work are attached to the end of this testimony
(see Survivable Systems Initiative).
The Nimda Worm Illustrates
How Prepared We Are for Attacks
The recent attacks by the Nimda, or W32/Nimda,
worm demonstrate our vulnerability. The worm modifies web documents (files
ending with .htm, .html, and .asp) and certain executable files found
on the systems it infects. It then creates numerous copies of itself under
various file names. It scans the network for vulnerable computers and
propagates through email, thereby causing some sites to experience denial
of service or degraded performance. Computers that have been compromised
are also at high risk for being used for attacks on other Internet sites.
One of Nimda’s behaviors is to attack
computers that had been compromised by the Code Red worm and left in a
vulnerable state. It also targets home users’ computers, which are among
the most vulnerable. Because of the network traffic generated, Internet
Service Providers (ISPs) for home users suffered a negative impact from
the worm. Nimda
used many means to infect computers, as shown in the attached illustration,
"Complexity of Nimda Infection Vectors." For example, the worm
not only propagates through email attachments and by compromising vulnerable
Internet Information Servers, but it also spreads through shared files
that have been altered on a compromised server.
The algorithm used to spread the worm concentrated
for the most part on local networks, so the primary effect of the worm
occurred at the "edges" of the Internet. Operators of the backbone
of the Internet were not significantly affected; however, they did experience
an increase in customer service calls. Callers could not reach the Internet
because of the local scanning and email traffic caused by the worm, so
they thought the Internet was "down."
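The local bias described above can be sketched as a simple target-selection routine. The exact probabilities Nimda used are described in public analyses; the constants below are illustrative assumptions, not measured values:

```python
import random

def choose_target(local_ip, rng=random):
    """Pick a scan target, biased toward the local network.

    Illustrative sketch only: a worm that usually probes addresses
    near its own spends most of its traffic at the "edges" of the
    network, which matches the effect described above. The split
    probabilities here are assumptions for illustration.
    """
    a, b, _, _ = (int(octet) for octet in local_ip.split("."))
    r = rng.random()
    if r < 0.50:
        # Half the time: stay within the same first two octets.
        return f"{a}.{b}.{rng.randrange(256)}.{rng.randrange(256)}"
    elif r < 0.75:
        # A quarter of the time: same first octet.
        return f"{a}.{rng.randrange(256)}.{rng.randrange(256)}.{rng.randrange(256)}"
    else:
        # Otherwise: anywhere on the network.
        return ".".join(str(rng.randrange(256)) for _ in range(4))
```

Because most probes land near the infected host, the scanning load concentrates on local networks and their ISPs rather than on the Internet backbone.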
Nimda is the first significant worm or virus that
attacks both computers that act as servers and those that are desktop
computers. A server provides services such as a web site. Code Red exploited
the Internet Information Server (IIS), which is a web server. The Melissa
virus spread by means of users’ email on desktop computers. Nimda merges
the damaging features of both Code Red and Melissa—and more.
The first public report of Nimda infections occurred
Tuesday, September 18, 2001, between 8:30 and 9:00 a.m. Within an hour,
numerous organizations were telling the CERT/CC that they were paralyzed
by the worm. By the end of the day, more than 100,000 computers had been infected.
That same morning, the CERT/CC published
initial information about the worm and actions to take against it. We
were also in contact with the vendors of anti-virus products and other
response organizations to further spread the word of the problem and to
develop antidotes. Later that day, we issued more complete information
in a CERT advisory (CA-2001-26). The advisory went to a mailing list of
more than 150,000 addresses and was published on our web site (www.cert.org).
A copy is attached, along with copies of related advisories.
The worm spread so fast that system administrators,
users, and vendors did not have time to prepare. Quick response was a
challenge because there was no lead time for advance analysis. In contrast,
even with Code Red, analysts had a small amount of lead time to examine
an early version of the worm before the later, more aggressive version
began causing serious damage.
Analysts were also hampered by the lack of source
code for Nimda. Source code is the original form of the program, basic
code that reveals how the worm works. Thus, it was not possible to determine
quickly what the worm did and what it could potentially do. Analysts quickly
obtained the binary code, but it is time consuming to decompile this code
and analyze the inner workings of the worm. Analysis through decompiling
can take hours, days, or even weeks, depending on the complexity of the code.
Current State of Internet Security
The Nimda worm clearly points out multiple factors
that contribute to Internet security problems and pose obstacles to the
solutions. They include the vulnerability of technology on the Internet,
the nature of intruder activity, the difficulty of fixing vulnerable systems,
and the limits of effectiveness of reactive solutions.
Vulnerability of Technology
Last year, the CERT/CC received 1,090 vulnerability
reports, more than double the number of the previous year. In the first
half of 2001, we have already received 1,151 reports and expect well over
2,000 reports by the end of this year. These vulnerabilities are caused
by software designs that do not adequately protect Internet-connected
systems and by development practices that do not focus sufficiently on
eliminating implementation flaws that result in security problems.
There is little evidence of movement toward improvement
in the security of most products; software developers do not devote enough
effort to applying lessons learned about the sources of vulnerabilities.
We continue to see the same types of vulnerabilities in newer versions
of products that we saw in earlier versions. Technology evolves so rapidly
that vendors concentrate on time to market, often minimizing that time
by placing a low priority on the security of their products. Until customers
demand products that are more secure or there are changes in the way legal
and liability issues are handled, the situation is unlikely to change.
Additional vulnerabilities come from the difficulty
of securely configuring operating systems and applications software packages.
These products are often shipped to customers with security features disabled,
forcing the technology user to go through the difficult and error-prone
process of properly enabling the security features they need. While
current practices allow the user to begin using the product more quickly and reduce
the number of calls the product vendor’s service center might receive
when a product is released, they leave many Internet-connected systems
misconfigured from a security standpoint.
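The alternative is a secure-by-default design, in which every security-relevant option starts in its safest state and the operator must opt out explicitly. A minimal sketch, using hypothetical option names chosen only for illustration:

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """Hypothetical secure-by-default configuration sketch.

    Each security-relevant setting defaults to its safest value;
    an administrator who needs weaker behavior must ask for it
    explicitly, rather than needing to remember to lock things down.
    """
    remote_admin_enabled: bool = False   # off until deliberately enabled
    require_authentication: bool = True  # anonymous access denied by default
    sample_content_installed: bool = False  # no extra attack surface out of the box

# Opting out is an explicit, visible act:
relaxed = ServerConfig(remote_admin_enabled=True)
```

A freshly installed system built this way is safe to connect before any configuration work is done, which is the "secure base" the testimony describes.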
Intruder Activity: The Ease of Exploitation
CERT/CC experience shows that there has been a
steady advance in the sophistication and effectiveness of attack technology.
Intruders quickly develop exploit scripts for vulnerabilities discovered
in products such as IIS. They then use these scripts to compromise computers
and, moreover, share these scripts so that more attackers can use them.
These scripts are combined with other forms of technology to develop programs
that automatically scan the network for vulnerable systems, attack them,
compromise them, and use them to spread the attack even further.
These new attack technologies are causing damage
more quickly than those created in the past. The Code Red worm spread
around the world faster than the so-called Morris worm moved through U.S.
computers in 1988, and faster than the Melissa virus in 1999. With the
Code Red worm, there were days between first identification and widespread
damage. The Nimda worm caused serious damage within an hour of the first
report of infection.
In the past, intruders found vulnerable computers
by scanning each computer individually, in effect limiting the number
of computers that could be compromised in a short period of time. Now
intruders use worm technology to achieve exponential growth in the number
of computers scanned and compromised. They can now reach tens of thousands
of computers in minutes where it once took weeks or months.
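The difference between one-at-a-time scanning and worm-driven spread is easy to see with a toy model; the fan-out figure below is an assumption for illustration, not a measurement:

```python
def compromised_after(rounds, initial=1, fanout=10):
    """Toy model of worm spread: every compromised host compromises
    `fanout` new hosts per round, so the total grows exponentially.
    All constants are illustrative assumptions."""
    total, newly = initial, initial
    for _ in range(rounds):
        newly *= fanout      # each current host recruits `fanout` victims
        total += newly
    return total

# A lone attacker scanning one host at a time reaches `rounds` hosts
# in the same number of rounds; the worm's total dwarfs that.
```

After just four rounds the toy worm has touched over eleven thousand hosts, versus four for the sequential attacker, which is why response windows have shrunk from weeks to minutes.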
This fast exploitation limits the time security
experts like those at the CERT/CC have to analyze the problem and warn
the Internet community. Likewise, system administrators and users have
little time to protect their systems.
Exacerbating the problem is the difficulty of catching
the attackers. Today’s Internet protocols make it easy for intruders to
disguise their identity and location. Automated attack technology further
distances the attacker from the attack. In the great majority of attacks,
attackers go unidentified and fear of prosecution offers little deterrent.
Difficulty of Fixing Vulnerable Systems
With an estimated 2,000 (and climbing) vulnerabilities
being discovered each year, system and network administrators are in a
difficult situation. They are challenged with keeping up with all the
systems they have and all the patches released for those systems. Patches
can be difficult to apply and might even have unexpected side effects.
We have found that, after a vendor releases a security patch, it takes
a long time for system administrators to fix all the vulnerable computer
systems. It can be months or years before the patches are implemented
on 90-95 percent of the vulnerable computers. For example, we still receive
reports of outbreaks of the Melissa virus, which exploits vulnerabilities
that are more than two years old.
There are a variety of reasons for the delay. The
job might be too time-consuming, too complex, or just given too low a
priority for the system administration staff to handle. With increased
complexity comes the introduction of more vulnerabilities, so solutions
do not solve problems for the long term—system maintenance is never-ending.
Because many managers do not fully understand the risks, they neither
give security a high enough priority nor assign adequate resources. Exacerbating
the problem is the fact that the demand for skilled system administrators
far exceeds the supply.
Even in an ideal situation, conscientious system
administrators cannot adequately protect their computer systems because
other system administrators and users, including home users, do not adequately
protect their systems. Incident reports to the CERT/CC indicate
that many people do not keep their anti-virus software up to date; and
they do not apply patches to close vulnerabilities. Computers on the Internet
are extremely interdependent. The security of each system on the Internet
affects the security of every other system.
Limits of Effectiveness of Reactive Solutions
For the past 13 years, we have relied heavily on
the ability of the Internet community as a whole to react quickly enough
to security attacks to ensure that damage is minimized and attacks are
quickly defeated. Today, however, it is clear that we are reaching the
limits of effectiveness of our reactive solutions. While individual response
organizations are all working hard to streamline and automate their procedures
and are working together to better coordinate activities, a number of
factors have combined to limit the effectiveness of reactive solutions.
- The number of vulnerabilities in commercial
off-the-shelf software is now at the level that it is virtually impossible
for any but the best-resourced organizations to keep up with the vulnerability fixes.
- The Internet now connects over 109,000,000 computers
and continues to grow at a rapid pace. At any point in time, there are
hundreds of thousands of connected computers that are vulnerable to
one form of attack or another.
- Attack technology has now advanced to the point
where it is easy for attackers to take advantage of these vulnerable
machines and harness them together to launch high-powered attacks.
- Many attacks are now fully automated and spread
at nearly the speed of light across the entire Internet community.
- The attack technology has become increasingly
complex and in some cases intentionally stealthy, thus increasing the
time it takes to discover and analyze the attack mechanisms in order
to produce antidotes.
- Internet users have become increasingly dependent
on the Internet and now use it for many critical applications as well
as online business transactions; even relatively short interruptions
in service cause significant economic loss and can jeopardize critical services.
These factors, taken together, indicate that we
are now at the point where we can expect many attacks to cause significant
economic losses and service disruptions within even the best response
times that we can realistically hope to achieve. Aggressive, coordinated
response will continue to be necessary, but we must also move quickly
to put other solutions in place.
Working our way out of the vulnerable position
we are in requires a multi-pronged approach that helps us deal with the
escalating near-term problem while at the same time building stronger
foundations for the future. The work that must be done includes achieving
- Higher quality information technology products
with security mechanisms that are better matched to the knowledge, skills,
and abilities of today’s system managers, administrators, and users
- Expanded research programs that lead to fundamental
advances in computer security
- A larger number of technical specialists who
have the skills needed to secure large, complex systems
- Increased and ongoing awareness and understanding
of cyber-security issues, vulnerabilities, and threats by all stakeholders
in cyber space
Higher quality products: In today’s Internet
environment, a security approach based on "user beware" is unacceptable.
The systems are too complex and the attacks too rapid for this approach
to work. Fortunately, good software engineering practices can dramatically
improve our ability to withstand attacks. The solutions required are a
combination of the following:
- Virus-resistant/virus-proof software – There
is nothing intrinsic about digital computers or software that makes
them vulnerable to virus attack or infestation. Viruses propagate and
infect systems because of design choices that have been made by computer
and software designers. Designs that allow the import of executable
code, in one form or another, and allow the unconstrained execution
of that code on the machine that received it, are the designs that are
susceptible to viruses and their effects. Unconstrained execution allows
code developers to easily take full advantage of a system’s capabilities,
but does so with the side effect of making the system vulnerable to
virus attack. To effectively control viruses in the long term, vendors
must provide systems and software that constrain the execution of imported
code, especially code that comes from unknown or untrusted sources.
Some techniques to do this have been known for decades. Others, such
as "sandbox" techniques, have been more recently developed.
- Reducing implementation errors by at least two
orders of magnitude – Most vulnerabilities in products come from software
implementation errors. They remain in products, waiting to be discovered,
and are fixed only after they are found while in use. Worse, the same
flaws continue to be introduced in new products. Vendors need to be
proactive, and adopt known, effective software engineering practices
that dramatically reduce the number of flaws in software products.
- High-security default configurations – With
the complexity of today’s products, properly configuring systems and
networks to use the strongest security built into the products is difficult,
even for people with strong technical skills and training. Small mistakes
can leave systems vulnerable and put users at risk when connected to
the Internet. Vendors can help reduce the impact of security problems
by shipping products with "out of the box" configurations
that enable security options rather than require the user to enable
them. The user can change these "default" configurations if
desired, but would have the benefit of starting from a secure base.
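The "constrained execution" idea in the first bullet can be sketched in a few lines: imported code runs against an explicit allow-list of capabilities instead of the full power of the machine. This is a toy illustration only; a restricted namespace is not a real security boundary, and production sandboxes enforce such limits at a much lower level:

```python
def run_untrusted(source):
    """Toy sketch of constrained execution of imported code.

    The code sees only an explicit allow-list of builtins, so the
    usual names for opening files or importing modules are simply
    absent. Illustration of the design principle only, NOT a real
    sandbox: genuine confinement must be enforced by the platform.
    """
    allowed = {"len": len, "range": range, "sum": sum, "min": min, "max": max}
    env = {"__builtins__": allowed}   # replace the default builtins
    exec(source, env)
    return env   # results the code bound are visible to the caller
```

Code that stays within the allow-list runs normally; code that tries to reach outside it (for example, to open a file) fails because the capability was never granted, which is exactly the property that makes a design resistant to imported-code viruses.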
Expanded research in information assurance:
It is critical to maintain a long-term view and invest in research toward
systems and operational techniques that yield networks capable of surviving
attacks while protecting sensitive data. In doing so, it is essential
to seek fundamental technological solutions and to seek proactive, preventive
approaches, not just reactive, curative approaches.
The research agenda should seek new approaches
to system security. These approaches should include design and implementation
strategies, recovery tactics, strategies to resist attacks, survivability
trade-off analysis, and the development of security architectures. Among
the activities should be the creation of
- A unified and integrated framework for all information
assurance analysis and design
- Rigorous methods to assess and manage the risks
imposed by threats to information assets
- Quantitative techniques to determine cost/benefit
of risk mitigation strategies
- Systematic methods and simulation tools to analyze
cascade effects of attacks, accidents, and failures across interdependent infrastructures
- New technologies for resisting attacks and for
recognizing and recovering from attacks, accidents, and failures
More technical specialists: The recent
government identification and support of cyber-security centers of excellence
and the provision of scholarships that support students working on degrees
at these universities are steps in the right direction. The current levels
of support, however, are far short of what is required to produce the
technical specialists we need to secure our systems and networks. These
programs should be expanded over the next five years to build the university
infrastructure we will need for the long-term development of trained security professionals.
More awareness and training for Internet users:
The combination of easy access and user-friendly interfaces has drawn
users of all ages and from all walks of life to the Internet. As a result,
many users of the Internet have little understanding of Internet technology
or the security practices they should adopt. To encourage "safe computing,"
there are steps we believe the government could take:
- Support the development of educational material
and programs about cyberspace for all users. There is a critical need
for education and increased awareness of the security characteristics,
threats, opportunities, and appropriate behavior in cyberspace. Because
the survivability of systems is dependent on the security of systems
at other sites, fixing one’s own systems is not sufficient to ensure
those systems will survive attacks. Home users and business users alike
need to be educated on how to operate their computers most securely,
and consumers need to be educated on how to select the products they
buy. Market pressure, in turn, will encourage vendors to release products
that are less vulnerable to compromise.
- In addition, support programs that provide early
training in security practices and appropriate use. This training should
be integrated into general education about computing. Children should
learn early about acceptable and unacceptable behavior when they begin
using computers just as they are taught about acceptable and unacceptable
behavior when they begin using libraries. Although this recommendation
is aimed at elementary and secondary school teachers, they themselves
need to be educated by security experts and professional organizations.
Parents need to be educated as well and should reinforce lessons in security
and behavior on computer networks.
Problems such as the Nimda worm will occur again,
and attack technology will evolve to support attacks that are even more
virulent and damaging. Our current solutions are not keeping pace with
the increased strength and speed of attacks; our information infrastructures
are at risk. Solutions are not simple, but must be pursued aggressively
to allow us to keep our information infrastructures operating at acceptable
levels of risk. However, we can make significant progress by making changes
in software design and development practices, increasing the number of
trained system managers and administrators, improving the knowledge level
of users, and increasing research into secure and survivable systems.
Additional government support for research, development, and education
in computer and network security would have a positive effect on the overall
security of the Internet.