Looking for the Elephant
Slicing, dicing, and boiling the various manifestations of information warfare produces a lumpy stew. Information takes in everything from gossip to supercomputers. Warfare spans human activities from by-the-rules competition to to-the-death conflict. Some forms of warfare use the human mind as the ultimate battleground; others work just as well even if people go home. Information warfare, in some guises, almost seems to predate organized societies; in other guises, it may continue long after human society has evolved beyond today's organization altogether.
With the background of the first part of this essay, it seems reasonable to return to the underlying issue of information as a medium of conflict. Is information warfare sufficiently coherent to permit the emergence of information warriors? Does information dominance have any meaning, and, if it does, is that dominance the core goal of information warfare or a distraction that either applies so selectively that it is only one of many possible viewpoints or so broadly that further discussion is useless?
Naval War Is to Navies as Information War Is to What?
Can information be considered a medium of conflict parallel to other media? If so, is a separate service needed to house information warriors, however defined? There is a certain logic, for instance, to organizing a corps capable of managing the sensor-to-shooter cycle. Note 62 It could develop and organize the elements of the system, oversee their emplacement, interpret their emanations, maintain their integrity, and convey the results generated to the units that need them. This task would encompass IBW directly; the defense of the cycle would complement other information warfare efforts, such as defensive C2 warfare, counter-EW, and antihacker warfare. If information architectures are similar across competing militaries, then this corps may have the best feel for how the other side goes about developing its own sensor-to-shooter cycle. Conceivably, this corps would contribute to broader efforts in offensive C2 warfare, EW, and hacker warfare (as industrial economists helped pick targets of the U.S. strategic bombing campaign in World War II), but it would not conduct the war.
As the author can attest, the notion of an information corps falls short of intuitive obviousness. Even true believers understand that many forms of information warfare transcend the DoD: from certain aspects of intelligence collection, to the defense of civilian information systems, to most psychological warfare, to almost all economic information warfare, and to who knows what percentage of cyberwarfare. No DoD corps, regardless of how broadly constituted, has cognizance of more than perhaps half the territory of information warfare.
Even within that subset, however, the notion of an information warfare corps defined in terms of its medium is problematic. Corpsmen of all stripes tend to see their primary job as facing off against their opposites. Tank drivers know that the best weapons to take on tanks are other tanks; ditto for submariners. Jet drivers may be last to recognize how few countries believe their own jets can win air-to-air engagements with U.S. forces. Note 63 Denizens of the U.S. Space Command admit only grudgingly that their role in life is to help air-breathing commanders; given their druthers, they would rather conduct dustups with the space systems of other countries.
Unless an information corps is continually oriented to supplying (and protecting) information to support operations (a mission that overshadows the possession of raw firepower in determining conventional engagements), it may be tempted to orient itself against its counterparts. How ironic it would be if an information corps took defeat of the other side's systems as its mission -- just when such warfare becomes increasingly difficult to pursue, unproductive of results, and generally irrelevant to outcomes.
Is Information Warfare Possible?
Is information warfare a struggle for control of the information battlespace? Does information dominance -- a counterpart of, say, maritime supremacy, air superiority, or territorial control -- make sense as a goal? A nation claiming maritime superiority demonstrates its strength when its vessels have unquestioned right of passage over open oceans and can deny the same to enemy vessels. Similar claims to air superiority, or air supremacy, arise when one side can send its warplanes everywhere in the heavens while the other cannot even guarantee its birds' survival on the tarmac long enough to launch them.
Information warfare admits of the concept of superiority. One side in a conflict may have better access to information than the other. It has more sensors in better places, more powerful collection and analytical machines, and a more reliable process for turning data into information and information into decisions. It can rely on the integrity of command-and-control systems, while the enemy might have only a probabilistic set of weak links over which its messages pass. This state of affairs does not mean that one side's systems can keep the other side from functioning (in contrast to England's ability to bottle up the German surface fleet after Jutland).
Does the possibility of superiority say anything about supremacy? Only in some cases. One side's jamming device may be powerful or agile Note 64 enough to block radioelectronic emissions from the other side, yet this superiority would be local and may not imply that its devices can transmit without interference. Because radiation falls off with the square of distance (with the fourth power for reflected radar), a wide-area superiority translates poorly into local unintelligibility. Even so, one side might overcome the other's superior power using such techniques as nulling, directional antennas, or spread spectrum (hiding a narrowband signal in a broadband swath). The result might not be to silence the other side but to reduce its bandwidth to only essential messages. Note 65 More likely, both sides' bits get through.
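The falloff arithmetic above can be sketched with hypothetical numbers. The sketch below (all powers and distances are illustrative, not drawn from the essay) shows why a distant jammer with a large power advantage can still be outshouted by a nearby, low-power transmitter:

```python
# Illustrative sketch of free-space signal decay. All numbers are
# hypothetical; constants of proportionality are omitted.

def received_power(p_transmit: float, distance: float, exponent: int) -> float:
    """One-way links decay with the square of distance (exponent=2);
    reflected radar returns decay with the fourth power (exponent=4)."""
    return p_transmit / distance ** exponent

# A distant jammer versus a nearby friendly transmitter (arbitrary units).
jammer = received_power(p_transmit=1000.0, distance=100.0, exponent=2)
local = received_power(p_transmit=1.0, distance=1.0, exponent=2)

# Despite a thousandfold power advantage, the distant jammer delivers
# an order of magnitude less energy at the receiver than the nearby one.
print(jammer, local)  # 0.1 1.0
```

The same function with `exponent=4` shows why radar returns fade even faster, making wide-area radar dominance harder still.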
Can psychological warfare be understood as a zero-sum contest over mind-share? If two messages are opposed to each other, one side's message may dominate the other's, whose bits are received but whose messages fade. In practice, debates are not usually conducted as a direct clash of opposites (crime is down versus crime is up) but through selective emphasis or deemphasis (crime is up versus educational scores are up). Given enough conflict, listeners could resolve the issue by concluding that both sides are lying.
Overarching concepts such as an information warfare corps or information dominance end up having limited application across the whole, or even a large segment, of whatever falls under the rubric of information warfare. A comparison can be made to logistics supremacy; clearly one side's trucks do not prevent those of the other side from getting through. Opposing information systems can probably each expect to go about their business without overwhelming or even corrupting the other.
First, almost certainly there is less to information warfare than meets the eye. Although information systems are becoming more important, they are also becoming more dispersed and, if prepared, can easily become redundant (e.g., through duplication, compression, and error-correction algorithms). Other commercially employed techniques, such as distributed networking, spread spectrum, and trellis coding, can ensure the integrity of messages. The growth of networking systems has created new vulnerabilities, but they can be managed once they have been taken seriously. A strategy that strangles the other side by applying pressure on its information pipe may be self-defeating; if the other side's bureaucracy is well understood it may be defeated even more easily by flooding it with more information than it can handle.
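The redundancy point above can be made concrete with a minimal sketch of error correction: a triple repetition code, in which each bit is sent three times and a majority vote at the receiver corrects any single flipped copy. Real systems use far more efficient codes, but the principle -- redundancy lets messages survive corruption -- is the same.

```python
# A minimal sketch of redundancy through error correction: the triple
# repetition code. This is illustrative only; production systems use
# more efficient schemes (Hamming codes, Reed-Solomon, etc.).

def encode(bits):
    """Send each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority vote over each group of three received copies."""
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1  # one copy corrupted in transit
assert decode(sent) == message  # the vote still recovers the original
```

The cost, of course, is bandwidth: the channel carries three bits for every bit of message, which is one reason practical codes trade some of this brute redundancy for cleverer structure.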
Second, information warfare has no business being considered as a single category of operations. Of the seven types of information warfare presented here, two -- information blockade and cyberwarfare -- are notional, and a third -- hacker warfare -- although a real activity, is grossly exaggerated as an element of war viewed as policy by other means. Disregarding these as premature forms of information warfare, and associating EW techniques with whatever ends they support (e.g., C2W, IBW), three forms remain: C2W, IBW, and psychological operations, each of which can stand as a separate discipline. As it so happens, command-and-control systems are vulnerable because they tend to be centralized, while IBW systems are vulnerable because they rely on communications to unify a decentralized sensor architecture. C2W and IBW are linked in that EW techniques can be used against both command and intelligence systems.
Third, most of what U.S. forces can usefully do in information warfare will be defensive, rather than offensive. Much that is labelled information warfare is simply not doable -- at least under rules of engagement the United States will likely observe for the foreseeable future. Information systems are more important to U.S. forces than they are likely to be to opposing forces; what the United States might do in offensive operations is limited by the restrictive rules of engagement under which it operates; and the United States's open information systems are, by their nature, more likely to be understood than the systems of other countries.
Information Warfare and Information Architecture
One concept that recurs in almost all forms of information warfare -- and thus offers a unifying subtext -- is that the details of a system's architecture determine the effects of attacks on it -- far more than the details of, say, a city's architecture determine the effects of its being bombed.
Following Sun Tzu, the side that understands its enemy better is better prepared for conflict. Understanding the enemy's culture and the ways in which its society uses information remain important. These days, grasping the way the enemy uses information systems -- notably, communications networks, databases, and, someday, systematic knowledge algorithms (e.g., neural nets) -- is equally important.
At the core physical level, architecture incorporates sensors and emitters and their power, acuity, availability, and reliability. At the network level, architecture encompasses the interconnection of those elements: do they feed into the core processor directly, or are they filtered through particular systems (algorithmic, human, or some combination) or intermediate nodes (e.g., whether a field processor extracts semantic information and passes it along or just filters bits)? At a higher level are the integrity systems: encoding and encryption, message prioritization (e.g., filtering systems to replace what hierarchies used to do; useful for heavy EW environments), access (who can see what), digital signatures (to ensure that a sensor's readings come from a sensor or that commands come from a valid source), and redundancy (at the levels of bytes and semantics).
Architecture speaks to the way bits are transformed into information. A commander in one headquarters may pay attention to little else but the three top aides (who apply intuition to what they hear from lower echelons). The commander in another may insist on a large group of analysts who examine raw data, the relative influence of each analyst varying with the commander's estimate of their ability and with the correlation between the analysis and reality. Yet a third commander may reserve looking at slightly massaged bit streams for himself; analysts at this headquarters may suggest interpretations, but the analysis would get its due only if it is both out of the box and within the ballpark. Clearly, each commander has a different decisionmaking style, and a campaign of C2 warfare would have very different effects on each command apparatus.
Architecture links information to decision: how readings are interpreted, what readings are correlated to one another, what constitutes recognition, where boundaries are set to eliminate false positives and false negatives, and under what circumstances sensor bit streams are given higher relative priority. Are data from heterogeneous streams melded to influence decisions or to support them after the fact? The sensor-to-shooter complexes of tomorrow are but one channel; other channels include political direction, rules of engagement, and the status of one's own forces.
Information warfare waged without regard for the architecture of decisionmaking is no better than a shot in the dark. U.S. forces in the Gulf exploited a long period of preparation to figure out how Iraq's leadership thought: extracting from Soviet doctrine and from recent Iraqi history (e.g., the tenets of Baath ideology, lessons from the war against Iran), listening to intercepted messages, exploring Soviet equipment, perhaps even feinting to test Iraqi systems. By 17 January allied forces had a fairly good feel for the way Iraq used information.
Architectural issues pervade civilian systems under attack from tomorrow's hackers. Most issues of access and security are essentially questions of whom the system will let talk to it. How are messages and messengers linked -- for example, by digital signature (proposed for electronic commerce) or telltale threads Note 66 (proposed for intellectual property protection)? Issues of whether others can feed the system executable code or parsable text are questions of what the system can absorb without rejecting. Unerasable archiving schemes are connections between the possibly corrupted present state of a system and its past, presumably uncorrupted state. To say that a system is hackable because it is physically open is scarcely to offer an adequate description of a system with complex and often carefully thought-out architectures.
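The message-to-messenger linkage above can be sketched in a few lines. A true digital signature would use asymmetric keys; the shared-key version below (a keyed digest, or HMAC) illustrates the same idea: a receiver can verify both that a message came from a holder of the key and that it was not altered in transit. The key and the report text are, of course, hypothetical.

```python
# A minimal sketch of authenticating a message with a keyed digest.
# Key and message are hypothetical; real signature schemes would use
# asymmetric (public-key) cryptography rather than a shared secret.

import hashlib
import hmac

KEY = b"shared-secret-between-sensor-and-commander"  # hypothetical

def sign(message: bytes) -> bytes:
    """Produce a tag binding the message to the key holder."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Accept only if the tag matches; constant-time comparison."""
    return hmac.compare_digest(sign(message), tag)

report = b"contact bearing 090, two vessels"
tag = sign(report)

assert verify(report, tag)                      # authentic report accepted
assert not verify(b"contact bearing 270", tag)  # altered report rejected
```

An attacker who can read the channel but does not hold the key can neither forge a valid tag for a new message nor modify an existing one without detection -- which is exactly the property a command or sensor network needs from its integrity layer.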
Psychological warfare must correspond to media architectures, in multiple dimensions, if it is to have an effect. The first issue is the seemingly simple one of how to inject bit streams into the media mesh of another country: directly (e.g., through DBS), indirectly (e.g., through CNN), or by reflection (e.g., through media reaction to particular events). Is the target population "pre-media" (e.g., when information mainly is word of mouth), mass media (e.g., one or, at most, only a few outlets), or "post-media" (e.g., five hundred channels or even Me-TV)? How do most people treat information -- as gospel, as advertising claims, as reliable indications of the opposite view (e.g., popular reaction to Soviet newscasts)? How do official news sources respond to anomalous information -- ignore it, flood it, refute it, suppress it? In this example, architecture has both a simple technical component and a more complex cultural one.
The dependence of information warfare on the other side's architecture suggests that its effectiveness is only as good as its intelligence on that architecture. To conduct C2W requires, minimally, knowledge of who talks to whom about what using which systems wired how. Equally necessary is a feel for the way command systems operate under stress or in degraded mode. To say that this information is difficult to collect (let alone verify) is an understatement. With the Cold War over, the number of countries needing to be mapped is larger and the resources to do it smaller than while the Cold War raged. Note 67 In contrast to the forty plus years the United States spent studying the Soviet Union, new enemies now can arise in weeks. Yet, most potential enemies of the United States have acquired information systems from Western firms, a source of intelligence that was not available about the Soviets. If the knowledge required to conduct and assess attacks on the other side's command systems is sufficiently below what the United States has or can get, resources devoted to such attacks may be wasted.
Now, consider that foreign defense systems designed to interoperate with U.S. IBW collection systems will be easier for the United States to understand should the tides of friendship ebb. The international assimilation of computers and communications through the global information infrastructure is giving rise to information systems that respond to a variety of requests and generate a variety of answers (e.g., airline reservations systems, environmental monitoring systems, interbank fund transfers) -- and perform in relatively understandable ways. This situation leads to several conclusions.
First, to know the other side's systems in wartime, it may be enough to know them in peacetime. Is it too much to expect that other people's peacetime systems will be influenced partly by their need to interconnect with U.S. systems during years when they and the U.S. enjoy mutual comity?
Second, little will help the United States to know the other side's architecture in peacetime better than helping to shape it. Other nations' systems are strongly influenced by the extent to which their architectures are subsystems of international systems (hardware, software, content, and systems integration).
Third, the shrewdest U.S. national security strategy may be expressed through support for the development of a global information infrastructure. Favorable pricing policies, accessible software and technology, and mutually accepted standards offer one method. Common networks help; so, too, does global availability of services for both data dissemination and intelligent data processing. Sensors and other space-based information systems with common interfaces and global access promote a shared visibility of the earth. Public key infrastructures and interlinked ambient monitoring systems can assist information security. The exact architecture of such emerging information systems need not be detailed immediately, but its most important feature -- a global system that is an extension of the U.S. system -- remains.