Airland Battle Doctrine and the Strategic Computing
Initiative
Gary Chapman - CPSR Executive Director

This is the first part of a two-part series.

One of the major programs in research and development
funding for advanced computer science in the United States
is the Strategic Computing Initiative (SCI), a $600 million
five-year agenda administered by the Defense Advanced
Research Projects Agency (DARPA). The SCI has been funding
three large projects in artificial intelligence research
and development since the program was announced in October,
1983: an autonomous land vehicle, or general-purpose
vehicle designed to be driven by a computer; a "pilot's
associate," or computer system to assist pilots flying
high-speed jet fighters; and a "battle management" system
for the Navy's aircraft carrier task forces. This latter
system would help the commander of the task force keep
track of what was going on in a naval battle, where the
rush and sheer number of events can surpass the human
ability to comprehend them.

Now the Army and DARPA have added a fourth project to
DARPA's artificial intelligence program, called Airland
Battle Management. It was introduced to a group of
contractor officials at a meeting in Washington, D.C., in
September. The bidding process officially begins November
1.

Airland Battle Management

Airland Battle Management is meant to be the land-based
counterpart of the Navy's battle management system,
currently under development aboard the aircraft carrier
USS Carl Vinson and at Pacific Fleet headquarters in Pearl
Harbor, Hawaii. The Airland Battle
Management computer is supposed to be able to have a
"picture" of ground and air battles in its memory at all
times, and to be able to constantly update this picture
with information received from the battle zones. The system
is meant to advise a corps commander about variables such
as weather, terrain, enemy capabilities, time constraints,
and other significant data.

The program will recommend various courses of action and,
through simulation routines, propose possible outcomes of
these courses of action. The commander may choose various
recommendations, or change them, and the system will
produce orders for military sub-units. As these missions
are accomplished, the new data will be entered into the
program and the process repeated.
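
The cycle described here is, in outline, a simple
advise-simulate-order loop. The toy Python sketch below shows
that shape; all names and numbers are invented for
illustration, since the actual DARPA design has not been
published.

    # A toy sketch of the advise/simulate/order cycle described
    # above.  All names and numbers are invented.
    import random

    def recommend(picture):
        # Propose canned courses of action from the current picture.
        return ["hold", "counterattack", "withdraw"]

    def simulate(picture, course):
        # Stand-in for the simulation routines: score each course.
        return random.random()

    picture = {"enemy_strength": 100}
    for cycle in range(3):
        courses = recommend(picture)
        outcomes = {c: simulate(picture, c) for c in courses}
        chosen = max(outcomes, key=outcomes.get)  # commander may override
        print("Orders to sub-units:", chosen)
        picture["enemy_strength"] -= 10           # new field reports arrive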

The Army and DARPA see this system as a necessary
supplement to conventional intelligence and command
decision-making, because the military envisions battles of
the future that involve more weapons, more chaos and more
destructiveness, in less time, than any other battles in
history. The battle of the future, according to the
military, will be won with high technology systems. One
Army document describes "a battlefield environment which
will be densely populated with sophisticated combat systems
whose range, lethality and employment capabilities surpass
anything known today. The airspace over that battlefield
also will be saturated with aerial and space surveillance,
reconnaissance and target acquisition." The author of this
document notes that "The potential for confusion in such an
environment will be greatly magnified compared to that in
which the Army now trains. Command and control will be
complicated to a much greater degree than it has been in the
past."

Computer Professionals for Social Responsibility has
written about the technical obstacles of the Strategic
Computing Initiative in the paper by Severo Ornstein, Brian
Smith, and Lucy Suchman, "Strategic Computing: An
Assessment." This critique, which was directed at the
general assumptions about the role of artificial
intelligence in warfare embodied in the SCI, applies to the
new project, Airland Battle Management, as well. What has
not been made explicit until now, however, is how the
SCI is to be integrated into a significant revision of U.S.
military doctrine that was started in 1982. The new
military doctrine is called Airland Battle, and this is the
name the American Army now uses to describe the way it
intends to fight the next war. As the insider tip sheet
Advanced Military Computing recently reported, DARPA's
Airland Battle Management program is considered "key" to
the Airland Battle doctrine.

Thus in order to have a more complete picture of the goals
of the Strategic Computing Initiative, it is necessary for
computer professionals to understand the military context
in which the SCI is being developed. This article is a
description of that military environment, and it will
discuss why some of the goals of the SCI are curiously at
odds with the purposes of the radically new American
military doctrine known as Airland Battle.

The Historical Development of Airland Battle Doctrine

The story of how Airland Battle came to be the latest
version of Army doctrine goes back to the war in Vietnam.
It was clear that the American Army in Vietnam did not do
as well as expected by political and even military leaders.
There were of course many reasons for this, political as
well as military, but within the military the bitter
feeling left over from the war was that it was the first
war that the United States had "lost." Following close
upon the end of the war in Vietnam came the Mayaguez
incident, the taking of the American hostages in Iran, and
then the disastrous Delta Force rescue raid. Particularly
after the tragedy of that failed rescue mission, many
people both inside and outside of the American military
began to wonder what was going wrong.
Speculation began to run high that the American military
would be incapable of conducting itself effectively in a
major conventional war, especially one that involved the
numerically superior, well-equipped and highly trained
forces of the Soviet Union.

Once the Vietnam war ended, American strategic attention
returned to Western Europe, and here the picture was not
encouraging. Throughout the mid-70s the Soviet Union had
been steadily increasing its forces in the Warsaw Pact,
until in several key conventional weapons systems the
Soviets had an overwhelming numerical advantage over NATO
forces. The Warsaw Pact, for example, now has 46,230 battle
tanks to NATO's 17,730 and 94,800 armored personnel
carriers to NATO's 39,580. If the Soviet Union were to
invade Western Europe, the United States claims it might be
forced to use nuclear weapons because conventional NATO
forces would be unable to stop a full Soviet offensive.
This is not an appealing strategy, particularly since it
destroys the very country one is trying to defend.
Political pressures in Europe, and pragmatic considerations
within the United States, made American commanders search
for a strategy that would make conventional NATO forces a
more formidable opponent and thus raise the nuclear
threshold. Secretary of Defense Weinberger and NATO
Commander General Rogers called this the attempted
"conventionalization" of NATO.

Official American policy with regard to the defense of
Western Europe today is to preserve "options," including
the use of nuclear weapons. But one of these options has
always been fighting a strictly conventional war in Europe,
and it is in this option that the United States feels there
is a serious strategic imbalance.

The traditional ways that armies have redressed a perceived
strategic imbalance have been to increase the number of
soldiers and weapons available, come up with new
technologies that are more effective in destroying an
opponent, or develop new tactics and strategies designed to
take advantage of an opponent's weaknesses. For a variety
of reasons, the first option has been considered
unavailable to NATO planners. It is highly unlikely that
the U.S.'s NATO partners will increase their defense
budgets and their ranks to levels that can compete with the
Warsaw Pact. A military peace-time draft is politically
unpalatable in the United States as well. And nearly all
Western countries are undergoing major demographic changes
that have dramatically lowered the draft-age population.
West Germany, for example, has a negative rate of natural
population growth.
Similar constraints exist in procuring vast numbers of
weapons. The chief Soviet advantage over the West is cheap
labor, and the Soviet Union has taken advantage of this by
producing twice as many tanks as the United States per
year. At current levels of Soviet weapons production, there
is virtually no chance that NATO countries could, or would,
match such an output.

The options available to American planners have thus been
technological development and changes in doctrine. The
American military has always been a major user of
innovative high technology, so the favored option was one
already working at full steam. The traditional rationale
for using high technology in weapons systems has been that
it saves American lives - "nothing but the best" for American
soldiers. The unspoken corollary, and the more important
point, is that technological advances have made
weapons more lethal and war more hazardous. The United
States military is now counting on continuous and reliable
technological development, funded through the Department of
Defense, to offset the numerical superiority of Soviet
forces. It is clear that U.S. planners feel that high
technology is the most significant comparative advantage of
the United States over the Soviet Union, particularly in
computers, optics and sensor systems.

The New Generation of High-tech Weapons

The United States and other NATO partners are at the
leading edge of a whole new generation of weapons which are
sometimes called "ET," for "emerging technology" weapons,
or "PGMs," for "precision guided munitions," or, in more
conventional parlance, "smart" weapons. These weapons are
dependent upon computer and optoelectronic technologies
that allow them to "see" a target and then destroy it
without simultaneous aiming by a human eye. An illustration
can be found in the history of anti-tank weapons. The
first generation of anti-tank weapons was the traditional
World War II "bazooka," simply a projectile aimed by a
soldier and fired at a target. Second-generation weapons
included the TOW, which is steered after firing by a wire
guidance system connected to an optical sight. Third-
generation weapons, which are just coming into operation,
use lasers for steering the projectiles, but the operator
still has to keep his eye on the target. The next
generation of anti-tank weapons, true PGMs that are only in
development today, are called "fire and forget" weapons,
because after the weapon is fired, the projectile itself
follows the target and the operator is free to engage
another target. Eventually these may be combined with an
autonomous system in which the human operator is replaced
by a target identification machine. Very sophisticated PGM
systems are already in operation on F-18 fighter planes,
which can engage six enemy aircraft at once, even when the
pilot cannot see them. An advantage of these systems is
that, since it is actually the projectile that is "smart,"
older systems can be fitted with deadlier ammunition. The
United States' Copperhead warhead, for example, is a
"smart" artillery shell that can be fired from older,
standard artillery tubes.

The principal rationale for the development of these so-
called "smart" weapons is that the United States can knock
out more Soviet forces with them than with other weapons
and hence equalize the conventional imbalance in terms of
numbers. The U.S. will need fewer units of hardware, and
fewer soldiers, if each anti-tank system can knock out 25
tanks instead of two. But in playing this numbers game,
such a strategy makes the relative value of the "smart"
weapons increase significantly. If the Soviets have 40,000
tanks, losing 100 of them is no great setback; if the U.S.
has a hundred "smart" anti-tank helicopters, for example,
the destruction of 25 of them will be a serious loss.
Consequently, the traditional battle doctrine of "firepower
and attrition" is inadvisable. With such potentially
indispensable and increasingly expensive weapons systems
facing an enemy with considerably more material resources,
it is more desirable to keep these weapons mobile and
elusive.

Battle Doctrine: Rediscovering the "Blitzkrieg"

This then introduces the concept of battle doctrine. A
battle doctrine is the general description of how any army
is to conduct itself when it fights-whether it will be
organized in large, massed units or small, independent
units, and so on. The U.S. Army's battle doctrine is
codified in a manual called Field Manual (FM) 100-5,
Operations and Tactics. This is the manual that is used as
the standard source of education and planning for Army
operations. The Army's FM 100-5 is periodically reviewed
and updated to reflect current thinking about Army
doctrine. It was revised in 1976 and again in 1982. The
differences between these two versions are very significant
and have received relatively little attention in the United
States outside a small group of theorists generally
identified with the Military Reform Movement. By contrast,
the 1982 FM 100-5, the one now known as Airland Battle, has
been one of the most controversial political issues of the
last few years in Western Europe.

The difference between the FM 100-5 of 1976 and Airland
Battle is somewhat similar to the difference between the
ways World War I and the early days of World War II were
fought. Everyone is familiar with the picture of the grim,
murderous hardships of trench warfare in World War I. This
traditional "set piece" way of fighting is known as
"firepower and attrition," because the principal elements
of success are massive firepower and lower attrition than
that of the enemy. World War I was characterized by almost
round-the-clock artillery barrages, with infantry troops
only holding territory that had been abandoned by the
enemy. The casualties were staggeringly high; at the Battle
of the Somme, one of the worst battles in history, the
British suffered some 60,000 casualties on the first day,
and the battle as a whole cost the two sides well over a
million casualties.

Most people are equally aware of the radically different
picture of the German "blitzkrieg" of World War II. The
early days of this war were characterized by extremely
rapid advances, no discernible lines of fighting, and the
collapse of armies that would under other circumstances
have been capable of putting up more of a fight. In World
War I the opponents focused on killing or capturing more
men than the other. In World War II the German Army
captured or broke up whole enemy units by using tactics
that are the opposite of "firepower and attrition" and have
come to be known as "maneuver warfare." The blitzkrieg model
of war is a war of wits, not materiel and personnel. And a
numerically inferior force can demonstrably overpower an
opponent by sticking to the cardinal rule of maneuver
warfare: "Be smart."

The Airland Battle doctrine has institutionalized, at least
in writing, maneuver warfare in the U.S. Army. This was a
self-conscious revision of the previous doctrine of the
1976 FM 100-5, which was known as "active defense." The
"active defense" doctrine was a codification of what was
the conventional thinking about the way a war in Western
Europe might develop: the Warsaw Pact would attack and NATO
forces would hold their ground long enough to arrange for
reinforcements, after which the Soviets could be turned
back. But with the growing numerical superiority of the
Warsaw Pact forces and the necessary assumption that the
Soviets might use tactical nuclear weapons, a certain
pessimism began to surround the idea of "active defense."
Some critics derisively called the plan "fall back by
ranks."

Between 1976 and 1982 Army tacticians rediscovered the
German blitzkrieg model. The blitzkrieg model not only
carries the potential of a materially inferior force
defeating an opponent, but it is also enhanced by
technological superiority in three key areas: rapid
mobility, communications and air support. In the early days
of World War II, General Guderian demonstrated the German
advantage in each of these three areas with the famous
Panzer tanks, the first use of portable radios, and German
air superiority through the use of Stuka dive bombers. The
United States has recently concentrated on these three
areas by devoting a tremendous amount of development
funding to projects like the Abrams M-1 tank, the Bradley
fighting vehicle, battle management computer systems and
enhanced communications, and improved tactical air
capabilities using F-16s, F-18s and advanced attack
helicopters.

The Role of "Auftragstaktik"

But the most important feature of the blitzkrieg model is
simply an idea, one the Germans called "Auftragstaktik."
The word translates roughly as "mission tactics," but in
German practice it took on a very specific meaning. It is
essentially a way of thinking on the part of tactical unit
commanders, one that encourages initiative, creativity,
innovativeness and boldness. It is described in the
following way by a West Point cadet, Stephen W. Richey, in
Military Review:

It was left up to the subordinate to work out for himself
the means of obtaining the objective. The subordinate was
expected to understand not merely the letter of his orders,
but the overall spirit of what his superior wanted to
achieve. If rapidly changing circumstances made it
necessary, the subordinate was expected to have the good
judgement and moral courage to disobey the letter of his
orders to attain the ultimate objective toward which these
orders had been directed.

The purpose of doing it this way was to maximize the
freedom of the leader on the spot to exercise immediate
personal initiative in seizing unforeseen problems, without
having to wait for permission from higher headquarters.
This served to "lubricate" the war machine, enabling it to
strike faster, farther, and harder.

One of the principal authors of the Airland Battle
doctrine, Colonel Huba Wass de Czege, similarly describes
the fundamentals of the new American doctrine:

The maintenance of the initiative through speed and
violence; flexibility and reliance on the initiative of
junior leaders; clear definition of objectives, concepts of
operations and the main effort; and attack on the enemy in
depth.

This shift in emphasis to highly mobile units with leaders
encouraged to find targets of opportunity has significant
implications for the composition of American forces.
Instead of the large, massed units of divisions and corps,
the focus now is on units of regimental size or even
smaller. And understandably there is greater interest in
highly mobile and increasingly deadly weapons systems using
precision-guided munitions. There is simultaneously an
emphasis on improving the so-called "teeth to tail" ratio,
or the proportion of combat troops on line to support
troops in the rear. Highly mobile, deep-strike units should
be self-sufficient by design.

The most politically controversial aspects to the Airland
Battle doctrine are its clear emphasis on offense and its
commitment to deep-strike attacks. The manual says that
"whatever the defensive techniques, the overall scheme
should maximize maneuver and offensive tactics." It also
says that the "attacker's single greatest asset--the
initiative--is the greatest advantage in war." This general
attitude is now combined with the tactical commitment to
what is called "follow-on forces attack," or FOFA. In this
scheme, Army forces would strike deep behind the enemy's
lead forces to attack the second echelon or "follow-on"
forces waiting to take up the fight. The goal is the
surprise, chaos, demoralization and isolation that would
result with the breakup of reinforcement units just as the
lead forces were expecting them.

Political and Strategic Implications

What has become so controversial about this is the shift in
NATO policy to one of offense, when NATO forces have always
been committed to a defensive role for political reasons.
Furthermore, there are serious reservations within Western
European governments about a policy which explicitly
intends to invade Eastern European countries and take the
war to the Soviet homeland. Many European leaders, citing
Soviet statements in the past, think that the Soviet
obsession with protecting against invasion of the homeland
and of their Warsaw Pact allies will, in combination with
the Airland Battle doctrine, increase the potential for a
theater-based nuclear war. The new American doctrine, which
was introduced into the European theater without
consultation with NATO partners, has started a firestorm of
controversy in public discussions and within Western
European parliaments. The Military Committee of the North
Atlantic Assembly, for example, has refused to endorse the
new doctrine, saying that any military posture based on
offensive intentions is unacceptable in Western Europe.
European leaders are also averse to considering the inter-
German border as a field for maneuver warfare.

The likely consequence of an American Army striking deep
and fast into Warsaw Pact territory is a battle with
extreme confusion and difficult communication. Colonel de
Czege writes that one of the most important realizations of
the authors of FM 100-5 was that "the chaos of the
battlefield will make centralized control of subordinates
always difficult, sometimes impossible." Cadet Richey has
described the need for "an attitude toward combat that will
enable our higher echelon commanders to remain undismayed
if their situation maps look more like whirlpools than like
straight lines" and for "senior commanders who have
absolute trust in their subordinates to do the right thing
when imperfect information, broken-down communications and
the rush of events make it impossible for the senior to
issue detailed orders that will have any bearing on
reality."

Coping with such problems has not been one of the American
Army's fortes. Martin van Creveld, a military historian,
notes in his 1982 book on World War II that "the Germans
viewed battle as a free play of wills in the realm of
chance and eschewed rules, while the Americans tended to
view battle almost as an engineering problem subject to the
application of preconceived formulas." This has been one of
the points that military reformers like Edward Luttwak,
William Lind, and Stephen Canby have made in the aftermath
of Vietnam. Colonel de Czege has noted that, in preparing
the 1982 FM 100-5, the authors assumed that, "if we are to
operate more flexibly and more effectively, our leaders
will have more need for principles and less need for
cookbook formulas."

In this, once again, the doctrine of Airland Battle has
followed the advice of its German forebears who developed
the blitzkrieg model of warfare. Adolf von Schell, a German
Army captain of World War I, writes in his book, Battle
Leadership, "Every soldier should know that war is
kaleidoscopic, replete with constantly changing,
unexpected, confusing situations. Its problems cannot be
solved by mathematical formulae or set rules."

Given this attitude in the new U.S. Army doctrine, and its
institutionalization in force structure, leadership
training, and battle plans, it is somewhat difficult to
understand the military fascination with artificial
intelligence and automated decision systems. The
description of DARPA's new Airland Battle Management
program appears to be quite incongruous with the attitudes
expressed by the military reformers responsible for the
Airland Battle concept. As some of them have put it, the
Army needs to train soldiers, and particularly leaders, not
"what to do," but "how to think." And yet now large sums of
money are going to be spent for a computer system that will
ostensibly do that thinking for commanders.

A second part to this article will discuss in detail the
current work on battle management for conventional forces,
how this is to be integrated into the new plans for
fighting conventional war, and why there are grave dangers
with such plans.

CPSR/Boston Co-Sponsors Debate on Computer Requirements of
Star Wars
Jon Jacky - CPSR/Seattle

On October 21, CPSR/Boston and the MIT Laboratory for
Computer Science sponsored a panel discussion titled, "Star
Wars: Can the Computing Requirements Be Met?" One of the
speakers was David L. Parnas, whose resignation last June
from the SDI Organization's Panel on Computing in Support
of Battle Management (SDIO/PCSBM) attracted national press
attention. Other speakers included Danny Cohen, who chairs
the SDI computing panel, Charles Seitz, who remains on the
panel, and MIT computer science professor Joseph
Weizenbaum. Michael Dertouzos, Director of the MIT
Laboratory for Computer Science, served as moderator.

Before the event, CPSR/Boston held a reception and
fundraiser that was attended by over one hundred people.
David Parnas spoke briefly to explain why he felt CPSR was
an important organization. "I wouldn't have said that a few
months ago," he allowed. He said that CPSR needs to help
the public develop more skeptical and realistic
expectations about what computers can do. "We like to tell
the press about our successes, but not our failures," he
said. He also noted that lay people need help
distinguishing simple applications from difficult ones. "If
you look at the displays of a video game and of a real
navigation system, they look about the same," he observed,
"but the game can assume that the world is flat, and the
navigation system can't even assume that it is round."
Finally, he remarked that the existence of CPSR was very
important in encouraging people like himself to speak out
about problems they encountered in their work. Joseph
Weizenbaum added that CPSR needs people to join and
contribute.

The debate took place before a capacity crowd of over 1300
people in MIT's Kresge Auditorium. Where else but Boston
would it be possible to assemble a technically
sophisticated audience this large? At one point, when
Michael Dertouzos mentioned that an earlier SDI study panel
had suggested a "consistent distributed database," most of
the audience laughed.

Dertouzos introduced the speakers and asked that they limit
their remarks to the question posed in the event's title,
"Can the computing requirements be met?" "It's not the only
question, and maybe not even the most important one," he
admitted. Dertouzos also credited CPSR/Boston for putting
the event together, and introduced Steve Berlin, CPSR Board
member and one of the event's organizers.

Parnas spoke first, to explain why he thought the
requirements could not be met. He developed the arguments
he first presented in the series of eight memos that
accompanied his resignation letter. These have been widely
circulated in the computing community, and were published
in the September/October 1985 issue of American Scientist.
He said that the ballistic properties of the enemy weapons
and decoys, as well as the organization and timing of the
attack, could not be completely known in advance. Moreover,
the defensive system itself would probably come under
attack, and it would not be possible to say which
components and communication channels would actually be
working when they were needed. He observed that the system
would need to meet stringent real-time deadlines, but said
that it would be possible for an enemy to organize an
attack such that subsystems could be saturated and
deadlines could not be met. He recalled that the control
programs for Safeguard (an anti-missile system developed
and then abandoned in the early 1970s) were designed to
sacrifice some targets when they became saturated. "That
may be okay for hard-point defense of targets like missile
silos, but it is unacceptable for population defense," he
noted. He said that an enemy that understood how the
software was supposed to work could always design an attack
to exploit its weaknesses, and observed that the system
would require the efforts of thousands of programmers for
many years. "I'm worried that one of them will be named
Walker," he said, alluding to the recent espionage case.
That concern aside, he noted that most of his criticisms
did not depend on the size of the system.

Parnas concluded that we could never be confident that the
system would work if it was needed. Therefore, we would
feel compelled to retain our offensive nuclear arsenal. The
Russians, on the other hand, would have to assume that the
system might work. To maintain deterrence in the face of
our offensive weapons and anti-missile shield, they might
proliferate offensive missiles or shift their emphasis to
air-breathing bombers and cruise missiles. "I am really
concerned we could end up in a very disadvantageous
strategic posture," Parnas said. "I am for a strong
defense, but this is a poor defense."

Joseph Weizenbaum spoke next. He explained that there were
three classes of impossible systems. First, systems
prohibited by first principles, such as perpetual motion
machines. "We hardly have any first principles in computer
science, so this isn't relevant here," he said. Next,
systems which are syntactically impossible, like a computer
program that is supposed to satisfy a set of specifications
kept secret from the developers. Finally, systems which are
possible in principle but impossible for practical
purposes. "In principle there is no reason why 5,000
monkeys pounding on 5,000 typewriters could not produce the
Encyclopaedia Britannica," he said, explaining that SDI was
a bit like that. "On the other hand, it is pretty easy to
tell if the monkeys succeeded," he observed.

Weizenbaum also observed that SDI was an attempt to
transform the political problem that exists between the
United States and the Soviet Union into the technical
problem of stopping missiles. He said that even if we could
succeed in that most of our problems would still remain or
would reappear in some similar form. "It is important that
members of the technical community say this," he said.

Charles Seitz of Caltech argued in favor of SDI. He said
that SDI was a research program, not a development program;
the point was to determine whether it was feasible or not.
"They cancelled the Sergeant York, and they would cancel
SDI too if it became clear it wouldn't work," he said,
referring to the computer-controlled anti-aircraft gun
abandoned this summer. "We don't even know what the
requirements are yet," Seitz emphasized. "We will have a
much clearer idea in a year or so." Seitz read the
conclusion from the SDIO/PCSBM interim report, which said
that selecting an appropriate system architecture was the
key to the problem, and was much more important than
selection of a particular software engineering methodology.
The interim report warned that selecting a new and untried
development methodology might put the whole project at
risk. Furthermore, Seitz emphasized that the earlier
Fletcher Panel report, which first made the now-famous
estimate of 10 million lines of SDI code, and suggested the
"consistent distributed database" which evoked such mirth,
was not a requirements document, and the architecture
described there was presented merely for purposes of
example. To show how architecture might help, Seitz drew a
tree diagram of the sort familiar to readers of computer
science texts. Indicating the terminal nodes, Seitz
explained that raw sensor data would be represented down
there, then would be abstracted and passed up to the
intermediate nodes, and further abstracted and passed into
the root, where high-level processing, such as the
allocation of weapons to targets, would occur. He argued
that the code at each level would be relatively
straightforward.
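
Seitz's scheme is easy to caricature in code. The following
toy Python sketch, with invented numbers and thresholds
(nothing here comes from the panel's report), shows raw
returns being reduced at the leaves, merged at an intermediate
level, and allocated at the root:

    # A toy illustration of the hierarchical architecture Seitz
    # sketched: raw sensor returns at the leaves, each level
    # abstracting its inputs and passing the result upward.

    def leaf(returns):
        # A terminal node reduces raw returns to candidate tracks.
        return [r for r in returns if r > 0.5]   # crude thresholding

    def intermediate(children):
        # An intermediate node merges its children's track lists.
        merged = []
        for tracks in children:
            merged.extend(tracks)
        return merged

    def root(tracks, weapons):
        # The root does high-level processing: weapons to targets.
        return list(zip(weapons, sorted(tracks, reverse=True)))

    raw = [[0.9, 0.2, 0.7], [0.1, 0.8], [0.6, 0.3]]  # three sensor feeds
    tracks = intermediate([leaf(r) for r in raw])
    print(root(tracks, ["w1", "w2", "w3"]))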

The last speaker was Daniel Cohen, Chairman of SDIO/PCSBM.
He said he would debunk "Parnas' octet," referring to the
eight memos. For example, he said that Parnas' claim that
we would not know the ballistic properties of decoys in
advance was the same as claiming that F does not equal M x
A (and thus denying the basic law of motion). Cohen also
pointed out that the SDI computing requirements did not
violate any fundamental principles like the halting
problem, and in any case it was not yet clear what the
requirements were. "We may be able to define the
requirements so that we can do it," he said. Cohen cited
other very large computer systems, including the telephone
switching system, the 747 airliner avionics, the Space
Shuttle, and the Apollo moon project. "They all worked well
enough," he noted. Writing in symbolic algebraic notation,
Cohen said Parnas had not proved that "for every SDI, SDI
is not feasible," but only that "there exists an SDI which
is not feasible." Cohen also complained that Parnas left
the panel after only two days, while the other members
worked on their report for 18 days. He cited SDI's policy
and ethical advantages, saying, "they should even like it
in Berkeley," illustrating his point with a slide of a
bumper sticker with the slogan "Kill bombs not people,"
festooned with drawings of flowers. Cohen also mentioned
that he thought that the Department of Defense did a good
job of funding computer science research. Finally, he said,
"There are concerns about false alerts, but I would much
rather have false alerts in SDI than with Mutual Assured
Destruction."

Then followed a series of rebuttals by each speaker. Parnas
said that SDI was not being presented as a research
project, but as a development project. He cited a memo from
Defense Secretary Caspar Weinberger to SDI Director
Abrahamson to support this. Furthermore, he said he had
asked one of the SDIO/PCSBM organizers if it was acceptable
for the panel to conclude SDI could not be done. No, he was
told, it was not acceptable. The person to whom this
statement was attributed leapt from the audience and yelled,
"I never said that!" Parnas cried back, "You did too!"
Dertouzos intervened to stop the shouting match.

Parnas also said that the requirements of SDI were quite
clear: make nuclear ballistic missiles "impotent and
obsolete." He said that if the SDIO decided to take on
something less ambitious "there need to be speeches by
Reagan, Weinberger and Abrahamson to make that absolutely
clear to the people." Weizenbaum also complained that the
SDIO panelists were trying to make a transformation from
what Reagan promised to what they thought they could do.

Parnas also noted that the examples of successful large
systems cited by Cohen did not have to meet the same kinds
of reliability requirements that SDI would have to meet.
"The reason the Shuttle works is that we can turn it off
when it doesn't," he said.

Weizenbaum expressed disappointment with Cohen and Seitz.
"They haven't responded to any of the criticisms which
David or I have made," he said.

After the rebuttals, the speakers responded to questions
from the audience. Someone asked, "How could you quantify
the SDI system reliability so people could decide whether
they thought it was good, bad, or in-between?" Seitz said
that since all components of the SDI system would pass over
the United States for long periods of time, there would be
ample opportunities for quite large-scale realistic tests.

Someone else asked Seitz, "What kind of architectures would
make the problem more tractable? We could use them now!"
Seitz replied that the system could be limited to only
certain kinds of communication.

Someone asked if electromagnetic pulse from nuclear
explosions would harm the SDI electronics. Seitz said that
electromagnetic pulse was only experienced within the
atmosphere, not in space. "However, the large flux of
neutrons might scramble the contents of dynamic memory, so
in that case you might have to reboot the system," he said
to gasps and astonished laughter. "How can someone say
something like that and still claim it might work?"
Weizenbaum asked.

Someone commented on Cohen's statement that SDI violates
no fundamental law of computing: "Yes, but it violates
Murphy's law!"

Star Wars Computing
Part III: Accidental Activation
Greg Nelson and Dave Redell - CPSR/Palo Alto

This is the third and last part of this serialized article
on the computer aspects of the Strategic Defense
Initiative. Copies of the full article, complete with
references, may be obtained from the CPSR office for $1.50
to cover postage and handling.

The Possibility of Unanticipated Activation

It is an old aphorism of radar engineers that probability
of detection is meaningless without probability of false
alarm. Similarly, there is an inherent tradeoff between
ensuring that the SDI computer system will respond to a
missile attack and ensuring that it will not activate
itself in unanticipated circumstances.
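
A toy numerical experiment, with invented signal and noise
figures, makes the aphorism concrete: any threshold low
enough to catch faint attack signatures also fires more
often on noise.

    # A toy illustration of the radar engineers' aphorism:
    # lowering the alarm threshold raises the probability of
    # detection and of false alarm together.  Numbers invented.
    import random
    random.seed(1)

    NOISE, SIGNAL = 1.0, 2.0
    def reading(attack):
        return (SIGNAL if attack else NOISE) + random.gauss(0, 1)

    TRIALS = 10000
    for threshold in (0.5, 1.5, 2.5):
        detect = sum(reading(True) > threshold for _ in range(TRIALS)) / TRIALS
        false = sum(reading(False) > threshold for _ in range(TRIALS)) / TRIALS
        print("threshold %.1f: P(detect)=%.2f  P(false alarm)=%.2f"
              % (threshold, detect, false))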

The existing early warning system depends heavily on human
judgment to identify false alarms and rein in the systems
and procedures that would otherwise unleash our
strategic nuclear arsenal. According to a Senate report on
false alerts, thousands of "missile display conferences"
were held during the years 1979 and 1980 to evaluate
ambiguous data picked up by warning sensors, or false
alarms caused by faulty hardware, unanticipated natural
events, or human error. If a missile display conference
cannot discount a warning, the next step is to convene a
threat assessment conference, which "brings more senior
people into the evaluation, such as the Chairman of the
Joint Chiefs of Staff." Four threat assessment conferences
were held in 1979 and 1980, for the following reasons: On
October 3, 1979, the radar reflection of a rocket body in
low orbit generated a false alarm. On November 9, 1979, a
technician accidentally directed missile attack simulation
data from a test tape into the early warning system. On
March 15, 1980, a Soviet SLBM test launch was misclassified
as threatening. On June 3, 1980, and again on June 6, a
faulty chip in a communications multiplexer began setting
bits randomly in the message data field containing the
count of the incoming missiles detected by early warning
radars. (Note that we can infer from this accident that the
elementary precaution of parity checking was omitted from
the communication protocol. This revelation should have
provoked an outcry; its quiet reception is evidence of a
lack of mature discipline in the computing profession.) In
short, the complexity of the early warning system and its
constantly changing environment produce a stream of false
alarms, most of them minor.
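
For readers unfamiliar with the omitted precaution: a single
parity bit appended to each field detects any single-bit
corruption of the kind the faulty chip produced. A minimal
Python sketch, with an invented field layout:

    # One even-parity bit appended to a missile-count field
    # detects any single flipped bit.  The layout is invented.
    def with_parity(count):
        parity = bin(count).count("1") % 2
        return (count << 1) | parity

    def check(word):
        count, parity = word >> 1, word & 1
        return count, bin(count).count("1") % 2 == parity

    word = with_parity(0)          # "zero missiles detected"
    corrupted = word ^ (1 << 5)    # a faulty chip flips one bit
    print(check(word))             # (0, True)
    print(check(corrupted))        # (16, False) -- error caught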

A similar stream of false alarms can be expected from the
SDI computer system, since it too would be complicated and
deployed in an unpredictable environment. But the time
limit for identifying a false alarm would be drastically
reduced, because the boost phase currently lasts less than
five minutes, and fast burn boosters could reduce it to
less than one minute. Technicians cannot be expected to
identify, in so short a time, the causes of anomalies in
the behavior of a ten million line program running on a
system distributed in space and over much of the earth.
Threat assessment requires the evaluation of the ambiguous,
the uncertain, and the unexpected, and therefore is
unsuited for automation. Automatic systems are reliable
only in circumstances that have been foreseen by their
designers; this fundamental fact holds for expert systems
as well as for other systems.

Unfortunately, the SDI plans involve increasing the level
of automation in threat assessment, both by delegating to
automatic systems functions previously performed by humans,
and by streamlining the functions that are left to humans.
Weapons release will also be automated, although whether
this automation will be confined to defensive weapons
release or extend to offensive nuclear weapons is not
addressed explicitly by the Fletcher Report. The Report
does conclude that "the battle management system must
provide for a high degree of automation to support the
accomplishment of the weapons release function." The 1985
Report to Congress on the SDI states that studies are being
made "on the speed and accuracy with which human test
subjects can assess situations and make decisions.
Performance is being compared as a function of the format
and content of the data displayed, in situations that
realistically represent possible battle scenarios." The
Fletcher Report states that the SDI system would be
programmed so that human operators could "define thresholds
or contingencies within which release of weapons is
delegated to the automated system. Examples are release
nuclear weapons for defense of own resources, release hit-
to-kill weapons if more than ten boosters are in track, and
release all nuclear weapons if more than 100 boosters are
in track."

The dangers of automating threat assessment and weapons
release are increased by the feedback between our C3I
system and its Soviet counterpart. Paul Bracken writes in
Command and Control of Nuclear Forces:

A threatening Soviet military action or alert can be
detected almost immediately by American warning and
intelligence systems and conveyed to force commanders. The
detected action may not have a clear meaning, but because
of its possible consequences protective measures must be
taken against it. The action-reaction process does not
necessarily stop after only two moves, however . . . The
possibility exists that each side's warning and
intelligence systems could interact with the other's in
unusual or complicated ways that are unanticipated, to
produce a mutually reinforcing alert. Unfortunately, this
last possibility is not a totally new phenomenon; it is
precisely what happened in Europe in 1914. What is new is
the technology, and the speed with which it could happen.

It is important to remember that the SDI battle management
system would be incorporated into the overall C3I system,
which also drives the decision process for release and
launch of nuclear missiles. Already, in fact, "the Joint
Chiefs of Staff have begun discussions of ... a nuclear war
plan and command structure that would integrate offensive
nuclear forces with the projected anti-missile shield."
Thus, the highly automated SDI threat assessment and
weapons release functions would become factors in the
feedback cycle described by Bracken. The automatic release
of the ABM weapons in response to a perceived Soviet threat
would in turn be perceived by the Soviets as a provocation.

Such feedback effects could occur between the battle
management software of opposing ABM systems, if both sides
deployed them. The short reaction times and resulting
absence of human damping could lead the systems to initiate
preemptive attacks on each other. This fear is consistent
with experience with other complex systems, which can fail
in surprising ways because of unanticipated ripple effects.
Power grid failures and other examples show that
complicated automated systems can exhibit global behavior
that was neither intended nor anticipated by their
designers.
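
The role of damping can be shown with a toy dynamical model
(gains and levels invented): two alert systems, each raising
its level in proportion to the other's, either settle down or
escalate without bound depending entirely on the damping term.

    # A toy model of mutually reinforcing alerts: each side's
    # level responds to the other's.  Without damping (the human
    # judgment that automation removes), the loop escalates.
    def step(a, b, gain, damping):
        return (a + gain * b - damping * a,
                b + gain * a - damping * b)

    for damping in (0.5, 0.0):
        a = b = 1.0
        for _ in range(10):
            a, b = step(a, b, gain=0.3, damping=damping)
        print("damping %.1f: alert level after 10 rounds: %.1f"
              % (damping, a))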

Detailed scenarios for the outbreak of nuclear war are
inherently speculative, but the danger from increased
automation is clear. An accidental war is not likely to be
caused because an isolated hardware or software failure
activates a weapon on an otherwise normal day. The danger
is that the warning and response system as a whole--
hardware, software, and standing orders that direct human
beings to play their individual roles--is sufficiently
complicated that its behavior in a crisis is unpredictable.
It will interact in unexpected ways with itself and with
its Soviet counterpart. In a crisis, the safety catches on
the triggers would be removed, and procedures would be
activated that had only been simulation-tested before.
Flaws in the system would surface, possibly with disastrous
consequences. This danger exists today, but it would grow
in proportion to the level of automation in the process of
threat assessment and weapons release.

Strategic Uncertainties

When a large project is launched with unrealistic
aspirations, the usual result is for the original goals to
be quietly abandoned as the project is channeled in
unforeseen directions by insurmountable technical barriers.
The final result can be dramatically different from the
original vision. For example, Dijkstra has pointed out that
the original goal of the COBOL project was to make the
professional programmer superfluous, but its result was to
provide the language now used by three out of four
professional programmers.

The SDI has already shown this kind of metamorphosis, as
its original goal of freeing us from the need for
deterrence is replaced by the goal of "enhancing
deterrence." President Reagan underscored his original goal
by suggesting that we would eventually share our ABM
technology with the Soviet Union, as a prelude to
disarmament. One of many objections to this suggestion is
that it is not feasible for the computer software, both
because the Soviet Union would have no protection against
"trojan horses" in the programs, and because the constantly
evolving battle management software would contain extensive
details of our assessment of Soviet strategic plans and our
intended responses to them, as well as potential loopholes
in our defenses that could be discovered and exploited.

The technical uncertainty about the outcome of the SDI is
compounded by the strategic uncertainty created by the
dynamic and adversarial relationship between offensive and
defensive weapons systems. As General Herres, head of C3I
for the Joint Chiefs of Staff, puts it: "Every time you
think you've got one threat whipped, then somebody thinks
up another one. It's a never-ending cycle. I rail at the
guys who think one of these days ... everything will be all
right. It'll never be all right." It is difficult to
predict the effects of weapons projects on this never-
ending cycle, even if they are technically straightforward.
For example, the original justification for MIRVs was to
provide insurance against the possibility of Soviet ABM
systems. The ABM treaty removed this justification, but the
MIRVs remained. Today they are widely regarded as
destabilizing, because one MIRVed missile can threaten many
of an opponent's missiles, and therefore they make a first
strike more tempting.

The SDI could backfire in the same way. The current
rationale is that the ABM system would enhance deterrence,
since its tendency to blunt a first strike would make
reprisal more certain. Unfortunately, for technical reasons
it is more likely to undermine deterrence, since its
tendency to blunt a reprisal could make a first strike more
attractive. During a period of international crisis, each
side's hesitancy to launch its missiles would be gradually
eroded by its mounting fear that the other side might
attack first. In this scenario, the important consideration
is the disadvantage of a second strike relative to a first
strike. Any change that increases the relative advantage of
striking first will tend to make the crisis unstable. The
deployment of an imperfect ABM system would be just such a
change, since the leakage of the system would be due, in
large part, to saturation and overload of the computer
system and other resources. Even with increases in
computation speeds, tasks with computational time or space
requirements that are more than proportional to the problem
size, such as the task of tracking a cloud of objects,
could easily become vulnerable to saturation. Because of
saturation, the leakage of the system is likely to be much
greater during a massive first strike than during a
weakened second strike. This makes the crisis unstable: on
the one hand, the side possessing the ABM system would be
tempted to strike first, since the system would have a
better chance of stopping the opponent's second strike; on
the other hand, the opponent would also be tempted to
strike first, since a first strike would have a better
chance of penetrating the ABM system.
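
The saturation point follows from simple counting. Naive
tracking correlates each new sensor return against every
existing track, so the work grows as the square of the number
of objects; a toy calculation (object counts arbitrary) makes
the scaling visible.

    # Why a massive strike saturates a tracker: naive data
    # association compares every return against every track,
    # so the work grows quadratically.  Counts are arbitrary.
    def correlation_tests(n_objects):
        return n_objects * n_objects

    for n in (100, 1000, 10000):
        print("%6d objects -> %12d correlation tests"
              % (n, correlation_tests(n)))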

In short, it is impossible to predict the final outcome of
so ambitious a project as the SDI, because of both
technical and strategic uncertainties. Its goals will
continue to be redefined as technical barriers are
encountered and strategic theory changes. The gamble that
something good will come out of it should be weighed
against the foreseeable dangers, discussed in the
references, which include the possibility of Soviet
countermeasures and the loss of the ABM treaty. This treaty
prohibits not only the deployment, but also the development
and testing of ABM systems; therefore proceeding with the
SDI effort would violate the treaty long before we could
know what the final outcome of the effort would be.

Conclusions

The computer system required by the Strategic Defense
Initiative is the most complicated integrated computer
system ever proposed. We have surveyed some of the
difficulties in building such a system. The problems of
software and system integration are far more serious than
the problem of achieving the required radiation hardness
and high computation rates. Flaws in any system of its size
are inevitable. An attempt to build the system would not
necessarily succeed, and if it did succeed, uncertainties
would remain about its reliability. It is impossible to
test the system under operational conditions, yet component
testing and system simulation are totally inadequate
substitutes. It would be folly to rely on such a system in
the absence of full-scale operational testing.

Because of the time constraints for attacking boosters, the
proposed system is required to activate itself within
seconds of a warning. This would require increasing the
level of automation in threat assessment and weapons
release. But automatic systems are unsuited for coping with
the ambiguity, uncertainty, and unexpected events that are
likely in a military crisis. Increasing the degree of
automation in the handling of crises would increase the
risk of nuclear war.

As the state of the art improves, it becomes possible to
build reliable systems that are larger and larger. An
optimist might expect that, except for the fundamental
impossibility of operationally testing the system, the
difficulties outlined in this paper might be solved in a
couple of decades. But if history is any guide, what we
will have in a couple of decades is an unreliable and
destabilizing ABM system along the lines of the one
described in the Fletcher Report, together with a grandiose
plan to build another even bigger system, intended to solve
all the old problems and new ones besides.

The fundamental attitude at work here is what McGeorge
Bundy et al. refer to as "technological hubris"--a chronic
tendency to overestimate our technical capabilities and
underestimate the difficulty of the problems we undertake
to solve. The result of this attitude is the recurring
phenomenon of extravagant aspirations that lead so many
large computer projects to failure. The general
responsibility of the computing profession to restrain the
unrealistic aspirations of its clients becomes a vital
obligation in the case of projects, like the Star Wars
system, that introduce automation into the procedures that
would determine the outcome of a nuclear crisis.

It is vital to the computing profession and to society as a
whole that computer professionals act responsibly to
prevent the continuing march of computing science from
being littered with the relics of costly, dangerous and
unnecessary failures.

CPSR at IJCAI
Rodney Hoffman - CPSR/Los Angeles

CPSR/LA was in charge of the CPSR booth at the
International Joint Conference on Artificial Intelligence
(IJCAI) at UCLA August 18-23.

The response was good. During the exhibit hours, there was
almost always someone looking over the material at the
booth, and frequently there was a crowd as several people
at a time tried to pick up literature. Lots of people said
they were glad that we were there. There were a few who
weren't so happy, but they were greatly outnumbered.
Besides the committees behind the scenes, we had a dozen
different people help out in staffing the booth during the
week, about half of them from CPSR chapters other than L.A.
Thanks to all who took part.

From the beginning of our planning for the IJCAI booth, we
hoped to end up with some ideas and material that could be
re-used by CPSR at other conferences.

We had an IBM PC running continuously, showing flying
missiles and exploding cityscapes. There were two programs.
One involved a missile getting through a "Star Wars"
defense and flying over a map of the U.S. before landing on
a city shown with its landmarks: San Francisco, New York,
or Washington. The other program showed, on a map of the
Northeast U.S. coast centered on New York, the extent of
various kinds of damage from a series of nuclear
explosions. All of this was designed primarily to be eye-
catching, rather than dense with information. It was not
interactive. We did use some humorous tag lines, a few
sound effects, and some maps from an outside source. We'll
be happy to duplicate the disk for other chapters at cost.

We had a set of twelve question-and-answer cards labeled
"It's Not Trivial." Compiling the questions, answers, and
references was a collaborative effort involving many CPSR
members and friends. Again, thanks to all. The questions
and answers appear below. We have plenty of these cards
left, and they are available to other CPSR chapters below
cost: 10 cents per set for 100 or more sets (15 cents per
set in smaller quantities). To go along with the cards, we
have three large (20 by 30 inch) cardboard posters with
Questions 1, 6, and 13 on them. We hung the posters as a
backdrop to our booth.

One very successful item was our sticker saying, against a
mushroom cloud background, "It's 11 pm. Do you know what
your expert system just inferred?" We ran out about half an
hour before the Exhibits closed for good. We gave away 2400
of them. The slogan is pretty specific to an AI conference,
but if any chapter wishes to have more made, we have the
original photo sheet (of 12) from which the stickers were
copied and cut.

We had a handout adapted from an idea used by CPSR/
Chicago, comparing the amount of code in the UNIX (c)
operating system with the projected amount needed for "Star
Wars."

Of course, we also used flyers, article reprints, and
newsletters from the CPSR national office.

IT'S NOT TRIVIAL

Q1. How often do attempts to remove program errors in fact
introduce one or more additional errors? A1. The
probability of such an occurrence varies, but estimates
range from 15 to 50 percent. [E.N. Adams, "Optimizing
Preventive Service of Software Products", IBM Journal of
Research and Development, Volume 28(1), January 1984, page
8.]

Q2. True or False: Experience with large control programs
(between 100,000 and 2,000,000 lines) suggests that the
chance of introducing a severe error during the correction
of original errors is large enough that only a small
fraction of the original errors should be corrected. A2.
True. [E.N. Adams, "Optimizing Preventive Service of
Software Products", IBM Journal of Research and
Development, Volume 28(1), January 1984, page 12.]

Q3. What percentage of federal support for academic
computer science research is funded through the Department
of Defense? A3. About 60% in 1984. [Clark Thompson,
"Federal Support of Academic Research in Computer Science",
Computer Science Division, University of California,
Berkeley, 1984.]

Q4. What fraction of the U.S. science budget is devoted to
defense-related R&D in the Reagan 1985/86 budget? A4. 72%.
["Science and the Citizen", Scientific American, 252:6
(June, 1985), p. 64.]

Q5. The Space Shuttle Ground Processing System, with over
one-half million lines of code, is one of the largest real-
time systems ever developed. The stable release version
underwent 2177 hours of simulation testing and then 280
hours of actual use during the third shuttle mission. How
many critical, major, and minor errors were found during
testing? During the mission? A5.
         Critical  Major  Minor
Testing      3       76    128
Mission      1        3     20

[Misra, "Software Reliability Analysis", IBM Systems
Journal, 1983, 22(3).]

Q6. How large might "Star Wars" software be? A6. 6 to 10
million lines of code, or 12 to 20 times the size of the
Space Shuttle Ground Processing System. [Fletcher Report,
Part 5, p. 45.]

In Questions 7 and 8, the World Wide Military Command and
Control System (WWMCCS) refers to the large, computerized
communications system used by civilian and military
authorities to communicate with U.S. military forces in the
field.

Q7. In November 1978, a power failure interrupted
communications between WWMCCS computers in Washington,
D.C., and Florida. When power was restored, the Washington
computer was unable to reconnect to the Florida computer.
Why? A7. No one had anticipated a need for the same
computer (i.e. the one in Washington) to "sign on" twice.
Human operators had to find a way to bypass normal
operating procedures before being able to restore
communications. [William Broad, "Computers and the U.S.
Military Don't Mix", Science, Volume 207, 14 March 1980,
pages 1183-1187.]

Q8. During a 1977 exercise in which WWMCCS was connected to
the command and control systems of several regional
American commands, what was the average success rate in
message transmission? A8. 38%. [Broad, page 1184.]

Q9. How much will the average American household spend in
taxes on the military alone in the coming year? A9. $3,400.
[Guide to Military Budget, SANE.]

Q10. Who said, "Global war has become a Frankenstein to
destroy both sides. The great question is: can global war
now be outlawed from the world? If so . . . it would
produce an economic wave of prosperity that would raise the
world's standard of living beyond anything ever dreamed of
by man"? A10. General Douglas MacArthur.

Q11. True or False? Computer programs prepared
independently from the same specifications will fail
independently. A11. False. In one experiment, 27
independently prepared versions, each with reliability of
more than 99%, were subjected to one million test cases.
There were over 500 instances of two versions failing on
the same test case. Indeed, there were two test cases in
which 8 of the 27 versions failed. [Knight, Leveson, and
St. Jean, "A Large-Scale Experiment in N-Version
Programming", Fifteenth International Symposium on Fault-
Tolerant Computing (FTCS-15), 1985.]
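
The effect is easy to reproduce in simulation. The sketch
below, in Python, uses an assumed failure model (it is not
the Knight-Leveson experiment): versions fail independently
on any given input, but some inputs are simply harder for
everyone, so coincident failures show up anyway.

    import random

    # Toy model: failures are independent GIVEN an input, but "hard"
    # inputs raise everyone's failure probability at once.
    random.seed(0)
    N_VERSIONS, N_CASES = 27, 100000
    coincident = 0
    for _ in range(N_CASES):
        hard = random.random() < 0.01          # 1% of inputs are "hard"
        p_fail = 0.05 if hard else 0.0001      # same difficulty for all
        failures = sum(random.random() < p_fail
                       for _ in range(N_VERSIONS))
        if failures >= 2:
            coincident += 1
    print("cases with two or more simultaneous failures:", coincident)

Each simulated version still passes well over 99% of the
cases, yet hundreds of cases defeat two or more versions at
once.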

Q12. How, in a quintuply redundant computer system, did a
software error cause the first Space Shuttle mission to be
delayed 24 hours only minutes before launch? A12. The error
affected the synchronization initialization among the 5
computers. The failure had a 1-in-67 chance of occurring,
and involved a queue that was not empty when it should have
been, together with the program's modeling of past and
future time. [J.R. Garman, "The Bug Heard 'Round The
World", Software Engineering Notes, Volume 6(5), October
1981, pages 3-10.]

Q13. How did a programming punctuation error lead to the
loss of a Mariner probe to Venus? A13. In a FORTRAN
program, DO 3 I = 1,3 was mistyped as DO 3 I = 1.3. Because
FORTRAN ignores blanks and does not require variables to be
declared, the compiler accepted the statement as an
assignment of 1.3 to a variable named DO3I. [Annals of the
History of Computing, Volume 6(1), 1984, page 61.]
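
The mechanism is easy to demonstrate. The toy classifier
below (a Python sketch of the idea, not a real FORTRAN
parser) strips blanks the way FORTRAN compilers did, and a
single comma-versus-period then decides whether the
statement is a loop or an assignment.

    # Toy sketch, not a real FORTRAN parser: FORTRAN ignored blanks,
    # so "DO 3 I = 1,3" and "DO 3 I = 1.3" differ by one character.
    def classify(stmt):
        s = stmt.replace(" ", "")        # FORTRAN-style blank removal
        if s.startswith("DO") and "," in s:
            return "DO loop: " + s
        lhs, rhs = s.split("=", 1)
        return "assignment: variable %s gets %s" % (lhs, rhs)

    print(classify("DO 3 I = 1,3"))  # DO loop: DO3I=1,3
    print(classify("DO 3 I = 1.3"))  # assignment: DO3I gets 1.3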

Q14. Why did the splashdown of the Gemini V orbiter miss
its landing point by 100 miles? A14. Because its guidance
program ignored the motion of the earth around the sun.
[Joseph Fox, Software and its Development, Prentice Hall,
1982, pages 187-188.]

Q15. What was the first vacuum tube computer used for? A15.
The ENIAC was used at the Army Ordnance Proving Grounds in
Aberdeen, Maryland, to calculate bombing tables. [Ted
Shapin, personal observation, 1950.]

Q16. Who said, "People want peace so much that one of these
days governments had better get out of the way and let them
have it"? A16. Dwight D. Eisenhower.

Q17. Who said, "The unleashed power of the atom has changed
everything save our modes of thinking"? A17. Albert
Einstein.

Q18. True or False: The rising of the moon was once
interpreted by the Ballistic Missile Early Warning System
as a missile attack on the United States. A18. True. In
1960. [J.C. Licklider, "Underestimates and
Overexpectations", in ABM: An Evaluation of the Decision to
Deploy an Anti-Ballistic Missile, Abram Chayes and Jerome
Wiesner (eds.), Harper and Row, 1969, pages 122-123.]

Q19. Why, after years of operation without any network-wide
disturbance, was the ARPANET entirely unusable for a time
in October 1980? A19. Because of a program bug that
manifested only in circumstances rare enough to have
slipped through the cracks in testing. [Eric Rosen,
"Vulnerabilities of Network Control Protocols: An Example",
Software Engineering Notes, Volume 6(1), January 1981,
pages 6-8.]

Q20. How did the Vancouver Stock Exchange index gain
574.081 points while stock prices were unchanged? A20. The
index was calculated to four decimal places but truncated
(not rounded) to three, and it was recomputed with each
trade, some 3000 each day. The result was a loss of about
an index point a day, or 20 points a month. On Friday,
November 25, 1983, the index stood at 524.811. After
consultants from Toronto and California spent three weeks
computing the proper corrections for 22 months of
compounded error, the index opened Monday morning at
1098.892, up 574.081. [Toronto Star, 29 November 1983.]
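
The drift is simple to simulate. In the Python sketch below
the per-trade price movements are invented and the real
index formula was more involved, but truncating after every
recomputation makes the index sag even though the simulated
trades average out to zero.

    import math
    import random

    # Assumed model: per-trade index changes average zero, but each
    # recomputation truncates (never rounds) to three decimal places.
    def truncate3(x):
        return math.floor(x * 1000) / 1000

    random.seed(0)
    index = 1000.0
    for day in range(22):              # about one trading month
        for trade in range(3000):      # ~3000 recomputations a day
            index = truncate3(index + random.uniform(-0.005, 0.005))
    print("index after a month of truncation:", index)

Each truncation discards about half a thousandth of a point
on average; at 3000 trades a day, the loss compounds into
points per day, just as it did in Vancouver.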

Q21. How did a programming error cause the calculated
ability of five nuclear reactors to withstand earthquakes
to be overestimated, forcing the plants to be shut down
temporarily? A21. A program used in their design summed
the variables arithmetically when it should have summed
their absolute values. [Evars Witt, "The Little Computer
and the Big Problem", AP Newswire, 16 March 1979. See also
Peter Neumann, "An Editorial on Software Correctness and
the Social Process", Software Engineering Notes, Volume
4(2), April 1979, page 3.]
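
The distinction matters because stress contributions carry
signs, and a plain sum lets opposing contributions cancel.
A minimal illustration in Python, with made-up numbers:

    # Hypothetical stress contributions from several load cases; the
    # signs indicate direction. The design rule called for the sum of
    # absolute values (the worst case), not the plain arithmetic sum.
    components = [12.0, -9.5, 7.25, -11.0]   # made-up example values

    signed_sum = sum(components)                   # -1.25: looks benign
    worst_case = sum(abs(c) for c in components)   # 39.75: design figure

    print(signed_sum, worst_case)

A program reporting the first figure would credit the plant
with far more earthquake margin than it actually had.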

Q22. The U.S. spy ship Liberty was attacked in Israeli
waters on June 8, 1967. Why was it there in spite of
repeated orders from the U.S. Navy to withdraw? A22. In
what a Congressional committee later called "one of the
most incredible failures of communications in the history
of the Department of Defense," none of the three warnings
sent by three different communications media ever reached
the Liberty. [James Bamford, The Puzzle Palace, Penguin
Books,1983, page 283.]

Q23. AEGIS is a battle management system designed to track
hundreds of airborne objects in a 300 km radius and
allocate weapons sufficient to destroy about 20 targets
within the range of its defensive missiles. In its first
operational test in April 1983, it was presented with a
threat much smaller than its design limit: there were never
more than three targets presented simultaneously. What were
the results? A23. AEGIS failed to shoot down six out of
sixteen targets due to system failures later associated
with faulty software. [Admiral James Watkins, Chief of
Naval Operations, Letter to Congressman Denny Smith,
February 7, 1984. Reprinted in Department of Defense
Authorization for Appropriations for FY 1985, Hearings
before the Senate Committee on Armed Services, page 4337.]

Miscellaneous

1985 Nobel Peace Prize

The 1985 Nobel Peace Prize has been awarded to
International Physicians for the Prevention of Nuclear War
(IPPNW). CPSR has been proud to be associated with IPPNW.
CPSR Board member Alan Borning addressed the Plenary
Session at the organization's 1984 annual meeting in
Helsinki, Finland, and CPSR President Brian Smith delivered
a paper at the 1985 IPPNW annual meeting in Budapest,
Hungary.

IPPNW was started by Drs. Bernard Lown, James Muller and
Herbert Abrams in the United States, and Evgueni Chazov in
the Soviet Union. It is now a federation of national
organizations, represented in the United States by
Physicians for Social Responsibility. It is headquartered
in Boston.


CPSR in the News

John Markoff, "Superweapons: The Defense Boom in Silicon
Valley," San Francisco Examiner, August 11-13, 1985, p. 1.
An excellent three-part series on the defense electronics
industry in Silicon Valley. Quotes CPSR Executive Director
Gary Chapman.

J.E. Ferrell, "Disarming Computers," San Francisco
Examiner, August 19, 1985, p. B-7. An article about CPSR,
featuring quotes from and a picture of Gary Chapman, and
quotes from CPSR/Palo Alto Chapter Secretary Dave Caulkins.

Associated Press, "Pentagon Hopes Computers Will Someday
Fight Battles," August 25, 1985. A wire-service story on
the Strategic Computing Initiative and the International
Joint Conference on Artificial Intelligence. Quotes CPSR
National Chairman Severo Ornstein.

Brian Robinson, "Is DARPA Plan Too Ambitious?", Electronic
Engineering Times, August 19, 1985, pp. 1, 12-13. Fifth and
last article of a series on the Strategic Computing
Initiative. Quotes Gary Chapman.

Kathy O'Toole, "Computers' War-Making Power Feared,"
Oakland Tribune, October 13, 1985, pp. 1-2. An article on
the SDI, Strategic Computing and CPSR. Quotes Gary Chapman
and CPSR/Berkeley member Clark Thompson.

Ken Haldin, "Defense-Related Topics Focus of Network," Los
Angeles Times, October 15, 1985, Part IV, p. 20. An article
about CPSR/LA, with interviews with CPSR/LA Chapter
Secretary David Booth and Chairman Rodney Hoffman, and Gary
Chapman.

David E. Sanger, "A Debate About Star Wars: Can Software Be
Designed?", New York Times, October 23, 1985, pp. 25, 31.
An article about the debate at MIT organized by
CPSR/Boston. Features quotes from and pictures of CPSR
members David L. Parnas and Joseph Weizenbaum.

Leon E. Wynter, "Defense Agency's Research Role Stirs
Debate," Wall Street Journal, October 24, 1985, p. 6. An
article about DARPA and its current funding priorities.
Quotes Gary Chapman and CPSR/Palo Alto member Doug
Engelbart.



OOPS!

To provide some relief from the seriousness of the issues
which CPSR regularly addresses, we offer this column which
contains true stories of computer faux pas. Send us your
favorites and we will select, edit and publish.

San Francisco Chronicle columnist Herb Caen recently
published this item sent in by a reader. The man had
received a computer-generated bill. At the top of the bill,
in typical high-speed dot-matrix print, it said, "Due to a
computer error, the following figures are correct."
