
Spring 1985

Star Wars Computing
Part I: Can the System be Built?
Greg Nelson and Dave Redell • CPSR/Palo Alto

This is the first of a series of articles on the computational aspects of the
Strategic Defense Initiative ("Star Wars") system proposed by the Reagan
Administration.

In March, 1983, President Reagan presented his vision of a future in which
nuclear weapons would be "impotent and obsolete" because of advanced anti-
ballistic missile systems. His administration is pursuing this vision with its
Strategic Defense Initiative, an ABM research program with an initial budget of
$26 billion for the first five years. Two of the five major technological
thrusts of the SDI are inherently computational:

surveillance, acquisition, and tracking
systems analysis and battle management.

The purpose of these articles is to outline the computer system required by the
SDI proposal and to assess the prospects for successfully building such a
system.

Strategic and Technical Context

The proposed Star Wars ABM system comprises a layered defense that would attempt
to intercept enemy ballistic missiles during all phases of their flight, from
boost phase through mid-course to re-entry. A number of interception
technologies have been proposed, including a variety of exotic beam weapons as
well as "kinetic energy" devices, such as rockets and other projectiles.
Similarly, a variety of sensors are proposed, including infrared and radar
tracking devices.

Some of the technical details of the Star Wars system are contained in
classified government reports, such as the full report of President Reagan's
Defensive Technologies Study Team (the so-called Fletcher Commission). Most of
the system's general design, however, is unclassified and has been published in
newspapers and magazines. In addition, a number of excellent books and articles
are available describing and analyzing the proposed weapon and sensor
technologies in considerable detail (for example, The Fallacy of Star Wars by
the Union of Concerned Scientists). We will not repeat that information here,
except where necessary to clarify the computational issues. In general, while
some details of the computational system do depend on the particular weapons
employed, the fundamental system requirements do not. For example, immediate
response is required to attack boosters, and surveillance and tracking of tens
or hundreds of thousands of objects are required in the mid-course and re-entry
phases.

The Computer System

The main source of information about the computational aspects of the Star Wars
system is volume V of the Fletcher Report, "Battle Management and Data
Processing," the only unclassified volume of the report of the Fletcher
Commission, and the only one to deal directly with data processing issues. The
system description given there is very tentative, but the broad outlines are
consistent with the obvious specifications that the computer system would have
to satisfy.

The foundation of the proposed system is a Ground-Air-Space packet-switching
communications network, made reliable by replication of the communications links
and computing nodes. Connected to the network are:

controllers for radar and optical sensors
controllers for directed energy weapons
activators for other intercept weapons
high-speed processors
data storage devices
terminals for human operators.

As the name implies, the components of the network are to be distributed on the
earth, in airplanes, and in satellites.

Probably the most computationally intensive task that must be performed is
"track formation." Phased-array radars are capable of emitting many thousands of
individually-aimed pulses per second; under appropriate computer control, these
could be used to resolve a threat cloud into a set of tracks of individual
objects. Optical or infrared sensors, whose shorter wavelengths offer the
potential for sharper angular resolution, also require high-bandwidth
controllers. Because the computing requirements are so high, both in absolute
terms and in asymptotic growth rate, these algorithms may become saturated by
large numbers of decoys. The Fletcher report estimates that the computing nodes
controlling radars and optical sensors must perform between ten million and one
billion floating point operations per second. The report also calls for research
into realistic algorithms for track formation.
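
To give a feel for why track formation dominates the computing budget, here is
a minimal sketch of the gating-and-association step at its core. It is purely
illustrative: the language is modern, and the data structures, the 5 km gate,
and the nearest-neighbor policy are our own assumptions, far simpler than any
realistic tracking algorithm.

    # Purely illustrative sketch of the gating-and-association core of
    # track formation; all data structures and thresholds are hypothetical.

    import math

    GATE_KM = 5.0   # hypothetical association gate

    def associate(tracks, returns):
        """Compare every new sensor return against every existing track and
        either extend the nearest track within the gate or start a new one."""
        for pos in returns:                      # one pass per sensor return
            best, best_dist = None, GATE_KM
            for track in tracks:                 # compared against every track
                d = math.dist(track["predicted"], pos)
                if d < best_dist:
                    best, best_dist = track, d
            if best is not None:
                best["history"].append(pos)
                best["predicted"] = pos          # crude prediction update
            else:
                tracks.append({"predicted": pos, "history": [pos]})
        return tracks

The inner loop runs once for every pair of return and track, so with tens or
hundreds of thousands of objects (warheads plus decoys) even this simplest
possible scheme implies an enormous number of distance computations per radar
scan, which is consistent with the report's estimate of ten million to one
billion floating point operations per second at each sensor node.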

As individual objects' tracks are identified, they are stored in a global
database that is accessible by boost-phase, mid-phase, and terminal-phase battle
management software. For reliability, several replicas of the database are
maintained, since many node and link failures can be expected during an attack.
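
As a toy illustration of the kind of replication involved (our own sketch, not
a description of any proposed SDI design), a track record might be written to
every reachable replica and counted as committed only when a majority
acknowledge it, so that the loss of individual nodes or links does not lose the
data.

    # Toy sketch of majority-acknowledged replication for the track database.
    # The replica model and failure handling here are hypothetical.

    def replicated_write(replicas, key, record):
        """Write a track record to every replica; count the write as committed
        only if a majority acknowledge, since nodes and links may fail."""
        acks = 0
        for replica in replicas:
            try:
                replica[key] = record     # stands in for a network message
                acks += 1                 # acknowledgment received
            except Exception:
                pass                      # replica unreachable or destroyed
        return acks > len(replicas) // 2  # majority rule

    # Example: three replicas, one (made-up) track record.
    replicas = [{}, {}, {}]
    committed = replicated_write(replicas, "object-1734",
                                 {"position": (1.0, 2.0, 3.0),
                                  "velocity": (7.0, 0.0, 0.0)})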

The battle management software includes a scheduler that allocates defensive
resources (e.g. sensors and weapons) to targets. This allocation must be
coordinated across the various elements of the system, and must be resilient in
the face of damage to the network, the computers, and the resources themselves.
Moreover, the resource allocation must occur in the context of a coherent battle
plan in response to the nature of the attack and other important parameters.
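
The following is a deliberately naive sketch of the allocation problem (our own
illustration; the greedy policy, the data, and the names are hypothetical):
each available weapon is matched to the highest-value target it can still
engage, and anything left unassigned leaks through to the next defensive layer.
A real battle manager would have to do this across many distributed nodes,
repeatedly, as weapons, sensors, and communication links are lost.

    # Deliberately naive sketch of weapon-to-target allocation.
    # The greedy policy and all names and values are hypothetical.

    def allocate(weapons, targets, can_engage):
        """Assign each available weapon to the highest-value target it can
        engage; targets left over pass through to the next defensive layer."""
        assignments = {}
        remaining = sorted(targets, key=lambda t: t["value"], reverse=True)
        for weapon in weapons:
            if not weapon["available"]:       # destroyed or out of position
                continue
            for target in remaining:
                if can_engage(weapon, target):
                    assignments[weapon["id"]] = target["id"]
                    remaining.remove(target)
                    break
        return assignments, remaining

    # Example with made-up data: one interceptor, two candidate targets.
    weapons = [{"id": "kkv-1", "available": True}]
    targets = [{"id": "rv-7", "value": 10}, {"id": "decoy-3", "value": 1}]
    plan, leakers = allocate(weapons, targets, lambda w, t: True)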

The programs for the system and the orders for the human operators would have to
insure that the appropriate battle plan be activated within tens of seconds of a
warning, since the crucial boost-phase, during which missiles are vulnerable to
attack, is so brief. Because of these time constraints, several battle plans
would be programmed in advance, so that in the event of an attack (or a crisis
that threatens to lead to an attack) human operators would simply select one or
more battle plans and specify their important parameters, leaving their
activation and execution to the computer system. Thus, the highest level of the
battle management software would exercise a decision-making function of the
utmost importance, constituting the most important decision by far ever
delegated to an AI program.

There are several ways in which the SDI software project might fail: (1) The
attempt to build and deploy the system might fail outright, because of its
complexity. (2) The system might be successfully deployed, but fail under
attack. (3) The system might be successfully deployed, and then activate itself
under unanticipated circumstances.

We examine the first of these below. The possible failure modes of a deployed
system will be discussed in the next article in this series.

Outright Failure to Develop and Deploy the System

Assuming that computation speeds and radiation hardness continue to improve at
current rates, the individual hardware components of the SDI computer system
are likely to become feasible in the next decade or so. On the other
hand, the problems of designing the entire system, writing the software, and
integrating the various components are much less likely to be satisfactorily
solved. Programming large software systems is more an art than a science, and
progress in this art has been disappointingly slow.

One potential source of software complexity lies with the processors
themselves. In principle, the complexity of high-performance fault-tolerant
processors should be confined to the lowest levels of the system; in practice,
fault-tolerant systems have been harder to program than conventional ones. In
this way, the very measures designed to insure against failures in the hardware
can induce a new class of failures in the software. For example, one of the
first flights of the space shuttle was stopped at the last moment by a software
error that caused a synchronization problem between the shuttle's redundant
processors.

The major source of complexity in the SDI system, however, will be in the
construction of the complete distributed software system, combining as it does
not only fault tolerance but also networking, distributed databases, real-time
control and signal processing, resource allocation, artificial intelligence,
and complex human interface issues.

There is little doubt that it is possible to build the lowest level of the
proposed Ground-Air-Space network, allowing simple data flow from point to point
in the network. It is much harder to design and implement protocols that
maintain communication in the face of failures in individual nodes and
communications links, particularly when the failure rate is high (as it would be
during a nuclear war). Even more complexity stems from the requirement that
separate nodes share access to a replicated database and maintain it in a
consistent fashion. Moreover, the requirement that the system satisfy real-time
constraints introduces additional problems as well as exacerbating existing
ones.

The difficulties described above represent challenging research problems even
when restricted to a local area network with homogeneous nodes clustered within
a few kilometers of one another. The SDI system requires solutions for a
heterogeneous network spanning thousands of kilometers. The Fletcher report
issued a "word of caution" because the diameter of the network in light-
milliseconds would be large enough that speed-of-light limitations would create
inherent latency delays in accessing shared data that were large compared with
the reciprocal bandwidth of the servo control systems. In practice, this would
be only the tip of a much larger iceberg, since much longer delays would
certainly be introduced by the network's packet-switching system and the
processing in the higher levels of the software.
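
A back-of-the-envelope calculation (our own, with illustrative path lengths and
an assumed 100 Hz servo rate, none of which appear in the report) shows the
scale of the problem even before any software overhead is counted.

    # Back-of-the-envelope speed-of-light delays; the path lengths and the
    # 100 Hz servo rate are illustrative assumptions, not report figures.

    SPEED_OF_LIGHT_KM_PER_MS = 300.0

    paths_km = {
        "low orbit to a ground station": 2000,
        "across a continental ground network": 5000,
        "relay through a geosynchronous satellite": 36000,
    }

    for name, km in paths_km.items():
        one_way = km / SPEED_OF_LIGHT_KM_PER_MS
        print(f"{name}: about {one_way:.0f} ms one way, "
              f"{2 * one_way:.0f} ms round trip")

    # A pointing servo updated at 100 Hz has a 10 ms control period, so even
    # the irreducible light-travel time to a remote copy of shared data can
    # amount to several control periods, and packet switching and higher-level
    # software would add much more.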

In addition to the fundamental design problems outlined above, the builders of a
distributed system meeting the Fletcher Report's specifications would face a
number of very difficult logistical problems in the delivery, deployment,
operation, and maintenance of such a large and complex software system. Work on
a large computer system does not end with initial installation, but continues
over the entire useful life of the software. This would be particularly true of
the SDI system, in which software changes would be needed not only to fix
problems, but to enhance the system in the face of new threats and
countermeasures by an adversary. The distribution of software updates, a
difficult problem in any case, is rendered more difficult when the target
computers are part of a network of orbiting satellites. For example, the typical
current practice for space-based computers is to send a small machine language
patch to repair each flaw. This would clearly be a grossly inadequate approach
for such a large system upon which so much would depend.

A useful sense of perspective is provided by comparing the SDI plans with past
and present computer systems. Two systems that are particularly relevant are the
Safeguard ABM computer system and the World-Wide Military Command and Control
System (WWMCCS - pronounced "Wimmix").

The Safeguard system controlled the radars, missiles, and human command
interface for one re-entry phase ABM installation. It contained about two
million lines of code running on ten 1.5 MIPS (million instructions per second)
processors. The real-time portion of the system contained three-quarters of a
million instructions. It was characterized in the Bell System Technical Journal
as one of the most complicated real time systems ever built, although it was
much smaller and simpler than the proposed SDI system.

WWMCCS is the Pentagon's world-wide communication system built at a cost of
between ten and fifteen billion dollars and involving more than ten million
lines of code. It is thus about the same size as the proposed SDI system, but is
much less tightly integrated and does not encounter the kinds of real-time
constraints that are inherent in the ABM problem. The system has not been
reliable: according to Atlantic Monthly editor James Fallows, "Once or twice a
month, the newspapers carry a story about the WWMCCS computers breaking down.
When the system was thoroughly tested in 1977, attempts to send messages ended
in 'abnormal terminations' (i.e. breakdowns) 62 percent of the time."

These two comparisons suggest that the Fletcher report is not unrealistic in
estimating the size of the software for the SDI system as ten million lines.
They also suggest that optimism about the prospects for developing and deploying
the system is unjustified.

It is a common tendency to expect rapid progress in software technology, simply
because there are no physical constraints that need to be overcome. "All" that
is required is mental mastery of complexity. Such optimism is misplaced. There
are many areas in software engineering where the state of the art is
depressingly similar to the state twenty years ago. Despite gradual improvement
in our understanding of large software systems, it seems likely that any SDI
computer system developed within the next two decades will display the same
kinds of shortcomings that are rampant in most large software systems deployed
today.

In summary, it is possible that an attempt to build the computer system for the
SDI would fail outright. It strains the state of the art in many areas, and
therefore could easily get out of control, as so many large software projects
do. But it is equally possible that a stubborn attempt would lead to the
deployment of an unreliable system. In this context it is worth quoting C. A. R.
Hoare's words about the PL/1 programming language definition project:

"At first I hoped that such a technically unsound project would collapse, but I
soon realized it was doomed to success. Almost anything in software can be
implemented, sold, and even used given enough determination. There is nothing a
mere scientist can say that will stand against the flood of a hundred million
dollars. But there is one thing that cannot be purchased in this way -- and that
is reliability."

In the next article in this series, we consider the two dangers presented by an
unreliable Star Wars system: failure under attack, and unanticipated activation.




The Responsible Use of Computers:
Where Do We Draw the Line?
Christiane Floyd • Chairperson
Forum Informatiker für Frieden und gesellschaftliche Verantwortung (FIFF)

This article is the first half of a paper by one of the founders of FIFF, our
West German counterpart organization, which was founded in June of 1984.

Introduction

At the Forum's founding meeting, Joseph Weizenbaum sketched out the limits of
responsible computer use in the following terms: one should not do things with
computers that one could not accept the responsibility for doing without
computers. I should like to take that as our motto, and yet at the same time I
feel it to be insufficient. As I see it, we here in the Forum must construct a
detailed and technically sound set of arguments which will enable us to assess
individual concrete developments according to our own values and professional
judgments.

I don't mean by this a set of rules permitting a sort of grading system
(responsible/irresponsible); the assessment will always be a dynamic social
process allowing, in the case of conflicts, scope for other viable alternatives.
To ensure that this scope is utilized in accordance with our own particular
views and values, it is up to us to provide a contribution to the discussion
setting out these views.

In what follows I shall attempt to outline a number of points relating to the
limits of responsible computer use. I shall distinguish here between several
different categories:

• I see the technical limits of responsible computer use - i.e., from the point
of view of the profession itself - where computers are utilized as a result of
misguided trust in the capabilities of computer programs. My aim here is to call
into question the exaggerated claims of our particular field. I don't wish to
merely generalize about technical errors which might be avoided with the help of
an improved technology or methodology, but rather to draw attention to the
fundamental limits of this field as suggested by our professional experience as
computer scientists.

• I see the human limits of responsible computer use - i.e., from the point of
view of human interaction - where computers are used as a result of the misguided
equating of people with machines. The areas in question here are those in which
human interaction is hindered as a result of people's being replaced by computer
programs; in which human experience dwindles, human care ceases to exist and
social networks are destroyed. We as computer scientists must reject any attempt
to raise the computer to the level of man's "partner."

• Finally, the ethical and political limits of computer application must be
drawn where (to paraphrase Weizenbaum) the attempt is made to do things with
computers that ought not be done without them. We cannot shirk responsibility
for things which happen as a result of our computer programs. The use of
computers has made it possible to split our sense of responsibility to a greater
extent than has previously been the case with other complex technological
applications. Between the preplanning (programming) of events and courses of
action and their occurrence, the computer has now been interpolated as a non-
human element. Nevertheless, the responsibility for what computers "do" still
rests with the people who produce and program them, who plan and govern their
application; which means that we bear this responsibility too.

In the following pages, I shall be using general principles to examine these
limits in greater detail. In our work in the Forum they can be used primarily to
throw critical light on questionable developments in the military sector. But
our responsibility does not end there: we must help to lay down the limits of
responsible computer use in all areas of society. My main concern, though, is
that we use this discussion in the Forum to encourage a process of rethinking
among computer scientists, each of us in our own particular sphere of work and
in accordance with our own responsibilities, so as to enable us, through our
combined efforts, to lay the professional foundations for a computer science
more closely geared to human values.

1. Technical Limits of Responsible Computer Use

Although, in my view, this question requires highly detailed argumentation, the
present paper can represent no more than a tentative attempt at coming to terms
with the issue. I hold that the limits described below are inherent in the
nature of our work; in other words, they may be shifted by technological and
methodological improvements, but never eliminated altogether.

- All models of reality anchored in a program are, by their very nature,
reductionistic. This means that in specifying or implementing programs we are
always forced to model reality in all its diversity using a finite number of
selected objects (or classes of objects), their selected but henceforth fixed
characteristics, as well as the operations permitted on these objects. In so
doing, we always create an artificial, closed microcosm which alone is decisive
for the functioning of the program. In reality, however, we must always be
prepared for unforeseen events or objects with new relevant characteristics; the
limit imposed by this process of modelling with the help of programs is
intrinsically insurmountable.

- The intelligence anchored in a computer differs fundamentally from human
intelligence in the following respects: it always relies on a model which is
based on fixed rules (in some cases with additional rules for expanding these
dynamically). Human intelligence, on the other hand, incorporates as essential
elements an appreciation of the situation in hand, together with past
experiences relating to it, and experienced needs and values. Artificial
intelligence is not rooted in the senses and the body, and therefore lacks the
association with feeling and acting - for us ever present and taken for granted
- an association without which reality is devoid of human significance.

Decisions on the basis of program results should therefore not be taken by other
programs unless it is ensured that human beings with sufficient competence for
taking these decisions are able to intervene as the responsible parties in the
decision-making process. Only human beings can assess results and fit them
meaningfully into the ever-changing interpretational context which itself
emerges as a result of human communication.

- We shall never succeed in totally eliminating program errors because to err is
human. But erring is not only a human weakness, it is also a human strength,
being as it is so closely bound up with the capacity for learning and finding
unconventional solutions. Any given program can be no more than the result of
limited human insight at a given point in time. We all know that errors in
programs may have a wide variety of causes: insufficient understanding of the
problem in hand, inadequate communication, equivocal agreements, ignorance of
the actual application context of programs, insufficient knowledge about the
available resources, errors in reasoning, carelessness, lack of or difference in
motivation. We shall not be able to eliminate a single one of these by
improvements in technology or methodology. Program errors, then, lay down one
intrinsic limit of computer application: in other words, computers can only be
used responsibly where the possibility of locating and eliminating program
errors exists.

- I see one intrinsic limit of responsible computer application in the lack of
transparency of large program systems - not that I claim to be able to specify
the maximum acceptable size of such systems - and the consequent impossibility
of tracing the causes of located errors and reliably eliminating them in time.
I do not consider this limit to be surmountable by improving specification,
programming, or documentation methods; instead, it requires a conscious
confinement to small, loosely interconnected program systems, separated by
human beings acting in an interpretative and evaluative capacity, who are
familiar with the functioning of the sufficiently small programs and are in a
position to alter these, should an error occur, or to compensate for errors by
manual circumvention of the program.

- It is imperative that programs are made to fit the actual demands of the
application context by trial operation under real conditions and subsequent
adaptation. I do not simply mean by this the elimination of program errors in
accordance with a given specification, but rather the interplay between the
program and the technical or social system surrounding it. Particularly in the
case of the increasingly common embedded systems, the desired functionality of
programs cannot be established unequivocally and context-free; instead, the
proper functioning of the program can only be understood in the context of the
surrounding system. Where a trial in this context under real conditions is not
feasible, it is not possible to assess the adequacy of the programs either; it
is therefore irresponsible to make human beings dependent on the functioning of
such programs when important issues are at stake.

- Finally, I see one intrinsic limit of responsible computer application in the
susceptibility to error and failure inherent in computer operation. No matter
what organizational measures we may take, we shall not be able to eliminate the
possibility of machines operating defectively at some time or other; of
programs being erroneously fed the wrong data; or even of correctly functioning
programs, quite adequate for their application context, producing incorrect
results owing to other unforeseeable events. In the last analysis, only human
beings are capable, through assessment of the overall situation on the basis of
numerous individual and differing perceptions, of distinguishing between real
events and those merely simulated with the help of data and without any real
background.

The second half of this article will appear in our Summer 1985 Newsletter.

An Earnest Proposal
Lewis Thomas

The following is an excerpt from an essay that first appeared in the New England
Journal of Medicine and was later reprinted in The Lives of a Cell, Bantam 1974.

There was a quarter-page advertisement in the London Observer for a computer
service that will enmesh your name in an electronic network of fifty thousand
other names, sort out your tastes, preferences, habits, and deepest desires and
match them up with opposite numbers, and retrieve for you, within a matter of
seconds, and for a very small fee, friends. "Already," it says, "it [the
computer] has given very real happiness and lasting relationships to thousands
of people, and it can do the same for you!"

Without paying a fee, or filling out a questionnaire, all of us are being linked
in similar circuits, for other reasons, by credit bureaus, the census, the tax
people, the local police station, or the Army. Sooner or later, if it keeps on,
the various networks will begin to touch, fuse, and then, in their coalescence,
they will start sorting and retrieving each other, and we will all become bits
of information on an enormous grid.

I do not worry much about the computers that are wired to help me find a friend
among fifty thousand. If errors are made, I can always beg off with a headache.
But what of the vaster machines that will be giving instructions to cities, to
nations? If they are programmed to regulate human behavior according to today's
view of nature, we are surely in for apocalypse.

The men who run the affairs of nations today are, by and large, our practical
men. They have been taught that the world is an arrangement of adversary
systems, that force is what counts, aggression is what drives us at the core,
only the fittest can survive, and only might can make more might. Thus, it is in
observance of nature's law that we have planted, like perennial tubers, the
numberless nameless missiles in the soil of Russia and China and our Midwestern
farmlands with more to come, poised to fly out at a nanosecond's notice, and
meticulously engineered to ignite, in the centers of all our cities, artificial
suns. If we let fly enough of them at once, we can even burn out the one-celled
green creatures in the sea, and thus turn off the oxygen.

Before such things are done, one hopes that the computers will contain every
least bit of relevant information about the way of the world. I should think we
might assume this, in fairness to all. Even the nuclear realists, busy as their
minds must be with calculations of acceptable levels of megadeath, would not
want to overlook anything. They should be willing to wait, for a while anyway.

I have an earnest proposal to make. I suggest that we defer further action until
we have acquired a really complete set of information concerning at least one
living thing. Then, at least, we shall be able to claim that we know what we are
doing. The delay might take a decade; let us say a decade. We and the other
nations might set it as an objective of international, collaborative science to
achieve a complete understanding of a single form of life. When this is done,
and the information programmed into all our computers, I for one would be
willing to take my chances.

As to the subject, I propose a simple one, easily solved within ten years. It is
the protozoan Myxotricha paradoxa which inhabits the inner reaches of the
digestive tract of Australian termites.

It is not as though we would be starting from scratch. We have a fair amount of
information about this creature already - not enough to understand him, of
course, but enough to inform us that he means something, perhaps a great deal.
At first glance, he appears to be an ordinary, motile protozoan, remarkable
chiefly for the speed and directness with which he swims from place to place,
engulfing fragments of wood finely chewed by his termite host. In the termite
ecosystem, an arrangement of Byzantine complexity, he stands at the epicenter.
Without him, the wood, however finely chewed, would never get digested; he
supplies the enzymes that break down cellulose to edible carbohydrate, leaving
only the nondegradable lignin, which the termite then excretes in geometrically
tidy pellets and uses as building blocks for the erection of arches and vaults
in the termite nest. Without him there would be no termites, no farms of the
fungi that are cultivated by termites and will grow nowhere else, and no
conversion of dead trees to loam....

If it is in the nature of living things to pool resources, to fuse when
possible, we would have a new way of accounting for the progressive enrichment
and complexity of form in living things.

I take it on faith that computers, although lacking souls, are possessed of a
kind of intelligence. At the end of the decade, therefore, I am willing to
predict that the feeding in of all the information then available will result,
after a few seconds of whirring, in something like the following message, neatly
and speedily printed out: "Request more data. How are spirochetes attached? Do
not fire."

From the Secretary's Desk
Laura Gould - CPSR National Secretary

CPSR's General Proposal for 1985-6, which includes detailed descriptions of
three major projects, has been submitted to nine funders. Copies are available
from CPSR for $1.00 each.

We are pleased to welcome CPSR/Chicago, CPSR/Portland, and a new Italian
organization, Informatici per la Responsabilità Sociale. See the chapter
information page for particulars.

Our new Office Manager, Mary Karen Dahl, has recently completed her Ph.D. at
Stanford in Dramatic Literature, with a thesis entitled The Ethics of Violence.
We are fortunate to have found someone with her interests, intelligence, and
vivacity.

CPSR's Executive Director Gary Chapman is still glowing from his visit to the
East Coast chapters. In May he will visit CPSR/Seattle and CPSR/Portland. He
will give a talk at the University of Washington entitled "AI and the Conduct of
War"; at Reed College he will talk about Star Wars.

In March, there was considerable activity at U.C. Santa Cruz, thanks to the
Silicon Valley Research Group there. A two-day conference on Strategic
Computing: History, Politics, Epistemology included talks by CPSR board members
Lucy Suchman and Terry Winograd. A lecture series entitled The Computerization
of Society included talks by Severo Ornstein and Terry Winograd.

In April, Lucy Suchman moderated a panel at the CHI '85 (human factors in
computing systems) conference in San Francisco entitled Social and Cultural
Impact of Technology.

Brian Smith, CPSR's President, met in Toronto with a group of computer
scientists concerned about Canada's proposed involvement in Star Wars.
Signatures are being gathered of Canadian scientists refusing to work on Star
Wars. The Canadian government must soon decide whether to participate. (As of
late April, Norway is the only NATO country to refuse.)

CPSR in the News

January 16 - The Los Angeles Times ran a story by staff writer Bob Sipchen
entitled "Scientists see Stereotypes as Dangerous"; a similar story by him on
the same date entitled "Computer Scientist Thinks Society Isn't Figured into
Equation" ran in the Orange County edition of the Los Angeles Times.

April - High Technology magazine ran a story entitled "Assessing the Strategic
Computing Initiative," which included pictures of, and quotes from, Robert
Cooper and Lynn Conway of DARPA, Mike Dertouzos of MIT, Terry Winograd and
Severo Ornstein from CPSR.

April 20 - Severo Ornstein was interviewed by JoAnne Garrett of Wisconsin public
television station WHA on the topic of computers and nuclear war. The interview
will be broadcast throughout Wisconsin.

Cliff Johnson's court case against Caspar Weinberger, arguing that launch-on-
warning is unconstitutional [see Fall 1984 Newsletter], has received so much
publicity that we haven't space to itemize it here. There have been stories in
the New York Tribune, the London Guardian, several Stanford papers, an Italian
magazine called PIN, and much coverage in West Germany.

CPSR's assessment of DARPA's Strategic Computing Initiative also continues to be
reprinted in various journals in Australia and New Zealand.

Computer Unreliability and Nuclear War
CPSR/Madison

This article is the fourth and final section of a paper prepared by CPSR/
Madison entitled "Computer Unreliability and Nuclear War." This material was
originally prepared for a workshop at a CPSR symposium held in Madison,
Wisconsin, in October 1983. For a complete copy, please send $1.00 to the CPSR
national office.

4. Implications
Larry Travis, James Goodman

For many years, our stated national policy for deterrence was known as MAD --
Mutual Assured Destruction -- the promise that an attack by the Soviet Union
would be answered by an all-out retaliatory strike by the United States. This
policy was modified under the Carter Administration to include the possibility
of a "limited nuclear war," and the idea was subsequently endorsed by the
National Republican Platform in 1980. In this section, we argue

(1) that limited nuclear war is not feasible, and

(2) that technical development and policy decisions are rapidly increasing the
likelihood of an accidental nuclear war.

It is tempting to pursue a policy of Mutual Assured Destruction for the
indefinite future. After all, there have been no nuclear wars since MAD was
adopted. By the same line of reasoning, however, a resident of New York City
might decide that Central Park was safe at night after walking through it a
couple of times without getting mugged. We're not sure we'd try it. It is not at
all clear that MAD is actually responsible for the nuclear truce of the last 30
years. We may have just been lucky. Either way, there is considerable evidence
that MAD will not prove to be a viable policy for the next 30 years.

Two important trends are undermining MAD. The first is an increase in the
effectiveness of modern weapons systems. New and more powerful armaments,
coupled with delivery systems of remarkable precision, have made a first strike
more effective than ever before. The theory of deterrence says that threatened
retaliation can prevent such an attack. But to be effective, either the
retaliation must be initiated before the first warheads have been detonated, or
else the missiles and the communication systems must retain sufficient capacity
to coordinate a massive attack afterwards. The latter option, "survivability,"
is becoming less and less likely. Powerful and accurate weapons can destroy all
but the most thoroughly protected command centers. Moreover, as described in
Section 2 [published in the Fall 1984 Newsletter], electromagnetic pulse is
almost certain to cripple most of what remains. The first option, known as
"launch-on-warning," or "launch-under-attack," is emerging more and more as the
strategy of choice.

Unfortunately, launch-on-warning is complicated by a second important trend: a
decrease in decision-making time. In the '50s, when nuclear weapons were
delivered by bombers, it would have taken ten hours or more to deliver a weapon
across the vast distance required. Either side could have detected such an
attack by radar and had hours to establish readiness, possibly open
communications with the other side, and select an appropriate response before
the first bombs exploded. With the development of ICBMs in the late '50s, that
warning time was reduced to about 25 to 30 minutes. With the installation of
Pershing II missiles in Europe, the time available for the Soviets to respond
to a perceived attack on their capital has been reduced to somewhere in the
range of 6 to 12 minutes. An EMP burst can be
triggered with even less warning, since the warheads need not be directly on
target, and need not fall to earth. Recent substantial improvements in the
Soviet submarine fleet have put the United States in much the same situation as
its adversary.

What are the consequences of these trends? Certainly the effectiveness of a
first strike argues against the possibility of a "limited" nuclear war. The arms
race in recent years has focused not so much on numbers of weapons or the size
of warheads, but rather on the precision with which they can be delivered.
Leaders can be annihilated quickly, or at least their leadership role can be
destroyed by the lack of communication links. They need to be able to assume
that their policies will be carried out in their absence. Under these
conditions, the decision to launch missiles clearly resides with a very large
number of people. Almost certainly, every submarine carrying nuclear weapons has
on board a group of people who could collectively make such a decision. This
delegation of authority alone would seem to preclude any notion of a "limited
response." Combined with the fact that C31 (command, control, communications,
and intelligence) can be incapacitated easily and quickly, it makes a
"protracted" war impossible.

An even more serious consequence of recent trends is increased reliance on
inherently unreliable computer systems. The effectiveness of a first strike
appears to make launch-on-warning more or less essential. The lack of decision-
making time makes it more or less impossible, at least for human beings. Six or
twelve minutes is hardly enough time to confirm an attack, let alone alert
leaders and allow them to make intelligent decisions. The incredibly short time
from first warning until the first incoming weapons might explode makes it
highly unlikely that a launch-on-warning policy could include human intervention
in the decision-making process. Certainly not humans at the highest levels of
command. The temptation to place much or all of the decision-making authority in
the " hands" of automatic computer systems is all but irresistible. Computers
have none of the time limitations of human beings. They also have no common
sense.

We hope that the information in Section 2 [see the Fall 1984 Newsletter] has
made it clear that computers cannot be trusted with the future of the planet.
The theory of deterrence depends totally and unconditionally on the rationality
and good judgment of the superpowers. That rationality is suspect enough with
humans in command. With computers in command, the theory falls apart.

Launch-on-warning, presumably under computer control, has been advocated by many
people in policy-making positions. Harold Brown, Secretary of Defense during the
Carter administration, is one example. Warnings he made to the Soviet Union
during that administration suggested that the United States had actually adopted
such a strategy. That suggestion appears to have been inaccurate. Many defense
experts believe, however, that the Soviets have indeed adopted launch-on-
warning, certainly since the Pershing II missiles were deployed, if not before.
Whether or not the Russians currently employ such a strategy, there are a number
of objective pressures which appear to be pushing both sides toward a policy of
launch-on-warning.

(1) The increased accuracy of delivery systems makes a first strike potentially
more effective, particularly against land-based missiles, and makes rapid
response imperative. This is necessary because the land-based missiles may be
destroyed before they are launched. The Soviets have greater concern in this
regard than does the United States, because the Soviets place greater reliance
on land-based missiles. However, both sides feel compelled to launch a
retaliatory strike before the incoming missiles explode because the attack will
also severely cripple communications, making it difficult or impossible to
coordinate retaliation.

(2) The decrease in decision-making time means that if weapons are to be
launched before they are destroyed, they must be launched almost immediately
upon detection of an attack.

(3) The increased complexity of weapons and the greater number of resulting
options means that much more time is necessary for human decisions. More
strategies must be considered, and there is a greater likelihood of human
miscalculation.

(4) The President's proposal for "Star Wars" missile defense will require
decisions to be made with only seconds of warning, not minutes. Completely
autonomous computer control is a foregone conclusion. The installation of such a
defense system would amount to an adoption of launch-on-warning, since its use
would be an act of war.

Whenever a new application of computer technology is proposed, and a political
decision must be made about its development, it is necessary to compare its
benefits to its costs. In order to evaluate a cost which may or may not happen,
the normal technique is to multiply the probability of its occurrence by its
cost if it does occur. But what cost can we assign to the destruction of human
civilization, or even the complete loss of the human species? If this cost is
infinite, then any probability at all of such an event makes the cost of
deploying such a system also infinite. Since 100% reliability is unattainable,
it is only reasonable to limit the use of computers to applications in which we
can tolerate an occasional mistake.
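
In expected-value terms the argument is simple; the small sketch below is our
own numerical illustration, and the probability in it is arbitrary, chosen only
to show the structure of the reasoning.

    # Illustration of the expected-cost argument; the failure probability is
    # an arbitrary number used only to show the structure of the reasoning.

    p_catastrophic_failure = 1e-6          # even a "one in a million" chance
    cost_of_failure = float("inf")         # loss of civilization, unbounded

    expected_cost = p_catastrophic_failure * cost_of_failure
    print(expected_cost)                   # inf: no finite benefit outweighs it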

References

R. Thaxton, "Nuclear War by Computer Chip," The Progressive, August 1980.

R. Thaxton, "The Logic of Nuclear Escalation," The Progressive, February 1982.

J. Steinbruner, "Launch under Attack," Scientific American, January 1984.

M. Bundy, "Ending War Before It Starts," Review of two books in the New York
Times Book Review, 9 October 1983.

Miscellaneous

CPSR/Boston's Proposal Funded

Several months ago CPSR/Boston wrote a proposal for the production of a
slide/tape show on CPSR issues. The CPSR national office sought funding for this
well-written proposal by submitting it to several foundations. The CS Fund, a
California-based foundation, responded to the proposal with enthusiasm by
granting us the entire $10,000 requested for the project. Preliminary work has
already begun on the script for the presentation, which is intended for lay
audiences. When the production is completed, a copy will be provided to each
chapter. Such a professional-quality slide/tape show will be an invaluable
asset in transmitting CPSR's messages to the public.


Letters to the Editor

Military Tax

Dear Editor:

As important as the computer may be to the military, money is even more
important. Money spent on "Star Wars" and threatened nuclear holocausts cannot
be spent on finding nonmilitary, non-violent solutions to international
conflicts and for life-affirming domestic programs. The Conscience & Military
Tax Campaign - U.S. has, since 1979, been seeking legislation from Congress to
allow conscientious objectors to military taxation to pay their full share of
federal taxes toward peace purposes. If any readers are supportive of such a
plan or need information on how they can begin to refuse and redirect military
taxes, they may contact the CMTC at 44 Bellhaven Road, Bellport, NY 11713 or
call (516) 286-8825.

Towards Peace,
Ed Pearson


Recommended Reading

Daniel Ford, "The Button," The New Yorker, April 1, 1985 (p. 43) and April 8,
1985 (p. 49). Excellent two-part article. The first discusses the present U.S.
command and control system; the second is a penetrating analysis of first
strike.

George Ball, "The War for Star Wars," New York Review of Books, April 11, 1985,
pp. 38-44.

"America's High-Tech Crisis," Business Week, March 11, 1985, pp. 56-67.

Byte Magazine, April 1985, special section on Artificial Intelligence. Also note
letters column on p. 436 about "Computers vs. Human Responsibility."

Dwight B. Davis, "Assessing the Strategic Computing Initiative," High
Technology, April 1985, pp. 41-49. (Includes pictures of and quotes from CPSR
National Chairman Severo Ornstein and Executive Committee member Terry
Winograd.)

New York Times, series on "Star Wars," March 3-8, daily.

Hans Jonas, The Imperative of Responsibility: In Search of an Ethics for the
Technological Age, Univ. of Chicago Press, 1985.

Jerry Mander, "Six Grave Doubts About Computers," Whole Earth Review, January
1985, pp. 11-20.
