The Johnson v. Weinberger Lawsuit
Gary Chapman - CPSR Executive Director
Cliff Johnson of CPSR/Palo Alto has brought suit against Caspar Weinberger
arguing that a policy of Launch-On-Warning is unconstitutional in that it
abrogates the responsibilities of the President and the Congress for declaring
and waging war. Our first article covering the case appeared in Volume 3, Number
After his case was dismissed in District Court, Johnson appealed to the Federal
appellate level. The case was heard July 10 by a panel consisting of Judges
Warren Ferguson, William Norris, and Charles Wiggins. The government's case was
argued by Assistant U.S. Attorney John Penrose.
The judges were apparently very interested in the case, and knew the issues
"backwards and forwards," according to Johnson. They were very polite and
listened attentively. When Penrose started to present his argument, Ferguson and
Norris were unimpressed. Norris said, "I thought the Constitution was clear that
only Congress can declare war ... there could be an act of war without any
decision being taken by anyone ... That in itself is a decision to engage in
nuclear combat ... in effect, a declaration of war taken without approval of
Congress."
The judges introduced a new element in the case, asking Penrose if it were
therefore not the responsibility of the President to go to the Congress and
request permission to implement a launch-on-warning policy. Penrose answered
that the Congress has already approved the funding for Pentagon computer systems
attached to command and control systems. Ferguson interrupted Penrose saying,
"There are 200 cases on the books in which Congress has appropriated money, but
that doesn't mean approval of anything."
Judge Wiggins seemed less supportive of Johnson's case. He said that he thought
that a launch-on-warning policy "provides a powerful deterrent to attack."
However he also said, "Machines make mistakes, that is one part of the complaint
I can accept."
Johnson needs only two of the three judges to agree with his position. The
best result would be for two of the judges to agree that the case can be tried
and that it should be sent back down to the District level. If that happened,
there would be a full trial with witnesses and so on. It is more likely,
however, that the government attorneys would appeal the case to the Supreme
Court. Alternatively, if the judges rule against Johnson, he himself will
appeal to the Supreme Court.
After the hearing, Johnson and CPSR jointly held a press conference at the
Federal Building in San Francisco.
Scientist Quits Antimissile Panel
By Charles Mohr, New York Times, July 12, 1985, p. A6
Washington, July 11 - A computer scientist has resigned from an advisory panel
on antimissile defense, asserting that it will never be possible to program a
vast complex of battle management computers reliably or to assume they will work
when confronted with a salvo of nuclear missiles.
The scientist, David L. Parnas, a professor at the University of Victoria in
British Columbia, is a consultant to the Office of Naval Research and was one of
nine scientists asked by the Strategic Defense Initiative Office to serve at
$1,000 a day on the "panel on computing in support of battle management."
Professor Parnas, an American citizen with secret military clearances, said in a
letter of resignation, and 17 pages of accompanying memoranda, that it would
never be possible to test realistically the large array of computers that would
link and control a system of sensors, antimissile weapons, guidance and aiming
devices, and battle management stations.
Nor, he protested, would it be possible to follow orthodox computer
program-writing practices in which errors and bugs are detected and eliminated
in prolonged everyday use ... "I believe," Professor
Parnas said, "that it is our duty, as scientists and engineers, to reply that we
have no technological magic that will accomplish that. The President and the
public should know that." ... In his memoranda, the professor put forth detailed
explanations of his doubts. He argued that large-scale programs like that
envisioned for the program only become reliable through modifications based on
realistic use.
He dismissed as unrealistic the idea that program-writing computers, artificial
intelligence or mathematical simulation could solve the problem.
Some other scientists have recently expressed public doubts that large scale
programs free of fatal flaws can be written. Herbert Lin, a research fellow at
the Massachusetts Institute of Technology, said this month that the basic lesson
was that no program works right the first time.
Professor Parnas wrote that he was sure other experts would disagree with him.
But he said many regard the program as a pot of gold for research funds or an
CPSR and others have written letters to the Congress in support of Professor
Parnas.
Star Wars Computing
Part II: Failure Possibilities
Greg Nelson and Dave Redell - CPSR/Palo Alto
This is the second part of a three-part article on the computer aspects of the
Strategic Defense Initiative. Copies of the full article, complete with
references, may be obtained from the CPSR office for $1.00 to cover postage and
handling.
The Possibility of Failure Under Attack
The main reason for this second concern is the impossibility of testing the
system under full-scale operational conditions. It is the universal experience
of computer system designers that reliability cannot be achieved without such
testing. Since we have no spare planets on which to fight trial nuclear wars,
operational testing of a global ABM system is impossible. The system would have
to work the first time. No computer system even a fraction of the size of the SDI
system has ever worked perfectly in its first operational use.
The Fletcher Report acknowledges that "there will be no way, short of conducting
a war, to test fully a deployed BMD system," and that therefore "the credibility
of a deployed system must be established by credible testing of subsystems and
partial functions." Experience shows, however, that this technique is a totally
inadequate substitute for testing under operational conditions. The first real
use of any large computer system inevitably uncovers flaws that did not show up
previously in simulations and partial tests. Some early examples are cited by
McCracken: after extensive in-house testing, the first release of IBM's OS/360
revealed five thousand errors. Similarly, the SABRE airline reservation system
was carefully tested in small cities before its first large release in New York.
The system worked well in the small tests, but in the high-volume situation it
sold two hundred seats on 90-seat airplanes all over the country. All subsequent
experience has shown such results to be typical; real operational use invariably
reveals scaling, saturation, and other problems that would never have been
discovered by limited testing or simulation.
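The SABRE overselling story exemplifies a classic failure mode that appears only under load. The following sketch is purely illustrative, not the actual SABRE design (the capacity and request counts are invented): at low volume each request is checked against fresh state, but when many requests are validated against the same stale seat count, the check-then-act race oversells the flight.

```python
# Hypothetical sketch of a check-then-act overbooking bug; not the actual
# SABRE design. CAPACITY and the request counts are invented for illustration.

CAPACITY = 90  # seats on the airplane

def sell_serial(requests):
    """Low volume: each request is checked against up-to-date state."""
    sold = 0
    for _ in range(requests):
        if sold < CAPACITY:  # check and record are effectively atomic
            sold += 1
    return sold

def sell_with_stale_reads(requests, batch):
    """High volume: `batch` requests are all validated against the same
    stale snapshot of the seat count before any sale is recorded."""
    sold = 0
    remaining = requests
    while remaining > 0:
        snapshot = sold                 # every request in the batch
        group = min(batch, remaining)   # sees this same stale value
        if snapshot < CAPACITY:
            sold += group               # all of them are accepted
        remaining -= group
    return sold

print(sell_serial(200))                # 90: never oversells
print(sell_with_stale_reads(200, 50))  # 100: ten seats oversold
```

With a batch of one the stale-read version behaves exactly like the serial one, which is why a trial in small cities reveals nothing; the bug exists only at volumes the partial tests never reach.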
Any large system that has been exercised only in partial tests and simulations
will lack credibility, but in addition, there are special problems with an ABM
system that make simulation testing particularly difficult. There are no
complete specifications for the problem being solved by the battle management
system, because we cannot know exactly what countermeasures the Soviets would
use to fool, disable, destroy, overload, and penetrate the defense system. There
is no way to simulation-test the system against such unanticipated attacks.
Related to this problem is the fact that the system would be subject to frequent
updates. Its battle management software would encode detailed plans for dealing
with particular Soviet threats; as these threats changed, so would the software.
The Fletcher Report cautions that "when a system is upgraded with new
capabilities, the probability of introducing new errors is very high." The
ability of a massive ongoing simulation project to establish and maintain the
credibility of such a large and continuously evolving body of software is
questionable, to say the least. A third difficulty is that simulation testing of
an ABM system must be carried out effectively in real time if it is to reveal
timing errors. Therefore, simulated stimulus data that are computed in advance
must be delivered during the test in simulated real time. Unfortunately, if the
system interacts with its environment during the test in ways that affect its
input stream, this technique won't work. In short, credible simulation testing
of any real-time closed-loop system is extremely difficult, and for a system as
large as that envisioned for the SDI, may simply be impossible.
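The closed-loop difficulty can be seen in a toy model (all numbers invented for illustration): a defense that intercepts threats changes what its own sensors will report next, so a stimulus stream precomputed under open-loop assumptions diverges from live sensor data after the very first engagement.

```python
# Toy closed-loop model; the arrival schedule and kill rate are invented.

def live_sensor_trace(arrivals, kills_per_step):
    """Closed loop: each intercept changes what the sensor sees next step."""
    threats, trace = 0, []
    for new in arrivals:
        threats += new
        trace.append(threats)                       # what the live sensor reports
        threats = max(0, threats - kills_per_step)  # system acts on its input
    return trace

def precomputed_stimulus(arrivals):
    """Open loop: stimulus computed in advance, ignoring the system's actions."""
    threats, trace = 0, []
    for new in arrivals:
        threats += new
        trace.append(threats)
    return trace

arrivals = [3, 3, 3, 3]
print(live_sensor_trace(arrivals, kills_per_step=2))  # [3, 4, 5, 6]
print(precomputed_stimulus(arrivals))                 # [3, 6, 9, 12]
```

The two streams agree only on the first sample; from the moment the system acts on its environment, the prerecorded stimulus no longer describes the world the system is actually in.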
The Apollo project is sometimes cited as an example of a similar system where
simulation testing succeeded in producing reliable operation on its first use.
In addition to ignoring the special problems cited above, however, this argument is
based on a questionable representation of the Apollo software development: many
errors in the NASA control programs were weeded out by operational testing in
the Mercury and Gemini projects. For example, Gemini 5 came down 100 miles from
its intended reentry point, because a programmer had neglected to account for
the orbital motion of the earth around the sun. Thus software errors that might
have been disastrous in the Apollo missions were discovered by earlier
operational testing. Even so, during the first moon landing, the computer in the
Apollo 11 Lunar Module failed due to software problems; only emergency manual
intervention by the pilot saved the mission from a last minute abort.
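The Gemini 5 error shows how a small, physically motivated rate term compounds over a multi-day mission. The back-of-envelope below is ours, not the actual flight software: relative to inertial space the Earth turns roughly 360/365.25 of a degree per day more than one revolution per 24 hours, and a guidance program that drops that term accrues ground-track error at about that rate.

```python
import math

# Back-of-envelope sketch (our numbers, not the Gemini flight software).
# Relative to inertial space the Earth rotates about 360 + 360/365.25
# degrees per 24-hour day; omitting the orbital term loses ~0.986 deg/day.

ORBITAL_TERM_DEG_PER_DAY = 360.0 / 365.25   # ~0.9856 degrees per day
EARTH_RADIUS_KM = 6371.0

def ground_track_drift_km(mission_days, latitude_deg=0.0):
    """Longitude error, in km along a parallel, from dropping the term."""
    drift_deg = ORBITAL_TERM_DEG_PER_DAY * mission_days
    km_per_deg = (2 * math.pi * EARTH_RADIUS_KM
                  * math.cos(math.radians(latitude_deg)) / 360.0)
    return drift_deg * km_per_deg

# Over an eight-day mission the neglected term alone is worth hundreds of km:
print(round(ground_track_drift_km(8), 1))
```

How much of that error actually reaches the landing point depends on how the flawed constant entered the reentry computation; the sketch shows only that a term of a degree per day is far from negligible at orbital scales.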
It is sometimes argued that operational testing will be made unnecessary by the
development of automatic program verifiers that certify the correctness of
formal mathematical proofs of the consistency of programs with their
specifications. Unfortunately, progress in automatic verification has been slow,
and the prospect of proving the consistency of a ten million line program with
its functional specification is extremely remote. A more fundamental weakness of
the argument is that many kinds of flaws in large systems represent errors in
specifications rather than inconsistencies between programs and specifications;
for example, error-prone features in user interfaces, or the neglect of relevant
physical phenomena (e.g., the orbital motion of the earth around the sun).
Because of the complexity of the requirements for the SDI computer system, which
are influenced by targeting doctrines, battle management plans, constraints of
user interfaces, and the physical environment of weapons and sensors in space,
any formal specification derived from the requirements would contain flaws that
could be revealed only by operational testing.
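The gap between program-specification consistency and real-world correctness can be made concrete with a deliberately trivial, hypothetical example: the calendar "specification" below forgets leap years, so a routine can pass exhaustive verification against it and still give the wrong answer for February 1984.

```python
import calendar

# Hypothetical, deliberately flawed specification: it forgets leap years.
FLAWED_SPEC = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
               7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def days_in_month(month):
    """Implementation written faithfully to the specification."""
    return FLAWED_SPEC[month]

def verified_against_spec():
    """'Verification': the program provably agrees with its specification."""
    return all(days_in_month(m) == FLAWED_SPEC[m] for m in range(1, 13))

print(verified_against_spec())                            # True: program matches spec
print(days_in_month(2), calendar.monthrange(1984, 2)[1])  # 28 vs 29: spec is wrong
```

No amount of checking against FLAWED_SPEC can reveal the error, because the program and the specification agree perfectly; only contact with the real calendar, i.e., operational use, exposes it.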
The message of responsible proponents of program verification and similar
disciplines of programming is not that mechanical verification can replace
operational testing of large systems, but that a disciplined design employing
formal specifications and a priori correctness arguments is necessary in
addition to operational testing. In the case of the SDI computer system, the
state of the art is limited to informal specifications and at best partially
convincing correctness arguments for the major components of the system, such as
replicated data bases, failure resilient communications protocols, and fault-
tolerant high-performance architectures. In short, the required computer system
is too complicated to be designed by the disciplined methodology advocated by
programming formalists. Thus, if it somehow did become possible to perform
repeated operational tests of the ABM system, we can predict the outcome: the
system would exhibit unanticipated failures that might or might not be amenable
to later repair.
An illustrative example of the unreliability arising from even moderate
complexity occurred when the DIVAD (Division Air Defense) computer-controlled
cannon was demonstrated to a military review panel. The drone target flew by, the
gun swung into action, passed right by the drone, swung all the way around, and
pointed itself straight at the review stand, whose occupants scattered for
cover. Later it was discovered that the flaw in the system had been caused by
the washing that the weapon had received in preparation for the review. This
suggests that the DIVAD test team had not done an adequate job, since thorough
testing in rain, snow, and other adverse conditions should have revealed the
error under less embarrassing circumstances. There is another possibility:
perhaps the testing team did a thorough job, but the design was so complex that
the bug revealed in the demonstration was too obscure for the tests: the kind of
bug that strikes only when the forward radar is wet, the target is flying north,
and the date is February 29. In this case, the blame would lie with the weapon's
designers, who failed to master the complexity of their design.
In passing we call attention to a nontechnical but very serious problem: the
Department of Defense (like many other institutions) rarely tests its systems as
thoroughly as it plans to.
A discussion of this problem and a list of references is contained in a 1983
General Accounting Office report to Congress on DoD testing. The report makes it
clear that weapons systems that prove difficult or impossible to test adequately
are often deployed anyway, with the predictable result: chronic unreliability.
In summary, it is folly to rely on any computer system that has not been
subjected to full scale operational testing; if the system is as large and
complicated as the proposed ABM system, the folly is monumental: all experience
teaches us that flaws would surface in the first operational use of the system -
in a real nuclear war.
(The third and final section of this paper will appear in the Fall 1985
Newsletter.)
The Responsible Use of Computers:
Where Do We Draw the Line? (Part II)
Christiane Floyd - Chairperson, Forum Informatiker für Frieden und
Gesellschaftliche Verantwortung (FIFF)
This is the second half of a paper by Christiane Floyd, the first part of which
appeared in the Spring 1985 Newsletter. For a copy of the complete paper, please
send $1.00 to the CPSR national office to cover postage and handling.
2. Human limits of responsible computer use
What I have to say here appears to me self-evident; however, in view of current
developments in computer application in our society, I feel it necessary to
state the obvious.
When using computers, the actions of computation, data processing and symbol
manipulation are severed from all other ways of dealing with reality available
to man, and are alone effective. In my view, this is only defensible if:
- computers are used in applications where computation and symbol manipulation
are adequate ways of dealing with reality
- there remains sufficient scope outside the computer application for other
human faculties and forms of experience not to degenerate.
In current practice these two points are, in my view, given no or far too little
consideration. And, in this respect, computer science relies on kindred
mechanistic theories in the humanities which, following the procedures adopted
in the natural sciences, attempt to analyse our inner world in terms of clearly
demarcated sub-areas. This is done by confinement, for example, to statements
about the cognitive faculties of man in order to obtain concrete results. It is
not my intention here to question whether a separative consideration of this
sort is scientifically meaningful; but I feel it essential to stress that it
does not follow from this separative approach in scientific consideration that
these areas may ever be separated in reality too.
We as human beings alternate between mental states in which we compute, feel,
process information, remember or dream, skipping back and forth at any time with
such facility that mostly we are scarcely aware of it. Computer science, on the
other hand, nurtures, in my view, an image of man in which the diversity of
human experience and action is reduced to information processing in the
technical sense and to symbolic object manipulation. Here, I regard the
following points as particularly problematic:
- the postulated structural similarity of man and computer as information-
processing systems;
- the consequent view that human problem-solving behaviour is governed by rules
and is therefore classifiable and predictable;
- the already mentioned assumption that it is possible to disregard an essential
and ever-present feature of man's mental make up, i.e., that his faculties of
computation and symbol manipulation are inextricably linked with other forms of
experience, such as feeling or acting, which are governed both by the connection
between the mind and body and by the circumstances relating to the particular
unique situation in hand and to its interpretation by the experiencing human
being.
At this point I do not wish to go further into rival theories in the humanities
in which other views of man are opposed to the mechanistic one; nor do I wish to
examine the empirical evidence for or against the mechanistic view of man. But I
do emphatically desire to disassociate myself from this view as the sole basis
for the design and application of computer aided systems since it conflicts with
my experience of myself and in dealings with other human beings in a number of
essential points and, what is more, it threatens certain qualities in
interpersonal relations which are for me indispensable.
I also feel that we must draw the limits of responsible computer use by
consciously adopting a richer view of man and using this as the basis for our
work. This involves both examining theories about man which conflict with the
mechanistic view, and also, and this is of particular importance, recognizing the
difference between man and machine in our own experience and in our relations
with other human beings, so as to embody it.
The points mentioned above appear to me to be of significance for a number of
specific concrete developments in computer science and computer application of
which the following may stand as examples:
- The area of human-machine interaction. Here, we must make it clear that
between human beings and machines there can be no communication in the proper
sense of the word, and no dialogue can be carried on; that the computer will
never be man's partner; and that we must therefore ensure that communication,
dialogue and partner-like cooperation between human beings remain possible in
jobs involving work at computer terminals.
- Changes in work processes through computer application. We must see to it that
the role of human beings in computer-aided systems does not degenerate into
meaningless routine work and computer operation, but rather that meaningful
human activities are adequately supported by the computer.
- The use of computers in areas where interpersonal relations are of paramount
importance. Here, we must insist, for example, that the human relationship
between teacher and student cannot be replaced by computer aided instruction;
that human care for the elderly cannot be substituted by the use of geriatric
robots, as proposed by certain computer scientists; that jurisdiction is more
than information processing with reference to the particular case in hand and
the relevant laws; that the assessment of colleagues' achievements cannot be
reduced to the numerical evaluation of selected performance characteristics.
- The invasion of leisure time and family life by the computer. Here we are
quite justified in posing the question as to what extent the continued and one-
sided preoccupation with computers allows other human capabilities, which are
thereby neglected, the chance of developing harmoniously.
The drawing of a line between man and machine is not, in the last analysis,
imposed from without, but is rather inherent in man as a thinking and acting
being in whom machine-like and non-machine-like elements interact. Through our
constant confrontation with the computer we risk being encouraged to think and
act more like machines, our other faculties going to waste. If we wish to avert
the negative effects of the computer on our environment, we must first of all
come to terms with the machine in our own selves.
3. Ethical and Political Limits of Responsible Computer Use
Wherever computers are used there is a danger of their being utilized to promote
the interests of individuals or specific groups in order to do things with
computers that ought not to be done without them. I shall therefore not attempt
to survey the areas which are particularly at risk, but rather outline the ways
in which computers may be used to circumvent accepted ethical or political
norms.
- With the help of computers the attempt can be made to conceal measures whose
open application would be impossible on the basis of currently accepted values
and legislation. Instances of such utilization are the surveillance and control
of the individual by the state and private organizations and enterprises.
- With the help of computers one can set out to circumvent ethical action -
which is exclusive to man - so as to force pre-planned events to occur contrary
to man's sense of responsibility. This is evidenced, for example, by considerations
as to the necessity of triggering off the destruction of a city (in the case of
a nuclear war) by means of a computer on the grounds that human beings would
probably be unwilling, even under stress and acting under orders, to carry out a
command of this sort.
- With the help of computers one can attempt to shirk responsibility for the
occurrence of programmed events by delegating the responsibility to the machine.
Although the computer stands as a nonhuman element between programming and
execution, the responsibility for the effects of applying our programs rests
with us as human beings.
- Finally, one can with the help of computers set out to exceed the limits of
human responsibility altogether. More specifically, we are today in the process
of planning the application of programs which would be capable, on occurrence of
a local problem, of wiping out, without human intervention, all life on this
planet.
Günther Anders defines the permissible limits for human products as follows:
"Man, being as he is at liberty to produce things, probably has no criterion of
this sort to go by: that is, unless he takes himself as criterion, in other
words, defines the limit as being reached at the point where, being, so to speak,
'smaller than himself,' he 'cannot keep pace with himself,' i.e., can no longer
cope with his own products. As is the case today." (Excerpt from "Die
Antiquiertheit des Menschen," Vol. 1, Über die Seele im Zeitalter der zweiten
industriellen Revolution, C. H. Beck, Munich, 1980, p. 45.)
West Germany - FIFF, a German counterpart of CPSR, held its first annual
meeting, which was attended by about 300 computer scientists. We were not able
to send an American representative of CPSR as requested, but did send them a
message of support and congratulation.
Hungary - CPSR's President Brian Smith gave a talk at the Annual Meeting of the
International Physicians for the Prevention of Nuclear War in Budapest (see
story page 4).
Canada - The US is asking NATO nations to participate in Star Wars research.
Over 700 Canadian scientists (including many computer scientists) have signed a
petition stating that they will not work on the SDI.
Scotland - CSR, a Scottish counterpart of CPSR, wrote a letter to Vice President
Bush on the occasion of his meetings in Britain with Margaret Thatcher. The
letter, signed by over 70 computer scientists from Great Britain, began as
follows:
"We share your desire to see a world free from the threat of nuclear weapons.
However, it is our professional opinion that the proposed SDI cannot accomplish
this and would in fact increase the risk of accidental nuclear war, because SDI
makes demands on computing technology which can never be met."
Following arguments about the impossibility of achieving total reliability, the
letter closed as follows:
"These arguments are also being voiced by computer professionals in your own
country and Canada, but have largely been ignored. We urge you to take back to
America a renewed commitment to listen to their advice, and to encourage your
administration to seek alternative solutions to the problems of nuclear
weapons."
Something Old Under the Sun
The following excerpt is from an article published fourteen years ago, when the
previous debate about antiballistic missile systems was taking place. It
appeared in the Forum section of the November 1, 1971, DATAMATION and was
entitled "Why MAC, MIS, and ABM Won't Fly (Or SAGE Advice to the Ambitious)," by
H. R. J. Grosch. It is reprinted here with permission of DATAMATION magazine,
copyright by Technical Publishing Co., a Dun & Bradstreet Company, 1971 - all
rights reserved.
The thesis is this: there is a spectrum of feasibility, from the very easily
doable to the forever (yes, forever!) impossible. At the simple end there are
clearly stated problems, using available and well-understood hardware and
software in familiar modes. Such problems may be very large (utility billing for
Con Edison, payroll for the French National Railway), but the amount of novel
analysis is negligible, the problem statement is changed hardly at all during
the project, and the definition of success is agreed to in advance and does not
change while work is under way. We all see dozens of such applications around
us: department stores, utilities, banks; engineering calculations; school
records; at the extreme of size, the Internal Revenue Service and the Social
Security Administration.
Of course, failures occur; we see them daily. Incompetence within, supplier greed
without - these are always with us. But it certainly is fair to say that projects
at this easy end of the spectrum should, and almost always in the end do,
succeed. Whether the measure of success is speed, cost, consolidation of
records, or whatever, all parties agree.
Now look at the middle of the spectrum. Here we have problems clearly stated
and readily mathematized (airline reservations is a fine example), but subject
to some change during the project. Nevertheless, change can be resisted in many
cases (added customer luxuries), if not all (tax and regulatory matters).
Success is understood pretty well, especially the speed or reliability sort of
thing, but financial targets tend to waver somewhat, and public acceptance is
mentioned more often than in the payroll sort of application. Above all, the
intrinsic difficulties of the project, the system elements that have to be
explored de novo and interlocked and tuned, make the time scale long enough that
not only are the problem statements and the financial targets reluctantly
allowed to change somewhat, but the available technology changes enough that
some new hardware and software has to be substituted for the original complement.
SABRE is, of course, a perfect example. It pioneered, at least at the high level
of sophistication aimed for; it stretched out over many years; new software
methodology was reluctantly incorporated. And it barely worked; just a little
less determination and just a little less exuberance in the national economy,
and it would have been abandoned, IBM or not.
Another enormously expensive, but in the end enormously successful, case was the
NASA Apollo program--the computer part, that is. Here the problem was relatively
stable: the law of gravity, the lunar "terrain," even the vehicle configuration,
held pretty still for the half-decade involved. But the unbelievable equipment
and personnel and software redundancies, the simulations, the thousands of
practice runs and reruns, are well known; the price was high. I think it fair to
say that, like SABRE in a more narrowly audited context, Apollo was right at the
edge of the possible.
We improve, of course, and demonstrated successes help our confidence. In 1954
GE Louisville couldn't make an overly complex payroll program work; 15 years
later, we put computer equipment on the moon! But when you consider the scale of
Apollo, or even SABRE, the managerial and technical experience of the teams, and
the reliability of the equipment, we haven't pushed the class of problem
solvable per man and per dollar very far toward the difficult end of the
spectrum in the last decade....
Inspect now the far end, the impossible end, of the feasibility spectrum. Here
we find that hecatomb called SAGE, and the other (all the other) command and
control projects. We find the great corporate MIS systems-the automated board
room, the self-optimizing model, the realistic management game. And, pardon my
chuckles, if we turn over a few flat stones we may even find MULTICS.
These projects, and especially the military ones, are characterized far
differently than the simple ones at the other end of the spectrum. The problem is
not clearly stated. The definition of success is not agreed on between contractor
and customer. The time scale is terribly long. The challenge of the initial
problem is so great that only the very newest, most powerful, least easily
realizable hardware and software can be considered; each new offering is
immediately seized upon and examined for incorporation. This is true of the
outer peripherals as well, the radars and the rockets and the management
Above all, returning for the moment to problem definition, there is the element
of full disclosure, of good faith. At the easy end of the spectrum, the
customer gives the contractor complete information about the problem, and
genuinely wants to cooperate in a mutual success. At the almost-impossible
middle, the customer still tries his best on both counts; new facts, new
competitive urges, new divergencies of motive do intrude, and
overprofessionalism rears its ugly head. But at the impossible end, the problem
statement is withheld. Russia and China keep their current and future
capabilities and their attack and defense plans secret; the top management
conceals from the would-be modeler how it actually functions ...
At the impossible end also, the customer and the contractor have a complete
failure to agree about the measure of success. Is it to avoid use? The scarecrow
effect, SAGE proponents called it. Is it to utilize to the absolute limit our
technical capabilities? Is it to satisfy, precisely, the most recent revision
of a set of contract specifications? Or is it to genuinely solve the current,
as-evolved real-life problem: really shoot down enemy missiles, really present
the key data the top managers use to run the business?
Originally, SAGE was to shoot down subsonic Russian bombers coming over the
pole. By the time it disappeared from the news (but not from the DoD budget,
alas!), the bombers had gone supersonic, been replaced by unmanned missiles,
been replaced again by the ICBM, the trajectories had become more varied,
suborbital, fractional-orbital--from all points of the compass ...
In summary, then, I claim that projects from the wilder shores, and especially
command and control, never have been and never will be successful in the real-
life sense. They may make a colonel into a general, fill sites with fantastic
hardware, make the cover of Business Week, and even frighten the competition or
the enemy ("...but by God, sir, they frighten me!"), but they will not run a
giant business or defend a country.
IPPNW Meeting in Budapest
Brian C. Smith - CPSR President
The Fifth Annual Meeting of the International Physicians for the Prevention of
Nuclear War (IPPNW) was held in Budapest in June, complete with 842 participants
from 51 countries, simultaneous translation, 147 press agents, speeches by the
President of Hungary and Willy Brandt, etc. I.e., it was a Big Deal--highly
visible, as important for what it symbolises as for the work it actually
accomplishes.
I presented a paper called "The Limits of Correctness" at a session on
Unintentional Nuclear War. I tried to show in lay terms how program
verification, far from proving programs "correct," only shows consistency
between two formal objects--program and specification--both defined with respect
to an inherently partial model (copies of the paper are available from the CPSR
office for $1.00). I also got lots of requests for copies of Nelson and Redell's
SDI analysis. People, not surprisingly, were eager to cut a diet rather rich in
generalities with some specific technical critiques. All in all CPSR was widely
appreciated, and even (somewhat to my surprise) rather well known: Johnson's
suit against Weinberger, for example, had been featured on prime-time TV news in
Finland; doctors from New Zealand knew of our newsletter; this sort of thing.
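The verification argument can be sketched in a few lines of code. The example below (including the names `spec_ok` and `monitor`) is our own illustration, not taken from the paper: a program can agree with its specification on every input and still fail in the world, because the specification itself models the problem only partially.

```python
# Hypothetical illustration: verification shows only that program and
# specification are consistent with each other, so a program can be
# "correct" with respect to a spec whose model of the world is partial.

def spec_ok(readings, alarm):
    """Formal spec: the alarm must sound iff some reading exceeds 100."""
    return alarm == any(r > 100 for r in readings)

def monitor(readings):
    """Implementation -- consistent with spec_ok on every input."""
    return any(r > 100 for r in readings)

# "Verification" succeeds: program and spec agree on these inputs...
for case in ([50, 99], [50, 101], []):
    assert spec_ok(case, monitor(case))

# ...but the spec's model never mentions sensor failure. A dead sensor
# reporting 0 satisfies the spec perfectly while missing a real emergency.
assert monitor([0, 0, 0]) is False  # consistent with the spec, wrong in the world
```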
Dr. Chazov (a very influential Soviet physician: a founder of IPPNW, Brezhnev's
private physician) agreed to send us the names of concerned computer scientists.
The doctors' situation differs somewhat from our own. For one thing, the anti-
nuclear position is highly esteemed in their hallowed halls. For example, a
panel of editors of major medical journals (the New England Journal of Medicine,
JAMA, Lancet, etc.) discussed how they could best communicate concern about
nuclear issues to their readers; they have published many such articles and
editorials in recent years, and want to keep this tradition going. This can all
happen, I presume, because there are no doctors on the opposite side; by and
large, the medical profession is independent of the arms race.
The doctors also worry about what positive steps to take, in response to the
criticism that they are merely doomsayers. As with us, there is considerable
disagreement about what form this should take. For example, a major proposal was
aired to undertake or support a global children's immunization campaign, but
there were many late-night discussions in hotel corridors about whether it would
distract IPPNW from its main charter.
I was repeatedly told of considerable concern about SDI among non-American
computer scientists. As with American non-computer scientists, the lack of an
explicit opposition, and the consequences of having a research community not
relying primarily on defense funding, are terribly evident. I wish anyone who
thinks that this doesn't make a difference could have experienced this.
As in any large conference, there was less debate than one might wish (and lots
of my least favourite theme: people hoping for a better human race). But then
the kinds of discussion I was looking for, and the kinds of work I think should
be done, need smaller rooms, and more heated voices. If public visibility and
general consensus were the point, the conference was a great success.
Next year, Cologne; then Moscow. These conferences make an important
international statement. We could do worse than to mobilize a similar level of
support.
"If any witness should come here and tell you that a reliable and safe launch-
on-warning posture can be designed and implemented, that man is a fool ... The
most fateful action in our Nation's history would be predetermined years in
advance, in almost total ignorance about the future situation that might trigger
the decision crisis. It would be predetermined in some computer language, by
some engineers, which the President and his advisers, of course, could never
have checked out ... The entire system of warning, response, and release is too
complicated to be checked out against all possible failures, and our
imagination is too limited to even think of all the sources of failure:
sabotage, an accidental nuclear explosion, a nationwide power failure in the
midst of a crisis, all sorts of human errors and so forth."
-- Fred Ikle, 1977, Under Secretary of Defense for Policy
CPSR in the News
As CPSR has grown in size and reputation, it has become impractical to keep
track of all the relevant news articles. Only the highlights are presented here,
arranged by topic.
Cliff Johnson's lawsuit against Caspar Weinberger
May 5 - Headline story in the Los Angeles Daily Journal (a legal publication)
quoted CPSR's supporting statement of Johnson's case.
July 10 - Channel 4 in San Francisco interviewed Johnson on the 5 pm News.
July 11 - Coverage in the San Francisco Chronicle, Peninsula Times Tribune, Los
Angeles Times and San Jose Mercury News, following the press conference in S.F.
July 16 - Syndicated columnist Art Hoppe wrote a column on Johnson's case.
Strategic Defense Initiative (Star Wars)
June - Jon Jacky's article entitled "Star Wars Defense Won't Compute" was
published in The Atlantic magazine.
June 6 - Jacky was interviewed on National Public Radio about his Atlantic article.
June 7 - Editorial entitled "Star Wars Won't Work" printed in the Bremerton Sun
(Seattle area) mentions Jacky's article.
June 9-10 - Editorials entitled "Military Takeover" and "Software Fantasies"
appeared in the San Jose Mercury News. These editorials quote Gary Chapman, Jon
Jacky, Greg Nelson, Dave Redell, and Terry Winograd.
June 19 - A front-page story in the Seattle Times entitled "Nonhuman
Warriors are in the Offing" included a picture of Jon Jacky (as well as one of a
June 24 - Op-ed piece by Dave Caulkins, secretary of CPSR/Palo Alto, entitled
"On Star Wars" appeared in Electronic Engineering Times.
July 1 - Greg Nelson and Dave Redell were interviewed on KNTV, Channel 11, San
Jose, by technology reporter John Crump, following a press release about their
Star Wars paper.
July 17 - A letter entitled "Weinberger and Star Wars" written by computer
pioneer Ted Shapin of CPSR/LA, was printed by the Los Angeles Times.
Strategic Computing Initiative
May/June - Computers and People reprinted the short version of CPSR's Strategic
Computing Assessment from the February CACM.
May - Metro, a Silicon Valley publication, printed two stories entitled "War
Games: The Defense Department's Plan for Silicon Valley," and "Strategic
Computing and its Opponents: CPSR Fights Militarization of Silicon Valley." They
quoted Gary Chapman, Lucy Suchman and Terry Winograd.
CPSR/Boston The Presentation Group has made substantial progress on their
slide/tape project, "Computer Reliability and Nuclear War." After a first round
of fundraising in which $10,000 was raised, they created and gave a very
successful presentation to several non-technical audiences (for whom the final
presentation is intended). The professional producers hired for the project have
arranged for free use of audio facilities (studio time, editing machines), a
Kurzweil music synthesizer, a composer to produce original music, and a discount
on film processing and supplies.
This summer, the group is concentrating on more fundraising and on scripting.
Other tasks include obtaining letters of endorsement from prominent groups and
individuals, getting commitments of donated services from artists, and locating
sources of archival graphics materials.
CPSR/Chicago The chapter held a reception at the Chicago Peace Museum on July
17, during the week of the National Computer Conference there. CPSR has been
denied booth space at NCC because of a recent ruling that booths are given only
to members of AFIPS, so the reception was used as a means of distributing CPSR
literature to NCC participants. About fifty people attended the reception and
presentations were made about CPSR on both a local and national level.
CPSR/Los Angeles The chapter is in charge of the CPSR booth at the International
Joint Conference on Artificial Intelligence (IJCAI), which will be held in Los
Angeles, August 18-23. They are preparing a game called "Non-Trivial Pursuit" in
which participants discover the remarkable answers to some compelling questions.
People are still needed for staffing the booth. If you can help, please call
Fred Blair at (818) 577-6917 or Al Beebe at (213) 559-5273.
CPSR/Palo Alto Dan Carnese and Lucy Suchman have prepared two versions of an
information packet for new members: one applying to CPSR in general and one
specific to CPSR/Palo Alto. The general version will be distributed to new
members by the national office.
CPSR/Santa Cruz James Ganong and John James have written a chapter called
"Computers In Peacework" for a book entitled "Working For Peace", which will be
published in October.
CPSR/Seattle In April 1985, members of CPSR/Seattle met with Dick Mark, the
Executive Director of The Professional's Coalition for Nuclear Arms Control
[1616 P Street, NW, Suite 320, Washington, D.C. 20036 tel. (202) 332-4823]. The
coalition was begun in early 1984 to coordinate the arms control efforts of
various professional groups. It includes the Physicians for Social
Responsibility, the Union of Concerned Scientists, and the Lawyers Alliance for
Nuclear Arms Control. CPSR has been in touch with them informally for some time
but presently has no plans to join.
From the Secretary's Desk
Laura Gould -- CPSR National Secretary
The Federation of American Scientists is arranging hearings before the Senate
Sub-Committee on Strategic Nuclear and Theater Weapons on "Computers and the
Strategic Defense Initiative." The proposed agenda for these hearings includes
three main topics: introduction to computers and the SDI;
feasibility of developing the software for the SDI; limitations of computers in
SDI. CPSR has provided a list of expert witnesses who could testify before this
sub-committee of the Armed Services Committee.
Response to CPSR's 1985 proposal is starting to come in. To date we have received
$30,000: $10,000 from the Scherman Foundation, $10,000 from the Ruth Mott Fund,
and $10,000 from three New York donors. Several other foundations have shown
interest and solicited proposals recently.
An anonymous donor has given the CPSR national office an Apple III identical to
the one we were given last fall. Thanks to these generous gifts, we have been
able to handle the increasing workload in the national office.
IJCAI The International Joint Conference on Artificial Intelligence will take
place at UCLA August 18-23, 1985. CPSR will have a booth there (see Chapter
News) and Terry Winograd will chair a panel entitled "Expert Systems: How Far
Can They Go?" Besides Terry, the panel will consist of Brian Smith, Stuart
Dreyfus, and Randy Davis. Alan Bundy and Henry Thompson, of CSR, Edinburgh, will
also give papers on social topics.
CPSR Meetings The next CPSR Board Meeting will be held Saturday, November 23,
1985, in the Palo Alto area. The second Annual Meeting will be held on Saturday,
March 1, 1986, in San Francisco, the weekend before the COMPCON Conference opens
there. Details in the next Newsletter.
COMPCON This meeting will take place March 3-7, 1986 in San Francisco. We are
currently trying to arrange a panel on the feasibility of building a computer
system for the SDI.
CSCSI-86 This Canadian Artificial Intelligence Conference will take place in
Montreal May 21-23, 1986. One of the topics will be Social Aspects of AI, and
papers of less than 5000 words are solicited. Three copies should be sent,
before Dec. 31, 1985, to the Program Chair: Bill Havens, Dept. of Computer
Science, Univ. of British Columbia, Vancouver, B.C. V6T 1W5, Canada.
To provide some relief from the seriousness of the issues which CPSR regularly
addresses, we are instituting this column which will contain true stories of
computer faux pas. Send us your favorites and we will select, edit, and publish.
Well, there was this cement factory that a company I used to work for built an
8080-based distributed control system for. At the time (1975) this was state-of-
the-art in process control.
The plant crushed boulders into sand before mixing with other things to make
cement. The conveyors to the rock crusher (and the crusher itself) were
controlled by the 8080s. A batch of defective RAM chips used in the processor
had a habit of dropping bits (no parity or ECC), causing at one point the second
of a series of three conveyors to switch off. This caused a large pile of
boulders (about 6-8 feet in diameter) to pile up on top of the conveyor (about
80 feet up), eventually falling off and crushing several cars in the parking
lot, and damaging a building. We noticed the problem when, being unable to
explain the dull thuds we were hearing in the control room, we looked out the
window ... JCP
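For readers wondering why the missing parity mattered: a single extra parity bit per word is enough to detect any one-bit error, so the dropped bits would have been caught instead of silently switching off a conveyor. A minimal sketch of even parity follows (our illustration, not the plant's actual hardware):

```python
# Even parity in miniature: store one extra bit so the total number of
# 1s in each word is even; any single dropped or flipped bit then makes
# the count odd and the error is detectable on read-back.

def with_parity(byte):
    """Append an even-parity bit to an 8-bit value."""
    parity = bin(byte).count("1") % 2
    return (byte << 1) | parity

def check(word):
    """Return True if the stored word still has even overall parity."""
    return bin(word).count("1") % 2 == 0

stored = with_parity(0b10110010)
assert check(stored)              # intact word passes the check
corrupted = stored ^ (1 << 5)     # one bit "dropped" (flipped)
assert not check(corrupted)       # single-bit error is detected
```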