Growing Concern About the Militarization of Artificial Intelligence

Artificial intelligence, or AI, has always been heavily funded by the Department
of Defense. But after the appearance of the Strategic Computing Initiative in
1983, DoD funding of AI took on a new character. Now there is a growing trend
toward funding development more than basic research, and Federal funding of AI
research is increasingly directed toward the specific military programs under
the Strategic Computing umbrella.

This has caused a great deal of concern among researchers and workers in AI,
many of whom were attracted to the field by its intellectual challenge and
technological promise. Many people working in AI today are critical of the
growing dependence of the field on the military. And they are worried that their
work may be used to develop a new generation of deadly weapons, including the
qualitatively revolutionary "autonomous" weapons now being pursued by the
Department of Defense.

Two personal protests against the militarization of artificial intelligence are
presented here, beginning on the following two pages. Doug Schuler has proposed
an alternative to the Strategic Computing Initiative which he calls a
"Responsible Computing Initiative." Professor Joseph Weizenbaum for the
Laboratory of Computer Science at MIT, author of the famous program "Eliza" and
the book, Computer Power and Human Reason, has more serious doubts about doing
AI research. We have excerpted here a speech he gave in West Germany .

The CPSR Newsletter is dedicated to disseminating the thoughts and opinions of
CPSR members when they are relevant to the issues the organization works on. The
personal statements that follow are not presented as official CPSR policy or any
official attitude of the organization. They are published here with the intent
of stimulating a dialogue that we hope will be conducted in future issues of
this publication.

A Responsible Computing Initiative
Rescuing Artificial Intelligence from the Military
Doug Schuler-CPSR/Seattle

Dystopian science fiction has always been characterized by popularized
apocalyptic visions of the future. These have included ray guns in space, the
ability to destroy the world, and the use of intelligent robots in warfare.
Technology has been busily perfecting the art of destruction; our nightmares
have become military objectives. All three examples mentioned above are now on
the drawing boards of Pentagon contractors. The Strategic Defense Initiative
(SDI), or "Star Wars," may produce the first; the nuclear arms race in general
reflects the second; and the Strategic Computing Initiative (SCI), through the
use of artificial intelligence (AI), is developing the third.

The desirability of the Strategic Computing Initiative has not been a matter of
public debate or scrutiny. Nor have any possible alternatives. The modest
proposals explained below began as a reaction to the Strategic Computing
Initiative. In my opinion, there must be a more uplifting vision than the
blasting of enemies.

I see an urgent need for a widespread, coordinated alternative effort, on par
with the well-financed and organized military programs, to direct computing
research and other technical resources toward progressive goals. To begin to
define such a program would be a significant, positive step.

A Responsible Computing Initiative

My thinking has started to take the form of what I call an RCI, a Responsible
Computing Initiative-a humane alternative to the SCI, one in which researchers
from a wide spectrum of disciplines could cooperate in the definition and
implementation of an interesting and technically challenging project directed
toward humane goals. I also want to add my voice to the increasingly large
chorus calling for better approaches and an end to a reliance on present
institutions and indirect problem solving, which are inadequate. My grandiose
title for this new program is meant to represent a new effort through which
computer scientists and other technically skilled people can begin to engage in
work that is more concordant with their values and sense of social
responsibility.

I have derived several ground rules for the first version of an RCI. These have
fortunately turned out to be simple and few in number: (1) explicitly address
non-military and humane objectives; (2) recommend a small number of specific,
tangible goals which are feasible; and (3) foster a wide range of research
activities.

Responsible computing is research and development of computing technology to
improve the human condition. The definition, though not rigorous or all-
inclusive, allows for such goals as improving health care, improving
environmental quality, providing computing and other technical capabilities to
handicapped persons of all types, providing less intimidating access to tools
and information, reducing illiteracy, and reducing the threat of war.
Responsible computing is meant to deal directly with these objectives. As an
example, the Japanese Fifth Generation Project, promoted in Japan as an
ambitious multi-year program to use AI for socially responsible objectives, is
in marked contrast to the military goals of the SCI here in the United States.

As a first cut at an RCI, I attempted to partition the problem into broad areas
that would promote various types of research to ultimately benefit different
audiences, the prototypical "end-users." Each area represents a different
"viewpoint." Each is designed for a different "end-user" and has a unique focus
to it.

The first viewpoint, Communication, Language, and Literacy (CLL), is designed
for individuals and small groups. Its focus is on basic communication between
"naive" groups, including illiterate people, disadvantaged people, and others
with neither the opportunity, skills, nor inclination to deal with computer
technology as it presently exists. It is intended to help teach written language
and to facilitate communication across national and cultural boundaries through
other means. This research could involve graphics, multilingual word and text
processing, on-line dictionaries, and specialized vocabularies. Computer
networks and electronic mail services to these users would be a very reasonable
research topic. An example of this is DBNet, an experimental network developed
at the University of Washington to supply mail services to deaf and blind
people. The development of higher-level programming languages for less-technical
users could also be initiated.

The second viewpoint, Resource Management (RM), is designed for larger groups
and organizations. Its focus is on using the power of a computer to simulate or
model scenarios that are important to the user. This could include water
distribution or crop allocation in rural Third World environments, or industrial
uses such as factory scheduling. It is intended to help groups manage
enterprises more thoughtfully through increased awareness of the availability of
resources and the nature of interactions through the use of "what-if" exercises.
This area could integrate research such as remote sensing, simulation, and data
reduction. Artificial intelligence concepts could be used to develop strategic
models. Iconic languages could be employed to represent relationships between
elements in the model. Hybrid systems containing elements of spreadsheets, AI-
based modelling systems, and current simulation environments could be proposed,
built, tested under actual conditions, and evaluated for effectiveness.
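
To make the "what-if" idea concrete, here is a minimal sketch (in Python, with
invented figures) of the kind of resource model the RM viewpoint describes: a
hypothetical village water budget that can be re-run under different rainfall
and crop-allocation assumptions.

    # A hypothetical water budget: all figures are illustrative, not real data.
    def water_balance(rainfall_mm, crop_hectares, need_per_ha_mm, reserve_mm):
        """Return the end-of-season water reserve under one scenario."""
        supply = rainfall_mm + reserve_mm
        demand = crop_hectares * need_per_ha_mm
        return supply - demand

    scenarios = {
        "normal year":   dict(rainfall_mm=600, crop_hectares=40, need_per_ha_mm=14, reserve_mm=150),
        "drought year":  dict(rainfall_mm=350, crop_hectares=40, need_per_ha_mm=14, reserve_mm=150),
        "reduced crops": dict(rainfall_mm=350, crop_hectares=25, need_per_ha_mm=14, reserve_mm=150),
    }

    for name, params in scenarios.items():
        balance = water_balance(**params)
        print(f"{name:13s}: {balance:6.0f} mm ({'surplus' if balance >= 0 else 'deficit'})")

The point of such an exercise is not the arithmetic, which is trivial, but
letting the user see the consequences of an allocation before committing to it.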

The third viewpoint, Arbitration and Conflict Resolution (ACR), is designed for
nations and transnational organizations. This area falls into the realm of
"participant systems." The focus in this area would be the use of computer
software to facilitate negotiation by supplying some of the bookkeeping and
other appropriate functions. Using legal expert systems and arbitration models,
computing systems would be designed to facilitate peace through conflict
resolution. Simulation and artificial intelligence concepts could again be
employed in this area. As an example, a computer model which showed the
interrelationships between variables in a deep-sea mining operation was employed
extensively in the Law of the Sea negotiations. If a model is credible, the
context relevant to the negotiations is more sensible to the negotiators.
Furthermore, there are various methods now employed in AI research to elicit and
represent knowledge. These methods could be used in conjunction with arbitration
software and real-world models to build a complete system.
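
As a rough illustration of how a shared model can expose trade-offs to both
sides of a negotiation, the following sketch (hypothetical figures throughout,
loosely in the spirit of the deep-sea mining model mentioned above) shows how a
proposed royalty rate splits the proceeds between the mining operator and the
international authority.

    # All quantities below are invented; only the shared "what happens if" logic matters.
    def mining_outcome(tonnes_per_year, price_per_tonne, operating_cost, royalty_rate):
        revenue = tonnes_per_year * price_per_tonne
        royalty = revenue * royalty_rate
        operator_profit = revenue - operating_cost - royalty
        return royalty, operator_profit

    for rate in (0.02, 0.05, 0.10):
        royalty, profit = mining_outcome(tonnes_per_year=3_000_000, price_per_tonne=25,
                                         operating_cost=60_000_000, royalty_rate=rate)
        print(f"royalty {rate:.0%}: authority ${royalty/1e6:.1f}M, operator ${profit/1e6:.1f}M")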

There are, of course, many shared objectives and methods between these proposed
RCI applications. Graphical representation of concepts developed in the
communications and language area could be used in the resource management area;
groups and organizations of smaller size than nations and transnational
organizations could benefit from versions of negotiation software. It should be
noted that these applications are more feasible and less naive than many of the
areas within the Strategic Computing Initiative. And if these fail to live up to
their expectations, no deaths or accidental wars will result.

The initiation of a responsible computing plan depends on the ability of those
who are interested in such a plan to make contact with each other and to
communicate enthusiastically, credibly, and forcefully their proposals and their
vision.

Of course this proposal gives rise to many questions. Is application-driven
research such as this practical, efficient, or possible? If it isn't practical,
why is that? What obstacles are there and are they insurmountable? Can the
resources and commitment required to pursue these projects successfully be
assembled? If not, why not? Perhaps technology itself is to blame and attempts
to utilize it constructively are in vain. We know, however, that technological
research and development in opposition to human interests will continue unless
we press our case. As scientists and technologists who are concerned about the
future, we must define our vision. The policies of the next generation are being
debated today. The present course is outdated, unfair, governed by inertia, and
potentially suicidal. A balanced, fair, progressive and workable vision must
replace it.

"Not Without Us"
A Personal Protest
Joseph Weizenbaum--CPSR/Boston
Whenever I am in West Germany, I am amazed by the apparent normality of everyday
life. As only an occasional visitor to Germany I see strange things that must by
now appear routine, even natural to Germans. For example, holes in the streets
that are intended to be filled with nuclear land mines or the closeness of every
German citizen to nuclear weapons storage facilities. I notice, in other words,
the Germans' physical, but even more their psychological, proximity to the final
catastrophe.

We in America are no more distant from the catastrophe than the Germans. In case
of war, regardless of whether unintentionally initiated by technology allegedly
designed to avert war, or by so-called statesmen or women who thought it their
duty to push the button, Germans may die ten minutes earlier than we in fortress
America, but we shall all die.

We have no holes in our streets for atomic land mines. We see our missile silos
only now and then, that is, only whenever it pleases someone to show them to us
on television. No matter how passionately our government tries to convince us
that the nasty Soviets are effectively as near to us as to the Europeans, that
they threaten us from, for example, Cuba or Nicaragua, Americans are, on the
whole, unconvinced and therefore untroubled by such efforts. So it isn't
surprising that the average American worries so little about the danger that
confronts us. In fact, it would be astounding if he were even particularly aware
of it. The American experience of war allows an "it can't happen here" attitude
to grow rather than a concrete fear of what appears to be far removed from the
immediate concerns of daily life.

I am aware that it is emotionally impossible for people to live for very long in
the face of immediate threats to their very existence without bringing to bear
psychological mechanisms that serve to exclude those dangers from their
consciousness. But when repression necessitates systematically misdirected
efforts or excludes potentially life-saving behavior, then it is time to replace
it by a deep look into the threat itself.

This time has come for computer professionals. We now have the power to alter
the state of the world fundamentally and in a way conducive to life.

It is a prosaic truth that none of the weapon systems which today threaten
murder on a genocidal scale, and whose design, manufacture and sale condemns
countless people, especially children, to poverty and starvation-that none of
these devices could be developed without the earnest, even enthusiastic,
cooperation of computer professionals. It cannot go on without us! Without us
the arms race, especially the qualitative arms race, could not advance another
step.

Does this plain, simple and obvious fact say anything to us as computer
professionals? I think so:

First those among us who, perhaps without being aware of it, exercise our
talents in the service of death rather than that of life have little right to
curse politicians, statesmen and women for not bringing us peace. Without our
devoted help they could no longer endanger the peoples of our earth. All of us
must therefore consider whether our daily work contributes to the insanity of
further armament or to genuine possibilities for peace.

In this context, artificial intelligence (AI) comes especially to mind. Many of
the technical tasks and problems in this subdiscipline of computer science
stimulate the imagination and creativity of technically oriented workers
particularly strongly. Making a thinking being out of the computer, giving the
computer the ability to understand spoken language, making it possible for the
computer to see-goals like these offer nearly irresistible temptations to those
among us who have not fully sublimated our playful sandbox fantasies or who mean
to satisfy our delusions of omnipotence on the computer stage, i.e., in terms of
computer systems. Such tasks are extraordinarily demanding and interesting.
Robert Oppenheimer called them "sweet." Besides, research projects in these
areas are generously funded. The required moneys usually come out of the coffers
of the military-at least in America.

It is enormously tempting and, especially in artificial intelligence work,
seductively simple, to lose or hide oneself in details, in subproblems and their
subproblems, and so on. The actual problems on which one works-and which are so
generously supported-are disguised and transformed until their representations
are mere fables, harmless, innocent, lovely fairy tales.

For example, a doctoral student characterized his projected dissertation task as
follows:

A child, perhaps six or seven years old, sits in front of a computer display on
which one can see a kitten and a bear-all this in full color of course. The
kitten is playing with a ball. The child speaks to the computer system: "The
bear should say 'thank you' when someone gives him something." The system
responds in a synthetic but nevertheless pleasing voice: "Thank you, I
understand." Then the child again: "Kitty, give your ball to your friend."
Immediately we see the kitten on the computer display throw the ball to the
bear. Then we hear the bear say: "Thank you, my dear kitten."

This is the kernel of what the system, whose development is to constitute the
student's doctoral work, is to accomplish. Seen from a technical point of view,
the system is to understand spoken instructions-that alone is not simple-and
translate them into a computer program which it is then to integrate seamlessly
into its own computational structure. Not at all trivial, and beyond that, quite
touching.
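
Stripped of speech and graphics, the kernel of such a system is the
installation of a new rule into a running program. The toy sketch below is a
purely illustrative Python fragment; the restricted grammar and event names are
invented here and are not the student's actual system.

    import re

    handlers = {}  # (event, actor) -> list of response functions

    def teach(rule):
        """Compile one restricted-English rule into a handler and install it."""
        m = re.match(r"the (\w+) should say '(.+)' when someone gives him something", rule)
        if m is None:
            raise ValueError("rule not understood")
        actor, phrase = m.groups()
        handlers.setdefault(("give", actor), []).append(
            lambda giver: f"{actor}: {phrase}, my dear {giver}")

    def give(giver, receiver, thing):
        print(f"the {giver} gives the {thing} to the {receiver}")
        for respond in handlers.get(("give", receiver), []):
            print(respond(giver))

    teach("the bear should say 'thank you' when someone gives him something")
    give("kitten", "bear", "ball")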

Now a translation to reality:

A fighter pilot is addressed by his pilot's associate system: "Sir, I see an
enemy tank column below. Your orders please." The pilot: "When you see something
like that, don't bother me, destroy the bastards and record the action. That's
all." The system answers: "Yes sir!" and the plane's rockets fly earthward.

This pilot's associate system is one of three weapons systems which are
expressly described, mainly as a problem for artificial intelligence, in the
Strategic Computing Initiative, a new major research and development program of
the American military. Over six hundred million dollars are to be spent on this
program in the next four or five years.

It isn't my intention to assail or revile military systems. I intend this
example from the actual practice of academic artificial intelligence research in
America to illustrate the euphemistic linguistic dissimulation, the effect of
which is to hinder thought and, ultimately, to still conscience.

I don't quite know whether it is especially computer science or its
subdiscipline artificial intelligence that has such an enormous affection for
euphemism. We speak so spectacularly and so readily of computer systems that
understand, that see, decide, make judgments, and so on, without ourselves
recognizing our own superficiality and immeasurable naivete with respect to
these concepts. And, in the process of so speaking, we anesthetize our ability
to evaluate the quality of our work and, what is more important, to identify and
become conscious of its end use.

The student I mentioned above imagines his work to be about computer games for
children, involving perhaps toy kittens, bears and balls. Its actual end use
will likely mean that some day a young man, quite like the student himself, who
has parents and possibly a girl friend, will be set afire by an exploding
missile which was sent his way by a pilot's associate system shaped by the
student's research. The psychological distance between the student's conception
of his work and its actual implications is astronomic. It is precisely this
enormous distance which makes it possible not to know and not to ask if one is
doing sensible work or contributing to the greater efficiency of murderous
devices.

One can't escape this state without asking again and again: "What do I actually
do? What is the final application and use of the products of my work?" and,
ultimately, "Am I content or ashamed to have contributed to this use?"

Once we have abandoned the prettifying of our language, we should begin to speak
realistically and in earnest about our work as computer professionals. We
should, for example, ask questions with respect to attempts to make it possible
for computer systems to see. Progress in this domain will, with absolute
certainty, be used to steer missiles like the cruise and the Pershing ever more
precisely to their targets. And at their targets, mass murder will be committed.

Such statements are often countered with the assertion that the computer is
merely a tool. As such it can be used for good or for evil. In and of itself, it
is value free. Furthermore, scientists and technicians cannot know how the
products of their work will be applied, whether they will find a good or an evil
use. Hence scientists and technicians cannot be held responsible for the final
application of their work.

Many scientists adopt the argument just stated as their own. They say that the
systems on which they work can take men to the moon and bring them back just as
these same systems can guarantee that missiles aimed at Moscow will actually hit
Moscow when fired. They cannot know in advance, they say, which of these two or
still other goals their work will serve in the end. How then can they be held
responsible for whatever consequences their work may entail? So it is, on the
whole, with computer professionals. The doctoral student I mentioned, who wishes
to be able to converse with his computer display, does in fact believe that
future applications of his work will be exclusively in innocent applications
such as children's games. Perhaps his research is not sponsored by the
Pentagon's Strategic Computing Initiative, perhaps he never even heard of the
SCI. How then can he be assigned any responsibility for anti-human use to which
his results might be put ?

Here we come to the essence of the matter: today we know with virtual certainty
that every scientific and technical result will, if at all possible, be put to
use in military systems. The computer, together with the history of its
development, is perhaps the key example. In these circumstances, scientific and
technical workers cannot escape their responsibility to inquire about the end
use of their work. They must then decide, once they know to what end it will be
used, whether or not they would serve these ends with their own hands, that is,
with the psychological distance between themselves and the final consequences of
their work reduced to zero.

I think it important to say that I don't believe the military, in and of itself,
to be an evil. Nor would I assert that a specific technology adopted by the
military is, on that ground alone, an evil. In the present state
of the evolution of the sovereign nation-state, each state needs a military just
as every city needs a fire department. (On the other hand, no one pleads for a
fire station on every corner, and no one wishes for a city fire department that
makes a side business out of committing prophylactic arson in the villages
adjacent to the city.)

But we see our entire world, particularly its universities and science and
engineering facilities, being increasingly and ever more profoundly militarized
every day. "Little" wars burn in almost every part of the earth. (They serve in
part to test the high-tech weapons of the "more advanced nations.") More than
half of all the earth's scientists and engineers work more or less directly in
military institutions or in institutions supported in the main by the military.

Probably the most pandemic mental illness of our time is the almost universally
held belief that the individual is powerless. This (self-fulfilling) delusion
will surely be offered as a counter argument to my thesis. I demand, do I not,
that a whole profession refuse to participate in the murderous insanity of our
time. "That cannot be effective," I can already hear it said. "Yes, if actually
no one worked on such things . . . but that is plainly impossible. After all, if
I don't do it, someone else will."

First, and on the most elementary level, l must say that the rule "If I don't do
it, someone else will" cannot serve as a basis of moral behavior. Every crime
imaginable can be justified on its basis. For example, "If I don't steal the
sleeping drunk's money, someone else will."

But it is not at all trivial to ask after the meaning of effectiveness in the
present context. Surely, effectiveness is not a binary matter, an either/or
matter. If what I say here were to induce a strike on the part of all scientists
with respect to weapons work, that would have to be counted as effective. But
there are many much more modest degrees of effectiveness toward which I aim.

I think it was George Orwell who once wrote, "The highest duty of intellectuals
in these times is to speak the simplest truths in the simplest possible words."
For me that means first of all the duty to articulate the absurdity of our world
in my actions, my writings and with my voice. I hope thereby to stir my
students, my colleagues, everyone to whom I can speak directly. I hope thereby
to encourage those who have already begun to think similarly, and to be
encouraged by them, and possibly rouse all others I can reach out of their
slumber. Courage, like fear, is catching! Even the most modest success in such
attempts has also to count as effectiveness. Beyond that, in speaking as I do, I
put what I here discuss on the public agenda and contribute to its legitimation.
These are modest goals that can surely be reached. But each of us must believe
"it cannot be done without me.

Loose Coupling
Does It Make the SDI Software Trustworthy?
Severo M. Ornstein-CPSR Chairman

No other piece of military equipment is ever allowed into the field without
extensive testing. Tanks. Airplanes. Even boots. We've all read about the horror
stories when a conventional weapon gets into the inventory with inadequate
testing and we've all heard enough about weapons that don't work right after
they're deployed. But a nuclear warhead is the most complex weapon we've got. We
have to test them as they'll be used, before and after deployment.... Would you
fly in an airplane that had only been tested by a computer simulation?

Ed Badolato, Deputy Assistant Secretary of Energy for Security Affairs-speaking
against a Comprehensive Nuclear Test Ban.

With fewer tests . . .we simply can't maintain the reliability of our deterrent
forces. And without reliability, there is no credibility.

Frank J. Gaffney Jr., Deputy Assistant Secretary of Defense for Nuclear Forces
and Arms Control Policy

In its report on Battle Management software for the Strategic Defense Initiative
(SDI), the Eastport Group has argued that a distributed architecture is critical
to a successful system. This architecture would emphasize loosely coupled
subsystems as opposed to a more traditional tightly coordinated and centralized
system. This suggestion has been hailed by many defenders of the SDI as a new
and clever idea that overcomes the software objections raised by critics of the
SDI.

The purpose of this article is to explore these claims. First of all, how novel
is the idea of loosely coupled distributed systems? Is it really a new idea or
simply new emphasis on a long existing trend in the development of robust
software systems? Second, how much does it actually accomplish? Can it provide
the enhanced reliability that proponents claim and thus remove the objections of
critics? Does the task, by its nature, demand coordination at a level that
defeats the potential reliability advantages of loose coupling? What has the
experience been with existing systems of this sort? The answers to these and
related questions throw some light on the continuing debate about the
practicality of trying to build a trustworthy strategic defense system.

At the outset it is important to distinguish between two related but distinct
concepts: partitioning and distribution. Partitioning is a logical, not a
physical process, and describes the act of breaking down a problem or a system
into some kind of logical units-generally in such a way as to minimize the
communication required between the parts. Informally a "logical unit" can
be defined as that collection of material that "hangs together"-i.e., has
substantial local interaction but relatively less interaction
with other parts of the system. Systems are partitioned for a whole host of
reasons, most of which serve the general purpose of rendering a complex problem
more comprehensible and tractable. Localization of communication can also have
specific advantages in both software (e.g., reduced swapping) and hardware
(e.g., reduced wire lengths).

The act of partitioning forces the designer to think carefully about the large
structure of a problem and focuses attention on the interactions required
between the larger functional pieces. The more the parts of a system can be
isolated from one another, the less likely that failures in one part of a system
will have unanticipated effects in another part.

While partitioning is fundamentally a logical process, distribution is a
physical one. Thus a distributed system is a system whose physical components
are placed in different locations. The components may be nearly identical in
function or they may perform entirely different functions. In the latter case,
someone has previously partitioned the system and implanted the various pieces
in different machines.
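
A small illustration of the partitioning idea may help here (a sketch only; the
module and field names are invented, loosely echoing the sensor-and-track
example discussed later). The two units below interact only through a
deliberately narrow interface, so an internal change or failure in one cannot
reach into the other except through that interface.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Track:
        track_id: int
        position: tuple  # the only data that crosses the module boundary
        velocity: tuple

    class SensorModule:
        """Owns raw measurements; exports only consolidated Track records."""
        def __init__(self):
            self._raw = []          # internal detail, never visible outside

        def observe(self, measurement):
            self._raw.append(measurement)

        def export_tracks(self):
            # A real system would cluster and filter; here we simply pass through.
            return [Track(i, m["pos"], m["vel"]) for i, m in enumerate(self._raw)]

    class TrackerModule:
        """Consumes Track records; knows nothing about the sensor internals."""
        def predict_next(self, track, dt=1.0):
            return tuple(p + v * dt for p, v in zip(track.position, track.velocity))

    sensor = SensorModule()
    sensor.observe({"pos": (0.0, 0.0), "vel": (1.0, 2.0)})
    tracker = TrackerModule()
    for t in sensor.export_tracks():
        print(t.track_id, tracker.predict_next(t))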

Often there is a strong tendency to break a system up logically and physically
along the same boundaries, thus producing specialized machine functions.
However, this duplication is by no means necessary or universally practiced. In
any complex system, partitioning is an early step in system design. Figuring out
just what pieces constitute logical units is not always easy. Often there are
complicating factors that tend to suggest a partitioning other than one that
best isolates and insulates modules from one another. Such elements include
preconceptions on the part of the designers about the structure of the problem
(often based on superficial features that don't reflect the actual underlying
structure), pre-existing structures that must be accommodated, physical
constraints due to location of system elements, and last but not least, cost.
All of these tend to confuse and complicate the job of partitioning.

Beyond these problems, some systems don't have nicely separable logical units
since some jobs demand so much interaction that there simply are no identifiable
isolatable chunks. Some parts of the SDI are like that. For instance sensor data
is required for track formation. Limiting sensor data to a single battle station
would deny useful information to other stations. But providing common data to
multiple stations renders them all vulnerable to errors either in the data
itself or in its transmission.

What do the terms "loosely coupled" and "loose coordination" actually mean?
Systems can be broken up in various ways, some of which allocate a good deal of
autonomy to subsystems. The term loose coupling refers to such a structure.

There are a variety of conflicts that arise in designing the SDI along these
lines. For example there is a conflict between the desire to have the pieces be
as autonomous as possible and the need for supervisory control. A hierarchical
structure is suggested by the Eastport Group to resolve such conflicts, but a
hierarchical structure introduces common failure points. In addition to a need
for supervisory control, there are needs for communication not only for
efficiency of performance but simply to accomplish the necessary coordination of
activities. The more independent the battle groups become, the more complex they
become as each increasingly takes on the aspect of an entire mini-SDI system.

Is the idea of gaining fault tolerance through loose coupling in a distributed
system a new idea? Certainly not to anyone familiar with computer system design
over the past twenty-five years. One of the reasons that computer systems have
caused us so much trouble is that they allow us to create great complexity with
comparative ease. Managing that complexity is the essence of software
engineering. People concerned about this problem long ago recognized that when
systems become so complex that the rules governing their behavior cannot easily
be grasped, then a good way to gain some control over matters is to partition
the problem into smaller pieces, each of which is by itself more tractable. By
now modular programming has become a virtual commandment. Furthermore,
distributed computer systems, coupled at every conceivable level of tightness,
have been explored for over two decades now. The descriptions of these systems
and their benefits fill the computer literature and there are regular
conferences dedicated to the topic. The advantages speak for themselves and the
Eastport Group has done an excellent job of explaining how they apply to the SDI
problem.

However, the report flies in the face of fact when it states that "techniques
for constructing systems that are usefully reliable in spite of imperfections
are not in the current mainstream of software engineering research." They most
certainly are and have been for many years. System programmers today routinely
practice fault tolerant techniques, without which most systems would immediately
crumble. To argue that these are uncultivated techniques is sheer nonsense.
There is an entire subdiscipline of "fault tolerance," replete with professional
committees, regular conferences, etc. Techniques have been developed for dealing
with many of the more common causes of trouble in computer systems, both
hardware and software bugs.
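
One of the routine idioms referred to here is redundancy with voting: run the
same computation on several replicas and accept the majority answer, so that a
single faulty replica is outvoted. The sketch below is a generic textbook
illustration, not a technique attributed to any particular system in this
article.

    from collections import Counter

    def voted(replicas, *args):
        """Run each replica and return the majority answer, if one exists."""
        results = [f(*args) for f in replicas]
        answer, count = Counter(results).most_common(1)[0]
        if count * 2 <= len(results):
            raise RuntimeError("no majority; the fault was not masked")
        return answer

    square     = lambda x: x * x
    square_too = lambda x: x ** 2
    faulty     = lambda x: x * x + 1   # a simulated bug in one replica

    print(voted([square, square_too, faulty], 7))   # 49; the faulty replica is outvoted

Voting of this kind masks only faults whose effects differ between replicas,
which is precisely the independence question taken up below.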

The real question is whether any technique is capable of eliminating the kind of
uncertainties in large software systems that stem from design errors and
misconceptions. Can one, as the Eastport Report suggests, build a trustworthy
system from programs containing errors? The answer is that it depends on what
kinds of errors occur. If they lie within the bounds of those that have been
thought of and allowed for, then they will be successfully tolerated. If not,
the consequences will be, as usual, unpredictable.

The report argues that decentralized systems gain robustness through diversity.
It argues that "errors or vulnerabilities in one system are not likely to be
duplicated in other systems." Such a statement must, however, be predicated upon
the independence of errors. In fact, studies have indicated that except for
minor errors, various programmers assigned to the same job often make the same
errors and suffer from the same oversights. There is good reason for this.
People tend to make similar mistakes because they often share the same wrong
information, misconceptions, etc.
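
A back-of-the-envelope calculation, with assumed numbers, shows why this
matters. If three independently written versions each fail on one input in a
hundred, and those failures really are independent, all three fail together
only about once in a million inputs; if most of the failure risk comes from a
misconception shared by all three, redundancy buys almost nothing.

    p_fail = 0.01      # assumed chance that any one version fails on a given input
    p_common = 0.008   # assumed portion of that risk shared by all versions
    p_indep = p_fail - p_common

    independent_case = p_fail ** 3
    correlated_case = p_common + (1 - p_common) * p_indep ** 3

    print(f"all three fail, independent errors:   {independent_case:.1e}")  # about 1e-06
    print(f"all three fail, shared misconception: {correlated_case:.1e}")   # about 8e-03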

In dealing with a problem as remote from our experience as an assault by large
numbers of ballistic missiles, a body of "conventional wisdom" gradually
evolves. This provides a background against which design decisions are made.
Errors in conventional wisdom are nearly impossible to root out until actual
experience points them out to us. Such errors, oversights, and misconceptions
find their way into system specifications and thus show up as problems in all
versions of the programs that carefully follow the specifications. Furthermore,
few specifications are truly complete and freestanding. Instead they rely on the
common sense of the programmers to fill in gaps in a reasonable way. Here again,
errors in the conventional wisdom will lead to common mistakes in the programs.

The Eastport Group Report emphasizes over and over again the need for "an
architecture that allows one to infer the performance of a full scale deployment
from a much smaller scale test." Unfortunately it is almost always the case that
when one first puts together independently tested pieces of a system, previously
unsuspected interactions are uncovered. It is also true that changes to a system
invariably produce effects and cause troubles in areas of the system that at
first glance seem totally unrelated to the part that was changed. People are
capable of looking at only a limited number of things at once. Experienced
people accept these facts as a matter of course. That the Eastport Group's
report does not fully acknowledge them seems an egregious omission.

An instructive example comes from experience with the communications network
known as the "Arpanet." This DoD network consists of small communications
processors (nodes) at scattered sites around the United States (and a few
abroad). It was the first major computer network ever to be constructed, and
very early on the designers recognized the need to insulate the nodes from one
another's misbehavior. The nodes were identical small computers with identical
programs all participating in the overall system task of passing messages from
source to destination nodes through intermediate nodes in the network, finding
and avoiding congested routes, broken links, etc. It was a widely distributed
system with loose coupling and thus much the sort of general design recommended
by the Eastport Report. Indeed the Arpanet continues to demonstrate the
advantages of a distributed architecture. But it also demonstrates that such
architectures are not invulnerable.

The designers, recognizing the need to protect the sites from one another, spent
an extraordinary amount of effort devising methods of communication that would
provide such protection. As in all such systems, compromises were required
between the needs for efficiency and reliability. As the Eastport Report points
out, added communication can enhance performance but degrades reliability by
complicating matters. Some communication is essential if the various pieces of a
system are to cooperate at all. For instance, exchange of some control
information was necessary in the Arpanet in order to provide sensible routing
and flow control to avoid congestion. It was clear that these were places where
erroneous information could potentially cause trouble, and so the mechanisms
were kept as simple as possible and were carefully protected. The success of
this effort was quite astonishing. During the early operating history, numerous
unexpected failures occurred at individual sites, but the network as a whole
continued to operate undamaged, passing messages successfully around broken
nodes and links, avoiding bad segments during outages and automatically
reincorporating them when they were repaired and resumed functioning. In fact
the robustness of the network as a whole was much touted and admired.

Failure of the Arpanet

Then on October 27, 1980, a hardware failure occurred at one of the sites and
interacted badly with a subtle oversight in the software in such a way as to
produce a lock-up of the entire network. Within minutes the flow of messages
came to an abrupt halt after ten years of successful operation. Details of this
infamous problem are available in the literature (Software Engineering Notes,
vol. 8, no. 5) but are less important than the larger lesson that the experience
teaches. Even with prodigious care and attention and years of successful
operation in actual use, there is no way to be sure that tomorrow some minor
unanticipated circumstance will not produce catastrophic consequences. In fact
the failure that occurred, while perverse in its subtlety, was only overlooked,
not truly extraordinary. It could have happened any time. It wasn't that any
unusual network event or overload had happened-nothing extraordinary in any
overt sense; it was just unusual in that it hadn't previously happened. And
devastating in that its potential consequences had not been foreseen. Unlike the
Arpanet, the SDI is a system that would not be exercised in actual operation
every day.
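
The published post-mortems trace the lock-up to a wrap-around comparison of
routing-update sequence numbers: an update was taken as newer if its number lay
within half the counter range ahead of the stored one, so corrupted copies of a
single update could each appear newer than another, and the nodes kept accepting
and re-flooding all of them. The fragment below is a reconstruction for
illustration, not the actual Arpanet code.

    MOD = 64  # the Arpanet used a small, 6-bit sequence-number space

    def newer(a, b):
        """True if sequence number a should supersede b (wrap-around comparison)."""
        return 0 < (a - b) % MOD <= MOD // 2

    a, b, c = 8, 40, 44   # corrupted copies of one update, as reported in the post-mortems
    print(newer(c, b), newer(b, a), newer(a, c))   # True True True: a cycle with no stable winner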

David Parnas makes the following argument (my amendments shown in brackets) in a
paper he has written (shortly to be published in Abacus) in response to the
Eastport Report:

"The essence of the Eastport Report's argument is that the SDI could be
trustworthy if each battle station functioned autonomously, i.e., without
depending on help from others. Three claims can be made for such a design:

(1) It decomposes an excessively large problem to a set of smaller ones, each
one of which can be built and tested;

(2) Because the battle stations would be autonomous, a failure of some would
allow the others to continue to function;

(3) Because of the independence one could infer the behavior of the whole system
from tests on individual battle stations.

"The first claim is appealing and reminiscent of arguments made in the 60's and
70's about modular programming. Unfortunately, experience has shown that modular
programming is an effective technique for making errors easier to correct, not
for eliminating errors. Modular programming does not solve the problems raised
by critics. None of the arguments presented [by critics] have been based on an
assumption of tight coupling; some of the arguments do assume that there will be
data passed from one satellite to another. The Eastport Report, like earlier
reports, supports that assumption.

"The Eastport Group's argument is based on four unstated assumptions:

(1) Battle stations do not need data from other satellites to perform their
basic functions;

(2) An individual battle station is a small software project that will not run
into the software difficulties that exist for the overall system;

(3) The only interaction between the stations is by explicit communication;

(4) A collection of communicating systems does not constitute a single system.

"All of these assumptions are false.

(1) The data from other satellites is essential for finding tracks and
discriminating between warheads and decoys in the presence of noise.

(2) For true autonomy, each battle station has to perform all of the functions
of the whole system. [Thus although a limited class of problems is eliminated,
the original complaint still applies to each station. Each is impossible to test
and hence unlikely to work in actual operating conditions. There is no way to
guarantee that failure modes common to all battle stations have been eliminated
and consequently the overall system would not be trustworthy.]

(3) Battle stations interact through weapons and sensors as well as through
their shared targets. If we got a single station working perfectly in isolation,
it might fail completely when operating near others. The failure of one station
might cause others to fail because of overload. Only a real battle would give us
confidence that such interactions would not occur.

(4) A collection of communicating programs is mathematically equivalent to a
single program. In practice, distribution makes the problem harder, not easier.

"Restricting the communication between the satellites does not solve the
problem. There is still no way to know the effectiveness of the system and it
would not be trusted. Further, the restrictions on communication are likely to
reduce the effectiveness of the system."

Summary

Modular systems are not magic. They don't of themselves solve problems. They
merely provide a framework in which human designers and programmers have a
better opportunity to see and understand interactions between the pieces of a
system. Narrowing and isolating these interactions makes it easier to understand
the possible ways that one piece might damage another and this, in turn, allows
one to institute protective mechanisms. Careful partitioning of a problem into
suitable architecture thus promotes better understanding and that in turn
benefits overall system reliability. But it is only a help, not a complete
solution to the problem. Subtle and unsuspected interactions continue to plague
the designers of even the most carefully partitioned systems.

The concerns that have been raised about the trustworthiness of software for an
SDI system have not been adequately addressed by the Eastport Group's report.
Nor can they be. These concerns bedevil all computer systems and have their
roots in the fallibility of human beings, not computers. Distributed systems do
make some problems easier. By constraining the interactions between the pieces
they can be understood somewhat better than other systems and this can help to
reduce the number of potential pitfalls. But so can the employment of more
sophisticated, more clever, and more disciplined programmers. In fact a whole
host of disciplines and devices can be brought to bear, all of which help to
reduce the problem of overlooked flaws in the design and programming of systems.
These techniques have been explored and utilized to the hilt over the past
twenty-five years, since software reliability in critical systems is hardly a
new problem.

Some of the flaws that remain in systems prove relatively harmless when they
finally surface. Or they may be rendered so by fault-tolerant mechanisms. But
some of those that remain, even tiny, seemingly insignificant ones, will find
their way around our cleverest safeguards and will prove absolutely fatal when
they emerge. One can only hope that despite the present spate of technological
hubris, the safety of the nation will never actually be entrusted to a system
such as the SDI.

This article may be ordered as a separate paper, "Loose Coupling: Does It Make
the SDI Software Trustworthy?" by CPSR Chairman Severo M. Ornstein. Please send
$3.00 to cover postage, printing and handling to the National Office of CPSR,
P.O. Box 717, Palo Alto, CA 94301

Eastport Group Report
Recommends "Hierarchic "
SDI Architecture

On pages 7 and 8 of the Eastport Group Report is a recommendation for a
"hierarchic" or "tree structure" architecture for the SDI, represented by the
diagrams to the right. The report says that "orbiting assets" would be organized
into battle groups; membership in the groups "would change dynamically according
to the patterns of the orbits;" and "high-level command and control decisions
defer to the root" of the architecture. Even such a modest level of
interdependence between components introduces the potential for the propagation
of errors, malfunctions, and design flaws.


Privacy in the Computer Age
The Role of Computers
Ronni Rosenberg-CPSR/Boston

This is the second part of a three-part article. The complete paper may be
ordered from the CPSR National Office for $6.00 to cover printing, postage and
handling.

There are greater risks to privacy when information is centralized and
consolidated.... But we may also improve on rather than limit the sense of
privacy which individuals can have in our society. First, public concern over
the issue is forcing public and private organizations to become more self-
conscious about the role of privacy and the role of information in society . . .
Second, the computer itself, ironically, offers a great deal more opportunity to
protect information at a systems level if you mean to do it; access can be so
guarded that the machine is almost inaccessible to unauthorized persons, and
guarded public audit systems can provide restraints on how even those with
access to the system make use of it. Here is the dilemma. The computer is
threatening our levels of privacy as never before, but it also offers more
protection for privacy than we have had heretofore. As always, the machines are
neutral. The answer depends on what man will do with them.1

Much of what has been said in the previous article-about conflicts between the
individual's right to control data and organizations' needs for data-applies
equally well to manual methods of data collection as to the development of
computerized databanks. Large amounts of information were collected about
individuals long before the first computers were put into use. Personal privacy
is threatened by people, not by computers, in the same sense as people are
killed by people, not by guns. However, just as guns play a special role in the
act of people killing other people, computers play a special role in the threat
to personal privacy. This special role of computers has focused attention on old
questions and concerns about privacy and on the practices of government and
industry relating to files about individuals.

The main problems of the computers and privacy issue are independent of the
existence of computers and computerized databanks, but computers are a unique
part of the problem insofar as they make recordkeeping and record sharing
qualitatively simpler and less expensive. Computers create possibilities that
did not exist or were not cost-effective before, and the existence of technical
possibilities often is an irresistible temptation.

We are building hundreds of data banks with communications networks uniting
them; they are outgrowths of the way in which we have always kept records,
files, dossiers, and information; but in terms of the quantity of the
information, its sensitiveness, and the speed with which it circulates, through
the evaluative system of the society today, these systems present a problem of
new dimensions.2

Increased Data Collection

There are several specific ways in which computers appear to have changed the
collection of personal information and, as a result, increased the problem of
personal privacy. Foremost of these changes is the collection of much larger
quantities of data. In a study conducted by the Project on Computer Databanks of
the National Academy of Sciences, computer system documents were examined from
more than 500 organizations "whose computerization activities had generated
public attention," and site visits were made to 55 of these organizations.3
Among the conclusions of the study group is that there are more records being
maintained about more people now than in the pre-computer era, and the data is
more heavily used now.

The extent to which this increase is due to computers is hard to determine. The
authors of the NAS report note that there was a heavy increase in the use of
data about individuals (i.e., in the transaction rate per person) between 1940
and 1955, before computers were generally available. "The timing of these
increases contradicts the notion that the rise in transactions can be explained
by the availability of computer processing itself."4 They speculate that the
increase is due primarily to the tremendous expansion of government programs
like social security, accompanied by the carrot of more federal funding that is
dangled before local governments with the prerequisite that those local
governments collect more personal data. However, elsewhere in the report, they
conclude that computers have enabled the creation of larger databases and
information systems than would have been possible through manual procedures.
And, in another publication, one of the report's authors wrote: "I have observed
again and again that once an agency installs its lovely big computer, and has
programmers to be put to use and machine-time to use, it tends to ask for much
more information from the individual than it sought before."5

Similarly, another researcher notes that ". . . a trend towards increased record
keeping existed long before computers came along. For example, the number of
items on the U.S. Census inquiry expanded from six to thirty-one during the
period from 1790 to 1850."6 She goes on to suggest that the more recent increase
is tied rather directly to computers: "Still, there are limits to how much
information can be handled by humans.... If the computer had not come along when
it did, these limits might have been reached.... Without the computer,
recordkeeping activities might never have greatly accelerated."7

The conservative consensus of researchers seems to be that the extensive
development of computer and communications technology has created a "tendency
toward more extensive manipulation and analysis of recorded data, which, in
turn, has required the collection of more and more data."8 What is new in the
gathering of personal information in computers is the scale of the operation:
"The important danger about invasion of privacy by electronic means is that it
is a mass invasion."9

Matching

Another major distinction between computerized recordkeeping and manual
recordkeeping is the ability of the computer to combine data usefully from
different sources, yielding qualitatively different information. For instance,
data about welfare recipients can be combined with IRS data, with the goal of
identifying people who earn too much money to
qualify for the welfare benefits they receive. Combining data in this way is
referred to as data matching. By associating two seemingly unrelated pieces of
information, matching programs can output more sensitive data than their input.

For example, in 1983, the IRS tested a matching system for tracking down people
who fail to pay their taxes. They contracted with a commercial brokerage firm
that provides marketing lists to obtain a computerized list of the estimated
incomes of two million American households. To compile such a list, names and
addresses are first taken from telephone books and input into a computer. The
computer is instructed to assign each household to the correct census tract.
From the information published by the Census Bureau, conclusions can be made
about each household, including median income. This information can then be
matched with computerized data from each state's Department of Motor Vehicles on
the model and year of automobiles owned by the people at each address. If the
automobile is an expensive one, the estimated income is adjusted upward; if it
is a cheap one, the estimated income is reduced.
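
In outline, the estimation pipeline described above is a chain of joins across
previously separate files. The following sketch (every table and figure in it is
invented for illustration) shows how little machinery is required once the files
are in machine-readable form.

    # A toy version of the income-estimation pipeline: address -> census tract ->
    # median income, then adjusted from motor-vehicle records. Data is invented.
    tract_of_address = {"12 Elm St": "tract-301", "9 Oak Ave": "tract-302"}
    median_income_of_tract = {"tract-301": 28_000, "tract-302": 41_000}
    dmv_vehicle = {"12 Elm St": ("luxury", 1985), "9 Oak Ave": ("economy", 1979)}

    def estimated_income(address):
        estimate = median_income_of_tract[tract_of_address[address]]
        model_class, _year = dmv_vehicle.get(address, ("unknown", None))
        if model_class == "luxury":
            estimate *= 1.25      # expensive car: adjust the estimate upward
        elif model_class == "economy":
            estimate *= 0.85      # cheap car: adjust it downward
        return round(estimate)

    for addr in tract_of_address:
        print(addr, estimated_income(addr))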

Objections to such programs come from many sources. The head of the Direct
Marketing Association, which objects to matching programs on ethical grounds,
pointed out some of the dangers of the program:

Strangely enough, a mailing list is essentially anonymous. A company rents a
computer tape, prepares one set of labels and makes a mailing. That's it.... But
if the IRS starts with a commercial mailing list, then adds Census data, then
cross references it with other data, then they are taking something that is
essentially anonymous in the commercial world and turning it into individually
identifiable information, using it in a way the individual never imagined.10

The ACLU's Washington office Executive Director warns that matching programs
present a serious threat to civil liberties by circumventing constitutional
limitations on intrusive information gathering:

To understand the impact of computer matching on civil liberties, it is
necessary to grasp the profound difference between a computer-match
investigation and a traditional law enforcement investigation. A traditional
investigation is triggered by some evidence that the person targeted has engaged
in wrongdoing.... The American constitutional system generally bars the
government from conducting intrusive investigations of persons it does not
suspect of wrongdoing. A computer-match is not bound by these limitations. It is
directed not at an individual, but at an entire category of persons, not because
any of them is suspected of misconduct, but because the category is of interest
to the government. What makes computer-matching so fundamentally different from
a traditional investigation is that its purpose is to generate the evidence of
wrongdoing that usually is required before a traditional investigation can be
initiated.... All of these developments have an impact on the lives of real
people. Examples from the files of the American Civil Liberties Union reveal the
Kafkaesque problems that may result from the unregulated use of personal
information:

"In Massachusetts, the Medicaid benefits of an elderly woman in a nursing home
were ordered terminated after a computer-match of welfare rolls and bank
accounts in the state revealed that she had an account above the Medicaid assets
limit. The termination order was improper because the woman's bank account
contained a certificate of deposit in trust for a local funeral director, to be
used for her funeral expenses, an exempt resource under federal regulations. The
computer-match did not reveal this fact."11

The ability of the computer to collect and transfer large amounts of data for a
negligible cost, and the lure of cutting down abuse of government programs, made
matching programs hard to resist when first introduced by President Carter, and
their development subsequently has been accelerated by President Reagan. As
discussed in the next section, matching has come under fire on legal grounds.
For instance, the Privacy Act of 1974 specifies that federal agencies cannot
release personal data to another agency without the written permission of the
person who provided the data, except for "routine" purposes. By stretching the
definition of "routine" purposes, matching has grown into hundreds of ongoing
programs, making it impossible for anyone who gives data to one federal agency
to know where that data might be used. Moreover, matching has in some cases
turned into front-end screening (e.g., male students applying for federally
funded student loans must first provide evidence of their draft registration,
and matching programs are used to verify the evidence). In this way, matching is
coming close to creating the general-purpose national system which was expressly
prohibited by Congress in
the early 1970s: "Today, unregulated computer-matching at all levels of
government has created a de facto national databank."12

Among the other special attributes of computerized recordkeeping is the fact
that most people still assume that data output by a computer is more correct
than data generated by manual means. Some people trust computers more than they
trust other people. "When printed out on computer output sheets as the result of
an inquiry, [data] looks quite 'official' and hence is taken as true."13

Computer professionals know that complex computer systems are extremely error-
prone and unpredictable, but it is already observable that "as information
accumulates, the contents of an individual's computerized dossier appear more
and more impressive and impart a heightened sense of reliability to the user,
which, coupled with the myth of computer infallibility, will make it less likely
that the user will try to verify the recorded data. This will be true despite
the 'softness' or 'imprecision' of much of the data."14

More public education about the limits of computer reliability would alleviate
this problem, but it is worth noting that more than a decade has passed since
the Kentucky Court of Appeals sounded a warning about the danger of computers
obscuring human responsibility for human acts. The case involved a man who sued
the Ford Motor Credit Company, which had repossessed his car after a computer
error caused them to conclude he was not making his car payments. Ford argued
that they were not responsible for decisions made on the basis of faulty data
given by a computer. The court thought otherwise:

Ford explains that this whole incident occurred because of a mistake by a
computer. Men feed data to a computer and men interpret the answer the computer
spews forth. In this computerized age, the law must require that men in the use
of computerized data regard those with whom they are dealing as more important
than a perforation on a card. Trust in the infallibility of a computer is hardly
a defense.15

Inaccurate Data

Overdependence on computer data is a special problem when that data is
inaccurate. The NAS study group concluded that computerization of databanks
could lead to the production of more up-to-date records, containing fewer
omissions. I was unable to locate any references to comparative studies of data
quality in manual collection systems and computerized collection systems.
Although it cannot be determined here whether computerized databanks are
"better" than manual ones, we can determine that the quality of data in even the
most modern of computerized databanks is not good.

Consider the FBI's National Crime Information Center (NCIC), which two
researchers described as "one of the finer examples of the computer as a
beneficial device, [giving] a patrolman in his cruiser the opportunity to check
stolen car records before he approaches a suspected vehicle."16 The NCIC contains criminal history records as well as information about outstanding warrants. A study of record quality in the NCIC and other federal and state
information systems was conducted by the Office of Technology Assessment from
1979 to 1982, the first systematic, independent study of record quality in
national information systems. The study revealed that less than half of the
records in the NCIC's criminal history files were "complete, accurate, and
unambiguous." Translated to the population of 360,000 annual disseminations (at
the time of the study), the results indicate that 195,000 disseminations had a
significant quality problem. Other results from the study include:

In the FBI's largest criminal history file, 25.7% of the records were complete,
accurate, and unambiguous.

"In excess of 14,000 Americans are at risk of being falsely detained and
perhaps arrested, because of invalid warrants in [the FBI Wanted Persons File]."

In the computerized criminal history files of three states, the percentage of
complete, accurate, and unambiguous files ranged from 49.4% to 12.2%.

The FBI's files include a recommendation that users verify the data with the
local originating agencies, but "this verification by user agencies is rarely
undertaken. Numerous interviews of criminal justice decision makers established
that the records are taken 'as is' and used directly in the decision making
process."17 In 1980, when the OTA questioned the 50 states about their control
over record quality, 80% fact that such audits are legally required.18

Because computer-generated data is, more often than not, assumed to be correct,
the media is full of horror stories of people detained, even jailed, on the
basis of computerized criminal history data that was thought to identify them as
wanted criminals. Incarceration has occurred even when the computer's
description of the person does not match the real person in numerous categories.

Inaccuracies in data files can crop up in a variety of ways. One of the least
obvious involves the insertion of hearsay in files of otherwise factual data.
Criminal justice agencies are not the only organizations that include in their
files unverified opinions. Credit agencies' files contain some information collected by investigators, for instance through interviews. One
of the largest credit agencies is TRW, which maintains credit files about tens
of millions of Americans. As of 1980, TRW was selling 35 million credit reports
a year. They were also getting 350,000 formal complaints from individual
subjects, leading to record changes in 100,000 cases. Nevertheless, "as a matter
of law, TRW argues, it has no obligation to determine the accuracy of the
information it receives from businessmen about the bill-paying habits of
individual consumers."19

Incomplete Data

A problem with computer databanks that is closely related to inaccurate data is
incomplete data. In its most obvious form, "incomplete data" refers to files
with missing data fields, exemplified by criminal history files that contain
data about arrests but not about subsequent dispositions (convictions or acquittals). The American tradition of "innocent until proven guilty"
notwithstanding, arrest-only records are disseminated freely by local
authorities (though federal agencies are currently prohibited from doing so) and
have served as the basis for treatment usually reserved for convicted criminals,
such as refusal of employment.

The problem of incomplete data also refers to files without critical contextual
information, exemplified by records of arrest and conviction that do not
distinguish between a criminal and a civil rights demonstrator or conscientious
objector. Although modern computer systems certainly can handle the free-form,
textual information that could supply the appropriate context, in practice only
context-free facts are included in databanks. The resultant loss of context in a
criminal history record can result in identical treatment for someone convicted
of a serious criminal offense and someone whose only crime is the valid exercise
of constitutionally protected rights; for instance, both people may be denied
jobs because of their criminal records. The ACLU's files include the case of a
New York man who was denied a job "because a computerized credit report showed
that when he was thirteen years old in Massachusetts he temporarily had been
placed in a mental institution. What the files did not show was that he was an
orphan and the institution was the only home the state authorities could find
for him for a period of four years."20 In many cases, the all-important
contextual information may well remain in manual records, but the availability
of computerized records dissuades people from using the manual ones.

Aging of Data

A final problem that is of special concern with computerized databanks is the
tendency of databank operators to retain and use data far beyond its useful or
appropriate life. In some cases, it is less expensive to retain data in a
computer system than to determine which data is too old and to discard it. As
data becomes immobile in time, information about one's past becomes easier to
find and is ever more precisely described, potentially limiting one's future
possibilities.

Computers make it technically practical to retain records indefinitely. Without procedures for gracefully aging data, present-day computers are rendering obsolete the possibility of overcoming mistakes and starting over.
Society does not benefit if the ability of computers to maintain criminal
records indefinitely creates a permanent class of people who are unemployable,
and hence statistically prone to crime. Moreover, a loss of belief in the
possibility of starting over could have a chilling effect on freedom of
expression: "The knowledge that one cannot discard one's past, that advancement
in society depends heavily on a good record, will create considerable pressure
for conformist activities."21

Potential for Positive Effects

In this section, I have emphasized the negative effects of computerized
databanks, not because there are no positive effects, but because the negative
effects are more evident than the positive ones in today's implementations. This
is so because of a combination of technical designs that paid little attention
to privacy issues and regulations that are incomplete and ineffective.

There is nothing inherent in computer technology that causes the problems
mentioned above, and they can be as easily solved as they can be fueled. Matching programs can be halted or more carefully regulated, to ensure that individuals
know where data about themselves is maintained and consent to its use. Public
education can dissolve the myth of computer infallibility so fewer people act
unquestioningly on the basis of computer data. Administrative procedures for
verifying data can be strengthened, and procedures for removing outdated
information can be instituted. All these are nontechnological solutions for
problems that are nurtured, though not created, by technology.

Computer technology actually has the potential to increase personal privacy.
Even now, the debate over computer databanks has served to increase public
awareness of the extent of recordkeeping and of its problems. In addition,
storing data in computers rather than on pieces of paper in file cabinets allows
us to create technical mechanisms to provide far more protection for sensitive
information than was possible in the era of written records and physical
manipulation.
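
As a purely illustrative sketch of what such technical mechanisms might look like, the short Python example below attaches a per-field access rule and an audit trail to a single hypothetical record. The record, roles, and function names are invented for this illustration and do not describe any existing system.

    from datetime import datetime, timezone

    # One hypothetical record with a sensitive field, and a rule stating
    # which roles may read which fields.
    record = {"name": "J. Doe", "diagnosis": "confidential", "zip": "02139"}
    field_rules = {
        "name": {"clerical", "medical"},
        "zip": {"clerical", "medical"},
        "diagnosis": {"medical"},
    }

    audit_log = []  # every access attempt, allowed or not, is recorded

    def read_field(user, role, field):
        """Release a field only to an authorized role, and log the attempt."""
        allowed = role in field_rules.get(field, set())
        audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "field": field,
            "allowed": allowed,
        })
        return record[field] if allowed else None

    read_field("clerk01", "clerical", "diagnosis")   # refused, but recorded
    read_field("dr_adams", "medical", "diagnosis")   # released, and recorded
    print(audit_log)

A paper file in a cabinet can enforce neither restriction: whoever holds the folder sees everything, and no trace of the reading remains.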

Notes

1. Alan F. Westin, "Computers and the Protection of Privacy," Technology Review, 71, 6, April 1969, p. 37.

2. Ibid., p. 36.

3. Alan F. Westin and Michael A. Baker, Databanks in a Free Society: Computers,
Record-Keeping and Privacy, New York: Quadrangle Books, 1972, p. 23. Report of
the Project on Computer Databanks of the Computer Science and Engineering Board,
National Academy of Sciences.

4. Ibid., p. 223.

5. Westin, "Computers and the Protection of Privacy," pp. 345.

6. Deborah Johnson, Computer Ethics, Englewood Cliffs, New Jersey: Prentice-
Hall, Inc., 1985, p. 58.

7. Ibid.

8. Arthur R. Miller, The Assault on Privacy, Ann Arbor: University of Michigan
Press, 1971, p. 23.

9. Abbe Mowshowitz, ed., Human Choice and Computers, 2, New York: North-Holland
Publishing Company, 1979. Proceedings of the Second IFIP Conference on Human
Choice and Computers, Baden, Austria, June 4-8, 1979, p. 164.

10. David Burnham, "IRS Starts Hunt for Tax Evaders Using Mail-Order Concerns'
Lists," The New York Times, Sunday, 25 December 1983, p. 1.

11. John Shattuck, "In the Shadow of 1984: National Identification Systems, Computer-Matching, and Privacy in the United States," Hastings Law Journal, 35, 6, July 1984, pp. 1001-1002 and 994.

12. Ibid., p. 996.

13. Lance J. Hoffman, "Computers and Privacy: A Survey," Computing Surveys, 1, 2,
June 1969, p. 87.

14. Miller, pp. 234.

15. Court of Appeals decision in Ford Motor Credit Co. v. Swarens, 447 SW 2d. 53
(Kentucky Appeals 1969). Quoted in Robert P. Bigelow and Susan H. Nycum, Your
Computer and the Law, New Jersey: Prentice-Hall, Inc., 1975, p. 136.

16. Ibid., p. 145.

17. Kenneth C. Laudon, "Data Quality and Due Process in Large Record Systems:
Criminal Record Systems," prepublication draft, July 1983, p. 23.

18. David Burnham, The Rise of the Computer State, New York: Random House, 1980,
p. 82.

19. Ibid., p. 44.

20. Shattuck, p. 994.

21. Burnham, The Rise of the Computer State, p. 47.

From the Secretary's Desk
Laura Gould-CPSR National Secretary

After a long search, we are very happy to have hired a new National Office
Manager, Katy Elliot. Katy is a longtime Palo Alto peace activist. She has a
degree in international relations from Mills College. Katy has a lot of
experience in office management and bookkeeping, and we are very lucky to have
found her.

Our first national membership campaign will be underway as you read this.
Working with a $20,000 grant provided by the Rockefeller Family Fund and another
$20,000 committed by the CPSR Board of Directors, we are running full-page ads
for CPSR in the November issues of IEEE Spectrum and The Communications of the
ACM. These ads should be seen by about 400,000 readers. We are also doing a
direct mail campaign to 75,000 members of IEEE and the ACM. We hope to triple
the membership of CPSR by adding about 2,500 new members to the organization.

We have also received a $10,000 grant from the Richard and Rhoda Goldman Fund of
San Francisco, to be used to extend and supplement the national membership
campaign that is already underway. This grant may be used to purchase more
advertising space or do another direct mail appeal.

The Boston slide show, Reliability and Risk: Computers and Nuclear War, should
be ready for circulation to the chapters by mid-November. This will be an
excellent vehicle for disseminating CPSR's message, particularly about the
computer aspects of the SDI. It is a half-hour long and will be available in
videocassette as well as in several different formats for showing with one or
more slide projectors. For information about getting a copy of the slide show,
call the CPSR National Office at (415) 322-3778.

CPSR chapters around the country have been very busy lately. CPSR/Palo Alto co-
sponsored a three-day conference on the SDI at UC Berkeley October 9-11, which
featured a well-attended debate pitting Lowell Wood and Colin Gray against Richard Garwin and John Holdren. CPSR/Palo Alto member Dave Redell spoke on the
computer aspects of the SDI, and Eastport Group Chairman Danny Cohen and CPSR
member Clark Thompson were part of a panel on how the SDI will affect
universities.

CPSR/Palo Alto member Greg Nelson was invited to give a talk on the computer
aspects of the SDI at Lawrence Livermore National Laboratory, which he did on
October 3.

CPSR/Pittsburgh has organized a series of discussions on science and society. On
October 6, Herbert Simon spoke on the responsibility of scientists for the
consequences of their research. On November 5, Angel Jordan, Provost of
Carnegie-Mellon University, and Lincoln Wolfenstein, University Professor, will
talk about how funding sources influence the nature of scientific research. In
late November there will be a third discussion on whether society's priorities
concerning technology, science and the humanities are correct.

CPSR/Los Angeles is hosting a talk by Cliff Johnson, who will discuss his
current lawsuit against the Department of Defense. The meeting will be on
November 19 at the University of Southern California.

CPSR/Portland will sponsor a discussion entitled "Computerized Vote Counting in
Elections-Are Software and Operations Standards Adequate?" on October 28 at
Portland State University. CPSR member Bob Wilcox will give a presentation on
computerized voting.

CPSR members in Philadelphia organized and staffed a booth at the annual
convention of the American Association for Artificial Intelligence August 11-15
in their city. The booth was very successful. Particularly popular, just as at
last year's IJCAI in Los Angeles, were the stickers that say "It's 11 p.m.: Do
You Know What Your Expert System Just Inferred?" Thanks to the efforts of Eric
Krotkov, Philadelphia will become our newest CPSR chapter by the end of October.

An Open Letter to Congress on the SDI

Serious concern about the Strategic Defense Initiative is the topic of an open
letter to Congress sponsored by a group of scientists, including Nobel laureates
and Turing Award Winners, who work at government or industrial laboratories. The
open letter, the text of which is on the next page, points out the
inconsistencies in the President's stated goals for the SDI and what can be
expected to be realized by the SDI effort. This letter supplements similar
activities in the university research community.

As of July 1986, the letter had been signed by 1,600 technical professionals
from 26 government and 70 industrial research laboratories.

The initial group of signatures was presented to Congress on June 19, 1986 at a
news conference sponsored by Senators J. Bennett Johnston (D-La.) and Daniel J.
Evans (R-Wa.). This news conference and the open letter were reported by most
major newspapers in the U. S., with front page articles in the Wall Street
Journal and the Christian Science Monitor.

If you work at an industrial or government laboratory and wish to participate,
send a copy of the letter with your signature, name (printed or typed) and
institution or address (for identification purposes only) to:

Open Letter to Congress
P.O. Box 497
Murray Hill, NJ 07974

Additional information can be obtained from the above address or by calling
(201) 467-7629. The signature drive is continuing through the fall of this year.

The text of the open letter is:

"We, the undersigned scientists and engineers currently or formerly at government
and industrial laboratories, wish to express our serious concerns about the
Strategic Defense Initiative (SDI), commonly known as "Star Wars." Recent
statements from the Administration give the erroneous impression that there is
virtually unanimous support for this initiative from the scientific and
technical community. In fact the SDI has grown into a major program without the
technical and policy scrutiny appropriate to an undertaking of this magnitude.
We therefore feel that we must speak out now.

"The stated goal of the SDI is developing the means to render nuclear weapons
"impotent and obsolete." We believe that realization of this dream is not
feasible in the foreseeable future. The more limited goal of developing partial
defenses against ballistic missiles does not fundamentally alter the current
policy of deterrence, yet it represents a significant escalation of the arms
race and runs the serious risk of jeopardizing existing arms control treaties
and future negotiations. Furthermore, in view of the international economic
competition faced by the U.S., it should be asked whether the country can afford
the diversion of resources, especially scientific and technical manpower, that
the SDI entails.

"The Congressional Office of Technology Assessment has raised serious questions
concerning the scope and scale of the present SDI effort. We urge the Congress
to heed these concerns and to limit the SDI to a scale appropriate to
exploratory research, while assessing the costs, the risks and the potential
benefits of the program in comparison with alternative strategies for
strengthening the overall security of the nation. Top priority must be given to
this task before the momentum inherent in a program of such magnitude makes this
venture irreversible."

Gloria Duffy Elected to CPSR Board of Directors

CPSR welcomes Dr. Gloria Duffy, president and founder of Global Outlook, to the
Board of Directors of Computer Professionals for Social Responsibility. Global
Outlook, of Palo Alto, California, is a national security and arms control
consulting firm.

Dr. Duffy is a graduate of Occidental College in Los Angeles and Columbia
University in New York, where she received a Ph.D. in political science. She has
specialized in U.S.-Soviet relations and arms control. She has served as
Assistant Director of the Arms Control Association and Executive Director of the
Ploughshares Fund, a foundation that funds arms control work. She is a member of the Center for International Security and Arms Control at Stanford University. Along with Dr. Coit Blacker, she is the editor of the textbook
International Arms Control: Issues and Agreements. While working as an analyst
at the Rand Corporation, she authored the Rand Report Soviet Nuclear Energy:
Domestic and International Policies.

Currently Dr. Duffy is the head of the Working Group on Arms Control Compliance
of the Center for International Security and Arms Control at Stanford. This
group is preparing a large report entitled Compliance and the Future of Arms
Control, which will be a comprehensive survey of alleged violations of current
arms control agreements by both superpowers.

Gloria lives in San Jose, California, with her husband Rob Elder, who is the
editor of the San Jose Mercury News.

Miscellaneous

The CPSR Newsletter is published quarterly by:

Computer Professionals for Social Responsibility
P.O. Box 717
Palo Alto, CA 94301
(415) 322-3778
The purpose of the Newsletter is to keep members informed of thought and
activity in CPSR. We welcome comments on the content and format of our
publication. Most especially, we welcome contributions from our members.
Deadline for submission to the next issue is December 15, 1986.

This Newsletter was produced on an Apple Macintosh Plus using the "desktop publishing" program PageMaker, donated to CPSR by the Aldus Corporation. It was
typeset from a Pagemaker file on a Linotronic 100 Imagesetter.
