The CPSR Newsletter, Fall 1987

Thinking About "Autonomous" Weapons
Gary Chapman
CPSR Executive Director

This article is adapted from the chapter, "The New Generation of High Technology
Weapons," in the book, Computers in Battle: Will They Work?, David Bellin and
Gary Chapman, eds., Harcourt Brace Jovanovich, 1987. This book is now available
through the CPSR National Office. References for this article can be found in
the book.

The great 19th-century social theorist Max Weber was among the first to describe
how science and technology "disenchant" the world we live in. Science and
technology replace myths and fables with cold hard facts; customs with rules;
kinship ties with economic relationships; and mystery and uncertainty with a
quantified universe.

Perhaps nothing was ever "enchanted" about war and the role of the military,
but, nevertheless, science and technology have virtually banished the centuries-
old military reliance on honor, virtue, courage and duty. Today it is the
weapons we deploy which are "smart" and "friendly," rather than our soldiers. It
appears that our military at least believes that success in conflict is based
less on the virtue and determination of the soldier and the justice of his
cause than on the effectiveness of the weapons he employs. The soldier is no
longer an individual political representative of a national will that has
decided to risk his life and employ force against others, but is now simply a
weapons operator, a small part of a powerful war machine that is employed by a
handful of policymakers in Washington.

As a simple weapons operator, the soldier's capacity as a political
representative is more a problem than an asset to modern war planners. Dead
soldiers are a
political liability. Modern weapons have made skill, planning, and courage
largely irrelevant -- most of our recent casualties have been men who were in the
wrong place at the wrong time. To minimize the political liabilities of having
men simply slaughtered because they were in an area for "policy" reasons, the
Pentagon, for the last twenty years, has tried to keep the people exposed to
combat to a minimum, and to supplement those that are left with increasingly
deadly weapons. This is advertised as a policy of "nothing but the best" for
American soldiers, in order to save lives, but in fact this policy leaves
soldiers at the mercy of the reliability of complex high technology weapons
systems, with diminishing control over their own fate.

Robots and the Military
Do They Really Need Each Other?
Bernard Roth -- Stanford University

Bernard Roth is a professor in the Department of Mechanical Engineering at
Stanford University, and one of the country's leading researchers in robotics.
He wrote this article for The CPSR Newsletter.

The military, mainly through the Defense Advanced Research Projects Agency
(DARPA) and the research offices of the Navy, Army and Air Force (ONR, ARO,
AFOSR), is a major funder of basic and applied research in robotics. The
military funds work in all the major subject areas of robotics (manipulation,
vision, planning, locomotion, sensing and computing) by heavily supporting
robotics research at DoD industrial contractors and the leading university
research centers (such as Carnegie-Mellon University, MIT and Stanford). It is
my opinion that there is little special justification for the military to be a
leading funder of robotics research, and that the National Science Foundation and
other civilian agencies are the proper channels for the government to use in
funding this work.

A robot is a machine, nothing more. Yet the name evokes images of something far
different from mere machines; most of the images of robots are fantasy, and it
is the fantasy that gives the concept its popular appeal. Science fiction films and
stories have done much to form images in the public mind that are far removed
from current reality, and it is these images that are easily exploited by both
commercial interests and proponents of high-tech military gadgetry. Although
such images are not new, they are, in their current form, reinforced by films
such as George Lucas' Star Wars series and others of the genre. Through science
fiction stories people have become accustomed to the idea of machines which are
more than dumb inanimate objects; they see instead willful devices capable of
interacting with humans on their own level and capable of autonomous and varied
actions. They also see these devices as adjuncts to people in the fighting of
wars. From such influences the obvious question follows: If these robots exist
on the screen, cannot our technology do the same in real life?

This question invariably implies the answer: surely it must be just a matter of
time and money before military robots become a reality. To conclude otherwise
makes one seem retrogressive, anti-technology and -- what is even worse -- a
"spoilsport" for those in our society whose livelihood, satisfactions and
professional character depend on high-tech research in the name of military
security. It is
the purpose of this article to examine the thrust of the U.S. military programs
in robotics from the point of view of the state of the art of the associated
technologies, and the potential military applications of such devices.

According to its dictionary definition, a robot is a machine which performs
certain functions usually done by people (or, in a disparaging sense, it is a
person who behaves like a machine). In our vernacular it is the name used for
any mechanical device that looks or behaves like a living entity; so a machine
that looks even vaguely like a dog and emits a barking sound becomes a "dog
robot," while a machine that talks is referred to as a "talking robot." The noun
robot (without the use of any modifiers) is used to describe any machine with an
external covering that gives it an anthropoidal appearance, or machines that
simply move while in what appears to be an erect position.

The concept is not new -- it dates back to various cultures and takes its modern
form directly from science fiction. The 1921 play R. U. R. by the Czech writer
Karel Capek is widely credited with popularizing the term "robot." In this
play a scientist creates android workers which are continually improved until
eventually they gain enough understanding and feeling to rebel against and
destroy their creators. Over the centuries many devices have been built which
are automatons in human form (androids). These devices were mainly used for
entertainment: to play music, to adorn elaborate clockwork, and to serve as
features in royal salons and exhibitions. A very fine contemporary example of
this tradition is
the Abraham Lincoln automaton at Disneyland in California. On the darker side,
these "entertainment" automatons in human form were used for making illicit
profit by taking advantage of a gullible public that was ready to believe that
such devices were capable of doing more than they really could. An early example
of this was the "chess playing robot" which toured Europe; ostensibly
autonomous, it was in reality operated by a human concealed in the machine's
base. A modern parallel is found in the remotely operated devices which various
promoters pass off on the press (and public) as autonomous devices.

The concept of automatic machinery has been with us for a long time. Today,
however, if an automatic machine is operated under computer control we tend to
use the adjective robotic to describe it. In many ways the situation today can
be understood in terms of an incident which took place over 60 years ago. A
steamship made a 21-day run from San Francisco to Auckland, New Zealand, and
things were arranged so that no human hand touched the steering apparatus.
Immediately the press and the promoters proclaimed the age of robot ships --
that was in 1927. Of course gyroscopes have proved to be useful devices, yet
the name gyroscope lacked the popular appeal which is inherent in thinking of
it as a robot navigator. This same exaggeration and inflation of concepts is
very much with us again today.

In analyzing the use of robots and the field of robotics with regard to military
interests and activities, the definition of the term is perhaps beside the
point, since the term has taken on meanings having little to do with standard
definitions. It can best be thought of as the name applied to any mechanical
device which operates under some computer control, and for which it suits
someone to use the term "robotic." There is no logical explanation why certain
devices are called robots and others are not. If there is a logic it is that of
publicists and not technologists. For this analysis I will classify so-called
robotic devices by their degree of autonomous control and independent decision
making. I will start with the least autonomous, the "robots" under direct and
continuous control of a human operator.

Direct Operator Control

There are machines directly and continuously operated by humans that are
nonetheless called robotic devices. In the 1960s the Army contracted with
General Electric to develop devices to enhance the prowess of an individual
soldier. The work resulted in the construction of prototypes of machines into
which a human operator was fitted. These machines were essentially "worn" by the
operator, and hence were called exoskeletal devices. One such device fit on the
arms and legs of an operator and by virtue of hydraulic action amplified the
forces exerted by the operator's muscles. Another device was a four-legged
walking machine which carried the operator in the cab (much as in a conventional
tractor). The operator's leg and hand movements controlled the device. Although
a mechanical success, in that the devices could operate more or less as planned,
this work was essentially a failure in that nothing has ever come of it. One of
the reasons often cited for the lack of success was the heavy strain on the
operators' abilities to control such complex devices. The four-legged walker
required an operator to work 18 different controls.

Today certain aspects of this work have been resurrected, mainly in the form of
the Adaptive Suspension Vehicle developed under a DARPA contract at Ohio State
University [1]. This project started in 1981, and has to date cost approximately
$8 million. This year the device had its first public operational demonstrations.
The result is a six-legged walking vehicle that measures 17 feet in length and
weighs 7,200 pounds. It currently has achieved a walking speed of 2.5 miles per
hour, while the design calls for a top speed of 5 miles per hour. It carries its
own four-cylinder 68-kW engine and 17 Intel 86/30 single-board computers. It also
carries a human operator who uses a joystick and a keypad to determine the gait,
direction and rate of motion. The actual control of the motors on the legs is
done by the onboard computers which monitor 82 control variables, thus relieving
the operator of the large burden carried by his predecessors at General Electric
over twenty years ago.

In all cases involving a human operator there are various degrees of control
that the human operator can exercise. These range from direct control of every
action to what is called high-level or supervisory control. If the operator
adjusts every controllable aspect of a machine it is called low-level or direct
control. On the other hand, if the operator simply provides general control
signals, such as "turn right slowly," and these are then automatically
translated into low-level actions by a computer or other device, the machine is
said to be operating under high-level
supervisory control. This represents the state of the art of this technology. In
lay terms one might think of supervisory control as being roughly equivalent to
semi-automatic operation. Using this framework it follows that the early General
Electric walking machine was operated under low-level control while the new
Adaptive Suspension Vehicle relies on supervisory control. It is this transition
of the level of operator control that has been the major technological change
which makes the operation of such a complicated machine feasible.
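
To make the distinction concrete, here is a minimal sketch in Python (all names,
numbers, and gait logic are invented for illustration; nothing here is drawn
from the Adaptive Suspension Vehicle's actual software). Under direct control
the operator sets every actuator himself; under supervisory control a single
high-level command is expanded into coordinated low-level actions by the
machine.

NUM_LEGS = 6

def set_leg_joint(leg, joint, angle):
    # Stand-in for a low-level actuator command.
    print(f"leg {leg}: {joint} -> {angle:.1f} degrees")

def direct_control(operator_inputs):
    # Direct (low-level) control: the operator specifies every joint,
    # much as the GE walker demanded 18 separate controls.
    for (leg, joint), angle in operator_inputs.items():
        set_leg_joint(leg, joint, angle)

def supervisory_control(command):
    # Supervisory (high-level) control: one command from the operator,
    # translated by the computer into many low-level actions.
    if command == "turn right slowly":
        for leg in range(NUM_LEGS):
            swing = 12.0 if leg % 2 == 0 else 4.0  # invented turning gait
            set_leg_joint(leg, "hip", swing)
            set_leg_joint(leg, "knee", -swing / 2)

direct_control({(0, "hip"): 10.0, (0, "knee"): -5.0})
supervisory_control("turn right slowly")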

At this point it seems clear that this project will be a technical success, in
that the machine will be capable of a wider scope of motion than previous
walking machines. But again, the question remains, what will it lead to?

Personally I think the walking machine project does have many interesting
features. It combines the most modern elements of computer control, machine
design, kinematics, dynamics and human factors. Few machines or mechanical
devices offer so many state-of-the-art problems and student projects. So I
definitely believe aspects of this project make for good research problems and
student training. What I don't understand is why it is being funded by the
military. It would seem that in a democratic society other branches of
government should be responsible for funding academic training and the expansion
of the knowledge base.

If we put aside the motive of academic training, what is the military
justification for this project? Is it a serious development or a "proof of
concept" project? The proponents of walking machines point out that most of the
earth's surface is not a paved highway or even a passable roadway. Hence, the
argument goes, it would be useful to replace wheeled vehicles with ones that can
operate on more rugged terrain. We can all agree on the nature of the
terrestrial surface, and the difficulties in traversing certain portions of it
with the current array of military land vehicles, but why military walking
machines? They promise to be slow, inefficient, and cumbersome devices at best.
It seems as though we are going back to Hannibal -- this time with high tech. In
fact Hannibal had a better "walking machine" than we are likely to see in the
foreseeable future. Biological systems seem so much better than any walking
machine we are capable of building that one wonders why a military interested in
walking transport would abandon the horse, the camel or any of the other
traditional biological "walking machines." They are bound to be both less
expensive and better solutions in many situations. Of course in today's world,
the helicopter has evolved as a far superior solution to deal with moving in and
out of difficult terrain, and in this sense a military walking machine
seems -- pardon the expression -- a step backwards.

Putting aside the question of military relevance, the Adaptive Suspension
Vehicle represents the state of the art in robotic devices operating under
high-level human supervisory control. This technology is well-developed and is being
applied to all sorts of military hardware. The issues in regard to this
technology come down to reliability in service, and cost in manufacturing and
maintainability. The basic question which needs to be asked each time a new
operator-controlled "robotic" system is proposed is whether the military is
better off with simpler devices under direct operator control, or more complex
and sophisticated devices which rely on computers and sensors in order to allow
for high level supervisory control.

Teleoperator Control

Machines under teleoperator control are operated at a distance, i.e., there are
human operators but they are not in direct proximity to the machines they are
controlling. This has been a very successful area of development. It has the
advantage of leaving people in direct control of the machines, while affording
the operator the advantage of a safe and more comfortable environment. Military
applications of this technology include the types of drone aircraft known as
RPVs (remotely piloted vehicles) and deep sea rescue and surveillance vehicles.
One advantage of such devices for the military is that they do not have to be
concerned with protecting the operator from a hostile environment (be it
natural, such as the deep sea or outer space, or man-made, such as enemy forces).
This allows in general for lighter, smaller and less expensive devices. One of
the major disadvantages of such devices is that they usually must operate within
a limited range and maintain a link with their home base. The link can be an
actual physical connection, such as metal wires or optical fibers, or it can be
through some form of wireless communication, such as radio, video, or laser
signals.

The successful use of RPVs by the Israelis in neutralizing defensive missile
sites in Lebanon's Bekaa Valley is an interesting case study in the use of
modern technology. Many people naively believe that the Israelis used "robot
planes" and thereby simply "out teched" their adversaries. In fact these planes
were not general purpose robot vehicles, they were not robots at all.
Furthermore, the technology was not any more advanced than that available to
their adversaries. It was rather, according to knowledgeable observers [2], the
fact that the RPVs were "designed for specific mission applications rather than
catch-all platforms," and the use of countermeasures such as missiles designed
to home in on the radiation emissions from defensive radars, and the jamming of
the enemy's voice and radar
transmissions were considered large factors in the Israelis' success.

In other words the success was not due to some specific set of technological
advances as much as it was to specific tactical modifications and the wise use
and coordination of a specific strategy of countermeasures to disable the
enemy's advanced technology. The lesson here is a strong rebuttal to proponents
of "general purpose" autonomous systems.

Preprogrammed Operation

There are now over a hundred different types of commercially available
mechanical arms which are operated under preprogrammed computer control. Such
devices are commonly referred to as industrial robots [3], and are used mainly in
the automotive industry for spot welding and spray painting, and in the electric
machinery industries for a variety of tasks including light assembly. They
operate by following a set of instructions that have been previously inserted
into their controllers, which are usually electronic computers. In general these
devices have very little or no capacity to vary from their preprogrammed
sequences of movements.

The military interest in these devices has been with respect to their use in
factories which make military equipment and as operational parts in some
military hardware. In the first category there are proposed uses in munitions
plants and in some aircraft and vehicle fabrication facilities. In the second
category there are arms for the exteriors
of submersibles, proposed shell loaders for tanks, and other ammunition loading
and handling situations. Mechanical arms have also been proposed for
bomb and mine location and disposal.

The automatic loading of shells was publicly singled out some years ago as a
good "proof of concept area." Although some work has started in this area,4
there seems to have been no public announcements as to the results. Now the idea
is again being advanced as a feasible use of battlefield robotics.5

Other than in the development of devices for a few very specific applications,
the military stake in preprogrammed operations seems small, since this
technology addresses itself to devices which automatically and repeatedly
execute the same task.

Structured Control

The most advanced industrial robots may work in conjunction with an artificial
vision system or with sensors that register the forces exerted on the
environment or the location of objects, and the information from these sensors
may be used in a rudimentary way to modify the actions of the mechanical arm. In
many university laboratories and in some industrial situations mechanical arms
do operate under conditions where their motions are modified using information
obtained from some sensory device which measures one or two properties of the
environment. Although these generally represent the state of the art in
"robotic" technology, they are in fact rather limited devices: the domain in
which they can correctly interpret environmental changes is quite narrow, and
the variety of possible actions is in effect no greater than that available from
a preprogrammed sequence with a structured range of alternatives.

The military has sponsored projects to improve this technology. One such project
was the Intelligent Task Automation project supported jointly by the Air Force
and DARPA. In 1983, $3.35 million was awarded to Honeywell and $2.25 million was
awarded to Martin Marietta for demonstration projects which were to produce an
industrial robot capable of assembling a device from a tray of randomly
arranged parts. The actual work was carried out by industry-university
consortia.

This intelligent task automation research is a direct descendant of the
earlier, incredibly expensive Air Force Manufacturing Technology program, or
MANTECH. Although it is beyond the scope of this article, the entire rationale
for the Air Force to be at the forefront of a push toward an automated "factory
of the future" seems worthy of exploration. The Air Force seems an unlikely
agency to be supporting the development of a technology which can in general
only be economically justified by much higher rates of production than are
obtained by aircraft manufacturers.

Most military use of manufacturing robotics and automation technologies takes
place in the plants of the military's large contractors. In many cases where
savings are quoted in terms of saved man-hours, the figures are deceptive, since
they do not include the large overhead costs required to develop the specific
hardware and setups for that application. In other cases, where special handling
and extreme consistency of performance are required, the technology of
industrial robotics may be perfectly justified. In general it does not seem that
there is any special justification for the military taking an active and leading
role in this area.

There are various other types of technologies being supported by the military
which involve devices with some sensory decision-making capability built in. For
example, there is the hopping machine development started at CMU and recently
moved to MIT. This is a fascinating project in which a device was built to
bounce along in "pogo-stick" fashion. First two, and now four, such devices
have been combined to form what is essentially a hopping or running machine. In
the process various interesting things have been learned about the symmetry of
the gaits associated with running. This project is a technical success, and yet
again the questions arise: what are the actual military applications for such
devices, and why is this work being funded by DARPA?

Similarly the DoD funds projects for studying the design and control of flexible
mechanical arms, for all sorts of vision work, and for the development of
prototype mechanical arms and hands. Again, it seems that the military use of
most such devices would be, at best, extremely limited.

Autonomous Operation

In this category we finally come to the type of devices that the term robot
implies in most people's minds. In point of fact, such devices do not really
exist. The closest we come to this category today are the devices capable of
structured control as discussed in the previous category. There are various
projects in this category being currently financed by the military. Perhaps the
most noteworthy is the Autonomous Land Vehicle (ALV) project being developed by
Martin Marietta under DARPA contract. There are also
DoD-sponsored autonomous vehicle projects at other companies
(such as the FMC Corporation) and at several universities (the largest appears
to be at Carnegie-Mellon). Some of the guard vehicles presently being researched
by various companies might also fit in this category.

Although remotely-controlled vehicles do seem to have some potential, the idea
of a completely autonomous vehicle seems -- to say the least -- premature. The
technology of autonomous decision-making is extremely difficult. Presently a
vehicle can be made to operate autonomously in a very friendly environment with
extremely limited real-world complexity. If one considers the presence of a
conflict-ridden environment replete with countermeasures and the vagaries of
nature, the idea seems unrealizable.

In the early days of vision work researchers used to delight in putting a
perspective drawing of a three-dimensional block in front of a vision system which
was programmed to locate blocks for a mechanical arm to pick up. The vision
system, thinking it was seeing a real block, would then instruct the arm to
move to where it "saw" the block, and the hand attached to the arm would then
reach out and vainly attempt to grasp the nonexistent block. In the same way,
with very little effort many methods could be devised to fool any autonomous
vehicle we are currently able to build. These types of projects simply represent
a fascination with the idea of having such devices and completely ignore the
question of what they would actually be good for. Even ignoring countermeasures
and the problem of dealing with a real but "friendly" world, there still remains
the question of the extreme complexity of such a device and the impracticality
of maintaining and operating it under actual field conditions.
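
The block-world anecdote shows the shape of the problem. Below is a minimal
sketch of that naive sense-act loop (all names and the toy "image" format are
invented for illustration): the program acts on whatever its perception module
reports, and nothing in the loop asks whether a real, graspable object is
actually there.

def locate_block(image):
    # Stand-in vision module: reports the position of anything that
    # LOOKS like a block -- a real block or a drawing of one alike.
    return image.get("block_like_region")

def move_arm_to(x, y):
    print(f"moving arm to ({x}, {y})")

def grasp():
    print("closing gripper")

def pick_up_block(image):
    position = locate_block(image)
    if position is not None:
        # The percept is trusted outright: no independent check of reality.
        move_arm_to(*position)
        grasp()

# A drawing of a block fools the loop exactly as described above.
pick_up_block({"block_like_region": (0.4, 0.7)})

The same structure, scaled up, suggests why countermeasures against an
autonomous vehicle could be so cheap relative to the vehicle itself.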

In spite of newspaper and magazine stories with headlines such as "The Lean,
Mean Warrior Robots" [6] and "Robots Go To War" [7], it is extremely unlikely that
autonomous robots using current technology or logical near-term developments
from this technology will be of any real military value.

Technology Push or Technology Pull?

As a final topic it is instructive to note how the military goes about
soliciting ideas for the use of robotics. One reasonable technique would be to
identify problem areas and then set people to work solving these problems,
simply seeking the best, most efficient solution without any prior commitment to
using any specific technology. This seems to be the technique used by the
Israeli Defense Forces, where the weapons systems are often designed by the very
same people that use them in combat and training [8].

A diametrically opposite technique seems to be used in this country. Here the
DoD research establishment (which includes government agencies, commercial
interests, and the universities) seems to try to force a specific technology
into a military application, whether the technology is appropriate or not.
Examples are the various workshops, study groups, and think tanks that the
military engages to determine how it might conceivably find uses for robotics
(or expert systems, or whatever the latest fad is). Of course it is the
responsibility of military technology developers to keep abreast of changing
technologies, but all too often they allow themselves to be influenced by
enthusiastic, and probably well-meaning, individuals with obviously conflicting
interests trying to force a technology to fit a specific military problem [9].
The fact is that if one looks at the various reports [5,10,11], it becomes
apparent that even though it is easy to generate a lot of "blue sky" concepts,
there is very little that robotics technology can do for the military distinct
from what it can do for the rest of society.
There seems to be little justification for the U.S. government to be channeling
so much of its research funds in the robotics area through its military rather
than through its civilian research agencies. Unfortunately the same conclusions
are also valid for many other areas of research.

References

1. "The Adaptive Suspension Vehicle," K.J. Waldron and R.B. McGhee, IEEE Control
Systems Magazine, pp. 7-12, 6(6), 1986.

2. "Lebanon Proved Effectiveness of Israeli EW Innovations," David M. Russell,
Defense Electronics, pp. 41-44, October 1982.

3. Robotics Industry Directory, Fourth Edition, P.C. Flora, ed., Technical
DataBase Corp., Conroe, TX, 385 pp., 1984.

4. "Army Tests Robots That Might Be Used On The Battlefield," New York Times, p.
19, November 13, 1981.

5. Applications of Robotics and Artificial Intelligence to Reduce Risk and
Improve Effectiveness, A Study for the United States Army; Manufacturing Studies
Board, National Academy of Sciences, 91 pp., 1983.

6. "The Lean, Mean Warrior Robots," Don Clark, San Francisco Chronicle, p. 23,
October 13, 1986.

7. "Robots Go To War," Robert B. Aronson, Machine Design, pp. 72-79, December
6, 1984.

8. "Israeli Defense Industry Pushes Appropriate Technology," David M. Russell,
Defense Electronics, pp. 103-114, June 1985.

9. "Robots On the Battlefield," Ejner J. Fulsang III, Defense Electronics, pp.
77-82, October 1985.

10. R&D Plan For Army Applications of Al/Robotics, Final Report ETL-0296, SRI
International, 323pp., May 1982.

11. Concepts For Army Use of Robotic-Artificial Intelligence in the 21st
Century, Lt. Colonel Dennis V. Crumley, ACN 82016, U.S. Army War College,
Strategic Studies Institute, Carlisle Barracks, PA, June 3, 1982.


Why I Never Met A Programmer I Could Trust
John Shore

John Shore is the author of The Sachertorte Algorithm, and Other Antidotes to
Computer Anxiety, published by Penguin and available through the CPSR National
Office. He works for Entropic Processing, Inc., in Washington, D.C., and is a
CPSR member. He wrote this piece for the CPSR Newsletter.

Human lives and human society depend daily on many technologies. Of all such
technologies, I can think of one that is virtually free from government and
professional regulation: computer technology.

Computer professionals are well aware of the risks associated with computer
technology, and we have fought for years to control them. But our fight has been
a technical fight, because it's natural that technologists try to control
technical risks by means of technical forces. Technical forces, however, cannot
do the job alone. The control of technical risks requires social forces as well
as technical forces.

I believe that it follows from human nature that the fight against computer
risks will remain unsuccessful until we marshal social forces as well as
technical
forces. This will happen eventually, but we should put some of our professional
energy into making it happen sooner rather than later.

The "Software Crisis" Isn't One

crisis, n., 1. the turning point in the course of a disease, when it becomes
clear whether the patient will recover or die.

After almost twenty years of battling the software crisis, it's time to admit
that we're not doing so well. When you consider the unsatisfactory state of
software engineering, and when you consider that our profession now includes
people born after the term "software crisis" became popular, it's clear that the
software crisis is less a turning point than a way of life.

It's not that there hasn't been progress. But our software ambitions have grown
at least as fast as our abilities, and software technology doesn't scale well.
Moreover, our rate of progress in extending software technology is dwarfed by
the rate at which the public is becoming dependent on computer systems, the net
result being that the public depends on fragile systems.

In Code We Trust?

The software crisis used to be merely a struggle for program correctness. Today
we ask more. It's not enough that software be correct; it should also be
reliable, safe, and trustworthy -- as though acceptance testing included a Boy
Scout oath.

Returning home from a software conference last year, I was amused to realize how
much time we had spent on linguistic arguments rather than technical arguments.
We argued incessantly about "correctness," "reliability," "safety," and
"trustworthiness." Some said, "It's more important that software be trustworthy
than safe." Other said, "How can it be trustworthy if it's not reliable?"

Instead of discussing solutions to software problems, we dissected the language
used to pose the problems. We thought of ourselves as software engineers, but we
acted like Talmudic scholars. In doing so, we gave evidence for the proposition
that progress in software engineering has slowed.

This nouvelle cuisine of software rhetoric has some advantages. I especially
like the language of trust, since it reminds us that trustworthy software
requires trustworthy programmers.

What Do The Lawyers Say?

Perhaps the most telling evidence of our failure to resolve the software crisis
is a legal document known as the software license. Many of you are acquainted
with these things, perhaps because you've arm-wrestled with a lawyer over one.
Here's my favorite example, which won a prize in a contest run by Abacus
magazine. It is said to be a real license.

We don't claim Interactive EasyFlow is good for anything -- if you think it is,
great, but it's up to you to decide. If Interactive EasyFlow doesn't work,
tough. If you lose a million because Interactive EasyFlow messes up, it's you
that's out the million, not us. If you don't like this disclaimer, tough. We
reserve the right to do the absolute minimum provided by the law, up to and
including nothing.

This is basically the same disclaimer that comes with all software packages, but
ours is in plain English and theirs is in legalese.

Quite right, this is basically the same disclaimer that comes with all software.
It inspires a variation of that joke about the three most frequent lies:

(1) The check's in the mail.

(2) Of course I'll still respect you in the morning.

(3) The software works.

How can we trust people who lie constantly?

Sometimes I think the net effect of battling the software crisis is that twenty
years ago only professionals knew how hard it is to write trustworthy software,
whereas today everyone knows. To paraphrase something attributed to Ambrose
Bierce, I offer the First Law of Software Economics:

A programmer's word is worth its weight in gold.

Confessions Of A Lapsed Software Engineer

One of the reasons for my current cynicism about the software crisis is that
I've become part of the problem. About two years ago I left the Naval Research
Laboratory (NRL) for Entropic Processing, Inc. (EPI), a private company whose
products include software, both standalone and embedded. I knew a fair
amount about software engineering, and I vowed that software development within
the company would be done "right." Company management was convinced, and they
also vowed to do it "right."

At NRL I had participated in a project, led by David Parnas, that developed a
successful requirements methodology for real-time avionics software. So it
was natural to use the methodology for the real-time telecommunications software
we were developing at EPI. After a considerable effort, however, we had to admit
that the methodology didn't fit exactly and we didn't have time to figure out
how to adapt it. We pressed on, determined at least to maintain software
engineering rigor in our software design, development, and documentation
efforts. Eventually, however, these efforts began to deteriorate in the face of
financial, competitive, and corporate pressures. We haven't given up, and we do
retain substantial software discipline, but our efforts are increasingly similar
to those we had disparaged. I am told that the first labor laws in England
resulted in part from the efforts of those who wanted fair labor laws but who
could not compete unless all businesses were required to meet the same
standards. There are parallels in today's software industry.

While still at NRL, I wrote:

Within the computer industry today, the software crisis is widely recognized and
widely battled, except by a few who accept it as a fixture along with cancer,
venereal disease, and budget deficits. [1]

Today, I suspect that the few are in fact many and that I have become one of
them. When the pressure is on, I throw up my hands, throw out the methodology,
close the door, and hack. I have become a lapsed software engineer, a member of
Hackers Anonymous.

Knowing How Isn't Enough

When I say, "We didn't have time to do it right," others say, "You didn't take
the time." Strictly speaking, they're right, but I was as well-motivated as
anyone -- the pressures were simply too great. The critics also say, "In the long
run you'll be sorry," and, again, they're right, but they overlook the fact that
the short run comes before the long run, especially in companies like ours.

The problem here is that knowledge and good intentions are insufficient for
success. This is not just true of software. Those of you who understand
nutrition and want to lose weight, can you count on a successful diet? And how
many of you who modify your homes always do so according to applicable building
codes?

Software Idealism Isn't Enough

Software idealists argue that personal inspiration and market forces will
suffice. They say that software producers should resist selling hasty
software -- those that don't will suffer in the long run. And they say that
software consumers should resist buying hasty software -- those that don't
deserve what they get. They imply that regulation is somehow unnecessary or
wrong. This is like being a rabid, no-nuke peacenik who doesn't believe in arms
control regulations, as though the belief in peace should be sufficient. It's
like being against alcohol and drugs but not believing in any form of
regulation. By this way of thinking we should have Edsger Dijkstra tour our
nation's schools with the Reverend Jesse Jackson, preaching to future
programmers before their exposure to corruption, and pinning on their chests
buttons that urge:

Software Discipline Requires Enforcement

Much of what is known about the production of quality software can be summarized
in a single word: discipline. Unfortunately, our experience with human nature
shows that discipline requires enforcement. Now there are a few Saints of
Programming out there whose self-discipline is sufficient -- sort of Digital
Gandhis who never seem tempted to write bad code. The rest of us, however, need
more than an inner force. It's not that we are the software heathen; indeed,
many of us are true believers. But we are software sinners, and like sinners
everywhere we respond best to the threat of punishment.

As for what would work, I'm not sure. One general approach is to license
individuals. We could require that certain types of software be "signed off" by
licensed software engineers, and hold those engineers accountable. Another
general approach is to specify software building standards that would have to be
met by certain types of software, which would be subject to regular inspections.

Advancing the state of the art attracts us all, but we should admit that the
introduction of appropriate regulation is probably the single most effective
thing that can be done today to raise the average quality of software.

Code for the First Software Dynasty

For inspiration, we can look to the history of regulation in the building
industry. In his fine book, To Engineer Is Human [2], Henry Petroski cites the
Code of Hammurabi, a 4,000-year-old collection of laws from the First Dynasty of
Babylon. Included are perhaps the world's first building regulations, so it
seems reasonable to start with them. To make them relevant we need only apply a
simple linguistic transformation, substituting "programmer" for "builder" and
"program" for "house."

I was able to apply this transformation without the aid of an AI program, with
the following results:

Updated Hammurabi Code
For the First
Software Dynasty

If a programmer build a program for a man and do not make its construction firm,
and the program which he has built collapse and cause the death of the owner of
the program, that programmer shall be put to death. If it cause the death of the
son of the owner of the program, they shall put to death a son of that
programmer.
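
Shore's transformation really is mechanical enough for a few lines of code. Here
is a minimal sketch (mine, purely illustrative) that applies the two
substitutions to the first of the quoted laws:

# The source string is the translation quoted above; the substitution
# table is exactly the one described in the text.
hammurabi = (
    "If a builder build a house for a man and do not make its "
    "construction firm, and the house which he has built collapse "
    "and cause the death of the owner of the house, that builder "
    "shall be put to death."
)

substitutions = {"builder": "programmer", "house": "program"}

updated = hammurabi
for old, new in substitutions.items():
    updated = updated.replace(old, new)

print(updated)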

Some Other Lessons From History?

Eighteenth-century French warships were the best in the world. Their dominance
arose not only because of new technical developments, but also because these
developments coincided with the professionalization of French shipbuilders [3].

Certification or regulation of software professionals would bring social forces
to bear on a problem for which technical forces have proven to be insufficient.
I'm not sure why we tend to ignore the need for such social forces. I suspect
that it's because, as technological professionals, we want to believe that
technical solutions will obviate regulation. They won't. Indeed, nothing in the
history of science and technology suggests that the ability to build safe
systems is sufficient to result in safe systems.

Large buildings in the United States have a pretty good track record. They tend
not to fall down, and if you recall the movie The Towering Inferno, you'll also
recall that serious fires are extremely rare. This good record exists not just
because we know how to make buildings safe, but because we know how and are not
permitted to do otherwise.

We have building codes for buildings; why not for software? We trust elevators,
but they must be approved on installation and inspected regularly thereafter.
You
might think that the "regular inspection" analogy shouldn't apply to software,
but software under maintenance does degrade. (Note the contrast: elevators
degrade if they are not maintained; software degrades if it is maintained.)

We require certification for doctors, lawyers, architects, civil engineers,
aircraft pilots, automobile drivers, and even hair stylists. Why not for
software engineers engaged in building systems on which the public depends?

Consider automobile safety. Were market forces or legislation responsible for
safety glass and seat belts? Consider aircraft safety. Would you be in favor of
extending airline deregulation to aircraft certification and maintenance?

It Will Happen

Even if you are not in favor of regulating the software industry, I can
guarantee one thing: it will happen. Just look at the history of all forms of
technology on which society depends. This history also shows that legislative
constraints are lightest on those who regulate themselves. So let's do it
ourselves before they do it to us. And let's make sure that when it does
happen -- and it will -- it has a sound and useful technological basis.

A Role for CPSR

It's one thing to believe that some form of professional regulation is needed.
It's quite another to know what to do. In many respects, regulating software
will be more difficult than implied by the historical analogies I have
mentioned. The complexity is greater, and the practical application of the
relevant mathematics is harder.

There's an important and difficult technical challenge in this, and one that I
think is well-suited for CPSR. After all, the introduction of appropriate
regulations would not only improve software quality, but it would do so by
encouraging computer professionals to accept their social responsibility.

References

1. The Sachertorte Algorithm, Penguin, 1986, p. 163.

2. Henry Petroski, To Engineer Is Human, St. Martin's Press, 1982.

3. James Pritchard, "From Shipwright to Naval Constructor: The
Professionalization of 18th-Century French Naval Shipbuilders," Technology and
Culture, January, 1987, pp. 1-25.


Thinking About "Autonomous" Weapons, continued
The Pentagon's capacity to send significant firepower to any spot in the world
without having to mobilize a large army or renew the draft also hamstrings the
democratic process regarding decisions about deploying force. Representatives of
the military-industrial complex have lulled Americans, even policymakers, into
believing that the replacement of men with machines is the humanitarian and
efficient thing to do in war. Certainly any given soldier would rather stay home
and let a robot go in his place. But the overall effect of replacing men with
machines is the creation of a fearsome military force without the traditional
democratic check of individual soldiers examining the justice and conduct of the
war from the inside. Moreover, the nation that is simply "left alone" while its
leaders wage war elsewhere is no democracy. As Rousseau once said, "As soon as a
single citizen says, 'What is it to me?,' the republic is lost."

The Trend Toward Autonomous Weapons

General William Westmoreland summed up the modern military's goals over
seventeen years ago when he said before a congressional committee, "I am
confident the American people expect this country to take full advantage of its
technology -- to welcome and applaud the developments that will replace wherever
possible the man with the machine." More recently, Frank Barnaby has written in
his 1986 book, The Automated Battlefield:

The Pentagon's goal is to give machine intelligence the job of waging war
without human intervention. If the enemy uses human soldiers on the battlefield
they will be killed by computerized weapons. If the enemy uses robots and
automated machines on the battlefield they will be destroyed by computerized
weapons.

Weapons development has been headed in this direction for some time. One of the
most illustrative examples is the history of the anti-tank weapon. In World War
II, soldiers used the simple "bazooka," a projectile fired from a tube aimed by
the operator. The bazooka model was in use from World War II to the late 1960s,
when the U.S. began to deploy the TOW missile (for Tube-launched, Optically
tracked, Wire-guided). The TOW missile has a wire that is trailed out the back
of the projectile as it travels toward the target. The wire supplies information
to a computer that is linked to the operator's sight, so the missile can be
steered after it leaves the firing tube. New-generation antitank weapons replace
the wire with a laser beam. The next generation of development will be so-called
"fire and forget" weapons, in which the projectile itself locks onto the target
so that the operator is free to engage another target as soon as the projectile
is fired. Finally, the operator is replaced with an artificial intelligence
system that not only operates the vehicle but identifies targets and destroys
them (this has already produced its own terminology, including IFF -- the acronym
for "identification of friend or foe"). This is the truly autonomous weapon.

The Pentagon is currently spending about $300 million a year on the development
of autonomous weapons. The military has a wide variety of reasons for wanting
combat robots, and the most important of these will be reviewed here briefly.

The NATO-Warsaw Pact Balance

The United States and other NATO partners are worried about the dramatic Soviet
numerical superiority in Europe in a number of key military resources,
particularly in battle tanks. This worry will only increase if there is an
agreement eliminating medium-range nuclear missiles in Europe. Currently the
Soviets have an advantage of between two and three to one in tanks deployed in
Europe, with as many as 50,000 tanks in the Warsaw Pact. NATO commanders are
concerned about the possibility of being overwhelmed by a massive Soviet
"blitzkrieg" attack.

NATO and government officials are convinced that the Western countries will
never match the Soviet Union in production of tanks and other traditional
military hardware. It has been estimated that to produce the material required
to match the Soviets tank for tank, gun for gun, the United States would have to
increase its defense budget by a third, or more than $100 billion per year. The
Soviet Union has a comparative advantage in labor and heavy industry, while the
economies of the Western countries are moving toward high technology-based
manufacturing and services.

The Defense Department has explicitly set forth a policy of developing an
arsenal based on computer technology because it is in this field that the
Soviets fall furthest behind the West. Soviet computer technology is usually
assessed as between five and seven years behind the technology of the U.S.,
Japan, and Europe, and, moreover, it appears that the Soviet economy is poorly
equipped to catch up. The Defense Department is waging a war against technology
transfer to the Eastern bloc nations -- trying to stop "high tech leaks" -- while at
the same time moving rapidly to exploit the advantages of the West in computer
engineering with increasing levels of funding for research and development. The
hope in the Pentagon is that through high technology the NATO alliance will
"leapfrog" the Soviets in weapons development, providing the edge that is
necessary against the alleged Soviet superiority in Europe.

The Population Dilemma

Another reason military planners are interested in combat robots and other
military machines is that the population of young men eligible for military
service is declining for the first time since World War II. The Vietnam-era
conscription drew on a population cohort larger than any in U.S. history, the
"baby boom" generation of post-World War II births. But in 1981, the year
President Reagan took office, the steady rise in the population of 18- to
21-year-old males reversed itself for the first time since 1945. In 1980, the total
population of males aged 15 to 19 was 10.7 million. By 1995 it will decline to
8.6 million, a 20% drop.

In Western Europe the decline in birth rates is even more severe. West Germany
has a birth rate of only 1.3 children per woman, the lowest in the world and the
lowest in German history. The mortality rate in West Germany is higher than the
birth rate, meaning that West Germany is losing population. The West German
government has responded with a proposal for extending the term of military
service from 15 months to 18. French Prime Minister Jacques Chirac recently
commented, "In demographic terms, Europe is vanishing."

The declining birth rates of Western countries mean that their populations are
growing older. By the year 2000, over half the men in the United States will be
over the age of 40, usually considered beyond the age of eligibility for
military service. As the population of NATO countries ages, the demand in the
private sector for highly-skilled and educated young workers will focus on a
smaller population. The military will be competing for young men in a seller's
market, as it were. Martin Binkin, an analyst of such trends at the Brookings
Institution, has reported that by the early 1990s, at planned military force
levels, the U.S. military will need to recruit more than half of the young men
who are neither bound for college nor disqualified from service. If increased force
levels are factored in, the military may need to recruit up to 60 percent of
this pool.

The policy alternatives facing the military are highly controversial and
politically risky, such as reinstating the peacetime draft, putting women into
combat jobs, and paying skilled personnel at levels comparable with the private
sector. A less controversial policy alternative, whatever its likelihood of
success, is to follow the rhetoric of American business and replace workers with
machines.

Robot Capabilities

Another important reason the military is considering the use of combat robots is
that robots are capable of operating in environments where humans cannot. Not
only could combat robots be designed to withstand high velocities, tremendous
shocks, extraordinary temperatures, and so on, but, perhaps most important in
the context of modern arsenals, robots could operate in combat zones
contaminated by nuclear, biological, or chemical weapons. The most significant
drawback to the use of such weapons, from a military point of view, is that they
are just as deadly to one's own troops as they are to the enemy. Chemical
weapons, for example, fell out of favor after World War I because their
dispersion could not be adequately controlled and they just as often affected
friendly forces as the enemy. Very few commanders are confident that the
advanced protection measures employed to protect soldiers from these agents will
actually do much good. Robots, however, could operate in environments even
heavily contaminated by nuclear blasts. One Army document even proposes
development of a robot to pick up the dead after a nuclear war.

But the human soldier may have difficulty operating in the modern combat
environment even if nuclear, biological, and chemical weapons go unused.
Current developments in weapons may make the future battlefield too lethal for
human soldiers. For example, a missile system under development by the U.S.
Army, popularly known as "the grid-square buster," can saturate a
600-square-meter area with 12,000 individual high-explosive bomblets, and, sometime in the
future, each of these bomblets will be terminally guided by an on-board system
that will direct the explosives right to their targets. Very little could
survive such an attack.

Psychological research conducted by the military strongly suggests that no
military unit can sustain 10% or more casualties in the course of a single
operation without seriously compromising the sanity of the survivors. Modern
weapons, however, may be capable of wiping out entire units in a single blow,
or, at best, leaving a handful of stunned and helpless survivors. The horror and
speed of future combat may be too much for people, even after the Battle of the
Somme and the Northwest Pacific battles of World War II. Robots may be the only
way to deliver force in a future war.

____missing page 14_____

we should recognize the deep, fundamental irrationality of our current system of
providing for national security. If we stop to think about the monstrous crime
of sending a person to his or her maker by a machine which can have no regret,
no sense of historical tragedy, or any sense of empathy and despair, then maybe
we will think more concretely about the world we are leaving to our children, a
world that should be free of such nightmares, one where we instead devote our
best intellectual skills and our greatest financial resources to building peace
and understanding.

From the Secretary's Desk
Eric Roberts -- National Secretary

Over the last few months, CPSR has been getting some important coverage in the
national press. On August 16, CPSR was mentioned in a front-page New York Times
story on the National Test Facility under construction in Colorado Springs. In
the article, CPSR member Jim Horning questions whether it is possible to rely on
simulation alone to test the SDI, noting that "the accuracy depends critically
on the assumptions of the people who build the simulator, and those assumptions
can easily be wrong." CPSR was also the cover story in the September 14 issue of
Information Week.

In September, CPSR Executive Director Gary Chapman went to Castiglioncello,
Italy, for a four-day conference on "Science, Technology, and Arms Control,"
sponsored by USPID, the umbrella organization of Italian scientists concerned
about arms control. The conference was attended by about 185 people from the
United States, Canada, Europe, and the Soviet Union. Gary spent a good deal of
time with the Soviet delegation, which included a high proportion of computer
scientists, including Academician Andrei Ershov, a director of the Soviet
Computing Center in Novosibirsk and head of the Soviet computerization program,
and Dr. Sergeev, a specialist in artificial intelligence research at the
Institute for U.S. and Canada Studies in Moscow, who wrote the section on
computer requirements for the official Soviet publication on the SDI. Gary
believes that the conference was useful in advertising CPSR as a serious
professional and technical organization at a meeting among some of the top
scientists in the Western world and feels that his one-on-one contacts were very
productive and likely to lead to more regular communication.

CPSR Program Associate Mary Karen Dahl is continuing her work on the civil
liberties project. In September, CPSR members Dave Redell and Peter Neumann
travelled to Washington to serve on a review panel on the National Crime
Information Center organized by Representative Don Edwards (D-CA), who chairs
the House Judiciary Subcommittee on Civil and Constitutional Rights. On
September 25, the panel was given a report on the new NCIC proposal by
representatives from the FBI and the MITRE Corporation, the principal contractor
on the project. The panel is now in the process of preparing comments and
questions on the proposal which will be considered by Representative Edwards'
committee.

Following up on the very successful symposium this summer in Seattle, plans are
proceeding for DIAC '88 in St. Paul, Minnesota. Several CPSR chapters are
involved in organizing the event, and we expect it to be even more exciting than
the one this year. The call for papers appears on page 19, the inside back
cover.

As the fall issue of the CPSR Newsletter goes to press, most of us around the
National Office are heading to Cambridge, Massachusetts, for the Annual Meeting.
A report on the activities at the meeting will be published in the winter issue.


Computers and War Games
Book Review
Clifford Johnson, CPSR/Palo Alto

Review of the book War Games, by Thomas Allen, McGraw-Hill, 1987.

Trevor Dupuy, a father of wargaming, warned in 1985: "The senior decision makers
of the U.S. military establishment are increasingly basing their decisions on
the outputs of computer models and simulations which are widely recognized to be
unreliable and unrealistic." In 350 pages supported by extensive notes,
bibliographies, and interviews, Thomas Allen lays out the supersecret world of
Pentagon war games, and confirms Dupuy's fear. Allen covers the "spectrum of
wargaming," which runs from high to low operational realism, and includes:
military field exercises; military field experiments; map exercises; regular war
games; computer simulations; and analytical models.

Most attention is directed to regular war games, which comprise two teams (U.S.
v. U.S.S.R., or Blue v. Red, or Sam v. Ivan) presented with a prescripted
scenario written and manipulated by a control panel of gamesters. Teams take
moves in turns. Each move takes perhaps three or four hours. It comprises a
written statement handed to control, and is usually formatted in five parts:
assessment; objectives; strategies; actions; and contingencies. Control, again
in writing, updates the scenario and passes the buck by passing nonidentical
messages to the teams. A game typically lasts a few days, after which there is a
"hot washup." Programs now exist that can stand-in for human teams. The most
important such program is the Rand Strategy Assessment System (RSAS), which
reduces "Sam" and "Ivan" to lines of code. (RSAS was critiqued in the last
Newsletter under its 1986 acronym, RSAC.) By processing 1,000 rules per second,
RSAS arrives at decisions by itself, and quickly enough for real-time crisis
applications.
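
To make the flavor of such a system concrete, here is a minimal sketch, in
Python, of the kind of condition-action rule loop a rule-based automated player
runs on each move. Everything in it (the rules, state variables, and
thresholds) is invented for illustration; none of it is drawn from RSAS itself.

    # A minimal, hypothetical condition-action rule loop, sketched to suggest
    # how a rule-based automated player makes a "move." Nothing here is taken
    # from RSAS; the rules, state variables, and thresholds are invented.

    def escalate(state):
        state["alert_level"] += 1

    def reinforce(state):
        state["reserves"] -= 1

    # Each rule pairs a condition on the game state with an action.
    RULES = [
        (lambda s: s["losses"] > s["loss_threshold"], escalate),
        (lambda s: s["front_pressure"] > 5 and s["reserves"] > 0, reinforce),
    ]

    def take_move(state):
        # One "move": fire every rule whose condition holds, then hand the
        # updated state back to control (here, simply the caller).
        for condition, action in RULES:
            if condition(state):
                action(state)
        return state

    state = {"losses": 12, "loss_threshold": 10, "front_pressure": 3,
             "reserves": 2, "alert_level": 0}
    print(take_move(state))  # losses exceed the threshold, so alert_level rises to 1

The difference between this toy and RSAS is one of scale: a program firing a
thousand rules per second against a far richer state can reach decisions at
machine speed, with no human judgment in the loop.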

One concern is that although most war games hinge upon political decisions,
politicians rarely participate. For example, a 1983 game "drew nine admirals and
generals, two ambassadors, and fifteen members of the staff of the Joint Chiefs
of Staff. The players looked at the most fundamental question of war: Should it
be declared?" Another problem is communication between game designers and users,
a point enlivened, like all other points in the book, by insightful illustration
and quotation: "They were arguing about how World War Three was going to come
out, and finally the general, in exasperation, says to the kid, who has all
kinds of Ph.D.s, 'Well, goddamn it, we'll just have to start the war and see how
it goes.' And the analyst said, 'That would be totally invalid. You'd only get
one run of the experiment.'"

After reviewing the international history of wargaming up to the end of World
War II, the bulk of the book traces Pentagon wargaming and its real-life
applications from World War II through to the present Joint Analysis Directorate
(JAD), which recently supplanted the Strategic Analysis and Gaming Agency
(SAGA). JAD is the most important Pentagon gaming agency because it has
inherited key responsibility for developing the real nuclear war plan, the
Single Integrated Operational Plan (SIOP). The last three chapters of the book
describe RSAS. Allen concludes: "The Joint Chiefs of Staff has begun using it
[RSAS] for wargaming and contingency planning. Sam and Ivan [RSAS's coded U.S.
and Soviet command authorities] can detonate nuclear weapons because they lack
what keeps players of flesh and blood from crossing the nuclear threshold...
They can cross that threshold because neither of them has a conscience." This
conclusion misses the point that RSAS was engineered to cross the nuclear
threshold after its rational utility functions had flunked nuclear escalation
strategies favored by human policy makers. RSAS's driving mission is to provide
real-time crisis decisionmaking that is too complex and rapid for human
judgment, rather than too immoral to be trusted to human execution. Conscience
must be heeded in the relatively painless task of programming, for thereafter it
may be wholly ineffective.

War Games does not address in depth the mathematical subtleties underpinning
game theory, but does describe the evolution of the measurement of war and
weapons by contrived indices. The book discusses Lanchester's law, for example, which
runs: "The fighting strength of a force may be broadly defined as proportional
to the square of its numerical strength multiplied by the fighting value of its
individual units." The fighting value of a unit may be derived from a "Lethality
Index = rate of fire per hour x targets per strike x relative incapacitating
effect x range factor x accuracy x reliability." The Lethality Index of a one-
megaton nuclear bomb is figured to be 695,385,000, versus 23 for a sword. These
indices lead naturally to a verbal description of Dupuy's ubiquitous
Quantified Judgment Model, which has been much used in NATO gaming. Allen agrees
with a General Accounting Office report calling such techniques "very squishy."
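
As a back-of-the-envelope illustration of how these indices combine, the Python
sketch below encodes the two formulas quoted above. Only the formulas follow
the text; the sample factor values are invented placeholders, not Dupuy's
figures.

    # Worked illustration of the two indices quoted above. Only the formulas
    # follow the text; the factor values below are invented placeholders.

    def lethality_index(rate_of_fire, targets_per_strike, incapacitating_effect,
                        range_factor, accuracy, reliability):
        # Lethality Index = the product of the six factors named in the text.
        return (rate_of_fire * targets_per_strike * incapacitating_effect *
                range_factor * accuracy * reliability)

    def fighting_strength(numerical_strength, unit_fighting_value):
        # Lanchester's law: fighting strength is proportional to the square of
        # numerical strength times the fighting value of the individual units.
        return unit_fighting_value * numerical_strength ** 2

    # A hypothetical hand weapon (placeholder factors):
    value = lethality_index(rate_of_fire=60, targets_per_strike=1,
                            incapacitating_effect=0.5, range_factor=1,
                            accuracy=0.75, reliability=1.0)
    print(value)                          # 22.5, near the sword's quoted 23
    print(fighting_strength(100, value))  # 225000.0 for a hundred such units

The "squishiness" the GAO complained of is visible even in this toy: every
factor is a judgment call, and the final product compounds whatever error each
factor carries.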

The omission of mathematics is wise, for the result is a highly readable
narrative packed with telling anecdotes. The following miscellany of vignettes
is representative. In a nuclear war game run from the Reagan White House, the
destruction of Moscow by U.S. missiles incited spontaneous applause. The
transcript of one Pentagon war game contained the proposal "Let's write off
Germany." The Bay of Pigs fiasco led McNamara to institutionalize Pentagon
wargaming. Until recently, it was taboo to have Navy carriers sunk in any war
game. The failure in a war game of a "wimp" commander to use military options,
such as going nuclear, could adversely affect his or her career path. Pearl
Harbor was gamed before the event by Japan. Daniel Ellsberg is credited with
first modeling nuclear deterrence. A mobilization exercise led to the draft
registration law of the 1980s. A game called TEMPER was a 1960s precursor to
RSAS; it was shelved after the failure of escalation in Vietnam.

When the head of SAGA was asked for an interview by the prestigious Journal of
the Armed Forces, an underling replied that "the head of SAGA has nothing he
can say that is not classified." War Games is an authoritative overview of a
subject as globally important as it is deeply hidden. The chilling message that
most strongly comes across is that the difference between game and reality,
between computer programs and nuclear war, is vanishing. Allen puts it this way:
"Pentagon games often take yesterday's events, merge them with today's planning,
and file the results away for tomorrow's action . . . Computerized war plans can
produce military contingency plans virtually on call. Players include the
commanders in chief (CINCs) of military commands throughout the world, hooked
into a complex computer system called SINBAC (System for Integrated Nuclear
Battle Analysis Calculus)... nuclear wargaming with automatons instead of human
beings is now a reality. 'The Joint Chiefs,' a former Pentagon civilian official
told me, 'want to give the CINCs high-grade tools... So the scenario equipment
could be the real war-fighting equipment at the same time.'"

Miscellaneous

CALL FOR PAPERS
For Premier Issue of The Journal of Computing and Society

Gary Chapman, editor
Published by Ablex Publishing Corporation

Ablex Publishing Corporation is beginning publication of a new international
scholarly journal on the social impact of computing technology, called The
Journal of Computing and Society. The journal will be a refereed academic
publication of quality material on the social implications of, and trends
resulting from, developments in computing and computerization. The journal's
premier issue will be available in late 1988.

The editor of this journal is Gary Chapman, executive director of Computer
Professionals for Social Responsibility. The international editorial board
includes, among others, Terry Winograd, Joseph Weizenbaum, Deborah Johnson,
Douglas Hofstadter, Rob Kling, Abbe Mowshowitz, Peter Neumann, Langdon Winner,
Margaret Boden, John Ladd, Brian Smith, Calvin Gotlieb, David Burnham, Alan
Westin, Jean-Louis Gassee, Susan Nycum, Lucy Suchman, and Zhisong Tang.

The first issue of the journal will be dedicated to addressing the question:

"Has There Been A Computer Revolution?"

For more information about this journal, including manuscript submission
requirements, call Gary Chapman at (415) 322-3778, or send a request for
information to P.O. Box 717, Palo Alto, CA 94301.


Parnas Given First
Norbert Wiener Award at
CPSR Annual Banquet

David Lorge Parnas, Professor of Computer Science at Queen's
University in Kingston, Ontario, Canada, has been awarded the first Norbert
Wiener Award for Professional and Social Responsibility by the Board of
Directors of Computer Professionals for Social Responsibility. The award will be
presented to Professor Parnas at the CPSR Annual Banquet.

The Norbert Wiener Award for Professional and Social Responsibility was
established by the Board of Directors of Computer Professionals for Social
Responsibility in 1987 in order to recognize extraordinary and exemplary
practice of the highest standards of social responsibility in the computing
profession. The award is given to someone in the computing field, or someone
dealing with issues in the computing field, who makes a significant personal
sacrifice for the sake of public safety, the reduction of risk, and the
maintenance of the highest example of professional conduct.

The award was named after Norbert Wiener, the famous MIT mathematician and
cyberneticist who died in 1964, because of Professor Wiener's well-known concern
for the responsible use of computers, and his opposition to the militarization
of science.


CPSR Newsletter
Advertising Policy

The CPSR Newsletter does not normally publish advertisements. Special
announcements for events, products, or services may be included in the
Newsletter at the discretion of the editor of the Newsletter, in consultation
with the publications committee of the CPSR Board of Directors. Such
announcements are published free of charge, although arrangements may be made
for in-kind exchanges, particularly with other publications. For more
information, call the CPSR National Office at (415) 322-3778.


CPSR Members,

Help Promote the CPSR Slide Show

Reliability and Risk: Computers and Nuclear War

If you are a CPSR member who wants to get more involved in your community and
help spread the message of CPSR, why not organize a showing of the award-winning
CPSR slide show, Reliability and Risk: Computers and Nuclear War? The half-hour
slide show is an excellent way to introduce audiences to the issues of computer
reliability and risks to the public in the national security field. It features
interviews with and comments by a number of CPSR members, as well as prominent
leaders of the opposition to the SDI, and current and former government
officials. The slide show is suitable for nontechnical audiences, so it is
appropriate for schools, church groups, and community and peace organizations.

The slide show is most effective when it is shown by someone from CPSR who can
help lead a discussion of the issues and answer questions from the audience. If
you are interested in helping organize a showing in your area, the CPSR National
Office can help you with suggestions, support material, and other assistance.
The slide show is available as a videotape in all video formats, and in two-
and six-projector versions. Contact Katy Elliot in the CPSR National Office at (415)
322-3778.
