

The CPSR Newsletter

Winter 1989

CPSR Responds to the Internet Computer Virus

On November 2nd, 1988, a computer program called a "worm" or a "virus"
spread to thousands of
computers across the United States and even overseas via the Internet,
a national computer network.
The program was called both a "worm" and a "virus": like a "worm," the
program did not attach itself to
another program and did not alter any data, but, like a computer
"virus," it was self-replicating.

The virus exploited a long-standing but relatively little known feature
in the Unix operating system
that allowed the programmer to funnel the virus' code through the Unix
electronic mail subsystem. But
a programming error apparently caused the virus to replicate much more
rapidly than intended,
clogging network capacity and bringing systems on the network to a
virtual standstill. Although the
virus appeared to do no permanent damage to the estimated 6,000
computers affected, it created a
massive national headache for systems operators who spent countless
hours removing it from their systems.

A few days after the virus flashed across the country, the programmer
came forward. It was revealed
that the creator of the virus was Robert Tappan Morris, Jr., a
22-year-old graduate student in
computer science at Cornell University. Morris, who is described by all
who know him as a brilliant
young computer scientist, is the son of the head of computer security
for the National Security Agency,
which added an unexpected twist to the story. News reports suggested
that the younger Morris created
the virus as an experiment to demonstrate the flaws of the primary
operating system used within the
Internet, Unix, including trapdoors in sendmail's implementation of the
Simple Mail Transfer Protocol (SMTP) and
in the user-status query program (finger).

This incident was extensively reported by the national and local press,
appearing as a front page story
for six days in a row in The New York Times. The CPSR National Office
was flooded with telephone calls
from reporters, and CPSR members and staff appeared on several radio
call-in shows and television
reports and were widely quoted in the press. The virus episode drew
widespread attention to the
vulnerability of computer networks that CPSR has been describing for
several years.

The Internet virus struck shortly before the 1988 CPSR Annual Meeting
held at Stanford University on
November 19 and 20. With so much interest in the incident, time was set
aside for meeting
participants to address and discuss the virus issue. Prior to the
meeting, CPSR members began working
on a statement about the virus using electronic mail. At the CPSR
Annual Meeting, a draft statement was
handed out to meeting participants, over 100 people, who elected to
stay late on Saturday afternoon to
discuss the virus problem. The discussion was wide-ranging and touched
on many issues of concern to
computer professionals. At the reassembled meeting on Sunday, all
participants at the meeting were
asked to review the draft and make comments. A final draft was then
prepared after it appeared to
reflect the majority opinions of the attendees at the Annual Meeting.
The statement was officially
endorsed by the CPSR Board of Directors at its meeting on November 21.
The statement has been
requested by computer systems administrators and has been circulated
throughout the computing community.

Following the CPSR statement on the computer virus, other professional
computer societies prepared
their own statements, addressing many of the same concerns raised by CPSR.

CPSR Statement on the Internet Computer Virus

The so-called computer virus that swept through a national computer
network, the Internet, in early
November is a dramatic example of the vulnerability of complex computer
systems. It temporarily
denied service to thousands of computer users at academic, business,
and military sites. An estimated
6,000 computers across the country were affected in only a few hours.
Fortunately, the program was
not designed to delete or alter data. The impact of a malicious virus
would have been immeasurable.

This was an irresponsible act that cannot be condoned. The Internet
should not be treated as a laboratory
for uncontrolled experiments in computer security. Networked software
is intrinsically risky, and no
programmer can guarantee that a self-replicating program will not have
unintended consequences.

The value of open networks depends on the good will and good sense of
computer users. Computer
professionals should take upon themselves the responsibility to ensure
that systems are not misused.
Individual accountability is all the more important when people work
together through a network of
shared resources. Computer professionals should establish and encourage
ethics based on the shared
needs of network users. We also encourage educators to teach these principles.

The questions of legal responsibility in this instance are ultimately
for our legal system to resolve. The
questions confronting computer professionals and others concerned about
the future of our technology
policy go well beyond this particular case. The incident underscores
our society's increasing dependence
on complex computer networks. Security flaws in networks, in computer
operating systems, and in
management practices have been amply demonstrated by break-ins at
Stanford University, by the
penetration of national research networks, and by the "Christmas virus"
that clogged the IBM internal
network last December.

CPSR believes that this incident should prompt critical review of our
dependence on complex computer
networks, particularly for military and defense-related functions. The
flaws that permitted the recent
virus to spread will eventually be fixed, but other flaws will remain.
Security loopholes are inevitable
in any computer network and are prevalent in those that support
general-purpose computing and are
widely accessible.

An effective way to correct known security flaws is to publish
descriptions of the flaws so that they can
be corrected. We therefore view the efforts to conceal technical
descriptions of the recent virus as short-sighted.

CPSR believes that innovation, creativity, and the open exchange of
ideas are the ingredients of
scientific advancement and technological achievement. Computer
networks, such as the Internet,
facilitate this exchange. We cannot afford policies that might restrict
the ability of computer
researchers to exchange their ideas with one another. More secure
networks, such as military and
financial networks, sharply restrict access and offer limited
functionality. Government, industry, and
the university community should support the continued development of
network technology that
provides open access to many users.

The computer virus has sent a clear warning to the computing community
and to society at large. We
hope it will provoke a long overdue public discussion about the
vulnerabilities of computer networks,
and the technological, ethical, and legal choices we must address.

Special thanks must go to the many people who worked hard, under
demanding time pressures during
the CPSR Annual Meeting, to stimulate discussion about the virus issue
and produce the statement.
Congratulations and thanks should also go to the more than one hundred
Annual Meeting participants who
displayed an exemplary democratic spirit in developing the statement.

For more information about the Internet virus of November, 1988:

Mark Eichin and Jon Rochlis, "With Microscope and Tweezers: An Analysis
of the Internet Virus of
November 1988," MIT Project Athena report, Cambridge MA 02139, November 1988.

John Markoff, "The Computer Jam: How It Came About," The New York
Times, November 11, 1988.

Donn Seeley, "A Tour of the 'Worm'," Computer Science Department,
University of Utah, November 1988.

Eugene H. Spafford, "The Internet 'Worm' Program: An Analysis," CSD-TR-
823, Department of
Computer Sciences, Purdue University, West Lafayette IN 47907-2004,
November 1988.

Consensual Realities in Cyberspace: Computer Viruses and Science Fiction
Paul Saffo, CPSR/Palo Alto

More often than we realize, reality conspires to imitate art. In the
case of computer viruses, the art is
"cyberpunk," a strangely compelling genre of science fiction that has
gained a cult following among
hackers operating on both sides of the law. Books with titles like True
Names, Shockwave Rider,
Neuromancer, Hardwired, Wetware, and Mona Lisa Overdrive are shaping
the realities of many
would-be virus experts. Anyone trying to make sense of the culture
surrounding computer viruses
should add the "cyberpunk" books to their reading list as well.

Cyberpunk got its name only a few years ago, but the genre can be
traced back to the publication of John
Brunner's Shockwave Rider in 1975. Inspired by Alvin Toffler's 1970
best-seller, Future Shock,
Brunner painted a dystopian world of the early twenty-first century in
which Toffler's most
pessimistic visions have come to pass. Crime, pollution and poverty are
rampant in overpopulated
urban arcologies. An inconclusive nuclear exchange at the turn of the
century has turned the arms race
into a brain race. The novel's hero, Nickie Haflinger, is rescued from
a poor and parentless childhood
and enrolled in a top secret government thinktank charged with training
geniuses to work for a
military-industrial Big Brother state locked in a struggle for global
political dominance.

It is also a world certain to fulfill the wildest fantasies of a 1970s
"phone phreak." A massive
computerized data-net blankets North America, an electronic
superhighway leading to every computer
and every last bit of data on every citizen and corporation in the
country. Privacy is a thing of the past,
and power and status are determined by the level of one's identity code.
Haflinger turns out to be the
ultimate phone phreak: he discovers the immorality of his governmental
employers and escapes into
society, relying on virtuoso computer skills (and a stolen
transcendental access code) to rewrite his
identity at will. After six years on the run and on the verge of a
breakdown from input overload, he
discovers a lost band of academic techno-libertarians who shelter him
in their ecologically sound
California commune and. . . well, you can guess the rest.

Brunner's book became a best-seller and remains in print. It inspired a
whole generation of hackers
including, apparently, Robert Morris of Internet virus fame. A recent
Los Angeles Times article
reported that Morris' mother identified Shockwave Rider as her teen-age
son's primer on computer
viruses and one of the most tattered books in young Morris' room.
Though Shockwave Rider does not use
the term "virus," Haflinger's key skill was the ability to write
"tapeworms," autonomous programs
capable of infiltrating systems and surviving eradication attempts by
reassembling themselves from
viral bits of code hidden about in larger programs. Parallels between
Morris' reality and Brunner's art
are not lost on fans of cyberpunk: one junior high school student I
spoke with has both a dogeared copy of
the book and a picture of Morris taped next to his computer. For him,
Morris is at once something of a
folk hero and a role model.

In Shockwave Rider, computer/human interactions occurred much as they
do today: one logged in and
relied on some combination of keyboard and screen to interact with the
machines. In contrast, second
generation cyberpunk offers more exotic and direct forms of
interaction. Vernor Vinge's True Names
was the first novel to hint at something deeper. In his story, a small
band of hackers manage to
transcend the limitations of keyboard and screen, and actually meet as
presences in the computer
network. Vinge's work found an enthusiastic audience (including MIT
professor of computer science
Marvin Minsky, who wrote the afterword), but never achieved the sort of
circulation enjoyed by
Brunner. It would be another author, a virtual computer illiterate, who
would put cyberpunk on the map.

The Appearance of "Cyberspace"

That author was William Gibson, who wrote Neuromancer in 1984 on a 1937
Hermes portable
typewriter. In this book, keyboards have disappeared; Gibson's
characters jack directly into
"cyberspace," "a consensual hallucination experienced daily by billions
of legitimate operators....a
graphic representation of data abstracted from the banks of every
computer in the human system.
Unthinkable complexity. Lines of light ranged in the nonspace of the
mind, clusters and constellations of data. . . ."

Just as Brunner offered us a future of the seventies run riot, Gibson's
Neuromancer serves up the
eighties taken to their cultural and technological extremes. World
power is in the hands of
multinational zaibatsu, battling for power much as the Mafia and yakuza
gangs struggle for turf today.
It is a world of organ transplants, biological computers and artificial
intelligences. Like Brunner's
book, it is a dystopian vision of the future, but while Brunner evoked
the hardness of technology,
Gibson calls up the gritty decadence evoked in the movie Blade Runner,
or of the William Burroughs
novel, Naked Lunch (alleged similarities between that novel and
Neuromancer have triggered
speculation that Gibson plagiarized Burroughs).

Gibson's hero, Case, is a "deck cowboy," a freelance corporate thief-
for-hire who projects his
disembodied consciousness into the cyberspace matrix, penetrating
corporate systems to steal data for
his employers. It is a world that Ivan Boesky would understand:
corporate espionage and double-dealing
have become so much the norm that Case's acts seem less illegal than
profoundly ambiguous.

This ambiguity offers an interesting counterpoint to current events.
Much of the controversy over the
Internet virus swirls around the legal and ethical ambiguity of Morris'
act. For every computer
professional calling for Morris to be punished, another can be found
praising him.

It is an ambiguity that makes the very meaning of the word "hacker" a
subject of frequent debate.

Morris' computer "worm" in no way corresponds to the actions of
Gibson's characters, but a whole new
generation of aspiring hackers may be learning their code of ethics
from Gibson's novels. Neuromancer
won three of science fiction's most prestigious awards (the Hugo, the
Nebula, and the Philip K. Dick
Memorial Award) and continues to be a best-seller today. Unambiguously
illegal and harmful acts of
computer piracy such as those alleged against Los Angeles hacker Kevin
Mitnick would fit right into the
Neuromancer story line.

Neuromancer is the first book in a trilogy. In the second volume, Count
Zero, titled after the code name
of a character, the cyberspace matrix becomes sentient. Typical of
Gibson's literary elegance, this
becomes apparent through an artist's version of the Turing test.
Instead of holding an intelligent
conversation with a human, a node of the matrix on an abandoned orbital
factory begins making achingly
beautiful and mysterious boxes, a 21st century version of the work of
the late artist Joseph Cornell.
These works of art begin appearing in the terrestrial marketplace, and
a young woman art dealer is
hired by an unknown patron to track down the source. Her search
intertwines with the fates of other
characters, building to a conclusion equal to the vividness and
suspense of Neuromancer. The third
book, Mona Lisa Overdrive, answers many of the questions left hanging
in the first book and further
develops the details of Gibson's imaginative world, including the
adoption by the computer network of
the personae of the pantheon of voodoo gods and goddesses worshipped by
21st century Rastafarians.

Aspiring Deck Cowboys

Hard core science fiction fans are notorious for identifying with the
worlds portrayed in their favorite
books. Visit any science fiction convention and you can encounter,
amidst the majority of quite normal
participants, a small minority of individuals who seem just a bit,
well, strange. The stereotypes of
individuals living out science fiction fantasies in introverted
solitude has more than a slight basis in
fact. Closet "Dr. Whos" or "warrior monks" (from the Star Wars movies)
are not uncommon in Silicon
Valley; I was once startled to discover over lunch that a programmer
holding a significant position in a
prominent company considered herself to be a "wizardess" in the literal
sense of the term.

Identification with cyberpunk at this level seems to be becoming more
and more common. "Warrior
monks" may have trouble conjuring up imperial stormtroopers to do
battle with, but aspiring deck
cowboys can log into a variety of computer systems as invited or, if
they are good enough, uninvited
guests. One individual I spoke with explained that viruses held a
special appeal to him because they
offer a means of "leaving an active alter ego presence on the system
even when I wasn't logged in." In
short, for this person computer viruses are the first step towards
experiencing cyberspace.

Gibson reportedly is leaving cyberpunk behind, but the number of books
in the genre continues to grow.
Not mentioned here are a number of other authors such as Rudy Rucker
(considered by many to be the
father of cyberpunk) and Walter Jon Williams, who offer similar
visions of a future networked world
structured by human/ computer symbiosis. In addition, at least one
magazine, Reality Hackers
(formerly High Frontiers magazine of drug culture fame), is exploring
the same general territory with
a menu offering tongue-in-cheek paranoia, ambient music reviews,
"cyberdelia" (contributor Timothy
Leary's term), and New Age philosophy.

This growing body of material is by no means inspiration for every
aspiring digital alchemist. I am
particularly struck by the "generation gap" in the computer community
when it comes to Neuromancer;
virtually every teenage hacker I have spoken with has the book, but
almost none of my friends over
thirty have picked it up.

Similarly, not every cyberpunk fan is a potential network criminal;
plenty of people read detective
thrillers without indulging in the desire to rob banks. But there is
little doubt that a small minority of
computer artists are finding cyberpunk an important inspiration in
creating an exceedingly strange
computer reality. Anyone seeking to understand how that reality is
likely to come to pass would do well
to pick up a cyberpunk novel or two.

Paul Saffo is a research associate at the Institute for the Future in
Menlo Park, CA.

CPSR/Boston Funds Five New Projects

CPSR/Boston has committed a portion of its treasury to support
independent CPSR projects. In
response to a call for proposals accompanying the last CPSR Newsletter,
CPSR/Boston received ten
proposals for projects, and five have been awarded money from the chapter treasury.

First, the chapter will support payments of $100 each to eight authors
of articles for The CPSR
Newsletter. These will be awarded at the discretion of the editor of
the Newsletter, in consultation with
the publications committee of the CPSR Board of Directors.

Nance Goldstein, of the Department of Economics at the University of
Southern Maine, was awarded
$975 to support her work on "The Economic Impact of Defense Department
R&D Spending on the U.S.
Software Industry." This work will be featured in a forthcoming article
in the Newsletter. Another
research project awarded $955 is being conducted by Professor Riva
Bickel of the Department of
Computer and Information Systems of Florida Atlantic University. Her
work is on teaching computer
ethics using cases from computer law.

The CPSR chapter in Washington, D.C., will be given $910 to help the
chapter effectively use the local
public access television station. CPSR/Maine, the organization's newest
chapter, will be working with
$150 from CPSR/Boston in order to study the ethical use of computers.

1988 CPSR Annual Meeting Largest and Most Successful Yet
David Bellin, CPSR/New York

More than four hundred people gathered November 19 and 20, 1988, for
two days of discussion and
debate during the 1988 CPSR Annual Meeting. This made the meeting at
Stanford University in
California the largest Annual Meeting held thus far by CPSR. And by the
account of many participants, it
was also one of the most informative and stimulating events they have
attended. Evaluation sheets
included comments such as "Great!"; "Thanks for a great weekend";
"Saturday sessions excellent: first-
rate speakers, exceptionally well-organized"; and "Excellent
conference, I'm joining!" The CPSR
Annual Meeting received a lot of press coverage, including an entire
page of stories in the "Computing"
section of the Sunday San Jose Mercury News, and stories in the San
Francisco Chronicle and other local
newspapers. A San Jose Mercury News story was picked up by the wire
services, and even made it to
the South China Daily News, which is read throughout China and
Southeast Asia.

Technical Challenges in Arms Control

The first talk of the meeting was given by Dr. Sidney Drell, co-
director of the Stanford Center for
International Security and Arms Control [Editor's note: Dr. Drell
resigned from this position two weeks
after the CPSR meeting]. Drell outlined a number of emerging problems
in the arms control field. First
is the issue of land-based missile survivability, which has come into
question following the deployment
of highly accurate Soviet nuclear forces. Concerns about the
survivability of the land-based leg
of the U.S. triad have led to a variety of proposals and programs,
including the Strategic Defense
Initiative, "rail garrison" basing of the MX missile, and the mobile,
single warhead Midgetman missile.
Drell said that while he supports in principle the stated goal of both
President Reagan and General
Secretary Gorbachev to eliminate nuclear weapons, he believes that for
the foreseeable future we will
co-exist with these weapons and we must therefore think hard about
their deployment so they are used
only for deterrence and never for war. For land-based missiles, this
means we should try to move away
from vulnerable, multiple warhead, highly accurate missiles that both
pose a significant threat to an
opponent and exist as lucrative targets. We should instead be taking
steps toward a deterrence force
based on survivable, single warhead missiles that are not lucrative
targets for a first strike, but which
can clearly be effective in a second strike, a basis for credible yet
nonprovocative deterrence.

The second technical challenge Drell said we will face in the coming
years is dealing with sea-launched
cruise missiles, or SLCMs. Cruise missiles are a problem for arms
control verification because they
can be used for either nuclear or conventional warhead delivery. SLCMs
are a particular problem
because they can be launched from ships close to the territory of a
superpower and thus be used for a
rapid, surprise first strike. Both the U.S. and the Soviet Union have
taken up cruise missiles as a
subject for arms control in the ongoing START talks, but the problem of
verifying compliance with a
limitation on SLCMs is daunting. Neither the U.S. nor the Soviet Union
is happy about the prospect of
having the other side's inspectors aboard navy ships or submarines, yet
it is probably easy to conceal
cruise missiles from remote sensing verification. Both the U.S. and the
Soviet Union are investigating
technical means of differentiating nuclear from conventional cruise
missiles by using identification
"tags" or radiation markers which can be sensed remotely. The
institutional cooperation required for
adequate verification of SLCMs will surpass any the U.S. and the Soviet
Union have developed so far.

Finally, Drell addressed the Strategic Defense Initiative, or SDI, the
program he has criticized in
public most strongly. He said that we can do reasonable research on
ballistic missile defense, but we
should be supporting and even extending the 1972 Anti-Ballistic Missile
Treaty, one of the most
important agreements in the arms control field. There is now a
significant opportunity to trade
concessions on the SDI program for cuts in Soviet weapons, and this is
something we should take
seriously. The SDI program as it now stands is wasteful, provocative,
and unlikely to lead to a system
that will enhance our security.

Technology, Work, and Authority in the Information Age

Next, Robert Howard, author of the book Brave New Workplace, spoke
about the role computer
professionals have in the massive transformation of the workplace by
information technologies. Howard
said that the introduction of complex computer and communications
technologies has three related
effects: first, it increases the distance between workers and top
management; second, it creates a
system that is more fragile and subject to breakdown; and, third, it
destroys the social systems that
support existing work environments.

Howard described to the audience the topics he feels CPSR should
address. First was what he called the
"technocratic fallacy," or the hope for perfect technical control of
the workforce. As an example of this,
he quoted a researcher at Bell Labs who proudly asserted his goal of
"replacing part of the person with a
machine." This perspective supports reliance on workplace monitoring,
which typically emphasizes
quantity over quality and characteristically creates a lot of stress in
monitored workers. Howard
contrasted the "technocratic fallacy" with his own view that failure is
commonly part of complex
systems, and the capability for human intervention is important and
must be designed into technological
systems from the start instead of being avoided.

Finally, Howard addressed the overall issue of what kind of economy,
society, and workplaces we want
to have. Howard feels CPSR is in a unique position to take up this
issue. He said, for example, that
computer-supported cooperative work is an important area for research
and funding in order to
develop an economy that is both productive and conducive to human needs
at work. Howard concluded
with three specific suggestions for CPSR action:

1) Work to safeguard individual privacy in the workplace;
2) Work with interest groups (such as unions) to help them understand
the impacts of computerization upon their members; and
3) Participate in state government efforts to promote innovation in the
use of technology by small businesses.

Women and Computers: Does Gender Matter?

In the final talk before lunch, Dr. Deborah Brecher, founder and
director of the Women's Computer
Literacy Program in San Francisco, gave a controversial presentation in
which she argued that women
and men exhibit different learning styles. Since most computer
professionals are men, this disparity
often keeps women out of the computer field and makes them
uncomfortable with computers. For
example, Brecher said, only two per cent of doctorates in computer
science are awarded to women. She
said the different learning styles she sees in men and women are the
result of the way males and females
are raised in our society, as opposed to biological origins.

Brecher described male learning as "rule-based," in contrast to female
learning, which she
characterized as more "holistic." "Female learners," who are not all
women, said Brecher, learn rules
as a result of understanding the functionality of the whole. She gave
several illustrations from her own
experience of how quickly women can learn and use computing equipment
and concepts when the
training they are given takes this learning style into account.

This talk was one of the most controversial of the day, as judged by
comments on evaluation forms. Some
people in the audience thought that the problem of the paucity of
representation of women in the
computing field, which is self-evident, is confused by rooting
"learning styles" in gender.
Whether or not we see these as gender-related, CPSR should address the
problems of how computer
education takes place, what prejudices are built into such education,
and how computer education might
contribute to any group's exclusion from the field. For example,
computer science education typically
continues to direct students into a curriculum of study that puts "hard
science" programming and
mechanics first. This does tend to force out many women (and some men)
because it can be impersonal
and devoid of human-centered values. What Brecher has observed as an
obstacle for the women in her
computer classes may be only one aspect of a field which could be
driving away the very people required
to make computer technology oriented toward social needs.

Privacy, Computers and the Law

An informative and provocative panel of four experts on criminal
justice computer systems in the
United States started off the afternoon sessions of the Annual Meeting.
The first speaker was William
Bayse, Assistant Director for Technical Services for the Federal Bureau
of Investigation (FBI). Bayse
described the operations of the FBI's NCIC (National Crime Information
Center), the large, national
computer system maintained by the FBI and used by criminal justice
authorities and others all over the
country. Bayse said that as of October 1988 the NCIC was handling an
average of 796,763 transactions daily,
748,255 of which were queries. Forty per cent of the queries were about
wanted persons, and 53% were of
the vehicles file. Only a third result in "hits," which in turn result
in local users or states directly
coordinating data transfer outside of the NCIC system.

Of interest was the fact that the NCIC system is written in assembly
language on a Hitachi mainframe
and has a team of 15 people dedicated to software maintenance. Bayse
said, "the current system is
fragile, the software is difficult to deal with." Ten million dollars
is spent annually on the NCIC system,
with 10% of this spent on an audit staff of ten people.

Bayse was followed by Jim Dempsey, a staff member for Congressman Don
Edwards, chairman of the
House Subcommittee on Civil and Constitutional Rights; Edwards himself
was unable to attend. After discussing the
role of the House subcommittee in the NCIC oversight process, Dempsey
explained some of his concerns
about the FBI's system. First he described the problem of widely
distributed access to the NCIC,
with nearly every police officer, prosecutor, probation officer, and
many others in the U.S. having
NCIC access. He said there is a general failure to verify or use
information correctly. FBI control of
NCIC also challenges a traditional dispersion of control to many parts
of government. Dempsey made a
strong call for limiting NCIC to public record information only,
coupled with limiting access to
criminal justice agencies. This contrasts with some calls to open NCIC
to private industry for
background checks (already in practice), and for the inclusion of
"tracking" files on individuals only
suspected of engaging in illegal activity. Dempsey also pointed out
that there are currently no statutory
limitations on the NCIC, and that congressional oversight pertains only
to the budgetary process.

Jerry Berman, chief legislative counsel of the American Civil Liberties
Union (ACLU) and director of
the ACLU Privacy and Technology Project, followed Dempsey. His concern
was the centralizing power of
the NCIC, and he cited the example of a "stopping file" which was used
to keep track of the movements of
4,000 anti-Vietnam War activists during that period. He posed the
interesting premise that it may be
impossible to reconcile an ever more efficient system with civil
liberties. In fact, many civil liberties
rest on governmental inefficiencies such as the Miranda warning or the
political structure of checks
and balances. Berman warned that many of the more controversial
recommendations for the new NCIC
2000 system, such as automated ties to other governmental computer
systems, have been rejected for
reasons of cost or time, not for privacy reasons. He also pointed out
that technology has overwhelmed
the constraints put on the system in the past. For example, the current
pointer system to state records
is functionally equivalent to the central file system that Congress
opposed in the 1960s. Berman also
discussed the techniques of the current tracking files used by states,
and asked rhetorically if the same
techniques could not be used to set up other kinds of tracking files as
well. Finally, Berman called on
computer professionals to find ways in which new technologies could be
used to enhance civil liberties
by creating systems that refuse incomplete or inaccurate records, which
have better auditing, and
which might empower citizens to access, correct, and purge data.

The final panelist was Dr. Peter Neumann of SRI International and
CPSR/Palo Alto. Neumann said that
CPSR has attempted to address the issues of data quality, security,
authentication, and audit trails. He
stated emphatically that today's systems are not secure. There is
misuse of the system by valid users,
severe authentication problems, and little user accountability. He
concluded by stating that there is
great risk in putting nonpublic records into the NCIC system. We also
need unique identification of
every user and, even though they are not completely secure, better
audit trails.

Following the panel there was a lively question and discussion session
involving the audience. There was
a call for an electronic Freedom of Information Act from some, while
one audience member questioned
why CPSR was discussing any of these concerns with an FBI official at all.

Living in the Future with Apple's Knowledge Navigator

The final panel of the day focused on the social assumptions and
implications of the technology shown in
the Apple Computer video story called Knowledge Navigator. This short
video portrays a professor who
enters his study to be greeted by the voice output of a notebook-sized
computer sitting on his desk, with
a bow-tied generic face as its talking icon. On the professor's voice
command this computer searches
unnamed databases, schedules appointments, places phone calls (combined
voice/data/ image), and does
simulations. Apple Computer has shown this video to audiences all over
the world, and, as one of the
panelists remarked, it is probably the resource in most demand for
audiences considering the future of
computing technology.

Larry Tesler, Apple Computer vice president for Advanced Technology,
was the first panelist to speak.
His feeling was that the hypothetical technology shown in the video
could be similar to "desktop
publishing." It will augment work, making it more efficient but at the
same time more demanding. He
raised the issue of a potential conflict between privacy and
convenience. We may want more freedom of
information or technologies that can call anyone, anywhere, at any
time, and capture data. However,
Tesler said, interruptions of personal time can be seen as an invasion
of privacy, as he has experienced
with both human and machine telemarketing systems using current
technology. Tesler warned of the
possibility of inter-computer "junk calling" on a grand scale. However,
on balance, he feels that
technology trends point toward the increased humanization of computers.

Esther Dyson, publisher of Release 1.0 newsletter and a respected
computer industry commentator, had
a sharply different view. She found the technology portrayed in the
video to be misleading, inaccurate,
and offensive. She objected to the anthropomorphising of the computing
machine by portraying it as a
talking human. Dyson believes that we want a computer to be just a
computer, "not a human face with a
bow tie." She said that if we are expected to believe that a computer
is a human-like agent, this is
insulting, while if people actually do believe that, it's frightening.
Dyson also asked who is liable for an
"intelligent agent's" decisions and actions: the user of the computer or
the manufacturer?

A stimulating and frequently humorous point of view was provided by the
Director of The Program on
Scholarly Technology at the University of Southern California, Dr.
Peter Lyman. He said the technology
portrayed in the video would probably increase the social isolation of
the people portrayed, rather than
connecting them. The computer in the video acts like a human slave in
that it can interpret the user's
will and act on this interpretation. There is a hidden dependence of
the user on this computer "slave."
Furthermore, the video doesn't show where the computer goes to get
data, how this data is verified, or
what drives the simulation shown in the story. It is as if the human
user is no longer concerned about
the calculative logic behind the computer display. For Lyman, personal
communication mediated
through a technology necessarily reduces the information flow between
people. He further asked why, if
we have a computer system as powerful as this, does the professor still
deliver formal lectures to a
room full of students? Why are the students portrayed as passive
receptacles? This technology could
instead be used to enable students to broaden their areas of learning
outside the university, perhaps
even abandoning the formal institution itself. Lyman's challenge to the
audience was to find ways
technology can be used to improve social relations.

Fernando Flores of Action Technologies in Emeryville, California,
provided a more global view of the
problems raised by the video presentation. His comments centered on
questions of a return to morality
and the American view of the world, in contrast to many of the
differing cultural concepts of technology
expressed, for example, in Latin America. Following his discussion, the
audience heard from Douglas
Engelbart, who replaced Theodore Roszak. Engelbart said he too was
concerned with the
anthropomorphising of the computer, and he suggested we are actually in
danger of becoming dependent
on nonhuman slaves. Talking computers with artificial intelligence
could just as well be aliens from
another planet, he said, and he suggested the talking head in the
computer should really be made to look
like "ET." Engelbart said that we should use technology to develop
skills, while the educator in the
Knowledge Navigator video looks as if he wound up with very few skills
and with most of the professor's
previous expertise transferred to the computer.

The CPSR Annual Banquet

After a ten-course Chinese banquet during the evening, Professor Joseph
Weizenbaum was presented
the 1988 Norbert Wiener Award for Professional and Social
Responsibility. The award was presented
by CPSR President Terry Winograd. (See the text of Winograd's
introduction on page 12.) The CPSR
Board of Directors presented Professor Weizenbaum with a Bulova clock
set in Hoya crystal, mounted
on a clear Lucite block inscribed with the name of the award and the

After the award presentation, the banquet attenders heard from Jim
Warren, founder of the West Coast
Computer Faire, Intelligent Machines Journal (which became InfoWorld),
and Dr. Dobb's Journal.
Warren, who titled his talk "Do Something!," spoke about the role
computer technologies can play in
political campaigns. He emphasized how a small amount of thought and
energy, well planned and
combined with relatively cheap computer equipment, can play a pivotal
role in electoral politics.

The Sunday Program: The State of CPSR

Sunday morning started with a presentation by Gary Chapman, CPSR
executive director, on the
financial health of the organization. Reports from CPSR's chapters were
introduced by Eric Roberts,
CPSR National Secretary. CPSR members from around the country exchanged
information on the wide
variety of activities underway in the chapters. This included numerous
public meetings with notable
speakers on diverse topics such as research funding, ethics, software
copyright law, computer viruses, and many more. Local chapters are involved in study groups, research on
computerized vote counting
systems, and projects to support nonprofit and community organizing
groups. The session was
concluded with a summary of CPSR goals by Steve Zilles, CPSR Chairman
of the Board.

Grassroots Organizing

One of the highlights of the Sunday meeting was the session on
organizing and building local chapters.
John Spearman, Associate Director for the Doctor's Council of New York
City, ran the two hour session.
This workshop was requested by many members around the country who felt
they needed specific
guidance in developing local chapters and CPSR activity. Spearman
challenged meeting participants to
redefine CPSR's role in order to have an impact on society in a broad
and fundamental way. He pointed
out that the world is clearly in a period of change and the opportunity
now exists to set expectations for
how CPSR intends to be viewed by the public. Invoking a slogan of the
60's from his own history, he
exhorted CPSR members to "seize the time" to project a vision in
cooperation with others with similar goals.

Beyond presenting a framework in which to understand local organizing,
Spearman discussed many
concrete, practical techniques. He drew out participants' own
experiences and questions, then helped
synthesize many general points latent in comments from the audience.
Most important among these was
the significance of interaction on a "one-to-one" basis with people in
work and community settings.
Spearman pointed out that CPSR's success with individual activity says
a lot about the organization, such as how members are valued and how their contributions and commitments (at whatever level)
are appreciated. Without this communication with individuals, CPSR
leaders will not understand what
motivates people to work with the organization. Other key aspects Spearman emphasized were the need for a "social" character in CPSR work and meetings, accountability of chapter leadership to the membership, two-way communication between the national and chapter levels, and a viable local and national media plan for responding quickly to issues such as the "worm/virus" incident. Finally, he explained how
important it is to understand
the political terrain in which CPSR exists and to build partnerships to
educate a public which is looking
for leadership on many issues involving technology.

Activist Workshops

The conference then dispersed into six workshops. The first four had
been planned, and these were on
Computing and Social Issues in Education, Computers and the Workplace,
Computers and the Military,
and Computers, Crime and Civil Liberties. Attendees also decided to
create workshops on research
funding in computer science and computerized vote counting. Reviewed
here are two of the workshops, though participants reported that all of them were valuable and could have used more time.

The workshop on computers and education was spirited and action-
oriented. There was discussion of the
large information gaps in this area, particularly evident in the
disparities between heavily funded
schools in better neighborhoods and schools attended predominantly by
the underprivileged. The
wealthier schools have lower student-to-computer ratios, more funds for
equipment, software and
training, and they tend to use computers to enhance problem-solving
skills. The poorer schools tend to
use machines for simple drill and practice problems and have more
students per computer. It was also
pointed out that there are many kinds of computer science program
curricula. There are specific needs
in elementary and secondary education and within universities different
kinds of computer science
courses are needed for different majors. For those concerned about the
teaching of ethics and social
responsibility in computer science courses, it was suggested that more of these subjects may be needed across all curricula.

This workshop came up with a long list of action items. Two national
newsletters will be initiated, one
aimed at university issues and another on the K-12 schools. Several
participants will be developing a
position paper on computer literacy, while others hosted a workshop on
these issues at Stanford on
January 22 (also see the enclosed questionnaire on computer education).

After some heated discussion and a debate on goals, participants in the
workshop on civil liberties
proposed that CPSR should attempt to construct a set of general
principles on civil liberties and
privacy for the construction and use of databases. The basis of this
would be the forthcoming CPSR
report analyzing the NCIC system. However, such an analysis will have
to be extended to look at
practices in the private sector. It was suggested that the IRS would be
a better focus than the FBI's NCIC
system. The proposed set of principles would enable grassroots
organizers around the country to
evaluate specific databases in order to see if they conform to CPSR
guidelines. There was some debate as
to whether such a set of principles could apply across applications,
and this remains to be seen.

Overall, the 1988 CPSR Annual Meeting was a tremendous success. It
illustrated the growing diversity
of issues within the CPSR program, but each issue was presented
clearly, provocatively, and
professionally. The meeting attracted many new members to the organization (over 100) and generated
a significant amount of press coverage. The meeting provided a useful
forum for members from all over
the country, and it appears to have stimulated several new and
important projects.

This year's meeting will be in Washington, D.C., in October. Members on
the East Coast who were unable
to attend the 1988 meeting should make plans to be at the 1989 CPSR
Annual Meeting in the capital.
The meeting should feature many prominent policymakers and it will give
the organization some
important exposure to people dealing with issues in the CPSR program
within the government.

David Bellin is assistant professor of computer science at the Pratt
Institute in Brooklyn, NY, and a
Member-at-Large on the CPSR Board of Directors. He thanks Eric Roberts,
Susan Suchman, Christine
Borgman, and Eric Gutstein for help in putting together this article.

Volunteers Made the CPSR Annual Meeting a Success

A tremendous amount of work was put into the Annual Meeting by CPSR volunteers. All CPSR members owe thanks to Bradley Hartfield, Co-Chair and Program Chair; Anita Borg, Co-Chair and Facilities Chair; Todd Newman, Facilities Coordinator; and Wayne Martin, Volunteer Coordinator. Also
special thanks go to the many volunteers for the meeting: Steve Adams,
Annamaria Avernter, Laura
Balcom, Dan Bloemberg, Dan Bloomday, Erick Blossom, Miriam Butt, Dan
Carnese, Carolyn Curtis,
Paul Czyzewski, Marilyn Davis, Frank Dohl, Oscar Firschein, Judith
Gilbert, Laura Gould, Tom Gruber, Kathy Hemenway, Rodney Hoffman, Mark Horovitz, Dave Kadlecek,
Joe Karnicky, Kate Morris,
Ross Nelson, Severo Ornstein, Priscilla Oppenheimer, Liam Peyton,
Sanford Rockowitz, John Sullivan,
Deborah Tatar, Larry Tesler, Fred Tonge, Ivan Tou, and Zona Walcott.
Special thanks are also due to Apple
Computer for their help in showing the "Knowledge Navigator" video.

Weizenbaum Presented Norbert Wiener Award

The following remarks were made by CPSR President Terry Winograd on the
occasion of presenting
Professor Joseph Weizenbaum with the Norbert Wiener Award for Social
and Professional
Responsibility at the CPSR Annual Banquet in Palo Alto, California, on
November 19, 1988.

Tonight we are presenting the second Norbert Wiener Award for Social
and Professional Responsibility.
The award goes to a person whose work in the field of computers
demonstrates the highest level of
commitment to the responsible use of computer technology. Last year's
award went to Professor David
L. Parnas, who has been tireless in informing the public about the
dangers and misrepresentations of
the SDI program. This year, we are proud to present our award to Joseph
Weizenbaum, professor
emeritus of computer science at the Massachusetts Institute of Technology.

The award is named in honor of Norbert Wiener, whose pioneering work in
cybernetics was one of the
pillars on which computer technology was created, and whose many
writings on computers and society
were among the first inklings of the problems and potentials that this
new technology would create. In
many ways, our recipient, Joe Weizenbaum, has followed a similar path.

Both spent the bulk of their working lives at MIT, beginning their
careers with technical
contributions, and then progressing in later years to a focus on the
social consequences of the
technology they had helped to create.

Both fought with a passion against the destructive madness of high
technology at the service of war and
destruction. Both wrote highly influential books about the problems of
humanity and technology,
moving beyond discussion of the machinery to a broad consideration of
human actions, values and ethical
responsibilities. Weizenbaum's Computer Power and Human Reason stands
alongside Wiener's books on
science and society as a powerful reminder that wisdom and technical
mastery are not the same, and
that we confuse them at our peril.

From his earliest writings, Joe was concerned about the relationship
between the computer and the
human. In a research document written in his early days at MIT, working
close to the nascent artificial
intelligence laboratory, he wrote, "The goal is to give to the computer
those tasks which it can best do
and leave to man that which requires (or seems to require) his

He has devoted many years and much effort to helping us understand that distinction.

From the point of view of CPSR, Wiener may be the patron saint, but
Weizenbaum had a much more
direct influence on the fact that we are here tonight. During his many
years of working with students at
MIT, he was a teacher to many of us, and his work stimulated the
thinking of many others who were not
fortunate enough to be in the same institution. I know that my own
concerns with social issues and the
ethics of computing were strongly influenced by my contacts with Joe,
beginning over 20 years ago. All
of us can trace some part of our concern back to Joe's vital influence.

Looking back, it is fair to say that Joe was out there ahead of us in
all of our major issues. In the panel
discussion today on the National Crime Information Center, Jim Dempsey
quoted testimony Joe gave
before a congressional committee over a decade ago on computers and
civil liberties. Paul Armer (who
has also done much valuable work himself) reminded me that Joe was one
of the founders of Computer
Professionals Against the ABM, which was a direct forerunner of our
program on the SDI. His concern
with the military domination of computer science has pervaded his
writings for many years, and is
expressed in an article published in our newsletter in the fall of

This isn't to say it has all been easy and comfortable. In fact I don't
think Joe would see "comfortable" as
a good word. He has spent much of his career making people
uncomfortable, and making it clear to them
why they should be. He sees how dangerous it can be for people to live
with their comfortable
presuppositions, making endless "progress" toward some unexamined goal.

Joe Weizenbaum challenges those of us with comfortable positions in the
computer profession to look
seriously at how our work is being used. In the newsletter article I
mentioned above, he said:

We now have the power to alter the state of the world fundamentally and
in a way conducive to life.

It is a prosaic truth that none of the weapon systems which today
threaten murder on a genocidal scale,
and whose design, manufacture and sale condemns countless people,
especially children, to poverty and
starvation, that none of these devices could be developed without the
earnest, even enthusiastic
cooperation of computer professionals. It cannot go on without us!
Without us the arms race, especially
the qualitative arms race, could not advance another step.

Does this plain, simple and obvious fact say anything to us as computer
professionals? I think so.

. . . Those among us who, perhaps without being aware of it, exercise
our talents in the service of death
rather than that of life have little right to curse politicians,
statesmen and women for not bringing us
peace. Without our devoted help they could no longer endanger the
peoples of our earth. All of us must
therefore consider whether our daily work contributes to the insanity
of further armament or to
genuine possibilities for peace.

Going beyond the question of computers, Joe has questioned some of the
most sacred dogmas of our
culture, including the pre-eminence of the rationality of science. He
challenges the assumption that
science can yield a complete understanding of the objects of its
studies, in particular that it can
ultimately account for "the whole human." He rejects the common view of
progress as an accumulation of abstract knowledge and material power. He reminds us
that in losing sight of human
values this quest can turn from progress to madness.

At times, Joe has been characterized by his critics as a Luddite, as having an irrational fear of all
having an irrational fear of all
science and technology. It is not surprising that such allegations
would come when someone dares to
question the sanctity of the modern scientific enterprise and to argue
that there is a more fundamental
kind of wisdom. I think a deeper reading gives a different perspective.
His criticism is not of
technology, but of our uses of technology. I will conclude with one
more quote from a paper Joe wrote a
few years back:

Perhaps the computer, as well as many other of our machines and
techniques, can yet be transformed,
following our own authentically revolutionary transformation, into
instruments to enable us to live
harmoniously with nature and with one another. But one prerequisite
will first have to be met: there
must be another transformation of man. And it must be one that restores
a balance between human
knowledge, human aspirations, and an appreciation of human dignity such
that man may become worthy
of living in nature.

It is an honor to present the Norbert Wiener Award for Social and
Professional Responsibility to
Professor Joseph Weizenbaum.

CPSR Chapters
January 1989


Ivan M. Milman
4810 Placid Place
Austin, TX 78731
(512) 823-1588 (work)


Steve Berlin
AI Architects, Building 400, One Kendall Square
Cambridge, MA 02139
(617) 625-2597 (home)


Don Goldhamer
528 S. Humphrey
Oak Park, IL 60304
(312) 702-7166 (work)


Randy Bloomfield
4222 Corriente Place
Boulder, CO 80301
(303) 938-8031 (home)

CPSR/Los Angeles

Rodney Hoffman
CPSR/Los Angeles
P.O. Box 66038
Los Angeles, CA 90066
(213) 932-1913 (home)


Deborah Servi
128 S. Hancock St., #2
Madison, WI 53703
(608) 257-9253 (home)


Betty Van Wyck
Adams Street
Peaks Island, Maine 04108
(207) 766-2959


David J. Pogoff
6512 Belmore Lane
Edina, MN 55343-2062
(612) 933-6431 (home)

CPSR/New Haven

Larry Wright
702 Orange Street
New Haven, CT 06511

CPSR/New York

Michael Merritt
294 McMane Avenue
Berkeley Heights, NJ 07922
(201) 582-5334 (work)
(201) 464-8870 (home)

CPSR/Palo Alto

Dan Carnese
19 Mercedes Court
Los Altos, CA 94022
(415) 949-4849 (home)
(415) 438-2009 (work)


Lou Paul
314 N. 37th Street
Philadelphia, PA 19104
(215) 898-1592 (work)


Ravi Kannan
5921 Nicholson Street
Pittsburgh, PA 15217
(412) 422-2439 (home)


Bob Wilcox
P.O. Box 4332
Portland, OR 97208-4332
(503) 246-1540 (home)

CPSR/San Diego

John Michael McInerny
4053 Tennyson Street
San Diego, CA 92107
(619) 534-1783 (work)
(619) 224-7441 (home)

CPSR/Santa Cruz

Alan Schlenger
419 Rigg Street
Santa Cruz, CA 95060
(408) 425-1305 (home)


Doug Schuler
P.O. Box 85481
Seattle, WA 98105

CPSR/Washington, D.C.

David Girard
2720 Wisconsin Ave., N.W. Apt. 201
Washington, D.C. 20007
(202) 965-6220 (home)

CPSR, P.O. Box 717, Palo Alto, CA 94301 (415) 322-3778

CPSR Contacts
January 1989

CPSR has six regional directors in charge of CPSR activity in their
regions of the country, as well as
representing their respective regions on the CPSR Board of Directors.
There are also CPSR contacts for
areas where there are no chapters. These people have volunteered as
contacts for members in the area
interested in starting a CPSR group.

Regional Directors

Middle Atlantic

Susan Suchman
433 Seventh Avenue #3
Brooklyn, NY 11215
(718) 686-7551 (work)


Hank Bromley
Martha's Co-op
225 Lake Lawn Place
Madison, WI 53703

New England

Dr. Karen R. Sollins
MIT Lab for Computer Science
545 Technology Square
Cambridge, MA 02139
(617) 237-6755 (home)


Dr. Jonathan Jacky
4016 8th N.E. #202
Seattle, WA 98105
(206) 634-0259 (home)
(206) 548-4117 (work)


Dr. Alan K. Cline
Computer Science Department
University of Texas
Austin, TX 78712
(512) 471-9717 (work)


Dr. Lucy Suchman
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94303



Dr. Karen M. Gardner
Info. Systems Dept.
Golden Gate University
536 Mission Street
San Francisco, CA 94105
(415) 442-7217 (work)


Phillip W. Hutto
788 Brookridge Drive
N.E. Atlanta, GA 30306
(404) 872-8404 (home)


Albert D. Rich
912 Ocean View Drive
Honolulu, HI 96816
(808) 735-8698 (home)


Dr. Henry Walker
Dept. of Mathematics
Grinnell College
Grinnell, IA 50112
(515) 236-8893 (home)


Herb Barad
Electrical Engineering Dept.
Tulane University
New Orleans, LA 70118-5674


Melanie Mitchell
Psychology Dept.
University of Michigan
427 S. 5th Avenue #3
Ann Arbor, MI 48104
(313) 763-5875 (work)
(313) 994-3726 (home)

Daniel L. Stock

1464 Timberview Trail
Bloomfield Hills, MI 48013
(313) 852-1621 (home)


Thomas Sager
Dept. of Computer Science
304 Math Science Bldg.
University of Missouri
Columbia, MO 65211
(314) 882-7422 (work)

Ted Lau

14386 Sycamore Manor Drive
St. Louis, MO 63017
(314) 532-9215 (work-home)

New York

Robert J. Schloss
279 East Lake Blvd.
Mahopac, NY 10541
(914) 945-1870 (work)
(914) 628-1041 (home)


Jon Weisberger
687 S. Cassingham Road
Bexley, OH 43209
(614) 895-4443 (work)
(614) 236-1674 (home)

Steven M. Roussos
2000 Eastman Drive
Milford, OH 45150
(513) 576-2551 (work)
(606) 341-8467 (home)

Leon Sterling
4516 College Road
South Euclid, OH 44121
(216) 368-5278 (work)
(216) 381-2861 (home)


Sally Douglas
87041 Greenridge Drive
Veneta, OR 97487
(503) 686-4408 (work)


Jim Gawn
321 Nevin Street
Lancaster, PA 17603
(717) 872-3667 (work)
(717) 393-6179 (home)


Guy T. Almes
3106 Broadmead Drive
Houston, TX 77025
(713) 527-8101 (work)


CPSR Foreign Contacts
January 1989

CPSR is in regular communication with the following individuals and
organizations concerned with the
social implications of computing.


Ottawa Initiative for the Peaceful Use of Technology (INPUT)
Box 248, Station B
Ottawa, Ontario K1P 6C4


Dr. Calvin Gotlieb
Dept. of Computer Science
University of Toronto
Toronto, Ontario M5S 1A4


Richard S. Rosenberg
Department of Computer Science
University of British Columbia
6356 Agricultural Road
Vancouver, British Columbia V6T 1W5


Australians for Social Responsibility in Computing (ASRC)


Graham Wrightson
Dept. of Computer Science
Newcastle University
Newcastle, NSW 2308

New Zealand

Computer People for the Prevention of Nuclear War (CPPNW)
P.O. Box 2
Lincoln College


Pekka Orponen
Department of Computer Science
University of Helsinki
Tukholmankatu 2
SF-00250 Helsinki

Great Britain

Computing and Social Responsibility (CSR)


Jane Hesketh
3 Buccleuch Terrace
Edinburgh EH8 9NB, Scotland

Philip Wadler
Department of CS
University of Glasgow
Glasgow G12 8QQ


Gordon Blair
University of Lancaster
Department of Computer Science
Bailrigg, Lancaster LA1 4YN


Mike Sharples
The University of Sussex
School of Cognitive Sciences
Falmer, Brighton BN1 9QN

West Germany

FIFF per Adresse Helga Genrich
Im Spicher Garten #3
5330 Koenigswinter 21
Federal Republic of Germany


Informatici per la Responsibilita Sociale (IRS-USPID)
Dr. Luca Simoncini
Istituto di Elaborazione dell'Informazione, CNR
Via Santa Maria 46
1-56100 Pisa

Ivory Coast

Dominique Desbois
Centre d'Information et d'Initiative sur l'Informatique (CIII)
08 BP 135 Abidjan 08
Ivory Coast, West Africa

South Africa

Philip Machanick
Computer Science Department
University of the Witwatersrand
Wits 2050, Johannesburg,
South Africa


Dr. Ramon Lopez de Mantaras
Center of Advanced Studies
17300 Blanes, Girona


David Leon
c/o The Population Council
P.O. Box 1213
Bangkok 10112


Student Pugwash Meets in Boulder in June

The Sixth Biennial Student Pugwash International Conference will meet
June 18-24, 1989, at the
University of Colorado at Boulder. This meeting assembles ninety
students from around the world for
discussions on the impact of science and technology on society.

All undergraduate, graduate, and professional students are eligible to
compete for the places available.
Competition is based on merit. All on-site costs are covered by Student
Pugwash; some travel subsidies
are available.

Contact Kathryn Janda, Conference Coordinator, Student Pugwash USA,
1638 R Street, N.W., Suite 32,
Washington, D.C. 20009, or call (202) 328-6555.

The Search for An "Electronic Brain"

An Introduction to Neural Networks
by Tyler Folsom, CPSR/Seattle

The first computers were called "electric brains" and from the start
there was a widespread popular
conception that computers could think, or that scientists would make
that happen very soon. The reality
has of course been far different. A human programmer must specify in
nit-picking detail exactly how a
task is to be done. If the programmer forgets anything or makes a
mistake, the computer will do
something "stupid," or perhaps unforeseen.

"Artificial Intelligence" (AI) has been a popular buzzword for decades, but it remains ill-defined. AI has
produced some useful expert systems, good chess-playing programs, and
some limited speech and
character recognition systems. These remain in the domain of carefully
crafted algorithmic programs
that perform a specific task. A self-programming computer does not
exist. The Turing test holds that a machine is intelligent if, in conversing with it, one cannot tell whether one is talking to a human or a machine. By this criterion, artificial
intelligence does not seem any closer to
realization than it was thirty years ago. The Department of Defense
poured a great deal of money into
the Strategic Computing Program to develop specific military
applications of AI. The Pentagon has been
disappointed in the results and is now looking at other approaches. If
we are to build an electronic
brain, it makes sense to study how a biological brain works and then
try to imitate nature. This idea has
not been ignored by scientists, but, unfortunately, real brains, even
those of "primitive" animals, are
enormously complex. The human brain contains over ten billion neurons,
each capable of storing
more than a single bit of data. Mainframe computers are approaching the
point at which they could have
a comparable memory capacity. While computer instruction times are
measured in nanoseconds,
mammalian information processing is done in milliseconds. However, this
speed advantage for the computer is outweighed by the massively parallel structure of the
nervous system; each neuron
processes information and has a bewildering maze of interconnections to
other neurons. Multiprocessor
computers are now being built, but making effective use of hundreds of
processors is a task that is still
on the edge of computer theory. No one knows how to handle a billion
processors simultaneously.

It may well be that the best approach to achieving true machine
intelligence is to start over and throw
out traditional machine architectures and programming. Artificial
neural systems represent one such
approach. These take their inspiration from biological nervous systems.
Since the brain's wiring and
organization are mostly unknown, the modelling is inexact by necessity.
These systems are
characterized by massively parallel simple processors, which are
referred to as neurons. Each neuron
has many inputs and one output, the latter of which is activated when
the sum of the inputs passes some
threshold. The connections between neurons have different strengths.
This connection scheme and its
weights determine how the system processes information. The weights can
be either positive
(excitatory) or negative (inhibitory). The collection of neurons and
their weighted interconnections
form a network. Certain neuron input signals are considered to come
from the outside world. This
pattern is designated as the network input. Similarly, some neuron
outputs are considered as the output
pattern of the network. The network can be viewed as an associator of
stimulus and response patterns.
Implementation can be electronic, optical, biological, or via computer
simulation. Simulation is by far
the most common at present. When a connectionist system is executed as
a program on a digital
computer, the speed advantage of massive parallelism is lost and the
computation is likely to be less
efficient than other methods. Thus specialized hardware will be needed
to achieve optimum
performance. In an electronic implementation, the processing elements
are op amps which are
connected by resistors. The values of the resistors affect the weights.
Since there are no negative
resistors, the op amps need to supply both positive and negative outputs to allow both excitation and inhibition.
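This weighted-sum-and-threshold behavior can be sketched in a few lines of Python (the weights and threshold below are illustrative values, not drawn from any particular system):

```python
def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Two excitatory (positive) inputs and one inhibitory (negative) input:
print(neuron([1, 1, 0], [0.6, 0.6, -1.0], 0.5))  # excitation wins -> 1
print(neuron([1, 1, 1], [0.6, 0.6, -1.0], 0.5))  # inhibition suppresses -> 0
```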

Information is distributed throughout the network. Data is represented
by the pattern of activation of
the network. The on/off state of a single neuron has no obvious
interpretation. It is only by looking at
the overall pattern of activation that the machine state can be
understood and it is usually difficult to
figure out how the computation is made.

It is often useful to organize neural networks into layers. A typical
example is a pattern recognition
system. Assume that we want to recognize binary vectors of N bits each.
These vectors fall into M
disjoint classes. We construct a network that has N connections to the
outside world to handle the input
vectors. Our network will have M outputs, one for each class. When we
apply an input vector, only the
output unit for the right class will turn on. Such a pattern
recognizer might be built by assigning N
op amps to the input layer. Each of these has only one input, but the
output is connected through
appropriate resistors to each of the op amps of the next layer. This
layer is called "hidden" since none
of its inputs or outputs can be observed externally. The outputs from
the hidden layer travel via
various resistors to the inputs to the final, or "output" layer. The
states of the outputs from this last
layer are presented to the outside world as the class to which the
pattern belongs.
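The layered recognizer just described can be sketched as follows. The two-bit task and all the weights are hand-chosen for illustration; an electronic version would realize each layer as a bank of op amps joined by resistors:

```python
def step(x):
    """Threshold unit: fire when the summed input is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weight_rows, biases):
    """One feed-forward layer: each row of weights drives one neuron."""
    return [step(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weight_rows, biases)]

def classify(bits):
    # Hidden layer: one detector for "both bits on", one for "not both on".
    hidden = layer(bits, [[1.0, 1.0], [-1.0, -1.0]], [-1.5, 1.5])
    # Output layer: one unit per class, wired straight through.
    return layer(hidden, [[1.0, 0.0], [0.0, 1.0]], [-0.5, -0.5])

print(classify([1, 1]))  # class 1 unit on: [1, 0]
print(classify([1, 0]))  # class 2 unit on: [0, 1]
```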

There are many ways in which networks can be organized. The above
example used feed-forward layers.
Other topologies permit neurons to be self-excitatory or feed back to a
lower level. Most artificial
nervous systems investigated to date use homogeneous processing
elements, but in natural systems
there are many different kinds of nerve cells. It is important to match
the network architecture to the
specific problem.

Coming up with the weights to make a network perform its desired task
is difficult. The usual approach
is to initialize the weights to some random values and then apply a
learning algorithm to modify the
weights. During training, the network is given a set of inputs and
desired outputs. As each input is
applied, the resulting outputs are compared to the desired ones. If an
output neuron has an incorrect
value, each of the contributing weights is modified slightly. After
many iterations through the training
set, the weights will be configured so that proper responses are
obtained. At least this is what usually
happens. Sometimes the network does not converge.
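A minimal sketch of this training procedure, assuming a single threshold unit and a simple perceptron-style update rule; the task (learning logical OR) and the learning rate are illustrative choices:

```python
import random

random.seed(1)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]  # random initial weights
bias = random.uniform(-0.5, 0.5)
rate = 0.1

training_set = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def output(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

for _ in range(100):                     # many passes through the training set
    for x, desired in training_set:
        error = desired - output(x)      # 0 if correct, +1 or -1 if wrong
        for i in range(2):               # modify each contributing weight slightly
            weights[i] += rate * error * x[i]
        bias += rate * error

print([output(x) for x, _ in training_set])  # [0, 1, 1, 1]
```

Because OR is linearly separable, this loop is guaranteed to converge; the convergence problems mentioned above arise with harder tasks and multilayer networks.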

The neural network approach can have some unexpected side effects,
properties whose achievement in
traditional computers is difficult. One is the ability to generalize. A
network can solve a problem that it
has not previously encountered based on similarities with what it does
know. For instance, consider the
behavior of our pattern classifier when it is presented with an unknown
pattern. It will activate the
category for the most similar pattern. Its idea of similarity may or
may not coincide with a human's. If
the new pattern is ambiguous, it may make a guess by giving weak
outputs for two possible categories.
Neural networks tend to fail gracefully; if a wrong answer is produced
it is generally a plausible
response based on the training received. Like the brain, a network can
continue to function despite the
failure of part of the system. This is a consequence of distributed
data storage; if some cells fail, the
remaining ones still retain enough of the pattern to function.

Crick and Mitchison [83] have speculated that neural networks may need
to sleep, dream and forget in
order to function properly. Dreams can be interpreted as patterns
activated by random noise instead of
by the appropriate inputs. It has been suggested that during sleep,
connection weights may be modified
to decrease the probability of spurious states. Neural modelling
permits fascinating interplays between
engineers and biologists. A biological structure may inspire a computer
simulation of a simplified
neural network. The model in turn may predict behavior which the
neurologist has not observed and
which may be verified or disproved by further experimentation.

Conspicuously absent from neural networks is any logical processing
unit or von Neumann-style
sequential program. These systems seem able to respond to the gestalt
of what they see without needing
to make laborious series of inferences. Learning is one of the most
far-reaching promises of neural
networks. There is no longer a programmer in charge of the machine, but
a teacher. Would it be
possible to build a machine that can learn from experience the way a
child does? The consequences of
generalizing from a lesson may be unpredictable, with the machine reaching conclusions that were never intended.

A Brief History of Neural Networks

Interest in neural networks has seen explosive growth in the last five
years. The field does have a
longer history. The first paper to lay out the theoretical foundation
of neural computing was written by
McCulloch and Pitts in 1943 [McCulloch, 43]. This made the assumption
that biological neurons are
binary devices and proposed a logical calculus by which the nervous
system works. Von Neumann cited
this work in the 1945 paper that first proposed a stored program
computer. In a later work [von
Neumann, 58] he calculated human memory capacity to be on the order of
10^20 bits and pointed out
that the limited resolution of nerve cells restricts the types of
calculations that the brain can do;
multilayered arithmetic would be severely degraded by roundoff errors
but logical calculations would
not face this difficulty. Connectionist learning theory can be traced
to Donald Hebb's statement of a
physiological learning rule for synaptic modification [Hebb, 49].
Connection strengths will increase
between two nerve cells as a result of persistent excitation. In 1958
Rosenblatt, a psychologist,
introduced the perceptron, which was the first precisely specified
computationally oriented neural
network [Rosenblatt, 58]. He felt that the brain is not a logic
processing machine but a learning
associator that functions by matching responses to stimuli. The
perceptron caused a major stir and
received widespread analysis. In 1969, Minsky and Papert published the
book Perceptrons, which
demonstrated that these neural networks were incapable of learning to
perform a function as simple as
an exclusive OR. Disillusionment set in and research funds for neural
networks dried up.

Not everyone abandoned the field. In Finland, Teuvo Kohonen extended
the ideas of the perceptron to
associative memories using linear analog signals instead of binary.
Stephen Grossberg has also been
active in the field for nearly 20 years and has published extensively.
His articles tend to be difficult
reading since they are seldom self-contained and use a high degree of
mathematical sophistication. In
the early 1980's Rumelhart, McClelland, Hinton and other colleagues at
UCSD began the parallel
distributed processing (PDP) discussion group to investigate the
relationship between connectionist
models, perception, language processing and motor control. Their book
is probably the most
comprehensive treatment of neural networks. An accompanying workbook
includes floppy disks with
MS-DOS C programs for simulating several types of PDP systems. Use of
nonlinear processing
elements grouped into multiple layers has since overcome the exclusive OR limitation.
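The exclusive-OR objection applies to single-layer networks; a hidden layer of nonlinear units removes it, as this sketch with hand-chosen (illustrative) weights shows:

```python
def step(x):
    """Nonlinear threshold unit."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)            # hidden unit: at least one input on
    h2 = step(a + b - 1.5)            # hidden unit: both inputs on
    return step(h1 - 2 * h2 - 0.5)    # output: "at least one, but not both"

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

No single layer of threshold units can compute this mapping; the hidden units recode the inputs so that the output unit's problem becomes linearly separable.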

In 1982, John Hopfield published a paper that brought neural networks
back into vogue [Hopfield, 82].
This unified previous work into a coherent mathematical theory and
pointed the way to building
asynchronous parallel processing integrated circuits. In subsequent
papers Hopfield indicated how
neural networks could come up with a good, if not optimal solution to
the combinatorially intractable
travelling salesman problem. He also showed how his network theory
could be used to build circuits
such as a digital to analog converter from artificial neurons. By 1987
AT&T Bell Laboratories
announced neural network chips based on the Hopfield model. Today the
neural net researcher can
choose from a wide selection of conferences, books, articles, software
packages, and specialized hardware.


Bernard Widrow and Marcian Hoff's 1960 paper "Adaptive Switching
Circuits" improved the learning
speed and accuracy of the perceptron. This was applied to the ADALINE
(originally ADAptive LInear
NEuron, but changed to ADAptive LINear Element as perceptrons lost
favor). It was a threshold logic
device with inputs and outputs set to states of +1 or -1. Learning of
the weights continued even after
correct classification had been achieved. Widrow's adaptive noise
reduction systems have been widely
applied throughout the telecommunications industry. Each telephone line
has a different transfer
characteristic, but these circuits can adjust the input signal to
maximize the signal to noise ratio for
whatever state the line is in. Since Widrow has been in the field
longer than almost anyone and has
enjoyed commercial success, it is not surprising that he was chosen to
write the recent neural network
study for the Defense Advanced Research Projects Agency (DARPA).

Terry Sejnowski has demonstrated a system that learns to pronounce
English text [Sejnowski, 87]. The
network is presented with a string of seven letters and asked to select
the correct phoneme for the
central letter. The resulting speech had good intelligibility and
mistakes tended to be similar to those
made by children. The more words the network learns, the better it is
at generalizing and correctly
pronouncing new words. Performance degrades very slowly as the network
is damaged; no single
element is essential. The network had 203 input units, 80 hidden
elements, and 26 output units. There
were 18,629 weights assigned in the learning process.

In Japan, Kunihiko Fukushima of NHK Laboratories has developed the "neocognitron" over the past 20
years. It is capable of recognizing hand written characters after being
trained with the letters of an
alphabet. It responds correctly even to letters that are shifted in
position or distorted. NHK claims to
have solved the problem of perception and says that it could build an
artificial brain if it had a good
model of short-term and long-term memory [EE Times, April 6, 87].

Nabil Farhat has built a content addressable associative memory based
on Hopfield's model. Farhat has
used optics instead of electronics to achieve parallelism and massive
interconnectability. When this
memory is combined with a high resolution radar it produces images that
can resolve to 50 cm on a full
sized aircraft. It can match airplane types to a library of
categorizers which are stored on photographic
film. Correct classification can be done using as little as 10% of the
radar's full data set. Farhat's
research team predicts the ability to identify aircraft at ranges of a
few hundred kilometers. A 5 by 5
bit optical neural network has been built; classification results were
obtained by simulating a 32 by
32 bit associative memory. The radar has been tested on model airplanes
in a laboratory [Electronics,
June 16, 86] [Farhat, 85].

Problems with Neural Networks

"The probability of building a neural network, pointing it at the world
and having it learn something
useful is about as close to zero as anything I can think of on this
earth," said Dr. Leon Cooper (EE
Times, Nov 30,87). Dr. Cooper won the 1972 Nobel prize for work in
superconductivity and is co-
chairman of Nestor, a company selling neural network systems for
character recognition, financial
analysis, and other tasks. He is concerned that inflated expectations
could kill off neural networks. "The
second thing that could defeat us is not being able to put together
working systems that actually do
something in the real world.... What we must strive for now are
practical systems that do useful things,
and cards-on-the-table explanations of what they do."

One problem with neural networks is the difficulty of verifying and
reproducing published results.
Since much neural modeling consists of ad hoc computer algorithms, it
is hard to recreate published
systems. Two different computer implementations of the same algorithm
may produce different results.
Even when the same software is used, there are dozens of parameters
which represent assumptions that
most authors omit mentioning. This situation makes it easy for overly
optimistic claims to go unchallenged.

Several learning algorithms have been suggested for neural networks,
but none is entirely acceptable.
One would want to learn the same way that the brain does, but no one
knows the brain's learning
algorithm. The most widely used learning algorithm is "back
propagation" and it is not biologically
plausible. It usually converges to a good solution but is slow because non-trivial networks must be non-linear and multilayered. The input states and
desired outputs are known and it is
relatively easy to vary the weightings to achieve a desired state. The
problem is that the states of the
interior layers are not known and must be discovered as part of
learning [Hinton, 87].
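A minimal back-propagation sketch; the network size, learning rate, and iteration count are illustrative choices, not any published configuration. Errors observed at the output are propagated backward to discover how the hidden-layer weights should change:

```python
import math
import random

random.seed(0)
sig = lambda x: 1 / (1 + math.exp(-x))  # smooth nonlinearity

# 2 inputs -> 2 hidden units -> 1 output; each row includes a bias weight.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR task
rate = 0.5

def forward(x):
    xi = x + [1.0]                                       # append bias input
    h = [sig(sum(w * v for w, v in zip(row, xi))) for row in w_h]
    o = sig(sum(w * v for w, v in zip(w_o, h + [1.0])))
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                      # output error signal
        for j in range(2):                               # propagate error backward
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])
            for i in range(3):
                w_h[j][i] -= rate * d_h * (x + [1.0])[i]
        for j in range(3):
            w_o[j] -= rate * d_o * (h + [1.0])[j]
after = loss()
print(after < before)  # the squared error shrinks during training
```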

The ability to learn new responses to new situations is often presented
as an advantage for neural
networks. The long learning times required detract from this claim.
Furthermore, using the back
propagation method requires that relearning include all the original
training data; if only new data are
used, old knowledge would gradually be erased. Grossberg's Adaptive
Resonance Theory (ART)
overcomes this by using an attention-focusing mechanism. In ART, novel
situations arouse attention and
these can be learned without wiping out old patterns. However ART
cannot learn as many
representations as back propagation.

A more serious problem is scaling. Most of the systems demonstrated to
date are relatively small,
containing dozens or hundreds of neurons. To do applications such as
speech or image understanding
would likely require an additional four orders of magnitude.
Unfortunately, as the number of nodes in a
network increases, the learning time required increases nonlinearly.
The exact relationship of network
size to learning time is not known but it appears to be proportional to
the square or cube and may even
be exponential.

This leads to the application of complexity theory to learning in
neural networks. Les Valiant developed
a formal theory of what it means to learn from examples [Valiant 84].
By his definition, a problem is
solvable by learning only if there is a polynomial time algorithm that
learns acceptably well from a
polynomial number of examples. By polynomial, we mean that as the size
of the problem increases, the
learning time or number of examples needed can increase linearly, by
the square, or by the cube, etc.,
but not exponentially. It turns out that there is a large class of
practical problems that are not
learnable. These difficult problems are called "NP complete" and
include scheduling, graph coloring,
and satisfiability of Boolean formulas. There is a theorem stating that
if an algorithm can be found to
solve any NP complete problem, then any other NP complete problem could
be solved quickly using that
algorithm. Since programmers have been unable to find effective
solutions despite sustained efforts, it
appears that practical solutions to any NP complete problem are
impossible. Pitt and Valiant [88] have
shown that a threshold weight assignment problem similar to learning in
neural networks is not
learnable (unless it turns out that NP complete problems are
solvable). This means that there is no
polynomial bound to the learning time required to assign weights. Thus
learning by example belongs to a
class of hard problems. There is no theory describing what kinds of
problems are easily learnable or
which network architecture is appropriate for a given problem.

Connectionists are not likely to let theory stand in their way. After
all, the brain does learn. We need a
better understanding of what is involved in human learning. Learning in
mammals is highly constrained
by genetics. It may be that the genetic code includes built-in assumptions
about the world that have not yet
been incorporated into artificial neural systems. Modular organization
may be a fruitful approach. The
number of weights needed would drop drastically if the system consisted
of several kinds of building
blocks linked together. Each module type would have a fixed pattern of
internal weights.

A final difficulty is the lack of parallel computing hardware for
neural networks. The usual strategy is
to dedicate a hardware element to each node of the network. Electronic
designs for neural networks
usually consist of arrays of operational amplifiers interconnected by
resistors. Adaptive neural
systems learn and store information by varying the weights of the
interconnections. Variable weight
resistors are difficult to implement in integrated circuits but fixed
weight resistances can be made in
silicon. Scientists at Bell Laboratories have built such a chip
containing 256 neurons [Graf, 86].
However, due to the complexity of debugging a large analog chip, this
project has never been completed.

If one attempts a direct implementation of neural models in silicon,
with each neuron dedicated to an
individual processor, interconnection space becomes a major problem. In
fact, the routing and
interconnect space eats up most of the silicon area and makes it
impossible to fabricate a system of
10,000 processors and 5,000,000 interconnections with a network update
time of better than 100
microseconds unless the connectivity is extremely localized [Bailey and
Hammerstrom, 1988].
Researchers at the Oregon Graduate Center are working on a hybrid
analog-digital multiplexed wafer
scale neurocomputer. They claim to be able to complete a machine of the
above size and speed within
five years and manufacture it for less than $2,000.

DARPA and Neural Networks

The Defense Advanced Research Projects Agency recently proposed an
eight year, $390 million neural
network research program. The project would attempt to use a massively
parallel architecture
inspired by the neurophysiology of the human brain to solve military
problems that have not yielded to
traditional artificial intelligence approaches. DARPA sees
connectionist systems as the only alternative
to the disappointing results achieved to date from the Strategic
Computing Program.

DARPA Deputy Director of the Tactical Technology Office Jasper Lupo has
said, "I believe that the
technology we are about to embark on is more important than the atomic
bomb.... The future of machine
intelligence is not AI" [EE Times, Aug 8, 88].

DARPA identified seven defense applications that may use neural networks:

• Strategic relocatable target detection from satellite optical and infrared sensors

• Detection of quiet submarines using a sonar array processor

• Electronic intelligence target identification from radar pulse trains

• Battlefield radar surveillance with synthetic aperture radar

• Battlefield infrared surveillance with passive infrared sensors on low-altitude aircraft

• Detection of "stealth" aircraft using infrared search and track

• Multisensor fusion with all sensors integrated by satellite

DARPA's neural network effort is based on a study by Bernard Widrow of
Stanford and Al Gschwendtner
of Lincoln Laboratory that runs over 600 pages. The report envisions
giving sophisticated brains to
weapons. These should be able to interpret radar signals and recognize
friend or foe and be able to
identify their own targets. In five years it is hoped the technology
will be available for a computer that
has the adaptability and information processing capability of a bee.
"Bees are pretty smart compared to
smart weapons," DARPA deputy director Dr. Craig Fields has said. "Bees
can evade. Bees can choose
routes and choose targets" (New York Times, Aug. 18, 1988). While the
report has recommended
robotics as the most promising neural network application, DARPA is
more interested in the military
tasks listed above. Present U.S. government funding of neural network
research is at the level of
approximately $5 million annually. The original DARPA proposal is for
spending $40 million in 1989,
peaking at $100 million in 1991.

Japan's Ministry of International Trade and Industry (MITI) is spending
only $183,000 on neural
network development for 1989. MITI is presently devoting resources to
its Fifth Generation Computer
System, which seeks to make traditional AI techniques operate in
parallel. This project has met with
mixed success and its goals have been scaled back. When it ends in
1991, it expects to have a 1,000
processor inference engine doing 1 billion logical inferences per
second. Yuji Tanahashi of MITI has
promised that there will be a sixth generation neural network program
"in full swing three years after
the end of the fifth generation program". Thus the Japanese are looking
at a 1990-2000 time frame for
heavy neural network research, which is similar to the DARPA schedule
[EE Times, Dec 12, 88].

West Germany has already committed $67 million for a ten-year program
in neural network
technology. The government of the Netherlands has proposed a $2.5
million seed program. Another seed
program of $6.4 million called Applications of Neural Networks for
Industry in Europe (ANNIE) has
been announced by the European Strategic Programme for Research in Information Technology (ESPRIT).

For CPSR, the DARPA effort brings to mind previous ill-considered
Pentagon forays into computer
science research. The Strategic Computing Program poured large sums of
money into artificial
intelligence while moving DARPA funded research away from generic
research toward specific military
applications. The Strategic Defense Initiative is a high tech gravy
train with little prospect of
feasibility due to extremely demanding software requirements and the
impossibility of adequately
testing the system. Will DARPA's program overwhelm American neural
network researchers with so
much money that useful applications are relegated to the Japanese while
we chase military pipe
dreams? Will much of the research become classified and drop out of the
scientific main stream? Is
this another boondoggle that will waste large amounts of taxpayer money
in the rush to deploy an
immature technology? Will inflated expectations for neural networks
destroy the credibility of the
field when these are not achieved?

"There's a great deal of concern that they not do what they did with
AI, which was widely regarded as a
disaster," said James Anderson, professor of cognitive and linguistic
sciences at Brown University. He
was referring to the Pentagon's artificial intelligence program, in
which more money was allocated than
could be used. "They dumped a huge amount of money, almost without
warning. So much money was
involved that people couldn't cope with it" (New York Times, Aug 18, 1988).

An initial reading of the neural network proposal indicates that the
Department of Defense may have
learned something from recent mistakes. The new program begins with a two-year, $33 million evaluation
phase. If this is viewed as successful, it will be followed by an
eight-year development project. The
research content appears to be much more widely applicable than the
Strategic Computing Program. As
originally proposed, DARPA funding would concentrate on three areas:
hardware designed to run large
neural networks; theoretical foundations; and applications. In more
recent announcements, application
projects have been replaced with an evaluation of the performance of
neural networks compared to
competing technologies [EE Times, Sept 19, 88].

The present thrust of DARPA's neural network program is to pick
applications presently served by AI,
signal processing or other conventional technologies and simulate these
as neural network
implementations. Particular attention will be paid to the ability of
small connectionist systems to scale
up into larger ones, combined with study on the amount of learning
needed to achieve desired
performance. Projects will be graded on robustness, size of data
representations, stability, and
convergence time. The initial application areas are speech recognition,
sonar recognition, and automatic
target recognition. Reportedly, all work will be unclassified.

Other U.S. government neural network programs may be more problematic
than DARPA's. The Federal
Aviation Administration has awarded contracts to study the use of
neural networks in air traffic
control. A current Department of Energy solicitation states, "Proposals
are invited on innovative
applications of neural networks for decision-making, real-time
automation, and control beneficial to
operation of nuclear power plants." It is not at all clear that the
reliability and predictability of neural
networks can be guaranteed to the levels of sophistication needed for
such critical applications. There is
no general way to predict the operation of a large neural network when
it is presented with unusual
inputs. It may be possible to build in predictability by following
strict mathematical rules in
constructing the network, but this topic needs more research.


Because of massive parallelism, the human brain has more computational
power than the fastest
computers by orders of magnitude. Supercomputers have reached the point
where the speed of light and
the distance that electricity must travel prevent unlimited speed
improvements. Thus it appears that
major breakthroughs in computing power will have to use parallelism.
Neural networks offer a class of
massively parallel architectures that promise excellent performance on
pattern analysis problems.
These systems can learn from examples, generalize easily and are
inherently fault tolerant. Drawbacks
include slow learning times which become even worse with increased size
and the lack of hardware
exploiting parallelism. Neural network technology is immature and faces
many years of development
before it can achieve the intelligence of primitive animals.


Many of the papers cited are reproduced in Neurocomputing: Foundations
of Research, edited by James
A. Anderson and Edward Rosenfeld, MIT Press, Cambridge, MA, 1988.

An excellent treatment of neural networks is the two volume set
(available in paperback): J. L.
McClelland, and D. E. Rumelhart, Parallel Distributed Processing, MIT
Press, Cambridge, 1986. A
third volume, Explorations in Parallel Distributed Processing, comes
with floppy disks containing
neural network programs in "C" for MS-DOS computers.

Anonymous, "Optoelectronics Builds Viable Neural-Net Memory," Electronics, pp. 41-44, June 16, 1986.

Bailey, J., and D. Hammerstrom, "Why VLSI Implementations of Associative VLCNs Require Connection Multiplexing," presented at the International Conference on Neural Networks, 1988.

Crick, F., and G. Mitchison, "The Function of Dream Sleep," Nature,
304, pp. 111-114, 1983.

Graf, H. P., et al., "VLSI Implementation of a Neural Network Memory with Several Hundreds of Neurons," AIP Conference Proceedings 151, Neural Networks for Computing, Snowbird, Utah, AIP, pp. 182-187, 1986.

Hebb, Donald O., The Organization of Behavior, Wiley, New York, 1949.

Hopfield, John J., "Neural Networks and Physical Systems with Emergent
Collective Computational
Abilities," Proceedings of the National Academy of Sciences 79:2554-2558, 1982.

Farhat, Nabil H., Demetri Psaltis, Aluizio Prata, and Eung Paek, "Optical Implementation of the Hopfield Model," Applied Optics, pp. 1469-1475, May 15, 1985.

Hinton, G. E., "Connectionist Learning Procedures," Carnegie-Mellon
University Technical Report
CMU-CS-87-115, 1987.

Johnson, R. Colin, "Neural Networks in Japan: Part 2," Electronic Engineering Times, pp. 49, 72, 73, April 6, 1987.

Johnson, R. Colin, "IEEE Puts Neural Nets Into Focus," Electronic Engineering Times, pp. 45-46, Nov 30, 1987.

Johnson, R. Colin, "Neural Nets are Next Project in Line," Electronic Engineering Times, Dec 12, 1988, p. 72.

Johnson, R. Colin, "DARPA Details Neural Program," Electronic Engineering Times, Sept 19, 1988, pp. 69, 72.

McCulloch, Warren S. and Walter Pitts, "A Logical Calculus of the Ideas
Immanent in Nervous Activity,"
Bulletin of Mathematical Biophysics 5:115-133, 1943.

Minsky, Marvin and Seymour Papert, Perceptrons, Cambridge, MIT Press, 1969.

Rosenblatt, F., "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," Psychological Review, 65:386-408, 1958.

Sejnowski, Terrence J. and Charles R. Rosenberg, "Parallel Networks that Learn to Pronounce English Text," Complex Systems, pp. 145-168, 1987.

Valiant, L. G. "A Theory of the Learnable," Communications of the ACM,
November 1984.

von Neumann, John, The Computer and the Brain, New Haven, Yale
University Press, 1958.

Widrow, Bernard and Marcian E. Hoff, "Adaptive Switching Circuits," 1960 IRE WESCON Convention Record, New York, IRE, pp. 96-104, 1960.

Tyler Folsom is a software engineer at Flow Industries in Seattle, WA.
The National Science Foundation
recently awarded him a major grant to support research on neural networks.

With the new Administration taking shape and the 101st Congress
underway, plans for the selection of a
Presidential science advisor remain at a standstill. The selection of
the science advisor, and the role
that such a person would play, became one of the key issues for the
science community at the end of the
last Congress. The general assessment of the Reagan years among
scientists was that science policy was
poorly handled--there was little support within the community, a reluctance
to tackle big decisions, and too
little access to the President. Many prominent scientists came forward
with recommendations to
improve the formulation of science policy. How the administration
resolves the question of science
advice will influence a whole host of issues, including those of
particular concern to CPSR members,
such as military funding of computer science and the future of SDI.

Some issues to watch at the beginning of the new Congress:

SDI funding--President Reagan's final budget called for a 50% increase
in SDI support, from $3.7
billion to $5.9 billion. But SDI funding is a probable target of DoD
budget-cutting because of growing
skepticism about the feasibility of the space shield, recent
disclosures of DoD procurement scandals,
and a general sense that the military will have to bite the bullet on
spending. President Bush must soon
decide what to do with SDI. During the campaign he expressed doubts
about the program, but Bush's
incoming chief of staff, John H. Sununu, said on a recent TV talk show
that the Strategic Defense
Initiative is "technically feasible" and "ought to be deployed." This
was not a good signal coming from
the administration's most highly placed engineer.

FBI records system--Congressman Don Edwards (D-CA) will hold hearings
on the proposed expansion of
the National Crime Information Center (NCIC). CPSR, at the request of
Congressman Edwards, is
preparing an assessment of the proposal and the privacy interests at
stake. Other government agencies
are also developing large information systems that may raise similar
privacy concerns. For example, a
new national medical records system is currently under study by the
Department of Health and Human
Services. The system would provide on-line access to medical records
for 54,000 pharmacists across
the country.

High tech export controls--Growing concern about U.S. competitiveness,
the cost of export control
restrictions, and the recent National Academy of Sciences report,
Global Trends in Computer Technology
and their Impact on Export Control, should lead to a serious
reexamination of export control
restrictions. Computer technology is likely to be at the center of the
debate. The Commerce Department
is currently reviewing proposed changes in export restrictions on
computer technology.
"Data highways"--A bill to establish a National Research Network, which
would link supercomputers
through a high-speed fiber-optic cable network, has been likened to the
national interstate highways
system. The bill was introduced at the end of the last session and no
action was taken. Expect to see it
reintroduced early in the session with growing support. A big science
project in computers is about due, and
this one has the added appeal of building infrastructure and
strengthening ties between U.S. research institutions.

Computer viruses--It is presently unclear what will happen in Congress
over the computer virus issue.
The central question in the House is whether Representative Wally
Herger (R-CA) goes forward with
hearings on his proposed Virus Eradication Act. Representative William
Hughes (D-NJ), the sponsor of
the 1986 Computer Fraud and Abuse Act, believes that it is too soon to
consider new legislation. A key
event in Congressional planning may well be how the legal case against
Robert Morris, Jr., is resolved.
In any case, the National Security Agency and the National Institute of
Standards and Technology will
continue to meet with security experts and agency officials to develop
a plan of action for future virus
incidents. Most recent talk is of a "Virus Swat Team" and the use of
smart cards for network access. As
this newsletter goes to press, plans are underway for hearings on
computer viruses in the Senate
Subcommittee on Law and Technology, chaired by Senator Patrick Leahy (D-VT).

Implementation of the Computer Security Act--Last session, Congress
passed a law to shift authority for
the security of federal government computers from the
military--specifically the National Security
Agency--to a civilian agency, the National Institute of Standards and
Technology at the Commerce
Department. However, the NIST program is underfunded and the
requirements are overwhelming.
Expect careful review of the implementation of the Act, particularly
following the virus incident.

Access to electronic information--Although it remains unclear which
members of Congress will pick up
this issue, access to electronic information is an idea that is gaining
momentum. A lengthy Office of
Technology Assessment report was issued in November and follow-up
reports are expected. Librarians
are the champions of congressional action on this issue, fearing that
library services are rapidly
becoming privatized and that the government is shirking its
responsibility to publish and distribute
information in its most useful format. The Office of Management and
Budget (OMB) proposed changes to
Circular A-130, the government's policy for providing electronic access
to agency records, and is now
taking public comments before the final rule is issued. The outcome of
A-130 may provide some
indication of where the new administration goes with electronic
information.
--Marc Rotenberg

CPSR/New York Develops MEMBERS Software

Members of CPSR/New York have developed a database management software
package, called MEMBERS,
to support nonprofit organizations that depend on membership. The
software is to be provided, for a
nominal fee, to organizations that benefit their communities, promote
human rights, or support social
responsibility. The software is flexible and can be customized by
volunteer programmers. Members of
the New York chapter are also providing programming support, and
assessment of computer needs for
select organizations that need computer help.

This project provides a prototype of pro-active use of data processing
technology in pursuit of socially
responsible goals. It gives local CPSR chapters a concrete way to build
links with other organizations in
their communities by providing them with a service, and gives CPSR
programmer members an
opportunity to use their computer-related skills in a CPSR project. It
also provides local chapters a
chance to get to know each other better through sharing experiences and
technical expertise on a
nationwide project.

CPSR members interested in helping the MEMBERS group should contact Ed
Levy of CPSR/New York at
211 Warren St., Brooklyn, NY 11280, or telephone (718) 875-1051.

From the Secretary's Desk
Eric Roberts--National Secretary

The last quarter was an extremely busy one for CPSR. We opened our
Washington office, we held our
largest and most successful Annual Meeting here in Palo Alto, our
phones rang off the hook as reporters
wanted to find out more about the Internet virus--all of these are
described elsewhere in this issue.
There have, however, been quite a few other happenings around CPSR that
deserve mention on their own.

In early December, CPSR Executive Director Gary Chapman and I, along
with Carolyn Curtis and Dave
Kadlecek from the CPSR/Palo Alto Computers in the Workplace Project,
attended a conference on
"Changing Technologies in the Workplace." The conference was held at
UCLA and was sponsored by the
California Policy Seminar at the request of Assemblyman Tom Hayden,
chairman of the Assembly's
Labor and Employment Committee. The conference brought together an
exciting collection of academics,
legislative analysts, labor leaders, industrial representatives, and
other people who are interested in
these issues, and we developed a number of excellent contacts for CPSR
in this area. Based on
discussions that started at the conference, the CPSR Workplace Project
is currently planning to
sponsor a follow-up conference on participatory design sometime in
early 1990.

As an outgrowth of one of the workshops at the Annual Meeting, CPSR
members Rock Pfotenhauer,
Cathie Dager, and CPSR President Terry Winograd organized a conference
on "Computers and Education"
that was held at Stanford University on January 21. The conference
divided into smaller groups
centered on the issues of (1) computer education in elementary and
secondary schools, (2) fostering a
concern for ethics and professional responsibility in college students
who intend to pursue careers in
computing, and (3) empowering college-level students who intend to
concentrate in other areas by
providing the necessary knowledge and skills to understand computer-
related issues and make informed
decisions about computer use. The group is working to develop resource
materials and will publish a
computers in education newsletter.

The next few months look equally exciting. In March, the CPSR Board
will meet in Washington where we
will formally inaugurate our new office with a public opening. We will
also review plans for the 1989
Annual Meeting, which will be held in Washington in October.

As this newsletter goes to press, CPSR's report on the FBI's National
Crime Information Center is just
being completed. The report, by CPSR members Dave Redell, Peter G.
Neumann, and Jim Horning, along
with contributors from the ACLU and City College of New York, will be
summarized in the next issue of
the Newsletter.

In April, CPSR National Advisory Board member Sherry Turkle, Annual
Meeting speaker Esther Dyson,
and Executive Director Gary Chapman will travel to the Soviet Union to
study the emerging computer
culture there. Sherry Turkle, author of the widely acclaimed book The
Second Self: Computers and the
Human Spirit, is working with a grant from the MacArthur Foundation to
examine how the growing--and now government-encouraged--use of
computers is transforming the
Soviet Union in its new era of
glasnost. Gary will be going along to speak with Soviet computer
scientists who work on issues in peace
and arms control, and he will be talking with some computer
professionals in Moscow who are
interested in setting up a Soviet version of CPSR. Esther Dyson, who
speaks Russian, will be gathering
material for her computer industry newsletter, Release 1.0. The trio
plans on visiting Moscow,
Leningrad, and Novosibirsk in Siberia, home of the Soviet Computing
Center.
In organizational news, the CPSR Executive Committee, which meets
monthly to act on issues of CPSR
policy in between meetings of the full CPSR Board, has a new member.
Greg Nelson, author of many of
CPSR's early papers on the SDI, resigned his position on the Executive
Committee and was replaced by
Amy Pearl. Amy is a member of the steering committee for CPSR/Palo Alto
and has been actively
concerned with securing greater equality of participation by women in
computer science. The Board
thanks Greg for all his work and is very happy to welcome Amy to this
position.
Finally, I am very happy to report that our end-of-the-year fundraising
campaign is beginning to bear
fruit, and we are breathing a little more easily around the office. In
December, CPSR received grants of
$29,000 from the Rockefeller Family Fund (plus a guarantee of $25,000
more later in the year),
$20,000 from an anonymous donor, and $15,000 as a matching grant from
the Stern Family Fund. We
have also received more than $8,000 in donations from our members. This
does mean that the staff no
longer has to worry about getting their paychecks, but we are still
running a bit behind our budget
projections for this year. So thanks to all of you who have supported
us, and keep those letters and
contributions coming.


The CPSR Newsletter is published quarterly by:

Computer Professionals for Social Responsibility P.O. Box 717
Palo Alto, CA 94301 (415) 322-3778

The purpose of the Newsletter is to keep members informed of thought
and activity in CPSR. We
welcome comments on the content and format of our publication. Most
especially, we welcome
contributions from our members. Deadline for submission to the next
issue is March 31, 1989.

This Newsletter was produced on an Apple Macintosh II, using the
desktop publishing program
Pagemaker 3.0 and Adobe Illustrator 88. The hardware and software used
were donated to CPSR by
Apple Computer, the Aldus Corporation, and Adobe Systems. The
Newsletter was typeset from a
Pagemaker file on a Linotronic 100.

Archived CPSR Information
Created before October 2004
