The CPSR Newsletter

Using Computers in Arms Control Verification
Cindy L. Mason CPSR/Palo Alto

Editor's introduction: Since 1945, every American President except Ronald
Reagan has supported the negotiation of a comprehensive test ban treaty that
would end all testing of nuclear warheads. In the history of the arms control
process, limitations on nuclear testing have been viewed as a stepped
process, beginning with bans on testing in space and in the atmosphere, both
of which have been in place for decades, then moving toward threshold limit
treaties that would place caps on the yield of nuclear device testing
allowed, and, ultimately, a complete ban on nuclear testing. Until the Reagan
administration, which objects to limiting testing because of the
administration's plans for modernizing the nuclear arsenal, the chief obstacle
to either a threshold limit treaty or a comprehensive test ban has been the
issue of verification of compliance with the treaty. Scientists around the
world, including a team at Lawrence Livermore National Laboratory (LLNL) in
California, have been studying ways for nuclear testing--or its absence--to be
monitored in order to verify compliance with a threshold limit treaty or a
test ban. The article below is an official report of the work being done at
LLNL.

Seismic monitoring is one of the most important verification technologies
available to monitor compliance with underground nuclear test ban treaties
and deter clandestine testing. However, verification of a comprehensive test
ban treaty (CTBT) or a low-yield threshold test ban treaty (LYTBT) poses
challenges for current seismic verification technology by lowering the
magnitude of the seismic events that must be monitored. To detect such
low-threshold seismic events, we require not only greater resolution in our
seismic instruments, but more powerful data processing tools for
seismologists. At Lawrence Livermore National Laboratory, computer scientists
in the Treaty Verification Research Program are concerned with the
fundamental problem of processing the massive amount of data that would be
generated by seismic monitoring systems for CTBT or LYTBT verification.

To evade treaty provisions, nuclear tests would likely be designed to produce
weak seismic signals which could be hidden in background noise or mixed with
signals of other seismic events. Detection thresholds must therefore be low
enough to make intentionally muffled seismic events visible over background
noise. As a result of low thresholds, seismologists must examine an enormous
number of seismic events. Our experience with the Norwegian Experimental
Seismic Station (NORESS) indicates that up to 50,000 events per year must be
analyzed from that station alone. Such events include earthquakes, chemical
explosions, and other natural seismic events (such as river ice breaking up
in the spring), as well as possible nuclear explosions. Each event must be
analyzed to determine if it was the result of a clandestine nuclear test. In
effect, reliable verification of a CTBT or LYTBT requires automated systems
to aid seismologists in the interpretation of seismic events.

Several such systems are being developed using knowledge-based systems
technology in the Seismic Event Analyzer Project. The first system to be
discussed, SEA (Seismic Event Analyzer), interprets seismic events from the
NORESS station, and serves as the basic program from which subsequent
systems, NETSEA I and NETSEA II, have been built. The second system, NETSEA
I, is a monolithic interpretation system designed to interpret data from a
four-station prototype seismic sensor network, Livermore Nevada Test Site
Network (LNN), surrounding the Department of Energy's Nevada Test Site.
NETSEA II is a decentralized approach to the network interpretation problem
and has been designed to interpret data from an as yet unspecified number and
type of sensor sites.

A Knowledge-Based System for Seismic Arrays

SEA interprets seismic data from a type of monitoring station known as a
seismic array. Seismic array stations are made of many sensors which are
arranged close together to cover the seismic activity of a particular area.
Norway's experimental seismic array station, NORESS, is shown in Figure 1.
SEA presently interprets data from 25 sensors arranged in four concentric
rings with a maximum aperture of three kilometers. An overview of the system
architecture is shown in Figure 2.

We receive the data from Norway via satellite at the lab and immediately
archive it onto an optical disk. An event detection program processes the
satellite data in real time on a MASSCOMP computer and generates event files
for each detected event. The event files are then sent out over an Ethernet
communication network to a SUN 3/160 workstation where the knowledge-based
system, SEA, resides. SEA then reads the event files, performs an
interpretation, and saves the results of the interpretation in a file for
review by a seismologist.

Our approach in designing SEA has been to create a system that behaves as an
intelligent assistant. In addition to processing events automatically, SEA
can be used interactively to allow the seismologist to process events of
interest and browse through interpretation results stored on disk. The user
interface also allows the seismologist to access a suite of signal processing
routines to independently form a "second opinion" of a seismic event.

SEA's knowledge was obtained by observing and interviewing seismologists who
work with NORESS, and comprises general seismological knowledge as well
as knowledge dealing with data from that particular array. The basic form of
the interpretation knowledge is expressed as a collection of IF-THEN rules,
written in a homebrew rule language built on top of Franz Common Lisp, while
the interpretation itself is represented by a semantic network.
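
As a rough illustration of this style of knowledge representation, the sketch below writes a single interpretation rule in Python rather than in the Lisp-based rule language the project actually uses; the rule name, signal features, and threshold values are invented for illustration and are not drawn from SEA's knowledge base.

    # Hypothetical IF-THEN interpretation rule expressed as a Python function
    # over a dictionary of measured signal features.  Feature names and
    # thresholds are illustrative only.
    def rule_regional_p_wave(features):
        """IF the arrival is high-frequency and its slowness falls in the
        regional P-wave range, THEN label it as a regional P phase."""
        if (features.get("dominant_freq_hz", 0.0) > 4.0
                and 0.05 <= features.get("slowness_s_per_km", 0.0) <= 0.14):
            return {"phase": "Pn", "confidence": "assumed"}
        return None

    def apply_rules(rules, features):
        """Fire every rule whose IF-part matches and collect the conclusions."""
        conclusions = []
        for rule in rules:
            result = rule(features)
            if result is not None:
                conclusions.append(result)
        return conclusions

    detection = {"dominant_freq_hz": 6.2, "slowness_s_per_km": 0.12}
    print(apply_rules([rule_regional_p_wave], detection))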

To create an interpretation, SEA reasons with assumptions much like a
seismologist does. The reasoning process can be regarded as a cycle of
building and extending an interpretation based on assumptions about signal
features, then checking the assumption-based interpretation against measurable
signal features to identify incorrect assumptions. The cycle proceeds until a
consistent interpretation is generated to account for all possible signal
features in the data file. The system is designed to find all possible
consistent interpretations. Currently, a data file contains up to 400 seconds
of data, or enough to accurately locate events up to 1,000 kilometers away.
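
The cycle just described can be caricatured in a few lines of code. The toy sketch below is only a schematic rendering of reasoning with assumptions, not SEA's implementation; the arrivals, candidate phase labels, and the way consistency is checked are invented stand-ins.

    # Toy, self-contained illustration of the assumption-based cycle: extend an
    # interpretation by assuming a phase label for each unexplained arrival,
    # retract branches whose assumptions conflict with the measured features,
    # and keep every complete, consistent interpretation.
    MEASURED = {"arrival_1": "P", "arrival_2": "S"}            # stand-in signal features
    CANDIDATES = {"arrival_1": ["P", "S"], "arrival_2": ["P", "S"]}

    def consistent_interpretations():
        results, stack = [], [{}]                              # start with no assumptions
        while stack:
            interp = stack.pop()
            if any(label != MEASURED[a] for a, label in interp.items()):
                continue                                       # wrong assumption: retract branch
            unexplained = [a for a in MEASURED if a not in interp]
            if not unexplained:
                results.append(interp)                         # complete and consistent
                continue
            arrival = unexplained[0]                           # extend with a new assumption
            for label in CANDIDATES[arrival]:
                stack.append({**interp, arrival: label})
        return results                                         # all consistent interpretations

    print(consistent_interpretations())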

The basic method for testing the system is subjective, comparing results of
the system against those of our expert seismologists. In addition to making
sure the interpretations it generates match those of the seismologist, we
find it equally important to measure the system's ability to reject false
alarms and detect genuine events. Only by having confidence in our
verification technology will we be able to achieve verification goals. To
this end, SEA has been run nearly continuously for periods of up to five
days in online mode and interactively against sets of archived events
spanning a variety of demanding event scenarios and locations. As a result of
testing, SEA's knowledge and processing strategy have been fine-tuned to the
point that we are enthusiastic about its overall performance. In some cases,
it has pointed to events our expert seismologists overlooked using unassisted
interpretation skills. SEA continues to be tested
with archived data as well as in an on-line mode to expand its seismic
interpretation knowledge and refine its reasoning strategy.

Knowledge-Based Systems for Seismic Networks

Both the NETSEA I and NETSEA II systems are designed to interpret data from a
network of seismic stations. A seismic network differs from a seismic array
both in the spatial distribution of sensors and in the strategy used to
interpret the seismic data. A seismic array can be viewed as a complex single
seismic station which consists of sensors arranged in a small cluster. By
contrast, a seismic network is made up of tens to hundreds of sensor stations
(some of which may be seismic arrays) and sometimes spans an entire country,
continent, or the planet. During the interpretation process for a seismic
array, data from all sensors are processed together, and signal processing
algorithms are designed to take advantage of the sensors in the array as a
whole unit. In a network, however, data from each sensor or station must be
interpreted on an individual basis. As a result, the knowledge bases and
interpretation strategies in NETSEA I and NETSEA II differ from those of SEA,
although the process of reasoning with assumptions stays the same.

The NETSEA I system is designed to process data from a network of stations on
a single computer. Currently the knowledge base in NETSEA I interprets seismic
data from the four-station prototype sensor network surrounding the Nevada
Test Site, shown in Figure 3. As the NETSEA I system is based on the program
architecture of SEA, we use the same IF-THEN structures to express network
interpretation knowledge. Alongside the knowledge-based system we run a
numerically intensive process which takes partial interpretations and
predicts the consistency of station interpretations based on numerical signal
information. This information is coupled with the symbolic interpretation of
the knowledge-based system to determine which signal processing routines to
apply next and which stations have faulty data, and to generate a final
interpretation.

In general, seismologists are knowledgeable about data interpretation for a
particular geological region or sensor site and collaborate in forming a
network interpretation. For example, a seismologist in Norway may understand
those seismic signals quite well, but may be unfamiliar with knowledge needed
to interpret signals from a seismic station in South America. This is a
result of differences in local signal propagation patterns, typical
background noise, geological terrain, and sensor technology. In effect, the
knowledge bases for systems interpreting sensor
network data can contain distinct sets of interpretation rules for each
station. As a result, NETSEA II is an effort to parallelize the network
interpretation strategy by decomposing the system into a collection of
computing agents which interpret data from a sensor site or geological
region.

The processing strategy for seismic network data interpretation requires
seismologists to put their heads together and compare interpretation results
from all the stations in the network. Depending on the location of the
source, the location of the sensor, and geologic features, an individual
station may receive only partial data, extremely noisy and uncertain data, no
data at all, or a beautiful, clear signal. So some station interpretations
are more useful and reliable than others in forming the overall network
interpretation. In building NETSEA II, the communication rules enable each
station to send and receive information based on its own assessment of
usefulness, relevance, and data quality.
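
To make the decentralized idea concrete, here is a minimal, hypothetical sketch of station agents that share interpretations only when their own quality assessment warrants it. The station labels, quality scores, and the 0.5 cutoff are invented, and the sketch is not a description of NETSEA II's actual design.

    # Hypothetical per-station agents that decide for themselves whether their
    # local interpretation is reliable enough to broadcast to the others.
    class StationAgent:
        def __init__(self, name, data_quality):
            self.name = name
            self.data_quality = data_quality      # agent's own assessment, 0..1
            self.inbox = []                       # interpretations received from others

        def interpret(self):
            # Stand-in for a station-specific knowledge base.
            return {"station": self.name, "phase": "P", "quality": self.data_quality}

        def broadcast(self, agents):
            result = self.interpret()
            if result["quality"] >= 0.5:          # share only useful, reliable results
                for other in agents:
                    if other is not self:
                        other.inbox.append(result)

    agents = [StationAgent("STA-1", 0.9), StationAgent("STA-2", 0.3), StationAgent("STA-3", 0.7)]
    for agent in agents:
        agent.broadcast(agents)
    for agent in agents:
        print(agent.name, "received", len(agent.inbox), "station interpretations")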

In general, we anticipate that NETSEA II's distributed-systems approach will
provide us with a flexible and extendable system that can be fine-tuned for
an as yet unspecified seismic network architecture which will be designed for
in-country monitoring.

Summary

We are developing a number of knowledge-based systems to deal with the
massive amount of data which must be interpreted from very sensitive seismic
monitoring instrumentation for CTBT or LYTBT verification. Our approach has
been to develop expert systems for both arrays and a network of seismic
stations. The systems are designed to allow for a variety of specific seismic
instrumentation and knowledge-based systems experimentation. While the
emphasis of our work at Lawrence Livermore National Laboratory is research
directed (as opposed to operationally directed), we are investigating the
creation of rigorous testing procedures.

The author gratefully acknowledges the support of a University of California
Student-Employee Fellowship at Lawrence Livermore National Laboratory. This
work has been performed under the auspices of the U.S. Department of Energy
by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.



Nonprovocative Defense in Europe
Hal Harvey

The recently ratified Intermediate Nuclear Forces Treaty (INF) changes the
terms of the security debate in Europe. Removing a whole class of nuclear
weapons brings new prominence to the East/West balance of conventional
forces. The INF weapons were originally deployed to offset Soviet nuclear
forces and to give credibility to NATO's strategy of extended deterrence,
meaning the plan to use nuclear weapons to deter a whole range of possible
Soviet transgressions. In a crisis, Western military commanders planned to
incrementally raise the stakes by using, or threatening to use, nuclear
weapons, thus meting out sufficient punishment to convince their Warsaw Pact
counterparts to cease their aggression. For such a measured strategy, a broad
range of nuclear options was considered necessary.

The INF Treaty eliminates all nuclear forces with a range between 500 and 5,500
kilometers. This appears to put the burden of deterring a conventional war on
Western conventional forces, which are considered by many inferior to those
of the Warsaw Pact. Many defense experts, including prominent arms
controllers, have called for a massive conventional buildup and modernization
to compensate for the INF Treaty's reduction of theater nuclear forces.

Recognizing the danger of an unbridled conventional race, many analysts,
concentrated primarily in Europe, have argued that nuclear reductions should
be matched by fundamental changes in NATO and Warsaw Pact conventional
forces. They maintain that conventional forces can be restructured so that
they are manifestly defensive. According to these advocates, building such
"nonprovacative defenses" around strategies of "defensive defenseÓ would
result in more reliable strategic stability, while winding down the
conventional arms race. Such an arrangement, it is argued, would be
inherently more stable than a balanced force of offensive character, because
defensive dominance removes any advantage in attack. Nations can then assure
their own security without posing a threat to other nations.

This article examines the problems with a conventional force buildup, and
then examines the implications of a shift toward a nonprovocative defense.

Conventional Wisdom

Those who argue that a conventional buildup is the price the West must pay
for the INF Treaty unfortunately ignore four emerging trends in global
security.

• Conventional war in Europe is now unthinkable. The density of Europe's
population and industry makes the probable result of a war catastrophic.
Hitting one of Europe's nuclear reprocessing plants with a conventional
warhead, for example, could make a large portion of Europe uninhabitable for
centuries.

• War is becoming obsolete as a means for pursuing policy. From Vietnam to
Afghanistan, from Iran and Iraq to Kampuchea, nations are finding that they
cannot achieve rational policy goals by war. U.S. and Soviet strategies in
Europe must ultimately reflect this new truth, or they will hobble progress.

• Europe and the United States cannot afford a conventional buildup. The
Reagan administration has presided over a two trillion dollar investment in
the Pentagon without significantly altering the U.S.-Soviet balance. It is
naive to presume such levels of investment can be sustained. Moreover, it is
unclear that another two trillion dollars would significantly alter the
balance in Europe. Even more important, as historian Paul Kennedy argues in
his recent book, The Rise and Fall of the Great Powers, it is clear that
military excess
undermines a nation's long-term security.

• Europe does not have the manpower to sustain a buildup. Demographic trends
in Western Europe show that indigenous military personnel cannot be greatly
increased in number.

These trends lead to an inescapable conclusion: a conventional force buildup
by NATO is out of the question. The consensus for increased military
expenditures is unraveling; the utility of force has become questionable; and
the political relations between the blocs cannot fully evolve in such a
militarized atmosphere.

Nonprovocative Defense

A shift toward a nonprovocative defense might avoid all these problems. The
calculus of balance for a nonprovocative defense differs fundamentally from
the offensive conventional force structure that now exists in Europe. For a
nonprovocative defense regime to achieve stability, the defensive forces of
Side A must be stronger than the offensive forces of Side B, and vice-versa.
No absolute balance need exist. This can be expressed:

Da > Ob and Db > Oa

When defensive forces are superior to the offensive forces of the enemy, the
advantages of first strike are eliminated. Defensive predominance also
eliminates the need for an extended nuclear deterrent.

Because defensive stability rests on inequalities rather than balance, it is
easier to define and achieve. No mirror imaging is required; no "missile
gaps" or bomber gaps or tank gaps need upset the equilibrium. Nonprovocative
defense, then, should help stop the arms race and lead to greater stability
in times of crisis.

Defensive forces in these scenarios are defined as those incapable of attack,
but capable of defense. But the line between defense and offense, as any war
planner knows, is often vague and elusive. Many argue, for example, that the
best anti-tank weapon is another tank. Even those who are willing to
distinguish between offense and defense point out that a shield is an ideal
adjunct to a sword. To transcend this debate, it is useful to define the
physical characteristics which make a weapon more or less provocative. In a
forthcoming book, Michael Shuman, Dan Arbess, and I use four characteristics
by which to measure the relative defensiveness of weapons: their range, their
vulnerability to preemption, their concentration of value, and their
dependence on local support.

• Short Range--Unlike offensive weapons, which need to travel long distances
to attack within an adversary's homeland, defensive weapons need only repel
nearby forces. To demonstrate its defensive intentions, for example, Sweden
has deliberately avoided long-range bombers and has kept the fuel tanks of
its other military aircraft small to limit their range.1

• Low Vulnerability--Any weapon that is vulnerable to a sudden attack invites
preemption or escalation in a crisis. Invulnerable weapons, in contrast, can
be fired at will; there's neither the need nor the temptation to fire them
preemptively or early in a battle. Switzerland hides its fighter planes in
bunkers carved under mountains so that they need not take off until a battle
actually begins.2

• Low Concentration of Value--Weapons of extremely high value such as large
battleships, or weapons support facilities like airfields, invite preemptive
attack. The Argentinians discovered this when the British began the
Falklands/ Malvinas War by sinking the large Argentinian battleship, General
Belgrano. Defensive weapons each have a relatively low value, dispersed over
a wide area.

• Dependence on Local Support--Since weapons that require long, vulnerable
supply lines are good targets for preemption, defensive weapons should be
locally supportable. Weapons that depend on local support cannot be sent into
foreign territory for offensive purposes. As Swiss defense analyst Dietrich
Fischer points out, "[I]f tanks depend on fuel depots in fixed positions,
they are limited in their mobility and serve essentially defensive functions.
If they are accompanied by fuel trucks or pipelines for long-range advances,
they can serve offensive functions."3

When the Strategic Defense Initiative (SDI) is tested on these grounds, it
becomes clear that it does not meet the requirements of a defensive system.
By the same criteria, many, if not most, of the weapons currently in the NATO
and Warsaw Pact arsenals must be characterized as offensive. The bulk of the
Warsaw Pact forces were designed for a blitzkrieg-style attack on NATO, a
war-fighting strategy in which masses of tanks backed up by infantry and
supported by air forces would suddenly and quickly pierce NATO's defenses,
establish control, and roll onward.4 To defend against a Pact blitzkrieg,
NATO amassed forces similar to those of the Warsaw Pact, with tanks,
mechanized infantry units, attack helicopters, long-range air power,
extensive support forces, and a sophisticated command and control structure.

NATO's modernization plans would reinforce this offensive character. To
compensate for alleged Warsaw Pact superiority, NATO military commanders have
created a counter-offensive plan, called "follow-on forces attack" (FOFA).
With a high tech array of long-range, highly accurate missiles and airplanes,
NATO commanders would strike Pact forces deep in Eastern Europe at the
beginning of a war, trying to interdict the "follow-on forces," or
reinforcements waiting behind the front lines. Without the support of these
forces, the Pact attack would soon crumble, according to current NATO
thinking.

FOFA requires a new class of weapons for target acquisition, long-range
attack, and kill assessment. These weapons will be threatening and
provocative to any prudent Warsaw Pact military commander, thus reducing
stability in times of crisis. They also make Western military planning
complex and difficult. If history is any indicator, these long-range, high
tech weapons will be expensive, unreliable, and difficult to test.

With a nonoffensive defense policy, NATO could gradually eliminate offensive
weapons and strategies. This would be accompanied by a shift to sufficient
short-range anti-tank and anti-aircraft weapons, deployed in dispersed,
invulnerable positions. Dispersed supply depots and command and control
facilities would increase reliance on local support. Rather than being
concentrated at the Warsaw Pact border, more forces would be spread throughout
NATO territory. This raises the second essential facet of NPD planning--a
nonprovocative force configuration.

To be credibly nonprovocative, NPD weapons must be deployed in defensive
ways. One option for NATO, for example, might be to withdraw some of its
forces from the Warsaw Pact border, as Norway has done. Norway now defends
its 200 kilometer border with the Soviet Union with fixed defenses some 150
kilometers away from the border, using mountainous terrain to its advantage.
The deployment pattern complements the Norwegians' declared policy of
minimizing tension in the region.5 The result has been that the Soviet Union
has had no reason or pretext to build up forces on its border with Norway--and
this has also denied the Soviets potential pretexts for building up forces
along the Finnish border.

Sweden provides other examples of viable NPD deployment patterns. To make its
navy invulnerable, Sweden hides part of it in granite caverns at sea.6 Its
navy has twelve submarines, two destroyers, 35 fast attack craft, and various
minelayers and minesweepers, all dedicated primarily to coastal defenses. To
emphasize their neutrality, Swedish forces' radio frequencies are kept
incompatible with those of both NATO and Warsaw Pact forces.8 Finally, the
Swedish army, following the philosophy of "defense in depth," is deployed
throughout Swedish territory.9

Since Tito broke with Stalin in 1948, Yugoslavia has successfully used
various nonprovocative schemes to deter a Soviet invasion. Like the Swedes,
the Yugoslavs have a small navy capable of coastal defense. Also like the
Swedes, they have stressed defense in depth. Local territorial units have
their own militias, trained in guerrilla tactics, which supplement the
national army. Backing up these militias is a national commitment never to
surrender; giving up territory is considered an act of treason.

These deployment patterns, of course, are not necessarily applicable to NATO
and Warsaw Pact countries. Exactly how each nation should implement NPD
depends on its culture, history, and geography. Plans have already been
devised informally for some NATO countries, and in Britain and West Germany,
the principal opposition parties have commissioned studies on nonprovocative
defense.11 Most proposals integrate one or more of the following ideas:12

• Defensive Barriers--To make aggression by ground forces more difficult, NPD
proposals recommend placing ditches, walls, mines, boulders, tank obstacles,
and even dense forests along the border separating the East and West blocs.
Some analysts have proposed the burial of a pipeline along the East/West
border. In times of crisis, the pipeline would be filled with an explosive
slurry, and then blown up. The resulting trench would be an effective tank
trap. Jochen Loser recommends establishing a defense zone with such barriers
80-100 kilometers from the border; these would channel attacking tank forces
toward well-prepared, concentrated anti-tank forces.13 Norbert Hannig and
Albrecht von Muller would establish a four-to-five kilometer corridor along
the border, a "no-man's land" in which advancing Warsaw Pact forces would
face intense, remotely delivered firepower.14

• Techno-Commando Units--Horst Afheldt has suggested that NATO deploy 10,000
fighting units, each containing 20-30 men and armed with short-range
artillery, anti-tank weapons, and Stinger-like anti-aircraft missiles.15
These units would each be responsible for defending ten to fifteen square
kilometers. Intimately familiar with its sector, each unit could effectively
use the terrain to build a defensive advantage. A decentralized
communications network would tie the commando units together, so they could
help each other when necessary. Afheldt envisions units close to the inter-
German border as active duty forces, while those in the rear would contain
reservists who would be activated during a crisis. These groups, deployed
throughout West Germany, would be responsible for the attrition and
destruction of Warsaw Pact (WTO) tank forces. The dispersion of troops and
the lack of heavy armor make the force obviously incapable of attack; it
also denies the WTO any targets worth attacking with nuclear weapons.

Shortly after Gorbachev's ascendance in 1985, signs appeared that the concept
of nonoffensive defense was being taken seriously in the East. Analysts from
Poland, Hungary, and East Germany presented papers on the concept at the
Pugwash working groups. On June 11, 1986, in Budapest, the WTO nations
declared that "the military concepts and doctrines of the military alliances
must be based on defensive principles." In 1987, General Jaruzelski of Poland
presented a European security plan which includes elements specifically
designed to move toward a non-provocative defense posture.

At the Soviet Communist Party Congress in February 1986, and several times
since then, Gorbachev has called for restructuring military forces and
doctrine so that they are strictly defensive. In this context, Gorbachev has
recognized that asymmetric force reductions may be needed (a real departure
from previous Warsaw Pact demands for equal reductions), a position he
strengthened by accepting greater Soviet reductions of INF missiles. These
proposals have evolved into the concept of "reasonable sufficiency," which
Soviet officials define as "possession of a military potential that, on the
one hand, would be enough to safeguard the security of one's own country and,
on the other, not enough to give effect to offensive plans, especially to
surprise attacks."16 Gorbachev has repeatedly emphasized the concept of
reasonable sufficiency, and internal studies of the feasibility of a switch
to non-offensive defense are underway.

Technologies for Defense

An ongoing debate over the potential for defenders to successfully hold off
attacking forces has focused on trends in technology: Will new, high-
precision weapons shift the advantage toward the defender or the attacker?
What about advances in armor? Can technology allow the West to implement a
nonprovocative defense unilaterally, or must it be done through negotiations
with the East?

There are no unambiguous answers to these questions, but it does seem that
most technological trends do favor the defense. Advances in precision-guided
munitions (PGMs), for example, make it possible for small squads of infantry
soldiers to knock out tanks. PGMs put a great premium on concealment and
dispersion.

In 1984 I began research to see just how effective PGMs might be in shifting
the advantage toward the defender. The classic comment, from a West
Point/Princeton graduate, was, "in times of excessive peace, it is difficult
to determine the efficacy of these weapons." Still, the military must make
its estimates. At Lawrence Livermore Laboratory, I met with a group of
military games modelers who were trying to assess the effect of PGMs. They
had developed a sophisticated computer model which included an extensive
database of weapons characteristics (range, probability of kill as a function
of range, deployment capabilities, etc.). To make the model realistic, they
ran "battles," in which Red and Blue teams would independently deploy
different force configurations on computer-generated terrain maps. Red, the
attacker, would launch an offensive against Blue. Red would move his weapons
toward Blue positions, and, upon sighting a target, fire. The computer would
compute the probability of a kill as a function of the terrain, time, and
weapon's characteristics, and the battle would be updated according to the
results.

The Blue commander, working on a different screen (with visibility of Red
determined by the terrain and the sighting technologies on hand) would react
by redeploying, counter-attacking, or digging in. Throughout the run,
military commanders (real people) had to make all the decisions. The
computer only gave them the simulated battleground and calculated the effects
of their actions. By keeping a "man in the loop," the modelers hoped to
impart further reality to the exercise.

To assess the impact of different weapons, the mix of weapons available to
Red and Blue was changed between simulations. In one run, Blue might have a
full complement of tanks to fend off Red. In the next run, half of those
tanks might be removed, and a battery of anti-tank weapons substituted.

The full results of this exercise were classified, but I was told that,
"there is no question but that technology is helping the defense," and that,
"the technology trends are toward increased lethality," meaning that PGMs are
more and more likely to hit and destroy tanks.

Smart Weapons Policy

Technology cannot unravel the dangerous military stalemate in Europe. A
simple survey of the dispersion of nuclear power plants in Europe should suffice
to discredit the notion that a conventional war can "safely" be fought. But
bilateral (and multilateral) reductions in nuclear weapons can reduce the
dangers of miscalculation or computer error, which might precipitate a
nuclear war. Restructuring conventional forces into nonprovocative defenses
can help build real crisis stability, and could pave the way for significant
force reductions.

NATO and the Warsaw Pact need military strategies and forces which complement
and make tangible their shift toward a new detente. Nonprovocative defense,
especially if adopted bilaterally, can substantially contribute toward that
shift.

References

1. Adam Roberts, Nations in Arms: The Theory and Practice of Territorial
Defense, (New York: Praeger Publishers, 1976), p. 98.

2. Dietrich Fischer, Preventing War in the Nuclear Age (Totowa, NJ: Rowman
and Allanheld, 1984), p. 53.

3. Ibid., p. 51.

4. For a thorough discussion of this evolution, see Jonathan Dean, Watershed
in Europe (Lexington, Mass.: Lexington Books, 1987), pp. 29-59.

5. Norwegian Permanent Parliamentary Defence Committee's Proposal no. 230
(1983-1984), cited in Sweden's Security Policy Entering the 90s, SOU 1985:23,
Oslo, p. 50.

6. Roberts, supra n. 1, p. 84.

7. Alternative Defence Commission (U.K.), Defence Without the Bomb: Non-
Nuclear Defence Policies for Britain (London: Taylor and Francis, 1983), p.
114.

8. Personal conversation with Sven Hellman, Strategic Planning, Swedish
Department of Defense, 1985.

9. Alternative Defence Commission, supra n. 7, p. 114.

10. Ibid., pp. 116-17.

11. "Proposal by the Social Democratic Party for Modernisation of the Defence
with Supplement (The Development and Composition of the Defence)," July 1986.
U.S. officials have looked almost exclusively at the nuclear disarmament
provisions of these proposals rather than those dealing with NPD. See, e.g.,
Stephen J. Solarz, "British Lion Mustn't Spit Out Nuclear Teeth," Wall Street
Journal, November 6, 1986, p. 32.

12. These ideas are nicely summarized in Stephen J. Flanagan, "Nonprovocative
and Civilian-Based Defenses," in Fateful Visions, pp. 98-105.

13. Jochen Loser, "The Security Policy Options for Non-Communist Europe,"
Armada International, Vol. 2, March/April 1982, pp. 66-75.

14. Albrecht von Muller, "Integrated Forward Defense: Outlines of a Modified
Conventional Defense for Central Europe," unpublished paper, 1985; Norbert
Hannig, "Can Western Europe Be Defended by Conventional Means?" International
Defense Review, Vol. 1, 1979, pp. 27-34.

15. Hew Strachan, "Conventional Defence in Europe," International Affairs,
61:1, March/April 1982, pp. 66-75; Horst Afheldt, Verteidigung und Frieden
(Munich: Deutsche Taschenbuch Verlag, 1979).

16. Soviet Lieutenant General Mikhail Milstein of the Soviet Institute for
USA and Canada Studies, quoted in Defense and Disarmament Alternatives, April
1988, p. 7.

Hal Harvey is Director of the Security Program for the Rocky Mountain
Institute in Snowmass, Colorado. Previously a founder and staff member of the
Center for Innovative Diplomacy, Hal was involved in the start-up of CPSR as
a formal organization.

Apple Donates Computers to CPSR

Apple Computer has very generously donated about $25,000 worth of computer
equipment to CPSR, most of which will be used in the National Office in Palo
Alto. The Community Affairs department of Apple awarded CPSR a grant of the
following: a Macintosh II with 2 megabytes of memory, an 80 megabyte hard
disk, a color monitor, color video card, monitor stand, and extended
keyboard; a LaserWriter II NT; two Macintosh SE's with 20 megabyte hard disks
and extended keyboards; two Apple Personal Modems; an ImageWriter LQ printer
with sheet and envelope feeders; and four LocalTalk connections for
networking the hardware. This equipment will make the CPSR National Office an
all-Mac operation. The donated equipment will be networked with the office's
two Mac Plus computers, which are already connected to hard disks, modems,
two ImageWriters, and two Diablo daisy-wheel printers. One of the current
Macintosh Plus computers and an Apple 20 megabyte hard disk were also donated
to CPSR by Apple. The new equipment will replace two old Apple II's, which
were donated to CPSR by members. One of the Mac SE's, a modem, and a printer
will go to the new CPSR office in Washington, D. C.

CPSR has also been awarded several software packages as donations. Adobe
Systems, Inc., of Mountain View, CA, has donated the high-end graphics
program Adobe Illustrator 88, as well as a supplementary package of graphic
images called Collector's Edition I, and a downloadable PostScript font, ITC
Garamond. Ashton-Tate Corporation of Torrance, CA, has donated the feature-
packed word processor FullWrite Professional. A CPSR member, Frank Giraffe of
Philadelphia, donated a copy of the desktop presentation program Cricket
Presents, which is produced by his employer. Cricket Presents makes color
slides and overhead transparencies, as well as computer slide shows.

For the last two years, CPSR has been working with the desktop publishing
program Pagemaker, which was donated by the Aldus Corporation in Seattle. The
CPSR Newsletter is produced with Pagemaker 3.0.

The National Office staff is very appreciative of these generous donations,
and the corporate support for CPSR that they represent. If you work for a
company that makes Macintosh software, or have access to hardware or software
that would be useful in a Mac environment and which might be donated to CPSR,
please contact the National Office at (415) 322-3778.

Rotenberg Joins CPSR Staff To Head Washington Office

Beginning in November, CPSR Board member Marc Rotenberg will join the CPSR
staff as National Program Director for the CPSR Computing and Civil Liberties
Project, and head up a new CPSR office in Washington, D.C. Marc will be
directing the organization's work on civil liberties, privacy, access to
information, and computerized voting. He will also represent the organization
in Washington on the other issues CPSR addresses.

The CPSR office in the capital will make the organization the only active,
significant representative of computer professionals in Washington. A
presence in Washington is an important opportunity for CPSR, since it signals
the organization's intent to play a prominent role in shaping public policy
involving information technology, and it highlights the national character of
the organization.

CPSR is very fortunate to have Marc Rotenberg to lead this new venture. He is
leaving his position as staff counsel to the Senate Subcommittee on Law and
Technology, where he helped direct many legislative efforts involving
computing technology. Marc was the founder and first executive director of
the Public Interest Computing Association (PICA) in Washington, D.C., and a
co-founder of the ACLU Privacy and Technology Project. He is a graduate of
Harvard College and Stanford Law School.


Special Section on Computing and Elections

Editor's introduction: As this Newsletter goes to press, the United States
electorate prepares for an important Presidential election as well as state,
county, and local elections. Most American voters will have their votes
counted by computer. The following reports reveal that there are serious
concerns with the computer systems used to count votes in the United States.
These systems, many of which tend to be very primitive by present-day
computing standards, are subject to error from software problems, hardware
malfunctions, and user miscalculation or misunderstanding. There is also an
alarming potential for electronic fraud in modern, computerized elections,
with corresponding difficulties in detecting criminal activity or intent.

The following two articles have been produced by two study teams working
independently. While both teams agree that there is a significant problem
with computerized vote tallying, they have different opinions on the nature
of the problem and potential solutions. Both articles are set forth here in
order to convey these two different approaches, which more or less correspond
to two active groups of election reformers in the United States.

One article was produced by Election Watch, a special project of the Urban
Policy Research Institute in Los Angeles, directed by Mae Churchill. Edited
by Jennifer Leonard of Election Watch, it is based on a longer paper,
"Ensuring the Integrity of Electronic Elections," by Howard Jay Strauss and
Jon R. Edwards, both of Princeton University.

The second article is the product of the CPSR Computer Voting Project, a
program of CPSR/Portland. The article was written by Bob Wilcox and Erik
Nilsson of CPSR/Portland, who have both been researching the computerized
election issue in cooperation with a number of national experts, including
election officials.

Stand-alone copies of these articles may be ordered from the CPSR National
Office, at P.O. Box 717, Palo Alto, CA 94301. Please enclose $5 for the two
papers for copying, handling, and postage.

Computerized Vote Counting: How Safe?
Bob Wilcox and Erik Nilsson
CPSR/Portland

On election day we cast our ballots, and hear the results that evening. Most
votes are counted on computers, but we rarely question the accuracy of these
computer-generated results. Who makes sure that these programs are accurate?
Let's look at the 1985 mayoral race in Dallas, Texas, to see what can go
wrong.

Dallas County, 1985
Every Election Official's Nightmare

As is frequently the case, the city limits of Dallas had changed since
precincts were drawn, resulting in precincts containing both city and non-
city residents. This situation is called a "split precinct." Elections
officials purchased the wrong computer software to handle these split
precincts.

On election night, a power glitch interrupted the central computer that was
counting the votes. The leading candidate before the glitch
was trailing soon after the power was restored, arousing the suspicions of
one candidate's campaign manager. Because of one of the incorrect pieces of
software, the official vote count included three different totals for the
number of votes cast, a fact that was only discovered later.

Because the election was close, a recount was ordered. When the votes were
recounted, 161 of the 250 precincts showed vote shifts one way or the other,
but these largely canceled out one another and didn't change the overall
election. Dallas County used pre-punched ballots for this election. A pre-
punched ballot is voted by placing it into a jig and punching out a pre-
perforated "chad." If this chad is loosened but not removed, it is called
a "hanging chad." These changes in votes were blamed on hanging chad, but no
physical evidence to support this contention was ever produced. In several
precincts, the number of ballots counted was different from all three
previous ballot totals. A defect in precinct procedures has been suggested as
the cause, but again, no evidence has been produced.1 Later, charges were
raised that all or part of the mayor's race results had been fabricated.2
These have never been disproved.

As a result of this fiasco, at least one election official was soon no longer
working for Dallas County Elections. The New York Times ran five articles on
the situation, and the Texas legislature conducted hearings, ultimately
amending state election law (Texas HB 1412). Dallas County also changed vote-
counting systems. The Dallas County episode was not an isolated incident. In fact, the
original system the county bought is in use in many other counties today.3

How Elections Are Conducted

In the U.S., elections are administered locally. While states set the rules
for their respective elections, counties, a few large cities, and small units
like townships actually register voters, distribute ballots, and count them.
Naturally, these jurisdictions purchase equipment and software for the
elections they conduct.

The earliest use of computers to count votes was in 1964 when five counties
used punch-card ballots.4 In 1970, fraud was alleged in a Los Angeles election
using punch-card ballots.5 An investigation found no evidence of fraud,
but IBM, the vendor for Los Angeles' system, decided to leave the business.
To this day, most elections systems are purchased from small firms, some with
just a few employees.

As of early 1988, about 55% of the popular vote was counted by computer
systems. Mechanical lever systems make up the bulk of the remaining 45%.6 By
far the most common type is the punch-card system, where voters vote by
punching holes in a computer punch card. Systems where voters fill in boxes,
as in standardized tests, are also used. Electronic voting machines, though
rare now, will tend to replace mechanical lever systems over time. Finally,
computer systems often tabulate manually entered lever-machine totals. Thus,
except in small jurisdictions, computerized vote-counting systems are or
will be the norm.

The Problem

Computers are used to count votes because they are cheaper and faster. But as
Dallas County's experience indicates, the use of computers in itself does not
guarantee better elections. Computerized vote counting as practiced today
evolved in the "frontier days" of computing. During that period (from the
mid-1950's to about 1970), the implementation of computing systems focused
purely on the necessary functionality for the application, and systems were
operated by experts for experts. Structured design and design for
maintainability were not well understood. The 80's have seen dramatic
improvements in the reliability and user-friendliness of computer systems,
but computerized voting systems continue to exhibit a frontier mentality.
This results in three problems: unreliable programs, user-hostile programs,
and inadequate administrative controls.

Elections computer programs are not subject to design or source code
inspections by independent auditors outside the vendor, as banking software
is. Some programs still in use consist of unstructured COBOL, patched over
the years. In some cases, special purpose code is written for a specific
election, then discarded. There are no requirements that the programs be
written in a high level language, so assembly language is frequently used.
These features make it difficult to determine if the program is designed
correctly.

Even if elections officials had access to source code for vote-counting
programs, few would be able to obtain the resources to determine its quality.

The users of these systems are the voters and the elections system operators.
Though the process of voting seems simple, many systems are dependent on the
voter accurately lining up the ballot in a jig so that the vote punches are
counted as the voter intends. Elderly and handicapped voters have special
problems with these systems.

Elections operators have to lay out the ballot and configure the program for
counting, often by binary fields on punch cards. The reports that are
produced showing the results are sometimes not comprehensible to anyone but
the programmer who designed them.

Good operational practice in the counties is necessary to protect even the
best computerized voting systems. Errors in configuration programs frequently
go undetected because of inadequate pre-election testing. In one extreme
case, the pre-election test for one election consisted of 13 ballots each in
2 of 63 precincts. This inadequate testing failed to alert election officials to errors
that caused them much grief later.7 An audit trail for the counting process
is essential, but on some systems it may be disabled or even be an optional
software module!

One critical area of weakness is administrative controls on the programs
themselves. Controls to ensure that the object code corresponds to the correct
source are not in place. Procedures to ensure that the operating system or
other programs running during the count don't affect the vote tabulation are
largely absent.
Some counties permit dial-up modem access to the computer during the count.
Finally, some jurisdictions actually allow the software vendor to operate the
system or modify the vote-counting program during the election.

The Iron Triangle

The improvements that election systems need are well known within the
computing profession. If elections officials, elected officials and the
public want fair elections, why do problems persist? The three forces which
could make a difference are frozen in a web of dependencies which stifle
change.

Elected officials, primarily at the state level, could pass laws requiring
better systems. However, they were elected by the existing elections systems
and thus don't tend to question their reliability. Replacing existing systems
sounds expensive, and expenditure on vote counting is not of visible benefit
to the voters.

Elections officials could make known their dissatisfaction with existing
systems. However, they see the computer systems as part of the larger
elections process for which they're responsible. It's threatening to question
the reliability of elections systems--in many cases they could lose their jobs
over real or even alleged problems. Worse, they must choose among the
products available, but they often lack the expertise and information to make
an informed decision, or to demand better products.

Elections vendors could provide better systems, but they are small
companies, and the market is small, providing little revenue to be plowed
back into development. Moreover, the fragmented market of individual counties
is not demanding better systems. Finally, for vendors to admit deficiencies
would open them to lawsuits or cause existing users to question their
investment in current systems.

The Solution

The technical solutions to the problems of elections system reliability
are well within the state of the art in data processing. Some of these
solutions were proposed as early as 1970 as a result of the problems with the
Los Angeles election.8 Even now, some of those recommendations have not been
put in place across the U.S. Numerous reports since have made specific
recommendations, which, if applied, would have prevented most of the problem
elections to date.

Better Programs Through Testing

To address the problem of program correctness, elections system vendors
need to catch up to current industry practice in system development. This
would require vendor quality assurance programs with practices such as source
code and design walkthroughs. Independent testing would also be required. The
quality of designs and of source code should be examined, and full-system
tests should be conducted with a worst-case number of realistic test ballots.
Currently, each state does its own testing of vote-counting systems. If this
rigorous testing were to be conducted, testing would become very expensive.
To make this testing cost-effective, states will have to cooperate on testing.
Ideally, testing would be done once for the entire U.S. Requiring the use of
high-level languages throughout vote-counting programs would also reduce the
cost of testing, because experts in dozens of different assembly languages
wouldn't be required. Instead, one expert in each permitted high-level
language is all that would be needed.

Draft voluntary standards from the Federal Election Commission propose a
National Testing Facility that would test each vote-counting program once.
The test would include optional sections that would be relevant to only some
state election laws. States would ignore the results of sections that aren't
relevant to their state laws. However, a test facility need not serve the
whole country, at least not initially. A consortium of states could begin a
testing facility. States that are concerned about the quality of their vote-
counting systems could join the consortium. Regardless of whom it serves, the
test facility could be administered by independent testing labs, the National
Institute of Standards and Technology (formerly the National Bureau of
Standards), a public interest group or another federal agency, as politics
determine.

Eliminate Pre-punched Ballots Now

A spring-loaded punch is now available, for use with plain ballot stock that
is not pre-punched. This punch completely ejects the "chad" from the hole to
prevent it from floating around loose in the deck and creating random
variation in the count. The new punch and ballot are compatible with
existing pre-punched vote-counting systems, requiring the replacement of only
a few components. There is therefore no longer any excuse for pre-punched
ballots.

Improve Interfaces

The elections official's interface needs to be brought out of the realm of
bits. Elections officials in small jurisdictions are sometimes the county
clerk and assessor who are not computer experts, so what might be merely an
inconvenient interface in one county becomes a hazard to accurate elections
in another.

The input formats of all vote-counting programs should be standardized.
Imagine a world where compiler vendors supported only their own proprietary
programming languages, and each language was written from scratch, totally
different from every other. This is the situation in the vote-counting world.
Standardizing input languages for vote-counting programs would allow election
officials to treat configuration for an election as a generic problem.
Training would be vendor independent. In addition, standard tests for vote-
counting systems could be devised and run unmodified on several
manufacturers' hardware. Many election systems contain a program that prints
the ballots for an election. This program should accept the same
configuration file that the counting program uses. Besides reducing the work
of election preparation, this arrangement would nearly eliminate errors
caused by desynchronizations between the ballot-printing program and the
counting program.
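
As a rough sketch of what such a shared definition might look like, the example below has a ballot-printing step and a counting step read the same election-definition structure; the format, field names, and contests are invented and do not correspond to any existing standard or vendor product.

    # Hypothetical shared election definition read by both the ballot-printing
    # step and the counting step, so the two cannot fall out of sync.
    import json

    ELECTION_DEF = json.loads("""
    {
      "election": "1988 General",
      "contests": [
        {"office": "Mayor",     "candidates": ["Smith", "Jones"], "positions": [1, 2]},
        {"office": "Measure A", "candidates": ["Yes", "No"],      "positions": [3, 4]}
      ]
    }
    """)

    def print_ballot(definition):
        # The printing program lays out the ballot from the definition...
        for contest in definition["contests"]:
            for cand, pos in zip(contest["candidates"], contest["positions"]):
                print(f"punch {pos:2d}  {contest['office']}: {cand}")

    def count_ballot(definition, punched_positions):
        # ...and the counting program interprets punches with the same definition.
        tally = {}
        for contest in definition["contests"]:
            for cand, pos in zip(contest["candidates"], contest["positions"]):
                if pos in punched_positions:
                    key = (contest["office"], cand)
                    tally[key] = tally.get(key, 0) + 1
        return tally

    print_ballot(ELECTION_DEF)
    print(count_ballot(ELECTION_DEF, {1, 4}))     # one ballot: Smith, Measure A "No"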

The report of the election results and the log of election night activities
should also be standardized. The same testing and training benefits would
also accrue for these interfaces. Furthermore, these outputs ought to be
public records (as they are in many places), and therefore must be
intelligible to interested voters. At the least, there should be a usable
reference available that explains the format of the reports, and the terms
used.

Finally, the interface that the voter sees should be improved and
standardized, so that voters wouldn't have to learn a new way of voting
whenever they move to a new county.

Improve Administrative Practices

Most elections problems that have been discovered have been at least partly
caused by inadequate administration. The best computer program in the world
can be brought down by poor administrative practices.

Some jurisdictions have carefully thought-out plans, covering a wide variety
of possible circumstances and compiled into books which are actually read and
used. Some states, notably Florida, provide guidance (sometimes backed up by
legal requirements) in this area. Many jurisdictions are not so fortunate.

One of the administrative practices that has caused the most trouble in the
past is pre- and post-election testing. Since a vote-counting system is
configured differently for each election, it must be tested prior to use. It
is also wise to test the system after use to be sure that no accidental or
malign change has occurred in the configuration. For the test to be useful,
it must test every ballot position for every ballot style, contain ballots
from every precinct, and contain more ballots than the largest number
expected.
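
A minimal sketch of a test-deck generator satisfying these criteria appears below; the precinct names, ballot styles, and position counts are made-up examples, not a real jurisdiction's configuration.

    # Rough sketch of a pre-election test deck that votes every ballot position
    # in every ballot style, includes every precinct, and exceeds the largest
    # expected precinct turnout.  All data here are invented examples.
    def build_test_deck(styles, precincts, largest_expected):
        """styles: {style_name: number_of_positions};
        precincts: {precinct_name: style_name}."""
        deck = []
        # One ballot per position per precinct covers every position and style
        # and guarantees at least one ballot from every precinct.
        for precinct, style in precincts.items():
            for position in range(1, styles[style] + 1):
                deck.append({"precinct": precinct, "style": style, "punches": [position]})
        # Pad with blank ballots until the deck exceeds the largest expected count.
        while len(deck) <= largest_expected:
            deck.append({"precinct": next(iter(precincts)), "style": None, "punches": []})
        return deck

    deck = build_test_deck({"city": 40, "county": 35},
                           {"P-1": "city", "P-2": "county", "P-3": "county"},
                           largest_expected=150)
    print(len(deck), "test ballots")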

After the election, this same test should be re-run, and the same (correct)
results should be generated. A valuable check on the accuracy of a vote-
counting system is a random recount on an independently managed system or by
hand. One per cent of precincts should be recounted at a minimum, increasing
to one-hundred per cent of precincts when the results are very close.
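
The recount policy in the preceding paragraph can be expressed as a few lines of selection logic. In the sketch below, the half-per-cent margin used to trigger a full recount is an assumption for illustration, not a figure from this article.

    # Illustrative selection of precincts for a post-election spot recount:
    # at least one per cent chosen at random, rising to a full recount when
    # the margin is very close.  The 0.5% cutoff is an assumed example.
    import math
    import random

    def precincts_to_recount(precincts, margin_fraction, close_race_cutoff=0.005):
        if margin_fraction < close_race_cutoff:
            return list(precincts)                            # very close: recount everything
        sample_size = max(1, math.ceil(0.01 * len(precincts)))
        return random.sample(list(precincts), sample_size)

    all_precincts = [f"P-{i:03d}" for i in range(1, 251)]
    print(precincts_to_recount(all_precincts, margin_fraction=0.04))        # ~1% random sample
    print(len(precincts_to_recount(all_precincts, margin_fraction=0.001)))  # all 250 precincts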

The Role of Computer Professionals

As computer professionals, we have little direct responsibility for the
machinery of democracy. However, when an important component of the election
process is inadequate, and this component is a computer system, we have a
responsibility as a profession to investigate and speak out.

Of course, it is individuals in the profession who must discharge this
responsibility. We can inform ourselves about what kind of system each of us
votes on. We can get to know our election officials, tell them about our
concerns, and learn about their systems from the election official's
perspective.

Computer scientists can also be expert observers of elections. They can be a
resource for election officials and elected officials, suggesting ways to
make their systems more secure, or evaluating prospective systems.

We can provide an analysis of the problem to public interest groups and the
public, and answer questions from decision makers on the available courses of
action. This is an important issue, one that cries out for our special
expertise. Working together, we can make elections more reliable and
deserving of our trust.

References

1. Roy G. Saltman, Accuracy, Integrity, and Security in Computerized Vote-
Tallying, preprint, (Washington, D.C.: U.S. Department of Commerce, National
Institute of Standards and Technology [formerly National Bureau of
Standards], NBS Special Publication 500-158, 1988), section 4.3.

2. Erik Nilsson, A Bucket of Worms: Computerized Vote Counting in the United
States, (Palo Alto, CA: Computer Professionals for Social Responsibility,
1988), p. 3; Terry Elkins, Testimony Before the Texas House of
Representatives, November 25, 1986.

3. Lance J. Hoffman, Making Every Vote Count: Security and Reliability of
Computerized Vote-Counting Systems, (Washington, D.C.: The George Washington
University, 1988), p. 8.

4. Roy G. Saltman, Effective Use of Computing Technology in Vote-Tallying,
(Washington, D.C.: U.S. Department of Commerce, National Bureau of Standards
NBSIR 75-687, 1975), p. 10.

5. Saltman, 1975, op. cit., pp. 16-18.

6. "Type of vote-counting Equipment by County, Elections Data Services and
the Elections Center, 1988, p. 1.

7. Saltman, 1988, op. cit., section 4.4.1.

8. Saltman, 1975, op. cit., pp. 35-38.

Vulnerable On All Counts
How Computerized Vote Tabulation Threatens The Integrity of Our Elections

An Election Watch Report

I know of no safe depository of the ultimate powers of society but the people
themselves...
-- Thomas Jefferson

The advent of computerized vote counting over the past two decades has created a
potential for election fraud and error on a scale previously unimagined.

The complex systems that tally the votes of 55 percent of our electorate are
badly designed, hard to monitor and subject to a host of technological
threats. Worse yet, detection of error or sabotage is more difficult than in
traditional elections.

This report identifies some of the ways in which electronic voting systems
are vulnerable to error, both accidental and deliberate, and proposes a
comprehensive solution that returns control of elections to citizens.

The plan would make election fraud much more difficult to accomplish and more
certain of detection.

PART I:
A No-Confidence Vote for Computer Election Systems

Thirty-one per cent of American counties, representing 55 per cent of our
voters, now use computerized systems to tally votes and declare election
winners. These systems have been quietly introduced over the past twenty
years without public understanding of the implications of this change.

Decisions to use computers in vote counting have usually been made by
appointed election officials, guided by vendors in the business of selling
hardware and software for election systems.

The sales pitch has been that computers are more accurate than people: that
election results will be more secure once the human factor is minimized, and
above all that election results will be available more rapidly.

Instead, the introduction of computerized vote tallying has made it much
easier to compromise elections and much more difficult to detect errors.
Today's computerized vote-tallying systems are extremely vulnerable to every kind of threat, error
and manipulation possible to perpetrate in computer systems.

Our Voting Rights Are At Issue

When we vote, we expect our votes to be counted -- once -- for the candidates or
issues we've chosen. We expect that only those registered voters who turn up
on Election Day (or send absentee ballots) will have their votes counted. We
expect that election officials representing diverse political viewpoints will
protect our voting rights.

If and when suspicion arises, we expect a recount will expose culprits and
lead to a correct election outcome.

Today's computerized voting systems can't guarantee these basic rights.
Lawsuits in Indiana, Florida, Maryland and West Virginia have alleged fraud
in computerized elections. Each case involves vote-counting systems from the
same vendor, whose products dominate the market and are vulnerable to most of
the threats outlined in this article.

Have the courts been able to resolve these cases? Not yet, because the
computer programs that count our votes are poorly designed, contain
proprietary information, cannot produce reliable recounts, and do not retain
audit information that could confirm -- or disprove -- whether fraud actually
occurred.

Our Election Officials Have Left Their Posts

What makes computerized election systems vulnerable to error or tampering?
The fundamental answer is that control of elections no longer rests with
identifiable and accountable local election officials.

The officials are still there, but after purchasing computer vote-tallying
systems, many officials find the technology impossible to understand, operate
or supervise. So they turn to computer experts for help. These experts
usually work for the same vendors who supplied the vote-counting equipment.

There is no reason why computerized election systems have to be so obscure.
But the companies that design and sell the voting systems have had little
incentive to make their products intelligible to average citizens.

In the Electronic Voting Booth

In the most commonly used computer vote-counting system, the voter steps into
a voting booth and inserts a punched-card ballot behind a printed cardboard
booklet listing candidates and issues. The citizen votes by inserting a
stylus into holes beside the appropriate names or issues, punching out
corresponding holes on the ballot card.

Special vote-tallying computer programs then analyze the ballots by counting
the holes which correspond to each candidate and issue. At the end of
Election Day, the computer is supposed to report final winners and losers.
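
In outline, the counting step is nothing more than mapping punched hole
positions to candidates and summing. The Python sketch below illustrates only
that idea; the ballot layout is made up and is not any vendor's actual format.

    from collections import Counter

    def tally(ballots, layout):
        """ballots: iterable of sets of punched hole positions.
        layout: dict mapping hole position -> (contest, candidate)."""
        totals = Counter()
        for punched in ballots:
            for position in punched:
                if position in layout:       # ignore punches outside the layout
                    totals[layout[position]] += 1
        return totals

    # Hypothetical layout: hole 12 is Mayor/Smith, hole 13 is Mayor/Jones.
    layout = {12: ("Mayor", "Smith"), 13: ("Mayor", "Jones")}
    print(tally([{12}, {13}, {12}], layout))  # Smith gets 2 votes, Jones gets 1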

Can voters be sure they marked their ballot correctly? Can they be sure their
votes will be counted for the candidates and issues they chose? Can they be
sure their ballots will actually be counted? Often not.

Not-so-Benign Ballots

It is all too easy for ballots used in today's computerized elections to be
mismarked or misread.

The booklet that indicates which candidates or issues go with which holes can
be misprinted, deliberately or accidentally. Since most computer voting
systems don't print the names of candidates and issues on the ballot itself --
only on the booklet, which stays in the booth -- the voter has no way of
knowing whether the marked ballot really indicates the candidates and issues
he or she selected. The voter can't even check for simple mistakes.

Punched card ballots can be invalidated simply by the addition of an extra
punch or two. Bogus ballots can be added to the system (or others removed).
Punched-out dots from punched cards can fall into the next ballot in the
stack, filling a hole and erasing a vote.

Ballots that are marked like standardized tests carry orienting marks which
tell the computer how they should be read. These marks can be misprinted or
mispunched, for example, to throw each vote for the first candidate to the
second, for the second to the third, and so on.

Of course, someone might question why the first candidate received no votes,
so anyone intending to change the orienting marks would do so only for a
fraction of the ballots -- just enough to change the results in a close race.
Elections in California, with its millions of voters, have been won and lost
by as little as 100,000 votes.

Sabotaging the Vote-Counting Software

A computer vote-tallying program consists of a precise set of instructions
the computer is expected to follow. Any change in the program means the
computer will behave differently. In a computerized election system, the
program can secretly be changed to:

• Add extra ballots

• Cast invalid ballots a particular way

• Discard a portion of ballots cast for a particular candidate or issue

• Misread some or all of the ballots

• Turn off the computer's record-keeping system

• Change how election results are reported

Any of these threats can be accomplished by a single computer expert or, in
many instances, by a nonprofessional.

Quite recently the public has become aware of the vulnerability of computer
systems to phenomena such as "viruses," "trapdoors," "time bombs" and "Trojan
horses," any of which can be -- and already have been -- used to disrupt
complicated computer programs. The potential use of these tricks in
computerized vote tallying has received very little public attention.

The September 26, 1988, Time magazine cover story on computer viruses
explained how these self-replicating programs penetrate and damage massive
computer systems. Election systems weren't even mentioned, but, as it stands
now, viruses may be able to do more harm in elections with less chance of
detection than in the systems Time magazine described.

A trapdoor, or hidden sequence of steps in vote-tallying instructions, could
allow an election operator to slip by the computer's security system with a
secret password. Once into the program, anything can be modified: the
operator could, for example, program the computer to count votes for one
candidate as votes for another. After enough votes have been changed to swing
the election, the trapdoor can be closed -- with no record that it ever existed.

Trapdoors take someone on site to activate them. But with a time bomb, the
computer program will act up at a certain time -- say, on Election Day. The time
bomb could instruct the computer to add five hundred dummy votes while the
perpetrator relaxed thousands of miles away.

Just like the ploy the Greeks used, Trojan horses can sneak trouble into
election computers by hiding inside another appealing program, perhaps one
that prints up election results as bar charts.

Once election officials open the gates by using the bar chart program, the
Trojan horse can let out its software soldiers and manipulate the vote-
counting procedure, first turning off the computer's record-keeping system or
audit trail so that no later inspection of the election will turn up the
subversion.

Surely computerized election systems, like bank computers, have protections
against these threats? Not yet.

What Security Systems?

An unscrupulous computer programmer could insert a Trojan horse, time bomb or
numerous other software threats into our vote-counting systems simply by
working at the right computer, unsupervised.

This could happen when the vote-counting software is designed, when it is
modified for local precincts, or on Election Day, when the software program
needs help from a human operator.

Who would ever know? Private businesses supply all of today's vote-counting
software and specify the hardware on which it is to run. Only these vendors
double-check their products for accuracy. There is no independent review that
could assure election officials that the programs do exactly what they are
supposed to do, and nothing else.

Furthermore, the programs do not lend themselves to examination. When
Princeton computer scientists Howard Strauss and Jon Edwards reviewed the
most common computer vote-tallying system (EL-80, which runs the Votomatic
system that dominates the market), they found the software so poorly designed
that it was virtually impossible to guarantee its functions.

Another computer scientist has called EL-80 "a bucket of worms," referring to
its over 4,000 confusing, badly written instructions, all of which must work
perfectly for the election results to be properly tallied and reported.

A Convenient Lapse of Computer Memory

One defense against software manipulation is the computer's audit trail,
which makes a record of every step the computer takes, every keystroke
entered, every ballot analyzed. With a complete audit trail, any attempt to
change the computer's programmed operation would be recorded. Problem
elections could be reconstructed to identify what happened, and when. Errors
could be corrected.

But the audit trails of most election systems can be turned off. This means
that, if the audit trail were turned off on Election Day, by error or intent,
it would be impossible later to reconstruct the election if the results were
challenged. This would be like trying to find the cause of a mysterious
airplane crash without the cockpit "black box" recorder.

Manipulating Computer Hardware

The election computer itself can be violated. With access to the computer,
its peripheral equipment, or cables that network election computers, a
computer expert could alter the computer or tap into its operations on
Election Day.

Through hardware manipulation, votes can be added or modified, an operator
could be permitted to bypass the computer's security system, or the entire
election could simply be muddled beyond repair.

This is possible whenever election hardware is left unsecured or tended by a
single person, when unsecured networks link election computers, or when
modifications are added to the specialized computer equipment that most
election systems now require.

Don't we lock up our election computers, like the lever machines of old? Not
necessarily.

For example, in 1987, someone gained unauthorized access to the Voter
Registration Board office in Burlington, Vermont and tampered with its
computer. The unknown culprit managed to wipe the names of as many as 1,000
voters off the official voter checklist, which is used to confirm voter
eligibility at the polls.

Some Enlightening Errors

There are many other deliberate tricks known to computer experts, and
computerized vote-counting systems are vulnerable to just about all of them.

Since these tricks can be designed to destroy evidence of their existence,
there are no confirmed reports of their use in elections. Two of the lawsuits
mentioned above were dismissed because of lack of evidence which could prove
or disprove fraud; another because the software manufacturer successfully
asserted that its programs were "trade secrets" and could not be examined.
The fourth, in Indiana, is still active.

But the chilling possibilities of deliberate fraud in computer vote-tallying
can be understood by examining the chaos wrought by mistakes, which by their
unplanned nature are far more likely to be discovered than fraud.

In the 1980 presidential primary, a programming error led Orange County,
California, election computers to count 15,000 votes for Jimmy Carter and Ted
Kennedy as votes for Lyndon LaRouche and Jerry Brown.

In Gwinnett County, Georgia, a computer hardware error affected hundreds of
votes in a close race. A complete recount reversed earlier results in which a
candidate had apparently won by eight votes out of about 13,000.

In Moline, Illinois, a faulty timing belt slipped intermittently on one card
reader. This led to a miscount where the wrong candidate actually assumed
office and later had to give it up.

A 1988 National Bureau of Standards report lists some recent difficulties in
26 computerized elections.

How much worse can it get?

A De Facto Monopoly

In any given election district, the company that supplied the vote-counting
system usually provides the experts needed to operate the system. Since the
software receives no independent review, and since election officials feel
helpless to supervise the on-site experts, this kind of access means that a
single computer programmer bent on mischief can secretly alter any election
outcome.

Moreover, while there are about a dozen vendors of computerized election
systems, one vendor alone supplies the programs which count more than one-
third of the nation's votes in local, state and federal elections. This de
facto monopoly could spell enormous trouble were any of this company's
personnel tempted to play with our votes.

Such a concentration of power in the hands of computer experts is
astonishing. Today's procedures make it absurdly easy for a person so
inclined to meddle with election computers and their software. And there is
little chance of detection later.

A Call to Action

America's fundamental democratic institution is ripe for abuse.

Election officials depend on a few computer experts who are unaccountable to
voters.

A handful of private companies monopolize the technology that counts our
votes.

Vote-counting programs, ballots and equipment are known to be vulnerable to
various types of threats, yet current election procedures fail to protect
them.

Many machine-readable ballots aren't readable by voters trying to confirm
that the punched holes correspond to their choices.

And the election computer may never document that someone has tampered with
it, because its audit trail can be turned off.

It is ridiculous for our country to run such a haphazard, easily violated
election system. If we are to retain confidence in our election results, we
must institute adequate security procedures in computerized vote tallying,
and return election control to the citizenry.

It is our hope that this report will result in very constructive dialogue and
action leading to reform of electronic voting in the United States.

PART II:
A Secure Solution

We have the technology and the knowledge right now to develop secure
computerized election systems. Any computer system can be undermined by fraud
or error, but there are methods that will make such threats difficult to
perpetrate and likely to be detected.

Princeton computer scientists Howard Strauss and Jon Edwards have designed a
plan for secure elections which could be implemented within two years. Their
proposal balances the interests of software vendors with the public's need
for accurate and verifiable vote counts. Whether or not this particular plan
is adopted, it is urgent that the issue be publicly discussed and a
competent, comprehensive election security plan adopted.

Independent Review of Software

A crucial element in the Strauss-Edwards proposal is independent review of
vote-counting software by impartial computer scientists.

States and local election districts have the primary responsibility for
ensuring that elections are properly conducted. When most voting was done
with paper ballots and lever machines, the expertise required to manage an
election could be readily obtained on a local level.

Electronic elections, however, require the development of specifications for
election software, selection of software and hardware, and verification that
the systems will do what they are supposed to do and nothing else. To have
each state obtain the expertise needed for these tasks would be expensive and
indefensibly duplicative.

Strauss and Edwards propose, therefore, that a national review panel of
computer scientists, unaffiliated with any election software vendors, be
empowered to certify computer programs and equipment for use in all
elections. The vendors would be required to provide software that could be
readily analyzed.

Once these tasks are completed, the remainder of the election process --
ballot creation, vote tabulation, reporting of results -- can be done readily and cost
effectively at the local level.

A Separation of Powers

The Strauss-Edwards proposal relies on the separation of functions, with
checks and balances throughout the election system. This minimizes the
opportunity for any one person or group to undermine election results, either
accidentally or intentionally.

The separation of software development from software review has already been
described. Another group would be charged with distributing that software to
local precincts and election officials. This group would double-check the
work of the software developers and software review personnel.

At the election level, the Strauss-Edwards proposal would separate ballot
creation, precinct elections, election analysis and the determination of
final election results.

Impartial officials representing all major and minor candidates would be
needed to run each part of the election system. No one affiliated with the
software vendor should be involved in any other part of the election system.

Putting Citizens Back in Control

A key tenet of the Strauss-Edwards plan is the return of election control to
citizens. Citizens, not computer experts, should run our elections.

Software vendors have had no incentive to spend extra money on the kind of
complex programming that would make their software "user friendly" for local
election officials. Nor have election officials realized that they could or
should demand programs that could be operated by a non-expert.

The Strauss-Edwards proposal would require vendors to develop software which
local officials can use. The officials would simply have to be able to turn
on the computer.

Hands off the Keyboard on Election Day

Many strategies to undermine computerized elections require someone to gain
access to the vote-counting equipment on Election Day. Strauss and Edwards
propose an elegant solution: vote-counting programs that require no human
interaction.

Invalid ballots offer one type of opportunity for people to interact with the
election vote-counting computers. The computer vote-counting operation
sometimes stops counting when it encounters an invalid ballot, and asks the
human operator what to do with it.

This may open the gates, allowing the operator not only to choose how the
ballot is cast (for his or her preferred candidate?), but also to confuse or
mislead the vote-counting program. An operator could, for example, introduce
a Trojan horse that would add just enough ballots to throw a close election.

Strauss and Edwards propose a system that does away with invalid ballots
entirely, by requiring that ballots be checked for validity at the precinct
before they are turned in and counted. This could be accomplished with
special software and what Strauss and Edwards call the Ballot Validation
Computer. Voters whose ballots were invalid would be given another chance to
vote. Their original, invalid ballots would be logged and destroyed.
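
Strauss and Edwards describe the Ballot Validation Computer in their paper;
the sketch below is only a guess at the simplest kind of check such a machine
might run before a ballot is accepted, flagging punches that fall outside the
ballot layout and contests with more selections than allowed. The data
structures are hypothetical.

    def validate_ballot(punched, layout, votes_allowed):
        """punched: set of punched hole positions on one ballot.
        layout: dict mapping hole position -> contest.
        votes_allowed: dict mapping contest -> maximum selections permitted."""
        errors = []

        unknown = sorted(p for p in punched if p not in layout)
        if unknown:
            errors.append("punches outside the ballot layout: %s" % unknown)

        selections = {}
        for position in punched:
            if position in layout:
                contest = layout[position]
                selections[contest] = selections.get(contest, 0) + 1
        for contest, count in selections.items():
            if count > votes_allowed.get(contest, 1):
                errors.append("overvote in %s: %d selections" % (contest, count))

        return errors    # an empty list means the ballot may be turned in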

Let Ballots Be Ballots

Voters must be able to confirm that the votes indicated on their ballots
represent the candidates or issues they wanted to select. The Strauss-Edwards
proposal calls for all ballots to include the names of candidates and
descriptions of issues, so that voters are assured their choices have been
correctly entered on the ballot.

Controlling the number of ballots has always been an issue in traditional,
paper elections. Ballot security is even more important in electronic
elections, since the computer's own records may be unable to reconstruct the
election to permit a recount. Strauss and Edwards suggest that, in addition
to physical security measures, each ballot be marked with a precinct number
to ensure that stray ballots could be accurately voted in accord with their
own local slate.

Since the orienting marks on each ballot determine how the computer will read
the card, Strauss and Edwards also propose that local officials ensure that
punched cards have proper orienting punches before being given to the voter,
and that the printing of all other machine-readable ballots be carefully
controlled to ensure accurate orientation when counted.

Off-the-Shelf Software and Hardware

Strauss and Edwards propose that all computerized election systems be
designed to operate only on generic equipment, using generic programs which
can be bought from commercial vendors. This allows all precincts and election
analysis centers to buy their own "fresh" equipment for use in the election.

Even then, Time magazine reports an instance of a computer virus being
unwittingly spread through a piece of commercial software.

A System Which Never Forgets

Probably the simplest, yet most important, security measure to take is to
ensure that we have a complete record of all computerized elections. This can
be accomplished by ensuring that the audit trail in computerized vote-
tallying software can never be turned off. That way, when allegations of
error or subversion arise, every single step of the election can be
replayed -- including errors or tricks in the software instructions.
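
One way to make an audit trail both always-on and tamper-evident is to write
every event to an append-only log the moment it happens and to chain each
entry to the previous one with a cryptographic hash. The class below is a
minimal sketch of that idea, not a description of any existing election
product.

    import hashlib
    import json
    import time

    class AuditTrail:
        """Append-only, hash-chained event log: removing or altering an entry
        breaks the chain, so tampering is detectable after the fact."""

        def __init__(self, path):
            self.path = path
            self.prev_hash = "0" * 64            # sentinel for the first entry

        def record(self, event, **details):
            entry = {"time": time.time(), "event": event,
                     "details": details, "prev": self.prev_hash}
            line = json.dumps(entry, sort_keys=True)
            self.prev_hash = hashlib.sha256(line.encode("utf-8")).hexdigest()
            with open(self.path, "a") as log:    # append, never overwrite
                log.write(line + "\n")

    # Usage: trail = AuditTrail("election.log")
    #        trail.record("ballot_read", precinct=12, card=3401)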

Overseeing the System

Elections are the province of local and state government. But in our
technological age, municipalities turn to national clearinghouses for advice
on the best ways to manage traffic, pollution, housing, redevelopment, and
other civic responsibilities.

It appears necessary that some national, publicly accountable body assume
responsibility for ensuring voting rights in computerized elections. Strauss
and Edwards propose a Citizens Election Commission (CEC) composed of
impartial citizens of the highest personal integrity and computer scientists
of comparable stature.

The CEC would be responsible for appointing the groups that review and
distribute election software; for setting specifications and certifying
hardware and software for use in local elections; and for developing and
overseeing security measures in the entire election system.

Just how such a body is to be created, how it would respond to the states
while being fashioned at a federal level, and who would join its august ranks
remain complex political problems. But these are surmountable obstacles, given
the urgency of our current situation.

Conclusion

The election plan presented by Strauss and Edwards forces election officials
to be responsible for the election process, minimizes the chances of fraud
and mishap at each level, detects and records any subversion that gets by
anyway, and produces an election which can be audited.

It should disturb us all that these relatively simple procedures are not
already in place.

We do not believe that current voting methods are adequate. We urge lawmakers
and all interested parties to do all that they can, if not to promote the
precise measures that we recommend, then to guarantee that comprehensive
guidelines and procedures that take our concerns into consideration are put
into place and required of all election jurisdictions.

References

Election Administration Reports, (Washington, D.C.: October 26, 1987), p. 5.

Philip Elmer-DeWitt, "Invasion of the Data Snatchers!" Time, September 26,
1988, pp. 62-67.

Lance J. Hoffman, Making Every Vote Count: Security and Reliability of
Computerized Vote-Counting Systems, (Washington, D.C.: George Washington
University, 1987).

Roy G. Saltman, Accuracy, Integrity, and Security in Computerized Vote
Tallying, (Gaithersburg, MD: National Bureau of Standards Publication
500-158, 1988).

Howard Jay Strauss and Jon R. Edwards, "Ensuring the Integrity of Electronic
Elections," (Princeton, NJ, 1988).

"Voters Wiped Out," Vanguard Press, Burlington, VT, December 17, 1987.

Voting System Standard Program (Draft), Federal Election Commission,
Washington, D.C., 1987-88.

Computers and Complexity
Book Review
Alan K. ClineÑCPSR/Austin

Review of the book, The Dreams of Reason: The Computer and the Rise of the
Sciences of Complexity, by Heinz Pagels, Simon and Schuster, 352 pages,
$18.95.

The computer, the instrument of the sciences of complexity, will reveal a new
cosmos never before perceived. Because of its ability to manage and process
enormous quantities of information in a reliable, mechanical way, the
computer, as a scientific research tool, has already revealed a new universe.
This universe was previously inaccessible, not because it was so small or so
far away, but because it was so complex that no human mind could disentangle
it.

I think I hear someone preaching to the choir. If anyone sees the value of
these machines it ought to be computer professionals. Of course, in this
book, Heinz Pagels is writing for a large audience of mostly non-computer
professionals. But this review is meant for professionals, and the question I
should answer is "does this book have something to offer to the computer
literate?" My answer is "perhaps." I did not enjoy reading this book but I'm
glad I did. That enigma I hope to clarify in what ensues.

But first things first. Heinz Pagels was a physicist with a Ph.D. from
Stanford. Last summer he died while climbing mountains in Colorado with his
son. He had been executive director of the New York Academy of Sciences and
held an adjunct professorship at Rockefeller University. His two other
offerings of popularized science were The Cosmic Code (1983) and Perfect
Symmetry (1986). Both were well received, but both dealt with his home turf,
physics.

As I began reading the book and was deciding how to write this review, Dr.
Pagels was still alive. I recognized that, should I have wanted, I could have
sent off my review for his comments. In fact, I could have chatted with him
over the telephone about my impressions and thus not felt so guilty about
reporting the numerous criticisms I have. No such luck now. Please pardon me
for what may seem like disrespect. Recognize that simply by paying this much
attention to his book, I must be taking him seriously. I'll guess that he
would have preferred my quite unabashed fault-finding to simply ignoring his
work.

One of Pagels' primary topics is "chaos." Here is my introduction to the
subject. When I was fresh out of graduate school, I took a position in the
computing facility of the National Center for Atmospheric Research in
Boulder, Colorado. At that time in the early 70's the largest single computer
resource user there (and the computer resources at NCAR were comparable to
the world's largest) was their global circulation model. That thing was a
huge piece of code that not only kept the machines busy but a staff of ten or
so people as well. The intent was to solve numerically the equations for the
fluid dynamics of the atmosphere. It was not claimed that this was to be a
predictive tool. In fact, it was in some sense pure research: "how can we
model the atmosphere on a computer?" One useful result might have been that
other, smaller models might "hang off its sides" (might use the large model
for the generation of initial and boundary conditions, for example). However,
I'm sure that beneath the understated claims for its utility were some hopes
that the model might develop into a predictive tool. As it was at the time,
the model could not beat predictions of the important weather variables
provided by simple constants: all of the differential equations on a grid of
the entire earth couldn't match the method of "what you see today is what
you'll get tomorrow."

In the late 1970's, after coming to the University of Texas, I returned to
NCAR for a visit and listened to a progress report on the global circulation
model. At that point, what had seemed to me to be rather modest goals for the
model had been reduced even further. A climate model was all they were trying
to get by then; they were striving only for good long-term averages rather than
for short-term phenomena. It seemed quite a step back to me, although
not necessarily a reflection of the competence or the dedication of those
involved.

I don't know the status of the global circulation model today, but a year ago
on my most recent visit to NCAR, I noticed several talks being given on
something called "chaos theory." It appears that the original hopes for the
global circulation model (given the computational power of the time or now)
were too high -- the problem simply displays too many instabilities. Small --
in fact, almost imperceptibly small -- perturbations in the initial conditions of
the problem can lead to huge changes in even short term values of the
atmospheric parameters. Such a fact is related to the nature of the fluid
dynamics, not the model.

"III-conditioned Problems"

Perhaps an example from life might clarify this idea of instabilities.
(Actually, numerical analysts such as myself prefer to label these situations
as "ill-conditioned problems." III-conditioning simply means that slight
changes in the inputs to a problem can produce large changes in the
solution.) Consider first the problem of determining the times of the eight
runners of a particular race. This is a well-conditioned problem: slight
changes in the inputs (e.g., the track, the runners' performances, the
measuring equipment) have slight effects on the array of eight times. Now
let's change the problem to that of determining the order of the finishers:
first through eighth. For most situations, this second problem is actually
perfectly conditioned: slight changes in the inputs have no effect at all on
the order. However, in the input space neighborhood of a tie, the problem is
terribly conditioned since slight changes in inputs (perhaps a timing device)
can totally change the outcome. We may think of such situations as highly
unlikely but actually they are common for certain problems (such as weather
modeling) and, in any case, such ill-conditioned cases are often of great
interest (as is the race when it's very nearly a tie).
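
A few lines of Python with toy numbers make the contrast concrete: nudging
the times by a few thousandths of a second barely changes the times
themselves, but near a tie it reverses the finishing order.

    times     = {"A": 10.001, "B": 10.002, "C": 10.850}   # finishing times, seconds
    perturbed = {"A": 10.003, "B": 10.002, "C": 10.851}   # inputs nudged slightly

    print(sorted(times, key=times.get))          # ['A', 'B', 'C']
    print(sorted(perturbed, key=perturbed.get))  # ['B', 'A', 'C']: the order flips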

When the problem is so badly conditioned that even the slightest changes in
inputs produce quite altered solutions, we are in the realm of "chaos." Two
technical books on that subject are Nonlinear Dynamics and Chaos by J. M. T.
Thompson and H. B. Stewart (John Wiley & Sons) and Melting, Localization, and
Chaos, edited by R. K. Kalia and P. Vashishta (North-Holland). A popularized
(and quite well received) volume is Chaos: Making a New Science by the New
York Times science writer James Gleick.
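
The phenomenon itself takes only a few lines to see. The logistic map below
is a standard textbook illustration (not an example from Pagels' book): two
starting values that agree to five decimal places produce completely
unrelated results after a few dozen iterations.

    def logistic(x, r=4.0, steps=40):
        """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    print(logistic(0.300000))   # the two results bear no resemblance
    print(logistic(0.300001))   # to one another after 40 iterations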

Although Pagels' book is not really on chaos, he does discuss several of
these chaotic problems as examples of "complexity." Recall that the phrase
"the sciences of complexity" is in his subtitle. What does he mean by that? The
book is too vague. He says "Complex systems include the body and its organs,
especially the brain, the economy, population and evolutionary systems,
animal behavior, large molecules, all complicated things." What scientist
would admit to working on uncomplicated things, though? He certainly didn't
mean the term to include all of science. He says the themes of the complexity
sciences are "the importance of biological organizing principles, the
computational view of mathematics and physical processes, the emphasis on
parallel networks, the importance of nonlinear dynamics and selective
systems, the new understanding of chaos, experimental mathematics, the
connectionist's ideas, neural networks, and parallel distributed computing. .
. ." And, to a greater or lesser degree, all are discussed, but what's the
commonality there? I'm tempted to say, these are, by and large, just some hot
topics. There is another element, though. Pagels labels the computer ". . .
the instrument of the new sciences of complexity. . . ," and perhaps that's
the best way to define this area: the sciences of complexity are those that,
because of the enormous quantities of information involved, require computers
as research instruments.

Nevertheless, there's too much of an emphasis in this work on fad topics, the
high fashion of science. It's possible that some of these areas will result
in significant results but it's far too early to be drawing many of the
conclusions he does. Pagels says, "like a river that had been driven
underground in the 1960's only to reemerge full strength in the 1980's, the
connectionist view is now the dominant approach to simulating intelligence."
Such a statement may or may not be true in the future but, as for "now," it
certainly was rejected by every one of the artificial intelligence
researchers I contacted. But connectionism/neural nets/parallel distributed
computing are not the only pancakes being sold. I mentioned chaos previously
and there is a section on simulated annealing, as well. Even Karmarkar's
algorithm gets a reference. I'm not so hidebound as to believe that there's
nothing of value in the hot topics and, personally, I find them fascinating,
but experience has shown that it takes these things a while to bear
dependable fruit.

As I read Pagels' work, I was reminded of three rather recent science or
science-related books for wide audiences. They are Richard Feynman's Surely
You're Joking, Mr. Feynman, Richard Rhodes' The Making of the Atomic Bomb,
and Robert Pirsig's Zen and the Art of Motorcycle Maintenance.

Feynman's book showed the outside world that at least one scientist could
have the social skills of . . . well, if not the "Fonz," then at least your
average industrial adhesives salesman. While Pagels' first chapter reminisces
about his congress with the 60's California counterculture (and thus evoked
memories of Feynman), I doubt if he meant to be that much fun. Yes, there are
anecdotes here and there, and the book is described as something of an
autobiography, but it's really more than kiss-and-Teller.

The Rhodes work must stand as the best explanation of pre-1945 physics that
has been produced for the nonphysicist. Of course, it's more than that. It's
history and philosophy as well. So is Pagels' book but with different
emphasis. Pagels' book presents its science chronologically only within a
given topic. Some topics (e.g., connectionism) are given lengthy histories,
others are not. A more striking contrast is in the clarity of the science
being presented. When Pagels sticks to physics, his discussions are
appealing. (On the biological topics I must admit I was too frequently
confused, but perhaps that's a comment on me.) The mathematics and computer
science sections, which are the bulk of the science material, are frankly not
very well done; Pagels' knowledge is quite broad but simply not deep enough.
I'll assume his statement that ". . . compilers, devices that translate the
underlying machine language of a computer into programming languages used by
humans" is something akin to a long typo. But his presentations of some of
complexity theory and cardinality theory are simply confused. He often refers
to algorithmic discovery and analysis as being done by mathematicians; is he
acquainted with computer theorists or does he just give them different names?
In order to promote his case about the rise of "experimental mathematics," he
quotes figures for "equipment" being 8 per cent of the National Science
Foundation mathematics research support budget. Surely, he must recognize
that such sums could easily be spent on laser printers and workstations that
do little more than word processing and electronic mail. More troubling to me
is the short shrift he has given to the significant computational work of the
past. There's too much of an impression that computers being employed in the
sciences is a recent activity.

Science or Philosophy of Science?

Finally, what may seem like the strangest comparison is Pirsig's Zen and the
Art of Motorcycle Maintenance. Although Pirsig deals with science itself very
lightly, he does discuss in great detail the scientific method. Pagels does
as well. Pirsig's book is primarily about philosophy while Pagels includes
more science, but in terms of pure page counts, about one third of Pagels'
book is "real science" and the rest is philosophy of science (a distinction
that, although common, would probably trouble him). Pirsig's motorcycle
journey with his son, and his running commentary on philosophy, unfold
gradually, though. The depth of the material is reflected in the rigor of the
climbing, and we're led up to the top of the mountain at a charitable pace.
In contrast, Pagels made me feel like we stood together at the bottom of a
sheer face and he said "go to it." The philosophy was simply overwhelming,
yet I believe these very personal appeals for the appreciation of the
philosophy of science by scientists and nonscientists alike were possibly the
most important part of his book for him. Some outside reviewing prior to
publication should have clarified this material, though.

Lastly, I'd like to turn to several items in the book that are of particular
interest to socially concerned computer professionals. I was a bit worried
that Pagels gave the impression that computer modelling is a magic potion. In
fact he says, ". . . but people, with the innate desire to control their
destinies, would be foolish to abrogate such high-level judgments to
computers." (He says this in the context of complex economic models but I
suppose he believed the application to be wider.) To the less computer-
literate, though, I wonder if the book might over-glamourize the value of
computer solutions. Amidst all of his quotations, I was disappointed not to
find Richard Hamming's aphorism, "the purpose of computing is insight, not
numbers."

But it's wrong to end on such a sour note. Pagels certainly caused me to do
some contemplation I might not have and for that I think reading his book was
worth the effort. I'd like to close with a long quotation that quite frankly
is not at the heart of the material, but nevertheless was one of those
passages that justifies the price for everything else. This is the
application of scientific analysis to social situations at its best.

Take, for example, the act of lying. We hold the telling of truth as a value;
we are not supposed to lie. Yet if everyone told the truth all the time so
that one could have complete trust in what one is told, then the advantage
that would accrue to a single liar in society is immense. That is not a
stable social situation. On the other hand, in a society of individuals in
which everyone lied all the time, society would be unworkable. The
equilibrium state seems to be one in which people tell the truth most of the
time but occasionally lie, which is how the real world seems to be. In a
sense, then, it is the liars among (and within) us that keep us both honest
and on our guard. This kind of scientific analysis of lying can help us to
understand why we do it.

Alan Kaylor Cline is the David Bruton Professor of Computer Sciences at the
University of Texas at Austin. He is the Southern Regional Representative on
the CPSR Board of Directors.

CPSR/Minnesota Hosts DIAC-88
Eric Roberts
CPSR National Secretary

On August 21, 1988, CPSR/Minnesota hosted the second Directions and
Implications of Advanced Computing symposium at the University of Minnesota
in St. Paul. Attendance was down somewhat, with 75 participants as opposed
to last year's 140. In part, this reflected a decline
in attendance at the American Association for Artificial Intelligence (AAAI)
conference the following week, coupled with the fact that the DIAC schedule
did not mesh quite as well with AAAI as it did last year (we were in
competition with some AAAI workshops, for example). Nonetheless, DIAC-88
featured a number of strong technical presentations, and those who did attend
seemed to enjoy the program.

The symposium opened with an invited keynote address by Douglas Engelbart
(best known as the inventor of the "mouse"), who for many years has worked
with systems to augment the human intellect while retaining a strong concern
for the social impact of technology. Doug's address, entitled "Investor's
Guide: How to Get the Most Return on Your Career Investment," was really two
talks. The first talk was a personal retrospective on how Doug got into this
area. Discovering at age 26 that he had "met all his goals," Doug set out to
find a challenge that would offer the greatest benefit to humanity. Believing
that "complexity and urgency had passed mankind's ability to cope," Doug
decided to look for ways to manage this complexity and began to design
computer-based tools that allow people to "maximize the impact from what they
control." Doug then described his own career experiences as he sought to
develop a conceptual framework for effective human-computer interaction. As
one might expect, Doug encountered some resistance to his new ideas, and
determined that it was important not only to outline a vision, but also to
facilitate the evolution toward that goal. The second part of Doug's talk
went on to describe that vision and a technique to achieve it via what he
calls "organizational bootstrapping." Much of this was very interesting and
is covered in the papers included in the proceedings.

The first technical presentation was given by University of Washington
professor Richard E. Ladner on "Computer Accessibility for Disabled Workers:
It's the Law." In 1986, Congress extended existing legislation protecting
disabled workers to include accessibility to electronic equipment under the
general guideline that "handicapped persons and persons who are not
handicapped shall have equivalent access to electronic office equipment." The
paper in the proceedings outlines the new regulations and analyzes the
reaction from various affected groups. In his talk, Richard offered more
anecdotal background, including a report on his discussions with system
designers who feel that such guidelines will limit their creativity by
forcing them to restrict functionality in order to accommodate a small
minority of users.

The remainder of the morning was devoted to a session on social issues in
computerization, beginning with a presentation of "Computerization and
Women's Knowledge" by Lucy Suchman, based on her paper with Brigitte Jordan.
This was fascinating, if for no other reason than that it managed to tie
together midwifery in the Yucatan with office automation. In both cases,
technological innovation, whether this involved the introduction of "modern"
birthing techniques or the installation of computerized equipment and an
associated standardized set of procedures, changed existing power
relationships by denying the authoritative knowledge already present in the
community. Midwives in the Yucatan, like office workers, understand the
environment in which they work and are respected for their experience. New
technologies, usually introduced without an understanding of the pre-existing
sources of expertise, devalue that experience and diminish that respect.
Moreover, the gap between the design and the use of technology often makes it
necessary to adapt the technology or, in some cases, to reject it. Lucy
concluded by calling for greater involvement of users in the technological
design, as suggested by Christiane Floyd in a paper entitled "Outline of a
Paradigm Shift in Software Engineering."

The last presentation of the morning was "Dependence Upon Expert Systems: The
Dangers of the Computer as an Intellectual Crutch" by Jo Ann Oravec, a
graduate student at the University of Wisconsin. Jo Ann began by describing
various forms of dependence, concentrating on the danger that certain
computational skills will atrophy from lack of use, but also considering that
dependence may arise from economic investment in a new technology, from the
images we construct about how tasks are performed, from a desire for greater
convenience, or from the introduction of workplace rules mandating conversion
to new technologies. After giving brief examples of these forms of
dependence, Jo Ann presented scenarios of where dependence on computing
technology might lead and offered some preliminary thoughts on how the
negative effects of dependence might be avoided.

We reconvened after lunch for Ronni Rosenberg's presentation of "Computer
Literacy: A Study of Primary and Secondary Schools," which includes
preliminary results from her doctoral dissertation at M.I.T. Ronni's
principal thesis is that the benefits of computer literacy have been
oversold, and that the reality is characterized by "vague goals, inadequate
hardware, bad software, and poor training." Her talk included several
examples of the mythology that all too often masquerades as computer
literacy, such as a class in which an operating system was defined as a
program that "directs the flow of electricity" or the remarks of a teacher
who responded to a question with "I don't know how it knows how to
alphabetize -- it's magic." A particularly dangerous aspect of this is that
students are often given a false sense of security about the nature of
software errors, as illustrated by one text which claimed that "if a computer
gets unclear instructions, it will do nothing" and that "if you make a
mistake, the computer will tell you by typing 'ERROR'."

This was followed by Erik Nilsson's presentation, "A Bucket of Worms:
Computerized Vote Counting in the United States," based on the work of the CPSR/
Portland Computers and Voting project. Erik began by describing how vote
counting is currently conducted in the United States and outlining some of
the typical problems that arise. Erik illustrated these with tales from the
1985 mayoral election in Dallas County, Texas, in which three different sets
of results were reported, with audit trails so poor as to make it impossible
to reconstruct the actual tallies. Charging that "the current election
systems are clearly inadequate," Erik concluded by listing several
recommendations to improve the situation, including establishment of a voting
system test facility, uniform adoption of Federal Election Commission
standards, and greater standardization of ballot-counting systems.

The last paper to be presented was "Some Prospects for Computer-Aided
Negotiation" by DIAC organizer Douglas Schuler from Seattle. This paper
continued several ideas that Doug had proposed at DIAC-87 in his call for a
Civilian Computing Initiative. Last year, Doug suggested that one possible
focus for such research would be computer-aided negotiation, and he explored
this topic further here. His principal contribution in this paper lies in his
survey of prior results that are important to various aspects of the
negotiating process. Those who are interested in further background in this
area should look this up in the proceedings.

The day concluded with a panel on "How Should Ethical Values be Imparted and
Sustained in the Computing Community?" moderated by CPSR Executive Director
Gary Chapman. The panelists were John Ladd, professor of philosophy at Brown;
Deborah Johnson, associate professor of philosophy at Rensselaer Polytechnic
Institute; Claire McInerney, professor of philosophy at the College of St.
Catherine in St. Paul; and Glenda Eoyang, who runs a Minneapolis-based
company that trains people in the use of computers. Since each of the
panelists took the question in a different direction, it is difficult to
summarize the panel in any short form, other than to say that the ethical
issues raised by computers are difficult ones, but problems that we, as a
profession, must learn to address.

When the panelists finished their statements, Gary invited two participants
from the Soviet Union, Gennady Kochetkov and Viktor Sergeev, to give some
brief remarks on the issue of computers and ethics. This proved to be one of
the most controversial moments of the day, as the Soviets argued that it is
important to develop techniques for modeling ethical behavior so that
autonomous systems can make "ethical" decisions. Gary reports that their
paper on this subject qualifies this idea with a number of caveats, but the
idea that ethics could be formally or even pragmatically "modeled" unsettled
many of the DIAC participants. Unfortunately, given that the schedule had
slipped so badly during the day, there was little chance for audience
participation in the ethics discussion before everyone adjourned to the
reception.

The success of DIAC-88 was made possible through the hard work of a number of
volunteers. Doug Schuler served as overall coordinator, David Pogoff and the
new Minnesota chapter handled local arrangements, Nancy Leveson was program
chair, and Rodney Hoffman served as treasurer. Thanks are also due to Ablex
Publishing Company for sponsoring the reception, to the American Association
for Artificial Intelligence for donating booth space at their national
conference, to the Ethics and Value Studies Office of the National Science
Foundation for their grant in support of DIAC, and to ACM SIGCAS, Microsoft,
Quicksoft, and Yen Computing for their generous support.

Miscellaneous

AAAS Holds Annual Convention in San Francisco in January

In January, 1989, the American Association for the Advancement of Science
(AAAS) will hold its annual convention in San Francisco. Several panels and
workshops at the convention may be of interest to CPSR members. One panel,
organized and moderated by CPSR/Palo Alto member Dr. Barbara Simons, will be
on "Federal Funding of the Academic Physical Sciences." It will be held on
Tuesday, January 17, from 8:30 to 11:30 a.m., in the California East Room of
the St. Francis Hotel.

The panelists will be Professor Philip Anderson (Nobel laureate), Physics,
Princeton University; Professor Peter Lax, Mathematics, Courant Institute;
Dr. Robert Park, Executive Director, Office of Public Affairs, The American
Physical Society; Dr. Burton Richter (Nobel laureate), Director, Stanford
Linear Accelerator Center; Dr. Robert M. Rosenzweig, President, Association
of American Universities; and Professor William Thurston (Fields Medal),
Mathematics, Princeton University.

For more information, write AAAS Meetings Office, 1333 "H" Street, N.W.,
Washington, D.C., 20005, or call (202) 326-6450.
