
CFP'91 - Trends in Computers & Networks

Tuesday, March 26, 1991

David Chaum

David J. Farber

Martin E. Hellman

Peter G. Neumann

John S. Quarterman

Peter Denning, Chair


Copyright (c) 1991 IEEE. Reprinted, with permission, from The First Conference on Computers, Freedom and Privacy, held March 26-28, 1991, in Burlingame, California. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the IEEE copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Institute of Electrical and Electronics Engineers. To copy otherwise, or to republish, requires a fee and specific permission.

Published in 1991 by IEEE Computer Society Press, order number 2565. Library of Congress number 91-75772. Order hard copies from IEEE Computer Society Press, Customer Service Center, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1264.


DENNING: Welcome to this first technical session of the conference. We will introduce the speakers before we begin:

John Quarterman ... is well known to you because of his authorship of the book, The Matrix: Computer Networking and Conferencing Systems Worldwide [1990, 719 pp.; Digital Press, 12 Crosby Drive, Bedford MA 01730, ISBN 1-55558-033-5]. He's going to talk about the global matrix of computers. ...

Peter Neumann ... is well known for his moderation, his moderate approach to moderating the Risks Forum of the ACM [Association for Computing Machinery]. He was also a member of the panel of the National Research Council that did the report, Computers at Risk, and he's going to report on that. ...

Martin Hellman [is] known to many of you as one of the originators of the public key crypto systems and also [as] an early critic of the Data Encryption Standard [DES]. He's going to be talking about the role of cryptography in today's networking. ...

Dave Farber [is] an old hand in the networking world and has served on many of the national committees that have produced the networks that we see today, especially the NSFnet, and the CSnet that preceded it. Dave is actually one of the co-founders of CSnet. ...

David Chaum ... has long been a proponent of the uses of cryptography to enable people to carry out their private business on computers, and he is also one of the founders of the International Association for Cryptologic Research. ...

DENNING: Computers Under Attack

I'm going to open up with a very few remarks to set a little bit of context here for us. [This] actually fits very well into the context that Larry Tribe set for us here where we talk about the basic distinctions with which we approach the questions that we're here for. [Peter Denning is Editor of Computers Under Attack - Intruders, Worms and Viruses, 1990, 567pp., ACM Press and Addison-Wesley Publishing Co., Reading MA, $23.75, ISBN 0-201-53067-8. -JW]

... This is a panel on computers and networks technology, and what I want to do is to point out the distinctions that we've been using to talk about this network technology and then suggest that some of them are too limited in their scope. Because of those limits, we don't see how to design the networks to serve us well.

I want to use the word "ontology" to describe the set of distinctions that we have for talking about networks. By ontology, I mean the set of distinctions around which our actions are organized. In particular, the actions I'm interested in are the ones that we design our networks and computers with. Sometimes we use the word "vocabulary," or sometimes the word "terminology," or sometimes the word "language," or sometimes the word "semantics" to talk about what I mean here with "an ontology."

The ontology that we have grown up with - which we use a lot in talking about computers and networks - is what I call "information ontology." The central feature of it is information.

We talk about information as if it is a thing that you can hold in your hand. And, among other things, the chief concepts in the information ontology are information containers, information channels, transmitters and receivers. A container can be anything that holds information, such as a document or a file or a database. A channel is any mechanism through which a stream of information can be sent, such as a link or a protocol or a telephone line.

The transmitter is a computer or even a person who has something to offer, some information to offer. A receiver is the other end of the channel, the machine or indeed the human being who receives the information.

This all occurs inside of a technological way of thinking about computers and networks, some of which Larry Tribe referred to earlier when he talked about cost/benefit analysis and other things that are applied to help us understand how the law applies in these things and frequently lead us astray.

[There are] what I call breakdowns ... that we are experiencing today. ... One of them I just referred to: a lot of people talk about security as something added onto our computers and networks as an afterthought, which means that somehow or other it wasn't adequately taken care of in the design.

Another area where we have persistent problems we refer to as "ease of use." The networks are getting bigger and more complicated, and people are having trouble finding things in them and using them. In fact, that's one of the things that John Quarterman wants to talk about in a few minutes.

We, thirdly, have been experiencing various types of legal ambiguities. Larry Tribe enumerated quite a few for you in this arena, where the lines of separation are becoming vague, or previously separated domains now start to overlap.

A fourth breakdown is a lack of privacy in the networks. You don't know whether any of your electronic mail is actually being read by anybody else, and you have no way of finding out, and no way of guaranteeing [privacy].

A fifth issue is suggested by the big word "dossier," which is a concern that David Chaum has [researched] for a long time. This is the business where you provide information to one organization and they may combine it with information from other organizations in ways that you don't know about and you may not have intended. This is a growing concern among many people.

Another area of continuing disagreement, and one we seem unable to [resolve], is intellectual-property questions: "Who owns what?" "Does information have value?" "Can software be patented?" Questions like that.

Another area is free speech and assembly: What is speech in this medium? Is the computer virus a form of speech? What is assembly in the electronic media? Is a set of people on a bulletin board an assembly or not? Those kinds of questions seem to be persistent.

Another breakdown is ... the question of financial transactions as we do more and more financial work on the networks. How do we know that money is not being stolen or duplicated? How do we know that the transactions that we're trying to perform are, in fact, being done correctly?

[These] breakdowns ... are in many ways associated with the information ontology. There's nothing wrong with the information ontology. It's just that [with] some of the questions that are coming up now, the information ontology is simply not powerful enough to guide us in the design of the networks that we need to carry out our work.

...We could expand or augment the information ontology with another one, which for want of a better term I'll call the "marketplace ontology" - because a lot of the actions that are being carried out now in networks can be viewed as a marketplace. But the marketplace has now gotten very big because the networks allow many people to participate when formerly only a few could because they had to be together in the same place.

One of the ... main features in the electronic marketplace, if we can call it that, is conversations. That's what people do with each other over the networks; they carry out conversations ..., many different kind of conversations .... The main thing is that most of the actions that get taken ... inside the network are the consequence of the conversations that people have with each other.

Another very important distinction inside of a marketplace is exchange transactions, where I do something for you in return for which you do something for me. So you might offer a service; I might give you a payment. You offer a commodity; I give you something else in trade for it. ...The essence of the marketplace is the ability to perform exchange transactions. And that makes it quite obvious that, for example, payments need to be done as part of a network marketplace.

Another important distinction inside of a network - which is true and present in any business or marketplace situation - is the question of trust. You have different levels of trust in those with whom you're interacting. We normally have become used to dealing with that by face-to-face recognition in everyday life, or you hear the voice on the telephone and you recognize who it is .... Over the network, ... the question of authentication - in order to validate that the persons you're talking with are the ones you trust - becomes very important.

Another key distinction here is naming: How do you name resources? How do you name people so that you can easily contact them?

Associated with that is the question of locating: How do you find people? How do you find machines? How do you find resources?

Another important distinction is making public offers and requests. You have something to advertise, you want to be able to do that [so] people can find you.

An important question in this type of interpretation is, "Who's in the conversation that you're engaged in?" That brings up all the questions of access control, but gives a new guide to designing the access controls.

And finally, "Who can use the records that are produced by a conversation?" Again, that brings up the question of access control. This ... way of looking at it offers a different way of designing the computers and the networks and the services in them than might be obvious from completely within the "information ontology."

[The next speakers will discuss] this, offering their points of view and their interpretations inside this new network marketplace. ...

QUARTERMAN: The Matrix as Volksnet

... I have actually one point to make, which is ease of use of the networks, and I want to talk about a few things that are already easy to use and then some things that still need a lot of work.

[Displaying an ARPAnet map, circa 1978] [The point] is, there are all these nodes connected together by a packet-switch network protocol that just works. You don't have to worry about it. Actually it doesn't work anymore; the ARPAnet doesn't exist. But it did then.

And there are different packet-switch networks. For example, ... THEnet, the Texas Higher Education network, which is an NSFnet regional network in Texas. [Its] connections are fairly complicated, ... links of different speeds, ... T1 links, ... 56K links, ... 9600-baud links. But the average user doesn't care because that's already been made transparent, as have the connections between the networks.

And the backbone network that connects [one] regional to [another] regional ... [is the] old, slow version - slow being only 1.5 megabits per second. [A new one] is the fast, 45 megabit-per-second version that's being put in now. But all these speed differences, all [the] hierarchy of NSFnet backbone [where] THEnet is a regional, and all the other regionals, have already been made transparent by the IP protocol [Internet Protocol for coordinating computer communications].

... [There] is a Canadian network going from Quebec and Ontario all the way over to British Columbia and it has a node in almost every province, called "C A splat net" - C A "maple-leaf" which is usually spelled as asterisk, thus "splat," n e t, CA*net ... That's how it was explained to me. This is a network similar to the NSFnet backbone and it connects together regionals in Canada. There's one in British Columbia. There's one in Ontario. There's one in Quebec. There's a whole bunch of them. It all works and it's all interconnected into the worldwide Internet ... .

[There are links between North America and Europe.] Generally you don't have to care about that, although ... for some purposes you have to know which link you want to use depending on what kind of business you're doing. Excuse me, sometimes you don't want to do business; you only want to do academic and research. Well, I can't say business, can I?

Generally you can get to networks in Europe. For example, ... NORGEnet [connects] all five of the Nordic countries, [including] Stockholm, Helsinki, Oslo, Copenhagen and ... Iceland. This is all part of the big IP Internet. ... You can get from Finland to Australia and it looks just like you're getting from one machine to another in your own company or educational organization.

You can draw distinctions as to exactly how this stuff is connected. But as far as [when you are] actually in the Internet proper, ... it's all transparent. Then you start getting outside of it, which is where some of the problems arise for use of networks.

For example, [Australia has] the Australian Academic and Research Network. Is all of that really part of the Internet or not? Maybe; maybe not.

A better example is Usenet. ... It is - and it isn't - on top of the Internet. And mostly you can get to it from the Internet but [it has limits].

Then we get into distinctions of services. Usenet is a news protocol that's carried transparently over the Internet and a number of different other networks. This mostly just works. [It] is another fairly good example of something where, once you know how to use the user interfaces, the underlying architecture doesn't really matter.

When you start talking about things like sending electronic mail across different networks then you start having little difficulties. There are all these other networks like BITnet, HEPnet, the Xerox Internet, SPAN, UUCP, FidoNet and the commercial networks like CompuServe and The Source ... .

When you start trying to send electronic mail between all these networks - I'm using electronic mail as the most ubiquitous example - then you have to know what network you're talking to.

Many of these networks accept the same kind of mail [address] syntax and host names as on the Internet, the domain name-system syntax, sort of like "jsq@tic.com," my address. As far as that is recognized, you can - at least from the Internet - mostly just send mail and have it work. And from many of the other networks you can do the same in the same syntax.

But that starts to break down on, for example, the UUCP network because the UUCP network traditionally uses a completely different kind of addressing. It has this source-route thing with exclamation points between the host names. [The UUCP address syntax uses "bang" - exclamation marks - to separate parts of an address name that must be in a certain sequence.] It would be nice if the user didn't have to know about this kind of difference. I'm finally getting to the point here.

Unfortunately in the real world, although the DNS mail syntax is accepted on large parts of the UUCP network, on even larger parts it is not. If you want to send mail, you have to send it in the "bang" syntax. This also means, since a lot of UUCP hosts don't even have domain names registered, that if you want to send from some other network - for example the Internet - to a UUCP host of that sort you have to use some sort of mixed syntax, the most popular one being "hostname!user@gateway."

The problem with that is that's ambiguous ... . [Does it mean] first send to the first host name and then interpret the rest as the user at the domain name? Or does it mean first send mail to the domain name ... then interpret the rest of it as a UUCP address? ... There's no way you can tell by looking at it what the precedence should be ... .

By convention on the Internet, everybody just assumes the "at" sign ("@") takes precedence and you send to the domain name, ... first. But it isn't necessarily so. Depending on what routing it happens to go through, it may or may not work. Why should the user have to know this anyway?

You can try to kluge around this by rationalizing the syntax, [by] using a percent sign ("%") as a pseudo "at" sign because you can't have more than one "at" sign. But it is a kluge. It would be nice if we had one kind of syntax everywhere, where you could just say "user@hostname.domainname."
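[To make the ambiguity concrete, here is a minimal Python sketch of the two readings of a mixed address. The function names and the sample address are illustrative only; real mailers of the period each hard-wired one choice or the other.]

    # Two readings of the mixed address "hostname!user@gateway".
    # Internet convention: "@" binds first, so mail goes to "gateway",
    # which then routes "hostname!user" as a UUCP bang path.
    # The other reading: "!" binds first, so mail goes to the UUCP
    # neighbor "hostname", which then sees "user@gateway" as a
    # domain-style address.

    def parse_at_first(addr):
        local, _, domain = addr.rpartition("@")
        return ("first hop", domain, "remainder", local)

    def parse_bang_first(addr):
        first_hop, _, rest = addr.partition("!")
        return ("first hop", first_hop, "remainder", rest)

    addr = "hostname!user@gateway"
    print(parse_at_first(addr))    # ('first hop', 'gateway', 'remainder', 'hostname!user')
    print(parse_bang_first(addr))  # ('first hop', 'hostname', 'remainder', 'user@gateway')

    # The "%" kludge removes the ambiguity by leaving only one "@":
    # "user%hostname@gateway" - the gateway later rewrites "%" itself.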

I should, I suppose, clarify that I'm using Internet domain-name syntax as one example of a syntax that you could rationalize on everywhere. Another one obviously is X.400. I'm just using DNS because it's easier to write down for me. It might not be for you.

And when you start talking about FidoNet, the addressing looks even weirder. I know there's a lot of FidoNet people here, and this really isn't meant as a personal insult, but [displaying an example of complex FidoNet addressing; laughter] I really do think this is a bad example of how to do gatewaying because the actual architecture of FidoNet - it's a tree-structured hierarchy - is built into the domain address. Once again, why should the user have to know this kind of strange difference?

This kind of problem applies not only to, "How do you spell an address once you know where you want to go?" but [also to], "How do you find it in the first place?" There are conventions on some networks for this sort of thing. For example, in Europe, if you know what country you want to go to, you can send to a postmaster for at least one network for that country. But, in general - how do you find out, given that you want to talk to a certain person maybe in a certain organization - how do you know their address?

There are half a dozen different directory services that you can use if you know where they are. ... How do you even know what the services are? How do you know where the service suppliers are? How do you know how to get there? It's a big mess.

And the analogy I'd like to use here, since analogies are popular, is that if you want your grandmother to learn how to drive a car, you don't ask her to learn how to build an internal combustion engine and how to navigate with map and compass, and then just say, here's the car, go do it. You first make a car that she doesn't have to fix very much; you give her maps, and you train her how to use it.

Until we reach that state with these networks, we're really going to have a problem introducing this technology to the general public, as should happen, or we think should happen. So that's my point. Unless you make it more usable, people will not use it. [applause]

NEUMANN: Computers at Risk: The NRC Report and the Future

... One comment, before I begin, on John's talk. I get hundreds and hundreds of pieces of [electronic] mail each week for the Risks Forum [a computer-based discussion, circulated internationally by e-mail], from all over the world, and I am continually annoyed when I try and answer a piece of mail and find that the mailer - the outgoing mailer from the guy who sent me the message - has not provided an address that I can answer to. It's absolutely infuriating. So we're dealing with amateur software here in large parts of the Internet.

Let me begin with a couple of obvious points that need to be reinforced.

Computer security is a double-edged sword. On one hand it can be used to protect personal privacy. On the other hand it can be used to undermine freedom of access, even to things about yourself; those are the confidentiality issues. There are also integrity issues: security can help defend against attacks such as Trojan horses, viruses, and so on. It can help a little. If it were better, it could help a lot more. It can also make your life miserable if you find that all of the security controls are impossible to use in a legitimate way.

Monitoring is something that can be used to detect intruders and masqueraders and evil deeds, but it can also be used to spy on legitimate users. The point is that there is a very strong double-edged sword aspect to this whole thing. And those of you who say "Aaaah, security is irrelevant. We don't need it. We trust everybody," do not live in the real world. Those of you who think security controls are an end in themselves and all we have to do is provide better security for all these systems are also not living in the real world.

So somewhere in between is the problem that I've been asked to deal with, which is the National Academy of Sciences' National Research Council report ..., Computers at Risk.

In order to tell you a little bit about that, I need to give you an idea of the process that goes into creating a report like that.

It's done by the National Academy of Sciences' National Research Council under the sponsorship of the Computer Science and Technology Board. The funding came from DARPA [Defense Advanced Research Projects Agency], although they were explicitly forbidden from doing anything with us during the study. The charter was to come up with some recommendations on what's wrong with the world today and what can be done about it in the future. And, you have to understand that, with a committee of 16 people, none of whom is likely to agree with any of the other 15, it is almost impossible to come up with any sense of consensus. Nevertheless, the Academy process says we must attempt to come up with consensus.

There were basically six recommendations. One had to do with the establishment of what might be called "principles for secure system development" and for secure systems. The second set of recommendations dealt with short-term palliatives that might improve the situation somewhat. [The] third dealt with awareness, and things like incident repositories, analysis of past history of break-ins, internal misuse and so on. The fourth had to do with a very sticky issue that will come up here, which is that of "export control." Basically, the government has declared a bunch of things - that Marty will talk about relating to DES and RSA [cryptographic-based security methods] and things like that - which make it very difficult to do any truly international linking of computers, except in, say, the banking communities.

The committee argued long and hard about what to do about the export-control problem. We came to the conclusion that it is indeed a serious problem and that something drastic needs to be done about it. But the committee was incapable, I think, of coming to real grips with any specific recommendations, partly because a lot of the arguments in favor of export controls are in fact classified - which makes it very hard [laughter] in an unclassified report to deal with those issues, even if some of the people who were writing the report were privy to the classified arguments.

A fifth recommendation addressed funding, which has been exceedingly spotty for research and development relating to better systems. Here, as in some of the other recommendations, the committee was very insistent that the problem is not just confidentiality, as you might guess if you'd looked at the Department of Defense's Trusted Computer System Evaluation Criteria, familiarly known as the "Orange Book."

In fact, integrity, availability, reliability, human safety and all sorts of other issues are fundamental. So, the funding should really address, in some sort of a holistic way, the questions of what do we do with systems that have to be reliable, safe, secure, highly available and so on, particularly life-critical systems.

The sixth recommendation was by far the most controversial. That was the establishment of some sort of a new institution. The committee felt very strongly that NSA [National Security Agency] and the National Computer Security Center [NCSC, within the NSA] have a charter to deal with DoD-related [Department of Defense] matters, but they do not have a charter to deal with ordinary users. NIST [National Institute of Standards and Technology], formerly NBS [National Bureau of Standards], had a charter to deal with the federal non-DoD sector, but does not really have a charter to deal with the user community. We felt that there was a very serious gap in terms of representation of the user community. There are groups that represent the vendor community, for example.

The committee came very strongly to the conclusion that we need to establish some sort of a new organization. How that organization should, in fact, be conceived is an extremely tricky and debatable issue. The first criterion was that it should be private - it should be independent of the government. On the other hand, it has to be very carefully coupled with NSA and NBS and NIST and the vendors and the international scene and the standards committees and everybody else if it's to do anything successfully. So, there is immediately a very difficult question in setting up the charter.

The second is that it should be fiercely independent of all of these other things, even though at the same time it's going to coordinate with them. The third is that it must be user-responsive, and I think the conclusion that we've come to since the writing of the report is that the only way that this kind of an organization could fly is if it is not a large organization that has hundreds of people doing all of the things that the report recommends - but rather, if it is some sort of a small, foundation-like organization which can ... perhaps motivate and spin off other efforts and coordinate them. For example, in the evaluation of systems that are supposed to be safe, secure, reliable or whatever, it's unlikely that such an institute or foundation or whatever it is would be able to do all of the evaluations itself. But it might well be the organization that puts some sort of imprimatur or certification or something on the resulting products.

I think the only way that this makes sense in the long run is for the organization to be some sort of an enabling organization that, in fact, does a lot of the legwork, some of the politicking, some of the dealing with Congress and things like that - but where a lot of the actual evaluations, establishment of standards, and establishment of these principles that I talked about would be done by other organizations.

That was, in essence, the nature of the report. [Polling the audience, he estimated about one quarter to one third had seen the report.]

One last comment ... in response to something that Peter said about the marketplace ontology: If the marketplace ontology is done without a sense of marketplace, or with a somewhat detached sense of marketplace, it can become a human oncology in the sense that "quantitative risk assessment" is an extremely dangerous thing.

You all remember the Pinto scenario from years back. It's resurfaced in the movie, Class Action, which has just come out. They've used the Pinto story of doing quantitative risk assessment to come to the conclusion that it didn't pay to improve the gas tank because it would take $11 a pop, and it would cost more to fix it than to settle the lawsuits that would result from all the deaths. Exactly that scenario shows up in the movie.

So, if you try to assess security on the basis of what has happened in the past you come to the conclusion that we don't need any of this stuff. We don't need reliable systems. We don't need highly available systems. We don't need secure systems - because we never had the terrific catastrophe that we would need to justify it.

If we wait for the catastrophe, we're in real trouble. Thank you. [applause]

HELLMAN: Cryptography and Privacy: The Human Factor

I have to thank the previous speakers because they provided a very nice introduction for some of the things I want to say, and we actually didn't coordinate as well as it might seem on all of this. ...

All of us here today have voted with our time and with our budgets that computers and privacy and freedom are very important issues. I think most of us would also agree that we sometimes feel like we're crying in the wilderness and this is not taken as seriously as it should be.

I'm going to address in my talk today some of the issues that I think help account for that. It's very different from the talk you might have heard me give 10 or 15 years ago, when I felt like if only we'd used cryptography the world would be safe.

Basically, freedom and privacy are highly valued in our society, and so there has been a response to the unprecedented threat that increased computerization has produced. But in my opinion we've placed way too much emphasis on technical and legislative remedies and way too little on the human factors involved. Technical and legislative remedies certainly are needed, and they have a very useful purpose. But there's significant evidence that by themselves they are impotent.

As examples from a non-technical area, I would note that since 1922 the Constitution of the Soviet Union has guaranteed freedom of speech and freedom of religion. In our own country, lest we become too arrogant about this, between 1919 and 1933 our 18th Amendment to the Constitution outlawed the sale and consumption of alcoholic beverages. Furthermore, years elapsed - because of public skepticism - between the time that research clearly showed a connection between smoking and cancer and the time that this was acted on in any real way.

In the United States we still use the English [measurement] system as opposed to the metric system in spite of the obvious technical superiority of the metric system. And, seatbelts - while required on all new cars and even mandated by law in many states, including California - are still not used by a significant fraction of the population.

What ties all these together is [that] technical and legislative remedies by themselves are not enough. I'm not saying that we don't need these solutions. Rather, the point I'm trying to make is that only when the technical and legal remedies are considered along with human factors in a total-system solution can they prove really effective.

To turn to similar problems in computer security, where human factors were neglected or where they are the real problem even though all the technical [and] legislative things we needed are in place: although the technology allowed low-cost encryption as early as 1975 and probably several years before that, the majority of automated teller machines sold in 1980 - when I did a report on this - still communicated over unencrypted telephone lines that were readily accessible to a thief.

The 56-bit key size of the Data Encryption Standard [DES], which some of you have heard a lot about from me in years past, was determined by political, not technological, considerations. A larger key would have been much more secure and cost almost no more. But partly due to export restrictions and other considerations, the key was set at 56 bits. Again, human factors.

Organizations that want to use encryption - there aren't that many - ... are stymied by political barriers, the most obvious being the American restrictions against export of cryptographic equipment, even including implementations of the widely published Data Encryption Standard. You can go out and read my papers. You can get the federal standard. All that stuff is unclassified. It's available all over the world. And yet you still cannot export the hardware incorporating DES without a license.

In a 1979 paper, Morris and Thompson showed that a significant fraction - if I remember, it was about 25 percent - of the computer accounts at Bell Labs could be opened using a dictionary of common passwords: Tom, Dick, Harry, Mary....

AUDIENCE MEMBER: 70 percent.

HELLMAN: 70 percent. Thank you. Nine years later, in 1988, Morris's son [Robert Morris, Jr., who created the "Internet worm"] brought academic computing - including in our laboratory at Stanford - to a halt by using, among other attacks, a dictionary with 200 common passwords. So again, we knew about the problem and yet we did nothing about it in many, many places.

Having described the problem, I suppose I have some responsibility to talk about a solution. It's always easier to talk about the problem, though.

First and most importantly, we will not recognize the importance of human factors until we admit the limitations of technology and the logic associated with it. This is going to be difficult because we, the engineers, scientists and mathematicians, are in love with technology and the miracles that we can work with it. We also are, on the average, less comfortable than the typical person when dealing with human factors. But if we see that logic alone is not solving a problem, we're in a Catch-22, because logic dictates that we then recognize that and do something about it.

Now I know this is easier said than done. I remember very well the time when I was first confronted with this paradox. I had based my life on logic. It had rescued me from the chaos that seemed to surround me as a child, and I retreated to the logic of science and mathematics. But when I tried applying this outside of a narrow area, particularly in my marriage and with my family, [laughter] it was an exercise in futility.

Now the funny thing is, even though it wasn't working, I kept trying to use it. But as I examined the evidence logically I could see logic was not working. Even so, for some time, I had great difficulty making the leap to what I saw as illogic - a step backward into the dark past. Fortunately, I saw I had no choice and I did find the courage to experiment with what I then saw as illogic.

Today I would draw an analogy to physics and distinguish between classical logic (which is what I used to call logic) and relativistic logic (which is what I used to call illogic). And far from becoming the illogical person I feared I would become, I actually have begun to understand an "expanded logic" that makes me more effective. I strongly recommend it.

Recognizing that technology alone cannot solve all the problems, I think, is maybe the key point for many of us.

Second is education. I don't just mean classroom education, although a certain amount of that can help, be it in a university or on the job. But I'm really talking about training, education by example. Is social responsibility an integral part of my approach to engineering? Do my colleagues know this about me? Because very few people are going to be the first on their block, so to speak, to take this seriously. Of course, no one wants to be the last on his block, either. So when enough of us start doing it then other people start to follow the lead.

We need to provide counter-examples to the belief that social responsibility is nerdy, anti-technical and "not playing as part of the team."

If our approach to social responsibility is "holier than thou," if it's anti-technical or anti-team, it won't work and we need to change our approach. Doing otherwise would be committing the same error that we're trying to rectify - neglecting what seem to be annoying human factors that get in the way of our planned action. ...

Professional societies can play a greater role by offering prominent recognition to those members who courageously take action where most would not. I'd like to see a fellow of the IEEE appointed because of social responsibility actions that he or she took. The engineers at Morton Thiokol who courageously opposed both NASA and their own top management by attempting to prevent the launch of the Challenger - unfortunately, they failed - are a good example of individuals deserving recognition.

Another factor that we need to look at is psychic numbing. This is something I came up against when I was working with the - I hate to call it the peace movement, but that's the best word I guess we have for it - about five years ago. Basically, the group I was working with realized that a lot of people, instead of being scared into action - which was the predominant belief at that time; you had to show people the threat of nuclear war in all its gory detail so that they would be scared into action - ... were scared stiff instead, and maybe they were not responding because of all the negativity and seeing no way out of it.

Fortunately, this group did change its approach and tried to emphasize (and several others did as well) ... the positive actions that individuals could take, along with the threat that we were facing.

I believe that the lack of ... use of readily available security measures - such as a computer having its own dictionary of common passwords and refusing your password if it's in the dictionary - is due to the fact that we don't have a complete solution to computer security, so many individuals prefer to ignore the problem.
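[A minimal Python sketch of the readily available measure Hellman describes: a system that keeps its own dictionary of common passwords and refuses any candidate found in it. The word list here is a tiny illustrative stand-in; a real deployment would carry a much larger dictionary.]

    # Refuse any new password that appears in a local dictionary of
    # common choices - the measure Hellman mentions above.
    COMMON_PASSWORDS = {"tom", "dick", "harry", "mary", "password", "secret"}

    def acceptable(candidate):
        """True only if the candidate is not a known common password."""
        return candidate.lower() not in COMMON_PASSWORDS

    print(acceptable("Mary"))       # False - in the dictionary, refused
    print(acceptable("tr0ub4dor"))  # True  - not in the dictionary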

We can help by not overplaying the importance of our partial solutions - because then, of course, we lose credibility - and by emphasizing the positive steps that can be taken in addition to the horrendous consequences that we face if we do not respond to this.

We also need to look at export restrictions. I think that's been touched on adequately, but I would like to contrast the fact that we cannot export DES without a license, and yet part of our response to the Iraqi invasion of Kuwait is to sell approximately $7 billion in new weapons systems to the Saudis, forgetting that the weapons we sold the Shah of Iran later became the arsenal of the Ayatollah.

Another approach is to build encryption and other security features into equipment as standard features. There's a bit of a chicken-and-egg problem: A lot of people won't use encryption until it's cheap, and it won't be cheap until a lot of people use it. If it takes up one percent of a customized [integrated circuit] chip area, you can build it in at almost no cost and I think you'll be at a great competitive advantage. A friend of mine who sold local area networks concurs with this. I strongly suggest you look at it.

The last point that Peter Denning touched on is that we need to include security in the original definition of the problem. You could end up with a vastly different electronic-funds-transfer system if you define your goal as developing a secure electronic-funds-transfer system [EFTS] than if you say, "My goal is [simply] to develop an electronic-funds-transfer system." Adding security as an afterthought just does not work.

... All the technology's there ... not to solve the problem completely, but an awful lot of what could be done is not being done. ... Many laws are in place that are not being adequately used. I think we need to look to this area of human factors if we're really going to be effective, and I hope that will happen. Thank you. [applause]

CHAUM: Electronic Money and Beyond

I'd like to begin by ... pointing out two ways that techniques can influence, say, law. One is through new models and creating public awareness. And a second way, ... by actual fielded systems. Actually, I'll try to address both approaches. But in particular, I'd like to give my own optimistic view that there is a kind of irreversible trend that I've observed - where ordinary people are becoming more aware of the special properties of information and its importance. I think this is inevitably going to lead to fundamental human rights which I've called previously "informational rights."

But let me get into my main topic: What you can do with cryptography. I would like to claim that this approach can solve any security problem you might come up against; [any] you might imagine.

Let me try to illustrate what I mean. ... If you have a number of people, and they each have some secret data, and they're going to receive some outputs, they can rely on a mutually trusted mechanism to accomplish their objective.

In other words, take the example of an election. If you have one trusted party, everyone can merely tell that party their votes and the party will calculate the winner and inform everyone of the winner. Now if you have secure [communications] channels and this mutually trusted mechanism, then of course you can do that. You can also maintain a complete personal database about everyone in a whole country, if you have such channels and a mutually trusted mechanism. In fact, this was proposed some years ago in the United States. This is a very general solution to any information security problem, including privacy problems but, of course, it's based on some perhaps-unworkable assumptions.

Now, what I'd like to [present] is the result of some theoretical work where we've proven that you can achieve all of those same kinds of things without the mutually trusted party and merely by the participants exchanging messages amongst themselves.

In general, each participant will have to exchange messages with all the others, but we've shown [in our research] that any computation that a trusted party could do could also be achieved in this way, in principle.

More particularly, for those of you who are interested in the details, there are essentially two types of approaches. One [is] where the secrecy of the data of the participants is protected based on assumptions of cryptographic intractability - that certain computations can't be made. Other kinds of solutions are based on what we call "unconditional privacy." That means that even if other participants have infinite computing power they can't find out your secrets. Those are two fundamentally different ways to protect secrets. And, actually, I've shown that we can combine these two.
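[A toy Python illustration of the unconditionally private case, using the election example above: each voter splits a 0/1 vote into random additive shares modulo M and gives one share to every participant; only share-sums are announced. The tally emerges correctly, yet individual votes stay hidden even from computationally unbounded observers (beyond what the tally itself reveals). This sketches the general idea, not Chaum's specific protocols; the modulus and vote values are arbitrary.]

    import random

    M = 2**31 - 1   # modulus; anything larger than the number of voters works

    def share(vote, n):
        """Split one vote into n additive shares modulo M."""
        parts = [random.randrange(M) for _ in range(n - 1)]
        parts.append((vote - sum(parts)) % M)
        return parts

    votes = [1, 0, 1, 1]                    # four participants' private inputs
    n = len(votes)
    all_shares = [share(v, n) for v in votes]

    # Participant j adds up the j-th share from every voter and announces it.
    announced = [sum(all_shares[i][j] for i in range(n)) % M for j in range(n)]
    print(sum(announced) % M)               # 3 - the tally, with no trusted party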

... There is a paper of mine which shows some protocols which are practical. In other words what I've said is that, in principle, we can solve any information security problem and that we've proven that.

But are those protocols practical? [That paper illustrates] some protocols which are, in fact, practical, which is different. In other words, the open problem in cryptography today is to find practical protocols for things that we want to build, because we know everything that makes sense is possible.

What I'd like to do is focus briefly on an example practical protocol - [an] electronic-money protocol. ... There is a withdrawal transaction, where a person gets money from a bank, a payment, and later [makes] a deposit. The person is identified during the withdrawal, and the shop during the deposit, but during the payment the person need not be identified to the shop. I don't know if I have time to go into these details, but what I'm not talking about is systems like you see today, where they're based on account numbers - but rather, what I am interested in is "electronic cash," numbers that are money.

The bank will create a number, which will be just like a piece of paper money today. You'll withdraw that from your account. Later you'll pay at a shop, and maybe the shop will check that the number hasn't been spent yet. If you look in the [paper] you'll see ... how to make a provably, unconditionally, privacy-protecting electronic money system, which has very high security, where all the parties are protected from each other without requiring any mutually trusted mechanism.
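[The unlinkability in the withdrawal step rests on Chaum's published blind-signature technique. Below is a textbook-scale Python sketch using RSA; the tiny parameters and the coin value are illustrative only and offer no security, and a fielded system adds redundancy checks this sketch omits. The point shown: the bank signs a coin it never sees, so the number deposited later cannot be matched to any withdrawal.]

    import math, random

    # Bank's RSA key (toy sizes; a real system uses large primes).
    p, q = 61, 53
    n = p * q                              # 3233
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))      # bank's private exponent

    coin = 1234                            # serial number chosen by the customer

    # Withdrawal: the customer blinds the coin with a random factor r.
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    blinded = (coin * pow(r, e, n)) % n

    blind_sig = pow(blinded, d, n)         # bank signs without seeing the coin

    # The customer unblinds, obtaining the bank's signature on the coin itself.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == coin          # any shop can verify at payment time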

That would be ideal for a network where everyone is connected to a central node.

There's a new result, which is also mentioned in that [paper] briefly, which says that people can pay locally without going to a central database. But then they can cheat by paying the same money at several places, and they will get caught only later if they cheat. If they don't cheat, their privacy is protected unconditionally.

This brings up a new problem. How can we let people do transactions without always having to contact every other person or a central database? How can we have distributed computations? A simple approach is that everyone is issued a little tamper-proof computer ... . This is being done in France and other places - the so-called "smart cards" are just that. It's not very hard to see that this gives you the same kind of security as we saw for the other general solution. Two could, for example, just communicate with each other without having to go to a central site.

You might think of these [smart cards] as kind of your representative. If you're a person, they know everything about you, and they represent you to the other individuals or organizations in society. All the other representatives trust your representative completely, and they communicate by an encrypted channel. It's authenticated, and privacy is preserved and everything works just as before, but probably much more efficiently. Of course, this isn't very much more attractive than one big mutually trusted center.

But what I've shown recently - and unfortunately this is not published in a very accessible way - is that just by turning things around a bit, we can do a lot better. In particular we'll put this smart-card chip - this tamper-proof chip that could be issued to each person in society - inside, say, a ... portable workstation that's owned by each person. So the person trusts their workstation. They filter all information that is transferred between this tamper-proof chip and the outside world through their workstation. By this means we can achieve ... exactly the same efficiency and properties as in the previous scheme, but now in such a way that the person doesn't have to trust this device, almost, at all. This device can signal the outside world by stopping working at random points, but that's the only way it can leak information or otherwise cheat the person.

That may seem rather futuristic to you, but actually we have developed a commercial system based on this principle. It's a product called Smart Cash, and it's a whole family of products, in fact. It works just as you see. There's a smart-card tamper-proof chip that's moderated by [a] user-controlled, so-called pocket reader. This allows very practical, privacy-protecting electronic payments of all kinds.

This technology's met with a great deal of interest in the market. Now, there's a company in the Netherlands which is very effectively putting this kind of thing into reality. ... Thank you very much. [applause]

FARBER: Will the Global Village be a Police State?

[adjusting microphone] See, it works. Technology works. ...

Let me make some comments fast on prior speakers. A point that I'd like to punch in is that a lot of the issues we're talking about are not retrofittable. ... About five years ago, Steve Walker, a student of mine, Peter Von Garn(?), and I wrote a paper called "Trusted Office of the Future." We pointed out there that, [at] the rate we were going in this field, most likely at some point in the future we'd have a major compromise of our electronic world. Not only would we have an embarrassment but we wouldn't be able to sell computers and networks to anybody until we actually secured these systems - and that would be no time to start doing research in the area.

Five years later, I think I could rewrite that paper without changing a word except the date, and it would still be true. That's worrisome. [The paper] also pointed out that a lot of the changes you had to make were fundamental changes to the architecture of computers, potentially fundamental changes to the architecture of our protocols and our technology.

... I just came back from Finland, where a student, Arto Corilia(?) - mispronounced as usual for Finnish - defended a thesis where he pointed out that security had to be embedded within the OSI [Open Systems Interconnection] protocol structure in a fundamental way - not in the way it was currently being tacked on the edges - in order to have any hope for privacy. We see that constantly.

People want to tack on security when, in fact, you have to go in and make fundamental changes. Those fundamental changes take time, and people are not, in general, willing to pay for them. You can ask Intel and others if you think that's false.

Many moons ago, I and my colleagues, including Peter [Denning], started CSnet and then we perpetrated the sin even further by instigating NSFnet. It and other activities have grown into an international network. There's traffic flowing on that between cultures, between different laws - a whole lot of stuff. There are a lot of people periodically who sort of look and say, "Should that traffic be flowing?" To a degree, the subject that I want to touch on a little bit is this police-state problem, and the availability of potentially annoying consequences of it.

...There are a lot of problems that arise, when you have these large networks, ... with security. It's not only people telling you that you can't transmit data - which is allowable in one culture but not in another culture - across these networks, but accidents happen.

There was an interesting case two weeks ago when NEARnet [New England Academic and Research Net], the federation of the academic networks in the United States, accidentally put a document in an anonymous FTP [File Transfer Protocol] file. There was a neat piece of technology which periodically runs and looks at all the reachable files in the global Internet. It saw that file. Somebody grabbed it. Unfortunately, it was a draft document, and suddenly that draft document got wide circulation, much to the embarrassment of many people.

There were several computers [produced] over the last couple of years which had microphones in them, with everything set up so nicely that it was trivial to turn on the microphone from anyplace on the Internet and listen in.

One wonders if there are people looking at all the traffic that goes over the Internet. I know this is not true on the NSF backbone. I'm not sure that it is not true in some universities, which bothers me. One observation people usually make is, "But there's so much data." If you believe that, I know of a nice bridge. [laughter]

Also let me coin a phrase, if I may: "retroactive wiretap." As they found out in IranGate, we have created a marvelous opportunity for retroactive wiretapping. I think we should be cautious about that. If the courts are willing to allow active wiretaps, they'll probably be very happy to allow you to go in and seize all the electronic mail from three years ago that sits on the archives of many computers.

What I want to do ... is just caution that life is simple now, really simple. We, as part of another National Academy study many years ago, proposed that we take the networks one more step from the T3 network ... to the gigabit [billion bits per second] technology. That research activity is underway and it's a widespread, very large research activity involving academia, industry and others.

If you look at the applications that are apt to be made of those networks, they will include terms like medical-imaging work, applications to the business community, corporate simulations, gaming, etc.

That's tricky technology. It is not at all crystal clear how you build security into these very high-speed networks. It is very difficult at a gigabit [speed] to do anything. It is very difficult, in fact, to encrypt. It is very difficult to figure out what to do.

On the other hand, it is relatively easy to look. So I caution that, unless we apply now a lot of attention to this, ... the police state of the gigabit technology is going to be much, much more exciting and much, much more dangerous.

... Thank you. [applause]

QUESTION & ANSWER PERIOD

DENNING: Here's a question for Peter Neumann. Peter, has DARPA yet responded or reacted to the NRC report [Computers at Risk]?

NEUMANN: Interesting question. There has been a response from Steve Squires who said that he was fascinated with the report and that he would pull together his contractors and see whether they had any interest in it. The brunt of the report points toward the user community as independent from the Department of Defense, so it's not clear that that response is one that constructively pushes toward the goals of the report directly, although it could be useful indirectly. I think the general response from DARPA was that this was a terrific report and it said a lot of things that needed to be said. That's about it.

AUDIENCE MEMBER: Janina Sajka from the World Institute on Disability. We have a great concern that technology is becoming increasingly inaccessible to people with disabilities. I just want to encourage everyone here that we do not want to end up in an adversarial relationship with an industry that holds terrific potential for benefit to a community that probably could be better served than it has been.

We need to discuss I/O [input/output] at the standards level, at the protocol level, as well as at the terminal-hardware level. [This technology] has not been met with assistive devices, and as it matures and develops it seems that it just simply assumes the availability of what are considered the normal range of human abilities. So, for everything that you can imagine of the machine communicating to you, for every means that there is to communicate back, everything in the I/O process needs to support alternative channels. That is a consideration which affects everything that has been spoken about here today, and it is being demanded much as curb cuts were demanded 20 years ago. The solutions, also, probably need not be any more difficult, if we do them early enough. Thank you.

DENNING: [A question] for David CHAUM: "In an electronic cash system, isn't the bank a trusted central host?"

CHAUM: Well, actually not. You might have noticed I was talking about what we call multi-party security, where each participant in a protocol can protect its own interests. In the case of electronic money, the bank is merely one participant in the protocol. It's able to protect its money, its own deposits, but it is not able to spy on any person's payments, even if it cooperates with all the shops and has infinite computing power available to it. Nor is it able to disenfranchise a person, to take money out of your pocket. These are truly multi-party secure protocols and electronic money is not an exception to that.

HELLMAN: The bank itself cannot forge a check on my account?

CHAUM: No. In the case when I said that the people who cheated would be caught later, it's actually the case that the bank will get a kind of unforgeable digital confession from a person who's cheated, if they cheat. This is the information that's recoverable, and we can prove that even with infinite computing power, the bank - in cooperation with shops and all kinds of people - is unable, except with an exponentially small chance, ... to forge such a confession.

DENNING: [A question] for Dave FARBER: "Why is it so much more difficult to encrypt at one gigabit than at 10 megabits?"

FARBER: You know, that's almost a hard question to explain. The problem that you seem to run into in practice is that it's very hard, when you have a gigabit of information coming at you in real-time, to do any amount of meaningful computation between the successive bits. We actually have a DES [Data Encryption Standard] card that'll go at 150 megabits, and it is hell. It's the fastest one that we know of to date, although there may be faster. It's just very tricky doing logic at a gigabit. You're suddenly in a world of microwaves. It's hard to explain it any deeper than that. I'd recommend you get a good soldering iron, turn up the clock, and you'll rapidly see the problem. Actually the analogy [is, to hook] your 1200-baud line into a computer and raise that to a gigabit. Just think of what the programming problem is, keeping up with it.

HELLMAN: It is hard doing anything at a gigabit, but you obviously can have several encryptors working in parallel as long as you can slow things down. You basically need a demultiplexer and a multiplexer, which is the same thing you'd need for the spying operation, which you said was fairly simple. Admittedly it's hard doing it. It's no harder to encrypt at a gigabit, I would say, than to spy at a gigabit, or am I wrong?

FARBER: I think I would much rather spy at a gigabit than actually have to do the algorithms at a gigabit. You're correct; your statement is correct. You can take four or five, six, of the 150-megabit cards and put them in parallel and get a gigabit card. That's a lot of circuits, by the way. It's a very expensive game and it doesn't integrate very well. Yet.
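[Hellman's demultiplex/multiplex suggestion, sketched in Python: stripe the fast stream across several slower encryptors, then re-interleave. The per-lane cipher below is a placeholder XOR keystream standing in for real DES hardware, and the lane count and keys are arbitrary; the striping logic is the part being illustrated.]

    def demux(data, lanes):
        """Round-robin the byte stream across `lanes` slower channels."""
        return [data[i::lanes] for i in range(lanes)]

    def mux(parts):
        """Re-interleave per-lane outputs back into one stream."""
        out = bytearray()
        for i in range(max(len(p) for p in parts)):
            for p in parts:
                if i < len(p):
                    out.append(p[i])
        return bytes(out)

    def lane_cipher(chunk, key):
        # Placeholder XOR keystream - NOT secure; stands in for a DES card.
        return bytes(b ^ ((key + i) & 0xFF) for i, b in enumerate(chunk))

    data = b"a gigabit stream striped across four slower encryptor cards"
    lanes = [lane_cipher(part, 0x5A + k) for k, part in enumerate(demux(data, 4))]
    ciphertext = mux(lanes)

    # XOR is its own inverse, so decryption runs the same lanes again.
    plain = mux([lane_cipher(part, 0x5A + k)
                 for k, part in enumerate(demux(ciphertext, 4))])
    assert plain == data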

QUARTERMAN: Is most of the traffic going to be generated at a gigabit, or is it going to be lots of slower inputs being multiplexed? If so, why can't you do the encryption when you generate traffic?

FARBER: The intent of the gigabit testbed is to explore applications where traffic will be generated host-to-host at a gigabit. Realize that we're running 1.4 gigabits now, routinely; that's what the telephone system uses for the cross-country fibers [fiber optic communications lines]. So multiplexed is easy.

QUARTERMAN: The telephone system is an exact analogy to what I'm talking about. The traffic isn't generated at a gigabit.

FARBER: No, but I'm saying that's multiplexed traffic already. The belief is that there are a substantial number of applications which can use a gigabit, let me call it point-to-point, host-to-host, in which case you have to encrypt it.

QUARTERMAN: So those applications will have a problem?

FARBER: Those applications will have problems, yeah.

HELLMAN: Also, I would recommend that the gigabit link itself be encrypted even if most people are encrypting, because not everybody will be, and there still may be address information. Encrypting both end-to-end and at the link level is, in my opinion, a good thing. You shouldn't think of it as an either/or.

FARBER: Yes.

DENNING: [Another written question]:"Do you feel that security measures built into modems, for example call-back, encryption, link-level passwords, are useful and effective?"

NEUMANN: They're part of the picture. They can be useful if they're used properly.

QUARTERMAN: Any sort of security requires personal responsibility on the part of the users.

DENNING: OK. Now we'll go to the microphone. Make sure you ask a question. We want interaction between you and the panel.

AUDIENCE MEMBER: ... [comment and question] I'm very much against agency regulation, for the following reason: judicial review of agency action is minimal. With any showing that the action is not arbitrary and capricious, or the slightest whiff of substantial evidence, the courts just won't look at what the agency has decided. It would be, in constitutional terms, the lowest standard of review. ...

My question is, "Are we in imminent danger of an agency being formed?"

NEUMANN: I don't think so. There are already several agencies kicking around whose responsibilities cover, primarily, DoD on the one hand and federal non-DoD U.S. Government on the other. I don't think you're going to see another agency competing with them at the moment.

FARBER: Addressing the networking world, as opposed to the security world, for a moment: I think the pressures are towards privatization of the network. There are many of us who believe that the correct role of government is exactly what it's been doing to date: to stimulate, and then to roll over into the commercially secure community. There are a lot of us who are trying to orchestrate that roll-over to get the maximum benefit to the nation - and hopefully more than the nation, in fact.

DENNING: [Another written question:] "Since possessors of nodes are corporate and high-academic; since the language of access is arcane; and since even personal-computer software and hardware is expensive, how can the movement of society into the computer age not disenfranchise the lower, working and under classes?" ...

NEUMANN: I'm still beating my wife but I haven't been married for 25 years.

QUARTERMAN: One solution that's been going around: Instead of having the government subsidize the networks, have the government provide subsidies to those who can't otherwise get access, as in some places where they subsidize telephone access for those who can't otherwise afford it. I'm not necessarily advocating this as the solution. I'm just saying that's something that's been suggested.

FARBER: If my memory serves me correctly, and this is not my field specifically, there was a public policy set many years ago in the telephone world, at a national level, which essentially called for the ubiquitous deployment of telephones at all levels of the economy. The regulatory structure was set up to cross-subsidize that, especially the rural areas, ... from the business community. Up until divestiture, that was still the principle of the land, if I can call it that. There's no a priori reason that a similar principle couldn't be applied to networks in this country. It's to a degree happened in France with the Minitel, and it's happening in other countries, I think. The question is, "Do we believe, (a) that that's a viable national goal, and (b) does the political climate allow us to set that type of national goal?" I don't know.

HELLMAN: The cost doesn't have to be high. The average working class home had an Atari game system in it back in the early '80s, and today it's a Nintendo. When there are that many systems out there, the software doesn't have to be very expensive; the Commodore-64 showed that.

I think the real question is, "Will we make the systems interesting and easy to use?" There's a reason that the working-class kid wants a Nintendo, and his working-class parents will want whatever the communication package is when they can access information that's valuable to them. It may involve some kind of advertising, much as Prodigy is trying to create a relatively low-cost - for today - system. I think the real question is, "Can we find the market that this can serve, and find a way to do it that doesn't necessarily meet our immediate picture of how a communication network should be, but that will work and make us money?"

NEUMANN: I think that the disenfranchisement of the disadvantaged is one of the most difficult problems confronting us now. I think these guys are looking at it from still too high-tech a viewpoint. It goes right down to every person in the country, not just the guys that might have Ataris in their homes. The knowledge and understanding and intuitive grasp of what is really going on in society, as we continue to "technologify" ourselves, is becoming just harder and harder to come by for the people who are without education, or without means, or whatever.

I think the gaps are getting much bigger than they have been in the past. This is one of the most difficult things that we've got to deal with. I read in this morning's [San Francisco] Chronicle about how education is going to take a real [financial] hit in California in the next cycle. We're getting to a point where we can't even afford to educate kids to a minimum standard, and yet we're talking about high-teching everybody in the nation. We've got a very serious problem. [applause]

YING-DA LEE [NEC America]: Basically I'd like to interject a little bit of internationalism in networking. Our company is embarking [on a project] to build a fairly sizable network both in the United States and in Japan. But there are some policy issues in the situation right now that are really preventing [a] company like us [from becoming] a fully participating member.

Two things immediately come to mind: the cryptography issues, the limitations on export. I hope something is going to be resolved on that, because that is going to be a big problem, as everybody can see. The other issue is something like the access policies of NSFnet. There are really some silly things. We can get around them, [but] I don't think it's really quite fair for people who want to adhere to the rules [to] have to suffer, whereas people can just get around them - it's been gotten around all over the place, we all know that. So I'm hoping that when policy is being set, there is going to be more of a view that this is going to be an international issue. This is going to be Internet for the whole world, basically. ...

FARBER: Let me support you on your second point. As you wander around the world there's this constant problem - and within the U.S. - with our appropriate-use policies ... While we say one thing - as usual, maybe, in this country - we blithely ignore what we say and do what we think we should do. That makes life difficult for those companies and countries who believe that what you write down is what you do, or companies who simply refuse to do anything but what the rules say. I think that's a major problem that has to be dealt with. One of the joys, if we can get this thing privatized rapidly enough, potentially, is that you can do what you're willing to pay for. Now, with my academic hat on, it would be very nice if we can also figure out how to cross-subsidize us poor academics in the process [laughter] so we can do it.

JOHN GILMORE: ... How can we address the issue of anticipating social problems that come about from satisfying our technical goals? The business of hooking up all the universities was a technical goal that worked really well, but it had the effect of sort of stunting the business networking community. And the business of [creating] privacy-enhanced mail to satisfy a technical goal of authentication has the effect of reducing privacy in some ways. That still has not been fully addressed. I just sort of see these decisions being made on technical grounds, as they were in the old days when there were 40 guys in a room, but they have larger social impacts at this point.

DENNING: Actually I think that's a good question to be left to ponder rather than to try and answer, and I thank you for that. And I'd like to thank you all for the session. [applause]


