

The CPSR Newsletter, Volume 18, Number 2 (Spring 2000)

PETs, E-Commerce, and Ethics by Herman T. Tavani

Recent discussions about online privacy concerns have sometimes resulted in heated debates between members of two camps whose primary interests and goals would seem to be mutually incompatible. On the one hand, certain privacy advocates and consumer groups have argued that stronger privacy legislation is needed to protect the interests and rights of online users. On the other hand, groups representing the e-commerce sector have lobbied for voluntary controls and industry self-regulation as an alternative to new privacy legislation. Until quite recently, the respective solutions proposed by one camp have been unacceptable to the other. Now, some members of each camp appear ready to embrace a compromise resolution to the online privacy debate in the form of certain privacy-enhancing technologies or PETs that have recently been introduced.

What exactly are PETs? According to Burkert (1997, p. 125), PETs can be understood as "technical and organizational concepts that aim at protecting personal identity." As organizational concepts, PETs can perhaps be thought of in terms of establishing and enforcing certain industry-standard guidelines for privacy, such as those adopted by the Platform for Privacy Preferences (P3P), whereas in their technical sense PETs can be viewed as specific tools used by individuals to protect and enhance their privacy. Whether viewed as "technical" or as "organizational" concepts, however, it would seem that a primary function of PETs is to protect the identity of persons engaged in certain kinds of online activities.

Although PETs might be thought of as a relatively new technology, privacy-enhancing tools in the form of encryption programs have been available since the 1970s. In addition to providing encryption capabilities, however, PETs can perform a host of other functions as well. For example, Cranor (1999) notes that some PETs can function as "anonymizing agents" and "pseudonym agents." Whereas certain PETs, such as anonymizing tools (e.g., the Anonymizer) and pseudonym agents (e.g., the Lucent Personalized Web Assistant), have been designed with the goal of enabling users to navigate the Net either anonymously or pseudonymously, other PETs have been developed to allow users to communicate online via correspondence that is encrypted with either digital-signature or blind-signature technologies. Much has been written about the technical details and nuances of various PETs, and I will not repeat that discussion here. Our primary concern in this study is with certain ethical considerations involving the use of PETs as tools in e-commerce activities.
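The pseudonym-agent idea can be illustrated with a short sketch: a per-site alias derived from a single user secret, so that each site sees a consistent identity that cannot be linked to the user's aliases at other sites. The key-derivation scheme used here (HMAC-SHA-256) is an illustrative assumption, not the actual mechanism of the Lucent Personalized Web Assistant:

```python
import hmac
import hashlib

def site_pseudonym(master_secret: bytes, site: str) -> str:
    """Derive a stable, site-specific alias from a user's master secret.

    Each site receives a different but consistent pseudonym, so the user
    can maintain an account-like relationship with a vendor without the
    vendor being able to correlate that identity across other sites.
    HMAC-SHA-256 is an illustrative choice of derivation function.
    """
    digest = hmac.new(master_secret, site.encode(), hashlib.sha256).hexdigest()
    return "user-" + digest[:12]

secret = b"a user-held master secret"
alias_shop = site_pseudonym(secret, "shop.example.com")
alias_news = site_pseudonym(secret, "news.example.org")

assert alias_shop != alias_news                              # unlinkable across sites
assert alias_shop == site_pseudonym(secret, "shop.example.com")  # stable per site
```

Because the derivation is deterministic, the user need store only the master secret; no table of aliases has to be kept in sync.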

It is perhaps important to note at the outset that PETs can be viewed as having contributed, innovatively, to attempts to resolve online privacy issues. Burkert (1997, p. 125) notes that, among other things, PETs "give direct control over revelation of personal information to the person concerned." Thus PETs would seem to offer users some choice with respect to whether and how much personal data they elect to release in one or more online activities. In this sense, then, PETs would seem to empower users and to provide them with some degree of autonomy or control over their personal data. But do PETs provide a solution to the online privacy debate that is fair to all users, and one that gets to the heart of ethical concerns involving privacy? In this essay, I argue that there are at least three reasons why PETs do not provide such a solution. Before discussing those reasons, however, a brief explanation of some of the background issues in the debate that led to the development and implementation of PETs may be useful.

Privacy Legislation, Data-Protection Principles, and Industry Self-Regulation Initiatives: A Brief Overview

In response to concerns about the collection and exchange of personal information in online activities, some nations have recently passed strong privacy and data-protection legislation. Many European nations, for example, have implemented strict data-protection principles designed to protect the privacy of their citizens. In 1980, most Western European nations signed on to the Organization for Economic Cooperation and Development (OECD) privacy principles, with their Fair Information Practices (FIPs); and in the early 1990s the European Community began to consider proposals for synthesizing the data-protection laws of the individual European nations. The European Community has recently instituted a series of directives, including Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995, which protects the personal data of its citizens by prohibiting the transborder flow of such data to countries that lack "adequate" protection of personal data. In the US, however, the government's response to privacy concerns involving the collection and use of personal data in the commercial sector has been quite different from that of the governments of most European nations.

Although the US Congress passed some privacy legislation, such as the Privacy Act of 1974, that Act has been criticized for containing far too many loopholes and for lacking adequate provisions for enforcement. The 1974 Privacy Act is also restricted in its application to information collected by federal agencies and thus does not apply in the private sector. Subsequent privacy legislation in the US has resulted in a "patchwork" of individual state and federal laws that are neither systematic nor coherent. Generally, the US government has resisted requests from consumer groups for stronger privacy laws, siding instead with various business interests in the private sector, whose opposition to privacy legislation is generally based on one or both of the following assumptions: (a) the belief that such laws would limit the free flow of information and thus potentially result in a form of censorship (see, for example, Singleton, 1998, as cited in Spinello, 2000); and (b) the belief that such legislation would undermine economic efficiency and thus adversely impact the overall US economy. With respect to the second belief, however, critics point out that many US businesses with subsidiaries or separate operations in countries that have strong privacy laws and regulations, such as the nations of Western Europe, have had little difficulty complying with the privacy laws of the host countries, and that profits for those American-owned companies have not suffered because of their compliance (see Tavani, 2000). In any event, because of global e-commerce pressures, especially from the European Union, there is now increased pressure on the US government to enact stricter privacy laws and data-protection schemes and on American businesses to adopt stricter privacy policies and practices. Pressure on American businesses has also come from within the US, in the form of consumer concerns about the loss of personal privacy in online business transactions.

While many American entrepreneurs believe that it would be inappropriate for the federal government to pass stronger privacy legislation, they also worry that significant numbers of consumers might refrain from engaging in online commerce because of privacy concerns. To attract those online consumers who might otherwise be inclined to avoid e-commerce activities, most online entrepreneurs in the US have supported a set of Internet-wide privacy standards established by the World Wide Web Consortium (W3C), an international industry consortium. Although the W3C was charged with establishing privacy standards on the Web, it was not set up to develop specific tools or, for that matter, to enforce the actual protection of personal data. Thus certain tools, now commonly referred to as privacy-enhancing technologies or PETs, would have to be developed to assist users in protecting their personal data.

Are PETs an Adequate Solution to Online Privacy Issues?

Because PETs offer online consumers a certain degree of empowerment and autonomy that they would not otherwise have, it would seem that PETs are a significant advance over mere voluntary controls and industry self-regulation. However, we can still ask whether PETs provide an adequate solution to issues involving online privacy, especially with respect to certain underlying ethical considerations. Before examining some of the ethical implications and aspects of PETs, it is perhaps worth noting that one might also challenge the adequacy of PETs on grounds other than ethics per se. For example, the adequacy of these tools might be challenged in terms of their technological effectiveness or on the basis of their security- and public-policy implications. With respect to technical adequacy, some have noted that "anonymizing tools" do not always ensure that users will have total anonymity while interacting with the Web. And others have questioned the effectiveness of certain encryption tools, even those PETs designed with public-key encryption technology, as foolproof technologies.

On the other hand, some government officials and law-enforcement agencies could challenge the adequacy of PETs from the standpoint of public policy and security. For example, it has been suggested that anonymity tools are potentially dangerous for national security because they would allow terrorists to perform certain online activities that would be extremely difficult to trace back to the user(s) responsible for those activities. And with respect to encryption tools, some security advocates and policy makers fear that currently available encryption technology is too strong and thus allows criminals and terrorists to communicate messages that cannot be intercepted by appropriate law-enforcement agencies. One need only refer to the recent literature surrounding the controversial Clipper chip to see arguments based on this line of reasoning. However, I will not pursue the lines of argumentation based on either the technical or security-related inadequacies of PETs. Rather, my interest in this essay is in certain ethical considerations underlying the use of PETs to protect the privacy of consumers in e-commerce activities.

Some Ethical Considerations Underlying the Use of PETs

With respect to ethical considerations, at least three aspects of the use of PETs in e-commerce activities merit closer consideration. These involve issues of education, informed consent, and social equity.

Educating Online Users About PETs

First, we can ask how exactly online users are supposed to find out about the existence of PETs, and whether it is fair to put the onus of finding out about these tools on the users themselves. At present, responsibility for learning about the existence of PETs would clearly seem to be incumbent upon online consumers, since there is no requirement for online entrepreneurs to inform users of the existence of PETs or to make those tools available to users. Furthermore, online consumers must not only discover that PETs exist, but in some cases they must also learn how to use these tools.

Another question, which is perhaps also related to the one above, has to do with who would be responsible for distributing PETs, if they are not automatically bundled with either operating-system or application software, or if they are not provided as part of the Web interfaces of online vendors. Should online entrepreneurs be responsible for providing them, or should consumers be required to locate PETs and then be further responsible for installing them on their systems? Is it reasonable and is it fair to expect users to be responsible for these tasks?

Consider the case of one of the more widely known PETs, PGP (Pretty Good Privacy). PGP enabled ordinary users to send encrypted e-mail messages, and the related PGP tool cookie.cutter enabled users to avoid having "cookie" files sent to their computers. Fortunately for online users, PGP was available free of charge. However, the onus was on users first to discover that PGP applications existed and then to track down those tools and download them onto their computers. Today, of course, the latest versions of most Web browsers allow users to reject cookies. However, the default setting on many browsers is still such that cookies will automatically be accepted unless the user explicitly rejects them.

We could reasonably ask why the default setting should not be changed such that Web sites would have to get a user's permission to send a cookie file to that user's computer system. For example, the Web site could inform, and possibly educate, the user about the existence of cookies, and then ask whether he or she is willing to accept them. Why not presume that users do not want cookie information recorded and stored on their computer systems, and set the default conditions on Web browsers accordingly? And why not further presume that users do not want their personal data used in ways they had not explicitly authorized at the time they released such data to a commercial Web site? Essentially, I agree with DeCew (1997), who claims that we should "presume in favor of privacy" and then develop ways that would "allow individuals to determine for themselves how and when that presumption should be overridden."
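The presume-in-favor-of-privacy default can be sketched as a browser-side preference that rejects cookies unless the user has explicitly consented per site. The class and field names here are hypothetical, invented for illustration; they are not any actual browser's API:

```python
from dataclasses import dataclass, field

@dataclass
class CookiePolicy:
    """Hypothetical browser cookie preference.

    Reverses the accept-by-default convention criticized above: cookies
    are refused unless the user has answered "yes" for that site.
    """
    accept_by_default: bool = False           # presume in favor of privacy
    per_site: dict = field(default_factory=dict)  # site -> user's explicit answer

    def allows(self, site: str) -> bool:
        # An explicit per-site answer overrides the default presumption.
        return self.per_site.get(site, self.accept_by_default)

policy = CookiePolicy()
assert policy.allows("tracker.example.com") is False  # no consent given, so refuse
policy.per_site["shop.example.com"] = True            # user explicitly opts in
assert policy.allows("shop.example.com") is True
```

Flipping the single `accept_by_default` flag reproduces the prevailing convention, which makes clear that the issue is a choice of default, not a technical constraint.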

Independent of questions about where the presumption should reside--i.e., in favor of the privacy of individuals or with the interests of online vendors--it would seem that the widespread application and use of PETs will require a massive educational effort. It would also seem that those nations with strong data-protection agencies, such as Canada and the countries in the European Union, will be in a much better position to educate online consumers about the existence and the purpose of PETs than would nations like the US where there is currently no such agency. Many nations that already have data-protection agencies in place also have an educational mandate to inform users about the various means available to them to protect their personal data. And since PETs are one of those means, it would seem that such agencies would have an educational mandate to inform their citizens about PETs. However, countries lacking data-protection agencies would also, in all likelihood, lack such an educational mandate.

PETs and the Principle of Informed Consent

Even if the education-related issues involving PETs are resolved, other ethical aspects need to be addressed. For example, another question having to do with fairness is whether PETs sufficiently assist online users in making informed decisions about the disclosure of their personal data in commercial transactions. Traditionally, the principle of informed consent has been the model or standard for disclosure involving personal data. But in certain online commercial activities, it would seem that the informed-consent principle is not adhered to as strictly as one might assume. For instance, users who willingly consent to provide information about themselves for use in one context often have no idea as to how that information might be used subsequently. That is, they do not always realize that the information they provide for one purpose, or in one online transaction, might also have secondary uses. Although this particular problem is not unique to PETs or to e-commerce, concerns about the secondary use of a consumer's personal data are nonetheless exacerbated by e-commerce activities.

So it would seem that there are two separate issues that need to be distinguished in the question regarding informed consent in e-commerce activities: (a) the need to inform users that their personal data is being collected (even if it is used only in one particular context for one specified purpose); and (b) the need to get permission to use an individual's personal data in secondary applications. Wasserman (1998) notes that, according to a 1998 survey conducted by the Federal Trade Commission, 92% of commercial Web sites now collect personal information. The survey further determined that only 14% of those commercial Web sites actually disclose their information-gathering practices to consumers (cited in Spinello, 2000). Are such widespread practices on the Web fair to online users?

One argument that has been advanced by some online entrepreneurs is that no one is forcing users to reveal personal data and that the disclosure of such data is done on a completely voluntary basis. However, even if it is granted that a user has willingly consented to disclose personal data to an e-commerce vendor for use in a specific business transaction, i.e., in some specific context, does it follow that the user has ipso facto granted permission to use that information for additional purposes (i.e., secondary uses)? Does the online vendor now "own" that information, and is the vendor now free to do with that information whatever he or she chooses? Consider the case of various data-mining activities in the commercial sector. Specific information given by a consumer for use in one context, say in an application for an automobile loan, is collected and stored in a "data warehouse," and the data in that warehouse is subsequently "mined" for implicit consumer patterns. As a result of data-mining activities, an individual could eventually be "discovered" to be a member of a newly created category or group, possibly one that the user could have had no idea even existed. And based solely on his or her identification in such a newly discovered category or group, that individual might be denied a consumer loan, despite the fact that this particular individual's credit history is impeccable (see Tavani, 1999). Would that be fair?
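The secondary-use worry can be made concrete with a toy sketch. Records collected for one purpose (loan applications) are regrouped by an attribute pattern the applicants never knew defined a category; all names, fields, and the "discovered" pattern below are invented for illustration:

```python
# Records collected for a specific primary purpose: loan applications.
records = [
    {"name": "Ana",  "zip": "02134", "car": "sport", "credit": "excellent"},
    {"name": "Ben",  "zip": "02134", "car": "sport", "credit": "poor"},
    {"name": "Cleo", "zip": "02134", "car": "sport", "credit": "poor"},
    {"name": "Dev",  "zip": "90210", "car": "sedan", "credit": "excellent"},
]

# Secondary use: the warehouse is "mined" and a pattern emerges (same zip
# code + sport car correlates with poor credit in this sample). A new
# category is created from that pattern.
risky_group = [r for r in records if r["zip"] == "02134" and r["car"] == "sport"]

# Ana is swept into the discovered category purely by pattern, despite her
# impeccable credit history -- the unfairness the essay describes.
assert any(r["name"] == "Ana" and r["credit"] == "excellent" for r in risky_group)
```

The point of the sketch is that no field in Ana's own record justifies her placement in the group; membership is inferred from other people's records, which is precisely what she could not have consented to at collection time.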

Another argument that might be advanced by online entrepreneurs, especially in defense of the secondary use of personal information as in the case of data-mining practices, is this: if the user has put information about him- or herself into the public domain of the Internet--i.e., disclosed the data as part of an online questionnaire for a transaction--then that information is no longer private. Of course, one response to this line of reasoning could be to question whether users who consent to disclose personal data in response to queries in online business transactions clearly understand all of the conditions under which the data they reveal could be used, including certain future uses to which that data might also be put. For example, if users are queried as to whether they are willing to have their personal data "mined," many would likely be perplexed by this question, since they might never have heard of the practice of data mining. We can also ask whether the businesses that collect personal data could possibly know in advance exactly how that data will be used--viz., to which uses that data would be put in secondary and future applications. This being the case, it would seem that online businesses could not adequately inform users about exactly how their personal data will be used. What kind of informed choice, then, could these users make in such a case? Can we--indeed, should we--assume that most consumers understand the intricacies of a technique such as data mining? Furthermore, it is dubious that such a technique could be explained adequately as part of an online transaction query without making the transaction appear so cumbersome and perhaps "unfriendly" that it might altogether discourage certain kinds of e-commerce activities.

Some online entrepreneurs have responded to charges involving privacy violations by pointing out that in most cases users are now provided with the means either to "opt-in" or "opt-out" of having their personal data collected, as well as having that data made available for secondary use. Currently, however, the default seems to be such that if no option is specified by the user when that individual discloses personal data for use in one context, that disclosed personal data would also be available for secondary use. We can certainly ask whether that presumption is fair to online consumers. We can also ask whether having the ability simply to opt-in or opt-out of disclosing personal information is a resolution that is fair to all online consumers, especially to those consumers who are less affluent.
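The difference the default makes can be sketched in a few lines. The function and its parameters are hypothetical, chosen only to show that under the prevailing opt-out presumption a user's silence counts as consent to secondary use, while under an opt-in presumption it does not:

```python
def secondary_use_allowed(choice, default="opt-out"):
    """Decide whether disclosed data may be put to secondary use.

    choice: the user's explicit answer (True/False), or None if the user
            specified nothing -- the common case the essay discusses.
    default: "opt-out" (today's presumption: silence means consent) or
             "opt-in" (silence means no consent). Hypothetical API.
    """
    if choice is not None:
        return choice            # an explicit answer always governs
    return default == "opt-out"  # otherwise the default presumption decides

assert secondary_use_allowed(None) is True                    # today's presumption
assert secondary_use_allowed(None, default="opt-in") is False  # privacy presumption
assert secondary_use_allowed(False) is False                   # explicit refusal honored
```

As with the cookie default, the code shows that which party bears the burden of acting is a policy choice encoded in a single line, not a technical necessity.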

PETs and Issues of Social Equity

Even if issues related to informed consent, as well as those related to education, can be resolved, there is at least one additional ethical aspect of PETs that needs to be addressed. We saw in the preceding section that certain PETs now enable users to opt-in or opt-out of releasing their personal data when responding to queries in online commercial transactions. DeCew (1997) has referred to this flexibility in choice as the principle of "dynamic negotiation." On this principle, users would seem to be empowered and perhaps even autonomous because they can choose whether to grant or withhold information about themselves in online transactions. As an enticement to get users to disclose personal data, however, some commercial Web sites offer discounts or rebates on their products in exchange for the right to use the personal information provided by the consumer for a range of purposes, including secondary uses such as data mining. Unfortunately, less affluent persons might be more inclined to sell personal data about themselves for financial incentives than would their wealthier counterparts. We can certainly ask whether it is fair that those users who are members of lower socioeconomic groups will, by virtue of their economic status, have less choice in (i.e., less control over) whether to sell their personal data.

We can also ask whether the fact that certain groups of individuals would have less control over their personal data is acceptable from a human rights perspective. For example, if privacy is indeed a fundamental human right--as some have maintained that it is--then we can reasonably ask whether individual privacy ought to be put up for sale as some kind of commodity. Of course, one response to this question could be: one's having a certain right involving X means that one also has a right to waive one's right involving X. So, on this view, if one chooses to waive one's right to privacy by selling one's personal data, then there would be no moral issue that needed to be resolved. However, issues involving human rights might not be resolvable in quite so simple a manner. Consider, for instance, an analogy involving the sale of human organs, such as kidneys, especially by persons in certain developing countries. Even though some persons have, in one sense, freely elected to sell one of their kidneys--i.e., sell a kidney without external coercion--we can certainly ask whether they would have freely chosen to do so if it were not for their financially impoverished situations. On the one hand, it might seem that those individuals who chose to sell their organs did so freely. But are those persons really acting as fully autonomous agents? And we can further ask whether a social system that allows certain wealthy persons to live longer than certain poor persons, merely because of the ability of wealthier persons to purchase vital human organs from poorer persons, is a just and moral social system.

Admittedly, it might appear to be a stretch--perhaps even a giant leap!--to move from issues involving the sale of one's personal data to those involving the sale of one's vital human organs. However, the principle of whether a person is able to act autonomously in matters involving human rights, in light of one's financial status, is similar in each case (assuming, of course, that privacy is a basic human right).

We have already heard about concerns involving the "technology haves" and "technology have-nots" and about the "information poor" vs. the "information rich." Will there soon be classes of "privacy rich" and "privacy poor" as well? An ideal resolution to privacy issues involving e-commerce--whether that resolution turns out to be technological, legal, or both--would need to respect the rights of all individuals, regardless of their social class or economic status. PETs, at least in their current form, clearly do not provide us with such a solution. For even if privacy can be defined as "having control over one's personal information," as some have suggested, and even if PETs provide users with an ability to control that information in a formal and theoretical sense, there are larger issues involving fairness that would seem to militate against PETs as a definitive solution to privacy issues related to e-commerce.

Closing Remarks

We have noted some of the virtues of PETs, especially the fact that these tools offer far more protection to consumers than would the mere use of voluntary controls and industry self-regulation. We have also seen that, to their credit, PETs offer online consumers some degree of choice and thus would appear to be an empowering rather than a disabling technology. It would also seem, however, that the use of PETs alone is insufficient for resolving some of the underlying ethical challenges--especially those involving consumer education, informed consent, and social equity--for personal privacy in an information age.

Finally, one might object to the position defended in this essay by arguing that privacy--as an individual right and an individual good--has to be balanced against the larger social good. On this view, it might be argued that if e-commerce activities bring about a greater social good--e.g., in terms of a stronger economy--then such a consideration must also be factored into the equation. Regan (1995) has pointed out that when we frame the privacy debate simply in terms of how to balance privacy interests as an individual good against interests involving the larger social good, support for those interests believed to benefit the latter good will generally override concerns regarding individual privacy. For example, if evidence were put forth to show that e-commerce activities would increase the number of jobs in a community or would raise that community's standard of living, then a decision to support such an activity would likely be perceived as yielding a greater overall good than a decision to forgo or restrict certain e-commerce activities for the sake of protecting the privacy of individuals. If, however, privacy is understood not merely as a value involving the good of individuals, but as one that also contributes to the broader social good--i.e., a value that is essential for democracy and freedom--then concerns for individual privacy might have a greater chance of receiving the kind of consideration they deserve in those debates involving the balancing of competing interests.

Throughout this essay there has been a presumption on my part that privacy is a positive value worth protecting. To the extent that PETs contribute to the protection of personal privacy, they also contribute to that good, both to the good of individuals and to the overall social value of the community. Insofar as PETs fall short of adequately protecting that social value, however, additional measures are still needed to protect the privacy of individuals and, by extension, the overall social good.

Works Cited

Burkert, Herbert (1997). "Privacy-Enhancing Technologies: Typology, Critique, Vision." In Technology and Privacy: The New Landscape (eds. P. E. Agre and M. Rotenberg). Cambridge, MA: MIT Press, 125-142.

Cranor, Lorrie Faith (1999). "Internet Privacy," Communications of the ACM, Vol. 42, No. 2, February, 29-31.

DeCew, Judith Wagner (1997). In Pursuit of Privacy: Law, Ethics, and the Rise of Technology. Ithaca, NY: Cornell University Press.

Regan, Priscilla M. (1995). Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill, NC: University of North Carolina Press.

Singleton, Solveig (1998). "Privacy as Censorship: A Skeptical View of Proposals to Regulate Privacy in the Private Sector," Cato Institute, Washington DC.

Spinello, Richard A. (2000). CyberEthics: Morality and Law in Cyberspace. Sudbury, MA: Jones and Bartlett Publishers.

Tavani, Herman T. (1999). "Informational Privacy, Data Mining, and the Internet," Ethics and Information Technology, Vol. 1, No. 2, 137-145.

Tavani, Herman T. (2000). "Privacy and Security." Chap. 4 in Internet Ethics (ed. D. Langford). London, UK: Macmillan Press.

Wasserman, Elizabeth (1998). "Internet Industry Fails Government Test," The Industry Standard, June.

Herman Tavani (Ph.D., Temple University) is Associate Professor and Chair of the Philosophy Department at Rivier College. He is past president of the Northern New England Philosophical Association and a visiting scholar (in applied ethics) at the Harvard School of Public Health. The author of numerous publications in computer ethics, he is currently Associate Editor of Computers and Society and Book Review Editor of Ethics and Information Technology.


© Computer Professionals for Social Responsibility
P.O. Box 717
Palo Alto, CA 94302-0717
Tel. (650) 322-3778
Fax (650) 322-3798
