Computer Professionals for Social Responsibility
Filtering FAQ
Version 1.2
6 April, 2001
Written by Harry Hochheiser, CPSR Board Member
hhochheiser@cpsr.org
A Spanish translation of an earlier version of this document can be found at
http://www.spain.cpsr.org/docs/faq-filtros.htm
Introduction
Seen by some as a powerful tool for protecting children from online
pornography and by others as "censorware," Internet content filters
have generated much controversy, debate, and confusion.
This document attempts to describe the concerns and issues raised by the various types of filtering software. It is hoped that these questions and answers will help parents, libraries, schools, and others understand the software that they may be considering (or using).
Additions, clarifications, and corrections regarding the content of this document will be most graciously accepted: please send email to hhochheiser@cpsr.org.
Questions
0) What's New?
1) Basics
1.1) What is a content filter?
1.2) Why do many people want filtering?
1.3) Can filtering programs be turned off?
1.4) I don't want to filter, but I do want to know what my child is viewing. Is that possible?
1.5) What is the scope of Internet content filtering? Do filters cover the WWW? Newsgroups? IRC? Email?
2) Stand-alone Systems
2.1) What is a stand-alone system?
2.2) Who decides what gets blocked and what doesn't?
2.3) How do stand-alone programs determine what should be blocked?
2.4) What's wrong with list-based filtering?
2.5) What's wrong with filtering based on keyword searches?
3.0) The Platform for Internet Content Selection (PICS)
3.1) What is PICS?
3.2) How does PICS-based filtering differ from stand-alone systems?
3.3) What is a ratings system?
3.4) How are ratings systems developed?
3.5) Who rates sites?
3.6) What PICS-based ratings systems can I use?
3.7) How do I use PICS?
3.8) Should I rate my Site?
3.9) What should a publisher consider before self-rating?
3.10) What concerns are raised by Third-Party Ratings?
3.11) What about sites that aren't rated? What if someone puts the wrong rating on a site?
3.12) What if I don't like the ratings systems that are available? Can individuals and organizations start new ratings systems?
3.13) What's wrong with PICS and Internet ratings in general?
4.0) Alternatives
4.1) Can anything work?
4.2) I understand that there are many problems with filters and ratings. What can I do to protect my children?
4.3) What roles can ISPs play?
4.4) What about Internet access in libraries?
5.0) Where Can I Find More Information?
6.0) Credits
6.1) Who gets the credit?
6.2) Who is CPSR?
Answers
0) What's New?
Version 1.2 is the first revision of this document since October 1998. Most of the content is unchanged, but the following changes have been made:
- Links in "Where Can I Find More Information" have been updated, removed, and revised as necessary.
- The discussion of available PICS ratings schemes has been updated.
- The discussion of PICS ratings systems has been updated to include the Internet Content Rating Association (ICRA).
1) Basics
1.1) What is a content filter?
A content filter is one or more pieces of software that work together to prevent users from viewing material found on the Internet. This process has two components.
Rating: Value judgments are used to categorize web sites based on their content. These ratings may use simple allowed/disallowed distinctions, like those found in programs such as CyberSitter or NetNanny, or they may have many values, as seen in ratings systems based on the Platform for Internet Content Selection (PICS, see question 3.0).
Filtering: With each request for information, the filtering software examines the resource that the user has requested. If the resource is on the "not allowed" list, or if it does not have the proper PICS rating, the filtering software tells the user that access has been denied and the browser does not display the contents of the web site.
The first content filters were stand-alone systems consisting of mechanisms for determining which sites should be blocked, along with software to do the filtering, all provided by a single vendor.
The other type of content filter is protocol-based. These systems consist of software that uses established standards for communicating ratings information across the Internet. Unlike stand-alone systems, protocol-based systems do not contain any information regarding which sites (or types of sites) should be blocked. Protocol-based systems simply know how to find this information on the Internet, and how to interpret it.
1.2) Why do many people want filtering?
The Internet contains a wide range of materials, some of which may be offensive or even illegal in many countries. Unlike traditional media, the Internet does not have any obvious tools for segregating material based on content. While pornographic magazines can be placed behind the counter of a store, and strip-tease joints restricted to certain parts of town, the Internet provides everything through the same medium.
Filters and ratings systems are seen as tools that would provide the cyberspace equivalent of the physical separations that are used to limit access to "adult" materials. In rating a site as objectionable, and refusing to display it on the user's computer screen, filters and ratings systems can be used to prevent children from seeing material that their parents find objectionable. In preventing access, the software acts as an automated version of the convenience-store clerk who refuses to sell adult magazines to high-school students.
Filters are also used by businesses to prevent employees from accessing Internet resources that are either not work related or otherwise deemed inappropriate.
1.3) Can filtering programs be turned off?
It is assumed that parents or other authoritative users who install filtering programs would control the passwords that allow the programs to be disabled. This means that parents can enable the filter for their children but disable it for themselves. As with all other areas of computer security, these programs are vulnerable to attack by clever computer users who may be able to guess the password or to disable the program by other means.
1.4) I don't want to filter, but I do want to know what my child is viewing. Is that possible?
Some products include a feature that will capture the list of all Internet sites that have been visited from your computer. This allows a parent to see what sites their child has viewed, albeit after the fact. Similar software allows employers to monitor the Internet use of their employees. Users of these systems will not know that their Internet use is being watched unless they are explicitly told.
Whether used in homes or workplaces, these tools raise serious privacy concerns.
1.5) What is the scope of Internet content filtering? Do filters cover the WWW? Newsgroups? IRC? Email?
While some stand-alone systems claim to filter other parts of the Internet, most content filters are focused on the World-Wide-Web. Given the varied technical nature of the protocols involved, it's likely that filtering tools will do well with some of these, and poorly with others. For example, filtering software can easily block access to newsgroups with names like "alt.sex". However, current technology cannot identify the presence of explicit photos in a file that's being transferred via FTP. PICS-based systems currently only filter web sites.
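As a rough illustration of why the protocols differ, here is a minimal sketch (in Python) of name-based newsgroup blocking; the patterns are invented for illustration, not any vendor's actual list:

    # Newsgroup names carry their own signal, so matching them is trivial.
    import fnmatch

    BLOCKED_PATTERNS = ["alt.sex*", "alt.binaries.erotica*"]  # hypothetical

    def newsgroup_blocked(group):
        return any(fnmatch.fnmatch(group, p) for p in BLOCKED_PATTERNS)

    print(newsgroup_blocked("alt.sex.stories"))   # True: the name matches
    print(newsgroup_blocked("rec.food.recipes"))  # False

An FTP transfer, by contrast, exposes only a filename and raw bytes, so a filter has no comparable handle on the content.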
2) Stand-alone Systems
2.1) What is a stand-alone system?
A stand-alone filtering system is a complete filtering solution provided by a single vendor. These filters block sites based on criteria provided by the software vendor, thus "locking in" users. If a customer does not like the vendor's selection of sites that are to be blocked, she must switch to a different software product.
2.2) Who decides what gets blocked and what doesn't?
This is the biggest practical difference between stand-alone systems and protocol-based systems. Stand-alone systems limit users to decisions made by the software vendor, although some let parents or installers add and remove sites. Protocol-based systems provide users with a choice between alternative ratings systems, which publishers and third parties can use to develop ratings for content. See question 3.2 for more information.
2.3) How do stand-alone programs determine what should be blocked?
Currently available filtering tools use some combination of two approaches to evaluate content: lists of unacceptable (or acceptable) sites, and keyword searches.
List-based blocking works by explicitly enumerating sites that should either be blocked or allowed. These lists are generally provided by filter vendors, who search for sites that meet criteria for being classified as either "objectionable" or "family-friendly".
Filtering software vendors vary greatly in the amount of information and control they make available to users. Most vendors do not allow users to see the actual list of blocked sites, as it is considered to be a kind of trade secret. However, some vendors provide detailed descriptions of the criteria used to determine which sites should be blocked. Some vendors might allow users to add sites to the list, either in their own software or by sending sites to the vendor for review.
Stand-alone filtering tools also vary in the extent to which they can be configured by users. Some software packages allow users to make selections from a list of the categories they would like blocked. For example, a parent may wish to block explicit sex but not discussions of homosexuality as a life-style. Others might allow users to choose among several levels within any given topic area. For example, instead of simply blocking all nudity, these tools might allow users to choose to allow partial nudity while blocking full nudity.
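To make the mechanics concrete, here is a minimal sketch of list-based blocking with user-selected categories. All names are invented; real products ship far larger lists, usually encrypted:

    # Hypothetical vendor-supplied block list, mapping sites to categories.
    BLOCK_LIST = {
        "www.example-adult.com": "explicit sex",
        "www.example-casino.com": "gambling",
    }

    def blocked(host, chosen_categories):
        # Block only if the site is listed AND its category was selected.
        category = BLOCK_LIST.get(host)
        return category is not None and category in chosen_categories

    # A parent who chose to block gambling but nothing else:
    print(blocked("www.example-casino.com", {"gambling"}))  # True
    print(blocked("www.example-adult.com", {"gambling"}))   # False
    print(blocked("www.example-news.com", {"gambling"}))    # False: not listed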
Keyword-based blocking uses text searches to categorize sites. If a site contains objectionable words or phrases, it will be blocked.
2.4) What's wrong with list-based filtering?
There are several problems with filtering based on lists of sites to be blocked.
First, these lists are incomplete. Due to the decentralized nature of the Internet, it's practically impossible to definitively search all Internet sites for "objectionable" material. Even with a paid staff searching for sites to block, software vendors cannot hope to identify all sites that meet their blocking criteria. Furthermore, since new web sites are constantly appearing, even regular updates from the software vendor will not block out all adult web sites. Each updated list will be obsolete as soon as it is released, as any site that appears after the update will not be on the list, and will not be blocked. The volatility of individual sites is yet another potential cause of trouble. Adult material might be added to (or removed from) a site soon after the site is added to (or removed from) a list of blocked sites.
Blocking lists also raise problems by withholding information from users, who may or may not have access to information describing the criteria used to block web sites. While some vendors provide descriptions of their blocking criteria, this information is often vague or incomplete. Several vendors have extended blocking beyond merely "objectionable" materials. In some instances, political sites and sites that criticize blocking software have been blocked.
This obscurity is compounded by practices used to protect these lists of blocked sites. Vendors often consider the lists to be proprietary intellectual property, and protect them with encryption, rendering them incomprehensible to end users. As a result, users are unable to examine which sites are blocked and why. This arbitrary behavior demeans the user's role as an active, thoughtful participant in their use of the Internet.
2.5) What's wrong with filtering based on keyword searches?
Keyword searching is a crude and inflexible approach that is likely to block sites that should not be blocked while letting "adult" sites pass through unblocked. These problems are tied to two shortcomings of this approach:
Keyword searches cannot use contextual information. While searches can identify the presence of certain words in a text, they cannot evaluate the context in which those words are used. For example, a search might find the word "breast" on a web page, but it cannot determine whether that word was used in a chicken recipe, an erotic story, or in some other manner. In one notable incident, America Online's keyword searches blocked a breast cancer support group.
Keyword searches cannot interpret graphics. It is not currently possible to "search" the contents of a picture. Therefore, a page containing sexually explicit pictures will be blocked only if the text on that page contains one or more words from the list of words to be blocked.
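Both shortcomings are easy to demonstrate. In this minimal sketch (the word list is invented), a recipe and a health resource are blocked, while a text-free page of explicit photographs would pass:

    # Naive keyword blocking: flag a page if any word is on the list.
    BLOCKED_WORDS = {"breast", "sex"}

    def keyword_blocked(page_text):
        words = (w.strip(".,;:!?") for w in page_text.lower().split())
        return any(w in BLOCKED_WORDS for w in words)

    recipe = "Rub the chicken breast with oil and roast for an hour."
    support = "Our breast cancer support group meets on Tuesdays."
    photos = ""  # explicit pictures, but no text for the filter to read
    print(keyword_blocked(recipe))   # True: blocked, though it is a recipe
    print(keyword_blocked(support))  # True: blocked, though it is a health page
    print(keyword_blocked(photos))   # False: passes, despite the pictures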
3.0) The Platform for Internet Content Selection (PICS)
3.1) What is PICS?
The Platform for Internet Content Selection (PICS) was developed by the W3 Consortium - the guiding force behind the World-Wide-Web - as a protocol for the exchange of rating information. Paul Resnick - University of Michigan professor and the creator of PICS - described PICS in a Scientific American (March 1997) article:
The Massachusetts Institute of Technology's World Wide Web Consortium has developed a set of technical standards called PICS (Platform for Internet Content Selection) so that people can electronically distribute descriptions of digital works in a simple, computer-readable form. Computers can process these labels in the background, automatically shielding users from undesirable material or directing their attention to sites of particular interest. The original impetus for PICS was to allow parents and teachers to screen materials they felt were inappropriate for children using the Net. Rather than censoring what is distributed, as the Communications Decency Act and other legislative initiatives have tried to do, PICS enables users to control what they receive.
There are two components involved in the practical use of PICS: ratings systems, and software that uses ratings systems to filter content.
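For the curious, the sketch below (in Python) generates the general shape of a PICS-1.1 label as it would be embedded in a page's HTML. The category codes follow RSACi's published scheme (n, s, v, and l for nudity, sex, violence, and language, each on a 0-4 scale), but the details should be read as illustrative rather than normative:

    # Build a PICS-1.1 label and wrap it in the META tag publishers used.
    def pics_meta_tag(service_url, ratings):
        body = " ".join(f"{code} {level}" for code, level in ratings.items())
        label = f'(PICS-1.1 "{service_url}" l r ({body}))'
        return f"<META http-equiv=\"PICS-Label\" content='{label}'>"

    # A site with no nudity, sex, or violence and mild language:
    print(pics_meta_tag("http://www.rsac.org/ratingsv01.html",
                        {"n": 0, "s": 0, "v": 0, "l": 2}))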
3.2) How does PICS-based filtering differ from stand-alone systems?
Stand-alone filtering products generally include lists of sites to be filtered and explicit filtering criteria. Purchasers of these products are tied to the filtering decisions made by the software vendor.
PICS-based software uses an alternative approach based on distributed sharing of ratings information. Instead of using blocking lists or keyword searches, programs that use PICS use standardized "ratings systems" to determine which sites should be blocked. Available from software vendors or from Internet sites, these ratings systems are used to describe the content of Internet sites (see question 3.7 for a description of how PICS works in practice). Users of PICS-based software are usually given the ability to choose which ratings system they would like to use.
As an open standard, PICS can be used for a wide range of applications. In addition to providing a means for blocking content deemed unsuitable for children, PICS might also be used for describing content in terms of its educational content, potential for violations of privacy, or any other criteria that involve rating of Internet sites.
In some senses, programs that use PICS are much more flexible than stand-alone filtering software. Users of PICS software are not tied to the judgments of the software vendor, and the descriptions of the criteria used by the ratings systems are publicly available. However, users are currently limited to choosing between a small number of ratings systems, each of which has its own biases and viewpoints. Users that disagree with the popular ratings systems may be unable to use PICS in a manner that fits their needs and viewpoints.
3.3) What is a ratings system?
A ratings system is a series of categories and gradations within those categories that can be used to classify content. The categories that are used are chosen by the developer of the ratings system, and may include topics such as "sexual content," "race," or "privacy." Each of these categories would be described along different levels of content, such as "Romance; no sex," "Explicit sexual activity," or somewhere in between. Prominent ratings systems currently in use include ICRA and SafeSurf. A rating is a description of some particular Internet content, using the terms and vocabulary of some ratings system.
3.4) How are ratings systems developed?
The PICS developers and the W3 Consortium built PICS to be an open standard, so anyone can create a ratings system. Individuals and groups can develop ratings systems by defining categories and describing ratings within those categories. Once a ratings system is developed, it must be publicized to users and publishers.
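Conceptually, a ratings system is just a shared vocabulary. Here is a hypothetical one expressed as data (all names invented for illustration):

    # Each category maps to an ordered list of levels, least to most extreme.
    EXAMPLE_SYSTEM = {
        "nudity": ["none", "revealing attire", "partial nudity",
                   "frontal nudity", "explicit"],
        "language": ["inoffensive", "mild expletives", "profanity",
                     "crude or explicit"],
    }

    # A rating is then simply a chosen level for each category:
    example_rating = {"nudity": "none", "language": "mild expletives"}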
3.5) Who rates sites?
The PICS standard describes two approaches to the rating of sites:
Self-Rating: Web site publishers can evaluate their own content and put PICS rating information directly into their web pages. Currently, this evaluation can be done through Web pages provided by developers of the major ratings services.
Third-Party Ratings: Interested third parties can use PICS ratings systems to evaluate web sites and publish their own ratings for these sites. Educational groups, religious groups, or individuals can rate sites and publish these ratings on the Internet for users to access.
3.6) What PICS-based ratings systems can I use?
From a technical perspective, you can use any PICS-based ratings system. However, your practical options are somewhat more limited. While you might configure your browser to use "Joe's Internet Ratings", it's unlikely that many sites have ratings for Joe's system, so it wouldn't be of very much use.
Your browser software may also influence your choice of ratings service. If you use Microsoft's Internet Explorer, you only have one choice (RSACi) built in to the initial distribution. To use other ratings services, IE users must download files from the 'Net and install them on their PCs.
The three most prominent PICS services are:
- RSACi: Sponsored by the Recreational Software Advisory Council (known for ratings on video games), RSACi is probably the most widely used PICS ratings system in use today. RSACi's ratings categories include violence, nudity, sex, and language, with 5 ratings within each category. At one point, RSACi claimed to have over 43,000 sites rated.
- ICRA: In December 2000, the Internet Content Rating Association's (ICRA) rating scheme was launched as a successor to RSACi. ICRA has a ratings scheme that is more detailed and nuanced than that of RSACi. However, the extent of ICRA's adoption is not yet clear.
- SafeSurf: Developed by the SafeSurf corporation, this system's categories include "Age Range," "Profanity," "Heterosexual Themes," "Homosexual Themes," "Nudity," "Violence," "Sex, Violence, and Profanity," "Intolerance," "Glorifying Drug Use," "Other Adult Themes," and "Gambling," with 9 distinctions for each category.
ICRA, RSACi, and SafeSurf all rely on self-rating of Internet sites by web publishers.
3.7) How do I use PICS?
To use PICS, users start by configuring their browsers or PICS software to use a ratings system (such as ICRA or SafeSurf). Once the ratings system is chosen, users must examine each of the categories in order to choose a preferred level of information for that category. In practical terms, this means deciding how much they are willing to allow. For example, one ratings system's choices for nudity include "none," "revealing attire," "partial nudity," "frontal nudity," and "explicit."
Once these choices have been made, the browser software uses them to filter sites. When an Internet site is requested, the browser compares the site's rating with the user's selection. If the site has ratings for the chosen system and those ratings fit within the parameters chosen by the user, it is displayed as usual. If the appropriate ratings fall outside of those parameters (perhaps the site has "frontal nudity," while the user was only willing to accept "partial nudity"), access to the site is prohibited, and the user is shown a message indicating that the site is blocked.
Since most web sites are not currently rated, most software provides users with the option of blocking out sites that do not contain PICS ratings.
In order to prevent mischievous children from changing ratings or disabling PICS altogether, most browsers can be configured to require a password before disabling PICS.
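The comparison itself is straightforward. A minimal sketch, assuming numeric levels (0 is the most restrictive) and invented category names, including the common option of blocking unrated sites:

    # The user's chosen maximums, e.g. up to "partial nudity" (level 2).
    USER_LIMITS = {"nudity": 2, "violence": 1}

    def allow(site_rating, block_unrated=True):
        if site_rating is None:            # the site carries no PICS label
            return not block_unrated
        return all(site_rating.get(cat, 0) <= limit
                   for cat, limit in USER_LIMITS.items())

    print(allow({"nudity": 3, "violence": 0}))  # False: "frontal nudity" > 2
    print(allow({"nudity": 1, "violence": 1}))  # True: within both limits
    print(allow(None))                          # False: unrated, so blocked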
3.8) Should I rate my site?
The answer to this question will depend upon who's being asked.
ICRA, SafeSurf, and other proponents of ratings would obviously like everyone to rate their sites, while civil libertarians and opponents of ratings argue against any ratings.
Publishers of family-oriented sites or those who are trying to reach audiences concerned with Internet content might consider rating. Similarly, purveyors of adult material might rate their sites in order to be "good citizens".
3.9) What should a publisher consider before self-rating?
Web site publishers must decide which (if any) ratings systems to use. Since each ratings system requires a separate evaluation process, and separate modifications to web pages, it may not be practical for web-site publishers to use all of the popularly available ratings.
In evaluating ratings systems, publishers may want to examine the categories used by each system and the distinctions used by those categories. Different systems will classify content in different ways, some of which may misrepresent the content of web sites. For example, sites discussing safe sex might not want to be placed in the same category with pornographic sites.
Web site publishers might also consider the popularity of the ratings services. There are only a few major ratings services. Publishers are free to use other ratings, but these may not be useful to the Internet users who rely upon the popular systems. This presents a dilemma for some publishers, who can either accept the ratings of the popular systems, even if those ratings misrepresent their material, or refuse to rate their sites, knowing that this might cause their sites to be unavailable to some users.
Versions of Microsoft's Internet Explorer have provided an extreme example of this problem. Although IE allows users to use any PICS ratings system, RSACi is the only system that is built in to the selection list (as recently as IE 5.5). Since Internet Explorer is the most widely-used PICS-capable browser, it seems likely that many PICS users will rely upon RSACi. For publishers interested in reaching a wide audience, this market force may determine their choice of ratings system.
Finally, philosophical concerns may cause some people to decide not to rate. Web-site publishers who are not comfortable with the general content of available ratings systems, or who object to the concept of ratings, may choose not to rate their own sites.
MSNBC's troubles with ratings provide an ironic illustration of this possibility. Displeased with the RSACi ratings that its news content would have required, MSNBC management removed all rating information from the site. MSNBC and other news organizations briefly discussed the possibility of creating a new ratings system specifically for news reporting.
While this proposal was eventually rejected, it illustrates some of the problems with content ratings. Well-funded publishers like MSNBC might be able to effectively create ratings systems that meet their needs, but smaller publishers who want to rate their sites may be forced to accept unsatisfactory ratings.
3.10) What concerns are raised by third-party ratings?
Since third-party ratings aren't validated by any technical means, they can easily be misused. Just as stand-alone filtering software can block sites for political or business reasons (even if those sites do not contain adult content), third-party raters might apply inaccurate labels to web sites in order to make sure that they are blocked by PICS-compliant software.
To make matters worse, third-party rating does not require the consent or even the notification of a web-site publisher. Since third-party ratings are distributed by third-party "label bureaus," a web-site publisher may not know if her pages have been rated, or what the ratings said.
Third-party ratings also present significant technical challenges that may discourage their development. Unlike self-ratings, third party PICS ratings do not reside on publisher's web pages. Instead, they must be distributed to users using one of two methods:
- File Transfer: Users could download ratings from the web sites provided by third-party services. For ratings services that cover any significant portion of the Internet, this could easily amount to megabytes of data, which could be cumbersome to download using slow modems. Furthermore, these lists would quickly become obsolete, and would therefore require regular updates.
- Label Bureaus: Third-party raters (or others) might establish servers that would provide ratings information. In this model, users of a rating service would retrieve a rating from the rating service, and this rating would be used to determine whether or not the site should be blocked. For a widely-used ratings system, this would require computing power and Internet bandwidth capable of handling constant streams of requests for ratings. This might be cost-prohibitive for many potential ratings services.
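A hedged sketch of the label-bureau model follows; the bureau address and query parameters are invented, since the point is only the shape of the transaction:

    # Ask a (hypothetical) label bureau for a page's rating before fetching it.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def fetch_label(page_url):
        bureau = "http://labels.example.org/ratings"   # hypothetical bureau
        query = urlencode({"u": page_url, "s": "http://example.org/system"})
        with urlopen(f"{bureau}?{query}") as response:
            return response.read().decode()            # the PICS label text

Because the bureau is consulted on every page view, its load grows with the popularity of the ratings service, which is why bandwidth costs loom so large in this model.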
3.11) What about sites that aren't rated? What if someone puts the wrong rating on a site?
PICS ratings can be truly useful for parents only if a significant percentage of the Internet's web sites are accurately rated. At one point, RSACi and NetShepherd claimed to have rated 40,000 and 500,000 sites, respectively. Cumulatively, these numbers represent a tiny fraction of the total number of web sites available. The effective total may be even smaller: as of April 2001, NetShepherd does not seem to exist as an active filtering company.
Some software, such as Microsoft's Internet Explorer, provides users with the option of blocking out any site that does not have a rating. This choice may be appropriate for some, but it severely restricts the available options. By blocking out most of the Web (including, possibly, some sites designed for younger users), this approach presents children with a severely restricted view of the world.
The accuracy of PICS ratings is obviously a concern. For example, unscrupulous purveyors of adult material might use an inaccurate rating in an attempt to slip through PICS filters. In RSACi's terms of use, the RSAC reserves the right to audit sites in order to guarantee the accuracy of ratings. SafeSurf takes this one step further: its proposed Online Cooperative Publishing Act calls for legal penalties for sites that label inaccurately, or refuse to rate. In June 1997, Sen. Patty Murray (D-Washington) proposed the Child-safe Internet Act of 1997, which called for similar penalties. While these legislative suggestions might be effective in promoting the use of ratings, they raise serious concerns in terms of First Amendment rights and possibilities for overly aggressive enforcement. Question 4.1 discusses these possibilities in more depth. There are currently no quality controls on third-party ratings.
These issues of quality and accountability would become even trickier if numerous schemes were to come into use. If there were dozens of PICS ratings schemes to choose from, publishers would not know which to choose, and users might not know which to trust.
3.12) What if I don't like the ratings systems that are available? Can individuals and organizations start new ratings systems?
Currently, there are two choices for individuals and organizations that are uncomfortable with the existing ratings systems.
The first - and currently the only viable alternative - is to avoid PICS entirely, both for self-rating and in Internet browsers.
The second approach would be to develop a new ratings vocabulary, as an alternative to ICRA, SafeSurf, or other currently available ratings systems. This involves several steps:
The first step is generation of a ratings system, including the categories to be used and the distinctions within those categories. This would require a discussion of the values that will be represented in the ratings system, and how these values should be expressed.
Once the system has been developed, sites must be rated. This can be done in one of two ways:
- The developers of the ratings system could convince web-site publishers to self-rate. This would require significant resources, as raising awareness of the new ratings system through advertising, press contacts, and other means can be quite expensive. Of course, this new ratings system would raise "chicken-and-egg" concerns. Why should publishers use this system for self-rating unless they know that it's being used? And, conversely, why should users choose a ratings system that doesn't have very many sites rated?
- The new ratings system can create third-party ratings for the Web. This would also require significant human resources to generate these ratings. If we assume that workers could generate these ratings at a rate of 1/minute, or 480 over the course of an 8-hour day, it would take 8 people working 40-hour weeks roughly an entire year to rate one million web sites. Of course, the Internet already has more than one million sites, and it will have grown significantly before those 8 people finish their year of ratings work. Furthermore, workers rating web sites at this rate would probably make more than a few mistakes in their choice of ratings. As described in question 3.10, distribution of third-party ratings also presents significant technical challenges and expenses.
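A quick check of that arithmetic:

    # Workload estimate for third-party rating at one site per minute.
    per_person_per_day = 1 * 60 * 8          # 480 ratings in an 8-hour day
    team_per_day = 8 * per_person_per_day    # 3,840 ratings for 8 workers
    days = 1_000_000 / team_per_day          # about 260 working days
    print(per_person_per_day, team_per_day, round(days), round(days / 5))
    # -> 480 3840 260 52: roughly a year of 40-hour weeks, as stated above.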
Given the significant resources that will be needed to effectively deploy a new ratings system, it seems unlikely that there will be a large number of PICS alternatives available in the near future.
3.13) What's wrong with PICS and Internet ratings in general?
In theory, there are many useful applications of rating information.
Book reviews and movie ratings are only two examples of the many ways in which we use information filters. Used in conjunction with other information sources - including advertising and word-of-mouth - these ratings provide a basis for making informed decisions regarding information.
Unfortunately, PICS does not currently provide users with the contextual information and range of choices necessary for informed decision making. When deciding which movies to see, we have access to reviews, advertisements and trailers which provide information regarding the content. These details help us choose intelligently based on our values and preferences. On the other hand, PICS-based systems do not provide any contextual detail: users are simply told that access to a site is denied because the site's rating exceeds a certain value on the rating scale.
Furthermore, the limited range of currently available PICS ratings systems does not provide users with a meaningful choice between alternatives. Parents who are not comfortable with any of the current ratings systems may not find PICS to be a viable alternative.
Continuing with our analogies to other media, consider book reviews in a world where only two or three publications reviewed books. This might work very well for people who agree with the opinions of these reviewers (and, of course, for the reviewers themselves!), but it would work very poorly for those who have differing viewpoints.
Some might argue that the "success" of a single set of movie ratings offers a model for PICS. However, ratings are generally applied only to movies made for entertainment by major producers. Documentaries and educational films are generally not rated, but similar web sites could be rated under PICS.
Movie ratings also provide a cautionary lesson that should be considered with respect to the Internet. Unrated movies, or movies with certain ratings, often have a difficult time reaching audiences, as they may not be shown in certain theaters or carried by large video chains. This has led to self-censorship, as directors trim explicit scenes in order to avoid NC-17 ratings. This may be appropriate for commercially-oriented entertainment, but it could be dangerous when applied to safe-sex information on the Internet.
Ratings systems also fail to account for the global nature of the Internet. Legal or practical pressures aimed at convincing Internet publishers to rate their own sites will have little effect, as these businesses or individuals have the option of simply moving their material to a foreign country. Furthermore, the existing ratings systems are of limited value to those in countries that do not share western values.
Concerns about unrated international material or differing cultural values could be addressed through direct censorship. For example, governments might use PICS ratings or proprietary filtering software to implement "national firewalls" which would screen out objectionable material. Alternatively, ratings might be used to "punish" inappropriate speech. If search engines chose to block sites with certain ratings (or unrated sites), or if browsers blocked certain ratings (or lack of ratings) by default, these sites might never be seen.
It is possible that a wide range of PICS ratings systems could come into use, providing families with a real opportunity to choose ratings that meet their values. The utility of PICS might also be increased by use of new technologies like "metadata" (data about data, used to describe the content of web pages and other information resources), which might be used to provide contextual information along with PICS ratings. However, these tools may not be available for general use for some time, if at all.
Some people confuse ratings with the topical organization that is used in libraries and Web sites like Yahoo. While no system of organization of information is neutral, topical schemes attempt to describe what a resource is "about". Rating rarely helps us find information resources topically and is usually too narrowly focused on a few criteria to be useful for information retrieval.
4.0) Alternatives
4.1) Can anything work?
The answer to this question will depend largely on the perspective of the asker.
If this question is taken to mean "Are there any solutions that would provide children with the ability to use the Internet without ever seeing material that is explicit or 'adult'?", the answer is probably yes. This would require a combination of three factors:
- Legislation requiring "accurate" ratings and specifying penalties for those who do not comply.
- Technical measures to prevent the transmission of unlabeled material, or any material from foreign sites (which would not be subject to US laws).
- Mandatory use of filtering software, using mandated settings.
If the question is interpreted as meaning: "Are there any solutions that provide some protection from adult or objectionable material without restricting free speech?" the answer is much less clear. Stand-alone systems clearly don't meet these criteria, as they place users at the whims of software vendors, who may block sites for arbitrary reasons. In theory, PICS might fit this role, but the lack of a meaningful choice between substantially different ratings systems leaves parents and publishers with the choice of using ratings that they may not agree with, or that fail to adequately describe their needs or materials.
Describing speech as "adult" or "appropriate for children" is inherently a tricky and value-laden process. In the U.S., many people have attempted to prevent schools and libraries from using materials ranging from Huckleberry Finn to descriptions of gay/lesbian lifestyles. The fierce debates over these efforts show that no consensus can be reached. Increased use of filtering software would likely be the beginning, rather than the end, of debates regarding what Internet materials are "appropriate" for children, and who gets to make that decision.
4.2) I understand that there are many problems with filters and ratings. What can I do to protect my children?
The first thing that parents should do is to consider the extent of the problem. While some news reports might leave parents with the impression that the Internet is nothing but pornography, this is far from the case. In fact, it's unlikely that children would randomly stumble across pornographic material. Furthermore, many adult sites have explicit warnings or require payment by credit card, which further decreases the chances of children "accidentally" finding pornography.
Secondly, parents should take an active role and interest in their children's use of the Internet. For some children this might mean restricting Internet use to closely supervised sessions. Other children might be able to work with clearly defined rules and guidelines. To discourage unsupervised use of the Internet, parents might consider measures such as placing the family computer in a common space in the home and retaining adult control over any passwords required for Internet access.
Parents should also work to educate children regarding proper use of the Internet. Just as parents teach children not to talk to strangers on the street, parents might discourage children from visiting certain web sites, divulging personal or family information, or participating in inappropriate chats.
Some parents might consider using filtering software, despite all of the potential drawbacks. Parents considering this route should closely examine the available products, in order to understand their options and the implications of any choice.
For stand-alone filtering systems, this means investigating the criteria used in developing blocking lists and/or news reports describing the software. If possible, parents might try to find stand-alone systems that allow users to view and edit the lists of blocked sites.
Parents considering the use of PICS systems should investigate the categories used by the various ratings systems, in order to find one that meets their needs. Information about PICS-based systems can be found at the home pages of the respective ratings systems.
In general, the use of a filtering product involves an implicit acceptance of the criteria used to generate the ratings involved. Before making this decision, parents should take care to ensure that the values behind the ratings are compatible with their beliefs.
Finally, parents should realize that the Internet is just a reflection of society in general. Much of the "adult" content on the Internet can be found on cable TV, at local video stores, or in movie theaters. Since other media fail to shield children from violence or sexual content, restrictions on the Internet will always be incomplete.
4.3) What roles can ISPs play?
Some have called upon ISPs to play a greater role in helping parents filter the 'Net for their children. There are two ways that ISPs might participate in these efforts:
ISP-Based Filtering: ISPs might do the filtering themselves, preventing their customers from accessing objectionable materials, even if those customers do not have their own filtering software. This requires the use of a proxy server, which would serve as a broker between the ISP's customers and remote web sites. When a customer of a filtering ISP wants to see a web site, his request goes to the proxy server operated by the ISP. The proxy server will then check to see if the site should be blocked. If the site is allowable, the proxy server retrieves the web page and returns it to the customer.
This approach is technically feasible. In fact, it's currently used by many corporations, and some ISPs that offer this service. However, proxying requires significant computational resources that may be beyond the means of smaller ISPs. Even if the ISP can afford the computers and Internet bandwidth needed, this approach is still far from ideal. In order to do the filtering, proxy servers would have to use stand-alone or PICS-based systems, so they would be subject to the limitations of these technologies (see 2.4, 2.5, and 3.13). The shortcomings of existing filtering systems may prove particularly troublesome for ISPs that advertise filtering services, as these firms could be embarrassed or worse if their filters fail to block adult material. Finally, ISPs that filter material may lose customers who are interested in unfiltered access to the Internet.
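The core of such a proxy is a single check before each request is relayed. A hedged sketch using Python's standard library (the block list and port are invented, and a production proxy would be far more involved):

    # A toy filtering proxy: refuse blocked hosts, relay everything else.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BLOCKED_HOSTS = {"www.example-adult.com"}   # hypothetical ISP block list

    class FilteringProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "")
            if host in BLOCKED_HOSTS:
                self.send_error(403, "Blocked by your ISP's content filter")
                return
            # An allowed request would be fetched from the origin server
            # and relayed back to the customer here (omitted for brevity).
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"(page would be relayed here)\n")

    # HTTPServer(("", 8080), FilteringProxy).serve_forever()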
Providing Filtering Software: Others have suggested that ISPs should be required to provide users with filtering software. While this might be welcomed by parents who are thinking about getting on to the 'Net (and by software vendors!), it could present a serious financial burden for smaller ISPs.
4.4) What about Internet access in libraries?
Internet access in public libraries has been a contentious area of discussion. Claiming concern for children using library computers to access the Internet, numerous municipalities have installed, or are considering installing, filtering software on publicly-accessible Internet terminals. However, as cyberspace lawyer, publisher, and free-speech activist Jonathan Wallace has pointed out, the use of blocking software in public libraries may be unconstitutional:
Most advocates of the use of blocking software by libraries have forgotten that the public library is a branch of government, and therefore subject to First Amendment rules which prohibit content-based censorship of speech. These rules apply to the acquisition or the removal of Internet content by a library. Secondly, government rules classifying speech by the acceptability of content (in libraries or elsewhere) are inherently suspect, may not be vague or overbroad, and must conform to existing legal parameters laid out by the Supreme Court. Third, a library may not delegate to a private organization, such as the publisher of blocking software, the discretion to determine what library users may see. Fourth, forcing patrons to ask a librarian to turn off blocking software has a chilling effect under the First Amendment.
5.0) Where Can I Find More Information?
World-Wide-Web Consortium PICS Home Page: http://www.w3.org/PICS
Internet Content Rating Association: http://www.icra.org
SafeSurf: http://www.safesurf.com
CyberPatrol: http://www.cyberpatrol.com
NetNanny: http://www.netnanny.com
Fahrenheit 451.2: Is Cyberspace Burning? - The ACLU's Report on Filtering Software: http://www.aclu.org/issues/cyber/burning.html
Peacefire: http://www.peacefire.org
The Censorware Project http://www.censorware.net
The Global Internet Liberty Campaign: http://www.gilc.org/speech/ratings
The Internet Free Expression Alliance: http://www.ifea.net
Computer Professionals for Social Responsibility (CPSR): http://www.cpsr.org
6.0) Credits
6.1) Who gets the credit?
This document grew out of discussions held by CPSR's Cyber-Rights working group and other concerned individuals during the summer of 1997, and has been maintained by the author since then. Andy Oram, Craig Johnson, Karen Coyle, Marcy Gordon, Bennett Haselton, Jean-Michel Andre, and Aki Namioka provided invaluable assistance. Please feel free to distribute or copy this document. Comments can be sent to hhochheiser@cpsr.org.
6.2) Who is CPSR?
CPSR is a public-interest alliance of computer scientists and others concerned about the impact of computer technology on society. We work to influence decisions regarding the development and use of computers because those decisions have far-reaching consequences and reflect our basic values and priorities. As technical experts, CPSR members provide the public and policymakers with realistic assessments of the power, promise, and limitations of computer technology. As concerned citizens, we direct public attention to critical choices concerning the applications of computing and how those choices affect society.