Identification Technologies: An Analysis
In the post-9/11 United States, new identification technologies are being considered as ways to enhance security. While most proposals have been well-intentioned, some are misguided in that they overlook the potential for unintended consequences or underestimate the technical challenges and risks inherent in their implementation and use.
To assist policymakers in understanding the issues involved in adopting identification technologies, this paper explains: a) what sorts of identification schemes have been proposed, b) how they would be used, and c) how they might fail.
Taxonomy of Identification Schemes
Current proposals for National IDs fall into several categories, depending on whether people carry a physical card and, if so, what data are stored on the card versus in a database. Proposals also vary as to whether possession of an ID card would be mandatory or voluntary; some would make it voluntary for citizens but mandatory for visitors and immigrants.
Unique ID number; no card: This scheme assigns each person a unique ID number, like a Social Security number. People use their numbers to identify themselves. Data about each individual is stored in a database, which can be either centralized or distributed. There is either no card, or the card is just a scrap of paper that is irrelevant (like the Social Security card).
Biometric data on card only; no ID number; no database: The card contains an encoding of the person's fingerprint or retina scan, along with a photograph and other data. At authentication locations (e.g., airport security gates), the data on the card is compared with a fresh scan of the person's fingerprint or retina. The biometric data is kept only on the cards; no government database stores it, and no unique ID number is required. Civil-liberties lawyer Alan Dershowitz advocates a voluntary version of this scheme, and Sun Microsystems CEO Scott McNealy agrees, advocating a Java-based smart ID card.
Unique ID code on card and in database(s); biometrics and other data in database: The card carries a unique ID number, like a Social Security number, which serves as a database key. Other data about the person, including biometrics, is stored in government agency databases, most likely a centralized one.
Unique ID code and biometric data on card; ID code, biometrics and other data in database: This is the scheme being pushed by Larry Ellison, the CEO of Oracle, a database company.
Biometric data in database only; no card: Data is read from individuals' bodies at security points (or elsewhere) and matched against a database, which may be centralized or distributed. Examples include fingerprints, retina scans, and voice-prints. Face recognition is also being considered, even though it is currently much less reliable than other biometric identification methods. (A brief sketch contrasting card-based and database-based biometric matching follows this list.)
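To make the distinction between these schemes concrete, here is a minimal sketch, in Python, of the two kinds of biometric matching involved: a 1:1 comparison against a template stored on a card (the Dershowitz/McNealy-style scheme) versus a 1:N search through a central database. The function names, the similarity routine, and the threshold are invented for illustration; real biometric matchers are far more sophisticated.

```python
# Illustrative sketch only: card-based (1:1) vs. database-based (1:N) biometric matching.
# The similarity function and threshold are placeholders, not a real matching algorithm.

MATCH_THRESHOLD = 0.9  # assumed score above which two templates are declared a match

def similarity(template_a, template_b):
    """Stand-in for a real biometric matcher; returns a score in [0, 1]."""
    return 1.0 if template_a == template_b else 0.0

def verify_against_card(card_template, live_scan):
    """Card-only scheme: compare a fresh scan with the single template on the card.
    No database is consulted and no unique ID number is needed."""
    return similarity(card_template, live_scan) >= MATCH_THRESHOLD

def identify_in_database(template_db, live_scan):
    """Database scheme: search every stored template for the best match.
    template_db maps a person identifier to that person's stored template."""
    best_id, best_score = None, 0.0
    for person_id, stored in template_db.items():
        score = similarity(stored, live_scan)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None
```

The card-only check answers "is this the same person who was issued this card?", while the database search answers "who is this person?", a harder question whose error rates grow with the size of the database.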
How Government Would Use ID Schemes
Any identification scheme potentially provides at least five different security functions:
1. Authentication (proof of ID) at initial registration. When someone first applies for a card, they presumably must provide adequate documentation that they are who they say they are. If their documentation is insufficient or unconvincing, they can be rejected immediately. (Note: it isn't yet clear what information a person would have to supply when applying for a card, or how easy that information would be to fake.)
2. Checking background of applicants. Assuming that an applicant's identification documents seem valid, authorities can use the application as an opportunity to check the applicant's background and determine whether there is any problem with issuing the person an ID card. This background checking need not be done on the spot; it can take an arbitrary amount of time and therefore can be comprehensive.
3. Authentication (proof of ID) at security checkpoints. A card or a biometric reading would be used to prove one's identity in order to gain admittance to locations and services. This authentication must be done in a matter of seconds, yet must also be highly reliable: it cannot flag significant numbers of innocent persons as suspects, and it cannot allow significant numbers of suspects to pass.
4. Scanning for suspects. Identification equipment can be placed to collect data from those who pass by. Placement need not be at security checkpoints; it can be anywhere: secured zones, known trouble spots, places where people gather. Depending on the type of data being collected, passers-by may not be aware of it. The data can be analyzed to look for particular people, to spot questionable activities, or to track someone's movements; it need not be analyzed instantly, and can instead be examined over time or only after an incident has occurred. One example is automatic toll-collection equipment, which could be used to track vehicles (and presumably their owners) around a state; another is cameras augmented with face-recognition software. (A small sketch of such after-the-fact analysis follows this list.)
5. Data mining and matching. Authorities may compare information in their databases to determine whether data about a person is present in more than one database, in order to augment what is known about that person. They may also analyze data to try to detect patterns of behavior that suggest illicit activity; such patterns can involve individuals or groups of people.
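As a small illustration of function 4, here is a sketch, in Python, of how passively collected observations could be analyzed after the fact. The reader locations, identifiers, and log format are invented for illustration; the point is only that the analysis can happen long after the data is collected.

```python
# Illustrative sketch only: analyzing passively collected scanning data after the fact.
# Locations, identifiers, and timestamps below are made up.

# Each reader (toll booth, face-recognition camera, etc.) appends a record as
# people pass by; no security checkpoint is required at the collection point.
observations = [
    ("id-0042", "toll-plaza-9", "2002-03-01T08:15"),
    ("id-0042", "garage-cam-3", "2002-03-01T09:02"),
    ("id-1177", "toll-plaza-9", "2002-03-01T08:20"),
]

def movement_trail(log, person_id):
    """Reconstruct one person's movements from the accumulated log; this can be
    done days later, not only in real time."""
    return sorted((ts, place) for pid, place, ts in log if pid == person_id)

def who_was_there(log, place):
    """List everyone observed at a given location, e.g., after an incident there."""
    return sorted({pid for pid, loc, ts in log if loc == place})

print(movement_trail(observations, "id-0042"))
print(who_was_there(observations, "toll-plaza-9"))
```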
Different identification schemes provide different subsets of these security functions. For example, a scheme in which biometric data was only on cards and not in any database would allow functions 1, 2, and 3, but not functions 4 or 5. One issue is whether a system that did not provide all of these functions would be viewed as useful in preventing terrorism.
A frequent complaint of law enforcement officials, especially in the wake of the 9/11 attacks, is that they lack the ability to compare records held by different agencies (e.g., INS, IRS, FBI, police, and private organizations and companies) to collect information about potential terrorists. In their view, a national unique ID would allow better cross-agency matching, and would therefore make it easier to track individuals, pull together disparate bits of evidence, and detect patterns that arouse suspicion.
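As a rough illustration of why a shared unique ID makes such matching easier, here is a sketch in Python. The agency records, fields, and names are fabricated; real record linkage is far more involved, but the contrast between joining on a common key and matching on fuzzy fields such as names holds.

```python
# Illustrative sketch only: cross-agency record matching with and without a shared ID.
# All records and field names below are made up.

immigration_records = {
    "123-45-6789": {"name": "Abdul R. Example", "visa_status": "expired"},
}
tax_records = {
    "123-45-6789": {"name": "A. Example", "last_filing_year": 1999},
}

def match_on_shared_id(db_a, db_b):
    """With a common unique ID as the key, matching is a simple key intersection."""
    return {uid: (db_a[uid], db_b[uid]) for uid in db_a.keys() & db_b.keys()}

def match_on_name(db_a, db_b):
    """Without a shared key, agencies must match on fields like names, which
    misses variant spellings and can conflate different people who share a name."""
    def normalize(s):
        return s.replace(".", "").replace(" ", "").lower()
    by_name = {normalize(rec["name"]): uid for uid, rec in db_b.items()}
    return {uid: by_name.get(normalize(rec["name"])) for uid, rec in db_a.items()}

print(match_on_shared_id(immigration_records, tax_records))  # matched via the shared key
print(match_on_name(immigration_records, tax_records))       # {'123-45-6789': None}; the
                                                              # name variants don't line up
```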
Of course, the above list doesn't include the many ways that identification technologies can fail or be misused. That is addressed in the following section.
Possible Failures of Identification Technologies
Any serious proposal must consider how identification technologies can fail or be countered. To do otherwise is to engage only in best-case, "rose-colored" thinking, and we may end up bearing the monetary and social costs of the measures without gaining any benefit.
Car alarms, for example, are a "security" technology that was never subjected to a critical analysis of failure modes. Their massive false-alarm rates make them virtually useless, so we all suffer the noise pollution while their owners gain no added security.
The social costs of widespread adoption of identification technologies, such as National ID Cards and face-recognition, are much more serious. Therefore, we must examine them critically and thoroughly. We may find that some "security" measures are not workable in the real world.
Toward that end, here are several ways that identification technologies can fail to enhance security:
Workers at a card-issuing agency succumb to corruption, issuing fake IDs to people, either for money or because they support the people who need fake IDs. E.g., some Virginia DMV employees were caught selling fake drivers' licenses [1].
Workers at a database facility misuse data for their own purposes or deliver it to outsiders who misuse it. E.g., a former Arizona law enforcement officer used a government database to track and kill an ex-girlfriend [1].
Erroneous data is entered into the database by well-meaning but low-wage workers, or by inaccurate automatic methods such as character recognition of scanned documents. Erroneous data in the system causes identification errors later.
Hackers break into the database online, and either collect data for later misuse or alter data in the database to cause identification failures later.
Underpaid, bored guards at security checkpoints fail to examine IDs carefully, allowing unauthorized people through. E.g., journalist Steven M. Cherry's acquisition and use of a fake New York ID card [2], as well as the many documented lapses at airport security checkpoints.
Hackers "break" the scheme for coding ID data on cards (e.g., on magnetic-stripe or bar-code) and use this knowledge to alter data on cards or produce counterfeit cards.
Biometric readers (e.g., fingerprint, retina, face) at security checkpoints malfunction non-obviously, misidentifying innocent travelers as suspects or failing to identify actual suspects. Even when readers don't malfunction, their normal expected error rates may be too high for the technology to be useful; current face-recognition technology, for example, is known to have high error rates. (A back-of-the-envelope illustration of this problem follows this list.)
Biometric readers (e.g., fingerprint, retina, face) at security checkpoints malfunction massively and obviously, forcing authorities to choose between stopping all commerce or letting people pass unchecked. E.g., when theft-alarms in stores start going off for nearly everyone exiting. Outages can be normal breakdowns or they can be intentionally caused by people.
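The reliability problem noted above is partly a matter of arithmetic. Here is a back-of-the-envelope calculation in Python; the traffic volume, the 1% false-positive rate, and the 90% detection rate are illustrative assumptions, not measured figures for any deployed system.

```python
# Back-of-the-envelope illustration of the base-rate problem at a checkpoint.
# Every number here is an assumption for illustration, not a measurement.

travelers_per_day = 100_000    # assumed daily traffic through one large airport
actual_suspects = 1            # assumed number of genuine suspects in that traffic
false_positive_rate = 0.01     # assumed: 1% of innocent travelers are wrongly flagged
detection_rate = 0.90          # assumed: 90% chance a genuine suspect is flagged

innocent = travelers_per_day - actual_suspects
false_alarms = innocent * false_positive_rate      # innocent people flagged per day
true_alarms = actual_suspects * detection_rate     # genuine suspects flagged per day

print(f"False alarms per day: {false_alarms:.0f}")            # about 1,000
print(f"Genuine hits per day: {true_alarms:.1f}")             # 0.9
print(f"Chance that a flagged person is a real suspect: "
      f"{true_alarms / (true_alarms + false_alarms):.2%}")    # well under 1%
```

Even with an optimistic 1% false-positive rate, roughly a thousand innocent travelers would be flagged at a single airport each day, and nearly every alarm would be a false one.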
This is just a brief sample of the possible failure modes of identification technologies. For any proposed identification scheme to be taken seriously, it must include measures for preventing the failures and attacks listed above. Otherwise, we will find ourselves living in an oppressive surveillance society while enjoying no more security, and perhaps even less.
References
[1] Peter G. Neumann and Lauren Weinstein, "Risks of National Identity Cards", Communications of the ACM, Vol. 44(12), December 2001.
[2] Steven M. Cherry, "Security, Fear, and National ID Cards", IEEE Spectrum Online.