Even before the September 11, 2001 attacks made America far more concerned with physical security, biometric identification products were a booming technology. The idea that people can be positively identified by physical characteristics that can't be ditched, substituted or counterfeited is an attractive proposition in an era rife with identity theft and drivers who can't be bothered to carry their licenses. Unfortunately, the companies marketing this technology may have been making claims that field use hasn't justified.
Facial recognition software tries to make a computer do something that people still tend to do better: recognize relatively random patterns and associate them with categorized data. People do this all the time. You walk into your workplace and, without thinking about it, you are absolutely certain that you see Alice in the break room, Bob in his office, and Ellen talking on the phone.
Part of this instant recognition comes from seeing these people in the places where you expected them to be anyway, but you are also resolving the "Who is that?" question through pattern recognition. The physical dimensions of the person, the way he moves, the sound of a few spoken words, and the appearance of his face all help you put a name with the person you are seeing and tell him, "Good morning, __________."
Computers tend to be fairly bad at pattern recognition, but they are very, very good with math. Therefore, in order for a computer to recognize a random pattern like a face, it has to reduce that face to a mathematical construction. It does this by obtaining a digital image of the face, locating major landmarks on the face such as eyes, nose, mouth and their placement on the oval of the head, and measuring the relationships between these features.
The digital image of a face taken from a distance of 18 inches will be different from one taken at three feet, but the ratio of the distances between the major facial features will remain the same. Likewise, if a person changes his hair color, grows or shaves off a beard, wears eyeglasses, or even registers a substantial change in body weight, those ratios will not change.
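The scale-invariance described above is easy to demonstrate. Here is a minimal sketch in Python, using hypothetical landmark names and pixel coordinates invented for illustration, that reduces a face to ratios of landmark distances and shows the same ratios come out of a photo taken from half the distance:

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feature_ratios(landmarks):
    """Reduce a face to scale-invariant ratios of landmark distances.

    `landmarks` maps feature names to (x, y) pixel coordinates.
    Every distance is divided by the eye-to-eye distance, so the
    result does not depend on how far the camera was from the face.
    """
    baseline = distance(landmarks["left_eye"], landmarks["right_eye"])
    return {
        "eyes_to_nose": distance(landmarks["left_eye"], landmarks["nose"]) / baseline,
        "nose_to_mouth": distance(landmarks["nose"], landmarks["mouth"]) / baseline,
    }

# The same face photographed from half the distance: every
# coordinate (and every raw distance) doubles, but the ratios don't.
far = {"left_eye": (100, 100), "right_eye": (160, 100),
       "nose": (130, 140), "mouth": (130, 170)}
near = {k: (x * 2, y * 2) for k, (x, y) in far.items()}

print(feature_ratios(far) == feature_ratios(near))  # prints True
```

Real systems measure dozens of such relationships, not two, but the principle is the same: the raw distances change with camera range while the ratios stay put.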
Reducing the face to a mathematical statement is only half the battle, though. In order for facial recognition software to do its job, it must then determine whether the facial analysis matches a pattern already stored in its database. The setting where the software is used determines to a large extent how much work the computer will have to do to find a match or determine that there isn’t one.
In an environment where the facial recognition system is being used to verify identity for access to a facility or a computer, it usually has to compare the input data to a relatively small set of records. If the analysis is being compared with a single record, this is a fairly straightforward task. This is the case when Employee A inserts a keycard into a slot, keys in a code, and then allows a camera to scan her face. The pattern is only compared with the one corresponding to the keycard, yielding a “match” or “no match” response.
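That one-to-one comparison is about as simple as matching gets. A minimal sketch, with a made-up tolerance and made-up ratio values standing in for whatever a real product stores:

```python
def verify(template, probe, tolerance=0.05):
    """Compare a live face scan against the single template tied to a keycard.

    `template` and `probe` are dictionaries of landmark-distance ratios,
    one recorded at enrollment and one taken at the door. The tolerance
    is an invented figure for illustration; real systems tune it to
    balance false accepts against false rejects.
    """
    return all(abs(template[k] - probe[k]) <= tolerance for k in template)

stored = {"eyes_to_nose": 0.83, "nose_to_mouth": 0.50}  # from enrollment
scan = {"eyes_to_nose": 0.84, "nose_to_mouth": 0.51}    # from the door camera
print("match" if verify(stored, scan) else "no match")  # prints "match"
```

Because the probe is scored against exactly one record, the computer's answer is binary and nearly instantaneous, which is why access-control deployments are the technology's easiest case.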
In large volume environments, the job is much more difficult. Television and the movies depict facial recognition systems that can pull faces from a wide-angle camera shot, analyze them, and then compare them against a file of wanted persons, suspects or terrorists (depending on which show you’re watching) almost instantly. If the comparison database is very large, this requires some substantial computing power— probably more than most law enforcement agencies have on hand. The task is complicated by poor lighting, faces that are captured at other than the optimum straight-on angle, and movement.
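Under the hood, a one-to-many search amounts to scoring the probe face against every record and keeping the best candidate. A minimal sketch, with invented names, tolerance and ratio values:

```python
def identify(probe, database, tolerance=0.05):
    """Search a watch-list database for the record closest to the probe face.

    `probe` and each template are dictionaries of landmark-distance
    ratios; the tolerance and all the records are invented for
    illustration. Unlike one-to-one verification, the work grows with
    the size of the database, and a real deployment must repeat it for
    every face pulled from every camera frame.
    """
    best_name, best_score = None, float("inf")
    for name, template in database.items():
        # Score a pair of faces by their worst-matching ratio.
        score = max(abs(template[k] - probe[k]) for k in template)
        if score < best_score:
            best_name, best_score = name, score
    # Only report a hit if the best candidate is actually close.
    return best_name if best_score <= tolerance else None

watch_list = {
    "Suspect 1": {"eyes_to_nose": 0.91, "nose_to_mouth": 0.47},
    "Suspect 2": {"eyes_to_nose": 0.83, "nose_to_mouth": 0.50},
}
print(identify({"eyes_to_nose": 0.84, "nose_to_mouth": 0.51}, watch_list))  # prints Suspect 2
print(identify({"eyes_to_nose": 1.20, "nose_to_mouth": 0.62}, watch_list))  # prints None
```

Multiply this loop by thousands of watch-list records, dozens of faces per camera frame, and the degraded measurements that come from bad lighting and off-angle captures, and the gap between the television version and the real one becomes clear.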
Two commercial facial recognition systems were tested over a three-month period in 2003 at Boston's Logan International Airport. Logan was the originating airport for both of the airliners that were flown into the World Trade Center towers. Forty Logan employees had their faces scanned, or "enrolled," into the facial recognition databases.
The systems then monitored cameras at two security checkpoints that were routinely used by the employees to see how often the system would recognize the faces of the people who had been scanned. The systems made correct identifications 153 times, but failed to recognize the enrolled faces 96 times— a failure rate of 39%. It is unknown whether the employees went to any effort to disguise their appearances, but presumably a terrorist or wanted person who knew he was going to pass through a monitored location would be more motivated.
This experiment was conducted under conditions not as demanding as those depicted in the popular media, and not even as rigorous as many crowded venues, like a subway station or a sports arena, would demand. Further, in order for these systems to be effective in screening large numbers of individuals, the comparison database is probably going to need to be much larger than 40 records. Few managers concerned with personnel security would be ready to accept a report of “We are 61% certain that no known terrorist passed through this checkpoint today.”
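Both percentages in the Logan results follow directly from the raw counts, as a quick check shows:

```python
correct, missed = 153, 96       # identifications made vs. enrolled faces missed
total = correct + missed        # 249 chances to recognize an enrolled face

print(round(100 * missed / total))   # prints 39 (the failure rate)
print(round(100 * correct / total))  # prints 61 (the "61% certain" figure)
```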
There are other problems with facial recognition technology. Even in one-on-one comparisons, these systems have been spoofed by holding photographs of the enrolled individuals in front of the scanning camera. There are also public policy issues, where concerns over personal privacy and government monitoring of citizens going about their lawful business are hotly debated.
These systems may still have a place in law enforcement. Booking photos are of relatively high quality as compared to captures from security cameras, and nearly always have the optimum perspective for analysis. There are systems available that compare booking photos and even computer-generated composite “sketches” to an image database, and can rapidly bring up possible matches without having to take fingerprints and run them against an AFIS database. A good composite can be matched against a booking photo from a prior arrest when there are no latent fingerprints or other evidence to compare. The key here is in knowing the limitations of these systems, and not expecting them to perform as well as they did on TV.
Tim Dees is a former officer who writes and consults about applications of technology in law enforcement. He can be reached at (509) 585-6704 or by e-mail at firstname.lastname@example.org.