Written by Law and Order Staff
One of life’s rites of passage is the first time that we are given a key to something. Having a key means that we are trusted enough to have access to something that is kept secure from others, and it also suggests that we now have an increased expectation of privacy in whatever that key helps protect. Just as inevitably, we will eventually lose that key, and usually several others, and create a security problem.
Is the person that finds the key likely to know what lock it fits? Should we change the lock? How many keys and people will be involved in the change, and what is the risk-versus-rewards calculation? Keys are necessary, but they are also a nuisance. We collect so many of them that we eventually forget what locks they all fit.
What if your keys were contained in the physical characteristics of your body? You would always have your “keys” with you, they couldn’t be accidentally lost, and no one could have a duplicate. This premise is the essence of biometric security technologies. By keying the locks on buildings, cars, safes, computers, documents, and all of the other things we want to secure to our physical characteristics, we ensure that these resources cannot be accessed unless the person authorized to do so is physically present.
Biometric technologies work against the bad guy in another way. Certainly, crooks want to get access to things they aren’t supposed to, but almost as commonly, they want to conceal their own identities. Sometimes they are looking to avoid capture and avoid being held accountable for their crimes, but increasingly, they are trying to impersonate others in a variety of schemes that have come to be grouped under the heading “identity theft.”
Identity theft crimes are often a nightmare of red tape for the victim, even if the monetary loss is minimal or covered by insurance and various protection plans. Victims spend weeks or months contacting creditors, trying to convince them that they are who they say they are and that the impostor was someone else.
Traditionally, our system of identifying people for civil purposes has been based on a few foundation documents: birth certificates, Social Security cards, and the ubiquitous driver’s license. Skilled identity thieves who get control of any one of these documents often can parlay it into a full set of credentials, sufficient to file address changes, apply for credit cards and loans, and generally misrepresent themselves. If the foundation credential was, instead, a unique, nonreplicable physical identifier, such as a fingerprint, it would be much more difficult to establish an identity other than the one you were born with.
Because fingerprints are more or less the gold standard of identification, that is where manufacturers of biometric systems have put most of their efforts. There are a number of physical characteristics that are as unique as fingerprints, but they are not as well-known, and scanning them is usually more awkward and invasive.
The pattern of blood vessels on the retina, distribution of pigment in the iris, hand geometry, voice patterns, and handwritten signatures are all forms of biometric identifiers. Fingerprints, retinal and iris patterns, and hand geometry are physiological biometrics, as they cannot be altered by the possessor, while voices and handwritten signatures are behavioral biometrics. Behavioral biometrics change all by themselves over time, can be altered by the possessor, and can even be duplicated by a skilled imitator.
Another standard of biometric identification is facial recognition, which is a physiological biometric, but is not unique to the individual. Facial recognition has its place in the biometric arsenal, but it is insufficiently reliable to be used as a positive identification method.
As with illegitimate issuance of conventional identification documents, faulty initial recording of the biometric information or associating that information with the wrong person can make the system worse than useless. The administrators of these systems have to use special care to ensure that identification of all parties is fully documented and matched with the right records in the system database. The process of initial matching of biometric signatures with the records of the people to whom they belong is called enrollment.
Type I and Type II Errors
Biometric engineers make reference to Type I and Type II errors, terminology that is normally used in discussing statistics. A Type I error, also known as a False Acceptance Error, occurs when the system incorrectly matches the biometric signature provided to a record in the database, providing access to the wrong person. The rate of Type I errors is called the False Acceptance Rate or FAR. A high FAR indicates an unreliable system that does not provide adequate safeguards for the resource to be protected.
A Type II Error, or False Rejection Error, occurs when the biometric signature presented to the system does in fact match a record in the enrolled database, but the system fails to match the sample and the record successfully, denying access to the authorized user. The incidence of Type II Errors is called the FRR, or False Rejection Rate. No biometric system can promise FARs and FRRs of zero.
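To make the two rates concrete, here is a minimal sketch (in Python, with invented trial data) of how FAR and FRR would be tallied from a batch of access attempts:

```python
# Hypothetical test tallies: each trial records whether the presenter was a
# genuinely enrolled user and whether the system granted access.
def error_rates(trials):
    """trials: list of (is_enrolled_user, was_granted) booleans."""
    impostor_attempts = sum(1 for enrolled, _ in trials if not enrolled)
    genuine_attempts = sum(1 for enrolled, _ in trials if enrolled)
    false_accepts = sum(1 for enrolled, granted in trials
                        if not enrolled and granted)       # Type I
    false_rejects = sum(1 for enrolled, granted in trials
                        if enrolled and not granted)       # Type II
    far = false_accepts / impostor_attempts if impostor_attempts else 0.0
    frr = false_rejects / genuine_attempts if genuine_attempts else 0.0
    return far, frr

# Invented example: 1 impostor admitted out of 4, 1 genuine user turned away out of 4.
trials = [(True, True), (True, True), (True, True), (True, False),
          (False, False), (False, False), (False, False), (False, True)]
far, frr = error_rates(trials)
print(f"FAR = {far:.0%}, FRR = {frr:.0%}")  # FAR = 25%, FRR = 25%
```

The point of separating the two rates is that they can be traded against each other, but never both driven to zero.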
In evaluating the acceptable rates of error, the nature of the application has to be considered. A biometric system to screen jail inmates picking up their meal trays could tolerate a high FRR, as the maximum consequence would be that the inmate got his meal a few minutes later than usual. However, consider a product that didn’t quite make it to market a few years back. A gun leather manufacturer produced a sidearm holster that released the gun only after it recognized the fingerprint of an authorized enrolled user. A False Rejection Error would keep the officer’s gun in the holster when he needed it, and the maximum consequence could be extremely grave.
Humans are much better at pattern recognition than computers are. A human can see the older script logo of Coca-Cola and not only recognize it for what it is, but read the text and pick out individual letters, if needed. A computer would see a graphic and would probably not be able to resolve the words into their component letters. Much of pattern recognition for people is contextual.
If you were to see your dentist at the grocery store, you might find his face familiar, but not recognize it at first. This is because you didn’t expect to see the face in that context, and because most of the time you see it, you are reclining in the chair and viewing it upside down. If the dentist speaks to you, the voice supplies another characteristic of pattern recognition that assists in identifying the appropriate “record” in your cerebral “database.”
People consider the entire impression first, then move to analysis of particular details (voice, facial hair, clothing, gait) for refinement of the identification. A substantial amount of the information normally available for identification can be missing, and yet people still will be able to match what they see with what they remember.
Computers generally, and biometric systems in particular, have to first resolve the information presented to them into a mathematical model. A biometric system usually maps landmarks of the biometric sample presented to it, then renders the arrangement of those landmarks into a number.
With fingerprints, the system first identifies the minutiae (properly pronounced “my-NOO-she-ee,” but commonly said as “min-NOO-sha”) present in the sample (the location of crossovers, bifurcations, deltas, and so on) then uses a procedure called an algorithm to convert that pattern into a number or numbers. These numbers are compared against the results of calculations already recorded in the database during the enrollment process.
If there’s a match, then the system grants access to the person requesting it. The match is seldom exact. In most cases, the match will be expressed as a percentage indicating confidence in the match, e.g. 99%, 90%, etc. Lower confidence levels occur when the sample doesn’t exactly match an enrolled record but is close, or the sample is of poor quality because of haste, malfunction, or dirt on the scanner.
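As a toy illustration of that scoring step, the sketch below compares two invented sets of minutiae coordinates and reports the fraction that line up. Real matchers first correct for rotation, translation, and ridge orientation, so this is only a caricature of the idea:

```python
import math

def match_confidence(sample, template, tolerance=5.0):
    """Toy minutiae comparison: the fraction of template landmarks that have
    a sample landmark within `tolerance` units. Real algorithms align the
    prints and weigh minutiae type before scoring."""
    if not template:
        return 0.0
    matched = 0
    for tx, ty in template:
        if any(math.hypot(tx - sx, ty - sy) <= tolerance for sx, sy in sample):
            matched += 1
    return matched / len(template)

# Invented coordinates: a fresh scan where three landmarks line up and one
# (perhaps obscured by dirt on the platen) does not.
enrolled = [(10, 12), (40, 8), (25, 30), (60, 45)]
scan     = [(11, 13), (41, 7), (24, 31), (90, 90)]
print(f"confidence: {match_confidence(scan, enrolled):.0%}")  # confidence: 75%
```

A 75% score here would be accepted or rejected depending on the threshold the administrator has configured, which is exactly the trade-off discussed next.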
The system administrator has to determine in advance what level of confidence is acceptable for the application. Setting the required confidence level too high will significantly increase the number of Type II errors, while setting it too low will increase the Type I errors.
In Automated Fingerprint Identification Systems (AFIS), where the necessity to maintain 100% confidence in identifications is critical, the final identification is always done by a human fingerprint examiner. An AFIS may spit out a hundred or more “possibles,” ranked in order of confidence, and most of the time, the appropriate record will be one of those near the top. But users of these systems know that the computers are not 100% reliable, and that is why we always keep a human in the loop. The humans still use old-fashioned pattern recognition, which will remain the gold standard for the foreseeable future.
Because no system can be 100% accurate, it often makes sense to configure biometric systems used for security applications as verification systems, rather than identification systems. An identification system must match the biometric sample provided with the correct record in the entire database, with no “hints” as to whom the submitted sample belongs.
Type II or FRR errors are likely to be common, especially if the confidence level is set high. In a verification system, the user submits his biometric signature, and at the same time, swipes a card and/or keys in a passcode that tells the system “I am Bill Jones.” If the biometric signature matches the one on file for Bill Jones to the preset confidence level, Jones is given access. This method is reasonably convenient and drastically reduces Type II errors.
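The difference between the two modes can be sketched in a few lines of Python; the database, the `similarity` function, and the threshold here are all placeholders for whatever a real system would use:

```python
def verify(database, claimed_id, sample, similarity, threshold=0.9):
    """1:1 check: compare the sample only against the claimed identity."""
    template = database.get(claimed_id)
    return template is not None and similarity(sample, template) >= threshold

def identify(database, sample, similarity, threshold=0.9):
    """1:N search: return the best-scoring enrolled identity, if any clears
    the threshold. With no hint about who the sample belongs to, false
    rejections are far more likely than in verification."""
    best_id, best_score = None, 0.0
    for user_id, template in database.items():
        score = similarity(sample, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None
```

With a toy similarity function (say, the fraction of matching characters between two short template strings), `verify(db, "bjones", sample, sim)` only has to beat the threshold against Bill Jones’ record, while `identify` must pick him out of the whole database.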
When biometric security systems need to be identification systems (such as those used to verify the identity of inmates before release), it is almost mandatory to have a human operator in the loop. The operator, probably a corrections officer, would verify the inmate’s identity against the most likely match made by the biometric system, usually by a photo or by personal knowledge of the inmate. Systems used in the field are somewhat more problematic because of the delays typically encountered by wireless transmission of the biometric data, but officers have been negotiating this sort of problem (“No, that’s not me, that’s somebody else with my name”) for many years without the aid of biometric systems.
Fingerprint Scanning

Past practice is not the only reason that fingerprint-based biometric systems are the most commonly used. It’s usually much easier to place a finger onto a scanning plate than it is to hold your face in front of an iris or retinal scanner or submit most of the other biometric signatures that are in use.
People tend to believe that biometric systems that use fingerprints store the fingerprint itself and compare the scanned image with the one on file. Although there may be a stored graphic image of the enrolled fingerprint in the database, the comparison is actually performed against a mathematical model of the minutiae identified in that scanned image. When a sample fingerprint is presented to the system for comparison against the database, the system identifies the minutiae of the sample, calculates the mathematical representation based on that set of minutiae, and compares that against the models in the database. The number and type of minutiae can vary from scan to scan, depending on a number of factors, so the match is almost never an exact one.
There are four types of fingerprint scanners in widespread use. Optical scanners are by far the most common. In optical scanning, light is reflected from the finger surface through a prism. Wet fingers or dirt on the fingers or the scanner may degrade the quality of the scan. Thermal scanning records a thermograph of the finger image. Capacitance sensing uses a CMOS sensor to create an image of the fingerprint from the electrical pathways created by the friction ridges. Finally, ultrasound sensing uses high-frequency sound waves to scan the finger surface. This last method isn’t affected by dirt or moisture, but the equipment is bulky, and the process takes considerably longer than the others, so it is not widely used.
In the movies, the diabolical bad guy severs a finger from his victim and uses the finger to gain access to the vault/computer/secret spy headquarters. If you see this as a realistic scenario, you have some truly serious security problems. A far more likely scenario is that a fake finger, complete with fingerprint, can be made and used to spoof a fingerprint scanner. A Japanese cryptographer named Tsutomu Matsumoto published a paper documenting several methods he used for making “gummy fingers” out of gelatin. The fingerprint dummies were of sufficiently good quality to fool most scanners, both for enrollment and for identification/verification purposes, and the materials cost less than $10.
Other methods of defeating fingerprint scanners seem to depend on the type, and even the manufacturer, of the scanner in use. Fingerprint reactivation uses the latent image of a print placed on the scanner by a previous legitimate user. In some instances, the print can be “reactivated” by merely breathing on the scanner plate: the warmth and moisture in the exhaled breath are enough to reveal the image left by the friction ridges and spoof the scanner. Experimenters have also been successful in developing a latent print, lifting it onto a plastic carrier, and then applying the image to the scanner with a plastic bag of warm water on top. The warm water supplies enough “body heat” to fool a thermal scanner into accepting the fake.
Retinal and Iris Scanning
Until a few years ago, retinal scanning was far more commonplace than iris scanning. In retinal scanning, an optical device not unlike an ophthalmoscope (the little flashlight gizmo that doctors shine into your eyes) is used to capture an image of the pattern of blood vessels on the retina, at the rear wall of the eyeball. These patterns are believed to be random and unique for every individual, and they remain static throughout one’s lifetime. Obviously, if the scanner is not positioned just right, the retinal image won’t be captured correctly. Iris scanning is now more common, as the iris is visible to the naked eye and is more accessible. Close examination of the iris shows a complex pattern of colors that is also believed to be unique to each person.
Retinal scanning is used for access to very high-risk facilities, such as military installations and nuclear power plants. Some experts consider it the most reliable and foolproof biometric method available today. However, the hardware is expensive, and both the enrollment and verification processes are tedious, so it is probably not practical for most applications.
Iris scanning is far more commonplace. Users facing an iris scanner need not stand as close to the device (some work from as far away as two feet), and many scanners don’t require the removal of eyeglasses. Experiments indicate that most iris scanners can be fooled, however, with a photograph or a high-resolution video clip of the user’s face. As with most security systems, manufacturers have reacted to these spoofing methods by incorporating refinements, such as varying the light level during scanning while monitoring for reactive pupil dilation. Randomizing the intensity variations makes it that much more difficult to produce a properly reactive video clip in advance.
Because the scanning equipment does not have to come into physical contact with the person being scanned and can thus be secured behind a protective sheet of glass, iris scanning may be the biometric of choice for ATMs and other unattended premises where identity needs to be verified.
Facial Recognition

The computers that run facial recognition routines don’t “see” faces in the same way that you or I do. An image of the face is captured, and the computer identifies the minutiae of the face in much the same way as a fingerprint scanner does for friction ridge detail. The location of the eyes, nose, corners of the mouth, chin and crown of the head, and jaw points are mapped, and a mathematical representation of the proportions of these landmarks is created. This numeric string is then compared against a database of people of interest, whether they be people enrolled and entitled to be admitted to a facility, or known terrorists or wanted persons. In the latter case, possible matches are presented to a human operator for evaluation and follow up.
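A rough sketch of that landmark-to-number idea, using invented landmark names and only three distance ratios (real systems map many more points and normalize for pose and expression):

```python
import math

def face_signature(landmarks):
    """Reduce landmark positions to scale-invariant ratios of distances,
    so the same face photographed at different sizes yields the same
    signature. Landmark names here are invented for illustration."""
    d = lambda a, b: math.dist(landmarks[a], landmarks[b])
    eye_span = d("left_eye", "right_eye")
    return (d("nose", "chin") / eye_span,
            d("left_eye", "nose") / eye_span,
            d("right_eye", "nose") / eye_span)

def faces_match(sig_a, sig_b, tolerance=0.05):
    """Crude comparison: every ratio must agree within the tolerance."""
    return all(abs(a - b) <= tolerance for a, b in zip(sig_a, sig_b))

# Invented pixel coordinates; doubling them (a closer camera) leaves the
# ratios, and therefore the signature, essentially unchanged.
base = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "chin": (50, 95)}
closer = {k: (2 * x, 2 * y) for k, (x, y) in base.items()}
print(faces_match(face_signature(base), face_signature(closer)))  # True
```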
Performance of these systems in large-scale environments hasn’t been spectacular. A few years back, people attending the Super Bowl in Tampa, FL were unknowingly scanned by a facial recognition system as they entered the stadium. Local police had stored images of wanted persons into the database, and a few positive identifications were made (along with just as many incorrect identifications). The systems have shown similar reliability in experimental deployments at airports and mass transit facilities. They may be better suited for environments where the system can consider one image at a time, such as in a police station or booking area.
The Los Angeles County Sheriff’s Department has used a facial recognition package for some years to help verify the identity of persons coming into its booking facilities who are prone to giving false names in order to avoid warrants and other court processes.
In access control applications, facial recognition systems have been defeated by showing the camera a photo of the enrolled person, or by playing a video clip of their face. Some of the more sophisticated systems use a 3-D method called elastic graph matching to record information that can be perceived only if the subject is scanned from multiple angles. Although early models could be defeated by displaying a video clip of the person moving his head slightly, refinements require the person to smile or move his head in a specified way on command, so the video clip is less likely to work.
Speech Recognition

In the biometric context, speech recognition seeks to match a voice pattern with one stored in the enrolled database. (Other speech recognition applications convert the spoken word into machine-readable text, using the voice as the interface instead of a keyboard.) Speech patterns may be among the easiest biometric keys to imitate, as evidenced by the career of Rich Little and other entertainers. Most of us have had the experience of answering a telephone call and mistaking the voice of one person for that of another.
Early speech recognition systems merely recorded a passphrase that the user had to repeat into a microphone in order to gain access. In most cases, a reasonably high quality recording of the person speaking the same passphrase would be enough to get past the access protection. Current speech recognition systems require the user to record a variety of words and phrases during enrollment.
When the user desires access to the resource, the system displays a random assortment of these phrases that have to be spoken in the correct order. Still another refinement is the monitoring and recording of high and low frequencies in the user’s speech. Only very high quality playback devices can faithfully reproduce these frequencies, so the use of most common tape recorders and other portable players is not practical.
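The challenge-and-response step might look something like the sketch below; the phrase list and the `matches_enrolled_voice` function are hypothetical stand-ins for the enrollment recordings and the actual voice-comparison engine:

```python
import random

# Phrases assumed to have been recorded by the user at enrollment.
ENROLLED_PHRASES = ["blue horizon", "seven lanterns", "quiet harbor", "iron bridge"]

def issue_challenge(count=2, rng=random):
    """Pick a random subset of enrolled phrases in a random order, so a
    recording of any single fixed passphrase is useless to an attacker."""
    return rng.sample(ENROLLED_PHRASES, count)

def check_response(challenge, spoken_phrases, matches_enrolled_voice):
    """The phrases must come back in the challenged order AND each must be
    judged to be in the enrolled user's voice."""
    return (len(spoken_phrases) == len(challenge)
            and all(said == asked and matches_enrolled_voice(said)
                    for said, asked in zip(spoken_phrases, challenge)))
```

Because the challenge changes on every attempt, an attacker would need recordings of the entire enrolled vocabulary, spliced together on demand, rather than one captured passphrase.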
Speech recognition systems can reject authorized users’ attempts to gain access (Type II errors) if the user has a cold, has strained his voice, or has some other affliction that changes the sound of his speech. These systems are also perceived as less convenient because people often resist being required to talk to a computer.
Biometric security applications are probably going to become more commonplace in the future as we move closer to a paperless business model and require increased verification of identity to reduce fraud and theft. Even though most of the biometric standards described above can be defeated, given enough effort and expertise, they still represent better protection than a paper identity document or a password that can be guessed, copied, or stolen. If you build a better mousetrap, you eventually get a smarter mouse. So long as manufacturers are building these systems, malefactors will be looking for ways to defeat them, and the industry will respond with countermeasures. The key is not to become complacent or believe that you have a truly impregnable system.
The best application of biometrics is most likely in conjunction with other security measures, such as a paper or plastic identity document, a password, or a “smart card” that contains a complex passcode that is difficult to duplicate. While any one of these methods can be compromised, it would be difficult to compromise all of them at the same time. Further, the flagging of any one of the criteria in the master database would alert the human in the loop (and there should always be a human in the loop) that this person’s identity had possibly been compromised and to give any credentials presented greater scrutiny.
There is also a fear, for lack of a better word, that we are moving toward a national identity document that all people would be required to have in order to access even the most basic resources of civilization. Biometric standards would almost certainly be a part of any such credential. While a national identity card might make the job of police officers considerably easier, it invites a whole new discussion of civil liberties and the intrusion of government into one’s personal affairs. People want the government to intervene to catch the offender when their personal credentials have been compromised, but not until that happens.
Published in Law and Order, Jul 2006