Precrime detection is a huge jurisprudential step beyond prediction.
By: Stephenie Slahor
The short story “Minority Report,” authored by Philip K. Dick, and brought to the screen by Steven Spielberg in 2002, looked at a future time in which precrime detecting rode roughshod over personal liberty in favor of public safety. Dick was more of a futurist than a sci-fi writer. His stories and novels were the basis for movies like Blade Runner (“Do Androids Dream of Electric Sheep?”), Total Recall (“We Can Remember It for You Wholesale”) and Screamers (“Second Variety”).
While societies built on the rule of law and justice dismissed the Minority Report short story and movie as science-fiction fluff and a bit of escapist entertainment, the notion of precrime detecting has a scarier side: it appears to be under consideration as a sooner, rather than later, reality.
It stands to reason that with the plethora of hardware, software, statistical data and, yes, gadgets and gizmos marketed to police and security, data streams might, at times, give police the ability to predict crime before it actually happens. The accuracy of such predictions, however, may be limited or even unreliable.
However, the opportunity to detect crime before it can happen may be seen as outweighing the occasional mistaken analysis of data, at least in the eyes of those who would push aside the deep roots of individual rights and established jurisprudence in civilized societies, and the Constitutional rights of individuals in those societies in which the rule of law protects citizens.
Precrime detecting is a methodology that goes beyond the forecasting that has long been a part of policing and security. That forecasting has been based on such factors as trends in criminal activity, geographical mapping of crime, and sociological and economic trends in neighborhoods. Precrime detecting, by contrast, moves from reaction to past events toward prognostication, using far more behavioral data and analysis of that data, combined with patterns, statistics and other information, to devise predictive models.
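To make the distinction concrete, the traditional forecasting described above can be sketched in a few lines of code. The data, weights and scoring rule below are entirely hypothetical; real predictive-policing systems are far more elaborate, but the idea of turning historical incidents into a per-area risk score is the same:

```python
# A minimal, hypothetical sketch of place-and-time crime forecasting:
# score each map grid cell by its history of incidents, with a bonus
# for incidents that clustered around the same hour of day.

# Hypothetical incident log: (grid_cell, hour_of_day)
incidents = [("A1", 22), ("A1", 23), ("B2", 14), ("A1", 21), ("C3", 2)]

def risk_score(cell, hour, history, radius=2):
    """Weighted count of past incidents in `cell`, boosted when they
    fall within `radius` hours of the hour being forecast."""
    score = 0.0
    for c, h in history:
        if c != cell:
            continue
        score += 1.0                     # one point per past incident
        if abs(h - hour) <= radius:      # temporal-proximity bonus
            score += 0.5
    return score

# Around 22:00, cell A1 scores highest: three past incidents, all close in time.
scores = {cell: risk_score(cell, 22, incidents) for cell in ("A1", "B2", "C3")}
```

Precrime detecting, as described above, would layer behavioral data about individuals on top of this kind of place-based scoring.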
Spicing the mix of criminal intelligence with behavioral data means focusing on human behavior itself, examining people’s routines, habits, past crimes and other information to predict both the “when” and the “if” of a person’s committing a crime, whether or not that person is initially under suspicion.
Also entering this process of precrime analysis is the rampant, massive growth of social media. The data packed into social media communications may hold even more leverage for precrime detecting, not only about the individuals engaged in the communications, but also about the likelihood that criminal activity will take place in a particular spot, neighborhood or area.
Sociologists and psychologists say that some people can easily be creatures of habit. Thus, a person’s preference for being in a particular place, engaged in a particular behavior, may be easier to predict than one might first think. “Random” acts occur, of course, but those may be more the exception than the rule.
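The “creature of habit” point is, computationally, little more than frequency counting. A toy sketch, with an invented observation log, shows how routine data alone yields a prediction:

```python
from collections import Counter

# Hypothetical observation log for one person: (day_of_week, location).
observations = [
    ("Mon", "gym"), ("Mon", "gym"), ("Mon", "cafe"),
    ("Tue", "office"), ("Tue", "office"),
]

def most_likely_location(day, log):
    """Predict where the person will be on `day`: simply the place
    most frequently observed on that day of the week in the past."""
    counts = Counter(loc for d, loc in log if d == day)
    if not counts:
        return None          # no habit data for that day
    return counts.most_common(1)[0][0]
```

The unsettling implication of the paragraphs above is not the sophistication of such a method but how little data it takes to make it work on a habitual person.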
Take, for example, the regular meetings and evaluations that are a part of a person’s parole. Such record keeping may help decisions about the amount of supervision and treatment the parolee needs. But such data could be mined more deeply to analyze why the person acts in a certain way, when that person is likely to revert to criminal behavior, and even where that criminal behavior is likely to be staged.
Offender profiles and offender behavior would both be analyzed in non-traditional ways to lead to precrime detecting, combining crime data with such fields of knowledge as sociology, psychology and geography.
Of course, it is no quantum leap to apply such analyses to any individual, not just an offender or a parolee. And that brings up problems involving Constitutional rights, personal privacy rights, and legal arguments about the “reasonableness” of suspicion. It also raises questions of officer safety: analysis cannot violate Constitutional rights, nor should it tempt an officer to wrongly trust a machine, data stream or communications device rather than observe and make decisions based on professional and personal training and experience.
Humans possess thinking abilities no computer can match. Machines do not come close to human intuition, intelligence and rational decision-making. A police officer must use appropriate tactics and strategies to deal with a situation, adapting as necessary—something a machine cannot do. Nor should a machine “direct” an officer to do something.
Computer algorithms and predictive analysis software are developing to the extent that they may be able to forecast where and when crime might occur. But mixing in behavioral analyses takes matters further than analysis of probabilities, and even moves beyond the analysis akin to that collected by a surveillance camera at a tourist attraction, or aboard a bus or train, or at a particular facility—analysis that says what is “normal” at such places, and what is not.
Aberrations from that norm might be regarded as “suspicious” behavior. Based on such information, and on the data gathered through the cameras, extra security might be deployed. That is part of the daily security decision-making at a site.
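The “aberration from the norm” idea in the two paragraphs above is, at its simplest, statistical outlier detection. A minimal sketch, using hypothetical camera counts and a plain z-score test (one of many possible anomaly tests, not any vendor’s actual method):

```python
import statistics

# Hypothetical hourly counts of people lingering near a facility entrance,
# as tallied by a surveillance camera: this is the "normal" baseline.
baseline = [3, 4, 2, 3, 5, 4, 3, 4]

def is_aberration(observed, norm, threshold=3.0):
    """Flag an observation as deviating from the norm when it lies more
    than `threshold` standard deviations from the baseline mean."""
    mean = statistics.mean(norm)
    sd = statistics.stdev(norm)
    return abs(observed - mean) / sd > threshold
```

A count of 12 people at once would be flagged against this baseline; a count of 4 would not. The hard questions raised in the article start where the flag is attached to a person rather than a count.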
However, there is a question of the step toward a more encompassing “Big Brother”-like approach that would impinge on personal and legal rights. A security camera in a public place or a place where there is no reasonable expectation of privacy is one matter, but the use of sensors that could detect physical changes in an individual will raise challenges in courts, politics, police methods, health care, and, yes, civilized society.
Proponents will argue that sensors could spot someone about to commit a crime. But just how “public” is one’s pulse, voice, perspiration, facial expression, eye movements, body movement changes, breathing, skin temperature, pupil width and gaze, and to what degree, if any, can such physical traits be freely observed, measured and acted upon by “authority”?
There is also a danger in the use of such data and the false positives it will inevitably generate. More questions will arise about how the data will be stored, for how long, and who oversees erasing the data and correcting any false positives.
The controversy surrounding precrime detecting was spotlighted with Mr. Dick’s short story, and with Mr. Spielberg’s version of it for the screen. Other authors and filmmakers have brought to the fore characters who are so well-trained and skilled that they are able to disguise their reactions and expressions to avoid detection—yet another question to consider because there would be such people in reality, too, capable of avoiding precrime detection.
There is yet another knotty problem in precrime detecting—that of moving from arrest for a committed crime, where courts explore the question of intent (deliberate, reckless or negligent), to scenarios in which someone is pre-arrested for a thought. And one step further—would individuals be banned from a particular place or event in the guise of preventing “bad” behavior before it occurs? Would the chant be that such precrime detecting is “for the public good” or “for public safety” and therefore something that should be allowed even though it stampedes over individual legal rights?
The Electronic Frontier Foundation (EFF) was founded in 1990 and confronts matters involving technology and free speech, privacy, innovation, consumer rights and legal/Constitutional rights of individuals. The EFF challenges legislation or other actions that violate individual rights. A recent incident involving GPS surveillance technology used in Massachusetts on a car owned by an arson suspect (and his passenger) saw the defendants later prosecuted, but they challenged the misrepresentations used to obtain a search warrant and the subsequent installation of a GPS device on the car.
The EFF argued, as amicus curiae, that traveling in a tracked vehicle raises serious privacy questions: the information gleaned can yield a windfall of personal details—who one’s family and friends are, religious affiliation, medical conditions, political leanings and other personal matters—when GPS surveillance (and other electronic records) are gathered and used without the court oversight that protects Fourth Amendment and other rights.
As implausible as such matters might seem in societies in which law, equity and justice rule, the ramifications of precrime detecting would raise new thoughts similar to those once pondered by Juvenal in his Satires, or by Socrates and Plato in the Republic—the question of “who guards the guardians?” Finally, just as fiction gave its most famous computer the name HAL, will the first precrime programs be given the name Agatha?
Thomson Reuters CLEAR Pre-Crime Software
By: Stephenie Slahor
Thomson Reuters has created “CLEAR®”, an investigative suite that can be used for pre-crime detecting. The company recently released a research white paper prepared by its Fraud Prevention and Investigation unit. The white paper explores the growth, use, and legal and privacy challenges of pre-crime detecting, and examines the use of such capabilities in forecasting criminal activity. The white paper, titled “Technology Fuels New Advances and Challenges in Predictive Policing,” is available on the company’s website.
The paper highlights the growth of predictive policing, including CLEAR, the Thomson Reuters investigative suite serving not only law enforcement agencies, but also financial institutions, corporate security departments and others needing intelligence about people and/or companies via live gateways to real-time data for historical and current information.
CLEAR has both public and proprietary records, arrest and incarceration records, photo lineups, work affiliations data, and other resources, along with its “Web Analytics” methodology for searching and categorizing social media, blogs, news sites and watch lists. Those records are then integrated into the user’s systems or searched via online platforms. Such integration can be done with large-volume batch capabilities and batch alerting functionality to identify and intercept individuals and/or organizations likely to participate in criminal activity.
In essence, pre-crime detecting is the result of data analysis, using a variety of resources including technology, mapping, software platforms and computing systems to winnow down to the information needed for a particular investigation. Thomson Reuters notes that some believe pre-crime or predictive policing is merely doing policing in a quicker fashion, but others feel that the use of new technology places departments in a more analytical mode in collecting, organizing and acting on data.
In other words, there is a plethora of data, but it must be put into a more useful and/or accessible format in order to be predictive. Akin to clues that can range from the obvious to the almost imperceptible, data needs to be linked, sifted and sorted to be useful enough to offer investigative momentum to the user.
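What “linking, sifting and sorting” can mean in code is easiest to see in a toy record-linkage sketch. The records, field names and matching rule below are invented for illustration; this is not CLEAR’s actual methodology or API:

```python
# A toy sketch of record linkage: join records from two hypothetical
# sources on a normalized name key, merging matched records.

arrests = [{"name": "J. Doe", "charge": "fraud"},
           {"name": "A. Smith", "charge": "theft"}]
social = [{"handle": "@jdoe", "name": "j. doe", "posts": 120}]

def normalize(name):
    """Crude key normalization: case-fold and trim whitespace."""
    return name.lower().strip()

def link_records(a_list, b_list):
    """Index one source by normalized name, then join the other against
    it, emitting a merged record for each match."""
    index = {normalize(r["name"]): r for r in b_list}
    linked = []
    for rec in a_list:
        match = index.get(normalize(rec["name"]))
        if match:
            linked.append({**rec, **match})
    return linked
```

Real systems use far fuzzier matching (aliases, misspellings, shared addresses), but the shape of the task—turning scattered clues into linked, sortable records—is the same.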
A nation’s laws, the presence of police, and the solid existence of a civilized society are deterrents to criminal activity, but Thomson Reuters believes that systems such as CLEAR add technology to the mix of deterrents.
The company also recognizes, though, that legal and privacy challenges to such technology will increase the need for better training of investigators using it, and will multiply Constitutional challenges to the use of such technology against someone for whom the required “probable cause” for investigation is lacking.
In addition, the white paper recognizes the public perception that officer training, insight and experience may be jeopardized by too much reliance on technology instead of human intelligence.
Stephenie Slahor, Ph.D., J.D., writes in the fields of law enforcement and security. She can be reached at firstname.lastname@example.org.