“Ethics.” The emphasis placed on the subject and its inclusion as part of many a law enforcement academy’s curriculum is understandable; our profession seeks to bolster a character trait that, in theory, already inhabits its men and women. And as ethics, according to its Wikipedia entry, seeks “to resolve questions dealing with human morality and concepts such as good and evil, right and wrong, virtue and vice, justice and crime,” the ethical ideal is as coveted an entity within the profession as the moral one—perhaps, more so.
Modeling this concept in reality, however, can be a more difficult task for law enforcement than one might expect. For all its censuring of personnel for on-duty trysts and off-duty DUIs, the profession pays curiously little attention to an ongoing transgression perpetrated within it: stat fudging.
History of Lies
Misreporting statistics about crime is not a new phenomenon. The practice may well predate the inception of the Uniform Crime Reports (UCR) in 1930.
Developed in the 1920s by the International Association of Chiefs of Police, the Uniform Crime Reports program was designed to collect, classify, and store a variety of crime data in a uniform manner to allow comparisons across the nation. The FBI took over management of the program in 1930, and currently works with more than 18,000 law enforcement agencies that voluntarily provide crime data to it. The statistical results from this data are available in four annual publications: Crime in the United States, National Incident-Based Reporting System, Law Enforcement Officers Killed and Assaulted, and Hate Crime Statistics. These publications are widely used by criminal justice scholars and by law enforcement and government leaders.
The UCR is extremely valuable and often the foundation of media stories on crime rates rising or falling. It is also flawed. There has long been a notion that some of the data provided by individual agencies may be inaccurate, either through human error or human intervention.
When such statistical “errors” are committed on purpose, the impetus may be a strategic feint: an attempt to create an illusion of vulnerability or strength, depending on one’s agenda. Within law enforcement agencies, that feint can be the illusion of success, particularly in metropolitan areas where the desire to attract the dollars of citizens, visitors, tourists, and businesses trumps concerns over the welfare of that patronage.
Elsewhere, creating the impression of a crime problem can have its benefits, as well.
“It could go either way,” notes Debra Littlejohn Shinder, IT security specialist, former police officer, and academy instructor. “The environment in which the police work can be made to look worse than it is in order to convince management, legislators, and/or the public that the budget needs to be increased and more personnel need to be hired. In extreme cases, a concerted effort on the part of the troops can make administrators look bad in hopes of instigating a change from above.
“Then there are numerous ways that individual police personnel or police agencies can ‘pick and choose’ when it comes to statistical analysis, to create the outcome or interpretation they desire. A common way of misleading with statistics is sampling bias (for example, presenting statistics based on only a particular portion of the geographic area where crime information is not typical of the entire jurisdiction).”
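Shinder’s sampling-bias example is easy to demonstrate with a few lines of code. The sketch below uses entirely invented district names and numbers; it shows only how quoting a rate from a hand-picked slice of a jurisdiction can understate the citywide figure many times over.

```python
# Hypothetical jurisdiction: district names and counts are invented for illustration.
districts = {
    "downtown":  {"offenses": 900, "population": 10_000},
    "riverside": {"offenses": 300, "population": 20_000},
    "suburbs":   {"offenses": 100, "population": 50_000},
}

def crime_rate_per_1000(selected):
    """Offenses per 1,000 residents across the selected districts."""
    offenses = sum(districts[d]["offenses"] for d in selected)
    population = sum(districts[d]["population"] for d in selected)
    return 1000 * offenses / population

citywide = crime_rate_per_1000(districts)         # every district: 16.25
cherry_picked = crime_rate_per_1000(["suburbs"])  # quietest area only: 2.0

print(citywide, cherry_picked)
```

The arithmetic is trivial, which is exactly the point: no individual number in such a report is false, yet the “sampled” rate is roughly an eighth of the citywide one.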
The decision to input this, delete that, or gloss over all is a conscious one. It can be an action taken by underlings acting in accordance with the desires of those above them; or of those still higher up the food chain. As such, it is a collaborative effort that takes place throughout all levels of law enforcement.
Cooking the Books
While sawing crime statistics in half and making the paper trail of real victims disappear often entails sleight of hand at the keyboard, the means by which a department may inflate or deflate the numbers comes down to initiative, creativity, and acts of prestidigitation that extend well beyond computerized Magic Markers.
And so felonies may become misdemeanors; misdemeanors, infractions; or vice versa.
Such statistical subterfuge was witnessed by retired police officer Mark Cirone during his 20 years with the Geneva (N.Y.) Police Department. “For instance, if someone reported a larceny of something off their front porch, the reporting officer correctly listed the appropriate larceny charge—petit larceny,” Cirone says. “The offense would later be changed to ‘police information.’ Burglaries would be reclassified to trespasses. Even attempted burglaries with smashed windows and damaged doors would be changed to criminal mischief. This prevented corresponding stats for the Uniform Crime Report and gave wiggle-room for the chief to tell the public that property crime was down across the city.”
Cirone’s experience is not unique. And while its inherent nature makes auditing of such practices difficult—after all, who watches the watchmen?—complaints leveled against multiple departments for arbitrary case closures on workable crimes suggest that fudging statistics may be on the rise. This is particularly understandable in an era in which many law enforcement agencies are financially strapped and facing diminishing personnel in the investigative ranks.
Despite the FBI’s efforts to screen UCR crime data submitted by law enforcement agencies before it is entered into the national program, such screening serves only to weed out anomalies and ensure that the data conforms to the national standard. In-depth audits comparing UCR data directly against the original arrest reports are rare, so the UCR’s flaws can simply be a matter of garbage in, garbage out. In addition, any given agency may go years without a thorough review of its crime data collection practices. As a result, once the statistics are publicly reported, few changes are ultimately made to the reports generated by the UCR program.
Back in May, the New Orleans Times-Picayune reported that the crime statistics in the Crescent City appeared to be seriously out of sync. For 2011, New Orleans reported 199 murders, making it one of the deadliest cities in the nation. Criminologists consider the murder rate a fair indicator of overall crime because murder is almost always reported, whereas lesser crimes often go unreported. Yet for the same year New Orleans recorded one of the lowest numbers of aggravated assaults in the nation. The large disparity between the two figures calls the veracity of the crime statistics into question.
Rick Rosenfeld, Curators Professor at the University of Missouri-St. Louis and former president of the American Society of Criminology, told the Times-Picayune, “I find the growing gap between assaults and homicides to be very puzzling. For New Orleans to exceed the national figure by that much requires a good deal of imagination.”
While Rosenfeld and other crime statisticians did not publicly accuse New Orleans of fudging crime data, Rosenfeld added, “The two real possibilities are that citizens have reduced the rate at which they’re reporting these crimes to police, or that police have changed the way in which they classify such reports.”
The FBI cautions against ranking different areas based on crime statistics because it is difficult to determine the precise causes for underreporting of crime. It may well be that the citizens of New Orleans have become inured to the level of aggravated assaults and therefore forego reporting them. Whatever the reason for the decrease in non-lethal crimes, the numbers benefit New Orleans, which depends on tourism dollars in its rebuilding effort in the wake of Hurricane Katrina. It is less certain whether the reported crime rates benefit the individual citizens and visitors to the city.
Compounding the Problem
The misplaced emphasis on numbers and the doctoring of them exacts an exorbitant price across the board, killing more than just the credibility of the perpetrators and inhibiting more than the mere filing of reports and data. As of 2010, many eligible candidates within the NYPD were passing up opportunities for promotion to captain. The reason? Many didn’t want the numbers-focused stress attendant on the promotion.
“It’s the least-appreciated rank in the NYPD,” one NYPD lieutenant told the New York Post. “As a captain, you’re at work at your command 24-7, and when you go to Compstat, they make you feel that you haven’t…accomplished anything.”
As most officers know, CompStat is a geographic information-based system designed to map crime and identify problems, ostensibly to allow law enforcement leaders to proactively address criminal trends in their jurisdictions. To the minds of many within the profession, it has also become synonymous with weekly or monthly meetings wherein captains are raked over the coals in the presence of their peers for crime spikes on their watch. It is the kind of thing that lends itself to dramatization, as in the HBO series “The Wire,” in which a Baltimore Police Deputy Commissioner ordered subordinates to lower felony statistics or find a new home: “I don’t care how you do it, just f**kin’ do it,” he barks at captains shown drinking Maalox antacid before the meeting.
As stressful as the realities of such in-house aggravations are, far more damning are the real emotional and physical costs that can accrue in the aftermath of administrative cover-ups and the subterfuge employed to prevent them. Cirone notes that a student at a college in his old jurisdiction of Geneva, N.Y., had committed several forcible rapes.
“With the first two, the victims reported their rapes to campus security,” reflects Cirone. “The college blew the girls off and never reported the rapes to the police department. The third victim actually came to the PD after her rape and the investigation led to the other two and opened up Pandora’s box: another classic case of the colleges not wanting those crime stats known. I was involved in the arrest of the perp. We then found prior rapes from the perp from when he was in high school.
“One victim even committed suicide.”
Beyond such individual cases, underreporting crime statistics can have detrimental effects on the community at large. Administrators and investigators may overlook criminal trends within a neighborhood because key incidents are not reported or are reported as lesser offenses. By failing to address the criminal activity during its nascent stage, departments face the more difficult and more dangerous task of cleaning up the problems after they have escalated. In the meantime, residents continue to be victimized.
Time to Clean Up?
One would think that law enforcement agencies would be naturally inclined toward doing the right thing. That the bean counters and number crunchers would, of their own initiative, perform an obligatory gut check and realize that however damning the impression of perceived failure may be, it does not diminish the credibility or standing of their agency the way lying about it does.
And yet the problem continues.
But there are those who believe that the problem is not as widespread within the profession as reported. Still others resent what they perceive to be a needless stigmatization of CompStat.
“I truly don’t think this whole issue of fudging is anywhere close to being as common as people think,” says Ed Claughton, former police officer and owner of PRI Management Group. “Most of the suspicions and outright beliefs that a department is falsifying stats are the result of a total misunderstanding of how the production of the numbers works and of what the UCR/NIBRS rules state.
“If a structure doesn’t meet the UCR definition of a structure, then someone entering it and stealing something is a theft, not a burglary. If twenty cars are broken into in a parking lot overnight and the four UCR conditional elements are present (same time frame, same location, same MO, and apparently the same offender), these cases should all be reported as one theft from motor vehicle (not a burglary, according to UCR). If someone does a drive-by and the house is unoccupied and the intent of the offender isn’t known, it should be reported as criminal mischief. And many sexual assault cases have to be reported as inconclusive because victims often refuse to cooperate with the investigation,” he explains.
Claughton’s points have merit, even if they reveal inherent problems with some of the UCR criteria: Would an officer be justified in shooting at a subject he saw firing at a house during what was seemingly a drive-by shooting? Not if the UCR had anything to say about it; certainly not until the subject’s intentions and the occupancy of the dwelling were made known.
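Claughton’s counting rules can likewise be sketched in code. The records, field names, and function below are hypothetical, meant only to illustrate how incidents sharing a time frame, location, and MO collapse into a single reportable offense under the convention he describes:

```python
from collections import namedtuple

# Hypothetical incident records; every field and value here is invented for illustration.
Incident = namedtuple("Incident", "offense location mo time_frame")

incidents = [
    Incident("theft from motor vehicle", "Lot A", "window smashed", "overnight 6/1"),
    Incident("theft from motor vehicle", "Lot A", "window smashed", "overnight 6/1"),
    Incident("theft from motor vehicle", "Lot A", "window smashed", "overnight 6/1"),
    Incident("burglary", "12 Elm St.", "rear door forced", "overnight 6/1"),
]

def ucr_offense_count(records):
    """Collapse incidents sharing an offense type, location, MO, and time
    frame into one reportable offense, per the same-offender series
    convention Claughton describes."""
    return len({(r.offense, r.location, r.mo, r.time_frame) for r in records})

print(ucr_offense_count(incidents))  # 2: one theft-from-vehicle series plus one burglary
```

Four raw incidents become two reportable offenses, and no one has lied; this is why an apparent drop in the numbers is not, by itself, evidence of fudging.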
Claughton, who knows precisely how UCR/NIBRS works and what the rules are, and who has worked in police records as an officer, a detective, and a supervisor, finds the situation extremely frustrating.
“Problems with records management systems (RMS) routinely are a large part of the problem. Incorrect code tables and faulty mapping of UCR reportable fields within these systems are common and result in erroneous stats,” he says.
If there is an identifiable villain to be found in the proceedings, Claughton contends that it is the news media.
“It’s completely unfair in its reporting on this topic,” Claughton states. “Nashville PD was accused of cooking the books and lambasted in the news. An audit was conducted (which I worked on) and it was cleared of any wrongdoing. There was a scripting error in RMS leading to over 10,000 cases not being reported electronically as they should have been. Their clearance rate was wrong because they didn’t know the UCR rules on clearing cases.”
Phoenix PD was likewise plastered with allegations of creatively increasing their kidnapping numbers with an eye toward gaining more grant funding.
“Subsequent to a DOJ investigation and an independent panel, they were cleared—but only after the chief lost his job and the department was dragged through the mud. The only outlet to report what really happened was the Phoenix New Times, which revealed that the allegations came from a disgruntled employee whose life was unraveling and went on a false smear campaign of the department. They had problems with RMS and poor records management quality control procedures, nothing more.”
It is easy to be sympathetic with Claughton’s arguments, particularly if one isn’t feeling defensive on the matter. But there does seem to be a disconnect, and even if the problem is perhaps not as widespread as some would contend, another argument could be made that one instance is one too many. The real question, then, is to what extent the problem is one of conscious deception or of gross misinterpretation. Or both.
Ironically, there are critics of CompStat and its ilk who blame the news media for not reporting enough on the problems.
An anonymous Chicago officer observed, “I actually was speaking to someone in the media about how the department kills crime and I was told, ‘Give me the evidence.’ I said, ‘Are you nuts? It is your job to go and request case reports on lost property, trespass, or theft from persons and when you read the narratives, you will see how they should be thefts, burglaries, or robberies.’
“You know what the f**r’s response was? ‘So, I guess you have no evidence, then?’ Unf**king believable how the media does their jobs around here.”
If it’s true that such fudging takes place at all levels, then it is also true that the bad stuff rolls downhill.
Another Chicago cop asserts that, “Anything that keeps us from doing police work is wrong. CompStat keeps us from doing police work, and encourages fudging the numbers.
“I remember when we left roll call and the bad guys had something to worry about. Our front line supervisors were out there with us, not inside massaging some numbers. They understood what we were doing because they were with us, not in the station or at some meeting.
“Now we run from job to job. When we are not handling the radio, we are on some useless mission that only looks good on paper. The Beat guy knows his beat. Give him time to concentrate on the buggers they know. When is the last time you saw a Beat car on a traffic stop? This job is broken.”
Statistical crime reporting plays an important role in contemporary law enforcement, but the data must be accurately reported in order to be completely useful. Once we develop a true and unbiased model for criminal activity, police departments must use that information in a constructive, not punitive, way to proactively fight crime.
Back when I was working the streets for the Los Angeles County Sheriff’s Department, a wise man once asked me: “If someone shoots a gun on the west side (the most affluent part of Los Angeles) and there are no police to take the report, is it really a crime?” It’s a good question, and an even better one to ask now is: Even if the police took the report, would it show up in the local crime stats or the UCR?