By Léon Digard and Jacob Kang-Brown
Before the FBI released its 2021 crime statistics last week, expectations about the quality of the data were already perilously low.
To estimate national trends, the FBI collects crime reports from thousands of law enforcement agencies across the country. Last year, the FBI—through the Uniform Crime Reporting (UCR) program—made significant changes to how local, county, state, tribal, and federal agencies submit their data. Now, agencies can submit reports only through one system—the National Incident-Based Reporting System (NIBRS)—requiring a big administrative shift among police departments and other law enforcement agencies, as well as substantially more detailed input. As a result, participation plummeted. Of the roughly 18,000 agencies in the country, only half submitted a full year's worth of data. Only 63 percent submitted any data at all.
And so, as predicted, the 2021 crime data tells us . . . well, not much.
The truth is, however, that the FBI data has never been a reliable source of information on crime in the United States. Many people do not report their experiences of crime to the police, and reporting varies from year to year and with a host of social, economic, and personal factors. Agency submissions to the UCR have long been spotty.
Furthermore, the way in which the data is produced means that it has always been at risk of inconsistency, error, and even manipulation. Let’s look at how the data is made.
The process starts when a law enforcement officer files an incident report following an alleged criminal event. The officer, their unit, or a records bureau then enters these reports into the agency's database. The agencies produce monthly summaries of these reports, following FBI counting and crime classification rules, and submit them through the UCR. Finally, these monthly reports are processed by the FBI and turned into national crime statistics.
Not every agency submits consistent reports, and some report nothing at all. So the FBI estimates national and state totals, sometimes using a relatively small percentage of jurisdictions in a state.
Even small inconsistencies in how officers and police departments report crimes can have a big effect on overall numbers.
One reason for this is that the FBI does not report on all categories of violent crime. It excludes incidents designated as the least serious—such as simple assaults—and these are the most common. We know this from looking at another source of data, the National Crime Victimization Survey (NCVS). Although the NCVS has its own limitations, it is more consistent over time and arguably a more accurate measure of historical crime trends. Rather than relying on police reports, the NCVS collects information directly from the public—skipping the layers of processing that leave the FBI data open to bias and manipulation. It also includes crimes that were never reported to the police, and therefore never counted by the FBI.
The trouble is, the decision to classify an event as a simple or an aggravated assault is not always clear—maybe coming down to the presence or absence of a weapon—but one will be included in the FBI’s national crime trends, and the other will not. Local classification decisions and policies can therefore push crime numbers up or down, regardless of any actual change in crime rates.
As with any data collection process, bias and distortion can creep in at every stage. Different officers can interpret the same alleged criminal event in different ways. They may be incentivized or pressured to downgrade or upgrade a charge. Filing decisions might be influenced by local bail and charging practices. The local agency might have other reasons for purposefully smudging the data—such as demonstrating its effectiveness or building a case for further funding.
A combination of these different issues can have a profound effect on how we understand crime at a national level. Between 2019 and 2020, for example, the FBI reported a 12 percent increase in aggravated assaults; NCVS data showed a 21 percent decrease.
We therefore have many reasons not to rely on the new FBI crime data; even at its best, it was deeply problematic.
And while a bird's-eye view of crime across the United States is important information to have, we must also remember that public safety is, at heart, a local issue. The neighborhoods that suffer the worst concentrations of crime are the same neighborhoods that have endured decades of economic inequality, racist policymaking, and disinvestment from basic services.
To hold law enforcement and local officials accountable, we need local data. For this reason, Vera assessed the quality of policing data from 94 of the country’s largest cities in our Police Data Transparency Index. Vera researchers worked with community members across the country to identify the data points that would be most meaningful and useful to them—an important addition to national trendlines that, while making for snappy headlines or platform points for politicians, are difficult to translate into action.
The results were, perhaps predictably, underwhelming. Of the 94 localities included, only 21 scored more than 50 out of 100 on Vera's index, which rated jurisdictions on measures such as data accessibility and completeness. More than half of the places studied publish no data about arrests or traffic stops.
This, and the woeful state of the FBI’s national data, should be a wake-up call.
Public safety is a recurring concern for many people living in the United States, and public spending on law enforcement runs into the billions every year. The public needs, and deserves, a higher standard of transparency. Ultimately, if we are to address the sources and consequences of crime, we must repair years of systemic inequity—requiring a broad-based effort to reshape many of our public institutions. We’ve always needed better.
Originally published by Vera Institute of Justice.
Léon Digard is Editorial Director for Research at the Vera Institute of Justice. Jacob Kang-Brown is a Senior Research Associate at the Vera Institute of Justice.