Using Citizen Feedback to Improve Local Government
We wrote a blog post with our partner organization mySidewalk about the use of civic technology in the United States.
(Header Photo Credit: Daria Shevtsova, Unsplash)
Ben Armstrong collaborated with mySidewalk to write a blog post describing findings from his research project in Pittsburgh, PA. You can read the blog post here.
The Internet Age has made two implicit promises to local democracies. First, it will be easier for residents to provide feedback to their elected officials and participate in public decision-making. Second, governments will have more and better data on public services and resident behavior to inform public policies.
You are a city official, and you have decisions to make.
Perhaps there is a vacant lot, and you need to decide what it should become. Congestion might be overtaking local streets, and you need to find a way to mitigate it. Or rental prices could be forcing residents to move elsewhere, and you need to negotiate more affordable housing.
How do you incorporate the feedback your residents provide by email, by telephone, and in person? Do you pay attention to the data on what residents buy, where they send their kids to school, and how they commute?
Of course neither is sufficient — the challenge is making sense of multiple sources of information that might contradict one another. After all, Town Hall meetings and online feedback represent hundreds of voices with specific things to say. Census, economic, and other data summarizes what’s happening with thousands, but with far less detail.
Even more challenging, much of the online and town-hall feedback might be entirely irrelevant to the challenges facing the city or to what government can do about them. The citizens who provide feedback might represent extreme and uninformed opinions.
In other words, more data is not necessarily better.
Inferring the best public transportation options from income, education, and commuter data seems more prone to error than asking the same people what public transport options they prefer.
Our research highlights the risks of relying too much on data or public feedback alone. We propose below that cities use data on resident behavior to focus how they solicit feedback from residents. Public behavioral data can educate the public and frame the discussions so that resident feedback is more targeted and informed. The hope is that data helps cities ask better questions that receive more feedback from more residents. As a result, public feedback is less dominated by a few extreme voices and more representative of the population most affected by the issues at hand.
The Data Dilemma in Pittsburgh
When scandal forced the Pittsburgh, PA police chief to resign in 2013, the mayor and his policy team appointed a task force to explore the public safety problems facing the city and hire a new chief who could solve them.
The city held town-hall meetings throughout Pittsburgh where it collected feedback from citizens verbally and on paper. It also used MindMixer (now mySidewalk), an online civic engagement forum, to gather input from citizens online. These two channels helped amass hundreds of pieces of public feedback in response to the question “What priorities does your neighborhood need the new Police Chief to address?”
The public feedback that citizens expressed online and in person was vastly different. Online feedback focused on broad issues of traffic, bike safety, and law enforcement. A typical example of online feedback is:
Although some online feedback mentioned violence or community policing, the common focus was on everyday concerns like improving response times and improving professionalism by “firing bad cops.”
The in-person feedback at town halls focused more on community policing; it emphasized violence that persists in Pittsburgh neighborhoods and distrust of the police. Consider one example:
It was not just the substance of the comments that varied, but also the form. In-person feedback was often a short phrase that expressed a general idea: “drug activity,” “domestic violence,” “police accountability,” or “gun violence” are all examples of in-person feedback. Online feedback was much longer; it often included a particular proposal and might have even included a link suggesting another city’s example. Consider this online feedback for bike safety improvements:
Words Used at Town Hall Meeting (reflects in-person feedback)
The frequency of words used online and at town hall meetings illustrates these trends. Online feedback emphasizes “people” (individual residents) and demands like “stop,” “want,” and “get.” Commenters online might be expressing a problem they have experienced and would like to change. “Traffic,” “bike,” and “car” reflect the types of problems they emphasized, and “model” and “law” show how these commenters often cited other cities’ practices to support their point.
At town hall meetings, comments were focused on “community” concerns that emphasized problems with “violence” and “crime.” They also focused on one solution: improving community relations with the police force.
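The word-frequency comparison described above can be reproduced with a simple token count. The sketch below uses hypothetical sample comments, not the actual Pittsburgh feedback, and an illustrative stopword list:

```python
from collections import Counter
import re

# Minimal stopword list for illustration; a real analysis would use a fuller one.
STOPWORDS = frozenset({"the", "a", "to", "of", "and", "in", "is", "that"})

def word_frequencies(comments):
    """Count word frequencies across a list of feedback comments,
    ignoring case, punctuation, and common stopwords."""
    counts = Counter()
    for comment in comments:
        words = re.findall(r"[a-z']+", comment.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

# Hypothetical examples standing in for the online feedback channel:
online = [
    "Stop cars speeding near the bike lane",
    "We want better traffic enforcement for bikes",
]
freqs = word_frequencies(online)
print(freqs.most_common(5))
```

Running the same function over each channel’s comments and comparing the top terms is enough to surface the “traffic/bike” versus “community/violence” split the charts show.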
When we asked city officials to evaluate the feedback they received, the expert task force and mayor’s staff ranked a sample of individual comments according to their usefulness, relevance, and novelty. City officials and experts found the online feedback to be more novel — it offered some new ideas — while the in-person feedback was considered slightly more relevant to the subject at hand.
Our initial conclusion from these results was frustratingly simple.
Neither online nor in-person feedback is sufficient on its own. Online feedback mechanisms might work better for generating new ideas, while in-person channels are still important for collecting resident opinions on particular topics.
The problem was that we were focused only on feedback that the city actively collected. There is a third source of feedback that Pittsburgh had not yet tapped: crime data from the city and the FBI.
A Third Source of Feedback
The City of Pittsburgh asked its citizens what they prioritize when it comes to public safety, yet Pittsburgh already has crime data that reveals the types of problems that citizens are facing, and where those problems are most prevalent in the city.
mySidewalk allows users to visualize crime data in Pittsburgh in charts and maps. Three facts stand out.
Fact #1: Most neighborhoods do not report much crime at all.
A handful of Pittsburgh’s gentrifying neighborhoods (e.g. East Liberty/Larimer, South Side Flats) have experienced high crime per capita, but most neighborhoods — particularly the centers of Pittsburgh’s old wealth (e.g. Squirrel Hill, Point Breeze) — do not report much crime at all. When the City of Pittsburgh reaches out to gather feedback on crime, it asks the same questions in each police district, yet low-crime and high-crime neighborhoods face different problems and might have different concerns.
City officials should use crime statistics to ask questions and gather feedback that is more relevant to neighborhood populations. When governments reach out to neighborhoods with localized questions based on whether neighborhoods are low- or high-crime and the types of crime they experience, it shows that the government understands some of the particular challenges in a neighborhood and wants more specific feedback on how to address them. It enables citizens to connect with government in a way that more closely reflects their concerns and experiences.
Moreover, these crime statistics highlight that the Central Business District experiences some of the highest crime in Pittsburgh (both overall and per housing unit). While Pittsburgh’s town halls focused mostly on neighborhoods and not the Central Business District, future outreach might examine crime data before determining where to focus citizen outreach.
Pittsburgh Crime per Capita (by Neighborhood)
Fact #2: The public’s feedback on crime focused on problems at either extreme.
Online feedback discussed traffic and bike lanes, while in-person feedback focused on violent crime like murder and drugs. Neighborhood data measures overall crime per capita, but it does not capture the types of crime. Since violent crimes like murders and assaults occur only about a fourth as often as property crimes (4,104 v. 17,901), an area might show up as low-crime overall if it has few property crimes but a high incidence of violence. Although a low-overall-crime, high-violent-crime combination is rare, the opposite might be true in the Central Business District, where property crimes are high but there are fewer murders and assaults than in neighborhoods on the outskirts. Unfortunately, the data are not available to confirm.
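The “a fourth as often” claim follows directly from the two counts cited above:

```python
# Figures cited above for Pittsburgh: violent vs. property crimes
violent_crimes = 4_104
property_crimes = 17_901

ratio = violent_crimes / property_crimes
print(f"Violent crimes occur about {ratio:.0%} as often as property crimes")
```

The ratio comes out near 23 percent, i.e. roughly a fourth.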
The lesson from city-wide crime statistics is not only that overall data can be misleading, but also that more detailed data on violent and non-violent crime by neighborhood can help cities gather even more relevant feedback.
Fact #3: Compared to Cleveland, Pittsburgh has lower rates of property crime and sexual violence.
Comparing Pittsburgh’s crime statistics in 2014 to Cleveland’s highlights the areas where Pittsburgh’s crime is encouragingly low, and where crime is particularly problematic.
Cleveland’s population is approximately 30 percent larger than Pittsburgh’s, yet Cleveland has more than double the property and violent crime. Although property crime is the most frequent offense in Pittsburgh by a long shot, the city is still performing well on property crime comparatively. Sexual violence is also a much more prominent problem in Cleveland, which reported approximately five times as many rapes as Pittsburgh.
Yet there are certain areas of public safety where Pittsburgh struggles just as much or more than Cleveland does. Pittsburgh still has approximately the same number of aggravated assaults per capita, and there were more murders in Pittsburgh than in Cleveland in 2014. Although Cleveland appears to experience more violence overall, Pittsburgh has a relative concentration of severe violence.
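Per-capita comparisons like the ones above normalize raw counts by population. The sketch below shows the calculation; the population figures are rounded assumptions (check Census estimates for 2014), Pittsburgh’s crime counts are the FBI figures cited earlier, and Cleveland’s counts are hypothetical placeholders chosen only to be consistent with the “more than double” description:

```python
# Approximate 2014 populations (rounded assumptions, not official figures)
POPULATION = {"Pittsburgh": 306_000, "Cleveland": 390_000}

# Pittsburgh counts are from the figures cited above;
# Cleveland counts are hypothetical placeholders for illustration only.
CRIMES = {
    "Pittsburgh": {"property": 17_901, "violent": 4_104},
    "Cleveland": {"property": 38_000, "violent": 9_000},
}

def per_capita(city, category, per=100_000):
    """Crimes per 100,000 residents for a given city and crime category."""
    return CRIMES[city][category] / POPULATION[city] * per

for city in CRIMES:
    print(city, round(per_capita(city, "violent")), "violent crimes per 100k")
```

Normalizing this way is what lets a smaller city like Pittsburgh be compared fairly with a larger one, and it is the same adjustment behind the neighborhood-level “crime per capita” map above.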
This should not be entirely surprising to Pittsburgh’s public officials. At a town hall meeting in one district, residents focused on these very issues — gun violence and murder — whereas smaller crimes went largely ignored throughout the public meetings.
Thus, the data help affirm some of the feedback officials already received and can focus how officials reach out for continued feedback in each neighborhood.
A Different Approach to Resident Feedback
Cities miss opportunities to understand their citizens. They might host town hall meetings, but these are often full of open-ended questions that lead to grandstanding. Online feedback can be creative, but is often all over the map. Public data is becoming more and more powerful, but is prone to misinterpretation. None of these sources can stand alone.
We propose that cities begin by using what they learn from public data to shape the questions they ask citizens.
For example, Pittsburgh can share neighborhood data on crime and engage community members and the police in a discussion of how to reduce the numbers. If the community sets benchmarks in partnership with the police, the city can share updated statistics that reveal whether they have reached their goal.
Comparing Pittsburgh’s violent crime and aggravated assault rates can frame community outreach that explores what is going wrong in Pittsburgh and experiments with what has worked in other cities.
These same principles can guide government outreach across issue areas. Beginning with data allows government to promote informed citizen engagement without guiding it toward a particular outcome.
About the Author: Ben Armstrong is a Graduate Research Associate at MIT’s GOV/LAB. His research focuses on the origins of economic growth in cities as well as the impact of new technology on local political participation. He can be reached at email@example.com.