UH Researchers Awarded NSF Grant to Develop Technology to ‘Improve Local Government’
Whether we realize it or not, algorithms – sequences of instructions that tell computers how to perform tasks – are part of our everyday lives.
In public policy, algorithms are used to analyze data that helps policymakers make decisions related to criminal justice, public education, the allocation of public resources and national defense strategy. But such algorithms, which can influence a person’s prison sentence, for example, can be biased, lack transparency and cause mistrust in the system, according to researchers.
So, how can these algorithms be held accountable?
Ryan Kennedy, associate professor of political science at the University of Houston, is working to answer that question with a $750,000 grant from the National Science Foundation. Over the next three years, Kennedy’s research team will conduct community-based research to design an algorithm-accountability benchmark for a wide range of algorithms used in public policy.
“Too often, algorithms are developed far removed from the needs and concerns of the community,” said Kennedy, principal investigator of the Community Response Algorithms for Social Accountability (CRASA) project. “We need to have a set of principles that can be used to determine the degree to which we can exercise democratic control over the algorithms that are being used in public policy.”
Co-investigators from UH on the CRASA project include Lydia B. Tiede, associate professor of political science; Ioannis Kakadiaris, Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Computer Science, Electrical and Computer Engineering and Biomedical Engineering; and Andrew Michaels, assistant professor of law in the UH Law Center.
The research team will gather input from a diverse group of advisors in Harris County that includes local government officials, legal professionals, non-governmental organizations, companies that produce algorithms and community members to establish and evaluate algorithm standards.
“We have to have general ways of looking at these algorithms and studying them instead of just assuming they are correct,” Tiede added. “Part of the debate about accountable algorithms is the need to analyze what data is fed into algorithms and how they generate outcomes.”
In addition to creating an algorithm-accountability benchmark, the team will apply a scoring toolkit to software for criminal risk estimation and facial recognition technologies.
“If there is a process being used that makes impactful decisions on people’s lives, people should be aware of it and they should have the ability to engage with it,” Kennedy explained. “If we are ever going to have an honest discussion about using technology to improve local government, that can’t take place without some discussion of algorithms and what we expect from them.”
- Sara Tubbs, University Media Relations