By Canyu Gao

Human decision-making is complex and shaped by a variety of factors that sometimes produce ostensibly irrational behavior. Despite considerable progress in documenting the factors that systematically bias decision-making, much less is known about how to correct those biases. This gap matters because cognitive biases often contribute to poor decisions, leading individuals to discriminate against others or to engage in behaviors that harm their own health.

Digital technologies, notably artificial intelligence (AI), can augment human decision-making and help avoid the adverse consequences of human error. These advances span diverse applications, including mapping technologies, voice-activated smartphones, handwriting recognition for mail delivery, and language translation. AI's computational power and advanced algorithms have improved efficiency and reduced labor and time costs. Given this potential to improve human decision-making, public agencies are increasingly adopting AI tools in hopes of improving different dimensions of public service delivery. For instance, the Department of Justice operates the Threat Intake Processing System (TIPS) database, which uses AI algorithms to identify, prioritize, and expedite the processing of actionable tips.[1] Here, AI plays a pivotal role in triaging imminent threats, helping FBI field offices and law enforcement respond to the most serious ones promptly. The algorithm assigns a score to each tip and places the highest-scoring tips at the front of the queue for human review.
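To make the triage logic concrete, the sketch below shows one simple way such a scoring queue could work. It is a hypothetical illustration in Python, not the actual TIPS implementation, which is not public: the tips, the keyword weights, and the `score_tip` heuristic are all invented, and a real system would use a trained risk model rather than keyword matching.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical illustration only: the real TIPS scoring model is not public.
# Each incoming tip gets a risk score, and analysts review the highest-scoring
# tips first by popping them from a priority queue.

@dataclass(order=True)
class Tip:
    neg_score: float  # negated so Python's min-heap acts as a max-priority queue
    tip_id: str = field(compare=False)
    text: str = field(compare=False)

def score_tip(text: str) -> float:
    """Stand-in for a learned risk model; here, a toy keyword heuristic."""
    keywords = {"weapon": 0.6, "threat": 0.3, "school": 0.4}
    return sum(w for k, w in keywords.items() if k in text.lower())

queue: list[Tip] = []
for tip_id, text in [
    ("T1", "Neighbor heard a vague threat"),
    ("T2", "Caller reports a weapon near a school"),
    ("T3", "Suspicious but unspecific online post"),
]:
    heapq.heappush(queue, Tip(-score_tip(text), tip_id, text))

# Analysts work through tips in descending order of estimated risk.
while queue:
    tip = heapq.heappop(queue)
    print(f"{tip.tip_id}: score={-tip.neg_score:.1f} -> human review")
```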

Nonetheless, integrating digital technologies into public-sector decision-making is fraught with complexities and concerns. Although automated systems are intended to mitigate bias and promote impartiality, they can inadvertently introduce new biases through skewed model parameters and biased training data. For example, if police have historically targeted certain communities more than others, the algorithms behind TIPS may recommend continued heavy policing in those areas, leading to over-policing and potentially discriminatory practices. The result can be incomplete and prejudiced forecasts that amplify, rather than remedy, disparities in service provision.[2] Even when training data is unbiased, an algorithm can still introduce bias by inadvertently overweighting certain variables or relying on proxies for race or socioeconomic status, producing discriminatory outcomes.
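The proxy problem can be illustrated with a small simulation. The Python sketch below uses entirely synthetic data and invented feature names, and assumes (purely for illustration) that ZIP code correlates with group membership and that historical "flagged" labels reflect past over-policing. A scoring rule that never sees the protected attribute still produces sharply different average risk scores for the two groups.

```python
import random

# A minimal sketch with synthetic data: a proxy variable (ZIP code) reintroduces
# group disparities even though the protected attribute is excluded from the model.
random.seed(0)

# Group membership and ZIP code are strongly correlated, and historical
# "flagged" labels reflect heavier past policing of ZIP 1.
population = []
for _ in range(10_000):
    group = "B" if random.random() < 0.5 else "A"
    zip_code = 1 if random.random() < (0.9 if group == "B" else 0.1) else 0
    flagged = random.random() < (0.30 if zip_code == 1 else 0.05)
    population.append((group, zip_code, flagged))

# A "group-blind" scoring rule: predicted risk is just the historical flag
# rate within each ZIP code. No protected attribute is used directly.
rate = {}
for z in (0, 1):
    flags = [f for _, zz, f in population if zz == z]
    rate[z] = sum(flags) / len(flags)

# Average predicted risk still differs sharply between the groups, because
# ZIP code acts as a proxy for group membership.
for group in ("A", "B"):
    scores = [rate[z] for g, z, _ in population if g == group]
    print(f"Group {group}: mean predicted risk = {sum(scores) / len(scores):.2f}")
```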

Furthermore, levels of trust in digital technologies differ markedly across groups, including citizens and public managers. In a 2021 survey experiment, participants read a fictional news story in which a local government was considering two options for a troublesome intersection: deploying a police officer or installing a red light camera. The study found that when the officers were depicted as White, Black citizens were more likely to perceive automated decision-making as fairer and to prefer it over police officers. Intriguingly, the pattern did not hold in reverse: White citizens did not respond the same way when the officers were portrayed as Black.[3] This divergence in attitudes toward digital technologies raises concerns about uneven outcomes in the delivery of government services.

In conclusion, the nuanced interplay among human behavior, technological advancement, and decision-making underscores the need for careful scrutiny of biases and a commitment to equity. Achieving fair and impartial decision-making as technology becomes more deeply integrated into government requires a thorough understanding of the interdependent roles of human cognition and digital technologies.

Canyu Gao is a Ph.D. student at the School of Public Affairs and Administration at Rutgers University-Newark.

 

References:

[1]  https://www.justice.gov/open/page/file/1517316/download

[2]  Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

[3]  Miller, S. M., & Keiser, L. R. (2021). Representative bureaucracy and attitudes toward automated decision making. Journal of Public Administration Research and Theory, 31(1), 150-165.