Artificial Intelligence and Racial Discrimination: Predictive Policing Is a Clear Example

“Recent developments in generative artificial intelligence and the burgeoning application of artificial intelligence continue to raise serious human rights issues, including concerns about racial discrimination,” said Ashwini K.P., UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance.

There is an enduring and harmful notion that technology is neutral and objective, Ashwini K.P. said during the interactive dialogue for the launch of her new report at the Human Rights Council’s 56th session in Geneva, Switzerland.

In her report, she explores how this assumption allows artificial intelligence to perpetuate racial discrimination.

“Generative artificial intelligence is changing the world and has the potential to drive increasingly seismic societal shifts in the future,” Ashwini K.P. said. “I am deeply concerned about the rapid spread of the application of artificial intelligence across various fields. This is not because artificial intelligence is without potential benefits. In fact, it presents possible opportunities for innovation and inclusion.”

A clear example of how racial biases are reproduced through technological advances is predictive policing. Predictive policing tools make assessments about who is likely to commit future crimes, and where those crimes may occur, based on location and personal data.

“Predictive policing can exacerbate the historical over-policing of communities along racial and ethnic lines,” Ashwini K.P. said. “Because law enforcement officials have historically focused their attention on such neighbourhoods, members of communities in those neighbourhoods are overrepresented in police records. This, in turn, has an impact on where algorithms predict that future crime will occur, leading to increased police deployment in the areas in question.”

According to her findings, location-based predictive policing algorithms draw on links between places, events, and historical crime data to predict when and where future crimes are likely to occur, and police forces plan their patrols accordingly.

When officers in overpoliced neighbourhoods record new offences, a feedback loop is created, whereby the algorithm generates increasingly biased predictions targeting these neighbourhoods. In short, bias from the past leads to bias in the future.
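
That feedback loop lends itself to a toy simulation. The Python sketch below is purely illustrative: the two neighbourhoods, their offence rates, and the patrol-allocation rule are invented assumptions, not drawn from the report or from any real predictive-policing system. It models two neighbourhoods with identical true offence rates, where a historical surplus of police records in one neighbourhood keeps directing patrols there, which in turn keeps generating new records there.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two neighbourhoods with IDENTICAL true offence rates (hypothetical).
true_rate = np.array([50.0, 50.0])  # expected offences per period

# Historical over-policing: neighbourhood 0 starts with three times
# as many *recorded* offences, even though the true rates are equal.
recorded = np.array([300.0, 100.0])

TOTAL_PATROLS = 20.0

for period in range(10):
    # The "predictive" step: deploy patrols in proportion to past
    # recorded crime (a stand-in for a location-based model).
    patrols = TOTAL_PATROLS * recorded / recorded.sum()

    # Offences only enter the records where officers are present,
    # so detection scales with patrol presence.
    detection_prob = patrols / TOTAL_PATROLS
    recorded += rng.poisson(true_rate) * detection_prob

print("share of records, neighbourhood 0:", recorded[0] / recorded.sum())
# Prints roughly 0.75: the initial skew reproduces itself period
# after period, even though both neighbourhoods offend at the same
# true rate. Past bias becomes future bias.
```

In this toy model the skew never corrects itself, because the data the algorithm learns from is itself a product of where the algorithm sent the police. That is the essence of the loop the report describes.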


As with location-based tools, past arrest data on individuals, often tainted by systemic racism in the criminal justice system, can skew the algorithms’ future predictions, she said.

“The use of variables such as socioeconomic background, education level and location can act as proxies for race and perpetuate historical biases,” Ashwini K.P. said.
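
A minimal synthetic sketch, again with invented names and numbers, can show how such a proxy operates: the model below is trained with no access to the protected attribute at all, yet a correlated “location” feature transmits the disparity present in the historical labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical synthetic population: "group" is a protected attribute
# the model never sees; "location" correlates with it (for example,
# through residential segregation).
group = rng.integers(0, 2, n)
location = (rng.random(n) < np.where(group == 1, 0.8, 0.2)).astype(int)

# Historical labels reflect biased record-keeping, not behaviour:
# group 1 was recorded as "high risk" more often for the same conduct.
label = (rng.random(n) < np.where(group == 1, 0.30, 0.10)).astype(int)

# Train on the proxy alone; the protected attribute is excluded.
model = LogisticRegression().fit(location.reshape(-1, 1), label)
scores = model.predict_proba(location.reshape(-1, 1))[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 3))
# The scores differ by group even though "group" was never an input:
# "location" carried that information into the prediction.
```

Dropping the sensitive attribute from a model’s inputs therefore does not, on its own, make it race-neutral; what the remaining features encode matters just as much.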

The report also provides a brief analysis of efforts to manage and regulate AI at the national, regional, and international levels.

“Artificial intelligence technology should be grounded in international human rights law standards,” Ashwini K.P. said. “The most comprehensive prohibition of racial discrimination can be found in the International Convention on the Elimination of All Forms of Racial Discrimination.”

AI has posed risks to other rights as well, including in healthcare, where some tools used to generate health risk scores have been shown to include race-based correction factors. Ashwini K.P. also found that AI applied to educational tools can embed racial bias. For example, because of the design of the algorithms and the choice of data, academic-success algorithms often score racial minorities as less likely to succeed academically and professionally, thus perpetuating exclusion and discrimination.

In his vision statement, “Human Rights: A Path for Solutions,” UN Human Rights Chief Volker Türk said generative artificial intelligence offers previously unimagined opportunities to advance the enjoyment of human rights; however, its negative societal impacts are already proliferating.

“In areas where the risk to human rights is particularly high, such as law enforcement, the only option is to pause until sufficient safeguards are introduced,” Türk said.

For Ashwini K.P., while artificial intelligence has real potential for impact, it is not a solution to every societal issue and must be effectively managed to balance its benefits and risks.

Regulating AI is also a way to ensure this balance, according to Ashwini K.P. She recommended that States address the challenge of regulating AI with a greater sense of urgency, bearing in mind the risk of perpetuating racial discrimination; develop AI regulatory frameworks grounded in an understanding of systemic racism and human rights law; enshrine a legally binding obligation to conduct comprehensive human rights due diligence assessments, including explicit criteria for assessing racial and ethnic bias, in the development and deployment of all AI technologies; and consider prohibiting the use of AI systems shown to pose unacceptable human rights risks, including those that foster racial discrimination.

“Placing human rights at the centre of how we develop, use and regulate technology is absolutely critical to our response to these risks,” Türk said.

Ashwini K.P. was appointed Special Rapporteur by the Human Rights Council in October 2022. She is an academic, activist and researcher focusing on social exclusion, race, descent-based discrimination, and intersectionality. She is a visiting fellow at Stanford University, has served as an assistant professor in India, and is co-founder of “Zariya: Women’s Alliance for Dignity and Equality,” an organization that builds solidarity and connects women from various marginalized groups in India. She has represented Dalit women in various civil society groups, helping them strategize on ensuring that women from marginalized communities are empowered and hold decision-making roles in activism and mainstream social movements.

