Yet another case of racial bias in artificial intelligence (AI) has come to light. A Canva user flagged some questionable results from a recent attempt to use Canva’s Text to Image app.

Adriele Parker, a DEI thought partner, shared in a LinkedIn post that her attempt to create an AI image of a "Black woman with bantu knots" triggered an error message warning that "bantu may result in unsafe or offensive content."

"Tell me your AI team doesn't have any Black women without telling me your AI team doesn't have any Black women. My goodness," she continued in the post.

“Canva, if you need a DEI consultant, give me a shout. I’ve been a fan of your platform for some time, but this is not it. Be the change. Please,” she added.

Racial Bias in Artificial Intelligence

The comments on Parker's post filled with accounts from LinkedIn users who had run into similar issues with the Text to Image app. Those stories reflect broader concerns about the racial biases embedded in AI technology.

“There is ample evidence of the discriminatory harm that AI tools can cause to already marginalized groups. After all, AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination — from the criminal legal system, to housing, to the workplace, to our financial systems,” Olga Akselrod wrote in a piece for the ACLU back in 2021.

Akselrod's piece came years before the conversation recently sparked by tools like ChatGPT and the AI image generators built into platforms like Canva and stock photo sites like Shutterstock.

“Bias is often baked into the outcomes the AI is asked to predict. Likewise, bias is in the data used to train the AI — data that is often discriminatory or unrepresentative for people of color, women, or other marginalized groups — and can rear its head throughout the AI’s design, development, implementation, and use,” Akselrod wrote.

Canva first announced the release of its AI image generator app in November 2022.

In its post about the release of the app, Canva acknowledged the issues that still exist with the evolving AI technology.

“We’ve invested heavily in safety measures that help the millions of people using our platform ‘be a good human’ and minimize the risk of our platform being used to produce unsafe content,” the statement read. “For Text to Image this includes automated reviews of input prompts for terms that might generate unsafe imagery, and of output images for a range of categories including adult content, hate, and abuse.”

"We've also created a feedback loop giving our community the opportunity to report any issues with the images generated, including if they feel they're enforcing biases or stereotypes," the statement continued.
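
Canva's mention of "automated reviews of input prompts for terms that might generate unsafe imagery" suggests some form of keyword screening. A minimal, purely illustrative sketch in Python (assuming a naive blocklist; this is not Canva's actual implementation, and the list contents are made up) shows how flagging a bare term like "bantu" blocks benign prompts regardless of context:

```python
# Illustrative sketch only, NOT Canva's actual system: a context-blind
# blocklist flags any prompt containing a listed term, even when the
# prompt itself is harmless.

# Hypothetical blocklist; the entries here are assumptions for illustration.
BLOCKLIST = {"bantu", "gore", "explicit"}

def review_prompt(prompt: str) -> list[str]:
    """Return any blocklisted terms found in the prompt."""
    words = prompt.lower().split()
    return [term for term in BLOCKLIST if term in words]

flagged = review_prompt("Black woman with bantu knots")
if flagged:
    # Mirrors the error Parker saw: the term alone, not its context,
    # triggers the warning.
    print(f"'{', '.join(flagged)}' may result in unsafe or offensive content")
```

Because a filter like this matches terms rather than meaning, a culturally specific word used in an entirely benign prompt gets rejected the same way genuinely unsafe content would, which is consistent with the behavior Parker described.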