Image generated by Midjourney AI (prompt to the AI: /imagine a beautiful Black mum holding her children in an urban area, render 4K, etc.)

“Color ya Thao”… the color of the new-generation 1,000 Kenya shillings note, a light brown, is also a colloquial term for fair skin, and the phrase stuck with me all through. I wondered: does the color code for skin tone drive biases in AI, and would those biases have any effect on society? Are there inherent color biases in the image generators on the market, and do the AIs prefer particular skin tones in ways that can drive those biases?

The experiment testing colorism and new media was inspired by a discussion in one of my graduate classes on communication strategy design, where at one point we looked at colorism in our local productions and its implications for global audiences. Our able professor insisted that as communicators we should not only speak to our local communities but also seek to engage global audiences, given the power the internet provides.

The discussion prompted me to think about the various AI tools that have emerged, what we can do to ensure the right balance in what is out there, and what needs to be done to drive the right narratives, reduce such biases, and produce results and images that represent the world in all its colors.

    Colorism is a type of prejudice in which one group of people treats another group differently due to the color of their skin. It is frequently associated with racism and has been demonstrated to contribute to prejudice against people of color. This can be seen in a variety of settings, including the workplace, classroom, and interpersonal relationships. Colorism is often perpetuated by societal and cultural ideals of beauty and desirability that favor lighter skin tones.

Colorism and AI have been extensively covered across various sectors, particularly at Google, which switched from the Fitzpatrick scale to the Monk Skin Tone (MST) scale to classify skin tones for its computer vision algorithms, from Google Search Images to Google Photos and beyond.

According to industry experts, coded bias occurs when racism is built into technology, such as Google Photos mislabeling pictures of Black people as gorillas, racist soap dispensers, and automatically generated stereotypical images. Google launched an algorithm to detect lesions, but it excluded people with dark skin. Self-driving cars have been found to be much less likely to recognize people with dark skin than those with white skin. Addressing colorism bias remains a work in progress, and many organizations have done their part to drive inclusion and diversity in their workplaces.

In my quest to explore the various image generation tools and their colorism bias, I evaluated Midjourney AI and what it generated from various prompts provided by different users.

The results are certainly debatable!


The prompt by @Fantancy 2022 on Midjourney AI was as follows: romantic full-length portrait of a woman, a stunning woman in a magical white flowing embellished dress, flowers, crystals, and a handsome modern man in evening wear…

The prompt gave no indication of the skin tone the user had in mind, so why do we get white faces only? Why not a mix of the two, some Black and some white?

The upscale can then be done by the user or the image generator in the next step…

Is this to say that only white men and women can be stunning?

The same prompt, with “black” added, generated the image below.

Even then, there is a fair-skinned white face among the images generated for further upscaling!

As we enter the era of AI-generated tools and images, we should keep in mind the lessons of how far we have come in dealing with biases and race issues.

The images generated by DALL·E from OpenAI were no better; only when we specify “black” in the prompt does it generate Black faces.

Dealing with the biases will require a concerted effort by all stakeholders.

And, as ChatGPT put it, we can take steps to deal with the issues and train AI models in a fair and unbiased manner:

  1. Collect diverse and representative data: The data used to train an AI model should be diverse and representative of the population it will be used on. This is important to ensure that the model can perform well on a wide range of inputs and that it does not perpetuate existing biases.

  2. Pre-processing: Data pre-processing ensures the data is cleaned and ready for training. It includes removing outliers, duplicates, and irrelevant data, as well as handling missing values (a minimal sketch of this step follows the list).

  3. Annotate data: Annotating data is critical for supervised learning. It is the process of adding labels or tags to the data, which the AI model uses to learn the relationship between inputs and outputs.

  4. Fairness evaluation: Use fairness metrics and evaluation methods to evaluate the model’s performance on different subgroups of the data and ensure that it does not perpetuate existing biases (see the second sketch after the list).

  5. Monitor and iterate: Monitor the model’s performance during and after training and make adjustments as needed. This may include collecting more data, adjusting the model’s architecture, or fine-tuning the model’s parameters.

  6. Explainability: Make sure that the model is interpretable and that its decisions can be explained. This is important to understand how the model makes its predictions and identify potential sources of bias.

  7. Ethical considerations: Consider the ethical implications of using the model and how it could potentially harm certain groups of people.
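As a rough illustration of step 2, here is a minimal pre-processing sketch in Python with pandas. The columns, values, and thresholds are hypothetical placeholders, not part of any real dataset, and are there only to show the cleaning operations the step describes.

```python
import pandas as pd

# Hypothetical metadata table for a training set of portrait images.
df = pd.DataFrame({
    "image_path": ["img_001.jpg", "img_001.jpg", "img_002.jpg", None, "img_004.jpg"],
    "skin_tone_label": ["MST 2", "MST 2", "MST 9", "MST 5", None],
    "age": [34, 34, 29, 41, 250],  # 250 is an obvious outlier
})

df = df.drop_duplicates()                                  # remove duplicate rows
df = df.dropna(subset=["image_path", "skin_tone_label"])   # handle missing values
df = df[(df["age"] > 0) & (df["age"] < 110)]               # drop out-of-range outliers

print(f"{len(df)} records remain after cleaning")
```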
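And for step 4, a minimal fairness-evaluation sketch: it compares accuracy and positive-prediction rate across skin-tone subgroups for a trained classifier. The labels, predictions, and Monk Skin Tone buckets below are made-up examples, and the metric choice (per-group accuracy plus a demographic-parity-style positive rate) is just one of many ways to check for bias.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def per_group_metrics(y_true, y_pred, groups):
    """Report accuracy and positive-prediction rate for each subgroup."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "n": int(mask.sum()),
            "accuracy": accuracy_score(y_true[mask], y_pred[mask]),
            "positive_rate": float(y_pred[mask].mean()),
        }
    return report

# Hypothetical ground truth, model predictions, and skin-tone buckets.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["MST 1-3", "MST 1-3", "MST 8-10", "MST 8-10",
                   "MST 1-3", "MST 8-10", "MST 1-3", "MST 8-10"])

for group, stats in per_group_metrics(y_true, y_pred, groups).items():
    print(group, stats)

# A large gap between groups in accuracy or positive rate is a signal that
# the model may be perpetuating existing biases.
```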
