(Image generated by Midjourney AI)
(Command to the AI: /imagine a Beautiful Black Mum holding the children in an urban area. Render 4k, etc.)
What is Colorism?
The recently introduced 1,000 Kenyan Shilling note has acquired a colloquial nickname among locals: women with lighter skin tones are said to be “Color ya Thao”, the color of the thousand-shilling note. The term symbolizes the idealized light skin tone that holds significant value within our culture. This observation prompted me to ponder: could the color codes used in AI technology be reinforcing existing colorism biases, impacting our society on a broader scale?
As a graduate student specializing in communication strategy, I’ve delved into the intricate nuances of colorism prevalent in media portrayals. Our professors emphasize the importance of crafting content for a global audience while also acknowledging the significance of catering to local demographics. This perspective piqued my curiosity about the influence of AI on shaping societal perceptions of attractiveness and desirability.
Colorism, often misunderstood by the layperson, is discriminatory treatment based on an individual’s skin tone. This bias frequently intersects with racism and permeates many aspects of life, from professional environments to personal interactions. Our cultural inclination to favor individuals with lighter skin tones only entrenches it further.
Could this phenomenon be attributed to coded bias within AI systems?
Recent research makes it evident that AI systems are susceptible to inherent biases such as racism. Google, for instance, has adopted the Monk Skin Tone (MST) scale over the Fitzpatrick scale in its computer-vision algorithms, a shift driven by concerns about “coded bias”, the phenomenon of racial prejudice becoming embedded in technology. Instances like Google Photos erroneously classifying Black people as gorillas underscore the issue, as do automatic soap dispensers that fail to detect darker skin and stereotyped computer-generated imagery. A Google skin-lesion detection algorithm likewise performed poorly on darker skin tones, and research indicates that autonomous vehicles identify pedestrians with darker skin less accurately than those with lighter skin.
Motivated by these insights, I embarked on an experiment using Midjourney AI and a variety of user-provided command prompts to explore how colorism manifests in AI systems. While the study’s results are still under review, they highlight the pressing need to address and rectify the colorism biases entrenched within AI technologies.
This dipstick analysis promises to pave the way for more extensive research in future endeavors.
Let’s explore the various images generated by command prompts:
(The command prompt by @Fantancy 2022 on Midjourney AI was as follows: romantic full-length portrait of a woman, a stunning woman in magical white flowing embellished dress, flowers, crystals, and a handsome modern man in evening wear…)
Looking at the command prompt @Fantancy 2022 had provided on Midjourney, I couldn’t help but notice that the generated image offered very little variety. The prompt, which asked for a romantic full-length portrait of a couple holding each other’s hands, gave no hint of the skin tone the user had in mind, yet every face in the picture was white. Why? Is the implication that only white people can be beautiful men and women?
Let’s now examine the same command prompt with “black” added to it.
Even then, a white, light-skinned face is still included in the image produced for an additional upscale!
DALL·E 2 by OpenAI was no better.
To our esteemed industry experts:
There is a significant amount of work ahead. Designers and content creators must ensure they have a range of options available within the constraints of tight deadlines, as this is an industry driven by time-sensitive demands.
Do we fully grasp the potential impact of these efforts on both the general public and the communication industry?
Extensive research is imperative.
While there are numerous gaps to address, there are also abundant opportunities. It’s a matter of perspective—whether we see the cup as half empty or half full.
There are simple solutions out there. ChatGPT offers some on its platform, and we hope OpenAI can work on the inherent biases we see in its DALL·E models.
ChatGPT postulates the following as a way of dealing with biases:
- Collect diverse and representative data: The data used to train an AI model should be diverse and representative of the population it will be used on. This is important to ensure that the model can perform well on a wide range of inputs and that it does not perpetuate existing biases.
- Pre-processing: Data pre-processing ensures that the data is cleaned and ready for training. It includes removing outliers, duplicates, and irrelevant data, as well as handling missing values (a pandas sketch of this step follows the list).
- Annotate data: Annotating data is critical for supervised learning. It is the process of adding labels or tags to the data, which the AI model uses to learn the relationship between inputs and outputs.
- Fairness evaluation: Use fairness metrics and evaluation methods to evaluate the model’s performance on different subgroups of the data, to ensure that it does not perpetuate existing biases (a sketch of such a subgroup check follows the list).
- Monitor and iterate: Monitor the model’s performance during and after training and make adjustments as needed. This may include collecting more data, adjusting the model’s architecture, or fine-tuning the model’s parameters.
- Explainability: Make sure that the model is interpretable and that its decisions can be explained. This is important to understand how the model makes its predictions and to identify potential sources of bias.
- Ethical considerations: Consider the ethical implications of using the model and how it could potentially harm certain groups of people.
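Below is a minimal pandas sketch of the pre-processing step described in the list. The file name `training_data.csv`, the `internal_id` column, and the three-standard-deviation outlier rule are illustrative assumptions rather than a prescription:

```python
# A minimal sketch of data pre-processing: duplicates, irrelevant
# columns, missing values, and outliers. Names are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Drop exact duplicate rows.
df = df.drop_duplicates()

# Drop a column that is irrelevant to the task (hypothetical example).
df = df.drop(columns=["internal_id"])

# Handle missing values: fill numeric gaps with each column's median.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Remove outliers: keep rows within 3 standard deviations of the mean.
zscores = (df[numeric_cols] - df[numeric_cols].mean()) / df[numeric_cols].std()
df = df[(zscores.abs() <= 3).all(axis=1)]
```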
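Below is a minimal sketch of the fairness-evaluation step: comparing a classifier’s behavior across skin-tone subgroups. The file `predictions.csv` and the columns `skin_tone`, `label`, and `prediction` are hypothetical, and the Monk Skin Tone scale mentioned earlier is one way such annotations could be collected:

```python
# A minimal subgroup fairness check: per-group accuracy and
# positive-prediction rate. Column and file names are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def subgroup_report(df: pd.DataFrame, group_col: str,
                    label_col: str, pred_col: str) -> pd.DataFrame:
    """Compute accuracy and positive-prediction rate per subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": accuracy_score(sub[label_col], sub[pred_col]),
            "positive_rate": sub[pred_col].mean(),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: predictions.csv holds ground-truth labels, model
# predictions, and a skin-tone annotation (e.g. on the MST scale).
df = pd.read_csv("predictions.csv")
print(subgroup_report(df, "skin_tone", "label", "prediction"))
```

A large gap in accuracy or positive-prediction rate between subgroups is a red flag that the model may be reproducing the very colorism biases discussed above, and it feeds directly into the monitor-and-iterate step.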
As communicators, we have a responsibility to advocate for a fairer and more inclusive future in which technology accurately represents the diversity of the world we live in. It’s time for accurate storytelling and for images that include people of various races, ethnicities, and skin colors. Let us paint a bright future for ourselves.