The New 1000 Kenyan Shillings Note: “Color ya Thao” and the Idealization of Light Skin Tone

Color ya Thao, the informal nickname for the new 1000 Kenyan Shillings note, symbolizes the light skin tone idealized by many in our culture. This led me to consider whether the color codes used in AI technology might perpetuate colorism biases, and what that would mean for society.


Colorism in the Media and its Nuanced Effects

As a graduate student in communication strategy, I have gained insight into the multifaceted impact of colorism in the media. Our professors emphasize the importance of creating content that appeals to diverse demographics while maintaining a global audience reach. This piqued my interest in exploring how AI shapes our notions of attractiveness and desirability.

Understanding Colorism: Discrimination Based on Skin Tone

Colorism refers to the discriminatory practice of treating individuals differently based on their skin tone. It often intersects with racism and permeates many aspects of society, from workplace dynamics to private interactions. Our culture's preference for lighter skin tones keeps colorism alive.

The Impact of Colorism Biases on AI: Unveiling Prejudices

Recent research shows that AI technology is not immune to inherent prejudices, including racism. For instance, Google's computer vision algorithms transitioned from the Fitzpatrick scale to the Monk Skin Tone (MST) scale for categorizing skin tones, a shift prompted by growing attention to "coded bias," the phenomenon of racism becoming embedded in technology. Troubling incidents, such as Google Photos misclassifying Black people as gorillas, highlight the urgent need to address bias. Automatic soap dispensers that fail to detect darker skin and computer-generated stereotyped images further illustrate the challenges of biased algorithms. Studies also indicate that AI applications such as skin lesion detection and autonomous-vehicle pedestrian recognition identify people of color less accurately than people with lighter skin tones (although ongoing research and updates are actively addressing these issues).
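As a rough illustration of how such disparities can be quantified, a minimal sketch might compare a model's error rate across skin-tone groups. The records below are hypothetical, not data from any of the studies mentioned above, and the MST group labels are illustrative assumptions:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate per skin-tone group.

    records: list of (group, true_label, predicted_label) tuples.
    Group names here loosely follow the 10-point Monk Skin Tone
    (MST) scale, but any categorical grouping works.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation records: (MST group, true label, prediction)
sample = [
    ("MST-2", "person", "person"),
    ("MST-2", "person", "person"),
    ("MST-9", "person", "person"),
    ("MST-9", "person", "not_person"),
]
print(error_rates_by_group(sample))  # → {'MST-2': 0.0, 'MST-9': 0.5}
```

A large gap between groups, as in this toy example, is exactly the kind of disparity the studies above describe.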

Experimenting with Midjourney: Uncovering Colorism Bias

To delve deeper into the effect of colorism on AI, I conducted an experiment using Midjourney and a diverse range of user-provided prompts. While the findings are currently under review, they indicate a significant need to eliminate colorism bias in AI systems.

Eliminating Colorism Bias in AI: A Call to Action

The presence of colorism biases within AI systems necessitates immediate attention and action. To foster a more equitable and inclusive society, we must undertake the following steps:

  1. Awareness and Education: Increase awareness among developers, researchers, and users about colorism and its impact on AI. Foster education and dialogue to enhance understanding of biases and their consequences.
  2. Diverse Data and Representation: Ensure the inclusion of diverse datasets during the development and training of AI algorithms. Representation from various ethnicities, skin tones, and cultures is crucial to minimizing biases.
  3. Algorithmic Audits and Testing: Regularly conduct audits and testing of AI algorithms to identify and address biases effectively. Implement comprehensive evaluation processes to ensure fairness and accuracy in AI systems.
  4. Collaboration and Ethical Guidelines: Encourage collaboration among researchers, technology companies, policymakers, and affected communities to establish and enforce ethical guidelines for AI development.
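The audit-and-test step above can be sketched as an automated fairness check: given per-group accuracy figures, flag a model whose spread across skin-tone groups exceeds a chosen threshold. The threshold, group names, and accuracy values below are illustrative assumptions, not a standard:

```python
def audit_accuracy_gap(per_group_accuracy, max_gap=0.05):
    """Flag a model when accuracy spread across skin-tone groups
    exceeds a chosen fairness threshold (here 5 percentage points).

    per_group_accuracy: dict mapping group name -> accuracy in [0, 1].
    Returns the observed gap and whether the model passes the audit.
    """
    gap = max(per_group_accuracy.values()) - min(per_group_accuracy.values())
    return {"gap": round(gap, 4), "passes": gap <= max_gap}

# Hypothetical audit input: accuracy per Monk Skin Tone group
result = audit_accuracy_gap({"MST-1": 0.96, "MST-5": 0.94, "MST-10": 0.88})
print(result)  # gap of 0.08 exceeds the 0.05 threshold, so the audit fails
```

Running such a check on every model release, rather than once, is what turns testing into the regular auditing the list calls for.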