
Fixing AI Discrimination

We hate discrimination, and machines discriminate just as humans do. Discrimination is a form of racism even when done by machines. Take a drive across many parts of Lagos and you will easily notice Nivea advertisement signposts promising Nigerians that a cream will make their skins “visibly fairer”. Apparently, dark skin is not good enough; it has to be fairer. That is a data point for the internet. If Nigerians appear to prefer whiter skin, Google takes note. That means beauty must be “white” in Nigeria.

Indeed, if you ask Google and its machines to judge a beauty contest in Nigeria, they may call it for girls with fairer skin, since their datasets show that many Nigerian women use creams to turn their dark skin white (very painful on health grounds). AI (artificial intelligence) uses data, and that data informs what it considers normal. Usually, the largest cohort in the dataset shapes its construct of normality.

This company promises to make dark skin fairer, and many Nigerian girls are believers.

If you build a chatbot and feed it 20,000 messages, with 19,000 of those messages racist and crude, the likelihood is that the bot will come to see racism as normal. That is why it is very easy to train any Twitter bot to be anything you want: saint, racist, etc. Just feed it the data you want and, over time, the bot treats whatever dominates that data as the new normal. Unless there are breakers in the design, there is nothing you can do about it if you really want a near-natural bot.
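To make that concrete, here is a minimal Python sketch (the message labels and the 19,000/1,000 split are taken from the example above, purely as an illustration) of why a naive bot drifts toward whatever dominates its training set: a model that only learns label frequencies will adopt the majority label as its default behavior.

```python
from collections import Counter

# Hypothetical training set: 19,000 toxic messages, 1,000 civil ones.
# The labels stand in for whatever behavior the bot is imitating.
training_labels = ["toxic"] * 19_000 + ["civil"] * 1_000

counts = Counter(training_labels)
total = sum(counts.values())

# A naive "model" that only learns base rates treats the most
# frequent label as its default, i.e. its notion of "normal".
prior = {label: n / total for label, n in counts.items()}
default_behavior = max(prior, key=prior.get)

print(prior)             # {'toxic': 0.95, 'civil': 0.05}
print(default_behavior)  # 'toxic': the skew becomes the norm
```

Real chatbots are far more complex than a frequency table, but the same base-rate pull is what the article's "breakers in the design" are meant to counter.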


The Greek philosophers have always maintained that Number is the universe. If the Number says that more people want to be fairer, and will pay for it, machines can deduce that Fairer is more beautiful. You may blame the machines that generate and generalize the outcome, but check what you are feeding them! In this age of AI, empires will be reshaped at scale.

That bots can be stupid does not make it right: developers have to find ways to mitigate that problem. Among many options, one easy way is to generate “balanced data”. For Africa, if we do not generate content, and instead allow Silicon Valley, Paris and London to feed the new species of AI with data only from the Western world, the AIs will see that data as the normal. In other words, if a bot sees that out of every 20,000 photos of girls only 100 are dark-skinned, it may not capture dark-skinned photos as normal. The system will default to white photos as the normal state. In some extreme cases, it may simply throw away the dark photos as totally non-human. Mitigating that problem means feeding, say, 11,000 white and 9,000 dark photos. With that balanced dataset, the AI will have a better equilibrium, as the sketch below illustrates.
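As a rough illustration of that rebalancing idea, here is a minimal Python sketch (the file names and the 20,000/100 and 11,000/9,000 splits are the hypothetical numbers from the paragraph above) that oversamples the under-represented group so both groups carry comparable weight:

```python
import random

random.seed(42)

# Hypothetical raw dataset mirroring the skew described above:
# 19,900 light-skinned photos, only 100 dark-skinned ones.
light = [f"light_{i}.jpg" for i in range(19_900)]
dark = [f"dark_{i}.jpg" for i in range(100)]

# Undersample the majority group and oversample the minority group
# (sampling with replacement) toward the 11,000 / 9,000 balance.
balanced = random.sample(light, 11_000) + random.choices(dark, k=9_000)
random.shuffle(balanced)

dark_share = sum(name.startswith("dark") for name in balanced) / len(balanced)
print(f"dark-skinned share: {dark_share:.0%}")  # ~45% instead of 0.5%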
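Note that oversampling only duplicates the same few photos, so it is a stopgap; the deeper fix the article argues for is generating genuinely new African data.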

That reminds me of a training I attended while in the industry, on IP protection. We were told to respond to email conversations via email instead of asking the person to talk things over. For example, suppose someone writes to you with a statement like “I saw that Intel used this design and has a patent on it; there is a way I can get around the patent”. You do not tell your subordinate to come and see you so you can explain. It is better to write, “Please, if the design is patented, leave it and explore other designs”. The problem with talking it over without documenting it is that if bad things happen and there is litigation, what will make it to court is the written evidence. That is what the AI searching the emails will be fed with.

To extend that analogy: Africa and the black race will have to generate their own datasets so that machines can use them as they build the new data economy. Even as we complain about the obvious AI discrimination, nothing will change without generating data. If you allow one cohort of people to do all the writing, talking and data generation, Google and the rest will think the world is simply about that cohort. That is why Amazon Alexa, a personal-assistant AI, struggles with my Nigerian accent: it does not see that version of English a lot, so it is abnormal in its own world. It is not necessarily discriminating against me; it is just using the datasets it has been fed to deconstruct my communication. Unfortunately, I am not sure they have any datasets from Nigeria.

Africa needs to create data to balance the game. Complaining about how machines dehumanize us will not fix the problem. It will only get worse unless we are ready to participate as technology creators and not just consumers who merely consume whatever is packaged for us.

Sure, that does not excuse the makers from ensuring decency rules the market, with circuit breakers to prevent situations where humans are classified as animals. No excuses for such failures!


---



1 thought on “Fixing AI Discrimination”

  1. Nice to see someone finally sees what I am trying to say to the AI community. Nigeria, and blacks in general, are underrepresented, and it’s definitely going to affect us in an unforeseen future. I’m an AI researcher and developer. If you’re reading this and you are interested in having our own open-source data for Nigerians, or blacks in general, please do hit me up here: https://www.linkedin.com/in/jeremiah-fadugba-5b3a5375/
    Datasets relating to both NLP and computer vision.
