What you need to know about bias in AI 

With the increasing popularity of AI, the issue of bias has become a significant concern. AI, after all, is designed to learn from data, and if the data it is trained on is biased, the AI will likely reflect that bias in its decisions.  

For example, when OpenAI's ChatGPT debuted, people quickly noticed that it seemed to prefer one presidential candidate over another. This raised questions about whether an AI could have a political preference, and if so, where that preference might come from.

Can a computer be biased? 

AI is not some kind of magical, all-knowing entity. At the end of the day, it’s just a computer program. It doesn’t have emotions or opinions, and it’s only as good as the data it’s trained on. As they say, garbage in, garbage out.  

Let’s take Jo, the Job Board Bot, as an example. Jo has been trained on the resumes of people hired for various positions, and based on that data it can tell a candidate which job their qualifications make them most likely to succeed in.

Now imagine Barbara launches Jo and asks, “What position am I the best fit for?” Even if Barbara has all the relevant experience, if 84% of the engineers in Jo’s training data are men, Jo might not suggest that Barbara would be a good fit for a computer engineer position. That’s not the outcome you’re looking for!
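Under the hood, something like the toy sketch below could be happening. Jo is fictional, so this is not anyone’s real hiring code; the data and the `match_score` function are invented for illustration, and the recommender is deliberately naive. But it shows how a skewed training set can outweigh a candidate’s actual qualifications:

```python
# Toy illustration only (hypothetical data, not a real hiring system): a naive
# "Jo-like" recommender scores a candidate for the engineer role by how often
# past engineer hires share each of the candidate's attributes. With an 84/16
# gender split in the historical hires, gender swamps qualifications.

# Hypothetical historical hires for the engineer role: 84% men, 16% women,
# all with the relevant degree.
past_engineer_hires = (
    [{"gender": "M", "has_relevant_degree": True}] * 84
    + [{"gender": "F", "has_relevant_degree": True}] * 16
)

def match_score(candidate, hires):
    """Average, over the candidate's attributes, of the fraction of past
    hires sharing that attribute -- a crude similarity score."""
    fractions = []
    for key, value in candidate.items():
        matching = sum(1 for hire in hires if hire.get(key) == value)
        fractions.append(matching / len(hires))
    return sum(fractions) / len(fractions)

barbara = {"gender": "F", "has_relevant_degree": True}  # fully qualified
robert = {"gender": "M", "has_relevant_degree": True}   # identical qualifications

print(f"Barbara: {match_score(barbara, past_engineer_hires):.2f}")  # 0.58
print(f"Robert:  {match_score(robert, past_engineer_hires):.2f}")   # 0.92
# Same qualifications, very different scores -- the skew in the historical
# data, not the candidate, drives the recommendation.
```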

If the bot is just a computer, who is biased here? 

Now, let’s revisit the issue of bias in ChatGPT’s political preferences. ChatGPT relies on numerous public data sources, including Wikipedia, which is written by humans, and unpaid humans at that! Unfortunately, if these humans have any inherent biases in their writing, ChatGPT is likely to inherit those same biases. Essentially, ChatGPT’s biases reflect the collective biases of the content creators it relies on.  

Update: OpenAI has since placed filters on outputs like those discussed above; ChatGPT now responds that it doesn’t have preferences.

Who is teaching and training the AI? 

If AI is trained on biased data, it will produce biased outcomes, which can create a feedback cycle; for example, as Jo essentially screens out female applicants, the gender disparity increases, making new female applicants look even less likely to succeed. How do we break the cycle?  
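Before answering, it helps to see how fast the cycle can compound. The numbers and the decision rule below are invented for illustration (a bot that recommends women only in proportion to their share of past hires); they are not drawn from any real system:

```python
# Back-of-the-envelope simulation of the feedback cycle (illustrative only).
# Each round, equally qualified men and women apply, but the bot recommends
# women only in proportion to their share of the historical hires. The new
# hires are then folded back into the training data for the next round.

def simulate_feedback_loop(rounds=5, applicants_per_round=100):
    women_hired_so_far = 160.0   # hypothetical history: 16% of 1,000 hires
    total_hired_so_far = 1000.0
    for round_number in range(rounds + 1):
        women_share = women_hired_so_far / total_hired_so_far
        print(f"Round {round_number}: women are {women_share:.1%} of engineer hires")
        # Half the applicants are women, but only a `women_share` fraction of
        # them clear the bot's screen; the men all clear it.
        new_women = (applicants_per_round / 2) * women_share
        new_men = applicants_per_round / 2
        women_hired_so_far += new_women
        total_hired_so_far += new_women + new_men

simulate_feedback_loop()
# The share of women drifts downward every round, even though equally
# qualified women keep applying -- the bias compounds instead of correcting.
```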

The common technique is called supervised training, which OpenAI does employ. Per the Wired article linked, “[Supervised training] …was used to enhance ChatGPT. It involves having humans judge the quality of the model’s answers to steer it towards providing responses more likely to be judged as high quality.” But the supervisors are humans too, so how do we keep them from introducing biases of their own? Oh boy!
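In spirit, that feedback step looks something like the sketch below. This is not OpenAI’s actual pipeline; the answers, the reviewers, and their 1-to-5 ratings are all invented. It simply shows why the reviewers’ judgment becomes the target the model is steered toward, and why a bias shared by every reviewer would become part of that target:

```python
from statistics import mean

# Minimal sketch of human-feedback supervision (invented data, not OpenAI's
# pipeline): several reviewers rate each candidate answer, and the system is
# steered toward whichever answer the humans rate highest.

candidate_answers = {
    "answer_a": "A cautious, even-handed summary of both candidates' platforms.",
    "answer_b": "A summary that subtly favors one candidate.",
}

# Hypothetical 1-5 ratings from three human reviewers per answer.
reviewer_ratings = {
    "answer_a": [4, 5, 4],
    "answer_b": [2, 3, 2],  # if the reviewers shared the bias, these could flip
}

def preferred_answer(ratings):
    """Return the answer id with the highest average human rating -- the
    signal a supervised/feedback step pushes the model toward."""
    return max(ratings, key=lambda answer_id: mean(ratings[answer_id]))

best = preferred_answer(reviewer_ratings)
print(f"Model is steered toward {best!r}: {candidate_answers[best]}")
```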

When implementing this technology in the enterprise, the stakes are significantly higher. Some AI products don’t even use your organization’s data. Others do, but they are not trained by your people. Instead, the AI provider has a team of its own, often in offshore centers, monitoring the AI’s decisions and providing the feedback that supervises its training.

It is unlikely, then, that the vendor team teaching the AI shares the same view of what is or isn’t appropriate as the people who will actually use it. Knowing what you now know about bias in AI, do you think it’s a good idea to have a team in a different part of the country, or in another country entirely, act as the sole influence on your AI?

A better approach 

At Gideon Taylor, we prioritize meeting the specific enterprise needs of the organizations we work with. Our digital assistant, Ida, uses a language model from Oracle, a company with extensive experience handling enterprise data. But we don’t stop there. We recognize that each customer’s needs and data are unique, so we train each customer’s bot independently rather than relying on a one-size-fits-all approach.

We also understand the importance of our customers influencing their own AI. By allowing your people to provide feedback to Ida, we ensure that the AI is constantly learning and adapting to better represent your organization, your language, and your culture. If you haven’t seen Ida, contact us below and set up a personalized demo.

Contact Us