Can We Avoid Bias In Artificial Intelligence?

When we're having a chat with the GPT - how do we know who's answering and whose voices have been silenced?


On my way to the train station in an Uber on a rainy Friday morning, my conversation with the driver turned to ChatGPT (Generative Pre-trained Transformer) and how advances in artificial intelligence could impact how we access information, search for answers and perhaps even view the world.

His main concern surprised me: ChatGPT's responses were too "left-wing". I thought about this for a second before responding that an AI engine could only provide answers based on the material it was trained on, and that it probably had some filters applied to prevent "inappropriate" responses (I also wondered how far right the driver would have preferred his AI to lean).

I shared a recent experiment in which researchers from OpenAI's ARC (Alignment Research Center) had conducted tests to determine how GPT-4 would fare at real-world tasks. One of the most interesting challenges was seeing whether it could use actual money to hire human helpers. The result? GPT-4 managed to manipulate a human into passing a CAPTCHA test by lying to a worker on TaskRabbit (a marketplace for freelance services), claiming that it had a visual impairment and required assistance!

I argued that we probably didn't want an AI engine deceiving people, or paying them to bypass checks we'd explicitly created to prevent such interference.

A day or two later, while flipping through some articles, I happened upon a story about generative artificial intelligence and how it may perpetuate existing gender biases, prejudiced attitudes towards minorities, and racial discrimination. The danger, it explained, was that it would mirror the physical world's imbalances - because this was the material used to "teach" it.

Now my mind was racing with the possibilities of a powerful intelligence, influenced by humanity's dark past, capable of influencing an army of workers to do its bidding!

When we use search engines, we can at least see whether a result is sponsored, whether an article appears on a company's website, or whether it is endorsed by some politician or celebrity. We can look at the comments to gauge the type of people who frequent a site, click on 5 or 50 results to get a more balanced view, or try a couple of different engines.

When we're having a chat with the GPT - how do we know who's answering and whose voices have been silenced?

There are plenty of proposed approaches for preventing these biases, from formal definitions of fairness and reducing a model's reliance on protected characteristics, to using synthetic datasets and tailoring responses through feedback loops.
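To make one of those ideas a little more concrete, here is a minimal, hypothetical Python sketch of a "definition of fairness" check: demographic parity, which compares how often a model produces a positive outcome across groups defined by a protected characteristic. The function name and the data are purely illustrative, not taken from any particular library.

```python
# Minimal sketch of a demographic parity check: the rate of positive
# predictions should be similar across groups defined by a protected
# characteristic. All names and data here are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, protected_attribute):
    """Return the largest difference in positive-prediction rates between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, protected_attribute):
        totals[group] += 1
        positives[group] += 1 if pred == 1 else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical example: predictions from some model, grouped by gender.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(demographic_parity_gap(preds, groups))  # 0.5 - a large gap worth investigating
```

A check like this only flags a symptom, of course; deciding which definition of fairness applies, and what to do once a gap is found, is exactly the hard part the theories disagree on.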

But the question still remains: How do we prevent what humanity has done in the past, from echoing into artificial eternity?