AI and subconscious bias

When ChatGPT (GPT-3.5) was trained on content freely available on the internet, it ended up inheriting the subconscious biases that permeate our world.

It has reached the point where, in many ways, AI has become a mirror that we can use to view ourselves.

In the USA, most, if not all, large successful employment agencies use some form of AI to make the first cut when shortlisting candidates for open positions.

In the past, you were encouraged to make your CV distinctive enough to stand out from the pile, but with AI screening in mind, that advice now runs the other way.

The model is also trained on which candidates ended up getting the job, readjusting its parameters so that it predicts more accurately the next time around.

Despite the fact that most AI systems have been expressly instructed not to prioritise candidates based on race or gender, our historical hiring practices have begun to bias the AI's choices.

For example, when recently asked to justify its choices, one AI explained that candidates who attended a “girls'” school or played on a “women's” team were statistically less likely to get the job. Conversely, men who played lacrosse or had the first name Chad were more likely to rise to the top of the pile!

The system was already rigged in favour of white males (preferably named Chad); the AI detected this and simply leaned into it to increase its hit rate.
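
This dynamic is easy to reproduce in a toy experiment. The sketch below is purely illustrative, not any real screening system: it uses synthetic data and scikit-learn, and the feature names (played_lacrosse, womens_team) are invented for the example. Gender is never given to the model, yet the bias comes through the proxies.

```python
# Purely illustrative sketch: a classifier trained on biased hiring
# outcomes learns proxy features even when gender itself is excluded.
# All data is synthetic; feature names are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden attribute the model is never shown.
is_male = rng.random(n) < 0.5

# Proxy features that correlate with the hidden attribute.
played_lacrosse = (rng.random(n) < np.where(is_male, 0.30, 0.02)).astype(float)
womens_team = (rng.random(n) < np.where(is_male, 0.01, 0.25)).astype(float)
years_experience = rng.normal(5, 2, n)

# Historical labels reflect a biased process: men were hired more often.
p_hired = np.clip(0.2 + 0.3 * is_male + 0.02 * years_experience, 0, 1)
hired = rng.random(n) < p_hired

# Gender is NOT a feature -- only the proxies and experience are.
X = np.column_stack([played_lacrosse, womens_team, years_experience])
model = LogisticRegression().fit(X, hired)

for name, coef in zip(["played_lacrosse", "womens_team", "years_experience"],
                      model.coef_[0]):
    print(f"{name:16s} {coef:+.3f}")
# In runs like this, played_lacrosse gets a positive weight and
# womens_team a negative one: the bias survives via the proxies.
```

Removing the protected attribute does not remove the bias; the historical labels still encode it, and the model simply finds the nearest available proxy.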

It has become the great multiplier.

Taking the best of humanity and the worst in us, and amplifying both.

Perhaps instead of simply criticizing the AI for being racist or sexist, we need to look at ourselves. Clearly, it's holding a mirror up to us, and if we don't like what we see, we need to strive to be better versions of ourselves.


At this point, it is worth pausing on the fact that AI is only as good as its training data.

Over the past decade, we have seen amazing strides in facial recognition. What soon became evident was that the millions of images fed into pre-training were largely of white males.
This came to a head when black men found that automatic faucets in US airports refused to recognize them and, in effect, opened for white males only.

Black travellers soon found that they needed the help of a white guy just to wash their hands. It's so sad that it's funny.

On a more serious note, facial recognition and human detection are also used by self-driving cars. Collision avoidance systems need to accurately assess the danger to any humans in the vehicle's path. Humans, these systems learned, come in all shapes and sizes, but the lack of diversity in the training data meant a higher probability that a black or brown woman would not be perceived as an obstacle to be protected at the highest priority.
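
A toy experiment makes the training-data point concrete. The sketch below is again synthetic data and scikit-learn, a stand-in rather than a model of any real perception system: a detector trained almost entirely on one group misses the underrepresented group's positives far more often.

```python
# Purely illustrative sketch: a detector trained on imbalanced data
# performs worse on the underrepresented group. Synthetic data only;
# this is not modelled on any real perception system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample_group(n, axis):
    """Positives and negatives for one group, separated along one axis."""
    mean_pos = np.zeros(2)
    mean_pos[axis] = 2.0
    X = np.vstack([
        rng.normal(mean_pos, 1.5, (n, 2)),   # positives ("person present")
        rng.normal(-mean_pos, 1.5, (n, 2)),  # negatives
    ])
    y = np.array([1] * n + [0] * n)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = sample_group(5000, axis=0)
Xb, yb = sample_group(100, axis=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate both groups on fresh, equally sized test sets.
for name, axis in [("group A", 0), ("group B", 1)]:
    Xt, yt = sample_group(2000, axis)
    miss_rate = 1 - model.score(Xt[yt == 1], yt[yt == 1])  # false negatives
    print(f"{name}: miss rate {miss_rate:.1%}")
# Group B's positives are missed far more often: fitting the dominant
# group suppresses exactly the feature the minority group needs.
```

The failure is structural, not malicious: minimising average error over an unbalanced dataset means the majority group's patterns win whenever the two conflict.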

These cases are just the tip of the proverbial iceberg in a staggering list of unresolved moral and ethical dilemmas.

We as humans don’t have a universal consensus on good and evil. Our biases and inequalities define us as a species. We have not reached agreement on so many issues ourselves, yet we expect a machine that learns from us to make smarter, more just decisions.

OpenAI was wrong to jump the gun and go public with this. The wave of companies playing catch-up has made safeguards a lower priority than getting their half-baked systems out there.

In the midst of it all, DARPA research is in the advanced stages of launching battle assistants: machines that will accompany ground troops and someday, perhaps, replace them altogether. Lethal autonomous systems are the gold standard. No human needs to feel PTSD or remorse for collateral civilian casualties when it’s not our finger on the trigger.

Fixed lethal systems are already in place, and soon we will deploy fighter jets that are completely autonomous: machines fed broad mission parameters and expected to identify and eliminate targets of opportunity on their own.

Dude. What are we thinking?
For real, what are we expecting?

Mohammed Parak
March 2023

