AI Bias and Ethics
What’s Hot in Tech – A VicICT4Women event
Topic: Intelligent Systems: Automation, AI and ML
Digital Innovation Futures 2023 Opening Cyber Debate
Topic: The cyber risks of using AI tools like ChatGPT outweigh the benefits
Intelligent Systems: Automation, AI and ML
What’s Hot in Tech – A VicICT4Women Event
Listening, learning, and participating in this event was a lot of fun. While the core of the event centred on intelligent systems, several other themes ran through the discussions. I would like to explore one of them a little further: bias.
Bias comes in many forms, and the participants in the event were all over this topic, with views on what it is, how it impacts our lives, and what to do about it, including the age-old debate of “best person for the job” versus quotas.
“We do not think and talk about what we see; we see what we are able to think and talk about” – Edgar Schein
How does bias affect artificial intelligence? It seems obvious: if your initial dataset is biased, then your results will be biased, and will stay biased forever. Easy.
Let’s work through the logic of that statement, using ChatGPT as our example, and consider what ChatGPT might do about it.
If the initial dataset is the internet (whatever that means) and there is an underlying bias on the internet toward, say, white, middle-aged men living in the USA, our outputs will carry the same bias. If those outputs find their way back onto the internet, they reinforce the bias, and we head into a downward spiral.
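That downward spiral can be sketched with a toy simulation. To be clear, this is not how ChatGPT actually trains; the two-viewpoint corpus, the amplification factor, and the numbers are all invented for illustration. A "model" that slightly over-represents whichever viewpoint is already the majority writes its outputs back into its own training corpus, and the minority viewpoint shrinks with each generation:

```python
import random

def train_and_generate(corpus, n_outputs, amplification=1.5):
    # Toy "model": samples viewpoints from the corpus, slightly
    # over-representing whichever viewpoint is already the majority.
    share_a = corpus.count("A") / len(corpus)
    if share_a >= 0.5:
        p_a = min(1.0, share_a * amplification)
    else:
        p_a = share_a / amplification
    return ["A" if random.random() < p_a else "B" for _ in range(n_outputs)]

random.seed(0)
corpus = ["A"] * 600 + ["B"] * 400   # start at 60% viewpoint A, 40% viewpoint B
for generation in range(5):
    corpus += train_and_generate(corpus, 500)  # outputs leak back into the data
    print(f"generation {generation}: share of A = {corpus.count('A') / len(corpus):.3f}")
```

Run it and the share of viewpoint A climbs every generation: once the majority view is amplified even a little, the minority view is crowded out of the data entirely.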
Hmm, is ChatGPT that dumb?
Maybe it is… today. Most of us can read a paragraph on a web page, in an email, or in a CV and quickly determine whether ChatGPT was involved. There are already a substantial number of applications that claim to detect ChatGPT-generated text. Surely ChatGPT could incorporate similar detection into its own pipeline.
ChatGPT would become redundant very quickly if it relied too heavily on its previous outputs as the basis for future inputs.
Has ChatGPT reached the peak of its powers? I don’t think any of us believe that.
Is it too far-fetched to think that one day soon ChatGPT will be able to recognise bias in its inputs and provide less biased outputs? Or at least provide outputs that present alternative viewpoints to the consumer?
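One simple mitigation along these lines, supposing a system could tag which group each training example represents, is to rebalance the data before training so no group dominates. This is a minimal sketch under that assumption; the `rebalance` helper, the `region` field, and the 70/30 split are all hypothetical:

```python
from collections import Counter
import random

def rebalance(examples, key, seed=0):
    # Toy mitigation: oversample under-represented groups so each
    # group contributes equally to the training mix.
    rng = random.Random(seed)
    groups = {}
    for ex in examples:
        groups.setdefault(key(ex), []).append(ex)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Hypothetical corpus: 70% of examples from one region, 30% from everywhere else.
data = [{"text": "...", "region": "US"}] * 70 + [{"text": "...", "region": "other"}] * 30
balanced = rebalance(data, key=lambda ex: ex["region"])
print(Counter(ex["region"] for ex in balanced))
```

Oversampling is the crudest of many techniques here, and it only works if you can identify the bias in the first place, which is exactly the hard part.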
I have my fingers crossed that ChatGPT will at least increase awareness of bias, and with some encouragement might even help reduce it.
But I’m a glass half full kind of person.
Ethics and AI
DIF2023 Opening Cyber Debate (Digital Innovation Futures)
Ethics in AI is a hot topic at the moment. How can we make sure AI is ethical, and ensure regulation keeps up in a rapidly changing technological environment?
I recently attended the DIF2023 Opening Cyber Debate on the topic “The cyber risks of using AI tools like ChatGPT outweigh the benefits”, and it was quite informative on AI and ethics.
The ethics of AI play into the exploitation of people in a variety of ways. AI has the potential to power social engineering attacks on vulnerable people. Workers who screen AI training data to keep explicit and abusive material out of datasets are exposed to horrific content. And the pace at which AI is evolving through market leaders’ latest projects means legislation and government bodies are a step behind. Some government agencies are being advised by the very companies working in this space on what they think is needed; this is quite a subjective view, and many push their own interests and agendas over what should be prioritised.
Having unbiased advisors to government, through independent sources such as education programs, researchers, and universities, would help reduce bias in legislation intended to protect society against the potential harms of AI.
The ethics debate around AI comes down to one thing: people. If people use AI for good, it should have minimal negative impact on society. Unfortunately, as with most things, people will take advantage of a situation if it benefits them, whether illegally, or legally but immorally. Therefore, we need appropriate safeguards in place.