12 Jun 2023 ///

Another statement has been released by experts warning that Artificial Intelligence poses a threat to humanity

For the public, AI has become an all-consuming topic of conversation among friends, family, colleagues and strangers. We tend to be at the mercy of experts and industry players, with technology drip-fed to us as commercially viable products or enhancements to our lives. Since ChatGPT, OpenAI’s conversational AI model, was released to the public, a frenzy has arisen across the entire ideological and philosophical spectrum over AI’s rapid acceleration and its capacity to learn, with some stating that the field is roughly twenty years ahead of where it was expected to be. This has come alongside rapid shifts in the digital tools we use: Adobe launched a generative AI feature for Photoshop at a time when many industries are seeing massive lay-offs and a general tightening of the job market, Buzzfeed ‘quietly’ published fully AI-created articles, and reports suggest that around 4,000 people lost their jobs to ‘new technologies’ in the last month alone.

While fear-mongering may be the status quo in our society, it would be remiss of us not to err on the side of caution and weigh both the negative and the positive outcomes of Artificial Intelligence. It is here to stay, it is with us – and there’s that line that says “AI won’t take over the world, but people who know how to use it will.” What happens, though, when the very creators of the technology start to come forward with concerns?

On May 30th, 2023, the Center for AI Safety released an ominous statement, signed by a myriad of experts. The statement simply reads:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Signatories include Geoffrey Hinton, the ‘Godfather of Deep Learning’ and a prominent figure in artificial intelligence and machine learning, and Sam Altman, CEO of OpenAI, the developer of the various ChatGPT models. Hinton recently resigned from Google so that he could speak openly about the risks of AI’s accelerated advancement, stating, “I’m just a scientist who suddenly realized that these things are getting smarter than us.”

Image: DTS, The Internet, Fanette Guilloud

Image: DTS, MISC 3, Madeline Spanier

This comes after an open letter released in March 2023, also signed by a host of experts, urging a pause of at least six months on the training of AI systems more powerful than GPT-4 so that policy-making and safety precautions could catch up. As the letter puts it, “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

These concerns generally fall into three areas. The first is ethical and social: AI systems have the potential to reinforce existing biases, infringe on privacy rights, and automate away certain tasks, potentially deepening unemployment and inequality if not managed carefully. In an already unequal world, and with ‘algorithmic bias’ already widely reported, AI could be used to further oppress marginalized people in society. The second is security: the development of autonomous weapons powered by AI raises concerns that these systems could be used for unethical purposes, destabilizing global security and escalating armed conflicts; we are already facing the biggest threat of nuclear war since the 1980s due to the Russian invasion of Ukraine. Imagine an AI-initiated nuclear launch without any human consent. The third is ‘superintelligence’ – the emergence of an intelligence that literally supersedes our own. If such systems are not aligned with human values, or if their decision-making processes become too difficult to understand or control, the result could be unintended consequences and, at worst, full-scale control by AI overlords.

We are firmly living in the future, folks. Our sci-fi dream, or nightmare, is here, and as the public we have a responsibility too. There are a number of things we can do to advocate for responsible AI implementation: stay informed, educate ourselves about these tools and their positive and negative influences, sign the open statement and others like it, engage in discourse with our peers, and support ethical AI initiatives like the Future of Life Institute and the Center for AI Safety. As when the first Nokia 3310 hit the shelves, only this time with sentience at stake, we have to embrace adaptability and not let our futures be directed solely by corporate, Cartesian interests.

Feature Image by DTS, Joyce Miu, Vermillion

For more news, visit the Connect Everything Collective homepage www.ceconline.co.za
