While unveiling a new 39-member global advisory body to oversee AI governance, UN Secretary-General António Guterres waxed poetic about our emergent robot sidekick.
“In our challenging times, AI could power extraordinary progress for humanity,” he said. “From predicting and addressing crises, to rolling out public health services and education services, AI could scale up and amplify the work of governments, civil society, and the United Nations across the board.”
According to Guterres, AI has the potential to advance climate action and help achieve the Sustainable Development Goals (SDGs)—a set of interconnected objectives adopted by the UN in 2015 and designed to serve as a "shared blueprint for peace and prosperity for people and the planet."
He added that the technology could also help "leapfrog outdated technologies" and better serve populations "where needs are bigger," adding that the world is in "urgent need of this enabler and accelerator."
Guterres hopes the new governance body can help harness all of these purported benefits while minimizing many of the risks.
Guterres said that AI has to be “harnessed responsibly” and be accessible to all people in the world, “including the developing countries that need them most.”
AI risks abound
Much like blockchain's most vocal proponents, Guterres is also acutely aware of centralization risks.
“AI expertise is concentrated in a handful of companies and countries,” warned the UN Secretary-General, adding that this could result in deeper global inequalities and “turn digital divides into chasms.”
The UN chief further pointed out concerns over misinformation and disinformation, surveillance and invasion of privacy, fraud, and other violations of human rights.
One prominent development that has stoked concerns about AI risks is ChatGPT, the chatbot from tech company OpenAI that rolled out last year.
The tool's ease of use has many worried it could not only replace humans in some sectors, but also open up new attack vectors such as the creation of fake press releases, phishing emails, or other social-engineering-based attacks.
Another area of concern is the use of AI-generated deepfakes to spread hate and misinformation on social media, an issue the UN also pointed to in its report in June.
“Without entering into a host of doomsday scenarios, it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion, and threaten democracy itself,” said Guterres.
The new AI governance group's primary objective is thus to deliver, by the end of the year, preliminary recommendations for avoiding those doomsday scenarios and seizing more of those leapfrog moments.
These recommendations will be finalized before the UN Summit of the Future next September.
The UN isn't alone in the task of reining in the dangers of artificial intelligence, either.
To prevent harmful consequences of the improper use of ChatGPT and to improve the safety and ethics of AI models, OpenAI last month revealed it would create a “red team” of experts across various fields, including cognitive and computer science, economics, healthcare, and cybersecurity.
Edited by Liam Kelly.