This Monday marked a new high—or arguably low—in demonstrating the power of artificial intelligence (AI). A picture depicting a fabricated explosion at the Pentagon, quickly flagged as AI-generated, spread like wildfire across social media. It also appears to have spurred a momentary sell-off in the U.S. stock market.
The alarming image, portraying smoke billowing from the iconic building, was disseminated by numerous accounts, including a Russian state-owned media channel.
Interestingly, reports of the false Pentagon explosion also made their way onto non-official Twitter accounts with blue verification checkmarks, further amplifying the confusion and the impact of the falsehood—highlighting both the importance of rigorous source verification and the unsurprising consequences of Elon Musk's new criteria for account verification.
As the photo went viral, U.S. stock indexes took a minor hit, although markets quickly recovered after the photo was exposed as a hoax. Bitcoin, the leading cryptocurrency, also experienced a brief "flash crash" following the spread of the fake news, slipping to $26,500. It has since been slowly but surely recovering and is currently trading at $26,882, according to CoinGecko.
Image: Yahoo! Finance
The hoax's impact was significant enough to prompt the Arlington County Fire Department to intervene. "There is NO explosion or incident occurring at or near the Pentagon reservation," they tweeted, "and there is no immediate danger or hazards to the public."
This type of online deceit has raised serious concerns among critics of unmitigated AI development. Many experts in the field have warned that advanced AI systems could become tools for malevolent actors worldwide, spreading misinformation and causing online pandemonium.
This isn't the first time such trickery has emerged. Viral AI-generated images have previously deceived the public, such as images of Pope Francis sporting a Balenciaga jacket, a fake arrest of former President Donald Trump, and deepfakes of celebrities like Elon Musk and Sam Bankman-Fried promoting crypto scams.
Notable personalities have also sounded alarms about the spread of disinformation.
Hundreds of tech experts have already called for a six-month halt on advanced AI development until proper safety guidelines are established. Even Dr. Geoffrey Hinton, widely known as the "Godfather of AI," resigned from his role at Google to voice his concerns about potential AI risks without damaging his former employer's reputation.
Episodes of misinformation like this one feed into the ongoing debate surrounding the need for a regulatory and ethical framework for AI. As AI becomes an increasingly potent tool in the hands of agents of disinformation, the consequences can be chaotic.
Based on today's events, one question stands out: What if an AI were the agent using the power of social media to spread chaos and manipulate the financial markets? We kind of saw it coming.