The Biden administration is taking serious steps to rein in artificial intelligence.
Today, the White House issued an executive order that includes a sweeping 26-point mandate for firms working on AI.
The points range from mitigating the technology’s penchant for discrimination to urging developers to “share their safety test results and other critical information with the U.S. government.”
The 26 points are organized into eight groups, all aimed at ensuring AI is used to maximize its purported benefits and minimize its risks. The order also seeks to put the U.S. at the forefront of defining AI guardrails and to promote continued research inside its borders, while protecting Americans' privacy.
With many large language models scraping data from all corners of our digital lives, user privacy has become a key sticking point amid the rise of artificial intelligence.
A president issues an executive order to direct resources within the U.S. government toward a specific objective. Though it technically carries the weight of law, meaning the government must follow it, an executive order can be overturned in court or reversed by future presidents.
Notably, the order addresses key concerns around protecting workers from displacement caused by the technology. This could include bolstering federal support for displaced workers as well as preventing firms from undercompensating employees due to AI.
It also highlights the rise of “AI-enabled fraud and deception,” stating that the government will develop “content authentication and watermarking to clearly label AI-generated content.”
In May this year, the U.S. government found itself at the center of one of the higher-profile consequences of AI-generated content. The stock market briefly dipped after an image of billowing smoke next to the White House began circulating on social media, accompanied by claims that “explosions rocked near the Pentagon building in the USA.”
The image was determined shortly afterward to have been created by AI.
Biden and AI
This isn’t the first time that the U.S. government has sought to influence the discussion around artificial intelligence.
In July this year, the Biden administration announced that it had landed “voluntary commitments” from some of the biggest names in AI, including Meta, Microsoft, and OpenAI.
Like today’s executive order, those commitments revolved around minimizing any risks associated with the technology.
Another eight AI and AI-adjacent firms, including Nvidia and Adobe, joined the same safety initiative over the summer.
Edited by Stephen Graves