The Biden administration is taking serious steps to rein in artificial intelligence.

Today, the White House issued an executive order that includes a sweeping 26-point mandate for firms working on AI.

The points range from mitigating the technology’s penchant for discrimination to requiring developers to “share their safety test results and other critical information with the U.S. government.”

The 26 points are organized into eight groups, all aimed at ensuring AI is used in a way that maximizes its purported benefits and minimizes its risks. The order also seeks to put the U.S. at the forefront of defining AI guardrails, promote continued AI research inside its borders, and protect Americans’ privacy.

With many large language models scraping data from all corners of our digital lives, user privacy has become a central concern amid the rise of artificial intelligence.

A president issues an executive order to direct resources within the U.S. government toward a specific objective. Though it carries the force of law, meaning the government must comply, an executive order can be overturned in court or reversed by a future president.

Notably, the order addresses key concerns around protecting workers from displacement caused by the technology. This could include bolstering federal support for displaced workers as well as preventing firms from undercompensating employees because of AI.

It also highlights the rise of “AI-enabled fraud and deception,” stating that the government will develop “content authentication and watermarking to clearly label AI-generated content.”

In May this year, the U.S. saw one of the higher-profile consequences of AI-generated content. The stock market briefly dipped after an image of billowing smoke next to the Pentagon began circulating on social media, accompanied by claims that “explosions rocked near the Pentagon building in the USA.”

The image was quickly determined to be AI-generated.

Biden and AI

This isn’t the first time that the U.S. government has sought to influence the discussion around artificial intelligence.

In July this year, the Biden administration announced that it had landed “voluntary commitments” from some of the biggest names in AI, including Meta, Microsoft, and OpenAI.

Like today’s executive order, those commitments revolved around minimizing any risks associated with the technology.

Another eight AI and AI-adjacent firms, including Nvidia and Adobe, joined the same safety initiative in the summer.

Edited by Stephen Graves
