Congress is preparing to take a significant step toward regulating AI at the national level, close on the heels of the European Parliament's vote last week to advance the EU's AI Act. A bipartisan bill was introduced yesterday by Reps. Ted Lieu (D-Calif.), Anna Eshoo (D-Calif.), and Ken Buck (R-Colo.), aiming to establish a federal commission on AI.

The proposed legislation, H.R. 4223, directs Congress and the White House to assemble a 20-member "blue-ribbon commission." The group, drawing representatives from government, industry, civil society, and the computer science field, would work on defining a comprehensive regulatory strategy.

Rep. Ted Lieu said his enthusiasm for the technology is tempered by some serious concerns.

“AI is doing incredible things for our society. But it could also do great harm if unregulated,” he tweeted while announcing the bill.

Rep. Ken Buck mirrored Lieu’s sentiments, acknowledging the profound potential of AI and its corresponding risks.

“Artificial Intelligence holds tremendous opportunity for individuals and our economy, but it also poses a great risk for our national security,” Buck tweeted. He also emphasized the importance of expert consultation before legislative action, and reiterated his commitment to working with his colleagues in Congress on this critical issue.

In a nod toward bipartisanship, the President would appoint eight commissioners, with the remaining 12 tapped collectively by party leaders from both the House and Senate.

The commission's remit is extensive. Over a span of two years, it is mandated to produce three reports for policymakers. The content of these reports should include recommendations on how to mitigate risks and potential harms posed by AI while safeguarding US technological innovation.

The proposal doesn't render Congress helpless while waiting for recommendations, either. While it encourages restraint on "overarching legislation" until the commission has had its say, it leaves room for interim congressional action in "discrete areas," especially matters concerning national security.

Meanwhile, across the Atlantic, the European Union (EU) is pressing ahead with its own AI legislation. The EU's draft regulation would establish a European Artificial Intelligence Board to guide national authorities and promote uniform application of the rules across member states.

OpenAI, a major player in the AI field, has been actively lobbying in the United States and other countries to advance its perspective on AI rule-making. The company asserts that AI's potential to disrupt fields from the arts and medicine to architecture underscores the need for such legislative efforts.

Despite the introduction of numerous bills to set privacy guardrails for AI tools and require companies to vet their algorithms for biases, most have not passed. But the tides may be turning. The recent surge in AI interest and the rising popularity of chatbots like OpenAI's ChatGPT and Google's Bard have revitalized legislative efforts.

The age-old question, "Who watches the watchmen?" is taking on new significance in the realm of AI regulation. Legislators shaping AI policy could inadvertently write their own biases into the nascent technology. So who ensures that these lawmakers remain impartial? It's a crucial challenge to address in establishing a fair and effective AI legislative framework.
