UK Prime Minister Rishi Sunak has announced a bold plan for the country to become a global epicenter of the artificial intelligence (AI) industry. As part of this blueprint, the UK government plans to launch major AI scholarships and invest over $100 million in an AI Taskforce.

A mix of research, education, and international collaboration is at the heart of this vision, which could mark a seminal moment for the UK's tech landscape.

Sunak said he recognized the concerns that many hold regarding AI technology. After all, it’s not every day that a group of recognized scientists says the world may come to an end. To address this, Sunak emphasized his commitment to pioneering safety research within the UK's borders, balancing his excitement about the future with a recognition of the risks that it may bring.

“I get people are worried about AI. That’s why we are going to do cutting edge safety research here in the UK,” Sunak said in the announcement. The idea is “to ensure that wherever and whenever AI is put to use in the UK it is done safely and responsibly.”

One aspect of the government's plan that has sparked interest is its collaboration with AI behemoths: Google's DeepMind (Bard, PaLM-2), OpenAI (ChatGPT, GPT-4), and Anthropic (Claude AI, Constitutional AI). These companies have committed to giving the UK government early or priority access to their AI models for research and safety purposes.

The nature of this collaboration, however, has raised some eyebrows.

The potential dangers of excessive government supervision of AI models are not hard to see. For one, biases inherent in these models could become institutionalized. Moreover, the power dynamics between AI companies and governments are a cause for concern.

A delicate balance of power and a keen eye for bias are required to prevent the development of models that are politically aligned rather than merely politically correct.

“More complex questions, such as those that are political or philosophical in nature, can have several answers,” explains the tech-oriented outlet TechTarget. “AI defaults to its training answer, causing bias since there may be other answers.”

The UK government's investment will focus on two AI fellowships dedicated to solving pressing problems in crop supply and healthcare through technology.

"These new fellowships, along with all our work on AI so far, will help build a brighter future for you and your families," said Sunak, emphasizing the human-centered approach to AI in the UK.

Prime Minister Sunak’s commitment to AI safety was echoed in his speech at the London Tech Week conference, where he assured the audience that the UK would become the geographical hub of global AI safety regulation.

An AI Safety Summit is already in the works, he said, an event he likened to the COP climate conferences.

However, the government's newfound enthusiasm for AI safety marks a noticeable shift from its previous stance.

Until recently, the government's white paper reflected a pro-innovation approach to AI regulation that played down safety concerns. This sudden change of tune, following meetings with AI industry bigwigs, has raised questions about the government's susceptibility to industry influence.

Parallel to these developments, OpenAI CEO Sam Altman is on an international tour aimed at strengthening ties with regulators around the globe. The tour has sparked debate about his interest in leading regulatory efforts worldwide, even as he threatens to leave jurisdictions that want to regulate the industry.

A diverse discourse involving independent researchers, civil society groups, and vulnerable communities is essential for a well-rounded perspective on AI safety. After all, while these "AI Titans" hold the keys to the future, it's everyday people who will live in it.
