Fresh on the heels of its Google Cloud Next conference in San Francisco, where it unveiled a host of generative AI projects and tools, Google announced on Monday the launch of the Digital Futures Project, an initiative to bring together diverse voices in AI development. Google also announced a $20 million fund to support the "responsible development of AI."
“Through this project, we’ll support researchers, organize convenings, and foster debate on public policy solutions,” Google Director of Product Impact Brigitte Gosselink wrote in a blog post.
Gosselink said that while AI has the potential to make life easier, it also raises questions about fairness, bias, adverse impacts on the workforce, misinformation, and security. Tackling these challenges, she said, will require cooperation among tech industry leaders, academics, and policymakers.
The first recipients of the $20 million fund, Gosselink said, include the Aspen Institute, Brookings Institution, Carnegie Endowment for International Peace, the Center for a New American Security, the Center for Strategic and International Studies, the Institute for Security and Technology, Leadership Conference Education Fund, MIT Work of the Future, R Street Institute and SeedAI.
Several of the largest tech companies, including Google, Microsoft, Amazon, and Meta, have entered what has been called an AI arms race to see who can develop the best, fastest, and cheapest AI tools. Google and Microsoft alone have spent billions on AI over the last decade, including Microsoft's investments in OpenAI and Google's Bard, Vertex AI, and Duet AI platforms.
While artificial intelligence goes back decades, the technology entered the mainstream with the public release of the generative AI tool ChatGPT. Generative AI tools respond to user prompts by creating content ranging from text to images and videos.
So swift was its spread that several high-profile tech leaders—including SpaceX CEO and OpenAI co-founder Elon Musk, Stability AI CEO Emad Mostaque, Apple co-founder Steve Wozniak, and 2020 presidential candidate Andrew Yang—signed an open letter calling for a pause in AI development.
Policymakers and watchdog groups have continued to sound the alarm about the potential dangers and misuse of generative AI technologies, including the Secretary General of the United Nations, the Center for Countering Digital Hate, and the UK-based Information Commissioner’s Office (ICO). Even the Vatican under Pope Francis has spoken about the emerging technology.
In May, artificial intelligence pioneer Geoffrey Hinton resigned from his position at Google so that he could freely express his concerns about how AI could negatively impact the world. In July, ChatGPT creator OpenAI, along with Google, Microsoft, Nvidia, Anthropic, Hugging Face, and Stability AI, met with the Biden Administration and committed to developing safe, secure, and transparent AI technology.
“Getting AI right will take more than any one company alone,” Gosselink concluded. “We hope the Digital Futures Project and this fund will support many others across academia and civil society to advance independent research on AI that helps this transformational technology benefit everyone.”
Google has not yet responded to Decrypt’s request for comment.