Google is forging a deeper alliance with Anthropic, the startup behind ChatGPT competitor Claude AI, by providing its specialized computer chips to boost the startup's AI capabilities.
The partnership is fortified by a substantial financial infusion from Google. As Decrypt previously reported, Google's commitment featured a 10 percent stake acquired for $300 million, followed by additional funding that brought the total to a significant $500 million, with a further $1.5 billion in investments promised.
"Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way," said Thomas Kurian, CEO of Google Cloud, in an official press release. "This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud."
Anthropic will use Google Cloud's fifth-generation Tensor Processing Units (TPUs) to perform AI inference, the process by which a trained AI model makes predictions or decisions based on new input data.
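In concrete terms, inference is just the forward pass: running brand-new data through weights that were already learned during training, with no further updates to the model. The short sketch below, written with Google's JAX library, is a hypothetical illustration of that step; the tiny model and its weights are stand-ins, not anything Anthropic actually runs.

```python
# A minimal sketch of what "inference" means: applying a model whose weights
# are already trained to new input. The linear "model" here is hypothetical.
import jax
import jax.numpy as jnp

# Pretend these weights came out of a long, expensive training run.
trained_weights = {
    "w": jnp.array([[0.5, -1.2], [0.3, 0.8]]),
    "b": jnp.array([0.1, -0.4]),
}

@jax.jit  # compiled once, then dispatched to whatever accelerator is present (TPU, GPU, or CPU)
def predict(params, x):
    # Forward pass only: no gradients, no weight updates.
    return jax.nn.softmax(x @ params["w"] + params["b"])

new_input = jnp.array([1.0, 2.0])  # "new input data" the model has never seen
print(predict(trained_weights, new_input))
```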
Such strategic moves by tech leaders underline the fierce competition and high stakes in developing ever-more sophisticated artificial intelligence. The most notable partnership in the AI space remains the one between Microsoft and OpenAI, with $10 billion on the table.
But what do these technological developments portend for the AI chatbots and tools people use daily? It comes down to the fundamental differences between the computational workhorses of AI training and inference: GPUs and TPUs.
Graphics Processing Units (GPUs), long the backbone of AI computational tasks, are adept at handling multiple operations simultaneously. They are versatile and widely used, not just in gaming and graphics rendering but also in accelerating deep learning tasks.
In contrast, Tensor Processing Units (TPUs) are Google's brainchild, custom-designed to turbocharge machine learning workflows. TPUs streamline specific tasks, offering swifter training times and energy efficiencies, which are critical when processing the enormous datasets that LLMs like Anthropic's Claude require.
The distinction between these processors is stark: GPUs (like the ones used by OpenAI) offer a broad application scope, but TPUs focus performance on machine learning. This suggests that for startups like Anthropic, which rely on massive volumes of data to refine their models, Google's TPUs may provide a compelling advantage, potentially leading to quicker advancements and more nuanced AI interactions.
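To make the hardware angle concrete, the sketch below uses JAX, Google's own numerical library, to time the kind of large matrix multiplication that dominates LLM training and inference. It is an illustration only: the matrix sizes are arbitrary, and the same script runs unchanged on a CPU, GPU, or TPU depending on what is attached.

```python
# A hypothetical sketch of why accelerator choice matters: the same JAX code
# runs on CPU, GPU, or TPU, and large matrix multiplies are exactly the
# workload TPUs are built to chew through.
import time
import jax
import jax.numpy as jnp

print("Available devices:", jax.devices())  # e.g. a list of TPU devices on Cloud TPU

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(k1, (4096, 4096))
b = jax.random.normal(k2, (4096, 4096))

matmul = jax.jit(jnp.matmul)          # compiled for whichever device is present
matmul(a, b).block_until_ready()      # warm-up run to trigger compilation

start = time.perf_counter()
matmul(a, b).block_until_ready()      # time the actual computation
print(f"4096x4096 matmul took {time.perf_counter() - start:.4f}s")
```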
On the other hand, OpenAI's recent advancements, particularly GPT-4 Turbo, challenge any perceived lead by Anthropic. The brand-new Turbo model handles a 128K-token context window, a significant leap from the earlier 8K limit and a blow to Anthropic's prior edge with Claude's 100K-token capabilities.
However, the battle is not without its nuances. These powerful TPUs could help Anthropic develop a more powerful LLM faster. And the larger context window, while notable, is a double-edged sword: very long prompts tend to produce poorer results under current conditions.
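To see why the context window number matters in practice, the rough sketch below checks whether a prompt fits within a given token budget. The four-characters-per-token estimate and the reply buffer are illustrative assumptions, not vendor figures.

```python
# A rough, illustrative check of whether a prompt fits a model's context window.
# The 4-characters-per-token rule of thumb is a crude heuristic, not a real tokenizer.
def fits_in_context(prompt: str, context_limit_tokens: int, reserved_for_reply: int = 1024) -> bool:
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + reserved_for_reply <= context_limit_tokens

long_document = "word " * 200_000                       # roughly 250K estimated tokens
print(fits_in_context(long_document, 100_000))          # Claude-class window: False
print(fits_in_context(long_document, 128_000))          # GPT-4 Turbo-class window: False
print(fits_in_context("Summarize this memo.", 8_000))   # fits easily: True
```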
As the AI race heats up, Anthropic may now hold a golden ticket thanks to Google's hefty backing. But it will have to play its cards right, because OpenAI isn't resting on its laurels either; with Microsoft in its corner, it's on the fast track too.
Edited by Ryan Ozawa.