OpenAI is on the cusp of releasing two groundbreaking models that could redefine the landscape of machine learning. Codenamed Strawberry and Orion, these projects aim to push AI capabilities beyond current limits—particularly in reasoning, problem-solving, and language processing, taking us one step closer to artificial general intelligence (AGI).
Strawberry, previously known as Q* or Q-Star, appears to be more than just a chatbot; it is aimed at delivering a significant leap in AI reasoning abilities. Sources familiar with the project have told outlets including Reuters and The Information that it has demonstrated remarkable proficiency in solving complex mathematical problems and enhancing logical analysis.
Orion, meanwhile, is positioned as OpenAI’s next flagship language model, potentially succeeding GPT-4. It's designed to outperform its predecessor in language understanding and generation, with the added ability to handle multimodal inputs, including text, images, and videos.
Both projects have garnered attention from U.S. national security officials, underscoring their potential strategic importance. This development comes as OpenAI continues to raise capital despite substantial revenue growth, likely due to the high costs associated with developing and training these advanced models.
Strawberry and reasoning power
Despite an unending flurry of online speculation, OpenAI has said nothing official about Project Strawberry. Purported leaks, however, consistently point to sophisticated reasoning capabilities.
Unlike traditional models that produce rapid responses, Strawberry is said to employ what researchers call "System 2 thinking": it takes time to deliberate and reason through a problem rather than immediately predicting the next tokens of a response. This approach has reportedly yielded impressive results, with the model scoring over 90 percent on the MATH benchmark, a collection of advanced mathematical problems, according to Reuters.
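OpenAI has not described how Strawberry's deliberation actually works, but a rough analogue developers experiment with today is multi-step prompting: ask a model to work through a problem step by step, then ask it to audit its own answer before responding. The sketch below illustrates that pattern with the standard OpenAI Python client; the model name and prompts are placeholders for illustration, not anything tied to Strawberry itself.

```python
# Illustrative two-pass "deliberate, then verify" loop using the OpenAI Python client.
# This is NOT Strawberry's actual mechanism -- just a common approximation of
# slower, System 2-style reasoning built on today's public API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # placeholder; any available chat model works

def deliberate(question: str) -> str:
    # Pass 1: ask the model to reason step by step before committing to an answer.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Reason step by step, then state a final answer."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # Pass 2: ask the model to check its own reasoning and correct any mistakes.
    review = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Check the reasoning below for errors and give a corrected final answer."},
            {"role": "user", "content": f"Question: {question}\n\nDraft reasoning:\n{draft}"},
        ],
    ).choices[0].message.content
    return review

print(deliberate("A train travels 120 km in 1.5 hours. What is its average speed?"))
```

The trade-off is visible even in this toy version: two model calls per question means better-checked answers at roughly double the latency and cost.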
Another key innovation anticipated from Strawberry is its ability to generate high-quality synthetic training data. This addresses a critical challenge across AI development: the scarcity of diverse, high-quality data for training models. If the reports hold up, Strawberry would not only enhance its own capabilities but also pave the way for more advanced models like Orion.
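One way to picture that role is as a generator of training examples for the next model. The sketch below, which assumes a generic chat-completion endpoint and placeholder prompts rather than anything OpenAI has disclosed, shows the basic shape of such a pipeline: generate candidate problem-solution pairs, then keep only those that pass a crude quality check.

```python
# A minimal sketch of synthetic-data generation for training a downstream model.
# The prompts, model name, and filtering rule are illustrative assumptions,
# not details of OpenAI's actual pipeline.
import json
from openai import OpenAI

client = OpenAI()
GENERATOR_MODEL = "gpt-4o"  # placeholder for a strong "teacher" model

def generate_examples(topic: str, n: int = 5) -> list[dict]:
    examples = []
    for _ in range(n):
        text = client.chat.completions.create(
            model=GENERATOR_MODEL,
            messages=[{
                "role": "user",
                "content": f"Write one {topic} problem and its worked solution as JSON "
                           'with keys "problem" and "solution".',
            }],
            response_format={"type": "json_object"},
        ).choices[0].message.content
        example = json.loads(text)
        # Crude quality filter: keep only examples with a non-trivial solution.
        if len(example.get("solution", "")) > 50:
            examples.append(example)
    return examples

# Write the surviving examples to a JSONL file for later fine-tuning.
with open("synthetic_math.jsonl", "w") as f:
    for ex in generate_examples("high-school algebra"):
        f.write(json.dumps(ex) + "\n")
```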
Considering the huge amounts of data OpenAI has already scraped, and the growing number of users unwilling to hand their data to AI trainers, this capability could play an important role in the quality of future AI models, much as some users today train their own custom models on images generated by Stable Diffusion.
However, Strawberry's deliberate processing approach may present challenges for real-time applications. OpenAI researchers are reportedly working on "distilling" Strawberry's capabilities into smaller, cheaper models, trading some quality so that consumers can run massive volumes of inference at low computing cost.
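In practice, distillation usually means training a smaller "student" model to imitate a larger "teacher," either on the teacher's outputs or on its probability distributions. The sketch below shows the classic soft-label formulation with a temperature-scaled KL divergence loss in PyTorch; it is a generic illustration of the technique, not OpenAI's recipe.

```python
# Generic knowledge-distillation loss (soft-label style), not OpenAI's method.
# The student learns to match the teacher's softened output distribution
# while still fitting the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for real model outputs.
student_logits = torch.randn(8, 32000)   # batch of 8, vocabulary of 32k tokens
teacher_logits = torch.randn(8, 32000)
labels = torch.randint(0, 32000, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```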
Even so, integrating Strawberry's technology into consumer-facing products like ChatGPT could mark a significant upgrade. It's possible, however, that OpenAI will use Strawberry as a foundation for training new models rather than making it widely available to consumers.
Project Orion or GPT Next
Project Orion stands as OpenAI's ambitious successor to GPT-4o, aiming to set new standards in language AI. A recent presentation by Tadao Nagasaki, CEO of OpenAI Japan, suggests that it could be named GPT Next. Leveraging advancements from Project Strawberry, Orion is designed to excel in natural language processing while expanding into multimodal capabilities.
And OpenAI claims the leap will not be incremental.
“The upcoming AI model, likely to be called ‘GPT Next,’ will evolve nearly 100 times more than its predecessors, judging by past performance,” Nagasaki said at the KDDI SUMMIT 2024 in Japan, as reported by IT Media. “Unlike traditional software, AI technology grows exponentially. Therefore, we want to support the creation of a world where AI is integrated as soon as possible.”
'GPT Next’ to Achieve 3 OOMs Boost. Great insights from the #KDDISummit. Tadao Nagasaki of @OpenAI Japan unveiled plans for ‘GPT Next,’ promising an Orders of Magnitude (OOMs) leap. ⚡️ This AI model aims for 100x more computational volume than GPT-4, using similar resources but… pic.twitter.com/fMopHeW5ww
— Shaun Ralston (@shaunralston) September 3, 2024
Training Orion on data produced by Strawberry would give OpenAI a technical advantage. However, the technique must be used with caution: researchers have already shown that models degrade when trained on too much synthetic data, so finding the sweet spot at which Strawberry makes Orion more powerful without hurting its accuracy seems key for OpenAI to remain competitive.
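There is no public figure for how much synthetic data is too much, but the usual mitigation is simply to cap the synthetic share of the training mix and keep anchoring the model to real data. The snippet below sketches that idea; the 30 percent ratio is a hypothetical value for illustration, not a setting reported for Orion.

```python
# Illustrative data mixing: cap synthetic examples at a fixed fraction of the
# training set to reduce the risk of model collapse. The 0.3 ratio is a
# hypothetical value, not a figure reported for Orion.
import random

def build_training_mix(real_examples, synthetic_examples, synthetic_fraction=0.3):
    # Never let synthetic data exceed the chosen share of the final mix.
    max_synthetic = int(len(real_examples) * synthetic_fraction / (1 - synthetic_fraction))
    synthetic_sample = random.sample(
        synthetic_examples, min(max_synthetic, len(synthetic_examples))
    )
    mix = real_examples + synthetic_sample
    random.shuffle(mix)
    return mix

real = [f"real_{i}" for i in range(700)]
synthetic = [f"synthetic_{i}" for i in range(1000)]
mix = build_training_mix(real, synthetic)
print(len(mix), sum(x.startswith("synthetic") for x in mix) / len(mix))
```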
Orion's native multimodal capabilities will also represent a significant advancement. The model is being developed to seamlessly integrate text, image, and even video inputs and outputs, as reported by The Information, opening up new possibilities for ChatGPT users and putting the company in direct competition against Google’s Gemini—which can process up to 2 hours of video input.
This is the model that users will interact with when they use ChatGPT or OpenAI’s API Playground.
The development of Orion aligns with OpenAI's broader strategy to maintain its competitive edge in an increasingly crowded AI landscape. With open-source models like Meta's Llama 3.1 and state-of-the-art proprietary models like Claude and Gemini making rapid progress, Orion is essentially OpenAI's bid to stay ahead of the curve.