In brief
- MathVista, built with more than 6,000 annotated data points from Sahara AI, tests AI models on multimodal math reasoning.
- GPT-4V scored 49.9%, the highest result among 12 models tested, but still 10.4 percentage points below human performance.
- Researchers say progress toward AGI may depend less on model size than on better training and evaluation data.
Artificial general intelligence, or AGI, is often described as a system that can perform across many domains the way humans do. Results released this week from the MathVista benchmark show current models still fall short of that goal.
Researchers from Microsoft Research, Sahara AI, and Emory University tested a capability central to general intelligence: mathematical reasoning grounded in visual information such as charts, graphs, and diagrams.
Of the 12 foundation models tested, including ChatGPT, Gemini, and Claude, GPT-4 Vision (GPT-4V) scored highest at 49.9%. Human participants averaged 60.3%, highlighting a gap between current AI systems and the broader reasoning ability often associated with AGI.
“We want the machine to do things that a normal, average person can do for their daily tasks,” Hao Cheng, a principal researcher at Microsoft Research, told Decrypt. “That’s basically what everybody is pursuing for AGI.”
By putting problems into images, diagrams, and plots, the project tests whether models can accurately interpret visual information and solve multi-step mathematical and logical problems—skills that go beyond pattern-matching on text alone.
Models still struggle with those tasks, and measuring that limitation is difficult.
When Cheng’s team reviewed existing evaluation datasets, they found that many included problems that did not require visual reasoning. Models often reached correct answers by relying solely on text.
“Which is not ideal,” Cheng said.
MathVista, available on GitHub and Hugging Face, launched in October 2023. Since then, it has been downloaded more than 275,000 times, including more than 13,000 downloads in the past month, according to Microsoft Research.
Creating the dataset required more than standard data labeling, however. Microsoft Research needed annotators who could work through problems across arithmetic, algebra, geometry, and statistics, while distinguishing deeper mathematical reasoning, such as interpreting graphs or solving equations, from simpler tasks like counting objects or reading numbers.
After a pilot phase, Microsoft selected Sahara AI to support the effort. The company provided trained annotators, custom workflows, and multi-stage quality checks to produce more than 6,000 multimodal examples used in the benchmark.
Without reliable benchmarks, measuring progress toward broader machine intelligence becomes difficult, according to Sean Ren, CEO of Sahara AI and an associate professor of computer science at USC.
“There’s this nuance of data contamination, where once we start using this dataset to test, those results get absorbed into the next version,” Ren told Decrypt. “So you don’t really know if they are solving just a data set, or they have the capability.”
If benchmark answers appear in a model’s training data, high scores can reflect memorization rather than reasoning. That makes it harder to determine whether AI systems are actually improving.
Researchers also point to limits in training data. Much of the publicly available internet has already been incorporated into model datasets.
“You definitely need to have some way to inject some of the new knowledge into this process,” Cheng said. “I think this kind of thing has to come from high-quality data so that we can actually break this knowledge boundary.”
One proposed path involves simulated environments where models can interact, learn from experience, and improve through feedback.
“You create a twin world or a mirror of the real world inside some sandbox so the model can play and do a lot of things humans do in real life, so that it can basically break the boundary of the internet,” Cheng said.
Ren said humans may still play an important role in improving AI systems. While models can generate content quickly, humans remain better at evaluating it.
“That kind of gap between human and AI, where they’re good at, where they’re not good at, can be leveraged to really improve the AI down the road,” he said.