AI startup Hugging Face envisions that small—not large—language models will be used for applications including "next stage robotics," its Co-Founder and Chief Science Officer Thomas Wolf said.
"We want to deploy models in robots that are smarter, so we can start having robots that are not only on assembly lines, but also in the wild," Wolf said while speaking at Web Summit in Lisbon today. But that goal, he said, requires low latency. "You cannot wait two seconds so that your robots understand what's happening, and the only way we can do that is through a small language model," Wolf added.
Small language models "can do a lot of the tasks we thought only large models could do," Wolf said, adding that they can also be deployed on-device. "If you think about this kind of game changer, you can have them running on your laptop," he said. "You can have them running even on your smartphone in the future."
Ultimately, he envisions small language models running "in almost every tool or appliance that we have, just like today, our fridge is connected to the internet."

The firm released its SmolLM language model earlier this year. "We are not the only one," Wolf said, adding that "almost every open source company has been releasing smaller and smaller models this year."
He explained that "for a lot of very interesting tasks that we need, that we could automate with AI, we don't need to have a model that can solve the Riemann conjecture or general relativity." Instead, simple tasks such as data wrangling, image processing and speech can be performed using small language models, with corresponding benefits in speed.
The performance of this year's 1-billion-parameter Llama model is "equivalent, if not better than, the performance of a 10 billion parameters model of last year," he said. "So you have a 10 times smaller model that can reach roughly similar performance."
"A lot of the knowledge we discovered for our large language model can actually be translated to smaller models," Wolf said. He explained that the firm trains them on "very specific data sets" that are "slightly simpler, with some form of adaptation that's tailored for this model."
Those adaptations include "very tiny, tiny neural nets that you put inside the small model," he said. "And you have an even smaller model that you add into it and that specializes," a process he likened to "putting a hat for a specific task that you're gonna do. I put my cooking hat on, and I'm a cook."
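The "tiny neural net inside the model" approach Wolf describes is commonly implemented as a bottleneck adapter: a small down-projection, a nonlinearity, an up-projection, and a residual connection, added to a frozen base layer. A minimal NumPy sketch of the idea (illustrative only; the dimensions and class name are hypothetical, not Hugging Face's actual implementation):

```python
import numpy as np

class Adapter:
    """A tiny bottleneck network inserted after a frozen base-model layer.

    Down-projects the hidden state, applies a ReLU, up-projects back,
    and adds a residual connection. With a zero-initialized up-projection,
    the adapter is an identity at the start of training, so the base
    model's behavior is preserved until the adapter learns its task.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
        # Zero init means the adapter initially changes nothing.
        self.w_up = np.zeros((bottleneck_dim, hidden_dim))

    def __call__(self, h: np.ndarray) -> np.ndarray:
        z = np.maximum(h @ self.w_down, 0.0)  # bottleneck + ReLU
        return h + z @ self.w_up              # residual add

# Illustrative sizes: a small model might have hidden_dim=2048, and a
# 64-wide bottleneck adds only ~260k parameters per layer.
adapter = Adapter(hidden_dim=2048, bottleneck_dim=64)
h = np.ones((1, 2048))
out = adapter(h)
assert np.allclose(out, h)  # zero-init up-projection: identity at start
```

Only the adapter's two small matrices are trained per task, which is why swapping specializations in and out is as cheap as Wolf's "putting a hat on" metaphor suggests.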
In the future, Wolf said, the AI space will split across two main trends.
"On the one hand, we'll have this huge frontier model that will keep getting bigger, because the ultimate goal is to do things that human cannot do, like new scientific discoveries," using LLMs, he said. The long tail of AI applications will see the technology "embedded a bit everywhere, like we have today with the internet."
Image credit
Main image by Shauna Clinton/Web Summit via Sportsfile licensed under CC BY 2.0
Edited by Stacy Elliott.