Business

The case for liquid foundation models

By ZamPoint · January 21, 2026 · 8 Mins Read

Liquid AI, a unicorn AI lab in Cambridge, Massachusetts, builds foundation models in a fundamentally different way. It designs AI from scratch, with a hardware-in-the-loop strategy that enables it to deliver the best speed and lowest latency on any processor, including graphics processing units, central processing units, and neural processing units. Rather than designing transformer models, it builds liquid foundation models (LFMs), a new generation of AI models that Ramin Hasani and fellow cofounders Alexander Amini, Daniela Rus, and Mathias Lechner pioneered at MIT. LFMs represent a new generation of high-performance models that can process text, images, audio, and video simultaneously on any device, such as phones, laptops, wearables, and home appliances, as well as systems in cars or airplanes. Hasani sat down with QuantumBlack, AI by McKinsey, to discuss Liquid AI's academic roots, its experience building the company, and its focus on optimizing AI for devices.

This interview has been edited for length and clarity.

The journey from MIT to the marketplace

QuantumBlack: Your company is built on different technology than the large language models [LLMs] powering the generative AI most people are familiar with. Can you give us a quick history of your founding, from research lab to the mainstream market?

Ramin Hasani: When we started our machine learning research about a decade ago, we wanted to draw inspiration from nature and physics on how cells process information, and then we brought those learnings into the machine learning world, for example, by studying animal brains to build new and better algorithms.

At MIT, in Professor Daniela Rus's Computer Science and Artificial Intelligence Laboratory, we were focusing on AI for robotics. We developed algorithms that are significantly more compressed than typical AI systems at the time and that perform much better. For instance, driving a car autonomously was possible with a handful of neurons, compared with artificial neural networks with millions of parameters.

We realized we could take that brain-inspired technology and apply it to many different domains, from robotics to predictive markets to healthcare. We also saw that the technology could deliver a lot of value and outperform existing models in a much more efficient and compact way.

We called these flexible forms of intelligence systems that we designed "liquid neural networks," as in "liquid" for flexibility. These flexible intelligence systems facilitated better decision-making in highly automated tasks, such as autonomous driving.

[That's when we told ourselves that] we'd like to expand the horizon of what we can do with this core technology and build and scale Liquid AI so we can go from predictive AI to generative AI. LFMs became the core building block of our technology, which built on everything we learned from nature, physics, and algorithms over our decade of research at MIT.

Reducing cost without sacrificing quality

QuantumBlack: How would you explain an LFM to a CEO?

Ramin Hasani: The systematic way we design intelligent algorithms for enterprise applications allows us to bring far more control, certainty, and reliability to the deployment of AI systems. LFMs, in contrast to transformer-based models, give you confidence that you have the most cost- and energy-efficient AI stack delivering the highest quality.

LFMs are the optimal choice of generative AI models to serve on a device (outside of data centers, in factories, in a car, and on your phone, laptop, and PC) or inside a data center for ultra-low-latency applications. These models will systematically reduce the cost of intelligence while delivering the same frontier-model quality on specialized applications.

That was the core premise of Liquid AI: efficient and reliable AI for all. We wanted to give enterprises the confidence to know that when they use AI, they are getting the best possible model, one that comes from a fundamentally new kind of technology.

And this new technology isn't just another bet. It's a first-principles way to look into the possibilities of general-purpose computers and get them to perform tasks with the energy requirements the task actually entails.

Turning early clients into investors

QuantumBlack: How did you differentiate yourselves to investors who have already seen dozens of AI start-ups?

Ramin Hasani: For us, building a core technology with an A team with complementary talents was the unlock to building Liquid AI. In this space, talent is everything: if you're building a foundation model company, there aren't many AI scientists who know how to build and deploy one from scratch with taste. You need specialists and innovators.

All four cofounders of Liquid AI are AI scientists who are well connected in academic circles. One of the first things we did was approach our innovator friends to join us. The first team at Liquid AI was well known in industry and academic circles, and I think that became a cornerstone of our success, because technical talent with credibility is the most important factor when you're introducing a radically different technology.

After talent comes the business, and I was determined to avoid hallucinating use cases. From day one, we stayed extremely close to clients, which included enterprises in sectors such as semiconductors, finance, consumer electronics, automotive, robotics, e-commerce, healthcare, and more. That proved very influential, since our first round of financing came together from our clients and their strategic investors. We approached these companies as partners very early on, and as we developed the technology, they saw the promise of what it could do for their own businesses.

The benefits of on-device AI

QuantumBlack: In a field largely defined by data centers and the need for significant compute power, your models run on tiny devices, from Raspberry Pis to smartphones. What kinds of revenue models or customer experiences does on-device AI enable that cloud-based models don't?

Ramin Hasani: On-device AI is a new market. To unlock this market, we had to innovate on the efficiency of generative AI algorithms to bring them to device-level processors, but we also had to bring the quality and reliability of the models to be comparable to the frontier models in the cloud. There's a lot of attention being paid to building data centers to host the largest and most sophisticated versions of AI. But there is much more outside of data centers for us to explore.

For instance, with on-device AI, automotive companies can introduce in-car intelligence, allowing you to talk to your car. Why is that important? Because you cannot rely on the cloud to power a critical safety feature inside a car, due to security, connectivity, and privacy issues. The intelligence needs to be on-device, because if you suddenly experience network interruptions, you're going to be in trouble.

Another advantage of device-aware AI is building models that are fast decision-makers. Applications in financial services and e-commerce, such as recommendations and high-frequency trading, are extremely latency-critical. In these applications, our models complete tasks with millisecond and microsecond latency. That speed is very difficult to obtain with the larger models when it comes to these latency-critical, privacy-sensitive, and safety-critical applications.
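To make the latency point concrete: teams serving latency-critical applications typically enforce an explicit per-request latency budget and measure every inference call against it. The sketch below is illustrative only; `run_model` is a hypothetical stand-in for an on-device model call, not Liquid AI's actual API, and the 5 ms budget is an assumed figure.

```python
import time

LATENCY_BUDGET_MS = 5.0  # assumed budget for a latency-critical request


def run_model(prompt: str) -> str:
    # Hypothetical stand-in: a real deployment would invoke the
    # compiled on-device model runtime here.
    return prompt.upper()


def serve_with_budget(prompt: str, budget_ms: float = LATENCY_BUDGET_MS):
    """Run the model once and report whether it met the latency budget."""
    start = time.perf_counter()
    result = run_model(prompt)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms, elapsed_ms <= budget_ms


result, elapsed_ms, within_budget = serve_with_budget("recommend an item")
print(f"{elapsed_ms:.3f} ms, within budget: {within_budget}")
```

In production such a check usually feeds a percentile dashboard (p95/p99 latency) rather than a single pass/fail, but the budget-per-request framing is the same.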

Our models provide the quality of frontier LLMs on specialized applications, but with LFMs that are up to 1,000 times smaller. As a by-product of the efficiency and speed of LFMs, the cost of intelligence drops significantly.

Why we still need LLMs and large data centers

QuantumBlack: How should we think about allocating AI investment across various use cases spanning physical, edge, and cloud?

Ramin Hasani: I believe in a hybrid future. If you're asking, "Do we really need all of this infrastructure being built for AI?," my answer is yes.

Why? Because we want to solve the most complex problems in the world with frontier AI. Larger and more elaborate AI systems are extremely useful because they will allow us to discover new science, new math, and new physics and, in general, to better understand the universe around us.

What is the purpose of humanity? We want to understand where we are, who we are, and where we're going while enjoying the journey. So for that to happen, and for us to extend human life, for example, we need to build more intelligent systems. I'm a techno-optimist and believe AI can help us cure cancer once and for all. But AI for scientific discovery is going to need a lot of energy that we don't have, and we need to get creative on that front.

But there's this other kind of AI that can solve day-to-day problems right on our devices while we work toward frontier AI. On-device AI extends the world beyond data centers. It enables a physical AI world where we have robots, AI glasses, and hyperpersonalized computers performing tasks on our behalf in society in a controlled and private way. We need this kind of intelligence at the edge, in addition to cloud AI, to truly achieve a planet-scale deployment of AI for good.
