On September 8, Thinking Machines Lab, a Silicon Valley startup founded by OpenAI’s former chief technology officer Mira Murati, raised a $2 billion seed round at a $12 billion valuation to build AI systems that are reliable, stable, transparent, and research-friendly.
Backed by $2 billion in funding and led by some of the sharpest minds from OpenAI, Thinking Machines Lab is taking a fresh approach to solving the problem of unpredictability in large language models (LLMs).
As AI systems become more pervasive in our lives, the demand for stability and reliability keeps rising. Businesses, scientists, and developers need models that behave predictably, not ones that give different answers every time.
That’s exactly what Mira Murati’s new company is trying to fix.
AI Adoption for Diagnostics Reliability
In its first public research blog post, “Defeating Nondeterminism in LLM Inference,” Thinking Machines Lab pinpoints a technical reason for inconsistent AI responses.
According to researcher Horace He, nondeterminism in responses typically arises from how GPU kernels, the tiny programs within the chips, run during inference. Inference is what happens after you hit “enter” and the AI generates its answer; it is also the stage where those differing answers creep in.
The argument is that by controlling this piece of the system more precisely, it is possible to develop deterministic AI models. That is, the identical input will always produce the identical output.
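To see how the same input can drift into different outputs, it helps to remember that floating-point arithmetic is not associative: the order in which partial sums are combined changes the result in its last bits. The minimal sketch below is our own CPU illustration of that effect with NumPy, not code from the research post; on a GPU, the grouping of partial sums can depend on batch size and kernel scheduling, and a tiny wobble in the logits can be enough to flip which token gets picked, after which the generated text diverges.

```python
import numpy as np

# Illustration only (not Thinking Machines Lab's code): floating-point
# addition is not associative, so the grouping of partial sums matters.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

# The same effect at scale: summing identical numbers in two different
# groupings, the way parallel GPU kernels may split their work, typically
# produces totals that differ in the last few bits.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

sum_all_at_once = x.sum(dtype=np.float32)
sum_in_blocks = x.reshape(1_000, 1_000).sum(axis=1, dtype=np.float32).sum(dtype=np.float32)

print(float(sum_all_at_once), float(sum_in_blocks))        # usually not identical
print(abs(float(sum_all_at_once) - float(sum_in_blocks)))  # tiny, but often nonzero
```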
This is particularly critical in fields like healthcare and science, where reproducible results can influence real-world decisions. A shift toward AI consistency would also improve the way AI is trained.
Reinforcement learning (RL), where AI is trained through rewards for correct answers, becomes tricky if the model doesn’t react in exactly the same way twice. By producing predictable outputs, the researchers aim to smooth out RL and make models more trainable. This falls under the broader company goal of responsible AI development, where systems are not only powerful but also reliable and interpretable.
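One concrete way to make “same input, same output” testable is to replay a single prompt many times and count how many distinct completions come back; a fully deterministic stack should return exactly one. The sketch below is a hypothetical harness, assuming a placeholder generate() function standing in for whatever inference API is in use; it is not an API from Thinking Machines Lab or any specific library.

```python
from collections import Counter

def generate(prompt: str) -> str:
    """Placeholder for a real inference call (HTTP client, local runtime, etc.).

    Assumed to use greedy decoding (temperature 0) so that, in principle,
    the output should never vary for a fixed prompt.
    """
    raise NotImplementedError("plug in your own inference call here")

def count_distinct_completions(prompt: str, runs: int = 100) -> Counter:
    """Send the same prompt repeatedly and tally the distinct answers.

    With truly deterministic inference, the counter holds exactly one entry;
    every extra entry is evidence of nondeterminism somewhere in the stack.
    """
    return Counter(generate(prompt) for _ in range(runs))

# Example usage (once generate() is wired up):
# tally = count_distinct_completions("Summarize determinism in one sentence.")
# print(len(tally), "distinct completions across", sum(tally.values()), "runs")
```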
Once Men Turned Their Thinking over to Machines, All Was Lost
Thinking Machines Lab is already one of the most valuable AI startups in history.
The $12 billion valuation, as reported by TechCrunch, came in a seed round led by Andreessen Horowitz, with other big names such as Nvidia, Accel, Cisco, and AMD also participating. Murati says the company’s first product is on the way and will include a major open-source offering.
Moreover, the startup is collaborating with Google Cloud to train its models, a step that was needed given how resource-intensive building and deploying AI can be. The intention is to build tools that are useful to both startups and researchers.
Murati has promised frequent updates, in the form of blog posts and code, so others can see what the team is doing. This open approach contrasts with the more closed direction OpenAI has recently taken.
Fixing AI that gives different answers isn’t just a technological issue; it’s an issue of trust. Whether doctors are diagnosing illnesses, developers are debugging code, or scientists are testing theories, AI reliability could usher in a new era of trust in digital intelligence.
That’s also why solving this LLM limitation is more than a niche issue. It’s a step along the way to building trustworthy AI.
If Thinking Machines Lab can turn this promise into reality, it may well change the way we approach dependable machine learning. For now, the world is watching to see whether the firm’s strong start turns into real momentum, and whether AI reliability is more than an idea.
Soon enough, it could become the norm for how AI systems operate overall, delivering much-needed output stability in a fast-changing world where consistency is as critical as accuracy.