- Soumith Chintala departs Meta and joins Mira Murati’s Thinking Machines Lab.
- Reports indicate rapid hiring, significant funding aims, and an initial tool, Tinker, in pilot testing.
- PyTorch’s open-source footprint grows, with widespread adoption and a landmark NeurIPS paper.
- DeepLearning.AI launches a PyTorch Professional Certificate on Coursera to upskill developers.
The AI world is watching a pivotal transition as PyTorch’s co-creator Soumith Chintala leaves Meta to join Thinking Machines Lab, a new venture founded by former OpenAI CTO Mira Murati. The move underscores the pace at which leading researchers are reshaping teams and priorities across the industry.
Beyond the headlines, the change signals evolving strategies around the framework that many researchers and engineers rely on daily. Leadership changes of this kind may also reflect a shift toward broader community stewardship and fresh directions in product development and research collaboration.
Leadership shifts around PyTorch

After over a decade helping to shape Meta’s AI infrastructure, Soumith Chintala announced in early November 2025 that he would step down from PyTorch leadership and depart the company. Shortly after, he confirmed he had joined Thinking Machines Lab, citing the strength of the team and a desire to build new things.
Chintala’s path has been widely cited as an inspiration: from Hyderabad and VIT to co-founding PyTorch in 2016, turning a research-centric toolkit into a standard that now powers cutting-edge work across labs, startups, and large enterprises.
Under his stewardship, PyTorch evolved from an experimental favorite into a production-ready platform. The framework’s growth, governance, and community contributions cemented its role as a cornerstone of modern machine learning workflows.
His exit arrives amid broader realignments at major AI organizations. While specifics vary by report, the common thread is clear: teams and roadmaps are being re-tuned to compete in an era defined by model scale, data pipelines, and deployment at global scale.
Thinking Machines’ goals, hiring, and early product signals
Mira Murati established Thinking Machines Lab with a focus on what she describes as collaborative general intelligence. The group’s north star is building multimodal systems for natural human interaction, with an emphasis on responsible and scalable research-to-product pathways.
Reports point to substantial investor interest: an earlier $2 billion seed round has been widely discussed, alongside talks referencing a potential valuation in the $50-$60 billion range. Hiring appears brisk, reflecting the race to assemble cross-disciplinary talent spanning infrastructure, research, and product.
The startup’s first tool, Tinker, has been described as a system to simplify the fine-tuning of large language models. Early pilots at institutions like Princeton and Stanford and trials with initial enterprise users suggest a measured rollout as the team iterates with real-world feedback.
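Tinker's internals have not been published, so the following is only a rough illustration of what "simplifying fine-tuning" commonly means in PyTorch today: low-rank adaptation (LoRA-style), where the base model's weights stay frozen and only a small set of added parameters is trained. All names and sizes below are invented for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay fixed
        # Low-rank factors: only these are trained during fine-tuning
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.A @ self.B

layer = LoRALinear(nn.Linear(16, 16))
x = torch.randn(2, 16)
y = layer(x)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

The appeal of this pattern is the parameter count: here only 128 of the 400 parameters are trainable, and the ratio becomes far more dramatic at LLM scale, which is what makes fine-tuning tractable on modest hardware.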
Multiple reports also highlight prominent recruits and advisors across the industry, indicating that Thinking Machines is building a deep bench to accelerate development amid intense competition for expertise.
PyTorch’s open-source footprint keeps expanding
PyTorch has become a platform of choice for research and production, with usage cited in well over 150,000 public projects. Its impact is visible across computer vision, NLP, and generative modeling, where rapid prototyping and flexible deployment are essential.
A notable milestone was PyTorch’s first full paper at NeurIPS (2019), authored by Adam Paszke and collaborators, documenting core design choices through version 0.4. That work codified the framework’s principles and helped unify a growing ecosystem of libraries and tools.
From the PyTorch Foundation’s governance to vibrant community contributions, the framework’s trajectory illustrates how open-source collaboration scales when research, infrastructure, and education align around shared goals.
Education push: a new PyTorch Professional Certificate
DeepLearning.AI announced the PyTorch for Deep Learning Professional Certificate on Coursera, guided by Laurence Moroney. The curriculum focuses on how to build, train, and deploy PyTorch models, aiming to make practical deep learning more accessible to a wider audience.
For learners and teams, this type of structured pathway can compress the time it takes to go from fundamentals to production. By standardizing hands-on projects and best practices, the certificate broadens the talent pipeline and supports organizations that are formalizing their MLOps stacks around PyTorch.
How the ecosystem could evolve from here
As Thinking Machines ramps up and other labs double down on infrastructure, the PyTorch community stands to benefit from renewed focus on efficiency, tooling, and distributed training. The next phase will likely feature tighter loops between research and deployment, with an eye on safety and reliability.
Meanwhile, the developer community keeps pushing boundaries with projects that blend rigor and accessibility. Educational write-ups and implementations—from foundational tutorials to end-to-end guides on building StyleGAN in PyTorch—continue to lower barriers for practitioners at every level.
With a proven track record in open-source and a growing slate of training resources, PyTorch is positioned to remain a central pillar in AI development. The combination of experienced leadership joining new ventures, sustained community energy, and formal education pathways suggests a cycle of innovation that feeds both experimentation and real-world adoption.