WHO WE ARE
Foundation models have transformed text and images, but structured data - the largest and most consequential data modality in the world - has remained untouched. Tables power every clinical trial, every financial model, every scientific experiment, every business decision. No one has built a foundation model that truly understands them.
Until now. What LLMs did for language, we're doing for tables.
Momentum: We pioneered tabular foundation models and are now the world-leading organization in structured data ML. Our TabPFN v2 model was published in Nature and set a new state-of-the-art for tabular machine learning. Since its release, we've scaled model capabilities more than 20x, reached 3M+ downloads and 6,000+ GitHub stars, and are seeing accelerating adoption across research and industry - from detecting lung disease with Oxford Cancer Analytics to preventing train failures with Hitachi to improving clinical trial decisions with BostonGene.
The hardest work is in front of us. We're scaling tabular foundation models to handle millions of rows, thousands of features, real-time inference, and entirely new data modalities - while building the infrastructure to deploy them in production across some of the most demanding industries on earth. These are open problems no one else is working on at this level.
Our team: We're a small, highly selective team of 20+ engineers and researchers, selected from over 5,000 applicants, with backgrounds spanning Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN, led by Frank Hutter, Noah Hollmann, and Sauraj Gambhir, and advised by world-leading AI researchers such as Bernhard Schölkopf and Turing Award winner Yann LeCun. We ship fast, create top-tier research, and hold each other to an extremely high bar.
What's Next: In 2025, we raised €9m pre-seed led by Balderton Capital, backed by leaders from Hugging Face, DeepMind, and Black Forest Labs. The next modality shift in AI is happening - and we're hiring the team that makes it happen.
ABOUT THE ROLE
Most companies treat open source as a side job for researchers who'd rather be doing something else. We think that's wrong. Prior Labs is rooted in open source - TabPFN started as a research project the community adopted, and that's how we became a company.
Language models and image models have had years to build out their ecosystems of interfaces and integrations. For tabular foundation models, none of that exists yet. You're not plugging into existing patterns - you're creating them. The engineering is genuinely hard: TabPFN does in-context learning, not traditional fit/predict, so wrapping it behind a clean sklearn interface means solving problems no other library has solved. You're designing APIs for a model whose architecture evolves faster than users can upgrade, and making inference robust to the full chaos of real-world tabular data. You understand the model deeply enough to push back when something will break downstream, and you care enough about the details to write great docs and error messages on top of great code.
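To make the abstraction problem concrete, here is a minimal sketch - not TabPFN's actual implementation; InContextClassifier and forward_in_context are hypothetical names - of what putting an in-context learner behind sklearn's estimator contract can look like: fit() only validates and stores the training data as context, and the model's forward pass runs at predict time.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted


class InContextClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical sklearn-compatible wrapper around an in-context learner.

    Unlike a traditional estimator, no parameters are learned in fit():
    the training set becomes the context that conditions the forward
    pass at predict time.
    """

    def __init__(self, model=None, batch_size=1024):
        self.model = model          # pretrained network, assumed given
        self.batch_size = batch_size

    def fit(self, X, y):
        # Validate inputs the way sklearn tooling (pipelines, CV) expects,
        # but only store them - the "training" happens in-context later.
        X, y = check_X_y(X, y)
        self.classes_, y_encoded = np.unique(y, return_inverse=True)
        self.X_context_ = X
        self.y_context_ = y_encoded
        return self

    def predict_proba(self, X):
        check_is_fitted(self, ["X_context_", "y_context_"])
        X = check_array(X)
        probas = []
        # Condition the pretrained model on the stored context and run
        # inference over the query rows in batches.
        for start in range(0, len(X), self.batch_size):
            batch = X[start:start + self.batch_size]
            probas.append(
                self.model.forward_in_context(   # hypothetical model API
                    self.X_context_, self.y_context_, batch
                )
            )
        return np.vstack(probas)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]
```

The payoff of getting this right is that the wrapper composes with the rest of the sklearn ecosystem (pipelines, cross-validation, metric APIs) even though nothing resembling traditional training happens inside fit().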
What you'll work on:
- Design sklearn-compatible APIs around a foundation model that doesn't behave like a traditional estimator - solve the hard abstraction problems so the interface feels simple
- Build and maintain PyTorch serialization, HuggingFace Hub model distribution, and checkpoint management across a multi-model, multi-version ecosystem (see the sketch after this list)
- Build MCP and tool-use wrappers for agentic AI pipelines
- Model-adjacent ML engineering: preprocessing pipelines, inference wrappers, dtype handling, edge case hardening against real-world data
- Own releases, CI, testing, and docs across the TabPFN ecosystem: TabPFN (core), tabpfn-client, tabpfn-extensions, tabpfn-time-series
- General ML engineering: benchmarking, evaluation pipelines, data loading, tooling that makes the team faster
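As a hedged example of the checkpoint-management item above (the repo ID, filenames, and checkpoint keys below are placeholders, not Prior Labs' actual Hub layout), distributing versioned checkpoints through the HuggingFace Hub typically means resolving a repo/filename pair to a cached local file, loading it with torch.load, and gating on an explicit compatibility check:

```python
import torch
from huggingface_hub import hf_hub_download

# Placeholder repo and filenames - not the actual TabPFN Hub layout.
REPO_ID = "example-org/example-tabular-model"
SUPPORTED_CHECKPOINT_VERSIONS = {"2.0", "2.1"}


def load_checkpoint(filename: str = "model-v2.1.ckpt", revision: str = "main"):
    """Download (or reuse from the local cache) a checkpoint from the Hub
    and verify that this library version can actually load it."""
    local_path = hf_hub_download(
        repo_id=REPO_ID,
        filename=filename,
        revision=revision,   # pin a tag or commit for reproducible installs
    )
    checkpoint = torch.load(local_path, map_location="cpu")

    # A multi-version ecosystem needs an explicit compatibility gate rather
    # than a cryptic key error deep inside load_state_dict().
    version = checkpoint.get("format_version")
    if version not in SUPPORTED_CHECKPOINT_VERSIONS:
        raise RuntimeError(
            f"Checkpoint format {version!r} is not supported by this release; "
            f"expected one of {sorted(SUPPORTED_CHECKPOINT_VERSIONS)}. "
            "Upgrade the package or choose an older checkpoint."
        )
    return checkpoint
```

The explicit version gate and the actionable error message matter as much as the download itself - they are the difference between a clear upgrade instruction and an opaque state_dict mismatch.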
You may be a good fit if you have:
- 3+ years building and maintaining Python packages or ML libraries used by others (open source track record strongly preferred)
- Deep fluency in PyTorch, scikit-learn, pandas, NumPy - their internals, extension points, and failure modes, not just their APIs
- Strong software engineering: testing, CI/CD, packaging (pyproject.toml, uv), semantic versioning, multi-version Python support
- Comfortable reading and working with model code - forward passes, checkpoint loading, inference optimization - and forming opinions about it
- Solid ML fundamentals: enough to write correct preprocessing, catch data leakage, and push back on design choices that break downstream
- Genuine care about developer experience: you write great docs and great error messages because you think they're engineering, not chores
Bonus:
- Maintainer or significant contributor to a popular open source ML/data library
- Strong AI tooling skills - you use Claude Code, Cursor, or similar fluently to move fast
- MCP server or tool-use integration experience
- HuggingFace Hub model distribution experience
- Background in tabular data, AutoML, or time series
- Experience debugging cross-platform packaging, or contributing to PyTorch/sklearn core
OUR COMMITMENTS
- We believe the best products and teams come from a wide range of perspectives, experiences, and backgrounds. That's why we welcome applications from people of all identities and walks of life, especially anyone who's ever felt discouraged by "not checking every box."
- We're committed to creating a safe, inclusive environment and providing equal opportunities regardless of gender, sexual orientation, origin, disabilities, or any other traits that make you who you are.