WHO WE ARE
Foundation models have transformed text and images, but structured data - the largest and most consequential data modality in the world - has remained untouched. Tables power every clinical trial, every financial model, every scientific experiment, every business decision. No one has built a foundation model that truly understands them.
Until now. What LLMs did for language, we're doing for tables.
Momentum: We pioneered tabular foundation models and are now the world-leading organization in structured data ML. Our TabPFN v2 model was published in Nature and set a new state of the art for tabular machine learning. Since its release, we've scaled model capabilities more than 20x, reached 3M+ downloads and 6,000+ GitHub stars, and are seeing accelerating adoption across research and industry - from detecting lung disease with Oxford Cancer Analytics to preventing train failures with Hitachi to improving clinical trial decisions with BostonGene.
The hardest work is in front of us. We're scaling tabular foundation models to handle millions of rows, thousands of features, real-time inference, and entirely new data modalities - while building the infrastructure to deploy them in production across some of the most demanding industries on earth. These are open problems no one else is working on at this level.
Our team: We're a small, highly selective team of 20+ engineers and researchers, selected from over 5,000 applicants, with backgrounds spanning Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN, led by Frank Hutter, Noah Hollmann, and Sauraj Gambhir, and advised by world-leading AI researchers such as Bernhard Schölkopf and Turing Award winner Yann LeCun. We ship fast, create top-tier research, and hold each other to an extremely high bar.
What's Next: In 2025, we raised a €9m pre-seed led by Balderton Capital, backed by leaders from Hugging Face, DeepMind, and Black Forest Labs. The next modality shift in AI is happening - and we're hiring the team that will make it happen.
ABOUT THE ROLE
We spend tens of millions per year on GPU compute to train tabular foundation models. That's not a target; it's what we're running today, and it's growing. The person who owns this infrastructure makes decisions worth millions of dollars: cluster architecture, scheduling efficiency, provider strategy, hardware selection. A wrong call costs six figures.
Today we run Slurm on GCP across multiple clusters. We're scaling to multi-cluster, multi-provider infrastructure and evaluating new hardware generations as they come online. You own the full stack, from cluster operations and cost optimization to distributed training performance and the tooling layer that keeps researchers moving fast. You work directly with the research team and understand what they're doing well enough to make infrastructure decisions that actually help them. And this isn't a pure support role. We operate an open environment. If you've got the next SOTA tabular architecture up your sleeve, go ahead and train it.
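To give a concrete flavor of the glue this stack implies, here is a minimal sketch of bootstrapping PyTorch distributed training from the environment variables Slurm exports to each task. It is an illustration under common assumptions (NCCL backend, one task per GPU), not our actual launch tooling; the MASTER_ADDR/MASTER_PORT handling in particular is a placeholder.

```python
import os
import torch
import torch.distributed as dist

def init_from_slurm() -> None:
    """Initialize a DDP process group from standard Slurm environment variables."""
    rank = int(os.environ["SLURM_PROCID"])         # global rank of this task
    world_size = int(os.environ["SLURM_NTASKS"])   # total tasks in the job
    local_rank = int(os.environ["SLURM_LOCALID"])  # task rank within this node

    # In a real sbatch script MASTER_ADDR would come from the allocation
    # (e.g. the first host in `scontrol show hostnames`); placeholders here.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")

    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

if __name__ == "__main__":
    init_from_slurm()
    print(f"rank {dist.get_rank()}/{dist.get_world_size()} ready")
    dist.destroy_process_group()
```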
What you'll work on:
- Own and evolve multi-cluster GPU infrastructure. Slurm on GCP today, multi-provider and new hardware tomorrow. Architecture, scheduling, reliability, cost optimization
- Drive GPU utilization and training throughput: profiling, memory optimization, communication bottlenecks, systems-level debugging of distributed training across large runs
- Architect the next generation of our infrastructure: multi-cluster orchestration, new GPU generations, provider diversification, capacity planning against growing compute demands
- Build the developer productivity layer: CI pipelines, experiment tracking, model registry, data processing, and internal tooling that keeps research iteration speed high
- Own the compute budget. You understand cost per FLOP across providers and hardware, and you hate wasted compute (a toy cost comparison follows this list)
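To make the cost-per-FLOP point concrete, here is a toy comparison in the spirit of the decisions above. Every price and utilization figure is an illustrative assumption, not a quote from any provider; the point is to reason in dollars per delivered FLOP rather than dollars per GPU-hour.

```python
# Illustrative offers: ($/GPU-hour, peak BF16 TFLOP/s per GPU).
# Prices are made up; peak numbers are dense BF16 specs for H100/A100.
ASSUMED_OFFERS = {
    "provider_a_h100": (2.50, 989.0),
    "provider_b_h100": (3.20, 989.0),
    "provider_b_a100": (1.40, 312.0),
}

def dollars_per_exaflop(price_per_hour: float, peak_tflops: float,
                        mfu: float = 0.40) -> float:
    """Cost of 1e18 delivered FLOPs at a given model FLOPs utilization (MFU)."""
    delivered_flops_per_hour = peak_tflops * 1e12 * 3600 * mfu
    return price_per_hour / delivered_flops_per_hour * 1e18

for name, (price, tflops) in ASSUMED_OFFERS.items():
    print(f"{name}: ${dollars_per_exaflop(price, tflops):.2f} per delivered EFLOP")
```

Under these assumed numbers, the cheapest offer per hour (the A100 one) is roughly 1.8x the cost per delivered FLOP of the pricier H100 offer - exactly the kind of inversion this role exists to catch.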
Tech stack: Slurm, GCP, Docker, wandb, GitHub Actions, uv, PyTorch, Triton
You may be a good fit if you have:
- 5+ years building and operating production GPU infrastructure or distributed training systems at scale, whether at a major AI lab, a well-funded ML startup, or an HPC environment
- Deep hands-on experience with Slurm and cluster management. You've debugged scheduling failures, optimized utilization across multi-tenant GPU workloads, and operated infrastructure where downtime has real cost
- Expert-level systems thinking: memory bandwidth, GPU profiling. You reason about hardware, not configs
- Strong Python and genuine fluency with PyTorch internals. Enough to profile a training run and tell whether the bottleneck is data loading, communication, or compute (see the profiling sketch after this list)
- Track record of making infrastructure decisions that measurably improved training throughput or cost efficiency
- Strong AI tooling skills. You use Claude Code, Cursor, or similar fluently to move fast without sacrificing quality
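On the profiling point: a minimal sketch of the triage loop meant above, using torch.profiler over a handful of steps. The model, loader, and optimizer are placeholders for a real training loop, and the heuristics in the comments are rules of thumb, not a recipe.

```python
import torch
from torch.profiler import profile, schedule, ProfilerActivity

def profile_steps(model, loader, optimizer, device="cuda"):
    loss_fn = torch.nn.CrossEntropyLoss()
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3),
        record_shapes=True,
    ) as prof:
        for step, (x, y) in enumerate(loader):
            x, y = x.to(device), y.to(device)  # H2D copies / input pipeline
            loss = loss_fn(model(x), y)        # forward compute
            loss.backward()                    # backward compute (+ comm under DDP)
            optimizer.step()
            optimizer.zero_grad()
            prof.step()                        # advance the profiler schedule
            if step >= 5:
                break
    # Rough triage: if copies and DataLoader waits dominate, you're input-bound;
    # if nccl all_reduce dominates, communication-bound; else look at kernels.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```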
Bonus:
- Experience operating at tens-of-millions-scale GPU spend
- Multi-cloud or hybrid HPC/cloud infrastructure experience
- Triton, CUDA, or custom kernel experience
- Experience scaling from single cluster to multi-cluster orchestration
- Background building experiment tracking, model registry, or ML pipeline tooling
OUR COMMITMENTS
- We believe the best products and teams come from a wide range of perspectives, experiences, and backgrounds. That's why we welcome applications from people of all identities and walks of life, especially anyone who's ever felt discouraged by "not checking every box."
- We're committed to creating a safe, inclusive environment and providing equal opportunities regardless of gender, sexual orientation, origin, disabilities, or any other traits that make you who you are.