ABOUT FATHOM
We created Fathom to eliminate the needless overhead of meetings. Our AI assistant captures, summarizes, and organizes the key moments of your calls, so you and your team can stay fully present without sacrificing context or clarity. From instant, searchable call summaries to seamless CRM updates and team-wide sharing, Fathom transforms meetings from a source of friction into a place for alignment and momentum.
We're a small company that creates magical experiences through the hard work of focused builders. We try to live our values - Care Deeply, Seek Leverage, Share Ownership, Sustain Urgency, and Be Tenacious - in everything we do, every day.
We started Fathom to rid us all of the tyranny of note-taking, and people seem to really love what we've built so far:
🔥 #1 Most Used App of the Year on HubSpot for 2025
🔥 #1 Rated on G2 with 4,500+ reviews and a perfect 5/5 rating
🔥 #1 Product of the Day and #2 AI Product of the Year
🚀 Most installed AI meeting assistant on both the Zoom and HubSpot marketplaces
🚀 We're hitting revenue and usage records every week
We think you'll be pretty excited about Fathom too if you give it a try. Sign up today (it's free)!
ROLE OVERVIEW
We're hiring a Model Performance Engineer to own the speed, cost, and reliability of our model inference stack, and to build the fine-tuning infrastructure that makes the rest of the AI team faster.
This is not a research role. You'll be optimizing real systems serving millions of meetings: choosing between quantization trade-offs, debugging speculative decoding, or figuring out why one GPU family's tail latency explodes at high concurrency while another stays stable.
You'll own two things:
1. Inference performance. You'll make our models faster and cheaper: speculative decoding, quantization, serving configuration, GPU selection, batching strategies, cold-start mitigation, adapter swapping (see the sketch after this list). Our traffic is extremely spiky (meetings end in 30-minute blocks), so you need to think in throughput curves. We put a premium on shipping a fast product.
2. Fine-tuning pipelines. The AI team constantly fine-tunes models for new tasks: distilling large teacher models for classification, training adapters for domain-specific behavior, DPO for preference tuning. Right now each project reinvents the training loop. You'll build repeatable infrastructure so an AI engineer can get from dataset to deployed model quickly.
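To make the first area concrete, here is a minimal sketch of the kind of serving knobs you'd be benchmarking and tuning, using vLLM's offline LLM entry point. The model name and every flag value are illustrative assumptions, not our production config.

```python
# Minimal sketch of serving configuration knobs, using vLLM's offline
# entry point. Model name and flag values are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    quantization="fp8",           # weight quantization trade-off to benchmark
    gpu_memory_utilization=0.90,  # headroom vs. KV-cache capacity
    max_num_seqs=256,             # batching limit; shapes the throughput curve
    enable_prefix_caching=True,   # helps repeated prompt prefixes
)

params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["Summarize this meeting transcript: ..."], params)
print(outputs[0].outputs[0].text)
```

Each of those arguments is a lever with a quality, latency, or cost consequence; the job is knowing which ones to pull, and measuring what happens when you do.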
HOW YOU'LL HELP US WIN
- Benchmark FP8 quantization across GPU families: find that an FP8 KV cache causes catastrophic repetition loops, identify static quantization as 6% faster than dynamic on certain hardware, and ship a production config that gets a 1.3x speedup with <1% quality degradation
- Evaluate serving frameworks (vLLM vs. SGLang) with speculative decoding: discover that ngram speculation degrades ASR quality while EAGLE3 draft models don't, and that torch.compile makes certain GPUs 7% slower
- Build a fine-tuning pipeline that takes a JSONL dataset and produces an optimized tune ready for serving (sketched after this list), so a teammate can train a small classifier in an afternoon instead of a week
- Optimize GPU spend: know which GPU families are best for batch workloads (stable under high concurrency) versus latency-sensitive paths (40% faster, but tail latency blows up under load), and when a 30% cost premium isn't worth it
- Debug production inference issues: trace a quality regression to a serving-framework upgrade that changed the default attention backend, or find that audio-format handling in the multimodal pipeline silently drops segments
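For a flavor of the fine-tuning pipeline work, here is a minimal sketch of the JSONL-to-adapter path using Hugging Face trl and peft. The file path, base model, and hyperparameters are assumptions for illustration; a real pipeline would add evaluation, export, and deployment steps.

```python
# Minimal sketch of a JSONL -> LoRA adapter pipeline using Hugging Face
# trl + peft. Paths, base model, and hyperparameters are illustrative
# assumptions; assumes each JSONL row carries a "text" field.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder base model
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
    args=SFTConfig(
        output_dir="out/classifier-lora",
        num_train_epochs=1,
        per_device_train_batch_size=8,
        learning_rate=2e-4,  # typical LoRA range; tune per task
    ),
)
trainer.train()
trainer.save_model("out/classifier-lora")  # adapter ready for serving
```

The point of the role is that a teammate should be able to run something this short and get a servable artifact, instead of rebuilding this loop from scratch per project.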
REQUIREMENTS
Hard Skills:
- Deep experience with LLM serving frameworks (vLLM, SGLang, TensorRT-LLM, or similar), not just deploying them but tuning them: attention backends, scheduling strategies, CUDA graph warmup, prefix caching
- Hands-on quantization experience: you've gone beyond "apply FP8 and hope." You understand weight vs. activation quantization, per-channel vs. per-tensor scaling, and when dynamic quantization introduces more overhead than it saves
- Production fine-tuning experience: LoRA/QLoRA SFT, familiarity with training frameworks (ms-swift, Axolotl, torchtune, or similar), understanding of data formatting, learning rate schedules, and how to diagnose training failures
- Strong Python. You'll write serving infrastructure, benchmarking harnesses, and training pipelines, not notebooks
- Comfort with GPU profiling and performance analysis. You should be able to look at a benchmark result and know whether the bottleneck is compute, memory bandwidth, or scheduling overhead (a worked example follows this list)
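To give a sense of that back-of-envelope reasoning: at batch size 1, LLM decode is typically memory-bandwidth bound, so tokens/sec is roughly HBM bandwidth divided by the bytes of weights streamed per token. The GPU and model numbers below are rough public figures used as assumptions.

```python
# Back-of-envelope check: is a decode benchmark compute- or bandwidth-bound?
# Hardware/model numbers are rough public figures, used as assumptions.
weights_gb = 8.0          # e.g. an 8B-parameter model in FP8 (1 byte/param)
hbm_bandwidth_gbs = 3350  # e.g. H100 SXM HBM3, ~3.35 TB/s

# At batch size 1, each decoded token streams all weights from HBM once,
# so bandwidth sets the ceiling:
ceiling_tok_s = hbm_bandwidth_gbs / weights_gb
print(f"bandwidth-bound ceiling: ~{ceiling_tok_s:.0f} tok/s per sequence")

measured_tok_s = 140  # hypothetical benchmark result
if measured_tok_s < 0.5 * ceiling_tok_s:
    print("well below the bandwidth roofline: suspect scheduling or "
          "kernel-launch overhead rather than HBM bandwidth")
```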
Strong signal:
- Cost modeling for GPU infrastructure: you've had to choose between GPU types and justify the tradeoff
- Experience with multimodal models (audio/vision encoders + LLM decoders)
- Experience with Modal, Ray Serve, or similar serverless GPU platforms
- Understanding of audio processing (codecs, chunking, sample rates)
- Experience building internal tooling that other engineers use; this role succeeds when the rest of the team ships faster
Not required:
- ML research background or publications
- Prompt engineering expertise (we have a team for that)
- Frontend or full-stack experience
- Master's/PhD (though it's fine if you have one)
WHAT'S IN IT FOR YOU
- The opportunity to shape the foundational software services of a growing company
- A role that balances innovation and incremental improvement
- A dynamic and collaborative engineering team
- Competitive compensation and benefits
- A supportive environment that encourages innovation and personal growth
WHY YOU SHOULD JOIN US
- Opportunity for impact. We're established enough to ship instead of fighting fires, and early enough that your work will have a real impact.
- Startup experience. You'll work closely with our CEO, a 2X Founder/CEO with a background in computer science and product design.
- We embrace being fully remote. We schedule meetings sparingly and instead rely heavily on async communication (Slack, Notion, Loom).
ABOUT THE INTERVIEW
- You'll meet the entire team. We think it's important that you get to meet everyone you'll be working with.
- No bullshit. Ask us anything you like. We've never understood why companies pretend to be something they're not during hiring. You're going to find out eventually, so we'd rather you know who we are up front so we can both make sure this is a good fit for all involved.
- Quick turnaround time. We know you have lots of options, so we move fast: usually less than a week from start to finish.
HOW TO APPLY
Include a brief write-up or demo of inference optimization or model serving work you've done. We care about the reasoning behind your decisions: why you chose a specific quantization strategy, how you diagnosed a performance regression, what tradeoffs you navigated. A GitHub repo, blog post, or even a few paragraphs in your cover letter works.