WHO WE ARE
At TwelveLabs, we are pioneering cutting-edge multimodal foundation models that comprehend video the way humans do. Our models have redefined the standards in video-language modeling, unlocking more intuitive and far-reaching capabilities and fundamentally transforming the way we interact with and analyze media.
With more than $110 million in Seed and Series A funding, our company is backed by top-tier venture capital firms such as NVIDIA's NVentures, NEA, Radical Ventures, and Index Ventures, and by prominent AI visionaries and founders such as Fei-Fei Li, Silvio Savarese, Alexandr Wang, and more. Headquartered in San Francisco, with an influential APAC presence in Seoul, our global footprint underscores our commitment to driving worldwide innovation.
Our partnership with NVIDIA and AWS gives us access to the most advanced chips, including B300s, enabling us to push the boundaries of what's possible in video AI.
We are a global company that values the uniqueness of each person's journey. It is the differences in our cultural, educational, and life experiences that allow us to constantly challenge the status quo. We are looking for individuals who are motivated by our mission and eager to make an impact as we push the bounds of technology to transform the world. Join us as we revolutionize video understanding and multimodal AI.
ABOUT THE TEAM
The Pegasus team sits at the core of TwelveLabs' video understanding capabilities and is responsible for driving Pegasus, our Video Analysis product. We develop multimodal video analysis systems designed for strong instruction following and for producing complex, hierarchically structured outputs. We focus on shipping products with real-world value rather than doing research in isolation, and we work in a goal-oriented, cross-functional team that encompasses both ML researchers and engineers.
Our work covers a broad range of challenges: large-scale distributed training of multi-modal LLMs that span from pre-training to RL, accurate temporal segmentation and structured metadata extraction for real-world use cases, extending temporal context length to multiple hours, and data curation processes that enable well-aligned evaluation and performance improvements through training data enhancements.
Our team has access to the most advanced chips in the world, including NVIDIA B300s, to push the boundaries of video analysis systems, accelerating our research-to-production cycle as fast as possible.
IN THIS ROLE, YOU WILL
- Drive technical direction for ML engineering within Pegasus while remaining deeply hands-on in critical system design and implementation.
- Own the design and evolution of critical production ML systems for Pegasus, with a focus on scalability, reliability, performance, and fast iteration.
- Lead technical decision-making across model deployment, inference architecture, metadata systems, and ML infrastructure for Video Language Models (VLMs).
- Improve and automate the end-to-end ML lifecycle so research advances can translate into product improvements quickly and reliably.
- Mentor engineers and raise the team's execution bar through strong technical judgment, design reviews, and hands-on collaboration.
- Explore and adopt AI-assisted development tools such as Claude, Gemini, and GPT to improve productivity across coding, experimentation, debugging, and documentation.
YOU MAY BE A GOOD FIT IF YOU HAVE
- Significant experience building and productionizing ML systems as a hands-on individual contributor.
- Experience driving technical direction across ML projects and making architectural decisions in complex production environments.
- Strong foundations in machine learning and deep experience with multimodal systems such as vision, language, or video-based models.
- Experience building and evolving distributed ML or data workflows, ideally in Kubernetes-based environments.
- Strong technical judgment across system design, performance, reliability, and long-term maintainability.
- A track record of mentoring engineers and creating technical leverage beyond your own individual contributions.
PREFERRED QUALIFICATIONS
- Experience serving or optimizing LLM/VLM systems in production, including inference optimization, throughput and latency tuning, batching, caching, or quantization.
- Experience designing and operating mission-critical AI/ML applications from 0 to 1 and scaling them in production.
- Experience with large-scale training or serving infrastructure for ML systems, including high-performance GPU environments.
- Master's or PhD in Machine Learning, Computer Science, or a related technical field.
HIRING PROCESS
Application Review → Recruiter Interview (remote/30 min) → Coding Test → Hiring Manager Interview (remote/30 min) → Live Coding Interview (on-site/135 min) → System Design Interview (remote/105 min) → Final Round Interview (remote/30 min) → Reference Check → Offer
BENEFITS AND PERKS
- A global team growing alongside global B2B customers
- Hybrid work that combines autonomy and collaboration
- A MacBook and remote-work equipment worth KRW 700,000 for every employee, with devices refreshed every three years
- A corporate card with a monthly limit of KRW 600,000, freely usable for meals, transportation, and more
- In-office snack bar (snacks, coffee, and light meals provided)
- Two-week winter break at the end of each year
- Annual health checkup support
- English education program support