About the Role
As a Staff Platform Engineer - AI Infrastructure, you will build and scale the infrastructure behind
Paytm's AI inference platform, serving internal teams and enterprise customers, and supporting
new customer use cases from the ground up. You will own GPU infrastructure, model hosting
and serving, and multi-model routing across modalities. This includes running our own coding
and domain-specific models (voice, vision, risk, fintech workflows) as well as third-party models
on shared GPU and accelerator clusters.
You will also build self-service platforms that let teams provision compute, deploy and
customize models, and manage resources through APIs and control planes, so they can use AI
without rebuilding infrastructure each time.
Your work will form the AI control plane for Paytm Intelligence (Pi): policy-driven routing, quotas,
observability, and usage and cost visibility. It will directly affect how fast we ship agents and AI
features, how reliably they run, and how efficiently we use our hardware across payments, risk,
fraud, collections, support, and developer experience.
What You'll Do
• Design and operate GPU infrastructure for model hosting, including provisioning, scheduling, and cost optimization across cloud and on-premise environments
• Build and scale model serving systems using vLLM, TensorRT-LLM, Triton, or equivalent, supporting real-time inference with strong latency and availability guarantees
• Implement multi-model routing to serve multiple models across modalities (text, voice, code, vision) on shared infrastructure
• Own the model lifecycle end to end: download, deploy, serve, monitor, swap, and scale
• Drive inference optimization including quantization strategies (AWQ, GPTQ), batching, caching, and cold-start reduction
• Build self-service infrastructure platforms where teams provision compute, storage, and model endpoints through APIs and control planes
• Implement infrastructure-as-code at scale using Terraform, Pulumi, or CDK
• Build observability and reliability for inference systems: SLIs/SLOs, GPU utilization monitoring, latency tracking, automated capacity planning, and alerting
• Define platform standards and governance including multi-tenant isolation, cost attribution, and resource quotas
• Lead architectural design and influence engineering direction across the AI infrastructure stack
What You'll Bring
• 8+ years of software engineering experience, including 3+ years building infrastructure platforms or ML/AI infrastructure
• Deep experience with cloud infrastructure (AWS, GCP) and Kubernetes
• Hands-on experience with GPU workloads and model serving (vLLM, TensorRT-LLM, Triton, or similar)
• Strong software engineering fundamentals in Python, Go, or C++
• Experience with infrastructure-as-code (Terraform, Pulumi, CDK)
• Experience designing self-service platforms or internal developer tooling
• Understanding of model optimization: quantization, batching, serving architectures
• Proven ability to lead complex cross-team technical initiatives
• Strong communication skills and the ability to influence technical direction
Nice to Have
• Experience building or operating inference infrastructure at scale
• Experience with CUDA, GPU scheduling, or hardware-level optimization
• Experience with multi-model serving across different modalities
• Experience with edge inference or on-device model deployment
• Experience with model fine-tuning infrastructure (LoRA, QLoRA, PEFT)
• Background in fintech or regulated industries
Go Big or Go Home!
Paytm Labs believes in diversity and equal opportunity, and we will not tolerate any form of discrimination or harassment. Our people are critical to our success, and we know the more inclusive we are, the better our work will be.
We thank all applicants; however, only those selected for an interview will be contacted.
Paytm Labs is committed to meeting the accessibility needs of all individuals in accordance with the Accessibility for Ontarians with Disabilities Act (AODA) and the Ontario Human Rights Code (OHRC). Should you require accommodations during the recruitment and selection process, please let us know.