ABOUT THE JOB
We believe using large language and multimodal models should be as simple as calling an API. To achieve this in production, we need to serve enterprises across clouds, with authentication, billing, multi-tenant isolation, and zero tolerance for downtime.
We are looking for a Senior Backend Engineer who is excited by the full breadth of what it takes to run a platform in production. You will own the business logic layer that sits between our inference engine and every customer who relies on it. Your work spans API engineering, service development, and data architecture. If you like solving problems that only reveal themselves in the wild, this is your role: edge cases in multi-cloud orchestration, enterprise requirements that don't fit neatly into a spec, performance bottlenecks that are hard to reproduce.
You will move across domains, make decisions under uncertainty, and build systems that work cleanly, reliably, and at scale. We are looking for people with a track record of owning complex systems in production and solving unique problems. A great candidate is a strong collaborator who enjoys solving complex architectural challenges, cares deeply about developer workflows, and is eager to help define the future of AI adoption.
KEY RESPONSIBILITIES
- Own the architecture and evolution of core backend microservices powering our AI inference platform, from the API layer through business logic to the data layer.
- Design and build production-grade APIs (REST, gRPC, GraphQL) that serve as the foundation for AI deployments, developer integrations, and enterprise workflows.
- Build and scale enterprise-grade platform capabilities: authentication, RBAC, billing, organization management, and secure multi-tenant SaaS infrastructure.
- Develop AI-specific platform features, including LLM deployment workflows and inference-specific service integrations.
- Design and optimize data models and pipelines across OLTP (PostgreSQL) and OLAP (ClickHouse) systems.
- Collaborate with infrastructure engineers on multi-cloud deployment and resource orchestration pipelines.
- Set reliability and performance standards for the services you own, resolving production issues with urgency and rigor.
- Drive engineering quality through design reviews, automated testing, and CI/CD.
QUALIFICATIONS
- 5+ years of backend or systems engineering experience in production environments.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or equivalent.
- Expertise in Python and modern frameworks (e.g., FastAPI); should be able to write code others learn from.
- Strong experience designing and operating distributed systems at scale.
- Solid API design experience across REST, gRPC, and GraphQL.
- Proficiency in data modeling and SQL, with hands-on experience in PostgreSQL and OLAP systems such as ClickHouse.
- Working knowledge of LLM serving.
- Experience building secure, multi-tenant SaaS architectures: authentication, RBAC, and compliance requirements.
- Familiarity with cloud-native development and observability tooling (OpenTelemetry or equivalent).
- Strong systems thinking and ability to reason about failure modes.
PREFERRED EXPERIENCE
- Hands-on experience with AI or model serving infrastructure.
- Experience with Kubernetes for production container orchestration and scaling.
- Background building developer-facing SDKs, CLIs, or internal engineering platforms.
- Exposure to multi-cloud environments and cross-cloud resource management.
- Experience leading incident response and postmortems for production systems.
- Basic familiarity with modern frontend frameworks (e.g., React/Next.js) for cross-functional collaboration.
BENEFITS
- Flexible working hours
- Daily lunch and dinner provided; unlimited snacks and beverages
- Supportive and highly collaborative work environment
- Health check-up support and top-tier equipment/hardware support
- A front-row seat to the generative AI infrastructure revolution
- Competitive compensation, startup equity, health insurance, and other benefits
ABOUT FRIENDLIAI
FriendliAI is building the world's best AI inference platform that makes large language and multimodal models fast, efficient, and deployable at scale. We power high-throughput, low-latency AI workloads for organizations worldwide and integrate directly with Hugging Face, giving developers instant access to over 500,000 open-source models.
We are a small, fast-moving team doing work that matters at one of the most exciting moments in the history of technology. With our world-class inference engine, we are building a platform that the AI industry can actually rely on.