#NewRole #DevOps #MLOps #AI #Python #Legal #Cupertino #Austin
DevOps MLOps Engineer (Python), Legal Operations
Day 1 onsite: Cupertino, CA / Austin, TX (local candidates preferred)
Hybrid – 3 days onsite, 2 days remote
Long term contract
Direct client opportunity
No mid-layer / implementation partners involved
Ideally, candidates will have prior working experience with this client, as the role is specific to how solutions, products, and code are deployed in the client's environment.
The Applied Data Science team within Legal Operations builds AI-powered tools, analytics capabilities, and data infrastructure that enable the Client's legal organization to work smarter and faster. The team uses the Client's internal AI platform to build and deploy production-grade AI applications – from document analysis and legal research to spend analytics and conversational intelligence. As the volume and complexity of AI deployments grow, the team needs a dedicated engineering discipline to own the path from prototype to production.
The MLOps Engineer is the bridge between the AI applications the Applied Data Science team builds and the production environment where stakeholders rely on them. You will own deployment pipelines, integration infrastructure, access governance, and scalability for every AI-powered tool the team ships. This is not a model training or data science role – it is an engineering role focused on making AI applications reliable, governed, and scalable in an enterprise environment.
Key Responsibilities
Deployment & Delivery
Integration & Data Connectivity
Governance & Security
Observability & Reliability
Enablement
Minimum Qualifications
3+ years of experience in MLOps, DevOps, or platform engineering roles
Strong Python skills – this is the primary language for tooling and automation
Hands-on experience deploying and operating LLM-powered applications in production
Experience building CI/CD pipelines for AI or software applications (GitHub Actions or equivalent)
Experience with REST API development and integration – connecting applications to live data sources
Working knowledge of containerization – Docker, plus Kubernetes basics
Experience implementing access control, authentication, and audit logging in enterprise environments
Preferred Qualifications
Experience with LLM observability tooling – monitoring latency, token usage, and response quality in production
Familiarity with dbt, Snowflake, or enterprise data warehouse environments
Experience with vector database management – Chroma, Pinecone, Weaviate, or equivalent
Experience in a regulated or compliance-sensitive environment where auditability and data access governance are non-negotiable
Familiarity with RAG (Retrieval-Augmented Generation) application architecture – not building RAG systems, but deploying and operating them reliably
Interested candidates, please send your updated resume to [Upgrade to PRO to see contact] or message me on LinkedIn for more details.