We're seeking an AI Engineer to design and ship production-grade agentic AI systems that automate complex workflows end-to-end. This is a hands-on role with significant technical ownership. You'll work closely with the Chief Architect, product, engineering, and domain experts to translate ambiguous, high-impact problems into reliable AI-driven user experiences.
What success looks like:
Ship AI capabilities that measurably improve user outcomes (quality, time saved, throughput)
Build systems that are reliable by design: evals, observability, safety, and cost/latency controls from day one
Iterate quickly using a tight loop of instrument → evaluate → improve → deploy
What You'll Do
Agentic AI Feature & Workflow Development
• Build and integrate AI-driven features using LLM APIs (OpenAI / Azure OpenAI, Anthropic, Gemini on Vertex AI)
• Design and implement tool-using agents (structured function calling, schema validation, retries, fallbacks)
• Build multi-agent workflows when appropriate (e.g., planner/worker, reviewer/critic, specialist routing) and know when a simpler architecture is better
• Create agentic workflows such as document understanding, extraction, reasoning over evidence, task automation, and multi-step decision support
• Own context engineering end-to-end:
  • dynamic context assembly (retrieval + state + tool outputs)
  • context budgeting and compression/summarization
  • grounding strategies to reduce hallucinations and improve consistency
• Implement retrieval-augmented generation (RAG) and search workflows using off-the-shelf vector stores and embedding services
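As a rough illustration of the tool-calling pattern described above (schema validation, retries, structured fallbacks), here is a minimal Python sketch. The tool registry, tool names, and return shapes are invented for the example, not any specific framework's API:

```python
# Hypothetical tool registry: each entry declares its required argument
# fields (a stand-in for schema validation) and a Python callable.
TOOLS = {
    "lookup_order": {
        "required": {"order_id"},
        "fn": lambda args: {"order_id": args["order_id"], "status": "shipped"},
    },
}

def validate_args(tool_name, args):
    """Schema check: reject calls that omit required fields."""
    missing = TOOLS[tool_name]["required"] - set(args)
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")

def run_tool_call(tool_name, args, max_retries=2):
    """Validate and execute a tool call with retries and a structured fallback."""
    last_error = "unknown error"
    for _attempt in range(max_retries + 1):
        try:
            validate_args(tool_name, args)
            return {"ok": True, "result": TOOLS[tool_name]["fn"](args)}
        except Exception as exc:
            last_error = str(exc)
    # Fallback: surface a structured error the agent loop (or a human) can act on
    return {"ok": False, "error": last_error}
```

In a production agent loop, the failure branch would typically feed back into the model for replanning or escalate to human review rather than simply returning.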
Evaluation, Quality & Iteration (Core)
• Establish evaluation frameworks for accuracy, reliability, and output quality
• Build task-specific eval suites: golden datasets, adversarial cases, regression tests, and rubric-based scoring
• Set up automated evaluation pipelines and release gates (CI/CD-friendly) tied to prompt/model/version changes
• Define and monitor online metrics (e.g., task success rate, human override rate, safety flags, latency, cost) and run experiments/A-B tests where appropriate
• Use LLM-as-judge responsibly: calibrate, validate, and pair with human labels when needed
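To make the "golden datasets + release gates" idea concrete, here is a minimal sketch of an offline eval that could gate a CI pipeline. The dataset, scorer, and threshold are illustrative assumptions:

```python
# Tiny golden dataset: known inputs paired with expected outputs.
GOLDEN = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def exact_match(output, expected):
    """Simplest possible scorer; real suites often use rubrics or judges."""
    return output.strip() == expected

def run_eval(predict, dataset, threshold=0.9):
    """Score a predictor against the golden set; return (gate_passed, score).

    In CI, a False gate would block the prompt/model/version change."""
    hits = sum(
        exact_match(predict(case["input"]), case["expected"])
        for case in dataset
    )
    score = hits / len(dataset)
    return score >= threshold, score
```

A real suite would add adversarial cases and regression tests, and track scores per prompt/model version so a drop can be traced to a specific change.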
Engineering, Integration & Observability
• Develop scalable backend services and APIs that incorporate AI functionality
• Integrate AI pipelines into existing cloud, microservices, and event-driven architectures
• Implement observability and analytics for all AI features (tracing, evaluations, prompt versioning, cost tracking). Example tooling: Langfuse (and/or OpenTelemetry-compatible stacks)
• Ensure reliability, uptime, performance, and security of AI services
• Build internal tooling for evaluation, testing, prompt/version management, and safe deployment
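The observability bullet above (tracing, prompt versioning, cost tracking) can be sketched as a lightweight wrapper that records per-call metadata. The field names and pricing are invented for the example; in practice this is what tools like Langfuse or an OpenTelemetry stack provide:

```python
import time

# In-memory trace sink; a real system would export to a tracing backend.
TRACE_LOG = []

def traced_call(prompt_version, fn, *args, price_per_token=0.00001):
    """Run an LLM-calling function and record version, latency, and cost.

    `fn` is assumed to return (output, tokens_used)."""
    start = time.monotonic()
    output, tokens_used = fn(*args)
    TRACE_LOG.append({
        "prompt_version": prompt_version,
        "latency_s": time.monotonic() - start,
        "cost_usd": tokens_used * price_per_token,
    })
    return output
```

Tagging every trace with a prompt version is what lets you tie a quality or cost regression back to a specific prompt/model change.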
Product & Collaboration
• Partner with product managers, designers, the Chief Architect, and domain SMEs to shape AI-first solutions
• Rapidly prototype concepts and iterate based on user feedback and measurable eval results
• Translate business problems into well-structured AI workflows without requiring ML model training
• Document system behavior, known failure modes, and operational playbooks
Governance & Safety
• Implement guardrails, checks, and fallback logic for safe and predictable AI behavior
• Help define and follow compliance, privacy, and responsible AI guidelines
• Design for safe tool execution (bounded actions, permissions, escalation paths, human-in-the-loop review)
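As a minimal sketch of the "bounded actions, permissions, escalation paths" idea above: actions are checked against an allowlist, sensitive actions are escalated for human-in-the-loop approval, and everything else is denied. The action names and policy sets are illustrative assumptions:

```python
# Policy sets: bounded actions the agent may take freely, and sensitive
# actions that require an explicit human approval before execution.
ALLOWED_ACTIONS = {"read_document", "summarize"}
NEEDS_HUMAN_REVIEW = {"send_email", "delete_record"}

def execute_action(action, payload, human_approved=False):
    """Gate a requested agent action through the safety policy."""
    if action in ALLOWED_ACTIONS:
        return {"status": "executed", "action": action}
    if action in NEEDS_HUMAN_REVIEW:
        if human_approved:
            return {"status": "executed", "action": action}
        # Escalation path: queue for human-in-the-loop review
        return {"status": "escalated", "action": action}
    # Default-deny anything outside the known policy sets
    return {"status": "denied", "action": action}
```

The default-deny branch is the key design choice: unknown tool calls fail closed rather than open.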
What You Bring
Core Strengths (Required)
• Strong software engineering background (Python preferred) and experience shipping backend services
• Deep hands-on experience building agentic LLM systems from first principles: agent loops, tool interfaces, planning/replanning, memory/state, and failure handling
• Strong context engineering ability: retrieval strategies, routing, grounding, context budgeting, and long-context tradeoffs
• Strong evaluation discipline: golden datasets, regression gating, automated eval pipelines, and online monitoring
• Practical experience with LLM APIs (OpenAI/Azure OpenAI/Anthropic/Gemini) and AI orchestration frameworks
• Excellent debugging, systems thinking, and problem decomposition skills
• Comfortable operating in fast-paced, ambiguous environments with high ownership
Signals We Value
• You've shipped an LLM/agent system in production and can clearly explain:
  • the failure modes you discovered
  • the evals you built to catch regressions
  • how you improved cost/latency while increasing quality
  • how you monitored and iterated safely over time
• You keep up with industry developments (model releases, frameworks, best practices) and can translate them into pragmatic improvements
Nice to Have
• Experience with cloud platforms (AWS and/or GCP), microservices, and event-driven systems
• Experience with observability stacks (OpenTelemetry, Datadog, Honeycomb) and AI-specific tooling (e.g., Langfuse, Braintrust, HumanLoop, W&B Weave)
• Experience with workflow orchestration for long-running jobs (Temporal, Celery, Airflow)
• Experience building enterprise AI features (permissions, auditability, compliance constraints)
• Experience with safety/policy layers (PII handling, prompt injection defenses, sandboxed tool execution)
Why Join Us
• Build core AI capabilities that directly impact users and product strategy
• Work on cutting-edge, real-world agentic systems focused on applied engineering (no model training required)
• High ownership, fast iteration cycles, and strong cross-functional collaboration
• Competitive compensation and opportunities for rapid advancement
What Your First 90 Days Could Look Like
Ship one production agent workflow end-to-end with:
• tracing + observability
• an offline eval suite with regression gates
• cost/latency targets and monitoring
• documented failure modes and fallback paths