Why Kimchi?
Kimchi is the AI platform inside CAST AI. We started by helping companies run LLMs on their own Kubernetes clusters - now we're building the execution layer where agents do real work.
Our infrastructure today: multi-model inference (MiniMax, Kimi, GLM-5, Nemotron, DeepSeek) with intelligent routing, an OpenAI-compatible API, and deployment flexibility from our GPUs to your VPC. The inference layer is the foundation. What we're hiring for sits on top of it: coding agents, agent runtimes, orchestration systems, and the reliability engineering that makes them actually finish things.
Tech Stack: TypeScript, Go, Kubernetes, AWS/GCP/Azure, MCP, Prometheus/Grafana/Loki, GitLab CI, ArgoCD.
Why harness engineering matters here
OpenAI and Anthropic ship models. They also ship one harness each - the scaffolding that turns a raw model into something that can plan, execute, recover, and complete work. We ship a different kind of harness: one built for cost-conscious, long-horizon autonomy, running on inference infrastructure we control end-to-end.
A decent model with a great harness beats a great model with a bad harness. We've watched this play out. The gap between what today's models can do and what you see them doing is largely a harness gap - and that gap is where we operate.
What you'll build
The ratchet.
Every time our agent makes a mistake, we engineer a solution so it never makes that mistake again. That means hooks that enforce constraints the model "knows" but forgets: pre-commit lint checks, permission gates, context compaction before the window fills. Success is silent; failures are verbose.
Long-horizon execution.
Our harness is built around spec-driven autonomy: meta-prompting, fresh context per task, worktree-per-slice git strategy, automatic replanning, crash recovery, stuck detection. We're implementing Ralph loops - when the model tries to exit, we intercept and reinject the goal into a fresh context. The agent reads state from disk and continues. Multi-session, multi-day work, without context rot.
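The control flow of a Ralph loop can be sketched in a few lines. Everything here is illustrative (the model call is stubbed, and the real harness reads task state from disk), but it shows the core move: an early "exit" is intercepted, and the goal is reinjected into a fresh context.

```typescript
type Verdict = "done" | "exit"; // "exit" = the model tried to stop early

// One agent turn against a fresh context (stubbed; in practice, a model call).
type AgentStep = (goal: string, attempt: number) => Verdict;

function ralphLoop(goal: string, step: AgentStep, maxAttempts = 10) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Each iteration is a fresh context with the goal reinjected.
    const verdict = step(goal, attempt);
    if (verdict === "done") return { done: true, attempts: attempt };
    // "exit" is intercepted: loop again instead of letting the agent stop.
  }
  return { done: false, attempts: maxAttempts };
}
```

The bound on attempts is what separates a ratchet from an infinite loop: when the cap is hit without completion, that's a stuck-detection signal, not a success.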
Planner/executor splits.
Planning with a reasoning model, executing with a fast one, evaluating with a third. Separating generation from evaluation beats self-verification, because agents reliably skew positive when grading their own work.
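As a sketch, assuming each role is just a prompt-to-string call (in practice each would hit a different inference endpoint), the split looks like this:

```typescript
// Stub for a model call; real roles would each target a different endpoint.
type Model = (prompt: string) => string;

function runPipeline(
  task: string,
  planner: Model,   // reasoning model: turns the task into a plan
  executor: Model,  // fast model: carries the plan out
  evaluator: Model, // third model: grades the output
): { output: string; accepted: boolean } {
  const plan = planner(`Plan: ${task}`);
  const output = executor(`Execute: ${plan}`);
  // Evaluation is never done by the executor itself: a separate model
  // grades the work, because agents skew positive on their own output.
  const accepted = evaluator(`Evaluate: ${output}`).startsWith("PASS");
  return { output, accepted };
}
```

The key property is that the executor's output only ships if a model that didn't produce it says PASS.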
The harness surface.
CLI, TUI, MCP integration, sandboxed execution, telemetry. Our AGENTS.md is short - every line traces to a specific thing that went wrong. TypeScript on the surface, Go where it matters.
Memory and context.
Moving agents off laptops, giving them state that survives across sessions, managing context so information lands where it's actionable. Compaction, tool-call offloading, progressive skill disclosure.
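A minimal sketch of the compaction step, with the summarizer stubbed (the real one would call a model): once the transcript exceeds a budget, older messages collapse into a summary while recent ones stay verbatim.

```typescript
type Message = { role: string; content: string };

// Collapse everything except the last `keepRecent` messages into one summary.
function compact(
  messages: Message[],
  keepRecent: number,
  summarize: (old: Message[]) => string, // stub; in practice, a model call
): Message[] {
  if (messages.length <= keepRecent) return messages;
  const old = messages.slice(0, messages.length - keepRecent);
  const recent = messages.slice(messages.length - keepRecent);
  return [{ role: "system", content: summarize(old) }, ...recent];
}
```

What lands where matters: the summary goes up front where it frames the task, and the verbatim tail stays where it's actionable.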
What makes this different (with receipts)
You've seen the pitch: "we route to the best model." Everyone says that. Here's what we actually have:
• GPU infrastructure we own. Not just an API reseller. From GPU placement across clouds to the inference endpoint your agent calls - we control the cost curve.
• A harness-first thesis. We treat agent failures as configuration problems, not model problems. When we moved from a stock harness to our own, completion rates on internal benchmarks improved by 40%+ on the same model.
• An AGENTS.md that earns every line. No brainstormed rules - every constraint in our system prompt traces to a real failure we saw and fixed.
Requirements:
• You've used AI coding agents in anger. Not demos - real work. You have opinions about Claude Code, Codex, OpenCode, Cursor. You know what it feels like when an agent gets stuck and why.
• Strong TypeScript or Go in production. Comfort moving between them. Our surface is TypeScript; our core is Go.
• You think in harness terms. You read "the agent hallucinated" and your first instinct is to ask what context it was missing, what hook should have caught it, what constraint should exist.
• You drive features end-to-end. Design → build → ship → measure → iterate. We don't have layers that absorb ambiguity for you.
Responsibilities:
• Build and evolve the agent harness - ship hooks, permission gates, and context compaction. Every AGENTS.md constraint traces to a failure you personally diagnosed.
• Own long-horizon execution - multi-session task completion via spec-driven prompting, worktree-per-slice git, Ralph loop recovery, and stuck detection. Completion rate is your metric.
• Architect planner/executor/evaluator pipelines - planning with a reasoning model, execution with a fast one, evaluation with a third. No self-verification.
• Manage agent memory and context - state persistence across sessions, context compaction, tool-call offloading. Zero context rot on multi-day work.
• Own the harness surface - CLI, TUI, MCP integrations, sandboxed execution, telemetry. TypeScript on the surface, Go where it matters.
What success looks like (after 6 months):
• You've shipped at least one major harness feature end-to-end: designed it, built it, measured it, iterated.
• You've added constraints to our AGENTS.md based on failures you personally observed and diagnosed.
• You've improved a measurable reliability metric - completion rate, context efficiency, or cost per successful task.
• You've formed strong opinions about where our harness is load-bearing and where it's dead weight.
What's in it for you?
• Competitive salary (€6,500 - €9,000 gross, depending on the level of experience).
• Enjoy a flexible, remote-first global environment.
• Collaborate with a global team of cloud experts and innovators, passionate about pushing the boundaries of Kubernetes technology.
• Equity options.
• Get quick feedback with a fast-paced workflow. Most feature projects are completed in 1 to 4 weeks.
• Spend 10% of your work time on personal projects or self-improvement.
• Learning budget for professional and personal development - including access to international conferences and courses that elevate your skills.
• Annual hackathon to spark new ideas and strengthen team bonds.
• Team-building budget and company events to connect with your colleagues.
• Equipment budget to ensure you have everything you need.
• Extra days off to help maintain a healthy work-life balance.
This is a location-specific opportunity. We are currently accepting applications from candidates residing in the following European countries: Bulgaria, Croatia, Estonia, Greece, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, and Ukraine.
*As part of our standard hiring process, we would like to inform you that a background check may be conducted at the final stage of recruitment through our third-party provider, Checkr.
*Please note that Cast AI does not provide any form of visa sponsorship/work permit.
#LI-Remote