Code and Theory is seeking a senior cloud engineer who wants to own the full technical lifecycle of enterprise client deployments – from assessing a client's existing infrastructure and designing the integration architecture, to provisioning the environment and keeping it running in production. This is a hands-on role that also requires real client engagement: you will sit in technical working sessions, work directly with client DevOps teams, and be the person who translates security and compliance requirements into concrete infrastructure decisions.
You will be working on a platform that connects our AI product to enterprise marketing environments across different cloud providers. Each client brings its own infrastructure, data stack, and security perimeter. Your job is to figure out how to connect to it, build the integration reliably, and hand it off in a state that can be operated and maintained over time.
WHAT YOU'LL DO
• Assess each client's cloud infrastructure, data stack, and security perimeter before any build starts – translate findings into a concrete integration plan
• Design and provision client environments using Terraform – networking, IAM, container orchestration, managed storage, and secrets management across GCP, AWS, or Azure depending on the client
• Deploy and operate LLM inference pods in client cloud environments – managing API integration, rate limits, latency, and failure handling without needing a data scientist in the room
• Build and maintain integration layers that connect client data sources to the AI layer – you own the plumbing that makes inference useful; the data science team owns what runs on top of it
• Deploy and maintain containerized workloads via Helm – orchestration, ETL workers, and AI inference pods running inside the client's cloud perimeter
• Own data pipeline deployments end to end – scheduling, pagination, retry logic, and rate-limit management against client API gateways
• Manage distributed ETL jobs at scale – JSON flattening, schema enforcement, and structured output delivery
• Enforce data residency requirements – raw data stays inside the client's environment; only structured output leaves to our shared infrastructure
• Serve as the primary technical contact for each client's DevOps and infrastructure teams throughout the engagement
• Lead technical working sessions with client teams – validate configurations, confirm IAM and credential models, and review cluster specs before deployment
• Triage and resolve pipeline and infrastructure failures across multiple client environments simultaneously
• Implement container security standards – non-root execution, read-only filesystems, startup integrity hashes, tamper protection
• Mentor junior and mid-level engineers, and contribute implementation patterns to the shared playbook after every client engagement
WHAT YOU'LL NEED
• 6+ years of cloud engineering experience – production experience on at least two of GCP, AWS, or Azure, and willingness to operate across all three depending on the client environment
• Deep Terraform experience – you have provisioned multi-environment, multi-tenant production infrastructure from scratch, not just applied existing configurations
• Comfortable with Kubernetes and Helm in production – deploying, debugging, scaling, and securing containerized workloads, not just running them
• Experience running data pipelines in production – you know what breaks, how to recover, and how to build retry and backoff logic that actually holds up
• Hands-on experience with distributed processing at scale – not just theoretical knowledge
• Enterprise API integration experience – OAuth 2.0, API key management, rate limiting, API gateways – end to end
• Solid security fundamentals – container hardening, credential management, and data residency enforcement in environments where it actually matters
• Experience working directly with client or customer engineering teams – comfortable leading technical conversations with enterprise DevOps and infrastructure teams, not just supporting them
• You write things down – runbooks, integration notes, playbook contributions. The next engineer should be able to operate what you built
• Hands-on experience integrating LLM APIs (Anthropic, OpenAI, or equivalent) into production pipelines – not as an end user, but wiring inference into systems that run at scale across real client environments
NICE TO HAVE
• Experience with:
  • LLM inference infrastructure
  • Marketing technology stacks (DAM, CDP, CRM)
  • Multi-tenant client environments
• A cloud certification (AWS Solutions Architect, GCP Professional Cloud Architect, or Azure equivalent)
• Agency, consultancy, or product company background serving multiple enterprise clients simultaneously
ABOUT US
Born in 2001, Code and Theory is a digital-first creative agency that sits at the center of creativity and technology. We pride ourselves on not only solving consumer and business problems, but also helping to establish new capabilities for our clients. With a global client roster of Fortune 100s and start-ups alike, we crave the hardest problems to solve. We have teams distributed across North America, South America, Europe, and Asia. The Code and Theory global network of agencies is growing and includes Kettle, Instrument, Left Field Labs, Create Group, Current, and TrueLogic.
Striving never to be pigeonholed, we work across every major category: from tech to CPG, financial services to travel & hospitality, government and education to media and publishing. We value collaboration with our client partners, including but not limited to Adidas, Amazon, Con Edison, Diageo, EY, J.P. Morgan Chase, Lenovo, Marriott, Mars, Microsoft, Thomson Reuters, and TikTok.
The Code and Theory network comprises nearly 2,000 people, split evenly between engineers and creative talent. We're always on the lookout for smart, driven, and forward-thinking people to join our team.
The target range of base compensation for this role is $110,000 - $150,000. Actual compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, and location.