<div>
<p>Product Owner, Operations (AI-First)</p>
<p>The Role<br>Belong is building the Residential Operating System: a fully integrated, AI-powered platform that manages homes, coordinates thousands of real-world service moments, and creates authentic belonging experiences for homeowners and residents. The member journey is the product. But the Residential OS only delivers on that promise if the operational machinery running beneath it is intelligent, instrumented, and self-improving.<br>Most companies say they are AI-first. At Belong, it means something specific: by the end of 2025, the majority of communications across sales, leasing, homecare, and concierge functions are AI-generated. Human Advisors and Concierges handle trust-critical moments. AI agents handle everything else: triage, scheduling, status updates, escalation routing, vendor coordination, documentation. The operations product surface is where that architecture lives or dies.<br>As Product Owner, Operations, your job is to design, deploy, and relentlessly improve the AI-powered system that runs the homeowner and resident journey from inspection through occupancy. You are not writing requirements for a future that engineers will build someday. You are shipping agent-driven workflows today, measuring their quality and deflection rates next week, and iterating the week after. This role is for someone who understands that the frontier of operations is not better dashboards. It is autonomous systems that perform with the judgment of your best operator, at infinite scale, at the moment the member needs it.</p>
<p>What You'll Own<br>AI agent architecture across the operational journey.<br>Every operational phase, from home preparation and move-in orchestration to homecare and maintenance, Pro coordination, and vendor scheduling, has a human workflow today and an AI-assisted target state. You will define that target state phase by phase: what the agent handles autonomously, what triggers human review, what escalates immediately. You will write the logic, instrument the outcomes, and own the quality bar. An agent that deflects volume but degrades CSAT is not a win. You hold both numbers simultaneously.<br>The agent-human handoff model.<br>The Member Journey Brief is explicit: humans are deployed at trust-critical moments; AI handles orchestration, speed, and precision behind the scenes. You are the person who defines exactly where that line sits, and who moves it systematically as agent quality improves. You will build confidence thresholds, fallback protocols, and human-in-the-loop checkpoints that protect the member experience while continuously expanding the autonomous surface area.</p>
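<p>The handoff model above can be pictured as a routing function over agent confidence. This is a minimal sketch, not Belong's actual implementation: the names, thresholds, and fields are illustrative assumptions.</p>

```python
from dataclasses import dataclass

# Illustrative thresholds -- real values would be tuned per operational phase.
AUTONOMOUS_MIN = 0.90   # agent acts without review at or above this confidence
REVIEW_MIN = 0.60       # between REVIEW_MIN and AUTONOMOUS_MIN: queue for human review

@dataclass
class AgentDecision:
    intent: str           # e.g. "schedule_vendor", "status_update"
    confidence: float     # the model's calibrated confidence in its own output
    trust_critical: bool  # moments where a human always owns the interaction

def route(decision: AgentDecision) -> str:
    """Decide whether the agent acts, a human reviews, or the case escalates."""
    if decision.trust_critical:
        return "human"           # trust-critical moments always go to a person
    if decision.confidence >= AUTONOMOUS_MIN:
        return "autonomous"      # agent handles it end to end
    if decision.confidence >= REVIEW_MIN:
        return "human_review"    # agent drafts, a human approves
    return "escalate"            # low confidence: immediate escalation

print(route(AgentDecision("status_update", 0.95, False)))       # autonomous
print(route(AgentDecision("move_in_walkthrough", 0.95, True)))  # human
```

<p>Moving the line as agent quality improves then becomes a matter of lowering <code>AUTONOMOUS_MIN</code> per phase, with instrumentation watching what happens when you do.</p>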
<p>LLM-powered communication workflows.<br>Belong's target is 80% AI-generated communications across operational functions by Q3. You will own the product layer that makes this real for operations: the prompt architecture, context retrieval pipelines, output quality review systems, and the feedback loops that improve generation quality over time. You will define what context an agent needs to respond like a trained Concierge, and build the retrieval and injection infrastructure that delivers it.<br>Foundation as the AI control panel.<br>Foundation is where Belong's operational teams live. Every tool your squad ships into Foundation either creates leverage for humans or replaces manual work with agent-driven automation. You will define the roadmap for Foundation's evolution from task management system to AI control panel: where agents surface for review, where exceptions queue for human action, where quality scores and deflection rates are visible in real time.<br>Operational instrumentation and model feedback.<br>AI systems degrade without structured feedback. You will build the instrumentation that captures ground truth: CSAT signals, escalation rates, rework rates, SLA breach patterns, and member sentiment. You will design the feedback loops that push this signal back into model evaluation and prompt improvement. You are not shipping a model. You are shipping a system that learns.</p>
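<p>The retrieval-and-injection pipeline described above can be sketched in a few lines. Everything here is an assumption for illustration: the in-memory store stands in for a real vector index, and the prompt template, field names, and member data are invented.</p>

```python
# Minimal sketch of context retrieval and injection for an operational reply.
# A real pipeline would use embedding similarity over CRM and operational state;
# this stand-in filters by keyword to keep the example self-contained.

def retrieve_context(member_id: str, topic: str, store: dict) -> list[str]:
    """Stand-in for vector retrieval: return snippets relevant to the topic."""
    return [s for s in store.get(member_id, []) if topic in s.lower()]

def build_prompt(member_name: str, question: str, context: list[str]) -> str:
    """Inject retrieved operational state into the generation prompt."""
    context_block = "\n".join(f"- {c}" for c in context) or "- (no records found)"
    return (
        "You are a Concierge. Answer using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Member {member_name} asks: {question}\n"
        "If the context is insufficient, escalate to a human."
    )

store = {"m-42": ["Homecare visit scheduled for Friday", "Lease renews in June"]}
prompt = build_prompt("Ana", "When is my homecare visit?",
                      retrieve_context("m-42", "homecare", store))
print(prompt)
```

<p>The product decisions live in the two functions: what counts as relevant context, and what the agent is told to do when that context is thin.</p>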
<p>The AI Stack You Will Work With</p>
<ul>
<li>LLM-based communication generation with context injection from CRM and operational state</li>
<li>Agentic scheduling and coordination workflows (Homecare triage, Pro dispatch, vendor coordination)</li>
<li>Automated escalation routing based on signal classification</li>
<li>Quality scoring and anomaly detection on agent outputs</li>
<li>Retrieval-augmented generation for Concierge and Homecare agent context</li>
</ul>
<p>You do not need to build the infrastructure from scratch. You need to define what it should do, instrument it, and iterate on it.</p>
<p>What Success Looks Like<br>90 days: Every operational phase has a documented AI target state with defined autonomous scope, human escalation thresholds, and instrumentation in place.<br>6 months: AI-assisted workflows have measurably reduced manual communication volume across at least two operational functions with no CSAT degradation.</p>
<p>Year 1: The majority of routine operational communications in your product surface are AI-generated. Human operators are handling exceptions, escalations, and trust-critical moments, nothing else. Failed move-in rate is below 3%. Time-to-list is trending down quarter over quarter.</p>
<p>Example KPIs You Will Be Held To</p>
<ul>
<li>AI deflection rate vs. manual handling baseline, by operational function</li>
<li>CSAT from homeowners and residents at each operational phase (the constraint: deflection gains cannot come at CSAT cost)</li>
<li>SLA compliance rates for homecare and Pro services</li>
<li>Time-to-list (inspection to live listing)</li>
<li>Move-in readiness rate and failed move-in rate</li>
<li>Human escalation rate as a quality signal on agent confidence calibration</li>
</ul>
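<p>The deflection-vs-CSAT constraint in the KPI list above amounts to a paired guardrail: deflection only counts when CSAT holds. A minimal sketch, with an assumed 5-point CSAT scale and an invented guardrail value:</p>

```python
# Illustrative guardrail logic: the function names, the 5-point CSAT scale,
# and the allowed drop of 0.02 are assumptions, not real operating targets.

def deflection_rate(ai_handled: int, total: int) -> float:
    """Share of volume handled by agents with no human touch."""
    return ai_handled / total if total else 0.0

def passes_guardrail(deflection: float, csat: float, baseline_csat: float,
                     max_csat_drop: float = 0.02) -> bool:
    """Deflection gains only count if CSAT stays within the guardrail."""
    return deflection > 0 and csat >= baseline_csat - max_csat_drop

rate = deflection_rate(ai_handled=640, total=800)            # 0.8
print(passes_guardrail(rate, csat=4.58, baseline_csat=4.60))  # True: within guardrail
print(passes_guardrail(rate, csat=4.40, baseline_csat=4.60))  # False: CSAT cost too high
```

<p>Reporting the pair per operational function, rather than a blended number, is what keeps a deflection win from hiding a CSAT loss in one phase.</p>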
<p>Who You Are<br><strong>AI systems thinker.</strong> You do not think about AI features. You think about AI systems: input context, output quality, fallback behavior, quality measurement, and continuous improvement loops. You have designed or operated LLM-powered workflows in a production environment and understand how they fail, not just how they work.<br><strong>Operationally grounded.</strong> You have worked in environments where things break in the real world, with real vendors, real homes, real members, and you understand that an agent operating without the right context is more dangerous than no agent at all. You design for failure modes first.<br><strong>Outcome obsessed.</strong> You hold deflection rate and CSAT simultaneously. You do not celebrate automation that degrades experience. You ship, measure, and decide based on what the numbers actually say.<br><strong>Technically fluent.</strong> You can write a SQL query, read a vector similarity result, reason about retrieval quality, and understand the tradeoffs in a prompt engineering decision. You do not need to write the code. You need to understand it well enough to make the right call.<br><strong>Cross-functional driver.</strong> Operations, Homecare, Leasing, Vendor Ops, and Engineering all touch your surface. You run the rituals, translate across languages, and hold the delivery cadence.</p>
<p>What You Bring</p>
<ul>
<li>3 to 5 years of product experience, with at least 1 to 2 years directly building or operating AI-powered products in a production environment</li>
<li>Hands-on experience with LLM integrations, prompt engineering, RAG pipelines, or agentic workflow design</li>
<li>Demonstrated ownership of operational tooling or service orchestration products in a marketplace, logistics, or operations-intensive environment</li>
<li>Proficiency with data: SQL, funnel analysis, and the ability to detect when a metric is being gamed or misread</li>
<li>Experience with AI evaluation frameworks and output quality measurement is a strong advantage</li>
<li>Prior work in consumer real estate, hospitality, or residential services is a plus</li>
</ul>
</div>