ABOUT FAR.AI
FAR.AI is a non-profit AI research institute working to ensure advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.
Since our founding in July 2022, we've grown to 45+ staff, published 40+ academic papers, and convened leading AI safety events. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and features in the Financial Times, Nature News, and MIT Technology Review. We conduct pre-deployment testing on behalf of frontier developers such as OpenAI and independent evaluations for governments including the EU AI Office. We help steer and grow the AI safety field through collaborating with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI safety-focused co-working space in Berkeley housing 40 members; and supporting the community through targeted grants to technical researchers.
ABOUT RED-TEAMING AT FAR.AI
FAR.AI's red team is building toward a simple outcome: materially raising the bar for the safety and security of the most widely deployed and capable AI systems in the world. We intend to be the tip of the spear in AI safety: the team that consistently finds the failures others miss, drives real mitigations, and sets the standard that labs and governments converge on. We also leverage our in-depth understanding of weaknesses in frontier models to advise frontier developers on mitigations, to guide our own research and grantmaking for improving model security, and to inform the public of key AI risks.
We are already one of the leading independent red-teaming organizations. Our work has helped most Western frontier model developers improve safeguards through pre- and post-deployment testing; for example, we have directly influenced safeguards at OpenAI and Anthropic. Our work also supports high-leverage government efforts; for example, we lead a consortium building CBRN evaluations for the European Commission/EU AI Office and collaborate with the UK AI Security Institute.
In 2026, we are scaling from a strong team with standout wins to a new level of global impact for an AI red team:
- Red-teaming all major frontier model releases (closed and open-weight) within days/weeks of release;
- Expanding strategic engagements with governments and conducting pre-deployment testing with most frontier labs;
- Deepening our testing of key risk areas like CBRN, cyber, and agents, and exploring new ones like AI control and alignment;
- Building tools, agents, and insights that raise the global standard for red-teaming.
ABOUT THE ROLE
FAR.AI is hiring a Technical Project Manager to be the delivery backbone of one of the world's most impactful frontier AI red-teaming programmes. You will own the delivery of some of our highest-stakes engagements with governments and frontier AI companies, support technical engagements and outcomes, assist our red-team recruiting, and be the operational glue that lets our red-team succeed.
This is a force-multiplier role. You will report to Edward Yee (Head of Growth & Strategy, who also leads the red-team), with a dotted line to Kellin Pelrine (co-lead and technical lead of the red-team). You will work alongside our red-teamers, researchers, and the rest of the team to turn a fast-growing team and portfolio of engagements into a reliable, high-velocity delivery team.
The red-team has ambitious goals and is evolving rapidly, and we expect this role to evolve with the priorities of the team. Our team is scaling from 2 to 15+ this year, with candidate profiles that require active sourcing and careful shepherding in order to hire the very best talent. Our engagement portfolio across frontier labs and governments is already complex and will grow further. You will be moderately technical: comfortable enough with red-teaming substance to read a report, engage credibly with technical colleagues, do technical writing and reviews, and understand what we are hiring for. You do not need to jailbreak models yourself, but you should be able to meet our red-teamers on their terms. Our best guess for the shape of the role in 2026:
- Engagement delivery (~35%). Own the delivery of our largest multi-party engagement. Take delivery ownership of other engagements as they come in, such as frontier-model red-teaming engagements and government RFPs. Run the request for proposals (RFP) and opportunity pipeline: tracking live opportunities, scoping new engagements, drafting proposals, and supporting contract negotiations. Programme-manage new initiatives as they land, such as grantmaking.
- Red-team recruiting (~30%). Own the red-team hiring pipeline end-to-end. Run daily pipeline management (candidate correspondence, scheduling, early advance/reject calls). Headhunt and source top talent, working with Kellin and Edward to refine target lists and reach passive candidates. Identify bottlenecks and smooth the process. Help write JDs, plan and run work trials, support candidates through the experience, and assess them at the end. This complements our org-wide technical recruiting function rather than replacing it.
- Misc (~35%). The red-team's needs will shift through the year. Likely stretch work includes analysis (e.g., identifying bottlenecks in our throughput, mapping the ecosystem of competitor and partner organisations), technical writing, event organising (convenings, workshops), support on policy and government-facing work, grant applications, and ad-hoc projects that need an owner. We will allocate this together based on what the team needs most, where you have comparative advantage, and where you want to grow.
We expect the balance to shift as the team grows.
This role is a great fit if you
- Have shipped complex, multi-stakeholder technical projects on real deadlines, ideally involving government counterparts or frontier technology.
- Enjoy recruiting and see it as core strategic work, not a box to tick. You get energy from finding the right person, pulling them through a process, and getting them to say yes.
- Are motivated by impact over recognition, publishing papers, or building a personal policy brand.
- Are excited to be a force multiplier for one of the most impactful teams in the world.
- Have low ego and the drive to do whatever work most advances the team's goals, even when it's behind the scenes, involves challenging tasks and schedules, or falls outside a narrow role definition.
- Are comfortable being moderately technical.
- Enjoy moving fast in ambiguous environments where priorities shift and resourcefulness matters more than process.
- Take ownership of outcomes, not tasks: you ask whether the thing we care about is actually going to happen, and if not, you act.
This role is a poor fit if you
- Prefer to write specs and hand them to engineers: this role requires you to be close to the work.
- See recruiting as administrative overhead rather than strategic work.
- Need a highly structured environment with stable problem definitions and clear playbooks.
- Are motivated mainly by compensation, title, or visible authority over senior relationships.
- Are uncomfortable working closely with a technical team and engaging with technical material.
- Are not willing to be relentless.
ABOUT YOU
Strong candidates typically have many (but not all) of the following:
- Substantial programme or engagement management experience (5+ years) in a high-velocity technical environment (frontier labs, AI safety organisations, AISIs, technical consultancies, government programme offices, or scaling technical startups).
- A track record of delivering complex multi-party programmes to hard deadlines, with clear evidence of the judgment calls you made and the trade-offs you owned.
- Comfort with the technical substance of our work. You do not need to be a researcher or engineer, but you should be able to read technical reports (red-teaming, security/vulnerability, etc.) and form a view on what matters, engage credibly with our technical team, and be curious enough to dig in when it's relevant.
- Experience running or materially contributing to technical recruiting: sourcing, pipeline management, work-trial design, or end-to-end hiring. You don't need to have been a full-time recruiter, but you should have shown you can own hiring outcomes.
- Experience drafting proposals, responses to RFPs, or similar written artefacts for government or frontier technology counterparts.
- Strong written communication: the ability to write status updates, risk memos, outreach to candidates, and external briefs with minimal editing.
- Demonstrated ability to operate with significant autonomy and good judgment in high-stakes settings.
It is a plus (but not required) if you have:
- Familiarity with AI safety as a field β the risk models, the landscape of labs and institutes, and the case for independent testing.
- Experience working with governments, frontier AI companies, or AI Safety organisations.
- A technical background in ML, cybersecurity, software engineering, or a related field.
- Previous exposure to AI evaluations, red-teaming, or AI governance work.
- Experience running grantmaking or RFP processes.
LOGISTICS
If based in the USA or Singapore, you will be an employee of FAR.AI (a 501(c)(3) research non-profit in the USA; a non-profit company limited by guarantee in Singapore). Outside the USA or Singapore, you will be employed under an employment contract via an employer-of-record (EOR) organisation on behalf of FAR.AI, or engaged as a contractor.
- Location: Remote globally. We can sponsor US or Singapore visas.
- Hours: Full-time. Expect up to one trip per month for convenings, government meetings, or team gatherings.
- Compensation: USD 125,000–190,000, depending on experience. Exceptional candidates may be offered more.
We know these roles are rare and the skill combination is unusual. If you're uncertain whether your background fits but are excited by the mission and challenges, we encourage you to apply: we're looking for excellence and potential, not a perfect resume match.