ABOUT ARENA INTELLIGENCE
Arena Intelligence is the open platform for evaluating how AI models perform in the real world. Created by researchers from UC Berkeley's SkyLab, our mission is to measure and advance the frontier of AI for real-world use.
Millions of people use Arena Intelligence each month to explore how frontier systems perform, and we use our community's feedback to build transparent, rigorous, and human-centered model evaluations. Leading enterprises and AI labs rely on our evaluations to understand real-world reliability, alignment, and impact. Our leaderboards are the gold standard for AI performance, trusted by leaders across the AI community and shaping the global conversation on model reliability and progress.
We're a team of researchers, engineers, academics, and builders from places like UC Berkeley, Google, Stanford, DeepMind, and Discord. We seek truth, move fast, and value craftsmanship, curiosity, and impact over hierarchy. We're building a company where thoughtful, curious people from all backgrounds can do their best work. Everyone on our team is a deep expert in their field; our office radiates excellence, energy, and focus.
ABOUT THE ROLE
Arena is seeking a Scientific Content Lead to define and defend the scientific credibility of the world's most trusted AI evaluation platform. You'll ensure that Arena's methodology, data quality practices, and evaluation results are understood clearly by researchers, labs, policymakers, analysts, and enterprises.
This role is deeply technical and highly cross-functional. You'll work directly with our research team to translate evaluation science into rigorous public communication and content, anticipate methodological critiques, and uphold Arena's commitment to transparency and neutrality.
YOU'LL
- Own Arena's scientific communications strategy, ensuring that our evaluation methodology, benchmarks, and data quality practices are clearly understood and accurately represented externally.
- Lead Arena's proactive data quality narrative, defending against common critiques and mischaracterizations through transparency, evidence, and high-integrity storytelling.
- Develop canonical explanations of Arena's measurement approach, including Bradley-Terry-Luce-style ranking, confidence intervals, and uncertainty-aware interpretation.
- Ensure that Arena's leaderboards are communicated responsibly: rankings are statistical estimates, small differences are often noise, and uncertainty must be preserved in public interpretation.
- Anticipate, track, and respond to methodological critiques, especially around contamination, overfitting, gaming, distribution shift, and evaluation validity.
- Partner closely with researchers to translate technical work into rigorous public materials, including methodology documentation, research posts, and open-source releases.
- Support Arena's Academic Partnerships Program, strengthening scientific connectivity through collaborations, citations, and peer-reviewed credibility.
- Create briefing materials for high-stakes audiences, including frontier AI labs, policymakers, analysts, and enterprise partners, ensuring that technical nuance survives external scrutiny.
- Serve as a scientific editor and reviewer across external communications, stress-testing claims before they become public narratives.
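To make the measurement bullets above concrete: the core ideas behind Bradley-Terry-style ranking and uncertainty-aware interpretation can be sketched in a few dozen lines. This is a minimal, illustrative example only, not Arena's actual pipeline; the function names and data are hypothetical. It fits Bradley-Terry strengths from pairwise win counts using the classic MM (Zermelo) update, then computes percentile-bootstrap confidence intervals by resampling battles and refitting.

```python
import random


def fit_bradley_terry(wins, n_models, iters=200):
    """Estimate Bradley-Terry strengths from a pairwise win matrix.

    wins[i][j] is the number of times model i beat model j.
    Uses the classic MM (Zermelo) update; strengths are normalized
    to sum to 1, so only ratios between models are meaningful.
    """
    p = [1.0] * n_models
    for _ in range(iters):
        updated = []
        for i in range(n_models):
            total_wins = sum(wins[i][j] for j in range(n_models) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_models) if j != i)
            updated.append(total_wins / denom if denom > 0 else p[i])
        norm = sum(updated)
        # Clamp away from exact zero so p[i] + p[j] never vanishes.
        p = [max(x / norm, 1e-12) for x in updated]
    return p


def bootstrap_intervals(battles, n_models, n_boot=200, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence intervals for BT strengths.

    battles is a list of (winner, loser) index pairs. Resampling whole
    battles and refitting shows how much rankings move under sampling
    noise: overlapping intervals mean a ranking gap may not be real.
    """
    rng = random.Random(seed)
    draws = [[] for _ in range(n_models)]
    for _ in range(n_boot):
        resample = [rng.choice(battles) for _ in battles]
        wins = [[0] * n_models for _ in range(n_models)]
        for w, l in resample:
            wins[w][l] += 1
        strengths = fit_bradley_terry(wins, n_models)
        for i, s in enumerate(strengths):
            draws[i].append(s)
    lo = int((alpha / 2) * n_boot)
    hi = int((1 - alpha / 2) * n_boot) - 1
    return [(sorted(d)[lo], sorted(d)[hi]) for d in draws]
```

For example, with three models where model 0 wins most of its battles, the fitted strengths recover the expected ordering, and wide or overlapping intervals flag exactly the situation described above: small leaderboard differences that are statistically indistinguishable from noise.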
YOU'LL HAVE
- 8-10 years of experience in AI/ML, evaluation, research, or scientific communications, with deep familiarity with how frontier model performance is measured and debated.
- Strong technical background in machine learning, benchmarking, or model evaluation, with the credibility to engage directly with leading labs and researchers.
- Exceptional writing and communication skills, especially the ability to explain complex methodology clearly without oversimplifying or overstating conclusions.
- Track record of producing scientifically rigorous external-facing work, such as technical publications, evaluation reports, methodology documentation, or research translation.
- Deep comfort operating in ambiguity, where uncertainty, tradeoffs, and limitations must be communicated transparently rather than smoothed over.
- High editorial judgment and the ability to identify where scientific nuance is most likely to be misunderstood or weaponized.
- Collaborative mindset and experience partnering across research, product, policy, and communications teams.
GREAT TO HAVE
- Direct experience working with large-scale human preference data, evaluation platforms, or benchmarking systems.
- Familiarity with common failure modes in AI evaluation, including contamination, overfitting, gaming, and distribution shift.
- Experience contributing to open source scientific tooling or methodology transparency efforts.
- Existing relationships within the AI research, safety, or evaluation community.
- Experience engaging with academic institutions, research alliances, or scientific journals.
- Comfort operating within neutrality and integrity constraints required of an independent evaluation platform.
WHAT WE OFFER
- We offer competitive compensation and equity aligned to the markets where our team members are based. The base salary range will depend on the candidate's permanent work location.
- Comprehensive health and wellness benefits, including medical, dental, vision, and additional support programs.
- The opportunity to work on cutting-edge AI with a small, mission-driven team.
- A culture that values transparency, trust, and community impact.
Come help build the space where anyone can explore and help shape the future of AI.
Arena Intelligence provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, genetics, sexual orientation, gender identity, or gender expression. We are committed to a diverse and inclusive workforce and welcome people from all backgrounds, experiences, perspectives, and abilities.