ABOUT JUNIPER SQUARE
Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets and, as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology, all while contributing as a member of a values-driven organization, we want to hear from you.
Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai, and Bangalore for employees who prefer to work in an office some or all of the time.
ABOUT YOUR ROLE
We are building a next-generation intelligent data platform for private markets: a greenfield initiative that will reshape how financial data is ingested, normalized, validated, enriched, and distributed across a complex ecosystem. This is a foundational role on a small, high-caliber seed team working at the intersection of modern data engineering and applied AI.
As a Senior Software Engineer on the Data Platform team, you will own the end-to-end delivery of core pipeline components: schema mapping, data normalization, validation, enrichment, and distribution to downstream systems. You will write production code every day, work closely with staff engineers to make meaningful architectural contributions, and help build the technical standards and practices that the broader team will grow into.
This role is for someone who executes with high craft and speed, is fluent in agentic development as a first-class part of their workflow, and wants the challenge and ownership that comes with building something genuinely new.
WHAT YOU'LL DO
Deliver Core Pipeline Components
β’ Build and ship production-quality implementations of the data normalization, schema mapping, validation, enrichment, and distribution pipeline for a net-new intelligent data warehouse
β’ Write clean, well-tested, performant code across the full stack: backend services, data pipeline logic, and API integrations
β’ Take end-to-end ownership of features from design through deployment, with accountability for correctness and reliability in production
Contribute to Technical Architecture
β’ Work closely with staff engineers to shape the architecture of a modern, AI-native data warehouse serving institutional financial clients
β’ Bring thoughtful input on schema design, normalization approaches, and API patterns, and execute those decisions with precision
β’ Identify and raise technical risks early; propose and implement solutions rather than waiting to be directed
Build and Operate with AI-Native Practices
β’ Use agentic coding tools and LLM-assisted development as your primary workflow; this is how the entire team operates
β’ Critically evaluate AI-generated code for correctness, edge cases, and regressions, shipping quality output regardless of how it was produced
β’ Contribute to the team's evolving practices around AI-accelerated development and testing
Support Data Quality and Reliability
β’ Build and maintain data validation checks, monitoring, and observability tooling that keeps the pipeline trustworthy at scale
β’ Participate in on-call and production support, diagnosing and resolving data quality issues quickly and thoroughly
β’ Write and maintain clear technical documentation for the systems you build
Collaborate Across Engineering and Product
β’ Partner effectively with staff and senior engineers, your engineering manager, and product management to translate requirements into well-scoped, executable work
β’ Participate in design and code reviews, offering and receiving feedback constructively
β’ Develop domain intuition around private markets data (fund administration, investment data schemas, institutional reporting) to make better technical decisions
QUALIFICATIONS
Required
β’ 4-7 years of software engineering experience, with a track record of shipping production systems end-to-end
β’ Strong full-stack engineering fundamentals in backend services, data pipelines, and API design; we will ask you to walk through systems you personally built
β’ Hands-on experience with data pipeline or data warehouse engineering: ETL/ELT patterns, schema design, normalization, and data distribution
β’ Production experience building with LLMs: prompt design, model integration, and output validation in real systems
β’ Fluency with AI-assisted and agentic development workflows; you use these tools daily and evaluate their output critically
β’ Experience with AWS data infrastructure; Redshift experience a plus
β’ Strong written communication: able to document technical decisions clearly for engineering and product audiences
Preferred
β’ Experience with RAG pipelines, vector stores (e.g. OpenSearch), or document extraction systems
β’ Background in financial services data: familiarity with fund administration, investment data schemas, or institutional reporting workflows is a meaningful differentiator
β’ Experience building data products for external customers, not just internal tooling
β’ Familiarity with evaluation frameworks for AI outputs: deterministic checks, cross-model comparison, or human-in-the-loop review patterns
COMPENSATION
Compensation for this position includes a base salary, equity, and a variety of benefits. The U.S. base salary range is $165,000 to $200,000 USD. Actual base salaries will be based on candidate-specific factors, including experience, skill set, and location, as well as local minimum pay requirements where applicable.
Benefits include:
- Health, dental, and vision care for you and your family
- Life insurance
- Mental wellness coverage
- Fertility and growing family support
- Flex Time Off in addition to company-paid holidays
- Paid family leave, medical leave, and bereavement leave policies
- Retirement saving plans
- Allowance to customize your work and technology setup at home
- Annual professional development stipend
Your recruiter can provide additional details about compensation and benefits.