We're hiring a Data Engineer to build the pipelines that move financial institutions' messiest data into our escheatment platform and back out as compliant, audit-ready records. This is a hands-on role for someone who finds satisfaction in turning chaotic provider data into clean, reliable systems, and who treats data quality as a product feature rather than an afterthought.
You'll design and operate the ingestion, transformation, and warehousing layer that powers the platform, starting with nightly files from banks and fintechs whose schemas, date formats, and tax-ID conventions are their choices, not ours. You'll build the pipelines that reconcile millions of accounts and billions of dollars in balances, and you'll own the systems that keep them running.
The ideal candidate is rigorous about data correctness, comfortable making judgment calls under ambiguity, and skeptical of any pipeline that silently drops bad rows. You've seen enough real-world data to know that the schema in the documentation rarely matches the schema in the file, and you build accordingly. You move fast, but you check your output before declaring it done.
Eisen is based in New York City, and we work onsite four days a week. We believe in-person collaboration helps us move faster and build better.
ABOUT US
Eisen is building the first account offboarding platform for financial institutions. We streamline manual processes like escheatment, disbursement, and customer outreach, helping banks and fintechs close the loop on inactive accounts and reunite people with their money. We just raised our Series A and are looking to expand and build a strong engineering team.
KEY RESPONSIBILITIES
Data Pipelines & Infrastructure
- Build and operate ingestion pipelines that handle provider data with inconsistent schemas, formats, and quality
- Design transformations that apply complex domain rules (state-by-state escheatment regulations, dormancy calculations, anonymization requirements) reliably and traceably
- Own the data warehouse layer (Snowflake): modeling, performance, and the pipelines that feed it
- Treat rejected records, schema drift, and data quality issues as first-class signals; build the observability to catch them before customers do (see the sketch after this list)
- Use AI-powered tools to accelerate development on schema mapping, rule encoding, and pipeline scaffolding
- Apply strong computer science fundamentals to design systems that handle large-scale data efficiently and correctly
- Contribute to data strategy by translating ambiguous compliance requirements into clear, testable pipeline logic
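To make "rejected records as first-class signals" concrete, here is a minimal sketch of what that can look like in a pandas validation step. The file name, column names, and rules below are invented for illustration, not Eisen's actual schema:

```python
# A minimal sketch of "don't drop bad rows silently": validate, annotate,
# and route failures to a quarantine set with reasons attached.
# File name, columns, and rules are illustrative, not Eisen's real schema.
import pandas as pd

def validate_accounts(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split a provider file into clean rows and quarantined rows with reasons."""
    reasons = pd.Series("", index=df.index)

    # Each check appends a reason instead of filtering rows away.
    bad_date = pd.to_datetime(df["last_activity_date"], errors="coerce").isna()
    reasons[bad_date] += "unparseable last_activity_date;"

    bad_balance = pd.to_numeric(df["balance"], errors="coerce").isna()
    reasons[bad_balance] += "non-numeric balance;"

    # Tax IDs arrive in many formats; normalize before checking length.
    tax_id = df["tax_id"].astype(str).str.replace(r"[^0-9]", "", regex=True)
    reasons[tax_id.str.len() != 9] += "tax_id not 9 digits;"

    quarantined = df[reasons != ""].assign(reject_reason=reasons[reasons != ""])
    clean = df[reasons == ""]
    return clean, quarantined

clean, quarantined = validate_accounts(pd.read_csv("provider_nightly.csv", dtype=str))
# The quarantine rate is a first-class signal: alert when it jumps.
print(f"{len(clean)} clean rows, {len(quarantined)} quarantined")
```

The point is the shape, not the specifics: every rejected row carries a reason, and the quarantine rate becomes something you can monitor and alert on rather than a silent filter.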
ABOUT YOU
- 3+ years building production data pipelines, ideally in a startup or 0-to-1 environment
- Fluent with Python and the data stack: pandas at minimum, plus orchestration (Airflow or similar), warehousing (Snowflake, BigQuery, or similar), and streaming (Kafka a plus)
- Comfortable working across operational and analytical stores (we use MongoDB → Kafka → Snowflake)
- Strong SQL: you can debug a slow query, and you know why your join doubled the row count (illustrated after this list)
- Rigorous about data quality: you don't drop bad rows silently, and you build pipelines that explain themselves when something looks off
- Comfortable making judgment calls on edge cases and naming your assumptions out loud
- Curious about the domain: you want to understand why the rules are the way they are, not just implement them
- Fluent in using AI tools (Claude, ChatGPT, Copilot, etc.) to accelerate iteration
- Collaborative, thoughtful communicator who makes people around you better
- Bonus: Experience in fintech, regulatory data, or any domain where getting the numbers wrong has real consequences
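To make the SQL bullet concrete, here is the row-doubling failure mode reproduced with pandas merges rather than SQL; the tables and values are invented for the example:

```python
# A tiny, self-contained illustration of "your join doubled the row count":
# a non-unique join key silently duplicates rows and double-counts balances.
import pandas as pd

accounts = pd.DataFrame({"account_id": [1, 2], "balance": [100.0, 250.0]})

# The dimension table unexpectedly has two rows for account 1
# (e.g., a slowly changing dimension without an effective-date filter).
owners = pd.DataFrame(
    {"account_id": [1, 1, 2], "owner": ["Ana (old)", "Ana (new)", "Ben"]}
)

joined = accounts.merge(owners, on="account_id")
print(len(joined))              # 3 rows, not 2: account 1 was duplicated
print(joined["balance"].sum())  # 450.0, not 350.0: balances double-counted

# The fix is to make the join key unique before joining, not to drop rows after.
latest = owners.drop_duplicates("account_id", keep="last")
print(accounts.merge(latest, on="account_id")["balance"].sum())  # 350.0
```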
COMPENSATION & BENEFITS
- Competitive salary and equity package
- 100% employer-paid health, dental, and vision insurance
- 401(k) with company match
- Unlimited PTO
- Lunch covered on in-office days
- OneMedical membership, commuter benefits, and more
Join us at Eisen and play a critical role in delivering a seamless, compliant, and trustworthy offboarding experience for financial institutions and their customers.