Data Scientist, AI Data Foundations
About the Role
Reporting to the Data Engineering organization, the Data Scientist is responsible for designing and building the curated data structures that AI and ML applications consume across MeridianLink. You will own the vector stores behind our RAG systems, the feature store that powers model training and inference, and the graph databases that capture relationships across applicants, products, and decisions. You will also lead targeted data discovery work, surfacing hidden trends in our lending and account-opening data that inform both AI use cases and the broader business.
This is a hands-on, build-oriented role. You will not be primarily training large models; you will make sure the people training and serving models have high-quality, well-governed, well-engineered data to work with, and you will use your data science skills to validate that the data is fit for purpose.
What You Will Do
• Build and maintain vector stores for RAG: Design embedding pipelines, chunking strategies, indexing approaches, and refresh patterns for the vector stores powering retrieval-augmented generation across MeridianLink products.
• Own the feature store: Design, build, and operate feature store assets used for model training and online/offline inference, including feature definitions, freshness SLAs, lineage, point-in-time correctness, and reuse across teams.
• Design graph data structures: Build graph databases that model relationships between applicants, applications, products, lenders, decisions, and outcomes, and make them queryable for both AI use cases and analytical investigations.
• Lead data discovery: Profile our lending, deposit, and behavioral datasets to identify hidden trends, segments, anomalies, and potential model drivers; turn findings into actionable hypotheses for product, risk, and growth teams.
• Engineer for AI consumption: Build the curated, AI-ready datasets that downstream model builders, application engineers, and analysts rely on, with appropriate quality, documentation, and governance baked in.
• Evaluate retrieval and feature quality: Define and run evaluation frameworks for RAG retrieval quality, feature drift, embedding quality, and graph completeness; iterate based on what the metrics tell you.
• Partner with model builders: Work closely with ML engineers and applied scientists to make sure the data structures you build accelerate their work rather than slow it down.
• Champion responsible data use: Partner with governance, security, and compliance to ensure that AI-facing data assets respect data classification, customer consent, and regulatory boundaries from day one.
• Communicate findings: Translate discovery work into clear narratives (write-ups, notebooks, dashboards, and short presentations) that help non-technical stakeholders act on what the data is showing.
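For a flavor of the embedding-pipeline work above: a common first pass at chunking is fixed-size character windows with overlap. This is an illustrative sketch only; the sizes are hypothetical defaults, not MeridianLink settings.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Overlap preserves context that straddles a chunk boundary, at the
    cost of some redundant storage in the vector store.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Real pipelines usually chunk on semantic boundaries (sentences, sections) rather than raw characters, but the same size/overlap trade-off applies.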
Required Qualifications
• 4–7 years of experience in a data science, ML engineering, or applied data role, with a meaningful portion of that time spent building data assets that other people's models or applications consumed.
• Hands-on experience designing and operating vector stores for RAG or semantic search, including embedding generation, chunking, indexing, and retrieval evaluation.
• Experience building or operating a feature store (e.g., Databricks Feature Store, Feast, or a custom internal platform), including offline training and online serving patterns and point-in-time correctness.
• Experience modeling and building graph data structures using Neo4j, TigerGraph, Azure Cosmos DB (Gremlin API), or similar graph databases, and writing graph queries to answer real questions.
• Strong proficiency in Python (pandas, NumPy, scikit-learn, PySpark) and SQL; comfortable working day-to-day in Databricks notebooks and jobs.
• Practical experience with embedding models and LLM tooling (e.g., Hugging Face Transformers, OpenAI / Azure OpenAI APIs, LangChain, or similar) in a production or near-production context.
• Demonstrated data discovery skills: profiling messy real-world datasets, surfacing non-obvious patterns, validating findings statistically, and explaining them clearly.
• Solid grounding in classical ML concepts (supervised vs. unsupervised learning, train/test discipline, leakage, evaluation metrics), even though you will not own model training day-to-day.
• Strong written and verbal communication skills; able to write up findings for both technical and business audiences.
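To make the point-in-time correctness requirement above concrete: an offline training join should only ever see feature values computed at or before each label event, never after. A minimal pandas sketch (all column names and values hypothetical):

```python
import pandas as pd

# Hypothetical feature history and label events.
features = pd.DataFrame({
    "applicant_id": [1, 1, 2],
    "feature_ts": pd.to_datetime(["2024-01-01", "2024-02-01", "2024-01-15"]),
    "util_ratio": [0.30, 0.45, 0.60],
})
labels = pd.DataFrame({
    "applicant_id": [1, 2],
    "event_ts": pd.to_datetime(["2024-01-20", "2024-01-10"]),
    "defaulted": [0, 1],
})

# merge_asof with direction="backward" takes, per applicant, the latest
# feature row with feature_ts <= event_ts; both sides must be sorted on
# their time keys. Applicant 1 gets the January value (0.30), not the
# later February one; applicant 2 gets NaN rather than a leaked future
# value computed after the event.
train = pd.merge_asof(
    labels.sort_values("event_ts"),
    features.sort_values("feature_ts"),
    left_on="event_ts",
    right_on="feature_ts",
    by="applicant_id",
    direction="backward",
)
```

Feature stores automate this discipline at scale, but this is the invariant they enforce.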
Preferred Qualifications
• Experience working in a SaaS or FinTech environment, particularly with lending, deposit, credit, fraud, or KYC/AML data.
• Experience with Databricks-native AI/ML tooling: Databricks Vector Search, Databricks Feature Store, MLflow, and Unity Catalog.
• Familiarity with vector databases and libraries such as pgvector, Pinecone, Weaviate, Chroma, or FAISS, and a clear point of view on when to use which.
• Experience with Microsoft Azure data and AI services (Azure OpenAI, Azure AI Search, ADLS Gen2).
• Experience evaluating RAG systems end-to-end (recall@k, faithfulness, answer quality, hallucination measurement).
• Exposure to graph algorithms (community detection, link prediction, centrality) applied to real business problems.
• Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related quantitative field, or equivalent professional experience.
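Of the retrieval metrics listed above, recall@k is the simplest: the fraction of queries whose relevant document appears in the top-k retrieved results. A minimal sketch, assuming one relevant document per query (real evaluations often use graded or multi-document relevance):

```python
def recall_at_k(retrieved: dict[str, list[str]],
                relevant: dict[str, str],
                k: int) -> float:
    """Fraction of queries whose relevant doc id is in the top-k results.

    retrieved: query id -> ranked list of retrieved doc ids.
    relevant:  query id -> the single relevant doc id.
    """
    hits = sum(
        1 for query, doc_id in relevant.items()
        if doc_id in retrieved.get(query, [])[:k]
    )
    return hits / len(relevant)
```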
Our Data & AI Stack
• Lakehouse: Azure Databricks, Delta Lake, Unity Catalog, PySpark, SQL
• AI Data Foundations: Databricks Vector Search, Databricks Feature Store, MLflow
• Vector & Graph (current and exploratory): pgvector, Pinecone, Weaviate, FAISS; Neo4j, TigerGraph, Azure Cosmos DB (Gremlin)
• Cloud: Microsoft Azure (ADLS Gen2, Azure OpenAI, Azure AI Search, Event Hubs)
• AI Models and Agents: Databricks, Amazon Bedrock, Azure ML
• Integration & Governance: Informatica Data Management Cloud (IDMC), Unity Catalog