We are seeking a Senior Data Engineer to architect, build, secure, and optimize enterprise‑grade data platforms and end‑to‑end pipelines, while leading AI‑assisted engineering practices across the team. 
The ideal candidate brings deep expertise in Python, Snowflake, modern cloud ecosystems, data warehouse design, and DevSecOps-driven automation, and actively leverages AI tools (e.g., GitHub Copilot) to improve productivity, quality, and maintainability without compromising engineering rigor.
You will own technical direction, mentor engineers, and set standards for how AI is responsibly embedded into data engineering workflows. 
Key Responsibilities 
Data Engineering & Pipelines 
• Design, develop, and optimize scalable ELT/ETL pipelines using Python and SQL. 
• Build real-time, near-real-time, and batch frameworks using cloud-native services. 
• Implement incremental loads, CDC, SCD, schema evolution, and orchestration best practices.

Snowflake Engineering
• Architect and manage Snowflake environments: warehouses, databases, schemas, resource monitors, RBAC, zero-copy clones. 
• Implement Snowflake Tasks, Streams, Pipes (Snowpipe) for event-driven data workflows. 
• Optimize compute cost and query performance using clustering, micro-partitioning, caching, and warehouse sizing. 
• Leverage AI-assisted tools (e.g., GitHub Copilot, AI code-review assistants) to:
• Accelerate pipeline development 
• Refactor legacy code safely 
• Improve SQL and Python quality 
• Generate tests and documentation, validated through reviews

Cloud Engineering (AWS | Azure | GCP)
• Build and maintain production‑grade data solutions using cloud‑native services:  
• Azure: Blob Storage, ADLS, Azure Functions, Azure Data Factory (ADF), Purview (or equivalents in AWS/GCP)
• Design event‑driven and serverless architectures where appropriate 
• Use AI tooling to accelerate infrastructure design validation and failure analysis

Data Warehousing & Modeling
• Design enterprise-grade Data Warehouses, Data Marts, and Semantic Layers. 
• Implement Kimball, Data Vault, and modern ELT-first design patterns. 
• Work closely with BI/ML teams to operationalize features and analytics models.

DevSecOps & Platform Engineering
• Implement CI/CD pipelines for data engineering code (GitHub Actions / Azure DevOps / GitLab CI). 
• Enforce DevSecOps practices:  
• secret scanning 
• IaC security gates 
• dependency scanning 
• policy-as-code (OPA/Conftest) 
• Build infrastructure using Terraform / Azure Bicep / CloudFormation. 
• Encourage AI usage to:  
• Improve pipeline reliability 
• Reduce mean‑time‑to‑restore (MTTR) 
• Identify operational risks earlier

Automated Testing & Data Quality
• Define quality strategy and enforce shift‑left testing 
• Implement:  
• Data unit testing (pytest) 
• Schema and contract enforcement 
• Great Expectations / dbt tests 
• Automated data profiling 
• Build quality dashboards, lineage, and SLA/SLO monitoring 
• Use AI‑assisted tools to:  
• Identify data anomalies 
• Generate test scenarios 
• Improve coverage and consistency

AI Adoption, Learning & Engineering Excellence
• Champion responsible AI adoption across the data engineering function 
• Lead by example in:  
• Effective GitHub Copilot usage 
• Reviewing and validating AI‑generated code 
• Teaching best practices for AI‑assisted development 
• Continuously evaluate and pilot new AI tools relevant to data engineering 
• Drive a culture of continuous learning, experimentation, and measurable improvement

Leadership & Collaboration
• Lead architecture, design, and code reviews 
• Mentor and guide mid‑ and junior‑level engineers 
• Set engineering standards, patterns, and best practices 
• Partner with Product, Data Science, Security, and Business teams to deliver high‑impact data solutions 
• Influence roadmap decisions with a strong balance of cost, scalability, and reliability