ABOUT DEVSAVANT
DevSavant is an operating partner for startups and growth-stage companies, helping them turn ambition into execution.
We support founders and leadership teams with product engineering and global staffing, from early prototypes and MVPs to scaling high-performing teams. Our vetted engineers across LATAM and Asia embed directly into client teams, operating as true extensions of those teams rather than as external vendors.
With over 8 years of experience in venture-backed ecosystems, DevSavant is trusted to accelerate delivery, scale teams efficiently, and support companies as they reach their next milestone.
ABOUT THE ROLE
We are seeking a Senior Data Engineer to join a cross-functional team working on scalable data systems and analytics infrastructure.
This is an individual contributor role focused on building, maintaining, and optimizing data pipelines and data models that power analytics and business-critical decision-making. The role requires a strong technical generalist mindset, combining software engineering principles with deep data expertise.
You will work closely with Data Science, Data Ops, and business stakeholders to ensure data is accurate, accessible, and structured for self-service analytics. The ideal candidate is someone who enjoys working with complex datasets, simplifying systems, and building scalable data infrastructure from the ground up.
KEY RESPONSIBILITIES
DATA ENGINEERING & PIPELINE DEVELOPMENT
- Own, build, maintain, and optimize scalable data pipelines
- Design and implement data architectures that support analytics and operational use cases
- Work with large, complex datasets to meet evolving business requirements
- Ensure data quality, reliability, and performance across systems
- Apply best practices for developing specialized datasets for analytics and modeling
- Continuously improve data workflows, pipelines, and infrastructure
DATA MODELING & ANALYTICS ENABLEMENT
- Develop a deep understanding of core data models and business logic
- Partner with Data Science and Data Ops teams to maintain trusted, well-documented datasets
- Enable self-service analytics by structuring and organizing data effectively
- Support analytical workflows and downstream consumption of data
- Assist analysts with query development and dataset preparation
CROSS-FUNCTIONAL COLLABORATION
- Work with a wide range of stakeholders to gather requirements and translate them into technical solutions
- Communicate complex technical concepts clearly to both technical and non-technical audiences
- Collaborate closely with engineering, analytics, and product teams
- Contribute to documentation and knowledge sharing across teams
INFRASTRUCTURE & SYSTEMS DESIGN
- Contribute to the design of scalable and maintainable systems
- Optimize data delivery and infrastructure for performance and scalability
- Support integration across multiple data platforms and tools
- Maintain and improve existing systems, including search and indexing solutions
DEBUGGING, OPTIMIZATION & RELIABILITY
- Independently troubleshoot complex systems and resolve data-related issues
- Perform root cause analysis and implement long-term fixes
- Improve system reliability and performance through monitoring and optimization
- Ensure stability and efficiency of data platforms
CORE TECHNICAL STACK
DATA & BACKEND
- SQL for querying, transformation, and data modeling
- Python or other general-purpose programming languages (e.g., JavaScript/TypeScript, Java, C#, Go, Scala)
- Experience with data pipeline tools such as Spark and dbt
- Data warehouses such as BigQuery, Snowflake, or Databricks
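To give candidates a concrete feel for the SQL-plus-Python work listed above, here is a minimal, purely illustrative sketch (the table and column names are hypothetical, and the standard-library sqlite3 module stands in for a cloud warehouse): a raw table is transformed into a derived, analytics-ready dataset, the kind of step a dbt model or pipeline task might own.

```python
import sqlite3

# Illustrative only: an in-memory "warehouse" with a hypothetical
# raw orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "2024-01-01", 10.0), (2, "2024-01-01", 5.0), (3, "2024-01-02", 7.5)],
)

# A derived, analytics-ready dataset (what a dbt-style model produces):
# daily revenue aggregated for self-service consumption downstream.
conn.execute(
    """
    CREATE TABLE daily_revenue AS
    SELECT day, SUM(amount) AS revenue, COUNT(*) AS order_count
    FROM orders
    GROUP BY day
    """
)
rows = conn.execute("SELECT * FROM daily_revenue ORDER BY day").fetchall()
print(rows)  # [('2024-01-01', 15.0, 2), ('2024-01-02', 7.5, 1)]
```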
DATA ORCHESTRATION & PROCESSING
- Workflow orchestration tools such as Airflow or Dagster
- Experience handling large-scale data processing and transformations
- Familiarity with batch and/or streaming data systems
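For context on what workflow orchestration means in practice, here is a hedged sketch (not a tool used in the role): orchestrators such as Airflow and Dagster execute tasks in dependency order, which the standard-library graphlib can mimic for a hypothetical extract-transform-load pipeline.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it
# depends on. Orchestrators schedule exactly this kind of dependency
# graph, adding retries, backfills, scheduling, and monitoring.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# Resolve a valid execution order for the graph.
run_order = list(TopologicalSorter(dag).static_order())
print(run_order)  # ['extract', 'transform', 'load', 'report']
```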
INFRASTRUCTURE & CLOUD
- Cloud platforms such as GCP or AWS
- Infrastructure as Code tools (Terraform, Pulumi, or CloudFormation)
- Experience designing scalable and maintainable systems
ADDITIONAL TOOLS & SYSTEMS
- Experience with backend engineering and web services is a plus
- Familiarity with analytics and data visualization ecosystems
- Exposure to transaction, receipt, or viewership data is beneficial
REQUIRED QUALIFICATIONS
- 5+ years of experience in software engineering, with at least 3 years focused on data engineering or data infrastructure
- Strong expertise in SQL and working with relational databases
- Experience building and maintaining scalable data pipelines
- Proficiency in at least one general-purpose programming language (Python preferred)
- Experience with modern data stack tools (e.g., Spark, dbt, Airflow/Dagster)
- Strong debugging and problem-solving skills in complex systems
- Experience working with cloud data warehouses (BigQuery, Snowflake, or Databricks)
- Ability to design data systems that support analytics and business intelligence
- Strong communication skills and ability to work cross-functionally
- Experience documenting and simplifying complex systems
NICE TO HAVE
- Experience with backend engineering and API development
- Experience with Infrastructure as Code (IaC) tools
- Exposure to system design for customer-facing or high-scale platforms
- Familiarity with analytics-heavy environments and data-driven products
- Experience working with large-scale, real-world datasets (e.g., transactions, behavioral data)
QUALITIES WE'RE LOOKING FOR
- Ownership mindset: Ability to take responsibility and drive systems end-to-end
- Technical versatility: Strong foundation as a software engineer with data expertise
- Problem-solving focus: Ability to navigate ambiguity and solve complex challenges
- Communication skills: Clear and effective collaboration across teams
- Execution-driven: Ability to move quickly and deliver results in a fast-paced environment
- Continuous improvement: Desire to refine systems, processes, and technical approaches