Why Blue Coding?
At Blue Coding, we specialize in hiring excellent developers and amazing people from all over Latin America and other parts of the world. For the past 11 years, we've helped cutting-edge companies in the United States and Canada build great development teams and develop great products. Large multinationals, digital agencies, SaaS providers, and software consulting firms are just a few of our clients. Our team of over 150 engineers, project managers, QA specialists, UX/UI designers, and other professionals is distributed across more than 10 countries in the Americas. We are a fully remote company working with a wide array of technologies, and we have expertise in every stage of the software development process.
Our team is highly connected, united, and culturally diverse, and our collaborators are involved in many initiatives around the world, from wildlife preservation to volunteering at local charities. We stand for honesty, fairness, respect, efficiency, hard work, and cooperation.
This position is open exclusively to candidates based in LATAM countries.

What are we looking for?
We are looking for a Senior Data Engineer to join an agile, geographically distributed engineering team responsible for building and evolving the data layer that supports critical analytics, data science, and business operations.

This role is focused on leading and executing large-scale data migration and modernization initiatives, including the migration of on-premises SQL Server databases to cloud-based infrastructure. You will work closely with data engineering peers, analytics, and technology stakeholders to ensure data integrity, performance optimization, and reliability across enterprise data platforms.

This role operates within a flat, collaborative team structure with high individual ownership and a strong culture of trust. It offers a high degree of autonomy, minimal meetings, and close collaboration across teams. The work is primarily focused on building robust, scalable data pipelines and datastores, with continuous opportunities to improve and evolve the data platform.

If you are fully fluent in English, proactive, communicate well, like to solve problems, and have strong attention to detail, this role might be a great fit for you! Our jobs are fully remote, and you will be integrated directly into the client's team, gaining valuable experience and forming meaningful connections.

What's unique about this job?
This role offers the opportunity to be deeply involved in a large-scale data transformation, supporting the migration and modernization of enterprise data systems while enabling advanced analytics and data-driven decision making.

You will have real ownership across the data engineering lifecycle, from designing and maintaining data pipelines and datastores, to optimizing performance, ensuring data quality, and supporting downstream analytics and data science use cases. Beyond execution, you will contribute to technical decision-making, documentation, and mentoring, helping shape the long-term evolution of the data platform.

This is a term-based contract role, initially planned through October, subject to project progress.

Here are some of the exciting day-to-day challenges you will face in this role:
• Design, develop, and maintain data pipelines and datastores that support enterprise analytics, data science, and operational workloads.
• Lead and support large-scale database migration initiatives, including on-premises to cloud migrations.
• Monitor, analyze, and optimize the performance and stability of data layer services and platforms.
• Ensure data integrity, quality, and compliance across pipelines and datasets.
• Collaborate closely with peers across engineering, analytics, and technology teams.
• Guide, coach, and mentor data engineers, BI developers, and analysts.
• Design and implement enterprise-scale data solutions with long-term business impact.
• Build and maintain data processing solutions using Python and/or Scala.
• Work with a variety of data ingestion patterns, including SFTP, APIs, streaming, and batch processing.
• Design and support database models optimized for analytical and reporting use cases.
• Implement monitoring, alerting, and observability for data pipelines and infrastructure.
• Maintain clear and comprehensive documentation of data architectures, pipelines, and processes.
• Work within an Agile environment, collaborating through tools such as Jira and Git.
You will shine if you have:
• 5+ years of experience in DevOps or DataOps roles with a strong focus on AWS.
• Hands-on experience with AWS services such as EMR (Spark), Redshift, RDS, Glue, Lambda, Kinesis, Step Functions, EventBridge, SNS/SQS, KMS, and CloudWatch.
• Strong proficiency in Python and SQL.
• Experience working with relational, NoSQL, and columnar databases.
• Experience implementing Infrastructure as Code using Terraform or CloudFormation.
• Experience designing and maintaining CI/CD pipelines.
• Familiarity with data quality frameworks, observability tools, and data governance practices.
• Knowledge of handling sensitive data (PII) and compliance standards such as HIPAA.
• Bachelor's degree in Computer Science, Data Science, or equivalent experience.
• Proven ability to work autonomously and take ownership of infrastructure and data workflows in a production environment.
It doesn't hurt if you also have:
• AWS certifications (e.g., Data Engineer - Associate, DevOps Engineer - Professional).
• Experience with AWS DMS, Secrets Manager, SES, and containerization tools such as Docker.
• Experience with BI tools such as Tableau or Power BI.
• Experience with data observability platforms.
• Advanced degree in a related field.
Here are some of the perks we offer you:
• Salary in USD
• Flexible schedule (within US time zones)
• 100% remote
• Work with a modern AWS stack (EMR, Kinesis, Glue, etc.)
• High-impact role with ownership over key technical decisions
Ready to learn more? Apply below!