About You
You are a Data Engineer passionate about building scalable, reliable, and high-performing data solutions that enable analytics, reporting, and data-driven decision-making. You enjoy working with modern cloud-based data platforms and designing robust data architectures that support evolving business needs.
You bring a proactive, autonomous, and detail-oriented mindset, with strong problem-solving skills and a solid technical foundation in data engineering practices. You are comfortable working across batch and streaming environments, optimizing data pipelines, and ensuring data quality, governance, scalability, and performance.
You thrive in collaborative environments, partnering with technical and non-technical stakeholders to deliver accessible, secure, and well-structured data solutions. You are committed to continuous improvement and enjoy contributing to the evolution of scalable cloud-based data platforms and engineering best practices.
• This position is open to candidates based in El Salvador, Colombia, or Mexico.

You bring to Applaudo the following competencies:
• Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or a related field.
• Strong proficiency in SQL (advanced level) – fundamental requirement.
• Strong coding experience with Python for data engineering and pipeline development.
• Hands-on experience with cloud data platforms, particularly GCP services such as BigQuery, Dataflow, Dataform, Pub/Sub, GCS, Firestore, and Spanner.
• Proven experience designing, building, and maintaining scalable ETL/ELT pipelines.
• Strong understanding of data modeling concepts and modern data architectures, including data lakes, data warehouses, and lakehouse solutions.
• Experience developing batch and streaming data pipelines.
• Experience with workflow orchestration, version control, CI/CD practices, and infrastructure-as-code approaches.
• Knowledge of monitoring, logging, debugging, and troubleshooting data systems.
• Experience with performance tuning, scalability, and cost optimization of cloud data workloads.
• Familiarity with data governance, data lineage, metadata management, and security/access control practices.
• Experience supporting analytics and reporting use cases through well-structured and governed datasets.
• Familiarity with BI and analytics tools such as Looker Studio and Looker Platform is a plus.
• Understanding of KPI definition, reporting needs, and data consumption patterns is desirable.
• Strong analytical thinking, troubleshooting, and problem-solving skills.
• Strong communication and collaboration abilities with both technical and business stakeholders.
• Highly proactive, autonomous, and self-driven approach to work.
• English proficiency (B2 or higher).

You will be accountable for the following responsibilities:
• Design, build, and maintain scalable batch and streaming data pipelines using GCP technologies.
• Develop ingestion frameworks and integrate data from internal and external sources.
• Implement and optimize robust ETL/ELT processes, ensuring reliability, scalability, fault tolerance, and performance.
• Monitor, troubleshoot, and maintain high availability and integrity of data pipelines and workflows.
• Contribute to the design and evolution of the organization's cloud-based data platform architecture.
• Implement and optimize data storage and processing solutions using services such as BigQuery, Dataflow, Dataform, Pub/Sub, GCS, Firestore, and Spanner.
• Ensure scalability, cost efficiency, and performance optimization across data workloads.
• Support CI/CD practices and infrastructure automation for data engineering workflows.
• Develop and maintain scalable, reusable, and standardized data models aligned with best practices.
• Implement data validation, quality checks, and monitoring frameworks to ensure data reliability and governance.
• Maintain data lineage, metadata, technical documentation, and governance standards.
• Enforce data governance, security, and access management policies.
• Provide high-quality, structured, and governed datasets for reporting, analytics, and downstream consumers.
• Collaborate with analysts, stakeholders, and cross-functional teams to ensure data usability, accessibility, and alignment with business requirements.
• Support the definition of data contracts, SLAs, and data quality standards.
• Contribute to the development and optimization of curated datasets and analytical data products.
• Assist in exploratory analysis and performance optimization of analytical queries.
• Promote best practices in data engineering, quality, governance, and platform scalability.