RAVL helps technologists accelerate their careers.
At RAVL, we connect strategy with execution, care deeply about the people we work with, and measure success by the lasting impact we leave behind. Our purpose is to build a team that puts real, sustainable business outcomes at the core of everything we do.
We're here to leave our clients better than we found them, and to create a place where our people are proud to Build. Better.
What does success look like in this role?
• Design and deliver enterprise data platforms, lakehouse architectures, and distributed raw data processing systems using modern cloud-native technologies.
• Architect and implement scalable batch and streaming pipelines, medallion architectures, data mesh patterns, and platform automation frameworks for resilience, governance, and security.
• Standardize and lead adoption of Databricks, Apache Spark, Delta Lake, and similar distributed data processing ecosystems across engagements.
• Define and implement AI-ready data foundations, including feature engineering pipelines, model-ready data layers, and scalable experimentation environments.
• Build horizontal capabilities including ingestion frameworks, metadata and lineage standards, data quality and observability frameworks, secure-by-design platform blueprints, and MLOps enablement patterns.
• Architect and guide implementation of MLOps workflows including model lifecycle management, model deployment strategies, monitoring, and governance.
• Integrate with cloud-native storage, data warehouses, APIs, ML platforms, vector databases, and enterprise systems while managing authentication, authorization, and secure data flows.
• Apply secure coding practices, compliance standards, responsible AI principles, and automation-first approaches across all data and AI platform designs.
• Demonstrate a bias for action: ship reference architectures, reusable modules, AI accelerators, and templates that enable rapid, incremental delivery.
• Mentor engineers, influence stakeholders, define governance standards, and shape technical and strategic direction across BuildIQ.
Sounds great, do my skills fit?
Strong Grasp of Core Data & AI Engineering Concepts
• Distributed data processing and Spark internals
• Lakehouse architecture and medallion design patterns
• Data modeling for analytical, operational, and ML workloads
• Metadata management, lineage, observability, and cost optimization
• MLOps, feature stores, model versioning, and deployment strategies
• AI system design fundamentals including LLM integration patterns and vector-based retrieval
Cloud-Native & Multi-Cloud Architecture
• Deep experience designing and operating cloud-native data and AI platforms on AWS, Azure, or GCP
• Experience working across multi-cloud environments
• Strong understanding of networking, storage, identity, GPU workloads, and security boundaries in cloud data and AI systems
Consulting Excellence
• Collaboration, prioritization, and ownership of RAID (risks, assumptions, issues, dependencies) logs across multiple engagements
• Comfortable operating in ambiguity and creating clarity for teams
• Ability to influence senior stakeholders as a trusted outsider
• Strong facilitation, alignment, and decision-making capability
• Operates as a high-performing remote leader, keeping work visible and transparent while lifting up peers
Mindset Success Traits (Mandatory)
• Delivery-first and outcome-oriented (get shit done mentality)
• Creative and open to new approaches, including emergent AI technologies
• Comfortable working in ambiguity and creating clarity
• Influential presence: able to shape direction across client and internal environments
• Curious, adaptable, emotionally aware, and committed to delivery excellence
Non-Negotiable Technical Skills
Candidates must demonstrate proficiency in all of the following:
• Programming: Advanced Python and SQL, plus Scala or Java
• Data Platform Tooling: Databricks, Apache Spark, Delta Lake
• AI & ML Tooling: Experience with ML frameworks (e.g., MLflow, PyTorch, TensorFlow) and model lifecycle tooling
• Infrastructure & Automation: Terraform and CI/CD pipelines
• Cloud Platforms: Deep expertise in at least one of AWS, Azure, or GCP, with working knowledge of a second
• Security & Governance: IAM, encryption (at rest and in transit), RBAC, secure coding practices, data governance, and responsible AI fundamentals
(Candidates missing these will not be considered further.)
Compensation & hiring process
We are recruiting for a future opportunity. The salary range for this role is $140,000 to $180,000 CAD, reflecting expected base pay. Total compensation may also include additional pay such as bonuses or incentives, depending on the position, and final offers are based on experience, skills, and qualifications. As part of our hiring process we may use technology, including AI-based tools, to help summarize and assess applications; these tools assist our team and do not replace human review or decision-making.
Equal Opportunity & Accessibility
RAVL is an equal opportunity employer committed to building a diverse, inclusive, and accessible workplace. We welcome applications from all qualified individuals and provide accommodations throughout the hiring process upon request.