Octus
Octus is a leading global provider of credit intelligence, data, and analytics. Since 2013, tens of thousands of professionals across the hedge fund, investment banking, management consulting, and law firm verticals have relied on Octus to make better, faster, and more confident decisions that keep pace with fast-moving credit markets.
Working at Octus
Octus hires growth-minded innovators and trailblazers across the globe to drive our business and culture. Our core values (Action Oriented, Customer First Mindset, Effective Team Players, and Driven to Excel) define an organizational ethos that's as high-performing as it is human. Among other perks, Octus employees enjoy competitive health benefits, matched 401k and pension plans, PTO, generous parental leave, gym subsidies, educational reimbursements for career development, recognition programs, pet-friendly offices (US only), and much more.
Role
Octus is seeking a Principal Data Engineer to roll up their sleeves and build scalable, production-grade data pipelines and infrastructure. You'll be a hands-on technical leader: writing code daily, solving hard engineering problems, and elevating the team around you by example. Working across Snowflake, Databricks, and AWS, you'll be deeply involved in the day-to-day development of the data platform that powers Octus's products, data, and automation initiatives. The ideal candidate is an expert in Python and SQL who thrives in an execution-focused environment and has deep experience building modern data pipelines and lakehouse solutions.
Responsibilities
• Build and maintain end-to-end data pipelines, from raw ingestion through transformation and delivery, across diverse data sources (APIs, web data, internal feeds, etc.).
• Develop scalable, production-grade pipelines within Databricks, including Delta Lake table management, Workflows, and cluster optimization.
• Build and maintain data models, schemas, and transformation logic in Snowflake, optimizing for performance and reliability.
• Develop and manage Databricks environments, including Unity Catalog, Delta Live Tables, and integration patterns that support both internal data consumers and external sharing use cases.
• Build and manage orchestration workflows using AWS services (MWAA/Airflow, Lambda, ECS, SQS, MSK) and Databricks-native orchestration where appropriate.
• Implement and maintain infrastructure as code (IaC) using Terraform, ensuring reproducibility and compliance with cloud standards.
• Establish and enforce best practices in data modeling, schema design, and ETL/ELT processes for high-volume structured and semi-structured data across Snowflake and Databricks.
• Ensure data quality, lineage, and observability through automated testing, monitoring, and alerting across all pipeline layers.
• Collaborate closely with technology leadership to align data platform development with business strategy and product goals.
• Stay at the forefront of industry trends in data engineering, lakehouse architecture, and cloud-native data platforms.
Requirements
• Strong foundation in software engineering principles, including SOLID design, modularity, and scalability.
• Expert proficiency in Databricks, including Delta Lake, Unity Catalog, Delta Live Tables, MLflow, and Databricks Workflows.
• Deep experience with Snowflake, including data modeling, performance optimization, and integration with upstream/downstream pipeline tooling.
• Expert proficiency in Python for data pipeline and automation development.
• Advanced SQL skills with experience optimizing complex queries and data models at scale.
• Proven experience designing and maintaining cloud-native data pipelines on AWS (e.g., MWAA/Airflow, Lambda, ECS, SQS, Glue, S3, Redshift).
• Experience implementing and managing Terraform or similar IaC frameworks.
• Strong understanding of lakehouse architecture patterns, data ingestion, transformation, and orchestration, including familiarity with ML/AI pipeline integration patterns.
• Familiarity with CI/CD pipelines, automated testing, and modern DevOps practices.
• 8+ years of experience in data engineering or backend development, with a focus on scalable data solutions.
• Demonstrated experience leading data infrastructure projects end-to-end and mentoring senior engineers.
• Familiarity with containerization (Docker) and workflow orchestration best practices.
• Excellent communication, collaboration, and problem-solving skills.
Nice to Have
• Experience with streaming data technologies (Kafka, Kinesis, Flink).
• Exposure to ML/AI pipeline patterns (feature stores, experiment tracking, model serving) and MLOps tooling, particularly in a cross-functional team environment.
• Experience integrating data quality and observability tools.
• Experience with Databricks as a data sharing and collaboration platform (Delta Sharing, Marketplace).
• Familiarity with Claude Code or similar AI-powered developer tools for accelerating pipeline development and code workflows.
At Octus, we consider a range of factors in connection with compensation decisions, including experience, skills, location, and our business needs and limitations. As a result, compensation may vary within and across similar roles and positions. Please note that the salary range below is a good-faith estimate for this position, and actual compensation for any individual may fall outside this range if warranted by that individual's circumstances. If we identify a role as suitable for a broader range of skills and experience, such that we would consider hiring at multiple levels, the range listed below may reflect that breadth.
The salary range estimate for this position is $175,000 - $220,000.
Actual compensation will be at Octus's sole discretion and will be determined by the factors above and other relevant considerations.
Equal Employment Opportunity
Octus is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, marital status, pregnancy, veteran status, or any other legally protected status. We strive to create an inclusive and diverse work environment where all individuals are valued, respected, and treated fairly. We believe that diversity enriches our workplace and enhances our ability to innovate and succeed.