<div><strong>About Safe</strong></div>
<div> </div>
<div>At Safe Security, we are building Cyber Super Intelligence (CSI), a next-gen system of intelligence that autonomously predicts, detects, and remediates cyber threats. We operate with radical transparency, high trust, and a culture-first mindset (no "brilliant jerks"). You'll join a team that values ownership, growth, technical rigour, and impact.</div>
<div> </div>
<div> </div>
<div>
<div>
<div>
<div>
<div>We operate with radical transparency, autonomy, and accountability; there's no room for brilliant jerks. We embrace a culture-first approach, offering an unlimited vacation policy, a high-trust work environment, and a commitment to continuous learning. For us, Culture is Our Strategy; check out our <a href="[Upgrade to PRO to see link]">Culture Memo</a> to dive deeper into what makes SAFE unique.</div>
</div>
</div>
</div>
</div>
Role Overview
As a Principal Engineer - Data Platform, you will drive the next wave of architectural direction and foundational data capabilities that power Safe's multi-tenant data platform, analytics, and intelligence systems.
You will partner with engineering leadership, product, and cross-functional teams to define, build, and evolve the core data systems that allow Safe to scale securely and reliably.
You won't just execute: you'll lead and mentor, influence technical direction across the org, and champion best practices in data architecture, lakehouse design, scalability, reliability, and observability.
Key Responsibilities
• Architect & Lead Data Platform Strategy
Drive the long-term vision for Safe's data platform: lakehouse architecture, open table formats (Apache Iceberg), data ingestion frameworks, streaming pipelines, and data serving layers.
Evaluate alternative architectures, lead design reviews, and ensure consistency across solutions.
• Operational Excellence & Scalability
Ensure data systems operate at high performance with strong guarantees on data freshness, accuracy, and availability.
Lead efforts in performance tuning, large-scale data handling (billions of records), cost efficiency, and capacity planning.
• Cross-cutting "Horizontal" Ownership
Lead horizontal capabilities such as data ingestion, data modeling, streaming pipelines, data quality, lineage, and data observability.
Drive self-serve data platform capabilities for internal teams.
• Drive Engineering Standards & Best Practices
Establish best practices for data modeling, schema evolution, partitioning, compaction, and pipeline design.
Ensure strong data quality, testing, and reliability standards across the platform.
Mentor senior and staff engineers and elevate overall technical rigor in data systems.
• Collaboration & Influence
Work closely with Product, AI, Security, and Platform leadership to align data architecture with business goals.
Clearly articulate trade-offs, constraints, and design decisions.
• End-to-End Ownership
From ingestion to transformation to serving, own critical data flows end-to-end and ensure production-grade reliability.
Guide teams through complex data challenges and maintain robustness in production systems.
Must-Have Qualifications
• Experience: 10+ years in software/data engineering, including 4+ years as a senior/lead/principal engineer in data platform, backend, or infrastructure systems.
• Lakehouse & Iceberg Expertise:
Deep hands-on experience with Apache Iceberg (mandatory) and modern lakehouse architectures.
Strong understanding of partitioning strategies, schema evolution, compaction, snapshotting, and large-scale table optimization.
• Distributed Data Systems:
Proven track record designing and building large-scale data pipelines, including batch and streaming systems, event-driven architectures, and data ingestion frameworks.
• Strong Language Skills:
Expert proficiency with Python, Go, or TypeScript (or equivalent); familiarity with multiple languages is a plus.
• Storage & Messaging:
Deep experience with data lakes (e.g. S3), and systems like Kafka, Spark, Flink, or equivalent processing frameworks.
• Cloud & Infra:
Hands-on experience with AWS (or equivalent), containerization (Docker), orchestration (ECS/Kubernetes), and IaC (Terraform/CloudFormation).
• Observability & Reliability:
Expertise in data observability, pipeline monitoring, data quality systems, SLAs, and failure recovery mechanisms.
• Security & Multi-Tenancy:
Strong understanding of data isolation, governance, access control, and secure data design in multi-tenant systems.
• Leadership & Communication:
Excellent written and verbal communication. Comfortable influencing cross-functional stakeholders across geographies.
• Problem-Solving & Judgement:
Strong fundamentals in system design, tradeoff analysis, and building scalable data systems.
Preferred / Nice-to-Have
• Experience building B2B SaaS data platforms at scale
• Exposure to AI/ML pipelines, feature stores, or vector databases
• Experience with real-time analytics and streaming systems
• Experience in developer-facing data platforms (self-serve data, internal tooling)
• Exposure to Snowflake or similar analytical warehouses
• Experience in regulated or security-sensitive environments (ISO 27001, SOC 2)