Are you obsessed with data, partner success, taking action, and changing the game? If you have a whole lot of hustle and a touch of nerd, come work with Pattern! We want you to use your skills to push one of the fastest-growing companies headquartered in the US to the top of the list.
Pattern accelerates brands on global ecommerce marketplaces by leveraging proprietary technology and AI. Utilizing more than 66 trillion data points and sophisticated machine learning and AI models, Pattern optimizes and automates all levers of ecommerce growth for global brands, including advertising, content management, logistics and fulfillment, pricing, forecasting, and customer service. Hundreds of global brands depend on Pattern’s ecommerce acceleration platform every day to drive profitable revenue growth across 60+ global marketplaces, including Amazon, Walmart.com, Target.com, eBay, Tmall, TikTok Shop, JD, and Mercado Libre. To learn more, visit pattern.com.
Pattern has been named one of the fastest-growing tech companies headquartered in North America by Deloitte and one of the best-led companies by Inc. We place employee experience at the center of our business model and have been recognized as one of Newsweek’s Global Most Loved Workplaces®.
As a Senior Data Engineer, you will be a high-impact "Game Changer" responsible for architecting and building the very foundation of Pattern's data-driven future. You will tackle massive, petabyte-scale challenges, transforming raw data into high-octane fuel for our AI models and global marketplace strategies. This is your chance to lead high-stakes technical initiatives that directly accelerate growth for hundreds of global brands in a fast-paced, elite engineering environment.
What is a day in the life of a Senior Data Engineer?
• Designing and implementing robust ETL/ELT pipelines using Airflow, dbt, and cloud-native architectures.
• Writing sophisticated, production-grade Python code to automate data orchestration and processing.
• Building and optimizing complex SQL queries and dimensional models for OLAP- and OLTP-based systems.
• Collaborating with cross-functional teams to ingest and harmonize data from dozens of global marketplaces.
• Building and maintaining infrastructure-as-code and containerized workflows to ensure platform reliability.
• Leveraging AI thoughtfully to optimize processes and workflows.
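In practice, Pattern's pipelines run on Airflow and dbt; as a tool-agnostic, hypothetical sketch (all table, column, and function names are invented for illustration, not Pattern's actual schema), the extract-transform-load pattern described above looks like this:

```python
import sqlite3

# Hypothetical ETL sketch: extract raw marketplace orders, transform them
# (drop invalid rows, keep amounts in integer cents to avoid float drift),
# and load them into a warehouse-style table.

def extract():
    # A real pipeline would pull from a marketplace API or a landing bucket.
    return [
        {"order_id": 1, "marketplace": "amazon", "amount_cents": 1999},
        {"order_id": 2, "marketplace": "walmart", "amount_cents": None},  # invalid
        {"order_id": 3, "marketplace": "ebay", "amount_cents": 550},
    ]

def transform(rows):
    # Drop rows with missing amounts and flatten dicts into insertable tuples.
    return [
        (r["order_id"], r["marketplace"], r["amount_cents"])
        for r in rows
        if r["amount_cents"] is not None
    ]

def load(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INT, marketplace TEXT, amount_cents INT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract()))
print(conn.execute("SELECT COUNT(*), SUM(amount_cents) FROM orders").fetchone())
# (2, 2549)
```

An orchestrator like Airflow would schedule each of these steps as a task and handle retries and backfills; the data flow itself is the same.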
What will I need to thrive in this role?
• Bachelor’s degree in Computer Science, Data Science, or a related technical field (or equivalent experience).
• 7+ years of professional data engineering experience with a heavy focus on ETL/ELT and data modeling.
• 5+ years of expert-level SQL, including window functions, CTEs, and deep performance tuning.
• 4+ years of professional Python development tailored to data pipelines and tooling.
• 3+ years of hands-on experience building and optimizing large-scale data warehouses such as Snowflake, BigQuery, or Redshift.
• Proficiency with open-source frameworks such as Apache Spark, Trino, Kafka, and Debezium.
• A "Data Fanatic" mindset with experience handling diverse, petabyte-scale datasets.
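To make the SQL expectations above concrete, here is an illustrative CTE plus window-function query, run against an in-memory SQLite database via Python's stdlib sqlite3 (table and column names are invented; window functions require SQLite 3.25 or later):

```python
import sqlite3

# Illustrative dataset: daily revenue per marketplace (names are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (marketplace TEXT, day TEXT, revenue INT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("amazon", "2024-01-01", 100), ("amazon", "2024-01-02", 300),
     ("ebay", "2024-01-01", 200), ("ebay", "2024-01-02", 50)],
)

# The CTE aggregates revenue per marketplace; the window function then
# ranks marketplaces by total revenue in the same statement.
query = """
WITH totals AS (
    SELECT marketplace, SUM(revenue) AS total
    FROM sales
    GROUP BY marketplace
)
SELECT marketplace, total,
       RANK() OVER (ORDER BY total DESC) AS revenue_rank
FROM totals
ORDER BY revenue_rank
"""
for row in conn.execute(query):
    print(row)
# ('amazon', 400, 1)
# ('ebay', 250, 2)
```

The same pattern scales to the warehouse engines listed above (Snowflake, BigQuery, Redshift), which share this CTE and window-function syntax.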
What does high performance look like?
• Successfully executing the migration or optimization of massive data streams with zero downtime.
• Consistently delivering clean, well-documented, high-quality code that sets the standard for the engineering team.
• Acting as a "Doer" by taking the initiative to resolve platform bottlenecks before they impact partners.
• Elevating the technical bar of the team through mentorship and the introduction of innovative engineering practices.
What is my potential for career growth?
• Opportunity to lead major architectural shifts within a rapidly expanding global tech company.
• Regular networking and collaboration with senior technical leadership and AI experts.
• Upward mobility toward Staff Data Engineer or specialized technical leadership roles.
• Continuous learning opportunities with cutting-edge technologies such as Apache Iceberg and real-time streaming architectures.
What does success look like in the first 30, 60, 90 days?
• 30 Days: Complete onboarding, gain a deep understanding of the current data architectures, and begin contributing to existing projects.
• 60 Days: Identify and implement at least one major performance optimization in the data environment and lead a small-scale pipeline project.
• 90 Days: Own a significant segment of data processes, collaborating with other engineers and contributing to the long-term roadmap for lakehouse integration.
What is the team like?
• This role reports directly to the Director of Data Engineering.
• You will join a growing team of data professionals spanning multiple geographies.
• You will collaborate closely with Data Scientists, Software Engineers, AI Engineers, and Product Managers, as well as other departments including Marketing and Sales.
Sounds great! What’s the company culture? We are looking for individuals who are:
• Game Changers: A game changer looks at problems with an open mind and shares new ideas with team members, regularly reassesses existing plans and attaches realistic timelines to goals, makes profitable, productive, and innovative contributions, and actively pursues improvements to Pattern’s processes and outcomes.
• Data Fanatics: A data fanatic recognizes problems and seeks to understand them through data, draws unbiased conclusions from data that lead to actionable solutions, and continues to track the effects of those solutions using data.
• Partner Obsessed: A partner-obsessed individual clearly explains the status of projects to partners and relies on constructive feedback, actively listens to partners’ expectations and delivers results that exceed them, prioritizes partners’ needs, and takes the time to create a personable experience for everyone interacting with Pattern.
• Team of Doers: A doer uplifts team members and recognizes their specific contributions, takes initiative to help in any circumstance, actively contributes to supporting improvements, and holds themselves accountable to the team as well as to partners.
What is the hiring process?
• Phone Interview with Talent Acquisition
• Video Interview
• Onsite Interview
• Executive Review
• Offer
How can I stand out as an applicant?
• Strong Nice-to-Haves: Expertise in AWS services and infrastructure tooling (Terraform, EKS, Lambda), experience with Apache Iceberg or Delta Lake, and a background in real-time streaming (Kafka/Kinesis).
• Interview Tips: Be prepared to discuss your experience managing large-scale data outages or complex optimizations; highlight "Partner Obsessed" moments where your data work solved a critical business problem; and demonstrate your "Data Fanatic" nature with a deep dive into a past side project or complex pipeline you built.
Why should I work at Pattern?
Pattern offers big opportunities to make a difference in the ecommerce industry! We are a company full of talented people, and we evolve quickly and often. We set big goals, work tirelessly to achieve them, and love our Pattern community. We also believe in having fun and balancing our lives, so we offer awesome benefits that include:
- Unlimited PTO
- Paid Holidays
- Onsite Fitness Center
- Company Paid Life Insurance
- Casual Dress Code
- Competitive Pay
- Health, Vision, and Dental Insurance
- 401(k) match. Pattern matches 100% of the first 3% of eligible compensation deferred and 50% of the next 2% of eligible compensation deferred.
Pattern provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.