Data Platform Engineer – Build the Backbone of AI
Capco at Data & AI Warsaw Tech Summit 2026
About Capco Poland
At Capco Poland, we're not just another consultancy - we're the spark behind digital transformation in the financial world.
As a global technology and management consultancy focused on financial services, we partner with leading banks, fintechs, and financial institutions to design and deliver next-generation data platforms, AI solutions, and digital ecosystems.
From data strategy and modern data platforms to AI-driven decision systems and GenAI innovation, our teams help clients unlock the true value of their data.
Our secret?
A culture that's fast, flexible, and fiercely entrepreneurial. We move quickly, think creatively, and empower our people to push the boundaries of what technology can achieve.
At Capco Poland, we are proud to be:
• Technology partners for leading banks, payments providers, and financial institutions
• Builders of modern data platforms and AI-powered systems
• Champions of innovation across cloud, data engineering, machine learning, and GenAI
• A community of engineers, architects, and consultants passionate about solving complex problems
Meet Capco at the Data & AI Warsaw Tech Summit (21.04 & 22.04.2026)
At this year's Data & AI Warsaw Tech Summit, Capco will share how financial institutions can move from experimentation to production-grade AI and scalable data ecosystems.
Our experts will explore how organizations can:
• Build AI-native architectures on modern cloud platforms
• Scale machine learning and generative AI solutions across enterprise environments
• Transform fragmented data into high-value data products
• Embed AI into real business workflows and decision-making systems
Capco Speakers at Data & AI Warsaw Tech Summit
Andrzej Worona - Head of AI and Data @ Capco Poland & Laura Żusin-Kaczmarek - Data Practice Lead @ Capco Poland
Topic: From Data to Meaning: Educating AI in Banking with Ontologies: Lessons from FIBO and Conversational Banking
Time: 11:50-12:10 CET
Intro:
Many AI solutions still fall short when it comes to understanding and reasoning about complex financial concepts. The real challenge lies in how financial knowledge is represented and shared with machines. Why does AI still misunderstand basic banking terms despite having access to vast amounts of data?
How can AI truly understand financial concepts? Using the Financial Industry Business Ontology (FIBO) as an example of structured domain knowledge, we will discuss how formal, machine-readable definitions can provide the contextual foundation AI needs. By analysing selected conversational banking scenarios and example solutions, we will invite participants to reflect together on what the right semantic layer for AI in banking should look like.
Join us to discover why the next leap in AI for banking isn't just about more data or better models, but about building a structured understanding of financial meaning.
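To make the idea of machine-readable definitions concrete, here is a minimal illustrative sketch in Python using the rdflib library. The ex: terms and the Turtle fragment below are simplified, hypothetical examples in the spirit of FIBO, not actual FIBO content; a production setup would load the published FIBO ontologies instead.

from rdflib import Graph

# Simplified, FIBO-inspired fragment. The ex: terms are hypothetical,
# for illustration only; FIBO itself publishes far richer definitions.
ONTOLOGY = """
@prefix ex:   <https://example.org/banking#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

ex:DepositAccount a rdfs:Class ;
    rdfs:label "deposit account" ;
    skos:definition "An account holding funds placed with a bank, repayable to the customer." ;
    rdfs:subClassOf ex:Account .

ex:Account a rdfs:Class ;
    rdfs:label "account" .
"""

graph = Graph()
graph.parse(data=ONTOLOGY, format="turtle")

# Retrieve the formal definition of each term so a conversational
# assistant can ground its answer in shared semantics, not guesswork.
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?label ?definition WHERE {
    ?cls rdfs:label ?label ;
         skos:definition ?definition .
}
"""
for row in graph.query(QUERY):
    print(f"{row.label}: {row.definition}")

A lookup step like this can inject the formal definition into the model's context before it answers, which is one way a semantic layer can sit between raw data and the AI.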
We're Looking for Data Engineers
Role Overview
We are looking for a Data Engineer to join our Data & Analytics team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and architectures. You will work closely with data analysts, data scientists, and business stakeholders to ensure reliable and high-quality data is available for decision-making.
Key Responsibilities
• Design, develop, and maintain ETL/ELT data pipelines (see the sketch after this list)
• Integrate data from multiple sources (APIs, databases, external systems)
• Build and optimize data warehouses and data lakes
• Ensure data quality, consistency, and availability
• Monitor and improve the performance of data processing systems
• Collaborate with data scientists and analysts to deliver datasets
• Create and maintain technical documentation
• Implement and manage cloud-based data solutions (AWS, Azure, GCP)
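For illustration, the first responsibility could take the shape of a minimal Apache Airflow DAG like the sketch below. It assumes Airflow 2.x; the DAG name, task bodies, and schedule are hypothetical placeholders, not a description of any Capco project.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # In a real pipeline this would pull from an API, database, or file drop.
    print("extract raw records from the source system")


def transform():
    # Cleansing and typing step; real pipelines would hand data on
    # via durable storage rather than in-process state.
    print("normalise types, deduplicate, validate the schema")


def load():
    # Final write to a warehouse or lake table.
    print("load curated records into the target table")


with DAG(
    dag_id="example_daily_etl",  # hypothetical name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",           # 'schedule_interval' on older 2.x releases
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare ordering: extract, then transform, then load.
    extract_task >> transform_task >> load_task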
Requirements
• Proven experience as a Data Engineer or in a similar role (3+ years)
• Strong SQL skills
• Proficiency in Python or Scala
• Experience with ETL tools (e.g., Airflow, dbt, Informatica, Talend)
• Experience with relational and NoSQL databases
• Familiarity with cloud platforms (AWS, Azure, or GCP)
• Understanding of data warehousing concepts and data modeling
• Experience working with large-scale data (e.g., Spark, Hadoop)
• Strong problem-solving and communication skills
Nice to Have
• Experience with BI tools (e.g., Power BI, Tableau)
• Knowledge of DataOps and CI/CD practices
• Experience working in Agile environments
• Familiarity with Docker and Kubernetes
Online Recruitment Process
• Screening call with the Recruiter
• Technical interview with the Hiring Manager
• Feedback
• Offer
We offer a flexible collaboration model based on a B2B contract with the opportunity to work on diverse projects.