Capco at Data & AI Warsaw Tech Summit 2026
About Capco Poland
At Capco Poland, we're not just another consultancy - we're the spark behind digital transformation in the financial world.
As a global technology and management consultancy focused on financial services, we partner with leading banks, fintechs, and financial institutions to design and deliver next-generation data platforms, AI solutions, and digital ecosystems.
From data strategy and modern data platforms to AI-driven decision systems and GenAI innovation, our teams help clients unlock the true value of their data.
Our secret?
A culture that's fast, flexible, and fiercely entrepreneurial. We move quickly, think creatively, and empower our people to push the boundaries of what technology can achieve.
At Capco Poland, we are proud to be:
• Technology partners for leading banks, payments providers, and financial institutions
• Builders of modern data platforms and AI-powered systems
• Champions of innovation across cloud, data engineering, machine learning, and GenAI
• A community of engineers, architects, and consultants passionate about solving complex problems
Meet Capco at the Data & AI Warsaw Tech Summit (21.04 & 22.04.2026)
At this year's Data & AI Warsaw Tech Summit, Capco will share how financial institutions can move from experimentation to production-grade AI and scalable data ecosystems.
Our experts will explore how organizations can:
• Build AI-native architectures on modern cloud platforms
• Scale machine learning and generative AI solutions across enterprise environments
• Transform fragmented data into high-value data products
• Embed AI into real business workflows and decision-making systems
Capco Speakers at Data & AI Warsaw Tech Summit
Andrzej Worona - Head of AI and Data @ Capco Poland & Laura Żusin-Kaczmarek - Data Practice Lead @ Capco Poland
Topic: From Data to Meaning: Educating AI in Banking with Ontologies: Lessons from FIBO and Conversational Banking
Time: 11:50-12:10 CET
Intro:
Many AI solutions still fall short when it comes to understanding and reasoning about complex financial concepts. The real challenge lies in how financial knowledge is represented and shared with machines. Why does AI still misunderstand basic banking terms despite having access to vast amounts of data?
How can AI truly understand financial concepts? Using the Financial Industry Business Ontology (FIBO) as an example of structured domain knowledge, we will discuss how formal, machine-readable definitions can provide the contextual foundation AI needs. By analysing selected conversational banking scenarios and example solutions, we will invite participants to reflect together on what the right semantic layer for AI in banking should look like.
Join us to discover why the next leap in AI for banking isn't just about more data or better models, but about building a structured understanding of financial meaning.
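To make the idea of a semantic layer concrete, here is a minimal sketch of how formal, machine-readable definitions can ground a user-facing banking term. The triples below are illustrative stand-ins, not the real FIBO graph, and the `fibo:` identifiers and `ground_term` helper are hypothetical names for this example only:

```python
# Illustrative ontology fragment as (subject, predicate, object) triples.
# Assumption: these IRIs and definitions are simplified stand-ins for FIBO content.
TRIPLES = [
    ("fibo:DemandDeposit", "rdfs:label", "demand deposit"),
    ("fibo:DemandDeposit", "skos:altLabel", "checking account"),
    ("fibo:DemandDeposit", "skos:definition",
     "a deposit that can be withdrawn on demand, without advance notice"),
    ("fibo:DemandDeposit", "rdfs:subClassOf", "fibo:Deposit"),
]

def ground_term(term, triples):
    """Map a user-facing banking term to an ontology concept and its formal definition."""
    term = term.lower()
    # Match the term against preferred or alternate labels in the ontology.
    concept = next(
        (s for s, p, o in triples
         if p in ("rdfs:label", "skos:altLabel") and o == term),
        None,
    )
    if concept is None:
        return None
    # Retrieve the formal definition attached to the matched concept.
    definition = next(
        (o for s, p, o in triples if s == concept and p == "skos:definition"),
        None,
    )
    return concept, definition

print(ground_term("checking account", TRIPLES))
```

The point of the sketch: "checking account" resolves to the same concept as "demand deposit", so a conversational system can answer from one formal definition instead of guessing from surface wording.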
Role Overview
We are looking for a Machine Learning Engineer to design, build, and deploy scalable machine learning solutions. In this role, you will work closely with data scientists, data engineers, and product teams to bring ML models into production and ensure their performance, reliability, and scalability.
Key Responsibilities
• Design, develop, and deploy machine learning models into production
• Build and maintain scalable ML pipelines and workflows
• Collaborate with data scientists to operationalize models (MLOps)
• Optimize model performance, scalability, and latency
• Monitor, evaluate, and retrain models in production
• Work with large datasets and feature engineering processes
• Implement best practices for versioning, testing, and deployment of ML models
• Integrate ML solutions into existing systems and applications
• Document models, pipelines, and processes
Requirements
• Proven experience as a Machine Learning Engineer or similar role (X+ years)
• Strong programming skills in Python (or a similar language)
• Experience with ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn)
• Solid understanding of machine learning algorithms and statistics
• Experience with data processing tools (e.g., Pandas, Spark)
• Familiarity with MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow)
• Experience with cloud platforms (AWS, Azure, or GCP)
• Knowledge of APIs and microservices architecture
• Strong problem-solving and communication skills
Nice to Have
• Experience with deep learning and NLP or computer vision
• Familiarity with Docker and Kubernetes
• Experience with CI/CD pipelines for ML
• Knowledge of data engineering concepts and tools
• Experience with real-time or streaming data systems
Online Recruitment Process
• Screening call with the Recruiter
• Hiring Manager Technical Interview
• Feedback
• Offer
We offer a flexible collaboration model based on a B2B contract with the opportunity to work on diverse projects.