AGENDA 2019

8.00 - 9.00

Registration and welcome coffee

9.00 - 9.15

Conference opening

Przemysław Gamdzyk

CEO & Meeting Designer, Evention

Adam Kawa

CEO and Co-founder, GetInData

9.15 - 10.45

Plenary Session

The Data Analytics Platform or how to make data science in a box possible

The state of data platforms in the tech industry. The ING WBAA vision of the future of data analytics. Highlights of the main components and features of the ING Data Analytics Platform.

Rob Keevil

Data Analytics Platform Lead, ING

Krzysztof Adamski

Data Infrastructure Architect, ING

Enterprise Data Cloud – from the Edge to AI

The new Cloudera reshapes the enterprise data landscape by providing a unified, open-source, hybrid-cloud service offering to customers. We deliver cloud-native machine learning and analytics from the Edge to AI. Our modern data architecture enables on-premise, multi-cloud and private cloud deployments to work alike within a single distribution service.
Together with our partner – 3Soft S.A. – we will demonstrate highlights of the ongoing technical innovation and provide real-life examples from the field, to showcase the relevancy and urgent need for such a unified platform.

Gergely Devenyi

Director of Engineering, Cloudera

Kamil Folkert

CTO, Member of the Board, 3Soft

Large Scale Land use of Satellite Imagery

Leveraging Convolutional Neural Network models in a streaming pipeline for segmentation of satellite images for agricultural use.

Suneel Marthi

Principal Technologist - AI/ML, Amazon Web Services

10.45 - 11.15

Coffee break

11.15 – 15.30 Simultaneous sessions

Architecture, Operations and Cloud

This track is dedicated to architects, administrators and people with DevOps skills who are interested in technologies, techniques and best practices for planning, building, installing, managing and securing their Big Data infrastructure in enterprise environments – both on-premise and the cloud.

Data Engineering

This track is the place for engineers to learn about tools, techniques and battle-proven solutions to collect, store and process large amounts of data. It covers topics like data collection and ingestion, ETL, job scheduling, metadata and schema management, distributed processing engines, distributed datastores and more.

Artificial Intelligence and Data Science

This track includes real-world case studies demonstrating how data & technology are used together to address a wide range of business problems such as product recommendations, predictive analytics, decision optimization and automation. Here you will find talks about innovative analytics applications and systems for machine learning, statistics, visualization, natural language processing and deep learning.

Streaming and Real-Time Analytics

This track covers technologies, strategies and valid use-cases for building streaming systems and implementing real-time applications that enable actionable insights and interactions not previously possible with classic batch systems. This includes solutions for data stream ingestion and applying various real-time algorithms and machine learning models to derive valuable insights from the flow of events coming from IoT sensors, devices, users, and front-end applications.

Host:

Mikołaj Wiśniewski

Big Data, GetInData

Host:

Klaudia Wachnio

Marketing & Training Specialist, GetInData

Host:

Agnieszka Strzałka

Data Engineer, GetInData

Host:

Paweł Kubit

Data Engineer, GetInData

11.15 - 11.45

Evolution of Big Data platform

The competitive advantage of a modern enterprise lies in the ability to effectively apply big data, analytics and ML algorithms. By applying the right approach, organizations can modernize their IT architecture to meet the new requirements posed by the need to use large data sets, hybrid cloud and analytics. Our talk will present current trends in building these types of environments. We will talk about the problems faced by companies and what business benefits a properly prepared platform can bring. We will also present, from the technical side, a ready-made platform built by IBM.


Keywords: big data, hybrid cloud, analytics

Mariusz Czopiński

BigData Technical Sales, Certified IT Specialist L2, IBM Poland

Maciej Martusewicz

Software Client Architect, IBM Poland

11.15 - 11.45

How Truecaller leverages big data technologies for analytics and machine learning

Truecaller is a leading caller ID and spam protection app with over 100 million daily active users.

Keywords: big data, machine learning, analytics

Dhanesh Padmanabhan

Director of Engineering, Truecaller

11.15 - 11.45

Data Science in Roche Diagnostics: From Exploration to Productionization

Data Science applications in Roche Diagnostics – from exploration to productionization of DS initiatives. Real use case #1: machine learning and image processing for quality control. Real use case #2: time series analysis on financial data for business planning.

Keywords: Data Science in Roche Diagnostics, Financial Time Series Analysis, Automated Quality Control, Deep Learning on Product Images, Machine learning, Image Processing

Dr Mohammadjavad Faraji

Senior Data Scientist, Roche

11.15 - 11.45

Real-Time Data Processing at RTB House – How we have grown 10x in 2 years.

In this talk, we would like to share our experience connected with building and scaling our real-time data processing infrastructure at RTB House to buy advertising space effectively.

Keywords: kafka, stream processing, scalability, multi-dc

Bartosz Łoś

Tech Lead, RTB House

11.45 - 11.50

Technical break

11.50 - 12.20

How to build and run a Containerized Architecture for a production ready AI/ML platform

When it comes to AI, data scientists and engineers tend to focus on tools. Yet the data platform that enables these tools is equally important, and it's often overlooked.

Keywords: containerized AI infrastructure, datacenter, hybrid cloud

Ernst Kratky

Big Data Analytics & AI Sales Lead – Datacenter EMEA, Cisco

11.50 - 12.20

Reliable logging infrastructure: Building trust on logs @ Slack

An overview of the logging infrastructure at Slack and why you should care about it, a summary of why reliability is critical for a logging infrastructure to gain customers' trust, and lessons learned and best practices for building logging infrastructure.

Keywords: logging, kafka, reliability

Ananth Packkildurai

Senior Data Engineer, Slack Technologies Inc

11.50 - 12.20

Evolution of search: From a complicated problem to a simplified search experience

What goes into making Booking the leading player for booking a place to stay? At Booking, with millions of users visiting our platforms every day, we have very rich behavioural data about our users – what they type, what they search for, which properties they look at, which filters they apply, how much time they spend on the page of a property that they book compared to the pages of properties that they don't, etc. Each of these behavioural data points has helped us build a powerful, personalized search, making the experience of booking a trip easier for our users.

Keywords: search, relevant results, behavioral data, personalization

Arihant Gupta

Software Developer, Booking.com

Priyanka Prakash

Product Owner, Booking.com

11.50 - 12.20

Streaming Visualization

Batch and streaming visualization in big data reference architecture, architecture blueprints for streaming visualization, implementations of the blueprints in a fast data solution.

Keywords: streaming visualization, kafka, bigdata architecture

Guido Schmutz

Solution Architect, Trivadis AG

12.20 - 12.25

Technical break

12.25 - 12.55

AI applied: filtering RTB traffic at Ad Tech scale

In this talk, we would like to show a glimpse of our RTB pipeline and dive into applying machine-learning-based request filtering at the load balancer level. During the talk we'll go through the entire development process we took: from defining the task and constraints to the working solution.


Keywords: machine learning, data classification, filtering, load balancing.

Paweł Zawistowski

Senior Data Scientist, Adform

12.25 - 12.55

8 Takeaways from building Rakuten Analytics

Introduction – what is Rakuten Analytics, introduce team and roles. Motivation in building an on-premise petabyte-scale analytics service. 8 takeaways from building Rakuten Analytics. Tech and business impact.

Keywords: analytics, data sketches, microservices, real-time

Juan Paulo Gutierrez

Lead Data Engineer and Architect, Rakuten

12.25 - 12.55

Data Science for Everyone

How to work effectively in Hadoop environment for free with IBM Watson Studio

Keywords: Machine Learning, Hadoop, data science, analytics

Ewa Gruszka

Technical Sales, Predictive Analytics, IBM Poland

Piotr Mechliński

Country Leader for Data Science and AI, IBM Poland

12.25 - 12.55

Driving your marketing automation with multi-armed bandits in real time

Multi-armed bandits vs simple A/B testing. Architecture of the solution – how to connect Flink, Nussknacker and R? Other use cases – what are other good fits for a similar architecture?

Keywords: Multi-armed bandit, Marketing automation, Streaming/Flink, R

Wit Jakuczun

Founder and Co-owner, WLOG Solutions

Maciej Próchniak

Software Lead Developer, TouK

12.55 - 13.50

Lunch

Host:

Mikołaj Wiśniewski

Big Data, GetInData

Host:

Tomasz Nazarewicz

Data Engineer, GetInData

Host:

Łukasz Suchenek

Conferences Editor, Evention

Host:

Klaudia Wachnio

Marketing & Training Specialist, GetInData

13.50 - 14.20

Scalable machine-learned model serving

Online evaluation of machine-learned models (model serving) is difficult to scale to large data sets. Vespa.ai is an open source solution to this problem, in use today on some of the largest such systems in the world, such as the content pages of the Yahoo network and the world's third-largest ad network.
This talk will explain the problem and architectural solution, show how Vespa can be used to implement the solution to achieve scalable serving of TensorFlow and ONNX models, and present benchmarks comparing performance and scalability to TensorFlow Serving.

Keywords: bigdata, opensource, vespa, ml

Jon Bratseth

Distinguished architect, Oath (former Yahoo)

13.50 - 14.20

Metadata Driven Access Control in Practice

The importance of data access governance is continuously growing due to new regulations, such as GDPR, and industry policies. Managing access policies for each individual dataset is a hassle. In this talk, we will show how Svenska Spel uses metadata about datasets to generate access policies. We use it to create policies for access, retention, and anonymization.

Keywords: DataGovernance, GDPR, Security

Magnus Runesson

Data engineer, Tink

13.50 - 14.20

I’m a data scientist and engineers don’t hate me

What’s the role of data scientist in the development of a new project? Do they only produce charts while engineers are doing the real work? In this presentation, I will share a story of data science and engineering collaboration at Twitter that resulted in shipping a successful new product feature. I’ll talk about our tools and environment and specifically focus on inter-team dynamics: what practices did we use? how did we divide tasks? what benefits did each team derive? why do we still want to work together?

Keywords: collaboration, cross-functional, best practices, data scientist’s environment

Mateusz Fedoryszak

Data Scientist, Twitter

13.50 - 14.20

Streaming topic model training and inference with Apache Flink

How to use stateful stream processing and Flink's dynamic processing capabilities to continuously train topic models from unlabelled text, and use such models to extract topics from the data itself.

Keywords: Streaming, Topic Modeling, NLP, Keyword extraction

Suneel Marthi

Principal Technologist - AI/ML, Amazon Web Services

Jörn Kottmann

Senior Software Developer, Sandstone SA

14.20 - 14.25

Technical break

14.25 - 14.55

Towards next generation, cloud-ready and open-source big data discovery platform

At most companies, Data Lake becomes a controlled production environment where the most critical jobs must run with strict service level agreements, with careful deployment and limited creativity. On the other hand, data-driven companies and data scientists often need an elastic and more relaxed environment that gives

Keywords: Data Discovery, Cloud-Ready, Next Generation, Containers, Data Lake

Krzysztof Zarzycki

Big Data Architect, CTO and Co-founder, GetInData

Marek Wiewiórka

Big Data Architect, GetInData

14.25 - 14.55

Enterprise Adoption at Klarna. Software Engineering methods bringing order to the Big Data Galaxy

Klarna provides instant consumer credit at the point of sale, and allows flexible credit lines after purchase. Credit is issued using Klarna's Checkout product, which is integrated with almost 100'000 merchants in many markets. In order to perform automated decisions on credit and fraud with low latency guarantees, it is mission critical for Klarna that all required data is available at all times. Furthermore, it is important that all decisions are traceable. Finally, our data infrastructure must facilitate engineer and analyst productivity.

Keywords: Teams, Tools, Processes, Performance and scalability, Validation

Erik Zeitler

Lead software engineer, Klarna

14.25 - 14.55

Data Science projects - The main challenges for IT in the (Big) Data Age. Do we really need IT guys for doing Data Science?

Data is the new currency now. Every modern company is looking for a “Holy Grail” to transform its business in an innovative way.

Marek Pyka

Cloud Solutions Architect - Artificial Intelligence, Microsoft Polska

14.25 - 14.55

Detecting Patterns in Event Streams with Flink SQL

Introduction to SQL on streams concepts, explain how the new SQL MATCH_RECOGNIZE clause brings the power of pattern matching to (streaming) SQL and demonstrate advanced stream analytics with Flink SQL.

Keywords: Apache Flink, SQL, stream analytics, stream processing

Dawid Wysakowicz

Software Engineer, Ververica (former data Artisans)

14.55 - 15.00

Technical break

15.00 - 15.30

From legacy to cloud: an end to end data integration journey

Raw data collection from cloud and legacy data centers. Standard data preparation (e.g. binary conversion, partitioning). User-driven analytics and machine learning. Challenges and experiences of building and operating data pipelines and computation as a service for hundreds of teams operating at petabyte scale.

Keywords: dataLake, dataPipelines, infrastructure, Spark

Max Schultze

Data Engineer, Zalando SE

15.00 - 15.30

Building Machine Learning platform for Real-Time Bidding

How to build a platform for fast machine learning model development? How to serve machine learning models under heavy load and tight timing constraints? How to establish effective cooperation between data scientists and data engineers?

Keywords: machinelearning, bigdata, fastdata, rtb

Tomasz Kogut

Technical Lead, Adform

15.00 - 15.30

Data Science at PMI - The Tools of The Trade

Introduction to Data Products, CI & CD, Modus Operandi and Agile Data Science on the Data Ocean. Best Practices: Docker Containers, Project Templates, Programming Style Standards etc. Reproducible Data Science

Keywords: CI/CD, Best Practices for Data Science, Data Product, Reproducible research

Maciej Marek

Enterprise Data Scientist, Philip Morris International

Michał Dyrda

Senior Enterprise Data Scientist, Philip Morris International

15.00 - 15.30

The Changing Face of ETL: Event-Driven Architectures for Data Engineers

The power of events and unbounded data. Streaming is not just for real-time applications – it’s for everyone. Where a streaming platform fits in an analytic architecture. How event-driven architectures can enable greater scalability and flexibility of systems both now and in the future.

Keywords: kafka, event-driven architecture, streaming, integration

Robin Moffatt

Developer Advocate, Confluent

15.30 - 16.00

Coffee break

16.00 – 17.25 Roundtable sessions

16.00 - 16.05

Intro

Parallel roundtable discussions are the part of the conference that engages all participants. They serve a few purposes. First of all, participants have the opportunity to exchange opinions and experiences about a specific issue that is important to that group. Secondly, participants can meet and talk with the leader/host of the roundtable discussion – selected professionals with vast knowledge and experience.

There will be 2 rounds of discussion, hence every conference participant can take part in 2 discussions.

 

16.05 – 16.45    1st round

16.50 – 17.30    2nd round

16.05 - 16.45

1st ROUND

Maciej Bryński

Big Data Architect, XCaliber

Arkadiusz Gąsior

Senior Big Data Engineer, RTBHOUSE

Radosław Kita

Team Lead, Adform

Batch vs Streaming – Tech geeks follow the streaming hype, and the toolset makes it better and easier to apply each day. On the other hand, batch use cases were more popular in 2018. Batch is still considered to be cheaper and easier to implement. Hot or not? Which one to choose?

Ingestion complexity depends on two dimensions: the size of the data and the variety of data sources. Kafka Connect, Apache Spark or Gobblin scale great with respect to data size. On the other hand, Apache NiFi allows adding new ingestion pipelines faster via its web UI. One can build data pipelines without writing a single line of code, which is significantly cheaper. Do we still need to write any code for data ingestion pipelines?

Paweł Leszczyński

Hadoop Product Owner, Grupa Allegro

The theme of this panel is the ways in which the ongoing rise of cloud computing is redefining the Big Data landscape as we know it. We'll discuss “The Good, the Bad and the Ugly” sides of the cloud transition and share our experiences in search of the ultimate big data cloud solution.

Plan of discussion:

  1. Present yourself: name, current position, your experience with cloud.
  2. Describe your good and bad experiences with data cloud solutions so far.
  3. Which service  / solution had or could have a huge impact on your business? Why?
  4. What are the big Yes's and No's of the transition?
  5. “in 3 years from now…” the happy path – What do you think the best cloud data solution should offer?

Mateusz Pytel

Google Certified Professional - Cloud Architect, GetInData

A common obstacle when adopting stream processing is the lack of data engineers and developers who are familiar with this fairly recent technology. The promise of streaming SQL is to make stream processing a lot easier and much more accessible for engineers who do not have a stream processing background (yet). In this round table session we will discuss and try to answer common questions about streaming SQL.

Why should streams be processed with SQL? How does it work? Who is using streaming SQL, and for what use cases? Why should streaming SQL have the same semantics as a regular batch query?
Please feel invited to join our discussion if you are curious about streaming SQL, are thinking about using it, or even have experiences to share.

Fabian Hueske

Software Engineer, Ververica (former data Artisans)

Before last May, we rushed towards a swift implementation of GDPR restrictions, new rules and requirements, and new features for our customers, and we introduced new roles in our organisations. Among all of these, we also needed to review what data we collect and how we process and store it. The whole IT sector was totally involved, with dozens of conferences, panels and discussions taking place. Quite often officials were late with local implementation (e.g. in Poland the final regulation came into force just over two weeks before GDPR), while consulting firms and lawyers gave us multiple opinions on detailed matters. This roundtable session will be devoted to GDPR – a summary of our approaches to its implementation and what it meant for our products, data analytics, teams, processes and clients.

Wojciech Ptak

CTO FreshMail, Consultant, Trainer, FreshMail

“Above all else show the data” – is this quote from E.R. Tufte still applicable? How has Big Data changed the visualization field, and what should we expect? Does novel data imagery seek answers, or does it reveal the right questions? Let's meet and talk through these and more aspects of present-day data visualization!

At this table all data visualization enthusiasts are welcome. The meeting has a free discussion form so that everyone can use this opportunity to exchange experience, best practices and ideas. Any related topic is welcome while the list below might help you choose yours:

  • role of data visualization
  • modern data imagery techniques
  • dos and don’ts of data visualization
  • how data visualization can fight Big Data's struggles (volume, variety, dirtiness…)
  • best tools on the market
  • choosing right tool for one’s needs
  • features of the next generation visualization tool (MR\VR?)
  • inspiration\information sources

Emil Ruchała

Data Analyst & BI Developer, GetInData

This time we're developing a system not for a Kaggle competition but for a real problem. What now?
How to construct robust models? How can I ensure that a model will work in the future? How can I prove it to the business?
How can I construct a maintainable model? Is my model interpretable?

Let’s talk!

Piotr Wygocki

co-founder, MIM Solutions

There are many pitfalls as a data processing setup moves forward and more projects are added. In this session we walk through an example company's challenges as it builds up its processing platform while new projects come in… What would you have done differently to keep people, plans, processes and the infrastructure working together?

Simon Caruana

Lead Engineer/Solution Design, Tesco Bank

Hadoop was built in 2006 with the purpose of allowing distributed processing of large data sets across clusters of computers using simple programming models. It was designed to be highly scalable and deliver a highly available service. It quickly evolved into quite a big ecosystem providing additional components to solve particular challenges, e.g. SQL querying, data transformation and loading, or analytics. While advanced Business Analytics (BA) is a must today to be competitive, as can be observed on the market, there are many ways to implement BA over legacy Data Warehouses (DW).
So, looking at the purpose Hadoop was built for, should companies implement BA using analytical MPP columnar solutions with Hadoop as a Data Lake for cold data, or should companies go in a direction where full analytics abilities are implemented in the Hadoop ecosystem only?
Let's discuss your experience in this area, your opinion, and which solution brings more business benefits in terms of value, risks & costs…

Maciej Paliwoda

Solution Engineer, VERTICA

Arkadiusz Wiśniewski

Senior Manager @IBM Analytics, Central & Eastern Europe, IBM Polska

How to deploy, manage, and optimize applications in multicloud and container environments. How to securely extend your private networks into public clouds and ensure the application experience. How to protect multicloud identities, direct-to-cloud connectivity, data, and applications.

Tomasz Kulakowski

Consulting System Engineer, Data Center & Multi-cloud Solutions, Cisco

Krzysztof Banel

Consulting Systems Engineer, Sales, Cisco

Artur Fijałkowski

Big Data Devops Expert, ING Tech Poland

16.50 - 17.30

2nd ROUND

Maciej Bryński

Big Data Architect, XCaliber

Philipp Krenn

Infrastructure | Developer Advocate, Elastic

Every business nowadays seeks better ways to get closer to its customers, to provide a better service experience, to manage customers' expectations, to react to customers' inquiries as fast as possible, to provide personalized products & services, etc. The moderators of the roundtable have had the opportunity to work with Telco, Retail and Banking companies on implementing solutions that helped them achieve those goals.

The focus of the discussion will be on the technical aspects that worked well in environments where vast amounts of data had to be processed and analyzed, such as Telco, as well as “small data” environments and the technical approaches that worked better than classical Big Data.

Darko Marjanović

CEO & Co Founder, Things Solver

Goran Pavlović

Integration Architect, Vip Mobile

Whether you're a global company that operates a well-known hospitality service, a Polish e-commerce platform, a multinational enterprise dealing with networking & telecommunications equipment or a marketing cloud platform handling TBs of data, you need to provide your internal and external customers with the ability to quickly access your data sources and make faster, better, data-driven business decisions.
Join me for an interesting discussion about some of the most cutting-edge technologies out there (e.g. Druid, Kylin and ClickHouse) aiming to help us achieve this goal, while taking other factors into consideration, such as scale-out, sub-second response times for OLAP queries and cost efficiency.
Together, we will share our experiences with said tools and compare their pros and cons.

Itai Yaffe

Big Data Tech Lead, Nielsen

Radosław Kita

Team Lead, Adform

Fabian Hueske

Software Engineer, Ververica (former data Artisans)

Stream processing is getting very popular these days. It's being used for streaming data integration, for analytics, data aggregation and monitoring, and for complex event processing. Alongside the hype around stream processing, there is a bloom of tools to solve these problems. At this table, we're going to discuss the use cases, the tools, which tool is best for which use case, and how they compare in performance and feature set.

Krzysztof Zarzycki

Big Data Architect, CTO and Co-founder, GetInData

“Above all else show the data” – is this quote from E.R. Tufte still applicable? How has Big Data changed the visualization field, and what should we expect? Does novel data imagery seek answers, or does it reveal the right questions? Let's meet and talk through these and more aspects of present-day data visualization!

At this table all data visualization enthusiasts are welcome. The meeting has a free discussion form so that everyone can use this opportunity to exchange experience, best practices and ideas. Any related topic is welcome while the list below might help you choose yours:

  • role of data visualization
  • modern data imagery techniques
  • dos and don’ts of data visualization
  • how data visualization can fight Big Data's struggles (volume, variety, dirtiness…)
  • best tools on the market
  • choosing right tool for one’s needs
  • features of the next generation visualization tool (MR\VR?)
  • inspiration\information sources

Emil Ruchała

Data Analyst & BI Developer, GetInData

This time we're developing a system not for a Kaggle competition but for a real problem. What now?
How to construct robust models? How can I ensure that a model will work in the future? How can I prove it to the business?
How can I construct a maintainable model? Is my model interpretable?

Let’s talk!

Piotr Wygocki

co-founder, MIM Solutions

Choosing a BI/analytics platform for your organization is a commitment. You may think you have found the perfect tool for your needs, and still you'll find missing functionalities, annoying features or bugs that you, your team and your users will need to live with (or hack around). Other times the outcome of your efforts will give users unprecedented added value worth a standing ovation (seen that).
At the table we will be discussing your experiences with different BI/analytics tools – what led you to choose one over another, and did it work out for you? Do you stick to old-school powerhouses, have you given a chance to fancy niche players (to use the naming convention of a Certain Controversial Quadrant), or maybe you have a great open-source alternative to recommend? We'll talk integration with data sources and the ability to deal with big volumes of data, data modeling and visualization, self-service exploration, scheduling, data security, embedding and implementing data science solutions into analytics.

Magdalena Paszko

Head of BI & Data Science, SHOWROOM

For this roundtable we want to discuss experiences, best practices and learnings for data migration projects from on-premise to the cloud. We will talk about organizational challenges, as well as tools that can be found in the space between open source and cloud provider offerings.

Present yourself: name, current position, your experience with on-premise and the cloud. Describe the parts of your data migration project that went really well. What are the biggest challenges in data migration projects that you have faced?

What tools that now exist would have been game changers for your data migration?

Max Schultze

Data Engineer, Zalando SE

MLOps is the adaptation of DevOps principles to ML service development. It’s a combination of processes and tools that help organisations to quickly deliver scalable ML applications. It relies on a close collaboration between data scientists who develop models and engineers who deploy and maintain them.

We currently experiment with building a working MLOps culture and would like to compare our experiences and thoughts with others. We’d like to centre this discussion around the tools that shorten the ML application development cycle, e.g. Amazon SageMaker. We will also touch upon ways of working between data scientists and engineers.

Krzysztof Jędrzejewski

Principal Data Scientist, Pearson

Nikoleta Lindenau

Associate Data Scientist, Pearson

BigData/AI is probably the hottest industry in the tech world right now, because almost every company would like to be data-driven, powered by AI, and monetise its data. No wonder many vendors and software houses offer their services on the market – also in Poland. During this roundtable session we will discuss the challenges and opportunities for BigData/AI vendors & software houses in Poland. These will include coping with competition (compete or collaborate), offering the right products or services, finding target customers (domestic vs. international, corporations vs. startups), sales & marketing strategies, recruiting talent, and other important aspects of running a successful company.

Adam Kawa

CEO and Co-founder, GetInData

17.30 - 17.45

Coffee break

17.45 - 18.15

Panel discussion - How current megatrends are changing the Big Data landscape and what it means to us

Although Big Data has always been evolving fast, today we see new megatrends that might revolutionise the Big Data landscape. They include containerisation, hybrid and public cloud, and ML/AI adoption, to name a few. Will companies go to the cloud at some point? How will Hadoop and the open-source ecosystem look? Will ML/AI still be a competitive advantage or just the norm? Which vendors will disrupt the Big Data industry? During this panel, our experts will attempt to answer these questions. We will talk about how they see the nearest future of Big Data and how they embrace these changes in their companies. The discussion will give some tips on how you can prepare for incoming changes and specific actions that you can take in 2019.

Host:

Adam Kawa

CEO and Co-founder, GetInData

Participants:

Fabian Hueske

Software Engineer, Ververica (former data Artisans)

Rob Keevil

Data Analytics Platform Lead, ING

Andrzej Michałowski

Head of AI Research & Development, Synerise 

Michał Żyliński

Cloud Customer Engineer, Google

18.15 - 18.30

Closing & Summary

Przemysław Gamdzyk

CEO & Meeting Designer, Evention

19.00 - 22.00

Networking party for all participants and speakers

At the end of the conference we would like to invite all attendees to an informal evening meeting at the “Wilcza 50” restaurant, which is located at the address contained in its name.

LOCATION

Community Corner

Throughout the whole day of Big Data Technology Warsaw Summit 2019 you are invited to the Community Corner. This is a dedicated room with a cozy setup – ready for talks, discussions, or maybe some rest and time off. It is also a place for a few informal presentations. It is located on the same level in the Marriott hotel, close to the Game Area and the cloakroom.

Please join:

14.00 - 14.30

Map of Polish AI – state of the sector in Poland

Łukasz Borowiecki

AI Sector Expert, Digital Poland Foundation