Online Expo + Knowledge Zone - Big Data Technology Warsaw Summit
This free conference participation package includes:
- online access to the Exhibitors Zone
- access to the VOD Knowledge Zone
The package does not include:
- access to the speakers' presentations in the hotel and via stream during the conference
- participation in on-site workshops
- access to recordings of the lectures
- access to PDF presentations
- a certificate of participation
- the possibility to participate in the evening meeting.
Interested in extending the conference participation package?
Pay 545 PLN / 125 EUR net for the PREMIUM package
to access the conference both online and on-site.
Please contact:
Weronika Warpas
m: +48 570 611 811
e: weronika.warpas@evention.pl
Online Expo
The online Expo Zone (exhibitors zone) consists of the virtual booths of conference partners.
At the Big Data Tech Summit, the booths in the Expo Zone are divided into a TECH Zone and a JOB Zone.
The TECH booths include:
- PDF materials and pages with information about technological solutions, products and services
- video materials: instructional videos, demos and case studies
- quizzes and polls with multiple-choice questions, plus information about the prize for completing them
- contact forms and chats
- contact cards of company representatives
- links to social media for more information
- additional information, e.g. graphics and interesting diagrams.
The JOB booths include:
- company descriptions
- employee benefits (what the companies offer and reasons why it is worth joining them)
- positions for which recruitment is currently under way, with links to the offers
- locations where recruitment is in progress
- polls on preferred form, date and time of contact
- chats for communicating with company representatives during the conference
- links to social media for more information.
Knowledge Zone
A great set of presentations is available in the CONTENT ZONE, pre-recorded as Video on Demand for conference participants. We will provide access to these recordings on April 26, and participants will retain access to the Knowledge Zone on the platform for 30 days after the conference.
Parallel Session
In the world of AI, ML and Big Data analytics, we have forgotten about our main clients: people who are not interested in querying databases or waiting for data preparation to finish, and who simply want to play with the data themselves.
In this presentation, I will explain:
- what analytical cubes are
- who will use them, and how
- the differences between Apache Kylin and Microsoft Analysis Services, and how to prepare a cube in each environment
#kylin #analysisservices #analyticalcubes #ssas #apache
Parallel Session
1) When Data Lake becomes a Data Swamp
2) Data access patterns and sharing - what could possibly go wrong?
3) Why do we need to introduce Data Layers
4) Bronze / Silver / Gold Data Layers and the concept behind it
5) How to make it work - tips and tricks.
#data_swamp #good_practices #sharing_data #thinking_about_users #data_lineage
Parallel Session
- Where entity resolution and deduplication are needed
- Why it is a tough problem to solve
- A demonstration of open-source technologies that solve it
#entityresolution #recordlinkage #deduplication #dataquality #ml
Parallel Session
- Microservices
- Industrialized data ingestion
- Data virtualization
- Data privacy
- Operational data store
#Kubernetes #microservices #spark #datalakehouse
Parallel Session
- How to go from building one deep learning model per month to evaluating and deploying hundreds of them in a single week?
- Building an MLOps solution with CI/CD practices using the CDK
- How to detect underground structures from a collection of radar signals with no labels?
- Should we avoid manual steps in an automated Machine Learning pipeline?
- Can we use Lambda aliases to differentiate between the dev and prod environments?
#MLOps #CDK #DeepLearning #Automatisation #Python
VOD
Watch this short 10-minute video to see how technologies such as Kafka, .NET Core, Scala and Java are used in AdTech, and learn more about the journey of data in AdTech based on the Adform example. The presentation goes through the 5 V's of Big Data (volume, variety, velocity, veracity, value), shows where all this data comes from, and gives a glance at the data pipelines.
#AdTech #BigData #Adform #DataPipelines #kafka #scala #.NET #java
VOD
Traditional data platforms are complex, expensive and slow to deliver business value. Why not consider a solution your business users would love to use? Why does a typical analytical project have to take months to deliver?
Let me show you how a single, fully self-service platform can provide all the services, orchestrated together, to address your analytical needs in action!
#analytics, #bigdata, #ROI, #datadriven, #selfservice
VOD
AI is as much a management challenge as it is a technological challenge.
There has been, and still is, a big debate about data-driven management. We have long encouraged managers and decision makers to use whatever data they have available to draw insights and make more informed, verified decisions. As a result, we can observe an increase in Data Analytics and Data Science employment. We can also observe the increasing complexity of the systems, tools, applications, regulations and policies that surround their day-to-day work. Now is the time to encourage everyone to think about how to organize and manage all of the teams, tools, pipelines and applications that we have brought to life in Data Science.
#ai, #strategy, #mlops, #data science, #ml, #management
VOD
During the presentation we will show how we delivered a project implementing an Augmented Analytics platform for one of our clients. We will describe the client's challenges and how the platform was able to solve them.
#Business Intelligence, #AI, #Artificial Intelligence, #Augmented Analytics
VOD
Data preparation is a key phase in any machine learning / data mining process. In most cases it is complicated and undocumented, and there are serious issues in moving the whole process from the development environment to the production environment. I want to show that, with the right tools, it does not have to be a problem.
Ab Initio is a platform for processing, transforming and managing data in every aspect, from low-level details to high-level abstractions. At the core of the process are graphs, which are responsible for data transformations and provide a declarative, streaming approach to data processing. In this short presentation I will show how to load the data, process it into the required form, and use it to build an ML model. This approach is flexible enough to be useful for people with different technical skills: programmers can use graphs like a normal programming language, data scientists can operate on data without touching programming details, and managers can understand the whole process and have full documentation of it.
#AI/ML #mlops #datamining #ETL #declarativeprogramming
Interested in extending the conference participation package?
In the PREMIUM package you will receive:
- participation in the conference on-site on 27.04 and online on 27-28.04 (access to the speakers' presentations in the hotel and via stream during the conference)
- 30-day access to recordings of the lectures
- access to PDF presentations
- a certificate of participation
- the possibility to participate in the evening meeting
- and, of course, online access to the Exhibitors Zone and the VOD Knowledge Zone.