Agenda
BigData Technology Warsaw Summit 2022
Check the ONSITE conference location: LOCATION
In this year's edition of the conference, we will focus on the following areas:
Artificial Intelligence and Data Science, Streaming and Real-Time Analytics,
Data Strategy and ROI, Data Engineering, Architecture Operations & Cloud.
26.04.2022 - WORKSHOP DAY
9.00 - 16.00
PARALLEL WORKSHOPS (independent workshops, paid entry) | on-site, WARSAW
Introduction to Machine Learning Operations (MLOps)
DESCRIPTION:
In this one-day workshop you will learn how to operationalize Machine Learning models using popular open-source tools like Kedro and Kubeflow, and deploy them using cloud computing.
During the course we simulate a real-world end-to-end scenario – building a Machine Learning pipeline to train a model and deploying it in a Kubeflow environment. We'll walk through practical use cases of MLOps for creating reproducible, scalable and modular data science code. Next, we'll propose a solution for running pipelines on Google Cloud Platform, leveraging managed and serverless services. All exercises will be done using either a local Docker environment or a GCP account.
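To give a flavour of the workshop material, here is a minimal sketch of a Kedro training pipeline of the kind described above. The dataset names, columns, and model choice are illustrative assumptions, not the workshop's actual exercises:

```python
# A minimal Kedro pipeline sketch. "raw_events", "features" and "model"
# are hypothetical entries that would live in the Kedro Data Catalog.
import pandas as pd
from kedro.pipeline import Pipeline, node
from sklearn.linear_model import LogisticRegression


def engineer_features(raw_events: pd.DataFrame) -> pd.DataFrame:
    """Derive model features from raw event data."""
    features = raw_events.copy()
    features["events_per_session"] = features["event_count"] / features["session_count"]
    return features


def train_model(features: pd.DataFrame) -> LogisticRegression:
    """Fit a simple classifier on the engineered features."""
    return LogisticRegression().fit(features[["events_per_session"]], features["converted"])


# Nodes map catalog inputs to catalog outputs; Kedro resolves the DAG, and
# plugins such as kedro-kubeflow can compile the same pipeline for Kubeflow.
training_pipeline = Pipeline(
    [
        node(engineer_features, inputs="raw_events", outputs="features"),
        node(train_model, inputs="features", outputs="model"),
    ]
)
```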
SESSION LEADER:
GetInData
Real-Time Stream Processing
DESCRIPTION:
In this one-day workshop you will learn how to process unbounded streams of data in real time using popular open-source frameworks. We focus mostly on Apache Flink and Apache Kafka; Flink in particular is one of the most promising open-source stream processing frameworks and is used in production more and more frequently.
During the course we simulate a real-world end-to-end scenario – processing, in real time, logs generated by users interacting with a mobile application. The technologies we use include Kafka and Flink. All exercises will be done using either a local Docker environment or your IDE.
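For illustration, here is a minimal PyFlink sketch of the kind of Kafka-to-Flink pipeline built in this workshop. The topic name, broker address, and schema are illustrative assumptions, and running it requires the Flink Kafka SQL connector JAR on the classpath:

```python
# A minimal PyFlink Table API sketch: read a JSON Kafka topic and compute
# per-user event counts in one-minute event-time windows.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE app_logs (
        user_id STRING,
        event_type STRING,
        ts TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'mobile-app-logs',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Tumbling-window aggregation, the kind of logic the exercises build out.
t_env.sql_query("""
    SELECT user_id,
           TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
           COUNT(*) AS events
    FROM app_logs
    GROUP BY user_id, TUMBLE(ts, INTERVAL '1' MINUTE)
""").execute().print()
```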
SESSION LEADERS:
GetInData
GetInData
Modern data pipelines with dbt
DESCRIPTION:
SESSION LEADER:
GetInData
19.00 - 22.00
EVENING SPEAKERS MEETING (Only for Speakers) on-site, WARSAW
27.04.2022 - 1ST CONFERENCE DAY | HYBRID: ONLINE + ONSITE
8.30 - 9.00
Morning coffee and networking time
9.00 - 9.10
Plenary Session
Evention
GetInData
9.10 - 11.25
PLENARY SESSION
9.10 - 9.30
Plenary Session
Join us at this session to learn about the key technologies and strategy that drove Microsoft's decade-long big data journey. Using our company as an example, we'll also explore the realities of organizations working at massive scale while still maintaining a fast pace of innovation. We will touch on multiple technologies during the presentation, from non-Microsoft ones (Spark, Python, R, HDFS, YARN) to Microsoft-proprietary cloud services (Azure Data Lake Analytics, Azure Data Lake Store, HDInsight).
#BigData #Technology #Microsoft
9.30 - 9.55
KEYNOTE PRESENTATION
Plenary Session
The Data Mesh paradigm is a strong candidate to supersede the centralized data lake and data warehouse as the dominant architectural patterns in data and analytics. It promotes the concept of domain-focused Data Products which go beyond sharing of files and towards guarantees of quality and acknowledgment of data ownership.
Through personal experience with applying the Data Mesh concept in practice, as well as dedicated field research, the presenter discovered the most common pain points at different stages of the journey and identified successful approaches to overcome those challenges. In this talk, you will gain both technical and organizational insights ranging from companies that are just starting to promote a mindset shift of working with data, to companies that are already in the process of transforming their data infrastructure landscape, to advanced companies that are working on federated governance setups for a sustainable data-driven future.
9.55 - 10.10
BREAK
10.10 - 10.35
Plenary Session
Google Cloud Poland
Google Cloud Poland
10.35 - 11.00
Plenary Session
As a Vertica customer for more than 10 years, Kambi tracks and reports on 150+ data sources for 425+ users worldwide, while maintaining regulatory and GDPR compliance in this highly regulated industry. Successful sports betting companies offer their customers many betting options, ease in financial transactions and great customer service, all of which generates repeat business and competitive advantages. Please join Andrew Hedengren as he describes how Vertica delivers a centralized version of the truth for a “simple and scalable” solution.
#BigData #PlatformArchitect #dataanalytics #Vertica #Kambi #dataplatform #dataanalyticssolution

Kambi
11.00 - 11.25
Plenary Session
According to 451 Research, 96% of enterprises are actively pursuing a hybrid IT strategy. That's great, but how can I implement it? I just selected my one and only cloud provider – so what goes where now? Should I migrate to hybrid right away or in phases? Or maybe there is another way?
This session has been designed for you to learn how CDP Hybrid Cloud delivers freedom of choice - private and public clouds, performance and cost management, security and self-service, flexibility and control. Join us to find out how your job can get a whole lot easier!

11.25 - 11.50
BREAK
11.50 - 13.20
PARALLEL SESSIONS
11.50 - 12.20
Architecture Operations & Cloud
Parallel Session
Using components of the public cloud seems pretty straightforward. You have a nice and modern UI. If you're a console-oriented guy (or freak) – there's even a built-in web shell to make you happy. Problems start to occur when each team uses various components, each in a different way, and your task is to make those components meet the company's policies, normalize the deployment process and, in the end, make deployment user-friendly.
Let me show you how we've automated this process with the help of Terraform and provided a user-friendly way to build infrastructure while putting all of its complexity out of sight.
All the examples will be based on infrastructure creation tasks for big data processing projects.
#public #cloud #terraform #python #infrastructure #data
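As a flavour of the approach, here is a minimal sketch of wrapping Terraform in Python; the paths, variable names, and flow are illustrative assumptions, not the speaker's actual tooling:

```python
# A sketch of wrapping Terraform in Python for policy-compliant,
# self-service deployments. Paths and variable names are hypothetical;
# a real setup would add validation, remote state and access control.
import json
import subprocess
from pathlib import Path


def deploy(project_dir: str, team: str, bucket_name: str) -> None:
    """Render team-specific variables, then run a standard Terraform flow."""
    tfvars = {"team": team, "bucket_name": bucket_name}
    Path(project_dir, "terraform.tfvars.json").write_text(json.dumps(tfvars))

    # The same init/plan/apply sequence for every team keeps deployments
    # uniform, whichever cloud components the team actually uses.
    for cmd in (
        ["terraform", "init", "-input=false"],
        ["terraform", "plan", "-input=false", "-out=tfplan"],
        ["terraform", "apply", "-input=false", "tfplan"],
    ):
        subprocess.run(cmd, cwd=project_dir, check=True)


deploy("./infra/data-processing", team="analytics", bucket_name="raw-events")
```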

Artificial Intelligence and Data Science
Parallel Session
It is well known that Data Science, Software Engineering and ML/AI are highly intertwined fields. When building future-proof advanced analytical solutions, originally devised by data scientists, not only are certain best practices from Software Engineering handy (e.g. pair programming), but they also help achieve, for example, low coupling and high cohesion in ML/AI systems. In this talk we show how this has been achieved on a concrete example of a predictive solution for understanding the impact of production process parameters on final product attributes.
#softwareengineering #SEbestpractices #ArtificialIntelligence #MachineLearning #industry4.0 #crossfunctional

The presentation will be announced soon.
12.20 - 12.50
Data Engineering
Parallel Session
In this talk I'm going to present the design process behind Nielsen's data auditing system, Life Line – from tracking and producing, to analysing and storing auditing information, using technologies such as Kafka, Avro, Spark, AWS Lambda functions and complex SQL queries. The data auditing project was one of the main pillars of 2020; the extensive design process we went through paid off and tremendously raised the quality of our data. We're going to cover (a sketch of audit-event production follows below):
* A lot of data arrival and integrity pain points
* Designing your metadata and the use of AVRO
* Producing and consuming auditing data
* Designing and optimizing your auditing table - what does this data look like anyway?
* Creating an alert based monitoring system and some other add-ons
* Answering the most important question of all - is it the end of the day yet?
#data #auditing #kafka #architecture #sql
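For illustration, here is a minimal sketch of producing an auditing event to Kafka with kafka-python; the topic name and event schema are illustrative assumptions, not Nielsen's actual design:

```python
# A sketch of emitting an auditing event to Kafka with kafka-python.
# Downstream consumers would compare produced vs. arrived row counts
# to flag data integrity gaps.
import json
from datetime import datetime, timezone

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

audit_event = {
    "dataset": "daily_ratings",
    "batch_id": "2022-04-27-001",
    "row_count": 1250000,
    "produced_at": datetime.now(timezone.utc).isoformat(),
}

producer.send("data-audit-events", value=audit_event)
producer.flush()
```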

Artificial Intelligence and Data Science
Parallel Session
- User queries at an e-commerce site exhibit a plethora of information, ranging from brands and sizes to intents or even desires. A better understanding of users' intent leads to a better user experience.
- How can we exploit plain natural language texts to extract semantic and syntactic information? And how can such information help us improve site behaviour?
- We will talk about enhanced language processing and understanding techniques for semantifying queries using advanced sequence learning techniques.
- We will also discuss how to design an offline evaluation to quantify the model performance at a large scale.
#nlp #queryunderstanding #ecommerce #sequencelearning #ml

Data Strategy and ROI
Parallel Session
Becoming a data-driven organization is a hot topic that keeps a lot of companies busy. Many leaders and executives see value in unleashing analytics potential and using it to impact business growth.
We are going to talk and share experiences about how we…
• democratized data across the different levels of the organization by consolidating, integrating and automating data workflows into a single Data Lake;
• designed and implemented cloud-based scalable & secure architecture;
• impacted business performance by unlocking various use cases;
• executed project based on the consolidated data for one of the largest nutrition brands.
#Datademocratization #businessimpact #Spark #datalake #digitization

12.50 - 13.20
Data Engineering
Parallel Session
Hearing aids nowadays can be treated as a cloud of IoT devices that produce a huge number of events.
However, getting from the devices to the report is not a simple path,
especially since we additionally need to take into account GDPR, transformations and cost optimizations in Azure.
#Azure #EventHub #Datalake #Databricks #Lakehouse #ETL
Demant
Real-Time Streaming
Parallel Session
The ability to analyze data in real time is crucial for mobile-network diagnostics and for ensuring the quality of service for end customers. To achieve this we have built a real-time ingestion and analytics platform that processes 2.2 billion messages a day from mobile network hardware. During the talk we will show how we used Flink and Flink SQL to build this platform. The solution includes the calculation of more than 5000 KPIs and 1500 aggregations defined in SQL, on 750 Kafka topics (a sketch of such a SQL-defined KPI follows below). We will describe how we manage Flink jobs at scale using Ververica and Kubernetes, how we monitor the platform using ClickHouse, and what problems we needed to overcome in the project.
#streaming #flink #real-time #operationalmonitoring #telco
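As a flavour of SQL-defined KPIs of this kind, here is a minimal PyFlink sketch using the datagen connector so the query can be tried locally; the table, columns, and KPI itself are illustrative assumptions, not the platform's real definitions:

```python
# A sketch of a SQL-defined KPI: the share of dropped events per cell in
# five-minute windows. The 'datagen' connector produces synthetic rows.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE radio_events (
        cell_id INT,
        status INT,
        event_time AS PROCTIME()
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '10',
        'fields.cell_id.min' = '1',
        'fields.cell_id.max' = '5',
        'fields.status.min' = '0',
        'fields.status.max' = '1'
    )
""")

t_env.sql_query("""
    SELECT cell_id,
           TUMBLE_START(event_time, INTERVAL '5' MINUTE) AS window_start,
           SUM(CASE WHEN status = 0 THEN 1 ELSE 0 END) * 100.0 / COUNT(*)
               AS drop_rate_pct
    FROM radio_events
    GROUP BY cell_id, TUMBLE(event_time, INTERVAL '5' MINUTE)
""").execute().print()
```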

Data Strategy and ROI
Parallel Session
- What is an analytics translator?
- What skills and competencies are required?
- How to build a data-driven business with analytics translators?
- Do you fit the profile of an analytics translator?
- Do you like quizzes?
#AnalyticsTranslator #DataDrivenBusiness #Transformation #PeopleAnalytics #DataScience

Speaker:
University of Applied Sciences Utrecht
13.20 - 14.15
LUNCH BREAK
14.15 - 15.55
CASE STUDY
14.15 - 14.45
Architecture Operations & Cloud
Parallel Session
Microsoft is a data-driven company. All client-side software is well instrumented and emits telemetry. Designing, developing and operating (in a DevOps model) a big data pipeline gathering this data at Microsoft scale (the pipeline spans 100k+ Azure cores, 13 data centers and hundreds of PBs) is a great learning opportunity. In this presentation I will show what we've learned over the last 7 years and describe the DevOps process we use. This will be a journey spanning our design principles, testing approach, ops mindset (monitoring, automation, continuous improvement), our rollout strategy across 13 data centers, and more.
#devops #telemetry #continuousimprovement #scale

Data Engineering
Parallel Session
- Common issues with data lakes
- What is Apache Iceberg, and what problems does it solve?
- Building a CDC archive at Shopify using Iceberg
- Management and considerations when using Iceberg
- A brief intro into what's next on deck for Shopify + Iceberg (Type-1 dimensions using Iceberg's V2 spec with row-level deletion – see the sketch after this list)
#iceberg #datalake #columnardata #dataplatform #CDC
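For illustration, here is a minimal PySpark sketch of applying CDC records to an Iceberg table with MERGE INTO; the catalog settings, table, and column names are illustrative assumptions, and running it requires the Iceberg Spark runtime JAR:

```python
# A sketch of applying a CDC batch to an Iceberg table via MERGE INTO.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("cdc-to-iceberg")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.customers (
        customer_id STRING, email STRING
    ) USING iceberg
""")

# A toy CDC batch: one update and one delete.
changes = spark.createDataFrame(
    [("c-1", "UPDATE", "alice@new.example"), ("c-2", "DELETE", None)],
    ["customer_id", "op", "email"],
)
changes.createOrReplaceTempView("cdc_batch")

# Row-level operations like this are what Iceberg's V2 spec makes cheap,
# enabling Type-1 dimensions directly on the data lake.
spark.sql("""
    MERGE INTO demo.db.customers t
    USING cdc_batch s
    ON t.customer_id = s.customer_id
    WHEN MATCHED AND s.op = 'DELETE' THEN DELETE
    WHEN MATCHED THEN UPDATE SET t.email = s.email
    WHEN NOT MATCHED AND s.op <> 'DELETE' THEN
        INSERT (customer_id, email) VALUES (s.customer_id, s.email)
""")
```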

The presentation will be announced soon.
14.50 - 15.20
Real-Time Streaming
Parallel Session
TerrariumDB is a column and row store engine designed specifically for behavioral intelligence and real-time data processing, and it is the core of the Synerise platform. It simultaneously processes data-heavy analytics while executing various business scenarios in real time. TerrariumDB was designed to analyse behavioural data, where data order and time are important for making business decisions. During the talk we will describe why we are developing our own distributed database engine, what the challenges and pitfalls were, which use cases TerrariumDB fits best, and how it handles billions of queries per day where the 99th percentile matters.
#StreamingDatabase #RealtimeAnalytics #ML #OperationalAnalytics

Artificial Intelligence and Data Science
Parallel Session
What is a feature store? Why do we need it? How do we use it? In this session, I would like to show how to use the Feast feature store to build a complete MLOps process: starting with fetching historical data and model training, through model versioning and the deployment process, and finally online feature materialization and real-time model inference.
#featurestore #feast #mlops
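For illustration, here is a minimal sketch of the Feast flow described above; the "driver_stats" feature view, its features, and the entity are illustrative assumptions that would be defined in the feature repository:

```python
# A sketch of the offline/online Feast flow.
from datetime import datetime

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Offline: point-in-time-correct historical features for model training.
entity_df = pd.DataFrame({
    "driver_id": [1001, 1002],
    "event_timestamp": pd.to_datetime(["2022-04-01", "2022-04-02"]),
})
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_stats:trips_today", "driver_stats:avg_rating"],
).to_df()

# Online: materialize the latest values, then read them at inference time.
store.materialize_incremental(end_date=datetime.utcnow())
online_features = store.get_online_features(
    features=["driver_stats:trips_today", "driver_stats:avg_rating"],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
```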

Data Strategy and ROI
Parallel Session
Keep up with the increasing trend in the software world! Learn more about Digital Twins and what role Big Data/AI/XR/IoT and Robotics play in this story. Digital Twin is a broad topic with a number of business areas where it can be applied and a wide range of technologies under the hood. In this session you will learn about a framework intended to handle the underlying complexity and facilitate the design and adoption of Digital Twins; technical aspects and considerations; and real-life examples covering success stories and delivered value. This session can be interesting for different groups: architects, tech-savvy business leaders, product managers, technology experts and consultants.
#DigitalTwins #AI/XR/IoT #Robotics

15.25 - 15.55
Architecture Operations & Cloud
Parallel Session

Data Engineering
Parallel Session
Data Engineering used to be a hard problem that only people with a software engineering background could solve. Additionally, the number of use cases and analytics needs waiting to be solved in companies outnumbers data engineering teams, burdening them and making business departments wait for ages for their use cases to be implemented. On the other side, business departments would love to implement the data pipelines themselves. But for a long time they couldn't do it well, mainly because they lacked the engineering skills required to work efficiently and deliver technical quality. Today we are witnessing this obstacle fall away, thanks to the maturity of modern data platforms and to tools that make it easy to implement pipelines according to the best DataOps practices – like dbt, if you have heard of it.
Tools like dbt are great, but they are just pieces of a bigger puzzle. What we at GetInData do is take those pieces, these advancements in data tools, and combine them into a coherent, unified data pipelines framework that guides analytics engineers by the hand in developing pipelines end-to-end, from idea to production.
Come to our presentation to hear about modern data platforms, dbt, and our framework.
#AnalyticsEngineering #SelfServiceDataPipelines #Cloud #dbt #SQL

Real-Time Streaming
Parallel Session
Stream processing has become vastly popular in the past couple of years. If you're already familiar with Kubernetes, you have the necessary toolbox for deploying and operating low-latency, mission-critical streaming data pipelines with Apache Flink. After this talk, you'll have a broad understanding of how complex stream processing applications fit into the cloud-native era and feel comfortable introducing them to your organization.
15.55 - 16.15
BREAK
PEER2PEER SHARING
16.15 - 17.15
ROUNDTABLES (ONLINE or ONSITE)
Parallel roundtable discussions are the part of the conference that engages all participants. They serve a few purposes. First of all, participants have the opportunity to exchange their opinions and experiences about a specific issue that is important to that group. Secondly, participants can meet and talk with the leader/host of the roundtable discussion – selected professionals with vast knowledge and experience.
There will be roundtable sessions on both days, so every conference participant can take part in two discussions, one on each day of the conference.
Roundtable discussion

• Normalization processes - when should you do that? How do you identify your data issues?
• How do you explore new models and solutions?
• How do you decide whether the new model works best for you?
Roundtable discussion

Pandas is a fast and powerful open-source data analysis and manipulation framework written in Python. Apache Spark is an open-source unified analytics engine for distributed large-scale data processing. Both are widely adopted in the data engineering and data science communities. Even though there's great value in combining them in terms of productivity, scalability, and performance, it's often overlooked. Join us for a live discussion, where you will hear about and share your experience with combining Spark and Pandas to benefit from both worlds! We welcome all levels of expertise, from intermediate to advanced.
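For a flavour of the topic, here is a minimal sketch using the pandas API on Spark (available since Spark 3.2); the file path and column names are illustrative assumptions:

```python
# A sketch of the pandas API on Spark: pandas syntax, Spark execution.
import pyspark.pandas as ps

# Reads and aggregations run as distributed Spark jobs, not on one machine.
psdf = ps.read_parquet("/data/events")
daily = psdf.groupby("event_date")["revenue"].sum().reset_index()

# Convert to plain pandas once the result is small enough to fit locally.
print(daily.to_pandas().head())
```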
Roundtable discussion

Vector databases store data with vector embeddings, which are computed with Machine Learning models. Indexed vector embeddings enable fast similarity search and retrieval. An open-source vector search engine like Weaviate can be used for semantic search, similarity search over text, images and other types of unstructured data, one-shot labeling, etc. These features of vector search engines enable you to scale ML models, build recommendation systems or do anomaly detection.
In this discussion we will talk about vector databases. You can learn about vector search engines, share your experiences, get updates from the latest techniques, meet people working in a similar field and get feedback on your ideas. Whether you're new to vector databases or identify as an experienced user, all are welcome to join!
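For illustration, here is a minimal semantic-search sketch using Weaviate's v3 Python client; the class name, properties, and local endpoint are illustrative assumptions and presuppose a populated instance with a text vectorizer module enabled:

```python
# A sketch of semantic search against a hypothetical "Article" class.
import weaviate

client = weaviate.Client("http://localhost:8080")

# near_text embeds the query and retrieves the most similar objects.
result = (
    client.query.get("Article", ["title", "summary"])
    .with_near_text({"concepts": ["data privacy regulation"]})
    .with_limit(5)
    .do()
)
print(result["data"]["Get"]["Article"])
```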
Roundtable discussion

Traditionally, data governance has been defined as managing data integrity and access in enterprise systems. Usually it consists of a centralized team with a steering committee, data stewards, process workflows and policies. In a traditional environment where you have a centralized data team this can work well, but in today's world, where each department has its own analysts creating different analyses, such a project will likely fail. We'll discuss how today's organizations are approaching data governance given the fast-changing, decentralized environment of data today.
Roundtable discussion

Organizations are disappointed with the return on investment of their dashboarding efforts. At the same time, trends like natural language querying, data catalogs, and metric stores are arising. Are dashboards dead, or have we simply not seen their best days yet?
Roundtable discussion

Over the last years, multiple data transformation projects have included significant investments in a centralized data lake that could swallow any type and quantity of data. There was a promise of a single point of data for analytics and the democratization of data. However, the concept brought many expensive problems in both implementation and maintenance. Other issues include a lack of clear ownership and division between data contexts, which lowered the return on investment.
These issues are well known to software architects, and many lessons can be shared from software architecture patterns – sociotechnical, strategic, and implementation ones alike. For example, the concept of Data Mesh connects really well with Domain-Driven Design on multiple levels; the two complement each other. The point of our discussion will be to share learnings from applying software architecture patterns to data architecture paradigms.
19.00 - 22.00
EVENING NETWORKING SESSION | on-site, WARSAW
Let's get together! To talk, to meet new people, to see old colleagues. We invite you for face-to-face interaction onsite.
More information will be provided soon.
28.04.2022 - 2ND CONFERENCE DAY | ONLINE
9.30 - 12.00
PARALLEL WORKSHOPS (ONLINE)
Plenary Session
In this workshop you will not only master BigQuery but also learn that, unlike traditional data modeling techniques, the Data Vault model is highly scalable and can adapt easily to ever-changing business requirements. During the course we simulate a real-world end-to-end scenario – processing metrics generated by devices that capture usage in real time. The technology we will use is BigQuery. All exercises will be done within the BigQuery UI on Google Cloud Platform.
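For a flavour of Data Vault modeling on BigQuery, here is a minimal sketch using the Python client; the dataset, tables, and columns are illustrative assumptions (the workshop itself works directly in the BigQuery UI):

```python
# A sketch of Data Vault-style hub and satellite tables in BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE TABLE IF NOT EXISTS demo_dv.hub_device (
        device_hash_key STRING NOT NULL,  -- hash of the business key
        device_id STRING NOT NULL,        -- the business key itself
        load_ts TIMESTAMP NOT NULL,
        record_source STRING NOT NULL
    )
""").result()

# Descriptive attributes live in satellites, so new requirements add new
# satellites instead of altering the hub -- the source of Data Vault's
# adaptability to changing business needs.
client.query("""
    CREATE TABLE IF NOT EXISTS demo_dv.sat_device_usage (
        device_hash_key STRING NOT NULL,
        load_ts TIMESTAMP NOT NULL,
        usage_minutes INT64,
        firmware STRING
    )
""").result()
```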
Google Cloud Poland
Google Cloud Poland
12.00 - 13.00
BREAK
13.00 - 13.10
OPENING
13.10 - 13.35
KEYNOTE PRESENTATION
Artificial Intelligence and Data Science
Plenary Session
Building your own platform is often frowned upon these days. Everyone is encouraged to reuse existing solutions, for well-known reasons. But using a ready-made platform or tool should not be a mindless process: reusability is an art. During this presentation, you will learn why we decided to build our own MLOps platform while not reinventing the wheel – using ready-made components with a touch of custom ones. We'll cover the benefits of this approach, but also the limitations and hurdles we have encountered. We hope that our experience will help you make the right decisions in your projects – sometimes, perhaps, riskier ones.
#ML/AI #MLOps #GoogleCloud #OpenSource

13.40 - 14.10
PARALLEL SESSIONS
Architecture Operations & Cloud
Parallel Session
Processing Electronic Medical Records (EMRs) is super complex in itself, but it gets infinitely more complex when you want to do it at scale, covering multiple countries (and even continents). How did the IQVIA EMRs Factory team build what is probably the largest global Electronic Medical Records data platform in the world? How was a combination of Cloudera, Spark, Kafka and Looker used to build a solid foundation, and where are we taking it now on a hybrid-cloud architecture? How and why are we evolving – adopting Snowflake and Databricks to enable rapid business growth? We'll talk about what drove our design decisions (including requirements critical in healthcare, like privacy, security and governance) and what we learned over the years running and modernising the platform, including the importance of data residency and its impact on the system architecture.
#Cloudera #Spark #Scala #Java #Looker #Cloud #Snowflake #Databricks

Architecture Operations & Cloud
Parallel Session
Kubernetes is an excellent vehicle for driving your analytics, big data, and ML workloads, but a one-size-fits-all configuration won't give you all the benefits the platform provides.
I want to walk you through the things to consider when creating a high-performance Kubernetes cluster for your workloads, covering resource utilization, topology, application settings, and consumption of hardware components.

The presentation will be announced soon.
14.15 - 14.45
CASE STUDY
Artificial Intelligence and Data Science
Parallel Session
One of the more annoying challenges product businesses face is establishing causal relationships while trying to determine the impact of product changes. Frequently, A/B testing is used to this end, but it is expensive and time-consuming for engineering and design teams, and often requires tooling as well as randomization. Here at Willa (a payments and invoicing app for US-based freelancers), we used an econometric technique called Difference-in-Differences (D-I-D) regression to tease out the causal impact of simplifying the invoicing process in our app on the invoice creation rate per user. We took advantage of natural variation in product usage between different kinds of users, and were able to reach statistically significant results cheaper and faster than via A/B testing.
The technologies used were BigQuery and Jupyter/Databricks, and GCP more broadly, although the approach is platform agnostic. Overall, the presentation will be interdisciplinary in nature, so anyone interested in economics, data science and engineering, cloud technologies and general dynamics of product businesses is welcome to attend.
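For illustration, here is a minimal sketch of a D-I-D regression with statsmodels; the panel file and column names are illustrative assumptions, not Willa's actual schema:

```python
# A sketch of a Difference-in-Differences regression. One row per
# user-period: `treated` marks users exposed to the simplified flow,
# `post` marks periods after the change shipped.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("user_invoicing_panel.csv")

# The coefficient on the treated:post interaction is the D-I-D estimate
# of the causal effect on invoices created per user.
model = smf.ols("invoices_created ~ treated * post", data=df).fit()
print(model.summary())
```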
Real-Time Streaming
Parallel Session
Streaming trillions of messages per day (and meeting the agreed SLAs) is a challenging job. In this session I will present how we handle some critical aspects, like reliability, performance, skewed data, debugging and performance testing. This session is targeted at software engineers passionate about performance and handling large amounts of data.
#spark #streaming #trillions

Data Strategy and ROI
Parallel Session

PEER2PEER SHARING
14.45 - 15.40
ROUNDTABLES (ONLINE)
Parallel roundtable discussions are the part of the conference that engages all participants. They serve a few purposes. First of all, participants have the opportunity to exchange their opinions and experiences about a specific issue that is important to that group. Secondly, participants can meet and talk with the leader/host of the roundtable discussion – selected professionals with vast knowledge and experience.
There will be roundtable sessions on both days, so every conference participant can take part in two discussions, one on each day of the conference.
Roundtable discussion
• What impact does the trend towards cloud warehouses (like Snowflake) and delta lakes (like Databricks) have on existing data platforms?
• How are you using these solutions - migrating legacy workloads, implementing new use cases?
• Are you fully committed to one platform (and using all native features, e.g. Snowpark and Databricks Live Tables) or try to stay platform agnostic?
• Which features have the biggest impact on your use cases (elasticity, speed, data sharing, ML integration, built-in jobs orchestration, …)?
• What roadblocks have you hit when implementing Snowflake and Databricks?
#Snowflake #Databricks #Cloud
Roundtable discussion
The way things get named in this industry can be pretty odd. A lot of people have been hearing noise about how cool a “data lakehouse” is, since it combines the power of a data lake with the power of a data warehouse. There are two ideas at work here, having to do with “big data” – the need to store massive quantities of information in the lake, and the need to analyse that data in the warehouse.
The business value becomes clear when you have a properly constructed data lakehouse at your disposal. You can:
• Support data science as well as BI using the same data sets
• Access all of your data, wherever it may reside
• Save on the labour and cost of moving data from one place to another, just to do analytics.
OR, why not go beyond this and unify how all teams do analytics? Why not unify user experiences and ways of accessing the data by building the “Unified Analytics Environment” to simplify the architecture and manageability of the environment?
Let’s discuss the advantages of investing in a data lakehouse and challenges related to it, and see how it can make a big difference to your organization.
Roundtable discussion
What's the optimal strategy for building operations teams in the multi- and hybrid-cloud era? Are we going to see ops-related engineering roles like DevOps, DataOps and MLOps specialize even more and gravitate away from each other across the responsibilities spectrum? Or should we rather expect them to consolidate, which would mandate a new way of leveraging cross-discipline CloudOps talent as the foundation of operations teams? And how does this relate to hyperautomation, software-defined infrastructure and the massive adoption of data analysis across organizations?
Let's meet and talk about the challenges we will face as Big Data specialists and managers in this vivid and emerging area of high-complexity technology, limited human resources and endless business opportunities.
15.40 - 16.45
CASE STUDY
15.40 - 16.10
Architecture Operations & Cloud
Parallel Session
As a global bank we need to follow strict financial regulations and certifications. Our goal is an agile organization where developers can provision and operate cloud resources while we stay in control and remain compliant. We provide a controlled development environment where guardrails enforce a high level of security while staying agile. This includes custom pipeline tasks, template repositories and automated change control. Get insight into how we control resources, security and cost in automated processes.
#azure #devops #compliant #agile #selfservice

Artificial Intelligence and Data Science
Parallel Session
The targeting of online advertising is like gold seeking – in our case, interpretable machine learning models serve as the diggers that discover the most profitable targeting criteria for a particular campaign to maximize the final profit. In an environment consisting of multiple advertising products and traffic counted in millions of page views daily, they need to be fast and reliable to find the best spot before others do.
In this talk, I will present how we met the business goal of KPI maximization while ensuring the scalability and stability of the solution. The journey starts with the kind of gold we search for, defined by the business context of online advertising. Next, I will present our gold-digging machine learning techniques and discuss technical details of the architecture design and implementation in the AWS environment, integrated with our in-house ad server. Finally, I will discuss the performance of the deployed models, which are constantly monitored in production.
#MachineLearning #RealWorldSystem #AdOptimization #ModelInterpretability

Data Strategy and ROI
Parallel Session
16.15 - 16.45
Architecture Operations & Cloud
Parallel Session
Since data infrastructure was first set up at Uber, we have been managing our own server fleet. Age-old practices for managing hosts posed several challenges that stood in the way of innovation.
We did a ground-up re-architecture of our deployment stack, embraced the DevOps model and automated away operational tasks. This effort gained us a lot of benefits across several areas (efficiency, security, etc.) and strategically positioned us to leverage the cloud.
In this talk, we'll briefly discuss the challenges we faced as part of our containerization journey, our strategies/solutions to overcome these challenges and mainly focus on lessons we learned along the way.
#containerization #architecture #devops #automation #massive-migration

Artificial Intelligence and Data Science
Parallel Session
The primary source of bias in machine learning is not the algorithms deployed, but rather the data used as input to build the predictive models. In this talk we will discuss why this is a huge problem and what to do about it. Different sources of bias will be identified, along with possible solutions for remedying the situation when deploying machine learning. We will also speak about the importance of transparency when using machine learning to predict outcomes that impact critical decisions.
• Learn why most predictive models are biased.
• Learn about the sources of bias in predictive models.
• Learn how to reduce the negative impact of potential bias in predictive models.
#MachineLearning #ArtificialIntelligence #EliminationofBias

The presentation will be announced soon.
16.50 - 17.20
KEYNOTE SESSION
Parallel Session
17.20 - 17.30
SUMMARY & CLOSING
ONLINE EXPO + KNOWLEDGE ZONE
Free participation
We have a great set of presentations available in the CONTENT ZONE, pre-recorded and available as Video on Demand for conference participants in advance.
Parallel Session
In the world of AI, ML and Big Data analysis, we have forgotten about our main clients – people who are not interested in querying databases or waiting for the results of data preparation; they want to easily play with data themselves.
In this presentation, I will explain:
- what analytical cubes are
- who will use them, and how
- the differences between Apache Kylin and Microsoft Analysis Services, and how to prepare a cube in each environment
#kylin #analysisservices #analyticalcubes #ssas #apache

Parallel Session
1) When Data Lake becomes a Data Swamp
2) Data access patterns and sharing - what could possibly go wrong?
3) Why do we need to introduce Data Layers
4) Bronze / Silver / Gold Data Layers and the concept behind it
5) How to make it work - tips and tricks.
#data_swamp #good_practices #sharing_data #thinking_about_users #data_lineage

Parallel Session

Parallel Session

Parallel Session

Parallel Session
- How do you transform from building one deep learning model per month to evaluating and deploying hundreds of them in a single week?
- Building an MLOps solution with CI/CD practices using the CDK
- How do you detect underground structures based on a bunch of radar signals and no labels?
- Should we avoid manual steps in an automatic Machine Learning pipeline?
- Can we use Lambda aliases to differentiate between dev and prod environments? (see the sketch after this list)
#MLOps #CDK #DeepLearning #Automation #Python
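For a flavour of the Lambda-alias question above, here is a minimal AWS CDK (v2, Python) sketch; the stack, function handler, and asset path are illustrative assumptions:

```python
# A sketch of dev/prod separation with Lambda aliases in AWS CDK v2.
from aws_cdk import App, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class InferenceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        fn = _lambda.Function(
            self, "InferenceFn",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="handler.predict",
            code=_lambda.Code.from_asset("lambda"),
        )

        # Each alias is a stable endpoint pinned to a published version,
        # so "prod" can keep serving an older model while "dev" moves on.
        _lambda.Alias(self, "DevAlias", alias_name="dev",
                      version=fn.current_version)
        _lambda.Alias(self, "ProdAlias", alias_name="prod",
                      version=fn.current_version)


app = App()
InferenceStack(app, "InferenceStack")
app.synth()
```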
