A decade ago, only a few companies ran their Big Data infrastructure and pipelines in the public cloud (Netflix was one of them). At that time, the most popular way to build Big Data solutions was to use on-premise infrastructure and an ecosystem of open-source components. In 2012-2013, we even saw examples of companies that tried public cloud solutions but quickly went back to building Big Data infrastructure in their own data centres. The main reasons were high costs, issues with elasticity, and service unavailability.
It was also a common opinion that public clouds and cloud infrastructure were simply too expensive, regardless of how the costs were calculated.
A clear change began in 2014, when Microsoft and Google started to compete with Amazon in the public cloud space. In my opinion, however, one of the biggest milestones in the adoption of public cloud-based infrastructure was Spotify's decision to move from its large on-premise, open-source data infrastructure to the public cloud. It was a signal to the Big Data community that the public cloud brings significant opportunities — so significant that companies like Spotify are willing to pay for them.
This trend has accelerated in recent years, including in 2020. We see (at least in Poland) significant adoption of public cloud solutions in companies from various sectors (e.g. banking or industry).
During the Big Data Technology Warsaw Summit 2021 conference, we will be able to listen to many presentations related to the use of the public cloud. Here are some interesting examples:
The last presentation I would like to highlight covers production use cases built on top of Azure. As mentioned in an earlier blog post, H&M will describe their multi-year AI/ML journey in the public cloud (Azure, Databricks) and explain how their architecture has evolved over time. The story will cover the entire MLOps stack, addressing a few common challenges in AI and Machine Learning products, such as development efficiency, end-to-end traceability, and speed to production.
Don't we too often forget about one of the most important factors in delivering even the most complex projects? Architecture, technologies, and other technical aspects are only part of the picture. Probably everyone agrees that one of the critical success factors in Big Data projects (if not the most critical) is the team.
Still, a large percentage of Big Data projects fail, exceed their budget, or miss critical deadlines. It becomes crucial to study and measure how team management can increase a project's chances of success. There are common patterns and best practices that, if properly defined, may help avoid the problems that lead to the failure of Big Data projects.
This year at the BDTWS 2021 conference, we will have various presentations that look at Big Data projects from the perspective of team management, often with a data-driven approach. These presentations are part of the "Data Strategy and ROI" track:
Jesse Anderson, author of "Data Teams: A Unified Management Model for Successful Data-Focused Teams", data engineer and trainer, will talk about the importance of a solid foundation for data teams. He will also identify common problems with such foundations and explain what management should do to fix them. Jesse has several years of experience studying the importance of data teams; here are his slides from 2017, where he describes the five dysfunctions of a data engineering team.
What’s next?
If you are interested in any of the presentations, we invite you to check our agenda and register before February 5th to take advantage of the Winter Promotion (link).
As you might expect, this year the conference will be held online. Please check my recent blog post, which explains how COVID-19 changes Big Data Tech Warsaw 2021 but makes it greater at the same time.