Data Observability (DO) is an emerging category of tooling that helps organizations identify, resolve, and prevent data quality issues by continuously monitoring the state of their data over time. This talk is a deep dive into DO: its origins (why it matters), its scope and components (what it is), and actionable advice for putting observability into practice (how to do it).
We’ll rigorously define data observability to understand how it differs from software observability and from existing data quality monitoring. We will derive the four pillars of DO (metrics, metadata, lineage, and logs), then describe how these pillars tie to common use cases encountered by teams using popular data architectures, especially cloud data stacks.
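To make the metrics pillar concrete, here is a minimal sketch of what table-level monitoring might look like; the `orders` table, its column names, and the staleness threshold are hypothetical, and real DO tools track such metrics continuously over time rather than at a single point:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def collect_metrics(conn, table):
    """Collect two simple table-level metrics: row count and last update time."""
    cur = conn.execute(f"SELECT COUNT(*), MAX(updated_at) FROM {table}")
    row_count, max_updated = cur.fetchone()
    return {"row_count": row_count, "last_updated": max_updated}

def is_fresh(metrics, max_staleness, now=None):
    """Flag the table as stale if nothing was updated within max_staleness."""
    now = now or datetime.now(timezone.utc)
    last = datetime.fromisoformat(metrics["last_updated"])
    return (now - last) <= max_staleness

# Hypothetical "orders" table standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
now = datetime.now(timezone.utc)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, (now - timedelta(hours=i)).isoformat()) for i in range(3)],
)

metrics = collect_metrics(conn, "orders")
fresh = is_fresh(metrics, max_staleness=timedelta(hours=6), now=now)
print(metrics["row_count"], fresh)  # 3 True
```

In practice these per-run measurements would be written to a metrics store so that anomalies (sudden row-count drops, growing staleness) can be detected against historical baselines.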
Finally, we’ll close with pointers for putting observability into practice, drawing on our experience helping teams of all sizes, from fast-growing startups to large enterprises, successfully implement DO. Rolling out observability across an organization involves not only choosing the right technology, whether a commercial solution, an in-house initiative, or an open source project, but also establishing the right processes, with the right people responsible for specific jobs. Participants can expect to leave with concepts for understanding how DO can help their organizations and concrete ideas for implementing it.