No More Silos: Integrating Databases and Apache Kafka

A presentation at UKOUG 2018 in Liverpool, UK by Robin Moffatt

Companies new and old are all recognising the importance of a low-latency, scalable, fault-tolerant data backbone, in the form of the Apache Kafka® streaming platform. With Kafka, developers can integrate multiple sources and systems, enabling low-latency analytics, event-driven architectures, and the population of multiple downstream systems.

In this talk we’ll look at one of the most common integration requirements: connecting databases to Kafka. We’ll consider the concept that all data is a stream of events, including the data residing within a database. We’ll look at why we’d want to stream data from a database, including driving applications in Kafka from upstream events. We’ll discuss the different methods for connecting databases to Kafka, and the pros and cons of each. Techniques including change data capture (CDC) and Kafka Connect will be covered, as well as an exploration of the power of KSQL for performing transformations such as joins on the inbound data.
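By way of illustration, here is a minimal sketch of a Kafka Connect JDBC source connector configuration that polls an Oracle table into a Kafka topic. All names, the connection URL, and the credentials are hypothetical:

    # Hypothetical JDBC source connector polling an Oracle table into Kafka
    # (connector name, host, credentials, and table are illustrative only)
    name=jdbc-source-orders
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:oracle:thin:@//oracle-host:1521/ORCL
    connection.user=kafka_connect
    connection.password=secret
    # Fetch new rows based on an ever-increasing key column
    table.whitelist=ORDERS
    mode=incrementing
    incrementing.column.name=ORDER_ID
    # Rows land in a topic named <topic.prefix><table>, here "oracle-ORDERS"
    topic.prefix=oracle-

A CDC-based connector such as Debezium takes a different approach, reading the database’s transaction log rather than polling with queries; the trade-offs between the two are part of the talk.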

Attendees of this talk will learn:

  • That all data is a stream of events; a database is just a materialised view of that stream.

  • The best ways to integrate databases with Kafka.

  • Anti-patterns to be aware of.

  • The power of KSQL for transforming streams of data in Kafka (a brief example follows this list).
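To give a flavour of the KSQL covered in the session, here is a minimal sketch of a stream–table join that enriches an inbound stream of orders with customer reference data. The topic names and columns are hypothetical:

    -- Register the inbound topics as a stream and a table
    CREATE STREAM ORDERS (ORDER_ID INT, CUSTOMER_ID INT, AMOUNT DOUBLE)
      WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

    CREATE TABLE CUSTOMERS (CUSTOMER_ID INT, NAME VARCHAR)
      WITH (KAFKA_TOPIC='customers', VALUE_FORMAT='JSON', KEY='CUSTOMER_ID');

    -- Continuously enrich each order with the customer's name,
    -- writing the result to a new Kafka topic
    CREATE STREAM ORDERS_ENRICHED AS
      SELECT O.ORDER_ID, O.AMOUNT, C.NAME
      FROM ORDERS O
      LEFT JOIN CUSTOMERS C ON O.CUSTOMER_ID = C.CUSTOMER_ID;

Because the join runs continuously, every new order event is enriched as it arrives, with no batch job required.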