Solutions
DeltaStream Solution: Read from Apache Kafka and write to Databricks
With the DeltaStream and Databricks integration, users can process data from their streaming sources in DeltaStream and write the results directly to their Delta Lake for long-term storage. They can then run batch computations in Databricks on this continuously updated data set.
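For instance, once DeltaStream has landed results in a Unity Catalog table, an analyst can query the Delta table with ordinary batch SQL in Databricks. The catalog, schema, and table names below are hypothetical placeholders:

```sql
-- Batch aggregation in Databricks SQL over the Delta table that
-- DeltaStream keeps up to date (names are illustrative placeholders).
SELECT pageid,
       COUNT(*) AS views
FROM main.streaming.pageviews_filtered
GROUP BY pageid
ORDER BY views DESC;
```

Because DeltaStream writes continuously, the same batch query returns fresher results each time it runs, with no separate ingestion job to schedule.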
Charles and Shawn use DeltaStream to connect to an Apache Kafka cluster, then filter and process the data before landing it in Databricks via Unity Catalog.
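A minimal sketch of what that pipeline might look like in DeltaStream SQL follows. The store parameters, stream schema, and filter predicate are all hypothetical, and the exact WITH-clause property names may differ from DeltaStream's current syntax, so treat this as an outline rather than a copy-paste recipe:

```sql
-- 1. Register the Kafka cluster as a DeltaStream store
--    (broker URIs and region are placeholders).
CREATE STORE kafka_store WITH (
  'type' = KAFKA,
  'access_region' = 'AWS us-east-1',
  'uris' = 'broker1.example.com:9092,broker2.example.com:9092'
);

-- 2. Declare a stream over an existing Kafka topic
--    (hypothetical pageviews schema).
CREATE STREAM pageviews (
  viewtime BIGINT,
  userid VARCHAR,
  pageid VARCHAR
) WITH (
  'topic' = 'pageviews',
  'value.format' = 'JSON'
);

-- 3. Register the Databricks workspace as a store; the credential
--    property names here are illustrative.
CREATE STORE databricks_store WITH (
  'type' = DATABRICKS,
  'uris' = 'https://<workspace>.cloud.databricks.com',
  'databricks.app_token' = '<token>',
  'databricks.warehouse_id' = '<warehouse-id>'
);

-- 4. Continuously filter the stream and land the results in a
--    Unity Catalog table backed by Delta Lake.
CREATE TABLE pageviews_filtered WITH (
  'store' = 'databricks_store',
  'databricks.catalog.name' = 'main',
  'databricks.schema.name' = 'streaming',
  'databricks.table.name' = 'pageviews_filtered'
) AS
SELECT viewtime, userid, pageid
FROM pageviews
WHERE pageid <> 'Page_0';
```

Once the CREATE TABLE ... AS SELECT statement is running, DeltaStream keeps appending filtered records to the Delta table, so batch queries in Databricks, like the aggregation shown earlier, always see the latest data.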