

DeltaStream Solution: Read from Apache Kafka and write to Databricks


With the DeltaStream and Databricks integration, users can process data from their streaming sources in DeltaStream and immediately write the results to their Delta Lake, where the data can be stored long-term. They can then run batch computations in Databricks on this data set, which is kept continuously up to date.
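On the Databricks side, querying the continuously updated Delta table is ordinary batch work. As a minimal sketch, assuming a Unity Catalog table named `main.streaming.pageviews` (a hypothetical example name), a notebook cell might look like this:

```python
# Minimal sketch of a Databricks notebook cell. `spark` is the
# SparkSession Databricks provides automatically in notebooks; the
# three-level name below is a hypothetical Unity Catalog table.
pageviews = spark.read.table("main.streaming.pageviews")

# An ordinary batch aggregation over data that DeltaStream keeps
# continuously up to date.
top_pages = (
    pageviews
    .groupBy("pageid")
    .count()
    .orderBy("count", ascending=False)
)
top_pages.show()
```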

 

In the demo video, Charles and Shawn use DeltaStream to connect to an Apache Kafka cluster, then filter and process the data before landing it in Databricks via Unity Catalog.
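A rough sketch of that pipeline in DeltaStream SQL might look like the following. The stream, topic, column, and store names here are hypothetical, and the exact connection parameters for a Databricks-backed table depend on your environment; consult the DeltaStream documentation for the precise syntax.

```sql
-- Hypothetical stream over an existing Kafka topic, assuming a Kafka
-- store has already been defined in DeltaStream.
CREATE STREAM pageviews (
  viewtime BIGINT,
  userid   VARCHAR,
  pageid   VARCHAR
) WITH ('topic' = 'pageviews', 'value.format' = 'json');

-- Filter and process the stream, landing the result in Databricks as a
-- Delta table governed by Unity Catalog. 'databricks_store' is a
-- placeholder for a Databricks store defined in your DeltaStream account.
CREATE TABLE pageviews_filtered WITH (
  'store' = 'databricks_store'
) AS
SELECT viewtime, userid, pageid
FROM pageviews
WHERE pageid <> 'home';
```

Once a query like this is running, DeltaStream continuously appends the processed records to the Delta table, so Databricks jobs can query the latest results at any time.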
