The Ingress keeps you moving at top speed on Confluent Cloud™. Read on for a quick rundown of recent product updates, development resources, events, and more to support your data-in-motion journey.
PRODUCT UPDATES
ksqlDB is now available on Private Link clusters
Stream processing with ksqlDB, which enables you to derive instant insights from your Confluent data streams with a simple, familiar SQL syntax, is now supported on Private Link networked clusters. Popular for its combination of security and simplicity of setup, Private Link networking allows one-way secure connections from your VPC/VNet to Confluent Cloud, with added protection against data exfiltration. Start working with ksqlDB on your Private Link cluster today.
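To give a feel for that SQL syntax, here's a minimal ksqlDB sketch of a persistent query; the `orders` topic, its schema, and the aggregation are illustrative assumptions, not tied to any particular cluster:

```sql
-- Register an existing Kafka topic as a ksqlDB stream
-- (topic name and schema are hypothetical).
CREATE STREAM orders (
  order_id VARCHAR KEY,
  customer_id VARCHAR,
  amount DOUBLE
) WITH (
  KAFKA_TOPIC = 'orders',
  VALUE_FORMAT = 'JSON'
);

-- Persistent query: a continuously maintained revenue total per customer.
CREATE TABLE revenue_by_customer AS
  SELECT customer_id, SUM(amount) AS total_revenue
  FROM orders
  GROUP BY customer_id
  EMIT CHANGES;
```

The queries themselves are unchanged on a Private Link cluster; the networking mode only changes how clients reach Confluent Cloud.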
Databricks Delta Lake Sink Connector for AWS (Preview)
Fuel your Delta Lake with real-time event streams using our latest fully managed connector. The Databricks Delta Lake Sink connector, now available in Preview for AWS clusters, polls data from Kafka and copies it to an Amazon S3 staging bucket before committing these records to a Databricks Delta Lake instance. Check out the documentation linked above for more information on setup.
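For a rough idea of what the setup involves, here is a hedged sketch expressed as a ksqlDB `CREATE SINK CONNECTOR` statement. The property names and values below are illustrative placeholders rather than the connector's exact configuration keys, so treat the documentation as the source of truth:

```sql
-- Sketch only: property names and values are illustrative placeholders.
CREATE SINK CONNECTOR delta_lake_sink WITH (
  'connector.class'     = 'DatabricksDeltaLakeSink',
  'topics'              = 'orders',
  'input.data.format'   = 'AVRO',
  -- S3 staging bucket the connector writes to before committing to Delta Lake
  'staging.bucket.name' = 'my-staging-bucket',
  -- Databricks workspace and access details (hypothetical values)
  'delta.lake.host'     = 'dbc-example.cloud.databricks.com',
  'delta.lake.token'    = '<personal-access-token>',
  'tasks.max'           = '1'
);
```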
Three new features for Confluent Cloud Connectors:
- Single message transforms (SMTs) are simple, lightweight modifications to message values, keys, and headers as messages flow through connectors. With a list of pre-built options, implemented and managed through either the Cloud Console or the CLI, SMTs are useful for inserting fields, masking information, routing events, and performing other minor data adjustments within the connector itself (see the sketch after this list).
- Connect log events are now available to customers on Standard and Dedicated clusters for self-service consumption and analysis. This feature increases the operational transparency of fully managed connectors by providing contextual logging information, giving users what they need to identify the root cause of connector errors themselves and resolve them quickly.
- Connect data previews (available today on select source connectors for Basic and Standard clusters) provide dry-run functionality that previews the connector's output using your actual connector configuration. If the preview matches your expected output, you can launch the connector with confidence; if it doesn't, you can adjust your configuration before launching.
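To make the SMT item above concrete, here is a minimal sketch of a transform chain in a connector configuration, written as a ksqlDB `CREATE SINK CONNECTOR` statement. The connector class, topic, and field names are hypothetical; the two transform classes are standard Kafka Connect built-ins:

```sql
CREATE SINK CONNECTOR masked_sink WITH (
  'connector.class' = 'S3_SINK',      -- hypothetical target connector
  'topics'          = 'customers',
  -- Chain two standard Kafka Connect SMTs, applied in order:
  'transforms' = 'maskPii,addSource',
  -- 1) Blank out a sensitive field before it leaves the connector
  'transforms.maskPii.type'   = 'org.apache.kafka.connect.transforms.MaskField$Value',
  'transforms.maskPii.fields' = 'ssn',
  -- 2) Insert a static field recording where each record came from
  'transforms.addSource.type'         = 'org.apache.kafka.connect.transforms.InsertField$Value',
  'transforms.addSource.static.field' = 'source_system',
  'transforms.addSource.static.value' = 'crm'
);
```

Each transform touches one message at a time inside the connector, which is what keeps SMTs lightweight compared to a full stream processing job.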
Learn more about these updates within our recent blog, “Introducing Single Message Transforms and New Connector Features on Confluent Cloud”.
Want more? Check out and subscribe to the Confluent Cloud release notes.
DEVELOPMENT RESOURCES
Demo: Accelerate Your Cloud Data Warehouse Migration and Modernization
Join this session to understand how Confluent helps teams connect hybrid and multi-cloud data to their cloud data warehouse of choice in real time. We'll review the benefits of modernization, including no-code data source integration, real-time data processing prior to writing to your data warehouse, and overall strategies for future-proofing your implementation. Modernizing your data warehouse doesn't need to be a multi-year lift-and-shift effort; join us and learn how.
Workshop series: Your Path to Production with Confluent Cloud
New to Confluent? This 5-part instructional series guides you through your first steps with Confluent Cloud. Each session is a technical demo led by a Solutions Engineer and walks you through the key milestones for successfully deploying your first use case with Confluent. Alongside a deep dive into the product, you'll learn about producers and consumers, source and sink connectors, stream processing, platform metrics, monitoring, and more.
Webinar: How ACERTUS Migrated from a Monolith to Microservices with ksqlDB
Register for our upcoming online talk with ACERTUS, whose team shifted from a synchronous, API-first mindset to an asynchronous approach oriented around event streaming. You'll hear from J3, ACERTUS's VP of Data, on how ksqlDB was used to build new stream processing features and functionality using just SQL statements, with no new application code required, and about his team's broader use of ksqlDB for streaming ETL, data warehouse ETL processing, and microservices projects.
More ways to continue learning:
- Intro to Event-Driven Microservices: Microservices Demo
- Build a Streaming ETL pipeline: MongoDB to Snowflake ETL Demo
- Constantly updated Cloud content: Confluent Blog and Confluent Podcast
Want more? Check out Confluent Developer to continue learning.
LEARNING EVENTS
Come visit Confluent at AWS re:Invent
We're back in person at this year's AWS re:Invent, taking place in Las Vegas later this month. We'll have a number of activities going on, including sponsored sessions where you'll hear more about Confluent on the Edge and Data Warehouse Modernization, daily booth happy hours, and brand-new custom product demos. Make sure to attend a session or stop by our booth (#220) to meet our experts and pick up some swag!