Event Streaming with Kafka

Confluent Kafka and Its Offerings

Demo: Getting Started with Confluent for Free

Now that you know what Confluent adds on top of Apache Kafka, let's walk through signing up for Confluent Cloud’s Free Tier and launching a managed Kafka cluster in minutes.


1. Sign Up for the Free Tier

  1. Open your browser and go to Confluent Cloud.
  2. Click Start for Free on the landing page.
  3. Ensure Cloud (default) is selected instead of Self-Managed.
  4. You’ll receive $400 in credits valid for 30 days—perfect for experimenting with Kafka.

Warning

Confluent will authorize $1 on your card to verify it but won’t charge you as long as you stay within $400 or the 30-day window.

  5. Log in with Google (recommended) or provide your email, full name, company, country, and credit-card details.
  6. After validation, you’ll land on your Confluent Cloud dashboard.

The image shows a webpage for Confluent Cloud, a cloud-native service for Apache Kafka, with options to start for free or contact the company. There is also a cookie consent banner at the bottom.


2. Create Your First Kafka Cluster

  1. In the dashboard sidebar, click Environments → Default environment.
  2. Click Create a Cluster.

The image shows a dashboard interface for Confluent, featuring options for data processing and cluster management, along with recommendations for multi-factor authentication and sample data production.

  3. Select the Basic plan and click Begin Configuration.

The image shows a webpage from Confluent Cloud displaying different cluster configuration options: Basic, Standard, Enterprise, Dedicated, and Freight, each with varying features and pricing.

  4. Choose your Cloud provider (AWS, GCP, or Azure) and Region.

Note

Selecting a region tells Confluent where to launch its managed nodes—it doesn’t provision resources in your own cloud account.

  5. Name your cluster (e.g., Kafka cluster) and click Launch Cluster.

The image shows a web interface for creating a cluster on Confluent Cloud, with options for naming the cluster, viewing costs, and configuring settings like provider and region. There are tabs for configuration, usage limits, and uptime SLA, and a button to review the payment method.

  6. Wait a few minutes for the cluster status to change to Running.

3. Overview and Producing Sample Data

On the cluster overview page, you can configure connectors, clients, and even generate sample events automatically:

The image shows a Confluent Cloud dashboard for a Kafka cluster, displaying an overview with options to set up connectors, clients, and produce sample data. The interface includes navigation options on the left and a section for throughput metrics.

Click Produce Sample Data to see predefined event templates, or continue below to create your own topic.


4. Create a Kafka Topic

  1. In the left menu, select Topics → Create topic.
  2. Enter your topic name (e.g., random-topic).
  3. Leave Partitions at the default (6) and click Create with defaults → Skip.

The image shows a Confluent Cloud interface for creating a new Kafka topic, with fields for topic name and partitions, and options for infinite retention and advanced settings.

Your topic is now ready to receive messages.
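Why does the partition count matter? When a record carries a key, Kafka's default partitioner hashes that key (with murmur2 in the real clients) and takes it modulo the partition count, so all records with the same key land on the same partition and keep their relative order. The sketch below illustrates the idea only; it uses Python's hashlib rather than Kafka's actual hash function.

```python
# Simplified sketch of keyed partitioning. Kafka's clients hash the record
# key with murmur2; here md5 stands in purely to illustrate the mechanism.
import hashlib

NUM_PARTITIONS = 6  # the default chosen when creating the topic above

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key deterministically to one of the topic's partitions."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always maps to the same partition, preserving per-key ordering.
assert partition_for("ZTEST") == partition_for("ZTEST")
print(partition_for("ZTEST"))
```

Records produced without a key are instead spread across partitions (round-robin or sticky batching, depending on the client).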


5. Launch Sample Data

  1. Go back to the Overview page.
  2. Under Produce Sample Data, click Get started.
  3. Select a template (e.g., Stock trade) and hit Launch.

The image shows a Confluent Cloud interface with a "Launch Sample Data" popup, allowing users to generate sample data for topics like orders, users, and stock trades. The background displays various connectors available for integration.

Confluent creates its own topic (e.g., sample_data_stock_trades) and provisions a connector to push events:

The image shows a Confluent Cloud interface displaying a "Connectors" page with a running connector named "sample_data" and options to connect with popular connectors like Amazon S3 Sink and MongoDB Atlas Sink.

Wait until the connector status is Running.
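Under the hood, the sample-data connector behaves like a small generator loop: it fabricates synthetic JSON events and produces them to the topic on your behalf. The sketch below mimics that behavior; the field names are illustrative, not the connector's exact schema.

```python
# Rough sketch of what the sample-data (Datagen-style) connector does:
# generate a synthetic stock-trade event as JSON. Field names are
# illustrative and do not match the connector's schema exactly.
import json
import random
import time

SYMBOLS = ["ZTEST", "ZVV", "ZXZZT"]  # placeholder ticker symbols

def make_stock_trade() -> dict:
    return {
        "side": random.choice(["BUY", "SELL"]),
        "symbol": random.choice(SYMBOLS),
        "quantity": random.randint(1, 500),
        "price": round(random.uniform(10, 1000), 2),
        "ts": int(time.time() * 1000),  # event timestamp in epoch ms
    }

event = make_stock_trade()
print(json.dumps(event))  # in the real connector this payload is produced to Kafka
```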


6. View the Produced Messages

  1. In Topics, click the sample-data topic (e.g., sample_data_stock_trades).
  2. Wait a few seconds for messages to populate.

The image shows a Confluent Cloud interface displaying a Kafka topic named "sample_data_stock_trades" with a bar chart and a table of stock trade messages, including details like timestamp, partition, offset, key, and value.

You’ll see a live table of messages with timestamp, partition, offset, key, and value fields.
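Each row in that table corresponds to the metadata every Kafka record carries. A minimal model of one row, with the value parsed out of its JSON payload (values here are made up for illustration):

```python
# Minimal model of one row from the message table: the standard metadata
# Kafka attaches to every consumed record, plus the serialized value.
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsumedRecord:
    timestamp: int       # producer/broker timestamp, epoch ms
    partition: int       # which of the topic's partitions held the record
    offset: int          # position within that partition (monotonic per partition)
    key: Optional[str]   # keyless records are spread across partitions
    value: str           # the serialized payload, JSON in this demo

row = ConsumedRecord(
    timestamp=1700000000000, partition=3, offset=42,
    key="ZTEST", value='{"side": "BUY", "quantity": 100}',
)
payload = json.loads(row.value)
print(payload["side"])
```

Note that offsets are only ordered within a single partition, which is why the table shows the partition alongside the offset.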


7. Set Up a Sink Connector

Stream your Kafka topic data to an external system:

  1. Click Connectors in the sidebar.
  2. Under Sink connectors, choose Amazon S3 Sink (or another sink).
  3. Select the topic(s) and click Continue.
  4. Enter your AWS credentials and S3 bucket details.
  5. Finish the setup to start streaming data to S3.

The image shows a web interface for adding an Amazon S3 Sink connector, focusing on selecting or creating an API key for Kafka credentials. It includes options for "My account," "Service account," and "Use an existing API key," with documentation on the right.

Note

In this demo, we skip the full AWS configuration—refer to the Confluent AWS Sink Connector guide for details.
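For orientation, the configuration the wizard assembles for a fully managed S3 sink looks roughly like the JSON below. Key names and values here are illustrative placeholders, not a working config; the Confluent AWS Sink Connector guide documents the real required fields.

```json
{
  "name": "S3SinkConnector_demo",
  "config": {
    "connector.class": "S3_SINK",
    "topics": "sample_data_stock_trades",
    "aws.access.key.id": "<your-access-key-id>",
    "aws.secret.access.key": "<your-secret-access-key>",
    "s3.bucket.name": "<your-bucket>",
    "output.data.format": "JSON",
    "tasks.max": "1"
  }
}
```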


8. Monitor Topic Metrics and Settings

  1. Go to Topics → your topic → Monitor to view production and consumption graphs.
  2. Open Settings to adjust partitions, retention, cleanup policy, and more.
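The options in the Settings tab map onto standard Kafka topic-level configs. Two you'll adjust most often (values below are examples, not the cluster's defaults):

```properties
# Topic-level configs surfaced by the Settings tab (standard Kafka names)
retention.ms=604800000   # keep records for 7 days; -1 means infinite retention
cleanup.policy=delete    # or "compact" to keep only the latest value per key
```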

The image shows a Confluent Cloud interface displaying a Kafka topic named "sample_data_stock_trades" with production and consumption metrics. The sidebar includes options like Cluster Overview, Networking, and Topics.

The image shows a Confluent Cloud interface displaying the configuration settings for a Kafka topic named "sample_data_stock_trades." It includes details like partitions, cleanup policy, and retention settings, with a Confluent AI Assistant chat window on the right.

Confluent’s AI Assistant can help troubleshoot configuration or performance issues in real time.


With these steps, you’ve successfully registered for Confluent Cloud’s free tier, launched a Kafka cluster, produced and viewed sample data, and configured a sink connector. Enjoy exploring Confluent’s advanced features and integrations!

