Learn how to build event-driven applications the right way with an Apache Kafka® cluster.
December 18 @ 9:00 am – December 20 @ 5:00 pm UTC+8
About the event
Join Our Apache Kafka® Application Development Event
Unlock the world of seamless application development for Apache Kafka® at our upcoming public event! Whether you’re an application developer, architect, or programming enthusiast, this event is your gateway to mastering Apache Kafka® integration.
Discover the art of application development, leveraging the prowess of Apache Kafka®:
- Language Inclusivity: While we use Java in our examples during the event, developers skilled in C# and Python can also thrive.
- Broaden Your Skills: Even if Java isn’t your primary language, you’ll find immense value and growth opportunities in the event.

Who should attend? We’re calling all professional app developers fluent in Java (preferred), C#, or Python who aspire to elevate their skills in Apache Kafka® integration. Solid experience in application development in your chosen language is recommended, and a working knowledge of Apache Kafka® architecture is an added advantage; you can gain it either through prior experience or by taking the “Confluent Fundamentals for Apache Kafka®” course.
By joining us, you’ll:
- Learn from Experts: Gain insights from seasoned professionals proficient in Apache Kafka® application development.
- Hands-On Skill Enhancement: Engage in practical exercises simulating real-world scenarios to deepen your understanding.
- Networking Opportunities: Connect with fellow developers and architects, fostering collaborations and knowledge exchange.
Join us and step into the world of proficient Apache Kafka® application development. Secure your spot now for an enlightening learning journey!
What you’ll learn
By the end of the event, you will be able to:
- Explain the value of a *Distributed Event Streaming Platform*
- Explain how the “log” abstraction enables a distributed event streaming platform
- Explain the basic concepts of:
  - Brokers, Topics, Partitions, and Segments
  - Records (a.k.a. Messages, Events)
  - Retention Policies
  - Producers, Consumers, and Serialization
  - Replication
  - Kafka Connect
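
To ground these concepts, here is a minimal sketch in Java (the event’s example language) that creates a topic with explicit partition, replication, and retention settings. The broker address, topic name, and values are assumptions for illustration only:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address; replace with your cluster's bootstrap servers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // A topic with 6 partitions, each replicated to 3 brokers.
            NewTopic topic = new NewTopic("orders", 6, (short) 3);
            // Retention policy: delete log segments older than 7 days.
            topic.configs(Map.of("retention.ms", String.valueOf(7L * 24 * 60 * 60 * 1000)));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```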
- Sketch the high-level architecture of a Kafka producer
- Illustrate key-based partitioning
- Explain the difference between `acks=0`, `acks=1`, and `acks=all`
- Configure `delivery.timeout.ms` to control retry behavior
- Create a custom `producer.properties` file
- Tune throughput and latency using batching
- Create a producer with Confluent REST Proxy
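
A minimal producer sketch tying several of these objectives together, assuming a local broker at `localhost:9092`; the topic, key, and tuning values are illustrative, not recommendations:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability: wait for all in-sync replicas to acknowledge each write.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Upper bound on a send including retries; the producer gives up after 2 minutes.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);
        // Throughput/latency tuning: wait up to 20 ms to fill 32 KB batches.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key-based partitioning: records with the same key ("customer-42")
            // hash to the same partition, preserving per-key ordering.
            producer.send(new ProducerRecord<>("orders", "customer-42", "{\"total\": 99.5}"));
        }
    }
}
```

The same settings could equally live in a custom `producer.properties` file loaded at startup; building them in code keeps the sketch self-contained.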
- Describe Kafka schemas and how they work
- Use the Confluent Schema Registry to guide schema evolution
- Write and read messages using schema-enabled Kafka
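
As a sketch of schema-enabled writes, the following assumes Confluent’s Avro serializer and a Schema Registry at `localhost:8081`; the schema, topic, and values are invented for illustration:

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class AvroProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumed address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The Avro serializer registers and looks up schemas in Schema Registry.
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://localhost:8081"); // assumed address

        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[" +
            "{\"name\":\"id\",\"type\":\"string\"}," +
            "{\"name\":\"total\",\"type\":\"double\"}]}");
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "o-1001");
        order.put("total", 99.5);

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders-avro", "o-1001", order));
        }
    }
}
```

When the schema later evolves, Schema Registry enforces its configured compatibility mode, which is what makes guided schema evolution possible.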
- Compare KStreams to KTables
- Create a custom `streams.properties` file
- Explain what co-partitioning is and why it is important
- Write an application using the Streams DSL (Domain-Specific Language)
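
A compact Streams DSL sketch covering several of these points, with assumed topic names; the input stream and the count table illustrate the KStream/KTable distinction:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;
import java.util.Arrays;
import java.util.Properties;

public class WordCountApp {
    public static void main(String[] args) {
        // The equivalent of a custom streams.properties file, built in code.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "word-count-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // A KStream is an unbounded sequence of events...
        KStream<String, String> lines = builder.stream("text-input"); // assumed topic
        // ...while a KTable is a changelog: here, the latest count per word.
        KTable<String, Long> counts = lines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
            .groupBy((key, word) -> word) // re-keying triggers a repartition
            .count();
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```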
- Explain the motivation for Kafka Connect
- List commonly used Connectors
- Explain the differences between standalone and distributed mode
- Configure and use Kafka Connect
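
For reference, a minimal connector configuration as it might be submitted to a distributed Connect worker’s REST API; the file path and topic name are illustrative, and `FileStreamSourceConnector` ships with Apache Kafka:

```json
{
  "name": "local-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "connect-demo"
  }
}
```

In standalone mode, the same key/value pairs would instead live in a local properties file passed to the worker on startup, which is the main practical difference between the two modes.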
- Use ksqlDB to filter and transform a stream
- Write a ksqlDB query that joins two streams or a stream and a table
- Write a ksqlDB query that aggregates values per key and time window
- Write Push and Pull queries and explain the differences between them
- Create a Connector with ksqlDB
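
Although the event’s application code is Java, the ksqlDB objectives are expressed in ksqlDB’s SQL dialect. A sketch with assumed stream, table, column, and connector names:

```sql
-- Filter and transform a stream (assumed stream 'orders').
CREATE STREAM big_orders AS
  SELECT id, total * 1.1 AS total_with_tax
  FROM orders
  WHERE total > 100
  EMIT CHANGES;

-- Join a stream to a table (assumed table 'customers').
CREATE STREAM enriched_orders AS
  SELECT o.id, c.name, o.total
  FROM orders o
  JOIN customers c ON o.customer_id = c.id
  EMIT CHANGES;

-- Aggregate per key over a 1-hour tumbling window.
CREATE TABLE orders_per_customer AS
  SELECT customer_id, COUNT(*) AS order_count
  FROM orders
  WINDOW TUMBLING (SIZE 1 HOUR)
  GROUP BY customer_id
  EMIT CHANGES;

-- Push query: streams new results as they arrive (EMIT CHANGES).
SELECT * FROM big_orders EMIT CHANGES;

-- Pull query: returns the current state for one key, then terminates.
SELECT * FROM orders_per_customer WHERE customer_id = 'customer-42';

-- Create a connector from ksqlDB (assumed JDBC source; properties are illustrative).
CREATE SOURCE CONNECTOR jdbc_source WITH (
  'connector.class' = 'io.confluent.connect.jdbc.JdbcSourceConnector',
  'connection.url'  = 'jdbc:postgresql://localhost:5432/shop',
  'topic.prefix'    = 'jdbc_',
  'mode'            = 'incrementing',
  'incrementing.column.name' = 'id'
);
```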
- List ways to avoid large message sizes
- Decide when to use ksqlDB vs. Kafka Streams vs. Kafka Connect SMTs
- Explain differences and tradeoffs between processing guarantees
- Address decisions that arise from key-based partitioning
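
The processing-guarantee tradeoffs above come down to a handful of client settings. This sketch only assembles the relevant configuration objects; the transactional ID is an illustrative value, and `EXACTLY_ONCE_V2` assumes Kafka Streams 3.0 or later:

```java
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;
import java.util.Properties;

public class ProcessingGuarantees {
    public static void main(String[] args) {
        // At-least-once: acknowledged writes are never lost, but retries may duplicate.
        Properties atLeastOnce = new Properties();
        atLeastOnce.put(ProducerConfig.ACKS_CONFIG, "all");

        // Exactly-once for a plain producer: idempotence plus transactions.
        Properties exactlyOnce = new Properties();
        exactlyOnce.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        exactlyOnce.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "order-processor-1");

        // In Kafka Streams, the same choice is a single setting.
        Properties streams = new Properties();
        streams.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
    }
}
```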
- Authenticate a client app with a secure Kafka cluster
- Explain what “fully-managed” means in the context of Confluent Cloud
- Authenticate a Kafka client to Confluent Cloud
- Perform basic operations with the `ccloud` CLI
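
Authenticating a Java client to Confluent Cloud typically uses SASL/PLAIN with an API key and secret. A configuration sketch with placeholder endpoint and credentials:

```java
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import java.util.Properties;

public class CloudClientConfig {
    public static Properties cloudProps() {
        Properties props = new Properties();
        // Placeholder endpoint and credentials; substitute your cluster's values.
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "<broker>.confluent.cloud:9092");
        // TLS-encrypted connection authenticated with SASL/PLAIN.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        return props;
    }
}
```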