Introduction of ZooKeeper’s successor
Apache Kafka 3.3 heralds the end of ZooKeeper for internal metadata management: it is replaced by Kafka Raft (KRaft), a consensus layer built into Kafka itself. KRaft was first introduced with Kafka 2.8 as the successor to ZooKeeper, the open-source coordination service that has so far handled distributed configuration, synchronization, and naming registries for Kafka. However, KRaft had remained in Early Access since its release in spring 2021 and was not intended for production use – until now! With this week’s release of Kafka 3.3, KRaft has been granted «Production-Ready» status. This means that KRaft can now be used for new Kafka clusters.
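For greenfield setups, running Kafka in KRaft mode essentially comes down to a configuration file and a one-time formatting of the metadata log. The following is a minimal sketch of a single-node cluster that acts as both broker and controller; the node ID, ports, and paths are illustrative and should be adapted to your environment.

```properties
# config/kraft/server.properties – combined broker and controller (single-node sketch)
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
advertised.listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kraft-combined-logs
```

Before the first start, the storage directory is formatted with a cluster ID; after that, the broker is started as usual:

```shell
# Generate a cluster ID, format the metadata log directory once, then start the node
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties
bin/kafka-server-start.sh config/kraft/server.properties
```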
Reasons for switching to KRaft
There are several reasons why the Apache Kafka project has opted for a self-managed metadata quorum. First, the move to KRaft removes the dependency on ZooKeeper and thus reduces the overall complexity of the Kafka architecture. The elimination of ZooKeeper also simplifies both administration and operation of Kafka clusters, as no separate component is required to host the controller. Other benefits of KRaft include faster failover in the event of controller downtime and a massive improvement in partition scalability. For this reason in particular, it is worthwhile for Kafka administrators to switch from ZooKeeper to KRaft.
Is KRaft backwards compatible?
But what can those without the luxury of a greenfield approach do? At present, existing Kafka clusters managed with ZooKeeper cannot yet be migrated to KRaft. Such functionality will arrive with the next release, planned for the end of 2022 at the earliest: experimental support for migrating from ZooKeeper to KRaft is slated for Kafka 3.4. However, migrating production clusters to KRaft will not be possible until the bridge release 3.5. But don’t worry: the transition to KRaft has only just begun. Although ZooKeeper is expected to be deprecated as of Kafka 3.5, it will probably not give way to its successor until Kafka 4.0, leaving enough time for a clean migration to KRaft.
