Kafka became popular because of its performance and ability to scale. It’s often used by companies that have adopted event-driven architectures and that are building real-time applications — ones that are time-sensitive and deal with a continuous stream of data.
What is it?
Kafka is an open-source streaming platform frequently used by companies looking to develop innovative digital services that rely on large, real-time data sets.
It took a novel approach to message brokering and is well suited to storing and processing both historical and real-time data.
As enterprises look to develop innovative digital services, they increasingly want access to a wide variety of real-time data. Kafka provides a platform to integrate that data: it can reliably cope with vast volumes of data and is fast and resilient.
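The storage model behind those properties is a partitioned, append-only log that consumers read by offset: records are written once, in order, and any consumer can read from any position. The sketch below is an illustrative in-memory model only, not Kafka's actual implementation; the names (`Topic`, `append`, `read_from`) are invented for this example.

```python
# Minimal in-memory sketch of Kafka's core abstraction: a topic made of
# partitioned, append-only logs that consumers read by offset.
# Illustrative model only, not Kafka's real implementation.

class Topic:
    def __init__(self, num_partitions=2):
        # Each partition is an ordered, immutable sequence of records.
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key, value):
        # Records with the same key land in the same partition,
        # which preserves per-key ordering.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p, len(self.partitions[p]) - 1  # (partition, offset)

    def read_from(self, partition, offset):
        # Consumers track their own offset, so the same data can be
        # read in real time or replayed later from the beginning.
        return self.partitions[partition][offset:]

topic = Topic()
p, _ = topic.append("user-1", "clicked")
topic.append("user-1", "purchased")

# Both records share a key, so they sit in the same partition, in order:
print(topic.read_from(p, 0))
# → [('user-1', 'clicked'), ('user-1', 'purchased')]
```

Because reads never remove data, replaying history is just reading from offset 0 again, which is what makes the same log useful for both real-time and historical processing.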
What’s in it for you?
Kafka can play an important role in the development of new digital services and products that take advantage of web-scale volumes of real-time data.
Its ability to scale, its speed and its reliability are all important qualities in a message streaming platform.
It’s also flexible, allowing you to build different kinds of systems on top of its infrastructure, from messaging to event-driven architectures to real-time streaming apps. The ecosystem around Kafka is mature, and it runs on any cloud provider, or even on-premises if needed.
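One reason such different systems can share the same Kafka deployment is that each consumer tracks its own offset into the log, so a real-time service and a late-starting batch job read the same data independently. A minimal sketch of that idea (the names `log`, `Consumer`, and `poll` are invented here; this is not Kafka's API):

```python
# Two independent consumers over one shared log: the pattern that lets
# messaging, event-driven services, and replay-based analytics all run on
# the same infrastructure. Illustrative only; names are invented.

log = ["order-created", "order-paid", "order-shipped"]

class Consumer:
    def __init__(self, log):
        self.log = log
        self.offset = 0  # each consumer tracks its own position in the log

    def poll(self):
        # Return everything since the last poll, then advance the offset.
        records = self.log[self.offset:]
        self.offset = len(self.log)
        return records

realtime = Consumer(log)   # e.g. a notification service, reading as it goes
analytics = Consumer(log)  # e.g. a batch job that starts later

realtime.poll()                # consumes everything published so far
log.append("order-delivered")  # a new event arrives
print(realtime.poll())         # → ['order-delivered']
print(analytics.poll())        # the late starter replays the full history
```

Because consuming does not delete anything, adding a new downstream system is just attaching another consumer at offset 0.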
What are the trade-offs?
Adopting Kafka may require a shift in mindset for staff used to working with other message brokers. And not every product or service you’re working on will need vast quantities of real-time data; more suitable tools may exist for those projects.
Kafka requires non-trivial infrastructure to operate, and the cost of building and running it may not be justified when a simpler solution would do. Because learning to operate Kafka is complex, ‘Kafka-as-a-service’ providers are starting to emerge.