In our previous article, we talked about synchronous and asynchronous communication between microservices. It's worth remembering that asynchronous communication provides non-blocking behavior, decoupling, and resiliency.
Event streaming in microservices is a powerful approach to communication and data processing based on a continuous flow of events. Instead of relying on traditional request-response mechanisms, services publish and subscribe to events, enabling loose coupling and real-time data processing.
Here's how it works.
- Events as the data unit: Events are self-contained pieces of information representing significant occurrences within your system. They can be anything from a user logging in to a new order being placed to a sensor reading exceeding a threshold.
- Event streams as the communication channel: Events are published to a central stream, like a Kafka topic or RabbitMQ queue. This stream acts as a central nervous system, carrying information across all microservices interested in the event.
- Services as event producers and consumers: Services can both publish events when something significant happens and subscribe to relevant events from other services. This allows for flexible, decoupled communication, where services don't need direct knowledge of each other.
- Real-time data processing: Event streams enable real-time data analysis and reaction. As events flow through the stream, services can continuously process them and trigger actions, like sending notifications, updating dashboards, or initiating workflows.
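To make the publish/subscribe idea above concrete, here is a minimal in-memory sketch in Python. The `EventStream` class and the topic names are hypothetical illustrations, not a real broker client:

```python
from collections import defaultdict

class EventStream:
    """Toy in-memory stream: services subscribe handlers to a topic,
    and every published event fans out to all subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

stream = EventStream()
received = []

# A notification service subscribes to order events...
stream.subscribe("orders", received.append)

# ...while the order service publishes without knowing who is listening.
stream.publish("orders", {"type": "OrderPlaced", "order_id": 42})
```

Notice that the publisher never references the subscriber directly; that is exactly the loose coupling described above.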
How about explaining things more easily?
Long story short, event streaming:
- is a common and powerful approach to asynchronous communication in microservice architectures;
- is a core element of event-driven architecture (EDA).
- Event: essentially a message carrying information about something that happened in the past, signaling to other parts of the system that they may need to take action.
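As a quick illustration, a hypothetical event could look like the following (the field names are my own convention, not a standard): a past-tense type, a timestamp, and a payload, serialized so it can travel between services.

```python
import json
from datetime import datetime, timezone

# A hypothetical "CustomerCreated" event: note the past-tense name --
# it describes something that has already happened.
event = {
    "type": "CustomerCreated",
    "occurred_at": datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat(),
    "payload": {"customer_id": "c-123", "email": "jane@example.com"},
}

# Events are self-contained, so they can be serialized and sent anywhere.
message = json.dumps(event)
```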
Ready to see how things work in practice?
Say we have two microservices: a Customer microservice and an OLF microservice.
The Customer service is responsible for all customer-related operations, such as creating a customer, updating its attributes, and so on.
OLF, on the other hand, is responsible for collecting changes from different services and providing them to real-time applications for statistics purposes.
When something happens to a customer, the Customer service publishes the change as an event to the broker. But what is a broker?
- A broker is, by its nature, a server
- It is a mediator/middleware between services
- It acts as a bridge between services
- It is message storage, i.e., a kind of "database"
- It is an isolation/decoupling point for the system
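The "message storage" role deserves a closer look. Here is a toy sketch (the class and method names are my own, not a real broker API): the broker keeps an append-only log per topic, so consumers can read stored events later, at their own pace.

```python
from collections import defaultdict

class Broker:
    """Toy broker: one append-only log per topic."""

    def __init__(self):
        self._topics = defaultdict(list)

    def append(self, topic, event):
        """Store the event and return its position (offset) in the log."""
        self._topics[topic].append(event)
        return len(self._topics[topic]) - 1

    def read(self, topic, offset=0):
        """Return every stored event from the given offset onward."""
        return self._topics[topic][offset:]

broker = Broker()
first = broker.append("customer-changes", {"type": "CustomerCreated", "id": 1})
second = broker.append("customer-changes", {"type": "CustomerUpdated", "id": 1})
```

Because the log persists events, a consumer that was down while the events arrived can still read them afterward, which is what makes the broker an isolation point.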
In broker-based systems, the service that sends events to the broker is called the Producer.
The broker stores the event in a Topic/Partition (for example, in Apache Kafka).
Consumers, on the other hand, listen for changes and take events from the broker to process. As you can see, events flow from one place to another. This flow of events through such a pipeline is called event streaming!
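Putting the pieces together, the Customer → broker → OLF pipeline can be sketched like this (again an in-memory stand-in with hypothetical names, not real Kafka code): the producer appends change events, and the consumer tracks its own offset so it can pick up where it left off.

```python
from collections import defaultdict

class Broker:
    def __init__(self):
        self._log = defaultdict(list)

    def produce(self, topic, event):
        self._log[topic].append(event)

    def consume(self, topic, offset):
        """Return all events stored at or after the given offset."""
        return self._log[topic][offset:]

broker = Broker()

# The Customer service produces change events...
broker.produce("customer-changes", {"type": "CustomerUpdated", "id": 1})
broker.produce("customer-changes", {"type": "CustomerUpdated", "id": 2})

# ...and the OLF service consumes from its last processed offset.
olf_offset = 0
batch = broker.consume("customer-changes", olf_offset)
olf_offset += len(batch)  # "commit" progress, Kafka-style
```

Tracking the offset on the consumer side is the design choice that lets each consumer process the stream at its own pace, independently of the producer.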
There are a lot of good event streaming platforms out there, and my favorite one is Apache Kafka.
If you want to learn more about how Apache Kafka works under the hood, I have several articles you can read in the following order:
Benefits of event streaming for async microservice communication
- Decoupling: Services publish events without knowing who will consume them, promoting loose coupling and improved maintainability.
- Scalability and resilience: Event streams can handle high-volume data efficiently and provide redundancy, making the overall architecture more resilient to failures.
- Flexibility: Different services can subscribe to the same events, enabling diverse integrations and workflows.
- Real-time data processing: Events can trigger immediate actions in other services, facilitating real-time responsiveness.
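The flexibility point above can be demonstrated with a toy pub/sub stream (all names here are illustrative): two unrelated services subscribe to the same events, and each reacts in its own way without the producer knowing either one exists.

```python
from collections import defaultdict

class EventStream:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

stream = EventStream()
dashboard = {"orders": 0}
notifications = []

# A statistics service updates a dashboard counter...
stream.subscribe("orders", lambda e: dashboard.update(orders=dashboard["orders"] + 1))
# ...while a notification service sends out messages -- same events, different workflows.
stream.subscribe("orders", lambda e: notifications.append(f"Order {e['order_id']} placed"))

stream.publish("orders", {"type": "OrderPlaced", "order_id": 7})
```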