- An event-driven architecture consists of event producers that generate a stream of events and event consumers that listen for those events.
- Events are delivered in near real time, so consumers can respond immediately to events as they occur.
- Producers are decoupled from consumers.
- A producer doesn't know which consumers are listening.
- Consumers are also decoupled from each other, and every consumer sees all of the events.
- This differs from a Competing Consumers pattern, where consumers pull messages from a queue and a message is processed just once (assuming no errors).
- In some systems, such as IoT, events must be ingested at very high volumes.
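
To make that fan-out behavior concrete, here is a minimal, illustrative sketch in plain Python (no real broker): an event bus delivers every event to all consumers, while a competing-consumers queue hands each message to exactly one worker.

```python
from collections import deque

class EventBus:
    """Fan-out: every registered consumer sees every event."""
    def __init__(self):
        self.consumers = []

    def publish(self, event):
        for consume in self.consumers:
            consume(event)

class WorkQueue:
    """Competing consumers: each message is taken by exactly one worker."""
    def __init__(self):
        self.messages = deque()

    def send(self, message):
        self.messages.append(message)

    def receive(self):
        return self.messages.popleft() if self.messages else None

bus = EventBus()
bus.consumers.append(lambda e: print("billing saw:", e))
bus.consumers.append(lambda e: print("shipping saw:", e))
bus.publish("OrderPlaced")  # both consumers receive the event

queue = WorkQueue()
queue.send("OrderPlaced")
queue.receive()  # a second receive() would get nothing: the message is gone
```
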
- An event-driven architecture can use a pub/sub model or an event stream model.
- Pub/Sub: The messaging infrastructure keeps track of subscriptions.
- When an event is published, the messaging infrastructure sends the event to each subscriber.
- After an event is received, it cannot be replayed, and new subscribers do not see the event.
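
As a sketch of the pub/sub model using the azure-servicebus Python SDK (the connection string, topic name, and subscription name below are placeholders):

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder

# Producer: publish an event to a topic; the infrastructure fans it
# out to every subscription on that topic.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_topic_sender(topic_name="device-events") as sender:
        sender.send_messages(ServiceBusMessage('{"deviceId": 42, "temp": 21.5}'))

# Consumer: each subscriber reads from its own subscription.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_subscription_receiver(
        topic_name="device-events", subscription_name="alerts")
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)  # settle; the event is not replayable
```
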
- Event Streaming: Events are written to a log.
- Events are strictly ordered (within a partition) and durable.
- Clients don't subscribe to the stream; instead, a client can read from any part of the stream.
- The client is responsible for advancing its position in the stream.
- That means a client can join at any time, and can replay events.
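
A minimal, broker-free sketch of that contract: the log only appends, and each client keeps and advances its own offset, so joining late or replaying from the start is just a matter of where the client starts reading.

```python
class EventLog:
    """Append-only log; readers track their own positions (offsets)."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def read_from(self, offset):
        return self.events[offset:]

log = EventLog()
for e in ["created", "updated", "deleted"]:
    log.append(e)

cursor = 0
batch = log.read_from(cursor)  # read everything currently available
cursor += len(batch)           # the client advances its own position

# A late joiner, or the same client replaying, simply reads from offset 0.
replayed = log.read_from(0)
```
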
- On the consumer side, there are some common variations:
- Simple event processing: An event immediately triggers an action in the consumer.
- For example, you could use Azure Functions with a Service Bus trigger, so that a function executes whenever a message is published to a Service Bus topic.
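
A sketch of that trigger using the Azure Functions Python v2 programming model; the topic, subscription, and connection setting names are assumptions for illustration:

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# Executes once for each message published to the Service Bus topic.
@app.service_bus_topic_trigger(
    arg_name="msg",
    topic_name="device-events",           # placeholder topic
    subscription_name="simple-processor", # placeholder subscription
    connection="ServiceBusConnection",    # app setting with the connection string
)
def handle_event(msg: func.ServiceBusMessage):
    # Simple event processing: act immediately on the single event.
    logging.info("Received: %s", msg.get_body().decode("utf-8"))
```
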
- Complex event processing:
- A consumer processes a series of events, looking for patterns in the event data.
- You can use technologies such as Azure Stream Analytics or Apache Storm.
- For example, you could aggregate readings from an embedded device over a time window, and generate a notification if the moving average crosses a certain threshold.
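
As a plain-Python sketch of that moving-average example (rather than an actual Stream Analytics job; the window size and threshold are made up):

```python
from collections import deque

WINDOW = 10        # readings per sliding window (assumption)
THRESHOLD = 75.0   # alert threshold (assumption)

window = deque(maxlen=WINDOW)

def on_reading(value: float):
    """Aggregate readings over a sliding window; alert on the moving average."""
    window.append(value)
    avg = sum(window) / len(window)
    if len(window) == WINDOW and avg > THRESHOLD:
        print(f"ALERT: moving average {avg:.1f} crossed {THRESHOLD}")

for reading in [70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90]:
    on_reading(reading)
```
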
- Event stream processing:
- Use a data streaming platform, such as Azure IoT Hub or Apache Kafka, as a pipeline to ingest events and feed them to stream processors.
- The stream processors process or transform the stream.
- There may be multiple stream processors for different subsystems of the application.
- This approach is a good fit for IoT workloads.
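
A sketch of one such stream processor using the kafka-python client; the topic names and the transformation are illustrative:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# One processor in the pipeline: read raw telemetry, transform, re-publish.
consumer = KafkaConsumer(
    "raw-telemetry",
    bootstrap_servers="localhost:9092",
    group_id="enrichment-processor",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    event = record.value
    event["temp_f"] = event["temp_c"] * 9 / 5 + 32  # example transformation
    producer.send("enriched-telemetry", value=event)
```
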
Use Cases
- Multiple subsystems must process the same events.
- Real-time processing with minimum time lag.
- Complex event processing, such as pattern matching or aggregation over time windows.
- High volume and high velocity of data, such as IoT.
Benefits
- Producers and consumers are decoupled.
- No point-to-point integrations.
- It's easy to add new consumers to the system.
- Consumers can respond to events immediately as they arrive.
- Highly scalable and distributed.
- Subsystems have independent views of the event stream.
Challenges
- Guaranteed delivery.
- In some systems, especially in IoT scenarios, it's crucial to guarantee that events are delivered.
- Processing events in order or exactly once.
- Each consumer type typically runs in multiple instances, for resiliency and scalability.
- This can create a challenge if the events must be processed in order (within a consumer type), or if the processing logic is not idempotent.
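
A common mitigation for the idempotency part is to deduplicate on an event ID, so a redelivered event is harmless. A minimal sketch (the event shape and in-memory store are assumptions; a real system would use a durable store):

```python
processed_ids = set()  # in production, a durable store such as a database table

def apply_business_logic(event: dict):
    print("processing", event["id"])

def handle(event: dict):
    """Idempotent handler: a redelivered event is detected and skipped."""
    event_id = event["id"]
    if event_id in processed_ids:
        return  # duplicate delivery; safe to ignore
    apply_business_logic(event)
    processed_ids.add(event_id)

handle({"id": "evt-1"})
handle({"id": "evt-1"})  # redelivered; processed only once
```
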