
Event Streaming

Redis Streams provide a middle ground between simple pub/sub and full message brokers. This guide helps you understand when to use them.

What Redis Streams Provide

| Feature | Redis Streams | Redis Pub/Sub |
| --- | --- | --- |
| Persistence | Yes | No |
| Consumer groups | Yes | No |
| Message replay | Yes | No |
| Acknowledgment | Yes | No |
| Ordering | Per-stream | Per-channel |
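
To make the contrast concrete, here is a minimal sketch (assuming an ioredis client; the `orders` stream name is illustrative) showing a fire-and-forget publish next to a persisted, replayable stream entry:

```typescript
import Redis from 'ioredis';

const redis = new Redis();

// Pub/Sub: fire-and-forget. Subscribers that are offline miss the message.
await redis.publish('orders', JSON.stringify({ orderId: 42 }));

// Streams: the entry is persisted and assigned an ID, so it can be read
// later, replayed, and acknowledged by consumer groups.
const id = await redis.xadd('orders', '*', 'orderId', '42');

// Replay everything still held in the stream.
const entries = await redis.xrange('orders', '-', '+');
console.log(id, entries);
```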

When to Use Redis Streams

Good fit:

  • Background job processing
  • Event sourcing (small scale)
  • Audit logs
  • Real-time notifications with durability
  • Task distribution across workers

Not ideal for:

  • High-throughput event streaming (>100k msg/sec)
  • Complex routing patterns
  • Multi-datacenter replication
  • Long-term message retention (>days)

Comparison with Alternatives

vs BullMQ

| Aspect | Redis Streams | BullMQ |
| --- | --- | --- |
| Delayed jobs | Manual implementation | Built-in |
| Job priority | Manual | Built-in |
| Rate limiting | Manual | Built-in |
| Dependencies | Core Redis | Redis + BullMQ |
| Complexity | Lower | Higher |

Use BullMQ when: You need delays, priorities, or complex job scheduling.

Use Streams when: A simple queue with consumer groups is enough.

vs Kafka

| Aspect | Redis Streams | Kafka |
| --- | --- | --- |
| Throughput | ~100k msg/sec | Millions of msg/sec |
| Retention | Limited by memory | Disk-based, unlimited |
| Partitioning | Manual (multiple streams) | Built-in |
| Operations | Simple | Complex |
| Use case | Application-level | Infrastructure-level |

Use Kafka when: High throughput, long retention, or complex streaming pipelines.

Use Streams when: Simpler needs, already using Redis.

vs RabbitMQ

| Aspect | Redis Streams | RabbitMQ |
| --- | --- | --- |
| Routing | Simple (stream per topic) | Complex (exchanges, bindings) |
| Protocol | Redis protocol | AMQP |
| Ordering | Per-stream | Per-queue |
| Operations | Simple | Medium |

Use RabbitMQ when: Complex routing, multiple consumers per message.

Use Streams when: Simple fan-out or work distribution.

Delivery Semantics

Redis Streams provide at-least-once delivery:

At-Least-Once Semantics

Messages may be delivered more than once, for example when a consumer crashes after processing an entry but before acknowledging it. Make consumers idempotent.
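
A common way to achieve idempotency is to record each processed entry ID before running the handler. Here is a minimal sketch assuming an ioredis client; `handleOrder` and the `processed:` key prefix are illustrative, not part of the plugin:

```typescript
import Redis from 'ioredis';

const redis = new Redis();

// Idempotent handler: skip entries whose ID we have already processed.
async function processOnce(messageId: string, fields: string[]) {
  // SET NX returns null if the key already exists, i.e. a duplicate delivery.
  const firstSeen = await redis.set(`processed:${messageId}`, '1', 'EX', 86400, 'NX');
  if (firstSeen === null) {
    return; // duplicate, already handled
  }
  await handleOrder(fields); // hypothetical business logic
}

async function handleOrder(fields: string[]) {
  console.log('processing', fields);
}
```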

Consumer Groups

Consumer groups enable parallel processing:

Stream: orders
Group: order-processors

Consumer 1: processes message 1, 4, 7...
Consumer 2: processes message 2, 5, 8...
Consumer 3: processes message 3, 6, 9...

Each message is delivered to exactly one consumer within a group; separate groups each receive their own copy of the stream.
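
In code, a worker creates the group once and then loops on XREADGROUP/XACK. A minimal sketch, assuming an ioredis client and the stream/group names from the diagram above (`consumer-1` is an arbitrary consumer name):

```typescript
import Redis from 'ioredis';

const redis = new Redis();

// Create the group once; MKSTREAM creates the stream if it does not exist yet.
// A BUSYGROUP error on re-creation is safe to ignore.
try {
  await redis.xgroup('CREATE', 'orders', 'order-processors', '$', 'MKSTREAM');
} catch (err) {
  if (!String(err).includes('BUSYGROUP')) throw err;
}

// Each worker reads with its own consumer name; Redis hands every entry
// to exactly one consumer in the group.
const replies: any = await redis.xreadgroup(
  'GROUP', 'order-processors', 'consumer-1',
  'COUNT', 10, 'BLOCK', 5000,
  'STREAMS', 'orders', '>'
);

if (replies) {
  for (const [, entries] of replies) {
    for (const [id, fields] of entries) {
      console.log('got', id, fields);
      // Acknowledge so the entry leaves the pending entries list.
      await redis.xack('orders', 'order-processors', id);
    }
  }
}
```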

Failure Handling

Dead Letter Queue Pattern

After N failed deliveries, move the message to a dead letter stream:

```typescript
new StreamsPlugin({
  consumer: {
    maxRetries: 3,
  },
  dlq: {
    enabled: true,
    streamSuffix: ':dlq',
  },
})
```
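
Under the hood, this kind of DLQ relies on the pending entries list. The following is a rough sketch of the idea, not the plugin's actual implementation; the stream and group names, the 60-second idle window, and the `:dlq` suffix are assumptions:

```typescript
import Redis from 'ioredis';

const redis = new Redis();
const STREAM = 'orders';
const GROUP = 'order-processors';
const MAX_RETRIES = 3;

// Inspect pending entries and move repeatedly-failed ones to a DLQ stream.
async function sweepToDlq() {
  // XPENDING with IDLE (Redis 6.2+): entries unacknowledged for more than 60s.
  // Each reply element is [id, consumer, idle-time, delivery-count].
  const pending: any = await redis.xpending(STREAM, GROUP, 'IDLE', 60000, '-', '+', 100);
  for (const [id, , , deliveryCount] of pending) {
    if (deliveryCount >= MAX_RETRIES) {
      const entries: any = await redis.xrange(STREAM, id, id);
      if (entries.length > 0) {
        // Copy the original field/value pairs into the DLQ stream.
        await redis.xadd(`${STREAM}:dlq`, '*', ...entries[0][1]);
      }
      // Acknowledge the poisoned entry so it is not redelivered.
      await redis.xack(STREAM, GROUP, id);
    }
  }
}
```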

Memory Considerations

Streams are memory-bound. Configure limits:

```typescript
// Limit by count: keep roughly the newest 10,000 entries ('~' allows
// approximate, cheaper trimming).
await redis.xtrim('orders', 'MAXLEN', '~', 10000);

// Limit by age (Redis 6.2+): stream IDs begin with a millisecond timestamp,
// so an ID built from "now minus 24 hours" drops everything older.
const minId = `${Date.now() - 24 * 60 * 60 * 1000}-0`;
await redis.xtrim('orders', 'MINID', '~', minId);
```

| Strategy | Use When |
| --- | --- |
| MAXLEN | Fixed memory budget |
| MINID | Time-based retention |
| No trimming | Audit logs (with monitoring) |
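
Trimming can also be applied inline at write time via XADD's MAXLEN option, so the cap is enforced without a separate maintenance call. A short sketch using the same `orders` stream:

```typescript
import Redis from 'ioredis';
const redis = new Redis();

// Trim as part of each write: keep roughly the newest 10,000 entries.
await redis.xadd('orders', 'MAXLEN', '~', 10000, '*', 'orderId', '42');
```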

Ordering Guarantees

| Scenario | Ordering |
| --- | --- |
| Single producer, single consumer | Guaranteed |
| Single producer, consumer group | In order per consumer; no global processing order across consumers |
| Multiple producers | Arrival order (not send order) |

For strict ordering across producers, use a single stream with sequence numbers.
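
One way to do this is to let every producer draw a number from a shared counter (INCR is atomic across clients) and attach it to each entry so consumers can detect gaps or reorder. A sketch, where the `orders:seq` key is an assumption:

```typescript
import Redis from 'ioredis';
const redis = new Redis();

// INCR is atomic, so concurrent producers get distinct, monotonically
// increasing sequence numbers to embed in each entry.
const seq = await redis.incr('orders:seq');
await redis.xadd('orders', '*', 'seq', String(seq), 'orderId', '42');
```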

