Redis


Last updated 3 years ago


Before reading about queuing webhook processing using Redis, you may wish to read the documentation regarding horizontal scaling.

When combining queuing and horizontal scalability, it is highly recommended to use a third-party driver like Redis. Redis helps ensure that once a webhook is triggered, it will be completely processed, because the message to send the webhook remains in memory within Redis. Even if the soketi server goes down, the webhook will still be sent.
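As a rough sketch, enabling the Redis-backed queue comes down to environment variables passed to the server. The exact variable names here (`QUEUE_DRIVER`, `DB_REDIS_HOST`, `DB_REDIS_PORT`) are assumptions based on soketi's usual configuration style; verify them against the server configuration reference for your version.

```shell
# Hypothetical example: start soketi with the Redis queue driver enabled.
# QUEUE_DRIVER and DB_REDIS_* names are assumptions; check the
# configuration reference for your soketi version before relying on them.
QUEUE_DRIVER=redis \
DB_REDIS_HOST=127.0.0.1 \
DB_REDIS_PORT=6379 \
soketi start
```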

Each webhook message is processed by a worker, and each worker can spawn multiple queue listeners. In soketi's case, each worker represents one of the events listed in the app webhooks documentation. This way, soketi ensures that webhooks are processed quickly and efficiently. This behavior may change in the future; for example, per-app queues might eventually be needed to ensure high-performance message processing in all situations.

To decouple the queue processors from the active WS/HTTP server, consider setting MODE=worker and running a separate fleet for your workers.
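A split deployment might look like the following. `MODE=worker` is taken from the text above; everything else (the default mode serving traffic, the `QUEUE_DRIVER` variable) is an assumption to illustrate the idea, not a verified command line.

```shell
# Hypothetical split deployment: one fleet serves WS/HTTP traffic,
# a second fleet only drains the Redis-backed webhook queue.

# Public-facing fleet: serves connections in the default mode.
QUEUE_DRIVER=redis soketi start

# Worker fleet: MODE=worker (from the docs above) only processes queued jobs.
MODE=worker QUEUE_DRIVER=redis soketi start
```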

In case you want to scale your queue workers with Prometheus, the best solution is to use bull_exporter.

Redis Cluster mode may be broken in some cases. Read more about BullMQ Redis Cluster configurations before enabling it.

Environment Variables

| Name | Default | Possible values | Description |
| --- | --- | --- | --- |
| `QUEUE_REDIS_CONCURRENCY` | `1` | Any integer | The number of webhook messages that can be processed in parallel for each event. |
| `QUEUE_REDIS_CLUSTER_MODE` | `false` | `false`, `true` | Whether the client should be initialized for Redis Cluster. You have to specify the `DB_REDIS_CLUSTER_NODES` value. |
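Putting the cluster-related variables together, a configuration sketch might look like this. `QUEUE_REDIS_CLUSTER_MODE` and `QUEUE_REDIS_CONCURRENCY` come from the table above, and `DB_REDIS_CLUSTER_NODES` is required per the note; the comma-separated `host:port` format shown for the node list is an assumption to be checked against the Redis configuration docs.

```shell
# Hypothetical Redis Cluster configuration for the queue driver.
# The host:port list format for DB_REDIS_CLUSTER_NODES is an assumption;
# consult the Redis configuration page for the exact syntax.
QUEUE_DRIVER=redis \
QUEUE_REDIS_CLUSTER_MODE=true \
QUEUE_REDIS_CONCURRENCY=4 \
DB_REDIS_CLUSTER_NODES=10.0.0.1:6379,10.0.0.2:6379,10.0.0.3:6379 \
soketi start
```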
