
# Tuning

Configuration profiles optimized for different workloads.

## Baseline Profiles

### Default Profile (Balanced)

For general-purpose applications:

```typescript
RedisModule.forRoot({
  clients: {
    host: process.env.REDIS_HOST,
    port: 6379,
  },
  plugins: [
    new CachePlugin({
      l1: { maxSize: 1000, ttl: 30000 },  // in-memory L1; ttl in ms (30s)
      l2: { defaultTtl: 3600 },           // Redis L2; ttl in seconds (1 hour)
    }),
    new LocksPlugin({
      defaultTtl: 30000,  // 30s lock TTL (ms)
      autoRenew: { enabled: true, interval: 10000 },  // renew every 10s (TTL/3)
    }),
    new RateLimitPlugin({
      algorithm: 'sliding-window',
    }),
  ],
})
```
| Parameter | Value | Rationale |
| --- | --- | --- |
| L1 size | 1000 | Moderate memory, good hit rate |
| L1 TTL | 30s | Balance freshness/efficiency |
| L2 TTL | 1 hour | Standard cache lifetime |
| Lock TTL | 30s | Typical operation duration |
| Lock renewal | 10s | Renew at 1/3 TTL |

### Latency-Sensitive Profile

For APIs where response time is critical:

```typescript
RedisModule.forRoot({
  clients: {
    host: process.env.REDIS_HOST,
    port: 6379,
    commandTimeout: 1000,  // 1s timeout
  },
  plugins: [
    new CachePlugin({
      l1: { maxSize: 5000, ttl: 60000 },  // Larger L1
      l2: { defaultTtl: 300 },            // Shorter L2
      stampede: { enabled: true },
    }),
    new LocksPlugin({
      defaultTtl: 10000,  // Shorter locks
      retry: { maxRetries: 1, initialDelay: 100 },  // Fast fail
      autoRenew: { enabled: false },
    }),
    new RateLimitPlugin({
      algorithm: 'token-bucket',  // Allows bursts
    }),
  ],
})
```
| Parameter | Value | Rationale |
| --- | --- | --- |
| Command timeout | 1s | Fail fast |
| L1 size | 5000 | More in-memory hits |
| L1 TTL | 60s | Longer local cache |
| Lock TTL | 10s | Short operations only |
| Lock timeout | 1s | Don't wait long |
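The reason token-bucket suits latency-sensitive APIs is that a full bucket absorbs a short burst instantly, then refills at a steady rate. This toy implementation illustrates the mechanic; it is not the `RateLimitPlugin` internals, and `makeBucket` is a hypothetical name:

```typescript
// Toy token bucket: a full bucket can absorb `capacity` requests at once,
// then refills at `ratePerSec`. The caller supplies a logical clock in
// seconds so the behavior is deterministic.
function makeBucket(capacity: number, ratePerSec: number) {
  let tokens = capacity;
  let last = 0;
  return (nowSec: number): boolean => {
    // Refill proportionally to elapsed time, capped at capacity.
    tokens = Math.min(capacity, tokens + (nowSec - last) * ratePerSec);
    last = nowSec;
    if (tokens >= 1) {
      tokens -= 1;  // spend one token for this request
      return true;
    }
    return false;   // bucket empty: reject
  };
}
```

With `capacity: 3, ratePerSec: 1`, three requests at t=0 all pass, the fourth is rejected, and one more is admitted at t=1 after a token refills.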

### Throughput Profile

For batch processing and background jobs:

```typescript
RedisModule.forRoot({
  clients: {
    host: process.env.REDIS_HOST,
    port: 6379,
    enableOfflineQueue: true,
  },
  plugins: [
    new CachePlugin({
      l1: { enabled: false },  // Skip L1 for throughput
      l2: { defaultTtl: 7200 },
    }),
    new LocksPlugin({
      defaultTtl: 300000,  // 5 min for long operations
      autoRenew: { enabled: true, interval: 60000 },
    }),
    new StreamsPlugin({
      consumer: {
        batchSize: 100,       // Large batches
        blockTimeout: 10000,  // Long poll
      },
    }),
  ],
})
```
| Parameter | Value | Rationale |
| --- | --- | --- |
| L1 cache | Disabled | Avoid memory pressure |
| Lock TTL | 5 min | Long batch operations |
| Batch size | 100 | Reduce round trips |
| Block timeout | 10s | Efficient long polling |

### Memory-Constrained Profile

For edge deployments or small containers:

```typescript
RedisModule.forRoot({
  clients: {
    host: process.env.REDIS_HOST,
    port: 6379,
  },
  plugins: [
    new CachePlugin({
      l1: { maxSize: 100, ttl: 10000 },  // Tiny L1
      l2: { defaultTtl: 300 },           // Short TTL
    }),
    new LocksPlugin({
      autoRenew: { enabled: false },  // Save resources
    }),
  ],
})
```
| Parameter | Value | Rationale |
| --- | --- | --- |
| L1 size | 100 | Minimal memory |
| L1 TTL | 10s | Quick expiration |
| L2 TTL | 5 min | Reduce Redis memory |
| Auto-renew | Disabled | Fewer background tasks |

## Parameter Reference

### Cache Parameters

| Parameter | Range | Impact |
| --- | --- | --- |
| `l1.maxSize` | 100-10000 | Memory usage, hit rate |
| `l1.ttl` | 5s-60s | Freshness vs hits |
| `l2.defaultTtl` | 60s-86400s | Redis memory, staleness |
| `stampede.enabled` | true/false | Protection vs complexity |
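What `stampede.enabled` buys you can be shown with a minimal single-flight sketch: concurrent misses for the same key share one loader call instead of each hitting the backend. This is illustrative only, not the plugin's implementation, and `loadOnce` is a hypothetical name:

```typescript
// Track one in-flight loader promise per key.
const inFlight = new Map<string, Promise<unknown>>();

async function loadOnce<T>(key: string, loader: () => Promise<T>): Promise<T> {
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>;  // join the existing load
  const p = loader().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

If 100 requests miss the cache for the same key at once, the loader (e.g. a database query) runs once and all 100 callers await the same promise.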

### Lock Parameters

| Parameter | Range | Impact |
| --- | --- | --- |
| `defaultTtl` | 5s-300s | Safety vs flexibility |
| `timeout` | 0-30s | Wait time vs responsiveness |
| `autoRenew.interval` | TTL/3-TTL/2 | Reliability vs overhead |
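To keep `autoRenew.interval` inside the recommended TTL/3-TTL/2 window, a small helper can derive it from the lock TTL. This is a sketch, not part of the plugin API; `renewInterval` is a hypothetical name:

```typescript
// Derive a lock auto-renew interval (ms) from the lock TTL (ms),
// clamping the requested fraction to the TTL/3–TTL/2 window above.
function renewInterval(lockTtlMs: number, fraction = 1 / 3): number {
  const clamped = Math.min(Math.max(fraction, 1 / 3), 1 / 2);
  return Math.round(lockTtlMs * clamped);
}
```

For the default profile's 30s lock TTL this yields 10000 ms, matching the `interval: 10000` shown earlier; a fraction outside the window is clamped rather than honored.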

### Stream Parameters

| Parameter | Range | Impact |
| --- | --- | --- |
| `batchSize` | 1-1000 | Throughput vs latency |
| `blockTimeout` | 1s-30s | Responsiveness vs efficiency |
| `maxRetries` | 1-10 | Reliability vs DLQ growth |

## Tuning Process

1. Start with the Default Profile
2. Monitor key metrics:
   - Cache hit rate
   - Lock wait time
   - Operation latency
3. Identify bottlenecks:
   - Low hit rate → increase TTL or L1 size
   - High lock contention → reduce lock scope or TTL
   - High latency → enable L1, reduce timeouts
4. Adjust one parameter at a time
5. Validate with load testing
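The "identify bottlenecks" step can be sketched as a metrics-to-suggestion mapping. The thresholds here (80% hit rate, 100ms lock wait, 50ms latency) are illustrative assumptions, not library defaults, and `suggestTuning` is a hypothetical helper:

```typescript
interface Metrics {
  cacheHitRate: number;  // 0..1, hits / (hits + misses)
  lockWaitMs: number;    // p95 time spent waiting to acquire locks
  latencyMs: number;     // p95 end-to-end operation latency
}

// Map observed metrics to the tuning moves listed above.
function suggestTuning(m: Metrics): string[] {
  const suggestions: string[] = [];
  if (m.cacheHitRate < 0.8) suggestions.push('increase L1 size or TTL');
  if (m.lockWaitMs > 100) suggestions.push('reduce lock scope or TTL');
  if (m.latencyMs > 50) suggestions.push('enable L1, reduce timeouts');
  return suggestions;
}
```

Feeding in a week of production percentiles, then changing one parameter per deploy, keeps cause and effect attributable.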

## Anti-Patterns

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| L1 TTL > L2 TTL | Serving stale data | Keep L1 TTL < L2 TTL |
| Lock TTL < operation time | Lost locks | Measure operation, add margin |
| No command timeout | Hung requests | Set reasonable timeout |
| Huge batch sizes | Memory spikes | Balance batch size |
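These anti-patterns are cheap to catch before deploy. The checker below is a sketch under assumptions: all TTLs are normalized to milliseconds, the lock margin is 1.5× the measured p95 operation time, and `findAntiPatterns` is a hypothetical name rather than a library API:

```typescript
interface TuningConfig {
  l1TtlMs: number;           // L1 cache TTL, ms
  l2TtlMs: number;           // L2 cache TTL, normalized to ms
  lockTtlMs: number;         // lock TTL, ms
  opTimeP95Ms: number;       // measured p95 operation duration, ms
  commandTimeoutMs?: number; // Redis command timeout, if set
}

// Flag the anti-patterns from the table above.
function findAntiPatterns(c: TuningConfig): string[] {
  const issues: string[] = [];
  if (c.l1TtlMs >= c.l2TtlMs)
    issues.push('L1 TTL should be shorter than L2 TTL');
  if (c.lockTtlMs < c.opTimeP95Ms * 1.5)
    issues.push('lock TTL leaves less than 50% margin over p95 operation time');
  if (c.commandTimeoutMs == null)
    issues.push('no command timeout set');
  return issues;
}
```

Running such a check in CI turns each anti-pattern into a failed build instead of a production incident.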
