Redis vs ioredis vs valkey-glide
Written by punkpeye (@punkpeye).
I benchmarked all major Node.js Redis clients (ioredis, redis, and valkey-glide) to see whether we should stick with ioredis or whether migrating to one of the other clients would pay off.
TL;DR: All clients perform similarly for sequential operations. redis wins for concurrent workloads, and redis with RESP3 excels at transactions. For typical web app workloads the differences are negligible, so pick based on API preference.
Here's the breakdown and the script I used.
Benchmark Results
Basic Operations
| Operation | Client | Ops/sec | Avg | Min | Max |
| --- | --- | --- | --- | --- | --- |
| SET | ioredis | 8,158 | 0.122ms | 0.067ms | 3.984ms |
| SET | redis | 8,339 | 0.120ms | 0.069ms | 2.840ms |
| SET | redis (RESP3) | 8,385 | 0.119ms | 0.077ms | 1.025ms |
| SET | valkey-glide | 6,585 | 0.152ms | 0.087ms | 33.347ms |
| GET | ioredis | 8,727 | 0.115ms | 0.073ms | 3.615ms |
| GET | redis | 7,971 | 0.125ms | 0.080ms | 6.820ms |
| GET | redis (RESP3) | 7,793 | 0.128ms | 0.069ms | 10.074ms |
| GET | valkey-glide | 7,193 | 0.139ms | 0.090ms | 0.487ms |
Large Values (10KB)
| Operation | Client | Ops/sec | Avg | Min | Max |
| --- | --- | --- | --- | --- | --- |
| SET | ioredis | 5,396 | 0.185ms | 0.128ms | 0.552ms |
| SET | redis | 5,097 | 0.196ms | 0.131ms | 5.286ms |
| SET | redis (RESP3) | 5,219 | 0.192ms | 0.124ms | 2.575ms |
| SET | valkey-glide | 4,754 | 0.210ms | 0.143ms | 2.403ms |
| GET | ioredis | 8,067 | 0.124ms | 0.077ms | 2.082ms |
| GET | redis | 8,160 | 0.123ms | 0.071ms | 8.275ms |
| GET | redis (RESP3) | 8,032 | 0.124ms | 0.073ms | 10.921ms |
| GET | valkey-glide | 6,864 | 0.146ms | 0.089ms | 2.046ms |
Hash Operations
| Operation | Client | Ops/sec | Avg | Min | Max |
| --- | --- | --- | --- | --- | --- |
| HSET | ioredis | 7,304 | 0.137ms | 0.075ms | 29.547ms |
| HSET | redis | 8,213 | 0.122ms | 0.079ms | 0.976ms |
| HSET | redis (RESP3) | 7,860 | 0.127ms | 0.080ms | 7.553ms |
| HSET | valkey-glide | 6,754 | 0.148ms | 0.084ms | 4.903ms |
| HGET | ioredis | 8,618 | 0.116ms | 0.068ms | 2.042ms |
| HGET | redis | 8,338 | 0.120ms | 0.075ms | 1.074ms |
| HGET | redis (RESP3) | 8,675 | 0.115ms | 0.076ms | 0.378ms |
| HGET | valkey-glide | 6,660 | 0.150ms | 0.082ms | 7.266ms |
Pipeline/Transaction & Increment
| Operation | Client | Ops/sec | Avg | Min | Max |
| --- | --- | --- | --- | --- | --- |
| Pipeline (100 SETs) | ioredis pipeline | 3,334 | 0.300ms | 0.228ms | 0.463ms |
| Transaction (100 SETs) | ioredis multi | 3,178 | 0.315ms | 0.248ms | 2.100ms |
| Transaction (100 SETs) | redis multi | 3,141 | 0.318ms | 0.251ms | 2.353ms |
| Transaction (100 SETs) | redis (RESP3) multi | 3,615 | 0.277ms | 0.234ms | 0.397ms |
| Pipeline (100 SETs) | valkey-glide batch | 3,310 | 0.302ms | 0.270ms | 0.342ms |
| Transaction (100 SETs) | valkey-glide atomic | 3,458 | 0.289ms | 0.253ms | 0.339ms |
| INCR | ioredis | 8,364 | 0.120ms | 0.074ms | 1.740ms |
| INCR | redis | 8,413 | 0.119ms | 0.076ms | 2.335ms |
| INCR | redis (RESP3) | 8,207 | 0.122ms | 0.074ms | 1.421ms |
| INCR | valkey-glide | 6,779 | 0.147ms | 0.089ms | 0.689ms |
Concurrent Operations (100 parallel SETs)
| Client | Ops/sec | Avg | Min | Max |
| --- | --- | --- | --- | --- |
| ioredis | 2,447 | 0.409ms | 0.349ms | 0.578ms |
| ioredis (auto-pipeline) | 2,748 | 0.364ms | 0.274ms | 3.551ms |
| redis | 3,832 | 0.261ms | 0.215ms | 0.397ms |
| redis (RESP3) | 3,384 | 0.295ms | 0.259ms | 0.383ms |
| redis (pool) | 253 | 3.957ms | 3.218ms | 4.502ms |
| valkey-glide | 1,899 | 0.527ms | 0.444ms | 1.257ms |
Summary
| Operation | ioredis | redis | redis (RESP3) | valkey-glide | Winner |
| --- | --- | --- | --- | --- | --- |
| SET | 8,158 | 8,339 | 8,385 | 6,585 | redis (RESP3) |
| GET | 8,727 | 7,971 | 7,793 | 7,193 | ioredis |
| SET (10KB) | 5,396 | 5,097 | 5,219 | 4,754 | ioredis |
| GET (10KB) | 8,067 | 8,160 | 8,032 | 6,864 | redis |
| HSET | 7,304 | 8,213 | 7,860 | 6,754 | redis |
| HGET | 8,618 | 8,338 | 8,675 | 6,660 | redis (RESP3) |
| Pipeline | 3,334 | – | – | 3,310 | ioredis |
| Transaction | 3,178 | 3,141 | 3,615 | 3,458 | redis (RESP3) |
| INCR | 8,364 | 8,413 | 8,207 | 6,779 | redis |
| Concurrent | 2,447 | 3,832 | 3,384 | 1,899 | redis |
Key Takeaways
Sequential Operations
| Comparison | Result |
| --- | --- |
| ioredis vs redis | Roughly equal (±5%) |
| ioredis GET performance | ioredis leads (~9% faster than redis) |
| redis HSET performance | redis leads (~12% faster than ioredis) |
| All clients vs valkey-glide | valkey-glide ~15-20% slower |
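The "% faster" figures throughout this post are derived from the ops/sec columns in the Summary table: the faster client's throughput relative to the slower one, minus one. A small sketch of that arithmetic:

```typescript
// Derive "% faster" from two ops/sec figures: throughput ratio minus 1,
// rounded to the nearest whole percent.
const pctFaster = (fast: number, slow: number): number =>
  Math.round((fast / slow - 1) * 100);

console.log(pctFaster(8_727, 7_971)); // ioredis vs redis on GET → 9
console.log(pctFaster(8_213, 7_304)); // redis vs ioredis on HSET → 12
console.log(pctFaster(3_832, 2_447)); // redis vs ioredis, concurrent → 57
```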
Pipeline vs Transaction (100 batched SETs)
| Comparison | Result |
| --- | --- |
| ioredis pipeline vs ioredis multi | Pipeline ~5% faster (no atomicity) |
| redis (RESP3) multi vs ioredis multi | redis (RESP3) 14% faster |
| valkey-glide atomic vs ioredis multi | valkey-glide 9% faster |
| All clients | Comparable performance (3,100-3,600 ops/sec) |
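Note the scale difference: ~3,300 batches/sec of 100 SETs each is roughly 330,000 SETs/sec, versus ~8,000 ops/sec issued one at a time. The reason is round trips. A toy cost model (the latency constants below are illustrative assumptions, not measurements from this benchmark) shows why batching amortizes the network cost:

```typescript
// Toy cost model for batching: assume each round trip to the server
// costs a fixed network latency, plus a small per-command processing
// cost. Both constants are made up for illustration.
const RTT_MS = 0.1; // assumed network round-trip time
const PER_COMMAND_MS = 0.002; // assumed per-command server cost

// 100 sequential SETs pay the round trip 100 times.
const sequentialMs = (commands: number): number =>
  commands * (RTT_MS + PER_COMMAND_MS);

// One pipelined batch pays it once.
const pipelinedMs = (commands: number): number =>
  RTT_MS + commands * PER_COMMAND_MS;

console.log(sequentialMs(100).toFixed(1)); // "10.2"
console.log(pipelinedMs(100).toFixed(1)); // "0.3"
```

Under this model the per-command server cost dominates inside a batch, which is why all four clients converge to similar throughput once batching removes the network round trips from the equation.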
Concurrent Operations (100 parallel SETs)
| Comparison | Result |
| --- | --- |
| redis vs ioredis | redis 57% faster |
| redis vs ioredis (auto-pipeline) | redis 39% faster |
| ioredis auto-pipeline vs standard | auto-pipeline 12% faster |
| redis vs redis (RESP3) | redis (RESP2) 13% faster |
| redis pool | ⚠️ Performed poorly (avoid) |
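The concurrent numbers hinge on how well a client overlaps in-flight commands. The effect itself is easy to demonstrate without any Redis at all; here simulated delays stand in for round trips (an assumption for illustration, not the real clients):

```typescript
// Why concurrent dispatch matters: with one command in flight at a
// time, waits add up; with everything dispatched up front (as
// Promise.all does), waits overlap. setTimeout simulates a round trip.
const delay = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

const ops = 20;

// Sequential: each 5ms wait starts only after the previous finishes.
const seqStart = performance.now();
for (let i = 0; i < ops; i++) {
  await delay(5);
}
const sequentialMs = performance.now() - seqStart;

// Concurrent: all 20 waits run at once.
const concStart = performance.now();
await Promise.all(Array.from({ length: ops }, () => delay(5)));
const concurrentMs = performance.now() - concStart;

console.log(concurrentMs < sequentialMs); // true: overlapping waits win
```

Auto-pipelining goes one step further and coalesces those overlapping commands into fewer socket writes, which is where the extra 12% for ioredis comes from.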
Bottom Line
Sequential workloads → All clients perform similarly; ioredis slightly better for GETs, redis for HSETs
Concurrent workloads → redis (RESP2) wins decisively
Transactions → redis (RESP3) is fastest; valkey-glide competitive
Large value writes → ioredis has a slight edge
Latency consistency → valkey-glide shows lowest max latencies for some operations
Which Should You Pick?
| Use Case | Recommendation |
| --- | --- |
| High-concurrency web app | redis (fastest under concurrent load) |
| Transaction-heavy workload | redis (RESP3) (fastest transactions) |
| Read-heavy workload | ioredis (fastest GETs) |
| Simple, low-traffic app | Any of them; the differences won't matter |
| Need Valkey compatibility | valkey-glide (competitive performance, full feature support) |
| Lowest latency variance | valkey-glide (most consistent max latencies) |
Script
import { config } from '#app/config.server.ts';
import { Batch, GlideClient } from '@valkey/valkey-glide';
import Redis from 'ioredis';
import { createClient, createClientPool } from 'redis';
/**
* Benchmarks Redis vs Valkey client performance.
*
* Run with:
* ```
* node --expose-gc --import tsx app/bin/benchmark-redis.ts
* ```
*/
const maybeGc = () => {
if (global.gc) {
global.gc();
}
};
const parseRedisUrl = (
url: string,
): {
host: string;
port: number;
} => {
const parsed = new URL(url);
return {
host: parsed.hostname,
port: parsed.port ? Number(parsed.port) : 6_379,
};
};
type BenchmarkResult = {
avgMs: number;
maxMs: number;
minMs: number;
name: string;
opsPerSec: number;
totalMs: number;
};
const runBenchmark = async (
name: string,
iterations: number,
fn: () => Promise<void>,
): Promise<BenchmarkResult> => {
const times: number[] = [];
// Warmup
for (let i = 0; i < Math.min(100, iterations / 10); i++) {
await fn();
}
const start = performance.now();
for (let i = 0; i < iterations; i++) {
const iterStart = performance.now();
await fn();
times.push(performance.now() - iterStart);
}
const totalMs = performance.now() - start;
return {
avgMs: times.reduce((a, b) => a + b, 0) / times.length,
maxMs: Math.max(...times),
minMs: Math.min(...times),
name,
opsPerSec: Math.round((iterations / totalMs) * 1_000),
totalMs,
};
};
const formatResult = (result: BenchmarkResult): string => {
return `${result.name}: ${result.opsPerSec.toLocaleString()} ops/sec (avg: ${result.avgMs.toFixed(3)}ms, min: ${result.minMs.toFixed(3)}ms, max: ${result.maxMs.toFixed(3)}ms)`;
};
const main = async () => {
const { host, port } = parseRedisUrl(config.REDIS_DSN);
console.log(`Connecting to Redis at ${host}:${port}\n`);
// Initialize clients
const ioredisClient = new Redis({ host, port });
const ioredisAutoPipelineClient = new Redis({
enableAutoPipelining: true,
host,
port,
});
const redisClient = createClient({ url: `redis://${host}:${port}` });
await redisClient.connect();
const redisResp3Client = createClient({
RESP: 3,
url: `redis://${host}:${port}`,
});
await redisResp3Client.connect();
const redisPoolClient = createClientPool(
{ url: `redis://${host}:${port}` },
{ maximum: 5, minimum: 5 },
);
await redisPoolClient.connect();
const glideClient = await GlideClient.createClient({
addresses: [{ host, port }],
clientName: 'benchmark',
});
const iterations = 10_000;
const testKey = 'benchmark:string';
const hashKey = 'benchmark:hash';
const counterKey = 'benchmark:counter';
const pipelineKeyPrefix = 'benchmark:pipeline';
const concurrentKeyPrefix = 'benchmark:concurrent';
const testValue = 'hello world';
const largeValue = 'x'.repeat(10_000);
console.log(
`Running benchmarks with ${iterations.toLocaleString()} iterations each\n`,
);
// ===================
// SEQUENTIAL BENCHMARKS
// These run one operation at a time - tests raw client overhead
// ===================
maybeGc();
// SET benchmarks
console.log('=== SET (sequential) ===');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.set(testKey, testValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.set(testKey, testValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.set(testKey, testValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.set(testKey, testValue);
}),
),
);
maybeGc();
// GET benchmarks
console.log('\n=== GET (sequential) ===');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.get(testKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.get(testKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.get(testKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.get(testKey);
}),
),
);
maybeGc();
// Large value SET benchmarks
console.log('\n=== SET 10KB (sequential) ===');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.set(testKey, largeValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.set(testKey, largeValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.set(testKey, largeValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.set(testKey, largeValue);
}),
),
);
maybeGc();
// Large value GET benchmarks
console.log('\n=== GET 10KB (sequential) ===');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.get(testKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.get(testKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.get(testKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.get(testKey);
}),
),
);
maybeGc();
// HSET benchmarks
console.log('\n=== HSET (sequential) ===');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.hset(hashKey, 'field', testValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.hSet(hashKey, 'field', testValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.hSet(hashKey, 'field', testValue);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.hset(hashKey, { field: testValue });
}),
),
);
maybeGc();
// HGET benchmarks
console.log('\n=== HGET (sequential) ===');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.hget(hashKey, 'field');
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.hGet(hashKey, 'field');
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.hGet(hashKey, 'field');
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.hget(hashKey, 'field');
}),
),
);
maybeGc();
// INCR benchmarks
console.log('\n=== INCR (sequential) ===');
await ioredisClient.set(counterKey, '0');
console.log(
formatResult(
await runBenchmark('ioredis', iterations, async () => {
await ioredisClient.incr(counterKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis', iterations, async () => {
await redisClient.incr(counterKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('redis (RESP3)', iterations, async () => {
await redisResp3Client.incr(counterKey);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', iterations, async () => {
await glideClient.incr(counterKey);
}),
),
);
// ===================
// PIPELINE/TRANSACTION BENCHMARKS
// Valkey GLIDE 2.0 uses Batch class:
// - Batch(true) = atomic (Transaction with MULTI/EXEC)
// - Batch(false) = non-atomic (Pipeline)
// ===================
maybeGc();
const pipelineSize = 100;
const pipelineIterations = iterations / pipelineSize;
console.log(
`\n=== Pipeline/Transaction (${pipelineSize} SETs per batch) ===`,
);
// ioredis pipeline - batches without atomicity
console.log(
formatResult(
await runBenchmark(
'ioredis pipeline (no atomicity)',
pipelineIterations,
async () => {
const pipeline = ioredisClient.pipeline();
for (let i = 0; i < pipelineSize; i++) {
pipeline.set(`${pipelineKeyPrefix}:${i}`, testValue);
}
await pipeline.exec();
},
),
),
);
// ioredis multi - transaction with atomicity (for fair comparison with redis)
console.log(
formatResult(
await runBenchmark(
'ioredis multi (atomic)',
pipelineIterations,
async () => {
const multi = ioredisClient.multi();
for (let i = 0; i < pipelineSize; i++) {
multi.set(`${pipelineKeyPrefix}:${i}`, testValue);
}
await multi.exec();
},
),
),
);
// redis multi - transaction with atomicity
console.log(
formatResult(
await runBenchmark(
'redis multi (atomic)',
pipelineIterations,
async () => {
const multi = redisClient.multi();
for (let i = 0; i < pipelineSize; i++) {
multi.set(`${pipelineKeyPrefix}:${i}`, testValue);
}
await multi.exec();
},
),
),
);
// redis RESP3 multi - transaction with atomicity
console.log(
formatResult(
await runBenchmark(
'redis (RESP3) multi (atomic)',
pipelineIterations,
async () => {
const multi = redisResp3Client.multi();
for (let i = 0; i < pipelineSize; i++) {
multi.set(`${pipelineKeyPrefix}:${i}`, testValue);
}
await multi.exec();
},
),
),
);
// valkey-glide Batch (non-atomic / pipeline mode)
console.log(
formatResult(
await runBenchmark(
'valkey-glide batch (pipeline, no atomicity)',
pipelineIterations,
async () => {
const batch = new Batch(false); // false = non-atomic (pipeline)
for (let i = 0; i < pipelineSize; i++) {
batch.set(`${pipelineKeyPrefix}:${i}`, testValue);
}
await glideClient.exec(batch, false);
},
),
),
);
// valkey-glide Batch (atomic / transaction mode)
console.log(
formatResult(
await runBenchmark(
'valkey-glide batch (atomic)',
pipelineIterations,
async () => {
const batch = new Batch(true); // true = atomic (transaction)
for (let i = 0; i < pipelineSize; i++) {
batch.set(`${pipelineKeyPrefix}:${i}`, testValue);
}
await glideClient.exec(batch, false);
},
),
),
);
// ===================
// CONCURRENT BENCHMARKS
// Multiple operations in flight simultaneously - this is where
// auto-pipelining and connection pools provide benefits
// ===================
maybeGc();
const concurrentSize = 100;
const concurrentIterations = iterations / concurrentSize;
console.log(`\n=== Concurrent (${concurrentSize} parallel SETs) ===`);
// Single client - commands queue behind each other
console.log(
formatResult(
await runBenchmark('ioredis', concurrentIterations, async () => {
await Promise.all(
Array.from(
{ length: concurrentSize },
async (_, i) =>
await ioredisClient.set(`${concurrentKeyPrefix}:${i}`, testValue),
),
);
}),
),
);
// Auto-pipeline batches concurrent commands automatically
console.log(
formatResult(
await runBenchmark(
'ioredis (auto-pipeline)',
concurrentIterations,
async () => {
await Promise.all(
Array.from(
{ length: concurrentSize },
async (_, i) =>
await ioredisAutoPipelineClient.set(
`${concurrentKeyPrefix}:${i}`,
testValue,
),
),
);
},
),
),
);
// Single client
console.log(
formatResult(
await runBenchmark('redis', concurrentIterations, async () => {
await Promise.all(
Array.from(
{ length: concurrentSize },
async (_, i) =>
await redisClient.set(`${concurrentKeyPrefix}:${i}`, testValue),
),
);
}),
),
);
// Single client with RESP3
console.log(
formatResult(
await runBenchmark('redis (RESP3)', concurrentIterations, async () => {
await Promise.all(
Array.from(
{ length: concurrentSize },
async (_, i) =>
await redisResp3Client.set(
`${concurrentKeyPrefix}:${i}`,
testValue,
),
),
);
}),
),
);
// Pool distributes across 5 connections
console.log(
formatResult(
await runBenchmark('redis (pool)', concurrentIterations, async () => {
await Promise.all(
Array.from(
{ length: concurrentSize },
async (_, i) =>
await redisPoolClient.set(
`${concurrentKeyPrefix}:${i}`,
testValue,
),
),
);
}),
),
);
console.log(
formatResult(
await runBenchmark('valkey-glide', concurrentIterations, async () => {
await Promise.all(
Array.from({ length: concurrentSize }, (_, i) =>
glideClient.set(`${concurrentKeyPrefix}:${i}`, testValue),
),
);
}),
),
);
// Cleanup
console.log('\nCleaning up...');
await ioredisClient.del(testKey);
await ioredisClient.del(hashKey);
await ioredisClient.del(counterKey);
for (let i = 0; i < pipelineSize; i++) {
await ioredisClient.del(`${pipelineKeyPrefix}:${i}`);
}
for (let i = 0; i < concurrentSize; i++) {
await ioredisClient.del(`${concurrentKeyPrefix}:${i}`);
}
// Close connections
await ioredisClient.quit();
await ioredisAutoPipelineClient.quit();
await redisClient.quit();
await redisResp3Client.quit();
await redisPoolClient.quit();
glideClient.close();
console.log('Done!');
};
main().catch(console.error);