It was 2 AM when a Slack alert jolted me awake. The API server's response time had doubled, and the dashboard was bleeding red. Just hours earlier, I had deployed an update to Node.js 22 LTS, feeling smug about replacing external libraries with the native fetch API. I thought it would be cleaner. Instead, the CPU was spiking, and the network queue was backing up. Twelve years of building startups has taught me one hard truth: "native" and "standard" don't always mean "fastest" or "most efficient."
The Brutal Reality of Native Fetch Numbers
After the rollback, I needed to know exactly why the server choked. I ran some benchmarks on my M1 Max using Node 22.2.0, and the results were sobering. When comparing throughput, the low-level undici.request handled approximately 16,200 req/sec, while the native fetch struggled at 12,100 req/sec (Source: Personal benchmark using autocannon, 10 concurrent connections). That is a performance hit of over 25%.
Latency was equally disappointing. The P99 latency for native fetch hovered around 14ms, whereas the optimized Undici client stayed under 9ms (Source: Node.js Core Benchmarks and local verification). In a microservices architecture where an incoming request might trigger five internal API calls, that 5ms overhead compounds quickly: five sequential hops add roughly 25ms of pure client overhead, turning a snappy UI into a sluggish mess for the end user.
The Technical Debt of Web Standards
Under the hood, Node.js's native fetch is actually built on top of undici. So why the gap? The culprit is spec compliance. To behave exactly like the browser API, fetch must instantiate complex Request and Response objects for every single call. It has to validate headers according to the strict rules of the WHATWG Fetch standard and handle streams in a way that satisfies the spec.
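You can see this allocation cost directly, no network required. The rough micro-benchmark below (a sketch, not a rigorous measurement) times nothing but the construction of spec-compliant Request objects in a tight loop:

```javascript
// Rough micro-benchmark: how much does building a spec-compliant
// Request object cost, before any bytes hit the wire?
const N = 100_000;

const start = process.hrtime.bigint();
for (let i = 0; i < N; i++) {
  // Each construction parses the URL and validates headers per the Fetch spec
  new Request('https://api.service.local/user/1', {
    method: 'GET',
    headers: { accept: 'application/json' },
  });
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`${N} Request objects in ${elapsedMs.toFixed(1)} ms`);
```

None of that work exists when a client sends plain headers over an already-open socket.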
Another major issue is connection management. Native fetch in Node 22, while improved, often lacks the fine-tuned connection pooling that a dedicated HTTP client provides. It prioritizes being a drop-in replacement for the browser rather than being a high-performance engine for server-to-server communication. Honestly, when you're doing internal RPC calls, do you really need a fully spec-compliant Request object allocated on every single call?
Optimizing for Speed: Beyond the Standard
I’m not suggesting you go back to axios or outdated wrappers. If you're on Node 22, the best way to regain performance is to use Undici's Pool or Agent directly for high-traffic routes. Here is how I refactored the problematic code:
// Before: Using the naive native fetch
async function fetchUser(id) {
  const response = await fetch(`https://api.service.local/user/${id}`);
  return response.json();
}

// After: Using Undici Pool for maximum throughput
import { Pool } from 'undici';

const apiPool = new Pool('https://api.service.local', {
  connections: 64,             // Explicitly manage the pool size
  keepAliveTimeout: 10 * 1000, // Keep sockets warm
  bodyTimeout: 0               // Prevent unexpected timeouts under load
});

async function fetchUserOptimized(id) {
  const { body } = await apiPool.request({
    path: `/user/${id}`,
    method: 'GET'
  });
  return body.json();
}

After applying this change, our P99 latency dropped by 35%, and CPU usage stabilized (Source: Production Datadog metrics). The trade-off is obvious: you lose the simple, universal fetch syntax and have to manage pools per origin. But for critical paths, that's a price worth paying.
How to Measure in Your Own Backyard
Don't take my word for it. Every environment is different. To see the impact yourself, I recommend using autocannon for a quick stress test. Create two endpoints—one using fetch and one using undici.request—and run:
npx autocannon -c 100 -d 10 http://localhost:3000/test-endpoint
You might find that for small JSON payloads, the object creation overhead of fetch is even more pronounced. In my tests, the performance gap actually widened as the frequency of requests increased.
Engineering is about choosing the right tool for the specific job, not just following the latest trend. Native fetch is great for scripts, CLI tools, or low-traffic admin panels. But for the core of your high-concurrency backend, don't let the "standard" label fool you into accepting sub-optimal performance. Go audit your most-called API functions today.