Honestly, I never thought I'd be writing about this, but the matter of patching globals turned out to be one of those topics a lot of engineers seem to misunderstand. Most of us don't do monkey-patching, and even fewer stay around those implementations long enough to witness their impact. All the more reason to talk about it.

Throughout the next few thousand words, I will be referring to modifications of global APIs as "patching". You take the code you don't own, you modify its behavior, you patch it. I hope we are clear on semantics.

I am using React, Next.js, and Bun as examples in this article because they have introduced features built on top of patched globals in recent months, which was met with sizeable backlash from the community. This piece is not part of that backlash. If you came here to bolster your dislike toward those projects or the companies behind them, I kindly encourage you to close this tab and go on your way. I have voiced my concerns about the matter publicly and have seen firsthand how deserved criticism gets muddied with blind hatred, and I wish to contribute to that kind of discourse no longer.

I love API design. All of the said companies have featured excellent design choices in the past. Today, you will learn why patching globals is never such a choice, despite its reasoning and appeal, which we will also discuss.

Background

I first came upon the "patching issue" when I heard the announcement of the new Caching API in Next.js. While the feature itself is a great addition to the developer and user experience, the proposed public API was rather . . . unusual.

The team has decided to provide a Next.js-specific feature by extending a standard global API—fetch.

fetch('https://kettanaito.com', {
  // This is not a valid option of RequestInit,
  // and it doesn't exist outside of Next.js.
  next: { revalidate: 10 },
})

After looking more into this, I soon discovered that React itself was patching the global fetch function to cache server-side responses in its in-progress React Server Components model. Here were one of the most used JavaScript frameworks and one of the most used JavaScript meta-frameworks introducing what I find to be a rather harmful design pattern.

A few days ago, I stumbled upon a tweet from Bun, reminding its users about a convenient way to proxy HTTP requests. All it takes is to give any fetch request a proxy option:

fetch('https://redd.one', {
  // This is not a valid option of RequestInit,
  // and it doesn't exist outside of Bun.
  proxy: 'https://kettanaito.com',
})

At this point, it was the third time a major framework (or, in this case, a runtime) introduced a feature that relied on modifying an existing global API. I did (and still do) assume good intentions from the authors, yet I couldn't help but worry that it was an oversight, an accidental spark from a flint that, if not doused, would shroud our whole dear forest in fire.

But why do I think those APIs are bad? What is so harmful about them?

Answering these questions without understanding the reason behind those design decisions first would be simply improper.

Appeal of patching globals

It's not an easy thing to come up with a good API. Even more so with an API that will be used by thousands of people. It has to be intuitive and concise, and solve the problem it sets out to solve, all while being efficient about it. There are a lot of boxes to check, with some of them pulling your judgment in opposite directions.

Luckily, all three examples we have today happened to concern fetch. It's a global function to make HTTP requests on the web, and it's known and used by a lot of people out there. You call fetch when you want a request to happen. Isn't that the perfect place to say you want that request cached too? Maybe proxied elsewhere? And because it's a global, it doesn't even have to be imported! We are checking off those boxes so fast we are grinding pencils to stumps here.

In the end, why should anyone go through the trouble of inventing custom functions and methods for something the platform should have had to begin with?

And I do get this appeal, I truly do. There is, however, a cost to every decision, and the cost of patching globals doesn't just affect both users and maintainers; it actively harms otherwise great APIs and even the language itself. Now, to the specifics.

Harm #1: Maintenance

When you patch a global, you modify something you do not own. Maintaining the things you do not own is hard. That is why you should always strive toward making the experiences you ship apparent and explicit. Like so:

import { experience } from 'your-thing'

That will also help you assess the blast radius of any change you decide to make to your framework in the future.

See, globals are precisely that—global. The only thing worse than thinking you can predict all the possible ways a global is used in the wild, both by your users and by other tools, is assuming you can accommodate them all.

Context: Hi 👋 I patch globals for a living. I don't enjoy that. I actually hate that, and I'd move away from it in an instant if I had the choice (and where I did have it, I took it, moving network interception to a Service Worker the moment it clicked). Sadly, the nature of what I do doesn't give me much choice. Luckily, all the frameworks I bring up as examples today do have that choice, and I will even show an alternative design to their APIs later in this article.

But not before we are done discussing the harms of patching globals.

You can take my word for it, or you can wait for your users to tell you how meddling with globals breaks their applications and what an excruciating pain they had to suffer to discover that.

You have thrown a bear trap into the source code, and someone got caught. At best, it will cost them hours of their lives and sanity to track down and fix the issue. And it will cost you, too.

To take a more concrete example, imagine your framework patches the global fetch function to dedupe requests. That works really well for the immediate consumers of fetch—your users. Now, what happens if they rely on a third-party SDK that also has a deduplication mechanism built in? The two behaviors clash, causing all sorts of weird issues that are virtually impossible to debug.

That is your best-case scenario. The worst-case scenario is that the third party was also patching fetch and relying on that patch to exist, which your framework successfully undoes by introducing its own patch. A magnificent manifestation of hack-driven development at its finest.
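To make the clash tangible, here is a contrived sketch of that layering (the module names and behaviors are made up for illustration):

// Layer one (say, a framework) patches fetch to dedupe requests.
const frameworkFetch = globalThis.fetch.bind(globalThis)
const inflight = new Map()

globalThis.fetch = async (input, init) => {
  const key = new Request(input, init).url
  let pending = inflight.get(key)
  if (!pending) {
    pending = frameworkFetch(input, init).finally(() => inflight.delete(key))
    inflight.set(key, pending)
  }
  // Every caller gets a clone of the same response.
  return (await pending).clone()
}

// Layer two (say, an SDK loaded afterward) captures whatever fetch
// happened to be there at load time and installs its own patch on top.
// Module evaluation order now silently decides which behavior wins.
const sdkFetch = globalThis.fetch

globalThis.fetch = (input, init) => {
  // ...the SDK's own deduplication, retries, or telemetry.
  return sdkFetch(input, init)
}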

Harm #2: Predictability

There's hardly anything more satisfying in engineering than knowing how a thing behaves. If you use console.log(), you know it will print a message to the console. Now, it may be the browser's DevTools protocol that does it, or Node.js writing to process.stdout, but you know what's going to happen. You can predict the behavior.

There's hardly anything more irritating in engineering than having the same thing behave differently.

If you've been building for the web long enough, I'm sure you have your own barrage of oddities, quirks, and behaviors of questionable sanity corked in memories you never wish to revisit. Remember how display: flex used to work only in some browsers? What about negative lookbehind in Safari? And all those joyful hours spent begging IE to render your page correctly, if at all?

That's the chaos that happens when multiple agents implement the same thing differently.

It makes me appreciate the present even more. Cross-browser compatibility has never been higher. Even cross-environment compatibility is showing incredible progress, with Node.js shipping fetch, ReadableStream, WebSocket, and a dozen other browser APIs. That didn't happen by accident. We, as an industry, came to the conclusion that nobody wants to waste money and time on compatibility issues. We invested in specifications, we deprecated the old and gave way to the new.

And now we have come full circle. We are back at the stage where multiple agents implement the same thing differently.

Here's a request that will be cached and then revalidated after 10 seconds in Next.js:

fetch('https://kettanaito.com', {
  cache: 'force-cache',
  next: { revalidate: 10 },
})

Will this request behave the same in Svelte? In Remix? In plain JavaScript, maybe? Not really. The revalidating part happens to be a Next.js-specific API leaking through the global fetch. It's a custom API disguised as a standard one, which drives its predictability into the ground.
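What makes this especially sneaky is that a spec-compliant fetch simply ignores dictionary members it doesn't know (that's how Web IDL dictionaries behave). Run the same call outside of Next.js, and the caching intent is silently dropped rather than loudly rejected:

// In a plain runtime, "next" is not a member of RequestInit,
// so fetch discards it without a warning. No error, no revalidation.
fetch('https://kettanaito.com', {
  cache: 'force-cache',
  next: { revalidate: 10 },
})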

I'm intentionally not giving examples of the standard behaviors of fetch working differently across runtimes. Those exist too, but there is also an effort to bridge the gaps and improve specification-compliance. Patching, on the other hand, pulls the rope in the opposite direction entirely.

Harm #3: Learning

Software that disregards the learning experience it creates is destined to fail.

As a library author, I know firsthand how impactful the decisions I make can be for my users. There's already a lot a good API must accomplish: achieve the task at hand, provide an intuitive, versatile means to do so, align with the established philosophy and best practices, be extendable, maintainable, and transferable. But there's one more thing, the most important thing it must do.

An API must educate.

A good tool teaches you the concept.
A bad tool teaches you itself.

Every time a user reaches for your tool, they have a problem to solve. And the solution you provide them is always twofold: the immediate one and the lasting one.

For lack of a better example, I will shamelessly use one from my own experience. I spent the last year making MSW more specification-compliant. The goal was to lean on the standard Fetch API so developers would interface with it more often. It was a challenge and a limitation I imposed on myself, knowing that the gains would be tremendous.

See, when developers come to MSW, they want to intercept requests and mock responses. They could do all of that before, but the API was entirely contrived and specific to the library:

rest.get('/user', (req, res, ctx) => {
  return res(ctx.json({ name: 'John' }))
})

This does achieve the goal. This does intercept the request and mock its response. It has that "immediate" solution right here, ready to be used. But boy, did it lack any lasting impact, because it fundamentally failed to educate.

Does this API teach you how requests are handled on the web? Does it let you explore what responses are, how to construct them? No, not really. Does it utilize the knowledge you already have to make your experience better? Nae.

Does it make you a better engineer?
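For contrast, here is roughly what the same handler looks like in MSW today, written against the standard Request and Response primitives (the http and HttpResponse APIs):

import { http, HttpResponse } from 'msw'

export const handlers = [
  http.get('/user', () => {
    // The mocked response is constructed the same way you would
    // construct any Response on the web.
    return HttpResponse.json({ name: 'John' })
  }),
]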

Anyway, let's return to our muttons. What does this example teach me about caching?

fetch('https://kettanaito.com', {
  cache: 'force-cache',
  next: { revalidate: 10 },
})

Well, for one, that requests can be cached and that there's a standard cache property that controls how the browser caches that request. So far so good! But that's the standard teaching me, not Next.js.

The next.revalidate part is entirely contrived. Even if I looked it up in the Next.js docs, I wouldn't know what exactly it abstracts over. Does it control cache revalidation or does it do something else entirely? It certainly doesn't teach me about the max-age request directive of the standard Cache-Control HTTP header.

But most importantly, can I write the same logic using plain fetch?

fetch('https://kettanaito.com', {
  cache: 'force-cache',
  headers: {
    'Cache-Control': 'max-age=10',
  },
})

I genuinely didn't know. That's why I reached out to Lee for help. He kindly explained to me that next.revalidate focuses on Incremental Static Regeneration (ISR), not just caching. So no, I cannot translate it to a plain fetch call. Seeing how tightly it integrates with Next.js, I believe it belongs in RequestInit even less.

As powerful as next.revalidate is, I wish it taught me more about what exactly it does so I would get better at understanding caching and ISR. Ideally, while also relying on the existing standards that I can apply outside of Next.js too.

Harm #4: Lock-in

The more sugar your framework sprinkles on top of standard APIs, the harder it is to move away from that framework.

I would hate to be an engineer who has to migrate this from Bun:

fetch('https://redd.one', {
  proxy: 'https://kettanaito.com',
})

Because I have no idea what Bun is doing here. I know what it's supposed to do from the docs, but in reality there are often dozens of things happening under the hood. There's no specification for how proxy behaves. Now I have to match all of its behaviors while migrating, because each oddity missed is a new bug in my app waiting to happen.
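For instance, if I were migrating that call to Node.js, the closest documented equivalent I know of is undici's ProxyAgent, but whether it replicates every nuance of Bun's proxy is anyone's guess:

import { fetch, ProxyAgent } from 'undici'

// undici's fetch accepts a dispatcher option, which is how
// proxying is expressed explicitly in Node.js.
await fetch('https://redd.one', {
  dispatcher: new ProxyAgent('https://kettanaito.com'),
})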

You took a thing you don't own, you modified it on the premise of convenience, and now it doesn't work anywhere else but in your own little universe. If that is not the definition of vendor lock-in, I don't know what is.

Harm #5: Hurting progress

What appears to be pioneering innovation ends up being one big stick in the wheels of progress. A really, really big stick.

Here's a quick history lesson for you. In the hairy days of the Internet, there was a library called MooTools. It was a utility library to help developers survive the unfathomable chaos that was webdev some 20 years ago. Its goal was noble; its means, well, not so much.

See, MooTools worked by imbuing the prototypes of global JavaScript objects with extra functionality. In return, you got a ton of useful methods, like Array.prototype.getLast(), Function.prototype.attempt(), and Number.prototype.limit(). None of those existed in JavaScript proper but, then again, the language was in such a sorry state to begin with that nobody really cared.
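For illustration, extending natives in that style looks something like this (a rough sketch, not actual MooTools source):

// Imbue the global Array prototype with a method it never had.
Array.prototype.getLast = function () {
  return this[this.length - 1]
}

// Now every array everywhere has it, standard or not.
const lastItem = [1, 2, 3].getLast() // returns 3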

Until more and more sites got built with MooTools. And when the time came to define the next chapter of ECMAScript, to address all those pains and make the language better, the committee was faced with a conundrum. They wanted to introduce new prototype methods on primitives but couldn't, because half of the web relied on the same methods from MooTools. Shipping those methods as part of the language meant breaking half the Internet, which the committee found the wisdom not to do.

They had to bend the spec to reality. That is how we got String.prototype.includes() instead of the originally proposed String.prototype.contains(), with Array.prototype.includes() renamed along with it for consistency.

Let me make this abundantly clear:

The language itself couldn't become more convenient because developers were addicted to a convenience library that modified things it didn't own.

MooTools also spawned the SmooshGate controversy, where the proposed Array.prototype.flatten broke websites relying on MooTools and forced the TC39 folks to rename a perfectly valid method to flat.

MooTools has passed into history, but its impact on JavaScript remains. So the next time you choose to support short-sighted convenience, please do not complain if the WHATWG introduces proxyTarget and proxyOptions properties on the fetch API instead of a single proxy.

I admit, I don't know the details of why MooTools chose prototype patching as their design direction. I will give them the benefit of the doubt and assume they thought it the best way for their features to feel like an extension of the language instead of a vendor library. Perhaps they simply didn't know better, and there was no other example to learn from. I am thankful they have become one for the rest of us.

A truly fascinating part here is that none of these costs and harms would even be relevant if one approached convenience with a bit more thought and precaution. It's only proper that we look at alternatives to patching globals (I can't believe I had to write this sentence).

Alternatives

There's a widespread misconception that convenience either comes directly from the spec or comes at the cost of violating one. I really don't know where that stance comes from, so let me clarify:

Being against patching globals doesn't mean being against convenience.

I am all for convenience, and yet I hate it whenever frameworks patch globals.

But there is one thing I hate even more. Complaining without proposing solutions.

There are a ton of solutions to any API problem better than patching globals, and the quintessential part of all of them is being explicit. Abandon the magical; cast aside the pretense that you own fetch and that it is only through modifying it that you can achieve that golden DX you dream about. That DX is fundamentally and irreversibly flawed, and I've just given you five extremely detailed reasons why.

Let's take Bun's proxy API as an example and make it explicit:

import { proxy } from 'bun'

const proxiedRequest = proxy(proxyUrl, new Request(url, options))
await fetch(proxiedRequest)

With this change, the proxying functionality no longer appears out of thin air. It has to be imported from bun explicitly. Nobody will confuse it with standard JavaScript behavior. In fact, developers will be reminded on every import that it is Bun that's making proxying easier. Here's your developer-oriented marketing as a side effect of good API design!

But we can no longer shove it into fetch, so we have to be smarter about our proxy function. We want to proxy a request, right? So, maybe, let's accept any request instance as an argument and return a proxied request. It doesn't interfere with how a Request is constructed or how a fetch call is made, all while achieving the same proxying functionality. Get out of here.
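To sketch that out (everything below is hypothetical, down to the header the proxy server would read):

// A minimal sketch of an explicit proxy utility. It derives a new
// request targeting the proxy and records the original destination
// in a header, never mutating the request it was given.
function proxy(proxyUrl, request) {
  const headers = new Headers(request.headers)
  headers.set('x-forwarded-url', request.url)

  return new Request(proxyUrl, {
    method: request.method,
    headers,
    body: request.body,
    // "duplex" is required when forwarding a ReadableStream body.
    ...(request.body ? { duplex: 'half' } : {}),
  })
}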

As a result of being an explicit utility, our proxy function doesn't suffer any of the drawbacks inherent to patching globals. It's a function owned by the framework, which makes it easy to version, maintain, modify, and deprecate. It's predictable because you clearly see where it's coming from and what it does. Most importantly, it doesn't masquerade as native JavaScript behavior.

It's also extremely easy to learn: you go to Bun's documentation, see the function's call signature, and . . . Wait, what is request: Request? What, I have to learn how to construct requests on the web now? Damn you, Bun, I didn't ask you to make me smarter!

There's no lock-in by design, and if you wish to play chaotically good here, you can even publish proxy for other runtimes to use, boosting everybody's awareness of your project and perhaps even incentivizing it to become a standard. And, finally, you start to inspire and incite progress instead of pretending it has already happened and the others just haven't caught up yet.

Do you see how a simple change made this API better? I'll give you that it's likely not production-ready, because I spent the entirety of 10 seconds coming up with this example. But I bet that with a team of smart people and a week or two of discussion, it would become another small reason to switch over to Bun. Who knows, who knows.

Since the ball is already rolling, let's refactor the APIs from React and Next.js as well. This is going to be mostly boring because we will apply the same pattern to both, ridding them of the same problems and reaping the same benefits.

import { cache } from 'react'

// There's no reason to couple the data fetching
// function with React. By leaving it generic, it
// can be reused and tested with fewer dependencies.
async function getUser() {
  const response = await fetch('https://api.example.com/user')
  return response.json()
}

export async function UserProfile() {
  // Instead, make caching (request deduplication)
  // an explicit choice where we actually need it—
  // when fetching the data for our component.
  const profile = await cache(getUser)()

  return <p>Hello, {profile.name}!</p>
}
import { withCache } from 'next'

// A regular request instance.
const request = new Request('https://kettanaito.com')

// A regular fetch call that accepts a modified
// request instance that instructs Next.js how to
// cache this request.
fetch(withCache(request, { revalidate: 10 }))
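One way such a withCache utility could work without touching any globals is to associate the caching metadata with the request instance out of band, say in a WeakMap the framework consults later (entirely hypothetical):

// Hypothetical internals of withCache: the metadata lives alongside
// the request instead of being smuggled into RequestInit.
const cacheOptions = new WeakMap()

export function withCache(request, options) {
  cacheOptions.set(request, options)
  return request
}

// Somewhere in the framework's server runtime:
export function getCacheOptions(request) {
  return cacheOptions.get(request)
}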

It speaks volumes that both React and Next.js are already in the process of migrating away from patching fetch by introducing their cache and unstable_cache APIs, respectively.

The suggestions I'm proposing are also less invasive, as they operate on existing globals and behaviors as opposed to modifying them.

Common counter-arguments

The amount of backlash toward the criticism of patching globals online made me realize most people lack a fundamental understanding of the matter. Below, I will address the most common counter-arguments against my concerns, just in case the article failed to convince you that meddling with globals is a terrible idea.

But who cares?

Not enough of us, sadly. Putting a jelly sandwich on a bus seat won't affect the entire bus, but does that make it a good thing to do?

But it's a better developer experience!

If all the extensive explanations in this article failed to convince you that patching globals is never a good developer experience, I fear nothing will.

Bun is a custom runtime, there is no patching!

Bun is built on a promise of Node.js compatibility. Node.js implements fetch according to the WHATWG Fetch standard, and so Bun does too. The standard defines fetch in its entirety. By adding custom properties, Bun is patching fetch, deviating from the standard, and introducing all the issues I've talked about above.

But Node.js does weird stuff too!

Of course it does! It's a project with 15 years of history behind it. I swim in that weird stuff on a daily basis at work. We should learn from its oddities and lead by example, not use them as an excuse to implement harmful APIs.

But fetch doesn't exist on the server!

The global fetch API has been available in Node.js since 2022 (introduced as experimental in v17 and enabled by default in v18). Undici is the official implementation of that API and a dependency of Node.js. People have been using fetch on the server for years now.

But these frameworks know better!

They likely do! But in some cases, it takes them time and the feedback from the community to arrive at that "better".

As I've mentioned before, React and Next.js are moving away from patching fetch in favor of more explicit APIs. When I asked Lee Robinson to provide more context behind that switch, he shared with me the following:

In theory, community packages could publish their own code that used fetch, and set these caching policies on their own. And we started to do that a bit, but it was very opaque and hard to tell why something was cached. So we decided we want to be more explicit. — Lee Robinson, VP of Product at Vercel.

Perhaps we will even see Bun reconsider their direction of patching globals, which would immediately make it a more mature runtime, at least in my eyes.

The bottom line is, no amount of experience, skill, or credibility renders you immune to making a mistake. The only thing that matters is having the strength to admit one and the wisdom to resolve it.


Conclusion

I hope this article was able to shed some light on the dangers of the apparently "controversial" practice of patching globals. It's a footgun we have already fired, and it seems we needed a due reminder that the feet we have are just two.

Huge thanks to Lee Robinson and Jordan Harband for proofreading this piece! Your feedback helped me ensure my examples were accurate and my tone welcoming. I'm truly honored to have had your eyes on this one.

Resources

There is a lot of great writing on this topic. I encourage you to read these:

You are also most welcome to share this article any time you notice a framework is reaching toward patching a global. If it helps to prevent that, I know I will be glad.