It seems like Web Components are always just on the cusp of finally catching on. They’re like the year of Linux on the desktop for frontend nerds. I keep reading the latest articles about Web Components as they bubble up on my social media feeds, hoping that there’s something I missed and that they now have more substance, but I always end up disappointed. I wrote up my thoughts on Web Components back in 2020, and it doesn’t feel like the conversation has progressed in all that time. It’s like an Eternal September, with people constantly going back to the original promise of Web Components even though the reality has long since fallen short of it.
## What went wrong
To TL;DR my earlier piece:
The pitch is “get semantic elements from across the web!” But those are the wrong problems to solve.

- Custom elements aren’t “semantic,” because search engines don’t know what they mean.
- `customElements.define` is an extremely clunky API.
- `<slot>`s are fine, but not a sufficient substitute for a real templating language.
- Shadow DOM is also very clunky and solves an extremely niche problem.
Okay, those are the basics, can we all just agree on that?
Yes, there are circumstances where using the `customElements` API is a convenient way to namespace things and ensure that lazy-loaded elements have their constructors called, but they are mostly useful for things like rendering third-party embeds in a rich text content well. That doesn’t add up to the name “Web Components” referring to a meaningful thing.
## Browser makers to the rescue
The good news is that even though “Web Components” aren’t really A Thing, the browser makers do eventually solve real problems the right way, but sometimes it takes them a while.
As I wrote about Shadow DOM at the time:

> The fundamental thing that Shadow DOM does is to allow an element of the page to have its own CSS reset. There’s no reason we couldn’t have that as part of CSS itself instead (perhaps by changing the rules for `@import` to allow it to be nested instead of only at the top level).
As it turns out, that is exactly what happened, so there won’t really be much reason to use Shadow DOM anymore once browser support for donut scoping becomes sufficiently widespread. Use CSS `@scope` selectors and import layers to create a CSS reset scoped just to your component, and you get all the benefits of Shadow DOM with none of the goth language about “light DOM styles piercing the shadow root” and whatnot.
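As a rough sketch of what that can look like (the class names here are made up for illustration, and this assumes a browser that supports `@scope` with a `to` clause for donut scoping):

```css
/* Component-scoped reset using @scope "donut" scoping: styles apply
   inside .comment-widget but stop at .comment-body, so embedded user
   content keeps the page's own styles. */
@scope (.comment-widget) to (.comment-body) {
  :scope {
    all: initial;          /* reset inherited page styles at the boundary */
    font-family: system-ui;
  }
  p { margin: 0.5em 0; }   /* component styles can't leak out of scope */
}
```

Unlike a shadow root, this needs no JavaScript and no special DOM structure; it’s just cascade rules.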
So, what should browser makers do instead of pouring more effort into Web Components per se?
I think everyone agrees that the best example of paving the desire lines in the history of the DOM is the relationship between jQuery and `querySelectorAll`. In 2003, Simon Willison put together a demo version of `document.getElementsBySelector`. jQuery took the idea and made it into the once-ubiquitous `$()` API. The browser makers took that and turned it into `document.querySelectorAll`, which made it faster and universally available without linking to jQuery.
This is the ideal scenario for the evolution of the web: developers worked collectively over a series of years to find a solution to a recurring problem, and then browser makers took that solution and made it native so that it would be universally available and more performant. By contrast, the Web Components specifications were designed top-down and standardized before developers had settled on the patterns they were meant to capture.
Instead of trying to improve Web Components as they exist now, what the browser makers should focus on is finding new areas where they can standardize the things developers are already doing. I have three suggestions for what they should look at.
- Using reactivity (e.g. signals) for DOM updates.
- Using cloned templates for DOM rendering.
- Using modern web APIs like `Proxy`, which make all of the above easier.
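On that last point, the reason `Proxy` matters here is that it lets a framework watch mutations on plain objects and schedule updates in response. A minimal sketch (the `observe` helper is made up for illustration, not any framework’s real API):

```javascript
// Sketch: wrap plain state in a Proxy so mutations can trigger updates.
function observe(target, onChange) {
  return new Proxy(target, {
    set(obj, prop, value) {
      const changed = obj[prop] !== value;
      obj[prop] = value;
      if (changed) onChange(prop, value); // e.g. schedule a re-render
      return true; // signal to the runtime that the set succeeded
    },
  });
}

// Usage: mutating the proxied object fires the callback.
const updates = [];
const todo = observe({ done: false }, (prop, value) => {
  updates.push(`${prop}=${value}`);
});
todo.done = true;
// updates is now ["done=true"]
```

This is roughly the trick behind Vue’s reactive objects, minus batching, nested objects, and dependency tracking.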
To be honest, I started drafting this blog post before Lawson’s article came out, so I was happy to see that he highlighted some of the same aspects of modern frameworks that I was planning to write about here. The `Proxy` API is fine and doesn’t especially need to be improved by browser makers. But there is a lot of room for improving reactivity/signals and DOM rendering. Let’s talk about reactivity first.
The basic idea of reactivity is that if you have a simple component like a todo item, you want to know when `todo.done` goes from `false` to `true` so that you can trigger changes to other data like `todos.count`, which will go from N to N+1. In my 2020 post, I called this “the data lifecycle.”
This is clearly a problem that has been solved independently by multiple frameworks. It’s time for the browser to tackle it too.
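For illustration, here is a minimal, hypothetical signal/effect sketch of the pattern those frameworks converge on; the API is made up, and it glosses over scheduling, cleanup, and diamond dependencies:

```javascript
// Minimal signal/effect sketch (hypothetical API, not any specific library).
let activeEffect = null;

function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() {
      if (activeEffect) subscribers.add(activeEffect); // track the dependency
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn()); // re-run dependent effects
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // run once so the signals it reads can register it
  activeEffect = null;
}

// Usage: derive todos.count-style data from todo.done.
const done = signal(false);
let count = 0;
effect(() => {
  if (done.get()) count += 1;
});
done.set(true); // count goes from 0 to 1
```

The point is that the dependency graph is discovered automatically by running the effect, which is exactly the part that every framework currently reimplements for itself.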
## Morph DOM / Virtual DOM
Setting `node.innerHTML = newHTML` is reasonably performant. The problems come if your nodes have event handlers (which can be partially mitigated by delegating to handlers higher in the DOM) or state which needs to be preserved across renderings. For example, if you have an input element, setting `innerHTML` on one of its parent elements will wipe out the state of the input and lose any text entered into the box. All systems for templating an interactive page need a way to prevent these kinds of problems on each render and to overwrite only the state that is supposed to be overwritten.
React famously works around this problem by using a virtual DOM that is then reconciled with the browser DOM, but even server-focused frameworks like htmx need a solution for swapping out DOM nodes without wiping out current element state. Another solution in this vein is morphdom, which is eight years old. More recently, Caleb Porzio has been working on @alpinejs/morph as a solution to the problem, and he occasionally podcasts about the difficult edge cases he runs into. Svelte works around the problem of reconciling virtual DOMs by pushing as much of the reconciliation process as possible into its compiler, but even for Svelte there are limits to what can be done in advance, and some things must still happen on the client. In any event, while compiler-side solutions are good, it is also good to have solutions that work without a build step.
However you tackle it, creating a diff between two DOM trees can be an extremely complex process, due to the difficulty of figuring out whether a node has been entirely replaced or just moved to another place in the tree. There are lots of corner cases to consider and tradeoffs to weigh. This is another area ripe for consolidation by browser makers. If the browser had an API capable of reconciling two DOM trees, it could be faster and more efficient than userland implementations. Having this as a browser API would let frameworks stop trying to differentiate themselves on rendering performance and instead compete on the convenience of their APIs and the depth of their ecosystems. It’s another well-trodden field that is ready to be paved.
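To make the shape of the problem concrete, here is a deliberately simplified morphing sketch that operates on plain objects standing in for DOM nodes (a made-up `{ tag, attrs, children }` structure, not a real DOM algorithm; real morphers also use keys to distinguish moves from replacements):

```javascript
// Simplified morph: update oldNode in place to match newNode, preserving
// node identity where the tags agree.
function morph(oldNode, newNode) {
  if (oldNode.tag !== newNode.tag) {
    return { ...newNode }; // different element: replace wholesale
  }
  // Same tag: keep the old node's identity (on a real page, this is what
  // preserves event listeners and input state), but refresh attributes.
  oldNode.attrs = { ...newNode.attrs };
  // Recurse positionally over children; this is where keyed matching
  // would go in a real implementation.
  const len = Math.max(oldNode.children.length, newNode.children.length);
  const merged = [];
  for (let i = 0; i < len; i++) {
    const o = oldNode.children[i];
    const n = newNode.children[i];
    if (o && n) merged.push(morph(o, n));
    else if (n) merged.push({ ...n }); // appended child
    // else: old child removed
  }
  oldNode.children = merged;
  return oldNode;
}

// Usage: same tag, so the original node object survives the update.
const prev = { tag: "input", attrs: { value: "typed" }, children: [] };
const next = { tag: "input", attrs: { value: "typed", class: "dirty" }, children: [] };
console.assert(morph(prev, next) === prev);
```

Even this toy version shows why the hard part is deciding *which* old node corresponds to which new one; everything after that decision is bookkeeping.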
Let’s talk about another common developer need not mentioned in Lawson’s article and a technology that I think has fallen off the radars of a lot of developers because it is seen as being a legacy technique.
## Wrapping it up
I hope you all understand what I’m arguing for. The dream of Web Components is a good one, but we need new solutions to match the problems encountered by contemporary web developers, not just a continued retreading of a path laid out a decade ago.
I asked Claude to write a stunning conclusion to this blog post, and here’s what it came up with:
> The promise of “Web Components” capturing imaginations across the web has fallen short. Instead of trying to retrofit new APIs onto old problems, we should look to the future, following the desire lines that developers are already tracing out.
>
> Reactivity, virtual DOM reconciliation, and self-sizing iframes are all areas where developers have put in hard work over years to find solutions. Now it’s time for browser makers to pave those roads, taking the best of what developers have created and making it faster, more efficient, and universally available.
>
> This is how real progress happens on the web: not starting from lofty top-down ideas, but from real solutions emerging organically to meet actual needs. If browser makers can shift to enabling what developers are already doing, we’ll wind up with better tools for everyone. The solutions won’t always be sexy, but they’ll be grounded in practicality.
>
> So let’s learn from the past, meet the present where it lives, and build the web we need, not the one we imagine. The pieces are already coming together if we have the vision to see them.
On that note, let’s keep blazing new trails for the future of the web, before the coming AI-powered coding robots inevitably put us all out of work.
P.S. This is the first blog post I’ve published under the name Carlana. See here for details.