It seems like Web Components are always just on the cusp of finally catching on. They’re like the year of Linux on the desktop for frontend nerds. I keep reading the latest articles about Web Components as they bubble up on my social media feeds, just hoping there’s something I missed and that they’ve finally gained some substance, but I always end up disappointed. I wrote up my thoughts on Web Components back in 2020, and it doesn’t feel like the conversation has progressed in all that time. It’s like an Eternal September, with people constantly returning to the original promise of Web Components even though the reality has long since fallen short of it.

What went wrong

To TL;DR my earlier piece:

The pitch is “get semantic elements from across the web!” But those are the wrong problems to try to solve.

  • Custom elements aren’t “semantic” because search engines don’t know what they mean.
  • “From across the web” is always going to be worse for end user performance than using one coherent, progressive enhancement compatible JavaScript framework per site.
  • customElements.define is an extremely clunky API.
  • <template> and <slot> are fine, but not a sufficient substitute for a real templating language.
  • Shadow DOM is also very clunky and solves an extremely niche problem.

Okay, those are the basics, can we all just agree on that?

Yes, there are circumstances where the custom elements API is a convenient way to namespace things and ensure that lazy loaded elements have their constructors called, but it’s mostly useful for things like rendering third party embeds in a rich text content well. That doesn’t add up to the name “Web Components” referring to a meaningful thing.
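To make that embed case concrete, here’s roughly the sort of thing I mean. The tag name and markup are placeholders, not a real service:

```js
// A sketch of the embed use case; "fancy-embed" is a made-up tag name.
// A CMS can drop this tag anywhere in a rich text content well, and
// connectedCallback runs whenever the element lands in the document,
// even if it was lazy loaded long after the initial render.
class FancyEmbed extends HTMLElement {
  connectedCallback() {
    const iframe = document.createElement("iframe");
    iframe.src = this.getAttribute("src");
    iframe.loading = "lazy";
    this.replaceChildren(iframe);
  }
}

// The registry also acts as a namespace: defining the same name twice throws.
customElements.define("fancy-embed", FancyEmbed);
```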

If you put the name “Web Components” on your collection of JavaScript widgets powered by a lightweight framework, more people will check it out than if you had called it something else. In the end, though, it’s just a marketing name for a couple of clunky DOM APIs and a dream that we all wish really existed. The dream is that you could just mix and match widgets regardless of what framework each widget is written in, but the truth is the only way to do that is by paying the full price of including a framework for each web component you include on the page. It’s never going to be a practical choice for sites where end user performance matters.

Browser makers to the rescue

The good news is that even though “Web Components” aren’t really A Thing, the browser makers do eventually solve real problems the right way, even if it sometimes takes them a while.

As I wrote about Shadow DOM at the time:

The fundamental thing that Shadow DOM does is to allow an element of the page to have its own CSS reset. There’s no reason we couldn’t have that as part of CSS itself instead (perhaps by changing the rules for @import to allow it to be nested instead of only at the top level).

As it turns out, that is exactly what happened, so there won’t really be much reason to use Shadow DOM anymore once browser support for donut scoping becomes sufficiently widespread. Use CSS scope selectors and import layers to create a CSS reset scoped just to your component, and you get all the benefits of Shadow DOM with none of the goth language about “light DOM styles piercing the shadow root” and whatnot.
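As a sketch of what that looks like (assuming @scope support in the browsers you target; the file and class names here are placeholders):

```css
/* Put the component's reset in its own cascade layer so deliberate page
   styles can still override it when they need to. */
@import url("my-widget-reset.css") layer(my-widget-reset);

/* Donut scope: these rules apply inside .my-widget but stop at
   .my-widget__content, so content dropped into that hole keeps the page's
   own styles, much like slotted light DOM inside a shadow root. */
@scope (.my-widget) to (.my-widget__content) {
  :scope {
    all: initial;
    font-family: system-ui, sans-serif;
  }
}
```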

So, what should browser makers do instead of pouring more effort into Web Components per se?

I think everyone agrees that the best example of paving the desire lines in the history of the DOM is the relationship between jQuery and querySelectorAll. In 2003, Simon Willison put together a demo version of document.getElementsBySelector. jQuery took the idea and made it into the once ubiquitous $() API. The browser makers took that and turned it into document.querySelectorAll, which made it faster and universally available without linking to jQuery.

This is the ideal scenario for the evolution of the web: developers worked collectively over a series of years to find a solution to a recurring problem, and then browser makers took that solution and made it native so that it would be universally available and more performant. By contrast, the custom elements and Shadow DOM APIs date from about a decade ago and predate many of the techniques modern JavaScript frameworks rely on. As a result, they ended up solving the wrong problems, simply due to the inexperience we as an industry had at the time with building large JavaScript application frameworks.

Instead of trying to improve Web Components as they exist now, what the browser makers should focus on is finding new areas where we can standardize the things developers are already doing. I have three suggestions for where they should look.

Reactivity

Recently, Nolan Lawson wrote Let’s learn how modern JavaScript frameworks work by building one. In the article, Lawson defines “modern” frameworks as having three components:

  1. Using reactivity (e.g. signals) for DOM updates.
  2. Using cloned templates for DOM rendering.
  3. Using modern web APIs like <template> and Proxy, which make all of the above easier.

To be honest, I started drafting this blog post before Lawson’s came out, so I was happy to see that he highlighted some of the same aspects of modern frameworks that I was planning to write about here. The <template> and Proxy APIs are fine, and don’t especially need to be improved by browser makers. But there is a lot of room for improving reactivity/signals and DOM rendering. Let’s talk about reactivity first.

The basic idea of reactivity is that if you have a simple component like a todo item, you want to know when todo.done goes from false to true so that you can trigger changes to other data like todos.count which will go from N to N+1. In my 2020 post, I called this “the data lifecycle.”
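To make that concrete, here is a deliberately tiny sketch of the idea, not any particular library’s API:

```js
// A toy signal: reading it inside an effect subscribes the effect,
// and writing it re-runs every subscriber.
let currentEffect = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      if (currentEffect) subscribers.add(currentEffect);
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn());
    },
  };
}

function effect(fn) {
  currentEffect = fn;
  fn(); // run once so the signals it reads can record it
  currentEffect = null;
}

// The todo example from above: flipping done updates the derived count.
const todos = [{ title: "write post", done: signal(false) }];
effect(() => {
  const count = todos.filter((t) => t.done.get()).length;
  console.log(`${count} done`);
});
todos[0].done.set(true); // logs "1 done"
```

Real systems also have to handle computed values, batching, cleanup, and dependency cycles, which is where the size and complexity come from.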

In Lawson’s article, he creates a quick and dirty reactivity system with only about 50 lines of JavaScript, but more realistic reactivity systems like observable-membrane, @vue/reactivity, and @preact/signals are significantly larger and more complex.

Shifting the core of these systems out of JavaScript libraries and into the browser would allow for significant optimization of the tree of dependencies between reactive data elements. It takes a lot of code to ship a performance-optimized algorithm for marking nodes as dirty. For a simple todo app with only a handful of reactive elements, the complexity of the reactivity engine easily swamps the complexity of the application itself. But if the browser tackled this problem directly, this code could be written in C++ or Rust and made maximally efficient, because it wouldn’t need to be packaged up, sent to clients, and interpreted from scratch on every page load. The cost could be paid once by a browser team and the benefits enjoyed by web developers and end users everywhere.

This is clearly a problem that has been solved independently by multiple frameworks. It’s time for the browser to tackle it too.

Update: Dave Rupert has pointed out on Mastodon that there is a W3C proposal to add reactive signals to JavaScript that is in the process of being forwarded to TC39. I wish them luck in the proposal process.

Morph DOM / Virtual DOM

Another aspect of modern frameworks that is begging for optimization is DOM rendering. In Lawson’s article, he spends a significant amount of time explaining how to translate proxy calls into efficient DOM updates. For my part, I don’t think it would make sense for the browser makers to enter the templating wars. The existing template literal syntax in JavaScript is sufficient for developers to create templating systems that make sense to them, or they can keep using solutions like JSX to make templating a compile time feature of their framework. What is needed is an efficient way of merging the HTML produced by the template system with the existing browser DOM of a page.

If you just have some simple HTML to replace, then writing node.innerHTML = newHTML; is reasonably performant. The problems come when your nodes have event handlers (which can be partially mitigated by delegating to handlers higher in the DOM) or state that needs to be preserved across renderings. For example, if you have an input element, setting node.innerHTML on one of its parent elements will wipe out the state of the input and lose any text entered into the box. Every system for templating an interactive page needs a way to prevent these kinds of problems on each render and to overwrite only the state that is supposed to be overwritten.
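Here is a concrete version of the input problem, with made-up markup:

```js
// Render a todo list next to an input by overwriting a parent's innerHTML.
const app = document.querySelector("#app");
app.innerHTML = `
  <input placeholder="Add a todo">
  <ul><li>write post</li></ul>
`;

// The user starts typing into the input, some state changes, and we
// re-render the same way...
app.innerHTML = `
  <input placeholder="Add a todo">
  <ul><li>write post</li><li>publish post</li></ul>
`;
// ...and the half-typed text, focus, and cursor position are all gone,
// because the old <input> node was destroyed and replaced with a new one.
```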

React famously works around this problem by using a virtual DOM that is then reconciled with the browser DOM, but even server focused frameworks like HTMX need a solution for swapping out DOM nodes without wiping out the current element state. Another solution in this vein is morphdom, which is eight years old. More recently, Caleb Porzio has been working on @alpinejs/morph as a solution to the problem, and he occasionally podcasts about the difficult edge cases he runs into. Svelte sidesteps virtual DOM reconciliation by pushing as much of the work as possible into its compiler, but even for Svelte there are limits on what can be done in advance, and some things still have to happen on the client. In any event, while compiler side solutions are good, it is also good to have solutions that work without a build step.
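To make the morphing approach concrete, morphdom’s basic usage looks roughly like this, if I’m remembering its README correctly:

```js
import morphdom from "morphdom";

// Instead of overwriting innerHTML, morph the existing tree toward the new
// markup. Matching nodes are patched in place, so the <input> from the
// example above keeps its value and focus while the list updates around it.
morphdom(
  document.querySelector("#app"),
  `<div id="app">
    <input placeholder="Add a todo">
    <ul><li>write post</li><li>publish post</li></ul>
  </div>`,
  {
    // A common optimization: skip subtrees that haven't changed at all.
    onBeforeElUpdated(fromEl, toEl) {
      return !fromEl.isEqualNode(toEl);
    },
  }
);
```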

However you tackle it, creating a diff between two DOM trees can be an extremely complex process because of the difficulty of figuring out whether a node has been entirely replaced or just moved to another place in the tree. There are lots of corner cases to consider and tradeoffs to weigh. This is another area ripe for consolidation by the browser makers. If the browser had a native API for reconciling two DOM trees, it could be faster and more efficient than anything a library can do in JavaScript. Having this as a browser API would also let frameworks stop trying to differentiate themselves on rendering performance and instead compete on the convenience of their APIs and the depth of their ecosystems. It’s another well-trodden field that is ready to be paved.
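Purely as a strawman, nothing like this exists today, but I imagine the ergonomics could be as simple as:

```js
// Hypothetical API, not a real one: the browser patches the live tree to
// match the new markup, preserving focus, selection, and input state
// wherever the old and new nodes line up.
document.querySelector("#app").reconcile(`
  <input placeholder="Add a todo">
  <ul><li>write post</li><li>publish post</li></ul>
`);
```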

Self-sizing iframes

Let’s talk about another common developer need that isn’t mentioned in Lawson’s article, involving a technology that I think has fallen off a lot of developers’ radars because it’s seen as a legacy technique.

The humble iframe may date back to Internet Explorer 4, but it is actually more powerful than the Shadow DOM because it offers stronger style isolation and some real security guarantees. As a result, while iframes have basically no role in modern JavaScript frameworks, they are still extremely common in my field of online news publishing. If you have a content well and you want to be able to put some kind of widget in the middle of it and have it work across CMSes and future redesigns, you use an iframe. Examples include everything from charts by Datawrapper and Flourish to videos from YouTube and Vimeo.

The big problem with iframes is that there isn’t a way in CSS to tell an iframe to take on the height of its content. There are good reasons for this. For example, if a host page could learn whether you’re logged into your banking site just by adding an iframe and checking whether it’s the height of the logged in page or the logged out page, that would be bad. As a result, the best way to make a responsively sized iframe is to use JavaScript to pass a message from the contained page to the host page telling the host the size of the contained page, so the host can resize the iframe appropriately. There are various libraries people use to do this, like NPR’s Pym.js and SideChain, or @newswire/frames.
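The pattern those libraries implement is simple enough to sketch by hand; handling multiple embeds, origin checks, and odd browsers is where they earn their keep. The data-embed attribute and message name below are made up:

```js
// Inside the framed page: report the rendered height whenever it changes.
new ResizeObserver(() => {
  window.parent.postMessage(
    { type: "embed-height", height: document.documentElement.scrollHeight },
    "*" // a real embed should name the host origin instead of "*"
  );
}).observe(document.documentElement);

// On the host page: resize whichever iframe the message came from.
window.addEventListener("message", (event) => {
  if (event.data?.type !== "embed-height") return;
  for (const frame of document.querySelectorAll("iframe[data-embed]")) {
    if (frame.contentWindow === event.source) {
      frame.style.height = `${event.data.height}px`;
    }
  }
});
```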

In spite of the security concerns, I think a native solution is workable: the framing and framed pages could use an X-Frame-Options header or a Content Security Policy directive to tell the browser that both sites want to cooperate on allowing a dynamic frame height. Moving this out of JavaScript and into the browser would fix a lot of little glitches that come from polling for window size changes or from contained pages that use vh units, and it would make life easier for anyone who has to create or use an embed. It would also make it possible to use responsive iframes in contexts where JavaScript isn’t allowed or needs to be heavily restricted.

Wrapping it up

I hope you all understand what I’m arguing for. The dream of Web Components is a good one, but we need new solutions to match the problems encountered by contemporary web developers, not just a continued retreading of a path laid out a decade ago.

I asked Claude to write a stunning conclusion to this blog post, and here’s what it came up with:

The promise of “Web Components” capturing imaginations across the web has fallen short. Instead of trying to retrofit new APIs onto old problems, we should look to the future - following the desire lines that developers are already tracing out.

Reactivity, virtual DOM reconciliation, and self-sizing iframes are all areas where developers have put in hard work over years to find solutions. Now it’s time for browser makers to pave those roads - taking the best of what developers have created and making it faster, more efficient, and universally available.

This is how real progress happens on the web - not starting from lofty top-down ideas, but from real solutions emerging organically to meet actual needs. If browser makers can shift to enabling what developers are already doing, we’ll wind up with better tools for everyone. The solutions won’t always be sexy, but they’ll be grounded in practicality.

So let’s learn from the past, meet the present where it lives, and build the web we need - not the one we imagine. The pieces are already coming together if we have the vision to see them.

On that note, let’s keep blazing new trails for the future of the web, before the coming AI-powered coding robots inevitably put us all out of work.

P.S. This is the first blog post I’ve published under the name Carlana. See here for details.