11 Jan 2026 - tsp
Last update 11 Jan 2026
11 mins
When many of us started building web applications in the 1990s and early 2000s, the “dynamic website” was the holy grail. Every request ran through a Perl or, later, a PHP script, dozens of database queries were fired off as soon as we discovered MySQL, and the page was rebuilt on the fly. The promise was seductive: millisecond-accurate content, fully dynamic pages, and the feeling that your website was “alive”, serving fresh content on every reload - sometimes tens of page updates within a single hour.
Two decades later, most of us have learned that this approach is a very bad idea - not out of nostalgia for simpler times, but because it fundamentally misuses resources and scales poorly. It also ignores the basic design principles behind HTTP and the many features that have been carefully crafted into the protocol.

Dynamic generation on every request means every read triggers full database activity. This is wasteful, since in nearly all cases content changes far less often than it is read. Consider a small business or an association website: the event calendar might change once per week, yet the page could be read hundreds of times per day. Why hammer your database for every read?
Typical read/write ratios are heavily skewed toward reads - easily 1000:1, often far more. For such scenarios it makes much more sense to update the site when writing and to serve static, pre-generated pages for reads. An Apache httpd or nginx server can handle tens of thousands of static requests per second on cheap commodity hardware. By contrast, even 10 dynamic requests per second can start to saturate a weak CMS, consuming CPU and database I/O (this is in fact what happened to the server this page is hosted on - not because of this static page, but because of another website running on the same hardware, which partly motivated this article).

Once your project grows, central dynamic generation becomes not just inefficient but impossible to scale. Modern systems are built on the principles of:
Thus, aiming for central, synchronous consistency is misguided. Designing for eventual consistency and static delivery of precomputed content is the path to resilience. In practice, the illusion of realtime content can be offered through event brokers or pub/sub systems, where clients subscribe to updates as they propagate. These updates arrive without strong guarantees of ordering or consistency, but that is usually good enough in practice - allowing static delivery and eventual propagation to coexist with a responsive user experience.
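To make the subscribe-and-propagate idea concrete, here is a minimal browser-side sketch. The /events Server-Sent-Events endpoint and the page-updated event name are assumptions for illustration - any pub/sub bridge (SSE, WebSockets, MQTT over WebSockets) can play the same role in front of an event broker.

```typescript
// Browser-side sketch: subscribe to content-update notifications over Server-Sent
// Events. The /events endpoint and the "page-updated" event name are hypothetical.
const source = new EventSource("/events");

source.addEventListener("page-updated", (event) => {
  // Notifications carry no ordering or consistency guarantees - we only learn
  // that some path changed and re-fetch the static file behind it.
  const update = JSON.parse((event as MessageEvent).data) as { path: string };
  if (update.path !== window.location.pathname) {
    return;
  }
  // Bypass the local cache for this single request; the server still hands out a
  // pre-generated static page, we just pick up the newer copy.
  fetch(update.path, { cache: "reload" })
    .then((response) => response.text())
    .then((html) => {
      const next = new DOMParser().parseFromString(html, "text/html").querySelector("main");
      const current = document.querySelector("main");
      if (next && current) {
        // Swap only the article body; layout and already loaded scripts stay untouched.
        current.innerHTML = next.innerHTML;
      }
    });
});
```

If the event stream drops, nothing breaks: readers simply keep the last static version until the connection is re-established or the cache expires.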

Caching is the practical compromise between freshness and scalability. For most content - and most content is quasi-static - the exact moment of update doesn’t matter:
ETag and Cache-Control headers allow you to exploit this in combination with proxy servers, CDNs and clients that use conditional requests such as If-Modified-Since or If-None-Match. A 30-minute TTL can cut load by orders of magnitude, and for many static cases you can cache for days or weeks. Static generation takes this even further: the write path updates a file, and the delivery path serves that file directly. These are completely independent pipelines, giving you a clean separation of concerns.
During development we are often tempted to set very low timeouts just to always see the latest changes, but this is not how production should work - one can always actively reload content and flush caches when needed. Experienced developers know that a fully consistent view of scripts, static page files and layout files all expiring at the same moment is rarely needed if the development workflow is set up sensibly; it is simply unrealistic. Good developers handle such asynchrony gracefully, while believing in seamless, synchronized reloads of the entire system is a mark of amateur design.
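As a concrete illustration of these headers, here is a minimal sketch of a static file handler in TypeScript on Node.js. The directory layout, the TTL value and the validator scheme are assumptions for the example - in production an nginx, Apache httpd or CDN in front would do exactly this job, and do it better.

```typescript
// Minimal sketch: serve pre-generated files with a 30-minute Cache-Control TTL and
// an ETag, and answer conditional requests with 304. Illustrative only - a real
// deployment would leave this to nginx, Apache httpd or a CDN.
import { createServer } from "node:http";
import { readFileSync, statSync } from "node:fs";
import { createHash } from "node:crypto";
import { join, normalize } from "node:path";

const ROOT = "./public";        // directory the write path renders into
const TTL_SECONDS = 30 * 60;    // the 30-minute TTL mentioned above

createServer((req, res) => {
  // Note: a real server would also guard against path traversal and map MIME types.
  const file = normalize(join(ROOT, req.url === "/" ? "/index.html" : req.url ?? "/"));

  let body: Buffer;
  try {
    body = readFileSync(file);
  } catch {
    res.writeHead(404).end("not found");
    return;
  }

  // Cheap weak validator derived from file size and modification time.
  const { size, mtimeMs } = statSync(file);
  const etag = `"${createHash("sha1").update(`${size}-${mtimeMs}`).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    // The client already holds this version: no body, no database, almost no work.
    res.writeHead(304, { ETag: etag }).end();
    return;
  }

  res.writeHead(200, {
    "Content-Type": "text/html; charset=utf-8",   // the sketch assumes HTML pages
    "Cache-Control": `public, max-age=${TTL_SECONDS}`,
    ETag: etag,
  }).end(body);
}).listen(8080);
```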
There are, however, rare situations where even professionals require short reload times - and even then they have to account for the fact that different resources will never change state in perfect synchrony. In those cases the change is carefully planned upfront: short timeouts are set, time is allowed for them to propagate, and only then are the changes rolled out. After a brief period the timeouts are raised again. This is often done when deploying new mechanisms such as DNS signing algorithm rollovers. But it is a last-resort strategy and should usually be avoided.

Dynamic CMS systems often expose outdated software with a broad attack surface. Each request is funneled through layers of middleware - the PHP interpreter or a similar runtime, custom scripts, and the database backend - each introducing potential vulnerabilities and each exposed to every request from any untrusted source. By contrast, serving static files is far more robust: the web server simply reads from disk or distributed storage, a process with very few exploitable points. Static file serving also allows a clear separation between the delivery system and the editing/authoring environment. Even if a frontend node is compromised, your underlying content remains intact - you patch the issue, redeploy, and your content producers can keep working without disruption in the meantime.
This doesn’t mean you have to give up the convenience of a CMS. Modern static site generators (Hugo, Jekyll, etc.) let you author in Markdown or even WYSIWYG editors and then compile to static HTML. Even some “dynamic” CMSs now offer “static export” features, where content is generated dynamically at write time but served statically at read time. Still, one should carefully consider whether in-browser editing is really necessary. More often than not it turns into a hassle rather than a feature - reducing performance and providing an editing experience that is rarely as pleasant as the alternatives, even if it seems like a cool idea at first glance. Another downside is that it requires a working network connection at all times. Collaborative solutions add even more fragility: they depend on stable bandwidth, low latency and minimal packet loss - and even then they frequently suffer from inconsistent updates and lost edits, as most of us have experienced with “modern” collaborative tools. These problems simply don’t exist with proper offline-first editing workflows that only push changes once you are done.
The result: best of both worlds. A smooth editorial experience for writers, but fast, secure, cache-friendly delivery for readers.
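To make the write-time compilation concrete, here is a minimal build-step sketch in TypeScript using the marked Markdown parser from npm. The content and output directories and the bare-bones page template are assumptions - real generators like Hugo or Jekyll add layouts, taxonomies and asset pipelines on top of the same basic idea.

```typescript
// Sketch of the write path of a static-first setup: compile Markdown sources into
// plain HTML files that any web server can deliver as-is. Directory names and the
// template are placeholders.
import { readdir, readFile, writeFile, mkdir } from "node:fs/promises";
import { basename, join } from "node:path";
import { marked } from "marked";

const SRC = "./content";   // Markdown written by authors (offline, version-controlled)
const OUT = "./public";    // static HTML picked up by the delivery path

async function build(): Promise<void> {
  await mkdir(OUT, { recursive: true });
  for (const file of await readdir(SRC)) {
    if (!file.endsWith(".md")) continue;
    const markdown = await readFile(join(SRC, file), "utf8");
    const body = await marked.parse(markdown);
    // Deliberately minimal template - a real generator adds layout, navigation
    // and metadata here, still entirely at write time.
    const page = `<!DOCTYPE html><html><body><main>${body}</main></body></html>`;
    await writeFile(join(OUT, `${basename(file, ".md")}.html`), page);
  }
}

build();
```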
Static-first doesn’t mean “never dynamic.” It means move the dynamic parts off the hot path. When a page is mostly stable, serve it statically; when a tiny slice truly must be dynamic, push that slice into serverless functions or edge functions that run outside the page render pipeline.
Good fits for serverless/edge:
You can split the typical pattern into two independent paths - the read path and the write path:
A few major design tips so it scales:
Use Cache-Control and ETag headers, and consider stale-while-revalidate for snappy UX under load.
What not to do: put full-page HTML rendering inside a function for every read. That merely recreates the 90s mistake on newer infrastructure. Use functions as narrow dynamic sidecars, not as a page factory. And do not rely on JavaScript in the browser - always provide a fallback (you can sacrifice some propagation speed and some usability, but the information should remain accessible without any active code execution on the client).
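As a sketch of such a narrow sidecar, the following edge-style function returns only a small dynamic slice - an upcoming-events list as JSON - while the page around it stays a static file. The fetch-handler shape follows what several edge runtimes use; the data URL and field names are made up for illustration. The static page should embed a build-time copy of the same list as the no-JavaScript fallback, with a client script merely refreshing it when available.

```typescript
// Edge/serverless sketch: serve only the dynamic slice (upcoming events as JSON),
// never whole pages. Handler shape follows the fetch-style API used by several
// edge runtimes; the data URL and the EventEntry fields are illustrative assumptions.
interface EventEntry {
  title: string;
  date: string; // ISO 8601 date
}

// Stand-in for a small data source: a pre-computed JSON blob published by the
// write path; a KV store or a tiny internal API would work the same way.
async function loadUpcomingEvents(): Promise<EventEntry[]> {
  const response = await fetch("https://example.org/data/upcoming-events.json");
  return response.ok ? ((await response.json()) as EventEntry[]) : [];
}

export default {
  async fetch(_request: Request): Promise<Response> {
    const events = await loadUpcomingEvents();

    // Even the dynamic slice is cacheable: a short TTL plus stale-while-revalidate
    // keeps the function itself off the hot path for most requests.
    return new Response(JSON.stringify(events), {
      status: 200,
      headers: {
        "Content-Type": "application/json; charset=utf-8",
        "Cache-Control": "public, max-age=60, stale-while-revalidate=300",
      },
    });
  },
};
```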
Even websites that are often presented as some of the best-performing high-traffic WordPress instances, such as The New York Times, are in fact static web pages. Large portals like these don’t use WordPress as the user-facing frontend; they usually run those systems in headless mode. That means they use WordPress to manage their content and to let their editors work, while a rendering backend - distinct from WordPress - generates the actual frontend pages. This happens on different timescales for different article types: interactive realtime rendering fetched by the frontend for breaking news, and pre-rendered static HTML published via content delivery networks on a schedule and at publish time for evergreen articles (which resembles a classical static webpage). In addition there is incremental and on-demand regeneration for pages that are mostly static but are re-generated when content changes.
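Such a headless pipeline is easy to approximate: a small render job pulls published content over the CMS’s read API and writes plain HTML at publish time. The sketch below uses the standard WordPress REST route /wp-json/wp/v2/posts; the host name, output directory and minimal template are placeholders, not a description of any particular newsroom’s setup.

```typescript
// Sketch of a headless render backend: pull posts from a WordPress instance over
// its REST API and write static HTML at publish time. Host, output directory and
// the template are placeholders.
import { mkdir, writeFile } from "node:fs/promises";
import { join } from "node:path";

const CMS = "https://cms.example.org";   // internal editing system, never user-facing
const OUT = "./public/articles";         // static files pushed on to the CDN afterwards

interface WpPost {
  slug: string;
  title: { rendered: string };
  content: { rendered: string };
}

async function renderAll(): Promise<void> {
  await mkdir(OUT, { recursive: true });
  const response = await fetch(`${CMS}/wp-json/wp/v2/posts?per_page=100`);
  const posts = (await response.json()) as WpPost[];

  for (const post of posts) {
    // Editors keep working inside the CMS; readers only ever see these files.
    const html = `<!DOCTYPE html><html><body><h1>${post.title.rendered}</h1><main>${post.content.rendered}</main></body></html>`;
    await writeFile(join(OUT, `${post.slug}.html`), html);
  }
}

renderAll();
```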

The 90s beginner’s dream of millisecond-accurate dynamic websites has aged poorly. For most organizations - small businesses, associations, newspapers, etc. - dynamic generation on every request makes no sense.
Static or cache-heavy delivery is not only faster and more resource-efficient, it is more scalable, secure, and resilient. The future of the web lies not in rebuilding every page every time, but in serving static truths and letting updates propagate when they truly matter. For true realtime propagation you should rely on dedicated pub/sub systems, not the WWW itself, since the web was never designed for strict realtime guarantees. And importantly: static-first should not be mistaken for an “old school” approach. On the contrary, it aligns with the most modern web stacks - from Jamstack architectures to global CDNs - proving that efficiency, scalability and resilience are timeless design choices.
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/