Note: This is an opinion article on the future of web development. Currently there is a trend
towards building more and more logic into the client - even for simple tasks. That trend has
existed before and it was a bad idea back then. This article tries to present a few arguments
why I don't think that this is or should be the future of the web.
Note that - in case you expect that - I won't tell you that JavaScript is useless or that frameworks
like Angular or React are bad - React is a somewhat bad example anyway, since it supports complete
server-side rendering without any client-side JS if used correctly. They solve a particular problem
and they're great at it (I've had the opportunity to work with both of them and with different JS
environments). But they're often used in a wrong or bad way - most of the time, to be precise.
This article was inspired by a friend linking me to an article about backend-less JavaScript applications.
As usual this article might be extended in case more ideas come to my mind or I get
external feedback.
Introduction
Everyone who does web development full- or part-time knows it. There currently is (again) a trend to
develop websites and basic web applications - not speaking about desktop applications running in the
browser, more on that later - that run as massive JavaScript client-side applications inside the
browser. They do not simply provide some features that are only possible using JavaScript,
and they do not only enhance their functionality using JavaScript, but are completely composed on the
client side using a massive amount of JavaScript code that fetches information from distributed sets
of APIs that might even be operated by a huge number of different backend providers. They take the
idea of creating mashups to the next level.
Just as a short summary, since not everyone will reach the end of this article - the main features
that the web offers are, in my opinion:
- Being open for everyone without central control. Anyone can build a webpage without
many resources and without the consent of any external provider. And anyone can run
one's own hosting service to host that content.
- Device independence.
- Simple interoperability. Even in ways one doesn't imagine (and no, there is nothing
wrong with someone mining one's site for some data and doing something with
that data).
- Archivability. Webpages from the early 1990s are still available and still
viewable. Webpages that are built by JS-based mashing up of some backends will vanish
and leave just a blank page in history.
- Accessibility. It allows all people with all kinds of disabilities and with all kinds
of devices in all social groups to access the same content.
- Downward and upward compatibility.
But first let's look at some history of computing and the web.
A little bit of history
Basically it all started before the web with big mainframe computers and a number of remote dumb
terminals. The terminals were just capable of displaying or printing out data; all computations
were made centrally at the mainframe. It was done that way since computational power and storage
were pretty expensive and took up way more room than one can imagine today. A hard disk that could
store a megabyte of data filled the space a refrigerator does today, and a computer with the
calculation power of today's smartwatch would have filled a whole building, required massive
electrical and cooling infrastructure and planning, and was prone to a huge set of errors ranging
from being interrupted by static electricity up to bugs crawling into the machine - which they liked
because of the higher temperature - and causing shorts inside the circuits. Funnily, the popular
story about the word bug naming an error originates exactly from this behavior.
Note that the Internet already existed at that time - the Internet started on 1 January 1983
with the adoption of TCP/IP for ARPANET. All of the core features that now hold the Internet
together were developed back then - the core features of the network being its decentrality
and robustness as well as easy expansion. These features are still in place today, even if
every few years someone proposes to exchange that infrastructure for a centralized "modern"
approach - which fortunately never happens.
So back in the 1970s and 1980s we had a scenario that one could describe as having a smart server
and dumb clients.
Then the time of personal computers started in the 1980s. Computational power and storage became
cheaper and more compact. Computers moved into households - at the beginning they were a nice toy
for geeks, but starting with the x86 architecture - which we still use today - and especially
the 80386 in the year 1985, the personal computer gained way more computational power and a major
32 bit computing environment. On the other hand communication over long distances was rather
expensive and slow - since the 80s there have been dial-up network connections, later reaching
speeds of about 40 kBit per second - so that's about a factor of 1000 to 10000 slower than the
internet connections used by many people today. The first commercial dial-up connections started
in the early 1990s and people started to build the web on top of the Internet during that time.
The web started as a simple idea: a global information medium on which people are capable
of hosting HTML documents that use easy, standardized, human-readable markup that can
be rendered by a browser, that provide metadata and that link to each other. The concept goes back
as far as the 1940s but was - in the form we know today - proposed in 1989 by Tim Berners-Lee, who
in 1990 at CERN also implemented the first webserver speaking HTTP, the specification of the HTML
markup language and the first browser, which also supported editing directly inside the browser.
In 1991 the server went online and was reachable for everyone on the web. Note that the first
webbrowser supported a feature that nowadays is most of the time hacked into webpages using a huge
bunch of server-side and client-side code - it allowed direct editing of webpages. You could simply
point your cursor somewhere in the webpage, edit its content like you're editing in a word
processor and then - if you had the necessary permissions - upload the document via either FTP or
an HTTP PUT request to the webserver (HTTP PUT is still a supported method in modern HTTP, even if
not widely used, or sometimes used in a different way to upload information to backend APIs in
REST patterns).
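To illustrate the latter: a minimal sketch of such an upload using today's fetch API - the target
URL, the document body and the required permissions are of course made up:

    // Hypothetical example: uploading a whole document with an HTTP PUT
    // request, much like REST-style APIs still do today.
    fetch('https://example.org/pages/about.html', {
        method: 'PUT',
        headers: { 'Content-Type': 'text/html' },
        body: '<!DOCTYPE html><html><body><h1>About this site</h1></body></html>'
    }).then(function (response) {
        if (!response.ok) {
            throw new Error('Upload failed with HTTP status ' + response.status);
        }
    });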
Webpages were rather simple and were just rendered by the browser, so they contained just
some dumb markup. Servers were rather dumb as well and only delivered the data. So the web was
a little less interactive than one would imagine today. Applications ran as native code on the
client side (like most programs still do today). During the same time, things like transferring
files using FTP and discussions on Usenet using NNTP were still the way to be interactive on the
Internet.
So back in the 1990s we had a scenario that one could describe as having a dumb, only file-serving
server and smarter clients (but nearly no interactivity on the web; applications ran locally).
The main advantages of the web - which one can still have today - were:
- Being portable and device independent. Using HTML correctly allows one to represent the same
information on a huge variety of devices, independent of whether they're a line printer, a small
display, a projector on a multi-meter sized surface or, today, a screen reader, smartwatch, etc. -
or even a bot doing automatic processing based on the annotations.
- Being standardized. A browser could display HTML from any source as long as it was standard
conforming. With a clearly specified standard (that worked up to HTML 4.01), and ignoring
vendor-specific extensions, one could target any browser - there was no need to support some
given browser or a given architecture. Just provide your HTML (and today CSS) and it
magically works everywhere without any hassle. If you need to target a given browser you're in my
opinion still doing it wrong.
- Being archivable. One could simply download every document and read it offline years later, even
if the original service went offline. This is something that we'll get back to later on.
- And being bookmarkable. We'll also get to this later on.
Soon after the emergence of the Windows operating system and ActiveX in 1996 the idea emerged to
host ActiveX components inside the browser - namely inside Internet Explorer. Note that ActiveX
is an API that has been built for componentization of applications - the basic idea back then, which
still lives on today in many popular office suites, was that a document can be composed of components
from different applications, for example the ability to embed a spreadsheet into a word processor
without the word processor having any knowledge about spreadsheets. This is done by launching
a spreadsheet processing component in the background - which is the ActiveX/OLE component - and
rendering its result inside the word processor, which only knows that it has to launch a component
of a given type. The same idea expanded to the web - including the ability to download components.
This allowed one to download whole applications into the browser and to support interactive forms,
database access and animations. Other technologies that emerged at a similar time were Java applets,
Flash animations and applications, and some other browser add-ons.
Logic moved to the browser with all its side effects: Not all browsers supported this rather complex
logic, and one had to install extensions that were only available for some platforms, so the
portability of the web got lost. On top of that most platforms had - because of their complexity -
massive security problems, so malware targeting users' machines through these mechanisms emerged.
The higher demand for calculation power as well as memory on client machines did the rest, and so
these technologies did not really survive long outside of some specific bubbles like Flash games or
some Java-applet-based remote interfaces for machines or telephony exchanges.
To counter that, server-side scripting and programming emerged. First there was CGI, which simply
instructed a webserver to execute an arbitrary binary or script in case a web request was
made to a specific address - so the webserver got something like the inetd
daemon that
just launched applications on demand, passed request information to them and passed the returned
information from the application on to the browser as the HTTP response. The most accessible way
of using CGI back then for the average user was to use Perl scripts - which was and still is
somewhat of a hassle to program with, even if it supports really potent and powerful string
functions compared with many other languages. Then PHP emerged. This language simply looked like
a hack, but it allowed mixing HTML markup and code inside the same file, which made thinking about
and building pages much easier. PHP became the language of choice for every hobbyist web
developer who wanted to add interactivity to his site, later on paired with the MySQL database
system. Interaction with webpages followed a strict request-response pattern, normally using
either links for non-state-changing GET requests or forms that could pass selected
options and the content of text fields and areas to the server side using potentially state-changing
POST requests. After every action triggered by the browser, data got sent to the server,
the server processed the request and returned new information to the client.
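As a rough sketch of the CGI mechanism - written in JavaScript for Node here purely for consistency
with the other examples (back then it would typically have been Perl or a shell script); the
parameter name is made up:

    #!/usr/bin/env node
    // Minimal CGI-style program: the webserver passes request data via
    // environment variables (and stdin for POST bodies) and forwards
    // whatever the program writes to stdout back to the browser.
    const querystring = require('querystring');

    const params = querystring.parse(process.env.QUERY_STRING || '');

    // A CGI response starts with header lines, then a blank line, then the body.
    process.stdout.write('Content-Type: text/html\r\n\r\n');
    process.stdout.write('<html><body><p>Hello ' +
        (params.name || 'anonymous') + '</p></body></html>\n');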
If one, for example, implemented a simple solver for linear equations, one would provide
a nice input form for the user. After the user had filled in the form he'd transmit the
form to the server. The server would then perform input validation - in case of an error it would
return an error message to the user and potentially allow correction of the input, or it would
perform the calculation and return the result. Each and every request of course required a complete
page reload, which is not much of a problem for simple page designs, but with
complex layouts it still felt somewhat clumsy to some users - especially on slow
connections.
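A minimal sketch of this request-response pattern - written here with Node's built-in http module
rather than the Perl or PHP of that era, and with made-up route and field names - could look like
this:

    // Classic request-response pattern: GET renders the form, POST validates
    // the input on the server and returns a complete new page with either an
    // error message or the result (solving a*x + b = 0 here).
    const http = require('http');
    const querystring = require('querystring');

    const form = '<form method="POST" action="/">' +
        'a: <input name="a"> b: <input name="b"> ' +
        '<input type="submit" value="Solve a*x + b = 0"></form>';

    http.createServer(function (req, res) {
        if (req.method === 'POST') {
            let body = '';
            req.on('data', function (chunk) { body += chunk; });
            req.on('end', function () {
                const fields = querystring.parse(body);
                const a = parseFloat(fields.a);
                const b = parseFloat(fields.b);
                res.setHeader('Content-Type', 'text/html');
                if (isNaN(a) || isNaN(b) || a === 0) {
                    // Server-side validation failed: return the form again.
                    res.end('<p>Invalid input, please correct it.</p>' + form);
                } else {
                    // Result page - again a complete page reload on the client.
                    res.end('<p>x = ' + (-b / a) + '</p>' + form);
                }
            });
        } else {
            res.setHeader('Content-Type', 'text/html');
            res.end(form);
        }
    }).listen(8080);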
Again one had a smart server - doing the computations - and a mostly dumb client - doing only
the display - just as back in mainframe times. On the other hand the main advantage was
that one also had the server as a gatekeeper to internal databases and internal state, so
this is also (still) the place where one validates user input, manages most of the access
permissions, etc.
To give users a more fluid feel, JavaScript emerged. This is not to be confused - as it often was
in the early days - with Java. Java applets run full-blown bytecode inside the browser and
can perform major computations. JavaScript is capable of this today too, but back then it was a
rather inefficient and slow programming language. It allowed running simple scripts
in addition to the previous server-side architecture. For example one could intercept the
submission of forms to the server, perform some input parameter validation on the client
and give quick user feedback in case of errors before submitting the values. One still
had to do server-side checking though, since code running on a client can never be trusted.
Sometimes one could even perform the calculations on the client side, as long as one also provided
a fallback in case JS was not supported.
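A minimal sketch of such an enhancement - the form id and field names are made up, and the server
of course repeats the same checks:

    // Intercept the form submission and validate on the client first.
    // Without JavaScript the form simply submits to the server, which
    // performs the same validation again (client code is never trusted).
    document.getElementById('solver-form').addEventListener('submit', function (event) {
        var a = parseFloat(this.elements['a'].value);
        var b = parseFloat(this.elements['b'].value);
        if (isNaN(a) || isNaN(b) || a === 0) {
            event.preventDefault(); // keep the invalid request off the wire
            alert('Please enter valid numbers (a must not be zero).');
        }
        // Otherwise the normal submission to the server proceeds as usual.
    });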
From this, pure JS applications emerged that simply left out the server-side fallback.
Clients that don't support JavaScript or have it disabled due to resource or security
constraints are not capable of running such applications, though. This went further and
further, and some of the drawbacks will be mentioned later on (since this post is exactly
about that topic). On top of that, asynchronous, script-driven communication channels
such as asynchronous XML HTTP requests (AJAX) as well as WebSockets and WebRTC data channels
emerged. They're often used to load information from a variety of backend services or over
peer-to-peer connections, as one previously did from native client applications. They try to
provide a fluid view of applications. That's the state of today. With logic moving
into the browser one could see this as:
Having a smart client and a rather dumb or mixed server again, but relying on a rather
error-free and high-bandwidth internet connection, since one loads the application and
data over the network on demand.
So a short summary to show the effect I've always been fascinated with:
- Dumb clients, smart mainframes at first
- Smart clients, dumb servers for the first PCs
- Dumb clients, smart servers for first web applications
- Smart clients, dumb servers for many JS based web applications
Some problems
Now what are the problems I see with this approach?
Requiring JS support on the client (Inconsistency, Security)
This might sound rather basic, but the main problem is that JavaScript support is required
on the client side. With HTML there is a rather simple specification of a markup language
out there that can be implemented in a few days to weeks by anyone who's capable of writing
a parser. Up to HTML 4.01 it was clearly specified what's valid, and one could simply reject
invalid and non-conforming code (that changed with HTML5 because of the huge amount
of erroneous webpages out there - in my opinion that's a bad idea, since webpages should simply
use standards-conforming markup, but it was driven by the market of browsers
that just try to show more than their competitors and started doing sniffing and guessing
how something should be interpreted). On top of that simple HTML one could - but didn't have
to - add rather simple property-based layout rules that are specified in cascading stylesheets.
They provide clear and simple fallback methods in case something is not supported, allow
client-side tailoring and adding user stylesheets to adjust rendering to one's personal
preferences - that's a feature often forgotten by webdesigners today but it's still there.
The main point of this is that every browser can simply implement HTML and is capable of
viewing and interpreting a webpage correctly. There is no need to target a specific browser
or a specific version of a browser. A webpage that works with Internet Explorer or Google
Chrome also works with text-mode browsers like Lynx or Links as long as it's conforming to
the standard. It might look different and one has to follow accessibility standards, but
it simply works. And webpages can be processed automatically.
On the other hand JavaScript - or let's say ECMAScript - is a rapidly developing language
that gets new releases every few years, with a huge set of APIs supported by a variety of
environments - different DOM manipulation strategies, a huge number of data storage technologies
like structured WebSQL or unstructured key-value stores, a huge number of graphics processing
APIs like 2D and 3D canvas, and optional APIs that allow background processing like web workers.
This is all stuff that's not specified in a centralized way and that's not implemented in
a few weeks, not even in a year, when developing from scratch. This set of rapidly developing
APIs keeps evolving and requires a huge amount of feature detection, and it would require massive
fallbacks if one doesn't want to target a specific browser. Because of this it's way more
common to hear sentences like "this application is only supported on new versions of
Google Chrome or Mozilla Firefox" or similar. So we're back at only supporting given
versions of given browsers - these webpages will not be runnable in 10 years on browsers that
have evolved further without major refactoring and modifications. And they'll for sure not
be runnable fifty years from now - webpages from the early 1990s, on the other hand, are still
displayable by browsers today.
Funnily, many applications do not even correctly detect all the features they require but simply
fail in some unpredicted way in case something is missing or some API has evolved.
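A defensive sketch of what such feature detection could look like - the fallbacks shown are of
course just placeholders:

    // Check for optional APIs before relying on them and fall back gracefully
    // instead of failing in some unpredictable way at runtime.
    if (typeof window.Worker === 'function') {
        // Background processing via web workers is available.
    } else {
        // Fall back: run the computation inline or let the server do it.
    }

    if (window.localStorage) {
        localStorage.setItem('draft', 'some state to keep');
    } else {
        // Fall back: keep the state on the server via an ordinary form submit.
    }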
The huge complexity on the other hand opens up a huge number of potential and already discovered
security problems. There is one problem after the other, which leads to JavaScript being
disabled in security-sensitive environments. There are even hardware-based security holes
that one can exploit using JavaScript, such as cache timing side channels
and Rowhammer attacks.
JS being a huge resource hog
JavaScript is one of the most resource-intensive and inefficient runtime environments
on the market today. Try it yourself - launch a browser and open a few hundred tabs
of a plain HTML webpage, and then a few hundred tabs of a JS-heavy webpage for comparison. If this
doesn't convince you, keep them open and watch memory grow due to continuously loaded data and
especially due to external memory fragmentation on the JS heap. This is another
reason why many people have to restart their browsers periodically or simply disable
JavaScript for longer battery life.
Load times
JavaScript applications tend to consist of many libraries and a huge amount of code to
realize their logic. This has to be loaded on demand at least on the first visit to
the webpage and then after each change or after each cache timeout. JavaScript can only
partially be loaded in an asynchronous fashion - HTML on the other hand can be displayed
incrementally, even for really huge documents, before everything has been loaded.
When loading multiple megabytes of JS libraries this might be a huge problem over high-latency
and low-bandwidth links - it takes time to download all resources associated
with a JS application, it takes time to interpret and compile the code, and then it takes
time to asynchronously load the data that should be displayed to the user. And on every
asynchronous load a lossy link might fail and trigger a retry or a load failure. Loading
times for the initial load are - contrary to expectation - normally way higher than
for plain HTML and server-side logic solutions.
Just be aware that any single load or request may fail and might need a retry - and that
every single load over a low-bandwidth connection takes much time.
Complex failure modes
Since many scripts tend to load data not only from a centralized source but from a variety
of APIs - many times even operated by a huge number of different providers, as suggested
in some articles on pure JavaScript web applications - the number of components that might
fail, might be refactored or might be organizationally changed drastically increases. One
has to handle each and every failure mode on the client side. There is also no single
gatekeeper to one's backend any more - the client has to know each backend service, and
in case one modifies an API one has to support multiple API versions at the same time, since
cached script versions are still out there that perform requests the old way - sometimes
for a longer period of multiple hours up to days or weeks. The typical solution is to simply
break existing instances and expect frustrated users to reload their pages.
Distributed backends
What does it mean to distribute your backends onto multiple different service providers?
It means that you:
- lose administrative control
- have to trust the external party to be reliable and keep their service up in an
unmodified way to not interrupt your own service
- have to trust the external service to do proper access control
You have to perform access control on each backend service. Basically this is an approach
that I somewhat like, since I also try to additionally perform access control using the
user's identity at all backend services that I call out to from my webservers. But on the
other hand you lose this additional gatekeeping point that can do at least some
input parameter validation and will normally only emit well-formed internal requests - although
your services should be hardened anyway, of course.
User script intervention
This is also something one simply forgets. It's always possible that users inject their
own JavaScript as userscripts or Greasemonkey scripts into a web context. This is
also often done using browser extensions. One cannot simply trust the user's environment
to work in a consistent way. Sometimes these scripts do interfere, and it's nearly impossible
to effectively debug such failures and error modes.
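As a hypothetical example, a user-installed script like the following runs inside the page context
and may freely rewrite the page (the site match and CSS class are made up):

    // ==UserScript==
    // @name  Hide promo boxes (hypothetical example)
    // @match https://example.org/*
    // ==/UserScript==
    // Such a script can change the DOM at any time, so the application
    // cannot assume that the markup or globals it created are still in
    // the state it left them in.
    document.querySelectorAll('.promo-box').forEach(function (node) {
        node.remove();
    });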
Of course there are advances in technology, such as more mature languages like TypeScript.
But on the other hand, native and server-side application development
offers a huge set of mature tools. This starts with static analysis, compile-time
error detection, being able to do intensive unit testing, etc. As I've written before
there are some advances in languages such as TypeScript, and there are automated testing tools
such as Selenium available - but it'll be a long road till they reach the maturity level
of existing tools for server-side languages such as Java and even PHP.
This of course leads to less error-resistant code being in existence - and since there is
this idea of a fast-changing web, the error rate within web applications is just larger
than in most other areas. This, combined with the previously mentioned inconsistent environments,
makes developing client-based applications way more error-prone and harder to debug than
in any other environment.
Non archivability
This is a major problem I see with JavaScript-driven applications. In the early days you
could simply save a plain HTML webpage and reopen it offline even centuries from now.
If you try this with a JavaScript-driven webpage that gets built by querying a huge
pile of backend APIs via AJAX requests or WebSocket exchanges, there is nearly zero
chance that you can display anything ten years from now. You're simply not capable of saving
a document in a given state and archiving it. This is a bigger problem than one might
imagine at first - think of libraries not being capable of archiving knowledge
in books or on papyrus, for example.
Non bookmarkability
This is just a small extension of the previous point. Have you ever tried to bookmark
a typical modern JavaScript-driven website in a given state and reload it a month later?
In nearly all cases you do not get back to the same state as before. There are some hackish
ways in which applications try to achieve that, but they are not reliable - because they
again rely on the inconsistent presence of some features - and they still break archivability.
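One common hack of this kind mirrors client-side state into the URL via the History API, as
sketched below - loadArticleViaAjax is a hypothetical loader function and the URL layout is made
up:

    // Attempt at bookmarkable state: push the current client-side state into
    // the URL and restore it when the user navigates back or reloads. This
    // only works as long as the script and its backends still exist.
    function showArticle(id) {
        history.pushState({ article: id }, '', '/articles/' + id);
        loadArticleViaAjax(id); // hypothetical AJAX loader
    }

    window.addEventListener('popstate', function (event) {
        if (event.state && event.state.article) {
            loadArticleViaAjax(event.state.article);
        }
    });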
The way I like to develop applications (The solution to most problems presented above)
I basically think of JavaScript as providing added value for responsive webpages on top of
existing technology. In my opinion a web application (with some rare exceptions that I'll mention
at the end) just has to work without any client-side scripting. Yes, that means having round-trip
times for requests and form submissions and less interactivity. It means having
no nice editors, no WYSIWY(M)G editing of content, no client-side value pickers and so on. But
it means that every browser in existence is capable of using one's service. And on top of that
it's nice to add JavaScript features that make things more responsive - like loading additional
content via JS instead of paging, continuously updating content to reflect the current state,
loading partial content via a backend request, animating graphics, or providing WYSIWY(M)G editing
features that hide markup from users and so on (a small sketch of such an enhancement follows
after the list below). It might even allow some additional features
like image manipulation on the client - in addition to the basic core features exposed via
a server-side service.
This allows:
- Each and every browser in existence (including bots) to use the webservice.
- The webservice to also be usable by browsers written centuries from now.
- One to simply save the state of a webpage into a file that is going to work even
centuries from now and that one can archive in a library for thousands of years.
- The service to be device independent again.
- Automatic processing - if one does the markup correctly - getting back to the mashup idea
mentioned previously and to the open idea of the web.
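As a sketch of this kind of enhancement, the following could upgrade ordinary paging links to
in-place loading when JavaScript is available - the selectors and the assumption that the server
can return an HTML fragment are made up:

    // Progressive enhancement: the page works with a plain "next page" link;
    // with JavaScript available the link is upgraded to load the next chunk
    // in place. Without JS the link still triggers an ordinary page load.
    var nextLink = document.querySelector('a.next-page');
    if (nextLink && window.fetch) {
        nextLink.addEventListener('click', function (event) {
            event.preventDefault();
            // Assumes the server can answer with a partial HTML fragment here.
            fetch(nextLink.href, { headers: { 'Accept': 'text/html' } })
                .then(function (response) { return response.text(); })
                .then(function (fragment) {
                    document.querySelector('#content')
                        .insertAdjacentHTML('beforeend', fragment);
                });
        });
    }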
On the other hand there is a JavaScript-centric application type that I do
expect to require JavaScript inside a browser. That's the type of application that
projects like Emscripten and WebAssembly have been developed for - for example one
can run complete CAD applications inside the browser, complete image manipulation applications
or even computer games - without having to install them. Of course there is the major
drawback of security, but since these are applications that are comparable to applications
that would otherwise have been installed, the only drawback from this point of view is the
update-ability by simply exchanging server-side resources. Such applications of course have to use
extensive client-side features. Some examples of such applications might be:
- OpenJSCad to draw CAD models
- GIMP online,
which allows one to run the image editing program GIMP inside the browser without any
installation
- DIA online, which is a browser-based version of a
commonly used diagram editor
- 3D games like HexGl or 2D games like Cross-Code.
In my opinion, each and every website whose main objective is to present information
just has to work without any scripting at all. If the main objective is to substitute
a desktop application, on the other hand, the required scripting is of course alright. But
this is, in my opinion, not the case for a simple database frontend or, for example, a reporting
application. This has nothing to do with a modern or antique approach - it has to
do with keeping up the most basic properties of the web:
- Being open for everyone without central control. Anyone can build a webpage without
many resources and without the consent of any external provider. And anyone can run
one's own hosting service to host that content.
- Device independence.
- Simple interoperability. Even in ways one doesn't imagine (and no, there is nothing
wrong with someone mining one's site for some data and doing something with
that data).
- Archivability. Webpages from the early 1990s are still available and still
viewable. Webpages that are built by JS-based mashing up of some backends will vanish
and leave just a blank page in history.
- Accessibility. It allows all people with all kinds of disabilities and with all kinds
of devices in all social groups to access the same content.
- Downward and upward compatibility.
Disclaimer
Please keep in mind again that this is an opinion-based article and everyone might have
a different opinion.