This page contains short articles about projects I'm working on myself or that friends of mine have expressed some interest in and for which I've decided to write short summaries. It also contains some short notes on how to solve particular problems, kept as an easy-to-access reference for friends and other people who might run into the same problems. Please note that many of these articles are written late at night or during short breaks, so if you spot any errors, corrections are welcome.
Source code of hobby projects, some code that is partially used externally, and some unfinished work that I've released under BSD or BSD-like licenses can be found in my GitHub repositories. Most of the finished and released projects are automatically built for every release and every GitHub commit by Jenkins Pipelines on dynamically created Xen virtual machines for FreeBSD (amd64, armv6, aarch64), Linux (amd64) and Windows (amd64), and they are automatically tested by the pipeline scripts (using unit tests for components, Frama-C with its WP plugin to guard against runtime errors and to prove certain correctness constraints on C applications, and Selenium with Java to emulate users on web applications via chromedriver). I plan to auto-deploy the artifacts to a public artifact repository in the near future (just as this webpage is automatically deployed on every commit to the corresponding repository).
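To give an impression of what the user emulation part looks like: the actual tests are written in Java, but the following minimal Python sketch illustrates the same idea of driving a real browser through chromedriver (the URL and the checks are illustrative assumptions, not the real test suite):

```python
# Minimal sketch of a browser-based check with Selenium and chromedriver.
# The real pipeline tests use Selenium with Java; this Python version only
# illustrates the approach. URL and assertions are placeholder assumptions.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

opts = Options()
opts.add_argument("--headless=new")  # run without a visible browser window

driver = webdriver.Chrome(options=opts)
try:
    driver.get("https://www.tspi.at/")
    # Emulate a user glancing at the page: a title and some links should exist
    assert driver.title, "page did not render a title"
    links = driver.find_elements(By.TAG_NAME, "a")
    print(f"Loaded '{driver.title}' with {len(links)} links")
finally:
    driver.quit()
```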
If you like to read books on interesting subjects, you can also refer to my list of recommended books.
Hard ideas land softer when they travel with a story. The code and data are rigorous; the pictures keep us human. I include whimsical and fantasy-tinged illustrations here for a few simple reasons:
The result is a mix of rigor and whimsy: reproducible experiments, solid code, careful reasoning - accompanied by images that remind us not to lose sight of curiosity, story, and delight.
What are those images generated with? It depends: a mixture of manual work in Gimp and Inkscape, with some help from OpenAI's DALL-E 3, OpenAI's 4o image generation, Stable Diffusion XL (SDXL) and Pony XL, as well as some smaller models.
I've been notified that not all browsers point to the Atom feed supplied by this blog and that not everyone notices the presence of such a feed even when interested. Of course one can subscribe with any RSS/Atom reader using the Atom feed at https://www.tspi.at/atom.xml or the corresponding onion URI http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/atom.xml
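The feed can of course also be fetched programmatically; a minimal Python sketch using only the standard library (the fields read are just the usual Atom elements):

```python
# Fetch the Atom feed and list entry titles and links (standard library only)
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
FEED_URL = "https://www.tspi.at/atom.xml"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for entry in tree.getroot().findall(f"{ATOM}entry"):
    title = entry.findtext(f"{ATOM}title", default="(untitled)")
    link = entry.find(f"{ATOM}link")
    href = link.get("href") if link is not None else ""
    print(f"{title}: {href}")
```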
Even though this page is mainly a personal blog and thus more of a personal notebook - if you like it or if it has even helped you somewhere, consider keeping it alive by supporting the author: fuel him with caffeine, keep the infrastructure up and running, or even fund some new project-related parts:
Beneath the surface of this website lies a large vector space - every paragraph, every idea, every article reduced to a point in a high-dimensional semantic field. The interactive 3D views below let you explore this space directly (provided you have JavaScript and WebGL enabled in your browser).
The first plot shows individual text chunks of all articles. Each dot corresponds to a small segment of text, placed according to its semantic similarity to others. Clusters form naturally: essays about physics and engineering gather together, while psychology and social critique form their own constellations. The axes correspond to a selectable three of the ten main principal components obtained from a principal component analysis of the embedding vectors. These components capture the main directions of variation in the blog's content: for example, one might roughly separate technical from philosophical writing, while another might contrast scientific precision with emotional or narrative tone.
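As a rough sketch of how such a projection can be computed - assuming the chunk embeddings are already available as a NumPy array; the file name and the use of scikit-learn are illustrative, not necessarily the exact tooling behind this page:

```python
import numpy as np
from sklearn.decomposition import PCA

# One row per text chunk, e.g. produced by some sentence embedding model
# (assumed file name and shape: n_chunks x embedding_dim)
chunk_embeddings = np.load("chunk_embeddings.npy")

# Reduce to the ten main principal components described above
pca = PCA(n_components=10)
components = pca.fit_transform(chunk_embeddings)

# Pick any three of the ten components as axes for the 3D scatter plot
x_axis, y_axis, z_axis = 0, 1, 2
points_3d = components[:, [x_axis, y_axis, z_axis]]
print(points_3d.shape)  # (n_chunks, 3)
```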
Hover over a point to see which article it belongs to and rotate the view to see how the themes intertwine.
The second plot shows centroids of whole articles - each dot now representing an entire post.
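Such centroids are simply per-article means over the chunk vectors; a small sketch (again with assumed file names, aligned row for row with the chunk array above):

```python
import numpy as np

# PCA-projected chunk vectors and, for each chunk, the article it belongs to
# (both assumed to be stored row-aligned)
components = np.load("chunk_components.npy")
article_ids = np.load("chunk_article_ids.npy", allow_pickle=True)

# One centroid per article: the mean of its chunks in the reduced space
centroids = {
    article: components[article_ids == article].mean(axis=0)
    for article in np.unique(article_ids)
}
```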
Together, these plots form a semantic topography of the blog: a landscape where proximity means conceptual kinship. Articles that lie near each other are thematically or linguistically related, even if they were written years apart. It is a glimpse into how machine learning perceives this corpus - and, perhaps, how ideas evolve and orbit each other over time.
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/