37 mins
Semantic search is a trending topic - and nowadays a surprisingly simple concept. Instead of relying on keywords, this article shows how to use embeddings to capture meaning and suggest genuinely related posts. Along the way, I will show how to visualize embeddings in 2D and 3D, making the abstract mathematics of similarity tangible through interactive plots that visualize this blog itself. From theory to practice, the article walks through a full pipeline. It covers chunking content, generating embeddings with Ollama or OpenAI, storing them in PostgreSQL with pgvector, and finally integrating related-article boxes directly into Jekyll. Code snippets, schema definitions, and configuration examples make it easy to reproduce - or adapt for your own projects.
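At its core, "related" simply means "nearby in embedding space". The following minimal sketch illustrates that idea with made-up post titles and vectors (the real pipeline described in the article stores the embeddings in a pgvector column and lets PostgreSQL do the ranking):

```python
import numpy as np

# Hypothetical, precomputed post embeddings - in practice these would come
# from Ollama or OpenAI and be stored in PostgreSQL with pgvector.
posts = {
    "Instrumentation amplifiers": np.array([0.9, 0.1, 0.0]),
    "RF mixers and noise":        np.array([0.8, 0.2, 0.1]),
    "Wasps in the garden":        np.array([0.0, 0.9, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal ones.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.15, 0.05])  # embedding of the article being viewed

# Rank all posts by similarity to the query and keep the two best matches.
related = sorted(posts.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
for title, _ in related[:2]:
    print(title)
```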
7 mins
Instrumentation amplifiers are the quiet workhorses behind precision measurement. From strain gauges in mechanical systems to Pirani gauges in vacuum technology, they allow engineers to pick up the tiniest signals while ignoring large amounts of electrical noise. Built on the foundations of the classical difference amplifier, modern INAs overcome its limitations by offering high input impedance, simple gain adjustment, and excellent common-mode rejection - all in a compact chip. This article walks through the principles step by step, starting from the difference amplifier and leading to the full instrumentation amplifier. Equations, circuit diagrams, and practical notes show why INAs are indispensable in sensor front-ends and data acquisition. Whether you’re an engineering student, hobbyist, or professional, this quick overview will give you the tools to understand when and why instrumentation amplifiers matter.
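For orientation, the defining relation of the classical three-op-amp instrumentation amplifier (in the usual textbook notation, assuming a matched internal resistor network and a single external gain resistor R_G) is:

```latex
% Three-op-amp INA with matched resistors; gain set by one external resistor R_G.
V_{\mathrm{out}} = \left(1 + \frac{2R_1}{R_G}\right)\frac{R_3}{R_2}\left(V_{+} - V_{-}\right)
\qquad\text{and, for } R_3 = R_2:\qquad
V_{\mathrm{out}} = \left(1 + \frac{2R_1}{R_G}\right)\left(V_{+} - V_{-}\right)
```

A single resistor thus sets the gain, while the common-mode component of the inputs is rejected by the output difference stage.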
58 mins
Wasps often suffer from a bad reputation as aggressive stingers, but in reality they are vital neighbors in our gardens. They act as natural pest controllers, removing flies, caterpillars, and other insects, while also visiting hundreds of plant species for nectar and contributing to pollination. Far from being mindless automatons, wasps have been shown in studies to learn, remember, recognize faces, and even demonstrate reasoning abilities. Their so-called aggression is predictable and usually defensive - stings occur mainly when a nest is disturbed or a wasp feels directly threatened. By learning to read their behavior, we can safely coexist with these insects. A foraging wasp on fruit is harmless, a guard wasp with wings spread is issuing a warning, and only persistent provocation leads to an attack. Wasp colonies are temporary, vanishing in winter, but while active they provide free ecological services worth billions globally. Understanding these behaviors and signals helps transform fear into respect, showing wasps not as pests but as intelligent, beneficial allies in maintaining the balance of the garden.
23 mins
Are humans metaphysically unique, or is consciousness a matter of organization and dynamics that could, in principle, emerge in other substrates? This essay tackles the long-standing debate by weighing arguments for human exceptionalism against counterpoints from philosophy of mind, neuroscience, and AI. It explores whether qualities like subjective experience, qualia, and the hard problem demand a biological essence, or whether feedback loops, global broadcast, embodiment, intrinsic drives, and learning dynamics suffice to explain why minds feel the way they do. The conclusion points toward a continuum rather than an exclusive divide: human minds feel special, but their vividness and unity can be linked to patterns of information processing, not an untouchable spark. Machines do not yet meet these conditions fully, but no principled barrier rules it out. What makes minds special is excellence of organization - loops, memory, drives, and adaptation - not the matter they are built from.
23 mins
Give your agents real hands and eyes. This article introduces the Model Context Protocol (MCP) as a standardized, modular way to expose tools, resources, and prompts to any orchestrator - discoverable at runtime, permissioned by design, and refreshingly simple to implement. We start with a tiny toy MCP that is copy-pastable and runnable in minutes. Then we turn it up a notch with an MQTT-powered MCP that bridges into the real world - sensors, home automation, lab gear, and small robots - using safe allowlists, request/reply with timeouts, and robust async hand-offs.
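To give a feel for how small such a toy server can be, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK; the server name and the tool are made-up placeholders, not the ones from the article:

```python
# Minimal MCP server exposing a single tool over stdio.
# Assumes the official Python SDK is installed:  pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-demo")  # hypothetical server name

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers - a stand-in for a real tool such as a sensor read."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP client can attach
```

An orchestrator that speaks MCP can discover the `add` tool at runtime, inspect its typed signature, and call it - the same pattern the MQTT-backed version extends to real hardware.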
4 mins
Many dismiss large language models as nothing more than fancy autocomplete. But that view misses the real point: language itself is not random chatter, it’s the accumulated record of how humans have thought, reasoned, and created for millennia. Training on language means training on the full spectrum of patterns humanity has ever written down - from the logic of a proof to the rhythm of a poem. This article explains why LLMs are more than next-word predictors. They internalize and generalize patterns across fields, recombine them in creative ways, and operate on a scale far beyond any individual human mind. Rather than being just statistics, they are engines for navigating and extending the collective patterns of human thought - and that’s what makes them powerful and exciting.
9 mins
Understanding spacetime can feel abstract, especially when it comes to metrics, raising and lowering indices, and the difference between proper and coordinate time - all the more so when encountering them for the first time. In this short article, I share an intuitive way of looking at these concepts: tangent space as the local microscopic view, the metric as the universal projector into curved spacetime, and proper time as the tick of a real clock that all observers agree on. From light cones and causality to time dilation and 4-velocity, the short text walks through how the metric naturally encodes the rules of relativity. It is not a formal derivation, but rather a personal attempt to make the mathematics feel more tangible - an accessible entry point for anyone who is curious about how spacetime really works but has a hard time grasping the basic ideas.
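The central formula behind that picture is the line element: proper time is what the metric produces from coordinate displacements. Using the (-,+,+,+) sign convention (conventions differ, so treat the signs as an assumption), it reads:

```latex
% Proper time along a worldline, obtained from the metric (signature -,+,+,+):
c^2\,d\tau^2 = -\,g_{\mu\nu}\,dx^{\mu}\,dx^{\nu}

% In flat spacetime, for a clock moving at speed v, this reduces to time dilation:
d\tau = dt\,\sqrt{1 - \frac{v^2}{c^2}}

% The 4-velocity u^{\mu} = dx^{\mu}/d\tau is then normalized as
g_{\mu\nu}\,u^{\mu}u^{\nu} = -c^2
```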
6 mins
It is often claimed that large language models could one day replace entire software companies, writing all the code without human input. At first glance this sounds visionary - and to some extent it is - but a closer look at how these systems actually perform in real projects shows a much more sober picture. Yes, they can replace some repetitive entry-level tasks, generate boilerplate, and accelerate debugging. They are also excellent at writing small, self-contained utilities and documenting code. But when projects grow in scope and deviate from mainstream patterns, the limits quickly become visible: redundancy, miscorrections, escalating costs, and fragile adherence to design goals. The near- and mid-term future is not one of fully automated software factories, but of layered collaboration between human engineers and machine assistants. Models will take over the mechanical and repetitive, while humans will remain indispensable for planning, coherence, and responsibility. Companies that try to automate people away entirely will build fragile systems that collapse under their own weight. Those that succeed will be the ones who combine human judgment with machine speed - not by replacing one with the other, but by orchestrating both together.
5 mins
Booting FreeBSD on older ThinkPads can be trickier than expected. When moving a GPT-partitioned, ZFS-encrypted SSD from an older HP laptop to a Lenovo X260, the system refused to start in legacy BIOS mode - even with all the usual boot hacks applied. Hours of troubleshooting revealed that the common lenovofix MBR trick wasn’t enough. The solution turned out to be switching over to UEFI boot with a small EFI system partition. This short guide shows the exact steps - from reorganizing partitions to installing the FreeBSD bootloader - and points out small adjustments like Wi-Fi interface renaming. If you’re trying to get FreeBSD running smoothly on an X260 or similar ThinkPad, this walkthrough may save you time and frustration.
10 mins
Mixers are a key element of RF systems, shifting signals into frequency ranges where they can be filtered and processed more easily. Alongside the desired signal, noise is also translated, and its behavior under mixing can be less intuitive than expected. This article explains how signal and noise appear at the mixer output, why image folding can reduce signal-to-noise ratio, and how local oscillator noise affects the result. Examples with diode mixers illustrate the difference between homodyne and heterodyne setups, show where the familiar 3 dB SNR loss originates, and highlight conditions under which signal-to-noise is preserved. Practical measurements are included to compare both cases and to demonstrate the influence of filtering choices.
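The 3 dB figure has a compact origin: in a heterodyne receiver without image rejection, the wanted band at f_LO + f_IF and the image band at f_LO - f_IF both translate onto the same IF. The signal occupies only one of them, but broadband noise occupies both. As a sketch, assuming equal noise power N in the signal and image bands:

```latex
% Image folding: noise from both RF bands lands on the same IF.
\mathrm{SNR}_{\mathrm{out}} = \frac{S}{N_{\mathrm{signal}} + N_{\mathrm{image}}}
                            = \frac{S}{2N}
\quad\Rightarrow\quad
\mathrm{SNR}_{\mathrm{out}} \approx \mathrm{SNR}_{\mathrm{in}} - 3\ \mathrm{dB}
```

Filtering the image band ahead of the mixer, or using a setup in which signal and image coincide, avoids the extra noise contribution - which is what the measurements in the article compare.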
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/