A quick recap on Householder reflectors

28 Nov 2024 - tsp
Last update 29 Nov 2024
Reading time 14 mins

Introduction

The Householder transformation is one of the elementary linear transformations on vector spaces. It is often used to perform matrix transformations efficiently - for example during QR decomposition or singular value decomposition (SVD) - and it can also be used to solve the linear least squares problem.

The transformation represents the reflection of a point about a hyperplane that goes through the origin and is usually characterized by the plane's normal vector. Recall that the reflection of a point $x$ at the hyperplane defined by the unit normal vector $v$ can be calculated by

[ x - 2 \langle x, v \rangle v = x - 2 v (v^{*} x) ]

In this case $v^{*}$ is the conjugate transpose of $v$; note that $v$ has been assumed to be a column unit vector. One can imagine this to work by first determining the shadow or projection of the location vector of $x$ onto $v$. Subtracting this once from $x$ yields the component of the location vector that is orthogonal to $v$ - i.e. the component lying in the plane. If we want to reflect at the plane we simply subtract the projected component a second time.

The inner product $\langle x, v \rangle$ only yields the length of this projection, not its direction along the normal $v$; to turn it back into a vector it is multiplied by $v$. The second form shown above has the advantage that everything is left multiplied onto $x$:

[ \begin{aligned} x - 2 v (v^{*} x) &= x - 2 (v v^{*}) x \\ &= (I - 2 v v^{*}) x \end{aligned} ]
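
As a quick sanity check, here is a minimal sketch of this reflection formula (the concrete vectors are my own choice, not from the original text) that mirrors a point across the x-axis in two dimensions:

import numpy as np

# Reflect the point x about the hyperplane through the origin with
# unit normal v - here the plane is the x-axis, its normal points along y.
v = np.array([0.0, 1.0])
x = np.array([3.0, 4.0])

reflected = x - 2.0 * np.dot(v, x) * v
print(reflected)   # [ 3. -4.] - mirrored across the x-axis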

Definition

Using matrix formalism we can thus define a transformation that maps any vector onto its reflection using the above formula:

[ (I - 2 v v^{*}) x ]

We recall that $v^{*}$ is the conjugate transpose, so the outer product $v v^{*}$ is a matrix. We can also build the normalization of $v$ directly into the formula:

[ \begin{aligned} &\left(I - 2 \frac{v}{\mid v \mid} \frac{v^{*}}{\mid v \mid}\right) x \\ = &\left(I - 2 \frac{v v^{*}}{\mid v \mid^2}\right) x \end{aligned} ]

This now yields the definition of our reflector matrix

[ \begin{aligned} H &= I - 2 \frac{v v^{*}}{\mid v \mid^2} \end{aligned} ]

A very simple Python method to evaluate the Householder transformation matrix for a vector $v$ can be given by:

import numpy as np

def householder(v):
    # H = I - 2 v v^* / |v|^2; taking the conjugate in the outer
    # product also makes this correct for complex vectors
    return np.identity(len(v)) - 2 * np.outer(v, np.conj(v)) / (np.linalg.norm(v) ** 2)

Properties
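
From the definition one can read off the key properties directly: since $v v^{*}$ is Hermitian, $H$ is Hermitian too ($H = H^{*}$), and it is unitary ($H^{*} H = I$); together this makes it involutory ($H H = I$), i.e. every reflector is its own inverse. A minimal numerical check, reusing the householder function defined above:

import numpy as np

v = np.random.rand(4)
H = householder(v)

print(np.allclose(H, H.T))                   # True - H is symmetric (real case)
print(np.allclose(H @ H, np.identity(4)))    # True - H is its own inverse
print(np.allclose(H @ v, -v))                # True - the normal itself gets mirrored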

Zeroing matrix columns

One of the most useful applications of Householder transformations is the zeroing of matrix elements (for example in QR decomposition and singular value decomposition).

Let's take a look at the action of the Householder reflector on a single vector:

[ \begin{aligned} y &= H x \end{aligned} ]

Let's assume we want to zero out all elements but the first in $x$. We now utilize a vector $e_1$ that points along the first coordinate axis and has the same length as $x$ - all remaining components are zero:

[ \begin{aligned} x &= \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \vdots \\ x_n \end{pmatrix} \\ \mid x \mid &= \sqrt{x_1^2 + x_2^2 + \ldots + x_n^2} \\ e_1 &= \begin{pmatrix} \mid x \mid \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \\ \end{aligned} ]

Then we calculate the vector $v$ as the difference between $x$ and $e_1$:

[ \begin{aligned} v &= x - e_1 \\ &= \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \vdots \\ x_n \end{pmatrix} - \begin{pmatrix} \mid x \mid \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \end{aligned} ]

This vector also gets normalized:

[ \begin{aligned} v_0 &= \frac{1}{\mid v \mid} v \end{aligned} ]

Now we form the Householder reflector as defined above:

[ \begin{aligned} H &= I - 2 v_0 v_0^{*} \\ &= \begin{pmatrix} 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{pmatrix} - 2 \begin{pmatrix} v_{0,1} \\ v_{0,2} \\ \vdots \\ v_{0,n} \end{pmatrix} \begin{pmatrix} v^{*}_{0,1} & v^{*}_{0,2} & \ldots & v^{*}_{0,n} \end{pmatrix} \end{aligned} ]

To zero out the elements below the first entry of the column we then simply left multiply our matrix (or vector) with the Householder transformation, as sketched below:

[ A_1 = H A ]
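
Putting the steps above together, a minimal sketch (function and variable names are my own, not from the article) that zeros everything below the first element of the first column of a random matrix:

import numpy as np

def householder_for_column(x):
    # Reflector that maps x onto |x| e_1 as derived above. Note: if x is
    # already (close to) a multiple of e_1 the vector v vanishes; production
    # code flips the sign of e_1 to avoid this cancellation.
    e1 = np.zeros_like(x)
    e1[0] = np.linalg.norm(x)
    v = x - e1
    v0 = v / np.linalg.norm(v)
    return np.identity(len(x)) - 2.0 * np.outer(v0, v0)

A = np.random.rand(5, 5)
H = householder_for_column(A[:, 0])
A1 = H @ A
print(np.round(A1[:, 0], 8))   # (|x|, 0, 0, 0, 0)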

As one can see in the table below, iterating this procedure zeros out all elements below the diagonal of our matrix and yields an upper triangular matrix $m$; from the second step on the reflector carries an identity block in its upper left corner (visible in the $H$ column of the table), so the already processed columns stay untouched. Since each reflector is its own inverse, we recover the input by chaining the transformations in reverse order and left multiplying them onto the result:

[ A = H_1 H_2 H_3 H_4 m ]

Taking a look at the deviation from the original matrix indeed yields the zero matrix:

[ \begin{aligned} H_1 H_2 H_3 H_4 m - A &= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \end{aligned} ]
Iteration Matrix $x$ $e_1$ $v$ $H$
0 $\begin{pmatrix} 0.71849739 & 0.68300692 & 0.95967855 & 0.42489296 & 0.63354321\\ 0.97990704 & 0.08919208 & 0.8406018 & 0.22022244 & 0.04848086\\ 0.15543111 & 0.84463264 & 0.18208212 & 0.75802462 & 0.07074044\\ 0.02059253 & 0.55175405 & 0.10135218 & 0.75248157 & 0.87553633\\ 0.15875599 & 0.56599797 & 0.55866914 & 0.72661108 & 0.09783331\\ \end{pmatrix}$        
1 $\begin{pmatrix} 1.23541194 & 0.65616851 & 1.32127471 & 0.62307312 & 0.44297959\\ 0. & 0.14006925 & 0.15512946 & -0.15546468 & 0.40972941\\ 0. & 0.85270269 & 0.07335372 & 0.6984338 & 0.12804104\\ 0. & 0.55282322 & 0.08694713 & 0.74458659 & 0.88312789\\ 0. & 0.57424065 & 0.44761489 & 0.66574553 & 0.15635965\\ \end{pmatrix}$ $\begin{pmatrix} 0.71849739 & 0.97990704 & 0.15543111 & 0.02059253 & 0.15875599\\ \end{pmatrix}$ $\begin{pmatrix} 1.23541194 & 0. & 0. & 0. & 0.\\ \end{pmatrix}$ $\begin{pmatrix} -0.45739191 & 0.86707089 & 0.13753324 & 0.0182213 & 0.14047526\\ \end{pmatrix}$ $\begin{pmatrix} 0.58158527 & 0.79318243 & 0.12581318 & 0.01666855 & 0.1285045\\ 0.79318243 & -0.50362386 & -0.23850214 & -0.03159832 & -0.24360402\\ 0.12581318 & -0.23850214 & 0.96216922 & -0.00501207 & -0.03864004\\ 0.01666855 & -0.03159832 & -0.00501207 & 0.99933597 & -0.00511928\\ 0.1285045 & -0.24360402 & -0.03864004 & -0.00511928 & 0.9605334\\ \end{pmatrix}$
2 $\begin{pmatrix} 1.23541194 & 0.65616851 & 1.32127471 & 0.62307312 & 0.44297959\\ 0. & 1.17562201 & 0.33121431 & 1.16338708 & 0.63334397\\ 0. & 0. & -0.07163941 & -0.3875451 & -0.05608935\\ 0. & 0. & -0.00705465 & 0.040526 & 0.76375269\\ 0. & 0. & 0.34997131 & -0.0655917 & 0.03235962\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0.14006925 & 0.85270269 & 0.55282322 & 0.57424065\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 1.17562201 & 0. & 0. & 0.\\ \end{pmatrix}$ $\begin{pmatrix} 0. & -0.66364719 & 0.5464654 & 0.35428382 & 0.36800944\\ \end{pmatrix}$ $\begin{pmatrix} 1. & 0. & 0. & 0. & 0.\\ 0. & 0.1191448 & 0.72532045 & 0.47023892 & 0.48845687\\ 0. & 0.72532045 & 0.40275114 & -0.38720769 & -0.40220885\\ 0. & 0.47023892 & -0.38720769 & 0.74896596 & -0.26075958\\ 0. & 0.48845687 & -0.40220885 & -0.26075958 & 0.7291381\\ \end{pmatrix}$
3 $\begin{pmatrix} 1.23541194 & 0.65616851 & 1.32127471 & 0.62307312 & 0.44297959\\ 0. & 1.17562201 & 0.33121431 & 1.16338708 & 0.63334397\\ 0. & 0. & 0.35729804 & 0.0126572 & 0.02786229\\ 0. & 0. & 0. & 0.04710805 & 0.76513342\\ 0. & 0. & 0. & -0.39211791 & -0.03613676\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & -0.07163941 & -0.00705465 & 0.34997131\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0.35729804 & 0. & 0.\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & -0.77475907 & -0.0127423 & 0.63212817\\ \end{pmatrix}$ $\begin{pmatrix} 1. & 0. & 0. & 0. & 0.\\ 0. & 1. & 0. & 0. & 0.\\ 0. & 0. & -0.20050322 & -0.01974443 & 0.97949406\\ 0. & 0. & -0.01974443 & 0.99967527 & 0.01610954\\ 0. & 0. & 0.97949406 & 0.01610954 & 0.20082795\\ \end{pmatrix}$
4 $\begin{pmatrix} 1.23541194 & 0.65616851 & 1.32127471 & 0.62307312 & 0.44297959\\ 0. & 1.17562201 & 0.33121431 & 1.16338708 & 0.63334397\\ 0. & 0. & 0.35729804 & 0.0126572 & 0.02786229\\ 0. & 0. & 0. & 0.3949375 & 0.1271437\\ 0. & 0. & 0. & 0. & -0.75536051\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0. & 0.04710805 & -0.39211791\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0. & 0.3949375 & 0.\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0. & -0.66359636 & -0.74809082\\ \end{pmatrix}$ $\begin{pmatrix} 1. & 0. & 0. & 0. & 0.\\ 0. & 1. & 0. & 0. & 0.\\ 0. & 0. & 1. & 0. & 0.\\ 0. & 0. & 0. & 0.11927975 & -0.99286069\\ 0. & 0. & 0. & -0.99286069 & -0.11927975\\ \end{pmatrix}$
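
The full iteration shown in the table can be reproduced with a short loop; this is my own reconstruction of the procedure (names are not from the original article). In step $k$ the reflector is built from the subdiagonal part of column $k$ and embedded into an identity matrix so the previously processed columns stay untouched:

import numpy as np

def triangularize_columns(a_in):
    # Returns the upper triangular matrix m and the reflectors H_1 ... H_{n-1}
    a = a_in.astype(float).copy()
    n = a.shape[0]
    hs = []
    for k in range(n - 1):
        x = a[k:, k]
        e1 = np.zeros_like(x)
        e1[0] = np.linalg.norm(x)
        v0 = (x - e1) / np.linalg.norm(x - e1)
        H = np.identity(n)
        H[k:, k:] -= 2.0 * np.outer(v0, v0)
        a = H @ a
        hs.append(H)
    return a, hs

A = np.random.rand(5, 5)
m, hs = triangularize_columns(A)

# Each reflector is its own inverse, so chaining them in reverse recovers A:
print(np.allclose(hs[0] @ hs[1] @ hs[2] @ hs[3] @ m, A))   # True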

Using Householder reflectors to zero rows

One can use the very same method to zero out full rows of a matrix. We just have to set the length of the basis vector $e_1$ to the norm of the current row vector and right multiply our matrix by the transformations, as sketched below.
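
A minimal sketch of a single such step (again with names of my own choosing): build the reflector from the first row and apply it from the right:

import numpy as np

A = np.random.rand(5, 5)
x = A[0, :]                     # first row instead of first column
e1 = np.zeros_like(x)
e1[0] = np.linalg.norm(x)
v0 = (x - e1) / np.linalg.norm(x - e1)
H = np.identity(5) - 2.0 * np.outer(v0, v0)
A1 = A @ H                      # right multiplication zeros A1[0, 1:]
print(np.round(A1[0, :], 8))    # (|x|, 0, 0, 0, 0)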

As one can see in the example run below, this procedure zeros out the elements above the diagonal of our matrix and yields a lower triangular matrix $m$. To recover the input we again chain all transformations in reverse order, this time right multiplying them onto the result:

[ A = m H_4 H_3 H_2 H_1 ]
Iteration Matrix $x$ $e_1$ $v$ $H$
0 $\begin{pmatrix} 0.71849739 & 0.68300692 & 0.95967855 & 0.42489296 & 0.63354321\\ 0.97990704 & 0.08919208 & 0.8406018 & 0.22022244 & 0.04848086\\ 0.15543111 & 0.84463264 & 0.18208212 & 0.75802462 & 0.07074044\\ 0.02059253 & 0.55175405 & 0.10135218 & 0.75248157 & 0.87553633\\ 0.15875599 & 0.56599797 & 0.55866914 & 0.72661108 & 0.09783331\\ \end{pmatrix}$        
1 $\begin{pmatrix} 1.57658837 & 0. & 0. & 0. & 0.\\ 1.07572318 & 0.01292617 & 0.73344219 & 0.17277805 & -0.02226183\\ 0.78029469 & 0.34726562 & -0.51675773 & 0.44861671 & -0.390607\\ 0.86473253 & -0.12014843 & -0.84272375 & 0.33449659 & 0.25229337\\ 0.89275214 & -0.01823431 & -0.26222326 & 0.36316501 & -0.44408857\\ \end{pmatrix}$ $\begin{pmatrix} 0.71849739 & 0.68300692 & 0.95967855 & 0.42489296 & 0.63354321\\ \end{pmatrix}$ $\begin{pmatrix} 1.57658837 & 0. & 0. & 0. & 0.\\ \end{pmatrix}$ $\begin{pmatrix} -0.52166598 & 0.41522575 & 0.5834249 & 0.25830851 & 0.38515489\\ \end{pmatrix}$ $\begin{pmatrix} 0.45572922 & 0.43321829 & 0.60870584 & 0.26950152 & 0.4018444\\ 0.43321829 & 0.65517515 & -0.48450608 & -0.21451269 & -0.31985245\\ 0.60870584 & -0.48450608 & 0.31923077 & -0.30140723 & -0.4494179\\ 0.26950152 & -0.21451269 & -0.30140723 & 0.86655343 & -0.19897757\\ 0.4018444 & -0.31985245 & -0.4494179 & -0.19897757 & 0.70331142\\ \end{pmatrix}$
2 $\begin{pmatrix} 1.57658837 & 0. & 0. & 0. & 0.\\ 1.07572318 & 0.75395781 & 0. & 0. & 0.\\ 0.78029469 & -0.38240384 & 0.20543866 & 0.61874557 & -0.41252749\\ 0.86473253 & -0.7526483 & -0.21670178 & 0.4819695 & 0.23329202\\ 0.89275214 & -0.15906478 & -0.12283513 & 0.39600087 & -0.44831935\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0.01292617 & 0.73344219 & 0.17277805 & -0.02226183\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0.75395781 & 0. & 0. & 0.\\ \end{pmatrix}$ $\begin{pmatrix} 0. & -0.70101911 & 0.69383946 & 0.16344878 & -0.02105979\\ \end{pmatrix}$ $\begin{pmatrix} 1. & 0. & 0. & 0. & 0.\\ 0. & 0.01714442 & 0.97278944 & 0.22916143 & -0.02952663\\ 0. & 0.97278944 & 0.03717361 & -0.22681442 & 0.02922422\\ 0. & 0.22916143 & -0.22681442 & 0.946569 & 0.00688439\\ 0. & -0.02952663 & 0.02922422 & 0.00688439 & 0.99911297\\ \end{pmatrix}$
3 $\begin{pmatrix} 1.57658837 & 0. & 0. & 0. & 0.\\ 1.07572318 & 0.75395781 & 0. & 0. & 0.\\ 0.78029469 & -0.38240384 & 0.77151154 & 0. & 0.\\ 0.86473253 & -0.7526483 & 0.20409053 & 0.02202276 & 0.53994581\\ 0.89275214 & -0.15906478 & 0.52459716 & -0.31167454 & 0.02349908\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0.20543866 & 0.61874557 & -0.41252749\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0.77151154 & 0. & 0.\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & -0.60568939 & 0.66204836 & -0.44139815\\ \end{pmatrix}$ $\begin{pmatrix} 1. & 0. & 0. & 0. & 0.\\ 0. & 1. & 0. & 0. & 0.\\ 0. & 0. & 0.26628073 & 0.80199134 & -0.53470036\\ 0. & 0. & 0.80199134 & 0.12338393 & 0.58445385\\ 0. & 0. & -0.53470036 & 0.58445385 & 0.61033534\\ \end{pmatrix}$
4 $\begin{pmatrix} 1.57658837 & 0. & 0. & 0. & 0.\\ 1.07572318 & 0.75395781 & 0. & 0. & 0.\\ 0.78029469 & -0.38240384 & 0.77151154 & 0. & 0.\\ 0.86473253 & -0.7526483 & 0.20409053 & 0.54039475 & 0.\\ 0.89275214 & -0.15906478 & 0.52459716 & 0.01077785 & -0.31237328\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0. & 0.02202276 & 0.53994581\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0. & 0.54039475 & 0.\\ \end{pmatrix}$ $\begin{pmatrix} 0. & 0. & 0. & -0.69254852 & 0.7213713\\ \end{pmatrix}$ $\begin{pmatrix} 1. & 0. & 0. & 0. & 0.\\ 0. & 1. & 0. & 0. & 0.\\ 0. & 0. & 1. & 0. & 0.\\ 0. & 0. & 0. & 0.0407531 & 0.99916925\\ 0. & 0. & 0. & 0.99916925 & -0.0407531\\ \end{pmatrix}$

Taking a look at the deviation from the original matrix again yields the zero matrix:

[ \begin{aligned} m H_4 H_3 H_2 H_1 - A &= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \\ \begin{pmatrix} 1.57658837 & 0. & 0. & 0. & 0.\\ 1.07572318 & 0.75395781 & 0. & 0. & 0.\\ 0.78029469 & -0.38240384 & 0.77151154 & 0. & 0.\\ 0.86473253 & -0.7526483 & 0.20409053 & 0.54039475 & 0.\\ 0.89275214 & -0.15906478 & 0.52459716 & 0.01077785 & -0.31237328\\ \end{pmatrix} \begin{pmatrix} 0.45572922 & 0.43321829 & 0.60870584 & 0.26950152 & 0.4018444 \\ 0.64946413 & -0.49980367 & 0.24643663 & -0.09242771 & -0.50903707 \\ 0.06245609 & 0.40889585 & -0.25748092 & 0.66413693 & -0.56703539 \\ 0.18982425 & -0.52275128 & -0.34601749 & 0.58165855 & 0.48233088 \\ 0.57495496 & 0.34936154 & -0.61864502 & -0.37339191 & 0.15883993 \end{pmatrix} - \begin{pmatrix} 0.71849739 & 0.68300692 & 0.95967855 & 0.42489296 & 0.63354321\\ 0.97990704 & 0.08919208 & 0.8406018 & 0.22022244 & 0.04848086\\ 0.15543111 & 0.84463264 & 0.18208212 & 0.75802462 & 0.07074044\\ 0.02059253 & 0.55175405 & 0.10135218 & 0.75248157 & 0.87553633\\ 0.15875599 & 0.56599797 & 0.55866914 & 0.72661108 & 0.09783331\\ \end{pmatrix} &= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \end{aligned} ]
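
The complete chain for rows, mirroring the column version above (again a sketch under the same assumptions, with my own naming):

import numpy as np

def triangularize_rows(a_in):
    # Returns the lower triangular matrix m and the reflectors H_1 ... H_{n-1}
    a = a_in.astype(float).copy()
    n = a.shape[0]
    hs = []
    for k in range(n - 1):
        x = a[k, k:]
        e1 = np.zeros_like(x)
        e1[0] = np.linalg.norm(x)
        v0 = (x - e1) / np.linalg.norm(x - e1)
        H = np.identity(n)
        H[k:, k:] -= 2.0 * np.outer(v0, v0)
        a = a @ H
        hs.append(H)
    return a, hs

A = np.random.rand(5, 5)
m, hs = triangularize_rows(A)

# Chaining the reflectors in reverse order from the right recovers A:
print(np.allclose(m @ hs[3] @ hs[2] @ hs[1] @ hs[0], A))   # True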

This article is tagged: Math, Programming, Physics, How stuff works, Computational linear algebra

