## Rotation matrix from one vector to another in n-dimensions

Sometimes you need to find a rotation matrix that rotates one vector to face in the direction of another. Here’s some code to do it for vectors of arbitrary dimension. The code is at the bottom of the post.

Some notes…

1. Rotations happen in planes, not around axes. In 3D, lying in a plane and being perpendicular to an axis amount to the same thing, but this is not true in general. In $n$ dimensions there are $n(n-1)/2$ independent coordinate planes of rotation, and any rotation can be decomposed into a combination of rotations in these planes.
2. There is no unique solution. For a given vector there is a whole set of rotations that leave it unchanged (all rotations in planes orthogonal to it), so one can obtain another solution by following any solution with a rotation in a plane orthogonal to the “target” vector.
3. It is easy to rotate a vector to $(1,0,0,\ldots)$, and this is how the algorithm here works. First we find the rotation to $(1,0,0,\ldots) = X$ for both the “starting” vector $S$ and the “target” vector $T$. If these rotations are $R(S,X)$ and $R(T,X)$ then the rotation from “start” to “target” is given by $R(S,T) = R(T,X)^{-1} R(S,X)$. This is computationally cheap because the inverse of a rotation matrix is its transpose.
4. To rotate to $(1,0,0,\ldots)$ we can proceed plane by plane: find the rotation that zeroes one component, apply it, and move on to the next plane. In other words, if our vector is $(x,y,z,w,\ldots)$, find the matrix that rotates $(x,y,z,w,\ldots)$ to $(\sqrt{x^2+y^2},0,z,w,\ldots)$, then the one that rotates $(\sqrt{x^2+y^2},0,z,w,\ldots)$ to $(\sqrt{x^2+y^2+z^2},0,0,w,\ldots)$, and so on until every component except the first is zero. The full rotation matrix is the product (in order) of these individual rotations.
5. Numerical functions are not 100% accurate and errors compound in this algorithm. In the implementation below they will generally be of the order of $10^{-16}$, though the error grows with the dimension. It would not be hard to adapt this to a symbolic algebra package (sympy, for example) and produce exact solutions.
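Note 2 can be checked numerically: a rotation confined to a plane orthogonal to the target leaves the target fixed, so composing it with any valid solution gives another valid solution. A minimal sketch (the vector, plane, and angle below are arbitrary choices of mine):

```python
import numpy as np

t = np.array([1.0, 0.0, 0.0, 0.0])  # target along the first axis

# Rotate in the (2, 3) coordinate plane, which is orthogonal to t.
theta = 0.7
extra = np.eye(4)
extra[2, 2] = np.cos(theta); extra[2, 3] = -np.sin(theta)
extra[3, 2] = np.sin(theta); extra[3, 3] = np.cos(theta)

print(np.allclose(extra @ t, t))              # True: t is unchanged
print(np.isclose(np.linalg.det(extra), 1.0))  # True: still a proper rotation
```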

Code…

```python
from numpy import sin, cos, arctan2, dot, eye


def rotation_matrix(vector, target):
    """Rotation matrix from one vector to another target vector.

    The solution is not unique as any additional rotation perpendicular to
    the target vector will also yield a solution.

    However, the output is deterministic.
    """
    R1 = rotation_to_pole(target)
    R2 = rotation_to_pole(vector)

    return dot(R1.T, R2)


def rotation_to_pole(target):
    """Rotation matrix taking `target` to (1, 0, 0, ...) times its length."""
    n = len(target)
    working = target
    rm = eye(n)
    for i in range(1, n):
        # Rotate by -atan2(y, x) in the (0, i) plane to zero component i.
        angle = -arctan2(working[i], working[0])
        rm = dot(rotation_matrix_inds(angle, n, 0, i), rm)
        working = dot(rm, target)

    return rm


def rotation_matrix_inds(angle, n, ax1, ax2):
    """'n'-dimensional rotation matrix by 'angle' radians in the coordinate
    plane with indices 'ax1' and 'ax2'."""
    s = sin(angle)
    c = cos(angle)

    i = eye(n)

    i[ax1, ax1] = c
    i[ax1, ax2] = -s
    i[ax2, ax1] = s
    i[ax2, ax2] = c

    return i
```



This part verifies it and demonstrates the accuracy.

```python
if __name__ == "__main__":
    from numpy import random, linalg, arccos, sqrt

    def short_vec_print(v):
        return ("[" + ("%.4g, " * len(v))[:-2] + "]") % tuple(v)

    # Verify the rotation_to_pole function
    for i in range(1, 8):
        v = random.randn(i)
        length = linalg.norm(v)
        rm = rotation_to_pole(v)
        vrot = dot(rm, v)

        print("to_pole:", short_vec_print(vrot), "(length %g)" % length)

    # Verify the rotation_matrix function
    for i in range(2, 8):
        v = random.randn(i)
        t = random.randn(i)

        len_v = linalg.norm(v)

        rm = rotation_matrix(v, t)

        vrot = dot(rm, v)

        cosang = dot(t, vrot) / sqrt(dot(vrot, vrot) * dot(t, t))

        print("\nrotation_matrix:", short_vec_print(v), "to", short_vec_print(t))
        print("  length of rotated relative to original:", len_v / linalg.norm(vrot))

        if cosang > 1:
            print("  *** Rounding error of", cosang - 1, "***")
            ang = 0
        else:
            ang = arccos(cosang)

        print("  cos angular difference:", cosang, "(%g radians)" % ang)
```



### 7 Comments to “Rotation matrix from one vector to another in n-dimensions”

1. I had this problem a while ago. The second answer here http://math.stackexchange.com/questions/465665/hermitian-matrices-and-great-circles gives some very elegant alternative ways to achieve it.

• It’s not exactly the same problem, though I can see how you could tackle one with the other. Your question has far fewer solutions. In the terms you used, this may not give great circles, just some circle that passes through the two points (well, their projection onto the sphere). If you have a nice function for calculating the subspace spanned by the two vectors and then transforming them to a sensible place, it’s just a rotation in 2D. I think my method is basically the Gram-Schmidt process and in some sense the same as the answers (hence the accumulation of error with the size of the vector). The reason mine has the form it does is that I usually just want to get the projection of some data into a plane specified by a normal vector, so the function “rotate_to_pole” is enough.

• Well, it was sort-of the same problem originally – I actually wanted to find the “most minimal” rotation matrix (in some not very well defined sense), and the stuff about great circles was just part of my attempt to work out what that would be. The first answer pointed out that thinking about great circles was a red herring, since there are still many solutions in >3 dimensions even if you ask for a great circle. The second answer seemed intuitively minimal enough to me, so I went with that.

2. I think that instead of using a sequence of rotations like you describe, the rotation from S to X can be performed a lot more efficiently using a single Householder transformation. For example, see pages 1 and 2 of this reference:

http://www.eecs.berkeley.edu/~jduchi/projects/general_notes.pdf

• I understand that Householder transformations describe reflections, so do you mean replacing the numerical method for the two rotations that are multiplied with Householder transformations, so that the reflection parts cancel out, leaving the rotation? I can see how that would be more efficient.

3. What I meant is that you can replace the sequence of rotations used to compute R(S,X) and R(T,X) by (orthogonal) Householder matrices H(S,X) and H(T,X). An example of how these Householder matrices can be calculated (which is actually very easy) is provided on page 2 of the reference I provided above.

• Yup, that’s what I thought you were saying.
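The Householder idea from this thread can be written down in a few lines. Below is my own hedged sketch, not the code from the linked reference: each vector is reflected onto the first axis, and the two reflections are composed. Each Householder factor has determinant -1, so the product has determinant +1 for generic inputs (the degenerate branch, where a vector already lies along the first axis, returns the identity instead).

```python
import numpy as np

def householder_to_pole(v):
    """Householder reflection H with H @ v = ||v|| * e1."""
    e1 = np.zeros(len(v))
    e1[0] = np.linalg.norm(v)
    u = v - e1
    if np.allclose(u, 0):  # v already points along the first axis
        return np.eye(len(v))
    return np.eye(len(v)) - 2.0 * np.outer(u, u) / np.dot(u, u)

def rotation_via_householder(v, t):
    """Orthogonal matrix taking the direction of v to the direction of t."""
    # Householder matrices are symmetric and self-inverse, so
    # H(t) maps ||t|| * e1 back to t; composing gives v -> t direction.
    return householder_to_pole(t) @ householder_to_pole(v)

v = np.array([1.0, 2.0, 3.0, 4.0])
t = np.array([0.0, -1.0, 0.5, 2.0])
R = rotation_via_householder(v, t)
print(np.allclose(R @ v / np.linalg.norm(v), t / np.linalg.norm(t)))  # True
```

Unlike the plane-by-plane method, this needs only two rank-one updates rather than 2(n-1) matrix products, which is the efficiency gain the commenter is pointing at.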