$ \def\Vec#1{\mathbf{#1}} \def\vt#1{\Vec{v}_{#1}(t)} \def\v#1{\Vec{v}_{#1}} \def\vx#1{\Vec{x}_{#1}} \def\av{\bar{\Vec{v}}} \def\vdel{\Vec{\Delta}} $

Harald Kirsch

2024-06-04

Raising or Lowering an Index

$ \def\ve{\vec{e}} \def\onef{\tilde f} $

First the Rant

"You can use the metric to raise an index" is what I found in one or the other physics text book. Surely I missed some context each time, so I couldn't help to swear: WTF, I like the indexes where they are, what is the point of raising or lowering them. It felt like the text was suggesting there is a church of the Oooompf, where physicists gather for rituals during which indices are flipped. But what is the point?

Recently I embarked on reading Gravitation (by Misner, Thorne and Wheeler, or MTW for short) and finally found the hints I was missing.

Spoiler: it has nothing to do with religious rituals. Exercises 2.2 and 2.3 add the why, as they read

2.2 Lowering index to get the 1-form corresponding to a vector
2.3 Raising index to recover the vector
Nevertheless, I feel the need to write all this down in the way I understand it best.

Vector Space, Vectors

A vector $\vec{v}$ is an element of a vector space $V$, which means that elements of the set can be added to get another vector and can be multiplied by elements (numbers) taken from an associated field $F$, such that expressions like $r\vec{v} + s\vec{w}$ make sense and the "usual" addition and multiplication rules apply.

A vector space has a basis, which is a set of vectors $\{\ve_1, \dots, \ve_n\}$ from which all other vectors can be constructed by multiplication and addition as shown above, while, on the other hand, none of the basis vectors can be constructed from any combination of the others.

A basis can have finitely or infinitely many elements, but let's stick to finite bases for now. The construction of every vector from the basis:

\begin{equation} \vec{v} = \sum_{i=1}^n v^i \ve_i, \qquad v^i\in F \label{veccoords} \end{equation}

is unique, meaning no other combination of numbers $v^i$ will produce $\vec{v}$. The numbers $v^i$ are the coordinates of the vector with respect to the given basis. The $i$ in $v^i$ is not an exponent; it is just an index written in superscript position. Why it is written like this will become clearer later. At least as a notational convenience, it leads to the Einstein notation

$$ v^i \ve_i := \sum_{i=1}^n v^i \ve_i$$

which is a valid shortcut whenever identical superscript and subscript indices match up for summation.
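These two facts, unique coordinates and the summation convention, can be sketched numerically; the basis and vector below are made up for illustration:

```python
import numpy as np

# A basis of R^2 (rows e_1, e_2) and a vector v, both made up.
basis = np.array([[1.0, 0.0],    # e_1
                  [1.0, 1.0]])   # e_2
v = np.array([3.0, 2.0])

# Coordinates are unique: solve v = v^i e_i for the v^i.
# The columns of basis.T are the basis vectors.
coords = np.linalg.solve(basis.T, v)

# v^i e_i: match the upper index i (coords) with the lower index i
# (basis rows) and sum; np.einsum makes the bookkeeping explicit.
v_rebuilt = np.einsum('i,ij->j', coords, basis)

print(coords)      # [1. 2.]  i.e. v = 1*e_1 + 2*e_2
print(v_rebuilt)   # [3. 2.]  reconstructs v
```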

Dot or Scalar Product

A common operation on a pair of vectors is to "multiply" them and get back a value from the associated field $F$: the scalar or dot product

$$\cdot : V\times V \to F\,,$$

written as $\vec{v}\cdot \vec{w} = r \in F$. Not going into all the details, but for sanity it makes sense that multiplication of a vector by an element $a\in F$ is compatible with the dot product:

$$ (a\,\ve_1) \cdot \ve_2 = a(\ve_1\cdot \ve_2) $$

For an arbitrary vector $v^i \ve_i$ (Einstein notation) this means

\begin{equation} (v^i \ve_i) \cdot \ve_k = \sum_{i=1}^n v^i (\ve_i\cdot \ve_k) \label{dotassoc} \end{equation}

This shows that once we know all the results of the dot product for combinations of the basis vectors,

$$ g_{ij} := \ve_i\cdot \ve_j,\qquad 1\leq i,j \leq n $$

we can compute the dot product of any two vectors by means of those few values. The simplified case of a vector times a basis vector above is then:

$$ (v^i \ve_i)\cdot \ve_k = \sum_{i=1}^n v^i g_{ik} = v^i g_{ik},\quad \scriptsize\text{ Einstein Notation} $$

and, multiplying out the double sum, we get for the general case

\begin{equation} \vec{v}\cdot \vec{w} = (v^i \ve_i)\cdot (w^j \ve_j) = g_{ij} v^i w^j \,. \label{metric} \end{equation}

The last expression uses Einstein notation twice, and we start getting used to matching up upper and lower indices for summation.

From school we may remember the scalar product as $\sum_{i=1}^n v^i w^i$, but this is just the special case $g_{ij} = \delta_{ij}$.
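As a small numerical sketch (the metric values here are made up), the formula $g_{ij} v^i w^j$ can be evaluated with `np.einsum`; with the Euclidean metric it reduces to the familiar school sum:

```python
import numpy as np

v = np.array([1.0, 2.0])   # the v^i
w = np.array([3.0, 4.0])   # the w^j

g_euclid = np.eye(2)                  # g_ij = delta_ij
g_other = np.array([[2.0, 1.0],       # some made-up symmetric metric
                    [1.0, 2.0]])

# g_ij v^i w^j: sum over both index pairs.
dot_euclid = np.einsum('ij,i,j->', g_euclid, v, w)   # = 1*3 + 2*4 = 11
dot_other = np.einsum('ij,i,j->', g_other, v, w)     # different result

print(dot_euclid, dot_other)
```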

Dual Space, 1-form and Lowering an Index

Having a vector space, we soon want functions on it; functions $\onef: V \to F$ come to mind, and, for good measure and simplicity, we start with linear ones. Linearity means that multiplication and addition can be interchanged with application of the function:

$$ \onef(a\vec{v} + b\vec{w}) = a\onef(\vec{v}) + b\onef(\vec{w})$$

To give these functions a name: they are called 1-forms.

Similar to the argument about the dot product, let's apply a 1-form to a vector written in coordinates:

$$ \onef(v^i\ve_i) = \sum_{i=1}^n v^i \onef(\ve_i) $$

As for the scalar product, we know the whole 1-form once we know how it operates on the basis vectors:

\begin{equation} f_i := \onef(\ve_i) \in F,\qquad 1\leq i\leq n, \label{1formcoord} \end{equation}

because then we can write $\onef(\vec{v}) = f_i v^i$. This time the indices of the $f_i$ are written in the lower position, so as not to confuse these factors with vector coordinates. Let's call them coordinates as well: the coordinates of the 1-form $\onef$.
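A tiny numerical sketch (all values made up): a 1-form stored as its coordinates $f_i$, applied to a vector via $f_i v^i$, together with a check of its linearity:

```python
import numpy as np

f_down = np.array([4.0, 5.0])   # the f_i = f~(e_i)
v_up = np.array([3.0, 4.0])     # the v^i
w_up = np.array([1.0, 0.0])     # another vector, the w^i

# Applying the 1-form: f~(v) = f_i v^i.
result = np.einsum('i,i->', f_down, v_up)

# Linearity: f~(a v + b w) == a f~(v) + b f~(w).
a, b = 2.0, -1.0
lhs = f_down @ (a * v_up + b * w_up)
rhs = a * (f_down @ v_up) + b * (f_down @ w_up)

print(result)   # 32.0
```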

Now consider a fixed, given vector $\vec{f}=f^i\ve_i\in V$. It is a vector, so its coordinates $f^i$ carry raised indices again, but we chose $f$ as the symbol because we can use it to define a 1-form $\onef$ as

\begin{equation} \onef(\vec{v}) := \vec{f}\cdot \vec{v}\,. \label{dualf} \end{equation}

OK, if this is a 1-form, what are its coordinates $f_i$? Simple: just apply it to the basis vectors:

\begin{align*}f_i &= \onef(\ve_i) & \text{by \eqref{1formcoord}} \\ &= \vec{f}\cdot \ve_i & \text{by \eqref{dualf}}\\ &= (f^j\ve_j) \cdot\ve_i & \text{by \eqref{veccoords}}\\ & = f^j(\ve_j \cdot \ve_i) & \text{by \eqref{dotassoc}}\\ & = f^j g_{ji} & \text{by \eqref{metric}} \end{align*}
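The derivation above can be checked numerically. A minimal sketch (metric and coordinates chosen arbitrarily): lower the index of the $f^j$ with the metric, then verify that the resulting 1-form applied to $\vec{v}$ equals $\vec{f}\cdot\vec{v}$:

```python
import numpy as np

g = np.array([[2.0, 1.0],      # made-up symmetric metric g_ij
              [1.0, 2.0]])
f_up = np.array([1.0, 2.0])    # the f^j
v_up = np.array([3.0, 4.0])    # the v^i

# Lowering the index: f_i = f^j g_ji.
f_down = np.einsum('j,ji->i', f_up, g)

# The 1-form applied to v: f_i v^i ...
one_form_of_v = f_down @ v_up
# ... must equal the dot product g_ij f^i v^j.
dot_product = np.einsum('ij,i,j->', g, f_up, v_up)

print(f_down)   # [4. 5.]
```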

And here, taaadaaaa, we have the famous case of "lowering an index", which corresponds to exercise 2.2 in the tome Gravitation. Let's summarize: the coordinates of the 1-form corresponding to $\vec{f}$ are obtained from the vector coordinates by lowering the index with the metric, $f_i = g_{ji} f^j$.

For completeness a few remarks:

  1. There is a one-to-one correspondence between 1-forms and vectors, not just the one-way correspondence vector$\to$ 1-form used above.
  2. "Raising an index" computes the vector coordinates from the 1-form coordinates.
  3. You will often see $\braket{\onef, \vec{v}}$ used for $\onef(\vec{v})$ or $\vec{f} \cdot \vec{v}$. (Note the switch from 1-form to vector in the last term.)
  4. Since there is the one-to-one correspondence between 1-forms and vectors, you are lucky if typographical distinctions are made at all.
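Remark 2 can also be sketched in code: assuming the metric is invertible, its inverse components $g^{ij}$ undo the lowering, i.e. $f^i = g^{ij} f_j$. A hypothetical numerical round trip:

```python
import numpy as np

g = np.array([[2.0, 1.0],      # made-up symmetric metric g_ij
              [1.0, 2.0]])
g_inv = np.linalg.inv(g)       # the components g^ij

f_up = np.array([1.0, 2.0])    # vector coordinates f^i
f_down = g @ f_up              # lower: f_i = g_ij f^j (g symmetric)
f_up_again = g_inv @ f_down    # raise: f^i = g^ij f_j

print(f_up_again)   # recovers the original f^i
```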