# Preliminaries to Studying Non-Euclidean Geometry


## Introduction

Continuum mechanics applies the tools of differential geometry to study stress and strain fields within a material by treating that material as an idealized continuum. In most cases we assume Euclidean geometry for the continuum, but there are several situations in which the assumptions of Euclidean geometry no longer apply.

One example is using the tools of continuum mechanics at astronomical length scales where the effects of gravity, i.e. the curvature of space, become non-negligible, such as when studying neutron stars. [1] [2] Another example is applying continuum mechanics at the mesoscale to study material defects such as dislocations, voids and disclinations. [3] [4] [5] Normally, the continuum idealization would mean that we cannot explicitly model discontinuities in the material, such as those due to dislocations and disclinations, but as the aforementioned references show, this problem can be overcome by applying non-Euclidean geometry to the discontinuous material.

Goode homolosine projection of the Earth. [6] In flat (Euclidean) 2D space this map appears discontinuous, but in spherical (non-Euclidean) 2D space the map is continuous. By applying the appropriate spatial metric and geometric connection, we can treat the map in the appropriate non-Euclidean space, in which the discontinuities vanish.

Here is an intuitive way to see how non-Euclidean geometry helps: Consider a wall map of the Earth and notice that any such map would have a one or more discontinuities, e.g. the line where east-west hemispheres appear separated on the projection. A globe, however, does not manifest such discontinuity, because the globe's geometry is no longer Euclidean (flat) but spherical.

There are also potential future applications that make the generalization of continuum mechanics to non-Euclidean geometry worth pursuing. For example, non-Euclidean computational models could be more accurate because they would naturally handle discontinuities and incompatibilities.

Nevertheless, understanding how the geometry of space should be reflected in the equations of continuum mechanics is a challenging exercise, which is perhaps why it is not part of the typical continuum mechanics curriculum. Non-Euclidean geometry is often veiled in highly abstract mathematical language (tangent bundles, vector fields, Lie groups and Lie algebras, differential forms, ...). While some of this abstraction is necessary to ensure that we are neither overly limiting nor overly extending the applicability of the math, it is often more helpful to first grasp the basic concepts intuitively and then grok the generalization.

Here we focus on presenting the most basic abstractions intuitively, in a way that hopefully lends itself to genuine rather than merely syntactic understanding. The following sections introduce and motivate some of the key vocabulary of differential geometry. Specifically, we are interested in Riemannian geometry, which is a particular generalization of Euclidean geometry. The reader is encouraged to look up the references for further reading.

Here is a brief overview of the basic concepts described in later sections:

• Contravariant and covariant representations are different representations of a given vector or point in space.
• The metric defines how we measure distances between points in space. In general the metric can vary from one point in space to another. In Riemannian geometry, the metric is specified by the metric tensor.
• Geometric connection defines how vectors at different points in space can be compared to each other.

## Contravariant and Covariant Representation of Points, Reciprocal Basis

### Choice of Basis Vectors

Fundamental to a physical space is the ability to refer to points in space. To do so we need a set of basis vectors and a reference point (the origin). For example, let $\mathbf{e_1}$ and $\mathbf{e_2}$ be basis vectors in 2D space. Think of these as two non-parallel meter sticks extending from the origin. Each meter stick represents both a direction and a unit of length in that direction. Since $\mathbf{e_1}$ and $\mathbf{e_2}$ are not parallel to each other, we can describe any point in space by stating how many times one needs to count off the length of $\mathbf{e_1}$ in the $\mathbf{e_1}$ direction, followed by how many times one should count off the length of $\mathbf{e_2}$ in the $\mathbf{e_2}$ direction. Thus,

 $\mathbf{a}=a^1\mathbf{e_1}+a^2\mathbf{e_2} = \begin{pmatrix} a^1 \\ a^2 \end{pmatrix}$ (1.1)

is a way to describe a point $\mathbf{a}$ in terms of the given measuring sticks.

In summary:

• The choice of basis vectors $\mathbf{e_1}$ and $\mathbf{e_2}$ is arbitrary, with the only limitation being that the two should not be parallel.
• The point $\mathbf{a}$ exists independently of, and does not change with, the choice of basis vectors. However, the coefficients $a^1$ and $a^2$ are only meaningful in the context of the given basis vectors and would change with a different choice of basis vectors.

### Contravariant Representation of a Point

The coefficients $a^1$ and $a^2$ form the contravariant representation of point $\mathbf{a}$. The term "contravariant" signifies that these coefficients vary inversely with respect to the length of their respective meter sticks. In other words, if we were to choose longer meter sticks then the same point $\mathbf{a}$ would be represented by smaller coefficients and vice versa.

To see this more intuitively, think of each basis vector as analogous to a unit of length, e.g. m, cm, etc. Then, notice for example that, $5m = 500cm$ is an illustration of how making the unit smaller requires a larger numerical value to represent the same length.

In short:

• A point in space is independent of the chosen basis, but its representation depends on the choice of basis.
• The term contravariant characterizes the point's representation, not the point itself, and signifies how the representation changes with a change of basis.
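As a concrete numerical illustration of contra-variance, here is a minimal sketch using NumPy with a hypothetical 2D basis: the contravariant coefficients are found by solving a small linear system, and doubling the basis vectors halves the coefficients.

```python
import numpy as np

# Hypothetical 2D basis: columns of E are e1 and e2 (in Cartesian coordinates).
E = np.array([[1.0, 0.5],
              [0.0, 1.0]])
a = np.array([3.0, 2.0])            # a fixed point, in Cartesian coordinates

# Contravariant coefficients solve  a = a^1 e1 + a^2 e2,  i.e.  E @ coeffs = a.
coeffs = np.linalg.solve(E, a)

# Doubling the measuring sticks halves the coefficients: contra-variance.
coeffs_doubled = np.linalg.solve(2.0 * E, a)
assert np.allclose(coeffs_doubled, coeffs / 2.0)
```

The point itself never changes; only the numbers describing it do, in inverse proportion to the basis lengths.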

### Covariant Representation of a Point

The contravariant (orange) and covariant (blue) components of point A with respect to the given basis $\mathbf{e_1}$ and $\mathbf{e_2}$ and the reciprocal basis $\mathbf{e^1}$ and $\mathbf{e^2}$. The figure shows how the concept of reciprocal basis and co- and contra-variance generalizes to N-dimensions. Each reciprocal basis vector $\mathbf{e^i}$ is in fact the normal vector to the (N-1) dimensional hyperplanes that are spanned by the rest of the given basis vectors $\mathbf{e_j}$ where $i\neq j$. [7]

One might ask how one can determine the contravariant coefficients $a^1$ and $a^2$ given the point $\mathbf{a}$ and the basis vectors $\mathbf{e_1}$ and $\mathbf{e_2}$. If the given basis vectors formed a Cartesian orthonormal basis, then the coefficients would simply be the orthogonal projections of the point $\mathbf{a}$ onto the respective basis vectors, which is the same as computing the dot product (a.k.a. inner product) between $\mathbf{a}$ and the respective basis vector. However, for a general set of basis vectors that are not orthonormal, this is not the case.

Nevertheless, it turns out that we can choose a reciprocal set of basis vectors $\mathbf{e^1}$ and $\mathbf{e^2}$ such that

 \begin{align} \mathbf{e_1}\cdot\mathbf{e^1} = 1\quad & \mathbf{e_1}\cdot\mathbf{e^2} = 0 \\ \mathbf{e_2}\cdot\mathbf{e^1} = 0\quad & \mathbf{e_2}\cdot\mathbf{e^2} = 1 \end{align} (2)

With the help of these reciprocal basis vectors we can compute each contravariant coefficient as the dot product between the given point $\mathbf{a}$ and the respective reciprocal basis vector. For example,

 \begin{align} \mathbf{a}\cdot\mathbf{e^1} & = (a^1\mathbf{e_1} + a^2\mathbf{e_2})\cdot\mathbf{e^1} \\ & = a^1(\mathbf{e_1}\cdot\mathbf{e^1}) + a^2(\mathbf{e_2}\cdot\mathbf{e^1}) \\ & = a^1 \end{align} (3)

and likewise for $a^2$.
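The duality relations (2) and the projection formula (3) can be checked numerically. A minimal sketch, assuming a hypothetical non-orthonormal basis stored as the columns of a matrix; the reciprocal basis vectors are then the columns of that matrix's inverse-transpose.

```python
import numpy as np

# Hypothetical non-orthonormal 2D basis: columns of E are e1 and e2.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Reciprocal basis: columns of inv(E).T are e^1 and e^2.
E_recip = np.linalg.inv(E).T

# Duality relations (eq. 2):  e_i . e^j = delta_ij.
assert np.allclose(E.T @ E_recip, np.eye(2))

# Recover known contravariant coefficients via eq. (3):  a^i = a . e^i.
a = 2.0 * E[:, 0] + 3.0 * E[:, 1]   # a point with a^1 = 2, a^2 = 3
assert np.isclose(a @ E_recip[:, 0], 2.0)
assert np.isclose(a @ E_recip[:, 1], 3.0)
```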

Since $\mathbf{e^1}$ and $\mathbf{e^2}$ themselves form another basis, the given point $\mathbf{a}$ has a representation in that basis as follows,

 $\mathbf{a}=a_1\mathbf{e^1}+a_2\mathbf{e^2}$ (4)

The coefficients $a_1$ and $a_2$ form the covariant representation of point $\mathbf{a}$. The term "covariant" signifies that these coefficients increase or decrease when the original basis vectors $\mathbf{e_1}$ and $\mathbf{e_2}$ increase or decrease, respectively. That is, the covariant coefficients vary in the same way as the original basis vectors.
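The covariant representation can also be sketched numerically. Assuming a hypothetical basis stored as the columns of a matrix, the covariant coefficients are the dot products of $\mathbf{a}$ with the original basis vectors (by the same argument as equation 3, with the roles of the two bases swapped), and the point is recovered from the reciprocal basis per equation (4).

```python
import numpy as np

# Hypothetical basis: columns of E are e1, e2; reciprocal basis from inv(E).T.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
E_recip = np.linalg.inv(E).T

a = np.array([5.0, 3.0])            # a point in Cartesian coordinates

# Covariant coefficients:  a_i = a . e_i  (dot with the ORIGINAL basis).
a_cov = E.T @ a

# Eq. (4): the point is rebuilt from the reciprocal basis with these coefficients.
assert np.allclose(E_recip @ a_cov, a)
```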

## Metric and Metric Tensor

### Definition

A metric specifies how we compute distances between points in space given a set of basis vectors.

For example, let $\mathbf{e_1}$ and $\mathbf{e_2}$ be the basis vectors and let $\mathbf{a}$ and $\mathbf{b}$ be two points in space. If the line through $\mathbf{a}$ and $\mathbf{b}$ happened to be along one of the basis vectors, then we could use that basis vector as the measuring stick to count off the distance between the two points. However, for general positions of points $\mathbf{a}$ and $\mathbf{b}$ we have no a priori mechanism for determining the distance between them. That is because we only know a priori how to use each basis vector as a measuring stick in the direction of that basis vector. For an arbitrary direction, we would have to apply a combination of all the measuring sticks, and exactly how to do that is something that must be specified in addition to the basis vectors. The metric tensor provides this additional information.

Let $\Delta \mathbf{x}=\mathbf{b}-\mathbf{a}$ be the vector between points $\mathbf{a}$ and $\mathbf{b}$ so that

 $\Delta x^i = b^i - a^i, \quad i = \{1,2\}$ and $\Delta \mathbf{x} = \Delta x^1\mathbf{e_1} + \Delta x^2 \mathbf{e_2}$ (5)

We will define the length $\Delta s$ of the segment connecting points $\mathbf{a}$ and $\mathbf{b}$ as follows:

 \begin{align} (\Delta s)^2 = & g_{11}\Delta x^1\Delta x^1 + g_{12}\Delta x^1\Delta x^2 \\ & + g_{21}\Delta x^2\Delta x^1 + g_{22}\Delta x^2\Delta x^2 \end{align} (6)

In the above equation, the coefficients $g_{ij}, i,j=\{1,2\}$ are called the metric. It is possible to show that these coefficients obey tensor transformation rules under change of basis and thus show that they are the components of a tensor $\mathbf{g}$ known as the metric tensor. Being a tensor means that $\mathbf{g}$ represents a quantity that is independent of the choice of basis and that only the particular representation of $\mathbf{g}$ depends on the basis.

Indeed, we can write the equation above in a basis-independent way as follows:

 $(\Delta s)^2 = \Delta \mathbf{x}^T\mathbf{g}\Delta \mathbf{x}$ (7)

Therefore it is the tensor $\mathbf{g}$ alone, independent of the choice of basis, that is responsible for defining distances between points in space.

In general, $\mathbf{g}$ may vary from point to point in space, i.e. $\mathbf{g}=\mathbf{g}(\mathbf{x})$ is a tensor field. Therefore, the metric equation above is only valid within a small region in space for which we can treat $\mathbf{g}$ as approximately constant. To signify that, we typically write the metric equation in terms of the infinitesimal distance like so:

 $(ds)^2 = d\mathbf{x}^T\mathbf{g}d\mathbf{x}$ (8)
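As a numerical sketch of the metric equation, assuming a hypothetical constant basis and using the relation $g_{ij}=\mathbf{e_i}\cdot\mathbf{e_j}$ (derived in the next section): the squared length computed through $\mathbf{g}$ agrees with the ordinary Cartesian length of the same displacement.

```python
import numpy as np

# Hypothetical constant, non-orthonormal basis: columns of E are e1, e2.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Metric tensor components g_ij = e_i . e_j (constant here, since E is constant).
g = E.T @ E

# A displacement given by its contravariant components.
dx = np.array([1.0, 2.0])

# Eq. (8): squared length via the metric, versus the Cartesian length of the
# actual displacement vector E @ dx.
ds2_metric = dx @ g @ dx
ds2_cartesian = np.sum((E @ dx) ** 2)
assert np.isclose(ds2_metric, ds2_cartesian)
```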

### Relationship to the Inner Product (Dot Product) of Vectors

Once we have specified the notion of distance, we can derive other notions such as perpendicularity, straight lines and the dot product. So we should expect that the metric tensor, which defines how to measure distances, will also appear in the definition of the dot product between two vectors.

Let us start by defining the square of the line segment distance $ds$ as the dot product of the the vector $d\mathbf{x}$ with itself. By expanding the dot product in terms of the contravariant representation we get:

 \begin{align} ds^2 = & d\mathbf{x}\cdot d\mathbf{x} \\ & = \left(\sum_{i=1,2} dx^i\mathbf{e_i}\right)\cdot \left( \sum_{j=1,2} dx^j\mathbf{e_j}\right) \\ & = \sum_{i,j = 1,2} dx^i dx^j(\mathbf{e_i}\cdot\mathbf{e_j}) = \sum_{i,j=1,2}g_{ij}dx^i dx^j \end{align} (9)

The last equality above follows from the definition of the metric coefficients $g_{ij}$ in terms of their use in computing the square of the line segment $ds^2$. (cf. equation 6.)

From the last equality we derive an expression for the metric coefficients in terms of the basis vectors:

 $g_{ij} = \mathbf{e_i}\cdot\mathbf{e_j}, \quad i,j = \{1,2\}$ (10)

Now consider the dot product of any arbitrary two vectors $d\mathbf{x}$ and $d\mathbf{y}$ as follows:

 \begin{align} d\mathbf{x}\cdot d\mathbf{y} & = \left(\sum_{i=1,2} dx^i\mathbf{e_i}\right)\cdot \left( \sum_{j=1,2} dy^j\mathbf{e_j}\right) \\ & = \sum_{i,j = 1,2} dx^i dy^j(\mathbf{e_i}\cdot\mathbf{e_j}) \\ & = \sum_{i,j=1,2}g_{ij}dx^i dy^j \\ & = d\mathbf{x}^T \mathbf{g} d\mathbf{y} \end{align} (11)
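Equation (11) can be verified numerically with a hypothetical basis: the bilinear form $d\mathbf{x}^T\mathbf{g}\,d\mathbf{y}$ on the contravariant components agrees with the ordinary Cartesian dot product of the corresponding vectors.

```python
import numpy as np

# Hypothetical non-orthonormal basis (columns of E) and its metric, eq. (10).
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
g = E.T @ E                         # g_ij = e_i . e_j

# Two vectors given by their contravariant components.
dx = np.array([1.0, 2.0])
dy = np.array([3.0, -1.0])

# Eq. (11): dx^T g dy equals the Cartesian dot product of E @ dx and E @ dy.
assert np.isclose(dx @ g @ dy, (E @ dx) @ (E @ dy))
```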

### Examples

#### Cartesian Coordinates

In Cartesian coordinates the metric is just

$ds^2 = (dx^1)^2 + (dx^2)^2$

and so, the metric tensor is simply the identity tensor:

$\mathbf{g} = \mathbf{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

#### Skewed Cartesian Coordinates

Computing the length of a line segment $ds$ where the basis vectors are unit vectors subtending some arbitrary angle $\theta$ with each other. The resulting metric is: $ds^2 = (dx^1)^2+(dx^2)^2+2dx^1dx^2\cos\theta$. Notice that although this is still Euclidean (flat) space, the metric equation looks different for the current coordinate basis compared to the Cartesian basis.

Consider a set of unit basis vectors $\mathbf{e_1}$ and $\mathbf{e_2}$ that subtend some arbitrary angle $\theta$ with each other. We can use simple trigonometry as shown on the diagram to compute the length of an arbitrary line segment $ds$ as follows,

\begin{align} ds^2 & = (dx^2+dx^1 \cos \theta)^2+(dx^1 \sin \theta)^2 \\ & = (dx^1)^2 + 2dx^1dx^2 \cos \theta + (dx^2)^2 \end{align}

Therefore the metric tensor's representation for the given basis vectors is as follows:

$\mathbf{g} = \begin{bmatrix} 1 & \cos\theta \\ \cos\theta & 1 \end{bmatrix}$

Notice that, just as in the previous example of Cartesian coordinates, the current example is also in flat (Euclidean) space, and yet the metric tensor's representation is no longer the identity tensor. That is because the metric tensor's representation depends on the choice of basis vectors.

Despite the apparent differences between the metric tensor computed for the skewed Cartesian coordinates and the orthonormal Cartesian coordinates in the previous section, one can see that they are both similar in that $\mathbf{g}$ is constant for all points in space because both $\theta$ as well as the basis vector lengths remain the same throughout space.
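A quick numerical check of the skewed-coordinates metric, with an arbitrarily chosen angle $\theta$ (a hypothetical value used only for illustration): building unit basis vectors that subtend $\theta$ reproduces the metric matrix above, and the trigonometric length formula matches the Cartesian length.

```python
import numpy as np

theta = np.pi / 3                   # arbitrary angle between the unit basis vectors

# A concrete Cartesian realization of unit basis vectors subtending theta.
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(theta), np.sin(theta)])
E = np.column_stack([e1, e2])

# g_ij = e_i . e_j reproduces [[1, cos(theta)], [cos(theta), 1]].
g = E.T @ E
assert np.allclose(g, [[1.0, np.cos(theta)], [np.cos(theta), 1.0]])

# ds^2 = (dx^1)^2 + 2 dx^1 dx^2 cos(theta) + (dx^2)^2 matches the Cartesian length.
dx1, dx2 = 1.0, 2.0
ds2 = dx1**2 + 2 * dx1 * dx2 * np.cos(theta) + dx2**2
assert np.isclose(ds2, np.sum((E @ np.array([dx1, dx2])) ** 2))
```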


## References

1. Gerard A. Maugin, Magnetized deformable media in general relativity, Annales de l'I.H.P., section A, tome 15, 4 (1971), p. 275-302
2. G.A. Maugin, On the covariant equations of the relativistic electrodynamics of continua. I. General equations, Universite de Paris VI, Laboratoire de Mecanique Theorique associe au C.N.R.S., tour 66, 75230 Paris, Cedex 05, France
3. Kazuo Kondo, ON THE ANALYTICAL AND PHYSICAL FOUNDATIONS OF THE THEORY OF DISLOCATIONS AND YIELDING BY THE DIFFERENTIAL GEOMETRY OF CONTINUA, International Journal of Engineering Science, Vol. 2, pp. 219-251, Pergamon Press 1964
4. John D. Clayton, Douglas J. Bammann, and David L. McDowell, A Geometric Framework for the Kinematics of Crystals With Defects, Philosophical Magazine, vol 85, nos. 33-35, pp. 3983-4010, February 2005
5. J.D. Clayton, D.L. McDowell, D.J. Bammann, Modeling dislocations and disclinations with finite micropolar elastoplasticity, International Journal of Plasticity, 22 (2006) 210-256
6. http://commons.wikimedia.org/wiki/File%3AGoode_homolosine_projection_SW.jpg, By Strebe (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
7. http://commons.wikimedia.org/wiki/File%3ABasis.svg, By Original diagram (File:Basis.gif) due to Hernlund, redrawn by Maschen. (Own work) [Public domain], via Wikimedia Commons