Preliminaries to Studying Non-Euclidean Geometry


This article is an attempt to briefly introduce some of the most basic abstractions of differential geometry, such as contravariant versus covariant representations of vectors and points, the metric and metric tensor, and the affine connection. The goal is to provide the necessary vocabulary to enable the reader to be better equipped for understanding continuum mechanics research that makes use of Riemannian geometry.

Here is a brief overview of the basic concepts described in later sections:

  • Contravariant and covariant representations are different representations of a given vector or point in space with respect to a given set of basis vectors and a reciprocal set of basis vectors.
  • The metric specifies how to compute distances between points in space. In general the metric can vary from one point in space to another. In Riemannian geometry, the metric is specified by the metric tensor. A metric is a choice that one can attribute to a mathematical space. Once that choice is made, it further gives rise to the definition of straight line (a.k.a. geodesic), the dot product of vectors and what it means for lines to be perpendicular.
  • The affine connection defines how vectors at different points in space can be compared to each other. The term affine pertains to the ability to preserve parallel relationships. Whereas in Euclidean geometry a vector has only a length and direction but no position, in Riemannian geometry a vector cannot be dissociated from its position in space. So, before one can add, subtract or compute the inner products of an arbitrary set of vectors, there needs to be a way to specify an equivalent set of vectors that are all attached to the same point in space. This is the purpose of the affine connection.


Introduction

Continuum mechanics applies the tools of differential geometry to study stress and strain fields within a material by treating that material as an idealized continuum. In most cases, we assume Euclidean geometry for the continuum, but there are several situations when the assumptions of Euclidean geometry no longer apply.

One example is using the tools of continuum mechanics at astronomical length scales where the effects of gravity, i.e. the curvature of space, become non-negligible, such as when studying neutron stars. [1] [2] Another example is applying continuum mechanics at the mesoscale to study material defects such as dislocations, voids and disclinations. [3] [4] [5] Normally, the continuum idealization would have meant that we cannot explicitly model discontinuities in the material, such as those due to dislocations and disclinations, but as the aforementioned references show, this problem can be overcome by applying non-Euclidean geometry to the discontinuous material.

Goode-Homolosine projection of the Earth. [6] In flat (Euclidean) 2D space this map appears discontinuous, but in spherical (non-Euclidean) 2D space the map is continuous. By applying the appropriate spatial metric and affine connection, we can work with the map in the appropriate non-Euclidean space, for which the discontinuities vanish.

Here is an intuitive way to see how non-Euclidean geometry helps: consider a wall map of the Earth and notice that any such map has one or more discontinuities, e.g. the line where the east and west hemispheres appear separated on the projection. A globe, however, does not manifest such discontinuities, because the globe's geometry is no longer Euclidean (flat) but spherical.

There are other potential future applications that make it worth pursuing the generalization of continuum mechanics to non-Euclidean geometry. For example, non-Euclidean computational models could be more accurate, as they are able to naturally handle discontinuities and incompatibilities.

Nevertheless, understanding how the geometry of space should be reflected in the equations of continuum mechanics is a challenging exercise, which is perhaps why it is not part of the typical continuum mechanics curriculum. Non-Euclidean geometry is often veiled in highly abstract mathematical language (tangent bundles, vector fields, Lie groups and Lie algebras, differential forms, ...). While some of this abstraction is necessary to ensure that we are not overly limiting or overly extending the applicability of the math, it is often more helpful to first grasp the basic concepts intuitively and then grok the generalization.


Contravariant and Covariant Representation of Points, Reciprocal Basis

This section is based on lecture notes by Prof. Anna Vainchtein. [7]

Choice of Basis Vectors

Fundamental to a physical space is the ability to refer to points in space. In order to refer to points in space we need a set of basis vectors and a reference point, a.k.a. the origin. For example, let \mathbf{e_1} and \mathbf{e_2} be basis vectors in 2D space such that they are both attached at the origin but not parallel to each other. Each basis vector defines a coordinate axis and a scale for that axis. That is, it also specifies a way to assign a numerical value to each point on the axis. Since \mathbf{e_1} and \mathbf{e_2} are not parallel to each other, we can reference any point in space by stating the numerical values of its projection points along the \mathbf{e_1} and \mathbf{e_2} axes respectively. Thus,


\mathbf{a}=a^1\mathbf{e_1}+a^2\mathbf{e_2} =
\begin{pmatrix}
a^1 \\
a^2
\end{pmatrix}
(1.1)

is a way to describe a point \mathbf{a} in terms of the given measuring sticks.

In summary:

  • The choice of basis vectors \mathbf{e_1} and \mathbf{e_2} is arbitrary, with the only limitation being that these two should not be parallel.
  • The point \mathbf{a} exists independently of and does not change with the choice of basis vectors. However, the coefficients a^1 and a^2 are only meaningful in the context of the given basis vectors and would change with a different choice of basis vectors.

Contravariant Representation of a Point

The coefficients a^1 and a^2 form the contravariant representation of point \mathbf{a}. The term "contravariant" signifies that these coefficients vary inversely with respect to the length of their respective basis vectors. In other words, if we were to choose longer basis vectors then the same point \mathbf{a} would be represented by smaller coefficients and vice versa.

To see this more intuitively, think of each basis vector as analogous to a unit of length, e.g. m, cm, etc. Then notice, for example, that 5 m = 500 cm is an illustration of how making the unit smaller requires a larger numerical value to represent the same length.

In short:

  • A point in space is independent of the chosen basis, but its representation depends on the choice of basis.
  • The term contravariant characterizes the point's representation and not the point itself, and signifies how the representation changes with a change of basis.

Covariant Representation of a Point

The contravariant (orange) and covariant (blue) components of point A with respect to the given basis \mathbf{e_1} and \mathbf{e_2} and the reciprocal basis \mathbf{e^1} and \mathbf{e^2}. The figure shows how the concept of reciprocal basis and co- and contra-variance generalizes to N dimensions. Each reciprocal basis vector \mathbf{e^i} is in fact the normal vector to the (N-1)-dimensional hyperplane that is spanned by the rest of the given basis vectors \mathbf{e_j}, where i\neq j. [8]

One might ask how one can determine the contravariant coefficients a^1 and a^2 given the point \mathbf{a} and the basis vectors \mathbf{e_1} and \mathbf{e_2}. If the given basis vectors formed a Cartesian orthonormal basis, then the coefficients would simply be the orthogonal projections of the point \mathbf{a} onto the respective basis vectors, which would be the same as computing the dot product (a.k.a. inner product) between \mathbf{a} and the respective basis vector. However, for a general set of basis vectors that are not orthonormal this is not the case.

Nevertheless, it turns out that we can choose a reciprocal set of basis vectors \mathbf{e^1} and \mathbf{e^2} such that


\begin{align}
\mathbf{e_1}\cdot\mathbf{e^1} = 1\quad & \mathbf{e_1}\cdot\mathbf{e^2} = 0 \\
\mathbf{e_2}\cdot\mathbf{e^1} = 0\quad & \mathbf{e_2}\cdot\mathbf{e^2} = 1
\end{align}
(2)

With the help of these reciprocal basis vectors we can compute each contravariant coefficient as the dot product between the given point \mathbf{a} and the respective reciprocal basis vector. For example,



\begin{align}
\mathbf{a}\cdot\mathbf{e^1} & = (a^1\mathbf{e_1} + a^2\mathbf{e_2})\cdot\mathbf{e^1} \\
& = a^1(\mathbf{e_1}\cdot\mathbf{e^1}) + a^2(\mathbf{e_2}\cdot\mathbf{e^1}) \\
& = a^1
\end{align}
(3)

and likewise for a^2.

Since \mathbf{e^1} and \mathbf{e^2} themselves form another basis, the given point \mathbf{a} has a representation in that basis as follows,


\mathbf{a}=a_1\mathbf{e^1}+a_2\mathbf{e^2}
(4)

The coefficients a_1 and a_2 form the covariant representation of point \mathbf{a}. The term "covariant" signifies that these coefficients would increase or decrease when the original set of basis vectors \mathbf{e_1} and \mathbf{e_2} increase or decrease respectively. That is, the covariant coefficients vary in the same way as the original basis vectors.
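As a small numerical sketch of these relations (Python with NumPy; the 2D basis below is an arbitrary illustrative choice, not one taken from the article's figures), we can construct the reciprocal basis satisfying equation (2), recover the contravariant components via equation (3), and recover the covariant components by projecting onto the original basis:

```python
import numpy as np

# Illustrative, non-orthogonal 2D basis (an assumption for this sketch)
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])

# Reciprocal basis: require e^i . e_j = delta^i_j  (equation 2)
E = np.column_stack([e1, e2])          # columns are the given basis vectors
E_recip = np.linalg.inv(E).T           # columns are the reciprocal vectors e^1, e^2
e1_r, e2_r = E_recip[:, 0], E_recip[:, 1]

a = np.array([2.0, 3.0])               # a point, written here in Cartesian components

a_contra = np.array([a @ e1_r, a @ e2_r])   # a^i = a . e^i   (equation 3)
a_cov    = np.array([a @ e1,   a @ e2])     # a_i = a . e_i   (used later in equation 12)

# Both representations reconstruct the same point (equations 1.1 and 4)
assert np.allclose(a_contra[0]*e1 + a_contra[1]*e2, a)
assert np.allclose(a_cov[0]*e1_r + a_cov[1]*e2_r, a)
```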

Metric and Metric Tensor

Definition

A metric specifies how we compute distances between points in space given a set of basis vectors.

For example, let \mathbf{e_1} and \mathbf{e_2} be the basis vectors and let \mathbf{a} and \mathbf{b} be two points in space. If the line through \mathbf{a} and \mathbf{b} happened to be along one of the basis vectors, then we could use that basis vector as the measuring stick to count off the distance between the two points. However, for general positions of points \mathbf{a} and \mathbf{b} we have no a priori mechanism for determining the distance between them. That is because we only know a priori how to use each basis vector as a measuring stick in the direction of that basis vector. For an arbitrary direction, we would have to apply a combination of all measuring sticks, and how exactly to do that is something that needs to be specified in addition to the basis vectors. The metric tensor provides this additional information.

Let \Delta \mathbf{x}=\mathbf{b}-\mathbf{a} be the vector between points \mathbf{a} and \mathbf{b} so that


\Delta x^i = b^i - a^i, \quad   i = \{1,2\}

and


\Delta \mathbf{x} = \Delta x^1\mathbf{e_1} + \Delta x^2 \mathbf{e_2}
(5)

We will define the length \Delta s of the segment connecting points \mathbf{a} and \mathbf{b} as follows:


\begin{align}
(\Delta s)^2 = & g_{11}\Delta x^1\Delta x^1 + g_{12}\Delta x^1\Delta x^2 \\
& + g_{21}\Delta x^2\Delta x^1 + g_{22}\Delta x^2\Delta x^2
\end{align}
(6)

In the above equation, the coefficients g_{ij}, i,j=\{1,2\} are called the metric. It is possible to show that these coefficients obey tensor transformation rules under change of basis and thus show that they are the components of a tensor \mathbf{g} known as the metric tensor. Being a tensor means that \mathbf{g} represents a quantity that is independent of the choice of basis and that only the particular representation of \mathbf{g} depends on the basis.

Indeed, we can write the equation above in a basis-independent way as follows:


(\Delta s)^2 = \Delta \mathbf{x}^T\mathbf{g}\Delta \mathbf{x}
(7)

Therefore it is the tensor \mathbf{g} alone, independent of the choice of basis, that is responsible for defining distances between points in space.

In general, \mathbf{g} may vary from point to point in space, i.e. \mathbf{g}=\mathbf{g}(\mathbf{x}) is a tensor field. Therefore, the metric equation above is only valid within a small region in space for which we can treat \mathbf{g} as approximately constant. To signify that, we typically write the metric equation in terms of the infinitesimal distance like so:


(ds)^2 = d\mathbf{x}^T\mathbf{g}d\mathbf{x}
(8)
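As a minimal numerical sketch of equations (6)-(8) (Python with NumPy; the metric below is an assumed constant metric for some skewed unit basis, and the point coordinates are arbitrary):

```python
import numpy as np

# Assumed constant metric g_ij for an illustrative skewed basis
g = np.array([[1.0, 0.5],
              [0.5, 1.0]])

a = np.array([0.0, 0.0])               # contravariant components of point a
b = np.array([1.0, 2.0])               # contravariant components of point b

dx = b - a                             # Delta x^i = b^i - a^i
ds_squared = dx @ g @ dx               # (Delta s)^2 = Delta x^T g Delta x  (equation 7)
print(np.sqrt(ds_squared))             # ~2.65 here, not the Euclidean sqrt(5)
```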

Relationship to the Inner Product (Dot Product) of Vectors

Once we have specified the notion of distance, we can derive other notions such as perpendicularity, straight lines (geodesics) and the dot product. So, we should expect that the metric tensor, which defines how to measure distances, will also be used in defining the dot product between two vectors.

Let us start by defining the square of the line segment distance ds as the dot product of the vector d\mathbf{x} with itself. By expanding the dot product in terms of the contravariant representation we get:


\begin{align}
ds^2 = 
& d\mathbf{x}\cdot d\mathbf{x} \\
& = \left(\sum_{i=1,2} dx^i\mathbf{e_i}\right)\cdot \left( \sum_{j=1,2} dx^j\mathbf{e_j}\right) \\
& = \sum_{i,j = 1,2} dx^i dx^j(\mathbf{e_i}\cdot\mathbf{e_j}) = \sum_{i,j=1,2}g_{ij}dx^i dx^j
\end{align}
(9)

The last equality above follows from the definition of the metric coefficients g_{ij} in terms of their use in computing the square of the line segment ds^2. (cf. equation 6.)

From the last equality we derive an expression for the metric coefficients in terms of the basis vectors:


g_{ij} = \mathbf{e_i}\cdot\mathbf{e_j}, \quad i,j = \{1,2\}
(10)


Now consider the dot product of any arbitrary two vectors d\mathbf{x} and d\mathbf{y} as follows:


\begin{align}
d\mathbf{x}\cdot d\mathbf{y} 
& = \left(\sum_{i=1,2} dx^i\mathbf{e_i}\right)\cdot \left( \sum_{j=1,2} dy^j\mathbf{e_j}\right) \\
& = \sum_{i,j = 1,2} dx^i dy^j(\mathbf{e_i}\cdot\mathbf{e_j}) \\
& = \sum_{i,j=1,2}g_{ij}dx^i dy^j \\
& = d\mathbf{x}^T \mathbf{g} d\mathbf{y}
\end{align}
(11)

In other words, the metric tensor is the dot-product operator. While for a Cartesian basis in Euclidean space we could have simply used d\mathbf{x}\cdot d\mathbf{y} = d\mathbf{x}^T d\mathbf{y}, in the general case we need to use the metric tensor to compute the dot product as per equation (11) above.
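The following sketch (Python with NumPy; the 60-degree unit basis is an illustrative assumption) checks equation (11) numerically: the metric-weighted product of the skewed-basis components agrees with the ordinary Euclidean dot product of the same two vectors written in Cartesian form, whereas the naive component product does not:

```python
import numpy as np

# Illustrative unit basis vectors subtending 60 degrees
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(np.pi/3), np.sin(np.pi/3)])
E = np.column_stack([e1, e2])
g = E.T @ E                            # g_ij = e_i . e_j   (equation 10)

x = np.array([1.0, 2.0])               # contravariant components w.r.t. e_1, e_2
y = np.array([3.0, -1.0])

dot_via_metric = x @ g @ y             # x . y = x^T g y    (equation 11)
dot_cartesian  = (E @ x) @ (E @ y)     # same vectors assembled in Cartesian form
assert np.isclose(dot_via_metric, dot_cartesian)
print(dot_via_metric, x @ y)           # the naive x^T y gives a different number
```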

Relationship to the Contravariant and Covariant Representation. Raising and Lowering of Indices

Consider the following expression for one of the covariant components of an arbitrary point \mathbf{a}. By re-representing the given point using both its covariant and contravariant representation we can derive the following:


\begin{align}
a_1 
& = (a_1\mathbf{e^1} + a_2\mathbf{e^2})\cdot \mathbf{e_1} = \mathbf{a} \cdot \mathbf{e_1} \\
& = (a^1\mathbf{e_1} + a^2\mathbf{e_2})\cdot \mathbf{e_1} \\
& = a^1(\mathbf{e_1}\cdot \mathbf{e_1})  + a^2 (\mathbf{e_2} \cdot \mathbf{e_1}) \\
& = g_{11} a^1 + g_{12}a^2
\end{align}
(12)

In other words, we conclude that the metric tensor coefficients can be used to convert between the contravariant and covariant representations. In general we can conclude that:


\begin{align}
a_i = \sum_{j=1,2} g_{ij}a^j \\
a^i = \sum_{j=1,2} g^{ij}a_j
\end{align}
(13)

In the above equation the g^{ij} coefficients are the coefficients of the metric tensor in terms of the reciprocal basis vectors. These constitute the contravariant representation of \mathbf{g} while the coefficients g_{ij} make up the covariant representation.

The conversions from contravariant to covariant representation and vice versa are known as lowering and raising of indices respectively. So, the metric tensor's covariant or contravariant representation is used in the operation of lowering or raising of indices respectively.
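As a short sketch of equation (13) (Python with NumPy; the metric values are an illustrative assumption), note that the contravariant components g^{ij} form the matrix inverse of the covariant components g_{ij}, so lowering an index and then raising it again recovers the original components:

```python
import numpy as np

g_lower = np.array([[1.0, 0.5],        # g_ij, covariant components of the metric
                    [0.5, 1.0]])
g_upper = np.linalg.inv(g_lower)       # g^ij, contravariant components of the metric

a_contra = np.array([2.0, -1.0])       # a^j
a_cov    = g_lower @ a_contra          # a_i = sum_j g_ij a^j   (lowering the index)
a_back   = g_upper @ a_cov             # a^i = sum_j g^ij a_j   (raising the index)
assert np.allclose(a_back, a_contra)
```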

Examples

Cartesian Coordinates

In Cartesian coordinates the metric is just

ds^2 = (dx^1)^2 + (dx^2)^2

and so, the metric tensor is simply the identity tensor:


\mathbf{g} = \mathbf{I} = 
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}

Skewed Cartesian Coordinates

Computing the length of a line segment ds where the basis vectors are unit vectors subtending some arbitrary angle \theta with each other. The resulting metric is: ds^2 = (dx^1)^2+(dx^2)^2+2dx^1dx^2\cos\theta. Notice that although this is still Euclidean (flat) space, the metric equation looks different for the current coordinate basis compared to the Cartesian basis.

Consider a set of unit basis vectors \mathbf{e_1} and \mathbf{e_2} that subtend some arbitrary angle \theta with each other. We can use simple trigonometry as shown on the diagram to compute the length of an arbitrary line segment ds as follows,


\begin{align}
ds^2 & = (dx^2+dx^1 \cos \theta)^2+(dx^1 \sin \theta)^2 \\
     & = (dx^1)^2 + 2dx^1dx^2 \cos \theta + (dx^2)^2
\end{align}

Therefore the metric tensor's representation for the given basis vectors is as follows:


\mathbf{g} =
\begin{bmatrix}
1 & \cos\theta \\
\cos\theta & 1
\end{bmatrix}

Notice that, just as with the previous example of Cartesian coordinates, the current example is also for flat (Euclidean) space, and yet the metric tensor's representation is no longer the identity tensor. That is because the metric tensor's representation depends on the choice of basis vectors.

Despite the apparent differences between the metric tensor computed for the skewed Cartesian coordinates and the orthonormal Cartesian coordinates in the previous section, one can see that they are both similar in that \mathbf{g} is constant for all points in space because both \theta as well as the basis vector lengths remain the same throughout space.
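As a quick numerical check of the skewed-basis metric above (Python with NumPy; the angle and segment components are arbitrary illustrative values), the length computed from the metric formula agrees with the ordinary Euclidean length of the same segment assembled from the basis vectors:

```python
import numpy as np

theta = np.deg2rad(70.0)                        # illustrative angle between the unit basis vectors
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(theta), np.sin(theta)])

dx1, dx2 = 0.3, -0.8                            # components of the segment along e_1 and e_2
segment = dx1 * e1 + dx2 * e2                   # the same segment in Cartesian form

ds2_metric = dx1**2 + 2*dx1*dx2*np.cos(theta) + dx2**2
ds2_direct = segment @ segment                  # ordinary Euclidean squared length
assert np.isclose(ds2_metric, ds2_direct)
```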


Spherical Coordinates

In standard spherical coordinates (R, \theta, \varphi) the metric is ds^2 = R^2 d\theta^2 + R^2 \sin^2 \theta d\varphi^2

In spherical coordinates, the great circles serve as coordinate axes and as we can see from the diagram in this section, the square length ds^2 of an infinitesimal line segment can be computed as follows:


ds^2 = R^2 d\theta^2 + R^2 \sin^2 \theta d\varphi^2

where R is the radius of the sphere, \theta is the angle subtended with the vertical axis and \varphi is the azimuth angle measured from a reference longitude line.

Based on the above equation we conclude that the metric tensor for spherical coordinates has the following form:


\mathbf{g} =
\begin{bmatrix}
R^2 & 0 \\
0 & R^2 \sin^2 \theta
\end{bmatrix}

Notice that unlike the previous examples, which were all for Euclidean space, in this case the metric tensor would vary from point to point on the sphere because its value depends on \theta, which is one of the coordinates.
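A minimal sketch of using this metric (Python with NumPy; the radius, angles and displacement are arbitrary illustrative values) evaluates \mathbf{g} at a point and applies it to a small coordinate displacement (d\theta, d\varphi):

```python
import numpy as np

R = 6371.0                                      # sphere radius, e.g. Earth's radius in km
theta = np.deg2rad(50.0)                        # polar angle of the point of interest

g = np.array([[R**2, 0.0],                      # metric tensor at this value of theta
              [0.0, (R * np.sin(theta))**2]])

dx = np.array([np.deg2rad(0.01), np.deg2rad(0.01)])   # (d_theta, d_phi)
ds = np.sqrt(dx @ g @ dx)                       # arc length of the small segment
print(ds)                                       # changes if the same dx is taken at a different theta
```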


Poincaré Disk

To an observer outside of the disk, it would appear that objects within the disk shrink inversely proportionally to their distance to the edge of the disk. [9]
The Poincaré disk is an illustration of hyperbolic geometry where the geodesics (that is, the straight lines within its geometry) appear as circular arcs or diameters that meet the disk's boundary at right angles. This diagram shows how the Poincaré disk can be thought of as a projection of a hyperbolic surface. [10]


The Poincaré disk has the following curious property: to an observer outside of the disk, it would appear that objects within the disk shrink inversely proportionally to their distance to the edge. From the perspective of an inhabitant of the disk, the disk would appear to be an infinite space, since the inhabitant would never be able to reach the edge of the disk.

To the outside observer, the disk would appear as a finite 2D disk of radius R, but with a non-Euclidean metric. Using a standard Cartesian basis as our reference basis, whose origin is at the center of the disk, we can define the metric as follows:


ds^2 = \frac{(dx^1)^2 + (dx^2)^2}{R^2 - ((x^1)^2 + (x^2)^2)}

Consequently, the metric tensor is as follows:


\mathbf{g} =
\begin{bmatrix}
\frac{1}{R^2 - ((x^1)^2 + (x^2)^2)} & 0 \\
0 & \frac{1}{R^2 - ((x^1)^2 + (x^2)^2)}
\end{bmatrix}

As with the previous example, the resulting metric is not constant but varies throughout space, which is what gives rise to the non-Euclidean geometry.

This example illustrates that it is the metric that induces the curvature of space (i.e. whether it is non-Euclidean or Euclidean) rather than space having curvature apart from the metric. In other words, from a mathematical perspective, the choice of metric is independent from the choice of space.

In this case, for example, we had a 2D disk, which looked like any ordinary flat disk until we applied a hyperbolic metric to it, which induced hyperbolic (non-Euclidean) geometry on the disk. When it comes to physical space, however, the choice of metric is less arbitrary and is typically associated with the choice of some physical "meter stick".
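The position dependence of the disk metric can also be seen numerically. In the sketch below (Python with NumPy, following the metric written above with R = 1; the sample points are arbitrary), the same coordinate step corresponds to a larger length ds the closer the point is to the edge of the disk:

```python
import numpy as np

R = 1.0

def g_disk(x1, x2):
    # Metric tensor at the point (x1, x2), following the expression above
    f = 1.0 / (R**2 - (x1**2 + x2**2))
    return np.array([[f, 0.0],
                     [0.0, f]])

dx = np.array([0.01, 0.0])                      # the same coordinate step everywhere
for x1 in (0.0, 0.5, 0.9, 0.99):
    ds = np.sqrt(dx @ g_disk(x1, 0.0) @ dx)
    print(x1, ds)                               # ds grows as the point approaches the edge
```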


Affine Connection and Covariant Derivative

This section is based on sections 85 and 86 of the book by L.D. Landau and E.M. Lifshitz. [11]

The result of parallel transporting a vector on a sphere depends on the path taken. Going from point N to A along the NA longitude line yields a vector different from what one would get by parallel transporting the same original vector first along the NB longitude line and then along the BA latitude line. [12]

An affine connection specifies how vectors located at different points in space can be compared to one another. This is needed in order to be able to compute the differential of a vector with respect to displacement in space along a given direction. Such a differential is known as the covariant derivative of the vector.

Computing such a derivative requires finding the difference between vectors at two distinct points in space. In Euclidean geometry, this is a simple matter of computing the difference of the respective vector components. That is because in Euclidean geometry vector quantities are fully determined by their vector components irrespective of spatial location. For example, vector \mathbf{v} is fully determined by its components v^1 and v^2. In non-Euclidean geometry, however, vector quantities are specific to points in space, so without any additional information it would not be meaningful to compute differences between vectors located at different points in space. This additional information is the affine connection, and just as with the metric, an affine connection is not inherent within a given space but must be separately specified; it is a matter of choice.

So, before we can compute the difference between two vectors located at different points in space, one of the vectors must first be brought to the same location as the other vector. This must be done without altering the value of the vector that is being transported. In Euclidean space, this is analogous to translating a vector while keeping it parallel to itself, which is known as parallel transport. In non-Euclidean space, however, parallel transport depends on the path and is not uniquely defined a priori, as can be seen in the illustration here. This is why we need additional information to uniquely define parallel transport.

One way in which such additional information is provided is by specifying how the contravariant components of vectors should change under parallel transport. In particular, let \mathbf{v} be some vector at location \mathbf{x} that we wish to parallel transport to another location \mathbf{x}+d\mathbf{x}. Furthermore, let (dx^1, dx^2)^T and (v^1, v^2)^T be the contravariant components of d\mathbf{x} and \mathbf{v} respectively. Then the change of (v^1, v^2)^T under parallel transport over the infinitesimal displacement d\mathbf{x} is given by:


\begin{align}
dv^1 = \sum_{j,k=\{1,2\}} \Gamma^1_{jk} v^j dx^k \\
dv^2 = \sum_{j,k=\{1,2\}} \Gamma^2_{jk} v^j dx^k
\end{align}

The basic idea is that over infinitesimal displacements the change can be approximated with a linear expression in terms of the vector's own components and the components of the displacement vector. The linear coefficients \Gamma^i_{jk} are known as connection coefficients or Christoffel symbols. It is important to keep in mind that the connection coefficients, just like the metric, are a function of space and can vary from one point in space to another.

Note the similarity between the above expressions and the expression in equation (11). It looks as if in each expression we are computing a kind of "inner product" between the vectors \mathbf{v} and d\mathbf{x}, where \mathbf{\Gamma^1} and \mathbf{\Gamma^2} play the role of the inner product operator. Effectively we are saying: project \mathbf{v} onto d\mathbf{x} one way to get dv^1 and then project \mathbf{v} onto d\mathbf{x} another way to get dv^2. However, this is more of an intuitive interpretation. In fact, unlike the metric tensor, the connection coefficients are not tensors as they do not obey tensor transformation rules.

It is worth noting that although the connection coefficients are a matter of choice, there is actually a particular choice that is in a sense natural for the given metric. Such a connection is known as the Levi-Civita connection. The connection coefficients of the Levi-Civita connection can be derived from the metric tensor by differentiating it with respect to the spatial coordinates. For more detail, the reader is referred to [11].

For Euclidean space described in Cartesian coordinates the connection coefficients all vanish, so it is reasonable to expect that these coefficients have something to do with the curvature of space. Indeed, the components of the Riemann curvature tensor, which is a rank 4 tensor, can be expressed in terms of the connection coefficients and their spatial derivatives.
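As an illustration (Python with NumPy), the sketch below parallel transports a vector around a latitude circle on the unit sphere using the Levi-Civita connection for the metric ds^2 = d\theta^2 + \sin^2\theta\, d\varphi^2. Note that it uses the common sign convention dv^i = -\Gamma^i_{jk} v^j dx^k, which differs in sign from the form written above; conventions vary between texts. The transported vector does not return to its original components after a full loop, which is the path dependence shown in the figure at the start of this section:

```python
import numpy as np

def christoffel(theta):
    # Nonzero Christoffel symbols of the unit sphere in (theta, phi) coordinates:
    # Gamma^theta_{phi phi} = -sin(theta)cos(theta), Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cot(theta)
    G = np.zeros((2, 2, 2))             # G[i, j, k] = Gamma^i_{jk}; index 0 = theta, 1 = phi
    G[0, 1, 1] = -np.sin(theta) * np.cos(theta)
    G[1, 0, 1] = G[1, 1, 0] = np.cos(theta) / np.sin(theta)
    return G

theta0 = np.deg2rad(60.0)               # transport around the latitude circle theta = 60 degrees
v = np.array([1.0, 0.0])                # initial components (v^theta, v^phi)
n_steps = 20000
d_phi = 2.0 * np.pi / n_steps

G = christoffel(theta0)                 # constant along a latitude circle
dx = np.array([0.0, d_phi])             # each step is purely in the phi direction
for _ in range(n_steps):
    v = v - np.einsum('ijk,j,k->i', G, v, dx)   # dv^i = -Gamma^i_{jk} v^j dx^k

print(v)   # differs from (1, 0): the vector returns rotated after the closed loop
```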


References

  1. Gerard A. Maugin, Magnetized deformable media in general relativity, Annales de l'I.H.P., section A, tome 15, no. 4 (1971), pp. 275-302
  2. G.A. Maugin, On the covariant equations of the relativistic electrodynamics of continua. I. General equations, Universite de Paris VI, Laboratoire de Mecanique Theorique associe au C.N.R.S., tour 66, 75230 Paris, Cedex 05, France
  3. Kazuo Kondo, On the analytical and physical foundations of the theory of dislocations and yielding by the differential geometry of continua, International Journal of Engineering Science, Vol. 2, pp. 219-251, Pergamon Press, 1964
  4. John D. Clayton, Douglas J. Bammann, and David L. McDowell, A Geometric Framework for the Kinematics of Crystals With Defects, Philosophical Magazine, vol 85, nos. 33-35, pp. 3983-4010, February 2005
  5. J.D. Clayton, D.L. McDowell, D.J. Bammann, Modeling dislocations and disclinations with finite micropolar elastoplasticity, International Journal of Plasticity, 22 (2006) 210-256
  6. http://commons.wikimedia.org/wiki/File%3AGoode_homolosine_projection_SW.jpg, By Strebe (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
  7. Anna Vainchtein, Change of basis, reciprocal basis, covariant and contravariant components of a vector and metric tensor, Math 1550 Lecture Notes, http://physastro-msci.tripod.com/webonmediacontents/notes1.pdf
  8. http://commons.wikimedia.org/wiki/File%3ABasis.svg, By Original diagram (File:Basis.gif) due to Hernlund, redrawn by Maschen. (Own work) [Public domain], via Wikimedia Commons
  9. Wolfram MathWorld, Poincaré Hyperbolic Disk, http://mathworld.wolfram.com/PoincareHyperbolicDisk.html
  10. http://commons.wikimedia.org/wiki/File%3AHyperboloidProjection.png, By Selfstudier (Own work) [CC0], via Wikimedia Commons
  11. L.D. Landau and E.M. Lifshitz, The Classical Theory of Fields, 4th Revised English Edition, Course of Theoretical Physics Volume 2, Butterworth-Heinemann, 1996
  12. http://commons.wikimedia.org/wiki/File%3AParallel_Transport.svg, By Fred the Oyster [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons