Robert's Mistakes

three types of differential geometry

I learned the following from Ben Webster during his 2021 course on symplectic geometry.

A Riemannian metric g(u,v) is an inner product. It can be used to measure distances and angles. And using that we can construct a measure for area. But this area is unsigned.

A symplectic metric ω(u,v) measures area directly, but implies a convention for assigning a sign to areas.

An almost complex structure J is a matrix that squares to negative identity: J² = -Id.

J is often used to denote a matrix that rotates by 90 degrees.
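As a tiny sanity check (plain Python, a sketch of my own), here's that 90-degree rotation matrix squaring to -Id:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, -1],
     [1, 0]]  # rotates vectors by 90 degrees counterclockwise

print(matmul(J, J))  # [[-1, 0], [0, -1]], i.e. -Id
```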

If the curvature tensor in a Riemannian geometry is zero, we call the geometry flat (it looks locally like R^n). This is similar to a symplectic form being closed. And the same holds for almost complex structures: if it locally looks like C^n, it can be called a complex structure (and not just almost), although the corresponding tensor is a bit more involved.

We can combine all those like so:

g(u,v) = \omega(u, J v)

So if the right-hand side gives a Riemannian metric, we call the corresponding symplectic and almost complex structures compatible. Any symplectic structure has a compatible almost complex structure. Put differently, any symplectic form ω lives in such a compatible triple (g, ω, J) where the above holds.
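A minimal numerical sketch of that compatibility equation in the 2d standard case (function names are mine): with the standard area form ω and the 90-degree rotation J, the right-hand side recovers the ordinary Euclidean dot product.

```python
def omega(u, v):           # standard symplectic form on R^2 (signed area)
    return u[0] * v[1] - u[1] * v[0]

def Jv(v):                 # J rotates by 90 degrees: (x, y) -> (-y, x)
    return (-v[1], v[0])

def g(u, v):               # the compatible metric g(u,v) = omega(u, Jv)
    return omega(u, Jv(v))

u, v = (1.0, 2.0), (3.0, 4.0)
print(g(u, v))             # 11.0, same as the dot product 1*3 + 2*4
```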

The requirements for having a symplectic form are the strictest (compatible triples also exist for almost symplectic structures, where it may be impossible to find an ω which is closed; for example, S^6 is almost complex, but not symplectic).

One requirement is that the dimension of the space or manifold has to be even (although the symplectic manifold may be embedded in a higher-dimensional space), and that also holds for (almost) complex structures. But they don't require that the symplectic form is closed, which simply says that its derivative must be zero: dω = 0. If that's not the case, we can call it an almost symplectic structure.

Fixing ω and J that way gives what is called a Kähler metric or Kähler manifold.

However, answering whether a symplectic form exists can be difficult, and amounts to the question whether the second de Rham cohomology vanishes. For example: H^2(S^6, R) = {0}, hence S^6 has no symplectic structure.

You can also define almost symplectic (and almost complex) structures on any vector bundle, but there is no obvious generalization of the closedness condition dω = 0 for that.

So, if a vector bundle has a metric, then having an almost symplectic structure is equivalent to having an almost complex structure.

In that general setting it may not be clear how to define that a vector bundle is flat. We can ask whether a connection on a vector bundle is flat, a question Joshua Silva asked, but I didn't get the details.

Intuition: you often cannot interpolate between a pair of symplectic forms, tω_1 + (1-t)ω_2.

By the way, a collection of summands that add to one is called a partition of unity, which appears quite a lot in Ben's lectures.

While subspaces of inner product spaces are always inner product spaces themselves, this isn't true of symplectic spaces. There are a number of properties one might hope for if a subspace is not itself symplectic (Lagrangian, isotropic, coisotropic, ...). Similarly for almost complex structures: for a random submanifold it's unlikely that its tangent space turns out to be almost complex as well.

Theorem: In a compatible triple, if a submanifold is almost complex, then it's also symplectic. The reverse need not be true, though.

The space of compatible almost complex structures is connected (in fact contractible), which means you can interpolate between them: g_t(J) = (1-t)ω(u, Jv) + tω(u, J_0 v). Note: g_t is a metric. Compare that to the statement above, that you cannot interpolate between symplectic forms themselves.

interpreting Lagrangians

Action = \int (K-V) \, dt

So the Action is the "total of the difference of kinetic and potential energy". The approach can handle minima just as well as maxima, so it doesn't really matter which is subtracted from which. Traditionally we'd get a maximum when most energy is in the motion, like a ball having most energy when it's fastest. This is why potential energy is typically denoted with a minus sign!

The plan is to optimize a path with respect to the action, that is, to find a minimum (or maximum).

Earlier, Fermat's principle of least time helped to understand the path a light ray takes when crossing from one medium over to another.

from dot products to 2-forms

Vector algebra combines coordinates, adding and scaling them, with the dot product. The dot or scalar product is a function \cdot : R^n × R^n → R taking two vectors to a number, and it can be used to measure lengths and angles.

Then, in 1898, Giuseppe Peano captured the dot product axiomatically, and since this turns out to be a more general concept, it deserves a new name: it's called an inner product. So he turned the dot product...

a \cdot b = \sum_i a_i \overline{b_i} = a_1 \overline{b_1} + a_2 \overline{b_2} + \dots

...into a handful of requirements for a better and fancier inner product \odot : V × V → F. Which must be proven to be

(1) conjugate symmetric

a \odot b = \overline{b \odot a}

(2) linear in its first argument

(a+b) \odot c = a \odot c + b \odot c
s (a \odot b) = (s a) \odot b

Note: When there's no complex conjugation, this boils down to \odot being commutative, and since that implies it's linear in both arguments we can just call it bilinear. Just take care that, in the complex case, there's an extra conjugation required when flipping the arguments.

Note: The topic of this post has an intricate relationship to complex numbers. And it can be done using complex numbers instead of real numbers. Or even quaternions! I couldn't resist adding extra nooks and handles for complex numbers where that's possible.

There's one more constraint. To keep it from being degenerate, we can require it to be

(3) positive definite

a \odot a = 0 \, \text{implies} \, a = 0
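Here is a quick numerical sanity check of axioms (1)-(3) for the complex dot product from above (a sketch of my own; the name `inner` is made up):

```python
def inner(a, b):
    """Complex dot product: sum_i a_i * conjugate(b_i)."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 2j, 3 - 1j]
b = [2 - 1j, 1j]
s = 2 + 3j

# (1) conjugate symmetric
assert inner(a, b) == inner(b, a).conjugate()
# (2) linear in the first argument
c = [x + y for x, y in zip(a, b)]
assert inner(c, b) == inner(a, b) + inner(b, b)
assert inner([s * x for x in a], b) == s * inner(a, b)
# (3) positive definite: <a,a> is real and positive for a != 0
assert inner(a, a).imag == 0 and inner(a, a).real > 0
print("all three axioms hold for this sample")
```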

Any such operation allows us to construct a norm ||a|| from it, and that means we can do differential geometry! This is what Oliver Heaviside and J. Willard Gibbs formalized to get vector calculus! And what Bernhard Riemann suggested to do on manifolds.

You see, from just a few rules we can recover all of vector algebra, basically geometry itself. But what happens, if we change (1) just slightly, and make that product anti-commutative? Will we get an alien geometry? Read on to find out!

For now, let's merely think about some friendly, real, and flat vector space R^n: there, all inner products can be written as a matrix operation like this:

a^T M b

Note: Doing a transpose suggests that one argument of the inner product is secretly a dual vector. Here's another irritating foreboding of the twist ahead.

Note: In the complex world, you'd take the conjugate transpose instead.

Of course M has to satisfy our axioms (1) - (3). Linearity simplifies things greatly here: we can just take any symmetric, nondegenerate matrix.

Or we could say that it must be an invertible matrix, whose off-diagonal elements are mirrored across the diagonal, like so:

a_{ij} = a_{ji}

Or anything you'd get from a base change of such a matrix.

Here's a 2d example:

\begin{pmatrix} a & b \\ b & c \end{pmatrix}
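To make that concrete, a small sketch (plain Python, names are mine) evaluating a^T M b for a symmetric 2×2 matrix of that kind:

```python
def quad(a, M, b):
    """Compute a^T M b for vectors and a matrix given as lists."""
    return sum(a[i] * M[i][j] * b[j]
               for i in range(len(a)) for j in range(len(b)))

M = [[2, 1],
     [1, 3]]          # symmetric: M[i][j] == M[j][i]

a, b = [1, 0], [0, 1]
print(quad(a, M, b), quad(b, M, a))  # 1 1 -- symmetric in its arguments
```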

skew symmetry

Now, Peano's symmetry axiom could be modified to involve a minus sign instead. The resulting operation can be called alternating, or, since we're talking about matrices, skew symmetric.

(1b). anti symmetric in its two arguments

a \cdot b = - \, b \cdot a

We can write symmetric matrices as quadratic forms:

ax^2 + bxy + cy^2

Similarly, we can write any skew-symmetric matrix...

\begin{pmatrix} 0 & b \\ -b & 0 \end{pmatrix}

...as a differential 2-form:

b \, dx \wedge dy

You can view that as a fancy way to write matrices. Or you could take things more seriously, understand it as a function in dx and dy, and treat that wedge sign \wedge as if it were a multiplication sign, with the extra twist that dx \wedge dy = - \, dy \wedge dx.
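A sketch of that 2-form in action (my own example): as a skew-symmetric matrix it computes b times the signed area of the parallelogram spanned by two vectors, and swapping the arguments flips the sign.

```python
def omega(u, v, b=1):
    # u^T [[0, b], [-b, 0]] v  =  b * (u_x v_y - u_y v_x)
    return b * (u[0] * v[1] - u[1] * v[0])

u, v = (2, 0), (0, 3)
print(omega(u, v))   # 6:  area of the 2x3 rectangle, positive orientation
print(omega(v, u))   # -6: swapped arguments, opposite orientation
```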

what kind of geometry can be gained from there?

ten approaches to symplectic geometry

  1. The difference of potential and kinetic energy, written as a function L of position coordinate q and its derivative q', is called a Lagrangian. We can solve for when the action is stationary to obtain a curve as solution!

  2. Their sum has the same property, and if we write that using two variables instead of one and its derivative, and using twice as many equations, we get something much more symmetrical in the coordinates: a symplectic manifold.

(Legendre transform) Hamiltonian is the total energy

  1. Symplectic geometry is the geometry of phase space, which has the Poisson bracket as product. It is the parameter space of a system given by its position and velocity coordinates. Any similarly coupled pair of coordinate sets would do.

a. Given a 2n-dimensional space, described as (q_i, p_i), with a product {x,y} where {q_i,q_j} = 0 = {p_i,p_j} and {q_i,p_i} = 1, we say q_i and p_i are a conjugate pair.

b. If, instead of 1, you get -iħ, it's a well-known relation from quantum mechanics.

  1. Take vector calculus, gradients and such, but replace the dot product with a symplectic product! The dot product is zero when two vectors are orthogonal. The symplectic product is zero when they're parallel!

  2. Using gradients \nabla u we can state the normal dot product as (\nabla u)^T (\nabla v), and the symplectic product as (\nabla u)^T J (\nabla v), where J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} has J² = -Id.

  3. When you have an almost complex structure (say, such a J is part of your matrix group), then you have a symplectic manifold!

  4. Just as homology can be related to normal geometry via Morse functions, Floer homology is about analyzing symplectic manifolds via their symplectic metric ω.

  5. A differential 1-form at a point is just like a vector: a linear combination of basis vectors (which makes it a dual vector), but living in a flat tangent space. Symplectomorphisms preserve a nondegenerate 2-form, which is also a linear combination, but of pairs of basis vectors.

  6. Put differently, a symplectomorphism preserves the area of a parallelepiped!

  7. ω defines a vector field on a manifold, called symplectic flow.

  8. ω: R^2n × R^2n -> R is: linear in both arguments: ω(au,v) = ω(u,av) = aω(u,v); alternating or anti-symmetric: ω(u,v) = -ω(v,u); and nondegenerate: for every nonzero u there is a v such that ω(u,v) is nonzero.

b. The popular scalar product can be defined in almost the same way, except that it must be symmetric or commutative (u·v = v·u) instead of alternating.
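The contrast between the two products can be sketched numerically (plain Python, function names are mine): the dot product vanishes on orthogonal vectors, the symplectic product on parallel ones.

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def symp(u, v):               # u^T J v with J = [[0, 1], [-1, 0]]
    return u[0] * v[1] - u[1] * v[0]

u = (1, 2)
print(dot(u, (-2, 1)))        # 0: (-2, 1) is orthogonal to u
print(symp(u, (2, 4)))        # 0: (2, 4) is parallel to u
print(symp(u, u))             # 0: every vector is "symplectically
                              #    orthogonal" to itself
```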

Symplectic geometry is not complex!

Or is it? "Symplectic" is a Greek-ish word related to the word complex, to which Hermann Weyl was indeed seeking proximity around 1939, while at the same time trying to avoid the then-prevalent notion of a "complex line bundle", for that could easily be confused with, well, simply a bundle of complex lines in the complex plane.

Introducing it with Greeks seems appropriate, as symplectic geometry does relate to the motion of the planets. But as such it is hypermodern, well beyond Isaac Newton's way to do mechanics.

Our understanding of the motion of the planets would eventually be cast in terms of William Rowan Hamilton's Hamiltonians in 1833! They got their name in analogy to Joseph-Louis Lagrange's Lagrangians from 1788, with all due sensationalism, merely reformulating Lagrange's method slightly.

You see, to capture the motion parameters of a planet, say for doing computations, you might want to give its position and its velocity at some point in time. Locally, they usually have the same number of dimensions, and one is clearly tangent to the other!

From there, symplectic geometry proceeds one step further: dropping physical constraints, it looks at the general kind of double space which the above reformulations of mechanics end up describing. Let's have some details:

The so-called Lagrangian specifies the difference of kinetic and potential energy for any possible state of a system. We can then use the Euler-Lagrange equations to compute the change in position and velocity after a short while, given the current position and velocity.

L_x(t, q(t), \dot{q}(t)) - \frac{d}{dt} L_v(t, q(t), \dot{q}(t)) = 0

It says, that potential energy LxL_x equals the change in kinetic energy LvL_v.

Now, a Hamiltonian replaces the kinetic energy term by an expression depending on momentum, and (if I understand right) that has the interesting effect that the product of momentum and position becomes invariant! That product is some kind of surface area, and its value is conserved!

Interestingly, Hamilton's reformulation is a standard trick for working with ordinary differential equations (ODEs), called the Legendre transformation. It transforms a second-order differential equation (one containing a variable's second derivative) into a system of first-order differential equations. This is a common trick: instead of the derivative \dot{q} of a single variable q we introduce an extra variable p.

\dot{q} = \frac{\partial H}{\partial p}
\dot{p} = - \frac{\partial H}{\partial q}

The Hamiltonian can thus be called the Legendre-dual of the Lagrangian.

Now, such a system of first order ordinary differential equations (ODE) can be understood as a vector field. Here's another fancy way to write that in one line, using differentials as base vectors:

X_H = \frac{\partial H}{\partial p} \, d/dq - \frac{\partial H}{\partial q} \, d/dp

X_H is called the Hamiltonian vector field; it lives on phase space.

You can just add indices to the above and put it in a big sum. Yet another alternative and shorter way to write this is like so:

X_H = H_p \, d/dq - H_q \, d/dp
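As a sketch of how such a vector field is actually used (my own example, not from the course): integrating Hamilton's equations for the harmonic oscillator H = (p² + q²)/2 with the symplectic Euler method keeps the energy nearly constant.

```python
def step(q, p, dt):
    """Symplectic Euler: update p first, then q with the new p."""
    p = p - dt * q        # dp/dt = -dH/dq = -q
    q = q + dt * p        # dq/dt =  dH/dp =  p
    return q, p

q, p, dt = 1.0, 0.0, 0.01
H0 = (p * p + q * q) / 2          # initial energy, 0.5
for _ in range(10_000):           # integrate for 100 time units
    q, p = step(q, p, dt)
H1 = (p * p + q * q) / 2
print(abs(H1 - H0) < 1e-2)        # True: the energy only wobbles slightly
```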

You could picture yourself as a point on a warped landscape, and a flat tangent to that surface for the momentarily possible velocities. This is the moment in which we are typically expected to imagine a bundle of tangent planes, one attached to every point on the landscape, which are also somehow connected among themselves accordingly. Something, something complicated.

I have to admit, such a tangent bundle is a nice idea, for it suggests to project small linear changes due to velocity and in response to passing forward a short time span, back down onto the position surface. So we might somehow sum or integrate ourselves a path bit by bit... something, something.

image: blackbody spectrum

The basic idea: to find an orbit, we might color position space by potential energy, and the tangent velocity space by kinetic energy, and look at subspaces where the summed total energy does not change! By the way, when people say "Hamiltonian", they usually refer to that sum (the "Lagrangian" is the difference)!

However, for interesting cases, this would still give a whole surface of orbits. A bit before Lagrange, in the 1750s, Leonhard Euler had the insight that Galilean invariance, an idea from even earlier, in 1632, that a body likes to keep its momentum, can be applied here as well, to reduce the number of dimensions of that space further, to obtain a one-dimensional path!

Enough physics! What's symplectic geometry?

As we're describing both spaces locally, we write both coordinates in terms of differentials anyways, so there is a kind of symmetry here. Oh, differentials are analogs of space directions, like units for small variations along x, y and z axes.


Gauss' principle:

variational approach: one could argue Leibniz knew something about this, and that maybe he learned about it from Huygens.

hamiltonian formulation is nicer than newton's (sic): you can see right away that

  • phase space volume is conserved (by the flow)
  • energy is conserved

you can actually conclude Poincaré recurrence, because the volume is finite!

[ ∂H/∂q, ∂H/∂p ] is the gradient of H! We need to turn that 90°, hence the minus sign and the switch of coordinates!

you can prove that the area of phase space is conserved, by computing the divergence and noting that it's zero.
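That divergence claim can be checked numerically (my own example Hamiltonian, with finite differences instead of symbols): the divergence of X_H = (∂H/∂p, -∂H/∂q) vanishes by equality of mixed partials.

```python
def H(q, p):
    """Some smooth Hamiltonian (an arbitrary example of mine)."""
    return p * p / 2 + q ** 4 - q * p

def divergence(q, p, h=1e-5):
    """div X_H = d/dq (dH/dp) + d/dp (-dH/dq), by central differences."""
    dHdp = lambda q, p: (H(q, p + h) - H(q, p - h)) / (2 * h)
    dHdq = lambda q, p: (H(q + h, p) - H(q - h, p)) / (2 * h)
    ddq = (dHdp(q + h, p) - dHdp(q - h, p)) / (2 * h)
    ddp = (dHdq(q, p + h) - dHdq(q, p - h)) / (2 * h)
    return ddq - ddp

print(abs(divergence(0.7, -1.3)) < 1e-4)  # True: the mixed partials cancel
```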


This guy says in part 1 of 18 that since the Lagrangian is kinetic energy minus potential energy, and the Hamiltonian is 2 times the kinetic energy minus the Lagrangian, this computes to the Hamiltonian being the sum of potential and kinetic energy.


see also

The Hamiltonian is still about energy, but we write one of the generalized coordinates as momentum, so we can take the derivative with respect to the momentum. We can then take the time derivative of that to directly obtain the acceleration!

 d/dt(∂H/∂p) = -∂H/∂q

ma = F
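Spelled out for the standard H(q,p) = p²/2 + V(q) (a sketch of my own, with the mass set to 1 so the units line up):

```latex
H(q,p) = \tfrac{1}{2}p^2 + V(q)
\quad\Rightarrow\quad
\frac{\partial H}{\partial p} = p = \dot q,
\qquad
\frac{d}{dt}\frac{\partial H}{\partial p} = \ddot q = a,
\qquad
-\frac{\partial H}{\partial q} = -V'(q) = F
```

So the line above really is Newton's ma = F in disguise.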

maths journey

jet →
jet-bundle →
contact structure →
one-form →
Lagrangian →
symplectic structure?

Today: returning home to symplectic geometry?



Over the next few days, I'll be developing an introduction to #symplectic #geometry here. Don't be afraid, we will be taking the easy route...

symplectic geometry is the geometry of phase space!


Aug 21, 2018,

Symplectic geometry is the modern distillate of an idea going back to Leonhard Euler and Joseph-Louis Lagrange.


The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.



continuing from that Wikipedia page:

Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766.

see also:


Read about other alternate formulations of classical mechanics here:

Little symplectic timeline:

Newton's "Principia", 1687.
The Euler-Lagrange equation, 1750's.
Lagrangian mechanics, 1788.
Hamiltonian mechanics, 1833.
Poincaré defines what will later be known as symplectic geometry, 1912.
Noether's theorem, 1915.
Weyl coins the term "Symplectic geometry", 1939.
Arnol'd invents symplectomorphisms and symplectic topology, 1965.
Arnol'd's conjecture, 1974.
Gromov's non-squeezing theorem, 1985.


Symplectic geometry is tailored for doing physics and works well for:

  • geometrical optics
  • classical mechanics
  • relativity
  • quantum mechanics

...and many other areas! On the other hand, examples for areas where symplectic geometry doesn't 'just work' include:

  • statistical mechanics and thermodynamics
  • dissipative systems
  • noisy or lossy systems

Tobias Osborne – Symplectic geometry & classical mechanics – 1/21 [youtube]


While we're at it, here's a nice and short exposition: Dusa McDuff – Symplectic geometry [youtube]

And a classic book in a translation from 2001: V. I. Arnol'd, A. B. Givental, translation by G.Wassermann – Symplectic Geometry

I wonder how old the Russian original is.


I meant to link to Dusa McDuff's paper "What is Symplectic Geometry?" in /6 above. It is a fun paper, and you should read it even if you don't usually read math papers:

Somehow the video for 11/ came out here when I tried to paste the link. Apologies to my early readers.

In Euclidean geometry you get lengths and angles as basic tools. Projective geometry loses angles, and replaces the concept of length by one of proportional length. In topology you get rid of both these tools, and this makes it into what is probably the most flabby kind of geometry. Symplectic geometry gets rid of both, length and angle, but introduces a new formalism to measure area instead... So, it is about as much related to geometry as a tomato is to a frog...


Symplectic geometry is the ultimate generalization of Hamiltonian mechanics! The basic idea of Hamiltonian mechanics is to write down an expression (called Hamiltonian) for a conserved quantity (like energy) as depending on an unknown function p(t) (e.g. position), and one related to its first derivative q(t) (say, momentum). You then demand that the Hamiltonian is stationary, that it doesn't change, which is cleverly formalized by setting its derivative to zero, and solve for p!


On g+, +Beat Toedtli noticed that I had swapped the meaning of p and q! In the literature you'll find p used in Hamiltonians to refer to momentum, and q to the "generalized coordinate". I can't change my earlier posts here, but I will try to stick with the popular nomenclature from now on.

The announcement on g+ is here:

The earlier Lagrangian mechanics involved an extra parameter for the time t, and it always uses the derivative of the first kind of coordinates as the second kind of coordinates.

The extra parameter t amounts to another coordinate dimension, making the resulting spaces odd-dimensional. This is called contact geometry and it is closely related to symplectic geometry.


Now that you know what a Hamiltonian H(p,q) is, first notice that p and q are simply coordinates. The position p(t) may be on a manifold that isn't standard Euclidean space. The momentum q(t), however, is always to be understood in the space that is tangent to the manifold at the position p(t). So a symplectic space is always even-dimensional, and smooth enough for one degree of infinitesimals.


In the literature you might find the labels p and q used the other way around. See my annotation below 8/

A paper on the history of symplectic topology:

Michele Audin – Vladimir Igorevich Arnold and the Invention of Symplectic Topology

Expository talks on symplectic geometry (videos):

Dusa McDuff – Symplectic geometry [youtube]

Helmut Hofer – First Steps in Symplectic Dynamics [youtube]

Dusa McDuff – Symplectic Topology Today [youtube]


There was a bit of a kerfuffle regarding the foundation of symplectic geometry. In 1996 Kenji Fukaya and Kaoru Ono published a paper on counting fixed points. Everybody referred to it but only more than a decade later did someone notice that it was difficult to follow, because the explanation was incomplete.

When Dusa McDuff joined the party she decided to fix it. Read more about it here:


I have removed the post labeled 12/ because it was the same as 10/. I'm sorry for the confusion.

Andreas Floer came up with the idea to use homology to attack symplectic geometry. At that time, much of the relevant literature was only available in Russian, and a bit in French.

Another tragic mathematician.


A differential 2-form w on a (real) manifold M is a gadget that, at any point p ∈ M, eats two tangent vectors and spits out a real number in a skew-symmetric, bilinear way:

w_p: T_p M x T_p M -> R

((( T_p M is the tangent space at a point p ∈ M. If we consider all these spaces for all p it's called a tangent bundle TM. )))

For every nonzero v ∈ T_p M there is a symplectic buddy u ∈ T_p M such that

w(v,u) = 1


Linear "Darboux Theorem". Any two symplectic spaces of the same dimension are symplectically isomorphic, i.e. there exists a linear isomorphism between them which preserves the skew-scalar product.

Corollary. A symplectic structure on a 2n-dimensional linear space has the form p1^q1 + ... + pn^qn in suitable coordinates (p1,...,pn,q1,...,qn).

Such coordinates are called Darboux coordinates, and the space R^2n with this skew-scalar product is called the standard symplectic space.
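A sketch of the standard symplectic space for n = 2 (plain Python; the coordinate convention is my own reading of the corollary above):

```python
def omega(u, v, n=2):
    """Skew-scalar product p1^q1 + p2^q2 in Darboux coordinates
    (p1, p2, q1, q2): sum_i (u_pi * v_qi - u_qi * v_pi)."""
    return sum(u[i] * v[n + i] - u[n + i] * v[i] for i in range(n))

p1 = (1, 0, 0, 0)     # basis vectors in coordinates (p1, p2, q1, q2)
q1 = (0, 0, 1, 0)
q2 = (0, 0, 0, 1)

print(omega(p1, q1))  # 1: a conjugate pair
print(omega(p1, q2))  # 0: indices don't match
print(omega(q1, q2))  # 0: both from the q-half
```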


Having defined tangent bundles, we can now situate Lagrangians as a map L: TM -> R, for which we can then solve for level sets.

Remember, TM is the tangent bundle for our manifold M. A point of TM is then a point p on M together with a vector in T_pM, a vector tangent to M at p.


alternate mechanics

Before we get back to #symplectic geometry let's have some fun with alternate ways to do classical mechanics!

I mentioned Lagrangian and Hamiltonian mechanics earlier. Lagrange's is often simpler, but it doesn't take advantage of cyclic coordinates. Meet Routhian mechanics! Routh found out that you can cherry-pick momenta or velocities as your generalized coordinates to your delight.


Sep 06, 2018,

The thread about symplectic geometry is here:

Appell's equation of motion […] is an alternative general formulation of classical mechanics described by Paul Émile Appell in 1900 and Josiah Willard Gibbs in 1879.

Gibbs! Who invented vector calculus, and coined the term "statistical mechanics"!

It uses the second derivative to make things solvable. This approach shines when nonholonomic constraints are involved.


The Hamilton–Jacobi equation is particularly useful in identifying conserved quantities.

In which the motion of a particle can be represented as a wave.

The HJE is a single, first-order partial differential equation for the function S of the N generalized coordinates q1...qN and the time t. The generalized momenta do not appear, except as derivatives of S.

Just like Lagrangian mechanics! But the latter amounts to a system of N equations.

The HJE post above is part


The Udwadia–Kalaba equation is useful when forces aren't conservative (they don't obey d'Alembert's principle).

M(q,t)q''(t) = Q(q,q',t)

Q is the total (generalized) force, M the mass matrix. M has to be symmetric and positive semi-definite.

No Lagrange multipliers! It is based on Gauss' principle instead of Euler-Lagrange's equation.


axioms for symplectic space

Sep 09, 2018,

Let's start with an even dimensional Euclidean space R^2n, and a map from two vectors in that space to the reals

ω: R^2n × R^2n -> R.

It has to be linear in both arguments

ω(au,v) = ω(u,av) = aω(u,v),

be alternating or anti-symmetric

ω(u,v) = -ω(v,u),

and it must be nondegenerate in the sense that for any nonzero u we can find a v such that ω(u,v) is nonzero.

The popular scalar product can be defined in almost the same way, except that it must be symmetric or commutative instead of alternating:

uv=vu u•v = v•u

Just as the scalar product measures angle and distance in a funny way, ω measures area in a funny way!

Making ω bilinear is a strong requirement: it implies that, given a basis, ω can be represented as a matrix! Which then has to be skew-symmetric, and nonsingular.

And just like with vector spaces we can do a change of basis.

A Darboux basis is the symplectic substitute for a standard basis. Its elements come in two kinds, let's call them x_i and y_i for now, and let's put them through ω to see what's so simple about them:

ω(x_i,y_j) = -ω(y_j,x_i) = 1
ω(x_i,x_j) = ω(y_i,y_j) = 0

Remember? A Hamiltonian is an energy function which, if you're lucky, is a sum of potential energy, depending on position q_i, and kinetic energy, depending on momentum p_i...

Symplectic space doesn't come with an interpretation!

That shouldn't equal 1 but δ_ij: the Kronecker delta function, which is 1 only if i=j and 0 otherwise.

Which means ω is always zero except when the arguments correspond, having the same index but belonging to opposite halves:

ω(x_i,y_i) = 1
ω(y_i,x_i) = -1

wp> The standard symplectic space is R^2n with the symplectic form given by a nonsingular, skew-symmetric matrix. Typically ω is chosen to be the block matrix

ω = |  0   Id |
    | -Id   0 |

wp> where Id is the n×n identity matrix.
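Building that block matrix in plain Python (a sketch of my own) and checking that it is skew-symmetric; it even squares to -Id, so it doubles as an almost complex structure:

```python
def blocks(n):
    """Return the 2n x 2n matrix [[0, Id], [-Id, 0]] as nested lists."""
    size = 2 * n
    M = [[0] * size for _ in range(size)]
    for i in range(n):
        M[i][n + i] = 1       # upper right block: Id
        M[n + i][i] = -1      # lower left block: -Id
    return M

O = blocks(2)
assert all(O[i][j] == -O[j][i] for i in range(4) for j in range(4))
sq = [[sum(O[i][k] * O[k][j] for k in range(4)) for j in range(4)]
      for i in range(4)]
assert all(sq[i][j] == (-1 if i == j else 0)
           for i in range(4) for j in range(4))
print("skew-symmetric, and O^2 = -Id")
```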

Let's do some more interpretation. The exterior product or wedge product is ideal to understand the meaning of ω!

It is a nice way to calculate areas spanned by pairs of vectors. If you look here for details,

I hope you can see that such an area is expressed as a sum of components, of shadows cast on all possible base planes (e.g. the one named e_1 \wedge e_2, spanned by e_1 and e_2).

Using the basis from earlier we find

ω(u,v) = \sum_i u_i \wedge v_i

Hamiltonian mechanics

\dot{q} = \frac{\partial H}{\partial p}
\dot{p} = -\frac{\partial H}{\partial q}

A first-order ODE can be described by a vector field:

X_H = \frac{\partial H}{\partial p} \frac{d}{dq} - \frac{\partial H}{\partial q} \frac{d}{dp}

symplectic geometry

Take a real-valued function f at a point x in some kind of space M. Many nice stories start like this!

  • if you take the gradient of such a function, you get a vector field pointing "down the hill".
  • that means, it's orthogonal to the level sets.
  • Now, symplectic flow is tangent to the level sets.

In some sense, Riemannian and symplectic geometry are orthogonal! And that gives rise to an almost complex structure!
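That orthogonality can be sketched numerically (my own example f; the gradient is taken by finite differences): the rotated gradient J∇f is orthogonal to ∇f, so the symplectic flow runs along the level sets.

```python
def grad(f, x, y, h=1e-6):
    """Numerical gradient of f at (x, y) via central differences."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

f = lambda x, y: x * x + 3 * y * y   # some real-valued function on R^2
gx, gy = grad(f, 1.0, 2.0)

flow = (-gy, gx)                     # the gradient turned by 90 degrees (J)
dot = gx * flow[0] + gy * flow[1]
print(abs(dot) < 1e-9)               # True: the flow is tangent to level sets
```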

Riemannian geometry

Riemannian geometry studies smooth Riemannian manifolds, and it also defines a real-valued function of two arguments, often called an inner product, which behaves like the scalar product \cdot.

Remember what a scalar product's for?

We can use it to square a vector, to get the square of its length: v \cdot v = |v|^2, and it can measure the angle between a pair of vectors like so:

cos(angle) = \frac{u \cdot v}{|u| |v|}

Slightly more generally, we can pin down the product at a point on M, and thus obtain a function in one variable, which can then be viewed as taking points in the tangent space T_x M at x. After which we may show off calling our M a Finsler manifold, and the product becomes a Minkowski functional F(x,\cdot).

This way to put it is convenient if you want to measure a curve's length using integration. Oh, and any positive definite quadratic form defines a Riemannian metric. (Which in 2-space might look like ap^2 + bpq + cq^2.)

almost complex structure

An almost complex structure is a matrix which squares to negative the identity matrix. There's a canonical example, and you can always arrange things such that it looks like that:

J^2 = -Id

J = | 0 -1         |
    | 1  0         |
    |       0 -1   |
    |       1  0   |
    |            … |

So, an almost complex structure appears in a matrix algebra over a 2n-dimensional vector space whenever that algebra contains such a matrix J. Passing to complex coordinates, we're allowed to write it using complex numbers:

J = | i     |
    |   i   |
    |     … |

  • a 2-form is secretly a function which takes two vectors as arguments.
  • fixing one argument produces a 1-form
  • nondegeneracy means we can identify a 1-form with a vector field

a new dot product

Symplectic geometry is just like normal vector calculus, but with the common dot product replaced by one that follows different rules. If the common dot product between two vector functions is zero, it means that those vector fields are perpendicular at that point. On the other hand, if the symplectic product is zero, it means that they're parallel! You can write it in matrix form like this:

 (\nabla A)^T J (\nabla B)

compare with the dot product written using dual vectors:

a·b = a_n b^n

It's the same, but there's that J in between!


A Kähler manifold has a J as above, but the J also gives zero when differentiated by the (Riemannian geometry) Levi-Civita connection: \nabla J = 0. (Remember: the Riemannian metric g is a symmetric bilinear form.)

ω(·,·) = g(J·, ·), with ω ∈ \Omega^{1,1}(M)

Klein's Erlangen program: a geometry always corresponds to a symmetry. Complex geometry has the complex linear group as symmetry. Riemannian geometry has the orthogonal group. Symplectic geometry has the symplectic group.

No matter which two of these symmetry groups you intersect, you get the unitary group, the symmetry group of Kähler manifolds.

The above implies that Kähler manifolds are the complex manifolds (contrasting almost complex manifolds).

german prof on sympl geo: +1h [youtube]

tangent * cotangent: V+V^T

Sean Carroll has a nice series of slightly-deeper-than-popular physics lectures, here's him on the Lagrangian: "The Biggest Ideas in the Universe, 3. Force, Energy, and Action":

If you want to skip the stuff leading to it, you might jump to 39:00.

Maybe also take a look at ep 13, where he talks about Riemannian geometry!

yt link?

posthuman superintelligence, what is it?

Novels are typically very vague about posthuman superintelligence.

Often, it's merely said they're incomprehensible. That is so unsatisfying! Worse, it seems hopeless to tell how a superhuman would understand herself, or the world! But we won't become posthuman right away, possibly only to chuckle at such defeatism-in-assumptions.

Now, if we'd dare make a guess, how could we ever be confident in it? Anyways, I can't resist, so let's deconstruct our minds in a systematical, yet explosive way! Anything related to information processing can serve as template for locating a superhuman exaggeration of it.

Some ideas have been floating around for a while, like doing computations faster than any human could. Or looking up a book in a library. Or talk to faraway persons. All that sounds an awful lot like someone with a mobile phone and internet access. Pure magic to medieval minds.

Right now there are many technologies, which are not yet at our fingertips, but conceivably could be, with a bit of polishing, and perhaps some extra processing power. Take clustering a large dataset, or rather acquiring such a set, as software to work into the data already exists.

So-called machine learning covers classifying data, clustering, searching, even automagically writing a short abstract of a longer text, or synthesizing specimens like in a dream. Quality varies in unfamiliar ways, but it's not hard to imagine all that as much more reliable.

Imagine: You are looking at millions of tweets at once, understand their gist and extremes, and you can picture the flow of ideas between people. You could even interact with a large number of them, but on a more personal level than, say, unidirectionally addressing half the nation.

Every nation is a hive mind! You might become part of one without it taking notice. But if you concern one of its institutions, they're like personality traits: they might hold on to you, seeking to complete their business with you. You probably know what working for those is like.

Will a superintelligence be able to understand us? To me it seems that there must be some similarities between conscious beings of different kind. Yes, there should be superhuman babies.

Human beings are hive-minds. While only some of our cells seem to do the thinking, the others, individually imperceptible to us, are still connected with our human consciousness via a body-covering sensory and nerve system.

limit technology is like a