The goal of this text is to concisely explain why the simple compact Lie groups are divided into the groups \[
A_\ell,\, B_\ell,\,C_\ell,\, D_\ell,\, E_{6,7,8},\, F_4, \,G_2.
\] The focus will be on the presentation of every step, not on the detailed explanation of each step. In other words, I will assume that the reader is busy and smart and can rediscover many things for herself.
It's a complementary text to some previous ones such as Exceptional Lie groups. I recommend reading this in the mobile template.
Some definitions
First, what do the words in the title mean? A group is a set \(G\) of elements (the elements are some operations or "symmetry transformations") with an operation "product" (if the group is Abelian, i.e. the ordering in the "product" doesn't matter, we often talk about a "sum") which is associative\[
(gh)k = g(hk)
\] and which contains an identity element \(1\) as well as the inverse element \(g^{-1}\) for every element. Now, a Lie group (named after Sophus Lie) is "continuous" which means that you should imagine this whole set as a manifold that can be labeled by continuous coordinates. Groups of continuous rotations such as \(SO(N)\) are examples.
A compact Lie group is a Lie group that is "compact" i.e. that has a finite volume. If you imagine it as a manifold and define a "volume density form" on this manifold that is nonzero and invariant under the group action, the total volume is finite. Effectively, compactness means that when written in terms of matrices, all the matrix entries are bounded. So \(SO(3)\) is compact but \(SO(3,1)\) is not. The former group manifold, \(SO(3)\), is actually geometrically equivalent to a sphere, \(S^3\), well, modulo a \(\ZZ_2\).
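To see the bounded entries concretely, here is a minimal numerical sketch (my addition, in Python with NumPy; the group elements and parameter values are arbitrary choices) comparing a rotation in the compact \(SO(2)\) with a boost in the noncompact \(SO(1,1)\):

```python
import numpy as np

for t in (1.0, 5.0, 20.0):
    # rotation by angle t: an element of the compact group SO(2)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    # boost by rapidity t: an element of the noncompact group SO(1,1)
    boost = np.array([[np.cosh(t), np.sinh(t)],
                      [np.sinh(t), np.cosh(t)]])
    print(t, np.abs(rot).max(), np.abs(boost).max())
# rotation entries never exceed 1; boost entries grow like exp(t)/2
```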
For a compact Lie group, I may simplify the definition of a "simple group" by saying that it cannot be written as (a group isomorphic to) \(G\times H\) which is a direct product. For example, \(SO(3)\times SO(4)\), the direct product of two groups whose product is defined\[
(g,h)(g',h') = (gg', hh'),\\ \quad \{g,g'\}\subseteq SO(3), \quad \{h,h'\}\subseteq SO(4),
\] is not simple while \(SO(3)\) is. Well, \(SO(4)\) actually isn't simple either because it is isomorphic to \(SU(2)\times SU(2)\), well, quotiented by a discrete group and all these subtleties will be ignored in this text.
Let's roll.
Lie algebras
For a given Lie group, we define the corresponding Lie algebra. Take all elements on the group manifold that are very close to the identity \(1\), for example all rotations by small angles (and their compositions).
So the group element (or the matrix representing it) may be written as \(1+\epsilon\) in an approximation. Well, we actually write it as \(\exp(iM)\) where \(i\) is a subtlety chosen by physicists' conventions (useful for making \(M\) Hermitian rather than anti-Hermitian) while the exponential (which reduces to \(1+iM\) in a Taylor expansion) is great because it's exact even for finite, not just infinitesimal, \(M\). It's because of a formula for the exponential\[
\exp(iM) = \lim_{N\to\infty} \zav{ 1+ \frac{iM}{N} }^N.
\] You see that it is a product of infinitely many infinitesimal transformations. The space of all possible values of \(M\) is the Lie algebra. What's important is the "commutator" \([M,N]\) on this Lie algebra. It may be calculated from the Lie group by considering expressions of the sort\[
\zav{ \exp\frac AN \exp\frac BN \exp\frac{-A}N\exp\frac{-B}{N} }^{N^2} = \dots
\] Ignore the \(N^2\) exponent for a while. You see a product of four exponentials, four elements of the Lie group. They would cancel and the result would be \(1\) if \(AB=BA\) because each action is undone. However, they're undone in the opposite order so if the exponentials don't commute with one another, some additional "leftover" transformation is left (note what happens when you try to rotate by small angles around the \(x\)-axis and the \(y\)-axis, and then you undo these rotations, but again starting from \(x\) and then \(y\): you will actually get a small rotation around the \(z\)-axis). Now, expand each of the exponentials up to the order \(1/N^2\) which is enough if we want the finite piece of the \(N^2\)-th power in the large \(N\) limit. What you get is\[
\dots =\left[1+\frac{AB-BA}{N^2} + o(1/N^2)\right]^{N^2} \to \exp ([A,B]).
\] The commutator just comes from all the pairs in which you had to exchange the order of \(A,B\). So we see that by combining infinitesimal transformations, \(\exp([A,B])\) is actually also obtainable which means that \([A,B]\) is an element of the Lie algebra. Well, in physics conventions, we would insert factors of \(i\) and say that \(\pm i[A,B]\) is an element of the Lie algebra for all \(A,B\).
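If you want to watch this limit converge, here is a minimal sketch (assuming NumPy and SciPy; the matrix size, seed, and the \(0.3\) scaling are my arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(0)
A = 0.3 * rng.normal(size=(3, 3))
B = 0.3 * rng.normal(size=(3, 3))

for N in (10, 100, 1000):
    # the product of the four exponentials from the text...
    word = expm(A / N) @ expm(B / N) @ expm(-A / N) @ expm(-B / N)
    # ...raised to the N^2-th power, compared with exp([A, B])
    left = np.linalg.matrix_power(word, N * N)
    right = expm(A @ B - B @ A)
    print(N, np.abs(left - right).max())
# the difference shrinks as N grows, confirming the limit
```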
Great. We have a Lie algebra. As long as \([A,B]\) is defined as \(AB-BA\) for some particular matrices (that we can multiply by the rules of matrix multiplication), it's easy to verify that\[
[A,[B,C]]+[B,[C,A]] + [C,[A,B]] = 0.
\] The left hand side contains \(3\times 2\times 2 = 12\) terms, various permutations of \(ABC\), and they cancel in pairs. So it's a tautology (we call it "identity") if the commutator "means" \(AB-BA\). However, it's useful to imagine that the commutator is a "more abstract" operation not defined by any particular matrices in which case the identity above, the Jacobi identity, has to be required as a defining condition for the commutator to generate a Lie algebra (aside from the bilinearity which is obvious).
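The cancellation is easy to verify for random matrices; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))
comm = lambda X, Y: X @ Y - Y @ X

jacobi = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
print(np.abs(jacobi).max())   # ~1e-14: the twelve terms cancel in pairs
```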
For \(SO(N)\), the Lie algebra called \({\mathfrak so}(N)\) (yes, these are Gothic fonts) consists of all antisymmetric \(N\times N\) matrices which are real in mathematicians' conventions and pure imaginary in physicists' conventions. That's because a small rotation around the \(z\) axis just adds a small multiple of \(y\) to \(x\) and vice versa with a minus sign. The exponential of an antisymmetric matrix is an orthogonal one: \(\exp(M)\exp(M^T) = \exp(M)\exp(-M)=1\).
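A minimal sketch of the last claim, for a random element of \({\mathfrak so}(5)\) (my choice of example; it again assumes SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 5))
M = X - X.T                                # a random antisymmetric matrix in so(5)
R = expm(M)

print(np.abs(R @ R.T - np.eye(5)).max())   # ~1e-15: exp(M) is orthogonal
print(np.linalg.det(R))                    # +1.0: it lies in SO(5)
```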
Killing form
On the Lie algebra, we may define a bilinear "Killing form"\[
b(A,B) = {\rm Tr}(AB^{T}).
\] Whenever the Lie algebra is given by matrices, this inner product on the Lie algebra may be written as the trace above, up to an undetermined overall normalization. (Well, we want \(\dagger\), not \(T\), for complex matrices.) Whenever it's abstract, we want the Killing form to be invariant under the action of the group etc., i.e. to obey the same properties that the trace above does. The Killing form in combination with the commutator obeys a fun identity\[
b([X,A],B) + b(A,[X,B]) = 0.
\] The Killing form is "antisymmetric" with respect to the commutator. You may also interpret the expression above as the Leibniz rule for the derivative of the product except that \([X,\dots]\) plays the role of the derivative and it acts on the inner product \(b\); then the vanishing of the derivative means that the inner product doesn't change when varied by the commutator. This condition is also required even if the Killing form isn't given by explicit matrices. Now, the operation \[
A\mapsto \exp(-X) A \exp(X)
\] is an isometry (a symmetry preserving lengths) relative to the metric encoded by the Killing form.
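Both the infinitesimal identity and the finite isometry can be checked numerically. A sketch with random antisymmetric matrices, so that \(\exp(X)\) is orthogonal and the transpose in \(b\) causes no trouble:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def rand_antisymmetric(n=4):
    Y = rng.normal(size=(n, n))
    return Y - Y.T

A, B, X = (rand_antisymmetric() for _ in range(3))
b = lambda P, Q: np.trace(P @ Q.T)         # the trace form from the text
comm = lambda P, Q: P @ Q - Q @ P

# infinitesimal invariance: b([X,A],B) + b(A,[X,B]) = 0
print(b(comm(X, A), B) + b(A, comm(X, B)))             # ~1e-15

# finite version: conjugation by exp(X) is an isometry of b
g = expm(X)                                            # orthogonal, so g^T = g^{-1}
print(b(g.T @ A @ g, g.T @ B @ g) - b(A, B))           # ~1e-14
```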
I will leave the discussion of general representation theory for another day. Some next-to-rudimentary material has been covered, e.g. in Why are there spinors?.
Maximal tori, Cartan subalgebras
To classify simple compact Lie groups, we need to define the maximal tori. These are subgroups isomorphic to \(U(1)^\ell=SO(2)^\ell\) which is, as you can see, a commutative (Abelian) group. The maximum value of \(\ell\) is known as the rank of the Lie group or the Lie algebra.
For example, the group \(U(\ell)\) has the maximal commuting subgroup \(U(1)^\ell\). It's its maximal torus. The Lie algebra of the maximal torus is the Cartan subalgebra, \[
{\mathfrak u}(1)\oplus \dots \oplus {\mathfrak u}(1)
\] which has \(\ell\) terms. Similarly, the groups \(SO(2\ell)\) and \(SO(2\ell+1)\) have the same (isomorphic) maximal torus and Cartan subalgebra consisting of all rotations of the coordinates 1-2 into one another composed with a rotation of axes 3-4 into one another, and so on. For \(SO({\rm odd})\), one coordinate is left and cannot be used to extend the maximal torus anymore.
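A minimal sketch building these commuting block rotations for \({\mathfrak so}(2\ell)\), with \(\ell=3\) as an arbitrary example:

```python
import numpy as np

ell = 3                                    # the rank; here so(6)

def block_rotation(k):
    """Generator rotating coordinates 2k and 2k+1 into one another."""
    M = np.zeros((2 * ell, 2 * ell))
    M[2 * k, 2 * k + 1], M[2 * k + 1, 2 * k] = -1.0, 1.0
    return M

gens = [block_rotation(k) for k in range(ell)]
for P in gens:
    for Q in gens:
        assert np.allclose(P @ Q, Q @ P)   # all the block rotations commute
print(f"found a rank-{ell} commuting (Cartan) subalgebra inside so({2 * ell})")
```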
Why did I call the subgroups tori rather than "subspaces"? It's because they're periodic, largely because \(\exp(2\pi i k)=1\). This is particularly clear in terms of the so-called Stiefel diagrams drawn in the Cartan subalgebra (chosen to respect the angles inherited from the Killing form). For a few random easy groups, they look like this:
In the figure, "Jiný pohled" means "a different perspective".
The empty or filled disks or squares correspond to elements of the Lie algebra (its Cartan subalgebra) whose exponential (with the correct \(i\) if needed) maps to the identity of the Lie group indicated in the legend. We also indicate the straight lines which are co-dimension 1 places where all the roots take integral values under the most natural map. I won't really need the lines.
Root systems
Why is it clever to consider the maximal torus or its Lie algebra, the Cartan subalgebra of the original Lie algebra? It's because they define what the quantum physicists call "a maximum set of commuting observables". In particular, we can simultaneously diagonalize all these \(\ell\) (rank) generators.
Note that all these generators ? a basis of the Cartan subalgebra ? are operators that have well-defined actions on any "representation" of the Lie group (or Lie algebra, it's the same representation). A particularly important example of a representation of a Lie group or its Lie algebra is the same Lie algebra itself. The action of a Lie algebra element \(M\) on an element of the representation \(V\), which is just another element of the same Lie algebra, is simply \([M,V]\), the commutator. We call this action of the Lie algebra (or group) on itself "the adjoint representation".
Just like you may find states of the Hydrogen atom that are simultaneous eigenstates of \((n,l,m,s_z)\), a maximum set of commuting observables, you may look for collections of eigenvalues \((m_1,m_2,\dots , m_\ell)\) and the corresponding "shared eigenstates" in the Lie algebra. Here, the operators \(M_i\) form a basis of the Cartan subalgebra and \(m_i\) are their eigenvalues. In the Cartan subalgebra space, the collection of these numbers (the eigenvalues \(m_i\) of \(M_i\) corresponding to an eigenstate) looks like a vector. We call this vector a "weight of a given representation". Weights of the adjoint representation are known as "roots".
Note that \(\ell\) roots are zero (vanishing vectors in the Cartan subalgebra). They correspond to the "vectors of the representation" which are elements of the Cartan subalgebra itself, and because this subalgebra is commuting, \([M,V]=0\) for all pairs. We usually don't consider these "vanishing roots" to be "roots". So a "root" is really a weight for the adjoint representation that is nonzero. Their number is therefore \(d-\ell\) where \(\ell\) is the rank and \(d\) is the dimension of the Lie group or the Lie algebra, i.e. the number of its independent generators.
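As a concrete example (my addition, using the standard Gell-Mann basis of \({\mathfrak su}(3)\), which the text hasn't introduced), one can diagonalize the adjoint action of the two Cartan generators numerically and watch the \(8-2=6\) nonzero roots plus the two vanishing weights appear:

```python
import numpy as np

# Gell-Mann matrices: a Hermitian basis of su(3) with Tr(l_i l_j) = 2 delta_ij
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1], l[1][1, 0] = -1j, 1j
l[2][0, 0], l[2][1, 1] = 1, -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2], l[4][2, 0] = -1j, 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2], l[6][2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

def ad(X):
    """Matrix of the adjoint action V -> [X, V] in the basis above."""
    return np.array([[np.trace(l[i] @ (X @ l[j] - l[j] @ X)) / 2
                      for j in range(8)] for i in range(8)])

A3, A8 = ad(l[2]), ad(l[7])            # the two commuting Cartan generators
assert np.allclose(A3 @ A8, A8 @ A3)

# eigenvectors of a generic combination are common eigenvectors of both
_, vecs = np.linalg.eigh(A3 + np.pi * A8)
for k in range(8):
    v = vecs[:, k]
    m3 = np.real(v.conj() @ A3 @ v)    # eigenvalue under ad(lambda_3)
    m8 = np.real(v.conj() @ A8 @ v)    # eigenvalue under ad(lambda_8)
    print(f"weight ({m3:+.2f}, {m8:+.2f})")
# two (0.00, 0.00) rows are the Cartan subalgebra itself; the remaining
# six nonzero weights are the roots: d - ell = 8 - 2 = 6 of them
```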
What do these roots (or their set for a given algebra, the so-called root system) look like? They're "stars" distributed in some "lattice" around the origin of the Cartan subalgebra. One may show that the definition of the roots above implies that a root system
* doesn't contain the vanishing vector (by convention)
* for every \(\alpha\in\Sigma\), \(c\alpha\in\Sigma\) iff \(c=\pm 1\)
* the set \(\Sigma\) is symmetric with respect to mirroring defined by any plane normal to any root \(\alpha\in\Sigma\)
* the number\[
\{\alpha,\beta\} = \frac{2b(\alpha,\beta)}{b(\beta,\beta)} \in\ZZ
\] is an integer. The second condition among the four heuristically says that the generator associated with a root commutes with itself so you can't get any roots that are longer but going in the same direction. It's not quite a proof but believe me it's the case for the root systems we may construct from actual groups, and we may demand this condition to be valid for "abstract root systems", too.
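To make the last axiom tangible, here is a sketch checking the integrality condition on the \(B_2\) root system (my choice of example):

```python
import numpy as np
from itertools import product

# the root system of B2 = so(5): four short roots and four long roots
roots = [np.array(v) for v in
         [(1, 0), (-1, 0), (0, 1), (0, -1),
          (1, 1), (1, -1), (-1, 1), (-1, -1)]]

for a, b in product(roots, repeat=2):
    n = 2 * np.dot(a, b) / np.dot(b, b)
    assert abs(n - round(n)) < 1e-12     # the integrality axiom holds
print(f"all {len(roots)**2} pairs satisfy the integrality condition")
```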
The world's most viewed video of root systems. The only "tiny" problem is that Nude Socialist failed to explain to their readers (and themselves) that the colorful marks represent roots in a root system and that the whole video is a pedagogically incomplete introduction to the basic theory of Lie groups that has been known for about a century. Instead, they told everyone it was an explanation of a theory of everything invented by a surfer dude. ;-)
The third condition reflects the existence of an automorphism associated with a given root. The last condition about the integrality may be proven but I chose to omit this point. However, it's the most powerful axiom because it implies that any pair of roots \(\alpha\neq \pm \beta\) (note that these are just two vectors in some Euclidean space) must have one of the following relative geometric positions:
(0) they're orthogonal to one another
(1) they're equally long and the angle between them is 60° or 120°
(2) one of them is \(\sqrt{2}\) times longer than the other one and the angle between them is 45° or 135°
(3) one of them is \(\sqrt{3}\) times longer than the other one and the angle between them is 30° or 150°
I will prove this assertion, however. Four times the squared cosine of the angle \(\omega\) between them may be written as\[
4\cos^2\omega = \frac{ 2b(\alpha,\beta) 2b(\beta,\alpha) }{ b(\alpha,\alpha)b(\beta,\beta) }
\] from the usual cosine interpretation of an inner product. Because of the left hand side, it is a number between 0 and 4. The right hand side shows that it is a product of two integers, due to an assumption a few paragraphs above, and it can't be \(4\) because we assumed that the roots weren't equal (not even up to a sign) and they couldn't be parallel if \(\alpha\neq \pm\beta\), due to another assumption. So the angle \(\omega\) equal to zero or a multiple of \(\pi\) is excluded and both sides must actually be equal to 0, 1, 2, or 3.
For 0, the roots are orthogonal to one another because the inner product has to be zero. What remains is to write numbers 1,2,3 in all possible ways as products of two integers and make the corresponding interpretation of the values \(b(\alpha,\beta)/b(\alpha,\alpha)\) and so on. We obtain the possible length ratios and angles listed above.
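The enumeration can be compressed into a few lines (a sketch; the value 0 leaves the length ratio unconstrained, so it's skipped):

```python
import numpy as np

for n in (1, 2, 3):                      # nonzero values of 4 cos^2(omega)
    angle = np.degrees(np.arccos(np.sqrt(n) / 2))
    print(f"4cos^2 = {n}: angle {angle:.0f} or {180 - angle:.0f} degrees, "
          f"length ratio sqrt({n})")
```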
Finally, constructing root systems from Dynkin diagrams
Using the warp speed approach, we quickly got to the fun part of the game with many pictures and possibilities. We want to classify all root systems. Root systems are finite collections of (nonzero) roots in the \(\ell\)-dimensional space. The relative geometric relationship between any two roots is given by one of the propositions (0), (1), (2), (3) above.
First, we will choose a subset of \(\ell\) roots which may act as a basis of the Cartan subalgebra, \(\RR^\ell\). A convenient choice starts with the "positive roots". In some Cartesian coordinate system, they're roots whose "first" nonzero coordinate is positive; the basis consists of the "simple" positive roots, those that cannot be written as sums of other positive roots. Note that we have made many steps. From a given Lie group, we constructed a Lie algebra. We found its Cartan subalgebra. We found the roots. From the root system, we chose the positive roots.
Now, we will draw the Dynkin diagram for the root system.
The Dynkin diagram is a collection of \(\ell\) (rank) nodes (small circles) and every pair of nodes is connected by 0, 1, 2, 3 lines if the relative geometric position (and length ratio) of the two roots is given by the propositions (0), (1), (2), (3) above, respectively. Moreover, if the link between two roots is of the (2) or (3) type, we draw an arrow on the link between the two nodes. By convention, it's directed towards the shorter root, so that it can be interpreted as \(\lt\) or \(\gt\) for the lengths of the roots.
(Some people use the opposite convention but we can't do anything about human freedom.)
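If you want to compute the number of links mechanically, it is just the integer \(4\cos^2\omega\) from the previous section. A sketch with sample simple roots of \(B_2\) and \(G_2\) (my coordinate choices):

```python
import numpy as np

def links(a, b):
    """Lines between two Dynkin nodes: the integer 4 cos^2(omega)."""
    return round(4 * np.dot(a, b)**2 / (np.dot(a, a) * np.dot(b, b)))

# B2: a long simple root (1,-1) and a short one (0,1) => double link
print(links(np.array([1.0, -1.0]), np.array([0.0, 1.0])))               # 2
# G2: a short simple root (1,0) and a long one (-3/2, sqrt(3)/2) => triple link
print(links(np.array([1.0, 0.0]), np.array([-1.5, np.sqrt(3) / 2])))    # 3
```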
Our final task is to draw all possible Dynkin diagrams. That's a cool task. The result will be
It's my copyrighted invention to draw the \(E_\ell\) Dynkin diagrams so that they resemble the letter E, and to add the Penrose triangle to the right side. ;-)
The possible Dynkin diagrams are of the \(A,B,C,D,E,F,G\) type. \(A\) is special unitary, \(B\) is odd orthogonal, \(C\) is symplectic, \(D\) is even orthogonal, and \(E,F,G\) refer to the five exceptional groups. How do we understand that these Dynkin diagrams are possible and the only possible ones?
We will go through a cute algorithm by showing that the geometrically possible Dynkin diagrams can't have loops; triple links combined with any other links; double links with extra links on both sides, and that the diagrams with simple links only (the so-called simply laced diagrams) can't be more convoluted than the diagrams \(A,D,E\) above.
Let us begin. But first, let's dedicate one paragraph to a refinement of our strategy.
Our strategy will be to look at a "candidate" Dynkin diagram and to construct a vector, an integer linear combination of the positive roots (represented by the nodes of the diagram), whose squared length (computed via the inner product) is non-positive. Because any nontrivial combination of these basis vectors is a nonzero vector in the Euclidean space, its length has to be positive. So if we find out it's zero or negative, the Dynkin diagram just can't be realized by any roots in an actual Euclidean space!
It just happens that all the linear combinations we will really care about will produce vectors of vanishing length. We will write the coefficients in front of each positive root as labels next to each node. The fact that exactly one combination of the positive roots is a "zero norm" vector actually means that all these diagrams will be "extended Dynkin diagrams" appropriate for "affine Lie groups". These are infinite-dimensional groups in which the original Lie group element is chosen for each point on a closed string.
I wanted to draw these extended Dynkin diagrams with the coefficients (called "marks" in the diagram below) but I finally saved an hour after I found the complete product somewhere:
In all cases, the Dynkin numbering is just a particular way to label the nodes by numbers \(0,1,2,\dots ,\ell-1\): there are \(\ell\equiv n+1\) (rank) of them. (Here, \(n+1\) is the rank of a candidate Lie group we will prove not to exist, but by dropping the node #0, we always obtain a legit Lie group of a lower rank \(n\).) It's the marks we care about. Let's start with \(A_n\) which turns out to describe \(SU(n+1)\) of rank \(n\) once the node #0 is dropped. The extended Dynkin diagram is a loop. Let's take the roots \(R_0,R_1,\dots ,R_{n-1}\) around a loop of \(n\) nodes, construct the appropriate linear combination, and square this vector. We have\[
(R_0+R_1+\dots + R_{n-1})^2 = (n-n) (R_0)^2=0.
\] Why is it \(n-n\)? Because the distributive law tells us to sum up the squared terms plus the mixed products. There are \(n\) terms \((R_i)^2\) so they contribute the term \(+n\) times \((R_0)^2\). The mixed terms (inner products) only contribute for the adjacent vectors such as \(R_4\cdot R_5\); all the other pairs are orthogonal. The adjacent positive vectors are equally long and have 60° or 120° in between them so the inner product is \(-1/2\) times \((R_4)^2\). The sign is actually minus because, as we have assumed, both roots are positive and this, with some extra reasoning, translates to the negative inner product. A system of simple positive roots is one for which all the mixed inner products are negative and it's always possible to "fix" a basis composed of roots so that the condition is satisfied.
Fine. So we see that the linearly independent vectors add up to a nonzero vector of a vanishing length. This can't happen in the Euclidean space; it can only happen in an indefinite space so the \(n\) vectors describe the affine Dynkin diagram only. If we erase any node (and the links going to it), but we will erase the node with the Dynkin numbering 0 because this rule extends throughout the process, we get a legitimate ordinary Dynkin diagram, one for \(A_{n-1}\). It's a straight line with simple links.
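The same computation via the Gram matrix of the loop, as a sketch (normalizing \((R_i)^2=2\)):

```python
import numpy as np

n = 7                          # any loop of n >= 3 nodes joined by simple links
G = 2 * np.eye(n)              # Gram matrix, normalized so that (R_i)^2 = 2
for i in range(n):
    G[i, (i + 1) % n] = G[(i + 1) % n, i] = -1   # adjacent roots at 120 degrees

ones = np.ones(n)
print(ones @ G @ ones)                   # 0.0: the sum of all roots has zero norm
print(min(np.linalg.eigvalsh(G)))        # ~0: the Gram matrix is degenerate
```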
We learned that a legitimate Dynkin diagram can't include a loop. This is true for a loop with simple links but it's true for a loop with more complicated links, too. Why? Because whenever we have a no-go proof for a diagram with simple links only, we may always use it to exclude a more complicated diagram where some links are made double or triple, too. It's because in such a diagram with double or triple links, you may scale each root to have the same length, and because \(\cos\omega\) is even more negative for \(\omega=135^\circ\) or \(\omega=150^\circ\) than it is for \(\omega=120^\circ\), the "squared norm" will be even more negative. Also, if you add simple links to a pre-existing demonstrably impossible diagram, you will obtain an even more impossible diagram because the angle \(90^\circ\) whose cosine is zero also gets replaced by \(120^\circ\) whose cosine is (more) negative.
You may always attach the "marks" to a subdiagram only and run the same argument. We're learning that loops aren't possible in a normal Dynkin diagram.
Now, we have to deal with multiple links. They're possible but only with severe limitations. Let's start with triple links. Look at the extended Dynkin diagram for \(G_2\) above. It has a triple link and one more single link attached to one of the nodes hosting the triple link. The squared length of the combined vector is\[
(1R_0 + 2 R_1 + 3 R_2)^2 = (1+4+3-2-6)(R_0)^2 = 0.
\] The coefficients are the marks; the subscripts are the Dynkin numbering of the corresponding positive roots. Although the diagram used a different convention, the node #2 is \(\sqrt 3\) times shorter than the nodes #0, #1 (the different convention in the diagram is that the shorter roots are those with the filled nodes). The inner product of #0,#1 is \(-1/2\) times \((R_0)^2\), due to the relative angle 120°; the inner product of #1,#2 has the same value because the angle 150° combines with the \(\sqrt 3\) length ratio. However, the negative inner products must also be multiplied by \(2\) from the \((a+b)^2\) formula as well as by the product of the coefficients.
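The same arithmetic in Gram-matrix form, as a sketch (the normalization \((R_0)^2=(R_1)^2=2\), \((R_2)^2=2/3\) is my choice; the off-diagonal entries follow from the angles just quoted):

```python
import numpy as np

# extended G2: nodes 0,1 long with (R)^2 = 2, node 2 short with (R)^2 = 2/3
G = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0 / 3.0]])
marks = np.array([1, 2, 3])
print(marks @ G @ marks)       # ~0: the (1+4+3-2-6)(R_0)^2 computation
```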
We're learning that if there is a triple link, no other links can go to the longer root carrying the triple link. Well, the opposite case in which the diagram continues beyond the shorter root carrying the triple link is obviously also banned: this situation differs from the one we analyzed by replacing the node #2 with three times itself.
The conclusion is that the triple link can't mix with any other links. In other words, the \(G_2\) diagram with nothing else than a triple link between two nodes is the only diagram with a triple link! Note that \(G_2\) is the automorphism group of the octonions.
Now, we have to deal with the double links. First, we start with the extended Dynkin diagram for \(C_n\) on the picture above. Again, we take the combination, compute the squared length of the vector from the inner products and angles (you should really try to do these exercises if you've never done them before) and conclude that a combination of the vectors has zero norm. The lesson learned from \(C_n\) is that there may be at most one double link in a diagram. Drop the left node with the Dynkin numbering #0 and you will obtain a legitimate \(C_n\) Dynkin diagram for the \(USp(2n)\) symplectic group.
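Here is that exercise done as a sketch for the extended \(C_n\) chain, with my normalization \((R)^2=4\) for the long roots at the ends, \((R)^2=2\) for the short ones in between, and the marks \(1,2,\dots,2,1\):

```python
import numpy as np

n = 5                                  # extended C_n: a chain of n+1 nodes
G = 2 * np.eye(n + 1)                  # short roots in the middle: (R)^2 = 2
G[0, 0] = G[n, n] = 4                  # long roots at both ends: (R)^2 = 4
for i in range(n):
    # double links (angle 135 degrees) at both ends, simple links in between
    G[i, i + 1] = G[i + 1, i] = -2 if i in (0, n - 1) else -1

marks = np.array([1] + [2] * (n - 1) + [1])
print(marks @ G @ marks)               # 0.0: a second double link is fatal
```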
But even with one allowed double link, the options are very restricted, although less restricted than in the case of the triple links. Look at the \(F_4\) extended Dynkin diagram above. It tries to extend the Dynkin diagram too far on both sides of a double link. It implies that if the links continue on both sides, then 1 extra link on one side and 2 extra links on the other side are already too many. Again, this holds regardless of which side contains the longer roots.
It means that the only viable Dynkin diagram that continues on both sides from a double link is one in which there is one simple link on each side. That's the normal diagram of \(F_4\) that you again obtain by dropping the node #0 in the Dynkin numbering from the extended \(F_4\) diagram.
Finally, when it comes to the discussion of double links, we have to check diagrams that only continue on one side from a single double link. In this part of the classification, the extended Dynkin diagram for \(B_n\) tells us that this continuation of a double link can't contain any branches at all. Drop the left upper node #0 and you will again get a legitimate diagram, this time for \(B_n\). The legitimate \(B_n\) and \(C_n\) diagrams for \(SO(2n+1)\) and \(USp(2n)\) only differ by the orientation of the double link sitting at the end of a chain of simple links.
That has restricted the diagrams with a double link (there can only be one) to \(B_n,C_n,F_4\). And we're left to look at diagrams with simple links only ? the simply-laced diagrams ? that represent the "ADE classification" subset of our exercise. I have already said that there can't be loops, from the extended Dynkin diagram of \(A_n\).
However, when all multiple links are banned, there can be trees. They're somewhat restricted. First, look at the extended Dynkin diagram for \(D_4=SO(8)\) in the middle. It contains a tree with a "quartic vertex" and it's already prohibited. So "nonlinear tree" graphs may have at most cubic vertices. However, as extended \(D_n=SO(2n)\) diagrams further show, there can't be more than one cubic vertex in a normal Dynkin diagram. So let's look at simply laced diagrams with a single cubic vertex.
The extended diagram for \(E_6\) (again, a calculation of a squared norm from inner products awaits you here) implies that the minimum length of a branch among the three branches must be less than two. If all three branches have length equal to two or longer, you will again discover impossible vectors. Drop the upper node #0 from the extended Dynkin diagram and you will get an allowed diagram for \(E_6\) whose branches have lengths 1,2,2.
Fine. This \(E_6\) stop signal covered all attempts to make all three branches of the cubic vertex too long. Let's assume that one of the branches is short, consisting of one simple link only. Then the extended \(E_7\) diagram on the picture implies that at least one of the remaining two lengths must be shorter than three links. If both of them are three, it's already impossible. Drop the left node #0 from the \(E_7\) extended Dynkin diagram to get the normal Dynkin diagram whose branch lengths are 1,2,3.
Finally, we may accept that the shortest branch has 1 link, as suggested by the \(E_6\) stop signal; the middle-length branch has 2 links, as suggested by the \(E_7\) stop signal. But we may still ask what the length of the remaining branch may be. The \(E_8\) extended Dynkin diagram answers this question: if the maximum branch length is 5, it's already too much. It may be at most 4, as in the normal \(E_8\) Dynkin diagram you get by dropping the node #0 from the extended one.
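All the branch-length verdicts can be checked at once: build the Gram matrix of a star-shaped simply laced diagram and test whether it's positive definite. A sketch (my parametrization: three chains of \(p,q,r\) extra nodes attached to a central node, all roots with \((R)^2=2\)):

```python
import numpy as np

def gram_star(p, q, r):
    """Gram matrix of a simply laced star: three chains of p, q, r nodes
    attached to a central node, all roots with (R)^2 = 2."""
    n = 1 + p + q + r
    G = 2 * np.eye(n)
    idx = 1
    for length in (p, q, r):
        prev = 0                       # every chain starts at the center
        for _ in range(length):
            G[prev, idx] = G[idx, prev] = -1
            prev, idx = idx, idx + 1
    return G

for branches in [(1, 2, 2), (2, 2, 2), (1, 2, 3), (1, 3, 3), (1, 2, 4), (1, 2, 5)]:
    positive = min(np.linalg.eigvalsh(gram_star(*branches))) > 1e-9
    print(branches, "realizable" if positive else "impossible")
# (2,2,2), (1,3,3), (1,2,5) are the extended E6, E7, E8 diagrams and fail;
# (1,2,2), (1,2,3), (1,2,4) are the honest E6, E7, E8 and pass
```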
This classifies all possibilities. Now, you should really check all the inner products I just announced. ;-)
It is remarkable that the list of simple compact Lie groups is so concise yet so nontrivial. Note that "non-simple" compact Lie groups may be written as direct products and the corresponding diagrams are "disconnected" into several pieces. All the compact Lie groups may be imagined to be particular subgroups of some \(SO(M)\) group for a large enough \(M\).
The definition of a compact simple Lie group, which is a somewhat rudimentary mathematical structure (and not one allowing a complete description of physical systems capable of intelligent life), led to this interesting classification problem. I view string theory, and the classification of its solutions (or the "landscape"), as the most complete mature counterpart of the child-like toy model that was presented in this article.
Source: http://motls.blogspot.com/2012/10/classification-of-simple-compact-lie.html