Concept Deep Dive: Representation Theory

This is one of those topics that disappeared somewhere down the maelstrom of introduction to linear algebra, but becomes important further down the line. Representation theory reduces abstract algebra problems to linear algebra problems by realizing the elements of a group (or algebra) as linear maps on a vector space, so that very different algebraic objects can be studied with the same matrix machinery. It seeks to reduce the object down to a set of elements that span the object with their linear combinations. We know this intuitively from vector spaces as a basis. Something analogous exists for Lie algebras, for example, though a basis requires that all its elements are linearly independent, whereas a generating set does not.

Representations

A representation of some space can be defined either through an action, or through a map. Let's first go through the map, as it's a concept we're already familiar with. Given a vector space V over a field F and a group G, we assign to each g ∈ G an invertible linear map ϕ(g): V → V, such that ϕ(gh) = ϕ(g)ϕ(h) for all g, h ∈ G. This makes the representation ϕ a group homomorphism from G into GL(V, F), where F is the field that V is a vector space over. This is probably the easiest of the descriptions available. The others would have us delve into endomorphism algebras and/or Lie algebras, both of which are better situated in their own parts. It's a very abstract way of describing a representation, but it's a short definition. You win some, you lose some, I guess.
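As a minimal sketch of this definition (the group, the space, and all names here are my own choices, not from the text): the cyclic group ℤ/4 represented on ℝ² by quarter-turn rotations.

```python
import numpy as np

def phi(k):
    """phi(k): rotation by k * 90 degrees -- the image of k in Z/4."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Homomorphism property: phi(g + h mod 4) == phi(g) phi(h)
for g in range(4):
    for h in range(4):
        assert np.allclose(phi((g + h) % 4), phi(g) @ phi(h))

# Each phi(g) is invertible (det = 1), so phi really lands in GL(2, R).
assert all(np.isclose(np.linalg.det(phi(g)), 1.0) for g in range(4))
print("phi is a homomorphism Z/4 -> GL(2, R)")
```

The same pattern works for any finite cyclic group: send the generator to a rotation by 2π/n.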

The definition through action characterizes the map ϕ: G × V → V, where G is either a group or an algebra acting on a vector space V. We embed another map inside it by setting ϕ(g): V → V, v ↦ ϕ(g, v), which is required to be linear over the field F. We also require ϕ to respect the identity element and to be compatible with the group operation (most commonly a group product). A lot of what these requirements posit is automatically fulfilled by associative algebras and Lie algebras. For Lie algebras, we thus only need to require compatibility with the Lie bracket. This definition is easier to grasp, but it comes with a list and a switch case. Despite this, I think I prefer it over the map definition.
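Spelled out (in notation matching the text; the last line is the usual bracket-compatibility condition for the Lie-algebra case), the requirements read:

```latex
\begin{align*}
\phi(e, v) &= v && \text{(identity)}\\
\phi(g, \phi(h, v)) &= \phi(gh, v) && \text{(compatibility with the product)}\\
\phi([x, y], v) &= \phi(x, \phi(y, v)) - \phi(y, \phi(x, v)) && \text{(Lie-algebra case)}
\end{align*}
```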

For us to make decent claims, we want to minimize our objects as far as we can. Representations can be minimized down to an irreducible form. An irreducible representation is a nonzero representation without nontrivial subrepresentations. To unpack this, first we want to define subrepresentations. Take (V, ϕ) to be some representation of G, and W a subspace of V that is preserved by the (group-)action of G. W then becomes a subrepresentation through Ψ: G → Aut(W). This restricts ϕ onto W, but (W, Ψ) does actually suffice as a representation of G, even though it's smaller. We can embed W into V through an equivariant map. A representation then is irreducible if it only has itself and the trivial subspace as subrepresentations.

Since we're working with groups and algebras, there is the occasional case in which the representation can be completely disassembled into irreducible representations. We call such representations completely reducible. This occurs for finite groups (as one might expect, provided the field's characteristic doesn't divide the group order); for compact groups, which are topologically compact and a generalization of finite groups anyways; and for semi-simple Lie algebras. Semi-simplicity is the most general case, in which an object can be completely decomposed into nontrivial, simple (or "elementary") subobjects. We of course combine representations using the tensor product (and the direct sum).
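In symbols, complete reducibility says the representation splits into a direct sum of irreducibles (standard notation, not fixed by the text above):

```latex
V \;\cong\; \bigoplus_{i} V_i^{\oplus n_i},
\qquad V_i \ \text{irreducible},\quad n_i \in \mathbb{N}.
```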

Representation Theory of Finite Groups

There are two principal cases I want to look at here: the case of finite groups, because they are the most elementary, and the case of Lie algebras, because they are the most useful to me in modern theoretical physics. Obviously there is quite a jump between the two, but the methodology is more or less the same. As a nice bonus, the case of the Lie algebras will have a nice bit of praxis attached.

We begin with a group G and a field whose characteristic doesn't divide |G|. I know that this series is massively set off from the Math studies series that I do weekly, but there's going to be a piece on field extensions that tackles what that means. Anyways, take a representation ϕ of a subgroup H of G on a space W with basis {w}, and let S = {x} be a left transversal to H in G. A left transversal is a subset of the group that contains exactly one element of each left coset of H. We can construct a basis {x ⊗ w} for a space V. A basis element only contributes to a diagonal entry of the induced representation if gx = xh for some h ∈ H, which implies that ϕ(g)(x ⊗ w) = x ⊗ ϕ(h)(w). Summing the diagonal contributions yields the induced character χ',

χ'(g) = Σ_{x ∈ S} χ̇(x⁻¹gx),   where χ̇ = χ on H and χ̇ = 0 on G − H,

with χ the character of ϕ on H. If the reader is familiar with Frobenius reciprocity, then this is where it comes from. In this context, it equates the inner product of the induced character with a character ψ of G to the inner product of the original character with the restriction of ψ to H: ⟨Ind χ, ψ⟩_G = ⟨χ, Res ψ⟩_H. This is the most general case I'm aware of, so the implementation might seem a little garbled, because the objects are very non-trivial. It's not really the point I wanted to go for here, but just a nice bonus I found.
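As a sanity check, here is a small sketch (the group, subgroup, and all names are my own choices): inducing the trivial character from the two-element subgroup H = {e, (0 1)} of S₃, then verifying Frobenius reciprocity against the trivial character of G.

```python
from itertools import permutations

# Elements of S3 as tuples: p maps i -> p[i].
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))        # p after q
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)

H = [e, (1, 0, 2)]                  # subgroup generated by the transposition (0 1)
S = [e, (1, 2, 0), (2, 0, 1)]       # left transversal: one element per left coset

chi = {h: 1 for h in H}             # trivial character of H
chi_dot = lambda g: chi.get(g, 0)   # extended by zero outside H

def chi_ind(g):
    """Induced character: chi'(g) = sum over x in S of chi_dot(x^-1 g x)."""
    return sum(chi_dot(compose(compose(inverse(x), g), x)) for x in S)

# chi'(e) = [G:H] = 3; transpositions give 1; 3-cycles give 0.
assert chi_ind(e) == 3
assert chi_ind((1, 0, 2)) == 1
assert chi_ind((1, 2, 0)) == 0

# Frobenius reciprocity against the trivial character psi = 1 of G:
# <Ind chi, psi>_G should equal <chi, Res psi>_H.
lhs = sum(chi_ind(g) for g in G) / len(G)
rhs = sum(chi[h] for h in H) / len(H)
assert lhs == rhs == 1
print("Frobenius reciprocity checks out:", lhs)
```

Here the induced character is exactly the permutation character of S₃ acting on the three cosets of H, which is a nice way to see what induction does geometrically.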

Representation Theory of Lie Algebras

A Lie algebra is a vector space equipped with a Lie bracket; it typically arises as the tangent space of a continuous (Lie) group at the identity. The bracket might be a Poisson bracket or, most of the time, a commutator. We use the commutator for quantizations.

On the space L, a representation is a linear map ϕ: L → B(V) into the operators on some vector space V that preserves the bracket:

ϕ([X, Y]) = [ϕ(X), ϕ(Y)] = ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X).

This is more or less what we know again. We can find a number of interesting examples. For the 1-dimensional case, the trivial representation fulfills this: it maps ϕ(T) = 0 for every T. The Pauli matrices fulfill this in 2 dimensions, where ϕ sends the generators T₁, T₂, T₃ to (multiples of) the standard Pauli matrices σ₁, σ₂, σ₃. There are of course also three-dimensional analogues of the Pauli matrices (the spin-1 matrices), which give a representation in 3 dimensions. It's also possible to embed the 2-dimensional set of Pauli matrices into 3-dimensional matrices by writing them in block-diagonal form. This is an example of a reducible representation.
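A sketch of both claims (the convention ϕ(Tₐ) = σₐ/2 is my choice, the usual one for su(2)): the halved Pauli matrices satisfy [Tₐ, T_b] = i ε_abc T_c, and padding them into 3×3 block-diagonal matrices gives a reducible 3-dimensional representation.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / 2 for s in (s1, s2, s3)]            # phi(T_a) = sigma_a / 2

comm = lambda A, B: A @ B - B @ A
# su(2) relations: [T1, T2] = i T3 and cyclic permutations.
assert np.allclose(comm(T[0], T[1]), 1j * T[2])
assert np.allclose(comm(T[1], T[2]), 1j * T[0])
assert np.allclose(comm(T[2], T[0]), 1j * T[1])

# Block-diagonal embedding into 3x3 matrices: still a representation ...
T3d = [np.block([[t, np.zeros((2, 1))],
                 [np.zeros((1, 2)), np.zeros((1, 1))]]) for t in T]
assert np.allclose(comm(T3d[0], T3d[1]), 1j * T3d[2])

# ... but reducible: the subspace spanned by e3 = (0, 0, 1) is invariant.
e3 = np.array([0, 0, 1], dtype=complex)
assert all(np.allclose(t @ e3, 0) for t in T3d)
print("su(2) relations hold; the 3d block form is reducible")
```

The padded representation is just the direct sum of the 2-dimensional representation and the trivial one, which is exactly what block-diagonal form means.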

For Lie algebras, the decomposition can be written as a more explicit formula. We apply what we already took from the finite group, but make explicit as much as we can. The sum over diagonal entries becomes a trace, and the group element takes an exponential form, so the character reads

χ(e^X) = Tr(e^{ϕ(X)}).

We use the fact that the tensor product of representations translates into a product of characters: χ_{V⊗W} = χ_V · χ_W.
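A quick numerical sketch of the product rule (the example matrices are arbitrary choices of mine): on V ⊗ W an element X acts as the Kronecker sum A ⊗ I + I ⊗ B, whose exponential factorizes as e^A ⊗ e^B, so the traces multiply.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (fine for generic matrices)."""
    vals, vecs = np.linalg.eig(M)
    return vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))       # action on V
B = rng.normal(size=(3, 3))       # action on W

I2, I3 = np.eye(2), np.eye(3)
kron_sum = np.kron(A, I3) + np.kron(I2, B)   # action on V (x) W

lhs = np.trace(expm(kron_sum))               # character of the tensor product
rhs = np.trace(expm(A)) * np.trace(expm(B))  # product of the characters
assert np.isclose(lhs, rhs)
print("Tr e^(A (+) B) = Tr e^A * Tr e^B")
```

The key fact used here is that A ⊗ I and I ⊗ B commute, so the exponential of their sum splits into a product.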

Let's apply that to the examples. A here is the diagonal matrix with 1/2 on the diagonal.
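Under one possible reading (my assumption: A = ϕ(T₃) = σ₃/2 = diag(1/2, −1/2), the spin-1/2 generator), the character of e^{iθT₃} comes out as the familiar 2·cos(θ/2):

```python
import numpy as np

# Assumption (mine): A = diag(1/2, -1/2), i.e. sigma_3 / 2 for spin-1/2.
A = np.diag([0.5, -0.5])

def character(theta):
    """chi(theta) = Tr exp(i * theta * A); A is diagonal, so exponentiate entrywise."""
    return np.sum(np.exp(1j * theta * np.diag(A)))

# e^{i theta/2} + e^{-i theta/2} = 2 cos(theta/2)
for theta in np.linspace(0, 4 * np.pi, 9):
    assert np.isclose(character(theta), 2 * np.cos(theta / 2))
print("chi(theta) = 2 cos(theta/2) for the spin-1/2 representation")
```

Note the 4π periodicity of the character, the usual signature of a spin-1/2 object.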

Previous: Concept Deep Dive: Green's Functions
Next: Concept Deep Dive: Noether's Theory, Symmetries and Operators