Hopf algebras in noncommutative geometry

Joseph C. Várilly
Escuela de Matemática, Universidad de Costa Rica, 11501 San José, Costa Rica

From: Geometric and Topological Methods for Quantum Field Theory (World Scientific, Singapore, March 2003), pp. 1–85

Abstract

We give an introductory survey to the use of Hopf algebras in several problems of noncommutative geometry. The main example, the Hopf algebra of rooted trees, is a graded, connected Hopf algebra arising from a universal construction. We show its relation to the algebra of transverse differential operators introduced by Connes and Moscovici in order to compute a local index formula in cyclic cohomology, and to the several Hopf algebras defined by Connes and Kreimer to simplify the combinatorics of perturbative renormalization. We explain how characteristic classes for a Hopf module algebra can be obtained from the cyclic cohomology of the Hopf algebra which acts on it. Finally, we discuss the theory of noncommutative spherical manifolds and show how they arise as homogeneous spaces of certain compact quantum groups.
Contents

1 Noncommutative Geometry and Hopf Algebras
  1.1 The algebraic tools of noncommutative geometry
  1.2 Hopf algebras: introduction
  1.3 Hopf actions of differential operators: an example
2 The Hopf Algebras of Connes and Kreimer
  2.1 The Connes–Kreimer algebra of rooted trees
  2.2 Hopf algebras of Feynman graphs and renormalization
3 Cyclic Cohomology
  3.1 Hochschild and cyclic cohomology of algebras
  3.2 Cyclic cohomology of Hopf algebras
4 Noncommutative Homogeneous Spaces
  4.1 Chern characters and noncommutative spheres
  4.2 How Moyal products yield compact quantum groups
  4.3 Isospectral deformations of homogeneous spin geometries
References

Introduction

These are lecture notes for a course given at the Summer School on Geometric and Topological Methods for Quantum Field Theory, sponsored by the Centre International de Mathématiques Pures et Appliquées (CIMPA) and the Universidad de Los Andes, at Villa de Leyva, Colombia, from the 9th to the 27th of July, 2001.

These notes explore some recent developments which place Hopf algebras at the heart of the noncommutative approach to geometry and physics. Many examples of Hopf algebras are known from the literature on “quantum groups”, some of which provide algebraic deformations of the classical transformation groups. The main emphasis here, however, is on certain other Hopf algebras which have recently appeared in two seemingly unrelated contexts: in the combinatorics of perturbative renormalization in quantum field theories, and in connection with local index formulas in noncommutative geometry. These Hopf algebras act on “noncommutative spaces”, and certain characteristic classes for these spaces can be obtained, by a canonical procedure, from corresponding invariants of the Hopf algebras.
This comes about by pulling back the cyclic cohomology of the algebra representing the noncommutative space, which is the receptacle of Chern characters, to another cohomology of the Hopf algebra. Recently, some interesting spaces have been discovered, the noncommutative spheres, which are completely specified by certain algebraic relations. They turn out to be homogeneous spaces under the action of certain Hopf algebras: in this way, these Hopf algebras appear as “quantum symmetry groups”. We shall show how these symmetries arise from a class of quantum groups built from Moyal products on group manifolds.

Section 1 is introductory: it offers a snapshot of noncommutative geometry and the basic theory of Hopf algebras; as an example of how both theories interact, we exhibit the Connes–Moscovici Hopf algebra of differential operators in the one-dimensional case. Section 2 concerns the Hopf algebras which have been found useful in the perturbative approach to renormalization. We develop at length a universal construction, the Connes–Kreimer algebra of rooted trees, which is a graded, commutative, but highly noncocommutative Hopf algebra. Particular quantum field theories give rise to related Hopf algebras of Feynman graphs; we discuss briefly how these give a conceptual approach to the renormalization problem. The third section gives an overview of cyclic cohomology for both associative and Hopf algebras, indicating how the latter provide characteristic classes for associative algebras on which they act. The final Section 4 explains how cyclic-homology Chern characters lead to new examples of noncommutative spin geometries, whose symmetry groups are compact quantum groups obtained from the Moyal approach to prequantization.

1 Noncommutative Geometry and Hopf Algebras

Noncommutative geometry, in the broadest sense, is the study of geometrical properties of singular spaces, by means of suitable “coordinate algebras” which need not be commutative.
If the space in question is a differential manifold, its coordinates form a commutative algebra of smooth functions; but even in this case, adding a metric structure may involve operators which do not commute with the coordinates. One learns to replace the usual calculus of points, paths, integration domains, etc., by an alternative language involving the algebra of coordinates; by focusing only on those features which do not require that the coordinates commute, one arrives at an algebraic (or operatorial) approach which is applicable to many singular spaces also.

1.1 The algebraic tools of noncommutative geometry

The first step is to replace a topological space 𝑋 by its algebra of complex-valued continuous functions 𝐶(𝑋). If 𝑋 is a compact (Hausdorff) space, then 𝐶(𝑋) is a commutative 𝐶∗-algebra with unit 1 and its norm ∥𝑓∥ := sup𝑥∈𝑋 |𝑓(𝑥)| satisfies the 𝐶∗-property ∥𝑓∥2 = ∥𝑓∗𝑓∥. The first Gelfand–Naı̆mark theorem [48] says that any commutative unital 𝐶∗-algebra 𝐴 is of this form: 𝐴 = 𝐶(𝑋) where 𝑋 = 𝑀(𝐴) is the space of characters (nonzero homomorphisms) 𝜇 : 𝐴 → ℂ, which is compact in the weak* topology determined by the maps 𝜇 ↦→ 𝜇(𝑎), for 𝑎 ∈ 𝐴. Indeed, the characters of 𝐶(𝑋) are precisely the evaluation maps 𝜀𝑥 : 𝑓 ↦→ 𝑓(𝑥) at each point 𝑥 ∈ 𝑋. We shall mainly deal with the compact case in what follows. A locally compact, but noncompact, space 𝑌 can be handled by passing to a compactification (that is, a compact space in which 𝑌 can be densely embedded). For instance, we can adjoin one “point at infinity”: if 𝑋 = 𝑌 ⊎ {∞}, then { 𝑓 ∈ 𝐶(𝑋) : 𝑓(∞) = 0 } is isomorphic to 𝐶0(𝑌), the commutative 𝐶∗-algebra of continuous functions on 𝑌 “vanishing at infinity”; thus, by dropping the constant functions from 𝐶(𝑋), we get the commutative nonunital 𝐶∗-algebra 𝐶0(𝑌) as a stand-in for the locally compact space 𝑌.
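As a finite-dimensional sanity check (a toy model of ours, not part of the text), take 𝑋 to be a set of 𝑛 points, so that 𝐶(𝑋) ≅ ℂⁿ with pointwise operations and the sup norm; the 𝐶∗-property and the fact that evaluation maps are unital multiplicative functionals can then be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: X = {0, ..., n-1} finite, so C(X) = C^n with pointwise
# operations and the sup norm.  (All names here are ours.)
n = 5
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n)

sup_norm = lambda h: np.max(np.abs(h))

# C*-property: ||f||^2 = ||f* f|| for the sup norm.
assert np.isclose(sup_norm(f) ** 2, sup_norm(np.conj(f) * f))

# Evaluation eps_x(f) = f(x) is a character: linear, multiplicative, unital.
x = 2
eps = lambda h: h[x]
assert np.isclose(eps(f * g), eps(f) * eps(g))   # multiplicative
assert np.isclose(eps(np.ones(n)), 1.0)          # unital
```

This only illustrates the easy half of the Gelfand–Naı̆mark theorem (evaluations are characters); the converse, that every character is an evaluation, is the substance of the theorem.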
There is also a maximal compactification 𝛽𝑌 := 𝑀(𝐶𝑏(𝑌)), called the Stone–Čech compactification, namely, the character space of the (unital) 𝐶∗-algebra of bounded continuous functions on 𝑌.

The construction 𝑋 ↦→ 𝐶(𝑋) yields a contravariant functor: to each continuous map ℎ : 𝑋1 → 𝑋2 between compact spaces there is a morphism 𝜑ℎ : 𝐶(𝑋2) → 𝐶(𝑋1) given by 𝜑ℎ(𝑓) := 𝑓 ◦ ℎ. (By a morphism of unital 𝐶∗-algebras we mean a ∗-homomorphism preserving the units.) By relaxing the commutativity requirement, we can regard noncommutative 𝐶∗-algebras (unital or not) as proxies for “noncommutative locally compact spaces”. The characters, if any, of such an algebra may be said to label “classical points” of the corresponding noncommutative space. However, noncommutative 𝐶∗-algebras generally have few characters, so these putative spaces will have correspondingly few points. The recommended course of action, then, is to leave these pointless spaces behind and to adopt the language and techniques of algebras instead. There is a second Gelfand–Naı̆mark theorem [48], which states that any 𝐶∗-algebra, commutative or not, can be faithfully represented as a (norm-closed) algebra of bounded operators on a Hilbert space. The data for a “noncommutative topology” consist, then, of a pair (𝐴, H) where H is a Hilbert space and 𝐴 is a closed subalgebra of L(H).

▶ Vector bundles over a compact space also have algebraic counterparts. If 𝑋 is compact and 𝜋 : 𝐸 → 𝑋 is a complex vector bundle, the space Γ(𝑋, 𝐸) of continuous sections is naturally a module over 𝐶(𝑋), which is necessarily of the form 𝑒𝐶(𝑋)𝑚, where 𝑒 = 𝑒2 ∈ 𝑀𝑚(𝐶(𝑋)) is an idempotent matrix of elements of 𝐶(𝑋). More generally, if 𝐴 is any algebra over ℂ, a right 𝐴-module of the form 𝑒𝐴𝑚 with 𝑒 = 𝑒2 ∈ 𝑀𝑚(𝐴) is called a finitely generated projective module over 𝐴. The Serre–Swan theorem [99] matches vector bundles over 𝑋 with finitely generated projective modules over 𝐶(𝑋).
The idempotent 𝑒 may be constructed from the transition functions of the vector bundle by pulling back a standard idempotent from a Grassmannian bundle: see [45, §1.1] or [52, §2.1] for details. A more concrete example is that of the tangent bundle over a compact Riemannian manifold 𝑀: by the Nash embedding theorem [101, Thm 14.5.1], one can embed 𝑀 in some ℝ𝑚 so that the metric on 𝑇𝑀 is obtained from the ambient Euclidean metric; if 𝑒(𝑥) is the orthogonal projector on ℝ𝑚 with range 𝑇𝑥𝑀, then 𝑒 = 𝑒2 ∈ 𝑀𝑚(𝐶(𝑀)) and the module Γ(𝑀, 𝑇𝑀) of vector fields on 𝑀 may be identified with the range of 𝑒. In the noncompact case, one can use Rennie’s nonunital version of the Serre–Swan theorem [84]: 𝐶0(𝑌)-modules of the form 𝑒𝐶(𝑋)𝑚, where 𝑋 is some compactification of 𝑌 and 𝑒 = 𝑒2 ∈ 𝑀𝑚(𝐶(𝑋)), consist of sections vanishing at infinity (i.e., outside of 𝑌) of vector bundles 𝐸 → 𝑋. One can take 𝑋 to be the one-point compactification of 𝑌 only if 𝐸 is trivial at infinity; as a rule, the compactification to be used depends on the problem at hand.

If 𝐴 is a 𝐶∗-algebra, we may replace 𝑒 by an orthogonal projector (i.e., a selfadjoint idempotent) 𝑝 = 𝑝∗ = 𝑝2 so that 𝑒𝐴𝑚 ≃ 𝑝𝐴𝑚 as right 𝐴-modules. If 𝐴 is faithfully represented by bounded operators on a Hilbert space H, then 𝑀𝑚(𝐴) is an algebra of bounded operators on H𝑚 = H ⊕ · · · ⊕ H (𝑚 times), so with respect to the decomposition 𝑒H𝑚 ⊕ (1 − 𝑒)H𝑚 we can schematically write 𝑒 = ( 1 𝑥 ; 0 0 ) as a block matrix; then 𝑝 := ( 1 0 ; 0 0 ) is the range projector on 𝑒H𝑚. The correspondence 𝐸 ↦→ Γ(𝑋, 𝐸) is a covariant functor which carries topological invariants of 𝑋 to algebraic invariants of 𝐶(𝑋).
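The passage from the idempotent 𝑒 to the projector 𝑝 with the same range can be illustrated numerically (a sketch with toy data of our own choosing): starting from a non-selfadjoint idempotent in 𝑀4(ℂ), the orthogonal projector onto its range satisfies 𝑝 = 𝑝∗ = 𝑝2 together with 𝑝𝑒 = 𝑒 and 𝑒𝑝 = 𝑝.

```python
import numpy as np

rng = np.random.default_rng(1)

# A non-selfadjoint idempotent e in M_4: conjugate the standard
# rank-2 projector by an invertible, non-unitary matrix.
v = rng.standard_normal((4, 4))
e = v @ np.diag([1.0, 1.0, 0.0, 0.0]) @ np.linalg.inv(v)
assert np.allclose(e @ e, e)                # idempotent ...
assert not np.allclose(e, e.conj().T)       # ... but not selfadjoint

# p = orthogonal projector onto range(e), built from an orthonormal
# basis of the range (left singular vectors for nonzero singular values).
u, s, _ = np.linalg.svd(e)
k = int(np.sum(s > 1e-10))                  # rank of e (here k = 2)
q = u[:, :k]
p = q @ q.conj().T
assert np.allclose(p @ p, p) and np.allclose(p, p.conj().T)

# Same range: p acts as the identity on range(e), and p maps into it.
assert np.allclose(p @ e, e) and np.allclose(e @ p, p)
```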
In particular, it identifies the 𝐾-theory group 𝐾0(𝑋), formed by stable equivalence classes of vector bundles where [𝐸] + [𝐹] := [𝐸 ⊕ 𝐹] (here ⊕ denotes Whitney sum of vector bundles over 𝑋), with the group 𝐾0(𝐶(𝑋)) formed by stable equivalence classes of matrix projectors over 𝐶(𝑋) where [𝑝] + [𝑞] := [𝑝 ⊕ 𝑞] and ⊕ now denotes direct sum of projectors. The 𝐾-theory of 𝐶∗-algebras may be developed in an operator-theoretic way, see [8, 76, 108] and [52, Chap. 3], for instance, or purely algebraically, and the group 𝐾0(𝐴) turns out to be the same in both approaches. (However, the group 𝐾1(𝐴), formed by classes of unitaries in 𝑀𝑚(𝐴), does not coincide with the algebraic 𝐾1-group in general: see, for instance, [95] or [52, p. 131].) The salient feature of both topological and 𝐶∗-algebraic 𝐾-theories is Bott periodicity, which says that two 𝐾-groups are enough: although one can define 𝐾𝑗(𝐴) in a systematic way for any 𝑗 ∈ ℕ, it turns out that 𝐾𝑗+2(𝐴) ≃ 𝐾𝑗(𝐴) by natural isomorphisms (in marked contrast to the case of purely algebraic 𝐾-theory).

▶ To deal with a (compact) differential manifold 𝑀 (in these notes, we only treat differential manifolds without boundary), we replace the continuous functions in 𝐶(𝑀) by the dense subalgebra of smooth functions A = 𝐶∞(𝑀). This is no longer a 𝐶∗-algebra, but it is complete in its natural topology (that of uniform convergence of functions, together with their derivatives of all orders), so it is a Fréchet algebra with a 𝐶∗-completion. Likewise, given a vector bundle 𝐸 → 𝑀, we replace the continuous sections in Γ(𝑀, 𝐸) by the A-module of smooth sections Γ∞(𝑀, 𝐸). The Serre–Swan theorem continues to hold, mutatis mutandis, in the smooth category.
In the noncommutative case, with no differential structure a priori, we need to replace the 𝐶∗-algebra 𝐴 by a subalgebra A which should (a) be dense in 𝐴; (b) be a Fréchet algebra, that is, it should be complete under some countable family of seminorms including the original 𝐶∗-norm of 𝐴; and (c) satisfy 𝐾0(A) ≃ 𝐾0(𝐴). This last condition is not automatic: it is necessary that A be a pre-𝐶∗-algebra, that is to say, it should be stable under the holomorphic functional calculus (which is defined in the larger algebra 𝐴). The proof of (c) for pre-𝐶∗-algebras is given in [10]; see also [52, §3.8].

▶ The next step is to find an algebraic description of a Riemannian metric on a smooth manifold. This can be done in a principled way through a theory of “noncommutative metric spaces” at present under construction by Rieffel [91–94]. But here we shall take a short cut, by defining metrics only over spin manifolds, using the Dirac operator as our instrument; this was, indeed, the original insight of Connes [23]. A metric 𝑔 = [𝑔𝑖𝑗] on the tangent bundle 𝑇𝑀 of a (compact) manifold 𝑀 yields a contragredient metric 𝑔−1 = [𝑔𝑟𝑠] on the cotangent bundle 𝑇∗𝑀; so we can build a Clifford algebra bundle ℂℓ(𝑀) → 𝑀, whose fibre at 𝑥 is Cℓ((𝑇∗𝑥𝑀)ℂ, 𝑔−1𝑥), by imposing a suitable product structure on the complexified exterior bundle (Λ•𝑇∗𝑀)ℂ. We assume that 𝑀 supports a spinor bundle 𝑆 → 𝑀, on which ℂℓ(𝑀) acts fibrewise and irreducibly; on passing to smooth sections, we may write 𝑐(𝛼) for the Clifford action of a 1-form 𝛼 on spinors. The spinor bundle comes equipped with a Hermitian metric, so the squared norm ∥𝜓∥2 := ∫𝑀 |𝜓(𝑥)|2 √(det 𝑔) 𝑑𝑥 makes sense; the completion of Γ∞(𝑀, 𝑆) in this norm is the Hilbert space H = 𝐿2(𝑀, 𝑆) of square-integrable spinors. Locally, we may write the Clifford action of 1-forms as 𝑐(𝑑𝑥𝑟) := ℎ𝑟𝛼 𝛾𝛼, where the “gamma matrices” 𝛾𝛼 satisfy 𝛾𝛼𝛾𝛽 + 𝛾𝛽𝛾𝛼 = 2𝛿𝛼𝛽 and the coefficients ℎ𝑟𝛼 are real and obey ℎ𝑟𝛼 𝛿𝛼𝛽 ℎ𝑠𝛽 = 𝑔𝑟𝑠.
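The local Clifford relations can be checked in a two-dimensional toy example (our own choices: Pauli matrices as gamma matrices and a diagonal metric at one point); the computation 𝑐(𝑑𝑥𝑟)𝑐(𝑑𝑥𝑠) + 𝑐(𝑑𝑥𝑠)𝑐(𝑑𝑥𝑟) = 2𝑔𝑟𝑠 then follows from ℎ𝛿ℎᵗ = 𝑔−1:

```python
import numpy as np

# Dimension 2: use the Pauli matrices as gamma matrices.
g1 = np.array([[0, 1], [1, 0]], dtype=complex)
g2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
gammas = [g1, g2]

# Clifford relation: gamma^a gamma^b + gamma^b gamma^a = 2 delta^{ab}.
for a in range(2):
    for b in range(2):
        anti = gammas[a] @ gammas[b] + gammas[b] @ gammas[a]
        assert np.allclose(anti, 2 * (a == b) * np.eye(2))

# A toy diagonal metric g at a point (our choice), and coefficients h
# with h delta h^T = g^{-1}.
g = np.diag([4.0, 9.0])
ginv = np.linalg.inv(g)
h = np.diag([0.5, 1.0 / 3.0])
assert np.allclose(h @ h.T, ginv)

# Clifford action c(dx^r) = h^r_alpha gamma^alpha then satisfies
# c(dx^r) c(dx^s) + c(dx^s) c(dx^r) = 2 g^{rs}.
c = [sum(h[r, al] * gammas[al] for al in range(2)) for r in range(2)]
for r in range(2):
    for s in range(2):
        anti = c[r] @ c[s] + c[s] @ c[r]
        assert np.allclose(anti, 2 * ginv[r, s] * np.eye(2))
```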
The Dirac operator is locally defined as

/𝐷 := −𝑖 𝑐(𝑑𝑥𝑟) (𝜕/𝜕𝑥𝑟 − 𝜔𝑟),   (1.1)

where 𝜔𝑟 = (1/4) Γ̃𝛽𝑟𝛼 𝛾𝛼𝛾𝛽 are components of the spin connection, obtained from the Christoffel symbols Γ̃𝛽𝑟𝛼 (in an orthogonal basis) of the Levi-Civita connection. The manifold 𝑀 is spin whenever these local formulae patch together to give a well-defined spinor bundle. There is a well-known topological condition for this to happen (the second Stiefel–Whitney class 𝑤2(𝑇𝑀) ∈ 𝐻2(𝑀, ℤ2) must vanish [67]), and when it is fulfilled /𝐷 extends to a selfadjoint operator on H with compact resolvent [52, 67].

Apart from these local formulas, the Dirac operator has a fundamental algebraic property. If 𝜓 is a spinor and 𝑎 ∈ 𝐶∞(𝑀) is regarded as a multiplication operator on spinors, it can be checked that /𝐷(𝑎𝜓) = −𝑖 𝑐(𝑑𝑎)𝜓 + 𝑎 /𝐷𝜓, or, more simply,

[/𝐷, 𝑎] = −𝑖 𝑐(𝑑𝑎).   (1.2)

Following [6], we call a “generalized Dirac operator” any selfadjoint operator 𝐷 on H satisfying [𝐷, 𝑎] = −𝑖 𝑐(𝑑𝑎) for 𝑎 ∈ 𝐶∞(𝑀). Now 𝑐(𝑑𝑎) is a bounded operator on 𝐿2(𝑀, 𝑆) whenever 𝑎 is smooth, and its norm is that of the gradient of 𝑎, i.e., the vector field determined by 𝑔(grad 𝑎, 𝑋) := 𝑑𝑎(𝑋) = 𝑋(𝑎). A continuous function 𝑎 ∈ 𝐶(𝑀) is called Lipschitz (with respect to the metric 𝑔) if its gradient is defined, almost everywhere, as an essentially bounded measurable vector field, i.e., ∥grad 𝑎∥∞ is finite. Now the Riemannian distance 𝑑𝑔(𝑝, 𝑞) between two points 𝑝, 𝑞 ∈ 𝑀 is usually defined as the infimum of the lengths of (piecewise smooth) paths from 𝑝 to 𝑞; but it is not hard to show (see [52, §9.3], for instance) that the distance can also be defined as a supremum:

𝑑𝑔(𝑝, 𝑞) = sup{ |𝑎(𝑝) − 𝑎(𝑞)| : 𝑎 ∈ 𝐶(𝑀), ∥grad 𝑎∥∞ ⩽ 1 }.   (1.3)

The basic equation (1.2) allows us to replace the gradient by a commutator with the Dirac operator:

𝑑𝑔(𝑝, 𝑞) = sup{ |𝑎(𝑝) − 𝑎(𝑞)| : 𝑎 ∈ 𝐶(𝑀), ∥[/𝐷, 𝑎]∥ ⩽ 1 }.   (1.4)

Thus, the Riemannian distance function 𝑑𝑔 is entirely determined by /𝐷.
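A standard finite-dimensional illustration of formula (1.4), not worked out in the text, is the two-point space: take 𝐴 = ℂ2 acting diagonally on H = ℂ2 and 𝐷 = ( 0 Λ ; Λ 0 ). Then ∥[𝐷, 𝑎]∥ = Λ|𝑎(𝑝) − 𝑎(𝑞)|, so the supremum in (1.4) gives 𝑑(𝑝, 𝑞) = 1/Λ:

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 2.5
D = np.array([[0.0, lam], [lam, 0.0]])

def comm_norm(a1, a2):
    """Operator norm of [D, a] for a = diag(a1, a2)."""
    a = np.diag([a1, a2])
    return np.linalg.norm(D @ a - a @ D, 2)

# [D, a] = [[0, lam*(a2-a1)], [lam*(a1-a2), 0]], of norm lam*|a1 - a2|.
a1, a2 = rng.standard_normal(2)
assert np.isclose(comm_norm(a1, a2), lam * abs(a1 - a2))

# Connes' distance sup{ |a(p)-a(q)| : ||[D,a]|| <= 1 }: rescaling any
# a with a1 != a2 to commutator norm 1 gives exactly 1/lam.
best = 0.0
for a1, a2 in rng.standard_normal((1000, 2)):
    n = comm_norm(a1, a2)
    if n > 0:
        best = max(best, abs(a1 - a2) / n)
assert np.isclose(best, 1.0 / lam)
```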
Moreover, the metric 𝑔 is in turn determined by 𝑑𝑔, according to the Myers–Steenrod theorem [77]. From the noncommutative point of view, then, the Dirac operator assumes the role of the metric. This leads to the following basic concept.

Definition 1.1. A spectral triple is a triple (A, H, 𝐷), where A is a pre-𝐶∗-algebra, H is a Hilbert space carrying a representation of A by bounded operators, and 𝐷 is a selfadjoint operator on H, with compact resolvent, such that the commutator [𝐷, 𝑎] is a bounded operator on H, for each 𝑎 ∈ A.

Spectral triples come in two parities, odd and even. In the odd case, there is nothing new; in the even case, there is a grading operator 𝜒 on H (a bounded selfadjoint operator satisfying 𝜒2 = 1, making a splitting H = H+ ⊕ H−), such that the representation of A is even (𝜒𝑎 = 𝑎𝜒 for all 𝑎 ∈ A) and the operator 𝐷 is odd, i.e., 𝜒𝐷 = −𝐷𝜒; thus each [𝐷, 𝑎] is a bounded odd operator on H.

A noncommutative spin geometry is a spectral triple satisfying several extra conditions, which were first laid out by Connes in the seminal paper [25]. These conditions (or “axioms”, as they are sometimes called) arise from a careful consideration of the algebraic properties of ordinary metric geometry. Seven such properties are put forward in [25]; here, we shall just outline the list. Some of the terminology will be clarified later on; a more complete account, with all prerequisites, is given in [52, §10.5].

1. Classical dimension: There is a unique nonnegative integer 𝑛, the “classical dimension” of the geometry, for which the eigenvalue sums 𝜎𝑁 := ∑0⩽𝑘<𝑁 𝜇𝑘 of the compact positive operator |𝐷|−𝑛 satisfy 𝜎𝑁 ∼ 𝐶 log 𝑁 as 𝑁 → ∞, with 0 < 𝐶 < ∞; the coefficient is written 𝐶 = ⨏ |𝐷|−𝑛, where ⨏ denotes the “Dixmier trace” if 𝑛 ⩾ 1. This 𝑛 is even if and only if the spectral triple is even. (When A = 𝐶∞(𝑀) and 𝐷 is a Dirac operator, 𝑛 equals the ordinary dimension of the spin manifold 𝑀).

2.
Regularity: Not only are the operators 𝑎 and [𝐷, 𝑎] bounded, but they lie in the smooth domain of the derivation 𝛿(𝑇) := [|𝐷|, 𝑇]. (When A is an algebra of functions and 𝐷 is a Dirac operator, this smooth domain consists exactly of the 𝐶∞ functions.)

3. Finiteness: The algebra A is a pre-𝐶∗-algebra, and the space of smooth vectors H∞ := ⋂𝑘 Dom(𝐷𝑘) is a finitely generated projective left A-module. (In the commutative case, this yields the smooth spinors.)

4. Reality: There is an antiunitary operator 𝐶 on H, such that [𝑎, 𝐶𝑏∗𝐶−1] = 0 for all 𝑎, 𝑏 ∈ A (thus 𝑏 ↦→ 𝐶𝑏∗𝐶−1 is a commuting representation on H of the “opposite algebra” A◦, with the product reversed); and moreover, 𝐶2 = ±1, 𝐶𝐷 = ±𝐷𝐶, and 𝐶𝜒 = ±𝜒𝐶 in the even case, where the signs depend only on 𝑛 mod 8. (In the commutative case, 𝐶 is the charge conjugation operator on spinors.)

5. First order: The bounded operators [𝐷, 𝑎] commute with the opposite algebra representation: [[𝐷, 𝑎], 𝐶𝑏∗𝐶−1] = 0 for all 𝑎, 𝑏 ∈ A.

6. Orientation: There is a Hochschild 𝑛-cycle c on A whose natural representative is 𝜋𝐷(c) = 𝜒 (even case) or 𝜋𝐷(c) = 1 (odd case). More on this later: such an 𝑛-cycle is usually a finite sum of terms like 𝑎0 ⊗ 𝑎1 ⊗ · · · ⊗ 𝑎𝑛 which map to operators 𝜋𝐷(𝑎0 ⊗ 𝑎1 ⊗ · · · ⊗ 𝑎𝑛) := 𝑎0 [𝐷, 𝑎1] . . . [𝐷, 𝑎𝑛], and c is the algebraic expression of the volume form for the metric determined by 𝐷.

7. Poincaré duality: The index map of 𝐷 determines a nondegenerate pairing on the 𝐾-theory of the algebra A. (We shall not go into details, except to mention that in the commutative case, the Chern homomorphism matches this nondegeneracy with Poincaré duality in de Rham co/homology.)

It is very important to know that when A = 𝐶∞(𝑀) the usual apparatus of geometry on spin manifolds (spin structure, metric, Dirac operator) can be fully recovered from these seven conditions: for the full proof of this theorem, see [52, Chap. 11]. Another proof, assuming only that A is commutative, is developed by Rennie in [83].
1.2 Hopf algebras: introduction

The general scheme of replacing point spaces by function algebras and then moving on to noncommutative algebras also works for symmetry groups. Now, however, the interplay of algebra and topology is much more delicate. There are at least two ways of handling this issue. One is to leave topology aside and develop a purely algebraic theory of symmetry-bearing algebras: these are the Hopf algebras, sometimes called “quantum groups”, about which there is already a vast literature. At the other extreme, one may insist on using 𝐶∗-algebras with special properties; in the unital case, there has emerged a useful theory of “compact quantum groups” [113], which only recently has been extended to the locally compact case also [66].

We begin with the more algebraic treatment, keeping to the compact case, i.e., all algebras will be unital unless indicated otherwise. The field of scalars may be taken as ℂ, ℝ or ℚ, according to convenience; to cover all cases, we shall denote it by 𝔽. In this section, ⊗ always means the algebraic tensor product.

Definition 1.2. A bialgebra is a vector space 𝐴 over 𝔽 which is both an algebra and a coalgebra in a compatible way. The algebra structure is given by 𝔽-linear maps 𝑚 : 𝐴 ⊗ 𝐴 → 𝐴 (the product) and 𝜂 : 𝔽 → 𝐴 (the unit map) where 𝑥𝑦 := 𝑚(𝑥, 𝑦) and 𝜂(1) = 1𝐴. The coalgebra structure is likewise given by linear maps Δ : 𝐴 → 𝐴 ⊗ 𝐴 (the coproduct) and 𝜀 : 𝐴 → 𝔽 (the counit map). We write 𝜄 : 𝐴 → 𝐴, or sometimes 𝜄𝐴, to denote the identity map on 𝐴. The required properties are:

1. Associativity: 𝑚(𝑚 ⊗ 𝜄) = 𝑚(𝜄 ⊗ 𝑚) : 𝐴 ⊗ 𝐴 ⊗ 𝐴 → 𝐴;
2. Unity: 𝑚(𝜂 ⊗ 𝜄) = 𝑚(𝜄 ⊗ 𝜂) = 𝜄 : 𝐴 → 𝐴;
3. Coassociativity: (Δ ⊗ 𝜄)Δ = (𝜄 ⊗ Δ)Δ : 𝐴 → 𝐴 ⊗ 𝐴 ⊗ 𝐴;
4. Counity: (𝜀 ⊗ 𝜄)Δ = (𝜄 ⊗ 𝜀)Δ = 𝜄 : 𝐴 → 𝐴;
5. Compatibility: Δ and 𝜀 are unital algebra homomorphisms.

The first two conditions, expressed in terms of elements 𝑥, 𝑦, 𝑧 of 𝐴, say that (𝑥𝑦)𝑧 = 𝑥(𝑦𝑧) and 1𝐴𝑥 = 𝑥1𝐴 = 𝑥. The next two properties are obtained by “reversing the arrows”.
Commutativity may be formulated by using the “flip map” 𝜎 : 𝐴 ⊗ 𝐴 → 𝐴 ⊗ 𝐴 : 𝑥 ⊗ 𝑦 ↦→ 𝑦 ⊗ 𝑥: the bialgebra is commutative if 𝑚𝜎 = 𝑚 : 𝐴 ⊗ 𝐴 → 𝐴. Likewise, the bialgebra is called cocommutative if 𝜎Δ = Δ : 𝐴 → 𝐴 ⊗ 𝐴. The (co)associativity rules suggest the abbreviations 𝑚2 := 𝑚(𝑚 ⊗ 𝜄) = 𝑚(𝜄 ⊗ 𝑚) and Δ2 := (Δ ⊗ 𝜄)Δ = (𝜄 ⊗ Δ)Δ, with obvious iterations 𝑚3 : 𝐴⊗4 → 𝐴, Δ3 : 𝐴 → 𝐴⊗4, and in general 𝑚𝑟 : 𝐴⊗(𝑟+1) → 𝐴, Δ𝑟 : 𝐴 → 𝐴⊗(𝑟+1).

Exercise 1.1. If (𝐶, Δ, 𝜀) and (𝐶′, Δ′, 𝜀′) are coalgebras, a counital coalgebra morphism between them is an 𝔽-linear map ℓ : 𝐶 → 𝐶′ such that Δ′ℓ = (ℓ ⊗ ℓ)Δ and 𝜀′ℓ = 𝜀. Show that the compatibility condition is equivalent to the condition that 𝑚 and 𝜂 are counital coalgebra morphisms. ♢

Definition 1.3. The vector space Hom(𝐶, 𝐴) of 𝔽-linear maps from a coalgebra (𝐶, Δ, 𝜀) to an algebra (𝐴, 𝑚, 𝜂) has an operation of convolution: given two elements 𝑓, 𝑔 of this space, the map 𝑓 ∗ 𝑔 ∈ Hom(𝐶, 𝐴) is defined as 𝑓 ∗ 𝑔 := 𝑚(𝑓 ⊗ 𝑔)Δ : 𝐶 → 𝐴. Convolution is associative because

(𝑓 ∗ 𝑔) ∗ ℎ = 𝑚((𝑓 ∗ 𝑔) ⊗ ℎ)Δ = 𝑚(𝑚 ⊗ 𝜄)(𝑓 ⊗ 𝑔 ⊗ ℎ)(Δ ⊗ 𝜄)Δ = 𝑚(𝜄 ⊗ 𝑚)(𝑓 ⊗ 𝑔 ⊗ ℎ)(𝜄 ⊗ Δ)Δ = 𝑚(𝑓 ⊗ (𝑔 ∗ ℎ))Δ = 𝑓 ∗ (𝑔 ∗ ℎ).

This makes Hom(𝐶, 𝐴) an algebra, whose unit is 𝜂𝐴𝜀𝐶:

𝑓 ∗ 𝜂𝐴𝜀𝐶 = 𝑚(𝑓 ⊗ 𝜂𝐴𝜀𝐶)Δ = 𝑚(𝜄𝐴 ⊗ 𝜂𝐴)(𝑓 ⊗ 𝜄𝔽)(𝜄𝐶 ⊗ 𝜀𝐶)Δ = 𝜄𝐴 𝑓 𝜄𝐶 = 𝑓,
𝜂𝐴𝜀𝐶 ∗ 𝑓 = 𝑚(𝜂𝐴𝜀𝐶 ⊗ 𝑓)Δ = 𝑚(𝜂𝐴 ⊗ 𝜄𝐴)(𝜄𝔽 ⊗ 𝑓)(𝜀𝐶 ⊗ 𝜄𝐶)Δ = 𝜄𝐴 𝑓 𝜄𝐶 = 𝑓.

A bialgebra morphism is a linear map ℓ : 𝐻 → 𝐻′ between two bialgebras, which is both a unital algebra homomorphism and a counital coalgebra homomorphism; that is, ℓ satisfies the four identities

ℓ𝑚 = 𝑚′(ℓ ⊗ ℓ), ℓ𝜂 = 𝜂′, Δ′ℓ = (ℓ ⊗ ℓ)Δ, 𝜀′ℓ = 𝜀,

where the primes indicate the operations of 𝐻′. A bialgebra morphism respects convolution, in the following ways: if 𝑓, 𝑔 ∈ Hom(𝐶, 𝐻) and ℎ, 𝑘 ∈ Hom(𝐻′, 𝐴) for some coalgebra 𝐶 and some algebra 𝐴, then

ℓ(𝑓 ∗ 𝑔) = ℓ𝑚(𝑓 ⊗ 𝑔)Δ𝐶 = 𝑚′(ℓ ⊗ ℓ)(𝑓 ⊗ 𝑔)Δ𝐶 = 𝑚′(ℓ𝑓 ⊗ ℓ𝑔)Δ𝐶 = ℓ𝑓 ∗ ℓ𝑔,
(ℎ ∗ 𝑘)ℓ = 𝑚𝐴(ℎ ⊗ 𝑘)Δ′ℓ = 𝑚𝐴(ℎ ⊗ 𝑘)(ℓ ⊗ ℓ)Δ = 𝑚𝐴(ℎℓ ⊗ 𝑘ℓ)Δ = ℎℓ ∗ 𝑘ℓ.

Definition 1.4.
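Definition 1.3 can be made concrete for the coalgebra 𝐶 of functions on a finite group, with Δ𝑓(𝑥, 𝑦) := 𝑓(𝑥𝑦) (a coproduct that appears later in the text): convolution of evaluation functionals then reproduces the group law, 𝛿𝑥 ∗ 𝛿𝑦 = 𝛿𝑥𝑦, and the unit 𝜂𝜀 is evaluation at the identity. A small sketch of ours, with 𝐺 = 𝑆3 and functionals recorded by their values on the indicator basis:

```python
from itertools import permutations

# G = S_3, elements as tuples p with p[i] = image of i.
G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))   # (pq)(i) = p(q(i))
e = (0, 1, 2)

# A linear functional phi on C = F(G) is recorded by phi[g] := phi(1_g).
delta = lambda x: {g: float(g == x) for g in G}

# Convolution dual to Delta f(x,y) = f(xy):
# (phi * psi)(1_g) = sum over xy = g of phi[x] psi[y].
def conv(phi, psi):
    out = {g: 0.0 for g in G}
    for x in G:
        for y in G:
            out[mul(x, y)] += phi[x] * psi[y]
    return out

# Evaluation functionals convolve like group elements:
x, y = G[1], G[4]
assert conv(delta(x), delta(y)) == delta(mul(x, y))

# The unit of Hom(C, F) is eta.eps : f -> f(e), i.e. delta at the identity.
assert conv(delta(e), delta(x)) == delta(x) == conv(delta(x), delta(e))
```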
A Hopf algebra is a bialgebra 𝐻 together with a (necessarily unique) convolution inverse 𝑆 for the identity map 𝜄 = 𝜄𝐻; the map 𝑆 is called the antipode of 𝐻. Thus,

𝜄 ∗ 𝑆 = 𝑚(𝜄 ⊗ 𝑆)Δ = 𝜂𝜀, 𝑆 ∗ 𝜄 = 𝑚(𝑆 ⊗ 𝜄)Δ = 𝜂𝜀.

A bialgebra morphism between Hopf algebras is automatically a Hopf algebra morphism, i.e., it exchanges the antipodes: ℓ𝑆 = 𝑆′ℓ. To see that, it suffices to prove that these maps provide a left inverse and a right inverse for ℓ in Hom(𝐻, 𝐻′). Indeed, since the identity in Hom(𝐻, 𝐻′) is 𝜂′𝜀, it is enough to notice that

ℓ𝑆 ∗ ℓ = ℓ(𝑆 ∗ 𝜄) = ℓ𝜂𝜀 = 𝜂′𝜀 = 𝜂′𝜀′ℓ = (𝜄′ ∗ 𝑆′)ℓ = ℓ ∗ 𝑆′ℓ,

and associativity of convolution then yields

𝑆′ℓ = 𝜂′𝜀 ∗ 𝑆′ℓ = ℓ𝑆 ∗ ℓ ∗ 𝑆′ℓ = ℓ𝑆 ∗ 𝜂′𝜀 = ℓ𝑆.

The antipode has an important pair of algebraic properties: it is an antihomomorphism for both the algebra and the coalgebra structures. Formally, this means

𝑆𝑚 = 𝑚𝜎(𝑆 ⊗ 𝑆) and Δ𝑆 = (𝑆 ⊗ 𝑆)𝜎Δ.   (1.5)

The first relation, evaluated on 𝑎 ⊗ 𝑏, becomes the familiar antihomomorphism property 𝑆(𝑎𝑏) = 𝑆(𝑏)𝑆(𝑎). We postpone its proof until a little later.

Example 1.1. The simplest example of a Hopf algebra is the “group algebra” 𝔽𝐺 of a finite group 𝐺. This is just the vector space over 𝔽 with a basis labelled by the elements of 𝐺; the necessary linear maps are specified on this basis. The product is given by 𝑚(𝑥 ⊗ 𝑦) := 𝑥𝑦, linearly extending the group multiplication, and 𝜂(1) := 1𝐺 gives the unit map. The coproduct, counit and antipode satisfy Δ(𝑥) := 𝑥 ⊗ 𝑥, 𝜀(𝑥) := 1 and 𝑆(𝑥) := 𝑥−1, for each 𝑥 ∈ 𝐺.

Exercise 1.2. In a general Hopf algebra 𝐻, a nonzero element 𝑔 is called grouplike if Δ(𝑔) = 𝑔 ⊗ 𝑔. Show that this condition entails that 𝑔 is invertible and that 𝜀(𝑔) = 1 and 𝑆(𝑔) = 𝑔−1. ♢

There are two main “classical” examples of Hopf algebras: representative functions on a compact group and the enveloping algebra of a Lie algebra.

Example 1.2. Now let 𝐺 be a compact topological group (most often, a Lie group), and let the scalar field 𝔽 be either ℝ or ℂ.
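Example 1.1 can be verified mechanically for 𝐺 = 𝑆3 (a sketch of ours: an element of 𝔽𝐺 is a dictionary mapping group elements to coefficients, and Δ, 𝜀, 𝑆 are the linear extensions of the formulas above):

```python
from itertools import permutations

G = list(permutations(range(3)))                      # G = S_3
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)

# An element of FG is a dict {group element: coefficient}.
def coproduct(a):                       # Delta(x) = x (x) x, extended linearly
    return {(g, g): c for g, c in a.items()}

def counit(a):                          # eps(x) = 1, extended linearly
    return sum(a.values())

def m_S_id_Delta(a):                    # the composite m (S (x) id) Delta
    out = {}
    for (g, h), c in coproduct(a).items():
        k = mul(inv(g), h)              # S(x) = x^{-1} on basis elements
        out[k] = out.get(k, 0.0) + c
    return {g: c for g, c in out.items() if c != 0.0}

# Antipode axiom m(S (x) id)Delta = eta.eps on a sample element:
a = {G[1]: 2.0, G[3]: -1.0}
assert m_S_id_Delta(a) == {e: counit(a)}

# S is an algebra antihomomorphism: S(xy) = S(y)S(x) on basis elements.
x, y = G[2], G[5]
assert inv(mul(x, y)) == mul(inv(y), inv(x))
```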
The Peter–Weyl theorem [13, III.3] shows that any unitary irreducible representation 𝜋 of 𝐺 is finite-dimensional, any matrix element 𝑓(𝑥) := ⟨𝑢 | 𝜋(𝑥)𝑣⟩ is a continuous function on 𝐺, and the vector space R(𝐺) generated by these matrix elements is a dense subalgebra (∗-subalgebra in the complex case) of 𝐶(𝐺). Elements of R(𝐺) can be characterized as those continuous functions 𝑓 : 𝐺 → 𝔽 whose translates 𝑓𝑡 : 𝑥 ↦→ 𝑓(𝑡−1𝑥) generate a finite-dimensional subspace of 𝐶(𝐺); they are called representative functions on 𝐺. The algebra R(𝐺) is a 𝐺-bimodule in the sense of Wildberger [110] under left and right translation; indeed, it is the algebraic direct sum of the finite-dimensional irreducible 𝐺-subbimodules of 𝐶(𝐺).

The group structure of 𝐺 makes R(𝐺) a coalgebra. Indeed, we can identify the algebraic tensor product R(𝐺) ⊗ R(𝐺) with R(𝐺 × 𝐺) in the obvious way, by (𝑓 ⊗ 𝑔)(𝑥, 𝑦) := 𝑓(𝑥)𝑔(𝑦) (here is where the finite-dimensionality of the translates is used [52, Lemma 1.27]), and then

Δ𝑓(𝑥, 𝑦) := 𝑓(𝑥𝑦)   (1.6)

defines a coproduct on R(𝐺). The counit is 𝜀(𝑓) := 𝑓(1), and the antipode is given by 𝑆𝑓(𝑥) := 𝑓(𝑥−1).

Example 1.3. The universal enveloping algebra U(g) of a Lie algebra g is the quotient of the tensor algebra T(g) by the two-sided ideal 𝐼 generated by the elements 𝑋𝑌 − 𝑌𝑋 − [𝑋, 𝑌], for all 𝑋, 𝑌 ∈ g. (Here we write 𝑋𝑌 instead of 𝑋 ⊗ 𝑌, to distinguish products within T(g) from elements of T(g) ⊗ T(g).) The coproduct and counit are defined on g by

Δ(𝑋) := 𝑋 ⊗ 1 + 1 ⊗ 𝑋,   (1.7)

and 𝜀(𝑋) := 0. These linear maps on g extend to homomorphisms of the tensor algebra; for instance,

Δ(𝑋𝑌) = Δ(𝑋)Δ(𝑌) = 𝑋𝑌 ⊗ 1 + 𝑋 ⊗ 𝑌 + 𝑌 ⊗ 𝑋 + 1 ⊗ 𝑋𝑌,

and thus

Δ(𝑋𝑌 − 𝑌𝑋 − [𝑋, 𝑌]) = (𝑋𝑌 − 𝑌𝑋 − [𝑋, 𝑌]) ⊗ 1 + 1 ⊗ (𝑋𝑌 − 𝑌𝑋 − [𝑋, 𝑌]),

so Δ(𝐼) ⊆ 𝐼 ⊗ U(g) + U(g) ⊗ 𝐼. Clearly, 𝜀(𝐼) = 0, too. Therefore, 𝐼 is both an ideal and a “coideal” in the full tensor algebra, so the quotient U(g) is a bialgebra, in fact a Hopf algebra: the antipode is given by 𝑆(𝑋) := −𝑋.
From (1.7), the Hopf algebra U(g) is clearly cocommutative. The word “universal” is appropriate because any Lie algebra homomorphism 𝜓 : g → 𝐴, where 𝐴 is a unital associative algebra, extends uniquely (in the obvious way) to a unital algebra homomorphism Ψ : U(g) → 𝐴.

Example 1.4. Historically, an important example of a Hopf algebra is Woronowicz’ 𝑞-deformation of SU(2). The compact group SU(2) consists of complex matrices 𝑔 = ( 𝑎 −𝑐∗ ; 𝑐 𝑎∗ ), subject to the unimodularity condition 𝑎∗𝑎 + 𝑐∗𝑐 = 1. The matrix elements 𝑎 and 𝑐, regarded as functions of 𝑔, generate the ∗-algebra R(SU(2)): that is, any matrix element of a unitary irreducible (hence finite-dimensional) representation of SU(2) is a polynomial in 𝑎, 𝑎∗, 𝑐, 𝑐∗. Woronowicz found [111] a noncommutative ∗-algebra with two generators 𝑎 and 𝑐, subject to the relations

𝑎𝑐 = 𝑞𝑐𝑎, 𝑎𝑐∗ = 𝑞𝑐∗𝑎, 𝑐𝑐∗ = 𝑐∗𝑐, 𝑎∗𝑎 + 𝑐∗𝑐 = 1, 𝑎𝑎∗ + 𝑞2𝑐𝑐∗ = 1,

where 𝑞 is a real number, which can be taken in the range 0 < 𝑞 ⩽ 1. For the coalgebra structure, take Δ and 𝜀 to be ∗-homomorphisms determined by

Δ𝑎 := 𝑎 ⊗ 𝑎 − 𝑞𝑐∗ ⊗ 𝑐, Δ𝑐 := 𝑐 ⊗ 𝑎 + 𝑎∗ ⊗ 𝑐,

and 𝜀(𝑎) := 1, 𝜀(𝑐) := 0. One can check that, by applying Δ elementwise, the matrix 𝑔 := ( 𝑎 −𝑞𝑐∗ ; 𝑐 𝑎∗ ) satisfies Δ(𝑔) = 𝑔 ⊗ 𝑔. The antipode 𝑆 is the linear antihomomorphism determined by

𝑆(𝑎) := 𝑎∗, 𝑆(𝑎∗) := 𝑎, 𝑆(𝑐) := −𝑞𝑐, 𝑆(𝑐∗) := −𝑞−1𝑐∗,

so that 𝑥 ↦→ 𝑆(𝑥∗) is an antilinear homomorphism, indeed an involution: 𝑆(𝑆(𝑥∗)∗) = 𝑥 for all 𝑥. This last relation is a general property of Hopf algebras with an involution. The initial interest of this example was that it could be represented by a ∗-algebra of bounded operators on a Hilbert space, whose closure was a 𝐶∗-algebra which could legitimately be called a deformation of 𝐶(SU(2)); it has become known as 𝐶(SU𝑞(2)). In this way, the “quantum group” SU𝑞(2) was born.
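One way to see that these relations are consistent is the frequently used representation of the generators on ℓ²(ℕ), with 𝑎 𝑒𝑛 = √(1 − 𝑞2𝑛) 𝑒𝑛−1 and 𝑐 𝑒𝑛 = 𝑞𝑛 𝑒𝑛. Truncating to an 𝑁-dimensional corner (the truncation is our device for a finite check), all the relations hold exactly except 𝑎𝑎∗ + 𝑞2𝑐𝑐∗ = 1, which fails only in the last row and column introduced by the cutoff:

```python
import numpy as np

q, N = 0.5, 40                      # deformation parameter, cutoff

# Truncated matrices: a e_n = sqrt(1 - q^{2n}) e_{n-1}, c e_n = q^n e_n.
A = np.zeros((N, N))
for n in range(1, N):
    A[n - 1, n] = np.sqrt(1 - q ** (2 * n))
C = np.diag(q ** np.arange(N))
Ad, Cd = A.T, C.T                   # adjoints (real matrices)

I = np.eye(N)
assert np.allclose(A @ C, q * C @ A)            # ac  = q ca
assert np.allclose(A @ Cd, q * Cd @ A)          # ac* = q c*a
assert np.allclose(C @ Cd, Cd @ C)              # cc* = c*c
assert np.allclose(Ad @ A + Cd @ C, I)          # a*a + c*c = 1

# aa* + q^2 cc* = 1 holds away from the truncation edge:
R = A @ Ad + q ** 2 * C @ Cd
assert np.allclose(R[: N - 1, : N - 1], I[: N - 1, : N - 1])
```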
Nowadays, many 𝑞-deformations of the classical groups are known, although 𝑞 may not always be real: for example, to define 𝑆𝐿𝑞(2, ℝ), one needs selfadjoint generators 𝑎 and 𝑐 satisfying 𝑎𝑐 = 𝑞𝑐𝑎, which is only possible if 𝑞 is a complex number of modulus 1.

▶ If 𝑢𝑖𝑗(𝑥) := ⟨𝑒𝑖 | 𝜋(𝑥)𝑒𝑗⟩, for 𝑖, 𝑗 = 1, . . . , 𝑛, are the matrix elements of an 𝑛-dimensional irreducible representation of a compact group 𝐺 with respect to an orthonormal basis {𝑒1, . . . , 𝑒𝑛}, then (1.6) and 𝜋(𝑥𝑦) = 𝜋(𝑥)𝜋(𝑦) show that

Δ𝑢𝑖𝑗 = ∑𝑘 𝑢𝑖𝑘 ⊗ 𝑢𝑘𝑗,   (1.8a)

and the coassociativity of Δ is manifested as

Δ2𝑢𝑖𝑗 = ∑𝑘,𝑙 𝑢𝑖𝑘 ⊗ 𝑢𝑘𝑙 ⊗ 𝑢𝑙𝑗,   (1.8b)

reflecting the associativity of matrix multiplication. This may be generalized by a notational trick due to Sweedler [100]: if 𝑎 is an element of any Hopf algebra, we write

Δ𝑎 =: ∑ 𝑎:1 ⊗ 𝑎:2 (finite sum).

(The prevalent custom is to write Δ𝑎 = ∑ 𝑎(1) ⊗ 𝑎(2), leading to a surfeit of parentheses.) The equality of (Δ ⊗ 𝜄)(Δ𝑎) = ∑ 𝑎:1:1 ⊗ 𝑎:1:2 ⊗ 𝑎:2 and (𝜄 ⊗ Δ)(Δ𝑎) = ∑ 𝑎:1 ⊗ 𝑎:2:1 ⊗ 𝑎:2:2 is expressed by rewriting both sums as Δ2𝑎 = ∑ 𝑎:1 ⊗ 𝑎:2 ⊗ 𝑎:3. The matricial coproduct (1.8b) is a particular instance of this notation. The counit and antipode properties can now be rewritten as

∑ 𝜀(𝑎:1) 𝑎:2 = ∑ 𝑎:1 𝜀(𝑎:2) = 𝑎,   (1.9a)
∑ 𝑆(𝑎:1) 𝑎:2 = ∑ 𝑎:1 𝑆(𝑎:2) = 𝜀(𝑎) 1.   (1.9b)

The coalgebra antihomomorphism property of 𝑆 is expressed as

Δ(𝑆(𝑎)) = ∑ 𝑆(𝑎:2) ⊗ 𝑆(𝑎:1).   (1.10)

We can now prove the antipode properties (1.5). We show that 𝑆𝑚 : 𝑎 ⊗ 𝑏 ↦→ 𝑆(𝑎𝑏) and 𝑚𝜎(𝑆 ⊗ 𝑆) : 𝑎 ⊗ 𝑏 ↦→ 𝑆(𝑏)𝑆(𝑎) are one-sided convolution inverses for 𝑚 in Hom(𝐻 ⊗ 𝐻, 𝐻), so they must coincide. The coproduct in 𝐻 ⊗ 𝐻 is (𝜄 ⊗ 𝜎 ⊗ 𝜄)(Δ ⊗ Δ) : 𝑎 ⊗ 𝑏 ↦→ ∑ 𝑎:1 ⊗ 𝑏:1 ⊗ 𝑎:2 ⊗ 𝑏:2, and so

(𝑆𝑚 ∗ 𝑚)(𝑎 ⊗ 𝑏) = 𝑚(𝑆𝑚 ⊗ 𝑚)(∑ 𝑎:1 ⊗ 𝑏:1 ⊗ 𝑎:2 ⊗ 𝑏:2) = ∑ 𝑆(𝑎:1𝑏:1)𝑎:2𝑏:2 = (𝑆 ∗ 𝜄)(𝑎𝑏) = 𝜂𝜀(𝑎𝑏) = 𝜂𝜀𝐻⊗𝐻(𝑎 ⊗ 𝑏).

On the other hand, writing 𝜏 := 𝑚𝜎(𝑆 ⊗ 𝑆),

(𝑚 ∗ 𝜏)(𝑎 ⊗ 𝑏) = 𝑚(𝑚 ⊗ 𝜏)(∑ 𝑎:1 ⊗ 𝑏:1 ⊗ 𝑎:2 ⊗ 𝑏:2) = ∑ 𝑎:1𝑏:1𝑆(𝑏:2)𝑆(𝑎:2) = 𝜀(𝑏) ∑ 𝑎:1𝑆(𝑎:2) = 𝜀(𝑎)𝜀(𝑏) 1𝐻 = 𝜂𝜀(𝑎𝑏) = 𝜂𝜀𝐻⊗𝐻(𝑎 ⊗ 𝑏).
Thus, 𝑆𝑚 ∗ 𝑚 = 𝜂𝐻𝜀𝐻⊗𝐻 = 𝑚 ∗ 𝜏, as claimed. In like fashion, one can verify (1.10) by showing that Δ𝑆 ∗ Δ = 𝜂𝐻⊗𝐻𝜀 = Δ ∗ ((𝑆 ⊗ 𝑆)𝜎Δ) in Hom(𝐻, 𝐻 ⊗ 𝐻); we leave the details to the reader.

Exercise 1.3. Carry out the verification of Δ𝑆 = (𝑆 ⊗ 𝑆)𝜎Δ. ♢

Notice that in the examples 𝐻 = R(𝐺) and 𝐻 = U(g), the antipode satisfies 𝑆2 = 𝜄𝐻, but this does not hold in the SU𝑞(2) case. We owe the following remark to Matthias Mertens [72, Satz 2.4.2]: 𝑆2 = 𝜄𝐻 if and only if

∑ 𝑆(𝑎:2) 𝑎:1 = ∑ 𝑎:2 𝑆(𝑎:1) = 𝜀(𝑎) 1 for all 𝑎 ∈ 𝐻.   (1.11)

Indeed, if 𝑆2 = 𝜄𝐻, then

∑ 𝑆(𝑎:2) 𝑎:1 = ∑ 𝑆(𝑎:2) 𝑆2(𝑎:1) = 𝑆(∑ 𝑆(𝑎:1) 𝑎:2) = 𝑆(𝜀(𝑎) 1) = 𝜀(𝑎) 1,

while the relation ∑ 𝑆(𝑎:2) 𝑎:1 = 𝜀(𝑎) 1 implies that

(𝑆 ∗ 𝑆2)(𝑎) = ∑ 𝑆(𝑎:1) 𝑆2(𝑎:2) = 𝑆(∑ 𝑆(𝑎:2) 𝑎:1) = 𝑆(𝜀(𝑎) 1) = 𝜀(𝑎) 1,

so that (1.11) entails 𝑆 ∗ 𝑆2 = 𝑆2 ∗ 𝑆 = 𝜂𝜀; hence 𝑆2 coincides with 𝜄𝐻, the (unique) convolution inverse for 𝑆. Now, the relations (1.11) clearly follow from (1.9b) if 𝐻 is either commutative or cocommutative (in the latter case, Δ𝑎 = ∑ 𝑎:1 ⊗ 𝑎:2 = ∑ 𝑎:2 ⊗ 𝑎:1). It follows that 𝑆2 = 𝜄𝐻 if 𝐻 is either commutative or cocommutative.

▶ Just as locally compact but noncompact spaces are described by nonunital function algebras, one may expect that locally compact but noncompact groups correspond to some sort of “nonunital Hopf algebras”. The lack of a unit requires substantial changes in the formalism. At the purely algebraic level, an attractive alternative is the concept of “multiplier Hopf algebra” due to van Daele [103, 104]. If 𝐴 is an algebra whose product is nondegenerate, that is, 𝑎𝑏 = 0 for all 𝑏 only if 𝑎 = 0, and 𝑎𝑏 = 0 for all 𝑎 only if 𝑏 = 0, then there is a unital algebra 𝑀(𝐴) such that 𝐴 ⊆ 𝑀(𝐴), called the multiplier algebra of 𝐴, characterized by the property that 𝑥𝑎 ∈ 𝐴 and 𝑎𝑥 ∈ 𝐴 whenever 𝑥 ∈ 𝑀(𝐴) and 𝑎 ∈ 𝐴. Here, 𝑀(𝐴) = 𝐴 if and only if 𝐴 is unital.
A coproduct on 𝐴 is defined as a homomorphism Δ : 𝐴→ 𝑀 (𝐴 ⊗ 𝐴) such that, for all 𝑎, 𝑏, 𝑐 ∈ 𝐴, (Δ𝑎) (1 ⊗ 𝑏) ∈ 𝐴 ⊗ 𝐴, and (𝑎 ⊗ 1) (Δ𝑏) ∈ 𝐴 ⊗ 𝐴, and the following coassociativity property holds: (𝑎 ⊗ 1 ⊗ 1) (Δ ⊗ 𝜄) ((Δ𝑏) (1 ⊗ 𝑐)) = (𝜄 ⊗ Δ) ((𝑎 ⊗ 1) (Δ𝑏)) (1 ⊗ 1 ⊗ 𝑐). There are then two well-defined linear maps from 𝐴 ⊗ 𝐴 into itself: 𝑇1(𝑎 ⊗ 𝑏) := (Δ𝑎) (1 ⊗ 𝑏), and 𝑇2(𝑎 ⊗ 𝑏) := (𝑎 ⊗ 1) (Δ𝑏). We say that 𝐴 is a multiplier Hopf algebra [103] if 𝑇1 and 𝑇2 are bijective. When 𝐴 is a (unital) Hopf algebra, one finds that 𝑇−1 1 (𝑎 ⊗ 𝑏) = ((𝜄 ⊗ 𝑆)Δ𝑎) (1 ⊗ 𝑏) and 𝑇−1 2 (𝑎 ⊗ 𝑏) = (𝑎 ⊗ 1) ((𝑆 ⊗ 𝜄)Δ𝑏). In fact, 𝑇1(((𝜄 ⊗ 𝑆)Δ𝑎) (1 ⊗ 𝑏)) = ∑ 𝑇1(𝑎:1 ⊗ 𝑆(𝑎:2)𝑏) = ∑ 𝑎:1 ⊗ 𝑎:2𝑆(𝑎:3)𝑏 = ∑ 𝑎:1 ⊗ 𝜀(𝑎:2)𝑏 = 𝑎 ⊗ 𝑏, and 𝑇2((𝑎 ⊗ 1) ((𝑆 ⊗ 𝜄)Δ𝑏)) = 𝑎 ⊗ 𝑏 by a similar argument. The bijectivity of 𝑇1 and 𝑇2 is thus a proxy for the existence of an antipode. It is shown in [103] that from the stated properties of Δ, 𝑇1 and 𝑇2, one can construct both a counit 𝜀 : 𝐴→ 𝔽 and an antipode 𝑆, though the latter need only be an antihomomorphism from 𝐴 to 𝑀 (𝐴). The motivating example is the case where 𝐴 is an algebra of functions on a locally compact group 𝐺 (with finite support, say, to keep the context algebraic), and Δ 𝑓 (𝑥, 𝑦) := 𝑓 (𝑥𝑦) as before. Then 𝑇1( 𝑓 ⊗ 𝑔) : (𝑥, 𝑦) ↦→ 𝑓 (𝑥𝑦)𝑔(𝑦) also has finite support and the formula (𝑇−1 1 𝐹) (𝑥, 𝑦) := 𝐹 (𝑥𝑦−1, 𝑦) shows that 𝑇1 is bijective; similarly for 𝑇2. A fully topological theory, generalizing Hopf algebras to 12 include 𝐶0(𝐺) for any locally compact group 𝐺 and satisfying Pontryagin duality, is now available: the basic paper on that is [66]. ▶ Duality is an important aspect of Hopf algebras. If (𝐶,Δ, 𝜀) is a coalgebra, the linear dual space 𝐶∗ := Hom(𝐶, 𝔽 ) is an algebra, as we have already seen, where the product 𝑓 ⊗ 𝑔 ↦→ ( 𝑓 ⊗ 𝑔)Δ is just the restriction of Δ𝑡 to 𝐶∗ ⊗ 𝐶∗; the unit is 𝜀𝑡 , where 𝑡 denotes transpose. (By convention, we do not write the multiplication in 𝔽 , implicit in the identification 𝔽 ⊗ 𝔽 ≃ 𝔽 .) 
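For the motivating example just described, the bijectivity of 𝑇1 can be tested concretely. Below is a small Python sketch of ours (with 𝐺 = (ℤ, +) standing in for a general locally compact group, and finite-support functions stored as dictionaries), implementing 𝑇1( 𝑓 ⊗ 𝑔)(𝑥, 𝑦) = 𝑓 (𝑥𝑦)𝑔(𝑦) and its inverse (𝑇1−1𝐹)(𝑥, 𝑦) = 𝐹 (𝑥𝑦−1, 𝑦).

```python
# Elements of A ⊗ A, for A = finite-support functions on (Z, +),
# are stored as {(x, y): coeff}.
def tensor(f, g):
    return {(m, n): f[m] * g[n] for m in f for n in g}

def T1(F):       # (T1 F)(x, y) = F(xy, y) = F(x + y, y)
    return {(x - y, y): c for (x, y), c in F.items()}

def T1_inv(F):   # (T1⁻¹ F)(x, y) = F(xy⁻¹, y) = F(x − y, y)
    return {(x + y, y): c for (x, y), c in F.items()}

f = {0: 1, 2: -3}
g = {1: 2, 5: 4}
F = tensor(f, g)
assert T1_inv(T1(F)) == F and T1(T1_inv(F)) == F     # T1 is bijective
# T1(f ⊗ g) agrees with (x, y) ↦ f(x+y) g(y) on a sample grid of points:
assert all(T1(F).get((x, y), 0) == f.get(x + y, 0) * g.get(y, 0)
           for x in range(-8, 8) for y in range(-8, 8))
```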
However, if (𝐴, 𝑚, 𝑢) is an algebra, then (𝐴∗, 𝑚𝑡 , 𝑢𝑡) need not be a coalgebra because 𝑚𝑡 takes 𝐴∗ to (𝐴 ⊗ 𝐴)∗ which is generally much larger than 𝐴∗ ⊗ 𝐴∗. Given a Hopf algebra (𝐻, 𝑚, 𝑢,Δ, 𝜀, 𝑆), we can replace 𝐻∗ by the subspace 𝐻◦ := { 𝑓 ∈ 𝐻∗ : 𝑚𝑡 ( 𝑓 ) ∈ 𝐻∗ ⊗ 𝐻∗ }; one can check that (𝐻◦,Δ𝑡 , 𝜀𝑡 , 𝑚𝑡 , 𝑢𝑡 , 𝑆𝑡) is again a Hopf algebra, called the finite dual (or “Sweedler dual”) of 𝐻. To see why 𝐻◦ is a coalgebra, we must check that 𝑚𝑡 (𝐻◦) ⊆ 𝐻◦ ⊗ 𝐻◦. So suppose that 𝑓 ∈ 𝐻∗ satisfies 𝑚𝑡 ( 𝑓 ) = ∑𝑚 𝑗=1 𝑔 𝑗 ⊗ ℎ 𝑗 , a finite sum with 𝑔 𝑗 , ℎ 𝑗 ∈ 𝐻∗. We may suppose that the 𝑔 𝑗 are linearly independent, so we can find elements 𝑎1, . . . , 𝑎𝑚 ∈ 𝐻 such that 𝑔 𝑗 (𝑎𝑘 ) = 𝛿 𝑗 𝑘 . Now ℎ𝑘 (𝑎𝑏) = 𝑚∑︁ 𝑗=1 𝑔 𝑗 (𝑎𝑘 )ℎ 𝑗 (𝑎𝑏) = 𝑓 (𝑎𝑘𝑎𝑏) = 𝑚∑︁ 𝑗=1 𝑔 𝑗 (𝑎𝑘𝑎)ℎ 𝑗 (𝑏), so 𝑚𝑡 (ℎ𝑘 ) = ∑𝑚 𝑗=1 𝑓 𝑗 𝑘 ⊗ ℎ 𝑗 , where 𝑓 𝑗 𝑘 (𝑎) := 𝑔 𝑗 (𝑎𝑘𝑎); thus ℎ𝑘 ∈ 𝐻◦. A similar argument shows that each 𝑔 𝑗 ∈ 𝐻◦, too. However, 𝐻◦ is often too small to be useful: in practice, one works with two Hopf algebras 𝐻 and 𝐻′, where each may be regarded as included in the dual of the other. That is to say, we can write down a bilinear form ⟨𝑎, 𝑓 ⟩ := 𝑓 (𝑎) for 𝑎 ∈ 𝐻 and 𝑓 ∈ 𝐻′ with an implicit inclusion𝐻′ ↩→ 𝐻∗. The transposing of operations between the two Hopf algebras boils down to the following five relations, for 𝑎, 𝑏 ∈ 𝐻 and 𝑓 , 𝑔 ∈ 𝐻′: ⟨𝑎𝑏, 𝑓 ⟩ = ⟨𝑎 ⊗ 𝑏,Δ′ 𝑓 ⟩, ⟨𝑎, 𝑓 𝑔⟩ = ⟨Δ𝑎, 𝑓 ⊗ 𝑔⟩, ⟨𝑆(𝑎), 𝑓 ⟩ = ⟨𝑎, 𝑆′( 𝑓 )⟩, 𝜀(𝑎) = ⟨𝑎, 1𝐻′⟩, and 𝜀′( 𝑓 ) = ⟨1𝐻 , 𝑓 ⟩. The nondegeneracy conditions which allow us to assume that 𝐻′ ⊆ 𝐻∗ and 𝐻 ⊆ 𝐻′∗ are: (i) ⟨𝑎, 𝑓 ⟩ = 0 for all 𝑓 ∈ 𝐻′ implies 𝑎 = 0, and (ii) ⟨𝑎, 𝑓 ⟩ = 0 for all 𝑎 ∈ 𝐻 implies 𝑓 = 0. Let 𝐺 be a compact connected Lie group whose Lie algebra is g. The function algebra R(𝐺) is a commutative Hopf algebra, whereas U(g) is a cocommutative Hopf algebra. On identifying g with the space of left-invariant vector fields on the group manifold 𝐺, we can realize U(g) as the algebra of left-invariant differential operators on 𝐺. 
If 𝑋 ∈ g and 𝑓 ∈ R(𝐺), we define ⟨𝑋, 𝑓 ⟩ := 𝑋 𝑓 (1) = (𝑑/𝑑𝑡)|𝑡=0 𝑓 (exp 𝑡𝑋), and more generally, ⟨𝑋1 . . . 𝑋𝑛, 𝑓 ⟩ := 𝑋1(· · · (𝑋𝑛 𝑓 )) (1); we also set ⟨1, 𝑓 ⟩ := 𝑓 (1). This yields a duality between R(𝐺) and U(g). Indeed, the Leibniz rule for vector fields, namely 𝑋 ( 𝑓 ℎ) = (𝑋 𝑓 )ℎ + 𝑓 (𝑋ℎ), gives ⟨𝑋, 𝑓 ℎ⟩ = 𝑋 𝑓 (1)ℎ(1) + 𝑓 (1)𝑋ℎ(1) = (𝑋 ⊗ 1 + 1 ⊗ 𝑋) ( 𝑓 ⊗ ℎ) (1 ⊗ 1) = Δ𝑋 ( 𝑓 ⊗ ℎ) (1 ⊗ 1) = ⟨Δ𝑋, 𝑓 ⊗ ℎ⟩, (1.12) while ⟨𝑋 ⊗ 𝑌,Δ 𝑓 ⟩ = (𝑑/𝑑𝑡)|𝑡=0 (𝑑/𝑑𝑠)|𝑠=0 (Δ 𝑓 ) (exp 𝑡𝑋 ⊗ exp 𝑠𝑌 ) = (𝑑/𝑑𝑡)|𝑡=0 (𝑑/𝑑𝑠)|𝑠=0 𝑓 (exp 𝑡𝑋 exp 𝑠𝑌 ) = (𝑑/𝑑𝑡)|𝑡=0 (𝑌 𝑓 ) (exp 𝑡𝑋) = 𝑋 (𝑌 𝑓 ) (1) = ⟨𝑋𝑌, 𝑓 ⟩. If ⟨𝐷, 𝑓 ⟩ = 0 for all 𝐷 ∈ U(g), then 𝑓 has a vanishing Taylor series at the identity of 𝐺. Since representative functions are real-analytic [62], this forces 𝑓 = 0. On the other hand, if ⟨𝐷, 𝑓 ⟩ = 0 for all 𝑓 , the left-invariant differential operator determined by 𝐷 is null, so 𝐷 = 0 in U(g). The remaining properties are easily checked. Definition 1.5. The relation (1.12) shows that Δ𝑋 = 𝑋 ⊗ 1 + 1 ⊗ 𝑋 encodes the Leibniz rule for vector fields. In any Hopf algebra 𝐻, an element ℎ ∈ 𝐻 for which Δℎ = ℎ ⊗ 1 + 1 ⊗ ℎ is called primitive. It follows that 𝜀(ℎ) = 0 and that 𝑆(ℎ) = −ℎ. In the enveloping algebra U(g), elements of g are obviously primitive. If 𝑎 and 𝑏 are primitive, then so is 𝑎𝑏 − 𝑏𝑎, so the vector space Prim(𝐻) of primitive elements of 𝐻 is actually a Lie algebra. Moreover, since the field of scalars 𝔽 has characteristic zero, the only primitive elements of U(g) are those in g, i.e., Prim(U(g)) = g: see [11], [52, Lemma 1.21] or [74, Prop. 5.5.3]. (Over fields of prime characteristic, there are other primitive elements in U(g) [74].) ▶ If 𝐻 is a bialgebra and 𝐴 is an algebra, and if 𝜙, 𝜓 : 𝐻 → 𝐴 are algebra homomorphisms, their convolution 𝜙 ∗ 𝜓 ∈ Hom(𝐻, 𝐴) is a linear map, and will also be a homomorphism provided that 𝐴 is commutative.
Indeed, 𝜙 ∗𝜓 = 𝑚(𝜙 ⊗ 𝜓)Δ is a composition of three homomorphisms in this case; the commutativity of 𝐴 is needed to ensure that 𝑚 : 𝐴 ⊗ 𝐴 → 𝐴 is multiplicative. A particularly important case arises when 𝐴 = 𝔽 . Definition 1.6. A character of an algebra is a nonzero linear functional which is also multiplicative, that is, 𝜇(𝑎𝑏) = 𝜇(𝑎) 𝜇(𝑏) for all 𝑎, 𝑏; notice that 𝜇(1) = 1. The counit 𝜀 of a bialgebra is a character. Characters of a bialgebra can be convolved, since 𝜇 ∗ 𝜈 = (𝜇 ⊗ 𝜈)Δ is a composition of homomorphisms. The characters of a Hopf algebra 𝐻 form a group G(𝐻) under convolution, whose neutral element is 𝜀; the inverse of 𝜇 is 𝜇𝑆. A derivation or “infinitesimal character” of a Hopf algebra 𝐻 is a linear map 𝛿 : 𝐻 → 𝔽 satisfying 𝛿(𝑎𝑏) = 𝛿(𝑎)𝜀(𝑏) + 𝜀(𝑎)𝛿(𝑏) for all 𝑎, 𝑏 ∈ 𝐻. This entails 𝛿(1𝐻) = 0. The previous relation can also be written as 𝑚𝑡 (𝛿) = 𝛿 ⊗ 𝜀 + 𝜀 ⊗ 𝛿, which shows that 𝛿 belongs to 𝐻◦ and is primitive there; in particular, the bracket [𝛿, 𝜕] := 𝛿 ∗ 𝜕 − 𝜕 ∗ 𝛿 of two derivations is again a derivation. Thus the vector space Der𝜀 (𝐻) of derivations is actually a Lie algebra. In the commutative case, there is another kind of duality to consider: one that matches a Hopf algebra with its character group. A compact topological group𝐺 admits a normalized left-invariant integral (the Haar integral): this can be thought of as a functional 𝐽 : R(𝐺) → ℝ, where the left- invariance translates as (𝜄 ⊗ 𝐽)Δ = 𝜂𝐽. (We leave it as an exercise to show that this corresponds to 14 the usual definition of an invariant integral.) The evaluations at points of𝐺 supply all the characters of this Hopf algebra: G(R(𝐺)) ≃ 𝐺. Conversely, if 𝐻 is a commutative Hopf algebra possessing such a left-invariant functional 𝐽, then its character group is compact, and 𝐻 ≃ R(G(𝐻)). These results make up the Tannaka–Kreı̆n duality theorem —for the proofs, see [52] or [55]— and it is important either to use real scalars, or to consider only hermitian characters if complex scalars are used. 
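As a toy illustration of the correspondence G(R(𝐺)) ≃ 𝐺 (a sketch of ours, with the compact group replaced by the finite group 𝑆3, and functionals on F(𝐺) stored as coefficient vectors), one can check that convolution of point evaluations reproduces the group law, that 𝜇 ◦ 𝑆 gives the convolution inverse, and that multiplicativity singles out the evaluations:

```python
from itertools import permutations

G = list(permutations(range(3)))                        # S3
e = (0, 1, 2)
comp = lambda g, h: tuple(g[h[i]] for i in range(3))
inv  = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))

# A functional on H = F(G) is a vector {g: c_g}, meaning λ(f) = Σ c_g f(g).
def apply(lam, f):  return sum(c * f[g] for g, c in lam.items())
def convolve(lam, kap):     # (λ ∗ κ)(f) = (λ ⊗ κ)(Δf) = Σ λ_g κ_h f(gh)
    out = {}
    for g, c in lam.items():
        for h, d in kap.items():
            k = comp(g, h)
            out[k] = out.get(k, 0) + c * d
    return out

mu = {G[2]: 1}                  # the character μ_g : f ↦ f(g), here g = G[2]
nu = {G[4]: 1}
assert convolve(mu, nu) == {comp(G[2], G[4]): 1}        # μ_g ∗ μ_h = μ_{gh}
assert convolve(mu, {inv(G[2]): 1}) == {e: 1}           # inverse is μ_g ∘ S = μ_{g⁻¹}

# multiplicativity μ(f1·f2) = μ(f1)μ(f2) holds for μ_g but fails in general:
f1  = {g: i for i, g in enumerate(G)}
f2  = {g: i * i - 3 for i, g in enumerate(G)}
f12 = {g: f1[g] * f2[g] for g in G}
assert apply(mu, f12) == apply(mu, f1) * apply(mu, f2)
lam = {G[0]: 1, G[1]: 1}        # a sum of two evaluations is not a character
assert apply(lam, f12) != apply(lam, f1) * apply(lam, f2)
```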
The totality of all ℂ-valued characters of R(𝐺), hermitian or not, is a complex group 𝐺ℂ called the complexification of 𝐺 [13, III.8]; for instance, if 𝐺 = SU(𝑛), then 𝐺ℂ ≃ 𝑆𝐿 (𝑛,ℂ). ▶ The action of vector fields in g and differential operators inU(g) on the space of smooth functions on 𝐺, and more generally on any manifold carrying a transitive action of the group 𝐺, leads to the notion of a Hopf action of a Hopf algebra 𝐻 on an algebra 𝐴. Definition 1.7. Let 𝐻 be a Hopf algebra. A (left) Hopf 𝐻-module algebra 𝐴 is an algebra which is a (left) module for the algebra 𝐻 such that ℎ · 1𝐴 = 𝜀(ℎ) 1𝐴 and ℎ · (𝑎𝑏) = ∑(ℎ:1 · 𝑎) (ℎ:2 · 𝑏) (1.13) whenever 𝑎, 𝑏 ∈ 𝐴 and ℎ ∈ 𝐻. Grouplike elements act by endomorphisms of 𝐴, since 𝑔 · (𝑎𝑏) = (𝑔 · 𝑎) (𝑔 · 𝑏) and 𝑔 · 1 = 1 if 𝑔 is grouplike. On the other hand, primitive elements of 𝐻 act by the usual Leibniz rule: ℎ · (𝑎𝑏) = (ℎ · 𝑎)𝑏 + 𝑎(ℎ · 𝑏) and ℎ · 1 = 0 if Δℎ = ℎ ⊗ 1+ 1 ⊗ ℎ. Thus (1.13) is a sort of generalized Leibniz rule. ▶ Duality suggests that an action of U(g) should manifest itself as a coaction of R(𝐺). Definition 1.8. A vector space 𝑉 is called a right comodule for a Hopf algebra 𝐻 if there is a linear map Φ : 𝑉 → 𝑉 ⊗ 𝐻 (the right coaction) satisfying (Φ ⊗ 𝜄)Φ = (𝜄 ⊗ Δ)Φ : 𝑉 → 𝑉 ⊗ 𝐻 ⊗ 𝐻, (𝜄 ⊗ 𝜀)Φ = 𝜄 : 𝑉 → 𝑉. (1.14) In Sweedler notation, we may write the coaction as Φ(𝑣) =: ∑ 𝑣:0 ⊗ 𝑣:1, so ∑ 𝑣:0 𝜀(𝑣:1) = 𝑣 and ∑ 𝑣:0:0 ⊗ 𝑣:0:1 ⊗ 𝑣:1 = ∑ 𝑣:0 ⊗ 𝑣:1:1 ⊗ 𝑣:1:2; we can rewrite both sides of the last equality as∑ 𝑣:0 ⊗ 𝑣:1 ⊗ 𝑣:2, where, by convention, 𝑣:𝑟 ∈ 𝐻 for 𝑟 ≠ 0 while 𝑣:0 ∈ 𝑉 . Left 𝐻-comodules are similarly defined; a linear map Φ : 𝑉 → 𝐻 ⊗ 𝑉 is a left coaction if (𝜄 ⊗ Φ)Φ = (Δ ⊗ 𝜄)Φ and (𝜀 ⊗ 𝜄)Φ = 𝜄; it is convenient to write Φ(𝑣) =: ∑ 𝑣:−1 ⊗ 𝑣:0 in this case. If a 𝐻-comodule 𝐴 is also an algebra and if the coaction Φ : 𝐴 → 𝐴 ⊗ 𝐻 is an algebra homomorphism, we say that 𝐴 is a (right) 𝐻-comodule algebra. In this case, ∑(𝑎𝑏):0 ⊗ (𝑎𝑏):1 =∑ 𝑎:0𝑏:0 ⊗ 𝑎:1𝑏:1. 
If 𝐻 and 𝑈 are two Hopf algebras in duality, then any right 𝐻-comodule algebra 𝐴 becomes a left𝑈-module algebra, under 𝑋 · 𝑎 := ∑ 𝑎:0 ⟨𝑋, 𝑎:1⟩, for 𝑋 ∈ 𝑈 and 𝑎 ∈ 𝐴. In symbols: 𝑋 acts as the operator (𝜄 ⊗ ⟨𝑋 |)Φ on 𝐴. Indeed, it is enough to note that 𝑋 · (𝑎𝑏) = ∑ 𝑎:0𝑏:0 ⟨𝑋, 𝑎:1𝑏:1⟩ = ∑ 𝑎:0𝑏:0 ⟨Δ𝑋, 𝑎:1 ⊗ 𝑏:1⟩ = ∑ 𝑎:0𝑏:0 ⟨𝑋:1 ⊗ 𝑋:2, 𝑎:1 ⊗ 𝑏:1⟩ = ∑ 𝑎:0 ⟨𝑋:1, 𝑎:1⟩ 𝑏:0 ⟨𝑋:2, 𝑏:1⟩ = ∑(𝑋:1 · 𝑎) (𝑋:2 · 𝑏). 15 The language of coactions is used to formulate what one obtains by applying the Gelfand cofunctor (loosely speaking) to the concept of a homogeneous space under a group action. If a compact group 𝐺 acts transitively on a space 𝑀 , one can write 𝑀 ≈ 𝐺/𝐾 , where 𝐾 is the closed subgroup fixing a basepoint 𝑧0 ∈ 𝑀 (i.e., 𝐾 is the “isotropy subgroup” of 𝑧0). Then any function on 𝑀 is obtained from a function on 𝐺 which is constant on right cosets of 𝐾 . If F(𝐺) and F(𝑀) denote suitable algebras of functions on 𝐺 and 𝑀 (we shall be more precise about these algebras in a moment), then there is a corresponding algebra of right 𝐾-invariant functions F(𝐺)𝐾 := { 𝑓 ∈ F(𝐺) : 𝑓 (𝑥𝑤) = 𝑓 (𝑥) whenever 𝑤 ∈ 𝐾, 𝑥 ∈ 𝐺 }. If 𝑥 ∈ 𝑀 corresponds to the right coset 𝑥𝐾 in 𝐺/𝐾 , then 𝜁 𝑓 (𝑥) := 𝑓 (𝑥) defines an algebra isomorphism 𝜁 : F(𝐺)𝐾 → F(𝑀). [For aesthetic reasons, one may prefer to work with left 𝐾-invariant functions; for that, one should instead identify 𝑀 with the space 𝐾\𝐺 of left cosets of 𝐾 .] Suppose now that the chosen spaces of functions satisfy F(𝐺) ⊗ F(𝑀) ≃ F(𝐺 × 𝑀), (1.15) where ⊗ denotes, as before, the algebraic tensor product. Then we can define 𝜌 : F(𝑀) → F(𝐺) ⊗ F(𝑀) by 𝜌 𝑓 (𝑥, �̄�) := 𝑓 (𝑥𝑦). It follows that [𝜌𝜁 𝑓 ] (𝑥, �̄�) = 𝜁 𝑓 (𝑥𝑦) = 𝑓 (𝑥𝑦) = Δ 𝑓 (𝑥, 𝑦) = [(𝜄 ⊗ 𝜁)Δ 𝑓 ] (𝑥, �̄�), (1.16) so that 𝜌𝜁 = (𝜄 ⊗ 𝜁)Δ : F(𝐺)𝐾 → F(𝐺) ⊗ F(𝑀). Notice, in passing, that the coproduct Δ maps F(𝐺)𝐾 into F(𝐺) ⊗ F(𝐺)𝐾 , which consists of functions ℎ on 𝐺 × 𝐺 such that ℎ(𝑥, 𝑦𝑤) = ℎ(𝑥, 𝑦) when 𝑤 ∈ 𝐾 . 
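The inclusion Δ(F(𝐺)𝐾) ⊆ F(𝐺) ⊗ F(𝐺)𝐾 noted above can be verified directly in a finite toy model; here is a sketch of ours with 𝐺 = 𝑆3 and 𝐾 the two-element subgroup generated by a transposition, so that 𝑀 = 𝐺/𝐾 has three points.

```python
from itertools import permutations

G = list(permutations(range(3)))                   # the group S3
comp = lambda g, h: tuple(g[h[i]] for i in range(3))
K = [(0, 1, 2), (1, 0, 2)]                         # subgroup generated by (01)

# average an arbitrary function over K to get an element f of F(G)^K:
raw = {g: i * i + 1 for i, g in enumerate(G)}
f = {g: sum(raw[comp(g, w)] for w in K) for g in G}
assert all(f[comp(g, w)] == f[g] for g in G for w in K)    # right K-invariance

# Δf(x, y) = f(xy) lies in F(G) ⊗ F(G)^K: invariance under y ↦ yw only
Df = {(x, y): f[comp(x, y)] for x in G for y in G}
assert all(Df[(x, comp(y, w))] == Df[(x, y)] for x in G for y in G for w in K)
```

Since f takes one value per right coset, it is (via 𝜁) a function on the three-point space 𝑀.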
[Had we used left cosets and left-invariant functions, the corresponding relations would be Δ(F(𝐺)𝐾) ⊆ F(𝐺)𝐾 ⊗ F(𝐺), 𝜌 : F(𝑀) → F(𝑀) ⊗ F(𝐺), and 𝜌𝜁 = (𝜁 ⊗ 𝜄)Δ.] In Hopf algebra language, 𝜌 defines a left [or right] coaction of F(𝐺) on the algebra F(𝑀), implementing the left [or right] action of the group 𝐺 on 𝑀, and 𝜁 intertwines this with the left [or right] regular coaction on 𝐾-invariant functions induced by the coproduct Δ. We get an instance of the following definition. Definition 1.9. In the lore of quantum groups —see, for instance, [61, §11.6]— a (left) embedded homogeneous space for a Hopf algebra 𝐻 is a left 𝐻-comodule algebra 𝐴 with coaction 𝜌 : 𝐴 → 𝐻 ⊗ 𝐴, for which there exists a subalgebra 𝐵 ⊆ 𝐻 and an algebra isomorphism 𝜁 : 𝐵 → 𝐴 such that 𝜌𝜁 = (𝜄 ⊗ 𝜁)Δ : 𝐵 → 𝐻 ⊗ 𝐴. A right embedded homogeneous space is defined, mutatis mutandis, in the same way. There are two ways to ensure that the relation (1.15) holds. One way is to choose F(𝐺) := R(𝐺), which is a bona-fide Hopf algebra, and then to define R(𝑀) as the image 𝜁 (R(𝐺)𝐾) of the 𝐾-invariant representative functions. For instance, if 𝐺 = SU(2) and 𝐾 = U(1), so that 𝑀 ≈ 𝕊2 is the usual 2-sphere of spin directions, then R(𝐺) is spanned by the matrix elements D𝑗𝑚𝑛 of the (2𝑗 + 1)-dimensional unitary irreducible representations of SU(2): see [7], for example. Now D𝑗𝑚𝑛 is right U(1)-invariant if and only if 𝑗 is an integer (not a half-integer) and 𝑛 = 0; moreover, the functions 𝑌𝑙𝑚 := √((2𝑙 + 1)/4𝜋) D𝑙∗𝑚0 are the usual spherical harmonics on the 2-sphere. In other words: R(𝕊2) is the algebra of spherical harmonics on 𝕊2. ▶ To move closer to noncommutative geometry, it would be better to use either continuous functions (at the 𝐶∗-algebra level) or smooth functions on 𝐺 and 𝑀; that is, one should work with F = 𝐶 or with F = 𝐶∞.
Notice that formulas like (1.16) make perfect sense in those cases; but the tensor product relation (1.15) is false in the continuous or smooth categories, unless the algebraic ⊗ is replaced by a more suitable completed tensor product. In the continuous case, for compact 𝐺 and 𝑀 , the relation 𝐶 (𝐺) ⊗ 𝐶 (𝑀) ≃ 𝐶 (𝐺 × 𝑀) is valid, where ⊗ denotes the “minimal” tensor product of 𝐶∗-algebras. (There may be several compatible 𝐶∗-norms on a tensor product of two 𝐶∗-algebras; but they all coincide if the algebras are commutative.) In the smooth case, we may fall back on a theorem of Grothendieck [54], which says that 𝐶∞(𝐺) ⊗̂ 𝐶∞(𝑀) ≃ 𝐶∞(𝐺 × 𝑀), where ⊗̂ denotes the projective tensor product of Fréchet spaces. But then, it is necessary to go back and reexamine our definitions: for instance, the coproduct need only satisfy Δ(𝐴) ⊆ 𝐴 ⊗ 𝐴 for a completed tensor product, which is a much weaker statement than the original one — the formula Δ𝑎 = ∑ 𝑎:1 ⊗ 𝑎:2 need no longer be a finite sum, but only some kind of convergent series. The bad news is that, in the 𝐶∗-algebra case, the product map 𝑚 : 𝐴 ⊗ 𝐴→ 𝐴 is usually not continuous; the counit 𝜀 and antipode 𝑆 become unbounded linear maps and one must worry about their domains; and so on. We shall meet examples of these generalized Hopf algebras in subsection 4.2. 1.3 Hopf actions of differential operators: an example The Hopf algebras which are currently of interest are typically neither commutative, like R(𝐺), nor cocommutative, like U(g). The enormous profusion of “quantum groups” which have emerged in the last twenty years provide many examples of such noncommutative, noncocommutative Hopf algebras: see [17,59,61,70] for catalogues of these. 
A class of Hopf algebras which are commutative but are not cocommutative were introduced a few years ago, first by Kreimer in a quantum field theory context [63], and independently by Connes and Moscovici [35] in connection with a local index formula for foliations; in both cases, the Hopf algebra becomes a device to organize complicated calculations. We shall discuss the QFT version at length in the next section; here we look at the geometric example first. If one wishes to deal with gravity in a noncommutative geometric framework [26], one must be able to handle the geometrical invariants of spacetime under the action of local diffeomorphisms. We consider an oriented 𝑛-dimensional manifold 𝑀 , without boundary. By local diffeomorphisms on 𝑀 we mean diffeomorphisms 𝜓 : Dom𝜓 → Ran𝜓, where both the domain Dom𝜓 and range Ran𝜓 are open subsets of 𝑀; and we shall always assume that 𝜓 preserves the given orientation on 𝑀 . Two such local diffeomorphisms can be composed if and only if the range of the first lies within the domain of the second, and any local diffeomorphism can be inverted: taken all together, they form what is called a pseudogroup. We let Γ be a subpseudogroup (with the discrete topology), and consider the pair (𝑀, Γ). 17 The orbit space 𝑀/Γ has in most cases a very poor topology. The noncommutative geometry approach is to replace this singular space by an algebra which captures the action of Γ on 𝑀 . The initial candidate, a “crossed product” algebra 𝐶 (𝑀) ⋊ Γ, still has a very complicated structure; but much progress can be made [22] by replacing 𝑀 by the bundle 𝐹 → 𝑀 of oriented frames on 𝑀 . This is a principal fibre bundle whose structure group is GL+(𝑛,ℝ), the 𝑛×𝑛matrices with positive determinant. Any 𝜓 ∈ Γ admits a prolongation to the frame bundle described as follows. Let 𝑥 = (𝑥1, . . . , 𝑥𝑛) be local coordinates on 𝑀 and let 𝑦 = (𝑦1 1, 𝑦 2 1, . . . , 𝑦 𝑛 𝑛) be local coordinates for the frame at 𝑥. 
To avoid a “debauch of indices”, we mainly consider the 1-dimensional case, where 𝑀 ≈ 𝕊1 is a circle and 𝐹 is a cylinder (but we use a matrix notation to indicate how to proceed for higher dimensions; the details for the general case are carefully laid out in [114]). Then 𝜓 acts locally on 𝐹 through 𝜓̃, given by 𝜓̃(𝑥, 𝑦) := (𝜓(𝑥), 𝜓′(𝑥)𝑦). The point is that, while 𝑀 need not carry any Γ-invariant measure, the top-degree differential form 𝜈 = 𝑦−2 𝑑𝑦 ∧ 𝑑𝑥 on 𝐹 is Γ-invariant: 𝜓̃∗𝜈 = 𝑦−2𝜓′(𝑥)−2 𝜓′(𝑥) 𝑑𝑦 ∧ 𝜓′(𝑥) 𝑑𝑥 = 𝜈, so we can build a Hilbert space 𝐿2(𝐹, 𝜈) and represent the action of each 𝜓 ∈ Γ by the unitary operator 𝑈𝜓 defined by 𝑈𝜓𝜉 (𝑥, 𝑦) := 𝜉 (𝜓̃−1(𝑥, 𝑦)). It is slightly more convenient to work with the adjoint unitary operators 𝑈†𝜓𝜉 (𝑥, 𝑦) := 𝜉 (𝜓̃(𝑥, 𝑦)). These unitaries intertwine multiplication operators coming from functions on 𝐹 (specifically, smooth functions with compact support) as follows: 𝑈𝜓 𝑓𝑈†𝜓 = 𝑓 𝜓, where 𝑓 𝜓 (𝑥, 𝑦) := 𝑓 (𝜓̃−1(𝑥, 𝑦)). (1.17) The local action of Γ on 𝐹 can be described in the language of smooth groupoids [38], or alternatively by introducing a “crossed product” algebra which incorporates the groupoid convolution. This is a pre-𝐶∗-algebra A obtained by suitably completing the algebra span{ 𝑓𝑈†𝜓 : 𝜓 ∈ Γ, 𝑓 ∈ 𝐶∞𝑐 (Dom 𝜓̃) }. The relation (1.17) gives the multiplication rule ( 𝑓𝑈†𝜓) (𝑔𝑈†𝜙) = 𝑓 (𝑈†𝜓 𝑔𝑈𝜓)𝑈†𝜓𝑈†𝜙 = 𝑓 (𝑔 ◦ 𝜓̃)𝑈†𝜙𝜓 . (1.18) Any two such elements are composable, since the support of 𝑓 (𝑔 ◦ 𝜓̃) is a compact subset of Dom 𝜓̃ ∩ 𝜓̃−1(Dom 𝜙̃) ⊆ Dom(𝜙̃𝜓̃). This construction is called the smash product in the Hopf algebra books: if 𝐻 is a Hopf algebra and 𝐴 is a left Hopf 𝐻-module algebra, the smash product is the algebra 𝐴 # 𝐻 which is defined as the vector space 𝐴 ⊗ 𝐻 with the product rule (𝑎 ⊗ ℎ) (𝑏 ⊗ 𝑘) := ∑ 𝑎(ℎ:1 · 𝑏) ⊗ ℎ:2𝑘. If ℎ is a grouplike element of 𝐻, this reduces to (𝑎 ⊗ ℎ) (𝑏 ⊗ 𝑘) := 𝑎(ℎ · 𝑏) ⊗ ℎ𝑘, of which (1.18) is an instance.
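A minimal sketch of the multiplication rule (1.18), under simplifying assumptions of our own choosing: functions of finite support on ℤ in place of compactly supported smooth functions, and Γ = ℤ acting by integer translations 𝜓𝑛(𝑥) = 𝑥 + 𝑛, so that no domain bookkeeping is needed.

```python
def shift(g, n):                 # g ∘ ψ_n, where ψ_n(x) = x + n
    return {x - n: c for x, c in g.items()}

def fmul(f, g):                  # pointwise product of functions
    return {x: f[x] * g[x] for x in f if x in g}

def cmul(a, b):                  # (f U†_ψ)(g U†_φ) = f·(g∘ψ) U†_{φψ}, as in (1.18)
    out = {}
    for n, f in a.items():
        for m, g in b.items():
            h = fmul(f, shift(g, n))
            if h:
                acc = out.setdefault(m + n, {})
                for x, c in h.items():
                    acc[x] = acc.get(x, 0) + c
    return out

# sample crossed-product elements Σ f_n U†_n, stored as {n: {x: coeff}}:
a = {0: {0: 1, 1: 2}, 1: {0: 3}}
b = {2: {1: 1, 3: 2}}
c = {0: {3: 1}, 1: {1: 2, 2: 1}}
assert cmul(cmul(a, b), c) == cmul(a, cmul(b, c))               # associativity
assert cmul({0: {0: 1}}, {1: {0: 1}}) != cmul({1: {0: 1}}, {0: {0: 1}})
```

The last line shows the noncommutativity: 𝑈†𝜓 moves the support of the function it passes across.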
A local basis {𝑋,𝑌 } of vector fields on the bundle 𝐹 is defined by the “vertical” vector field 𝑌 := 𝑦 𝜕/𝜕𝑦, generating translations along the fibres, and the “horizontal” vector field 𝑋 := 𝑦 𝜕/𝜕𝑥, 18 generating displacements transverse to the fibres. In higher dimensions, the basis contains 𝑛2 vertical vector fields 𝑌 𝑖 𝑗 and 𝑛 horizontal vector fields 𝑋𝑘 [114]. Under the lifted action of Γ, 𝑌 is invariant: �̃�∗𝑌 = 𝜓′(𝑥)𝑦 𝜕 𝜕𝜓′(𝑥)𝑦 = 𝑦 𝜕 𝜕𝑦 = 𝑌, but 𝑋 is not. To see that, consider the 1-forms 𝛼 := 𝑦−1 𝑑𝑥 and 𝜔 := 𝑦−1 𝑑𝑦. The form 𝛼 is the so-called canonical 1-form on 𝐹, which is invariant since �̃�∗𝛼 = 𝑦−1𝜓′(𝑥)−1 𝑑𝜓(𝑥) = 𝑦−1 𝑑𝑥 = 𝛼, whereas 𝜔 is not invariant: �̃�∗𝜔 = 𝑦−1 𝑑𝑦 + 𝜓′(𝑥)−1 𝑑𝜓′(𝑥) = 𝑦−1 𝑑𝑦 + 𝜓 ′′(𝑥) 𝜓′(𝑥) 𝑑𝑥. This transformation rule shows that 𝜔 is a connection 1-form on the principal bundle 𝐹 → 𝑀; and the horizontality of 𝑋 means, precisely, that 𝜔(𝑋) = 0. Notice also that 𝛼(𝑋) = 1. Now the vector field �̃�−1 ∗ 𝑋 can be computed from the two equations 𝛼(�̃�−1 ∗ 𝑋) = �̃�∗𝛼(�̃�−1 ∗ 𝑋) = 𝛼(𝑋) = 1 and �̃�∗𝜔(�̃�−1 ∗ 𝑋) = 𝜔(𝑋) = 0; we get �̃�−1 ∗ 𝑋 = 𝑦 𝜕 𝜕𝑥 − 𝑦2𝜓 ′′(𝑥) 𝜓′(𝑥) 𝜕 𝜕𝑦 = 𝑋 − ℎ𝜓𝑌, (1.19a) where ℎ𝜓 (𝑥, 𝑦) := 𝑦 𝜓′′(𝑥) 𝜓′(𝑥) = 𝑦 𝜕 𝜕𝑥 ( log𝜓′(𝑥) ) . (1.19b) Any vector field 𝑍 on 𝐹 determines a linear operator on A, also denoted by 𝑍 , by 𝑍 ( 𝑓𝑈† 𝜓 ) := (𝑍 𝑓 )𝑈† 𝜓 , (1.20) which makes sense since supp(𝑍 𝑓 ) ⊆ supp 𝑓 ⊂ Dom �̃�. When applied to products, this operator gives 𝑍 ( 𝑓𝑈† 𝜓 𝑔𝑈 † 𝜙 ) = 𝑍 ( 𝑓 (𝑔 ◦ �̃�))𝑈† 𝜙𝜓 = (𝑍 𝑓 ) (𝑔 ◦ �̃�)𝑈† 𝜙𝜓 + 𝑓 𝑍 (𝑔 ◦ �̃�)𝑈† 𝜙𝜓 = (𝑍 𝑓 )𝑈† 𝜓 𝑔𝑈 † 𝜙 + 𝑓𝑈 † 𝜓 (𝑍 (𝑔 ◦ �̃�) ◦ �̃�−1)𝑈† 𝜙 = (𝑍 𝑓 )𝑈† 𝜓 𝑔𝑈 † 𝜙 + 𝑓𝑈 † 𝜓 �̃�∗𝑍 (𝑔)𝑈† 𝜙 . (1.21) Since the vector field𝑌 is invariant, �̃�∗𝑌 = 𝑌 , so the lifted operator𝑌 is a derivation on the algebraA: 𝑌 ( 𝑓𝑈† 𝜓 𝑔𝑈 † 𝜙 ) = (𝑌 𝑓 )𝑈† 𝜓 𝑔𝑈 † 𝜙 + 𝑓𝑈 † 𝜓 (𝑌𝑔)𝑈† 𝜙 , Proposition 1.1. The operator 𝑋 on A is not a derivation; however, there is a derivation 𝜆1 on A such that 𝑋 obeys the generalized Leibniz rule 𝑋 (𝑎𝑏) = 𝑋 (𝑎)𝑏 + 𝑎𝑋 (𝑏) + 𝜆1(𝑎)𝑌 (𝑏) for all 𝑎, 𝑏 ∈ A. (1.22) Proof. 
Using the invariance of 𝑌 and (1.19a), we get �̃�∗𝑋 − 𝑋 = �̃�∗(𝑋 − �̃�−1 ∗ 𝑋) = �̃�∗(ℎ𝜓𝑌 ) = (ℎ𝜓 ◦ �̃�−1)𝑌, 19 and it follows that 𝑓𝑈 † 𝜓 (�̃�∗𝑋 (𝑔) − 𝑋𝑔)𝑈† 𝜙 = 𝑓𝑈 † 𝜓 (ℎ𝜓 ◦ �̃�−1) (𝑌𝑔)𝑈† 𝜙 = 𝑓 ℎ𝜓𝑈 † 𝜓 (𝑌𝑔)𝑈† 𝜙 . If we define 𝜆1( 𝑓𝑈† 𝜓 ) := ℎ𝜓 𝑓𝑈† 𝜓 , (1.23) then (1.21) for 𝑍 = 𝑋 now reads 𝑋 ( 𝑓𝑈† 𝜓 𝑔𝑈 † 𝜙 ) = 𝑋 ( 𝑓𝑈† 𝜓 ) 𝑔𝑈† 𝜙 + 𝑓𝑈 † 𝜓 𝑋 (𝑔𝑈† 𝜙 ) + 𝜆1( 𝑓𝑈† 𝜓 )𝑌 (𝑔𝑈† 𝜙 ). Thus, (1.22) holds on generators. We leave the reader to check that the formula extends to finite products of generators, provided that 𝜆1 is indeed a derivation. Now (1.19b) implies ℎ𝜙𝜓 (𝑥, 𝑦) = 𝑦 𝜕 𝜕𝑥 ( log 𝜙′(𝜓(𝑥)) + log𝜓′(𝑥) ) = ℎ𝜙 (�̃�(𝑥, 𝑦)) + ℎ𝜓 (𝑥, 𝑦), so that ℎ𝜙𝜓 = �̃�∗ℎ𝜙 + ℎ𝜓 , and the derivation property of 𝜆1 follows: 𝜆1( 𝑓𝑈† 𝜓 𝑔𝑈 † 𝜙 ) = (�̃�∗ℎ𝜙 + ℎ𝜓) 𝑓 (𝑔 ◦ �̃�)𝑈† 𝜙𝜓 = 𝑓 ((ℎ𝜙𝑔) ◦ �̃�)𝑈† 𝜙𝜓 + ℎ𝜓 𝑓𝑈† 𝜓 𝑔𝑈 † 𝜙 = ( 𝑓𝑈† 𝜓 ) (ℎ𝜙𝑔𝑈† 𝜙 ) + (ℎ𝜓 𝑓𝑈† 𝜓 ) (𝑔𝑈† 𝜙 ). □ Consider now the Lie algebra obtained from the operators 𝑋 , 𝑌 and 𝜆1. The vector fields 𝑋 , 𝑌 have the commutator [𝑦 𝜕/𝜕𝑦, 𝑦 𝜕/𝜕𝑥] = 𝑦 𝜕/𝜕𝑥 and the corresponding operators on A satisfy [𝑌, 𝑋] = 𝑋 . Next, [𝑌, 𝜆1] ( 𝑓𝑈† 𝜓 ) = 𝑓 (𝑌ℎ𝜓)𝑈† 𝜓 , and from 𝑌ℎ𝜓 = ℎ𝜓 we get [𝑌, 𝜆1] = 𝜆1. Simi- larly, [𝑋, 𝜆1] ( 𝑓𝑈† 𝜓 ) = 𝑓 (𝑋ℎ𝜓)𝑈† 𝜓 , where 𝑋ℎ𝜓 = 𝑦 𝜕/𝜕𝑥 ( 𝑦 𝜓′′(𝑥)/𝜓′(𝑥) ) = 𝑦2 𝜕2/𝜕𝑥2 (log𝜓′(𝑥) ) . Introduce ℎ𝑛𝜓 = 𝑦𝑛 𝑑𝑛 𝑑𝑥𝑛 log𝜓′(𝑥), for 𝑛 = 1, 2, . . . , and define 𝜆𝑛 ( 𝑓𝑈† 𝜓 ) := 𝑓 ℎ𝑛 𝜓 𝑈 † 𝜓 , then 𝜆2 = [𝑋, 𝜆1] and by induction we obtain 𝜆𝑛+1 = [𝑋, 𝜆𝑛] for all 𝑛. Clearly 𝑌ℎ𝑛 𝜓 = 𝑛ℎ𝑛 𝜓 , which implies [𝑌, 𝜆𝑛] = 𝑛𝜆𝑛. The operators 𝜆𝑛 commute among themselves. We have constructed a Lie algebra, linearly generated by 𝑋 , 𝑌 , and all the 𝜆𝑛. We can make the associative algebra with these same generators into a Hopf algebra [35] by defining their coproducts as follows. Since 𝑌 and 𝜆1 act as derivations, they must be primitive: Δ𝑌 := 𝑌 ⊗ 1 + 1 ⊗ 𝑌, (1.24a) Δ𝜆1 := 𝜆1 ⊗ 1 + 1 ⊗ 𝜆1. (1.24b) The coproduct of 𝑋 can be read off from (1.22): Δ𝑋 := 𝑋 ⊗ 1 + 1 ⊗ 𝑋 + 𝜆1 ⊗ 𝑌 . 
(1.24c) Moreover, 𝜀(𝑌 ) = 𝜀(𝜆1) = 0 since 𝑌 and 𝜆1 are primitive, and 𝜀(𝑋) = 0 since 𝑋 = [𝑌, 𝑋] is a commutator; moreover, 𝜀(𝜆𝑛) = 0 for all 𝑛 ⩾ 2 for the same reason. The commutation relations yield the remaining coproducts; for instance, Δ𝜆2 := [Δ𝑋,Δ𝜆1] = 𝜆2 ⊗ 1 + 1 ⊗ 𝜆2 + 𝜆1 ⊗ 𝜆1. 20 The antipode is likewise determined: 𝑆(𝑌 ) = −𝑌 and 𝑆(𝜆1) = −𝜆1 since 𝑌 and 𝜆1 are primitive, and (𝜄 ∗ 𝑆) (𝑋) = 𝜀(𝑋)1 = 0 gives 𝑋 + 𝑆(𝑋) + 𝜆1𝑌 = 0, so 𝑆(𝑋) = −𝑋 + 𝜆1𝑌 . The relation 𝑆(𝜆𝑛+1) = [𝑆(𝜆𝑛), 𝑆(𝑋)] yields all 𝑆(𝜆𝑛) by induction. Definition 1.10. The Hopf algebra 𝐻𝐶𝑀 generated as an algebra by 𝑋 ,𝑌 and 𝜆1, with the coproduct determined by (1.24) and the indicated counit and antipode, will be called the Connes–Moscovici Hopf algebra. Exercise 1.4. Show that the commutative subalgebra generated by { 𝜆𝑛 : 𝑛 = 1, 2, 3, . . . } is indeed a Hopf subalgebra which is not cocommutative. ♢ The example 𝐻𝐶𝑀 arose in connection with a local index formula computation, which is already very involved when the base space 𝑀 has dimension 1 (the case treated above). In higher dimensions, one may start [114] with the vertical vector fields 𝑌 𝑖 𝑗 = 𝑦 𝜇 𝑗 𝜕/𝜕𝑦𝜇 𝑖 and a matrix- valued connection 1-form 𝜔𝑖 𝑗 = (𝑦−1)𝑖𝜇 (𝑑𝑦 𝜇 𝑗 + Γ 𝜇 𝛼𝛽 𝑦𝛼 𝑗 𝑑𝑥𝛽), which may be chosen torsion-free, with Christoffel symbols Γ 𝜇 𝛼𝛽 = Γ 𝜇 𝛽𝛼 . With respect to this connection form, there are horizontal vector fields 𝑋𝑘 = 𝑦𝜇𝑘 (𝜕/𝜕𝑥 𝜇−Γ𝜈𝛼𝜇𝑦 𝛼 𝑗 𝜕/𝜕𝑦 𝑗𝜈). One obtains the Lie algebra relations [𝑌 𝑗 𝑖 , 𝑌 𝑙 𝑘 ] = 𝛿 𝑗 𝑘 𝑌 𝑙 𝑖 −𝛿𝑙 𝑖 𝑌 𝑗 𝑘 and [𝑌 𝑗 𝑖 , 𝑋𝑘 ] = 𝛿 𝑗𝑘𝑋𝑖, involving “structure constants”; however, [𝑋𝑘 , 𝑋𝑙] = 𝑅𝑖 𝑗 𝑘𝑙 𝑌 𝑗 𝑖 where 𝑅𝑖 𝑗 𝑘𝑙 are the components of the curvature of the connection 𝜔, and these coefficients are in general not constant, for 𝑛 > 1. At first, Connes and Moscovici decided to use flat connections only [35], which entails [𝑋𝑘 , 𝑋𝑙] = 0; then, on lifting the𝑌 𝑖 𝑗 and the 𝑋𝑘 using (1.20), a higher-dimensional analogue of 𝐻𝐶𝑀 is obtained. 
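The 1-cocycle identity ℎ𝜙𝜓 = 𝜓̃∗ℎ𝜙 + ℎ𝜓 from the proof of Proposition 1.1 can also be tested numerically. The sketch below is ours: two concrete orientation-preserving diffeomorphisms of ℝ are chosen arbitrarily, with their first and second derivatives entered by hand.

```python
import math

# toy orientation-preserving local diffeomorphisms of R, with derivatives:
psi   = lambda x: x + 0.5 * math.sin(x)
dpsi  = lambda x: 1 + 0.5 * math.cos(x)
d2psi = lambda x: -0.5 * math.sin(x)
phi   = lambda x: x + x ** 3 / 3
dphi  = lambda x: 1 + x ** 2
d2phi = lambda x: 2 * x

def h(d1, d2):          # h_ψ(x, y) = y ψ''(x)/ψ'(x), as in (1.19b)
    return lambda x, y: y * d2(x) / d1(x)

h_psi = h(dpsi, d2psi)
h_phi = h(dphi, d2phi)
# derivatives of φ∘ψ by the chain rule:
d_comp  = lambda x: dphi(psi(x)) * dpsi(x)
d2_comp = lambda x: d2phi(psi(x)) * dpsi(x) ** 2 + dphi(psi(x)) * d2psi(x)
h_comp  = h(d_comp, d2_comp)

# the cocycle identity h_{φψ} = ψ̃*h_φ + h_ψ, with ψ̃(x, y) = (ψ(x), ψ'(x)y):
for x in [-1.0, 0.3, 2.0]:
    for y in [0.5, 1.0, 4.0]:
        lhs = h_comp(x, y)
        rhs = h_phi(psi(x), dpsi(x) * y) + h_psi(x, y)
        assert abs(lhs - rhs) < 1e-12
```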
For instance, one gets [114]: Δ𝑋𝑘 = 𝑋𝑘 ⊗ 1 + 1 ⊗ 𝑋𝑘 + 𝜆𝑖𝑘 𝑗 ⊗ 𝑌 𝑗𝑖, where the 𝜆𝑖𝑘 𝑗 are derivations of the form (1.23). A better solution was later found [38]: one can allow commutation relations like [𝑋𝑘 , 𝑋𝑙] = 𝑅𝑖 𝑗 𝑘𝑙 𝑌 𝑗𝑖 if one modifies the original setup to allow for “transverse differential operators with non-constant coefficients”. The algebra A remains the same as before, but the base field ℂ is replaced by the algebra R = 𝐶∞(𝐹) of smooth functions on 𝐹. Now A is an R-bimodule under the commuting left and right actions 𝛼(𝑏) : 𝑓𝑈†𝜓 ↦→ 𝑏 · ( 𝑓𝑈†𝜓) := (𝑏 𝑓 )𝑈†𝜓 , (1.25a) 𝛽(𝑏) : 𝑓𝑈†𝜓 ↦→ ( 𝑓𝑈†𝜓) · 𝑏 := (𝑏 ◦ 𝜓̃) · ( 𝑓𝑈†𝜓) = ( 𝑓 (𝑏 ◦ 𝜓̃))𝑈†𝜓 . (1.25b) If 𝐻 now denotes the algebra of operators on A generated by these operators (1.25) and the previous ones (1.20), then we no longer have a Hopf algebra over ℂ, but (𝐻,R, 𝛼, 𝛽) gives an instance of a more general structure called a Hopf algebroid over R [69]. For instance, the coproduct is an R-bimodule map from 𝐻 into 𝐻 ⊗R 𝐻, where elements of this range space satisfy (ℎ · 𝑏) ⊗R 𝑘 = ℎ ⊗R (𝑏 · 𝑘) by construction, for any 𝑏 ∈ R. Just as Hopf algebras are the noncommutative counterparts of groups, Hopf algebroids are the noncommutative counterparts of groupoids: see [69,115] for instance. For the details of these recent developments, we refer to [38].

2 The Hopf Algebras of Connes and Kreimer

2.1 The Connes–Kreimer algebra of rooted trees

A very important Hopf algebra structure is the one found by Kreimer [63] to underlie the combinatorics of subdivergences in the computation of perturbative expansions in quantum field theory. Such calculations involve several layers of complication, and it is no small feat to remove one such layer by organizing them in terms of a certain coproduct: indeed, the corresponding antipode provides a method to obtain suitable counterterms.
Instead of addressing this matter from the physical side, the approach taken here is algebraic, in order first to understand why the Hopf algebras which emerge are in the nature of things. A given Feynman graph represents a multiple integral (say, over momentum space) where the integrand is assembled from a definite collection of Rules, and before renormalization will often be superficially divergent, as determined by power counting. Even if not itself divergent, it may well contain one or several subgraphs which yield divergent partial integrations: the first order of business is to catalogue and organize the various graphs according to this nesting of subdivergences. Kreimer’s coproduct separates out the divergences of subgraphs from those of the overall graph. In consequence, when expressed in terms of suitable generators of a Hopf algebra, the coproduct turns out to be polynomial in its first tensor factor, but merely linear in the second factor, and is therefore highly noncocommutative. Our starting point is to find a source of Hopf algebras with this kind of noncocommutativity. ▶ We start with an apparently unrelated digression into the homological classification of (associa- tive) algebras. There is a natural homology theory for associative algebras, linked with the name of Hochschild. Given an algebra A over any field 𝔽 of scalars, one forms a complex by setting 𝐶𝑛 (A) := A⊗(𝑛+1) , and defining the boundary operator 𝑏 : 𝐶𝑛 (A) → 𝐶𝑛−1(A) by 𝑏(𝑎0 ⊗ 𝑎1 ⊗ · · · ⊗ 𝑎𝑛) := 𝑛−1∑︁ 𝑗=0 (−1) 𝑗𝑎0 ⊗ · · · ⊗ 𝑎 𝑗𝑎 𝑗+1 ⊗ · · · ⊗ 𝑎𝑛 + (−1)𝑛𝑎𝑛𝑎0 ⊗ 𝑎1 ⊗ · · · ⊗ 𝑎𝑛−1, where the last term “turns the corner”. By convention, 𝑏 = 0 on 𝐶0(A) = A. One checks that 𝑏2 = 0 by cancellation. For instance, 𝑏(𝑎0 ⊗ 𝑎1) := [𝑎0, 𝑎1], while 𝑏(𝑎0 ⊗ 𝑎1 ⊗ 𝑎2) := 𝑎0𝑎1 ⊗ 𝑎2 − 𝑎0 ⊗ 𝑎1𝑎2 + 𝑎2𝑎0 ⊗ 𝑎1. There are two important variants of this definition. One comes from the presence of a “degenerate subcomplex” 𝐷•(A) where, for each 𝑛 = 0, 1, 2, . . . 
, the elements of 𝐷𝑛 (A) are finite sums of terms of the form 𝑎0 ⊗ · · · ⊗ 𝑎 𝑗 ⊗ · · · ⊗ 𝑎𝑛, with 𝑎 𝑗 = 1 for some 𝑗 = 1, 2, . . . , 𝑛; elements of the quotient Ω𝑛A := 𝐶𝑛 (A)/𝐷𝑛 (A) = A ⊗A ⊗𝑛 , where A = A/𝔽 , are sums of expressions 𝑎0 𝑑𝑎1 · · · 𝑑𝑎𝑛 where 𝑑 (𝑎𝑏) = 𝑑𝑎 𝑏 +𝑎 𝑑𝑏. The direct sum Ω•A = ⊕ 𝑛⩾0 Ω 𝑛A is the universal graded differential algebra generated by A in degree zero; using it, 𝑏 can be rewritten as 𝑏(𝑎0 𝑑𝑎1 · · · 𝑑𝑎𝑛) := 𝑎0𝑎1 𝑑𝑎2 · · · 𝑑𝑎𝑛 + 𝑛−1∑︁ 𝑗=1 (−1) 𝑗𝑎0 𝑑𝑎1 · · · 𝑑 (𝑎 𝑗𝑎 𝑗+1) · · · 𝑑𝑎𝑛 + (−1)𝑛𝑎𝑛𝑎0 𝑑𝑎1 · · · 𝑑𝑎𝑛−1. (2.1) 22 The second variant involves replacing the algebra A in degree 0 by any A-bimodule E, and taking 𝐶𝑛 (A,E) := E ⊗ A⊗𝑛; in the formulas, the products 𝑎𝑛𝑎0 and 𝑎0𝑎1 make sense even when 𝑎0 ∈ E. We write its homology as 𝐻•(A,E) and abbreviate 𝐻𝐻𝑛 (A) := 𝐻𝑛 (A,A). Hochschild cohomology, with values in an A-bimodule E, is defined using cochains in 𝐶𝑛 = 𝐶𝑛 (A,E), the vector space of 𝑛-linear maps 𝜓 : A𝑛 → E; this itself becomes an A-bimodule by writing (𝑎′ · 𝜓 · 𝑎′′) (𝑎1, . . . , 𝑎𝑛) := 𝑎′ · 𝜓(𝑎1, . . . , 𝑎𝑛) · 𝑎′′. The coboundary map 𝑏 : 𝐶𝑛 → 𝐶𝑛+1 is given by 𝑏𝜓(𝑎1, . . . , 𝑎𝑛+1) := 𝑎1 · 𝜓(𝑎2, . . . , 𝑎𝑛+1) + 𝑛∑︁ 𝑗=1 (−1) 𝑗𝜓(𝑎1, . . . , 𝑎 𝑗 , 𝑎 𝑗+1, . . . , 𝑎𝑛+1) + (−1)𝑛+1𝜓(𝑎1, . . . , 𝑎𝑛) · 𝑎𝑛+1. (2.2) The standard case is E = A∗ as an A-bimodule, where for 𝜓 ∈ A∗ we put (𝑎′ · 𝜓 · 𝑎′′) (𝑐) := 𝜓(𝑎′′𝑐𝑎′). Here, we identify 𝜓 ∈ 𝐶𝑛 (A,E) with the (𝑛 + 1)-linear map 𝜑 : A𝑛+1 → ℂ given by 𝜑(𝑎0, 𝑎1, . . . , 𝑎𝑛) := 𝜓(𝑎1, . . . , 𝑎𝑛) (𝑎0); then, from the first summand in (2.2) we get 𝑎1 · 𝜓(𝑎2, . . . , 𝑎𝑛+1) (𝑎0) = 𝜓(𝑎2, . . . , 𝑎𝑛+1) (𝑎0𝑎1) = 𝜑(𝑎0𝑎1, . . . , 𝑎𝑛+1), while the last summand gives 𝜓(𝑎1, . . . , 𝑎𝑛) · 𝑎𝑛+1(𝑎0) = 𝜓(𝑎1, . . . , 𝑎𝑛) (𝑎𝑛+1𝑎0) = 𝜑(𝑎𝑛+1𝑎0, . . . , 𝑎𝑛). In this case, (2.2) reduces to 𝑏𝜑(𝑎0, . . . , 𝑎𝑛+1) := 𝑛∑︁ 𝑗=0 (−1) 𝑗𝜑(𝑎0, . . . , 𝑎 𝑗 , 𝑎 𝑗+1, . . . , 𝑎𝑛+1) + (−1)𝑛+1𝜑(𝑎𝑛+1𝑎0, . . . , 𝑎𝑛). (2.3) The 𝑛-th Hochschild cohomology group is denoted 𝐻𝑛 (A,E) in the general case, and we also write 𝐻𝐻𝑛 (A) := 𝐻𝑛 (A,A∗). 
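It is instructive to check 𝑏2 = 0 by machine on a small example. The sketch below is our own, for A = 𝑀2(ℤ): a chain is stored as a dictionary from tuples of matrices to coefficients, and the boundary formula above is implemented verbatim, including the term that “turns the corner”.

```python
def mat_mul(a, b):   # 2×2 integer matrices stored as ((a,b),(c,d))
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add_term(chain, key, c):
    chain[key] = chain.get(key, 0) + c
    if chain[key] == 0:
        del chain[key]

def boundary(chain):  # Hochschild b on chains {(a0, ..., an): coeff}
    out = {}
    for key, c in chain.items():
        n = len(key) - 1
        if n == 0:
            continue                        # b = 0 on C0(A)
        for j in range(n):
            merged = key[:j] + (mat_mul(key[j], key[j + 1]),) + key[j + 2:]
            add_term(out, merged, (-1) ** j * c)
        wrap = (mat_mul(key[n], key[0]),) + key[1:n]   # the corner term
        add_term(out, wrap, (-1) ** n * c)
    return out

# a sample 3-chain in C3(A):
m = [((1, 2), (0, 1)), ((0, 1), (1, 0)), ((1, 1), (1, 1)), ((2, 0), (0, 3))]
chain = {tuple(m): 1, (m[1], m[3], m[0], m[2]): -2}
assert boundary(boundary(chain)) == {}      # b² = 0
```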
Suppose that 𝜇 : A → 𝔽 is a character of A. We denote by A𝜇 the bimodule obtained by letting A act on itself on the left by the usual multiplication, but on the right through 𝜇: 𝑎′ · 𝑐 · 𝑎′′ := 𝑎′𝑐 𝜇(𝑎′′) for all 𝑎′, 𝑎′′, 𝑐 ∈ A. In (2.2), the last term on the right must be replaced by (−1)𝑛+1𝜑(𝑎1, . . . , 𝑎𝑛)𝜇(𝑎𝑛+1). ▶ We return now to the Hopf algebra setting, by considering a dual kind of Hochschild cohomology for coalgebras. Actually, we now consider a bialgebra 𝐵; the dual of the coalgebra (𝐵,Δ, 𝜀) is an algebra 𝐵∗, and the unit map 𝜂 for 𝐵 transposes to a character 𝜂𝑡 of 𝐵∗. Thus we may define the Hochschild cohomology groups 𝐻𝑛 (𝐵∗, 𝐵∗𝜂𝑡 ). An “𝑛-cochain” now means a linear map ℓ : 𝐵 → 𝐵⊗𝑛 which transposes to an 𝑛-linear map 𝜑 : (𝐵∗)𝑛 → 𝐵∗ by writing 𝜑(𝑎1, . . . , 𝑎𝑛) := ℓ𝑡 (𝑎1 ⊗ · · · ⊗ 𝑎𝑛). Its coboundary is defined by ⟨𝑎1 ⊗ · · · ⊗ 𝑎𝑛+1, 𝑏ℓ(𝑥)⟩ := ⟨𝑏𝜑(𝑎1, . . . , 𝑎𝑛+1), 𝑥⟩, for 𝑥 ∈ 𝐵. We compute 𝑏ℓ using (2.2). First, ⟨𝑎1 · 𝜑(𝑎2, . . . , 𝑎𝑛+1), 𝑥⟩ = ⟨𝑎1 ⊗ 𝜑(𝑎2, . . . , 𝑎𝑛+1),Δ𝑥⟩ = ⟨𝑎1 ⊗ 𝑎2 ⊗ · · · ⊗ 𝑎𝑛+1, (𝜄 ⊗ ℓ)Δ𝑥⟩. Next, if Δ𝑗 : 𝐵⊗𝑛 → 𝐵⊗(𝑛+1) is the homomorphism which applies the coproduct on the 𝑗th factor only, then ⟨𝜑(𝑎1, . . . , 𝑎𝑗𝑎𝑗+1, . . . , 𝑎𝑛+1), 𝑥⟩ = ⟨𝑎1 ⊗ · · · ⊗ 𝑎𝑛+1,Δ𝑗 (ℓ(𝑥))⟩. Finally, notice that ⟨𝜑(𝑎1, . . . , 𝑎𝑛)𝜂𝑡 (𝑎𝑛+1), 𝑥⟩ = ⟨𝑎1 ⊗ · · · ⊗ 𝑎𝑛+1, ℓ(𝑥) ⊗ 1⟩. Thus the Hochschild coboundary operator simplifies to 𝑏ℓ(𝑥) := (𝜄 ⊗ ℓ)Δ(𝑥) + ∑𝑛𝑗=1 (−1)𝑗 Δ𝑗 (ℓ(𝑥)) + (−1)𝑛+1ℓ(𝑥) ⊗ 1. (2.4) In particular, a linear form 𝜆 : 𝐵 → 𝔽 is a 0-cochain, and 𝑏𝜆 = (𝜄 ⊗ 𝜆)Δ − 𝜆 ⊗ 1 is its coboundary; and a 1-cocycle is a linear map ℓ : 𝐵 → 𝐵 satisfying Δℓ = ℓ ⊗ 1 + (𝜄 ⊗ ℓ)Δ. (2.5) The simplest example of a nontrivial 1-cocycle obeying (2.5) comes from integration of polynomials in the algebra 𝐵 = 𝔽 [𝑋]; we make 𝔽 [𝑋] a cocommutative coalgebra by declaring the indeterminate 𝑋 to be primitive, so that Δ(𝑋) = 𝑋 ⊗ 1 + 1 ⊗ 𝑋 and 𝜀(𝑋) = 0. We immediately get the binomial expansion Δ(𝑋 𝑘 ) = (Δ𝑋)𝑘 = ∑𝑘𝑗=0 (𝑘 𝑗) 𝑋 𝑘−𝑗 ⊗ 𝑋 𝑗 .
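The binomial coproduct and the integration map ℓ(𝑋 𝑘 ) := 𝑋 𝑘+1/(𝑘 + 1) can be checked mechanically. In the following Python sketch of ours, a tensor 𝑋 𝑖 ⊗ 𝑋 𝑗 is stored as a dictionary entry {(𝑖, 𝑗): coefficient} and exact rationals are used; the cocycle equation (2.5) is then verified on the monomials 𝑋 𝑘 .

```python
from fractions import Fraction
from math import comb

def Delta(k):        # Δ(X^k) = Σ_j C(k,j) X^{k-j} ⊗ X^j, stored as {(i, j): coeff}
    return {(k - j, j): Fraction(comb(k, j)) for j in range(k + 1)}

def iota_ell(D):     # (ι ⊗ ℓ): integrate the second tensor factor
    return {(i, j + 1): c / (j + 1) for (i, j), c in D.items()}

def cocycle_rhs(k):  # ℓ(X^k) ⊗ 1 + (ι ⊗ ℓ)Δ(X^k), the right side of (2.5)
    out = iota_ell(Delta(k))
    out[(k + 1, 0)] = out.get((k + 1, 0), Fraction(0)) + Fraction(1, k + 1)
    return out

for k in range(8):   # Δ(ℓ(X^k)) = Δ(X^{k+1})/(k+1) matches the cocycle formula
    lhs = {ij: c / (k + 1) for ij, c in Delta(k + 1).items()}
    assert lhs == cocycle_rhs(k)
```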
If 𝜆 is any linear form on 𝔽[𝑋], then

$$ b\lambda(X^k) = (\iota\otimes\lambda)\Delta(X^k) - \lambda(X^k)\otimes 1 = \sum_{j=1}^{k}\binom{k}{j}\lambda(X^{k-j})\,X^j, $$

so 𝑏𝜆 is a linear transformation of polynomials which does not raise the degree. Therefore, the integration map ℓ(𝑋^𝑘) := 𝑋^{𝑘+1}/(𝑘 + 1) is not a 1-coboundary, but it is a 1-cocycle:

$$ \Delta(\ell(X^k)) = \frac{1}{k+1}\sum_{j=0}^{k+1}\binom{k+1}{j}X^{k+1-j}\otimes X^j = \frac{X^{k+1}}{k+1}\otimes 1 + \sum_{j=1}^{k+1}\frac{1}{j}\binom{k}{j-1}X^{k+1-j}\otimes X^j $$
$$ = \ell(X^k)\otimes 1 + \sum_{r=0}^{k}\frac{1}{r+1}\binom{k}{r}X^{k-r}\otimes X^{r+1} = \ell(X^k)\otimes 1 + (\iota\otimes\ell)(\Delta(X^k)). $$

This simple example already shows what the “Hochschild equation” (2.5) is good for: it allows a recursive definition of the coproduct Δ, with the assistance of a degree-raising operation ℓ. Indeed, 𝔽[𝑋] is a simple example of a connected, graded bialgebra.

Definition 2.1. A bialgebra 𝐻 = ⊕∞𝑛=0 𝐻(𝑛) is a graded bialgebra if it is graded both as an algebra and as a coalgebra:

$$ H^{(m)}H^{(n)} \subseteq H^{(m+n)} \quad\text{and}\quad \Delta(H^{(n)}) \subseteq \bigoplus_{p+q=n} H^{(p)}\otimes H^{(q)}. \tag{2.6} $$

It is called connected if the degree-zero piece consists of scalars only: 𝐻(0) = 𝔽1 = im 𝜂.

In a connected graded bialgebra, we can write the coproduct with a modified Sweedler notation: if 𝑎 ∈ 𝐻(𝑛), then

$$ \Delta a = a\otimes 1 + 1\otimes a + \sum a'_{:1}\otimes a'_{:2}, \tag{2.7} $$

where the terms 𝑎′:1 and 𝑎′:2 all have degrees between 1 and 𝑛 − 1. Indeed, for the counit equations (1.9a) to be satisfied, Δ𝑎 must contain the terms 𝑎 ⊗ 1 in 𝐻(𝑛) ⊗ 𝐻(0) and 1 ⊗ 𝑎 in 𝐻(0) ⊗ 𝐻(𝑛); the remaining terms have intermediate bidegrees. On applying 𝜀 ⊗ 𝜄, we get 𝑎 = (𝜀 ⊗ 𝜄)(Δ𝑎) = 𝜀(𝑎)1 + 𝑎 + ∑ 𝜀(𝑎′:1) 𝑎′:2, so that 𝜀(𝑎) = 0 when 𝑛 ⩾ 1: in a connected graded bialgebra, the “augmentation ideal” ker 𝜀 is ⊕∞𝑛=1 𝐻(𝑛), so that 𝐻 = 𝔽1 ⊕ ker 𝜀.

In fact, 𝐻 is a Hopf algebra, since the grading allows us to define the antipode recursively [73, §8]. Indeed, the equation 𝑚(𝑆 ⊗ 𝜄)Δ = 𝜂𝜀 may be solved thus: if 𝑎 ∈ 𝐻(𝑛), we can obtain 0 = 𝜀(𝑎) 1 = 𝑆(𝑎) + 𝑎 + ∑ 𝑆(𝑎′:1) 𝑎′:2, where each term 𝑎′:1 has degree less than 𝑛, just by setting 𝑆(𝑎) := −𝑎 − ∑ 𝑆(𝑎′:1) 𝑎′:2.
(2.8) Likewise, 𝑚(𝜄 ⊗ 𝑇)Δ = 𝜂𝜀 is solved by setting 𝑇 (1) := 1 and recursively defining 𝑇 (𝑎) := −𝑎 −∑ 𝑇 (𝑎′:2) 𝑎 ′ :1. It follows that 𝑇 = 𝑆 ∗ 𝜄 ∗ 𝑇 = 𝑆, so we have indeed constructed a convolution inverse for 𝜄. In the same way, if there is a 1-cocycle ℓ which raises the degree, then (2.5) gives a recursive recipe for the coproduct: start with Δ(1) := 1 ⊗ 1 in degree zero (since 𝐻 is connected, that will suffice), and use Δ(ℓ(𝑎)) := ℓ(𝑎) ⊗ 1 + (𝜄 ⊗ ℓ)Δ(𝑎) as often as necessary. The point is that, at each level, coassociativity is maintained: (𝜄 ⊗ Δ)Δ(ℓ(𝑎)) = (𝜄 ⊗ Δ) (ℓ(𝑎) ⊗ 1 + (𝜄 ⊗ ℓ) (Δ𝑎)) = ℓ(𝑎) ⊗ 1 ⊗ 1 + (𝜄 ⊗ Δℓ) (Δ𝑎) = ℓ(𝑎) ⊗ 1 ⊗ 1 + (𝜄 ⊗ ℓ) (Δ𝑎) ⊗ 1 + (𝜄 ⊗ 𝜄 ⊗ ℓ) (𝜄 ⊗ Δ) (Δ𝑎), whereas (Δ ⊗ 𝜄)Δ(ℓ(𝑎)) = (Δ ⊗ 𝜄) (ℓ(𝑎) ⊗ 1 + (𝜄 ⊗ ℓ) (Δ𝑎)) = ℓ(𝑎) ⊗ 1 ⊗ 1 + (𝜄 ⊗ ℓ) (Δ𝑎) ⊗ 1 + (𝜄 ⊗ 𝜄 ⊗ ℓ) (Δ ⊗ 𝜄) (Δ𝑎), where we have used the trivial relation (Δ⊗ 𝜄) (𝜄⊗ ℓ) = (𝜄⊗ 𝜄⊗ ℓ) (Δ⊗ 𝜄). The only remaining issues are (i) whether such a 1-cocycle ℓ exists; and (ii) whether any 𝑐 ∈ 𝐻 (𝑛+1) is a sum of products of elements of the form ℓ(𝑎) with 𝑎 of degree at most 𝑛. ▶ Both questions are answered by producing a universal example of a pair (𝐻, ℓ) consisting of a connected graded Hopf algebra and a 1-cocycle ℓ. It was pointed out by Connes and Kreimer [30] that their Hopf algebra of rooted trees gives precisely this universal example. (Kreimer had first introduced a Hopf algebra of “parenthesized words” [63], where the nesting of subdivergences was indicated by parentheses, but rooted trees are nicer, and both Hopf algebras are isomorphic by the same universality.) Definition 2.2. A rooted tree is a tree (a finite, connected graph without loops) with oriented edges, in which all the vertices but one have exactly one incoming edge, and the remaining vertex, the root, has only outgoing edges. Here are the rooted trees with at most four vertices (up to isomorphism). 
To draw them, we place the root at the top with a ⋄ symbol, and denote the other vertices with • symbols:

[diagram: the eight rooted trees with at most four vertices, labelled 𝑡1, 𝑡2, 𝑡31, 𝑡32, 𝑡41, 𝑡42, 𝑡43, 𝑡44 — 𝑡1 is the single root; 𝑡2 the root with one child; 𝑡31 the three-vertex stick and 𝑡32 the root with two children; 𝑡41 the four-vertex stick; 𝑡42 a root with two children, one of which has a child; 𝑡43 the root with three children; 𝑡44 a root whose single child has two children]

The algebra of rooted trees 𝐻𝑅 is the commutative algebra generated by symbols 𝑇, one for each isomorphism class of rooted trees, plus a unit 1 corresponding to the empty tree. We shall write the product of trees as the juxtaposition of their symbols. There is an obvious grading making 𝐻𝑅 a graded algebra, by assigning to each tree 𝑇 the number of its vertices #𝑇. The counit 𝜀 : 𝐻𝑅 → 𝔽 is the linear map defined by 𝜀(1) := 1 and 𝜀(𝑇1𝑇2 . . . 𝑇𝑛) = 0 if 𝑇1, . . . , 𝑇𝑛 are trees; this ensures that 𝐻𝑅 = 𝔽1 ⊕ ker 𝜀.

To get a coproduct satisfying (2.7), we must give a rule which shows how a tree may be cut into subtrees with complementary sets of vertices. A simple cut 𝑐 of a tree 𝑇 is the removal of some of its edges, in such a way that along the path from the root to any vertex, at most one edge is removed.

[diagram: the four possible simple cuts of 𝑡44 — severing the edge below the root, either one of the two bottom edges, or both bottom edges at once]

Among the subtrees of 𝑇 produced by a simple cut, exactly one, the “trunk” 𝑅𝑐(𝑇), contains the root of 𝑇. The remaining “pruned” branches also form one or more rooted trees, whose product is denoted by 𝑃𝑐(𝑇). The formula for the coproduct can now be given, on the algebra generators, as

$$ \Delta T := T\otimes 1 + 1\otimes T + \sum_{c} P_c(T)\otimes R_c(T), \tag{2.9} $$

where the sum extends over all simple cuts of the tree 𝑇; as well as Δ1 := 1 ⊗ 1, of course. Here are the coproducts of the trees listed above:

$$ \begin{aligned} \Delta t_1 &= t_1\otimes 1 + 1\otimes t_1, \\ \Delta t_2 &= t_2\otimes 1 + 1\otimes t_2 + t_1\otimes t_1, \\ \Delta t_{31} &= t_{31}\otimes 1 + 1\otimes t_{31} + t_2\otimes t_1 + t_1\otimes t_2, \\ \Delta t_{32} &= t_{32}\otimes 1 + 1\otimes t_{32} + 2\,t_1\otimes t_2 + t_1^2\otimes t_1, \\ \Delta t_{41} &= t_{41}\otimes 1 + 1\otimes t_{41} + t_{31}\otimes t_1 + t_2\otimes t_2 + t_1\otimes t_{31}, \\ \Delta t_{42} &= t_{42}\otimes 1 + 1\otimes t_{42} + t_1\otimes t_{32} + t_2\otimes t_2 + t_1\otimes t_{31} + t_2t_1\otimes t_1 + t_1^2\otimes t_2, \end{aligned} $$
$$ \begin{aligned} \Delta t_{43} &= t_{43}\otimes 1 + 1\otimes t_{43} + 3\,t_1\otimes t_{32} + 3\,t_1^2\otimes t_2 + t_1^3\otimes t_1, \\ \Delta t_{44} &= t_{44}\otimes 1 + 1\otimes t_{44} + t_{32}\otimes t_1 + 2\,t_1\otimes t_{31} + t_1^2\otimes t_2. \end{aligned} \tag{2.10} $$

In this way, 𝐻𝑅 becomes a connected graded commutative Hopf algebra; clearly, it is not cocommutative. In order to prove that this Δ is coassociative, we need only produce the appropriate 1-cocycle 𝐿 which raises the degree by 1. The linear operator 𝐿 – also known as 𝐵+ [30] – is defined, on each product of trees, by sprouting a new common root.

Definition 2.3. Let 𝐿 : 𝐻𝑅 → 𝐻𝑅 be the linear map given by 𝐿(1) := 𝑡1 and

$$ L(T_1 \dots T_k) := T, \tag{2.11} $$

where 𝑇 is the rooted tree obtained by conjuring up a new vertex as its root and extending edges from this vertex to each root of 𝑇1, . . . , 𝑇𝑘. Notice, in passing, that any tree 𝑇 with 𝑛 + 1 vertices equals 𝐿(𝑇1 · · · 𝑇𝑘), where 𝑇1, . . . , 𝑇𝑘 are the rooted trees, with 𝑛 vertices in all, formed by removing every edge outgoing from the root of 𝑇. For instance, 𝐿(𝑡32) = 𝑡44, while 𝐿(𝑡2 𝑡2) is the five-vertex tree whose root has two children, each bearing a child of its own.

Checking the Hochschild equation (2.5) is a matter of bookkeeping: see [30, p. 229] or [52, p. 603], for instance. Here, an illustration will suffice:

$$ \begin{aligned} \Delta(L(t_{32})) = \Delta t_{44} &= t_{44}\otimes 1 + 1\otimes t_{44} + t_{32}\otimes t_1 + 2\,t_1\otimes t_{31} + t_1^2\otimes t_2 \\ &= L(t_{32})\otimes 1 + (\iota\otimes L)\bigl(1\otimes t_{32} + t_{32}\otimes 1 + 2\,t_1\otimes t_2 + t_1^2\otimes t_1\bigr) \\ &= L(t_{32})\otimes 1 + (\iota\otimes L)\Delta t_{32}. \end{aligned} $$

Finally, suppose that a pair (𝐻, ℓ) is given; we want to define a Hopf algebra morphism 𝜌 : 𝐻𝑅 → 𝐻 such that

$$ \rho(L(a)) = \ell(\rho(a)), \tag{2.12} $$

where 𝑎 is a product of trees. Since 𝐿(𝑎) may be any tree of degree #𝑎 + 1, we may regard this as a recursive definition (on generators) of an algebra homomorphism, starting from 𝜌(1) := 1𝐻.
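The recursive construction of Δ from the cocycle 𝐿 lends itself to direct computation. In the sketch below (an illustration, not the authors' code), a tree is represented as the sorted tuple of its root's subtrees, so that 𝐿 is essentially a cast from forests to trees, and Δ is computed by the recursion Δ(𝐿(𝑓)) = 𝐿(𝑓) ⊗ 1 + (𝜄 ⊗ 𝐿)Δ(𝑓):

```python
# A tree is the sorted tuple of its root's subtrees; a forest is a sorted tuple of trees.
t1, t2 = (), ((),)                 # single vertex; two-vertex stick
t32 = (t1, t1)                     # root with two children
t31, t44 = (t2,), (t32,)           # three-vertex stick; L(t32)

def L(forest):
    """The cocycle B+: graft a new common root over a forest, giving a tree."""
    return tuple(sorted(forest))

def cop_forest(forest):
    """Delta on a forest, extended multiplicatively; a tensor is a dict
    (left forest, right forest) -> integer coefficient."""
    out = {((), ()): 1}
    for t in forest:
        new = {}
        for (l1, r1), c1 in out.items():
            for (l2, r2), c2 in cop_tree(t).items():
                key = (tuple(sorted(l1 + l2)), tuple(sorted(r1 + r2)))
                new[key] = new.get(key, 0) + c1 * c2
        out = new
    return out

def cop_tree(t):
    """Delta(L(f)) = L(f) (x) 1 + (iota (x) L) Delta(f),
    where f is the forest of subtrees below the root of t."""
    out = {((t,), ()): 1}
    for (l, r), c in cop_forest(t).items():    # t, as a tuple, is that forest
        key = (l, (L(r),))
        out[key] = out.get(key, 0) + c
    return out

# Delta t2 = t2 (x) 1 + 1 (x) t2 + t1 (x) t1, and the last line of (2.10):
assert cop_tree(t2) == {((t2,), ()): 1, ((), (t2,)): 1, ((t1,), (t1,)): 1}
assert cop_tree(t44) == {((t44,), ()): 1, ((), (t44,)): 1, ((t32,), (t1,)): 1,
                         ((t1,), (t31,)): 2, ((t1, t1), (t2,)): 1}
```

The second assertion reproduces the coproduct of 𝑡44 listed in (2.10), with the multiplicity 2 arising from the two symmetric one-edge cuts.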
The only thing to check is that it also yields a coalgebra homomorphism, which again reduces to an induction on the degree of 𝑎: Δ(𝜌(𝐿 (𝑎))) = Δ(ℓ(𝜌(𝑎))) = ℓ(𝜌(𝑎)) ⊗ 1 + (𝜄 ⊗ ℓ)Δ(𝜌(𝑎)) = ℓ(𝜌(𝑎)) ⊗ 1 + (𝜄 ⊗ ℓ) (𝜌 ⊗ 𝜌) (Δ𝑎) = 𝜌(𝐿 (𝑎)) ⊗ 1 + (𝜌 ⊗ 𝜌) (𝜄 ⊗ 𝐿) (Δ𝑎) = (𝜌 ⊗ 𝜌) ( 𝐿 (𝑎) ⊗ 1 + (𝜄 ⊗ 𝐿) (Δ𝑎) ) = (𝜌 ⊗ 𝜌)Δ(𝐿 (𝑎)), where in the third line, by using ℓ(𝜌(𝑎′:2)) = 𝜌(𝐿 (𝑎′:2)), we have implicitly relied on the property (2.7) that the nontrivial components of the coproduct Δ𝑎 have lower degree than 𝑎. ▶ Since the Hopf algebra 𝐻𝑅 is commutative, we may look for a cocommutative Hopf algebra in duality with it. Now, there is a structure theorem for connected graded cocommutative Hopf algebras, arising from contributions of Hopf, Samelson, Leray, Borel, Cartier, Milnor, Moore and Quillen,2 commonly known as the Milnor–Moore theorem, which states that such a Hopf algebra 𝐻 is necessarily isomorphic to U(g), with g being the Lie algebra of primitive elements of 𝐻. This dual Hopf algebra is constructed as follows. Each rooted tree 𝑇 gives not only an algebra generator for 𝐻𝑅, but also a derivation 𝑍𝑇 : 𝐻𝑅 → 𝔽 defined by ⟨𝑍𝑇 , 𝑇1 . . . 𝑇𝑘⟩ := 0 unless 𝑘 = 1 and 𝑇1 = 𝑇 ; ⟨𝑍𝑇 , 𝑇⟩ := 1. Also, ⟨𝑍𝑇 , 1⟩ = 0 since 𝑍𝑇 ∈ Der𝜀 (𝐻) (Definition 1.6). Notice that the ideal generated by products of two or more trees is (ker 𝜀)2, and any derivation 𝛿 vanishes there, since 𝛿(𝑎𝑏) = 2The historical record is murky; this list of contributors is due to P. Cartier. 27 𝛿(𝑎)𝜀(𝑏) + 𝜀(𝑎)𝛿(𝑏) = 0 whenever 𝑎, 𝑏 ∈ ker 𝜀. Therefore, derivations are determined by their values on the subspace 𝐻 (1) 𝑅 spanned by single trees – which equals 𝐿 (𝐻𝑅), by the way – and reduce to linear forms on this subspace; thus Der𝜀 (𝐻) can be identified with the (algebraic) dual space 𝐻 (1)∗ 𝑅 . We denote by h the linear subspace spanned by all the 𝑍𝑇 . Let us compute the Lie bracket [𝑍𝑅, 𝑍𝑆] := (𝑍𝑅 ⊗ 𝑍𝑆 − 𝑍𝑆 ⊗ 𝑍𝑅)Δ of two such derivations. 
Using (2.9) and ⟨𝑍𝑅, 1⟩ = ⟨𝑍𝑆, 1⟩ = 0, we get ⟨𝑍𝑅 ⊗ 𝑍𝑆,Δ𝑇⟩ = ∑︁ 𝑐 ⟨𝑍𝑅, 𝑃𝑐 (𝑇)⟩ ⟨𝑍𝑆, 𝑅𝑐 (𝑇)⟩, where ⟨𝑍𝑅, 𝑃𝑐 (𝑇)⟩ = 0 unless 𝑃𝑐 (𝑇) = 𝑅 and ⟨𝑍𝑆, 𝑅𝑐 (𝑇)⟩ = 0 unless 𝑅𝑐 (𝑇) = 𝑆; in particular, the sum ranges only over simple cuts which remove just one edge of 𝑇 . Let 𝑛(𝑅, 𝑆;𝑇) be the number of one-edge cuts 𝑐 of 𝑇 such that 𝑃𝑐 (𝑇) = 𝑅 and 𝑅𝑐 (𝑇) = 𝑆; then ⟨[𝑍𝑅, 𝑍𝑆], 𝑇⟩ = ⟨𝑍𝑅 ⊗ 𝑍𝑆 − 𝑍𝑆 ⊗ 𝑍𝑅,Δ𝑇⟩ = 𝑛(𝑅, 𝑆;𝑇) − 𝑛(𝑆, 𝑅;𝑇), and this expression vanishes altogether except for the finite number of trees𝑇 which can be produced either by grafting 𝑅 on 𝑆 or by grafting 𝑆 on 𝑅. Evaluation of the derivation [𝑍𝑅, 𝑍𝑆] on a product 𝑇1 . . . 𝑇𝑘 of two or more trees gives zero, since each 𝑇𝑗 ∈ ker 𝜀. Therefore, [𝑍𝑅, 𝑍𝑆] = ∑︁ 𝑇 ( 𝑛(𝑅, 𝑆;𝑇) − 𝑛(𝑆, 𝑅;𝑇) ) 𝑍𝑇 , which is a finite sum. In particular, [𝑍𝑅, 𝑍𝑆] ∈ h, and so h is a Lie subalgebra of Der𝜀 (𝐻). The linear duality of 𝐻 (1) 𝑅 with h then extends to a duality between the graded Hopf algebras 𝐻𝑅 and U(h). It is possible to give a more concrete description of the Hopf algebra U(h) in terms of another Hopf algebra of rooted trees 𝐻𝐺𝐿 , which is cocommutative rather than commutative. This structure was introduced by Grossman and Larson [53] and is described in [52, §14.2]; here we mention only that the multiplicative identity is the tree 𝑡1 and that the primitive elements are spanned by those trees which have only one edge outgoing from the root. Panaite [79] has shown that h is isomorphic to the Lie algebra of these primitive trees – by matching each 𝑍𝑇 to the tree 𝐿 (𝑇) – so that U(h) ≃ 𝐻𝐺𝐿 . In [30], another binary operation among the 𝑍𝑇 was introduced by setting 𝑍𝑅 ★ 𝑍𝑆 := ∑︁ 𝑇 𝑛(𝑅, 𝑆;𝑇) 𝑍𝑇 . This is not the convolution (𝑍𝑅 ⊗ 𝑍𝑆)Δ, nor is it even associative, although it is obviously true that 𝑍𝑅★𝑍𝑆 − 𝑍𝑆★𝑍𝑅 = [𝑍𝑅, 𝑍𝑆]. This nonassociative bilinear operation satisfies the defining property of a pre-Lie algebra [15]: (𝑍𝑅 ★ 𝑍𝑆) ★ 𝑍𝑇 − 𝑍𝑅 ★ (𝑍𝑆 ★ 𝑍𝑇 ) = (𝑍𝑅 ★ 𝑍𝑇 ) ★ 𝑍𝑆 − 𝑍𝑅 ★ (𝑍𝑇 ★ 𝑍𝑆). 
Indeed, both sides of this equation express the formation of new trees by grafting both 𝑆 and 𝑇 onto the tree 𝑅. The combinatorics of this operation are discussed in [16], and several computations with it are developed in [18] and [60].

▶ The characters of 𝐻𝑅 form a group G(𝐻𝑅) (under convolution): see Definition 1.6. This group is infinite-dimensional, and can be thought of as the set of grouplike elements in a suitable completion of the Hopf algebra 𝑈 = U(h). To see that, recall that 𝑈 is a graded connected Hopf algebra; denote by 𝑒 its counit. Then the sets (ker 𝑒)𝑚 = ∑𝑘⩾𝑚 h𝑘, for 𝑚 = 1, 2, . . . , form a basis of neighbourhoods of 0 for a vector space topology on 𝑈, and the grading properties (2.6) entail that all the Hopf operations are continuous for this topology. (The basic neighbourhoods of 0 in 𝑈 ⊗ 𝑈 are the powers of the ideal 1 ⊗ ker 𝑒 + ker 𝑒 ⊗ 1.) We can form the completion $\overline{U}$ of this topological vector space, which is again a Hopf algebra since all the Hopf operations extend by continuity; an element of $\overline{U}$ is a series ∑𝑘⩾0 𝑧𝑘 with 𝑧𝑘 ∈ h𝑘 for each 𝑘 ∈ ℕ, since the partial sums form a Cauchy sequence in $\overline{U}$. The closure of h within $\overline{U}$ is Der𝜀(𝐻).

For example, consider the exponential given by 𝜑𝑇 := exp 𝑍𝑇 = ∑𝑛⩾0 (1/𝑛!) 𝑍𝑛𝑇; in any evaluation

$$ \varphi_T(T_1\dots T_k) = \sum_{n\geq 0}\frac{1}{n!}\,\bigl\langle Z_T^{\otimes n},\, \Delta^{n-1}(T_1\dots T_k)\bigr\rangle, $$

the series has only finitely many nonzero terms. More generally, 𝜑 := exp 𝛿 ∈ $\overline{U}$ makes sense for each 𝛿 ∈ Der𝜀(𝐻); and 𝜑 ∈ G(𝐻𝑅) since Δ𝜑 = exp(Δ𝛿) = exp(𝜀 ⊗ 𝛿 + 𝛿 ⊗ 𝜀) = 𝜑 ⊗ 𝜑 by continuity of Δ. In fact, the exponential map is a bijection between Der𝜀(𝐻) and G(𝐻𝑅), whose inverse is provided by the logarithmic series log(1 − 𝑥) := −∑𝑘⩾1 𝑥𝑘/𝑘; for if 𝜇 is a character, the equation 𝜇 = exp(log 𝜇) holds in $\overline{U}$, and

$$ \Delta(\log\mu) = \Delta\bigl(\log(\varepsilon - (\varepsilon-\mu))\bigr) = \log\bigl(\varepsilon\otimes\varepsilon - \Delta(\varepsilon-\mu)\bigr) = \log(\mu\otimes\mu) = \log(\varepsilon\otimes\mu) + \log(\mu\otimes\varepsilon) = \varepsilon\otimes\log\mu + \log\mu\otimes\varepsilon, $$

so that log 𝜇 ∈ Der𝜀(𝐻). See [55, Chap. X] or [56, Chap. XVI] for a careful discussion of the exponential map.
In view of this bijection, we can regard the commutative Hopf algebra 𝐻𝑅 as an algebra of affine coordinates on the group G(𝐻𝑅), in the spirit of Tannaka–Kreı̆n duality.

▶ In any Hopf algebra, whether cocommutative or not, the determination of the primitive elements plays an important part. If in any tree 𝑇, the longest path from the root to a leaf contains 𝑘 edges, then the coproduct Δ𝑇 is a sum of at least 𝑘 + 1 terms. In the applications to renormalization, 𝑇 represents a possibly divergent integration with 𝑘 nested subdivergences, while the primitive tree 𝑡1 corresponds to an integration without subdivergences. A primitive algebraic combination of trees represents a collection of integrations where some of these divergences may cancel. For that reason alone, it would be desirable to describe all the primitive elements of 𝐻𝑅 and then, as far as possible, to rebuild 𝐻𝑅 from its primitives. This is a work in progress [12, 18, 46], which deserves a few comments here.

To begin with, since 𝑡1 is primitive and Δ𝑡2 = 𝑡2 ⊗ 1 + 1 ⊗ 𝑡2 + 𝑡1 ⊗ 𝑡1, the combination $p_2 := t_2 - \tfrac12 t_1^2$ is also primitive. One can check that $p_3 := t_{31} - t_1 t_2 + \tfrac13 t_1^3$ is primitive, too. For each 𝑘 = 1, 2, . . . , let 𝑡𝑘 denote the “stick” tree with 𝑘 − 1 edges and 𝑘 vertices in a vertical progression. (In particular, 𝑡3 and 𝑡4 are the trees previously referred to as 𝑡31 and 𝑡41, respectively.) A simple cut severs 𝑡𝑘 into two shorter sticks, and so

$$ \Delta t_k = \sum_{0\leq r\leq k} t_r \otimes t_{k-r}, \tag{2.13} $$

with 𝑡0 := 1 by convention. Thus the sticks generate a cocommutative graded Hopf subalgebra 𝐻𝑙 of 𝐻𝑅.

To find the primitives in 𝐻𝑙, we follow the approach of [18]. Consider the formal power series $g(x) := \sum_{k\geq 0} t_k x^k$ whose coefficients are sticks. Then the equation (2.13) can be read as saying that 𝑔(𝑥) is grouplike in 𝐻𝑙⟦𝑥⟧, that is, Δ𝑔(𝑥) = 𝑔(𝑥) ⊗ 𝑔(𝑥).
If we can find a power series $p(x) = \sum_{r\geq 1} p_r x^r$, where each 𝑝𝑟 is homogeneous of degree 𝑟 in the grading of 𝐻𝑙, such that exp(𝑝(𝑥)) = 𝑔(𝑥), the corresponding equation will be Δ𝑝(𝑥) = 𝑝(𝑥) ⊗ 1 + 1 ⊗ 𝑝(𝑥); on comparing coefficients of each 𝑥𝑟, we see that each 𝑝𝑟 is primitive. The equation exp(𝑝(𝑥)) = 𝑔(𝑥) is solved as

$$ \sum_{r\geq 1} p_r x^r = \log\Bigl(1 + \sum_{k\geq 1} t_k x^k\Bigr), $$

by developing the Taylor series of log(1 + 𝑥). Since a monomial $t_1^{m_1} t_2^{m_2}\cdots t_r^{m_r}$ has degree 𝑚1 + 2𝑚2 + · · · + 𝑟𝑚𝑟, the general formula [46, Prop. 9.3] is quickly found to be

$$ p_r = \sum_{m_1+2m_2+\cdots+rm_r=r} (-1)^{m_1+\cdots+m_r+1}\, \frac{(m_1+\cdots+m_r-1)!}{m_1!\cdots m_r!}\; t_1^{m_1}\cdots t_r^{m_r}, $$

where the sum ranges over the partitions of the positive integer 𝑟.

▶ Nonstick primitives are more difficult to come by, but an algorithm which provides many of them is found in [18], based on formal differential calculus. Indeed, this “differential” approach can be extended, in principle, to deal efficiently with the more elaborate Hopf algebras of Feynman diagrams discussed in the next subsection. For each 𝑎 ∈ 𝐻𝑅, the expression

$$ \Pi_a := \sum S(a_{:1})\, da_{:2}, \tag{2.14} $$

where 𝑑 denotes an ordinary exterior derivative, may be regarded as a 1-form on 𝐺; it is a straightforward generalization of the familiar (matrix-valued) 1-form 𝑔−1 𝑑𝑔 on a group manifold, whose matrix elements are ∑𝑗 (𝑔−1)𝑖𝑗 𝑑𝑔𝑗𝑘. We can treat such expressions algebraically, as a “first-order differential calculus” on a Hopf algebra, in the sense of Woronowicz [112]. The commutativity of 𝐻𝑅 shows that these 1-forms have the following derivation property:

$$ \Pi_{ab} = \sum S(a_{:1}b_{:1})\,d(a_{:2}b_{:2}) = \sum S(b_{:1})S(a_{:1})a_{:2}\,db_{:2} + S(b_{:1})b_{:2}S(a_{:1})\,da_{:2} = \varepsilon(a)\,\Pi_b + \Pi_a\,\varepsilon(b). $$

In particular, Π𝑎 = 0 for 𝑎 ∈ (ker 𝜀)², so we need only consider Π𝑎 for 𝑎 ∈ 𝐻(1)𝑅. Each Π𝑎 can be thought of as a “left-invariant” 1-form, as follows.

Exercise 2.1. Let 𝐺 be a compact Lie group and let R(𝐺) be its Hopf algebra of representative functions.
If 𝐿𝑡 denotes left translation by 𝑡 ∈ 𝐺, then 𝐿∗𝑡 𝑓 (𝑥) = 𝑓 (𝑡−1𝑥) = Δ 𝑓 (𝑡−1, 𝑥) =∑ 𝑓:1(𝑡−1) 𝑓:2(𝑥), so that 𝐿∗𝑡 𝑓 = ∑ 𝑓:1(𝑡−1) 𝑓:2 for 𝑓 ∈ R(𝐺). Let Π 𝑓 be the smooth 1-form on 𝐺 defined by (2.14); prove that 𝐿∗𝑡Π 𝑓 = Π 𝑓 for all 𝑡 ∈ 𝐺. ♢ Each left-invariant 1-form (2.14) satisfies a “Maurer–Cartan equation”: 𝑑Π𝑎 = −∑ Π𝑎:1 ∧ Π𝑎:2 . Indeed, since 0 = 𝑑 (𝜀(𝑎) 1) = ∑ 𝑑 (𝑆(𝑎:1) 𝑎:2) = ∑ 𝑑 (𝑆(𝑎:1)) 𝑎:2 + 𝑆(𝑎:1) 𝑑𝑎:2, we find that 𝑑 (𝑆(𝑎)) = ∑ 𝑑 (𝑆(𝑎:1) 𝜀(𝑎:2)) = ∑ 𝑑 (𝑆(𝑎:1)) 𝜀(𝑎:2) = ∑ 𝑑 (𝑆(𝑎:1)) 𝑎:2 𝑆(𝑎:3) = −∑ 𝑆(𝑎:1) 𝑑𝑎:2 𝑆(𝑎:3), 30 in analogy with 𝑑 (𝑔−1) = −𝑔−1 𝑑𝑔 𝑔−1. Therefore, 𝑑Π𝑎 = ∑ 𝑑 (𝑆(𝑎:1)) ∧ 𝑑𝑎:2 = −∑ 𝑆(𝑎:1) 𝑑𝑎:2 ∧ 𝑆(𝑎:3) 𝑑𝑎:4 = −∑ Π𝑎:1 ∧ Π𝑎:2 . Suppose now that we are given some element 𝑎 ∈ 𝐻 (1) 𝑅 for which 𝑑Π𝑎 = 0. The bijectivity of the exponential map for G(𝐻𝑅) suggests that this closed 1-form should be exact: Π𝑎 = 𝑑𝑏 for some 𝑏 ∈ 𝐻𝑅. It is clear from (2.14) that the equation Π𝑎 = 𝑑𝑏 can hold only if 𝑏 is primitive. Theorem 2 of [18] uses the Poincaré lemma technique to provide a formula for 𝑏, namely, 𝑏 := −Φ−1(𝑆(𝑎)), where Φ is the operator which grades 𝐻𝑅 by the number of trees in a product: Φ(𝑇1 . . . 𝑇𝑘 ) := 𝑘 𝑇1 . . . 𝑇𝑘 . Notice that 𝑏 = 𝑎 + 𝑐, where 𝑐 ∈ (ker 𝜀)2 is a sum of higher-degree terms. Exercise 2.2. Show that 𝑎 = • ⋄ • ⋄ • + ⋄ • • • − 2 • • ⋄ • satisfies 𝑑Π𝑎 = 0, and compute that 𝑏 = • ⋄ • ⋄ • + ⋄ • • • − 2 • • ⋄ • − ⋄ • ⋄ • + ⋄ • ⋄ • . Verify directly that 𝑏 is indeed primitive. ♢ It is still not a trivial matter to find linear combinations of trees satisfying 𝑑Π𝑎 = 0, but it clearly is much easier to verify this property than to check primitivity directly on a case-by-case basis. ▶ Finally, we comment on the link between 𝐻𝑅 and the Hopf algebra 𝐻𝐶𝑀 of differential operators, developed in [30]. This is found by extending 𝐻𝑅 to a larger (but no longer commutative) Hopf algebra 𝐻𝑅. 
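Returning for a moment to the stick subalgebra 𝐻𝑙: the extraction of the primitives 𝑝𝑟 from log 𝑔(𝑥) is a finite computation in each degree. Here is a sketch (a hypothetical encoding: monomials in the sticks 𝑡𝑘 as sorted tuples of indices, with exact rational coefficients) which recovers 𝑝2 and 𝑝3 as computed above:

```python
from fractions import Fraction

# The monomial (1, 1, 2) stands for t1^2 t2; its degree is the sum of its entries.

def mul(p, q, cutoff):
    """Product of two polynomials in the sticks, truncated in degree > cutoff."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(sorted(m1 + m2))
            if sum(m) <= cutoff:
                out[m] = out.get(m, Fraction(0)) + c1 * c2
    return {m: c for m, c in out.items() if c}

def stick_primitives(R):
    """p(x) = log g(x), with g(x) = 1 + sum_{k>=1} t_k x^k, truncated in
    degree <= R; returns {r: p_r}, p_r collecting the degree-r monomials."""
    u = {(k,): Fraction(1) for k in range(1, R + 1)}      # u = g - 1
    log_g, power = {}, {(): Fraction(1)}
    for m in range(1, R + 1):
        power = mul(power, u, R)                          # u^m, truncated
        for mono, c in power.items():
            log_g[mono] = log_g.get(mono, Fraction(0)) + Fraction((-1) ** (m + 1), m) * c
    prims = {}
    for mono, c in log_g.items():
        if c:
            prims.setdefault(sum(mono), {})[mono] = c
    return prims

p = stick_primitives(3)
# p2 = t2 - (1/2) t1^2   and   p3 = t3 - t1 t2 + (1/3) t1^3:
assert p[2] == {(2,): Fraction(1), (1, 1): Fraction(-1, 2)}
assert p[3] == {(3,): Fraction(1), (1, 2): Fraction(-1), (1, 1, 1): Fraction(1, 3)}
```

Each 𝑝𝑟 produced this way matches the partition formula of [46, Prop. 9.3], one term per partition of 𝑟.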
Since 𝐻𝑅 is graded by the number of vertices per tree, we regard the subspace 𝐻 (1) 𝑅 of single trees as an abelian Lie algebra, and introduce an extra generator 𝑌 with the commutation rule [𝑌,𝑇] := (#𝑇) 𝑇. For each simple cut 𝑐 of 𝑇 , it is clear that #𝑃𝑐 (𝑇) + #𝑅𝑐 (𝑇) = #𝑇 ; a glance at (2.9) then shows that Δ[𝑌,𝑇] = (#𝑇) Δ𝑇 = [𝑌 ⊗ 1 + 1 ⊗ 𝑌,Δ𝑇]. This forces 𝑌 to be primitive: Δ𝑌 := 𝑌 ⊗ 1 + 1 ⊗ 𝑌, (2.15) in order to get Δ[𝑌,𝑇] = [Δ𝑌,Δ𝑇] for consistency. Another important operator on H𝑅 is the so-called natural growth of trees. We define 𝑁 (𝑇), for each tree 𝑇 with vertices 𝑣1, . . . , 𝑣𝑛, by setting 𝑁 (𝑇) := 𝑇1 +𝑇2 + · · · +𝑇𝑛, where each 𝑇𝑗 is obtained from 𝑇 by adding a leaf to 𝑣 𝑗 . For example, 𝑁 ( ⋄ ) := ⋄ • , 𝑁 ( ⋄ • ) := ⋄ • • + • ⋄ • , 𝑁 ( ⋄ • • + • ⋄ • ) := ⋄ • • • + 3 • • ⋄ • + • ⋄ • ⋄ • + ⋄ • • • . 31 In symbols, we write these relations as 𝑁 (𝑡1) = 𝑡2, 𝑁2(𝑡1) = 𝑁 (𝑡2) = 𝑡31 + 𝑡32, 𝑁3(𝑡1) = 𝑁 (𝑡31 + 𝑡32) = 𝑡41 + 3𝑡42 + 𝑡43 + 𝑡44. We rename these 𝛿1 := 𝑡1, 𝛿2 := 𝑁 (𝛿1), 𝛿3 := 𝑁2(𝛿1), 𝛿4 := 𝑁3(𝛿1), and in general 𝛿𝑛+1 := 𝑁𝑛 (𝛿1) for any 𝑛. Notice that 𝛿𝑛+1 is a sum of 𝑛! trees. 𝑁 , defined on the algebra generators, extends uniquely to a derivation 𝑁 : 𝐻𝑅 → 𝐻𝑅. Now, we can add one more generator 𝑋 with the commutation rule [𝑋,𝑇] := 𝑁 (𝑇). The Jacobi identity forces [𝑌, 𝑋] = 𝑋 , as follows: [[𝑌, 𝑋], 𝑇] = [[𝑌,𝑇], 𝑋] + [𝑌, [𝑋,𝑇]] = (#𝑇) [𝑇, 𝑋] + [𝑌, 𝑁 (𝑇)] = −(#𝑇) 𝑁 (𝑇) + (#𝑇 + 1) 𝑁 (𝑇) = 𝑁 (𝑇) = [𝑋,𝑇] . What must the coproduct Δ𝑋 be? Proposition 3.6 of [30] – see also Proposition 14.6 of [52] – proves that Δ𝑁 (𝑇) = (𝑁 ⊗ 𝜄)Δ𝑇 + (𝜄 ⊗ 𝑁)Δ𝑇 + [𝛿1 ⊗ 𝑌,Δ𝑇] (2.16) for each rooted tree 𝑇 . The argument is as follows: to get Δ𝑁 (𝑇), we grow an extra leaf on 𝑇 and then cut the resulting trees in every allowable way. If the new edge is not cut, then it belongs either to a pruned branch or to the trunk which remains after a cut has been made on the original tree 𝑇 ; this amounts to (𝑁 ⊗ 𝜄)Δ𝑇 + (𝜄 ⊗ 𝑁)Δ𝑇 . 
On the other hand, if the new edge is cut, the new leaf contributes a solitary vertex 𝛿1 to 𝑃𝑐; the new leaf must have been attached to the trunk 𝑅𝑐 (𝑇) at any one of the latter’s vertices. Since (#𝑅𝑐)𝑅𝑐 = [𝑌, 𝑅𝑐], the terms wherein the new leaf is cut amount to [𝛿1 ⊗ 𝑌,Δ𝑇]. The equation (2.16) accounts for both possibilities. Then, since Δ[𝑋,𝑇] = [Δ𝑋,Δ𝑇] must hold, we get Δ𝑋 = 𝑋 ⊗ 1 + 1 ⊗ 𝑋 + 𝛿1 ⊗ 𝑌 . (2.17) Let 𝐻𝑅 be the algebra generated by 𝑋 , 𝑌 and 𝐻𝑅. We can extend the counit and antipode to it as follows. Since 𝑌 is primitive, we must take 𝜀(𝑌 ) := 0 and 𝑆(𝑌 ) := −𝑌 . Then, on applying (𝜄 ⊗ 𝜀) to (2.17), 𝜀(𝑋) := 0 follows; and by applying𝑚(𝜄⊗𝑆) or𝑚(𝑆⊗ 𝜄) to it, we also get 0 = 𝑋 +𝑆(𝑋) −𝛿1𝑌 , which forces 𝑆(𝑋) := −𝑋 + 𝛿1𝑌 . Now (2.15) and (2.17) reproduce exactly the coproducts (1.24) for the differential operators 𝑌 and 𝑋 of the Hopf algebra 𝐻𝐶𝑀 . Indeed, since 𝛿1, like 𝜆1 ∈ 𝐻𝐶𝑀 , is primitive and since 𝛿𝑛+1 = 𝑁 (𝛿𝑛) = [𝑋, 𝛿𝑛], the correspondence 𝑋 ↦→ 𝑋 , 𝑌 ↦→ 𝑌 , 𝜆𝑛 ↦→ 𝛿𝑛 maps 𝐻𝐶𝑀 isomorphically into 𝐻𝑅. 2.2 Hopf algebras of Feynman graphs and renormalization In this subsection, we shall describe briefly some other Hopf algebras which underlie the structure of a renormalizable quantum field theory. Rather than going into the details of perturbative renormalization, we shall merely indicate how such Hopf algebras are involved. 32 In a given QFT, one is faced with the problem of computing correlations (Green functions) from a perturbative expansion whose terms are labelled by Feynman graphs Γ, and consist of multiple integrals where the integrand is completely specified by the combinatorial structure of Γ (its vertices, external and internal lines, and loops) according to a small number of Feynman rules. 
Typically, one works in momentum space of 𝐷 dimensions, and a preliminary count of the powers of the momenta in the integrand indicates, in many cases, a superficially divergent integral; even if the graph Γ itself passes this test, it may contain subgraphs corresponding to superficially divergent integrals. The main idea of renormalization theory is to associate a “counterterm” to each superficially divergent subgraph, in order to obtain a finite result by subtraction. The first step in approaching such calculations is to realize that all superficially divergent subgraphs must be dealt with, in a recursive fashion, before finally assigning a finite value to the full graph Γ. Thus, each graph Γ determines a nesting of divergent subgraphs: this nesting is codified by a rooted tree, where the root represents the full graph, provided that the Γ does not contain overlapping divergences. (Even if overlapping divergences do occur, one can replace the single rooted tree by a sum over rooted trees after disentangling the overlaps: see [64] for a detailed analysis.) A “leaf” is a divergent subgraph which itself contains no further subdivergences. The combinatorial algebra is worked out in considerable detail in a recent article of Connes and Kreimer [31]: the following remarks can be taken as an incentive for a closer look at that paper. See also the survey of Kreimer [65] for a detailed discussion of the conceptual framework. The authors of [31] consider 𝜙3 theory in 𝐷 = 6 dimensions; but one could equally well start with 𝜙4 theory for 𝐷 = 4 [49], or QED, or any other well-known theory. Definition 2.4. Let Φ stand for any particular QFT. The Hopf algebra 𝐻Φ is a commutative algebra generated by one-particle irreducible (1PI) graphs: that is, connected graphs with at least two vertices which cannot be disconnected by removing a single line. The product is given by disjoint union of graphs: Γ1Γ2 means Γ1 ⊎ Γ2. 
The counit is given by 𝜀(Γ) := 0 on any generator, with 𝜀(∅) := 1 (we assign the empty graph to the identity element). The coproduct Δ is given, on any 1PI graph Γ, by ΔΓ := Γ ⊗ 1 + 1 ⊗ Γ + ∑︁ ∅⊊𝛾⊊Γ 𝛾 ⊗ Γ/𝛾, (2.18) where the sum ranges over all subgraphs which are divergent and proper (in the sense that removing one internal line cannot increase the number of its connected components); 𝛾 may be either connected or a disjoint union of several connected pieces. The notation Γ/𝛾 denotes the (connected, 1PI) graph obtained from Γ by replacing each component of 𝛾 by a single vertex. To see that Δ is coassociative, we may reason as follows. We may replace the right hand side of (2.18) by a single sum over ∅ ⊆ 𝛾 ⊆ Γ, allowing 𝛾 = ∅ or 𝛾 = Γ and setting Γ/Γ := 1. We observe that if 𝛾 ⊆ 𝛾′ ⊆ Γ, then 𝛾′/𝛾 can be regarded as a subgraph of Γ/𝛾; moreover, it is obvious that (Γ/𝛾)/(𝛾′/𝛾) ≃ Γ/𝛾′. (2.19) The desired relation (Δ ⊗ 𝜄) (ΔΓ) = (𝜄 ⊗ Δ) (ΔΓ) can now be expressed as∑︁ ∅⊆𝛾⊆𝛾′⊆Γ 𝛾 ⊗ 𝛾′/𝛾 ⊗ Γ/𝛾′ = ∑︁ ∅⊆𝛾⊆Γ, ∅⊆𝛾′′⊆Γ/𝛾 𝛾 ⊗ 𝛾′′ ⊗ (Γ/𝛾)/𝛾′′, 33 so coassociativity reduces to proving, for each subgraph 𝛾 of Γ, that∑︁ 𝛾⊆𝛾′⊆Γ 𝛾′/𝛾 ⊗ Γ/𝛾′ = ∑︁ ∅⊆𝛾′′⊆Γ/𝛾 𝛾′′ ⊗ (Γ/𝛾)/𝛾′′. Choose 𝛾′ so that 𝛾 ⊆ 𝛾′ ⊆ Γ; then ∅ ⊆ 𝛾′/𝛾 ⊆ Γ/𝛾. Reciprocally, to every 𝛾′′ ⊆ Γ/𝛾 there corresponds a unique 𝛾′ such that 𝛾 ⊆ 𝛾′ ⊆ Γ and 𝛾′/𝛾 = 𝛾′′; the previous equality now follows from the identification (2.19). We have now defined 𝐻Φ as a bialgebra. To make sure that it is a Hopf algebra, it suffices to show that it is graded and connected, whereby the antipode comes for free. Several grading operators Υ are available, which satisfy the two conditions (2.6): Υ(Γ1Γ2) = Υ(Γ1) + Υ(Γ2) and Υ(𝛾) + Υ(Γ/𝛾) = Υ(Γ) whenever 𝛾 is a divergent proper subgraph of Γ. One such grading is the loop number ℓ(Γ) := 𝐼 (Γ) −𝑉 (Γ) + 1, if Γ has 𝐼 (Γ) internal lines and 𝑉 (Γ) vertices. If ℓ(Γ) = 0, then Γ would be a tree graph, which is never 1PI; thus ker ℓ consists of scalars only, so 𝐻Φ is connected. 
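The compatibility of the loop-number grading with contraction, ℓ(𝛾) + ℓ(Γ/𝛾) = ℓ(Γ), can be illustrated on a toy encoding of graphs (a hypothetical representation: internal lines only, carrying labels so that parallel lines stay distinct; external structure and vertex types are ignored, and 𝛾 is assumed connected):

```python
def loop_number(vertices, edges):
    """loop number l(Gamma) = I(Gamma) - V(Gamma) + 1; edges are (u, v, label)
    triples, so parallel lines and self-loops remain distinguishable."""
    return len(edges) - len(vertices) + 1

def contract(vertices, edges, sub_edges):
    """Gamma/gamma for a connected subgraph gamma given by its lines:
    delete the lines of gamma and fuse every vertex gamma touches."""
    touched = {v for (u, w, _) in sub_edges for v in (u, w)}
    rep = min(touched)
    relabel = lambda v: rep if v in touched else v
    new_vertices = {relabel(v) for v in vertices}
    new_edges = [(relabel(u), relabel(w), lab)
                 for (u, w, lab) in edges if (u, w, lab) not in sub_edges]
    return new_vertices, new_edges

V = {0, 1}
E = [(0, 1, 'a'), (0, 1, 'b'), (0, 1, 'c')]   # two vertices joined by three lines
gamma = [(0, 1, 'a'), (0, 1, 'b')]            # a one-loop subgraph on the same vertices
Vq, Eq = contract(V, E, gamma)

assert loop_number(V, E) == 2
assert loop_number({0, 1}, gamma) == 1
assert loop_number(Vq, Eq) == loop_number(V, E) - loop_number({0, 1}, gamma)
```

Note that the uncut third line survives contraction as a self-loop on the fused vertex, which is what makes the grading identity come out right.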
The antipode is now given recursively by (2.8): 𝑆(Γ) = −Γ + ∑︁ ∅⊊𝛾⊊Γ 𝑆(𝛾) Γ/𝛾. (2.20) As it stands, the Hopf algebra 𝐻Φ corresponds to a formal manipulation of graphs. It remains to understand how to match these formulas to expressions for numerical values, whereby the antipode 𝑆 delivers the counterterms. This is done in two steps. First of all, the Feynman rules for the unrenormalized theory can be thought of as prescribing a linear map 𝑓 : 𝐻Φ → A, into some commutative algebraA, that is multiplicative on disjoint unions: 𝑓 (Γ1Γ2) = 𝑓 (Γ1) 𝑓 (Γ2). In other words, 𝑓 is actually a homomorphism of algebras. For instance, A is often an algebra of Laurent series in some (complex) regularization parameter 𝜀: in dimensional regularization, after adjustment by a mass unit 𝜇 so that each 𝑓 (Γ) is dimensionless, one computes the corresponding integral in dimension 𝑑 = 𝐷 + 𝜀, for 𝜀 ≠ 0. We shall also suppose that A is the direct sum of two subalgebras: A = A+ ⊕ A−. Let 𝑇 : A → A− be the projection on the second subalgebra, with ker𝑇 = A+. When A is a Laurent-series algebra, one takes A+ to be the holomorphic subalgebra of Taylor series and A− to be the subalgebra of polynomials in 1/𝜀 without constant term; the projection 𝑇 picks out the