Lie structure of the Heisenberg-Weyl algebra

As an associative algebra, the Heisenberg-Weyl algebra $\mathcal{H}$ is generated by two elements $A$, $B$ subject to the relation $AB-BA=1$. As a Lie algebra, however, where the usual commutator serves as Lie bracket, the elements $A$ and $B$ are not able to generate the whole space $\mathcal{H}$. We identify a non-nilpotent but solvable Lie subalgebra $\mathfrak{g}$ of $\mathcal{H}$, for which, using some facts from the theory of bases for free Lie algebras, we give a presentation by generators and relations. Under this presentation, we show that, for some algebra isomorphism $\varphi:\mathcal{H}\longrightarrow\mathcal{H}$, the Lie algebra $\mathcal{H}$ is generated by the generators of $\mathfrak{g}$, together with their images under $\varphi$, and that $\mathcal{H}$ is the sum of $\mathfrak{g}$, $\varphi(\mathfrak{g})$ and $\left[ \mathfrak{g},\varphi(\mathfrak{g})\right]$.


Introduction
The Heisenberg-Weyl algebra, because of its ubiquity and profundity, is said to have become the hallmark of noncommutativity in quantum theory [3]. Motivated by the creation and annihilation operators of the traditional quantum harmonic oscillator, the Heisenberg-Weyl algebra is generated by two elements $A$ and $B$ that satisfy the canonical commutation relation $AB-BA=1$, which implies that, in the usual Hilbert space formulation of quantum mechanical systems, $A$ and $B$ may be represented by unbounded Hilbert space operators. See, for instance, [15]. Nonetheless, virtually all correspondence schemes for representations of physical quantities in the Hilbert space formulation of quantum theory are said to be endowed with the Heisenberg-Weyl algebra structure [3].
An approximation to the commutation relation $AB-BA=1$ was first proposed by Arik and Coon [1]. The new commutation relation is $AB-qBA=1$, where the parameter $q$ is selected from an appropriate space such that the limit process as $q\to 1$ may be carried out. This new commutation relation resulted in bounded operator representations of $A$ and $B$ [1, p. 524]. The new model has been successfully applied to several fields, including particle physics, knot theory and general relativity [11, Chapter 12]. To mention one specific example, there was a study [16] of a helium isotope in which a model based on the commutation relation $AB-qBA=1$ was compared with the available experimental data, and the computed spectrum reproduced the experimental one within less than 5% discrepancy [16, p. 1100].
Thus, the Heisenberg-Weyl algebra now belongs to a family of algebras $\mathcal{H}(q)$ generated by two elements $A$, $B$ subject to the relation $AB-qBA=1$. We call $\mathcal{H}(q)$ the $q$-deformed Heisenberg algebra. We mention here two perspectives on the combinatorial algebra of $q$-deformed Heisenberg algebras that have appeared in the literature.
The first is in terms of algebraic term rewriting [12-14]. In these studies, the focus was on the rewriting system or reduction system induced by the relation $AB-qBA=1$ that is used to arrive at the traditional "normal form" of a given element of $\mathcal{H}(q)$, which in this case is a linear combination of words $B^kA^l$, where $k,l$ are nonnegative integers. This reduction system and the corresponding normal form were used in [12] to study centralizers of elements of $\mathcal{H}(q)$ and the algebraic dependence of commuting elements, while in [14], the structure of two-sided ideals of $\mathcal{H}(q)$ was studied using deformed commutator mappings. In [13], the generalization of $\mathcal{H}$ into $\mathcal{H}(q)$ was an important running example of how the Diamond Lemma for Ring Theory [2] may be generalized from its usual ring-theoretic scope to classes of power series algebras.
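The reduction to normal form described above can be sketched mechanically. The following Python fragment is our own illustration, not code from the cited studies; the helper name `reduce_word` and the dictionary representation of linear combinations are assumptions of the sketch. It rewrites a word on the alphabet {'A', 'B'} into a linear combination of normal words $B^kA^l$ using the rule $AB\to qBA+1$ induced by the defining relation:

```python
import sympy

q = sympy.symbols('q')

def reduce_word(word):
    """Rewrite a word on {'A', 'B'} as a linear combination of normal
    words B^k A^l, using the reduction AB -> q*BA + 1 induced by the
    defining relation AB - q*BA = 1.  Returns {normal_word: coefficient}."""
    terms = {word: sympy.Integer(1)}
    result = {}
    while terms:
        w, c = terms.popitem()
        i = w.find('AB')
        if i == -1:                          # already of the form B^k A^l
            result[w] = result.get(w, 0) + c
            continue
        swapped = w[:i] + 'BA' + w[i + 2:]   # the q*BA part of the rule
        erased = w[:i] + w[i + 2:]           # the +1 part of the rule
        terms[swapped] = terms.get(swapped, 0) + c * q
        terms[erased] = terms.get(erased, 0) + c
    return {w: sympy.expand(c) for w, c in result.items() if c != 0}
```

For instance, `reduce_word('AB')` returns a dictionary equal to `{'BA': q, '': 1}`, and specializing $q=1$ in `reduce_word('AABB')` recovers the Heisenberg-Weyl identity $A^2B^2=B^2A^2+4BA+2$. Termination of the loop reflects the termination of the reduction system discussed above.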
The second perspective on the study of $q$-deformed Heisenberg algebras concerns a nonassociative structure, or more precisely, a Lie algebra structure, induced by the operation $\mathcal{H}(q)\times\mathcal{H}(q)\longrightarrow\mathcal{H}(q)$ given by the usual commutator $(x,y)\mapsto xy-yx$ [5-7, 9]. One main theorem about this is that, if $q\neq 1$, then the Lie subalgebra of $\mathcal{H}(q)$ generated by $A$, $B$ admits an explicit vector space basis consisting of $A$, $B$, and certain normal-form words. The determination of such a Lie subalgebra is said to be the solution to the Lie polynomial characterization problem [8] for $\mathcal{H}(q)$ under the usual generators and relation. An algebraic solution was given in [5] when $q$ is not a root of unity, and in [7] when $q\neq 1$ is a root of unity. Alternatively, if $q$ is in the real interval $(0,1)$, then an operator-theoretic solution was given in [6]. The methods of [5] were also used in [9] for a central extension of the algebra $\mathcal{H}(q)$.
Let us now consider some concrete examples. As mentioned earlier, the associative algebra $\mathcal{H}(q)$ may be turned into a Lie algebra with Lie bracket $[x,y]=xy-yx$ for any $x,y\in\mathcal{H}(q)$. If $q$ is not a root of unity, then, using results from [5], various nested commutators of the generators reduce to linear combinations of a small number of normal-form words $B^kA^l$ together with 1. However, it is not possible to express elements like $A^3$, $B^3$, $A^2$, $B^2$ (pure powers of $A$ or $B$ with exponent at least 2) in terms of only Lie algebra operations performed on the generators $A$, $B$. Properties such as these of the Lie algebra structure, or Lie structure, on $\mathcal{H}(q)$ have led to some interesting results, one of which is the characterization of the compact elements of $\mathcal{H}(q)$ (under some operator norm), leading to a Calkin algebra isomorphic to an algebra of Laurent polynomials in one variable [6].
However, when we take the limit as $q\to 1$, i.e., in the Heisenberg-Weyl algebra $\mathcal{H}=\mathcal{H}(1)$, the aforementioned Lie structure reduces to the linear span of $1$, $A$ and $B$. But still, there is more to the Lie algebra $\mathcal{H}$ than just the elements $\alpha_1\cdot 1+\alpha_2 A+\alpha_3 B$ for all scalars $\alpha_1,\alpha_2,\alpha_3$. This is the starting point of our inquiry. If the Lie structure of the Heisenberg-Weyl algebra cannot be studied by solving a Lie polynomial characterization problem (because the solution is almost trivial), then how can the rest of the Lie algebra $\mathcal{H}$ (outside the span of $1$, $A$, $B$) be described?
In this work, we answer this question by expressing the Lie algebra $\mathcal{H}$ as the sum of three vector subspaces. The first summand is a Lie subalgebra $\mathfrak{g}$ (to be defined in Section 4); the second summand is the image of $\mathfrak{g}$ under some algebra isomorphism $\varphi$, and the third summand is $[\mathfrak{g},\varphi(\mathfrak{g})]$. If the Lie subalgebra $\mathfrak{g}$ is of such importance in elucidating the Lie structure of $\mathcal{H}$, then we naturally want to know more about it. What we do in this work is to give a presentation of $\mathfrak{g}$ by generators and relations. To do this, we first give, in Section 3, a treatment of selected aspects of the theory of Lyndon-Shirshov words, and of the role they play in the basis theory for free Lie algebras.
For the sake of completeness, we mention here some studies on the Lie structure of certain classes of associative algebras [17,18], and of a certain special product of associative algebras [22]. These studies focus on necessary and sufficient conditions for nilpotency or solvability of the desired Lie algebras over a field of nonzero characteristic. They also involve results in the framework of polynomial identity algebras. Although we shall be showing the non-nilpotency and solvability of the Lie algebra $\mathfrak{g}$, which was mentioned earlier as our key to describing the Lie structure of $\mathcal{H}$, the subject of this work differs significantly from the approach of [17,18,22]. In a later result, we will specifically require the underlying field to have characteristic zero; otherwise, some trivialities are introduced in the Lie structure of $\mathfrak{g}$, and hence of $\mathcal{H}$. Also, instead of focusing on polynomial identities, we delve deeper into the combinatorial algebra of Lyndon-Shirshov words, and the properties of the free Lie algebra basis that can be derived from them.

Preliminaries
If the two-element set is denoted by $\{A,B\}$, then we shall refer to $A$ and $B$ as words of length 1. If, for some positive integer $n$, all words of length strictly less than $n$ have been defined, then by a word of length $n$, we mean any juxtaposition of the form $w_1w_2$, where one of $w_1$ or $w_2$ is a word of length $n-1$, and the other is a word of length 1. If $w$ is a word of length $n$, but mention of the positive integer $n$ is not relevant in the current context, then we simply refer to $w$ as a word, or a word on $\{A,B\}$, or a word on $A$, $B$. Conversely, if $w$ is referred to as a word, then it is assumed that $w$ is a word of length $n$ for some positive integer $n$. In such a case, we define $n$ as the length of $w$, which we denote by $|w|$. Any word $w$ may be written as $w=w_1w_2\cdots w_n$, where, for each $i\in\{1,2,\ldots,n\}$, we have $w_i\in\{A,B\}$. Induction on $n$ may be used to prove that any such juxtaposition $w_1w_2\cdots w_n$ with each $w_i\in\{A,B\}$ is indeed a word and, again by induction, that $n=|w|$. Equality of words, say $w=w'$, is defined by the conditions $|w|=|w'|$ and $w_i=w_i'$ for any $i\in\{1,2,\ldots,|w|\}$. By the support of $w$, we mean the smallest set $\mathsf{X}\subseteq\{A,B\}$ such that $w_i\in\mathsf{X}$ for each $i\in\{1,2,\ldots,|w|\}$. We define the empty word, or word of length 0, as the word with empty support. We use the symbol 1 to denote the empty word, and we define $|1|:=0$. A word $w$ is nonempty if $w\neq 1$. Given a positive integer $t$, the word $w^t$ is the juxtaposition of $w$ with itself such that $w$ appears in the juxtaposition $t$ times, just like exponentiation in elementary number systems. We interpret $w^0$ as the empty word. For each $n\in\mathbb{N}:=\{0,1,2,\ldots\}$, let $\langle A,B\rangle_n$ be the set of all words of length $n$. The set of all words on $\{A,B\}$, which is $\langle A,B\rangle:=\bigcup_{n\in\mathbb{N}}\langle A,B\rangle_n$, may easily be shown to be a noncommutative monoid under the operation of juxtaposition of words, with the empty word as (multiplicative) identity.
Let $\mathbb{F}$ be a field. We assume that any $\mathbb{F}$-algebra to be mentioned is unital and associative. Since we shall not be considering any set of scalars other than $\mathbb{F}$, we further drop the prefix "$\mathbb{F}$-" and simply use the term "algebra." Let $\mathbb{F}\langle A,B\rangle$ be the free algebra generated by the two-element set $\{A,B\}$. The words on $\{A,B\}$ form a basis for $\mathbb{F}\langle A,B\rangle$ as a vector space over $\mathbb{F}$. We assume that no confusion shall arise from using the same symbol 1 for the empty word and the multiplicative identity of the field $\mathbb{F}$.
The Heisenberg-Weyl algebra is the algebra $\mathcal{H}$ generated by two elements $A$, $B$ satisfying the relation $AB=BA+1$. By the universal property of the free algebra $\mathbb{F}\langle A,B\rangle$, $\mathcal{H}$ is isomorphic to some quotient of $\mathbb{F}\langle A,B\rangle$. More precisely, if $\mathsf{K}$ is the (two-sided) ideal of $\mathbb{F}\langle A,B\rangle$ generated by $AB-BA-1$, then $\mathcal{H}$ is isomorphic to $\mathbb{F}\langle A,B\rangle/\mathsf{K}$. Throughout, if a vector space basis is known for an algebra or Lie algebra $\mathcal{A}$, then this basis is understood to be a Hamel basis. That is, regardless of whether $\mathcal{A}$ is infinite-dimensional or not, $\mathcal{A}$ is viewed as the set of all finite linear combinations of said basis elements. Some facts about a traditional basis for the Heisenberg-Weyl algebra are discussed at the beginning of Section 4.

Nested adjoint maps
Any algebra $\mathcal{A}$ is a Lie algebra under the operation $\mathcal{A}\times\mathcal{A}\longrightarrow\mathcal{A}$ given by $(x,y)\mapsto[x,y]:=xy-yx$. Given $x\in\mathcal{A}$, the adjoint map $\operatorname{ad}x$ is the linear map $\mathcal{A}\longrightarrow\mathcal{A}$ given by $y\mapsto[x,y]$. The adjoint map gives a convenient notation for nested Lie brackets that is precise, with no need for vague use of expressions like "$n$ times" or "$n$ copies"; for instance, $(\operatorname{ad}x)^3(y)=[x,[x,[x,y]]]$, where juxtaposition and exponentiation of adjoint maps refer to function composition. Given $m,n\in\mathbb{N}$, the reader may infer the meaning of generalized nested Lie brackets like $(\operatorname{ad}x)^m(y)$, $(-\operatorname{ad}x)^m(y)$, $(\operatorname{ad}x)^m(-\operatorname{ad}y)^n(z)$ or $(-\operatorname{ad}x)^m(\operatorname{ad}y)^n(z)$, and perhaps these examples show the advantage of the "adjoint notation" for nested Lie brackets.
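The adjoint notation can be made concrete in the free algebra itself. The sketch below is our own illustration (the helper names `mult`, `bracket`, `ad` are assumptions, not notation from the references): elements are dictionaries mapping words to coefficients, and `ad` builds the map $r\mapsto[p,r]$, so that a nested bracket such as $(\operatorname{ad}x)^2(y)=[x,[x,y]]$ can be expanded mechanically:

```python
def mult(p, r):
    """Product of two free-algebra elements given as {word: coefficient}."""
    out = {}
    for u, a in p.items():
        for v, b in r.items():
            out[u + v] = out.get(u + v, 0) + a * b
    return out

def bracket(p, r):
    """Commutator [p, r] = p*r - r*p."""
    out = mult(p, r)
    for w, c in mult(r, p).items():
        out[w] = out.get(w, 0) - c
    return {w: c for w, c in out.items() if c != 0}

def ad(p):
    """The adjoint map ad p : r |-> [p, r]."""
    return lambda r: bracket(p, r)

x, y = {'x': 1}, {'y': 1}
adx = ad(x)
nested = adx(adx(y))   # (ad x)^2(y) = [x, [x, y]] = xxy - 2 xyx + yxx
```

Expanding by hand confirms the comment: $[x,[x,y]]=x\cdot xy-x\cdot yx-xy\cdot x+yx\cdot x$, which collects to $xxy-2xyx+yxx$.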

Nonassociative regular words on two generators
In this section, we give a rigorous treatment of aspects of the theory of regular words on two generators. These objects were motivated by notions from, and have crucial significance in, several algebraic theories: mainly the theory of presentations of groups, the so-called "Fox calculus" or free differential calculus, and also the theory of bases for free Lie algebras [10,20,21]. Some excellent modern expositions are [4] and [23, Sections 2.2, 2.7-2.9]. Our treatment here is mainly based on [23] because of the agreeable perspective in it: the said theoretical developments can in principle be dealt with on the associative level, with the aid of universal enveloping algebras, but it makes sense, however, to do so not outside the scope of the Lie algebras themselves [23, p. 37].
Let $u,w\in\langle A,B\rangle$. We say that $u$ is a subword of $w$ if there exist $s,t\in\langle A,B\rangle$ such that $w=sut$. If $s=1$, then $u$ is a beginning of $w$, and $u$ is an ending of $w$ if $t=1$. A subword $u$ of $w$ is proper if $u\neq w$. Suppose $\{w:P(w)\}\subseteq\langle A,B\rangle$ for some statement $P$. A longest word with property $P$ is an element $w'$ of $\{w:P(w)\}$ such that $|w|\leq|w'|$ for any $w\in\{w:P(w)\}$. Any nonempty word has a unique longest ending, and any word with length at least 2 has a unique longest proper ending.
If we define $>$ as the ordering on $\{A,B\}$ given by $A>B$, then we may extend $>$ to an ordering on $\langle A,B\rangle$ by defining $u>v$ whenever there exists $i\in\{1,2,\ldots,\min\{|u|,|v|\}\}$ such that $u_i>v_i$, and $j<i$ implies $u_j=v_j$. If $u>v$ or $u=v$, then we write $u\geq v$.

Definition 3.1. A nonempty word $w\in\langle A,B\rangle$ is regular if for any proper beginning $u$ and any proper ending $v$ of $w$ such that $w=uv$, we have $w>vu$.
The notion of a regular word is one of the fundamental cornerstones of the basis theory of free Lie algebras. We shall gradually introduce properties of regular words according to what shall be relevant to the Lie structure of the Heisenberg-Weyl algebra. First, we have a property that is useful in constructing regular words longer than the generators $A$ and $B$, and we shall be concerned with the specific types of regular words summarized in the examples of this section. By contrast, no power of a single generator with exponent at least 2 is regular: given $c\in\{A,B\}$ and an integer $t\geq 2$, if $u$ is a proper beginning of the word $c^t$, then $u=c^s$ for some positive integer $s<t$. The proper ending $v$ of $c^t$ such that $c^t=uv$ is $v=c^{t-s}$. Thus, $vu=c^{t-s}c^s=c^t$, and so, $c^t\not>vu$. This means that $c^t$ is not regular.
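Definition 3.1 lends itself to a mechanical check. The sketch below is our own code (the names `ORD` and `is_regular` are assumptions of the illustration); it takes the two generators to be A and B with A > B, in line with the ordering above, and tests a word against all rotations $vu$ arising from splittings $w=uv$:

```python
# Numeric keys realize the ordering A > B on the generators.
ORD = {'A': 1, 'B': 0}

def is_regular(w):
    """Definition 3.1: a nonempty w is regular if w > v+u for every
    splitting w = u+v into a nonempty proper beginning u and ending v;
    equivalently, w is strictly greatest among its cyclic rotations."""
    key = lambda s: [ORD[c] for c in s]
    return w != '' and all(key(w) > key(w[i:] + w[:i]) for i in range(1, len(w)))
```

For instance, `is_regular('AAB')` holds, while no power of a single generator with exponent at least 2 is regular, in line with the discussion above.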

Regular factoring
Perhaps the most important property of regular words is the existence of a unique way to "factor" a regular word such that the subwords in the factoring are also regular, the factoring process may be repeated on these factors, and the process terminates when all factors are generators. This feature of regular words makes them of extreme significance to a particular nonassociative structure in $\mathbb{F}\langle A,B\rangle$ that we will be using later.

Lemma 3.5 ([23, Theorem 2.8.3(b)]). If $w$ is a regular word, if $v$ is the longest regular proper ending of $w$, and if $u$ is the proper beginning of $w$ such that $w=uv$, then $u$ is regular.

The uniqueness of $v$ implies the uniqueness of the pair $(u,v)$, which we henceforth refer to as the regular factoring of $w$. In symbols, we write this as $w=u\star v$. Since any nonempty proper ending of $AB^t$ is of the form $B^s$ for some positive integer $s$, as shown in Example 3.4, none of these nonempty proper endings is regular except for $B$ itself. This explains the regular factoring $AB^t=AB^{t-1}\star B$ in Example 3.6(i). The longest proper ending of the regular word $A^2B$ is $AB$, which is already regular. Thus, $A^2B=A\star AB$, and this may be extended by induction. The result is Example 3.6(ii).

As for Example 3.6(iii), a routine argument may be used to show that any proper ending of $A^{h+i}B^jA^hB^k$ longer than $A^hB^k$ is not regular. Example 3.6(iii) may be extended to Example 3.6(iv)-(v) in the same manner as Example 3.6(i) was extended to Example 3.6(ii).
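The regular factoring of Lemma 3.5 is effective: scan the proper endings of a regular word from longest to shortest and stop at the first regular one. A self-contained sketch (our own helper names; the same ordering A > B as before):

```python
ORD = {'A': 1, 'B': 0}

def is_regular(w):
    """Definition 3.1: w is strictly greatest among its cyclic rotations."""
    key = lambda s: [ORD[c] for c in s]
    return w != '' and all(key(w) > key(w[i:] + w[:i]) for i in range(1, len(w)))

def regular_factoring(w):
    """Regular factoring w = u * v of a regular word w with |w| >= 2:
    v is the longest regular proper ending of w (Lemma 3.5)."""
    assert is_regular(w) and len(w) >= 2
    for i in range(1, len(w)):        # proper endings, longest first
        if is_regular(w[i:]):
            return w[:i], w[i:]
```

For example, `regular_factoring('ABB')` returns `('AB', 'B')`, matching the pattern $AB^t=AB^{t-1}\star B$, and `regular_factoring('AAB')` returns `('A', 'AB')`.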

Regular bracketing
The free algebra $\mathbb{F}\langle A,B\rangle$ is a Lie algebra under the Lie bracket $[x,y]:=xy-yx$. Every regular word $w$ determines a nonassociative regular word $[w]$: set $[c]:=c$ for a generator $c\in\{A,B\}$, and $[w]:=[[u],[v]]$ whenever $w=u\star v$ is the regular factoring of a regular word $w$ of length at least 2. The resulting nested Lie brackets are conveniently written with nested adjoint maps. [See Section 2.1 for explanatory remarks on the use of nested adjoint maps.] Example 3.8 records the regular bracketings of the regular words from Example 3.6: item (i) covers the words $A^hB^t$; as a consequence of (i), and also of Example 3.6(iii), items (ii)-(iv) give, for positive integers $h,i,j,k$, the regular bracketings of the words $A^{h+i}B^jA^hB^k$ in the separate cases $j<k$ and $j\geq k$, using Example 3.6(iv) and induction; and in item (v), we may generalize or combine (ii)-(iv) above, and make some substitutions using (i). The case $i=0$ with $j\geq k$ is not included because, in such a case, according to Example 3.3(ii), $A^hB^jA^hB^k$ is not regular.
We give a few remarks on how Example 3.8(ii) has been generalized into Example 3.8(iii)-(iv). Consider the word $A^{h+1}B^jA^hB^k$, where $h,j,k$ are positive. Proposition 3.2 and Example 3.3(i) may be used to show that $A^{h+1}B^jA^hB^k$ is regular. This is regardless of which of $j$ or $k$ is bigger. If $j<k$, then the longest regular proper ending of $A^{h+1}B^jA^hB^k$ is $A^hB^jA^hB^k$, which, by Example 3.3(ii), is already regular. Thus we obtain the formula in Example 3.8(iii) at $i=1$. This may be extended to an arbitrary positive integer $i$ by induction. For the case $j\geq k$, a routine argument may be used to show that any proper ending of $A^{h+1}B^jA^hB^k$ longer than $A^hB^k$ is not regular. Thus we obtain the formula in Example 3.8(iv) at $i=1$. At the next value of $i$, the longest regular proper ending of the word $A^{h+2}B^jA^hB^k$ is $A^{h+1}B^jA^hB^k$, which we have established to be regular. This results in the formula in Example 3.8(iv) at $i=2$. Using Example 3.6(iv) and induction, this may be extended to any positive $i\in\mathbb{N}$.
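Iterating the regular factoring down to the generators produces the regular bracketing. The recursive sketch below (our own code; ordering A > B as before) returns the bracketing as nested pairs, so that, for instance, $A^2B$ yields the nesting corresponding to $[A,[A,B]]=(\operatorname{ad}A)^2(B)$:

```python
ORD = {'A': 1, 'B': 0}

def is_regular(w):
    """Definition 3.1: w is strictly greatest among its cyclic rotations."""
    key = lambda s: [ORD[c] for c in s]
    return w != '' and all(key(w) > key(w[i:] + w[:i]) for i in range(1, len(w)))

def regular_factoring(w):
    """Lemma 3.5: w = u * v with v the longest regular proper ending."""
    for i in range(1, len(w)):
        if is_regular(w[i:]):
            return w[:i], w[i:]

def regular_bracketing(w):
    """Bracket a regular word recursively along its regular factoring,
    terminating when all factors are generators."""
    if len(w) == 1:
        return w
    u, v = regular_factoring(w)
    return (regular_bracketing(u), regular_bracketing(v))
```

Here `regular_bracketing('AAB')` gives `('A', ('A', 'B'))`, the nested-pair form of $[A,[A,B]]$.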

The general regular word on two generators
After some facts about regular factoring and regular bracketing in the previous subsections, we now consider how these notions may be understood for a regular word (on two generators) of arbitrary length. To this end, we have an important necessary condition for the regularity of a word in the lemma that follows. This also gives us a definite form of a regular word on two generators, and we use this form to partition the collection of regular words, which shall be relevant in our main results later.

Lemma 3.9. If $w\in\langle A,B\rangle$ is regular with length at least 2, then there exist positive integers $h_1,t_1,h_2,t_2,\ldots,h_r,t_r$ such that $w=A^{h_1}B^{t_1}A^{h_2}B^{t_2}\cdots A^{h_r}B^{t_r}$.

Proof. First, we claim that there exist distinct $c,d\in\{A,B\}$ such that $w=c^{h_1}d^{t_1}c^{h_2}d^{t_2}\cdots$, with all exponents shown positive. If $|w|=2$, the four candidates are checked directly. For $w=AB$, the only proper beginning is $u=A$, and the proper ending $v$ such that $w=uv$ is $v=B$; we have $AB>BA$, and so $AB$ is regular. For each of the other three cases for $w$, there exist a proper beginning $u'$ and a proper ending $v'$ such that $w=u'v'$ but $w\not>v'u'$. Thus, the only possibility is $w=AB$, which satisfies the statement. Suppose that the statement holds for any regular word with length strictly less than $|w|$. If $w=u\star v$, then, by the inductive hypothesis, each of $u$ and $v$ is an alternating product of powers of two distinct generators, with all exponents shown positive. The last generator occurring in $u$ either equals or differs from the first generator occurring in $v$, and in either case the juxtaposition $w=uv$ is again of the claimed form, with all exponents shown positive. This completes the induction proof of the claim. What remains to be shown is $c=A$. Suppose otherwise. Then the only possibility is $c=B$.
Consequently, the proper beginning $u=B^{h_1}$ and the proper ending $v=A^{t_1}B^{h_2}A^{t_2}\cdots$ of $w$ have the property that $w=uv$, but, since $h_1$ and $t_1$ are positive, we may write out $w$ and $vu$ and, paying attention to the generator at the left-most position in each of these words, conclude that $w\not>vu$. This contradicts the regularity of $w$. Therefore, $c=A$, and the lemma is proved.

Given words $u$ and $w$, the number of times $u$ occurs as a subword of $w$ is denoted by $\deg_u w$. With reference to the notation in Lemma 3.9, because the exponents $h_1,t_1,h_2,t_2,\ldots,h_r,t_r$ are all positive, the arbitrary regular word $w$ of (2) satisfies $\deg_{BA}w=r-1$. We now give some remarks concerning the regular factoring and regular bracketing of the regular word (2). Suppose that the biggest value among $h_2,h_3,\ldots,h_r$ has its first occurrence at index $p$. By a routine argument, the regularity of $w$ implies $h_1\geq h_p$, and we consider the subwords of $w$ determined by the position of $A^{h_p}$. Routine arguments, which make use of inequalities satisfied by the exponents of $w$, may be used to show that these subwords are all regular. Suppose that $v_1$ and $v_2$ are the longest regular proper endings of the two distinguished subwords. Since the exponents of $A$ occurring in $v_1$ or $v_2$ (except possibly $h_1$) are strictly less than the first exponent of $A$ in $w$, regularity requires that $v_1$ and $v_2$ be subwords of the corresponding pieces. Thus, there exist words $s_1$ and $s_2$ such that the corresponding regular factorings take the forms $(us_1)\star v_1$ and $(us_2)\star v_2$. Consequently, the regular bracketing of $w$ is determined by the corresponding nested brackets.

Inclusion compositions
Another important property of regular words involves interesting and useful Lie algebra manipulations when a regular subword is known. This will lead us to the notion of inclusion compositions, which will be defined shortly. This notion is motivated by the following.
[For the case $\ell=0$, we interpret $\Phi$ as the identity map, or the empty composition of maps.]

Proof. Let $w\in\langle A,B\rangle$ be regular, and let $u$ be a regular subword of $w$. We use induction on $|w|$. Suppose that all words of length strictly less than $|w|$ satisfy the statement. If $w=u_1\star v_1$, then the inductive hypothesis applies to $u_1$ and to $v_1$. We consider cases according to Proposition 3.10. If $u$ is a subword of $v_1$, then, by the inductive hypothesis, there exists a word $z$ such that $uz$ is regular and $[v_1]$ has the required form for some regular words $z_1,z_2,\ldots,z_\ell$ and some $\epsilon_1,\epsilon_2,\ldots,\epsilon_\ell\in\{-1,1\}$; applying $\operatorname{ad}[u_1]$ then yields the desired form for $[w]$. If $u$ is a subword of $u_1$, then, by the inductive hypothesis, there exists a word $z$ such that $uz$ is regular and $[u_1]$ has the required form, and, consequently, so does $[w]$. The final case is when $u$ is a beginning of $w$ that intersects $v_1$. Here, $w=uz$ for some word $z$, and $[w]=\Phi([uz])$, where $\Phi$ is the identity map. This completes the proof.
A statement similar to Proposition 3.11 was briefly remarked upon in [23, p. 38], but we aim here for a more precise articulation of the statement, because it shall be crucial in a later definition. Also, the version in [23, p. 38] does not make use of nested adjoint maps. [Recall Section 2.1.] In this author's opinion, the concept being expressed in [23, p. 38] is better comprehended or appreciated when expressed in terms of nested adjoint maps, as in Proposition 3.11 above.

Proposition 3.12 ([23, Theorem 2.8.5]). For any nonempty $w\in\langle A,B\rangle$, there exists a unique finite sequence $w_1,w_2,\ldots,w_\ell$ of regular words such that $w=w_1w_2\cdots w_\ell$ and $w_1\leq w_2\leq\cdots\leq w_\ell$. In this case, we say that $w=w_1w_2\cdots w_\ell$ is the regular decomposition of the word $w$. In particular, if $w$ is regular, then $\ell=1$, in which case the regular decomposition of $w$ is said to be trivial.
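The regular decomposition of Proposition 3.12 can likewise be computed greedily, by repeatedly splitting off the longest regular beginning (a mirror image of the classical Chen-Fox-Lyndon factorization). The sketch below is our own code, under the same assumptions as the earlier snippets (alphabet {'A','B'}, ordering A > B):

```python
ORD = {'A': 1, 'B': 0}

def is_regular(w):
    """Definition 3.1: w is strictly greatest among its cyclic rotations."""
    key = lambda s: [ORD[c] for c in s]
    return w != '' and all(key(w) > key(w[i:] + w[:i]) for i in range(1, len(w)))

def regular_decomposition(w):
    """Proposition 3.12: factor w as w1 w2 ... wl with every wi regular,
    greedily taking the longest regular beginning at each step."""
    factors = []
    while w:
        i = max(j for j in range(1, len(w) + 1) if is_regular(w[:j]))
        factors.append(w[:i])
        w = w[i:]
    return factors
```

A regular word is its own trivial decomposition, e.g. `regular_decomposition('AAB')` gives `['AAB']`, while `regular_decomposition('BAAB')` gives `['B', 'AAB']`.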
The Lie subalgebra $\operatorname{Lie}\langle A,B\rangle$ of $\mathbb{F}\langle A,B\rangle$ generated by $\{A,B\}$ is the free Lie algebra on $\{A,B\}$. That is, $\operatorname{Lie}\langle A,B\rangle$ has the canonical universal property in the category of all Lie algebras over $\mathbb{F}$ with the same number of generators, or equivalently, every Lie algebra generated by two elements is isomorphic to a quotient of $\operatorname{Lie}\langle A,B\rangle$. The elements of $\operatorname{Lie}\langle A,B\rangle$ are called the Lie polynomials in $A$, $B$. The significance of regular words, and of the nonassociative regular words derived from them, is because of the following.

Lemma 3.13 ([20, p. 115]). The nonassociative regular words on $A$, $B$ form a basis for $\operatorname{Lie}\langle A,B\rangle$.
The above result is attributed to A. I. Shirshov [4, p. 2], because of the seminal paper [20]. However, the definition of a regular word by its "rotational" property is attributed to Lyndon, because of the classic paper [10]. But still, the significance of regular words and their regular bracketings in the basis theory for free Lie algebras definitely rests on the theorems and constructions in [20]. Thus, regular words are often referred to in the literature as Lyndon-Shirshov words.
The special case $\ell=1$, when $w$ is regular, is not included in the statement of [23, Theorem 2.8.5], but it can be found in the proof [23, p. 35]. In this author's opinion, mentioning this special case, and even defining a term for it, aids in understanding the idea, especially because, in succeeding proofs, the concept will be used in very specific constructions.

Definition 3.14. With reference to the notation in Proposition 3.11, if $z\neq 1$, then suppose that $z$ has the regular decomposition $z=z_1z_2\cdots z_\ell$ according to Proposition 3.12. By the Lie polynomial associated to the pair $(w,u)$, we mean the Lie polynomial that results from replacing $[uz]$ in (7) by $(-\operatorname{ad}[z_\ell])\cdots(-\operatorname{ad}[z_1])([u])$.

By the inclusion composition of the regular word $w$ with its subword $u$, we mean the difference between $[w]$ and the Lie polynomial associated to the pair $(w,u)$. If this difference is 0, then the inclusion composition of $w$ with $u$ is said to be trivial.
The ten inclusion compositions in the following lemma form the heart of this work.
In the traditional theory of Gröbner-type bases for free Lie algebras, there is another type of composition called intersection composition, which, together with the notion of inclusion composition, was originally developed in [21].However, intersection compositions will play no role in the proofs of our main results.
The traditional theory also defines inclusion compositions in terms of linear combinations of nonassociative regular words. In this work, we only consider linear combinations of exactly one nonassociative regular word. Consequently, we shall be dealing only with Lie algebras generated by two elements satisfying relations of the form $[w]=0$, where $w$ is a regular word.
where, in (9), the parameter $\epsilon$ is either 0 or 1. With reference to Section 3.3, the words involved are regular, with regular factorings of the form $(us_1)\star v_1$ and $(us_2)\star v_2$ for some words $s_1$ and $s_2$.

Proof.

(i) Proof of (8). The word $A^{t+1}B$ is a regular subword of $A^{t+2}B$. In particular, if we consider Example 3.6(ii), $A^{t+1}B$ is the longest regular proper ending of $A^{t+2}B$, and so Example 3.8(i) applies. From Definition 3.14, we find that, in order to form the Lie polynomial associated to the pair $(A^{t+2}B,A^{t+1}B)$, we simply retain $[A^{t+1}B]$ in the right-hand side of (18). Thus, we have the trivial inclusion composition (8).
(ii) Proof of (9) and (10). We may simplify the regular bracketing shown in Example 3.8(v), obtaining (19). Another perspective is that, if we let $u=A^hB^j$, then we obtain (20), which is a form apparently more suitable for applying Proposition 3.11 and Definition 3.14 in determining the Lie polynomial associated to the pair $(A^{h+i}B^jA^hB^k, A^{h+i}B^j)$. However, by Proposition 3.12, since $u$ is regular, its regular decomposition is trivial. Thus, the relevant factor in (20) is to be replaced by its bracketing, which gives us the same thing as the right-hand side of (19). Thus, the inclusion composition of $A^{h+i}B^jA^hB^k$ with $A^{h+i}B^j$ is trivial. Using arguments similar to those used in part (i) of this proof, the inclusion composition of $A^{h+i}B^jA^hB^k$ with $A^hB^k$ is also trivial. The result is (9) and (10).
(iii) Proof of (11) and (12). Following the third part of the proof of Proposition 3.11, for the regular word $A^2B^{t+1}$ and its regular subword $A^2B^t$, we find that $A^2B^{t+1}=\Phi(A^2B^t\,z)$, where $z=B$ and $\Phi$ is the identity map. Following Proposition 3.12, the regular decomposition of $z$ is simply $z=B$, and we further have (21). By Definition 3.14, in order to form the Lie polynomial associated to the pair $(A^2B^{t+1},A^2B^t)$, we replace $A^2B^t\cdot B$ in (21) by $(-\operatorname{ad}B)([A^2B^t])$, obtaining (22), while from Example 3.8(i), we obtain (23). We subtract (22) from (23), and after routine computations that make use of the Jacobi identity and the skew-symmetry of the Lie bracket, we obtain the inclusion composition, which reduces to the trivial inclusion composition (11) if $t=1$; but if $t\geq 2$, then, using Example 3.8(i)-(ii), we get (12).
(v) Proof of (14)-(17). The regular bracketing (6) of $w$ may be rewritten in two other ways. Following Proposition 3.11 and Definition 3.14, the resulting equations imply that the inclusion compositions in the left-hand sides of (14)-(17) are indeed trivial.
Lemma 3.16. Let $u$ and $w$ be regular words such that $u$ is a subword of $w$, and let $\mathcal{I}$ be a Lie ideal of $\operatorname{Lie}\langle A,B\rangle$.

(i) If the inclusion composition of $w$ with $u$ is trivial and $[u]\in\mathcal{I}$, then $[w]\in\mathcal{I}$.

(ii) If $[u]\in\mathcal{I}$ and $[w]\in\mathcal{I}$, then the inclusion composition of $w$ with $u$ is an element of $\mathcal{I}$.

A Lie ideal of $\operatorname{Lie}\langle A,B\rangle$ and its normal complement
As according to Lemma 3.13, the nonassociative regular words form a basis for $\operatorname{Lie}\langle A,B\rangle$, and at this point, we partition this basis into two kinds. What shall be important in the subsequent development of the Lie structure theory for the Heisenberg-Weyl algebra are the nonassociative regular words listed in (25); all the nonassociative regular words not in (25) span a vector subspace $\overline{S}_{\mathrm{reg}}$ of $\operatorname{Lie}\langle A,B\rangle$ such that, with $S_{\mathrm{reg}}$ denoting the span of (25), we have the direct sum decomposition $\operatorname{Lie}\langle A,B\rangle=S_{\mathrm{reg}}\oplus\overline{S}_{\mathrm{reg}}$.
Later we shall need to classify the aforementioned basis elements of $\overline{S}_{\mathrm{reg}}$, and for this we need the necessary condition for the regularity of a word from Lemma 3.9. By some routine argument, the set of all nonassociative regular words may be partitioned using the equivalence relation under which two nonassociative regular words $[w]$ and $[w']$ are related if and only if $\deg_{BA}w=\deg_{BA}w'$. [Recall how this number was defined in (3)-(4).] The equivalence class that contains all $[w]$ with $\deg_{BA}w=0$ is precisely the set containing the basis elements of $S_{\mathrm{reg}}$ from (25), together with the elements $[A^{h+1}B^t]$, $h,t\in\mathbb{N}\setminus\{0\}$.
Consequently, all nonassociative regular words $[w]$ with $\deg_{BA}w\geq 2$, together with those in (28), form a basis, which we shall refer to as the regular basis, for $\overline{S}_{\mathrm{reg}}$.

Lemma 3.17. Every regular basis element of $\overline{S}_{\mathrm{reg}}$ is contained in the Lie ideal of $\operatorname{Lie}\langle A,B\rangle$ generated by (29).

Proof. Let $[w]$ be a regular basis element of $\overline{S}_{\mathrm{reg}}$, and let $\Pi$ be the Lie ideal of $\operatorname{Lie}\langle A,B\rangle$ generated by (29). This proof is organized according to the equivalence class, under the equivalence relation defined by (27), to which $[w]$ belongs. In each case, we shall be using an inclusion composition from Lemma 3.15, and then Lemma 3.16, to produce the desired set membership $[w]\in\Pi$.

If $\deg_{BA}w=0$, then $[w]$ is either one of (25), or one of (28), where the former are not regular basis elements of $\overline{S}_{\mathrm{reg}}$, while the latter are. Equivalently, $w$ is a product of a power of $A$ followed by a power of $B$, where both exponents are positive, but that of $A$ is at least 2. If this exponent of $A$ is exactly 2, then $[w]$ is one of the generators (29) of $\Pi$, and we are done. The general case follows by induction on the exponent of $A$, using the appropriate trivial inclusion composition from Lemma 3.15 together with Lemma 3.16(i) at each step.

Suppose next that $\deg_{BA}w=1$, say $w=A^{h+i}B^jA^hB^k$ for some $h,i,j,k\in\mathbb{N}$ with $h,j,k$ positive. If $h\geq 2$, then, given $\epsilon\in\{0,1\}$ from (9), both $h$ and $h+\epsilon$ are at least 2, and we have $[A^{h+\epsilon}B^j],[A^hB^k]\in\Pi$, according to the previous case. Using the trivial inclusion composition (9) or (10), and Lemma 3.16(i), $[A^{h+i}B^jA^hB^k]\in\Pi$. The trivial inclusion composition (9) may also be used for the subcase $h=1$ and $i\geq 1$. We now consider the subcase $h=1$ and $i=0$, that is, $w=AB^jAB^k$ with $j<k$. We use induction on $j$. If $j=1$, then we use the inclusion composition (12), in which $[A^2B^{t+1}],[A^2B^t]\in\Pi$; by Lemma 3.16(ii), $[w]\in\Pi$. Suppose that, for some positive integer $j$, we have $[AB^jAB^s]\in\Pi$ for any integer $s>j$. To proceed with the inductive step at $j+1$, we assume that $s>j+1$, so that, by Example 3.3(ii), $AB^{j+1}AB^s$ is regular. From $s>j+1$, we get $s+1>s>j+1>j$. Thus, both $s+1>j$ and $s>j$ are true. By the inductive hypothesis, $[AB^jAB^{s+1}],[AB^jAB^s]\in\Pi$, and, using the inclusion composition (13) and Lemma 3.16(ii), we obtain $[AB^{j+1}AB^s]\in\Pi$, which completes the induction, and also the proof for the case $\deg_{BA}w=1$.

We now consider the case $\deg_{BA}w\geq 2$, and we use induction on $\deg_{BA}w$. First, we recall the notation in Section 3.3 and Lemma 3.15: $w=A^{h_1}B^{t_1}A^{h_2}B^{t_2}\cdots A^{h_r}B^{t_r}$ for some positive integers $h_1,t_1,h_2,t_2,\ldots,h_r,t_r$, where $\deg_{BA}w=r-1$. If the biggest value among $h_2,h_3,\ldots,h_r$ has its first occurrence at index $p$, then $h_1\geq h_p$, and we consider subwords of $w$ according to the position of $A^{h_p}$: these subwords are regular, with regular factorings as in (31)-(32) for some words $s_1$ and $s_2$. Also, we may rewrite (5) as (33). By Lemma 3.9, the regular words that appear in the regular factorings (31)-(32) may also be expressed in the form (30). A routine argument may be used to show that the corresponding $\deg_{BA}$ values agree. Also, (33) may be rewritten as (34), which shows that one occurrence of $BA$ lies between consecutive factors. Consequently, each of the said subwords has $\deg_{BA}$ strictly smaller than $\deg_{BA}w$; we now proceed with the use of the inductive hypothesis, and, by Lemma 3.16(i), $[w]\in\Pi$. This does not cover the case when $\deg_{BA}v=0$ for some of the said subwords $v$. Suppose that we are indeed in such a case. If $h_1\geq h_p$, then $v$ can only be one of the two distinguished subwords, both of which we assume to have zero occurrences of $BA$. Thus, there exist $a_1,b_1,a_2,b_2\in\mathbb{N}$ such that these subwords are $A^{a_1}B^{b_1}$ and $A^{a_2}B^{b_2}$, so that (33) takes the form (35). A routine argument may be used to show that the regularity of $w$ implies $a_2\neq 0$. But then, we see from (35) that $\deg_{BA}w=0$ if one of $b_1,a_2$ is zero, or $\deg_{BA}w=1$ if $b_1,a_2$ are both nonzero, and both cases have already been dealt with earlier. An analogous argument may be used for the choices of $v$ when $h_1<h_p$, and the proof is complete.

Finally, suppose that $x\notin\overline{S}_{\mathrm{reg}}$ and, for a contradiction, that $x\in\ker\Phi$. Then $0=\sum_{i=1}^{n}\gamma_i\,\Phi(v_i)$, where one of the scalars $\gamma_i$ is nonzero; but, according to the assumption (i), $\Phi(v_1),\Phi(v_2),\ldots,\Phi(v_n)$ are linearly independent, a contradiction. Hence, $x\notin\ker\Phi$. We have thus proven that $x\notin\overline{S}_{\mathrm{reg}}$ implies $x\notin\ker\Phi$, and we may augment (37) accordingly. Therefore, $\ker\Phi=\Pi$.

Some Lie structure theorems
The defining relation $AB=BA+1$ for the Heisenberg-Weyl algebra may be used to replace any occurrence of $AB$ in a word on $\{A,B\}$ by $BA+1$. After using the distributivity laws, the new linear combination of words on $\{A,B\}$ may be checked for any occurrence of $AB$, which again may be replaced by $BA+1$. This process terminates, and the result is a linear combination of words on $\{A,B\}$ with $\deg_{AB}=0$. That this process indeed terminates is guaranteed by the Diamond Lemma for Ring Theory [2, Theorem 2.1]. More precisely, the Diamond Lemma may be used to show that the elements $B^kA^l$, $k,l\in\mathbb{N}$, form a basis for $\mathcal{H}$. Given $m,n\in\mathbb{N}$, the relation may be used to rewrite the product of any two basis elements from (39) as a linear combination of the elements (39). That is, the formula (40) may be used to compute the structure constants of the algebra $\mathcal{H}$. One of the earliest appearances of the formula (40) in the literature is [19, Equation (11)], which has an operator-theoretic proof. Since the defining relation for $\mathcal{H}$ is equivalent to $B(-A)=(-A)B+1$, there exists an algebra homomorphism $\varphi:\mathcal{H}\longrightarrow\mathcal{H}$ determined by the images of the generators. Using (40), each basis element of $\mathcal{H}$ from (39) is the image under $\varphi$ of some element of $\mathcal{H}$, and so, by some routine arguments, $\varphi$ is surjective. Also, $\varphi^4$ is the identity map, where by exponentiation we mean function composition of $\varphi$ with itself. Consequently, $\varphi^3$ serves as an inverse for $\varphi$. Thus, $\varphi$ is an isomorphism. The idea that such an isomorphism exists had one of its first appearances also in the paper [19], but it was not articulated in algebraic terms, being based instead on the vague idea of "substituting" for $A$ and $B$ some other objects which, in our description above, amount to $\varphi(A)$ and $\varphi(B)$ [19, Equation (12)].
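The basis (39) and the structure-constant formula (40) can be probed computationally. The sketch below is our own; since (40) cannot be reproduced verbatim here, we use the classical normal-ordering identity $A^mB^n=\sum_{i}\binom{m}{i}\binom{n}{i}\,i!\;B^{n-i}A^{m-i}$ as a stand-in and check it against direct reduction via $AB\to BA+1$:

```python
from math import comb, factorial

def normal_form(word):
    """Reduce a word on {'A', 'B'} via AB -> BA + 1 into the basis
    {B^k A^l}; returns {(k, l): integer coefficient}."""
    terms, out = {word: 1}, {}
    while terms:
        w, c = terms.popitem()
        i = w.find('AB')
        if i == -1:                      # w is already of the form B^k A^l
            key = (w.count('B'), w.count('A'))
            out[key] = out.get(key, 0) + c
        else:                            # apply both parts of the rewrite
            for nw in (w[:i] + 'BA' + w[i + 2:], w[:i] + w[i + 2:]):
                terms[nw] = terms.get(nw, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def product_formula(m, n):
    """Closed form for A^m B^n in the basis {B^k A^l} (our stand-in
    for (40): a standard normal-ordering identity)."""
    return {(n - i, m - i): comb(m, i) * comb(n, i) * factorial(i)
            for i in range(min(m, n) + 1)}
```

For example, `normal_form('AABB')` gives `{(2, 2): 1, (1, 1): 4, (0, 0): 2}`, i.e. $A^2B^2=B^2A^2+4BA+2$, in agreement with `product_formula(2, 2)`.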
The Heisenberg-Weyl Lie algebra is the Lie algebra generated by two elements satisfying the relation $AB - BA = 1$. We immediately find that this Lie algebra is isomorphic to the Lie subalgebra of $\mathcal{H}$ generated by $A$ and $B$. Also, routine verification shows that this subalgebra is three-dimensional, and is, in fact, one of the classical low-dimensional Lie algebras. Thus, the algebra generators $A$ and $B$ are not able to generate the whole Lie algebra $\mathcal{H}$. It turns out that two additional generators are needed.
for any $n \in \mathbb{N}$. We now consider those basis elements $B^k A^l$ where $k$ and $l$ are both nonzero, or are both zero. We use induction on $k + l$. The smallest possibility is $k + l = 0$, and by the defining relation of $\mathcal{H}$, $B^0 A^0 = 1 = [A, B] \in L$. Suppose that every basis element $B^s A^t$ with $s + t < k + l$ is an element of $L$. By routine computations that make use of (40), $[A^{l+1}, B^{k+1}] = \sum_{j=1}^{\min\{k+1,\,l+1\}} j! \binom{l+1}{j} \binom{k+1}{j} B^{k+1-j} A^{l+1-j}$. But by the previous case, $B^{k+1}$ and $A^{l+1}$ are elements of $L$, and so is their Lie bracket. The inductive hypothesis also guarantees that $B^{k+1-j} A^{l+1-j} \in L$ for all $j \in \{2, 3, \ldots, \min\{k+1, l+1\}\}$. Thus, we find from (44) that $B^k A^l \in L$. This completes the proof.
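Small instances of this induction can be checked mechanically. The sketch below (plain Python; the string encoding of normally ordered words and all names are our own device, not the paper's notation) computes brackets via the relation $AB = BA + 1$ and verifies the base brackets $[A, B] = 1$, $[A, B^2] = 2B$, $[A^2, B] = 2A$, together with the inductive instances $[A^2, B^2] = 4BA + 2$ and $[A^3, B^2] = 6BA^2 + 6A$, in which the desired mixed basis element appears with the nonzero leading coefficient $(l+1)(k+1)$:

```python
from collections import defaultdict

def normal_order(word):
    """Linear combination (word -> coefficient) equal to `word` in the
    basis of normally ordered words B^k A^l, via AB = BA + 1."""
    acc = defaultdict(int)
    stack = [(word, 1)]
    while stack:
        w, c = stack.pop()
        i = w.find('AB')
        if i == -1:
            acc[w] += c
        else:
            stack.append((w[:i] + 'BA' + w[i + 2:], c))  # AB -> BA
            stack.append((w[:i] + w[i + 2:], c))         # AB -> 1
    return acc

def bracket(u, v):
    """[u, v] = uv - vu for words u, v, expressed in normally ordered form."""
    acc = defaultdict(int)
    for w, c in normal_order(u + v).items():
        acc[w] += c
    for w, c in normal_order(v + u).items():
        acc[w] -= c
    return {w: c for w, c in acc.items() if c}

assert bracket('A', 'B') == {'': 1}                 # [A, B] = 1
assert bracket('A', 'BB') == {'B': 2}               # [A, B^2] = 2B
assert bracket('AA', 'B') == {'A': 2}               # [A^2, B] = 2A
assert bracket('AA', 'BB') == {'BA': 4, '': 2}      # [A^2, B^2] = 4BA + 2
assert bracket('AAA', 'BB') == {'BAA': 6, 'A': 6}   # [A^3, B^2] = 6BA^2 + 6A
```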
From this point onward, we assume that the characteristic of the field F is zero.
then $\mathfrak{g}$ is a Lie subalgebra of $\mathcal{H}$, with a presentation by generators, among them $\Omega$, and relations. Furthermore, for all $n \in \mathbb{N}\backslash\{0\}$, the Lie subalgebras of $\mathfrak{g}$ (other than $\mathfrak{g}$ itself) in the lower central series are given explicitly. Hence, $\mathfrak{g}$ is non-nilpotent but solvable.
Proof. The spanning elements (45) of $\mathfrak{g}$ are among the basis elements (39) of $\mathcal{H}$. Thus, the spanning elements (45) are linearly independent, and hence form a basis for $\mathfrak{g}$. Let , be any two of said basis elements. If both are powers of , or if

But since ≥ , we have + 1 ≥ + 1, and the right-hand side of (50) is an element of $G_n$. We have thus shown $[\mathfrak{g}, G_{n-1}] \subseteq G_n$. The other set inclusion also follows from (50), which shows that every spanning set element of $G_n$ is equal to a scalar multiple of the left-hand side of (50).

Then the linear span of (53) is equal to the direct sum $\mathfrak{g} \oplus \mathfrak{k}$, where $\mathfrak{k}$ denotes the linear span of (54). To complete the proof, we only need to show $\mathfrak{k} = \varphi(\mathfrak{g})$. The linear independence of (53) implies the linear independence of (54), and so the spanning elements (54) of $\mathfrak{k}$ form a basis for $\mathfrak{k}$. Using (40), we compute (55)-(57). By (55) and (56), every basis element of $\mathfrak{k}$ is in $\varphi(\mathfrak{g})$, while by (56) and (57), the image, under $\varphi$, of every basis element of $\mathfrak{g}$ is in $\mathfrak{k}$. Thus, $\mathfrak{k} = \varphi(\mathfrak{g})$, and this completes the proof.
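The "non-nilpotent but solvable" behavior established above is the same phenomenon exhibited by the smallest classical example: the two-dimensional Lie algebra with basis $h$, $x$ and $[h, x] = x$, whose lower central series stabilizes at $\operatorname{span}\{x\} \neq 0$ while its derived series reaches zero. A quick check with $2 \times 2$ matrix representatives (our own choice, independent of this paper):

```python
def matmul(a, b):
    # product of 2x2 matrices given as nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lie(a, b):
    # Lie bracket [a, b] = ab - ba
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

h = [[1, 0], [0, 0]]   # h = E11
x = [[0, 1], [0, 0]]   # x = E12

assert lie(h, x) == x                 # [h, x] = x
# lower central series keeps producing span{x}, so it never reaches 0:
# the algebra is not nilpotent
assert lie(h, lie(h, x)) == x
# the derived algebra span{x} is abelian, so the derived series reaches 0
# in two steps: the algebra is solvable
assert lie(x, x) == [[0, 0], [0, 0]]
```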

Further directions
At this point, one continuation of this study that we can suggest is the exploration of the effect of intersection compositions [4, Definition 4.1(i)], or alternatively, [23, p. 38], on the choice of the Lie subalgebra of $\mathcal{H}$ (perhaps different from $\mathfrak{g}$) which may be used to decompose $\mathcal{H}$ in terms of such a Lie subalgebra and of its image under $\varphi$. Another possibility is the extension, or the development of analogs, of the methods in this study for an arbitrary $q$-deformed Heisenberg algebra.

Example 3.3.
Proposition 3.2 has the following consequences, the proofs of which are routine. (i) By induction, the word is regular for any , ∈ $\mathbb{N}\backslash\{0\}$. (ii) Given positive integers ℎ, , , the word ℎ ℎ is regular if and only if < .
Definition 3.7. Define := and := . If is a regular word with length at least 2 and if, using Lemma 3.5, = ★ , then := [ , ]. We shall refer to as the regular bracketing of . We say that ∈ F is a nonassociative regular word (on , ) if there exists a regular word such that = .

Example 3.8. (i) Using Example 3.6(i)-(ii) and induction, for any , ∈ $\mathbb{N}$ with ≥ 1, we have ..., and we prove this by induction on | | ≥ 2. If | | = 2, then the only possible words equal to are 2 , . If is a regular subword of a regular word , then there exists a word such that is regular, and, for some ∈ $\mathbb{N}$, there exist regular words 1 , 2 , . . ., and some 1 , 2 , . . ., ∈ {−1, 1} such that if ′ of such that = ′. In this case, we say that is a beginning of that intersects .

Proposition 3.11.

Theorem 4.1.
As a Lie algebra, $\mathcal{H}$ is generated by $A$, $B$, $A^2$, $B^2$.

Proof. Let $L$ be the Lie subalgebra of $\mathcal{H}$ generated by $A$, $B$, $A^2$, $B^2$. Thus, $L \subseteq \mathcal{H}$, and we only need to show $\mathcal{H} \subseteq L$, but this reduces to showing that every basis element in (39) is in $L$. Concerning those basis elements $B^k A^l$ where exactly one of $k$ or $l$ is zero, induction and the relation (40) may be used to show that