Polynomial matrix spectral factorization

Polynomial matrices are widely studied in the fields of systems theory and control theory and have seen other uses relating to stable polynomials. In stability theory, spectral factorization has been used to find determinantal matrix representations for bivariate stable polynomials and real zero polynomials.[1] A key tool used to study these is a matrix factorization known as either the polynomial matrix spectral factorization or the matrix Fejér–Riesz theorem.

Given a univariate positive polynomial $p(t)$, that is, a polynomial which takes on non-negative values for any real input $t$, the Fejér–Riesz theorem yields the polynomial spectral factorization $p(t) = |q(t)|^2 = q(t)\overline{q(t)}$. Results of this form are generically referred to as Positivstellensatz. Considering positive definiteness as the matrix analogue of positivity, polynomial matrix spectral factorization provides a similar factorization for polynomial matrices which take positive definite values on the real line. This decomposition also relates to the Cholesky decomposition $A = LL^*$ for scalar matrices. This result was originally proven by Norbert Wiener[2] in a more general context, concerned with integrable matrix-valued functions that also have integrable log determinant. Because applications are often concerned with the polynomial restriction, simpler proofs and individual analysis exist focusing on this case.[3] Weaker positivstellensatz conditions have been studied, specifically considering when the polynomial matrix has positive definite image on semi-algebraic subsets of the reals.[4] Many recent publications have focused on streamlining proofs for these related results.[5][6] This article roughly follows the recent proof method of Lasha Ephremidze[7] which relies only on elementary linear algebra and complex analysis.
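
For example, the positive polynomial $p(t) = t^4 + 5t^2 + 4$ has roots $\pm i$ and $\pm 2i$; collecting one root from each conjugate pair yields a spectral factor:

$$p(t) = \left|(t - i)(t - 2i)\right|^2 = \left|t^2 - 3it - 2\right|^2 \quad \text{for } t \in \mathbb{R}.$$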

Spectral factorization is used extensively in linear–quadratic–Gaussian control. Because of this application there have been many algorithms to calculate spectral factors.[8] Some modern algorithms focus on the more general setting originally studied by Wiener.[9] In the $1 \times 1$ case the problem is known as polynomial spectral factorization, or the Fejér–Riesz theorem, and has many classical algorithms. Some modern algorithms have used advances for Toeplitz matrices to speed up factor calculations.[10]
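
The root-collection idea behind the classical scalar algorithms can be sketched in a few lines of Python. This is only a minimal illustration built on numpy's root finder, not one of the cited algorithms:

```python
import numpy as np

def scalar_spectral_factor(p):
    """Given coefficients of p (highest degree first) with p(t) >= 0 for
    all real t, return coefficients of q with p(t) = |q(t)|^2 on the real
    line and all roots of q in the closed upper half plane."""
    roots = np.roots(p)
    # Roots come in conjugate pairs (real roots occur with even
    # multiplicity), so sorting by imaginary part and keeping the top
    # half selects one root from each pair, preferring the upper half plane.
    upper = sorted(roots, key=lambda z: z.imag)[len(roots) // 2:]
    return np.sqrt(p[0].real) * np.poly(upper)  # leading coefficient of p is positive

# p(t) = t^4 + 5t^2 + 4 gives q(t) = t^2 - 3it - 2, up to rounding.
print(scalar_spectral_factor([1, 0, 5, 0, 4]))
```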

Statement

Let $P(t)$ be an $n \times n$ polynomial matrix where each entry is a complex coefficient polynomial of degree at most $2N$. Suppose that for almost all $t \in \mathbb{R}$ we have that $P(t)$ is a positive definite Hermitian matrix. Then there exists an $n \times n$ polynomial matrix $Q(t)$ such that $P(t) = Q(t)Q(t)^*$ for all $t \in \mathbb{R}$, where $Q(t)^*$ denotes the conjugate transpose. We can furthermore find $Q(t)$ which is nonsingular on the lower half plane.

Extension to complex inputs

Note that if $t$ is real then $\overline{Q(\bar t)}^T = Q(t)^*$. When $p(t)$ is a complex coefficient polynomial or complex coefficient rational function, then $\bar p(t) := \overline{p(\bar t)}$ is also a polynomial or rational function respectively. For $P(t) = Q(t)Q(t)^*$ on the real line we have

$$P(z) = Q(z)\,\overline{Q(\bar z)}^T \quad \text{for all } z \in \mathbb{C}.$$

This is because of the following observation: since the entries of $P(z)$ and $Q(z)\,\overline{Q(\bar z)}^T$ are complex polynomials which agree on the real line, they are in fact the same polynomials. We can conclude they in fact agree for all complex inputs.
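
As a scalar illustration of this extension: if $q(t) = t + i$ then

$$\bar q(t) := \overline{q(\bar t)} = t - i, \qquad q(t)\,\bar q(t) = t^2 + 1,$$

which agrees with $|q(t)|^2$ for real $t$ and extends it polynomially to all complex inputs.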

Rational spectral factorization

Let $p(t)$ be a rational function where $p(t) > 0$ for almost all $t \in \mathbb{R}$. Then there exists a rational function $q(z)$ such that $p(t) = |q(t)|^2$ for all real $t$ and $q(z)$ has no poles or zeroes in the lower half plane. This decomposition is unique up to multiplication by complex scalars of norm $1$. This is related to the statement of the polynomial matrix spectral factorization theorem restricted to the $1 \times 1$ case.

To prove existence, write $p(t) = c\,\frac{\prod_i (t - \alpha_i)}{\prod_j (t - \beta_j)}$, where the roots $\alpha_i$ and poles $\beta_j$ form disjoint sets. Letting $t \to \infty$, we can conclude that $c$ is real and positive. Dividing out by $c$ we reduce to the monic case. The numerator and denominator have distinct sets of roots, so all real roots which show up in either must have even multiplicity (to prevent a sign change locally). We can divide out these real roots to reduce to the case where $p$ has only complex roots and poles. By hypothesis we have $\overline{p(\bar t)} = p(t)$. Since all of the $\alpha_i$ and $\beta_j$ are complex (and hence not fixed points of conjugation), they both come in conjugate pairs. For each conjugate pair, pick the zero or pole in the upper half plane and accumulate these factors to obtain $q(t)$. The uniqueness result follows in a standard fashion.
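
For instance, the positive rational function $p(t) = \frac{t^2+1}{t^2+4}$ factors as

$$p(t) = q(t)\,\overline{q(\bar t)}, \qquad q(z) = \frac{z - i}{z - 2i},$$

where the zero $i$ and the pole $2i$ of $q$ both lie in the upper half plane, as the construction requires.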

Cholesky decomposition

The inspiration for this result is a factorization which characterizes positive definite matrices.

Decomposition for scalar matrices

Given any positive definite scalar matrix $A$, the Cholesky decomposition allows us to write $A = LL^*$ where $L$ is a lower triangular matrix. If we don't restrict to lower triangular matrices we can consider all factorizations of the form $A = BB^*$. It is not hard to check that all such factorizations are achieved by looking at the orbit of $L$ under right multiplication by a unitary matrix, $B = LU$.

To obtain the lower triangular decomposition we induct by splitting off the first row and first column:

$$\begin{pmatrix} a_{11} & A_{21}^* \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} l_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} \overline{l_{11}} & L_{21}^* \\ 0 & L_{22}^* \end{pmatrix}$$

Solving these equations in terms of the blocks of $A$ we get

$$l_{11} = \sqrt{a_{11}}, \qquad L_{21} = \frac{1}{l_{11}}A_{21}, \qquad L_{22}L_{22}^* = A_{22} - A_{21}\frac{1}{a_{11}}A_{21}^*.$$

Since $A$ is positive definite, $a_{11}$ is a positive real number, so it has a square root. The last condition follows by induction, since the right hand side is the Schur complement of $a_{11}$ in $A$, which is itself positive definite.
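
This induction translates directly into a short recursive routine. Below is a minimal Python sketch using numpy; numpy.linalg.cholesky is the production counterpart:

```python
import numpy as np

def cholesky(A):
    """Lower triangular L with A = L L*, for a Hermitian positive
    definite matrix A, following the splitting used above."""
    A = np.asarray(A, dtype=complex)
    L = np.zeros_like(A)
    if A.shape[0] == 0:
        return L
    a11 = A[0, 0].real              # positive since A is positive definite
    L[0, 0] = np.sqrt(a11)
    L[1:, 0] = A[1:, 0] / L[0, 0]   # L21 = A21 / l11
    # Recurse on the Schur complement A22 - A21 (1/a11) A21*,
    # which is again Hermitian positive definite.
    L[1:, 1:] = cholesky(A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:]) / a11)
    return L

A = np.array([[4, 2j], [-2j, 3]])
L = cholesky(A)
assert np.allclose(L @ L.conj().T, A)   # here L = [[2, 0], [-1j, sqrt(2)]]
```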

Decomposition for rational matrices

Now consider $P(t) = \left(p_{ij}(t)\right)$ where the $p_{ij}(t)$ are complex rational functions and $P(t)$ is positive definite Hermitian for almost all real $t$. Then by the symmetric Gaussian elimination we performed above, all we need to show is that there exists a rational $q(t)$ such that $|q(t)|^2 = p_{11}(t)$ for real $t$, which follows from our rational spectral factorization. Once we have that, we can solve for $L_{21}(t) = P_{21}(t)\,\overline{q(\bar t)}^{\,-1}$. Since the Schur complement $P_{22}(t) - P_{21}(t)\,p_{11}(t)^{-1}P_{12}(t)$ is positive definite for real $t$ away from the poles and the Schur complement is a rational polynomial matrix, we can induct to find $L_{22}(t)$.

It is not hard to check that we in fact get $P(t) = L(t)L(t)^*$ for real $t$, where $L(t)$ is a lower triangular rational polynomial matrix with no poles in the lower half plane.
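
One step of this induction, written out in block form (here $q$ is a rational spectral factor of $p_{11}$ with no poles or zeroes in the lower half plane, and $\bar q(t) = \overline{q(\bar t)}$):

$$\begin{pmatrix} p_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix} = \begin{pmatrix} q & 0 \\ P_{21}\,\bar q^{-1} & L_{22} \end{pmatrix} \begin{pmatrix} \bar q & q^{-1}P_{12} \\ 0 & L_{22}^{*} \end{pmatrix}, \qquad L_{22}L_{22}^{*} = P_{22} - P_{21}\,p_{11}^{-1}\,P_{12},$$

with all identities holding on the real line; the factor $L_{22}$ of the Schur complement is produced by the induction.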

Extension to polynomial decompositions

To prove the existence of polynomial matrix spectral factorization, we begin with the rational polynomial matrix Cholesky decomposition and modify it to remove lower half plane singularities. Namely, given $P(t)$ with each entry a complex coefficient polynomial, we have a rational polynomial matrix $L(t)$ with $P(t) = L(t)L(t)^*$ for real $t$, where $L(t)$ has no lower half plane poles. Given a rational polynomial matrix $U(t)$ which is unitary valued for real $t$, there exists another decomposition $P(t) = \left(L(t)U(t)\right)\left(L(t)U(t)\right)^*$.

Removing lower half-plane singularities

If $\det L(a) = 0$ for some $a$ in the open lower half plane, then there exists a scalar unitary matrix $U$ such that $L(a)U$ has zero first column. This implies $L(t)U$ has first column vanishing at $t = a$. To remove the singularity at $a$ we multiply on the right by

$$\operatorname{diag}\left(\frac{t - \bar a}{t - a},\ 1,\ \ldots,\ 1\right).$$

The product $L(t)\,U\operatorname{diag}\left(\frac{t-\bar a}{t-a}, 1, \ldots, 1\right)$ has determinant with one less zero (by multiplicity) at $a$, without introducing any poles in the lower half plane in any of the entries.
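
The determinant bookkeeping behind this step: since $\det U$ is a unimodular constant,

$$\det\left(L(t)\,U\operatorname{diag}\left(\tfrac{t-\bar a}{t-a}, 1, \ldots, 1\right)\right) = \det(U)\,\frac{t-\bar a}{t-a}\,\det L(t),$$

so each application trades a determinant zero at $a$ in the lower half plane for one at $\bar a$ in the upper half plane, and after finitely many steps the determinant has no lower half plane zeroes.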

Extend analyticity to all of C

After these modifications, the decomposition $P(t) = Q(t)Q(t)^*$ satisfies: $Q(z)$ is analytic and invertible on the lower half plane. To extend analyticity to the upper half plane we need this key observation: given a rational matrix $Q(z)$ which is analytic in the lower half plane and nonsingular in the lower half plane, $Q(z)^{-1}$ is analytic and nonsingular in the lower half plane. The analyticity follows from the adjugate matrix formula $Q^{-1} = \frac{1}{\det Q}\operatorname{adj}(Q)$ (since both the entries of $\operatorname{adj}(Q(z))$ and $\frac{1}{\det Q(z)}$ are analytic on the lower half plane). The nonsingularity follows from $\det\left(Q(z)^{-1}\right) = \frac{1}{\det Q(z)}$, which can only have zeroes at places where $\det Q(z)$ has poles. The determinant of a rational polynomial matrix can only have poles where its entries have poles, so $\det Q(z)$ has no poles in the lower half plane.

From our observation in Extension to complex inputs, we have $P(z) = Q(z)\,\overline{Q(\bar z)}^T$ for all complex numbers $z$. This implies $\overline{Q(\bar z)}^T = Q(z)^{-1}P(z)$. Since $Q(z)^{-1}P(z)$ is analytic on the lower half plane, $\overline{Q(\bar z)}^T$ is analytic on the lower half plane, and hence $Q(z)$ is analytic on the upper half plane. Finally, if $Q(z)$ has a pole on the real line then $\overline{Q(\bar z)}^T$ has the same pole on the real line; since the diagonal entries of $Q(t)Q(t)^*$ are sums $\sum_j |q_{ij}(t)|^2$, such poles cannot cancel in the product, which contradicts the fact that $P(z)$ has no poles on the real line (it is analytic everywhere by hypothesis).

The above shows that if $Q(z)$ is analytic and invertible on the lower half plane, then $Q(z)$ is indeed analytic everywhere and hence a polynomial matrix.

Uniqueness

Given two polynomial matrix decompositions $P(t) = Q_1(t)Q_1(t)^* = Q_2(t)Q_2(t)^*$ which are invertible on the lower half plane, we rearrange to $Q_2(z)^{-1}Q_1(z) = \overline{Q_2(\bar z)}^T\left(\overline{Q_1(\bar z)}^T\right)^{-1}$. Since $Q_1(z)$ is analytic on the lower half plane and $Q_2(z)$ is nonsingular there, $Q_2(z)^{-1}Q_1(z)$ is a rational polynomial matrix which is analytic and invertible on the lower half plane. Then by the same argument as above, $U(t) = Q_2(t)^{-1}Q_1(t)$ is in fact a polynomial matrix which is unitary for all real $t$. This means that if $\left(u_{i1}(t), \ldots, u_{in}(t)\right)$ is the $i$th row of $U(t)$ then $\sum_j |u_{ij}(t)|^2 = 1$. For real $t$ this is a sum of non-negative polynomials which sums to a constant, implying each of the summands is in fact a constant polynomial. Then $Q_1(t) = Q_2(t)U$ where $U$ is a scalar unitary matrix.

Example

Consider the polynomial matrix

$$P(t) = \begin{pmatrix} t^2 + 1 & t \\ t & 1 \end{pmatrix},$$

which is positive definite Hermitian for every real $t$. Then through symmetric Gaussian elimination we get the rational decomposition $P(t) = L(t)L(t)^*$ with

$$L(t) = \begin{pmatrix} t + i & 0 \\ \frac{t}{t - i} & \frac{1}{t - i} \end{pmatrix}.$$

This decomposition has no poles in the lower half plane. However the determinant is $\det L(t) = \frac{t+i}{t-i}$, so we need to modify our decomposition to get rid of the singularity at $t = -i$. First we multiply by a scalar unitary matrix to make a column vanish at $-i$. Consider

$$U = \frac{1}{\sqrt{2}}\begin{pmatrix} -i & -i \\ 1 & -1 \end{pmatrix}.$$

Then we have a new candidate for our decomposition

$$L(t)U = \frac{1}{\sqrt{2}}\begin{pmatrix} -i(t+i) & -i(t+i) \\ \frac{-i(t+i)}{t-i} & -i \end{pmatrix}.$$

Now the first column vanishes at $t = -i$, so we multiply through (on the right) by $\operatorname{diag}\left(\frac{t-i}{t+i}, 1\right)$ to obtain

$$Q(t) = L(t)\,U\operatorname{diag}\left(\tfrac{t-i}{t+i}, 1\right) = \frac{-i}{\sqrt{2}}\begin{pmatrix} t - i & t + i \\ 1 & 1 \end{pmatrix}.$$

Notice $Q(t)Q(t)^* = P(t)$ for real $t$ and $\det Q(t) = i$. This is our desired decomposition with no singularities in the lower half plane.
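
This factorization can be sanity-checked numerically; a minimal Python sketch with numpy, encoding the $P$ and $Q$ just computed:

```python
import numpy as np

def P(t):
    return np.array([[t**2 + 1, t], [t, 1]], dtype=complex)

def Q(t):
    return (-1j / np.sqrt(2)) * np.array([[t - 1j, t + 1j], [1, 1]])

# Q(t) Q(t)* reproduces P(t) on the real line ...
for t in np.linspace(-5.0, 5.0, 11):
    assert np.allclose(Q(t) @ Q(t).conj().T, P(t))

# ... and det Q is the constant i, so Q(z) is nonsingular everywhere,
# in particular on the lower half plane.
assert np.isclose(np.linalg.det(Q(0.0)), 1j)
```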

References

  1. ^ Anatolii Grinshpan, Dmitry S. Kaliuzhnyi-Verbovetskyi, Victor Vinnikov, Hugo J. Woerdeman (2016). "Stable and real-zero polynomials in two variables". Multidimensional Systems and Signal Processing. 27: 1–26. CiteSeerX 10.1.1.767.8178. doi:10.1007/s11045-014-0286-3. S2CID 254860436.
  2. ^ N. Wiener and P. Masani (1957). "The prediction theory of multivariate stochastic processes". Acta Math. 98: 111–150. doi:10.1007/BF02404472.
  3. ^ Tim N.T. Goodman, Charles A. Micchelli, Giuseppe Rodriguez, Sebastiano Seatzu (1997). "Spectral factorization of Laurent polynomials". Advances in Computational Mathematics. 7 (4): 429–454. doi:10.1023/A:1018915407202. S2CID 7880541.
  4. ^ Aljaž Zalar (2016). "Matrix Fejér–Riesz theorem with gaps". Journal of Pure and Applied Algebra. 220 (7): 2533–2548. arXiv:1503.06034. doi:10.1016/j.jpaa.2015.11.018. S2CID 119303900.
  5. ^ Aljaž Zalar (2016). "Matrix Fejér–Riesz theorem with gaps". Journal of Pure and Applied Algebra. 220 (7): 2533–2548. arXiv:1503.06034. doi:10.1016/j.jpaa.2015.11.018. S2CID 119303900.
  6. ^ Lasha Ephremidze (2009). "A Simple Proof of the Matrix-Valued Fejér–Riesz Theorem". Journal of Fourier Analysis and Applications. 15: 124–127. arXiv:0708.2179. CiteSeerX 10.1.1.247.3400. doi:10.1007/s00041-008-9051-z. S2CID 115163568.
  7. ^ Ephremidze, Lasha (2014). "An Elementary Proof of the Polynomial Matrix Spectral Factorization Theorem". Proceedings of the Royal Society of Edinburgh, Section A. 144 (4): 747–751. CiteSeerX 10.1.1.755.9575. doi:10.1017/S0308210512001552. S2CID 119125206.
  8. ^ Thomas Kailath, A. H. Sayed (2001). "A survey of spectral factorization methods". Numerical Linear Algebra with Applications. 8 (6–7): 467–496. doi:10.1002/nla.250. S2CID 30631226.
  9. ^ Gigla Janashia, Edem Lagvilava, Lasha Ephremidze (2011). "A New Method of Matrix Spectral Factorization". IEEE Transactions on Information Theory. 57 (4): 2318–2326. arXiv:0909.5361. doi:10.1109/TIT.2011.2112233. S2CID 3047050.
  10. ^ D.A. Bini, G. Fiorentino, L. Gemignani, B. Meini (2003). "Effective Fast Algorithms for Polynomial Spectral Factorization". Numerical Algorithms. 34 (2–4): 217–227. Bibcode:2003NuAlg..34..217B. doi:10.1023/B:NUMA.0000005364.00003.ea. S2CID 9800222.