Description
I have implemented several linear operators related to solving control problems. An example is the Lyapunov operator L: X -> A*X + X*A', where A is a square matrix. This operator usually acts on symmetric/Hermitian matrices X and is a special case of the more general Sylvester operator S: X -> A*X + X*B, with A and B square matrices (not necessarily of the same size).

The definition of L as a linear operator is possible regardless of whether X is symmetric/Hermitian, and Matrix(L) is the usual Kronecker-expansion-based matrix. However, the definition of the inverse operator inv(L): Y -> X involves the solution of the Lyapunov equation A*X + X*A' + Y = 0, where for a symmetric/Hermitian Y the resulting X is also symmetric/Hermitian. Solvers for Lyapunov equations usually exploit this symmetry by computing, for example, only the upper triangular part of X (the lower triangular part follows by symmetry).

It is possible to define inv(L) while ignoring the symmetry of the input data. Unfortunately, in that case some functions in the LinearOperators collection fail: for example, Matrix(inv(L)) will not generate the inverse operator matrix, because the specialized solvers are restricted to symmetric input. To cope with this, I was forced to use the more general Sylvester solvers to compute the full solution X, with the associated efficiency losses.
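To make the issue concrete, here is a minimal sketch of the underlying math (not the package's Julia API): the Kronecker-expansion matrix of L for a real A, and the inverse applied to a symmetric Y via a Lyapunov solve. SciPy's `solve_continuous_lyapunov` is used here purely as a stand-in for any specialized Lyapunov solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# Kronecker expansion: vec(A X + X A') = (I (x) A + A (x) I) vec(X) for real A.
I = np.eye(n)
M = np.kron(I, A) + np.kron(A, I)

# L is well defined on a nonsymmetric X as well.
X = rng.standard_normal((n, n))
LX = A @ X + X @ A.T
assert np.allclose(M @ X.ravel(order="F"), LX.ravel(order="F"))

# inv(L) applied to a symmetric Y: solve A X + X A' + Y = 0.
Y0 = rng.standard_normal((n, n))
Y = Y0 + Y0.T
X = solve_continuous_lyapunov(A, -Y)
assert np.allclose(A @ X + X @ A.T, -Y)
assert np.allclose(X, X.T)  # the unique solution inherits the symmetry of Y
```

The last assertion is exactly the property the specialized solvers exploit, and why they reject (or mishandle) nonsymmetric right-hand sides such as the unit vectors that Matrix(inv(L)) feeds in column by column.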
I wonder if the problem of restricting the domain of the input data used to form the products L * vec(X) can be addressed somehow, by assuming certain structural constraints on X (e.g., symmetric, Hermitian, or even diagonal).
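One possible direction (an illustration of the idea, not an existing feature of either package) is to represent the operator on a half-vectorization of the symmetric subspace: a symmetric X has only n(n+1)/2 free parameters, L maps symmetric matrices to symmetric matrices, and the restricted matrix is then invertible with the inverse matching the Lyapunov solver. A NumPy sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))

# B maps the n(n+1)/2 free parameters of a symmetric X to vec(X):
# column k is vec(E_ij + E_ji) for the k-th index pair (i <= j).
pairs = [(i, j) for j in range(n) for i in range(j + 1)]
B = np.zeros((n * n, len(pairs)))
for k, (i, j) in enumerate(pairs):
    E = np.zeros((n, n))
    E[i, j] = E[j, i] = 1.0
    B[:, k] = E.ravel(order="F")

# Kronecker matrix of L, restricted to the symmetric subspace.
I = np.eye(n)
M = np.kron(I, A) + np.kron(A, I)
Ms = np.linalg.pinv(B) @ M @ B   # acts on half-vectorized symmetric X

# Inverting the restricted operator reproduces the Lyapunov solution.
Y0 = rng.standard_normal((n, n))
Y = Y0 + Y0.T
y = np.linalg.pinv(B) @ Y.ravel(order="F")   # half-vectorize Y
x = np.linalg.solve(Ms, -y)
X = (B @ x).reshape(n, n, order="F")
assert np.allclose(X, solve_continuous_lyapunov(A, -Y))
```

With such a restricted domain, Matrix(inv(L)) would only ever present symmetric right-hand sides to the solver, so the specialized (upper-triangular) Lyapunov solvers could be used without falling back on the general Sylvester solvers.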
Many thanks in advance for taking the time to consider this question.
Note: The implemented linear operators belong to the recently developed (not yet registered) package MatrixEquation.jl.