Allin Cottrell wrote:
On Sat, 14 Apr 2007, Riccardo (Jack) Lucchetti wrote:
> Thinking a bit more about it, I thought it would be way more
> economical, from a computational viewpoint, to decide whether a
> matrix is idempotent or not simply by a multiplication check,
> because matrix multiplication is much cheaper than the
> eigenproblem. But, may I ask what this check is for?
For sure, we could actually carry out the multiplication and check
whether or not A*A = A. But since we're calculating eigenvalues
anyway, and since calculating A*A directly would require an extra
memory allocation, it struck me that if we could answer this
question using the eigenvalues that would be preferable.
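Just to make the multiplication check concrete (numpy notation purely for illustration, not the actual gretl code, and the tolerance is my own guess):

```python
import numpy as np

def is_idempotent(A, tol=1e-10):
    # Direct check: does A @ A equal A up to a numerical tolerance?
    A = np.asarray(A, dtype=float)
    return np.allclose(A @ A, A, atol=tol)

# An OLS projection matrix P = X (X'X)^{-1} X' is idempotent:
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T
print(is_idempotent(P))                # True
print(is_idempotent(2.0 * np.eye(2)))  # False
```

As noted above, this costs one extra matrix product and an extra allocation, which is the drawback.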
Like you, I'm familiar with the idea that if the symmetric matrix
A is idempotent, then the eigenvalues of A are all either 0 or 1.
But I also wondered if this is or is not a bi-implication; and in
particular I wondered what can be said in the asymmetric case
(where, in general, the eigenvalues could be complex).
From a bit of googling it seems to me that the eigenvalues will always
be 0 or 1 (and real), even for general non-Hermitian (and indeed
complex-valued) matrices. The reason is simple: if A*A = A and
A*v = lambda*v, then lambda^2 * v = A*A*v = A*v = lambda*v, so
lambda^2 = lambda, which forces lambda to be 0 or 1.
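A small numerical illustration (numpy, just as a sketch): an oblique, i.e. non-symmetric, projector still has only 0/1 eigenvalues.

```python
import numpy as np

# A non-symmetric idempotent matrix (an oblique projector): A @ A == A
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(A @ A, A))          # True: idempotent
print(np.sort(np.linalg.eigvals(A)))  # eigenvalues are 0 and 1, and real
```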
So you could first try to rule out idempotency by checking the
eigenvalues, and stop if they're not all 0 or 1. But ultimately, to
assert idempotency, it seems to me that you have to do the
multiplication, because the eigenvalue property is only a necessary,
not a sufficient, condition.
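A standard counterexample makes the "necessary but not sufficient" point (again in numpy notation, just to illustrate):

```python
import numpy as np

# Eigenvalues all in {0, 1} does NOT imply idempotency:
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.linalg.eigvals(B))   # both eigenvalues are 1
print(np.allclose(B @ B, B))  # False: B @ B = [[1, 2], [0, 1]] != B
```

So a matrix can pass the eigenvalue screen and still fail the multiplication check.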
Maybe Lütkepohl's Handbook of Matrices (which I don't have here right
now) has something to say on the issue...
-sven