Talk:Rayleigh quotient iteration

From Wikipedia, the free encyclopedia

This entry is utterly useless without any equations/proofs. I second this.


There is also an error in that the provided matrix example (as of 10/19/2009, 4:22 PM PST) is not actually Hermitian (real symmetric, in this case) —Preceding unsigned comment added by 65.122.178.66 (talk) 23:24, 19 October 2009 (UTC)[reply]

The algorithm is wrong. When mu comes near an eigenvalue, the matrix A - mu*I becomes nearly singular, that is, badly conditioned. Hence, computing the inverse of A - mu*I can lead to an inaccurate computation. Moreover, the matrix A - mu*I is inverted twice instead of once. This algorithm should be deeply modified before being published... Finally, the example is wrong, since the matrix is not symmetric! —Preceding unsigned comment added by 128.93.12.137 (talk) 08:36, 2 March 2010 (UTC)[reply]

The method is sound. Once you get too close to the eigenvalue, any method you're using to invert the linear system will throw a fit, but in practical implementations with matrices of size thousands by thousands this generally doesn't occur until you're within 4 or 5 significant figures. If you manage to pick a mu which gives you a singular linear system, then you've already got your answer. The method is such that if your guess is wrong by a significant amount, it will hopefully give you a better approximation. The proximity to the true eigenvalue the algorithm can achieve is determined by the precision of your mathematical code and the CPU's number handling. Furthermore, the method doesn't have to be for symmetric matrices; it just happens that people have proven convergence rates for the special types of matrices often encountered in large eigenvalue problems, such as Hermitian ones. There's no need to be able to order the Rayleigh quotients, only to measure the convergence of the sequence, and that is done using standard absolute values. AlphaNumeric (talk) 12:01, 29 July 2012 (UTC)[reply]

The Rayleigh iteration converges quadratically for general diagonalizable matrices and cubically for normal matrices. So the example should still show quadratic convergence. As far as I know, it is not really a problem if mu is close to an eigenvalue, since then only error components close to the eigenspace will blow up, and this is acceptable. But I guess that proving that a numerical algorithm solves the system with A - mu*I reliably would be an advanced topic. —Preceding unsigned comment added by 84.141.78.14 (talk) 21:57, 13 May 2010 (UTC)[reply]
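The claims in the exchange above (fast convergence for symmetric matrices, and the harmlessness of the near-singular solves in the last iterations) can be checked numerically. Here is a minimal Python/NumPy sketch of Rayleigh quotient iteration; the matrix, the starting shift, and the starting vector are my own made-up illustrations, not taken from the article:

```python
# Illustrative sketch of Rayleigh quotient iteration (not the article's code).
import numpy as np

def rayleigh_quotient_iteration(A, mu, x, steps=10):
    """Return (mu, x) after `steps` iterations of Rayleigh quotient iteration."""
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        try:
            # Shifted solve; near convergence A - mu*I is badly conditioned,
            # but the error blows up in the wanted eigendirection, which is fine.
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break  # A - mu*I is exactly singular: mu is already an eigenvalue
        x = y / np.linalg.norm(y)
        mu = x @ A @ x  # Rayleigh quotient of the current iterate
    return mu, x

# A real symmetric matrix (eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)),
# so cubic convergence is expected.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
mu, x = rayleigh_quotient_iteration(A, mu=5.0, x=np.array([1.0, 1.0, 1.0]))
# The residual ||A x - mu x|| should be tiny even though the last few
# solves involved a badly conditioned A - mu*I.
print(mu, np.linalg.norm(A @ x - mu * x))
```

Starting from mu = 5, the iteration settles on the nearby eigenvalue 3 + sqrt(3) ≈ 4.732, and the residual reaches roundoff level within a handful of steps, despite the conditioning of the shifted matrix degrading as mu converges.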

Determining which eigenvalue the method finds[edit]

I think the article needs to have some comment about how reliable the algorithm is at finding a specific eigenvalue. The example in the article gives the impression that you can pick a ridiculously bad guess for the eigenvalue you're after and the Rayleigh method will still converge very quickly. In reality, i.e. in realistic problems where eigenvalues are wanted, this isn't certain to occur. For example, suppose we have a Hermitian matrix with positive eigenvalues. The article gives the impression that if you pick a mu which is less than 0 you'll end up converging to the eigenspace corresponding to the first eigenvalue (for simplicity assume no degeneracy), since once you invert, that eigenvalue's eigenspace will be dominant. But the Rayleigh update to mu can make mu jump to being larger than the first eigenvalue, and if you have a very close spectrum this might mean it jumps to the other side of (say) the second eigenvalue. Once that occurs, all bets are off as to whether you'll get the first eigenvalue out, since it is no longer associated with the dominant eigenspace. If you don't update mu but just do an inverse power method, then the aforementioned procedure will converge to the eigenvalue you were after, but obviously much more slowly. That seems to be the price you pay: either you get quick convergence to some eigenvalue, or you get slower convergence to the one you were initially closest to. AlphaNumeric (talk) 11:54, 29 July 2012 (UTC)[reply]
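The behaviour described above is easy to observe experimentally. The following Python/NumPy sketch (my own illustration; the matrix, starting vector, and shift values are made up, not from the article) runs the iteration from several initial shifts: each run converges rapidly to *some* true eigenvalue, but which one is hard to predict when the spectrum is close:

```python
# Experiment (not part of the article): Rayleigh quotient iteration converges
# fast to an eigenpair, but not necessarily to the eigenvalue nearest mu0.
import numpy as np

def rqi(A, mu, x, steps=20):
    """Rayleigh quotient iteration; returns the final shift mu."""
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        try:
            y = np.linalg.solve(A - mu * np.eye(len(A)), x)
        except np.linalg.LinAlgError:
            break  # shifted matrix exactly singular: mu is an eigenvalue
        x = y / np.linalg.norm(y)
        mu = x @ A @ x
    return mu

# Symmetric matrix with positive eigenvalues (approx. 1.268, 3, 4.732).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
eigs = np.linalg.eigvalsh(A)
x0 = np.array([1.0, 0.3, -0.5])  # arbitrary starting vector
for mu0 in (-1.0, 1.0, 3.5, 10.0):
    mu = rqi(A, mu0, x0)
    # Each run lands on a true eigenvalue, but which one depends on both
    # mu0 and x0 in a way that is hard to predict for a close spectrum.
    print(mu0, "->", mu)
```

The point of the experiment is only that every run ends on an exact eigenvalue of A; whether mu0 = -1 actually delivers the smallest eigenvalue depends on how the first Rayleigh updates move mu around, exactly as described above.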

Algorithm[edit]

I'm confused as to how the Octave code works without multiplying by A on every iteration (how can you compute the Rayleigh quotient otherwise?). I have looked at a variety of sources and could not find any using this method. Can anyone explain?

Thanks. InverseHypercube (talk) 10:28, 15 December 2014 (UTC)[reply]

x'*(A - mu*I)^(-1)*x is the Rayleigh quotient for the matrix (A - mu*I)^(-1). If x is a sufficiently good approximation of an eigenvector for an eigenvalue lambda of A, then (A - mu*I)^(-1)*x is approximately (lambda - mu)^(-1)*x, so the lambda in the Octave code approximates (lambda - mu)^(-1). Taking the reciprocal and adding mu yields a new approximation of the eigenvalue. Maybe I should change the code to compute the real Rayleigh quotient, even though it involves an additional matrix-vector product? Sboerm (talk) 14:33, 15 December 2014 (UTC)[reply]
Thanks for the clarification, I assumed something like this was happening. I think it would be best to compute the real Rayleigh quotient for pedagogical purposes, or at least add a note. Do we know that using this approximation yields the same convergence properties as using the Rayleigh quotient? InverseHypercube (talk) 20:19, 15 December 2014 (UTC)[reply]
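The explanation in this thread can be verified numerically. A quick Python/NumPy check (my own sketch, not the article's Octave code; the matrix and the perturbation are made up for illustration): with y = (A - mu*I)^(-1) x, the quantity x'*y approximates 1/(lambda - mu) when x is close to an eigenvector, so mu + 1/(x'*y) recovers the eigenvalue without an extra product with A:

```python
# Numerical check that x'*(A - mu*I)^(-1)*x ~ 1/(lambda - mu) near an eigenpair.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])       # symmetric test matrix (my choice)
w, V = np.linalg.eigh(A)
lam, v = w[0], V[:, 0]                # a true eigenpair of A
mu = lam + 0.1                        # a shift close to lam
x = v + 1e-3 * np.array([1.0, -1.0, 1.0])  # slightly perturbed eigenvector
x = x / np.linalg.norm(x)

y = np.linalg.solve(A - mu * np.eye(3), x)
lambda_ = x @ y                       # the "lambda" from the Octave code
print(lambda_, 1.0 / (lam - mu))      # both approximately -10
print(mu + 1.0 / lambda_, lam)        # updated estimate vs. true eigenvalue
```

This shows why skipping the extra multiplication by A works: the approximate quantity already carries the eigenvalue information through the shift, which is the point of the explanation above. Whether the approximation preserves the exact cubic convergence rate is a separate question, as noted.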

the example is probably wrong[edit]

As far as I can see, the example's 3 eigenvectors (v1, v2, v3) are wrong, since only the 2nd component changes, while they have to be linearly independent (in order to span R^3). Rapus95 (talk) 19:29, 14 June 2016 (UTC)[reply]

Hopefully it's corrected now. PeR (talk) 07:22, 19 November 2016 (UTC)[reply]

The example is still wrong. The eigenvectors shown are not orthogonal. Additionally, they are not unit magnitude and possess sign errors that make it impossible to reconstruct or decompose the original in their as-shown form; however, on the scale of things, these are of far less importance than the lack of a complete basis. Note that this pertains to the original statement of eigenvectors, not the numeric solutions shown at the end of the example. 86.0.241.56 (talk) 04:53, 10 July 2018 (UTC)[reply]