Talk:Phase correlation


noise

The images shown do not contain "noise" as I would expect. It seems to be the same image but shifted, which means that all the "noise" is also shifted by (30,33). You can see the same "brighter" spot to the right of the lion's head if you look carefully. It would be nice if this could be done with real noise: 1.) Take a picture. 2.) Pick a region. 3.) Apply white noise. 4.) Pick another region. 5.) Apply white noise. 6.) Do the math. —Preceding unsigned comment added by Saviourmachine (talkcontribs) 08:41, 11 May 2011 (UTC)

Maybe it was like that in 2011, but the present image does not appear to have shifted noise. You need to open the unscaled PNG image, though; the image in the article has some scaling artifacts. Han-Kwang (t) 17:59, 10 January 2018 (UTC)

trouble

I have trouble understanding this page. There's a description of a method, and a proof, but it isn't very clear what the method is supposed to achieve. So there's no "theorem" or "claim" to prove. Certainly the claim needs to be stated more clearly, preferably both in an informal (for understanding) and a formal way. Without this the "proof" should be downgraded to a "motivation" or "why it works" kind of section. Thanks & regards. akay 09:34, 19 July 2006 (UTC)

The introduction clearly states that this is a method to determine the relative translative movement between two images, which is exactly what it does. The proof section provides a derivation for the algorithm, and proves its exactness for continuous & noise-free images. Feel free to make the text more verbose if you think that will make the article clearer. You should also check out the referenced paper if you want to dig more deeply into the subject. --Fredrik Orderud 23:40, 23 August 2006 (UTC)


I'm not sure what the example is supposed to show. Is the vector from <0 0> to <30 33> (where the prominent white dot appears) supposed to represent the displacement of image 2 relative to image 1? 23 August 2006

Yes, the coordinates of the white peak reveal how much the two images are translated relative to each other. This is because $r$ is nonzero only at the coordinate $(\Delta x, \Delta y)$ corresponding to that translation. --Fredrik Orderud 23:40, 23 August 2006 (UTC)

Proof

The proof presented is flawed. The second line only applies if the image is circularly shifted, i.e. only if:

$g_b(x, y) = g_a((x - \Delta x) \bmod M, (y - \Delta y) \bmod N)$

Either this needs to be stated, or the proof needs to be rewritten in terms of the continuous Fourier transform or the discrete-time Fourier transform.

Oli Filth 22:51, 1 May 2007 (UTC)

What's more, the expression that's referred to as a "normalised cross correlation" isn't a correlation at all; it's simply a multiplication. It's the spatial-domain result that is the cross-correlation. Oli Filth 08:54, 2 May 2007 (UTC)

Answer: anyway, in practice, images will be noisy and will never be exact circular translations of one another, so the proof will never apply to a practical case, and the only interesting thing is the idea behind the algorithm; that's why it's pointless to clutter the proof with a circular shift of the pictures. Suppose both images are on a black background and translated by a small shift, so that both remain within the frame; then the theorem applies, and it gives us some justification as to why the algorithm is supposed to work. That's all we are asking for (since, again, in practice the images will never be actual translations of each other). —The preceding unsigned comment was added by 129.199.224.240 (talkcontribs) 11:12, 24 June 2007.

Yes, obviously the images are unlikely to be circularly-shifted translations of each other in a real application. However, we can't go around presenting unsound (i.e. incorrect) mathematics like this, especially with something as frequently misunderstood and frequently misapplied as the Fourier transform. Oli Filth 11:18, 24 June 2007 (UTC)
I've now updated the article, hopefully addressing all of my concerns! Oli Filth 13:23, 24 June 2007 (UTC)

In reality, you zero-pad the signals so that the circularity doesn't matter. At the edges you're just measuring the correlation of the image with nothing, instead of a wrapped version of itself. —Preceding unsigned comment added by 96.224.64.135 (talk) 17:45, 1 December 2009 (UTC)
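
For illustration, here is a minimal NumPy sketch of the zero-padded variant described in the comment above. The function name, the choice of padding to twice the image size, and the small epsilon guarding the normalisation are illustrative assumptions, not anything prescribed by the article:

```python
import numpy as np

def phase_correlate_padded(a, b):
    """Estimate how much image `a` is shifted relative to image `b`,
    zero-padding both so the correlation is linear rather than circular."""
    rows, cols = a.shape
    shape = (2 * rows, 2 * cols)        # padded FFT size (illustrative choice)
    Fa = np.fft.fft2(a, shape)          # fft2 zero-pads the input up to `shape`
    Fb = np.fft.fft2(b, shape)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + 1e-12              # keep only the phase
    r = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # Peaks in the upper half of the padded array correspond to negative shifts.
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, shape))
```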

Please generalize

This article is too narrow. Phase correlation is also used in other fields besides image processing. Please generalize it and make the image processing application a subsection. —Keenan Pepper 03:34, 19 July 2007 (UTC)

Include a description of sub-pixel methods?

I found this article very informative. However, you mention sub-pixel methods, but don't specify how this might be done. Also, I think it would be helpful to comment on how this method relates to motion estimation by the correlation method - increased accuracy and reduced robustness? 196.2.111.133 15:55, 20 July 2007 (UTC)

Can't you just estimate the actual sub-pixel peak in the output, rather than just finding the pixel with the maximum value? In the example there are actually two pixels with value #ffffff, so the true peak would be about halfway between them and a little to the right. —Preceding unsigned comment added by 96.224.64.80 (talk) 19:49, 15 August 2009 (UTC)
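
One simple way to do that - offered only as an illustration, not as the method the article describes (its subpixel section, per the "Subpixel Foroosh method" thread further down, uses the Foroosh et al. formula) - is to fit a parabola through the peak sample and its two neighbours along each axis:

```python
def refine_peak_1d(r, i):
    """Parabolic interpolation along one axis: fit a parabola through
    r[i-1], r[i], r[i+1] and return the sub-pixel position of its vertex."""
    y0, y1, y2 = r[i - 1], r[i], r[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:                    # three collinear samples: no curvature to fit
        return float(i)
    return i + 0.5 * (y0 - y2) / denom
```

Applied to the row and the column through the integer peak, this gives a sub-pixel estimate; with two adjacent samples sharing the maximum value it lands exactly halfway between them, which matches the intuition above.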

Motivation

First, I would like to mention that phase correlation is a special case of normalized correlation, where we assume that the two images have approximately the same Fourier magnitude. Second, one major benefit of this technique is the fact that you don't actually need to multiply the Fourier transforms of the two images and then divide by the magnitudes; instead you can DISCARD the two magnitudes and just subtract the phases - which is much faster. —Preceding unsigned comment added by 212.25.107.145 (talk) 09:50, 7 January 2008 (UTC)
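
Whatever one makes of the speed claim (see the "Cross-correlation" thread below), the two formulations do produce the same normalised cross-power spectrum. A quick NumPy check, with an arbitrary random test image and shift:

```python
import numpy as np

a = np.random.rand(64, 64)
b = np.roll(a, (5, 7), axis=(0, 1))    # circularly shifted copy

Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)

# Textbook form: multiply the spectra, then divide by the magnitude.
R1 = Fa * np.conj(Fb)
R1 /= np.abs(R1)

# "Discard the magnitudes and subtract the phases" form described above.
R2 = np.exp(1j * (np.angle(Fa) - np.angle(Fb)))

print(np.allclose(R1, R2))             # True (up to rounding)
```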

Alternate Method?

Would someone explain why this alternate expression does not work?

It would seem to give the same result. Cgage22 (talk) 22:54, 6 November 2008 (UTC)

Similarity

I've been reverting the changes which state that phase correlation is a method of identifying "similarity" between images. Whilst this is just a correlation (and therefore suitable post-processing of its output might give a crude estimate of "similarity"), the article doesn't discuss this. The article only discusses identifying the peak in the cross-correlation, and assuming that this corresponds to a translation.

If you'd like to keep the mention of "similarity", please could you provide a reference which describes this. Then, per WP:LEAD, a description of this should be added to the article body before we mention it in the lead, as the lead shouldn't introduce material that isn't discussed elsewhere in the article. Regards, Oli Filth(talk|contribs) 19:58, 15 August 2009 (UTC)

Perhaps I misunderstood your additions, and you actually were referring to geometrical similarity. If so, this certainly isn't making the lead "more accessible", as most readers will not associate that meaning with the term "similarity". And it would still need sourcing, and describing in more detail in the article body. Oli Filth(talk|contribs) 20:14, 15 August 2009 (UTC)

Cross-correlation

How is this any different from calculating the cross-correlation between the images? The fact that this can be done using fft's is noted on the cross-correlation page. Is this really a different thing? TimmmmCam (talk) 12:29, 1 October 2009 (UTC)

The difference is that the complex values are normalized to unity magnitude. Contrary to the claim by the anonymous commenter above, that really doesn't help the speed, though, since you're still using 2-vector (Re,Im) representations anyway. It is possible to do a slightly faster inverse FFT for normalized complex phase fields, but I've never seen an implementation of it. I think it's mostly done because it's a tradition in the DIC literature, but for the vanilla MaxLoc(IFFT(FFT · FFT*)) the motivation is a little obscure. There are some more sophisticated subpixel algorithms that require the isolated phase data, though. Tarchon (talk) 01:48, 1 December 2015 (UTC)
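
As an illustration of the "vanilla" pipeline mentioned above, and of the point that the only structural difference from plain FFT-based cross-correlation is the unit-magnitude normalisation, here is a short NumPy sketch (the function name and the epsilon guard are assumptions, not from the article):

```python
import numpy as np

def correlation_peak(a, b, normalize=True):
    """MaxLoc(IFFT(FFT(a) * conj(FFT(b)))).  With normalize=True the spectrum
    is reduced to unit magnitude (phase correlation); with normalize=False it
    is the plain circular cross-correlation computed via FFTs."""
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    if normalize:
        R /= np.abs(R) + 1e-12
    r = np.fft.ifft2(R).real
    return np.unravel_index(np.argmax(r), r.shape)
```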

Too complicated explanation of a trivial idea

The basic idea is simple but it's not obvious from the explanation full of math.

Basically, determining a position of a lion is complicated but determining a position of a single white dot is easy. Phase correlation transforms the problem from a lion to a dot.

Shifted copies of an image will differ only in the phases of their frequencies, not in their amplitudes. A given pixel shift will produce a small angular rotation at low frequencies and a large rotation at high frequencies. If we discard the amplitude information (the shape of the lion) and keep the phase information (the pixel shift), we get what would happen if we were shifting a single white dot instead of a lion. The result will be a white dot shifted by the same amount as the lion.

This explanation immediately shows a drawback of the method - if only a small portion of the frequency space is occupied by signal frequencies (a narrow-band signal or a blurred image) and the rest by noise, most of the phases will give random results and they will contradict one another about how large the shift was. The phases derived from noise have the same say in the result as those derived from the signal. I wonder what the result will be and how this drawback can be fixed. —Preceding unsigned comment added by 80.218.244.105 (talk) 11:01, 21 January 2010 (UTC)
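
The first two paragraphs of the comment above can be checked directly. A small NumPy demonstration for the noise-free, circularly shifted case (the image size and the (30, 33) shift simply mirror the lion example):

```python
import numpy as np

a = np.random.rand(128, 128)
b = np.roll(a, (30, 33), axis=(0, 1))           # shift the "lion" by (30, 33)

Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)

# A pure shift leaves the amplitudes untouched...
print(np.allclose(np.abs(Fa), np.abs(Fb)))      # True

# ...and puts all the information into a phase ramp whose slope is the shift.
# Keeping only the phase difference and transforming back gives a single dot:
r = np.fft.ifft2(np.exp(1j * (np.angle(Fb) - np.angle(Fa)))).real
print(np.unravel_index(np.argmax(r), r.shape))  # peak at (30, 33)
```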

Out of date sources

Both reference links provided on this page seem to be outdated and dead (404). Doing a quick Google search, I could at least find a still-working link to the second reference: http://www.liralab.it/teaching/SINA/slides-current/fourier-mellin-paper.pdf . Could someone fix that? 62.159.242.114 (talk) 13:17, 14 March 2011 (UTC)

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Phase correlation. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot II (Talk to my owner: Online) 00:22, 12 February 2016 (UTC)

Subpixel Foroosh method

Could someone who understands the matter please rewrite this expression for the subpixel shift?

It is not clear what the plus-minus symbol means. If r00=1 (peak value) and r01=0.5 (neighbor), is the subpixel offset 1/(1±0.5) = 2.0 or 0.67? Han-Kwang (t) 17:25, 10 January 2018 (UTC)

According to the following sentence, "the comparand images differ only by a subpixel shift", I would say that it is the latter: 0.67; besides, it is confirmed by the article [1] that "two solutions will be obtained (...) the correct solution is easy to identify since it is in the interval [-1,1] and it has the same sign as $x_1 - x_0$ ($x_1$ and $x_0$ being the x-coordinates of the side-peak and the main peak)." Kehino (talk) 11:12, 22 June 2018 (UTC)
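
To illustrate the selection rule in that quote with the numbers from the question above (the helper name and argument layout are made up for this sketch; only the rule itself comes from the quoted paper):

```python
def pick_foroosh_solution(candidates, x_side, x_main):
    """Select the valid root of the plus-minus ambiguity: it must lie in
    [-1, 1] and share the sign of (x_side - x_main)."""
    sign = 1.0 if x_side > x_main else -1.0
    valid = [c for c in candidates if -1.0 <= c <= 1.0 and c * sign >= 0.0]
    return valid[0] if valid else None

# With the two candidate offsets from the question, only 0.67 qualifies:
print(pick_foroosh_solution([2.0, 0.67], x_side=31, x_main=30))   # 0.67
```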