Talk:Fourier transform/Archive 4


Oscillatory functions

The article speaks of decomposing a function into oscillatory functions, like this, linking to oscillation in the wrong sense. — Preceding unsigned comment added by 46.146.92.252 (talk) 11:44, 12 October 2011 (UTC)

Clarifications / Hilbert Space Isometry

First of all, I find the xi notation confusing (I've always seen f for non-angular frequency), although I always use the unitary omega notation. As a result, I just ignore most of the article except for the helpful tables at the end because the major method employed here uses such an odd notation. I think I would not have been able to understand nearly anything in the article before I began grad school, and I spent a large portion of my undergrad program using Fourier Transforms. Even for people familiar with the subject matter, this reads like a page out of Virgil.

Another point I'd like to make is that the unitary Fourier transform can be expressed as the inner product of a function in a Hilbert space with the eigenfunctions of the shift operator. While eigenfunctions are discussed in the form of the Hermite polynomials (which is actually why I came to this article today), we do not discuss the fact that
    $\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx = \frac{1}{\sqrt{2\pi}} \langle f, e^{i\omega x} \rangle.$

Of course the above is for $L^2(\mathbb{R})$, and the eigenfunctions of the shift operator differ depending on the function space. Does anybody agree with me that this could be used in the article?
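For readers following along, the property being invoked here is standard material (not a quotation from the article or from the comment above): the complex exponentials are generalized eigenfunctions of every shift operator,
    $(T_a f)(x) = f(x - a), \qquad T_a\, e^{i\omega x} = e^{-i\omega a}\, e^{i\omega x},$
which is why pairing a function against them, as in the formula above, diagonalizes translation-invariant operations.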

Now for the Hermite function's Fourier Transform. I came here because I've been alternating back and forth between scratch paper and evaluation using Mathcad, and I became confused as to which Fourier Transform is being used in the preliminary exam I've been studying. I am in the process of proving this result stated in the article: that the Hermite functions $\psi_n$ are eigenfunctions of the transform,
    $\hat{\psi}_n(\xi) = (-i)^n \psi_n(\xi).$

The old prelim I've been studying requires a slightly different (although fundamentally the same) proof, namely that

I cannot for the life of me figure out why it isn't a factor of $\sqrt{2\pi}$ for the non-unitary transform (I assume the expression in the article uses the unitary transform). Of course, I also can't see where the $(-i)^n$ comes from, even though I'm using exactly the same definitions of the Hermite polynomials $H_n(x)$ and the Hermite function $\psi_n(x)$. I assume this is there because for odd n the Hermite polynomial is odd whereas for even n it is even, yielding purely imaginary or real Fourier transforms, respectively.

My other qualm here is the hat notation. I was under the impression that the hat denoted the unitary transform, whereas a different notation was used for the non-unitary transform. Please clarify.

I know I've asked a lot of questions, so I thank anybody who has the time to help me examine them. Eccomi (talk) 21:37, 31 December 2009 (UTC)

I've tried to reply to most of these issues, but I might have skipped some.
  • On the first point about ξ versus ƒ: there are different conventions, depending on the field. I almost always see ξ in Fourier analysis, though I suppose ƒ is common in signal processing. Ultimately, the article needs to settle on one of these, and ξ was probably agreed to be superior because $\hat{f}(f)$ just looks odd. In the end it is only a matter of notation, but when one is very used to seeing something else, it can be a bit jarring.
But the problem was originally created by changing x(t) to f(x). I understand the desire to use x as the domain variable, but IMO a less "jarring" solution is to use one of the other common choices (s, g, h, etc) for the function name.
Happy new year.
--Bob K (talk) 15:34, 1 January 2010 (UTC)
Irrelevant issues like this always seem to be the biggest bones of contention ;-) Happy new year to you as well. Sławomir Biały (talk) 15:46, 1 January 2010 (UTC)
I suppose my first qualm is indeed irrelevant. I just don't like xi because I find it difficult to draw on paper (not a problem here). Eccomi (talk) 08:35, 2 January 2010 (UTC)
  • Explaining the Fourier transform in terms of the eigenfunctions of the shift operator is a feature that seems to be lacking in the article. I view it in terms of "characters" of the group of translations, but it amounts to the same thing explained in a slightly different way. I think the article could benefit from such a section (it helps to say what the Fourier transform "really is"), but I don't know offhand of a simple way to approach it. Do you have anything in mind? (e.g., a book that does a good job?) Expressing it as an "inner product" in this way is not entirely legit, because the eigenfunctions do not lie in the Hilbert space. One approach is to use distributions to do it, and this can be found in probably any book on Fourier analysis.
  • In the conventions used in the article, applying the Fourier transform twice to a function gives the value of the function reflected in the origin; that is $\hat{\hat{f}}(x) = f(-x)$, so there clearly can't be a factor of $\sqrt{2\pi}$ in the formula for the Hermite function. (Wait, I think I misread your post. Oh well.) Perhaps the confusion is a lack of clarity over which of the three common definitions of the Fourier transform the article uses. It is $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$; this is unitary. There is also another unitary Fourier transformation (probably often called "the" unitary transformation) which uses the angular frequency ω: $\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx$. Finally, there is the non-unitary transformation $\hat{f}(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx$. If we use the non-unitary transformation, then there is certainly a factor of $\sqrt{2\pi}$. However, the article uses the first convention.
  • I don't think there is any established convention that reserves one notation (such as the hat) for the unitary transform and another for the non-unitary one. I think most authors pick a transform and stick with that one, and don't have any use for a notation that distinguishes between them.
For what it's worth. ;-) Sławomir Biały (talk) 22:27, 31 December 2009 (UTC)
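For anyone cross-checking conventions against a computer algebra system, here is a minimal SymPy sketch (an illustration, not part of the original discussion); it relies on the fact that SymPy's fourier_transform uses the same ordinary-frequency kernel as the first convention above:

    import sympy as sp

    x, xi = sp.symbols('x xi', real=True)

    # SymPy's fourier_transform is defined with the ordinary-frequency kernel,
    # F(xi) = Integral(f(x)*exp(-2*pi*I*x*xi), (x, -oo, oo)), i.e. the first
    # (unitary) convention above.  Under that convention the Gaussian exp(-pi*x**2)
    # is its own transform, which is a quick way to see which of the three
    # conventions a given system is using.
    F = sp.fourier_transform(sp.exp(-sp.pi * x**2), x, xi)
    print(F)  # expected output: exp(-pi*xi**2)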
I'm familiar with the three different definitions, but my issue was with the old prelim I was studying, which stated a property nearly identical to the one stated in this article, except for an extra constant factor. Usually when I see such factors I just assume that the transformation was non-unitary, but if this were the case I would expect the factor to be $\sqrt{2\pi}$ rather than the one that appears. After further examination, it may be possible that the problem I am scrutinizing differs due to a different definition of the Hermite function (probabilist's definition vs. physicist's definition). Eccomi (talk) 08:35, 2 January 2010 (UTC)
Sorry, I only realized that after posting. I had misinterpreted your question. Sławomir Biały (talk) 12:59, 4 January 2010 (UTC)
Regarding the treatment in terms of "characters" of the translation group, this is the viewpoint of the Fourier transform as the projection operator onto partner functions of the irreducible representations of the translation group. (In this way, each symmetry group naturally obtains its own Fourier-like unitary transformation. This is a generalization of the notion of "eigenfunctions" of the symmetry operators, because it includes the case where the representations are not numbers, i.e. representations of dimension > 1, e.g. for crystallographic symmetry groups.) This treatment can probably be found in any book on group theory and physics, e.g. Inui's Group Theory and Its Applications in Physics or probably Tinkham. — Steven G. Johnson (talk) 06:42, 1 January 2010 (UTC)
(Many mathematicians would call them "generalized eigenfunctions" because they do not live in a Hilbert space, but physicists are used to sweeping the distinction between Hilbert spaces and rigged Hilbert spaces under the rug. — Steven G. Johnson (talk) 06:45, 1 January 2010 (UTC))
The article does go into this in the section on non-abelian groups (for which there are representations that are not characters of the group). Is there an elegant and reasonably concrete way to emphasize this approach a little more from the beginning? Sławomir Biały (talk) 14:37, 1 January 2010 (UTC)
I had not considered that these eigenfunctions of the symmetry operators do not live within the Hilbert space, but that makes sense. I have some very good class notes on the topic and would like to contribute something as to this perspective (in addition to what there is for the non-abelian groups), but of course class notes, even at a high level, will not do as a source. When I find a good source I'll create a section (or add to one). I'm happy to see that there are capable people with whom to consult. As far as sweeping things under the rug goes, I am certainly prone to this as an engineer (it's not just for physicists to want to simplify the picture).
On another note (regarding the Hermite function's Fourier transform), I found the proof I was looking for here. Thank you all for the helpful comments. Eccomi (talk) 08:35, 2 January 2010 (UTC)

basis functions

Sorry, but what section talks about the basis functions/Fourier set other than the canonical basis functions? 018 (talk) 03:04, 29 January 2010 (UTC)

When I was taught the Fourier transform, the canonical basis function was taught as a sort of historical curiosity, and the transform itself regards any orthogonal basis (such as wavelets, the Heaviside basis functions, or wavelets based on the Heaviside function). This article appears to mention one alternative basis tangentially and never says that this is an option. Or am I missing something? I guess the point is I wouldn't call this an article on the Fourier transform as much as an article on the Fourier transform with the canonical basis that never actually addresses the Fourier transform itself. 018 (talk) 21:02, 3 February 2010 (UTC)
Okay, I'm going to add something to the definition noting that this article regards the subset of Fourier transforms that use the canonical basis. How about:

This article regards only the original Fourier set, but any orthonormal basis on the domain can be used as a basis set for the transform.

suggestions, rewordings, objections? 018 (talk) 22:11, 14 February 2010 (UTC)
I object to the proposed change: the Fourier transform is essentially characterized by properties under the Euclidean group. Arbitrary orthogonal decompositions do not share any but the crudest of the properties of the Fourier transform. Treating arbitrary basis sets as though they were equivalent is simply wrong. Sławomir Biały (talk) 02:09, 15 February 2010 (UTC)
Having looked up a reference I'd like to revise my proposal. Keener, "Principles of Applied Mathematics" calls what I'm talking about a "(generalized) Fourier series" (emphasis, original). I see that the time domain / frequency domain thing is unique to the trigonometric basis, as is the existence of an FFT. But the ability to fit f with arbitrary precision is by no means unique. This is dealt with somewhat clumsily in the section Fourier_transform#Eigenfunctions where the existence of basis sets for L2 is mentioned. I think the appropriate thing to do is to mention generalized FTs somewhere. I'm not sure if it is this article or perhaps Fourier series. 018 (talk) 04:01, 15 February 2010 (UTC)
Fourier series is a more natural target for mentioning various orthogonal decompositions of L2. In fact, I must say I am surprised that the article does not already do so. However, something to bear in mind there is that the classical Fourier series, as well as the various generalizations already mentioned, are spectral in nature: they are eigenfunction expansions. This connects them in a very explicit way to the differentiability and other fine properties of the functions that they describe. Furthermore, classical Fourier series are defined for a wider class of functions than those in L2. So I would urge against language that trivializes the difference between classical Fourier series and decompositions in arbitrary orthonormal bases of L2. But I do think that the existence of alternative basis sets, and the attendant "generalized" Fourier series, is worth mentioning. Sławomir Biały (talk) 11:43, 15 February 2010 (UTC)
The reason that the Fourier basis functions are (generalized) eigenfunctions of derivatives is because derivatives are translation invariant, and the Fourier basis functions are the characters of the translation group, and the Fourier transform is essentially a projection operator. Various books on harmonic analysis define "Fourier transforms" more generally in terms of projection operators for arbitrary symmetry groups (usually subsets of the Euclidean group), and as the article notes the cases of non-Abelian groups (e.g. in crystallography) lead to an interpretation more subtle than "eigenfunctions" because of the possible existence of irreducible representations with dimension > 1. So, from this perspective, symmetry groups are much more fundamental than "eigenfunctions" of specific operators that happen to be invariant under those symmetries.
On the other hand, it's true that I have also seen a few authors refer to expansion in arbitrary orthonormal bases as "generalized Fourier series" (whether or not the basis stems from eigenfunctions or symmetry groups...e.g. you could take any basis and use Gram-Schmidt to orthogonalize), and probably this should be mentioned in the article (or in the Fourier series article). I think it should be mentioned under "generalizations," however, not in the lede, and it should be emphasized that this terminology of "generalized Fourier series" is not universally adopted; many authors reserve the term "Fourier" to refer specifically to basis sets derived from the translation group, or at least from some subgroup of the Euclidean group. — Steven G. Johnson (talk) 18:44, 15 February 2010 (UTC)
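For concreteness, the "(generalized) Fourier series" being discussed is presumably the usual expansion in an orthonormal basis (standard material, not a quotation from Keener): for any orthonormal basis $\{e_n\}$ of $L^2$,
    $f = \sum_{n} \langle f, e_n \rangle\, e_n, \qquad \|f\|^2 = \sum_{n} |\langle f, e_n \rangle|^2,$
and the classical trigonometric case on $[0,1]$ takes $e_n(x) = e^{2\pi i n x}$.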
This is already getting far afield, but I'm not sure I agree that this is "the Reason" for the connection between Fourier series and fine properties of functions. I was thinking more along the lines of good old-fashioned generalizations of Fourier series via "vibrations of a drum"—spectra of the Laplacian (or, the Laplace-Beltrami operator on a manifold more generally). In these situations, there is still a close connection between the regularity of functions (e.g., Sobolev spaces) and the Fourier expansions, but there is little or no symmetry involved. Anyway, it is all a matter of perspective. Sławomir Biały (talk) 01:37, 16 February 2010 (UTC)
No symmetry? Why do you think Fourier transforms diagonalize the Laplacian? It is because the Laplacian is translation-invariant. For the same reason, Fourier transforms diagonalize any translation-invariant operator; the Laplacian is by no means special (except in being probably the simplest self-adjoint translation-invariant differential operator, not counting the identity). — Steven G. Johnson (talk) 16:51, 16 February 2010 (UTC)
Consider solving the heat equation in a general region—no symmetry involved here. The eigenfunctions have very little to do with the ones with which one is familiar (those that diagonalize the Laplacian on the square or torus). Something is clearly not being communicated here. Sławomir Biały (talk) 00:33, 17 February 2010 (UTC)
Sure, if you just take the eigenfunctions of some general self-adjoint operator with a spectral theorem but no symmetry, then you get an orthogonal basis that has nothing to do with the Fourier basis functions or any symmetry group. This discussion is getting confused because there are three distinct generalizations of Fourier transforms that people are talking about and not clearly distinguishing.
(1) orthogonal bases/decompositions based on irreducible representations of symmetry groups. (2) orthogonal bases derived from eigenfunctions (or generalized eigenfunctions) of an arbitrary self-adjoint operator to which the spectral theorem applies. (3) orthogonal bases derived from some arbitrary complete basis of functions unrelated to eigenfunctions of anything, perhaps orthonormalized with Gram-Schmidt if necessary.
Some sources restrict "harmonic analysis" to (1). I've seen a few sources talk of "generalized Fourier series" in terms of (3). I'm not sure if any sources restrict the term "generalized Fourier series" to eigenfunctions à la (2); can you give one? In any case, it certainly seems wrong to me to single out eigenfunctions as the primary generalization of Fourier transforms. Moreover, if you are talking about the original sinusoidal Fourier basis functions, then you are clearly missing something major if you view them primarily as eigenfunctions of a Laplacian and don't realize the larger context of the translation group. — Steven G. Johnson (talk) 03:57, 17 February 2010 (UTC)
I apologize if I gave the impression of singling out eigenfunctions as "the primary" generalization of Fourier series. It was not my intention to do so, but rather only to indicate that classical Fourier series and their generalizations are rather more subtle than just lumping them in with all of the other bases of L2. Symmetry was my first point, of course, way up there in my response to the OP. But then a later point that I felt worth making was the connection with fine properties of functions, which comes about because the Fourier basis functions are a complete set of eigenfunctions of an elliptic differential operator. This is the view taken, for instance, by Courant and Hilbert, who call the resulting eigenfunction expansion the "Fourier expansion". (This naive generalization is, of course, completely in the spirit of Fourier's original use of the series to solve the heat equation.) I hope that you can appreciate that there is some value in having more than one "correct" way to go about doing things.
But on a different note, I would also like to continue this discussion in a more constructive and focused manner, since it started off here in too much confusion to go anywhere further, I think. Yours is a point of view that I actually agree with overall, and which I think needs to be emphasized systematically from an earlier point in the article. Sławomir Biały (talk) 04:46, 17 February 2010 (UTC)

\scriptstyle

I beautified inline formulas by using \scriptstyle such that

For any complex numbers a and b, if h(x) = aƒ(x) + bg(x), then $\hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)$.

became:

For any complex numbers a and b, if h(x) = aƒ(x) + bg(x), then $\hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)$.

but mr Sławomir Biały reverted my work without a comment. Please observe wp:reverting. Bo Jacoby (talk) 12:46, 9 July 2010 (UTC).
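For reference, a sketch of the markup difference under discussion (the linearity formula is used purely as an illustration):
    <math>\hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)</math>                (default rendering)
    <math>\scriptstyle \hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)</math>   (\scriptstyle: a smaller, roughly inline-sized rendering)
    <math>\textstyle \hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)</math>     (\textstyle: inline layout at full size)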

I don't really find scriptstyle any more "beautiful" than the default rendering of the LaTeX. Actually, it appears to align even worse with the surrounding text. I believe that in some cases, scriptstyle may be unavoidable to make LaTeX render correctly, but barring these exceptional circumstances, I think it is best not to use it. Sławomir Biały (talk) 13:16, 9 July 2010 (UTC)

Neither of the two aligns well, but scriptstyle has approximately the same size as the text around it. The following is even better

For any complex numbers $a$ and $b$, if $h(x) = af(x) + bg(x)$, then $\hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)$.

because here at least the same variable looks exactly the same. Still, I object to your violating the rules. Bo Jacoby (talk) 14:57, 9 July 2010 (UTC).

In my setup, the default \displaystyle is almost the same size, and the \scriptstyle is much smaller. Sławomir Biały (talk) 15:51, 9 July 2010 (UTC)

That's interesting. I do not know where to do the setup. Bo Jacoby (talk) 18:23, 9 July 2010 (UTC).

Using \textstyle it looks like this.

if $h(x) = af(x) + bg(x)$ then $\hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)$

On my browser it looks exactly like with \displaystyle:

if $h(x) = af(x) + bg(x)$ then $\hat{h}(\xi) = a\hat{f}(\xi) + b\hat{g}(\xi)$

How does it look on your browser? Bo Jacoby (talk) 14:20, 10 July 2010 (UTC).

If you don't like the default LaTeX math rendering, you should file a bug report with MediaWiki to suggest a different rendering. Going through every equation on Wikipedia and adding \scriptstyle everywhere is not practical, and changing it on just one page is not consistent. — Steven G. Johnson (talk) 23:58, 10 July 2010 (UTC)
I would agree that one editor should not take it upon himself to make wholesale changes that might be controversial. But I don't have a "consistency" problem with experimenting with one page and giving it a chance to catch on. Is there a specific consistency "rule" that would be violated? That's not a rhetorical question... I really don't know.
As for the renderings themselves, I will call them "large" and "small", because apparently we're not all seeing the same things. And they both look fine to me and get the job done. So I don't take either side of that issue.
--Bob K (talk) 17:56, 11 July 2010 (UTC)
Arrogant editors like Steven G. Johnson and Sławomir Biały who revert good faith edits take the fun away from contributing to WP. I am not pursuing this. Bo Jacoby (talk) 23:24, 11 July 2010 (UTC).
Bob, while of course consistency within an article is the most important thing, the general practice on Wikipedia for some time now has been to try to maintain a consistent style across articles as much as is practical, with deviations only where there is a clear motivation in the context of a particular article. This is why we have Manuals of Style, after all (and in particular see Wikipedia:Manual of Style (mathematics)). In this case, it seems clear that there is no particular reason why the topic of Fourier transformation demands a different point size in its equations than any other mathematical article, so there is no specific motivation to deviate in that regard here. Nor does adopting a Wikipedia-wide convention to use \scriptstyle everywhere seem remotely practical, although of course Bo can make the proposal on Wikipedia Talk:Manual of Style (mathematics). — Steven G. Johnson (talk) 15:41, 12 July 2010 (UTC)

Question about spherical harmonics

But surely one is left with an unknown n on the RHS. Are you saying to sum over n (particularly when using the formula to compute the Fourier transform)? Or are you saying that different Bessel fcts have different (which actually is expected)? If so, one should add an n as an auxiliary variable on the LHS. —Preceding unsigned comment added by YouRang? (talkcontribs)

n is the dimension of the Euclidean space (as in Rn), not a variable to be summed over. Also, I have checked the formula against Stein and Weiss, and it is correct. Sławomir Biały (talk) 20:54, 26 August 2010 (UTC)

I wish I could understand this

I wish I could understand this article, but I can't. I don't expect to understand the details, but I wish more science and maths articles on Wikipedia had a layperson-friendly paragraph at the start, before getting into the details someone in the field might need. Articles from many other disciplines do this, but it's pretty uneven in Wikipedia's science articles, and rarely happens with maths articles at all. I'm an educated layperson, I'm not dumb, and I'd like to get a broad understanding of what a Fourier transform is because I keep seeing it referred to when I want to understand how some software-based audio effects work. My first instinct when I want a lay overview on most things is to look at Wikipedia; when I was a kid I'd look in the encyclopedia when I wanted a quick overview on something. This works if I want to understand grammar, say, or history, or geography. But most of the maths and many of the science articles don't seem to address the interested layperson at all. It's a pity! Spoonriver (talk) 06:18, 16 November 2010 (UTC)

I have tried out a new lead. More could probably be done to address your concern. I think an image would be very helpful, but I lack the skill to make one. I was thinking of something like a graph of the sound wave produced by an (idealized) chord of music, a graph of the sound wave (a decaying sine wave) produced by each note, and then a graph of the Fourier transform (rectangular functions centered at the notes of the chord). Sławomir Biały (talk) 14:28, 16 November 2010 (UTC)
Thank you! That's much clearer. Cheers - Spoonriver (talk) 00:42, 17 November 2010 (UTC)
The new lead is an improvement but there is a long way to go. Not exactly sure how to improve, but will think about it. Cheers. - BorisG (talk) 16:37, 16 November 2010 (UTC)

Matrices Diagonalized by Discrete Fourier Transform

In the circulant matrix article, it is stated that circulant matrices are diagonalized by the discrete Fourier transform. A circulant matrix is a special case of a Toeplitz matrix. It is my understanding that Toeplitz matrices are also diagonalized by the discrete Fourier transform (which is not something explicitly stated in that article). This is an important application of the Fourier transform that needs to be mentioned in this article. Bender2k14 (talk) 16:18, 17 November 2010 (UTC)
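A quick numerical illustration of the diagonalization claim (a sketch with illustrative names, not taken from either article):

    import numpy as np

    # Build a 4x4 circulant matrix C from its first column c: C[j, k] = c[(j - k) mod 4].
    c = np.array([4.0, 1.0, 2.0, 3.0])
    C = np.column_stack([np.roll(c, k) for k in range(4)])

    # The (unnormalized) DFT matrix, obtained by transforming the rows of the identity.
    F = np.fft.fft(np.eye(4))

    # Conjugating by the DFT matrix produces a diagonal matrix whose diagonal is the
    # DFT of the first column, i.e. the eigenvalues of C.
    D = F @ C @ np.linalg.inv(F)
    print(np.allclose(D, np.diag(np.diag(D))))      # True: D is (numerically) diagonal
    print(np.allclose(np.diag(D), np.fft.fft(c)))   # True: eigenvalues = DFT of c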

My understanding is that this article treats primarily the continuous transform rather than the discrete transform, which has its own article. Although the applications section certainly should be expanded, I think more specialized applications of the DFT should be treated instead in the main article for that topic. Sławomir Biały (talk) 17:12, 17 November 2010 (UTC)

Introductory Paragraph

(I rarely edit wiki; I hope I did this right). The introductory paragraph is simply unacceptable as it is.

"The Fourier transform is a mathematical operation that decomposes a signal into its constituent frequencies [not nec.]. Thus the Fourier transform of a musical chord is a mathematical representation of the amplitudes of the individual notes that make it up [wrong/extremely vague]. The original signal depends on time, and therefore is called the time domain representation of the signal, whereas the Fourier transform depends on frequency [not nec.] and is called the frequency domain representation of the signal. The term Fourier transform [can] refers both to the frequency domain representation of the signal and the process that transforms the signal to its frequency domain representation."

suggestion: "The Fourier transform is a mathematical operation. Its best known purpose is to analyze the frequency content of time based signals, for instance calculating the frequency spectrum of audio recordings. The input to the Fourier transform may be a time domain signal, in which case the output is a frequency domain signal. (See 'signal domains'.) The term "Fourier transform" in everyday language often means the actual results of the transform rather than the operation itself." Wiki editors, please take and apply! Thanks. —Preceding unsigned comment added by 129.237.121.233 (talk) 20:14, 9 February 2011 (UTC)

It is extremely difficult to write a lead paragraph that will appeal to all prospective readers, regardless of their background. The current lead is probably not perfect, but your proposed revision is pretty clearly inferior. It is both less informative and more difficult to understand. It completely fails to say anything about what the Fourier transform actually is. Instead it relies on jargon ("time domain signal" versus "frequency domain signal") that cannot be understood without already knowing what the Fourier transform is. Likewise, the jargon "frequency spectrum" appears in the second sentence without explanation, and it may not even be a term that most of the likely readers will be familiar with. Some vague language in the lead actually helps a lay audience to understand the content in question. In the thread above, the current revision of the lead was lauded as substantially clearer than the old revision (which is probably closer in spirit to what you would like). Sławomir Biały (talk) 23:37, 9 February 2011 (UTC)
I edited the part about musical chords. As the above commenter noted, a Fourier transform of a musical chord will (in general) not be an explicit representation of the amplitude of the chord's notes. Rather, as nearly all "notes" are not pure (or very narrowband) sinusoids, an FT will better give the amplitudes/energy of partials. I just edited it to say "a musical chord of sinusoidal notes". I agree with the effort of the lead paragraph. Herr Lip (talk) 23:42, 28 October 2011 (UTC)

Fourier transform of sin(ax)

There has been a slow-motion edit war over the Fourier transform of sin(ax). This may have something to do with the fact that the Fourier transform function provided by Wolfram Alpha uses the wrong convention: it is defined to be $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{+i\omega t}\, dt$ rather than $\int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$ (initially I made this mistake too). Also, we had until now placed the i in front of the formula, rather than in the denominator, which somehow makes the usual formula for the sine harder to see in the Fourier transform. The Fourier transform of sin(ax) should be
    $\frac{\delta\!\left(\xi - \frac{a}{2\pi}\right) - \delta\!\left(\xi + \frac{a}{2\pi}\right)}{2i}.$

It's an easier calculation to check that this has the correct inverse transform. Sławomir Biały (talk) 02:10, 24 February 2011 (UTC)
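For readers checking along, that inverse-transform verification (in the article's ordinary-frequency convention) is the short computation
    $\int_{-\infty}^{\infty} \frac{\delta\!\left(\xi - \frac{a}{2\pi}\right) - \delta\!\left(\xi + \frac{a}{2\pi}\right)}{2i}\, e^{2\pi i x \xi}\, d\xi = \frac{e^{iax} - e^{-iax}}{2i} = \sin(ax).$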

What is the difference between the signals sin(at) and cos(at)? The difference is only in phase. Therefore the real parts must be the same and the difference must be only in the imaginary part. Equation 305 has only an imaginary part and no real part. Moreover, sin(at) and cos(at) represent only one frequency a. Therefore the Fourier transform must generate distinctly a delta(omega-a) result in the real part. Try to integrate the sin(at) and cos(at) Fourier transforms directly without using the algebraic Euler's formula. The sin(at) and cos(at) functions - signals - are fundamental and distinctly one-frequency signals. If the Fourier transform formalism is valid, it must be consistent with this fundamental reality. Equations 304 and 305 are not valid. In such case equation 303 is also invalid, and must correspond to the valid equations 304 and 305, and Euler's formula. Softvision (talk) 20:43, 13 March 2011 (UTC)
Not sure I follow you. For one thing, that they are phase shifts of one another doesn't mean that they will have the same real part. At any rate, as I've already indicated, it's a straightforward matter to compute the inverse transform of the last equation I wrote, and verify that the result is the required sine by de Moivre's formula. Sławomir Biały (talk) 21:48, 13 March 2011 (UTC)
We can carry out the calculation via the phase-shift approach (if a bit formally):
    $\mathcal{F}[\sin(ax)](\xi) = \mathcal{F}\!\left[\cos\!\left(a\left(x - \tfrac{\pi}{2a}\right)\right)\right](\xi) = e^{-\frac{i\pi^2 \xi}{a}}\, \frac{\delta\!\left(\xi - \frac{a}{2\pi}\right) + \delta\!\left(\xi + \frac{a}{2\pi}\right)}{2} = \frac{-i\,\delta\!\left(\xi - \frac{a}{2\pi}\right) + i\,\delta\!\left(\xi + \frac{a}{2\pi}\right)}{2} = \frac{\delta\!\left(\xi - \frac{a}{2\pi}\right) - \delta\!\left(\xi + \frac{a}{2\pi}\right)}{2i},$
as claimed. Sławomir Biały (talk) 22:09, 13 March 2011 (UTC)
$\delta(\omega - a)$ is a peak at frequency a; $\delta(\omega + a)$ is a peak at frequency -a. The Fourier transform 201 of the highly superposed box signal generates a "decomposition" with positive and negative frequencies - symmetrically conjugated. A negative frequency in Euler's formalism is nothing else than complex conjugation. The sum of two conjugated complex numbers is twice the real part - generally. The difference of conjugated complex numbers is twice the imaginary part - generally. Symmetrical distributions of F(f) eliminate the imaginary part as the result of complex conjugation. In this context the "amplitude" F(f) is complex - equation 303. This shows that, inside the Fourier transform formalism, the Dirac delta function is distinctly bound to the complex formalism - equation 303 - the inverse Fourier transform of a real delta function is a complex number - "signal". In this context, equations 303, 304, 305 are valid. Softvision (talk) 11:02, 14 March 2011 (UTC)
I still have no idea what you're talking about. It seems like nonsense. First you said 304 and 305 were not valid, now you say they are valid. In each case, the answer is backed by vague philosophical meanderings. State plainly what you mean, or I will not continue to participate in this discussion. Sławomir Biały (talk) 13:44, 14 March 2011 (UTC)

Units

I'd like to mention in the article that the FT output is a quantity of type "spectral variance". For example, if the input values have units of volts and the independent variable is time, then the FT output will have units of W/Hz. As another example, if the input has units of meter, then the output will have units of m²/Hz. As a more complicated example, if the input has units of kg and is spaced over space instead of time, then the FT output will have units of kg²/(1/m)=kg²·m. A caveat is that usually the Fourier coefficients are displayed normalized by the largest one, in which case they become unitless. Would there be any objections? Thanks. Fgnievinski (talk) 03:08, 1 March 2011 (UTC)

Maybe a link to spectral density would suffice? Fgnievinski (talk) 04:38, 1 March 2011 (UTC)

I would try to write a paragraph that can be put into the article somewhere, since I think the notion of spectral density needs to be explained (our article spectral density focuses on power over frequency only), but you have something more general in mind. In higher dimensions, it's also true and related to the scaling law for the Fourier transform: if $g(x) = f(ax)$ for $x \in \mathbb{R}^n$ and $a \neq 0$, then $\hat{g}(\xi) = |a|^{-n}\, \hat{f}(\xi/a)$.
So there is definitely something worth saying well. Sławomir Biały (talk) 12:09, 1 March 2011 (UTC)
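For concreteness, a sketch of the dimensional bookkeeping in one dimension (standard reasoning, not a quotation from the thread): if f(t) is measured in volts and t in seconds, then
    $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt$
carries units of V·s = V/Hz, and Parseval's theorem $\int |f(t)|^2\, dt = \int |\hat{f}(\xi)|^2\, d\xi$ shows that $|\hat{f}(\xi)|^2$ carries units of V²·s per Hz, i.e. an energy (not power) spectral density; getting a power spectral density in W/Hz additionally requires dividing by the record length and by a load resistance.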

Fourier transform of log|x|

The Fourier transform requires a careful definition of $\log|x|$ as a generalized function (tempered distribution). Following Kammler, $\log|x|$ can be obtained as the generalized derivative of a continuous, slowly growing function, and is therefore itself a well-defined tempered distribution.

In Kammler, David (2007), A First Course in Fourier Analysis, 2nd ed., the transform of $\log|x|$ is given in an exercise, p. 468, as

where $\gamma$ is the Euler–Mascheroni constant and the remaining term is written as the second derivative of a continuous, slowly growing function of $\xi$.

Sprocedato (talk) 07:16, 31 August 2011 (UTC)

Poisson summation formula

This formula makes no sense:

The right side is a function of , and the left side is not. Fixing this section would essentially make it a copy of Poisson_summation_formula#Forms_of_the_equation. It should be replaced by a See also.

--Bob K (talk) 00:18, 6 September 2011 (UTC)

What? Summation is over !

Good point. I'm not used to seeing x and used as integers. Maybe others aren't either. But I will try to look the other way. Thanks.
--Bob K (talk) 14:04, 6 September 2011 (UTC)
Not really sure there's much value in having a slightly more general formula. There are many things we could possibly say about the Poisson summation formula. Why is it useful to give an obvious variant that follows from the most elementary properties of the Fourier transform? Ideally, we should maybe give one example of a Poisson summation formula (the simplest one) and if a reader is interested in more information, he or she can refer to the main article. Otherwise we get into questions of what are our personal favorite variants or most important fun facts about the Poisson formula. (My money's on the version for a lattice in R^n, for instance...) I think the earlier, simpler version should be restored, perhaps with some preamble about the conventions used, and a "for instance" clause to indicate that this is a special case of other analogous results. Sławomir Biały (talk) 14:25, 6 September 2011 (UTC)


I agree with that. I think it is sufficient to repeat this statement from the PSF intro:
The Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform.
and refer the reader to the PSF article for details.
--Bob K (talk) 15:19, 7 September 2011 (UTC)
I was thinking to include an example of the simplest case, but your suggestion is also fine with me. Sławomir Biały (talk) 11:54, 9 September 2011 (UTC)
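For reference, the simplest case mentioned here is presumably (with the article's ordinary-frequency convention, for sufficiently well-behaved f)
    $\sum_{n=-\infty}^{\infty} f(n) = \sum_{k=-\infty}^{\infty} \hat{f}(k),$
or, in the form quoted above relating the Fourier series coefficients of the periodic summation to samples of the transform,
    $\sum_{n=-\infty}^{\infty} f(x + n) = \sum_{k=-\infty}^{\infty} \hat{f}(k)\, e^{2\pi i k x}.$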

Link to older, simpler version of the article

(Rant withdrawn. See comment below for the link to a pdf of the older version. I think a number of users will find it useful. Thank You Sławomir.)

Go to the "History" tab, and go back to sometime in 2007 (apparently before any mathematician even touched the entry). Click the date, then on the left-hand toolbar select "Print/export -> Download as PDF". Here is one version. Sławomir Biały (talk) 00:24, 17 September 2011 (UTC)

Unreferenced additions of special functions to the table

I am planning to remove (again!) the recently added (diff) special functions from the table. These are clearly out of place in a table that includes only the Fourier transforms of elementary functions. There are entire books of tables of Fourier transforms, and these don't seem to be especially distinguished. I'm not saying they aren't important, but they are not useful in a short table summarizing only the very basic transforms. Readers looking for a specific identity should naturally consult a reference text such as Erdelyi, not a general-purpose encyclopedia article. Sławomir Biały (talk) 15:15, 17 September 2011 (UTC) P.S. When discussing, please refrain from referring to the actions or opinions of any other editors as "vandalism". Please read WP:AGF before ever making such an accusation in the future. From WP:VAND: "Even if misguided, willfully against consensus, or disruptive, any good-faith effort to improve the encyclopedia is not vandalism."

I should add that the cited sources fail to support the added content in a direct straightforward fashion. These identities aren't listed in Campbell and Foster or Kammler. There are related identities in Erdelyi, but stated in terms of parabolic cylinder functions, which makes it difficult to check these identities with our Fourier transform conventions. But even if sources are found that directly support the content, with our conventions, I feel that, for reasons already explained, these do not belong in the table of this article. I am going to remove them now. I have explained the reasons and I have given ample time for responses. It can always be restored later if there is consensus to include it. Sławomir Biały (talk) 11:18, 18 September 2011 (UTC)
I see no harm in inclusion of these functions, provided they have a direct source. BorisG (talk) 16:40, 18 September 2011 (UTC)
Comment: Practically speaking, this article is already pretty big. It might be worth while to make a separate page somehow for the non-elementary expressions appearing here. It would be useful to reference, but not to present to someone just encountering Fourier transforms. Rschwieb (talk) 17:51, 18 September 2011 (UTC)
I like seeing the Fourier transforms of those basic functions (eigenfunctions, monomials, Bessel functions), and I strongly believe they should stay included. — Preceding unsigned comment added by 178.190.206.152 (talk) 18:54, 18 September 2011 (UTC)
If there's a compromise to be had, I can see including the Fourier transform of a single parabolic cylinder function (207), with an appropriate reference. The others involving Hermite polynomials (208, 209, 210) all follow from this and the other properties of the Fourier transform and Hermite polynomials. The entries with Laguerre and Gegenbauer polynomials clearly don't belong. The Fourier transforms of the Bessel functions are already given in the section on "Distributions" (just not Bessel functions divided by powers of x, which again don't seem to belong here). Sławomir Biały (talk) 19:15, 18 September 2011 (UTC)
Why is there not a list of Fourier transforms already? (It's a redirect to the present article.) I think the table in this article should be very, very short, maybe even nonexistent; but there should be extensive lists in other articles; just like how we don't list integrals in the integral article but we have lots of them in lists of integrals. Ozob (talk) 19:08, 18 September 2011 (UTC)
I completely agree with Ozob here.TR 19:56, 18 September 2011 (UTC)
I know Wikipedians are notorious for list-craft. But, as Ozob said, List of Fourier transforms seems like a completely reasonable solution. Obviously, this would create duplicates; but that's ok. Saying the readers should consult references is denying that Wikipedia "is" a mathematics reference (in addition to other things). -- Taku (talk) 20:24, 18 September 2011 (UTC)
I'm not in principle against having a separate article containing a table of Fourier transforms. But I really disagree with the implication that I take from your remark, that seems to suggest that every conceivable Fourier transform identity should have a place in Wikipedia. We need to be more selective than that, because there really are books and books of such identities. It is vital that we observe WP:WEIGHT. User:R.e.b., User:Stevenj, and myself have long seen the need for minimizing this sort of content from our special functions articles. Please note that some of the editors interested in adding this content here are precisely among the more "notorious" Wikipedians that you mention. We've long had trouble with certain of these editors adding oodles of unreferenced identities to special functions articles, many of them very questionable and obscure. Often when references are provided, they do not hold up under close examination. Look at the history of User talk:A. Pichler, as well as this editor's contribution history, for example. I suggest that, to avoid this situation, several references must be provided for each entry on the table. We have three different conventions for the Fourier transform, and I think requiring a reference that directly support all three conventions will prevent too much unmanageable junk from accumulating. Sławomir Biały (talk) 20:56, 18 September 2011 (UTC)
I agree that not every Fourier transform in print should be in Wikipedia. If I understand you correctly, you would like a rule that a Fourier transform may be included if, for each normalization, there is a reference for that transform with that normalization. So if one could find a reference that gave all three normalizations, then that reference alone would suffice; but one could also provide three different references if they together gave the three different normalizations. Yes? I think this would work. Ozob (talk) 12:06, 19 September 2011 (UTC)
The key point is to have several references for each identity (much like our standards for articles about integers). There are tens of thousands of obscure identities in the literature. We obviously need to adopt best practices that will ensure we are more selective than that. It will also aid in verification: identities we list should, in some sense, be "standard" ones: that is, ones that can be found in many sources. It's possible that requiring references for all three normalizations may go too far, as Steven notes below, because some identities may be field-specific. (E.g., Bessel functions have no use in signal processing, but are very important in physics). Sławomir Biały (talk) 01:37, 20 September 2011 (UTC)
I recall an electrical engineer telling me once that the Bessel functions turned up when you FM modulated a CW tone (a CW (continuous wave) tone being engineering jargon for a sine wave). I remember it striking me at the time because it was the closest I'd ever gotten to seeing a Bessel function in practice. (Being an algebraist doesn't lend itself to knowing your special functions.) My suspicion is that any important identity will show up in a multitude of places; that may even be a definition of important.
That said, it still might be hard to track down references in each of the normalizations we're interested in. So what I am currently leaning towards is a requirement that each transform show up in at least three references (no matter about the normalizations). How would you feel about that? Ozob (talk) 12:14, 20 September 2011 (UTC)

(edit conflict) I thought I simply reiterated one of Wikipedia's foundational principles when I said Wikipedia must be "comprehensive." It's non-negotiable. (But apparently I can't find it in the policy pages. WP:NOT only tells what Wikipedia is not.) True, the maintenance load would be lower if there were fewer articles and the articles were shorter. (It follows that we therefore should be minimalists when creating and writing articles.) But this cannot be used as an excuse for not allowing more content, provided it is correct and not obscure. We demand that references are "provided" and "reliable", but that's also normal. (This is the usual argument.)

Having said this, it is possible that the mechanics (i.e., more content <-> more editors) is not breaking down yet, but we're not completely certain that we can count on it in the future. I'm speaking of the decline in editors. See [1], for example. When we add content, the assumption is that it can be maintained since there will be more editors. There have been and will be more editors who can keep unreliable editors in check. If this assumption doesn't hold, then, of course, we have to rethink everything. -- Taku (talk) 12:28, 19 September 2011 (UTC)

As you discovered, you couldn't find "comprehensive" written anywhere in the "foundational principles", and indeed the point of WP:NOT is that there are many things that Wikipedia intentionally excludes. The key questions when adding information are: is it verifiable and reliably sourced, is it notable, and are we giving undue weight to obscure topics or viewpoints?
In the present case, it is absolutely clear that unsourced identities can and should be removed if references are not supplied and cannot easily be found by other editors; reliable sourcing (deriving from no original research) is indeed a core policy.
A second question is, for sourced but obscure identities, is it notable enough for inclusion? This is a trickier and more subjective question, but still an important one. Fourier analysis has such a long history that there surely must be zillions of obscure little identities derived in various papers, which could easily overwhelm the article and bury the most useful identities in a deluge of trivia. Sławomir's suggestion that an identity should appear in more than one reference to justify inclusion in the table is not unreasonable here. On the other hand, I'm not sure I would go so far as to agree that we must find references in all three major normalizations; not only do I not attach much importance to the normalizations, but also it is quite possible that certain identities are mainly used in particular fields (e.g. physics) where particular normalizations are dominant and hence some identities may only appear in one form. — Steven G. Johnson (talk) 01:27, 20 September 2011 (UTC)
I've addressed this point somewhat above in reply to Ozob. Sławomir Biały (talk) 01:37, 20 September 2011 (UTC)
I think that, given the length of this page, creating a separate List of Fourier transforms article (currently this link redirects to this article) with the identities is a good idea in any case. We can just leave a short summary here in this article and link to the list. The first thing to do is to establish a proper set of inclusion criteria for that list. I've started a talk page for the list (Talk:List_of_Fourier_transforms); I invite all participants here to comment there.TR 09:18, 21 September 2011 (UTC)


Categorizing transforms depending on input signal type

A signal can be either continuous or discrete, and it can be either periodic or aperiodic. The combination of these two features generates the four categories, described below and illustrated in Fig. 8-2.

  • Aperiodic-Continuous

This includes, for example, decaying exponentials and the Gaussian curve. These signals extend to both positive and negative infinity without repeating in a periodic pattern. The Fourier Transform for this type of signal is simply called the Fourier Transform.

  • Periodic-Continuous

Here the examples include: sine waves, square waves, and any waveform that repeats itself in a regular pattern from negative to positive infinity. This version of the Fourier transform is called the Fourier Series.

  • Aperiodic-Discrete

These signals are only defined at discrete points between positive and negative infinity, and do not repeat themselves in a periodic fashion. This type of Fourier transform is called the Discrete Time Fourier Transform.

  • Periodic-Discrete

These are discrete signals that repeat themselves in a periodic fashion from negative to positive infinity. This class of Fourier Transform is sometimes called the Discrete Fourier Series, but is most often called the Discrete Fourier Transform.

http://www.dspguide.com/CH8.PDF

The book says that the naming is confusing. I agree. Not having this information in wikipedia confuses even more. — Preceding unsigned comment added by Javalenok (talkcontribs) 12:06, 30 October 2011 (UTC)


This article is only about the continuous Fourier transform, which is applicable only to the continuous-time, aperiodic case. Some argue that it also applies to the periodic case, and that is a useful perspective. But technically, the transform diverges (doesn't "exist") at the harmonic frequencies. The other three cases are treated in separate articles, and the top-level overview of all four is the Fourier analysis article. The summary table you desire is Fourier analysis#Summary. And note a couple of things:
  1. The Fourier series is a synthesis formula, not analysis; i.e. it is an inverse transform.
  2. The analysis formula is not limited to periodic functions. Nor is the DFT. The inverses are periodic, but normal practice is to compute only one cycle of them.
I think the real problem here is that we don't make it clear that this is just one of a group of four articles with an overview article.
--Bob K (talk) 13:57, 2 November 2011 (UTC)
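A minimal numerical sketch of point 2 above (illustrative names, not taken from any of the articles): the DFT analysis formula accepts any finite block of samples, periodic or not, and the implied periodicity lives only in the synthesis.

    import numpy as np

    n = np.arange(64)
    block = np.exp(-0.1 * n)     # a decaying exponential: clearly not a periodic signal
    X = np.fft.fft(block)        # the DFT analysis formula applied to the finite block
    # The synthesis (inverse DFT), if evaluated for all integers, repeats with period 64,
    # but computing one cycle reproduces the original block exactly:
    print(np.allclose(np.fft.ifft(X), block))   # True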
"I think the real problem here is that we don't make it clear that this is just one of a group of four articles with an overview article." -- This is what I wanted to say. The reference to the taxonomy would be quite appropirate because "Fourier transorm" misleadingly sounds like "umbrella term" for the whole bunch of forward and inverse conversions. --Javalenok (talk) 10:42, 3 November 2011 (UTC)

(Note that there are many more possibilities than the ones here; essentially every type of symmetry group leads to some type of "Fourier" transform. e.g. adding mirror symmetry gives cosine transforms, but there are more complex symmetries than that.)

But I agree that the intro is confusing: it should make it clear that the general overview article on all types of Fourier-related stuff is Fourier analysis, and that this article is only on the classic case of the Fourier transform on the real line. (This article used to be called continuous Fourier transform for that reason, if I recall correctly, but it was decided that Fourier transform is a more standard term.) — Steven G. Johnson (talk) 16:52, 3 November 2011 (UTC)