Wikipedia:Reference desk/Archives/Mathematics/2011 April 22

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 22

Integration by parts for improper integrals

I'm wondering what the formal justification is for integration by parts when the integral is improper. For example, we can't integrate xe^x by parts from -infinity to infinity because it doesn't converge. We need some assumptions on convergence of the integrals.

My question is: what exactly are the assumptions on the integrals? That is, under what conditions can we argue that, for two real-valued functions f and g on the real line,

    ∫_{−∞}^{∞} f'(x) g(x) dx = [f(x) g(x)]_{−∞}^{∞} − ∫_{−∞}^{∞} f(x) g'(x) dx ?

This isn't a homework question. I'm just interested in knowing when I can apply this rule, because I'm studying Schwartz functions on Euclidean space and need to compute the Fourier transform of a partial derivative (with respect to a multi-index) of a Schwartz function.

Could you please give proofs of your claims when possible? My guess would be that the formula above holds if f and g are continuously differentiable and, for example, one has a bounded derivative and the other is in L^1 (see Lp space). I'm not sure how to prove this, or whether there are other such formulations.

Thanks. —Preceding unsigned comment added by 180.216.2.24 (talk) 03:46, 22 April 2011 (UTC)
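For reference, one standard sufficient condition, stated here only as a sketch (these hypotheses are a common textbook choice, not the only possible ones): if f and g are continuously differentiable on the real line, both f'g and fg' are in L^1, and f(x)g(x) has limits as x → ±∞, then

    ∫_{−∞}^{∞} f'(x) g(x) dx = lim_{b→∞} f(b)g(b) − lim_{a→−∞} f(a)g(a) − ∫_{−∞}^{∞} f(x) g'(x) dx.

The proof is just the fundamental theorem of calculus applied to (fg)' = f'g + fg' on [a, b], followed by letting a → −∞ and b → ∞: integrability of f'g and fg' makes the two improper integrals converge, and the assumed limits handle the boundary term. (In fact, once f'g and fg' are both in L^1, the limits of f(x)g(x) exist automatically, since f(b)g(b) − f(a)g(a) = ∫_a^b (f'g + fg').)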

A "properly improper" integral is a limit:
So apply integration by parts to the integral from a to b, and then evaluate the limit. For example, in
you have
and then you use L'Hopital's rule to find the first of the two limits as b → ∞. Michael Hardy (talk) 04:52, 22 April 2011 (UTC)[reply]
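A quick SymPy check of that example (a sketch assuming the integrand x e^{−x} on [0, ∞), as above): the boundary term vanishes in the limit and the remaining integral gives the value 1.

    from sympy import symbols, exp, integrate, limit, oo

    x, b = symbols('x b', positive=True)

    # Integration by parts on [0, b]: boundary term plus the remaining integral.
    boundary = -b*exp(-b)                          # [-x*e^{-x}] evaluated from 0 to b
    remaining = integrate(exp(-x), (x, 0, b))      # 1 - e^{-b}

    print(limit(boundary, b, oo))                  # 0  (the limit L'Hopital's rule gives)
    print(limit(boundary + remaining, b, oo))      # 1
    print(integrate(x*exp(-x), (x, 0, oo)))        # 1, agreeing with the limit above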
The point is that integration by parts allows us to calculate the anti-derivatives, which are functions. In fact, the general formula involves an improper interval indefinite integral, i.e. one without limits. It follows from the product rule. Consider u(x) and v(x). By the product rule:

    (u(x) v(x))' = u'(x) v(x) + u(x) v'(x).

We then integrate both sides with respect to x:

    u(x) v(x) = ∫ u'(x) v(x) dx + ∫ u(x) v'(x) dx,   i.e.   ∫ u(x) v'(x) dx = u(x) v(x) − ∫ u'(x) v(x) dx.

There's no mention of any limit. These are just formal functional expressions. You need to make sense of the limits, if you use them. But that's just the same as having to be careful of the function 1/x when x = 0. You can integrate x e^x by parts. In fact you get (x − 1) e^x + c. Indeed, by using integration by parts we can prove that, for a positive integer n,

    ∫ x^n e^x dx = x^n e^x − n ∫ x^{n−1} e^x dx.

Again, there are no limits, only anti-derivatives. Fly by Night (talk) 16:15, 22 April 2011 (UTC)
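A short SymPy check of those two indefinite-integral formulas (my own sketch, checking the reduction step at n = 3):

    from sympy import symbols, exp, integrate, simplify

    x = symbols('x')
    n = 3  # any fixed positive integer works here

    # Antiderivative of x*e^x agrees with (x - 1)*e^x up to a constant.
    print(simplify(integrate(x*exp(x), x) - (x - 1)*exp(x)))           # 0

    # Reduction formula: ∫ x^n e^x dx = x^n e^x - n ∫ x^(n-1) e^x dx, checked for n = 3.
    lhs = integrate(x**n * exp(x), x)
    rhs = x**n * exp(x) - n * integrate(x**(n - 1) * exp(x), x)
    print(simplify(lhs - rhs))                                          # 0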
An integral with no limits is indefinite, not improper. --Tango (talk) 16:00, 23 April 2011 (UTC)
Thanks Tango, but I didn't mention improper integrals, I mentioned improper intervals. I'm not sure what one of those is either :-) Fly by Night (talk) 17:52, 25 April 2011 (UTC)

Sometimes you can't use partial integration for an integral that does converge, because the limit doesn't exist and the integral you get doesn't converge. Then you need to compute the limit of the two terms added together, which can be awkward. Instead, what is often much easier is to introduce some parameter into the integrand such that the divergences don't occur when the parameter is in some region of the complex plane. You then compute the integral for that case, and the integral for general values of the parameter can then often be found by analytic continuation. Count Iblis (talk) 17:12, 22 April 2011 (UTC)
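A toy illustration of that parameter trick (a SymPy sketch with an example of my own choosing, not one from the thread): ∫_0^∞ sin(x) dx has no limit, but damping the integrand by e^{−sx} with s > 0 gives a convergent integral whose closed form continues analytically to s = 0.

    from sympy import symbols, exp, sin, integrate, limit, oo

    x = symbols('x', positive=True)
    s = symbols('s', positive=True)

    # With the damping parameter the integral converges to an elementary function of s.
    F = integrate(exp(-s*x) * sin(x), (x, 0, oo))
    print(F)                      # 1/(s**2 + 1)

    # The closed form is analytic at s = 0, so analytic continuation assigns the value 1
    # to the (divergent) s = 0 integral.
    print(limit(F, s, 0))         # 1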

Differentiating under the integral sign

Hi again guys. I'm interested in the formal justification for differentiating under an integral sign. So if f is a function of two variables x and t, under what conditions is it true that

    d/dt ∫ f(x,t) dx = ∫ ∂f/∂t (x,t) dx ?

See Differentiation under the integral sign. I think I know how this should work when the integral is a definite integral over a bounded set. Then we can argue by Lebesgue's dominated convergence theorem, assuming the partial derivative of f with respect to t is continuous in t. This is because (f(x,t+h) − f(x,t))/h − ∂f/∂t(x,t) will be dominated by a constant, which is an L^1 function on a bounded interval, and we can interchange integration and limit to show this goes to 0 as h goes to 0.

This argument isn't so hot when the interval is unbounded because I'm not sure what L^1 function dominates a difference quotient of f minus the partial derivative of f in that case. Constants do it if the partial derivative of f is uniformly continuous (with respect to t) but a constant isn't an L^1 function on the line.

Can you please state a formal theorem, with the appropriate hypotheses, for when you can justify differentiating under the integral sign for improper integrals? I'd prefer a proof using the Lebesgue dominated convergence theorem, though any proof is OK.

Thanks. —Preceding unsigned comment added by 180.216.2.24 (talk) 03:55, 22 April 2011 (UTC)
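For what it's worth, here is one standard formulation along exactly these lines, stated as a sketch (the hypotheses below are the usual measure-theoretic ones; other variants exist). Suppose f(x,t) is integrable in x for each t in an open interval I, that ∂f/∂t(x,t) exists for almost every x and every t in I, and that there is a single g ∈ L^1 with |∂f/∂t(x,t)| ≤ g(x) for all t in I and almost every x. Then t ↦ ∫ f(x,t) dx is differentiable on I and

    d/dt ∫ f(x,t) dx = ∫ ∂f/∂t (x,t) dx.

Sketch of proof: by the mean value theorem, |(f(x,t+h) − f(x,t))/h| = |∂f/∂t(x, ξ)| ≤ g(x) for some ξ between t and t+h, so the difference quotients are dominated by g, and the dominated convergence theorem lets you pass the limit h → 0 inside the integral. This works equally well on a bounded or unbounded interval, which is exactly where dominating by a constant breaks down.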

Weak Nullstellensatz

Hi again guys. Sorry for all the questions. This is the last one. I know the weak Nullstellensatz, but I'm not sure how to prove the converse. Specifically, why is it true that if K is a field and (a_1,...,a_n) is an n-tuple of elements of K, then the ideal (generated by x_1 - a_1, ..., x_n - a_n):

(x_1 - a_1,..., x_n - a_n) in K[x_1,...,x_n]

is maximal? I know that it is contained in the kernel of the evaluation homomorphism that takes a polynomial in x_1,...,x_n to its value at (a_1,...,a_n), but how do we show equality in this inclusion? That is, how do we show that a polynomial in n variables that vanishes at (a_1,...,a_n) is in the ideal (x_1 - a_1,...,x_n - a_n)? Thanks guys. I really appreciate your help. :) —Preceding unsigned comment added by 180.216.2.24 (talk) 03:58, 22 April 2011 (UTC)

The proof of the last question: how do we show that a polynomial in n variables that vanishes at (a_1, …, a_n) is in the ideal (x_1 − a_1, …, x_n − a_n)? Well, I think that can be proved using Hadamard's lemma. Although it might only be valid when the field is R or C; I'm not sure. Fly by Night (talk) 17:18, 22 April 2011 (UTC)
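For what it's worth, over an arbitrary field the same conclusion follows from a purely algebraic expansion, with no appeal to Hadamard's lemma (a sketch, not taken from the thread): writing each x_i as a_i + (x_i − a_i) and expanding the monomials of f, every term containing at least one factor (x_i − a_i) can be collected, so

    f(x_1, …, x_n) = f(a_1, …, a_n) + Σ_{i=1}^{n} (x_i − a_i) h_i(x_1, …, x_n)

for suitable polynomials h_i in K[x_1, …, x_n]. If f vanishes at (a_1, …, a_n), the constant term is 0 and f lies in the ideal (x_1 − a_1, …, x_n − a_n).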
Factoring out by the ideal identifies the x_i with the scalars a_i. Hence the composite

    K → K[x_1, …, x_n] → K[x_1, …, x_n]/(x_1 − a_1, …, x_n − a_n)

is surjective. But, since K is a field and 1 is clearly not in the ideal (x_1 − a_1, …, x_n − a_n), this is also injective, and so is an isomorphism. Sławomir Biały (talk) 00:45, 23 April 2011 (UTC)
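A small computational illustration of that evaluation/quotient picture (a SymPy sketch with an arbitrarily chosen polynomial and point, purely for illustration): dividing by the generators x_i − a_i leaves exactly the value at (a_1, …, a_n) as the remainder, so the remainder is 0 precisely when the polynomial vanishes there, i.e. precisely when it lies in the ideal.

    from sympy import symbols, reduced

    x, y = symbols('x y')
    a1, a2 = 2, -1                                # an example point (a_1, a_2)

    f = x**2*y + 3*x - y**2                       # an arbitrary example polynomial
    q, r = reduced(f, [x - a1, y - a2], x, y)     # f = q1*(x - a1) + q2*(y - a2) + r
    print(r)                                      # 1, which equals f(2, -1)
    print(r == f.subs({x: a1, y: a2}))            # True

    # A polynomial vanishing at (2, -1) reduces to remainder 0, so it lies in (x - 2, y + 1):
    g = f - f.subs({x: a1, y: a2})
    print(reduced(g, [x - a1, y - a2], x, y)[1])  # 0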

Cool, thanks Slawomir! I should have observed that, since the Noether normalization lemma implies the weak Nullstellensatz and the proof of that implication uses essentially the same idea as what you (Slawomir) said. —Preceding unsigned comment added by 180.216.2.24 (talk) 01:30, 23 April 2011 (UTC)

Fourier transform of Schwartz functions

Can anyone give a brief overview of why it is fundamentally convenient to study the Fourier transform on the Schwartz functions and then extend it to other function spaces like L^1? I understand that convergence problems with the integrals are avoided because Schwartz functions decay really rapidly at infinity, but what other uses does this have? —Preceding unsigned comment added by 180.216.2.24 (talk) 04:01, 22 April 2011 (UTC)

Our article on Schwartz functions gives some properties. The key property, though, is probably that the Fourier transform of a Schwartz function is another Schwartz function. This does not hold for other spaces like L^1. Invrnc (talk) 12:59, 22 April 2011 (UTC)
Everything you wish were true of the Fourier transform actually is true of the Fourier transform of Schwartz functions. It's given as an integral (unlike the FT on L^2), and its inverse is also an integral (unlike the FT on L^1). It is continuous in the Schwartz topology (like continuity on L^2). You can differentiate under the integral sign. It is easy to show that it satisfies the convolution identity. Finally, it allows you to define the Fourier transform of any tempered distribution by taking the transpose. Sławomir Biały (talk) 13:03, 22 April 2011 (UTC)
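A small SymPy/mpmath illustration of two of those points (my own sketch, using the Gaussian e^{−πx²} as the Schwartz function; SymPy's convention is FT(f)(k) = ∫ f(x) e^{−2πixk} dx): the transform and inverse transform are honest integrals that land back in the Schwartz space, and the derivative rule FT(f')(k) = 2πik·FT(f)(k) is exactly what one uses for derivatives of Schwartz functions.

    from sympy import symbols, exp, pi, fourier_transform, inverse_fourier_transform
    import mpmath as mp

    x, k = symbols('x k', real=True)

    f = exp(-pi * x**2)                           # a Schwartz function
    F = fourier_transform(f, x, k)
    print(F)                                      # exp(-pi*k**2): again a Schwartz function
    print(inverse_fourier_transform(F, k, x))     # exp(-pi*x**2): the inverse is again an integral

    # Numerical spot check of the derivative rule FT(f')(k) = 2*pi*i*k*FT(f)(k) at k = 0.7,
    # which is the identity used when transforming derivatives of Schwartz functions.
    k0 = 0.7
    fprime = lambda t: -2*mp.pi*t*mp.exp(-mp.pi*t**2)                       # f'(x)
    lhs = mp.quad(lambda t: fprime(t)*mp.exp(-2j*mp.pi*k0*t), [-mp.inf, mp.inf])
    rhs = 2j*mp.pi*k0*mp.exp(-mp.pi*k0**2)
    print(abs(lhs - rhs) < 1e-10)                                           # True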

Algebraic Topology

How do you compute the homology groups and Betti numbers of the 2-sphere? Mathematics2011 (talk) 16:01, 22 April 2011 (UTC)

You can use the Mayer-Vietoris sequence. Alternatively, triangulate the sphere and compute the homology groups combinatorially. Sławomir Biały (talk) 16:12, 22 April 2011 (UTC)
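To make the combinatorial route concrete, here is a short sketch of my own (not part of the thread): take the boundary of a tetrahedron as a triangulation of S^2, build the two simplicial boundary matrices, and read off the Betti numbers over Q from their ranks (these agree with the ranks of the integer homology groups here, since there is no torsion).

    from itertools import combinations
    from sympy import zeros

    # Boundary of a tetrahedron: the simplest triangulation of the 2-sphere.
    verts = [0, 1, 2, 3]
    edges = list(combinations(verts, 2))          # 6 edges
    faces = list(combinations(verts, 3))          # 4 triangular faces

    def boundary_matrix(simplices, faces_below):
        """Matrix of the simplicial boundary map, with the usual alternating signs."""
        M = zeros(len(faces_below), len(simplices))
        for j, s in enumerate(simplices):
            for i in range(len(s)):
                face = s[:i] + s[i+1:]            # drop the i-th vertex
                M[faces_below.index(face), j] = (-1)**i
        return M

    d1 = boundary_matrix(edges, [(v,) for v in verts])   # C_1 -> C_0
    d2 = boundary_matrix(faces, edges)                   # C_2 -> C_1

    b0 = len(verts) - d1.rank()                  # dim C_0 - rank d1
    b1 = (len(edges) - d1.rank()) - d2.rank()    # dim ker d1 - rank d2
    b2 = len(faces) - d2.rank()                  # dim ker d2 (there is no d3)
    print(b0, b1, b2)                            # 1 0 1, i.e. H_0 = Z, H_1 = 0, H_2 = Z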
[Image caption: For a torus, T, there is one maximum, two saddles, and one minimum; thus H_0(T,Z) ≅ Z, H_1(T,Z) ≅ Z^2 and H_2(T,Z) ≅ Z.]
Alternatively, you could use some Morse homology. Imagine the sphere sitting on the xy-plane and consider the height function, given by h(x,y,z) = z, restricted to the sphere, i.e. h|_{S^2} : S^2 → R. The local maxima, the saddle points, and the local minima give us the homology groups. The North Pole is the only local maximum, there are no saddle points, and the South Pole is the only local minimum. That tells us that the homology groups are

    H_0(S^2, Z) ≅ Z,   H_1(S^2, Z) = 0,   H_2(S^2, Z) ≅ Z.
The same method works for the torus, and in fact for any compact, orientable surface of genus g. This tells us that the homology groups over Z are

    H_0 ≅ Z,   H_1 ≅ Z^{2g},   H_2 ≅ Z.
The Betti numbers are given by the ranks of the homology groups. In the case of the orientable, compact surface of genus g, the Betti numbers are 1, 2g and 1. The Euler characteristic of an orientable, compact surface of genus g, say M, is given by the alternating sum of the Betti numbers; i.e. χ(M) = 1 − 2g + 1 = 2 − 2g. Fly by Night (talk) 19:16, 22 April 2011 (UTC)
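As a quick check of that formula against the examples above: for the sphere (g = 0) the Betti numbers are 1, 0, 1, so

    χ(S^2) = 1 − 0 + 1 = 2 = 2 − 2·0,

and for the torus (g = 1) they are 1, 2, 1, so

    χ(T) = 1 − 2 + 1 = 0 = 2 − 2·1,

consistent with the homology groups given above.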

thank you very much... Mathematics2011 (talk) 08:29, 24 April 2011 (UTC)