Wikipedia:Reference desk/Archives/Mathematics/2012 February 28

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 28

differentiation operator "d" and partial derivatives

I want to clarify something about the "operators" for total and partial derivatives.

Say I have the function

and take the total derivative, "dY"

Not sure what I'm taking the derivative with respect to there, but anyway, this then gets rearranged

which results in

I'm just not sure why they switch from "d" to using the partial derivative operator "∂" at the end. Thorstein90 (talk) 06:51, 28 February 2012 (UTC)[reply]


By my understanding, the final "switch" you mention is simply incorrect, if Y and i are the coordinates with respect to which the partial derivatives are taken. I.e., ∂/∂Y is taken with Y varying and i held constant, and vice versa. Thus, ∂i/∂Y = ∂Y/∂i = 0. — Quondum 06:13, 28 February 2012 (UTC)[reply]
Now I'm even more confused. Should they be using the normal differentiation operator "d"? But what if i is also a function of other variables? Can this rearrangement just show how it varies with Y? This is copied from an academic paper, which I doubt would be simply wrong. Thorstein90 (talk) 06:55, 28 February 2012 (UTC)[reply]
The partial derivative of a function is defined as its derivative when varying one parameter and keeping the rest constant. From the problem statement, I had assumed that Y and i form a set of parameters in this sense. We can have distinct sets of variables, one set typically being functions of another. Is it clear from the paper which set contains Y and which contains i, and how they're related? Without this context, it's difficult to know how to interpret the final ∂i/∂Y. It would have made sense as di/dY. — Quondum 11:52, 28 February 2012 (UTC)[reply]
Puzzling indeed. At the outset Y and i looked like independent variables, because they both appear as parameters in the functions. But by computing any derivative of one with respect to the other they are suggesting some dependence. (If Y and i were truly independent then ∂Y/∂i = ∂i/∂Y = 0.) To me, the initial computation looks like it's taking Y as a function of i, and the final line suggests i is a function of Y. But this area is really not my forte... Rschwieb (talk) 14:02, 28 February 2012 (UTC)[reply]
I'm with Quondum on this one. Changing to partial differentiation there is just wrong. It is just di/dY. What one is saying is that if the two sides are equal in the first line, then i must vary when Y is varied according to the last equation. Dmcq (talk) 14:22, 28 February 2012 (UTC)[reply]
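A hedged sketch of that reading (purely illustrative, since the question's actual expressions are not reproduced above): suppose the first line asserts an identity of the form F(Y) = G(i) along the solution path. Differentiating both sides then forces i to move with Y:

F'(Y) dY = G'(i) di,   so   di/dY = F'(Y)/G'(i)   (provided G'(i) ≠ 0).

This is an ordinary derivative di/dY, not a partial derivative ∂i/∂Y, which would be zero if Y and i were treated as independent coordinates.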
When you said "Not sure what I'm taking the derivative with respect to there…", I recommend you take a look at our exterior derivative article. One way to think of it is to view Y as a differential zero-form on a surface, and dY as its exterior derivative, i.e. a differential one-form. (The only time I've seen the term "total derivative" used is in books about differential invariants of Lie group actions. For example P. J. Olver (1995). Equivalence, Invariants, and Symmetry. Cambridge University Press. ISBN 0521478111.) Fly by Night (talk) 23:16, 28 February 2012 (UTC)[reply]
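As a small illustration of that viewpoint (with generic coordinates u and v, not taken from the question): for a zero-form Y(u, v), the exterior derivative is the one-form

dY = (∂Y/∂u) du + (∂Y/∂v) dv,

which is what the "total derivative" notation dY usually abbreviates.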

Solutions to a transcendental equation

This might be a vague question, but can anything meaningful be said about the solutions to

A*sin(x)+B*cos(x)+C*sin(3*x)+D*cos(3*x)+E*sin(5*x)+F*cos(5*x) = 0 ?

Any conditions on A,B,C,D,E,F for real solution(s) to exist? Any clever ways to find a closed form solution? Thanks a ton! deeptrivia (talk) 04:23, 28 February 2012 (UTC)[reply]

It should be pretty clear that real solutions will always exist when the coefficients are real. This function has period 2π, is continuous, and changes sign at least once as x increases by π (unless all coefficients are zero). As to a closed form, perhaps try using trig identities to rewrite each term in terms of sin(x) and cos(x). — Quondum 06:30, 28 February 2012 (UTC)[reply]
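To spell out the sign-change step (writing f(x) for the left-hand side of the equation): since sin(k(x+π)) = −sin(kx) and cos(k(x+π)) = −cos(kx) for the odd values k = 1, 3, 5, we get

f(x + π) = −A·sin(x) − B·cos(x) − C·sin(3x) − D·cos(3x) − E·sin(5x) − F·cos(5x) = −f(x).

Hence for any x₀, either f(x₀) = 0 already, or f(x₀) and f(x₀ + π) have opposite signs and the intermediate value theorem gives a root between them.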

Thanks, Quondum. Surely, in terms of sin(x) and cos(x), we get:

16*F*cos(x)^5+16*sin(x)*E*cos(x)^4+(4*D-20*F)*cos(x)^3+(4*C-12*E)*sin(x)*cos(x)^2+(B-3*D+5*F)*cos(x)+(A-C+E)*sin(x)

Does this help in finding a solution? Any help is sincerely appreciated. deeptrivia (talk) 12:15, 28 February 2012 (UTC)[reply]
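If it helps, that expansion can be machine-checked; here is a minimal sketch using Python's sympy (not part of the original thread, and the symbol names are just placeholders):

import sympy as sp

x, A, B, C, D, E, F = sp.symbols('x A B C D E F')

# Left-hand side of the original equation.
original = (A*sp.sin(x) + B*sp.cos(x) + C*sp.sin(3*x) + D*sp.cos(3*x)
            + E*sp.sin(5*x) + F*sp.cos(5*x))

# The expansion quoted above, in terms of sin(x) and cos(x).
expanded = (16*F*sp.cos(x)**5 + 16*E*sp.sin(x)*sp.cos(x)**4
            + (4*D - 20*F)*sp.cos(x)**3 + (4*C - 12*E)*sp.sin(x)*sp.cos(x)**2
            + (B - 3*D + 5*F)*sp.cos(x) + (A - C + E)*sp.sin(x))

# The difference should simplify to zero if the expansion is correct.
print(sp.simplify(sp.expand_trig(original) - expanded))   # expected output: 0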

Substituting z = e^(ix) = cos(x) + i·sin(x), sin(x) = −i(z − z^(−1))/2, cos(x) = (z + z^(−1))/2, sin(3x) = −i(z^3 − z^(−3))/2, cos(3x) = (z^3 + z^(−3))/2, sin(5x) = −i(z^5 − z^(−5))/2, cos(5x) = (z^5 + z^(−5))/2, and multiplying by 2 gives the equation

−Ai(z − z^(−1)) + B(z + z^(−1)) − Ci(z^3 − z^(−3)) + D(z^3 + z^(−3)) − Ei(z^5 − z^(−5)) + F(z^5 + z^(−5)) = 0. In order to get rid of the negative exponents, the equation is multiplied by z^5:

−Ai(z^6 − z^4) + B(z^6 + z^4) − Ci(z^8 − z^2) + D(z^8 + z^2) − Ei(z^10 − 1) + F(z^10 + 1) = 0.

This equation is of fifth degree in w = z^2 = e^(2ix):

(F − Ei)w^5 + (D − Ci)w^4 + (B − Ai)w^3 + (Ai + B)w^2 + (Ci + D)w + (Ei + F) = 0.

There are standard numerical methods for finding the five roots of a quintic polynomial.

Bo Jacoby (talk) 12:27, 28 February 2012 (UTC).[reply]
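To illustrate that last step, here is a minimal numerical sketch in Python/NumPy (not from the thread; the coefficient values are arbitrary placeholders). It builds the quintic in w = e^(2ix), finds its roots, and reads off real solutions x from the roots lying on the unit circle:

import numpy as np

def f(x, A, B, C, D, E, F):
    # Left-hand side of the original equation.
    return (A*np.sin(x) + B*np.cos(x) + C*np.sin(3*x) + D*np.cos(3*x)
            + E*np.sin(5*x) + F*np.cos(5*x))

# Arbitrary example coefficients, purely for illustration.
A, B, C, D, E, F = 1.0, 0.5, -0.3, 0.2, 0.1, -0.4

# Coefficients of the quintic in w, highest power first:
# (F - Ei)w^5 + (D - Ci)w^4 + (B - Ai)w^3 + (Ai + B)w^2 + (Ci + D)w + (Ei + F) = 0
coeffs = [F - 1j*E, D - 1j*C, B - 1j*A, B + 1j*A, D + 1j*C, F + 1j*E]

for w in np.roots(coeffs):
    if abs(abs(w) - 1.0) < 1e-6:       # roots on the unit circle correspond to real x
        x = np.angle(w) / 2.0           # w = e^(2ix), so x = arg(w)/2 (and x + pi is a root too)
        print(f"x = {x:.6f},  f(x) = {f(x, A, B, C, D, E, F):.2e}")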

Umm, I think a few powers got lost there, but the substitution is good; rather use z = e^(ix/2). I think you end up with a quintic polynomial in z. If finding the roots of a quintic qualifies as "closed form", you've got your path to a solution; that's probably as close as you'll get (noting that solving a quintic polynomial in closed form is problematic). A similar substitution that keeps everything real is t = tan(x/2), sin x = 2t/(1+t^2), cos x = (1−t^2)/(1+t^2). But in the end, you will still have to find roots of a quintic polynomial. I would try the z substitution as being simpler. — Quondum 15:51, 28 February 2012 (UTC)[reply]

You are right, I made an embarrassing mistake. Corrected above. Sorry. Bo Jacoby (talk) 22:39, 28 February 2012 (UTC).[reply]

Solving a differential equation using the Green's function

First of all I have to consider given where is the Dirac delta function and calculate the Green's function. My solution to this is for and for . I guess I should check that this is correct first. However the part I don't know how to do is the next bit, which is to use it to solve for arbitrary g. How do I do this? Widener (talk) 05:55, 28 February 2012 (UTC)[reply]

I mean, I could solve it without using the Green's function. However, I am told that I have to use it somehow. Widener (talk) 05:57, 28 February 2012 (UTC)[reply]
Solving differential equations is the main use for Green's functions - our article explains how to do this in the general case, with a couple of examples. Maybe you could try working through that and come back if you have any problems? 130.88.73.65 (talk) 16:47, 1 March 2012 (UTC)[reply]
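For reference, the general recipe (sketched here in generic notation, since the specific operator and boundary conditions from the question aren't reproduced above): if the Green's function satisfies L G(x, s) = δ(x − s) together with the required boundary conditions, then

y(x) = ∫ G(x, s) g(s) ds

solves L y = g, because applying L under the integral gives L y(x) = ∫ δ(x − s) g(s) ds = g(x). So once G is known, the solution for an arbitrary right-hand side g is just this integral.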
I think the problem is that I didn't really have a proper understanding of what these Green's functions actually are. I have a better understanding now. Widener (talk) 01:56, 3 March 2012 (UTC)[reply]

Adding/subtracting sine waves of the same frequency

Hi all, what's the simplest formula for adding or subtracting 2 sine waves of the same frequency? I know that you can add or subtract two arbitrary sine waves and get a fairly complex formula like on this page (under Constructive and Destructive Interference), but if I understand correctly, two waves of the same frequency should result in a simple sine wave expressible in the basic form A·sin(x + φ). Is there a way to get that simple function for the resulting added (or subtracted) wave? Thanks! — Sam 63.138.152.135 (talk) 18:07, 28 February 2012 (UTC)[reply]

If it's possible then it'll be an application of the famous formulae
Fly by Night (talk) 23:27, 28 February 2012 (UTC)[reply]
See Phasor#Addition for the general case. You always end up with some shifted and scaled sine wave. Dmcq (talk) 12:26, 29 February 2012 (UTC)[reply]
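To make that concrete (a generic sketch, not taken from the linked pages): any sinusoid of a given frequency can be written as a·sin(x) + b·cos(x), so the sum or difference of two of them is again of that form, and

a·sin(x) + b·cos(x) = R·sin(x + φ),   with R = sqrt(a^2 + b^2) and φ = atan2(b, a).

Equivalently, in phasor terms, adding A1·sin(x + φ1) and A2·sin(x + φ2) gives R·sin(x + φ), where R·e^(iφ) = A1·e^(iφ1) + A2·e^(iφ2).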