Wikipedia:Reference desk/Archives/Mathematics/2011 April 17

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 17[edit]

Sum of powers[edit]

Hi. At some point I expect that I will encounter a problem as part of which I will have to find the sum of x^n, n being any integer (though hopefully not too large), when I might not have access to a graphing calculator or computer. I know how to find the formula for the sum of x^n if I know the nth Bernoulli number, but is there a way to do it without memorizing the Bernoulli numbers? (Knowing my luck, I'll encounter a problem where I have to find the sum of x^n when I've only memorized the Bernoulli numbers up to B_{n-1} ;) 72.128.95.0 (talk) 01:36, 17 April 2011 (UTC)[reply]

Well, if you're stuck on a desert island, for example, so you have lots of time, you can just make a list of the first few sums and then use something like the method of forward differences to guess a formula. (See Finding a formula for a sequence of numbers for a simple introduction to that method.) Once you have a guess for the formula, it's easy to prove by induction that it's correct. —Bkell (talk) 01:55, 17 April 2011 (UTC)[reply]
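The forward-difference bookkeeping described above is mechanical enough to sketch in a few lines of Python. This is an illustration only (the function names are mine, not from any library), using the sums of squares 0, 1, 5, 14, 30, … as the example sequence:

```python
from math import comb

def forward_differences(seq):
    """Build the table of successive forward differences of seq."""
    table = [list(seq)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return table

def newton_value(table, n):
    """Newton's forward-difference formula at n:
    f(n) = sum over j of (j-th difference at 0) * C(n, j)."""
    return sum(row[0] * comb(n, j) for j, row in enumerate(table))

# Sums of squares: 0, 1, 5, 14, 30, 55, 91, ...
sums = [sum(k * k for k in range(m + 1)) for m in range(7)]
table = forward_differences(sums)
# The 4th differences vanish, so the guessed formula is a cubic in n ...
print(table[4])
# ... and Newton's formula reproduces (and extrapolates) the sequence.
print([newton_value(table, n) for n in range(10)])
```

Once the differences vanish, the guessed polynomial can then be confirmed by induction, as noted above.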


Any such method could be used to compute the Bernoulli numbers :) So, the question is equivalent to how to compute the Bernoulli numbers in the most convenient way, which depends on what resources you want to use (mental arithmetic, paper and pencil, or a computer program, but obviously not Mathematica or Wolfram Alpha, as that would be cheating).
There exists a simple method allowing you to find the coefficients of a series expansion to order x^(2n-1) if you know them up to order x^(n-1), so you can double the number of Bernoulli numbers in each step. But this method is only convenient if you use at least an ordinary calculator plus paper and pencil. Count Iblis (talk) 01:56, 17 April 2011 (UTC)[reply]
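For the computer-program route, one convenient way to generate the Bernoulli numbers is the standard recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1, with B_0 = 1. A minimal Python sketch (my own code, exact rational arithmetic, B_1 = -1/2 convention):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] via sum_{j=0}^{m} C(m+1, j) * B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-s, m + 1))
    return B

print(bernoulli(8))  # [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```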


If you need B_n for very large n, you can use this relation:

B_{2n} = (−1)^{n+1} 2 (2n)! ζ(2n) / (2π)^{2n}

Then you use that for large n, the zeta function ζ(2n) is very close to 1. Count Iblis (talk) 02:01, 17 April 2011 (UTC)[reply]
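The asymptotic is already accurate at quite modest n. A quick Python check, comparing the known value |B_10| = 5/66 against the formula with the ζ(10) factor simply dropped:

```python
from math import pi, factorial

exact = 5 / 66                                # |B_10|, known exactly
approx = 2 * factorial(10) / (2 * pi) ** 10   # asymptotic with zeta(10) ~ 1
print(approx / exact)  # ~ 1/zeta(10), already within 0.1% of 1
```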

(edit conflict) There is an inductive process that I derived in secondary school (a.k.a. high school in the U.S.). Let n be an integer, and imagine n^2 on the number line. We want to get from 0 to n^2. To do that, we go from 0^2 to 1^2, then from 1^2 to 2^2, then from 2^2 to 3^2, …, then from (n − 1)^2 to n^2. The difference between (k − 1)^2 and k^2 is exactly k^2 − (k − 1)^2 = 2k − 1. Adding up all the gaps, we get

n^2 = Σ_{k=1}^{n} (2k − 1) = 2 Σ_{k=1}^{n} k − n.

Working with the far left and far right expressions, then simplifying, gives:

Σ_{k=1}^{n} k = (n^2 + n)/2 = n(n + 1)/2.
You can use the same "add up the gaps" method to evaluate the sums of linear, quadratic, cubic, quartic, quintic, …, powers, although each time you will need to know the formulas for the lower-order cases. For example: to sum the squares, you will need the formula for summing the linear terms; to sum the cubes, you will need the formulas for summing the linear and square terms. It's not very fancy, but it's a place to start :-) Fly by Night (talk) 02:35, 17 April 2011 (UTC)[reply]
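The same telescoping trick is easy to script. This sketch (my own illustration, using exact rationals) gets the sum of squares from the known linear formula via the expansion k^3 − (k − 1)^3 = 3k^2 − 3k + 1:

```python
from fractions import Fraction

def S1(n):
    """Sum 1 + 2 + ... + n, from the gaps argument above."""
    return Fraction(n * (n + 1), 2)

def S2(n):
    """Summing k^3 - (k-1)^3 = 3k^2 - 3k + 1 over k = 1..n telescopes to
    n^3 = 3*S2(n) - 3*S1(n) + n; solve for S2(n)."""
    return (Fraction(n) ** 3 + 3 * S1(n) - n) / 3

for n in range(8):
    assert S2(n) == sum(k * k for k in range(n + 1))
print(S2(10))  # 385
```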

Note the simple analogous formula Σ_{k=0}^{n−1} C(k, j) = C(n, j + 1). The binomial coefficient C(k, j) is the basic degree-j polynomial for discrete work, where the power x^j is the basic degree-j polynomial for continuous work. Bo Jacoby (talk) 07:47, 17 April 2011 (UTC).[reply]
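This discrete analogue (the hockey-stick identity, on my reading of the formula meant above) is easy to spot-check with Python's math.comb:

```python
from math import comb

# Discrete analogue of integrating x^j: summing the basic degree-j
# polynomial C(k, j) over k = 0..n-1 raises the degree by one.
for j in range(5):
    for n in range(12):
        assert sum(comb(k, j) for k in range(n)) == comb(n, j + 1)
print(sum(comb(k, 2) for k in range(10)), comb(10, 3))  # both 120
```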

The conceptually simplest way is to assume the answer is a degree n+1 polynomial (consider what happens if you replace the sum with an integral). So plot n+2 points by direct calculation, then find the Lagrange polynomial that fits them. 69.111.194.167 (talk) 17:03, 17 April 2011 (UTC)[reply]
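As a concrete sketch of this approach (my own code, exact rationals): for the sum of cubes we assume a degree-4 polynomial, take five points by direct calculation, and Lagrange-interpolate:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the Lagrange interpolating polynomial through points at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Five points of n -> sum of cubes up to n (degree 4, so 5 points pin it down).
pts = [(m, sum(k ** 3 for k in range(m + 1))) for m in range(5)]
for n in range(5, 12):
    assert lagrange_eval(pts, n) == sum(k ** 3 for k in range(n + 1))
print(lagrange_eval(pts, 10))  # 3025 = (10*11/2)^2
```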


My favorite way to compute this would be as follows. We start with the geometric series:

Σ_{k=0}^{x} e^{kt} = (e^{(x+1)t} − 1)/(e^t − 1).

Expanding the left-hand side term by term using e^{kt} = Σ_n k^n t^n/n!, the coefficient of t^n in the series expansion times n! is thus the answer, Σ_{k=0}^{x} k^n. Count Iblis (talk) 17:36, 17 April 2011 (UTC)[reply]
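A small Python check of this generating-function route (my own sketch): cancel a factor of t, divide the two series with exact rationals, and confirm that n!·c_n recovers the power sum:

```python
from fractions import Fraction
from math import factorial

def power_sum_coeffs(x, N):
    """Series coefficients c_0..c_N of (e^{(x+1)t} - 1)/(e^t - 1).
    After cancelling a factor of t, numerator and denominator have
    coefficients A_j = (x+1)^{j+1}/(j+1)! and D_j = 1/(j+1)!."""
    A = [Fraction((x + 1) ** (j + 1), factorial(j + 1)) for j in range(N + 1)]
    D = [Fraction(1, factorial(j + 1)) for j in range(N + 1)]
    c = []
    for n in range(N + 1):
        # Standard series division; D[0] = 1, so no final division needed.
        c.append(A[n] - sum(D[i] * c[n - i] for i in range(1, n + 1)))
    return c

x, N = 10, 5
c = power_sum_coeffs(x, N)
for n in range(N + 1):
    assert factorial(n) * c[n] == sum(k ** n for k in range(x + 1))
print(factorial(2) * c[2])  # 385 = 0^2 + 1^2 + ... + 10^2
```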

Lyapunov exponent of a Rossler oscillator[edit]

How can I calculate the Lyapunov exponents of a Rossler system? —Preceding unsigned comment added by 130.102.158.15 (talk) 09:16, 17 April 2011 (UTC)[reply]

I don't think you can calculate them exactly (I might be wrong). There are a couple of standard numerical techniques for estimating Lyapunov exponents - the article talks about this a little; the one they mention involving Gram-Schmidt orthonormalization is probably the simplest. This paper gives a pretty good explanation, but it isn't open-access. 81.98.38.48 (talk) 10:26, 17 April 2011 (UTC)[reply]
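For concreteness, here is a minimal sketch (my own code, not from the paper mentioned) of the standard Benettin-style procedure: integrate the Rössler system together with three tangent vectors, Gram-Schmidt orthonormalize periodically, and average the accumulated log norms. The classic chaotic parameters a = b = 0.2, c = 5.7 are assumed; the spectrum should come out roughly (0.07, 0, −5.4).

```python
import math

A, B, C = 0.2, 0.2, 5.7  # classic chaotic Rossler parameters

def deriv(u):
    """u = (x, y, z) followed by three tangent vectors; the tangent
    dynamics use the Jacobian [[0,-1,-1],[1,A,0],[z,0,x-C]] of the flow."""
    x, y, z = u[0], u[1], u[2]
    du = [-y - z, x + A * y, B + z * (x - C)]
    for k in range(3):
        vx, vy, vz = u[3 + 3 * k: 6 + 3 * k]
        du += [-vy - vz, vx + A * vy, z * vx + (x - C) * vz]
    return du

def rk4(u, dt):
    k1 = deriv(u)
    k2 = deriv([a + 0.5 * dt * b for a, b in zip(u, k1)])
    k3 = deriv([a + 0.5 * dt * b for a, b in zip(u, k2)])
    k4 = deriv([a + dt * b for a, b in zip(u, k3)])
    return [a + dt / 6 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(u, k1, k2, k3, k4)]

def gram_schmidt(vectors):
    """Orthonormalize; return the new basis and the pre-normalization norms."""
    basis, norms = [], []
    for v in vectors:
        for w in basis:
            d = sum(a * b for a, b in zip(v, w))
            v = [a - d * b for a, b in zip(v, w)]
        n = math.sqrt(sum(a * a for a in v))
        basis.append([a / n for a in v])
        norms.append(n)
    return basis, norms

dt, renorm = 0.01, 10
u = [1.0, 1.0, 1.0] + [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
for _ in range(5000):           # transient: settle onto the attractor
    u = rk4(u, dt)
u[3:] = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]  # reset tangent frame
logs = [0.0, 0.0, 0.0]
steps = 30000                   # 300 time units of measurement
for i in range(steps):
    u = rk4(u, dt)
    if (i + 1) % renorm == 0:
        basis, norms = gram_schmidt([u[3:6], u[6:9], u[9:12]])
        for k in range(3):
            logs[k] += math.log(norms[k])
        u[3:] = basis[0] + basis[1] + basis[2]
lams = [s / (steps * dt) for s in logs]
print(lams)
```

Longer runs and smaller steps refine the estimates; the sign pattern (+, 0, −) is the signature of chaos here.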

(Another!) graph theory question[edit]

Hello everyone,

I'm aware you've had a lot of graph theory questions on the desk recently, but only the last one was from me so hopefully you won't mind if I ask another! There are about 20 half-hour questions between the last one I asked and this one, so it's not like I'm coming to you at a moment's trouble at any rate :)

I've done the first two parts of the following, but I'm stuck on the third:

Brooks’ Theorem states that if G is a connected graph then χ(G) <= ∆(G) unless G is complete or is an odd cycle, where χ(G) is the chromatic number and ∆(G) the maximal vertex degree in G. Prove the theorem for 3-connected graphs G. (Done)

Let G be a graph, and let d1 + d2 = ∆(G) − 1. By considering a partition V1, V2 of V (G) that minimizes the quantity d2e(G[V1]) + d1e(G[V2]), show that there is a partition with ∆(G[Vi]) <= di, i = 1, 2. (Done)

By taking d1 = 3, show that if a graph G contains no complete graph on 4 vertices K4 then χ(G) <= (3∆(G)/4)+ (3/2).

Now it's the last part I'm stuck on. I think we can safely assume G is not complete or an odd cycle (those cases are pretty trivial), so we can assume Brooks' theorem holds from earlier (even if G is not necessarily 3-connected, I'd say), but I'm not sure how to use the fact that G contains no K4 to drop the bound down. Taking d1 = 3, we get the first partition class having maximum degree no greater than 3. If we just try to rewrite χ(G) <= ∆(G) = 3∆(G)/4 + ((d1 + d2 + 1)/4) = 3∆(G)/4 + 1 + (d2/4), then to derive the result directly from Brooks' theorem we would need d2 <= 2, i.e. ∆(G) <= 6, which obviously isn't necessarily the case (and we haven't used the lack of K4, anyway), so I guess some additional insight must be necessary. Could anyone provide some suggestions? Tasterpapier (talk) 17:16, 17 April 2011 (UTC)[reply]

"transverse to a manifold"[edit]

Is it possible to explain in simple English what it means for "directions of a phase space" to be "transverse to a manifold"?

I have a rough idea of what a manifold is: it's like some N-dimensional surface on which things can move around... and a "phase space" is like a higher-dimensional space that the manifold is inside of... "transverse" is apparently like the "opposite" of "tangent", so does it simply mean that we're talking about the directions in the phase space that "hit" the manifold at whatever angle, rather than just brush up against it tangentially? But then if the manifold is bending around everywhere, a direction that is tangent to the manifold at one point might be transverse to the manifold at another point. So I'm a bit confused. —Preceding unsigned comment added by 130.102.158.15 (talk) 23:47, 17 April 2011 (UTC)[reply]

Transverse at a point of intersection means that they have no common tangent at that point. It doesn't matter what happens away from that point. Transverse, used by itself, means transverse at every point of intersection. But this is still a pointwise notion, applied to each point individually. I hope that makes sense. ;-) Sławomir Biały (talk) 00:40, 18 April 2011 (UTC)[reply]
Sławomir, that seems like a different definition to the one I'm familiar with. In the case of vector subspaces U and V of a vector space W, we say that U and V are transverse if W = U + V. (See: Arnold, V. I.; Gusein-Zade, S. M.; Varchenko, A. N. (1985), Singularities of Differentiable Maps, vol. 1, Birkhauser, pp. 30–32, ISBN 978-0817631871.) For example, in three-dimensional space two one-dimensional subspaces are never transversal, while two non-coincident two-dimensional subspaces are always transversal, and a one-dimensional and a two-dimensional subspace are transversal only if the one-dimensional does not lie in the two-dimensional. If U and V are submanifolds of a manifold W, then U and V are transverse at p ∈ U ∩ V if T_pW = T_pU + T_pV. We say that U and V are transverse if they are transverse at every p ∈ U ∩ V. If U ∩ V is empty then U and V are trivially transverse. Fly by Night (talk) 16:20, 18 April 2011 (UTC)[reply]
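In the linear-algebra picture this is just a rank condition, which is easy to sanity-check in Python (my own sketch, using exact rational row reduction rather than any library routine):

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors via Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def transverse(U, V, dim_W):
    """Subspaces U, V are transverse in W  <=>  U + V spans W  <=>
    the spanning vectors of U and V together have rank dim W."""
    return rank(U + V) == dim_W

xy_plane = [(1, 0, 0), (0, 1, 0)]
xz_plane = [(1, 0, 0), (0, 0, 1)]
z_axis = [(0, 0, 1)]
x_axis = [(1, 0, 0)]
print(transverse(xy_plane, xz_plane, 3))  # True: two distinct planes in R^3
print(transverse(xy_plane, z_axis, 3))    # True: a line not inside the plane
print(transverse(x_axis, z_axis, 3))      # False: two lines never span R^3
```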
I guess that is the usual definition. At least it agrees with what we have at transversality (mathematics). It must be I was thinking of the case of complementary dimensions only. Sławomir Biały (talk) 16:41, 18 April 2011 (UTC)[reply]