Wikipedia:Reference desk/Archives/Mathematics/2016 February 15

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 15

Vector integration

In my physics class, we were working with integration to calculate the electric field at certain points due to hypothetical charged objects. For instance, we would be given a hoop of uniform charge density, and we'd be required to find the electric field at a point P (usually on the x or y axis). We'd do this by breaking the object into infinitesimal pieces of charge and integrating the field contributions of all those pieces at the point P. However, we were taught to integrate the components of the field vector, because simply integrating the magnitudes would not get us the resultant vector representing the field at that point (of course, if we did integrate the magnitudes, we'd get a number that could model a vector of that magnitude in the positive direction, but it would not be the same as the field, since vectors involve both magnitude and direction, and ordinary integration of magnitudes does not keep track of direction).

This got me thinking (and here's where the math component of my question comes in): is there a vector analogue of integration, such that the rules of vector addition, subtraction, etc. apply, in order to find such an integral vector? In the case of the electric field vector, say we had a positively-charged circular hoop centred about the origin, and we were finding the field at the origin, our point P. Each infinitesimal electric field vector could be given by a parametric equation like E(t) = ⟨−cos t, −sin t⟩, and if we could somehow integrate all such vectors from t = 0 to t = 2π, we'd get an integral vector of ⟨0, 0⟩ by symmetry. Is there such a thing, and if there is, what does the notation look like (and how could we model this example)? If I didn't make sense somewhere, I'd be more than happy to clarify something. Thank you! 70.54.113.74 (talk) 07:45, 15 February 2016 (UTC)

I Googled "integral of vector valued function" and got a few results which formally define what you're talking about. Basically, you can easily adapt the definition of an integral of a real valued function to an integral of a vector valued function, and the result is the same as integrating each component. There doesn't seem to be much coverage of this on Wikipedia though, unless I missed it somehow. In vector calculus you usually talk about ∫F(r)⋅dr as the line integral, which results in a number, but you'reyour example is more ∫F(r)ds which would be vector valued. --RDBury (talk) 14:14, 15 February 2016 (UTC)[reply]
@70.54.113.74: In general, you cannot get the overall magnitude of the electric field vector by integrating magnitudes of the infinitesimal electric field components. What is meant by this integration is a (component-wise) volume integral over the charged object of the infinitesimal electric field contributions from each infinitesimal volume element of the hoop. If the hoop is of negligible thickness then this computation can possibly be done by a line integral instead. If you have not yet learned multivariable calculus, now would be a good time to do so.--Jasper Deng (talk) 17:58, 19 February 2016 (UTC)
(And a remark on notation: if C denotes your hoop, q the charge at P, λ(t) the charge density as a function of your parameter t, and r(t) the distance from the point of the hoop at parameter t to P, you probably want F(P) = ∫_C kqλ(t)r̂(t)/r(t)² ds (with ds = |r′(t)| dt), where the individual unit vectors r̂(t) all point towards P.)--Jasper Deng (talk) 18:03, 19 February 2016 (UTC)
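To make the notation concrete, here is a sketch under stated assumptions (Python with NumPy; k = λ = R = q = 1, and P moved onto the hoop's axis so there is a known closed form to check against, namely E_z = kQz/(z² + R²)^(3/2) for a uniformly charged ring; with q = 1 the force integral is just the field E(P)):

```python
import numpy as np

k, lam, R = 1.0, 1.0, 1.0          # Coulomb constant, linear charge density, hoop radius
P = np.array([0.0, 0.0, 0.5])      # field point on the hoop's axis (z = 0.5)

def integrand(t):
    """k * lambda(t) * r_hat / r**2, times the Jacobian |r'(t)| = R."""
    elem = np.array([R * np.cos(t), R * np.sin(t), 0.0])  # charge element on the hoop
    r_vec = P - elem                                      # points from the element toward P
    r = np.linalg.norm(r_vec)
    return k * lam * (r_vec / r**3) * R

n = 20_000
ts = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
E = sum(integrand(t) for t in ts) * (2 * np.pi / n)       # component-wise Riemann sum

Q = lam * 2 * np.pi * R                                   # total charge on the hoop
E_axis = k * Q * P[2] / (P[2]**2 + R**2) ** 1.5           # known on-axis closed form

print(E)       # x, y components ~ 0; z component ~ E_axis
print(E_axis)  # ~ 2.2479
```

The x and y components cancel by symmetry, and the z component matches the closed form, which is the component-wise line integral both replies describe.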

Tensor products

Do tensor products in category theory and in linear algebra refer to the same thing? --Ancheta Wis   (talk | contribs) 08:18, 15 February 2016 (UTC)

See Tensor product and Monoidal category:
-emphasis added by me. Does that sufficiently answer your question? SemanticMantis (talk) 16:14, 15 February 2016 (UTC)
I would quibble with the phrasing there. Vector spaces aren't monoidal categories, but the category of vector spaces is a monoidal category. (As usual when intuitively trying to convey the sense of category theory, we may need to add some more precise words to ensure that "the category of vector spaces" is actually a meaningful concept, e.g., requiring "small vector spaces".) Sławomir Biały 16:51, 15 February 2016 (UTC)
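On the linear-algebra side, a small illustrative sketch (Python with NumPy; the vectors are made up): elementary tensors can be realized concretely via the Kronecker product, the dimensions of the spaces multiply, dim(V⊗W) = dim V · dim W, and bilinearity holds.

```python
import numpy as np

v = np.array([1, 2])       # a vector in V, with dim V = 2
w = np.array([3, 4, 5])    # a vector in W, with dim W = 3

# The elementary tensor v (x) w realized as a Kronecker product;
# it lives in V (x) W, of dimension 2 * 3 = 6.
vw = np.kron(v, w)
print(vw)                  # [ 3  4  5  6  8 10]

# Bilinearity, one of the defining properties of the tensor product:
assert np.array_equal(np.kron(2 * v, w), np.kron(v, 2 * w))
assert np.array_equal(np.kron(v + v, w), np.kron(v, w) + np.kron(v, w))
```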

Most Decreasing Direction

Let f : ℝⁿ → ℝ be a smooth function. Let x ∈ ℝⁿ be a point such that the gradient of f at x is zero. What is the vector along which the function decreases the most? Strictly speaking, what is arg min_{||y − x|| ≤ ε} f(y) for vanishingly small ε? Clearly, this vector is in the space spanned by the eigenvectors of the Hessian matrix of f with positive eigenvalues (as RDBury explained to me in my last question). But what is it? Is it just the eigenvector corresponding to the biggest eigenvalue? Or maybe some linear combination of the eigenvectors? עברית (talk) 19:30, 15 February 2016 (UTC)

First, the argument function applies to complex analysis and doesn't really work for ℝⁿ. If you're interested in a direction in ℝⁿ then it can be given as a unit vector. Also, you're taking the argument of a value of f, and that is a number, not a vector. Second, the function increases, not decreases, in the direction of positive eigenvectors, and it decreases in the direction of negative eigenvectors. But this by itself does not imply that the direction of maximum decrease would be in the direction of a negative eigenvector, or even in their span. (Here I'm taking a positive eigenvector to be an eigenvector corresponding to a positive eigenvalue, and similarly for negative eigenvectors.) Symbolically, I think what you're trying to get at is to find u with ||u|| = 1 which minimizes f(x + δu), where δ is assumed to be vanishingly small. If the Hessian is non-zero then you can ignore higher-order terms (see the earlier thread) and take f(x + δu) to be f(x) + δ²uᵀHu, where H is the Hessian (dropping a factor of ½ that doesn't affect the minimizer). So the problem reduces to minimizing uᵀHu subject to ||u|| = 1. By applying Lagrange multipliers, this occurs at an eigenvector of H, so take u to be an eigenvector corresponding to the smallest eigenvalue. The situation gets more complicated if the corresponding eigenspace has dimension >1 or the Hessian is 0; in that case you'd have to consider 3rd (or higher) order terms. --RDBury (talk) 16:02, 16 February 2016 (UTC)
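A numeric sketch of this recipe (Python with NumPy; the example function f(x, y) = x² − 2y², and hence its Hessian, are made up for illustration): diagonalize H at the critical point and take an eigenvector for the smallest eigenvalue.

```python
import numpy as np

# Illustrative example: f(x, y) = x**2 - 2*y**2 has a critical point at the
# origin (gradient zero there), with constant Hessian:
H = np.array([[ 2.0,  0.0],
              [ 0.0, -4.0]])

# Minimizing u^T H u over unit vectors is achieved at an eigenvector for the
# smallest eigenvalue; numpy's eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(H)
u = eigvecs[:, 0]     # unit eigenvector for the smallest eigenvalue

print(eigvals[0])     # -4.0
print(u)              # [0. 1.] up to sign: the y-axis is the steepest descent line
```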
One correction: You always have to consider 3rd (or higher) order terms, because if u is an eigenvector of H corresponding to the smallest eigenvalue where ||u|| = 1, then −u is an equally valid candidate and you need a higher derivative to determine a winner. Take for example the 1-D function f(x) = −x² + x³. Its gradient at x = 0 is zero, and the function decreases in both the negative and the positive direction, but it decreases more in the negative direction because f(−δ) < f(δ) for every sufficiently small δ > 0. Egnau (talk) 18:08, 16 February 2016 (UTC)
Good point. As δ → 0 the relative difference will also approach 0, but one value will still be larger than the other, and which one can be determined by looking at the 3rd-order terms. --RDBury (talk) 21:56, 16 February 2016 (UTC)
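A quick numeric check of Egnau's example (Python; δ = 10⁻³ chosen arbitrarily):

```python
def f(x):
    return -x**2 + x**3

delta = 1e-3
# Both directions descend (the quadratic term), but the cubic term
# breaks the tie in favour of the negative direction:
print(f(-delta))                     # ~ -1.001e-06
print(f(delta))                      # ~ -9.99e-07
assert f(-delta) < f(delta) < f(0.0)
```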
The expression arg min has nothing to do with complex analysis; what is meant is "the point where f is minimal." It is a fairly common notation in, for example, the calculus of variations. —Kusma (t·c) 20:27, 16 February 2016 (UTC)
Right. To clean up and generalize a bit: arg min_x f(x) is the value of x for which f achieves its minimum (likewise for arg max). I think this notation is more popular in computer science, because they don't care as much as mathematicians about how ill-defined this notation is. -- Meni Rosenfeld (talk) 21:35, 16 February 2016 (UTC)
Thanks, and sorry for the confusion; I wasn't familiar with that notation. The symbolism still wasn't quite correct though, since it was basically asking for the minimum of the function on a ball of radius ε, while the question as stated in English was about finding a direction from x. --RDBury (talk) 21:56, 16 February 2016 (UTC)
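Tying the two readings together, a brute-force sketch (Python with NumPy; the function f(x, y) = x² − y² + y³ is made up so that a cubic term breaks the ±u tie): take the arg min of f over a small circle around the critical point and read off the winning direction.

```python
import numpy as np

def f(x, y):
    return x**2 - y**2 + y**3   # gradient is zero at the origin; Hessian diag(2, -2)

delta = 1e-2
angles = np.linspace(0.0, 2 * np.pi, 10_000, endpoint=False)
values = f(delta * np.cos(angles), delta * np.sin(angles))

# Brute-force arg min of f over the circle of radius delta about the origin:
best = angles[np.argmin(values)]
print(np.cos(best), np.sin(best))   # ~ (0, -1): an eigenvector of the smallest
                                    # eigenvalue, with the sign picked by the cubic term
```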

Proposed way to find large factors of numbers that's faster than GNFS for numbers with 250-300 digits??

GNFS is the fastest known way to find prime factors of numbers with 200 or so digits. However, go to the following URL:

http://www.worldofnumbers.com/topic1.htm

Search for information that starts with the date of December 3, 2014. You'll see a 251-digit composite number that they say is very difficult to find prime factors of, even with GNFS. Do you know of any proposed way to find prime factors of numbers this large?? Georgia guy (talk) 22:00, 15 February 2016 (UTC)

Shor's Algorithm would factor it pretty fast, but you have to build a quantum computer before you can run it. Mnudelman (talk) 23:58, 15 February 2016 (UTC)
Are quantum computers still proposed as of 2016?? By 2025 there should be official info on whether any will soon exist; 2025 is the end of Windows 10. Georgia guy (talk) 00:04, 16 February 2016 (UTC)
People are still trying to make quantum computers. They have almost nothing in common with desktop computers and wouldn't replace them. (Also, 2025 isn't the end of support for Windows 10, just "Windows 10, released in July 2015", which may not even include the November 2015 update.) -- BenRG (talk) 06:03, 16 February 2016 (UTC)
Shor's algorithm is highly effective. :-)
  • In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5.
  • In 2012, the factorization of 15 was repeated.
  • Also in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with a quantum computer. Bubba73 You talkin' to me? 04:27, 16 February 2016 (UTC)
As far as I know, no number has ever been factored by Shor's algorithm. They've only done simpler calculations that by a strained argument can be connected to the factoring of composites that have a certain form. Arbitrarily large numbers have already been "factored" by quantum computers by that argument (any number in the infinite sequence A261074, and possibly others). See arXiv:1411.6758 for details. -- BenRG (talk) 06:03, 16 February 2016 (UTC)
Quoting from the paper, "unlike the implementations of Shor's algorithm performed thus far, these 4-qubit factorizations do not need to use prior knowledge of the answer." It makes it a little easier if you know the answer in advance. :-) Bubba73 You talkin' to me? 08:14, 16 February 2016 (UTC)
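For context, the quantum computer is only needed for one step of Shor's algorithm, order finding; the classical post-processing is easy. A sketch (Python; the bases a = 7 and a = 2 are chosen for illustration, and the order is found by brute force in place of the quantum step):

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n -- the step a quantum
    computer would perform via period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Given n and a base a coprime to n, try to extract factors."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2 == 1:
        return None                      # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                      # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical_part(15, 7))   # (3, 5)
print(shor_classical_part(21, 2))   # (7, 3)
```

The hard part, of course, is that order finding is believed to be classically infeasible for numbers of cryptographic size; the brute-force order function above is only a stand-in for the quantum step.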
As far as I can tell, the numbers that are difficult to factor in that post are the composites with two (or more) large prime factors. Testing primality is much faster than finding factors, so if there is just one large prime factor then you can quickly find all the others and then prove you're done, while if there are two large prime factors it may take an enormous time to find one. Probably any other factoring algorithm would suffer from the same problem. Large composites used in cryptography are chosen to be the product of two primes of roughly half as many digits each, to make factoring maximally difficult. -- BenRG (talk) 06:03, 16 February 2016 (UTC)
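A sketch of that asymmetry (Python; sympy's isprime assumed available, and the numbers chosen for illustration): trial division strips the small factors, and then a fast primality test certifies the big cofactor is prime without ever factoring it.

```python
from sympy import isprime

def peel_small_factors(n, bound=10**6):
    """Remove prime factors below `bound` by trial division."""
    small = []
    p = 2
    while p < bound and p * p <= n:
        while n % p == 0:
            small.append(p)
            n //= p
        p += 1
    return small, n

# 2**89 - 1 is a Mersenne prime, so this number has exactly one large factor:
small, cofactor = peel_small_factors(697 * (2**89 - 1))   # 697 = 17 * 41
print(small)                 # [17, 41]
print(isprime(cofactor))     # True -- done, with no need to factor further
```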
But picking a composite that is close to the square of a prime number wouldn't be very secure, if the program used to break it starts at the square root of the composite number and works down. (It's interesting how cracking a code gets into mind reading at this point.) StuRat (talk) 20:36, 17 February 2016 (UTC)
Not when "close to" means "having the same number of digits as". --JBL (talk) 21:50, 17 February 2016 (UTC)[reply]
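That "start at the square root and work down" strategy is essentially Fermat's factorization method, which is fast exactly when the two factors are close together; a sketch (Python, with made-up small factors):

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's method: search for n = a**2 - b**2 = (a - b)(a + b).
    Quick when n's two factors are both near sqrt(n); hopeless otherwise."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# Factors only 6 apart: found on the very first iteration.
print(fermat_factor(101_273 * 101_279))   # (101273, 101279)
```

As the last reply notes, two primes sharing a digit count are typically still far apart in value, so for a properly generated modulus this search from the square root would take astronomically many steps.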