Wikipedia:Reference desk/Archives/Mathematics/2012 March 8

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 8

10 of 26 of 53!

So I'm a classics guy, not a maths guy. My teacher has given us 53 terms to know for a test, 26 of which will appear on the exam, TEN of which I'll need to answer. I think if I memorize any given 27 I'm covered 100%, but I'm wondering about other numbers. How many do I have to memorize to be, say, 90% assured that I'm covered? Is there a simple maths principle at work here to understand the relationships? 72.74.134.42 (talk) 02:27, 8 March 2012 (UTC)

How do you figure you're covered 100%? You memorize 27; the teacher selects 26; the ones the teacher picks turn out to be exactly the ones you didn't pick. You're hosed, right? --Trovatore (talk) 02:29, 8 March 2012 (UTC)
(ec) Maybe 27 was a typo for 37? If you have 37 memorized, there has to be an overlap of at least 10, which would cover you. See pigeonhole principle and inclusion-exclusion principle. The probabilistic questions can be answered as well, but maybe start with this. (I'm assuming your interest is mathematical; my advice for the actual test is to learn all the terms, and preferably not just by memorizing them.) --Trovatore (talk) 02:33, 8 March 2012 (UTC)
To be covered 100% you will need to remember all but 16 of them. That way any selection of 26 will contain at least 10 you can answer. Widener (talk) 02:32, 8 March 2012 (UTC)
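Spelling out the arithmetic (an aside, not from the original thread): if you memorize <math>n</math> of the 53 terms, at most <math>53 - n</math> of the teacher's 26 can be ones you skipped, so you are guaranteed to know at least <math>26 - (53 - n) = n - 27</math> of them; requiring <math>n - 27 \ge 10</math> gives <math>n \ge 37</math>, i.e. all but 16.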

Whoops!! OP here, yes, I meant 37. So how many must I learn (and yes, Trovatore, this is mostly for fun, I plan to learn them all!) to be covered, say, with a 90% chance? 72.74.134.42 (talk) 02:48, 8 March 2012 (UTC)

If the teacher picks the 26 terms at random, there are <math>\binom{53}{26}</math> different equally likely ways to pick which terms. Assuming you learn n of the 53 terms, the number of ways for the teacher to choose terms so that you know exactly k of them is <math>\binom{n}{k}\binom{53-n}{26-k}</math>. You fail to be covered if k is between 0 and 9, so the total probability of failure is <math>\sum_{k=0}^{9} \binom{n}{k}\binom{53-n}{26-k} \Big/ \binom{53}{26}</math>. According to Wolfram Alpha that probability is roughly 10% for n = 24 [1]. Rckrone (talk) 04:16, 8 March 2012 (UTC)
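A minimal Python sketch of the same hypergeometric sum (the function and variable names are mine, not from the thread); it reproduces the table below:

<syntaxhighlight lang="python">
from math import comb

def p_covered(n, total=53, drawn=26, needed=10):
    """Probability that at least `needed` of the `drawn` exam terms
    fall among the `n` terms memorized (hypergeometric tail)."""
    # comb() returns 0 for impossible selections, so summing k = 0..9 is safe
    fail = sum(comb(n, k) * comb(total - n, drawn - k) for k in range(needed))
    return 1 - fail / comb(total, drawn)

for n in range(18, 33):
    print(f"{n}: {100 * p_covered(n):.2f}%")
</syntaxhighlight>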


terms memorized   probability of being covered
18 34.89%
19 45.90%
20 56.97%
21 67.34%
22 76.41%
23 83.85%
24 89.54%
25 93.63%
26 96.37%
27 98.07%
28 99.06%
29 99.58%
30 99.83%
31 99.94%
32 99.98%

84.197.178.75 (talk) 04:20, 8 March 2012 (UTC)

Hmm, may be wrong, my calculation still gives a fair chance when you learn only 11 terms... debugging... 84.197.178.75 (talk) 04:30, 8 March 2012 (UTC) Never mind, I changed the lower subscript to avoid an error when calculating 28 and higher. The numbers are correct. 84.197.178.75 (talk) 05:02, 8 March 2012 (UTC)

Showing that something is a tensor

<math>\delta_{ij}</math> is defined as <math>\delta_{ij} = 1</math> if <math>i = j</math> and <math>\delta_{ij} = 0</math> otherwise. Show that <math>\delta_{ij}</math> is a second rank tensor.
What does this even mean? It's a bit like asking to show that <math>v_i</math> is a vector. Surely it just is? Widener (talk) 10:27, 8 March 2012 (UTC)

Are you sure <math>\delta^i{}_j</math> wasn't meant instead? Or do you mean Cartesian tensors where you don't have superscripts? Think what it would be after a coordinate transformation. Dmcq (talk) 11:00, 8 March 2012 (UTC)
It would once again be <math>\delta_{ij}</math>. How is that relevant? Widener (talk) 11:14, 8 March 2012 (UTC)
At least, I think that's the case: <math>A_{ik}A_{jl}\delta_{kl} = A_{ik}A_{jk} = \delta_{ij}</math>. Widener (talk) 11:19, 8 March 2012 (UTC)
I don't think the Kronecker delta is a tensor at all. Yes, it is equal to the metric tensor in a Cartesian co-ordinate system, but in other co-ordinate systems the components of the metric tensor (which is a tensor) are very different. If the Kronecker delta were a tensor then the metric tensor would be equal to the Kronecker delta in all co-ordinate systems. Gandalf61 (talk) 11:50, 8 March 2012 (UTC)
It is imperative to specify in a discussion such as this which indices are covariant and which are contravariant; thus a convention in which upper and lower indices indicate the difference is necessary to avoid confusion. A set of components (or coefficients) can only be considered to be those of a tensor if they transform covariantly or contravariantly under a change of basis, including under a non-orthogonal transformation. With this understanding, the Kronecker delta components <math>\delta^\mu{}_\nu</math> and <math>\delta_\mu{}^\nu</math> do form the components of a tensor with any basis (subject to the proviso that the basis and cobasis correspond), but the Kronecker delta components <math>\delta_{\mu\nu}</math> and <math>\delta^{\mu\nu}</math> do not form the components of a tensor, since under a general basis transformation the components will no longer be the Kronecker delta. One can always specify a tensor by specifying its components with respect to a specific basis and use transformations to deduce the components with respect to any other basis (and thus it "just is a tensor"), but simultaneously specifying components with respect to more than one basis does not specify a tensor if this is inconsistent with the tensor transformation rules. — Quondum 12:15, 8 March 2012 (UTC)
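A tiny numerical illustration of this point (a sketch with my own choice of matrix, not from the thread): transforming the all-lower-index components as if they were covariant tensor components, using a non-orthogonal basis change A, no longer yields the Kronecker delta.

<syntaxhighlight lang="python">
import numpy as np

# Transform delta_kl as A_ik A_jl delta_kl = (A A^T)_ij with a
# non-orthogonal (but invertible) A: the result is not the identity.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
delta = np.eye(2)
print(np.einsum('ik,jl,kl->ij', A, A, delta))   # [[5. 1.], [1. 1.]]
</syntaxhighlight>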
So, in effect, you can regard <math>\delta^i{}_j</math> as being the mixed tensor form of the metric tensor on a Riemannian manifold, since <math>\delta^i{}_j = g^{ik}g_{kj}</math>.
Is that correct? Gandalf61 (talk) 13:01, 8 March 2012 (UTC)
(ec) These terms "covariant" and "contravariant" have never been mentioned in the lectures; I have heard them in the parlance of some of the other mathematicians but I have no idea what they mean. From what I can tell, I think <math>\delta^i{}_j</math> indeed was probably meant. Why then would it be written <math>\delta_{ij}</math>? It also says that the question relates to Euclidean space of dimension 3; maybe that's relevant. I seem to recall hearing that covariant and contravariant indices coincide in Euclidean geometry, and the Kronecker delta may be equal to the metric tensor in that case. In general, I would like to know how you show that something is a tensor of a particular rank. Widener (talk) 13:10, 8 March 2012 (UTC)
It depends somewhat on the context of the question. To show that it's a "Cartesian tensor", it's enough to establish that <math>A_{ik}A_{jl}\delta_{kl} = \delta_{ij}</math> whenever A is an orthogonal transformation. Presumably this is what is intended. Sławomir Biały (talk) 13:17, 8 March 2012 (UTC)
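A quick numerical check of this criterion (my sketch; the random orthogonal matrix comes from a QR factorization):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix

# A_ik A_jl delta_kl = (A A^T)_ij, and orthogonality makes that delta_ij again
delta = np.eye(3)
transformed = np.einsum('ik,jl,kl->ij', A, A, delta)
print(np.allclose(transformed, delta))             # True
</syntaxhighlight>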
I just looked through the notes - yes, they are talking about Cartesian tensors. Sorry, I didn't realize there was a distinction between Cartesian tensors and tensors in general. I think any mention of "tensor" in the assignment must actually refer to Cartesian tensors, because I think they are the only type of tensors mentioned in the notes. Widener (talk) 13:31, 8 March 2012 (UTC)
It essentially means the same thing, but my description for it is that we are constrained to orthonormal bases. It does seem likely that this is what is intended. Then the required result does follow. (On Gandalf61's question: yes, <math>\delta^i{}_j = g^{ik}g_{kj}</math> in general, provided the bases are not mixed, i.e. different for the two indices.) — Quondum 14:08, 8 March 2012 (UTC)
Checking to see if it obeys the correct transformation law works if you have a particular type in mind, but if you don't have a particular type in mind, there are other ways. If you google "tests for tensor character" you will find some resources. Rschwieb (talk) 14:16, 8 March 2012 (UTC)

Thank you everyone! This has cleared up a lot of confusion. Widener (talk) 21:13, 8 March 2012 (UTC)

what is the actual percent advantage in the problem listed?

I made the mistake, I think, of thinking that I had a "ten percent advantage" over the house in the game listed (win 6.375 with 80% probability, lose 25 with 20% probability), since I calculated the expected value of that bet as 0.1 and in the context of probability 0.1 is ten percent.

But the context isn't probability, it's an actual dollar amount payoff!

So...what is the actual percent advantage here? --80.99.254.208 (talk) 10:51, 8 March 2012 (UTC)

I don't think "percent advantage" is a well defined thing. I would just say that the expected value is 0.1. If you were placing a stake at the beginning, then you could express the expectation as a percentage of that stake. Your problem could be rewritten as "Post a stake of 25. 20% of the time you get nothing back (ie. a loss of 25), 80% of the time you get 31.375 back (ie. a gain of 31.375-25=6.375)." In that context, your expectation is 25.1, which is 100.4% of your initial stake, so you have an expected return of 0.4%. --Tango (talk) 12:09, 8 March 2012 (UTC)[reply]
This can't be right though, Tango, because in the losing cases you DO owe the "casino" your loss. Gamblers say things like "a card counter at blackjack has a 1% advantage against the house" if he plays perfectly. Lots of people try to count cards but make mistakes, so if a casino sees someone counting cards at a blackjack table, they will let them: the people who only think they can count cards more than pay for the losses to those who actually can. So, what are these gamblers talking about? That 1% advantage against the house gives you a very good idea of how long it takes to exploit. If it were 25%, the perfect card counter would double his money after a few goes, and keep doing so, breaking the bank in short order. The idea is that a 1% advantage simply doesn't let you do that, and the casino can eat the loss and subsidize it with the poor card counters.
Returning to the present case. Since there are rounds, or goes, does it really matter if there is a bet you have to put up? I mean, isn't the game EXACTLY the same if you have to put in 25 dollars, and twenty percent of the time you lose it, eighty percent of the time you get 31.375 dollars back? That game must play out EXACTLY the same as this one. So, doesn't it make sense to talk about the advantage in percent terms as though this were the case? If equivalent games don't have the same percentage for or against the house, then it doesn't even make sense to talk about a percentage advantage! Surely literally equivalent games (round to round, long-term short-term) MUST have the same house advantage... How you account for it in your brain surely can't affect your advantage, otherwise you could account yourself a 20% edge (advantage) and start winning right away... --80.99.254.208 (talk) 12:44, 8 March 2012 (UTC)
Casino game: "The house edge or vigorish is defined as the casino profit expressed as a percentage of the player's original bet". So I tend to agree with the 0.4%. I must admit it wasn't as obvious as it is for most bets, for example roulette odds; maybe that's just because the winnings are less than the amount you risk losing, or because it was described in terms of net profit instead of payout. 84.197.178.75 (talk) 15:10, 8 March 2012 (UTC)
Yes, it is the same game, that was the point. To express something as a percentage, you need to have a denominator, i.e. it needs to be a percentage of something. The way you had expressed the game, there wasn't an obvious thing to express it as a percentage of, so I had to reword it while keeping the actual game the same. --Tango (talk) 16:29, 8 March 2012 (UTC)

The game isn't STRICTLY equivalent, as you can't play when you have 18 dollars left (you can't post), while you can still play the original game; in fact you could win enough times to get out of the bust zone (though that's unlikely). However, setting those differences aside: is the game round-to-round still the same if you have to post 75 and either get 50 back or 75 + 6.375? Would you still calculate 0.4% in this case? (And in all other such cases?) Or does the amount you arrived at depend on posting EXACTLY 25 dollars (which already isn't strictly equivalent to the game, but which plays out the same round to round, if you have enough bankroll to post it...)? --80.99.254.208 (talk) 20:48, 8 March 2012 (UTC)

There may not be a perfect definition; it depends on what you want to use it for. If you have to post 75 but always get at least 50 back, then for the house, the casino, this would make a difference, since relative to the amount of chips the customers have put up, their profit (or in this case loss) will be lower. For you, if the wager is fixed to one per round, it doesn't make much difference; you only need $50 extra starting capital.
The percentage on its own doesn't tell you everything; there's also the standard deviation. If in a lottery the jackpot were made so high that players have a 4% advantage, that wouldn't change the outcome for the 99.999999..% of players who don't get six numbers right. You'd have to play billions of times before you get something resembling a bell curve. 84.197.178.75 (talk) 23:10, 8 March 2012 (UTC)
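To put a number on that spread for the game in this thread (my own calculation, using the same win/lose figures): the single-round standard deviation is more than a hundred times the 0.1 edge.

<syntaxhighlight lang="python">
# One round: +6.375 with p = 0.8, -25 with p = 0.2
outcomes = [(6.375, 0.8), (-25.0, 0.2)]
mean = sum(x * p for x, p in outcomes)                    # ~0.1
sd = sum(p * (x - mean) ** 2 for x, p in outcomes) ** 0.5
print(mean, sd)                                           # ~0.1, ~12.55
</syntaxhighlight>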

vapour pressure

How can I convert a vapour pressure (in Pascals) of a liquid at a given temperature to work out what the density of the substance in air would be if it was in a closed system in equilibrium? I know the molecular weight of the substance. — Preceding unsigned comment added by 86.142.11.17 (talk) 21:27, 8 March 2012 (UTC)[reply]

Try posting on the Science Ref Desk. StuRat (talk) 21:29, 8 March 2012 (UTC)
The pressure P in pascal, divided by the absolute temperature T in kelvin, gives P/T in pascal per kelvin. Divide by the gas constant R in joule per kelvin per mole to obtain the concentration in mole per cubic metre. Multiply by the molecular weight in kg per mole to obtain the density in kg per cubic metre. Bo Jacoby (talk) 09:49, 9 March 2012 (UTC).
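Bo Jacoby's recipe as a short Python sketch (the water-at-25°C numbers are an assumed example, not from the question):

<syntaxhighlight lang="python">
R = 8.314     # gas constant, J/(mol*K)
P = 3170.0    # vapour pressure in Pa (water at 25 degrees C, assumed example)
T = 298.15    # absolute temperature in K
M = 0.018     # molar mass in kg/mol (water)

concentration = P / (R * T)   # mol per cubic metre
density = M * concentration   # kg per cubic metre
print(density)                # ~0.023 kg/m^3
</syntaxhighlight>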
Thanks. I got my question wrong; can I convert Pa to parts per million instead? I don't think I need to know the molecular weight for that, do I? Thanks again. — Preceding unsigned comment added by 86.133.235.147 (talk) 12:34, 10 March 2012 (UTC)
Maybe you can give the whole question, exactly as it is written, because it's not clear, at least to me, what exactly you mean by parts per million. If that's in air at standard atmospheric pressure (100 kPa), then your vapour pressure or partial pressure is ppm × 0.1 Pa. So 300 ppm is 30 Pa partial pressure, and then you repeat what Bo Jacoby said. 84.197.178.75 (talk) 19:39, 10 March 2012 (UTC)
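And the ppm step in the same style (assuming, as above, ~100 kPa total pressure):

<syntaxhighlight lang="python">
total_pressure = 100e3                 # Pa, standard atmospheric (assumed)
ppm = 300                              # example mixing ratio
partial = ppm * 1e-6 * total_pressure
print(partial)                         # 30.0 Pa, matching the figure above
</syntaxhighlight>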