Wikipedia:Reference desk/Archives/Science/2009 October 22

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 22

free energy of a substance and temperature

So I'm doing ideal solutions. Can I confirm that, generally, the free energy of a pure substance doesn't change as temperature increases, because the increases in dH and TdS cancel each other out? (that is, dH = Cp dT; TdS = T (Cp/T) dT). Basically, what is the temperature-dependence of the equilibrium position of Xa and Xb (concentrations) as temperature increases, with regard to a solution of A and B? Thus, in the free energy equation G = (Xa Ga) + (1-Xa) Gb + RT ((Xa ln Xa) + (1-Xa) ln(1-Xa)), I don't have to worry about heat capacities or anything, as only the magnitude of the RT term will change, resulting in a more negative minimum and a shift in the concentration position of G_min, but the positions of the other critical points don't change. Thanks! John Riemann Soong (talk) 04:11, 22 October 2009 (UTC)[reply]

Wait. What are A and B? Are A and B substances involved in an equilibrium reaction such that A <--> B? Or are A & B two different substances which are dissolving, such that A(s) <--> A(aq) and B(s) <--> B(aq)? Are A and B in the same solution, or are each in their own solution? Are A & B ions, or ionic solids, or molecular solids? I'm not sure I understand the situation you are describing fully here. --Jayron32 04:27, 22 October 2009 (UTC)[reply]
A & B are monoatomic species (well, I don't think being molecular changes things, since this is an ideal solution, but anyway) and you can adjust their concentration. Presumably you could take A out of solution and replace it with B and vice versa without changing the free energy of the universe. i.e. you're basically God, adjusting the molar concentrations of A and B and looking at what happens to the free energy. You're making A and B magically appear and disappear and looking at just the free energy of the solution. John Riemann Soong (talk) 04:31, 22 October 2009 (UTC)[reply]
If A & B are functionally identical, then they are functionally identical. However, if they are really different monatomic species, then each will bind slightly differently with the water molecules during solvation. For example, the size of the atom will affect the absolute entropy of the solvation complex, so exchanging A for B will result in a change in dS. Also, the solvation enthalpy will be different, as A and B will bind with water molecules with slightly different strengths, so dH will be different for the two. The deal is, there is not necessarily any connection between the size factor (affecting dS) and the solvation energy factor (affecting dH), so the exchange of A and B could result in any of 4 possible results (+ or - dS and + or - dH), so dG will change, but not in any predictable manner. Depending on how dS and dH change, dG's temperature dependence is also unpredictable without more information; for example a positive dS will result in a very different temperature dependence profile than a negative dS would. --Jayron32 04:44, 22 October 2009 (UTC)[reply]
Am I missing something here? It's an ideal solution? Enthalpy of solvation is zero, and size of the atom doesn't matter ... John Riemann Soong (talk) 05:05, 22 October 2009 (UTC)[reply]
Ah. Sorry, I missed that. Ideal solution contains a discussion of the thermodynamics of mixing in an ideal solution; free energy is dependent only on entropy (indeed, by definition, free energy is basically the inverse of entropy anyway); according to the equations there, dG is most definitely temperature dependent, as expected, since dS is also temperature dependent. --Jayron32 05:25, 22 October 2009 (UTC)[reply]
Well, there are two parts. There's the G_A and G_B part, and the RT(ln[stuff]) part. What I'm saying is that, except for the RT term, dG is not temperature dependent (let's say they don't mix and they're in separate containers). That is, say before mixing, as I heat two compounds up, their free energies will not change, especially with respect to each other. That is, as the enthalpy of each solution goes up (dH = Cp dT), this change is equally matched by the change in TdS = T (Cp/T) dT = Cp dT. So delta-G of a temperature change is zero EXCEPT for the mixing component. John Riemann Soong (talk) 05:50, 22 October 2009 (UTC)[reply]

Uhhh, someone help? What happens to G of a solution as you increase or decrease T, and what happens to its G_min concentration (for the cases G_a >> G_b (and vice versa), G_a = G_b, etc.)? It can't be that hard -- I just need someone to sort out the concepts for me. John Riemann Soong (talk) 17:50, 22 October 2009 (UTC)[reply]
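(A quick numerical aside that may help: the minimum of G(x) = x Ga + (1-x) Gb + RT(x ln x + (1-x) ln(1-x)) can simply be located by brute force and watched as T changes. This is only an illustrative sketch; the values of Ga and Gb are made up, and it assumes Python with NumPy installed.)

    # Locate the composition x that minimises the ideal-solution free energy
    # at a few temperatures.  Ga and Gb are hypothetical molar free energies.
    import numpy as np

    R = 8.314             # J/(mol K)
    Ga, Gb = 0.0, 2000.0  # made-up values for the two pure components, J/mol

    def x_at_Gmin(T):
        x = np.linspace(1e-6, 1 - 1e-6, 100001)
        G = x*Ga + (1 - x)*Gb + R*T*(x*np.log(x) + (1 - x)*np.log(1 - x))
        return x[np.argmin(G)]

    for T in (200.0, 300.0, 600.0):
        print(T, x_at_Gmin(T))

(Running it shows the minimum creeping toward x = 0.5 as T rises, i.e. the RT mixing term increasingly dominating the Ga - Gb difference.)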

Except for the RT term? That's a pretty big except. Except for not having any money, I am a rich man. Seriously. dG is temperature dependent. Raising the temperature of the system will change the free energy; the T is right there. Look:

ΔG_mix = nRT (Xa ln Xa + Xb ln Xb)
You can't just ignore the temperature. If I change the number that T represents, the number that deltaG represents changes too. I don't understand why you just want to ignore that. --Jayron32 20:29, 22 October 2009 (UTC)[reply]

I don't think he's looking to ignore T, rather he's trying to find exactly how G depends on T. Rckrone (talk) 01:06, 23 October 2009 (UTC)[reply]
It's a direct linear relationship. Look at the equation. DeltaG and T are both in there, and they appear on opposite sides of the equals sign, at the same level and to the same power. That's a direct and linear relationship. If you plot deltaG vs. T, you'll get a line with a slope of nR(Xa ln Xa + Xb ln Xb). This is pretty basic mathematical analysis here. The OP however seems to assert that deltaG is not dependent on T, which is plainly wrong. --Jayron32 01:48, 23 October 2009 (UTC)[reply]
I don't think that's what he was saying though. The formula you're talking about is specifically the free energy of mixing, which isn't the total Gibbs free energy. The OP asked first if ΔG is constant in T (I assume at constant P) for a pure substance (free energy of mixing is not involved). Then he asked, in the case of a mixture, if the only change in G with respect to T comes from the change in the free energy of mixing (the part you discussed), or if the other components of the total free energy will change as well.
I am really not familiar with this topic, but just going by the formula for Gibbs free energy given in the article, G = H - TS, we get dG = dH - SdT - TdS. At constant pressure dH = TdS as shown, but the SdT term remains, so dG/dT = -S rather than 0. This might be totally wrong, but the Gibbs–Helmholtz equation seems to corroborate it. Rckrone (talk) 06:41, 23 October 2009 (UTC)[reply]
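(For reference, that chain of reasoning written out with the standard relations:

    dG = dH - T dS - S dT        (from G = H - TS)
    dH = T dS + V dP             (fundamental relation for enthalpy)
    so  dG = V dP - S dT,  and at constant P:  dG/dT = -S

so the free energy of a pure substance is not temperature-independent; it falls as temperature rises, at a rate set by its entropy.)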
Thank you. I know how to deal with the mixing component -- it's just that I had no idea how to deal with the rise in the free energies of the pure substances, which are no longer constant. Also, how to sort out any possible interaction between the mixing free energy changes and the non-mixing free energy changes. I spent like 5 hours over this without any help. Thanks guys, I guess I'm going to get a 7.5/10 for my homework. FML. I was looking at a pure substance first so that it would then be easier to deal with a mixed substance. John Riemann Soong (talk) 09:37, 23 October 2009 (UTC)[reply]

Jayron: I wanted to ignore the mixing component for the time being and talk only about the pure components, because the mixing component is the part that's covered by my notes, the part that my group members are confident with, and the part that I essentially know how to do and did lots of laws-of-logs manipulation with. However, I don't know what to do with the rise in the free energy of the pure components -- the rise in G_a and G_b!!!!! That's why I wanted to ignore the temperature effects on the RT term, cuz the RT term I have already worked out. Awww, why did you have to do this to me? =( John Riemann Soong (talk) 09:39, 23 October 2009 (UTC)[reply]

Isn't p-chem fun stuff? --Jayron32 12:46, 23 October 2009 (UTC)[reply]
Haha, this is p-chem lite, not taking the real one yet ... I'm taking a materials science (phase transition thermodynamics) thing for my sequence. John Riemann Soong (talk) 15:18, 23 October 2009 (UTC)[reply]

How can I obtain pure sulfur, mercury, lead, and antimony?

Hello,

My friend is giving a presentation on Paracelsus. He's going to talk about how Paracelsus prescribed sulfur, mercury, lead, and antimony for ailments. Is there a way to obtain these substances for the presentation? It's just to show the audience what they look like. We don't need all of them (although that would help).

Thanks,

Drknkn (talk) 04:39, 22 October 2009 (UTC)[reply]

Sulfur and lead are relatively easy to find; both are readily available from Fisher Scientific, so they are probably available from your school's chemistry teacher. I've not seen a standard educational stockroom carry antimony, but Fisher does sell it. Mercury is readily available in things like thermometers. It's hard to find a jar of mercury for sale anymore, but mercury thermometers are still for sale, again from Fisher. --Jayron32 04:49, 22 October 2009 (UTC)[reply]
Cool. Thanks.--Drknkn (talk) 06:13, 22 October 2009 (UTC)[reply]
Some of those substances are toxic. Messing with them just for a presentation like that doesn't seem worthwhile. Show some video of them or something like that instead. 69.228.171.150 (talk) 07:19, 22 October 2009 (UTC)[reply]
I agree. Mercury for example is seriously toxic: do not touch it, expose it to the air, or leave it lying around. Best to have nothing to do with it. 78.146.56.118 (talk) 18:37, 22 October 2009 (UTC)[reply]
It is tempting to demonstrate the strange way mercury rolls around as a shiny and unexpectedly heavy liquid metal, and this is probably still done in some schools (it was in mine) in spite of the danger of poisoning. Either follow the advice not to handle mercury, or do so only in a glovebox. Cuddlyable3 (talk) 19:13, 22 October 2009 (UTC)[reply]
What you can buy depends on where in the world you live. In the USA, for example, you could probably obtain these chemicals; in the UK, don't bother (except for lead), as it's not possible for individuals to buy chemicals. I think most of Europe is also very difficult.  Ronhjones  (Talk) 19:45, 22 October 2009 (UTC)[reply]
Pure sulfur would most likely be a powder. Would that make it not a chemical? Googlemeister (talk) 21:23, 22 October 2009 (UTC)[reply]
No, being a chemical doesn't depend on the phase of matter. Rckrone (talk) 07:23, 23 October 2009 (UTC)[reply]
Isn't it somewhat unusual to use the word chemical for a pure element, though? I think compound would, certainly, and the words tend to be used interchangeably. --Trovatore (talk) 07:29, 23 October 2009 (UTC)[reply]
As in medicinal compound. Cuddlyable3 (talk) 15:24, 23 October 2009 (UTC)[reply]
Reading the sulfur article, it appears that ancient medicine (Paracelsus and earlier) used sulfur in creams to treat acne. The article also says that sulfur is still an active ingredient in some modern-day creams against acne. That makes for an interesting link between Paracelsus and modern medicine, as well as a sulfur compound that's safe and easily available. —Preceding unsigned comment added by EverGreg (talkcontribs) 07:39, 23 October 2009 (UTC)[reply]
See Paracelsus#Contributions_to_medicine. A nice saying about the use of mercury compounds[1][2] to treat syphilis[3] was "A night in the arms of Venus leads to a lifetime on Mercury". Cuddlyable3 (talk) 15:11, 23 October 2009 (UTC)[reply]

stereoscopy

How do I make a stereoscope? I have created a movie with a view to making a 3D picture, using the technique of two cameras placed some distance apart. Now I need a viewing glass. Please guide me in making it. Thanks in advance...

Hoping for a better result.

yours ravivarma,RAJA.M —Preceding unsigned comment added by Rjravivarma (talkcontribs) 08:22, 22 October 2009 (UTC)[reply]

I would start with our article stereoscopy, especially the references and external links. I also found a page 'Let us Build a Stereoscope', but that may be too elementary for your purposes. stereoscopy.com seems to have a wealth of information and resources. --LarryMac | Talk 14:46, 22 October 2009 (UTC)[reply]
It really depends on what kind of viewing situations you need. One very simple trick is to take the two pictures and print them side-by-side onto a single sheet of paper - or place them side-by-side on your computer screen. Now allow your eyes to cross slightly so that you see two copies of each image overlaid on each other...with practice, you can let the left-hand copy of the right-hand image lie exactly on top of the right-hand copy of the left-hand image. When you do this, the image in the middle of the row of three suddenly "pops out" in 3D - while the ones on either side look kinda hazy and transparent.
Slightly more complex - get a pair of those red/blue glasses (they are really red/cyan). Take one image into Photoshop or GIMP and delete its red layer, take the other and delete its green and blue layers - then use the 'composite' feature to put them back together again. Now you have an image that'll pop into 3D with red/cyan glasses.
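(If you'd rather script that channel swap than do it by hand, here's a minimal sketch using the Python Pillow library. The filenames are placeholders, and it assumes the usual convention of the red filter sitting over the left eye.)

    # Build a red/cyan anaglyph from two same-sized views.
    # "left.jpg" / "right.jpg" are placeholder filenames.
    from PIL import Image

    left = Image.open("left.jpg").convert("RGB")
    right = Image.open("right.jpg").convert("RGB")

    lr, lg, lb = left.split()     # left eye supplies the red channel
    rr, rg, rb = right.split()    # right eye supplies green and blue (cyan)

    anaglyph = Image.merge("RGB", (lr, rg, rb))
    anaglyph.save("anaglyph.jpg")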
You could try to make - or buy - a stereoscope...or if you are REALLY cheap - you can print out the image to the right here at an appropriate scale onto some thin cardboard - cut and fold - and you're done!
It can get more and more complex, the fancier you want to get - but that's a good starting point. Here is some other random advice:
  • It's very easy to get the two images switched over and get peculiar inside-out 3D.
  • To get the best results, try to keep your cameras within a few inches of each other and take mostly pictures of things that are less than about 20 to 30 feet away. Beyond that distance, the 3D effect isn't strong enough to give you good results.
  • You can increase the distance between the cameras to maintain the 3D effect out beyond 20 to 30 feet - but what happens is that your brain starts to think the objects in the picture are toys...models of the actual objects. Sometimes this is a fun 'effect' - but it's not what you want for good realism.
  • Make sure that the cameras are at the same vertical height and that the top and bottom edges of their pictures lie in the same straight line. Our brains get very confused - and you can actually make people want to throw up - if you break this cardinal rule.
SteveBaker (talk) 21:02, 22 October 2009 (UTC)[reply]
I've been experimenting with 3D photography. Two significant points you've missed:
  1. You want to use a normal or near-normal lens. If you're using more than a mild telephoto lens (85mm or so on a 35mm camera), the viewer won't be able to fuse the entire scene at once -- they can put the foreground, the subject, or the background in 3D, but the rest will be double images.
  2. You want the camera-to-subject distance to be between 20 and 30 times the camera-to-camera (baseline) distance. None of the one-inch baseline pictures of my desk are online, but I do have a stereopair of Mount Hood taken at about 25 miles with a baseline of almost a mile.
Related to the above, you don't want the depth of the scene to vary too much. You can get away with a high subject-to-background distance (the background will simply look flat), but the foreground needs to be close to the subject or you'll get double images rather than 3D. --Carnildo (talk) 22:57, 22 October 2009 (UTC)[reply]

human electricity

How and where is electricity generated in the human body for muscle activation?

As with neurons, it is not electricity in the sense of electron (negative) flow but a flow of ions; in muscles this is calcium (doubly positive) moving from outside to inside the cells. This, however, is just a signal; the driving force is motor proteins, which burn ATP. --Squidonius (talk) 14:36, 22 October 2009 (UTC)[reply]
I have a bit of trouble understanding the previous response; here is my version. The membranes of muscle cells contain ion pumps (in the form of specialized protein clusters) that turn them into batteries, with a voltage difference between the inside and outside. The battery does not directly power muscle contraction though -- it drives a flow of calcium ions across the membrane, and the calcium ions activate contractile proteins that produce the muscle power. Looie496 (talk) 19:02, 22 October 2009 (UTC)[reply]
To be more specific:
1) the cell membranes are capacitors, or in other words, charge separators. Charge (in the form of ions) builds up on both sides of a membrane (positive on one side and A LOT more positive on the other side), and because the ions cannot cross the membrane until certain channels are opened, the charge builds. When the channels finally do open, ions RUSH through them in order to balance both the electrical gradient and the relative ion concentration gradients.
2) calcium ions bind to troponin, forcing tropomyosin to shift away from the myosin-binding sites on the actin filaments. When these binding sites become exposed, the myosin heads attach and perform their powerstrokes, thus contributing to muscle contraction.
3) nerve and muscle "electricity" in the form of capacitance and transfer of charge can be measured with an oscilloscope using alligator clips and the like. DRosenbach (Talk | Contribs) 22:52, 22 October 2009 (UTC)[reply]
The idea is that because you have two different species of ions, you have both electrical forces and diffusion/osmosis forces at work; sometimes they oppose each other and sometimes they enhance each other. Actually, ions don't "gush" through the membrane, any more than electrons "gush" out of a battery. In general only a tiny percentage of the stored potential energy is consumed before electrochemical equilibrium is restored. Otherwise a single neuron would get tired pretty quickly! John Riemann Soong (talk) 14:46, 25 October 2009 (UTC)[reply]
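(For anyone who wants to put numbers on the "battery" that each ion gradient represents, the Nernst equation gives the equilibrium potential. A small sketch in Python; the concentrations are typical textbook values, not measurements from this thread.)

    # Nernst equation: equilibrium (reversal) potential for an ion gradient.
    import math

    R, F = 8.314, 96485.0   # gas constant J/(mol K), Faraday constant C/mol
    T = 310.0               # roughly body temperature, in kelvin

    def nernst(z, c_out, c_in):
        return (R * T) / (z * F) * math.log(c_out / c_in)

    print("K+  : %+.0f mV" % (1000 * nernst(1, 5.0, 140.0)))    # about -89 mV
    print("Ca2+: %+.0f mV" % (1000 * nernst(2, 2.0, 0.0001)))   # about +132 mV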

Alternate Universe?

So, besides the universe being everything that physically exists (the solar system, planets, etc.), what are the chances of coming across an alternate universe? Like the one shown here: http://www.youtube.com/watch?v=cryej86SmCk


It just makes me feel so small and less substantial when I see how small I am compared to the rest of what exists. —Preceding unsigned comment added by 139.62.167.223 (talk) 15:16, 22 October 2009 (UTC)[reply]

There are lots of different theories about alternate universes. See multiverse for descriptions of some of them. I would argue that if it is possible to visit somewhere then it is, by definition, part of our universe, but other people define things differently. --Tango (talk) 15:19, 22 October 2009 (UTC)[reply]
OP, the chances of you coming across an alternate universe are slim. The chances of you coming across the pseudoscientific scenario where "nanorobots carry our DNA" are zero. Your video link shows five minutes of slick graphics, wordbytes from alleged "experts", "We physicists have calculated" and "Some of the world's leading physicists" who are anonymous or dead. It has a percussive soundtrack to stop clear thinking, such as noticing when COULD changes to CAN at 4:48. The 4:36 spaceships and the 4:56 astronaut were pretty sci-fi props and I suggest you try to enjoy them as such. Cuddlyable3 (talk) 18:41, 22 October 2009 (UTC)[reply]
Nanotech doesn't exist like that yet, but it easily could in the future. However, that wouldn't help with travelling through a wormhole. The only way you could possibly do it is to exploit quantum effects, which means being of at most atomic size (which I think the clip actually said) - an atom is on the scale of 100 picometres, and a picometre is 1/1000 of a nanometre, so we're talking significantly less than the scale of nanotech. --Tango (talk) 20:00, 22 October 2009 (UTC)[reply]
...and way WAY smaller than a DNA molecule. Yeah - this was all so much B.S. They've taken a collection of the most speculative hypotheses out there and strung them all together (making it speculative-squared!) and then white-washed all of the difficulties and extrapolated still further from there to a flat out crazy conclusion. There are much simpler ideas that are harder to get your head around and much more believable. Take something as seemingly nutty as quantum suicide - and realise that this only needs one out of the half dozen things that video uses to be true! Or how about the digital universe / Simulation hypothesis - that could easily be true with nothing more than the laws of physics as we know them. If our relatively mundane universe is infinite (as it very well might be) - then we don't need a parallel universe in order to have another person identical to you, reading an identical post to this one that's different only in that this sentence ends with a colon instead of a full-stop: SteveBaker (talk) 20:11, 22 October 2009 (UTC)[reply]
You don't need to fit a DNA molecule in it - you can send multiple picoprobes if you need to. I'm not sure how to make something that small that can create the cloning technology required on the other side, even if you send trillions of them, but transmitting the DNA information using such picoprobes might well be possible. A traversable wormhole is the more unlikely discovery, IMO, even with quantum effects. --Tango (talk) 21:31, 22 October 2009 (UTC)[reply]

I love being told that I can do what I obviously cannot do because that makes me feel so small and less substantial like the OP. Cuddlyable3 (talk) 13:06, 23 October 2009 (UTC)[reply]

I'm not sure I understand... are you complaining about my use of the generic you? --Tango (talk) 15:16, 23 October 2009 (UTC)[reply]
When one finds oneself addressed on one's personal screen one must consider as I do the implication that one may in fact be the one that that other one is addressing. Or not. You can't be too careful can you?. Cuddlyable3 (talk) 23:11, 23 October 2009 (UTC)[reply]
Sounds like someone has been through the Total Perspective Vortex. I'm sorry. Imagine Reason (talk) 17:43, 24 October 2009 (UTC)[reply]

Fine-tuned Universe?

As I always say when I ask these questions lol, I know very little science and will apologise in advance for the huge display of ignorance I'm probably now about to make.

Arguments about a fine-tuned Universe and the anthropic principle tend to come up a lot when the existence of God is discussed. But to me all these theories tend to rest on the idea that universal constants are arbitrary rather than logically self-evident: we don't look for meaning in the value of pi or e, or even root 2 for that matter, because they are logically self-evident; it cannot be logically conceived that they could have any other values.

It's not really treated in the article, so I thought I'd ask here. Is there any scientific consensus about the nature of these constants? Would a scientist work assuming that they were arbitrary (when I say arbitrary I don't mean uncaused, I just mean that there potentially could be alternatives) or assuming that they were logically self-evident and the only possible values of what they are?

Even if there's no consensus, I'd be curious to know what the arguments were on either side of the issue, as it's something that interests me a great deal. —Preceding unsigned comment added by Dan Hartas (talkcontribs) 15:53, 22 October 2009 (UTC)[reply]

That is one of the big unsolved problems in science. The anthropic principle explains the values of the constants pretty well, but there are a lot of scientists working on finding more satisfying explanations. For example, inflationary theory explains why the average density of the universe is so close to the critical density. --Tango (talk) 16:04, 22 October 2009 (UTC)[reply]
In these discussions, one should take care to distinguish the fundamental mathematical constants (including π, e, and so forth) from fundamental physical constants (like α, the fine structure constant). Our articles physical constant and dimensionless physical constant do a pretty good (if cursory) job of defining the differences. Essentially, mathematical constants are the result of specific, arbitrary axioms and conventions we have chosen to use in defining mathematics, whereas physical constants have to be derived from measurements of the actual properties of our universe. The mathematical constants have to be what they are because they're part of the definition of the system. TenOfAllTrades(talk) 16:07, 22 October 2009 (UTC)[reply]
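(As a concrete example of a dimensionless physical constant: the fine-structure constant is built from measured quantities yet has no units at all. A quick sketch in Python, using rounded CODATA values.)

    # Fine-structure constant: alpha = e^2 / (4 pi eps0 hbar c), dimensionless.
    import math

    e    = 1.602176634e-19    # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    hbar = 1.054571817e-34    # reduced Planck constant, J s
    c    = 2.99792458e8       # speed of light, m/s

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1 / alpha)   # ~0.0073, i.e. ~1/137 -- no units to redefine away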

The "Numerological explanations" section of the fine structure constant article might interest you. 69.228.171.150 (talk) 16:10, 22 October 2009 (UTC)[reply]

All this sort of discussion begins and ends with an observer. An observer who is at leisure to ponder the arcane mysteries of the universe, rather than fighting off a king cobra or taking cover from a thunderstorm. Contemplation done at ease will usually end up concluding that the universe is finely-tuned for our human comforts. Vranak (talk) 16:45, 22 October 2009 (UTC)[reply]

It's not really the "fine-tuned for people" bit that I'm wondering about- it's more whether the laws of the Universe could logically ever be any different at all, irrespective of human comforts. —Preceding unsigned comment added by Dan Hartas (talkcontribs) 16:54, 22 October 2009 (UTC)[reply]

Most of our theories require at least some constants to be measured empirically and just plugged in, but work is being done to reduce the number of arbitrary constants. I'm not an expert on string theory, but I believe it could explain the masses of elementary particles, for example, by the resonant frequencies of the strings. --Tango (talk) 17:30, 22 October 2009 (UTC)[reply]
As far as I know, string harmonics have energies that are multiples of the Planck mass, which is far too high to account for any of the Standard Model particle masses (or anything else that has ever been seen experimentally). Certainly no string-theory explanation of anything in the Standard Model is known right now; it hasn't even been shown that string theory is consistent with the Standard Model. -- BenRG (talk) 00:17, 23 October 2009 (UTC)[reply]
I thought the whole point of string theory was that all particles are just strings vibrating in different ways. As far as I know, string theory is reasonably well developed - I think they have worked out how most of the normal particles would work. I think the harmonics you are talking about are the super-symmetric particles - the regular particles are presumably the fundamental frequencies. --Tango (talk) 15:22, 23 October 2009 (UTC)[reply]
As far as I know no one has managed to construct the Standard Model (or rather, something experimentally indistinguishable from it) inside string theory, but if it is possible then it will be by a Kaluza-Klein-like mechanism. The idea of Kaluza-Klein theory (which is much older than string theory) is that all particles (or at least all bosons) are spacetime waves (just like the graviton), but, except for the graviton, the waves involve extra dimensions besides the obvious four. The Kaluza-Klein-like dimensions in string theory are not the famous 6 (=10−4) dimensions that are supposed to have the form of a Calabi-Yau manifold, but the 16 (=26−10) extra dimensions from heterotic string theory, which form a 16-dimensional torus. A 16-dimensional torus should only give you 16 particles, but for some string-specific reason that I don't understand, but that has something to do with winding modes of the string around the torus, they get a much larger symmetry group, either SO(32) or E8×E8, giving 496 bosons. To match the Standard Model you have to match some of those with Standard Model particles and also explain why the others haven't been seen. I don't think that string vibrations are involved in any of this. The particles are vibrations of spacetime, but that's inherited from quantum field theory and Kaluza-Klein theory, where there are no strings to vibrate. At any rate it's a complicated and specific construction, not a simple matter of cellists floating through space as that awful NOVA episode would have you believe. I don't think there are any new unifying principles in string theory; they're all inherited from non-string theories like KK, the Standard Model, and supergravity. People study string theory because it seems more likely to actually work, not because it's philosophically more elegant or simple. -- BenRG (talk) 19:49, 24 October 2009 (UTC)[reply]
There is another point to be made - some people obsess about the actual numerical values - there is a movement amongst some physicists to define our units such that these constants all come out to 1.0. So (for example) instead of the Planck distance being some arbitrary number of meters - we simply call it 1.0 and let the 'meter' be defined as some horribly large number of Planck units. Similarly, the speed of light would be 1.0 - and the 'second' would become some number of (speed-of-light-units x Planck-distances). This doesn't really help to answer the question - but I can't help but feel it takes away some of the feeling of arbitrariness. In that view of things - we literally could not ask "What would it be like if the speed of light were twice as big?" because we've defined it as being 1.0 in all possible universes. You'd have to ask yourself a completely different set of hypothetical questions instead. Instead of saying "What would it be like if the Planck distance was half as big?" you'd ask "How would the universe have evolved if everything that popped out of the big bang (EXCEPT the Planck distance) was twice as large?" - and I can't help but think that some kind of insight lies in that direction. It changes the question from how these fundamental constants could be different to how changing the attributes of the initial state of the universe would cause it to evolve differently. SteveBaker (talk) 20:02, 22 October 2009 (UTC)[reply]
Well, this trick goes only so far. Once you've removed the units from h-bar, c, and G, you're pretty much done; that de-dimensionalizes mass, length, and time, from which all our other units can be derived. So now you're left with dimensionless constants, which can't be redefined away to 1.0. The fine-structure constant already mentioned, the ratio of the proton mass to the electron mass, the charge of the electron when expressed in the Planck units, that sort of thing. Dan Hartas's question about whether these can logically be different is a real question. My provisional answer would be, I certainly see no logical reason why they couldn't. But that's open to revision, if someone comes up with a convincing argument. --Trovatore (talk) 20:11, 22 October 2009 (UTC)[reply]
What you're talking about are systems which employ so-called natural units. TenOfAllTrades(talk) 20:09, 22 October 2009 (UTC)[reply]
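(To make the natural-units idea concrete, here's a minimal sketch in Python of the Planck units that fall out of h-bar, c and G; the constant values are rounded CODATA figures.)

    # Planck units from hbar, c and G.  In these units all three constants
    # equal exactly 1, leaving only dimensionless numbers to explain.
    import math

    hbar = 1.054571817e-34   # J s
    c    = 2.99792458e8      # m/s
    G    = 6.67430e-11       # m^3 kg^-1 s^-2

    planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
    planck_time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
    planck_mass   = math.sqrt(hbar * c / G)      # ~2.2e-8 kg

    print(planck_length, planck_time, planck_mass)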
They aren't made equal to 1.0, they are made equal to 1. You should only say 1.0 if you mean 1 +/- 0.05. If you mean the integer, just say "1". --Tango (talk) 21:23, 22 October 2009 (UTC)[reply]
I disagree — this is the real number 1, which is a different object from the natural number 1. --Trovatore (talk) 21:31, 22 October 2009 (UTC)[reply]
The real number is still denoted (as you have denoted it) by simply "1". Trailing zeros after the decimal point are only used to denote a particular level of precision, if you are being exact then you don't use trailing zeros. --Tango (talk) 23:02, 22 October 2009 (UTC)[reply]
Strictly speaking, you should use infinitely many trailing zeroes.
The convention of using "1.0" to mean the exact real number 1, as distinct from the natural number 1, comes more from software than from science, but I think it's a useful one in some contexts. It allows a short way of expressing oneself when explaining, for example, that 0^0 is 1, but 0.0^0.0 is undefined. To be sure, it does have to be distinguished from the competing convention of using the number of digits reported to give an approximate idea of your degree of uncertainty. --Trovatore (talk) 23:07, 22 October 2009 (UTC)[reply]
How can you be strictly speaking supposed to do something impossible? Yes, in software 1.0 is used to force the computer to treat the number as a real number (so you don't get integer division, for example), but we aren't computers. Outside of the realms of computer programming, trailing zeros after decimal points are only used to denote precision. (Whether the natural numbers are a sub-semiring of the real numbers or just isomorphic to one is purely a question of semantics and isn't worthy of discussion here.) --Tango (talk) 23:20, 22 October 2009 (UTC)[reply]
"Purely a question of semantics?" What's more important than semantics? Semantics is the study of meaning itself. You are descriptively just wrong that trailing zeroes are used only to denote precision. --Trovatore (talk) 23:32, 22 October 2009 (UTC)[reply]
By "semantics" I mean the meanings of words. The meanings of the words aren't important, it is the underlying concepts that are important not the way we express them. --Tango (talk) 15:30, 23 October 2009 (UTC)[reply]
Physicists always write 1 in this situation, never 1.0, and so do mathematicians; the reciprocal of a real x is 1/x, the circumference of a circle is 2πr. For that matter, I write 2 * M_PI * r in C code too, though I've noticed that many programmers do consistently add .0. -- BenRG (talk) 00:17, 23 October 2009 (UTC)[reply]
I am a mathematician, and I occasionally use this notation to make this distinction, on those occasions when the distinction matters. So we have a counterexample. I am not alone in this, I think.
Your examples don't really prove anything; there's only one interpretation for natural-divided-by-real or natural-times-real, and the value equals the one that you get if you first apply the natural embedding from the naturals into the reals. Granted, you could look at values like "3/4", which mathematicians never use to mean "0", but still it doesn't necessarily mean the real value 3/4; it could be the rational 3/4 (which is still in the domain of algebra rather than analysis, unlike the real number 3/4). --Trovatore (talk) 00:26, 23 October 2009 (UTC)[reply]
When we are doing rigorous constructions we distinguish between, for example, the rational number 3/4 and the equivalence class of Cauchy sequences that tend to that rational number. The rest of the time, we just identify the two things. There is rarely any need to distinguish between the rational number 3/4 and the real number 3/4, there certainly isn't any need to do so outside of pure mathematics. --Tango (talk) 15:30, 23 October 2009 (UTC)[reply]
Well, it's good mental discipline, though, and protects you from certain categories of error. Also it's useful pedagogically, in getting across the idea of a real number, which is highly non-obvious to most people. --Trovatore (talk) 20:05, 23 October 2009 (UTC)[reply]
(ec) "1" has significance as the identity element or the concept of unity. Even in a real numbers context it's more than just another value on the real line, which is what's brought to mind, at least for me, when I see "1.0". Water has a density in g/cm3 of "1.0". In natural units c is unity, or "1". There's a conceptual difference there and the choice of notation helps convey it. I don't know if I would go so far as to say that "1.0" here is strictly wrong, but it's definitely not more correct. Rckrone (talk) 00:37, 23 October 2009 (UTC)[reply]
It's true, there is a conceptual difference between "exactly the real number 1" and "rounds to this value within the precision I'm stating". But there's also a conceptual difference between the real number 1 and the natural number 1, and Steve's comment clearly evokes the former. So it's less correct on one dimension but more correct on another one. --Trovatore (talk) 00:41, 23 October 2009 (UTC)[reply]

Question: When counting how many angels can fit on a pin head, should we use real or natural numbers? Dauto (talk) 05:42, 23 October 2009 (UTC)[reply]

Natural numbers, of course. Unless the answer is infinite, in which case the transfinite cardinals are the way to go. --Trovatore (talk) 07:23, 23 October 2009 (UTC)[reply]
This actually is the classical illustration of the difference between countable and uncountable infinities. There are a countably infinite (aleph-null) number of angels on the pin, but each of them has an uncountably infinite space in which to dance... Tevildo (talk) 14:47, 23 October 2009 (UTC)[reply]
Note though that you can't give them all the same amount of space. The same number of points, yes, but not the same two-dimensional Lebesgue measure. Either some angels are more privileged than others, or they dance on some peculiar fog of points to which a well-defined area cannot be assigned.
This is actually the key idea behind the Vitali set. --Trovatore (talk) 20:02, 23 October 2009 (UTC)[reply]
Could you give them all a space of zero measure? --Tango (talk) 18:01, 24 October 2009 (UTC)[reply]
No, not unless the pinhead itself has measure zero. This is by countable additivity. --Trovatore (talk) 19:59, 24 October 2009 (UTC)[reply]
Take all the laws of physics. Take their Gödel numbers. These are all arbitrary constants. We could have ended up in a four-dimensional universe, or a universe where forces cause jerk instead of acceleration, or any other totally alien laws of physics. Ours are arbitrary. — DanielLC 15:15, 23 October 2009 (UTC)[reply]
That would be a cool universe..
You don't know that. Dauto (talk) 17:18, 23 October 2009 (UTC)[reply]
It's OK - I've just done a Gödel numbering of Dauto's reply - turns out it's just an arbitrary constant too! SteveBaker (talk) 01:21, 24 October 2009 (UTC)[reply]

Metabolism

Hi, second paragraph, clarification tag; subject: favourable and unfavourable thermodynamic reactions, which a metabolic process couples to even them out. Can anyone just define those two terms or give a "such as" example? (especially in a manner to add to the article) It's actually GA or FA. What is the character of favourable and unfavourable thermodynamic reactions in a metabolism? ~ R.T.G 16:43, 22 October 2009 (UTC)[reply]

I've reworded this, hopefully it is clearer now. Tim Vickers (talk) 17:21, 22 October 2009 (UTC)[reply]
It is, thank you. ~ R.T.G 17:34, 22 October 2009 (UTC)[reply]
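(In case a concrete "such as" helps: the textbook example of coupling is the hexokinase reaction, where an unfavourable phosphorylation is paid for by ATP hydrolysis. The numbers below are approximate standard free energy changes from biochemistry textbooks, shown as trivial Python arithmetic.)

    # Coupling an unfavourable reaction to a favourable one (kJ/mol, approx.).
    dG_glucose_phosphorylation = +13.8   # glucose + Pi -> glucose-6-phosphate (unfavourable)
    dG_ATP_hydrolysis          = -30.5   # ATP -> ADP + Pi (favourable)

    print(dG_glucose_phosphorylation + dG_ATP_hydrolysis)   # about -16.7: the coupled reaction is favourable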

Alignment of the planets

Often, school children are taught the order of the planets as seen in this illustration: solar system. I can imagine that younger children envision that the planets actually "look like that" -- that is, aligned neatly in a horizontal line, one after the other -- in that particular order (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune). However, this is not the case, since each planet follows its own independent orbit. My question is: do the eight independent orbits of the eight planets ever align themselves so that all eight planets actually do appear in a straight horizontal line, as in that illustration? If so, how frequently (or infrequently) does that phenomenon occur? And, is there a name for that? If not, what is the "closest" that the eight planets ever come to achieving that horizontal alignment? Thanks. (64.252.124.238 (talk) 19:12, 22 October 2009 (UTC))[reply]

That would be an extreme case of (strict) Syzygy. Our article describes one such instance with four of the planets. I'm trying some more widespread searches, but that word seems to have been used as the name of a game, a musical act, and at least one corporation, so the search parameters will take some tweaking. --LarryMac | Talk 19:22, 22 October 2009 (UTC)[reply]
It strikes me that the looser definition of syzygy given in our article, which simply requires all planets to be "on the same side of the sun" would allow the appearance of a straight line of planets to an appropriately placed observer. --LarryMac | Talk 19:43, 22 October 2009 (UTC)[reply]
They wouldn't necessarily be in the right order, though. --Tango (talk) 20:23, 22 October 2009 (UTC)[reply]

Assuming the planets follow neat non-interacting Keplerian orbits with well-defined constant "years" (time needed to complete one orbit), the answer to your question is simply the least common multiple of the length of every planet's year.

Also, when I was a child I suffered from the very misconception you talk about. I wondered how it was possible to see Saturn, given that Jupiter is both bigger and closer and therefore "stands in the way" of seeing Saturn :-) The grownups never managed to give a satisfactory answer though - perhaps due to their reluctance to clearly say that the picture does not represent the real situation. —Preceding unsigned comment added by 81.11.174.233 (talk) 19:31, 22 October 2009 (UTC)[reply]

The problem is that periods aren't integer years. We can probably assume that the periods don't have precisely rational ratios, which means there's not guaranteed to be a time when any more than 2 planets are perfectly aligned with the sun. However there will be times when 3 or more come arbitrarily close to being perfectly aligned. Rckrone (talk) 20:07, 22 October 2009 (UTC)[reply]
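(For just two planets the recurrence is straightforward: successive Sun-planet-planet line-ups repeat with the synodic period 1/(1/T1 - 1/T2). A quick sketch in Python, using Earth and Mars as an example; the periods are approximate.)

    # Synodic period: time between successive alignments of two planets
    # with the Sun, computed from their orbital periods (in days).
    T_earth, T_mars = 365.25, 686.98

    synodic = 1.0 / (1.0 / T_earth - 1.0 / T_mars)
    print(synodic)   # about 780 days between successive Earth-Mars alignments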
I had thought about the least common multiple (LCM) method (mentioned above by IP 81.11.174.233). But, wouldn't that require that all eight planets actually "start" in a perfectly horizontal line to begin with? And, if so, then every XXX years, they would all return to that "starting point". Right? I can't imagine that, one day, they all "started out" in a horizontal line. Or is my analysis of the LCM method flawed? Thanks. (64.252.124.238 (talk)) —Preceding undated comment added 20:11, 22 October 2009 (UTC).[reply]
Hypothetically, there will be some time when all 8 planets should align as described. It may very well be after the heat death of the Universe, but mathematically it should be possible to calculate how often it will happen. You don't even need to assume that they all had that particular "starting position"; the orbits of all 8 planets should represent a repeating pattern, and you just need to know the time period of one "cycle" of that pattern, and the position in that pattern we are today. Such calculations may require computers to work out all of the math, but it is at least theoretically possible to do it... Of course, that all assumes an "ideal solar system" where the planets ONLY gravitationally interact with the Sun and with no other objects, such as each other. Once you consider that the planets exert gravitational forces on each other, then you have an n-body problem, and the mathematics and physics tell us that the behavior of such systems is chaotic and unpredictable. So then again, there may well be NO way to predict when such an event would occur. --Jayron32 20:19, 22 October 2009 (UTC)[reply]
There is only going to be a repeating pattern if all the orbital periods have rational ratios, which is a big assumption. --Tango (talk) 20:26, 22 October 2009 (UTC)[reply]
Stability of the Solar System says "even the most precise long-term models for the orbital motion of the Solar System are not valid over more than a few tens of millions of years". Plus the Sun is going to dispose of a few planets long before the heat death anyway. Clarityfiend (talk) 02:52, 23 October 2009 (UTC)[reply]
(ec) More likely, they just didn't know the answer. I remember having a supply teacher (or maybe a student teacher) in primary school that insisted Mars was first and Mercury fourth (I'm guessing she knew a mnemonic and got the M's the wrong way around). She wouldn't listen when I tried to correct her... You missed out an assumption there - you need to assume they all started out in alignment. Also, you'll only get a finite lcm if the ratios of the orbital periods are rational numbers. Those two problems mean your method only tells you how frequently they will happen if they happen at all, it won't tell you if they happen. In reality, you need to be a little flexible in your definition - requiring all the planets to be within a 10 degree sector, for example, is far more reasonable and can be calculated (although I can't find anyone that has done it - there might well not be any occurrences within the time period where we can reliably model the interactions). --Tango (talk) 20:23, 22 October 2009 (UTC)[reply]
There have been various well-known close approaches to a grand alignment. On 4 February 1962, the Sun, the Moon, and all the planets from Mercury to Saturn were clustered within a 17-degree area of the sky. There was also a total eclipse of the Sun. Doomsayers had a field day, but catastrophe mysteriously failed to occur. -- JackofOz (talk) 20:35, 22 October 2009 (UTC)[reply]
We are forgetting that the planets are not perfectly in the same plane, so even if all the planets were to be aligned in the 2-D plane, a person standing on Neptune might not have all planets crossing the Sun, since Jupiter, or Mars, might be higher or lower than the plane, right? I mean, we have the ecliptic plane, but the other planets' orbital planes might very well be different. Googlemeister (talk) 20:49, 22 October 2009 (UTC)[reply]
Indeed. If you include precession in the calculation you might find a time when the nodes of all the orbits were in a line and the planets were all at those nodes - then you would genuinely have them all in a line. --Tango (talk) 21:18, 22 October 2009 (UTC)[reply]
It's not that hard to do a rough calculation. Uranus and Neptune come into approximate alignment about once every 160 years -- we can safely assume that the positions of the other planets are independent random variables during these events. In the 4.5 billion year history of the solar system there have been around 30 million such events. Crunching the numbers yields a prediction that the planets have very likely aligned to within 25 degrees at least once but probably have never all aligned to within 20 degrees. Looie496 (talk) 00:44, 23 October 2009 (UTC)[reply]
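(A back-of-envelope check of that estimate, assuming the six planets from Mercury to Saturn sit at independent, uniformly random heliocentric longitudes at each Uranus-Neptune alignment. It's a crude model, and the numbers shift with exactly how "within X degrees" is defined, but it lands in the same ballpark.)

    # Expected number of times all six other planets fall inside a wedge of
    # the given width, over ~28 million Uranus-Neptune alignments.
    import math

    events = 4.5e9 / 160             # alignments over the solar system's history
    for width in (25.0, 20.0):
        p_one = (width / 360.0) ** 6
        expected = events * p_one
        print(width, expected, 1 - math.exp(-expected))

    # width 25 deg -> a few expected occurrences (very likely at least one);
    # width 20 deg -> of order one, so it may well never have happened.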
Uranus and Neptune are always in perfect alignment - any two points make a line. You need to consider at least three planets for it to be interesting. Did you mean Uranus, Neptune and Earth? --Tango (talk) 15:37, 23 October 2009 (UTC)[reply]
I meant Uranus, Neptune, and the Sun -- sorry, thought that would be taken for granted given the original problem. I used the outer planets because they have the longest periods, 84.3 and 164.8 years. The periods of the inner planets are so short that they are practically randomized for each alignment (wrt to the Sun) of Uranus and Neptune -- even Saturn has a period of only 29.5 years. Looie496 (talk) 21:25, 23 October 2009 (UTC)[reply]
Ok. You can't really take that for granted - people are often interested in two planets and the Earth being in rough alignment because that means they are very close together in the night sky. Two planets and the Sun being in alignment doesn't look interesting at all from the point of view of an observer on the Earth (although if it is close enough it might mean one planet transits across the sun from the point of view of the other planet). --Tango (talk) 18:05, 24 October 2009 (UTC)[reply]

Follow up

Thanks for the input above. Let me ask the question another way, in order to help me understand this concept. Assume that the solar system will exist forever (ad infinitum) and also assume that there will be no changes whatsoever (to the planets or their orbits, etc.). Given these two huge assumptions ... is it a mathematical certainty that the eight planets will some day eventually perfectly align? Or is that not even certain? Thanks. (64.252.124.238 (talk) 15:11, 23 October 2009 (UTC))[reply]

No, it's not certain due to the chance that the orbital periods of the planets may have irrational ratios (which sounds like it should be an oxymoron, but never mind). That is, if one planet orbits once every X seconds and one orbits every Y seconds then there might not be any pairs of positive whole numbers a and b such that aX=bY. That means that they can line up perfectly with a third planet at most once in their lifetime and that once may have been before the planets actually existed. --Tango (talk) 15:36, 23 October 2009 (UTC)[reply]
They will almost certainly never align perfectly. Given sufficient time and no changes in the system they would probably sooner or later align to any desired precision -- it might take 10^100 years though. Looie496 (talk) 21:25, 23 October 2009 (UTC)[reply]
Very true. There are always rational numbers arbitrarily close to any irrational number. --Tango (talk) 18:05, 24 October 2009 (UTC)[reply]

Great! Thank you for all of the input. It was helpful and informative. Thank you! (64.252.124.238 (talk) 16:40, 25 October 2009 (UTC))[reply]

methycobolamin

How does methycobolamin (B12) exactly work? I need full detail. —Preceding unsigned comment added by Klaricidxl (talkcontribs) 20:39, 22 October 2009 (UTC)[reply]

Have you read Vitamin B12#Functions? methylcobalamin is the way to spell it. Graeme Bartlett (talk) 21:07, 22 October 2009 (UTC)[reply]

Cable TV distribution

We have cable TV and it is distributed around the house to various points. A technician from the cable company set that up for us, unofficially. We only have one set-top box (digibox) through which we can get the range of channels we subscribe to. I think the distribution takes place before the signal goes through the box. The other sets in the house until recently were receiving the analogue channels, what in the UK are called the "terrestrial channels" (BBC1, BBC2, ITV1, Channel 4, Channel 5). Now on two of those five channels we have a message that the analogue channel has been switched off in our area. What can we do? If we were to supply the extra TV sets with freeview digiboxes, would they be able to receive the "terrestrial channels"? We don't want to pay more than one cable subscription or to install a Freeview aerial. Any suggestions welcome, or please move the question to another refdesk if more appropriate. Thanks. Itsmejudith (talk) 20:44, 22 October 2009 (UTC)[reply]

AFAIK, there has been a roll-out of digital switchovers in the UK for some months now. Some of the channels you are listing presumably are only available as digital services. As a result, your media box will have to be replaced by the provider. I have no idea about the UK, but ours (in Vienna, Austria) was exchanged free of charge. If you want a box with a hard drive and HDMI, you may have to pay some minimal surcharge (here it would be around 5 Euros). --Cookatoo.ergo.ZooM (talk) 21:06, 22 October 2009 (UTC)[reply]
No, all the channels the OP names are being broadcast digitally and are free. If you are in a good reception area and the digital broadcasts are coming from the same transmitter site (likely) then your existing aerial may be good enough. However, in many cases it is necessary to upgrade the aerial to get satisfactory reception. You may also find that whatever distribution you are currently using (aerial splitters etc) to get the aerial signal to all your TVs is no longer adequate for digital and will also need to be upgraded. You lose nothing by trying it, you will need the digiboxes anyway (assuming you don't go with the cable solution). If it doesn't work with just the digiboxes then call in an aerial specialist. SpinningSpark 02:08, 23 October 2009 (UTC)[reply]
On re-reading your question, you seem to be implying that you do not have an aerial at all, in which case you are relying on your cable company to supply the analogue signal through the cable. It is entirely down to them whether they continue to do this or not; you will have to ask them. Freeview digiboxes will not work with the cable company's digital signal. SpinningSpark 02:27, 23 October 2009 (UTC)[reply]
That's right, we don't have an aerial; it would be awkward to fix and reception would be uncertain. You've answered my question, thanks very much, although it was not the answer I was hoping to hear. Itsmejudith (talk) 08:52, 23 October 2009 (UTC)[reply]

temperature

I noticed that one can take a person's temperature from the mouth, under the armpit, the ear, and the rear. What is the difference in temperatures that one would get from those methods if you assume that all temperatures were taken from the same person simultaneously? Googlemeister (talk) 21:14, 22 October 2009 (UTC)[reply]

See Body_temperature#Methods_of_measurement. --Tango (talk) 21:15, 22 October 2009 (UTC)[reply]
That only gives a typical reading. Most people getting their temperature taken are getting it done because it is assumed that they are of abnormal temperature. Is the relationship linear? Googlemeister (talk) 21:21, 22 October 2009 (UTC)[reply]
If the question is "is the temperature far from normal", you don't have to convert to a standard "core temp" scale; you just need to know what is "normal for how you're measuring it". However, the section does also provide some important info about general trends. I don't know if it's literally linear (degree-for-degree), and that is an interesting issue, but "lower temps as you get further removed from the core" seems like a sensible pattern. The section also tells about some serious confounding variables as you get further removed from the core. Especially if the body is trying to change its temperature, those could make the whole idea of a clear relationship hopeless. DMacks (talk) 21:28, 22 October 2009 (UTC)[reply]
Fever#Measurement_and_normal_variation might also be useful, then. We don't seem to have similar information for other types of abnormal temperature. --Tango (talk) 23:24, 22 October 2009 (UTC)[reply]