Talk:Moravec's paradox

Why?

Why is this called a "paradox"? Wikipedia defines paradox as: "an apparently true statement or group of statements that leads to a contradiction or a situation which defies intuition." Where's the contradiction or non-intuitiveness? Moravec's statement strikes me as simply an empirical claim, which may be true or false. How is this any different from saying, "Men are at their worst trying to do the things most natural to women," or similar for birds / fish, cops / robbers, Cowboys / Indians, etc.? Are those paradoxes too?

Also -- does this term "Moravec's paradox" exist anywhere other than on wikipedia? Google shows only 12 links. —Preceding unsigned comment added by 128.30.31.167 (talkcontribs) 18:28, 20 August 2007

It's paradoxical by the second part of the definition you posted; it defies intuition. Simply, things we find hard, computers find easy, and vice versa. It's too general to be an empirical truth. I get 7,750 Google hits, and have seen/studied a handful of papers by Rodney Brooks which mention the subject. It's definitely encyclopaedic and meets WP:NOT. --BlueNovember (talk contribs) 18:33, 12 June 2009 (UTC)

Paradox category

Charles G had removed this from the paradoxes cat. I agree it is on the outskirts, but still a paradox. A paradox does not need to be overly mysterious. All that qualifies it is that it can be formulated as "P and not-P". I think this qualifies. Gregbard 12:19, 30 August 2007 (UTC)

I don't believe this is a paradox in the strict sense a logician would prefer. It's a profoundly counter-intuitive fact about human skills. I am considering renaming the article because the title seems to have caused confusion. I am not aware whether it is known by other, perhaps more suitable, names. More research will sort this out.---- CharlesGillingham 07:24, 31 August 2007 (UTC)

Hey, in case anyone's still watching: what about the fact that it's not even true? There are things that are easy, and hard, for BOTH humans and computers. I made this table because it might be useful somewhere.

Easy (humans) / Easy (computers): add small numbers; apply symbol manipulation rules
Easy (humans) / Hard (computers): recognize faces and speech; identify relevant information
Hard (humans) / Easy (computers): quickly manipulate large numbers of symbols; repeat a task many times
Hard (humans) / Hard (computers): prove new, useful theorems; defraud humans

Btw, why does Wikipedia make it so hard to edit tables that are in articles? =( MrVoluntarist (talk) 17:43, 6 February 2008 (UTC)

You've misread the idea. It's not saying that "everything is reversed"; it's saying that "some of the things that are reversed are really unexpected." Obviously, some things are easy for both machines and people, and some things are hard for both machines and people. This is what we expect. What's interesting is the kinds of things that are reversed. For example, the human-hard, machine-easy category in your table above should also include "winning at chess" and "doing well on the GRE", things we tend to think of as "highly intelligent" behavior. Also, the human-easy, machine-hard category should include "walking across the park" and "catching a ball", things a five-year-old can do. What's interesting is that "intelligence" isn't the "highest human faculty", contrary to expectation.---- CharlesGillingham (talk) 01:01, 7 February 2008 (UTC)
Well, I don't mean to be a monkeywrench, but you could also say that "adding one-digit numbers" is something an 8-year-old can do, and, yep, it's easy for computers. Also, "proving a new, useful theorem" would cause a human to be judged "highly intelligent", and, yep, that's hard for computers as well. So is the paradox, as you hint at, just saying, "hey, some things are reversed, and it's unexpected which ones they will be"? Btw, can computers ace all parts of the GRE? Or just certain sections? MrVoluntarist (talk) 16:58, 12 February 2008 (UTC)
Just parts of the GRE. Several researchers in the early 1960s focussed on programming computers to do well on intelligence tests, such as Daniel Bobrow's STUDENT (which could do algebra word problems), Tom Evans' ANALOGY (which could solve problems like "A is to B as C is to ?" presented as diagrams), Herbert Gelernter's GTP (which could prove theorems in Euclidean geometry), and so on. These programs ran on machines with no more processing power than a modern wristwatch, but they could perform as well as graduate students on these difficult abstract tasks. Researchers today realize that "doing well on an intelligence test" is not a particularly interesting or fruitful area of research. You only learn how to solve a specific problem. And, compared to the difficult problems of mobility, perception and communication, doing well on intelligence tests is relatively easy.
I guess the key to understanding the significance of this discovery is to look back to Locke and Shakespeare, where they talk about our "faculty of reason" as being our "highest" ability ("What a piece of work is Man! How noble in reason!" and "In apprehension how like a god"). It's the thing that puts us at the top of the chain of being, the thing that creates this great gulf between us and the "lower" animals. We're "homo sapiens" and our sapience is an almost spiritual ability. Science fiction still often assumes that sapience will change machines into god-like beings. However, the experience of AI research suggests that our ability to reason is not such a big deal after all, and that we should be far more impressed with the evolutionary leaps made by our ancestors hundreds of millions of years ago. ---- CharlesGillingham (talk) 00:41, 14 February 2008 (UTC)
Okay, that makes more sense. Now, I don't know if this runs afoul of Original Research, but could we maybe reword the opening to, "normal intuitions about which problems are 'easy' or 'hard' do not consistently apply to machines"? Moravec would probably agree that the intuitions about ease of adding one-digit numbers, and the difficulty of proving useful theorems, carry over just fine. Btw, correct me if I'm wrong, but weren't the problems fed to ANALOGY already highly constrained? That is, it didn't receive them as picture files, the way humans would take the test. MrVoluntarist (talk) 04:41, 14 February 2008 (UTC)
I've rewritten the introduction in light of our discussion. Hopefully it's clearer now. ---- CharlesGillingham (talk) 21:32, 14 February 2008 (UTC)

Not a philosophical article

This article should not be categorized as philosophy and is not in the scope of Wikiproject:Philosophy. This is, as it states, a principle in robotics and artificial intelligence. It should be categorized only as artificial intelligence or robotics. (Its only relation to philosophy would be as an example in embodied philosophy or embodiment. This merits a "See Also", not a category.) ---- CharlesGillingham 07:24, 31 August 2007 (UTC)

Well, that's a pretty narrow view there. I think it qualifies on several counts: Philosophy of mind, Philosophy of science, and logic. I think at least one applies. Fortunately, the field options we have mean we don't have to choose. Greg Bard 13:23, 12 September 2007 (UTC)

More references needed

Most classes in robotics or artificial intelligence mention some version of this principle (at least the ones I took). It is part of the motivation for "Nouvelle AI" (a school of AI research named by Rodney Brooks and also practiced by Hans Moravec and most people on the robotics side of things). More research will show this. ---- CharlesGillingham 07:24, 31 August 2007 (UTC)

I've found a few: Minsky, McCorduck, etc. ---- CharlesGillingham (talk) 14:54, 4 June 2008 (UTC)

The explanation of the paradox is the theory of evolution?

I think the theory of evolution can be the explanation of everything by this standard. I don't think it is relevant, but hey, I'm a creationist, so I'll just shut my mouth and know my place in Wikipedia. —Preceding unsigned comment added by 203.97.104.99 (talk) 07:09, 2 November 2008 (UTC)

I'd actually be very interested to hear a creationist explanation. I edited the section so it reads "one explanation of the paradox is...", rather than "the explanation of the paradox is...". The paradox itself is an empirical fact, a discovery made by researchers. However, the explanation of it is not an empirical fact. Absolute proof would require more evidence.---- CharlesGillingham (talk) 23:27, 2 November 2008 (UTC)

Penrose

For those who are so old that they cannot remember, or so young that it was before their time: Minsky is an exponent of the AI hype. AI means "artificial intelligence", and it was a hype similar to the dot-com bubble or the credit crunch. The AI hype has wasted lots of money and generations of computer scientists by trying to build intelligent computers. From the very beginning of computer science in the 1950s, AI was a very high priority. And it has never worked.

I've read Minsky's book "The Society of Mind". The whole book is a loose collection of ideas that have proven not to work in real life. In this respect, Minsky's book is similar to Hofstadter's "Fluid Concepts and Creative Analogies". These are the kind of books you give your kids as a present for the winter holidays because of the nice pictures in them.

In this book, Minsky is trying to make the case in favor of AI. This notion of a "paradox" is quite far removed from the spirit of that book, IMHO. (Maybe "paradox" is meant to indicate a lack of insight?)

A more credible discussion of this phenomenon can be found in Roger Penrose's "The Emperor's New Mind" and later books. In this context, it might be called a physical phenomenon rather than a (mathematical) paradox. —Preceding unsigned comment added by 85.181.130.151 (talk) 21:58, 2 November 2008 (UTC)

Wrong definition

The current definition, stating that the "uniquely human faculty of reason (conscious, intelligent, rational thought) requires very little computation", IS complete bullshit. How could something like this be in Wikipedia? Currently we don't even know whether reason is computable at all. 83.26.84.105 (talk) 22:16, 19 May 2010 (UTC)

"Reason" in this context means step-by-step problem solving of the kind that people use when doing algebra, playing chess, or proving mathematical theorems. While not necessarily "computable" in a Godelian sense, this is still something computers can do when the problem is tractable. ---- CharlesGillingham (talk) 15:09, 18 October 2021 (UTC)[reply]

Deleted text saved on Talk

Deleted text:

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery made by AI researchers. In his book The Language Instinct, he writes:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.[1]

I think that this is a valuable contribution, given Pinker's standing as one of the world's leading scientists, and it adds more weight to the article. I agree that his quote is just restating the principle, but I don't think this minor redundancy is a problem. Plus, Pinker says "the main lesson" (i.e., that this is really, really important). The lede as it stands doesn't mention the importance of the idea without Pinker.

I'd like to put it back if no one disagrees. ---- CharlesGillingham (talk) 01:37, 10 September 2015 (UTC)

"What's easy is hard, and what's hard is easy"[edit]

This is also known in the formulation "What's easy [for humans] is hard [for computers], and what's hard [for humans] is easy [for computers]". For example, tasks such as adding up endless columns of long numbers are extremely tedious and error-prone for humans but almost trivially easy for computers, while things that humans usually don't even think much about, such as walking, language understanding, vision, etc., have been extremely difficult to implement with computer programming. AnonMoos (talk) 21:18, 30 January 2020 (UTC)
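
(As an admittedly trivial sketch of the first half of that contrast, in Python and with made-up ledger entries used purely for illustration: the column-adding task really is a one-liner.)

    # Adding up a long column of numbers: tedious and error-prone for a person,
    # essentially a one-liner for a computer. The entries here are invented.
    column = ["1047.52", "89913.07", "412.00", "73008.99", "5150.25"]
    total = sum(float(x) for x in column)
    print(round(total, 2))   # 169531.83
    # There is, of course, no comparably short program for "walk across the room".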

Hardly surprising, when computers were developed as tools to supplement humans in their work. Duplicating human proficiencies, rather than compensating for deficiencies, probably requires going back to near first principles: a modern Charles Babbage who is also a neuroscientist and subconscious psychologist. 185.13.50.177 (talk) 14:33, 3 March 2020 (UTC)

Trillions vs. hundreds of millions

The article described modern computers as "trillions of times faster" than 1970s computers; this is an exaggeration. A CRAY-1, 1976-era, ran at 2 MFLOPS. A modern NVIDIA A40 card can manage a peak 32-bit throughput of about 40 TFLOPS. The ratio is only 20 million or so. Even a v3 Google Cloud TPU only reaches about 400 TFLOPs; while extremely impressive, this is still only about 200 million times faster than a CRAY-1; so even a billion times faster would be an exaggeration. -- The Anome (talk) 11:15, 18 October 2021 (UTC)[reply]
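
(For reference, a quick sketch in Python of the ratio arithmetic behind the figures in this thread; the FLOPS numbers are simply the ones quoted in the comment above and the reply below, not independently verified.)

    # Speed-up ratios implied by the figures quoted in this thread.
    cray1_low  = 2e6       # 2 MFLOPS, the figure used in the comment above
    cray1_peak = 160e6     # 160 MFLOPS, the figure given in the reply below
    modern = {"NVIDIA A40": 40e12,        # ~40 TFLOPS (32-bit peak)
              "Cloud TPU v3": 400e12,     # ~400 TFLOPS
              "Fugaku": 400e15}           # ~400 PFLOPS
    for name, flops in modern.items():
        print(f"{name}: {flops / cray1_low:.1e}x vs 2 MFLOPS, "
              f"{flops / cray1_peak:.1e}x vs 160 MFLOPS")
    # NVIDIA A40: 2.0e+07x vs 2 MFLOPS, 2.5e+05x vs 160 MFLOPS
    # Cloud TPU v3: 2.0e+08x vs 2 MFLOPS, 2.5e+06x vs 160 MFLOPS
    # Fugaku: 2.0e+11x vs 2 MFLOPS, 2.5e+09x vs 160 MFLOPS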

I don't think there's even a point to putting specific values there. There's no single way to measure computer speed (to put it mildly). As far as estimating goes, it looks like the CRAY-1 had 160 MFLOPS, and if anything it should by analogy be compared with the fastest machine, Fugaku, with over 400 PFLOPS. Mithoron (talk) 13:40, 18 October 2021 (UTC)
Done. Thanks for fixing this. ---- CharlesGillingham (talk) 15:07, 18 October 2021 (UTC)
1. Pinker 2007, pp. 190–191.