Talk:Turing test/Archive 2


Coby or Colby

Citation 19 says Coby, but the reference says Colby. Anyone know what the correct author name is? —Preceding unsigned comment added by Kevin143 (talkcontribs) 09:26, 25 May 2008 (UTC)  Fixed---- CharlesGillingham (talk) 01:21, 18 March 2009 (UTC)

Topic

We already have Turing Test, so the two articles should be merged. Should it be capitalized? AxelBoldt


Someone wrote:

So far, no computer has passed the Turing test as such.

But I read somewhere that a museum of computers in Boston conducts an annual Turing test competition, and that they've managed to fool "some of the people some of the time". Anyone know more about this? --Ed Poor

I don't know many useful details here. I do know that my psych professor claims the Turing test has been passed, but is not passable today, because people have become more discerning in their judgements. But maybe this was taking "Turing test" in a more liberal sense, e.g. taking being fooled by ELIZA to mean that ELIZA passed the Turing test. --Ryguasu

I think that it has to do with greater discernment and exposure to software and to concepts of artificial intelligence since the Turing test requires a human judge. Ember 2199 06:44, 1 August 2006 (UTC)

On the other hand, the intelligence of fellow humans is almost always tested exclusively based on their utterances.

Anyone else think this is problematic? It seems there are many not-so-verbal ways to "test" intelligence, e.g. does X talk to walls?, can X walk without falling down?, can X pick a lock?, can X learn to play an instrument?, can X create a compelling sketch of a scene?, etc..

--Ryguasu

Think the difference is between 'test' and '"test"'. I.e. when intelligence is formally tested, the scores are usually based on verbal answers or even multiple-choice ones; but when intelligence is informally assessed by casual observers, they use all kinds of clues.
-Daniel Cristofani.

I just modified the "History" section slightly. Pretending to be the other gender was a feature of the Imitation Game, not of the Turing Test itself, and Turing's original paper only mentions the five-minute time limit when talking about how often computers might pass the Turing Test in the year 2000.

Ekaterin

Objections and replies

The "Objections and replies" section seems to be a list of objections to the claim that machines can think, and not objections to whether the test actually answers that question. This is confusing and misleading. Maybe the title should be modified to reflect this fact. The following section, moreover, does seem to discuss possible objections to the test. --NavarroJ 12:04, 11 August 2005 (UTC)

I commented on this below, to take an example: "One of the most famous objections, it states that computers are incapable of originality." (italics added), but there is no explanation in the article (yet) of how the test demonstrates that humans are original, or if the test is relevant for originality. Ember 2199 06:44, 1 August 2006 (UTC)
I think this whole section should be taken out. If we need a separate article detailing the issues brought on by artificial intelligence, that's fine, but it doesn't belong here. Jcc1 20:48, 16 March 2007 (UTC)
This section probably could go in the article on Turing's paper. However, I think they should be at least touched on here. I believe his primary motivation in proposing the Turing test was to make it easier for readers to visualize his answers to these objections. His goal was to make it seem plausible that, in the future at least, people would agree that "machines can think." (Forgive me for indenting the previous post). ---- CharlesGillingham 06:56, 24 October 2007 (UTC)

Objections and replies is gone now, and the present Weaknesses of the test section doesn't have the same drawbacks. But I do have a big problem with it. The genius of Turing's test is that it provides us with an operational, practical way of comparing the intelligence of man and machine without having to provide any kind of definition of what we actually mean by intelligence. Criticizing it by complaining that it does not measure real intelligence misses the point entirely; unless, of course, the complainer already has another, better way of making it clear to us what intelligence is. The IQ test? Rp (talk) 16:46, 9 July 2008 (UTC)

I agree (very strongly) with you that Turing never intended his test to be a definition of intelligence or thinking. He wanted to "replace the question" -- which is a different thing entirely.
However, it gets criticized as a poor definition of intelligence anyway, and, in philosophical circles, gets criticized as a test of "mind'" and "intentionality", which is (I think you will agree) well beyond what Turing intended. (John Searle is such a philosopher. See Chinese Room under "Searle's Targets".)
I think that this criticism is not intended for Turing himself, but instead is directed at AI researchers and philosophers who take the Turing test a little more seriously than Turing did. (See functionalism and computational theory of mind.)
In short, I agree that this "weakness" is not a criticism of Turing, but is a criticism of the Turing Test, as it has been (mis?) used by philosophers and others in the ensuing years. Does that make sense?
Perhaps we should add a short rebuttal to this criticism from Turing ... something along the lines of "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." ---- CharlesGillingham (talk) 16:35, 11 July 2008 (UTC)
That sounds like a good idea. However, while Turing didn't propose the test as a definition, and, as you say, clearly stated that it was supposed to replace a definition, it is reasonable to view it as a functional definition. Which is where much of the criticism comes from. But the real problem is with the issue of comparison that Rp raises: it does provide a means of comparing two things, but all it is comparing is the behaviour of two entities, not the intelligence of two entities. Hence Searle and others: if we compare just the behaviour of two entities, how can we know that the behaviours come from the same type of process? The Chinese Room says that it is possible that the behaviour of one entity will be due to intelligence, and the behaviour of another will be due to a mechanical, non-intelligent, process. In defence of Turing's argument, though, I doubt Turing would have cared (especially given his chess playing example), and the behaviourists (including some of the materialists and functionalists) tend to be happy with the "if it looks like a duck, and quacks like a duck ..." argument. - Bilby (talk) 22:47, 11 July 2008 (UTC)
I think you are both entirely correct; it seems obvious to me that Turing's proposal was influenced by the behaviorist attitude towards psychology, which was very popular at the time. But I still feel much of the criticism is unfair because it disregards the benefit of his approach (namely, not having to define what intelligence, consciousness, etc. are). Rp (talk) 22:00, 12 July 2008 (UTC)

Heads in the sand

The new Heads in the sand note makes some claims about what Turing said that I've never seen before. I think that either we need a reference, or to take it out. Rick Norwood 12:41, 21 September 2005 (UTC)

Since nobody has stepped forward to support the claims in the "Heads in the sand" paragraph, I'm deleting it. Rick Norwood 14:44, 29 September 2005 (UTC)

Which brings us to the paragraph on "Extra Sensory Perception". Any evidence or support for the idea that Turing believed in ESP? Rick Norwood 14:48, 29 September 2005 (UTC)

Yes, there is a large section on it in his paper describing the Turing test. You should probably read the paper before making too many edits! --Lawrennd 20:45, 30 September 2005 (UTC)

Thanks for the info. My knowledge of Turing comes from secondary sources, which is why I'm careful to post ideas here where more knowledgeable people can comment. Rick Norwood 23:23, 30 September 2005 (UTC)

Wikipedia and the Turing Test

Computer Scientists, please convert the Wikipedia search box to process natural language. Thanks. - MPD 09:05, 28 December 2005 (UTC)

Expansion

I'd like to suggest expanding the article into the premises of the Turing test. Does anyone know of any rigorous analyses of the premises? I also want to affirm the earlier comment that the criticisms section seems to not really discuss the fundamentals of the Turing test, which is what this article should focus on, for example how judges are chosen, the criteria for judgement, time period, format, breadth/scope of topics. Ember 2199 06:31, 1 August 2006 (UTC)

Human Computer Interface

Turing was trying to make things easy for the machinists by proposing a "simple teletype interface". Consider other forms of interaction, such as first-person gaming. Can you tell when playing CS:Source online who is a bot and who is a human? (if yes, usually only because the humans are stupid!)

http://en.wikipedia.org/wiki/Computer_game_bot http://www.turtlerockstudios.com/CSBot.html

My home desktop already makes a datacentre-class machine of 2001 vintage look quite tame, yet can support a number of these bots. This year's crop of datacentre machines is a ten-fold advance.

I haven't yet seen a machine "demonstrate learning" (rather than fool someone that it is human). This is usually the diversionary tactic that is deployed to deny the machine has passed the test.

Is there a link to Asimov? Multivac was very like Google... all you need to do is ask the right question.

Lack of Clarity

I think the descriptions on this page are not clear enough and hence are misleading.

Did Turing really say that the Turing Test is a test to see if a computer can perform human-like conversation? A Turing Test could easily have communications that we would not typically call conversation, such as collaborative creativity. Hence I think the basic description should refer to a test for "intelligence".

The description says that both the computer and the human try to appear human. Unless I'm wrong (comments invited), the point of the game is for both the computer and the human to try to convince the third party that they are the human. This is not quite the same thing, and the present discussion does not make the competitive nature of the game clear.

In the section on the imitation game it is said 'In this game, both the man and the woman aim to convince the guests that they are the other.' This is clearly incorrect, for if we had a game where the man tried to convince the observer that he was the woman, and the woman tried to convince the observer that she was the man, then the observer would know who is who!

For the examples where it is claimed that a computer may or may not have passed the Turing test, it would be useful to say whether a proper version of the Turing test has been applied. I have heard (but cannot immediately provide references) that a lot of people believe that the test is for an observer to talk to an agent through the channel, and then say whether the agent was human or computer. This is a much, much easier task, as there is no competitive angle where the human will use more and more of their intelligence to beat the machine. I also cannot see how ELIZA could come anywhere near passing the true form of the Turing test, as any human observer could easily win by simply showing an ability to talk about something different from the simple single topic that ELIZA is capable of handling.
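The single-topic weakness mentioned above follows from how ELIZA-style programs work: they scan the input for a keyword and emit a canned reformulation. A minimal sketch of that mechanism (the rules below are invented for illustration, not Weizenbaum's original script):

```python
import re

# ELIZA-style responder: find a matching keyword pattern, return a
# scripted reformulation; otherwise fall back to a content-free prompt.
RULES = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bcomputers?\b", re.I), "Do computers worry you?"),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reuse captured text from the input in the reply.
            return template.format(*match.groups())
    return DEFAULT
```

Any interrogator who steers the conversation away from the scripted keywords immediately gets the content-free default reply, which is exactly the giveaway a competitive human player would exploit.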

I haven't written anything in this page, so I do not wish to just dive in and start changing things. But, I think that the points I raise should either be refuted, or that changes to the article should be made. —The preceding unsigned comment was added by 80.176.151.208 (talk) 17:44, 7 January 2007 (UTC).

  • You are right. The description is incorrect. I am looking at Turing's paper right now and it says

    The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

    Cj Gaconnet (talk) 18:01, 12 March 2008 (UTC)

What if...

My only question is that of the human. What if the human were to act like a chatbot? The Turing study is flawed because it doesn't account for the possibility of human deception. It is well known that humans deceive, for whatever purpose. This is a variable that needs to be taken into consideration. When we want to do a scientific study, we must account for all margins of error, and include every possible variable within the study. Simply having one person and one bot is not sufficient. This is only two groups. You must have a third group: a person that is not aware of bot technology, a human that is aware of bot technology, and the bot. The judge would then need to determine which one was which. Furthermore, I doubt that one judge is sufficient either, for it would also make a difference whether or not the judge was familiar with bot technology. You would need a panel. This panel would need to be "fooled" in the majority that they were not speaking to a bot when in fact they were. The Turing study, although a good start, is not accurate, because of what I had mentioned: 1. The knowledge capabilities of the human, 2. The knowledge capabilities of the judge, 3. The variable of human deception not being accounted for, 4. Only two control groups. Most psychologists have already discovered that three control groups are necessary to better evaluate human behavior. The same condition would apply here as well.

Having a person act like a bot doesn't really make too much sense from where I am standing. If there is a human control, it should be trying to make the tester think it is more human than the bot. We don't need to know if it is possible for a human to behave like a chat bot; they can. As for the other judge groups, I can sort of see your point, BUT you are explicitly asking them to judge whether the entities are human or computer, in effect telling them the technology exists. I am not sure whether the fact that they didn't know about the technology before would change the results; many internet users are already familiar with bot-like behavior, so it makes sense. The problem I see with this, though, is that it is steadily going to become harder to find anyone not already familiar with the technology. You could use a bunch of old people or people from LDCs, but that would skew the data as well.--Shadowdrak 17:05, 2 June 2007 (UTC)
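The panel proposal above amounts to a simple majority-vote decision rule: the machine "passes" only if more than half the judges guess it is human. A minimal sketch of that rule (the pass threshold and vote labels here are illustrative, not anything Turing specified):

```python
from collections import Counter

def panel_verdict(votes):
    """Each judge submits a guess about the machine: 'human' or 'bot'.
    The machine passes if a strict majority of judges were fooled,
    i.e. guessed 'human'. The strict-majority threshold is an
    assumption for illustration, not part of Turing's paper."""
    counts = Counter(votes)
    return counts["human"] > len(votes) / 2
```

With a rule like this, one credulous judge no longer decides the outcome, which addresses the worry that a single judge's familiarity (or unfamiliarity) with bot technology skews the result.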

In defence of Lady Lovelace

Behind every good computer is a programmer that loves it. The "originality" clause would be a key point of differentiation between human and machine. Ask a person the same question three times and you get three different answers; ask a computer and it is likely there would be one reply, potentially phrased three different ways. By the third question the human would have guessed that what was being asked was contextual, ironic or specious and not specific, and the answer would be returned in kind.

Computers, with humans behind them programming the response patterns, can expect these types of situations. When you then go to the AI level, where computers learn and teach each other, this illogicality (originality) would be refactored out. At least you would hope so. Stellar 03:12, 12 August 2007 (UTC)

racist material. must be removed immediately

Additionally, many internet relay chat participants use English as a second or third language, thus making it even more likely that they would assume that an unintelligent comment by the conversational program is simply something they have misunderstood, and are also probably unfamiliar with the technology of "chat bots" and don't recognize the very non-human errors they make. See ELIZA effect.


It foolishly assumes that all non-English speakers are technologically ignorant and do not know what a chat bot is (almost all non-English speakers I have met happen to know what a chatbot is).

I just removed the racist material. Do not revert it to what it was.

I didn't know about that meaning of the word racist... Thanks for teaching it to us! —Preceding unsigned comment added by 86.218.48.133 (talk) 14:01, 17 October 2007 (UTC)

It wasn't racist. Whether or not you speak English as a first language does not have any bearing as to what race you are. —Preceding unsigned comment added by 167.6.245.98 (talk) 19:43, 17 March 2009 (UTC)

Discussion of relevance

Isn't this section a near perfect definition of that most hated thing, original research cluttered with weasel words? No citations, sentence openings taken almost verbatim from the "what not to do" page on weasel words. Just all around poor. I won't edit it out, but it surely needs rewriting with some citations. VonBlade 23:10, 10 October 2007 (UTC)

From Russia with Love!

A Russian online flirting website has a chatbot, which passes the Turing test. They use it to dupe single guys and get financial info out of them for fraud: http://www.news.com/8301-13860_3-9831133-56.html

If we could combine such Russian software with a Japanese humanoid robot body and soup up its looks a bit (big tits, mini skirt, sailor suit, saucer-sized eyes, neon hair colour), suddenly all those catgirl animes would become documentaries ... 82.131.210.162 (talk) 08:45, 10 December 2007 (UTC)

See more here, it managed to fool a well-respected scientist:
http://drrobertepstein.com/downloads/FROM_RUSSIA_WITH_LOVE-Epstein-Sci_Am_Mind-Oct-Nov2007.pdf —Preceding unsigned comment added by 82.131.210.162 (talk) 11:13, 10 December 2007 (UTC)

Removed reference to multiplayer games

Computer game bots generally are for playing the game and are not designed for conversation. —Preceding unsigned comment added by Sbenton (talkcontribs) 00:04, 14 March 2008 (UTC)