Talk:Algorithm characterizations

Untitled

There are several problems with unbalanced quotes and unbalanced parentheses in the body of this article. I will try to fix this when I have access to a computer, but if there are any takers, feel free to preempt me.

It will take some work, however, to make sure the integrity of the quotes and citations is preserved. — Preceding unsigned comment added by 80.216.50.78 (talk) 19:16, 21 March 2019 (UTC)

Holding place -- Markov's (1954) definition of "algorithm"

"Boldface added for emphasis]:

"1. In mathematics, "algorithm" is commonly understood to be an exact prescription, defining a computational process, leading from various initial data to the desired result...."(p.1)

"The following three features are characteristic of algorithms and determine their role in mathemtics:

"a) the precision of the prescription, leaving no place to arbitrariness, and its universal comprehensibility -- the definiteness of the algorithm;
"b) the possibility of starting out with initial data, which may vary within given limits -- the generality of the algorithm;
"c) the orientation of the algorithm toward obtaining some desired result, which is indeed obtained in the end with proper initial data -- the conclusiveness of the algorithm." (p.1)

"2. The description of algorithm just given does not pretend to mathematical precision..."(p.1)

"3. In recent times a number of authors have elaborated theories leading naturally to a sharpening of the notion of an algorithm. We have in view the works of Kleene in the theory of recursive functions /21, 23/, the works of Church in the theory of λ-conversion /14, 15, 17/, the work of Turing in the theory of computable numbers /31/, and the work of Post in the theory of "finite combinatory processes" /25/" (p. 2; numbers between slashes refer to Markov's bibliography]

"4. ... The entire significance for the mathematics of rendering more precise the concept of algorithm emerges, however, in connection with the problem of a constructive foundation for mathematics [etc.]"(p.2)

"5. All the theories listed under point 3 are sufficiently complicated in themselves and lead to a sharpening of the precision of the concept of algorithm in an indirect manner.

"The theory of recursive functions /21, 23, 24/ deals mainly with that special case of an algorithm, when the role of initial data is played by natural numbers, and the result of its application is a number. Passage to the general case requires therefore the arithmetization of the initial data and results, which is obtained through some "Godel numbering" /18/ -- a process, consisting in the application of an algorithm, defined once and for all, but which is rather complicated.

"The establishment of the theory of algorithms on a rigorous basis through the use of the Church theory of λ-conversion requires, besides the Godel numbering, also a cumbersome formal apparatus.

"Turing's theory of "computable numbers", principally oriented toward a constructive approach to the concept of real number, leads also indirectly to a more precise notion of algorithm which is of interest to us. The exposition of its author /31/ contains some inaccuracies, pointed out by Post /28/.

"Finally, Post's theory of finite combinatorial processes, closely related to the theory of Turing, has not been worked out at all and consists in fact of one definition only /25/." (p. 2-3)

In the remainder of the text, in particular Chapter II section 3, Markov defines and defends his definition of "normal algorithm". He states that:

"It would be quite natural to assume that the theory of "natural algorithms" is equivalent to the theory of algorithms based on the concept of a recursive function. it was shown by V.K. Detlove that this is actually the case /1/." (p. 4)

wvbaileyWvbailey 20:56, 26 July 2006 (UTC)

Markov's definition of "natural algorithm"

His idea can be seen from the first three (of six) Chapters in his "Contents". In particular in Chapter II.3 he defines his normal algorithm (see sub-section below) in terms of a set of symbols that form words, and a method using substitution rules to create new but valid words.

Although this sounds exactly like a "recursive language" ("Equivalence of a type 0 grammar to a Turing Machine", cf. Hopcroft and Ullman, p. 221, Introduction to Automata Theory, Languages and Computation, Addison-Wesley, 1979), nowhere do Hopcroft and Ullman cite A. A. Markov, neither in their index nor in their bibliography. Gurevich 2000 does, however. As does Kleene (1952): "The problem which Post [1947] and Markov [1947] prove unsolvable was proposed by Thue 1914" (Kleene p. 382). And Minsky (1967) re neural nets (p. 65).

Chapter I. LETTERS, ALPHABETS, WORDS

1. Letters
2. Alphabets
3. Words
4. Entries
5. Links and chains
6. Translations

Chapter II. THE NOTION OF ALGORITHM

1. Algorithms in alphabets
2. Examples of algorithms
3. Normal algorithms
4. Examples of normal algorithms
5. The principle of normalization

Chapter III. CONSTRUCTION OF NORMAL ALGORITHMS

1. Extension of an algorithm
2. Closure of an algorithm
3. Composition of algorithms
4. Combination of algorithms
5. Branching of algorithms
6. Reiteration of algorithms
7. Translation of an algorithm
8. Some algorithms connected with matrices

wvbaileyWvbailey 21:36, 26 July 2006 (UTC)

Nutshell description of Markov's definition of his "normal algorithm" (pp.63ff)

[Boldface added for emphasis]

In particular, the notion "2." of "steps of local nature" appears also in Blass and Gurevich with regard to Kolmogorov–Uspensky (KU) machines:

Gurevich proposed to Uspensky that " 'every computation, performing only one restricted local action at a time, can be viewed as (not only being simulated by, but actually being) the computation of an appropriate KU machine.' Uspensky concurred." (Blass, Gurevich 2003, p. 10)

Markov's definition:

"1. Separate elementary steps, each of which will be performed according to one of these rules..."

"2. ...steps of local nature ..." [as opposed to "integral nature" ... he means that the algorithm won't change more than a certain number of symbols to the left or right of the observed word/symbol ]

"3. Rules for the substitution formulas ..."

1) order given is their preferred order of use
2) apply substitution to the first entry of the left-hand member

"4. Definite prescription for termination..."

1) a means to distinguish a "concluding substitution" [leading to a "terminal state"]
2) as distinguished from a "simple" -- i.e. typical, usual, most-likely-to-occur -- substitution

"5. [He defines the above as a "normal algorithm"]

"6. [the "scheme" of the algorithm is the list of the substitution rules.]

wvbaileyWvbailey 21:37, 26 July 2006 (UTC)

Holding place for Stone's definition of "algorithm" (pp. 3–13)

"[Informal] Definition: An algorithm is a set of rules that precisely define a sequence of operations." (p. 4)

"For people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, withoout the need for thought... however, if the instructions [to solve the quadratic equation] are to be obeyed by someone who knows how to perform arithmetic operations but does not know how to extract a square root, then we must also provide a set of rules for extracting a square root in order to satisfy the definition of algorithm" (p. 4-5)

[i.e. "the rules" must be complete to perform the expected outcome given the input symbols and the "computer's" self-contained abilities and knowledge-- every symbol in the rules and the input must be (i) distinguishable, every distinguished rule/input (ii) understood, and every distinguished, understood rule actually (iii)executable.]

"Algorithms may specify iterative behavior..." (p. 5)

"...not all instructions are acceptable, because they may require the robot to have abilities beyond those that we consider reasonable. For example ... 'Output the number 0 if Henry VIII was a King of England, otherwise output the number 1' [his example: this is unreasonable if the robot has not been previously instructed, even worse if asked if Aristotle were a king of Bngland but only 5 kings have been taught].... the instruction requires that the robot to have particular information, so that if the information is not given to the robot, it cannot obey the instruction. An intuitive definition of an acceptable sequence of instructions is one in which each instruction is precisely defined so that the robot is guaranteed to be able to obey it....We say that an instruction is effective if there is a procedure that the robot can follow in order to determine precisely how to obey the instruction.... effectiveness ... may be dependent on the finiteness property. ... every algorithm must have the property of finiteness; that is, it must be able to terminate in a finite number of steps. Each instruction must be precisely defined so as to avoid any ambiguity or misinterpretation....

"To summarize ... we define an algorithm to be a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and such that the sequence terminates in a finite time." (p. 8)

"An algorithm for the Turing machine consists of a set of rules that involve head motion, tape operations, and change of state... the state of the machine at a particular instant of time identifies the instrction that the Turing machine is carryong out at the time.

"An algorithm for the Turing machine consists of a set of five-tuples such that there is exactly one five-tupe for each state-symbol pair, excluding the HALT state. Such an algorithm is known as a Turing machine program." (p. 9)

Stone's 5-tuple is classical (q_i, s_j, q_ij, s_ij, d_ij): q_i is the current state, s_j is a tape symbol, q_ij is the next state, s_ij the symbol to write in the currently observed cell, d_ij the tape motion. Observe that he has defined "algorithm" as the finite-state machine table for the machine, NOT as what he presents immediately afterwards:

"The computation of a Turing machine is described by stating:

"1. The tape alphabet
"2. The form in which the parameters are presented on the tape
"3. The initial state of the Turing machine
"4. The form in which answers will be represented on the tape when the Turing machine halts
"5. The machine program" (p. 10)

Is the machine program truly "the algorithm", or is "the algorithm" the list of 5 elements above, i.e. the definitions of tape ALPHABET, INPUT, OUTPUT, and INITIAL STATE, as well as "the rules"?
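
For concreteness, here is a minimal Python sketch of the five-tuple scheme Stone describes, with the table keyed on (state, symbol) pairs; the unary-successor program at the bottom is my own toy example, not one of Stone's:

    # Sketch of a Turing machine driven by Stone-style five-tuples:
    # (current state, scanned symbol) -> (next state, symbol to write, head motion).
    def run_turing_machine(program, tape, start_state, blank="B"):
        cells = dict(enumerate(tape))        # the tape as a sparse dict of cells
        head, state = 0, start_state
        while state != "HALT":
            symbol = cells.get(head, blank)
            state, cells[head], move = program[(state, symbol)]
            head += {"L": -1, "R": 1, "N": 0}[move]
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Toy program: unary successor -- scan right over the strokes, append one stroke.
    program = {("q0", "|"): ("q0", "|", "R"),
               ("q0", "B"): ("HALT", "|", "N")}
    print(run_turing_machine(program, "|||", "q0"))   # -> "||||"

Note that the dictionary program plays the role of the "machine program" alone, while the blank symbol, the input convention, and the initial state are supplied separately -- which is exactly the distinction the question above turns on.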

wvbaileyWvbailey 19:23, 27 July 2006 (UTC)

Holding place: Gurevich, and Blass-Gurevich definition of algorithm

[Boldface added for emphasis. More to follow as wvbailey et al. figure out what the gents are saying]

"...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine....In practice, it would be ridiculous...[Nevertheless,] [c]an one generalize Turing machines so that any algorithm, never mind how abstract, can be modeled by a generalized machine?...But suppose such generalized Turing machines exist. What would their states be?...a first-order structure ... a particular small instruction set suffices in all cases ... computation as an evolution of the state ... could be nondeterministic... can interact with their environment ... [could be] parallel and multi-agent ... [could have] dynamic semantics ... [the two underpinings of his work are:] Turing's thesis ...[and] the notion of (first order) structure of [Tarski 1933]" (Gurevich 2000, p. 1-2)

"Gandy [1980] put four principles which any such [e.g. "vastly parallel" computational] machine must satisfy... [Gurevich is quoting Gandy here:] "The most important of these, called 'the principle of local causality' rejects the possibility of instantaneous action at a distance. ... It can be proved that if a device satisfies the principles then its successive states form a computable sequence." (Gurevich 2000 p. 4)

"4.4 Fixed Vocabulary

"...In our case, by definition, vocabularies are finite. This reflects the informal assumption that the program of an algorithm A can be given by a finite text....In particular, the vocabulary does not change during the computation." (Gurevich 2000 p. 11). He goes on to examine whether this is reasonable and concludes that self-modifying algorithms (i.e. programs) are using their text (the program itself) as part of their data.)(Gurevich 2000, p. 11)

"7.1 Constructivity

"Traditionally it is required in the foundations of mathematics that inputs to an algorithm be constructive objects (typically strings) and that the states of an algorithm have constructive representations. One champion of that tradition was Markov [1954] ...[but]... these constructivity requirements may be too restrictive in applications, especially in high-level design and specification. We abstract from the constructivity constraints. ASMs are algorithms which treat their states as databases or oracles" [say what??] (Gurevich 2000, p. 19)

> notion of an algorithm containing its own documentation

wvbaileyWvbailey 16:11, 27 July 2006 (UTC)

Holding place: an elegant description of why algorithms are necessary

From Boolos and Jeffrey (1974, 1999):

"No human being can write fast enough, or long enough, or small enough [^ footnote] to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary opertions on symbols. The problem will remain, that for all but a finite number of values of n it will be physically impossible for any human or any machine to actually carry out the computation, due to limitations on the time available for computation, and on the speed with which single steps of the computation can be carried out, and on the amount of matter in the universe which is available for forming symbols. But it will make for clarity and simplicity if we ignore these limitations..." (p. 19)
^ Footnote: "There is only a finite amount of paper in the world, so you'd have to write smaller and smaller without limit, to get an infinite number of symbols down on paper. Eventually you'd be trying to wrtie on molecules, on atoms, on electrons...."(p.19)

Algorithms seem to come in a couple of basic forms (a sketch of both forms follows the examples below):

1 Start them with blank tape ({ }), (0?), let them loose, and they print a number or numbers on their tape (e.g. pi), (e.g. an up-counter)

f({ }) = pi;
f({ }) = |.||.|||.||||.|||||.||||||.|||||||.[etc]

2 Start them with an arbitrary number or an arbitrary collection of numbers presented on their tape in a preagreed manner, i.e. one that they consider well-formed and well-chosen, and out pops a "number" (e.g. through multiplication) in a form you will say is well-formed (acceptable):

f(1,10) = 10; f(10,11) = 110
f(1,2) = 2; f(2,3) = 6
f(||,|||) = |||; f(|||,||||) = |||||||
...|||.||||..... => ...|||||||....

In this second case the algorithm must [be able to] (i) accept the symbols (e.g. |'s only, no *'s allowed) and (ii) accept their form/presentation (i.e. two, and only two, packed strings of |'s separated by a single ".", no *'s in the string, etc.) and also (iii) agree that the strings are well-chosen (e.g. not outside certain boundaries, perhaps. Thus we might define "0" = |).
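
Here is a rough Python sketch of the two forms, including checks (i)-(iii) for the second one. The generator style, the function names, and the stroke convention "|" = 0, "||" = 1, ... (suggested by the examples above) are my own illustrative assumptions:

    import re
    from itertools import count, islice

    def up_counter():
        """Form 1: started on a blank tape, emit |.||.|||.|||| ... forever."""
        for n in count(1):
            yield "|" * n + "."

    def unary_multiply(tape):
        """Form 2: accept a well-formed, well-chosen tape and multiply."""
        # (i)-(ii): exactly two packed strings of |'s separated by a single ".",
        # with no other symbols, so a stray "*" is rejected.
        match = re.fullmatch(r"(\|+)\.(\|+)", tape)
        if match is None:
            raise ValueError("tape is not well-formed for this machine")
        # (iii): interpret the strings with "|" = 0, so "|||" is 2 and "||||" is 3.
        a, b = (len(group) - 1 for group in match.groups())
        return "|" * (a * b + 1)

    print("".join(islice(up_counter(), 7)))   # |.||.|||.||||.|||||.||||||.|||||||.
    print(unary_multiply("|||.||||"))         # 2 x 3 = 6, printed as "|||||||"
    # unary_multiply("|.|||.|*||") raises ValueError: ill-formed, like the
    # misplaced-"*" example below.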

For example, a fancier multiplying-machine that accepts negative and positive integers might be able to handle strings such as

...|.|||.|.||||... (-2,-3)
...|.|||.||||... (-2,+3)

But such strings will fail to be well-chosen for the more primitive machine that handles only positive numbers such as ...|||.||||... (2, 3). And this fails too because of the misplaced "*":

...|.|||.|*||... (-2,??)

This does raise interesting questions about "completely specified". What happens when the parameters are not correctly presented? wvbaileyWvbailey 20:26, 21 August 2006 (UTC)

Moved from Talk:Algorithm by wvbaileyWvbailey 14:29, 9 September 2006 (UTC)

Example 2 under Atomization of the instruction set

This phrase doesn't make much sense:

"The RASP can be reduced to a RAM by moving its instructions off the tape to in finite-state machine"

Is this supposed to be:

"The RASP can be reduced to a RAM by moving its instructions off the tape to a finite-state machine"

or maybe, but less likely (and not very meaningful):

"The RASP can be reduced to a RAM by moving its instructions off the tape to an infinite-state machine" ?

--SilverStar 05:51, 23 October 2006 (UTC)

Sorry. It should be something more like: "The RASP can be reduced to a RAM by moving its instructions off the tape and (perhaps with translation) into its finite-state machine "table" of instructions." Thanks for picking this up. wvbaileyWvbailey 13:49, 23 October 2006 (UTC)

Does Sipser's assertion apply to the Lambda Calculus?

About the section "2006: Sipser's assertion ...".

I think that if it is only a "reference model problem", it does not apply to the "characterization" itself, only to the approach/methodological options... Putting it another way: if we use the Lambda Calculus as the reference model in the "algorithm definition", is that OK for Sipser?

-- Krauss nov. 2006

Good question. I suspect Sipser would say that the model must be a machine. (I suppose we could ask him!) There is a quote from Gödel -- I have looked for it recently but didn't find it, but I know I've read it in a couple of places ... He was not convinced that the recursive functions (mu-recursive, partial or total) completely defined all functions that are calculable/definable. Late in life he believed that only the Turing machine and equivalents were the final definition. I don't remember anything about lambda-calculus, but it is close enough to mu-recursion (I believe) that it would not answer Gödel's question either. If you assume the Church–Turing thesis then I suppose you have to say that any specification such as mu-recursion/lambda calculus/Turing machine is adequate. My guess is people don't use the first two because they are harder to use. I am far away from my books, so I can't research this more. wvbaileyWvbailey 15:52, 18 November 2006 (UTC)

A paradoxical situation

This was a duplicate of the post at Talk:Algorithm#A_paradoxical_situation. To keep the discussion together, I have replaced it with that link. Please come to that page if you would like to discuss the matter. — Carl (CBM · talk) 20:44, 19 March 2008 (UTC)

Why I reverted a long Multipundit edit to "A problem of definition"

The change made by Multipundit lengthened this section but added nothing that wasn't discussed in appropriate detail in appropriate sections further on. Multipundit made assertions without clarification, and added a bold assertion about a fringe topic, "inductive Turing machines", without much context or substantiation.

can an algorithm produce infinite output?

The article recursively enumerable set says:

  • There is an algorithm that enumerates the members of S. That means that its output is simply a list of the members of S: s_1, s_2, s_3, ... . If necessary, this algorithm may run forever.

Is that a legitimate use of the term "algorithm"? I can't find support for it in algorithm characterizations. I would say the set is recursively enumerable if there is an algorithm (recursive function) that takes input i and computes s_i. I think that a function that takes some finite or empty input and produces infinite output (a list of all the primes) but is computable for every initial segment of the output is called corecursive. I don't know if "algorithm" applies to that. 66.127.52.47 (talk) 21:19, 20 March 2010 (UTC)

This bleeding thing keeps coming up; we really should document things somewhere and be able to point to it. The fact is that there's an older sense of the word algorithm that requires it to halt in finite time on every input. However this restriction tends to get in the way more often than it clarifies, and I don't believe the word is very often used in this strict sense anymore. Of course "not very often" is not the same as "not" (and for that matter I'm not even all that sure about "not very often"), so one way or another we need to deal with the ambiguity wherever it comes up — we can't just silently pick one definition or the other, because that would confuse readers expecting the other definition. --Trovatore (talk) 21:31, 20 March 2010 (UTC)
Some of the algorithm characterizations in this article allow an algorithm to be partial recursive, i.e. on some inputs the algorithm might run forever while never producing any output. This question is different--whether an algorithm can run forever and produce an infinite amount of output. One can think of an algorithm as something that runs on a Turing machine, and a Turing machine as something that computes an abstract function but cannot do I/O. I/O can only happen through some external mechanism after the Turing machine enters a halt state. Some programming languages are organized around that concept, e.g. Haskell's I/O monad can be viewed that way. I agree that "algorithm" is still a reasonable common-sense term, but we ought to source such a usage, and we haven't yet done so as far as I can tell. 66.127.52.47 (talk) 21:53, 20 March 2010 (UTC)
Um. Agree that it needs to be sourced. But a definition of algorithm that doesn't allow for I/O is seriously broken.
The problem with referring to an algorithm as "something that runs on a Turing machine" is that it focuses too much attention on the details of TMs, which are beside the point. It's really programs that run on Turing machines, not algorithms. An algorithm is supposed to be a higher level of abstraction — you can change details of the program, without changing the algorithm it implements. --Trovatore (talk) 22:52, 20 March 2010 (UTC)
It's just a figure of speech. Instead of saying "the algorithm can run forever" you could say "the algorithm takes n as input and returns, as output, s_n". If the sequence depends on some other input k, the algorithm takes both k and n as inputs. So there is not actually any individual computation of a natural number that runs forever; there is just a computable function that enumerates the sequence s_n as its range. All these English problems disappear as soon as one makes the slightest effort to replace "algorithm" with "computable function" as would usually be done in a recursion theory book. But the translation between "the algorithm takes no input, enumerates the sequence s_n, and never halts" and "the algorithm takes n as input and returns s_n" is so direct that using the first as a convenient way of expressing the second is extremely common and causes no problems in practice. If someone ever needs to prove something about such an "algorithm", the first step will simply be to choose a computable function whose range is the sequence s_n.
Similarly, the way that an algorithm has "input" during the course of its computation is simply to perform the computation relative to an oracle, and consult the oracle whenever it wants additional input. So talking about algorithms with input is simply a different way of talking about oracle computations. From the point of view of the algorithm, it makes no difference if the input is typed when it is asked (as one visualizes I/O) or is piped in from a pre-existing file (as with an oracle computation). — Carl (CBM · talk) 02:30, 21 March 2010 (UTC)
The sentence in question is in the lead of recursively enumerable set and is intended to give the reader a feel for what RE means, not a precise definition. If the algorithm in question is represented by a partial recursive function, then that function could map 1 to s_1, 2 to s_2, etc. Thus each (finite) part of the output is indexed by the input, and any particular execution of the algorithm only produces a finite output while the algorithm as a whole has an infinite output. JRSpriggs (talk) 07:02, 23 March 2010 (UTC)
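
A small Python sketch of the translation being discussed in this thread, with the even numbers standing in for the enumerated set S (the choice of set is arbitrary):

    from itertools import count, islice

    def enumerate_S():
        """Reading 1: run forever, listing s_1, s_2, s_3, ..."""
        for n in count(1):
            yield 2 * n               # s_n

    def s(n):
        """Reading 2: take n as input, return s_n, and halt."""
        return 2 * n

    print(list(islice(enumerate_S(), 5)))   # [2, 4, 6, 8, 10]
    print([s(n) for n in range(1, 6)])      # the same finite prefix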

Removal of first sentence

Bhny removed the first sentence, about algorithm not having a generally-accepted definition, as "uninformative and arguably wrong". While the sentence is problematic because it's unsourced, I don't see how it's either uninformative or arguably wrong.

It is definitely not wrong, because some insist that an algorithm must always terminate in a finite number of steps, and others do not. This is a very basic difference, and there is certainly no generally-accepted answer. It is informative to tell the reader that. --Trovatore (talk) 01:21, 18 August 2012 (UTC)

Algorithm already has a generally agreed definition here: algorithm Bhny (talk) 01:42, 18 August 2012 (UTC)
"There is no generally-accepted definition of <subject>" is a really bad way to begin an article. There actually is a generally-accepted definition of algorithm, the problem is with a formal characterization of algorithm. Agreed? Bhny (talk) 01:58, 18 August 2012 (UTC)[reply]

No, not agreed. I provided two citations that also disagree. Gurevich actually asserts that an algorithm is a Turing machine, a mechanism, and not just a sequence of symbols that is interpretable by a target mechanism. BillWvbailey (talk) 15:00, 18 August 2012 (UTC)


There is no generally-accepted definition. The one at algorithm is definitely not generally accepted. It is perhaps widely accepted, but another widely-used meaning is something like "the semantic content of a program, after abstracting away implementation details", and does not require that the program terminate in finitely many steps. --Trovatore (talk) 20:03, 18 August 2012 (UTC)

Basically if you don't define the subject in the first sentence you don't have an article. The subject here is Algorithm characterizations, so we need a definition of Algorithm characterizations. Bhny (talk) 12:03, 19 August 2012 (UTC)

The point of the article, or one of the points, is precisely that workers do not agree on the definition of "algorithm". You can't lose that. I am going to revert. --Trovatore (talk) 18:12, 19 August 2012 (UTC)
I added a definition for the topic in front of that and cleaned it up a bit. Also we should probably say "researchers such as x and y are working on this". Bhny (talk) 22:40, 19 August 2012 (UTC)

Citation #1

Here's a citation. I cc'd this off the Unsolved problems in computer science talk page. Both it and its fellow Church-Turing thesis were deemed "unworthy":

. . .Algorithm: what is such a beast?. . .

This is a holding place for the following. The intent is not to push this or any other point of view, but to make the question evident. A number of epistemological and mathematical issues bubble up from this:

from https://research.microsoft.com/~gurevich/
[164] Andreas Blass and Yuri Gurevich
"Algorithms: A Quest for Absolute Definitions"
Bulletin of the European Association for Theoretical Computer Science
Number 81 (October 2003), pages 195-225.
Reprinted in Chapter on Logic in Computer Science
Current Trends in Theoretical Computer Science
World Scientific, 2004, pages 283-311
Reprinted in Church's Thesis After 70 Years
Ontos Verlag, 2006, 24-57
See more details here
What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the Church-Turing thesis and contrast Church's and Turing's approaches, and we finish with some recent investigations.
wvbaileyWvbailey 00:48, 5 September 2007 (UTC)

Interesting. The problem could be stated as "when are two algorithms equivalent?". Seems worth adding, and less bogus than trying to prove the Church-Turing thesis. 76.197.56.242 (talk) 15:49, 11 August 2008 (UTC)

---

BillWvbailey (talk) 01:45, 18 August 2012 (UTC)

--- Citation #2. This comes from the same talk page as the above, but further down; see #3:

I just found a fantastic article at http://math.ucsd.edu/~sbuss/ResearchWeb/FutureOfLogic/paper.pdf (cited in a 2007 Dershowitz-Gurevich paper):
Samuel R. Buss, Alexander S. Kechris, Anand Pillay, and Richard A. Shore, “The Prospects for Mathematical Logic in the Twenty-first Century”.
This paper came from a panel discussion of the Association for Symbolic Logic held in Urbana-Champaign, June 2000. In particular, under the heading “Computer Science” Shore stated the following three problems for computer science:
”1. “Prove the Church-Turing thesis by finding intuitively obvious or at least clearly acceptable properties of computation that suffice to guarantee that any function so computed is recursive . . .. Perhaps the question is whether we can be sufficiently precise about what we mean by computation without reference to the method of carrying out the computation so as to give a more general or more convincing argument independent of the physical or logical implementation.
”2. What does physics have to say about computability (and provability or logic)? Do physical restrictions on the one hand, or quantum computing on the other, mean that we should modify our understanding of computability or at least study other notions?
”3. Find, and argue conclusively for, a formal definition of algorithm and the appropriate analog of the Church-Turing thesis. . ..Thus we want a definition that will up to some precise equivalence relation capture the notion that two algorithms are the same as opposed to just computing the same function.” (p. 7-8)

There is more from the other authors, including Sam Buss’s question about artificial intelligence, “logic for computer science”, etc. I highly recommend reading this article if someone is interested in expanding this wiki-article. wvbailey

---

BillWvbailey (talk) 01:52, 18 August 2012 (UTC)

spelling, etc.

There were lots of instances in this article of Boolos-Burgess-Jeffrey instead of Boolos–Burgess–Jeffrey (i.e. a hyphen instead of an en-dash), and several of Church-Turing instead of Church–Turing, and some page ranges and parenthetical offsets with hyphens instead of en-dashes. I haven't fixed all of the latter yet. Note:

pp. 5–30 (right)
pp. 5-30 (wrong)

(This is codified in WP:MOS.) 174.53.163.119 (talk) 15:20, 20 July 2013 (UTC)