Talk:Post–Turing machine

Post independent of Turing?

The article claims that the work of Post is independent of that of Turing. However, Yuri Gurevich claims that Post saw Turing's article before doing his work. Is there more evidence going one way or another? Rgrig (talk) 17:18, 10 September 2010 (UTC)

Gurevich is not a slouch, so maybe he has his "facts" (understanding) wrong? The only way Post could have seen Turing's paper, would have been that Alonzo Church showed a cc of Turing's unpublished paper to Post, or someone else sneaked him a copy, and then Post lied about it. But what would Post have gained by it? When you read the historical facts (the time sequence), this makes no sense at all. So we apply Occam's Razor and stick with the simplest explanation: their work was independent.
Here are the written facts, as known by me, derived from Martin Davis's 1965 The Undecidable (a compilation of republished papers) and Andrew Hodges' 1983 biography of Alan Turing:
In 1936 Alonzo Church started a new journal, The Journal of Symbolic Logic, in Princeton NJ. Meanwhile, over in the UK, via his professor, Turing submitted his paper (really his Master's thesis) on 31 May 1936 to the London Mathematical Society (Hodges 1983:112). Church was the only one who could referee the paper, and so the LMS sent it to him for comment (Hodges 1983:113). On 23 September 1936, Turing sailed from the UK to Princeton (he was on a grant); here he met Church et al. He got his proofs sometime in the fall (October, November) for review (cf Hodges pages 116-123). Simultaneously, on October 7, 1936, Church received Emil Post's paper (Post was teaching at the City College of NY) "Finite Combinatory Processes, Formulation I", i.e. 5 months after Turing's and the same time Turing got his proofs to check. In order to publish Post's paper, Church "had to certify that the work had been completely independent" (Hodges 1983:125). This is what he wrote as a footnote to Post's paper:
Received October 7, 1936. The reader should compare an article by A. M. Turing, On computable numbers, shortly forthcoming in the Proceedings of the London Mathematical Society. The present article, however, although bearing a later date, was written entirely independently of Turing's. Editor
Post's paper appeared in print in late 1936 (exact date?). Turing's paper appeared a bit later, in the London Mathematical Society's Proceedings, ser. 2, vol. 42 (1936-37) in January 1937.
I know of no evidence that Church and Post knew one another personally -- we'd have to find biographies of both men -- but Post cited Church's 1935 "Unsolvable Problem of Elementary Number Theory" in his 1936 paper. Church was presumably aware of Post's 1921 PhD thesis -- a quite important paper that first formalized the "truth table" method and dealt a significant blow to Russell's "theory of types" that he needed for Principia Mathematica (cf van Heijenoort 1967:264ff). So we can presume that these two men knew one another. BillWvbailey (talk) 18:30, 10 September 2010 (UTC)
This is why I always check the talk pages; Wikipedia would be a lot weaker without this sort of information, information that would never be allowed within the rigid structure of the main page. Thank you! 2001:470:1F09:10D6:F21F:AFFF:FE54:B8C (talk) 08:25, 13 November 2014 (UTC)

Wikipedia is not for posting original work

See http://en.wikipedia.org/wiki/Wikipedia:Five_pillars ("all editors must follow our no original research policy").

I'm aware of it. As I worked on the compromise model it became a concern. Your point is well-taken. I think I'll cut it because it really isn't worth the fight. wvbaileyWvbailey

In my opinion, too much original work is occurring in this article as the author's creation (and naming) of a composite of published models. (This is not the same as describing a published composite as per its source.)

On the title we disagree. I've seen the usage on the 'net. Davis used the words "Turing-Post program". And I've cross-referenced the hell out of it.
Turing-Post Machine
Turing-Post model
Turing-Post program
Post-Turing Machine
Post-Turing program
Post-Turing model
If it's a real trauma we could add a cross-reference to "Turing-equivalent models", but some folks know this as much by Post's work as by Turing's. This entry isn't just for mathematicians. I think the operative word is "encyclopedia". I would prefer all the models lumped into one place. If this becomes a big issue we might have to arbitrate it.

wvbaileyWvbailey 21:36, 12 January 2006 (UTC)

Isn't the article supposed to be limited to universal computational models of a design extremely similar to those of Post and Turing? If so, "Turing-equivalent models" would definitely be too broad.
Not necessarily "universal"... But the three quoted are extremely similar, just variants on the 6-to-7 instructions in the "set", plus {0,1} or {blank,mark} only on the tape. The Turing version exists as counterpoint.
I don't think you would be considering them if they weren't universal.
(BTW, one unmentioned major difference between those two is that Post insisted on computations that halt, whereas Turing's computations were required to be nonhalting!)--r.e.s. (Talk) 00:36, 13 January 2006 (UTC)
Post provided his STOP for numerical computations but not for logical ones, where it wouldn't be used (I'm not sure why).
He explains in the paper that he included the nonhalting version just for symbolic logics as a matter of taste, preferring to start with finitely many axioms encoded in the boxes, then to generate and encode all infinitely many theorems derivable from them. But he points out that this is not necessary, since an n can be included with the axioms at the start, then stopping when the nth theorem is produced. (So I should not have said he insisted on computations that halt, but rather that he made it a point to emphasise that they're sufficient for his formulation.)
You would not believe the fight I've had re this topic of HALT/STOP versus Turing's "circles". (See the dialog on page "Halting Problem" under Talk:"History" or whatever...). This "dialog" with various persons has been going on for weeks. Also: the origin of the phrase "halting problem"? This question is why I even got involved in Wikipedia, just to answer: why is Turing's proof so different from the "halting" proofs? The origin of "halting problem" (probably) wasn't Turing, but who was it? How did it happen? Ideas? The Talk:"History" and the footnotes on the "Halting Problem" page should illuminate the issue. Also, the confusions between "Hilbert's Problems" of 1900 versus his 1928 recasting of "1900 Hilbert #2" as "1928 3 Hilbert Questions" ... wvbaileyWvbailey 02:06, 13 January 2006 (UTC)
I see no reason to doubt that Turing's was the first proof of an undecidable problem of the "halting problem" type. (Church's undecidability proof for a problem in the λ-calculus was earlier, but wasn't of this type.) As for who might have first posed such a problem (irrespective of questioning its decidability), it may be impossible to discover that. But I think it's important to realise that Church's λ-calculus, and also combinatory logic (going back to the 1920's), both involve the idea of reducing a term to "normal form" in finitely-many steps -- so of course Church's 1936 paper discusses reductions that "must (if continued) terminate in the normal form". (This wasn't presented as a decidability problem, but merely a part of his development of λ-definability.)--r.e.s. (Talk) 09:16, 13 January 2006 (UTC)
Interesting, thanks. I'll look at the Church paper(s) and see where the word "terminate" is, and reference it too. As to the different forms of "the Halting Proof" (as distinct from Turing's circle-proof), about as far back as I've been able to go without a trip to the library is Minsky in 1967 [so I may add him to that page, somehow, as in "The earliest known version of..."]. Interesting that Minsky (1967), Davis in Steen (1978), and Beltrami (1999) all use what I call the "antinomy" version, although there are subtle differences -- all involve the use of an algorithm with a STOP at one branch but a "circle" of some sort in the other branch. wvbaileyWvbailey 17:54, 13 January 2006 (UTC)
I think it might pay to look at Hao Wang's paper that presented a "program formulation" of TMs, evidently based on Post's Formulation 1:
Wang, Hao (1957): "A variant to Turing's theory of computing machines", Journal of the Association for Computing Machinery (JACM) 4, 63-92.
There's a brief description at [Wang program]
In any event, Turing and others were still referring to his circle-free/circular version of the problem in the late 1940s, so I suspect it was in the 1950s when the "modern" version took root -- maybe it's in Wang's paper? --r.e.s. (Talk) 20:03, 13 January 2006 (UTC)
It wasn't Wang's paper. I looked it up and scanned it; for more, see the "Davis" footnote at Halting Problem. I'm sure I'll catch mega-shit for those changes too, but I was digging deep, looking at the real stuff in the Math-department collection in the Dartmouth library. "Halting Problem"? Looks like Davis coined that phrase....
Problem is: if a reviewer can't come back with rebutting documentation, their word is... less than useless.
RE changes to the opening paragraph: Davis really did refer to these as "Post-Turing machines", and the programs as "Post-Turing programs." And a humorous thing: poor Davis in "What is a Computation?" slipped and in one place referred to a "Post-Turing program" (p. 256) instead of his usage "Turing-Post program" elsewhere in his paper. I was surprised to find the earlier Davis stuff myself, especially in something so obscure as the document I found it in (it looked like a mimeograph... but so be it...): Anyway, it's in the printed literature... it's not "me". Wvbailey 00:25, 18 January 2006 (UTC) wvbailey
I agree that it would be good to have an article on Post's model, which he called Formulation 1, in which the similarity to Turing's can be briefly mentioned.
I have the work open before me, but there is something special about these equivalents: the simplicity. It isn't just another equivalent. wvbaileyWvbailey 21:36, 12 January 2006 (UTC)

But the two are different enough that a composite will just be yet another Turing-equivalent model. --r.e.s. (Talk) 20:58, 12 January 2006 (UTC)

Universal machine entries

What's "OT" mean?

You expanded the entry more than I'd intended. I was just after an example of "unary coding", and the alternate-squares idea -- where you place pointers. Makes you wonder about a link to a new "universal machine" page.

By "OT" I meant "off-topic" -- the topic being, as I understand it, very simple variants of Post's "formulation 1" model. To illustrate how data can be encoded in these models (which are not even "computing machines" by Turing's definition, because their alphabets are strictly binary), I think it would be preferrable to restrict examples to formulation-1 variants rather than referring to Turing's universal machine and how it was coded. (I suspect that few, other than the two of us, have much interest in Turing's original model, considering that it's been "debugged" and superceded by now-standard reformulations as in the Turing machine article.) --r.e.s. (Talk) 01:05, 26 January 2006 (UTC)[reply]
I wouldn't object if you cut it down/out and moved the remnants to the talk page as background reference. I guess I am still struggling with the question of what the article is "really about". wvbaileyWvbailey 17:03, 26 January 2006 (UTC)
Done -- see below. --r.e.s. (Talk) 03:21, 3 February 2006 (UTC)

From your familiarity with this stuff, I'm presuming you've written the U-code for a P-T machine, so you're familiar with encoding "0" and "1" as e.g. blank-mark and mark-blank, and then leaving a third square in between as a place for pointers... or whatever (some unambiguous coding so U can keep its mind clear). And encoding instructions in unary with extra "marks" (for example, or blanks) that U erases (or prints) temporarily... but since we can't post original work... [I did this back in the mid 1980's, still have the code, did run a little example on my machine]... unless you know of some printed stuff we can reference (and I avoid web pages because they're not peer-reviewed)... we are restricted to published accounts of how others designed U-machines in "P-T" blank-mark code. Do you know of any? I.e. paper-published U-code in "blank-mark" sentences for a "P-T machine"? (I'm pretty sure I saw it years ago for a true Turing machine, i.e. a 5-tuple, I think in "Hennie". I found the reference in an old notebook of mine, that he used 0's (blanks) in unary). wvbaileyWvbailey 16:52, 25 January 2006 (UTC)

I've not seen a complete program for a universal (binary) P-T machine, although I do have a published reference describing in detail a universal Wang program whose input alphabet is binary, with strictly unary input-encoding (but it uses two additional auxiliary symbols "internally", making an alphabet of four symbols):
"J. W. Thatcher, Sef-describing Turing machines and self-reproducing cellular automata", in A. W. Burks (ed.), Essays on cellular automata, U. Illinois Press, Urbana, 1970; pp. 103-131.
Probably it would be fairly straightforward to eliminate the two auxiliary symbols in a relatively minor recoding of the program.
I did "scan" Wang's paper but as I remember it is still in the Turing tradition of "5-tuples" or "4-tuples" ["3-tuples" ?] Let's see... the P-T machine is both 2-tuples ("tape action:L,R,E,P" ,"next-instruction"[or #qj] ) and what? 3-tuples for the conditional jumps? let's see ("tape-symbol", if-symbol then-go-to-qi, if not-symbol-then-next-instruction [or #qj]) [If jump: the U-machine has to run over to the "data area", look and see if a mark is on the scanned square, then run back to the go-to instruction and make its decision based on jump-if-mark or jump-if-blank. Then it either repairs its pointer and moves right to the next instruction, or it proceeds to jump: "tick" off an address-square, shuttle left to "#0 instruction" and mark it, shuttle right to the go-to and repair and "tick off" any squares to the right, then shuttle left to #0 instruction" repair the mark, go right to #1 instruction" and mark it, etc. until all the tally marks are "used up" and U has arrived at the new instruction].
Wang (1957), with credit to Post, is often cited as the source of the "program formulation" of binary-tape Turing machines using numbered instructions from the set
write 0
write 1
move left
move right
if scanning 0 then goto instruction i
if scanning 1 then goto instruction j
where sequential execution is assumed, and Post's single "if ... then ... else" has been "atomised" into two "if ... then ..." statements. Davis and the others followed Wang in this, as far as I know. In any case, this is so close to Post that it seems quite appropriate to regard these as "Post-Turing" programs. (Btw, the universal Wang program I mentioned above uses the obvious extension of this instruction-set to tapes with a more-than-binary alphabet.)
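Since that instruction set is completely specified, it is concrete enough to sketch as an interpreter. The following is a minimal sketch only (Python; the mnemonics, the program-as-list-of-pairs representation, and zero-based instruction numbering are my own conventions, not Wang's -- he numbers from 1 -- and halting happens when control runs off the program, as in Wang's machine W):

from collections import defaultdict

def run_wang(program, tape=None, head=0, max_steps=10**6):
    # program: list of (op, arg) pairs; tape: two-way infinite, binary.
    tape = defaultdict(int, tape or {})
    pc = steps = 0
    while 0 <= pc < len(program) and steps < max_steps:
        op, arg = program[pc]
        steps += 1
        if op == "write0":   tape[head] = 0
        elif op == "write1": tape[head] = 1
        elif op == "left":   head -= 1
        elif op == "right":  head += 1
        elif op == "if0" and tape[head] == 0:   # if scanning 0 then goto arg
            pc = arg; continue
        elif op == "if1" and tape[head] == 1:   # if scanning 1 then goto arg
            pc = arg; continue
        pc += 1                                 # otherwise execution is sequential
    return tape, head

# Example program (mine, not Wang's): erase a block of 1s and leave a single 1
# past it -- a computation that needs "write 0", a point that comes up again
# in the "Wang model" section below.
erase_to_one = [("if0", 4), ("write0", None), ("right", None), ("if1", 1), ("write1", None)]
tape, head = run_wang(erase_to_one, {0: 1, 1: 1})   # halts with one 1, under the head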
Sounds like we should put Wang on the page and ... what, remove Beltrami? (He didn't contribute anything in the historical sense; he's just another example, ditto for Penrose.) In fact I'm just going to do it: copy your words into the article, remove Beltrami and add Wang. Do you know if Wang and Davis were on the same faculty? Somehow I think they were, or Davis was younger than Wang, maybe a student. wvbaileyWvbailey 17:13, 27 January 2006 (UTC)
So really there are only three fundamental instructions "move tape", "print symbol on tape", and "Jump", and they each have two variants (interesting symmetry): L/R, P/E, Jb/Jm. And if every "move" and "print" is a 3-tuple [no..., some waste results], can we dispense with jump? ... no we need to add a "no move" [is this true? it would be effectively a conditional jump]. Interesting question: what is the minimum number of total "-tuple-entries". It looks like 8 + 3 + 3 = 14. Is this true?
I'm not sure what you mean, but of course the "jump" (conditional branching) instructions, or their equivalents, are necessary for Turing equivalence.
I guess my wondering is: what is the "most atomized" of all the instruction sets (given such a question has meaning at all)? One measure might be: not counting the use of unary code, what encoding would give the fewest instructions (where the go-to instruction counts as "double": e.g. P, L, Jb,1, E, R, Jb,4, Jb,1) on the tape when writing an "example", even if there's a speed trade-off. The "good" thing about the "go to next instruction in sequence" is that the "next instruction in sequence" can be handled by a "hardware counter" and not waste tape (and time). The bad thing about it is it may cause longer instruction sequences, ultimately? Or perhaps another measure might be: what encoding gives the "simplest P-T machine" in "gate count" (probably the canonical state-machine version). Or, if the tape is considered a shift register, each square of which has a certain "gate count", what would be the "cheapest" instruction set and machine? Just a wonderment. This reminds me of an (old) Scientific American article where a culture builds a "computer" wandering the universe, where time is no problem but low energy use, reliability/complexity, and maybe size are -- how would someone build it?
My interest in Turing tarpits did lead me to code and implement a universal machine in a binary variant of Böhm's P′′ programming language (which is equivalent to Post's formulation-1, using while-loops instead of conditional gotos).
Interesting. I looked at the Böhm link... he used a counter in his while loops? Sounds a bit like the stuff that Minsky described in his text. At one point I added a counter to my P-T machine, with three new commands: Up (increment), Down (decrement), and Jump-if-zero. This significantly shortened the universal-machine code and really speeded the machine up. The conditional jumps UP-ed the counter to "store" the jump-to address; then U moved left to the beginning of the code and then moved to the right "DOWN-ing" the counter at each instruction.
No counters are involved in Böhm's "(...)" construct. It's a simple while-loop; i.e., with alphabet {0,1}, "(...)" means "while the scanned value is 1 do ... end_while", and as usual the test is performed at the beginning of each prospective loop.
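For anyone following along, that while-loop construct can be sketched like this (Python; the nested-list program representation and the primitive names "left"/"right"/"flip" are placeholders of mine, not Böhm's actual P′′ instruction set -- only the "(...)" semantics is as described above):

def run_loops(prog, tape, head=0):
    # prog is a list of primitives and nested lists; a nested list plays
    # the role of the "(...)" construct: while the scanned value is 1, run the body.
    for item in prog:
        if isinstance(item, list):
            while tape.get(head, 0) == 1:
                head = run_loops(item, tape, head)
        elif item == "right":
            head += 1
        elif item == "left":
            head -= 1
        elif item == "flip":   # placeholder primitive: invert the scanned bit
            tape[head] = 1 - tape.get(head, 0)
    return head

# Example: with tape = {0: 1, 1: 1}, run_loops([["flip", "right"]], tape)
# clears the run of 1s while moving right and stops on the first 0 --
# the loop test happens at the start of each prospective pass, as noted above.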
To solve the data-coding problem in that case, I generalised Turing's idea of alternating types of squares so as to use four (instead of two) interleaved sequences of bits, and used strictly unary coding throughout. I mention this to emphasise the possibly-surprising leeway that binary machines have in their organisation and use of memory.--r.e.s. (Talk) 01:05, 26 January 2006 (UTC)
I totally agree with the leeway issue. It is amazing: some people like me have coded with "marks" to "keep the tally", others code with "blanks" to tally; some poor people who have never actually built one try to use binary code -- I suppose you can, but what a nightmare. I experimented with an unconditional "jump" plus a single conditional jump. Even the decision to use a "canonical state machine" or a "jammable up-counter" as the "instruction memory" leads to different constructions. (I built three of these variants in Excel.) Maybe this discussion might be of use to someone -- students, say -- since it's 'off line' and we're "revealing" more. I used some of this in an effort to find a way for a microcontroller to quasi-"self-test" itself (creating "random numbers" was better, but I did discover an interesting trick from the P-T machine). This stuff may have some theoretical use we can't envision. wvbaileyWvbailey 17:03, 26 January 2006 (UTC)
Writing one's own code for a universal machine in almost any of these primitive languages is likely to be very challenging (and more than just a little bit educational, if my experience is any measure). --r.e.s. (Talk) 23:52, 26 January 2006 (UTC)
The educational thing was the source of my original interest in this, so many years ago: to identify a cheap, challenging "teaching computer"-like thing. But I did discover that it was basically "too" challenging. Only the simplest programs were do-able by the average mortal. (The multiplier program is pretty neat, about the limit I would say.) Since then I have used P-T's as challenges to myself when confronted with a "new language", e.g. to build one in C, in Excel, in assembly language, whatever... wvbaileyWvbailey 17:48, 27 January 2006 (UTC)

Coding for the universal machine

[The following is a copy of a section regarded as off-topic for this article, moved here for the sake of possible Talk-page discussion. --r.e.s. (Talk) 03:21, 3 February 2006 (UTC)]

In his original paper Post did not discuss a "universal machine"; this was the invention of Alan Turing (see below for an example of his coding). Later writers have, for the most part, used a combination of unary and binary coding for their "Post-like" instructions that will be put on the tape of a universal machine (cf Turing machine for more details about "the universal machine").

Penrose, in his book The Emperor's New Mind, p. 30ff., explicitly describes the use of "unary coding" (Penrose, p. 41), although he is not the first to use this convention. (Unary encoding is found in Section 10 of Turing's 1936-1937 paper, in which the nth value in a sequence is represented by "the number of figures 1 between the nth and the (n+1)th figure 0".) To quote Penrose:

"Since we wish to be able to include numerical data as part of our input [i.e. data on the tape] we shall want to have a way of describing ordinary numbers (by which I here mean the natural numbers 0, 1, 2, 3, 4, ...) as part of the input. One way to do this might be simply to use a string of n 1s to represent the number n (although this could give us a difficulty with the natural number zero):

1 -> |, 2 -> | |, 3 -> | | |, 4 -> | | | |, 5 -> | | | | |, etc.

This primitive numbering system is referred to (rather illogically) as the unary system. Then the symbol '0' could be used as a space to separate different numbers from one another. It is important that we have such a means of separating numbers..." (Penrose, p. 41)
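To make the convention concrete, here is a small sketch (Python; the helper names are mine; note how the difficulty with zero that Penrose mentions shows up as an empty run of 1s):

# Unary coding with '0' as the separating symbol, per the quoted convention.
def to_unary(numbers):
    return "0".join("1" * n for n in numbers)

def from_unary(s):
    return [len(run) for run in s.split("0")]

print(to_unary([3, 1, 4]))        # 1110101111
print(from_unary("1110101111"))   # [3, 1, 4]
print(from_unary("11001"))        # [2, 0, 1] -- zero survives only as an empty run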

In the creation of a universal machine "coding", Beltrami uses "binary code" for the instructions, e.g. "write 0" has the code 000, "write 1" has the code 001, etc. Unlike the Penrose model, Beltrami uses a string of 0's to designate a jump-to address rather than a string of 1's.

The use of unary "encoding" in the creation of instructions for the universal machine also appears in Hennie. But the Hennie model is otherwise similar to the Turing model.

Turing also used unary-like encoding as part of a machine's "Standard Description" (S.D.), which was a string using the symbols A, C, D, L, R, N, and semicolon ';'. A convention was adopted according to which the squares of the tape alternated between two types -- "F-squares" (whose contents were never to be erased, and were to form "a continuous sequence" with "no blanks until the end"), and "E-squares" (whose contents were erasable, for "rough notes to assist the memory" using "a sufficiently rich variety of symbols"). In particular, the S.D. symbols were to appear on the tape in F-squares, and the universal machine U could print any of the symbols A, C, D, 0, 1, u, v, w, x, y, z, and colon ':' (with u, v, w, x, y, z being used as erasable "markers" in E-squares). Thus, the universal machine emulates any computing machine M whose S.D. it finds on the tape in F-squares, by printing (in other F-squares) the encodings of M's successive complete configurations -- including the same subsequence of "figures" (0's and 1's) that M would print -- and using the E-squares as an auxiliary workspace.

Note: In Turing's formulation, a "computing machine" -- unlike his universal machine -- is assumed to begin with a blank tape. Thus, rather than taking an input (say n) and halting after printing the nth term of the sequence, a "circle-free" computing machine never halts, and instead prints all the terms in succession. Thus the S.D. encodes no input for the machine to be emulated by the universal machine, whereas the S.D. alone is the essential input to the universal machine.

For example: The following string is the code for the four instructions that would print on the tape 0 1 0 1 ... ad infinitum to the right (Turing's example p. 127 in The Undecidable):

;D A D D C R D A A ; D A A D D R D A A A ; D A A A D D C C R D A A A A ; D A A A A D D R D A

The instructions that form the string are shown below. Here “qn” stands for “instruction #n”, “Si” stands for “symbol #i”, S0 is "blank", S1 = “0”, S2 = “1”:

Inst{ q1 S0 S1 R q2 } = D A D D C R D A A
Inst{ q2 S0 S0 R q3 } = D A A D D R D A A A
Inst{ q3 S0 S2 R q4 } = D A A A D D C C R D A A A A
Inst{ q4 S0 S0 R q1 } = D A A A A D D R D A

Instructions were to be preceded by "D" followed by a unary string of A's (i.e. instruction #4 is "q4" and coded as "D" followed by four A's: D A A A A). Characters to be printed (by commanding the universal machine U, only indirectly by the machine on the tape) were to be preceded by "D" followed by a unary string of C's (specifically, "blank" is D, "0" is D C, "1" is D C C). Other symbols, if required, could be defined in this manner.
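Because the convention is completely mechanical, it can be checked with a short script (a sketch; the tuple layout and helper names are mine, and the spaces shown in the S.D. above are dropped):

def encode_sd(instructions):
    # Each instruction is (q, scanned, printed, move, next_q); states are
    # numbered from 1, symbols from 0 (S0 = blank, S1 = "0", S2 = "1").
    def state(n):  return "D" + "A" * n    # qn -> "D" followed by n "A"s
    def symbol(i): return "D" + "C" * i    # Si -> "D" followed by i "C"s
    return "".join(";" + state(q) + symbol(s) + symbol(p) + m + state(nq)
                   for (q, s, p, m, nq) in instructions)

# Turing's example above: the machine printing 0 1 0 1 ... to the right.
prog = [(1, 0, 1, "R", 2), (2, 0, 0, "R", 3), (3, 0, 2, "R", 4), (4, 0, 0, "R", 1)]
print(encode_sd(prog))   # ;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA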

Wang model; [no] error

With the normal input/output conventions, it's not possible to omit the write 0 instruction, for otherwise the number of 1s on the tape can never decrease and thus the function cannot be computed. I have not looked at Wang's actual writing; does anyone have easy access to it? I am sure that the issue is that Wang has some nonstandard input/output convention, just as Turing's original paper did. This needs to be clarified. CMummert 02:42, 25 July 2006 (UTC)

I don't remember writing this section (couldn't have, it's too well-written). I don't have a cc of Wang, I had to find it in the Dartmouth library. I skimmed it just hunting for his model -- another correspondent had actually pointed me to Wang's model-- at the time I was hunting for the source of the phrase "halting problem" (wasn't Wang). ... ahh... I do remember that Minsky (or someone) has a proof that there exists a Turing-equivalent machine that never erases. Yes indeed here it is, it is old Minsky himself:
"We can now demonstrate the remarkable fact, first shown by Wang [1957], that for any Turing machine T there is an equivalent Turing machine Tn that never changes a once-written symbol! In fact, we will construct a two-symbol machine Tn that can only change blank squares on its tape to 1's but can not change a 1 back to a blank." (his italics; Minsky 1967, p.262-266)
He offers us a Theorem 14.5-1:
For any Turing machine T there is an equivalent two-symbol machine which can only print 1's on blank squares but can never erase a 1, once printed.
The proof requires 3 pages. He starts by creating a machine T'n using 4 symbols: 0 = 000, A = 100, B = 110, C = 111. He shows that any machine T can have an equivalent T'n with the following restrictions:
0 -> A, 0 -> B, 0 -> C, A -> B, A -> C, B -> C
In a follow-up he comments that this machine wipes out all records of its computations by converting all symbols to C's. A machine that keeps a record is also possible. I haven't read the proof; it looks straightforward but tricky. Lemme know if you need more info. wvbaileyWvbailey 03:35, 25 July 2006 (UTC)
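Those restrictions are easy to sanity-check mechanically (a throwaway check in Python; the code assignments 000/100/110/111 are the reconstruction used above and should be confirmed against Minsky p. 262ff):

# Every allowed change must only turn 0s into 1s, never a 1 back to a 0.
codes = {"0": "000", "A": "100", "B": "110", "C": "111"}
allowed = [("0", "A"), ("0", "B"), ("0", "C"), ("A", "B"), ("A", "C"), ("B", "C")]
for a, b in allowed:
    assert all(x <= y for x, y in zip(codes[a], codes[b])), (a, b)   # '0' <= '1'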
The section in question was written by me ("r.e.s."), and (as noted) is not in error about the "write 0" instruction being unnecessary for Turing completeness. Wang's "program formulation" of TMs doesn't use "non-standard" i/o conventions, except (in the "nonerasing" case) in the sense of alternating the cells used for i/o with those used as "working cells", much as Turing did. Standard unary is used for the representation of numbers. Wang presents first a Turing-complete formulation using only four instruction-types equivalent to {"write 1", "shift right", "shift left", "if 1 goto k"} -- with the alternating-cell i/o convention -- and then discusses various alternative instruction-sets, e.g. involving the equivalents of "write 0", "if 0 then goto k", "(unconditional) goto", etc., pointing out that the auxiliary cells are not needed when "write 0" is added to the set. (He states that it's an "open question" whether the auxiliary cells can be dispensed with in the non-erasing case.) BTW, it's probably worth noting also that Wang actually presented this "1957 paper" at an ACM meeting in 1954! --r.e.s. (Talk) 05:43, 25 July 2006 (UTC)
Suppose I want a machine that computes using the following input/output convention, which I believe is standard:
Just curious: what is "lambda x.1"? Thanks, wvbaileyWvbailey 15:22, 25 July 2006 (UTC)
  • The input is a natural number in unary (with no gaps), all other squares being blank and the head being placed at the left edge of the number when the machine starts.
  • The output must be a natural number in unary, all other cells must be blank when the machine halts, and the machine must halt with the head at the left edge of the number.
I start the machine with the input 11 (unary for 2) and all other cells blank. If the machine is unable to change one of those 1s to a blank, the machine cannot end up with a tape that just says 1. Therefore, Wang's model does not work with the conventions I just specified.
Usually "0" i.e. empty set { } is written in unary as |, a single mark. "1" i.e. successor to "0" is written as ||, "2" i.e. successor to "1" written as ||| -- three markes. As far as I know, no one attempts to render "0" as "blank".wvbaileyWvbailey 15:22, 25 July 2006 (UTC)[reply]
There is also a cryptic comment about Wang's machine not halting; there are instances where it is useful to have machines that don't halt, but lack of halting also means that the conventions I just specified can't hold, because the machine can never signal that its output is prepared.
There is a tricky way of doing this -- at the end of the calculation purposefully send the machine into a "tight circle" as in "xxx: unconditional jump to this instruction xxx" and use a 2nd machine (e.g. me-as-computer) to detect this condition. This is what I've done with my home-built machines; I-as-computer detect this condition visually. But it is simple to build a "detector". Also, we typically do this in microcontroller-based designs; at the end of a sequence of calculations we put the microcontroller into a tight loop and enable a timing interrupt; we let the interrupt "kick the dog" out of its "tight circle". wvbaileyWvbailey 15:22, 25 July 2006 (UTC)
Reply to CMummert: Sorry if the comment seemed cryptic -- I've edited (and hopefully clarified) this part of the article. Wang machines halt iff there is an attempt to transfer control to an instruction number that's not in the program. (This happens in only two ways: upon executing a goto that attempts to branch to a nonexistent instruction number, or immediately after execution of the last instruction in the program.) Please see below for my reply re i/o conventions, which are entirely standard in Wang's "machine W". --r.e.s. (Talk) 23:58, 29 July 2006 (UTC)
Turing's original paper, about computing real numbers, also uses nonstandard i/o conventions. Perhaps this is why it was thought that Wang's model does not. CMummert 11:27, 25 July 2006 (UTC)
The notation λx.1 denotes the constant function whose value is 1 for every x. If the machine is meant to stop with only one (or two, it won't matter) nonblank cells, but the input may have arbitrarily many nonblank cells, then the machine either must erase some cells or it cannot compute this function. That is why the article needs to describe the nonstandard input/output convention that Wang is using. The closest thing to a standard contemporary understanding of a Turing machine is that it represents numbers in unary, or binary (or unary plus an end marker, if you prefer), and halts at the end of a successful computation. Wang's model, apparently, does not follow these conventions. CMummert 15:56, 25 July 2006 (UTC)
I'd have to go away and think about whether or not the "k = 1" can be successfully programmed with a "Wang machine". With the trick described above (0 = 000, A = 100, B = 110, C = 111) my guess is it can. wvbaileyWvbailey 17:00, 25 July 2006 (UTC)
I was out for a walk and the thought occurred: a Turing machine at its HALT may not be stopping at the end of a successful computation. Rather, a "bad jump" -- due to a "programming error" or "noise" or "unfortunate input" or "bad luck" or "busted machine" -- may send the machine to the HALT prematurely (best to use only one HALT in a "program"), before the machine can output its calculation (number). This happens in (badly-designed) microcontroller work all the time and is a very bad thing (injures/kills people, destroys property) -- I never use a HALT (STOP) in my work, but rather the watch-dog scheme that I described above. (There's a whole science around watchdog timers and their use.)
My point is that HALT cannot be trusted as an indicator of a successful computation. The question then becomes: how do we know that our machine's "product" (output number, computation) is correct or even "completed"? [Possible answer: we don't, unless we do a sample calculation by hand, per advice of Knuth -- "the reader should always take pencil and paper and work through an example of each algorithm..." (Vol I p. 4). But this sample computation is valid only for that instance; we have only the likelihood (probability) of a successful computation, no guarantees.] This is an ugly problem. Right before I 'retired' I was working on this very problem -- "computational correctness" -- and could find little about it, just a single group at Stanford working on it (under E. J. McCluskey, as I remember). wvbaileyWvbailey 17:00, 25 July 2006 (UTC)
By successful computation, I merely meant terminating computation. It is indeed possible for a particular machine to be misprogrammed so that it halts but leaves illegal output. This is not a problem, though; it is handled by the definition of computable function, namely, a total function f on the natural numbers is computable if there is a Turing machine that, whenever given an input n in the proper form, produces the proper form for f(n) on the tape and then halts with the head at the left edge of the answer. Conversely, any Turing machine computes a unique partial function. The domain of the function is the set of n which cause the machine to terminate with an output in the proper form (so an incorrectly formed output merely means the function is not defined for that input). This is why the input/output conventions are important - because they are integral to the definition of a computable function. There is no notion of a machine computing the wrong function; if you want a different computable function, you choose a different machine. CMummert 17:34, 25 July 2006 (UTC)

For CMummert: It's only when the instruction-set omits the "write 0" that your "standard" i/o conventions are not used. In the main article "Wang's model" refers to his larger instruction-set that does include "write 0". I suggest that the main article simply omit any mention of the tangential fact that Turing completeness is possible with smaller instruction-sets, in view of the distraction this seems to have caused. It should be apparent that the six-instruction set is sufficient to simulate any Turing machine in the modern sense (e.g. a la Kleene), using only the standard i/o conventions. --r.e.s. (Talk) 18:42, 25 July 2006 (UTC)

I'll offer another idea (per an earlier CMummert suggestion): in the spirit of simplifying main articles, create another article -- maybe call it "Wang machine" -- as yet another example of "Turing-equivalent" machines. (Cyber-paper is cheap, no minds will be harmed...). wvbailey
My reason is this: I've been trying to draw a "Turing machine" as "a mechanical contrivance" with a paper tape and an indexing "tractor wheel" similar to the old-fashioned paper-tape Teletype, or a 35mm film with its square indexing holes. But again and again I run into the problem of "ERASE" and "OVERPRINT" as virtually impossible to implement with real mechanical objects (disappearing ink is one possibility, some kind of rubby-eraser-wheel is another). There are many ways of doing this if we were to move something -- e.g. a ball, a slide switch -- back and forth in a sequence of trays-as-tape, but this is not in Turing's spirit of erasable, over-printable symbols on paper tape. wvbailey
The idea that a Turing-equivalent machine could exist identical to a Teletype, except that the "Wang-Turing-Teletype" is able to reverse the direction of the tape -- I do believe I've seen such reversing tape on some early computers (I know magnetic-tape machines do this) -- is (to me) fascinating. It very much simplifies "the design" of the "Turing machine" -- the Teletype just reverses its motion to locate its required "square" and punches another hole in the tape. (So what if the tape is 186,000 miles long? And a calculation takes 14.7 years to complete?) While you and I might argue "Who the hell cares about this?" (obviously Wang and Minsky did, and we care enough to spend time writing about it) -- we can never know what future mind might be sparked by such an amazing proof. wvbaileyWvbailey 19:59, 25 July 2006 (UTC)
Actually, I think Wang's model belongs in the present article even more than do those of Davis, since the Davis models came after Wang's and essentially duplicated it. (IMO, the implicit vs. explicit use of a "stop" instruction matters little in this regard.) Although it's possible that this model may also have a place in some other article with a different topical emphasis, I'm wary of your "cyber paper is cheap" approach. BTW, machine W is Wang's name for the model we're discussing ("machine B" is what he called the nonerasing version). --r.e.s. (Talk) 21:11, 25 July 2006 (UTC)
I'm not sure I could quibble with you about this; I will need to re-read the Wang model again (don't have a cc, as noted above); besides, Davis is more widely published than Wang. But see my observation immediately below about Kleene. All these guys (Post, Kleene, Wang, Davis, Minsky, even Chaitin) knew each other (I believe all overlapped at the Courant Institute at NYU at one time or another, either as students or professors), so I'm not surprised their models are similar. What I meant in my suggestion is to create a new page about "the Wang model without erasure (Print 0)". If you don't I may just do it myself; i.e. extend Wikipedia:Be bold in updating pages to creating them. wvbaileyWvbailey 23:53, 25 July 2006 (UTC)

Actually Stephen C. Kleene (1952) beat Wang by a year or so and Davis by a mile. If you can find a cc of his Introduction to Metamathematics, North-Holland Publishing Company, first edition 1952, I think you will be startled, as I was. I suggest that this is the source of all of the Post-Turing models to follow. I was put onto this source by Martin Davis himself. (While he admits that he coined the phrase "halting problem", he attributes the proofs to Kleene in this book and ultimately to Turing's 2nd proof. After some consideration I have to agree with him.) Seriously, see if you can get a cc and peruse Chapter XIII, Computable Functions, pp. 356 ff. Observe his model in particular ("Suppose the symbol S1 is actually a tally mark '|'", etc.) wvbaileyWvbailey 23:27, 25 July 2006 (UTC)

Kleene's Introduction to Metamathematics, especially the "Computable Functions" chapter, did indeed impress me (I read it about a year ago) -- as can be seen in my edits to the Halting Problem article. But a couple of observations may be in order about Kleene's model in the context of the present article:
  • Unlike Post's model (and Wang's), Kleene's machines do not use a strictly binary alphabet -- the i/o uses a binary alphabet, but other auxiliary tape symbols are allowed in the course of a computation. (p.360)
You are correct re "... others [symbols] may be used in the computation"; however, in his example of using | and blank that extends throughout the chapter (p. 356-386) he uses only | and blank; no other symbol appears on his tape, excepting on p. 381 he let a 2 leak in. wvbaileyWvbailey 15:24, 26 July 2006 (UTC)
I stand corrected on the issue of whether Kleene's model required other than a strictly binary tape alphabet. After first defining "computability" in such a way that the tape alphabet was not strictly binary, he proceeds to define "1/1 computability" in such a way that it is strictly binary ("1/1" referring to 1-way infinite tape, 1 symbol other than the blank). My apologies -- the confusion was mine. --r.e.s. (Talk) 21:44, 26 July 2006 (UTC)
  • Unlike Post's model (and Wang's), Kleene's machines, by design, have the basic structure of Turing's machines -- with the "logic" embodied in a state transition table virtually identical to Turing's (though with an explicit "halting state"). Whereas Turing's transition tables were lists of quintuplets of form (q, s, s', d', q'), Kleene's have entries (s', d', q') for each (q, s) as (row, col) -- essentially the same thing. Although the transition table can (with modern eyes) be viewed as a kind of "program", istm that the model is not a "program formulation" in nearly the same sense as those of Post and Wang. (IMO, it's hard not to see Post's Formulation 1 as an early programming language.)
wvbailey here: Kleene is explicit: "Our treatment here is closer in some respects to Post 1936. Post 1936 considered computation with a 2-way infinite tape and only 1 symbol" (p. 261 first paragraph). Then on page 379 he makes the following observation:
"Indeed it can be argued that the Turing machine act is already compound, and consists psychologically in a printing and a change in state of mind, followed by a motion and another change of mind. Post 1947 does thus separate the Turing act into two; we have not here, primarily because it saves space in the machine tables not to do so."
I believe what you are saying immediately above is what I observed in Kleene's statement -- that the Post-model machine (2 "operations" per line of the state table) more "atomizes" the actions of the machine than the Turing model (3 "operations" per line of the table). Clearly this distinction is fundamental to the Post model. wvbaileyWvbailey 15:29, 26 July 2006 (UTC)
Kleene's model was surely an important influence in bringing about the modern formulation of TMs and of TM computations that terminate. --r.e.s. (Talk) 04:53, 26 July 2006 (UTC)

Rather than quibble, what we might take away from the discussion and infuse into the article (we have plenty of verifiable backup in the quotes above from Kleene) is the strong distinction between TMs and P-T's: (i) the addition of HALT/STOP, (ii) the further atomization of instructions beyond the Turing model, (iii) possibly -- and I am uncertain here -- the notion of sequential instructions with the exceptional test+branch as a specific type of instruction, yielding the following 7 instructions executed sequentially: { R, L, E, P, Jb,xxx, Jm,xxx, H } [or whatever abbreviations one wants to pick]. This is how I built my little model out of CMOS stuff -- I used a jammable/loadable up-counter as the "state register" aka "program counter". However, more recently I have found that when simulating a Post model on a spreadsheet, the state-machine version (each instruction followed by test+branch) is much easier to "build".

With regard to this last (iii): in your opinion, do Wang and/or Post explicitly assert sequential instruction execution? I say we hunt this down and nail it -- i.e. are there quotes to back up this assertion? It is an interesting one, and I agree with you that in its ultimate form the P-T model represents a primitive programming language. (Davis does assert sequential operation in some of his later writing.) wvbaileyWvbailey 15:24, 26 July 2006 (UTC)

In both the Post and Wang model, the instructions are numbered by consecutive positive integers, and by "sequential execution" I mean that instruction i+1 is executed next after instruction i:
  • In Post's Formulation 1, every instruction ("direction") contains the number of the instruction to be executed next (except of course for Stop) -- and this may be any instruction in the program -- so execution is not generally sequential.
  • In a Wang program, execution is sequential -- except obviously when branching occurs due to a conditional "goto" instruction. The relevant material to quote is on pp. 64-65. --r.e.s. (Talk) 20:57, 26 July 2006 (UTC)
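To make the sequential-execution point concrete, here is a tiny program in the 7-instruction style { R, L, E, P, Jb,xxx, Jm,xxx, H } quoted further up (my own example, not from Post, Wang, or Davis; it assumes absolute jump addresses and a contiguous block of marks). It erases every mark from the scanned square rightward and halts on the first blank:

1. Jb,5   ; if the scanned square is blank, jump to 5
2. E      ; erase the mark
3. R      ; move right
4. Jm,2   ; if the scanned square is marked, jump back to 2
5. H      ; halt

Execution is sequential except at the two conditional jumps; no unconditional jump is needed in this example.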
This is good news. I believe we could rework the article to compress it into a single "computer-like" model that includes stop, sequential instruction execution, two "operations" per "machine cycle", and the use of a single symbol plus blank on a two-way tape. Thoughts? Do you have a cc of Wang in pdf form? I made a special trip to the library (for this and for Markov on the algorithm page) but they were all f***d up and could not get the paper from their electronic archives (grrr -- did find Markov though :-). wvbaileyWvbailey 22:01, 26 July 2006 (UTC)
Of course this isn't the place to post original work, which is what your suggested compressed "single 'computer-like' model" sounds like to me. As I see it, there are essentially only two types of computational model under discussion -- those that are essentially program formulations (Post, Wang, ...) and those that are not (Turing, Kleene, ...). There is some overlap, but to me there's enough of a distinction to keep the article focussed on Post's Formulation 1 and closely related program formulations that can be seen as derived from it. Kleene's model doesn't belong in the article, imo, because it is very much closer to what is considered the "modern" formulation of Turing machines. (In my opinion, it's the "program formulation" aspect of the models that provides the main justification for not merging the article into the one on Turing machines.) About access to the Wang paper (and others) ... Like you, I rely on a university library, and have only hard-copy of selected sections. --r.e.s. (Talk) 01:03, 27 July 2006 (UTC)

I disagree with your concern about "program formulation". Your (implied) definition of "program" as a numbered sequential set of machine-commands is suspect/debatable, and incorrect per certain authors -- for example, the fancier TI DSPs that can execute more than one instruction in an "instruction cycle". Or consider the following definition:

"An algorithm for the Turing machine consists of a set of five-tuples such that there is exactly one five-tuple for each state-symbol pair, excluding the HALT state. Such an algorithm is known as a Turing machine program: (italics in original, p. 9 in Harold S. Stone, Introduction to Computer Organization and Data Structures, McGraw-Hill Book Company, 1972)
The term "program formulation" is Wang's, who called this model a "program formulation of Turing machines". It seems likely that some later authors also recast TMs in terms of programs largely due (indirectly) to Wang's original example. But ISTM that the present article is best confined to program formulations that are very close to Post and Wang (and Davis, who apparently adopted/adapted Wang's model). --r.e.s. (Talk) 02:49, 28 July 2006 (UTC)[reply]

RE original research: The P-T model is significant because it is more "atomic" than the Turing model, includes HALT, and, as a bonus, is (sometimes) represented as a Von Neumann-like series of numbered instructions. My criterion of choice is: the farthest away from Turing's model but still in the spirit of Post's model is the winner. As the name was given by Davis and used by him at least once in the literature, by all rights his should be the "penultimate" model, whatever it is. I could ask him what his intent was, but his answer would be unpublished and unverifiable and yada yada yada.

Presumably, by "P-T model" you mean the ones called that by Davis. The one with numbered instructions is apparently taken from Wang (1954/7), and the other appears to be a trivial adaptation to labelled (rather than numbered) instructions. Is there any earlier Davis publication of such a model? If not, an encyclopedia article should give credit where due -- imo, it would be described as Wang's model as popularised by Davis, or something along those lines. --r.e.s. (Talk) 02:49, 28 July 2006 (UTC)[reply]

My concern is: when someone talks about a Post-Turing machine, what are they talking about? One of three possible models, P-T #1, #2, or #3? I would like to see one model emerge, only because then the words 'Post-Turing machine' become unambiguous. So what if we agree to define it a certain way? That isn't original research. If Wang's model is in fact "the compressed version aka penultimate model" and we agree -- and especially if it coincides with Davis's named model -- then that's the model I suggest we use. We keep the others for historical, evolutionary interest. wvbaileyWvbailey 22:40, 27 July 2006 (UTC)

Post presented his Formulation 1 (1936), Wang presented a very similar but simpler "program formulation of Turing machines" (1954/7), and Davis popularised variants of Wang's model, calling them "Post-Turing machines" (197?). If that's the historical progression, I see no problem with the article discussing it that way.
1974. The second model, which he called the Turing-Post model, appeared in "popular" (i.e. non-specialist) literature in 1980 [cf the references]. Therein lies the source of difficulty, in my mind. Davis wrote another book circa the mid '50s that I need to look at too. wvbailey
As I understand it, you've credited Davis with the term "Post-Turing machine"; so your concern about what the term means is surely unfounded. "When someone talks about a Post-Turing machine", surely they must mean one of those machines popularised by Davis under that name. In an encyclopedia article, it's not for us to come up with anything else (such as a "compressed version"). --r.e.s. (Talk) 02:49, 28 July 2006 (UTC)
Oh the irony, we've already done the deed. When you google Post-Turing machine [even without quotes], what do you come up with? We've already defined the phrase, albeit ambiguously, for all academics for all time [or until we change our minds]. If I'm not mistaken, anyone who writes a dictionary finds current usage of words and eventually defines them per the consensual/accepted usage -- wiki is a dictionary as well. (I'll look at some of the usages and see what they specify.) I don't call that original research. I believe that you and I agree -- that the model is a seven-instruction, 2-actions-per-instruction model with sequentially-numbered instructions that proceed in sequence until a branch occurs. If I'm wrong about your conception of this model, lemme know. I'll research the Davis monograph again (a rare find in the library, so I need to go there; if I can get Wang as a pdf I'll do that too and send you a cc) and see if Davis 1974 indeed is explicit about the above model. Wvbailey 15:14, 28 July 2006 (UTC) wvbailey
Subject to a few minor provisos, I think that's a fair summary of the models (afaik). First, I assume that by "2 actions per instruction" you mean the first action is either a "write", "shift", or "test", and the second action is a transfer of control (implicitly or explicitly) to the instruction to be executed next. Second, the instruction-types for Wang's machine W are "write 1", "write 0", "shift left", "shift right", and "if 1 then goto k" -- the conditional "if 0 then goto k" and the unconditional "goto k" are both optional -- so all of these instructions agree with your "2-actions per instruction" description, except for the unconditional goto (only a nit, because it's entirely optional, replaceable by a pair of conditional gotos). Third, in all Wang programs Stop exists only implicitly, as the effect of any attempted transfer to an instruction number that's not in the program. Lastly, I would consider it an inessential variation to use instruction numbers/labels (as Davis might do?) only for instructions actually referenced by some explicit goto -- under the natural assumption that the instructions are otherwise written in the sequence intended. --r.e.s. (Talk) 00:03, 29 July 2006 (UTC)
I think we agree, but I need some clarification re goto's. I have used both pairs of goto's, i.e. { "If 'mark' goto xxx", "goto xxx" } and the other pair { "If 'mark' goto xxx", "If blank goto xxx" }, and as you note this choice is a nit. Re your point about instruction labels, I'm a bit confused and need your clarification.... We can either (i) fashion the goto's so they are relative to the current instruction -- when confronted with a successful test the "central control" then has to "count back" or "count forward" (or use an adder plus the state register) the offset as prescribed by the number in the goto instruction (similar to the "branch" instruction in the Motorola microcontrollers such as the MC68HC05) -- thus how to indicate a "negative" offset becomes an interesting problem (offset binary?), or (ii) make them absolute, as in "having a fixed label". I've always done (ii) in my work because of the issue of how to indicate a "negative" offset (i.e. jump back -3 instructions); plus a third type of jump seems to be required: { if 'mark' jump_forward xxx, if 'mark' jump_back xxx, goto xxx }. Which variant are you proposing? All my designs (i.e. using variant (ii)), built with CMOS or as a spreadsheet model, have required explicit labels for every instruction. ("Central control aka instruction table aka RAM" has to hunt down "the next instruction's" label with a match to the number in the state register -- usually by a parallel "match".) Lemme know what you are proposing. wvbaileyWvbailey 14:50, 29 July 2006 (UTC)
I'm sorry if I wasn't clear: I wasn't proposing any new variant, which is why I tried to emphasise instead the idea of merely summarising those extremely similar models already in the article. The "implementation" issues you mention seem to me of no concern, since we're talking about abstract theoretical models of computation. IMO, the primary focus should instead be on what models are known in the literature as "Post-Turing machines" (or important predecessors so close to them as to deserve mention), and perhaps to give a nice summary of these -- but definitely not to describe yet another variant of our own. --4.232.96.249 00:57, 30 July 2006 (UTC)
I have to agree with you. But you have to forgive my mind -- always swooping down toward "implementing" -- as in, how would I build the sucker. I'm an engineer, so I'm an ultra-constructivist: an "if we can't build it out of slimy goo then it doesn't exist" kind of guy, an abstract mathematician's worst screamy nightmare. RE Post-Turing machines, I truly have a 'gate-equivalent' model in my head -- I could draw it in NAND gates etc. in short order. And my model requires a bunch of "gates" to decode every address of every possible "next instruction" (i.e. a RAM with column and row address decoders). The ironic part is I could draw this stuff more easily than describe it. But anyway... I do understand/respect the need for abstract modelling, truly... and I do agree with you that we should present all the variant models... (I need to read Wang again, but if he lives up to my expectations he's "da man", now seeing that Davis in '58 did not have a Post-Turing model or references to Wang (probably a student, then); see below in boldface). wvbaileyWvbailey 03:40, 30 July 2006 (UTC)
Tidbits re Library Sleuthing: while I was in the library I looked into Davis's books (all 3 in series, one after the other on the shelf -- 'the shelf' is really short, just a few recursion-theory types). The first book was his lectures at the Courant Institute that define Post-Turing "programs", the third was "Undecidable" (I have a personal cc, it's a keeper). But the second was his 1958 "Computability and Unsolvability". Here on p. 70 appears the first known expression of "the halting problem". But nowhere does Davis mention Post-Turing or Turing-Post (machines or programs), and Hao Wang doesn't appear anywhere.
Davis does deliver a good full version of a Turing-machine multiply "program" (and a few others) using only mark, blank and a couple of other symbols. With respect to Frederick C. Hennie (1977) Introduction to Computability: Hennie doesn't even mention the "halting problem" (I forgot to check him re Wang; Minsky (1967) does give Hao Wang 5 citations in his index, PLUS our Wang (1957) as a source). Hennie delivers a very nice piece re the universal Turing machine, p. 90-103 -- examples and flowchart, but no actual code. I've only seen the entire Turing code in one place -- Roger Penrose, The Emperor's New Mind, pages 56-57. Never seen a version using Post-Turing code (I've done it, but pathetically it's "OR", i.e. original research). Have you seen a published Post-Turing U-machine code? wvbaileyWvbailey 03:40, 30 July 2006 (UTC)[reply]
My first thought is that since any binary-tape TM in standard 5-tuple form is so very easy to rewrite as a P-T machine (just by translating each state's pair of 5-tuples into appropriately numbered instructions), any of the UTMs of that kind in the literature could be treated this way. E.g., one published by Rogozhin has only 24 states. --r.e.s. (Talk) 08:49, 30 July 2006 (UTC)[reply]

Use a Busy Beaver as P-T example?[edit]

3-state Busy Beaver[edit]

Question: below, instruction 7 is an unconditional jump to instruction 8; similarly, instruction 21 is an unconditional jump to 22. A jump to the next instruction seems unnecessary, as falling through is part of the definition, no? Or is such a jump considered a no-op instruction -- but then why do we need it? pmuet Aug 15, 2006

You are absolutely correct to question this. Because the P-T model falls through sequentially by default, an unconditional jump to the very next instruction is just a wasted instruction (hence a nop). But this points out why the Post-Turing model is different from the Turing-machine model....
What I did, to be thorough, was turn the "3-state busy beaver" into Post-Turing instructions with a rote/unthinking "algorithm" -- seven instructions per Turing state -- that always ends each state's block with unconditional jumps (the conditional jump comes at the beginning, as it does in Turing's model). On the article page I broached this with an example of a "tape-erasing" subroutine at the end of the busy beaver. I hope this explanation helps. If not, lemme know. You have pointed out a possible source of confusion, thanks. wvbaileyWvbailey 20:54, 15 August 2006 (UTC)[reply]
I understand. Using this (simple) algorithm has the nice effect of allowing the original T-states to be placed in any order in the P-T model.
Yes, I hadn't thought of that. But it's true. A finite state machine's states need not be ordered; only the initial state must be specified. wvbaileyWvbailey 21:41, 17 August 2006 (UTC)[reply]
Also, to be complete, we could add an unconditional jump at the very beginning and therefore remove the requirement that the initial T-state be the first block in the P-T model.
True.wvbaileyWvbailey 21:41, 17 August 2006 (UTC)[reply]
This would also have the benefit of being able to include multiple initial states (sounds like a contradiction, I know) in a P-T "program" that would cause it to behave differently depending on what state is jumped to first. Quite interesting, indeed. pmuet Aug 17, 2006

This is just a test of a macro that converts Excel spreadsheet tables into wiki-tables. It worked! The passage from Turing- to Post-Turing machine is a simple, primitive conversion -- each state of the B-B (Turing instruction) gets converted into an initial conditional jump, then 3 instructions for the "0" case and three for the "1" case, for a total of 1 + 2×3 = 7 instructions per B-B state. Thus we expect 21 (3 × 7) instructions for the three states, plus a final HALT, for a total of 22 P-T instructions. If we were restricted to conditional jumps only, each unconditional jump would become two jumps (for example: J,15 = J0,15 followed by J1,15).
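To make the rote conversion concrete, here is a small Python sketch of my own devising (hedged: the encoding and the function name to_post_turing are hypothetical, not from the article or its sources). It puts a uniform J1 at the head of each block; the table below instead uses J0 for state A, which merely swaps which case falls through:

<syntaxhighlight lang="python">
# Rote 7-instructions-per-state conversion sketched from the description
# above; all names are hypothetical.  A state maps to
# (write-on-0, move-on-0, next-on-0, write-on-1, move-on-1, next-on-1).

TM = {  # the 3-state busy beaver of the state tables above
    "A": ("1", "R", "B", "1", "L", "C"),
    "B": ("1", "L", "A", "1", "R", "B"),
    "C": ("1", "L", "B", "1", "R", "H"),
}
ORDER = ["A", "B", "C"]   # block order; "H" becomes the final HALT

def to_post_turing(tm, order):
    base = {s: 1 + 7 * i for i, s in enumerate(order)}  # 1-based addresses
    base["H"] = 1 + 7 * len(order)                      # the HALT instruction
    prog = []
    for s in order:
        w0, m0, n0, w1, m1, n1 = tm[s]
        b = base[s]
        prog += [
            ("J1", b + 4),                      # b:   scanned a 1? skip to the 1-case
            ("P" if w0 == "1" else "E", None),  # b+1: write the 0-case symbol
            (m0, None),                         # b+2: move (L, R, or N)
            ("J", base[n0]),                    # b+3: go to the 0-case next state
            ("P" if w1 == "1" else "E", None),  # b+4: write the 1-case symbol
            (m1, None),                         # b+5: move
            ("J", base[n1]),                    # b+6: go to the 1-case next state
        ]
    prog.append(("H", None))                    # 3*7 + 1 = 22 instructions
    return prog

print(len(to_post_turing(TM, ORDER)))           # -> 22, as in the table below
</syntaxhighlight>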

The state table (transition table, instruction table) for the Turing machine is drawn per engineering convention (states down the left side, the various inputs drawn across the top) (cf. Peterson, McCluskey (1965), etc.). This diverges from some (early?) mathematics conventions (e.g. Mirsky (1968)) where the tables are rotated so the symbols run down the left and the states across the top. Note that the spreadsheet version of the Post-Turing table is somewhat like this non-engineering convention.

{| class="wikitable"
|-
! rowspan="2" | Current state
! colspan="3" | If tape symbol is 0
! colspan="3" | If tape symbol is 1
|-
! Write symbol !! Move tape !! Next state !! Write symbol !! Move tape !! Next state
|-
! A
| 1 || R || B || 1 || L || C
|-
! B
| 1 || L || A || 1 || R || B
|-
! C
| 1 || L || B || 1 || R || HALT
|}

This version of the state table follows the convention used on the busy beaver page:

{| class="wikitable"
|-
! rowspan="2" | Tape symbol
! colspan="3" | Current state A
! colspan="3" | Current state B
! colspan="3" | Current state C
|-
! Write symbol !! Move tape !! Next state !! Write symbol !! Move tape !! Next state !! Write symbol !! Move tape !! Next state
|-
! 0
| 1 || R || B || 1 || L || A || 1 || L || B
|-
! 1
| 1 || L || C || 1 || R || B || 1 || N || HALT
|}


The 22 instructions that make up the Post-Turing equivalent of the 3-state busy beaver. This number can be reduced (by 3 or 4) to give a more efficient machine, but here we use the simplest possible Turing- to Post-Turing conversion convention of 7 Post-Turing instructions/states per Turing instruction/state, plus the final HALT.

{| class="wikitable"
|-
! Instruction number:
! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8 !! 9 !! 10 !! 11 !! 12 !! 13 !! 14 !! 15 !! 16 !! 17 !! 18 !! 19 !! 20 !! 21 !! 22
|-
! Instruction:
| J0 || P || L || J || P || R || J || J1 || P || L || J || P || R || J || J1 || P || L || J || P || R || J || H
|-
! Jump-to number:
| 5 || || || 15 || || || 8 || 12 || || || 1 || || || 8 || 19 || || || 8 || || || 22 ||
|-
! Turing-state label:
| A || || || || || || || B || || || || || || || C || || || || || || || H
|}
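For checking the table, here is a minimal Post-Turing interpreter -- my own sketch under stated assumptions (1-based absolute jump targets, a sparse tape of 0s and 1s), not code from the article or its sources. Whether L/R moves the head or the tape only mirrors the tape contents, so the count of 1s is unaffected:

<syntaxhighlight lang="python">
# Hypothetical interpreter for the (opcode, argument) listing above.

def run_pt(program, max_steps=10_000):
    tape = {}                        # sparse tape; unwritten squares read as 0
    head, pc = 0, 1                  # 1-based program counter, as in the table
    for _ in range(max_steps):
        op, arg = program[pc - 1]
        if op == "H":
            break
        elif op == "P": tape[head] = 1   # print a mark
        elif op == "E": tape[head] = 0   # erase
        elif op == "L": head -= 1        # one square left (mirrored if the *tape* moves)
        elif op == "R": head += 1        # one square right
        elif op == "N": pass             # no move
        scanned = tape.get(head, 0)
        if op == "J" or (op == "J0" and scanned == 0) or (op == "J1" and scanned == 1):
            pc = arg
        else:
            pc += 1
    return tape

BB3 = [("J0", 5), ("P", None), ("L", None), ("J", 15),   # state A (1-7)
       ("P", None), ("R", None), ("J", 8),
       ("J1", 12), ("P", None), ("L", None), ("J", 1),   # state B (8-14)
       ("P", None), ("R", None), ("J", 8),
       ("J1", 19), ("P", None), ("L", None), ("J", 8),   # state C (15-21)
       ("P", None), ("R", None), ("J", 22),
       ("H", None)]                                      # 22: HALT

print(sum(run_pt(BB3).values()))   # -> 6: the beaver leaves six 1s on the tape
</syntaxhighlight>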

The state diagram of the Turing-machine equivalent of the 3-state busy beaver, drawn with the Post-Turing states inside it. The state action (e.g. P for print) is drawn inside the state, per engineering convention (McCluskey, Peterson), and the state number (or identification) is drawn inside each Post-Turing state/instruction.


Some figures don't reproduce well unless they are blown up; double-click the image to see it better, then select high resolution when the picture comes up:

3-state Busy Beaver run on a P-T machine

wvbaileyWvbailey 20:39, 2 August 2006 (UTC)[reply]

2-state Busy Beaver[edit]

State table for a 2-state Turing-machine busy beaver. The "Print" instruction writes a 1, the "Erase" instruction writes a 0. The tape moves "Left" or "Right" (i.e. the "head" is stationary).

{| class="wikitable"
|-
! rowspan="2" | Tape symbol
! colspan="3" | Current state A
! colspan="3" | Current state B
|-
! Write symbol !! Move tape !! Next state !! Write symbol !! Move tape !! Next state
|-
! 0
| 1 || R || B || 1 || L || A
|-
! 1
| 1 || L || B || 1 || N || H
|}

Instructions for the Post-Turing version of a 2-state busy beaver, followed by a "run" of the P-T equivalent. As shown in this drawing, neither "Erase" nor "Jump if tape-symbol is 0" is used by the busy beaver itself:

{| class="wikitable"
|-
! Instruction #:
! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8 !! 9 !! 10 !! 11 !! 12 !! 13 !! 14 !! 15
|-
! Instruction:
| J1 || P || R || J || P || L || J || J1 || P || L || J || P || N || J || H
|-
! Jump-to #:
| 5 || || || 8 || || || 8 || 12 || || || 1 || || || 15 ||
|-
! Turing-state label:
| A || || || || || || || B || || || || || || || H
|}
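A quick check of this listing, reusing the hypothetical run_pt interpreter sketched in the 3-state thread above (same assumptions):

<syntaxhighlight lang="python">
# Reuses run_pt from the interpreter sketch above (hypothetical).
BB2 = [("J1", 5), ("P", None), ("R", None), ("J", 8),    # state A (1-7)
       ("P", None), ("L", None), ("J", 8),
       ("J1", 12), ("P", None), ("L", None), ("J", 1),   # state B (8-14)
       ("P", None), ("N", None), ("J", 15),
       ("H", None)]                                      # 15: HALT

print(sum(run_pt(BB2).values()))   # -> 4: the 2-state beaver writes four 1s
</syntaxhighlight>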

2-state Busy Beaver run on a P-T machine

The following is a two-state Turing busy beaver with additional instructions 15-20 to demonstrate the use of "Erase", J0, etc. These will erase the 1's written by the busy beaver:

{| class="wikitable"
|-
! Instruction #:
! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8 !! 9 !! 10 !! 11 !! 12 !! 13 !! 14 !! 15 !! 16 !! 17 !! 18 !! 19 !! 20
|-
! Instruction:
| J1 || P || R || J || P || L || J || J1 || P || L || J || P || N || J || L || J0 || E || R || J1 || H
|-
! Jump-to #:
| 5 || || || 8 || || || 8 || 12 || || || 1 || || || 15 || || 20 || || || 17 ||
|-
! Turing-state label:
| A || || || || || || || B || || || || || || || * || || || || ||
|}


Additional Post-Turing instructions 15 through 20 erase the symbols created by the busy beaver. These are more "efficient": to accomplish the same task, a Post-Turing machine will (usually) require fewer Post-Turing states than a Turing machine simply converted at 7 equivalent Post-Turing states per Turing state, because a go-to (jump) can land on any "sub-state" (e.g. P, E, L, R) within a Turing state, groupings of move instructions such as L, L, L, P are possible, etc.:

{| class="wikitable"
|-
! Instruction #:
! 15 !! 16 !! 17 !! 18 !! 19 !! 20
|-
! Instruction:
| L || J0 || E || R || J1 || H
|-
! Jump-to #:
| || 20 || || || 17 ||
|}
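Running the full 20-instruction listing through the same hypothetical run_pt sketch shows the erase loop doing its job:

<syntaxhighlight lang="python">
# Reuses BB2 and run_pt from the sketches above (hypothetical).
BB2_ERASE = BB2[:14] + [          # beaver states A and B unchanged; 14 now jumps to 15
    ("L", None),                  # 15: step back to the leftmost 1
    ("J0", 20), ("E", None), ("R", None), ("J1", 17),   # 16-19: erase loop
    ("H", None)]                  # 20: HALT

print(sum(run_pt(BB2_ERASE).values()))   # -> 0: every 1 has been erased
</syntaxhighlight>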

2-state Busy Beaver followed by tape-erase, as run on a P-T machine


wvbaileyWvbailey 14:05, 2 August 2006 (UTC)[reply]

Drawing error[edit]

In the picture 'Algorithm_P-T_multiply_2.JPG', there's a 'P' missing where it 'repairs' the blank marker in 'b' after incrementing 'c'. Nl 2304 (talk) 10:07, 8 April 2013 (UTC)[reply]

Found the file (from 2006!), corrected the image, and uploaded the corrected version. Let's hope this is successful. BillWvbailey (talk) 15:22, 8 April 2013 (UTC)[reply]
A day later and the displayed file is still not the latest (corrected) image. BillWvbailey (talk) 14:03, 9 April 2013 (UTC)[reply]