Collaborative articulation of how abstraction and language are employed in the computational manifestation of numbers -- including analysis of the role of syntax, semantics, and meaning in the specification and use of software interfaces.

On October 24, 2007, Stephen Wolfram announced the awarding of a $250,000 prize for a proof that a particular 2-state 3-color Turing Machine is indeed universal. [I selected a specific Wikipedia page on Wolfram because it covers the prize and the ensuing discussion on FOM in a form that is accurate enough at this point. It is unimportant to this post how later developments may be reported there.]

update 2007-11-22: The challenging and testing of the (2,3)-TM's universality is continuing. I have added some additional links and I am now going to take my attention off of this activity. The (2,3)-TM is interesting as a simple test case for demonstrating the simulation of TMs by other computational mechanisms, but I am not going to pay anything but glancing future attention to (2,3)-TM universality questions and the connection with Wolfram's thesis, the principle of computational equivalence. Resolution of these matters seems irrelevant to the particular focus of the Miser Project.

Controversy and Problems of Definition

Ignoring the controversy around the metaphysical aspects of A New Kind of Science, there is now a new controversy around the way this prize was awarded, the definition of the conditions for the prize, and the conditions under which the (2,3)-TM achieves UTM-hood -- demonstration of Universal Turing Machine capability. Considering that the machine has only two states, for a total of six rules with a three-mark alphabet, most of the work is in knowing how to prepare the input and then how to recognize "the answer" of any of its computations.
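Because the machine is so small, an interpreter for machines of this (2,3) shape is easy to sketch. The rule table below is hypothetical -- it has the right dimensions (2 states x 3 colors = 6 rules) but is not claimed to be Wolfram's prize machine -- and, since there is no halt state, the sketch simply runs a fixed number of steps and returns the tape, leaving "the answer" to be read off by some external convention, which is exactly the difficulty being debated:

```python
# Sketch of a 2-state, 3-color Turing machine interpreter.
# The rule table is ILLUSTRATIVE ONLY: correct shape, but not
# asserted to be the Wolfram prize machine's actual rules.
from collections import defaultdict

RULES = {
    # (state, color): (new_state, write_color, head_move)
    ('A', 0): ('B', 1, +1),
    ('A', 1): ('A', 2, -1),
    ('A', 2): ('A', 1, -1),
    ('B', 0): ('A', 2, +1),
    ('B', 1): ('B', 2, +1),
    ('B', 2): ('A', 0, -1),
}

def run(steps, initial=None):
    """Run the machine a fixed number of steps.

    There is no halt state, so a terminating computation must be
    recognized by some external convention applied to the tape;
    here we just return the final configuration.
    """
    tape = defaultdict(int)              # blank tape = all 0s
    for i, color in enumerate(initial or []):
        tape[i] = color
    state, head = 'A', 0
    for _ in range(steps):
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return state, head, dict(tape)

state, head, tape = run(100)
print(state, head, len(tape))
```

Even this toy makes the controversy concrete: nothing in the six rules says when to stop or where the result lives; those conventions must be supplied from outside the machine.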

On the Foundations of Mathematics discussion list, FOM, there is ongoing discussion of how universality is to be understood, how one knows that a terminating computation has indeed completed (there being no halt state), and how one encodes the initial tape and starting condition before setting the machine loose. In particular, it is not clear whether some other universal machine is required to create the initial tape for the (2,3)-TM, and whether such a requirement would be unfair. There is also some concern over the apparent requirement that the tape carry an infinite extent of initial markings, and how difficult that may be to accomplish.

This questioning will play itself out. On the FOM list, award-winner Alex Smith, Stephen Wolfram, and others are testing, clarifying, and testing again with a variety of examples and speculations, all designed to narrow down the discussion and conclude how the (2,3)-TM fits into the universality pantheon, and just exactly where.

To help frame the discussion, some useful resources have been pointed out:

Martin Davis points to two historical articles on the definition of Universal Turing Machine.

Damien Woods provides some background and a reference on a variety of small Universal Turing Machines, and the variability in universality conditions.

Vaughan Pratt, in a reply to Alex Smith, provides a nice rundown on universality and non-halting computations as they have been addressed in what might be termed the mainstream development of computation theory. In his cataloguing of contributions, Pratt mentions that some have come to advocate more-abstract axioms for recursion theory (the test case for equivalent universality in the Church-Turing convergence). This led some responders to a new topic (below).

On November 17, Vaughan Pratt looks over some earlier discussion and summarizes that "incomparable conventions for small Turing machines yield incomparable results." Pratt goes on to object to certain machines that can be shown equivalent to two-counter machines, and hence prospectively universal. [This leads me to think of it as a rudimentary Turing-Machine counterpart of the von Neumann quality, where two registers (which instruction location, which data location) are prominent -- a distant reach from (2,3)-TMs.]

On November 19, discussant Bob Hearn provides some comments from Marvin Minsky and returns to the two-counter machine consideration with "Incidentally, the reduction chain from arbitrary Turing machines to cyclic 2-tag systems to Wolfram's (2,3) machine proceeds via two-counter machines." [This is not, I think, a smoking gun, but something to comprehend more carefully.]

On November 21, Alex Smith proposes a "nested-universality" criterion, and I think it is safe to say that what we have here is very much a work in progress: a sharp universality criterion for the (2,3)-TM still requires considerable work. This is valuable for those who find the question important, although there lies before us the potent pitfall (a danger to all scientific investigation) of adjusting the criteria until the predetermined candidate qualifies.

Although it is not crucial to the analysis of the (2,3)-TM and agreement on the way in which it is to be accepted as universal, the demonstration of such universality is claimed by Wolfram as evidence for his thesis, the Principle of Computational Equivalence (PCE). Wolfram's argument is that "most systems are computationally equivalent. For example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer. Computation is therefore simply a question of translating inputs and outputs from one system to another." This principle should not be confused with the Church-Turing thesis, which does not suggest where computation models are manifest, merely what the limit of any of their powers seems to be. Wolfram accepts the Church-Turing thesis to the extent that the PCE claims an equivalent ceiling on what these ubiquitous computational mechanisms can achieve, with the UTM as its exemplar. [It strikes me as ludicrous to consider it a simple matter to arrange "inputs" -- conditions on the planet -- by which the weather system carries out an arbitrary desired computation. It is interesting that there are, at this point, also concerns over how inputs for the (2,3)-TM are to be arranged.]

Kovas Boguta (at Wolfram Research) points out that the PCE is not a mathematical claim but an empirical one, so refutation in the mathematical sense is inappropriate. [The same holds for the Church-Turing thesis, but the two theses are not equivalent, as I've already mentioned.]

On November 19, Boguta follows up with a "Principles of Computational Equivalence" posting that goes into greater detail, contrasting PCE with the Church-Turing Thesis (CTT) and relating PCE (versus CTT) with regard to equivalence between complexity of behavior, equivalence between different computations, and equivalence between computational processes and physical processes. [This for me reveals the degree to which adoption of PCE can slide into metaphysical acceptance of incomparables and incommensurables.]

On November 20, Wolfram Research's Todd Rowland provides additional commentary on PCE and how it is observable: "if you see something in nature that is complicated, then that thing is likely to be universal" and "if you search a space of computational rules starting from the simplest cases, then you will quickly find examples of simple rules that are universal." There are links to further material. It is clear that we will be circling around these informal observations about universality, pondering how one can have a sharp criterion in the face of such widely different empirical settings.

Higher-Level and Practical Computation-Model Formulations

Meanwhile, Richard Heck, noticing Vaughan Pratt's rundown, asks for more detail on more-abstract axioms for recursion theory. Further contributions accumulate:

Andrej Bauer provides a recap of different ways computability (and computation) theory is being taken more abstract, with a nice collection of thumbnails and references.

Vladik Kreinovich suggests that Abstract State Machines be added to Bauer's survey. Bauer, who is familiar with the model, wonders whether ASMs have been used in computability theory.

And we find ourselves at an interesting point of practical application (e.g., algorithms) of abstract (universal?) computation models. I find Gurevich's work to be interesting, especially because of the way it is being applied. (The Miser Project is not anywhere close to being able to address computation at that level.)

Meanwhile,

John McCarthy quietly slipped in a straightforward observation that establishes the simple beauty of practice tied to theory.

[update 2007-11-22: I added some of the latest FOM material and I think that is more than enough. I am satisfied to have shown how the discussion is going (and how this level of scrutiny should have preceded awarding of the prize, just as for the Millennium Challenge prizes). I don't think my interests in universal computation in the limited context of the Miser Project are impacted by this struggle over universality and the principle of computational equivalence. The topic of higher-level and practical computation models remains of interest to me for later.]
[update 2007-11-20: I added a few more links suggesting how this exploration wanders on and off of unexpected territories.]

Dennis, this whole exposition is quite interesting for an outsider like me. I was particularly interested in the empirical nature of the PCE. I find the idea that a "principle" is amenable to empirical testing plausible. But I also think that principles are abstractions; they don't really exist. At least not in my world of lived experience.

But this really shows my naivete regarding the philosophical issues here. On the other hand, I feel pulled in two directions. I resist reduction because it removes the nuance from my actual work experience and practice, yet I use reduction in all my systems engineering work, because simplification and removal of detail are required for successful system building.