The Miser Project  

Collaborative articulation of how abstraction and language are employed in the computational manifestation of numbers -- including analysis of the role of syntax, semantics, and meaning in the specification and use of software interfaces.


Recent Items
Republishing before Silence
Friday Miser Note: Interchanging Code
Alan Turing Birthday: June 23, Centenary: 2012
Miser: Interpretations of Identity
Turing Arrives: Petzold on Arithmetic
Reality Is the Model
Catching Up with Turing
Miser: The Logic of Ot
Miser: Frugalese for Applicative Operations
SeaFunc: 2008-02-20 Functional Programming Meetup


The nfoCentrale Blog Conclave
Millennia Antica: The Kiln Sitter's Diary
nfoWorks: Pursuing Harmony
Numbering Peano
Orcmid's Lair
Orcmid's Live Hideout
Prof. von Clueless in the Blunder Dome
Spanner Wingnut's Muddleware Lab (experimental)

nfoCentrale Associated Sites
DMA: The Document Management Alliance
DMware: Document Management Interoperability Exchange
Millennia Antica Pottery
The Miser Project
nfoCentrale: the Anchor Site
nfoWare: Information Processing Technology
nfoWorks: Tools for Document Interoperability
NuovoDoc: Design for Document System Interoperability
ODMA Interoperability Exchange
Orcmid's Lair
TROST: Open-System Trustworthiness



Republishing before Silence

The nfoCentrale blogs, including Numbering Peano, were published through Blogger via FTP transfer to my web sites. That service is ending.

As part of the migration, I am republishing this blog in the latest stable template format.

Then there will be silence as Blogger is unhooked, although the pages will remain.

No new posts or comments will work until I update the web site to use its own blog engine. Once that migration is completed, posting will resume here, with details about the transition and any breakage that remains to be repaired.

Meanwhile, if you are curious to watch how this works out, check on Spanner Wingnut’s Muddleware Lab. It may be in various stages of disrepair, but that blog will come under new custodianship first.




Friday Miser Note: Interchanging Code

While I think and post mainly about the theoretical abstractions that are important in getting Miser and the Frugal language done properly, I am also thinking ahead to the way the Miser engine can be delivered as a platform component (Windows DLL, .NET assembly, etc.) that hosts Miser on behalf of a software application.  I have in mind that any kind of Frugalese user interface for a console application, including any integrated development environment (IDE), would be built atop a hosted oMiser engine as an important first case.

Because the Miser model is platform independent, it would be nice to have a platform-independent way to save, load, and interchange Miser “codes” in a neutral, highly-interchangeable format.

XML comes to mind.   Although XML is not a compact binary format, it is a well-known, standard format for which a great number of problems are already solved, including versioning, the ability to deal with extensions gracefully, the complete hiding of the storage structure, and harmony with my intention that the Miser abstraction be universally comprehended.  (That is to say, it shall seem as if there is only one oMiser in the world, and the fact that implementations are distributed and replicated is as invisible as can be practically achieved.)  In the oMiser case, the XML stream would also be highly compressible using widely-available compression techniques.

The interchangeable “object code” among all oMiser implementations will be standardized in an XML stream. 

I propose to abuse the purity of XML for this purpose.   The XML will be well-formed but it won’t be the way XML would be naturally used to communicate a persistent format for oMiser Obs.

My basic plan is that the loadable Ob will be coded in a reverse-Polish notation conveyed in XML.   The notation will have a syntactic structure that conforms to a BNF grammar such as

〈ob〉::= 〈term〉 | 〈ob〉 〈un-op〉 | 〈ob〉 〈ob〉 〈bin-op〉

allowing the bottom-up build-up of an internal Ob representation as the XML stream is being read.  Not every Ob operation needs to be implemented, only those needed for constructing constant Obs: 

  • Here 〈term〉 entries will specify individuals or links to Obs defined earlier in the stream. 
  • There will be unary 〈un-op〉 operations, including one for ob.e(ob) and one for labeling an Ob so that it can be referenced later in the stream.  Linking to already-expressed Obs is indispensable for simplifying repetitive introduction of the same Ob in the interior of a larger Ob.  I intend to make use of xml:id for this purpose.
  • There will be binary 〈bin-op〉 operations, including one for ob.c(ob, ob). 
  • There will be additional mark-up, but this is the essential structure.

XML streams for the loadable format will be supported at the core level of an oMiser implementation.  Given a handle on an Ob, there is an interface that will emit the XML stream for persistent representation of that Ob for any interchange purpose.  Likewise, there is a low-level interface on every oMiser engine by which an XML stream is accepted and a handle for the corresponding Ob returned for use in the hosting application.

The XML representation will be flat.  That is, the <oMiser:ob> XML element contains a flat sequence of <term />, <un-op />, and <bin-op /> elements.  It is easy to process such a flat structure, and it is easy to ensure that such a structure is well-formed according to the grammar for 〈ob〉 given above.
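As a sketch of how easy the flat structure is to consume, here is a minimal stack-based loader in Python.  The element and attribute names (`term`, `un-op`, `bin-op`, `ind`, `op`) are illustrative assumptions rather than a settled oMiser vocabulary, the namespace prefix is elided, and nested tuples stand in for the internal Ob representation:

```python
# Sketch of a stack-based consumer for the flat reverse-Polish stream.
# Element and attribute names are assumptions for illustration only.
import xml.etree.ElementTree as ET

SAMPLE = """
<ob>
  <term ind="A"/>
  <term ind="B"/>
  <bin-op op="c"/>
  <un-op op="e"/>
</ob>
"""

def load_ob(xml_text):
    """Build a nested-tuple stand-in for an Ob from the flat stream."""
    stack = []
    for el in ET.fromstring(xml_text):
        if el.tag == "term":
            stack.append(el.get("ind"))         # an individual
        elif el.tag == "un-op":
            x = stack.pop()
            stack.append((el.get("op"), x))     # e.g. ob.e(x)
        elif el.tag == "bin-op":
            y, x = stack.pop(), stack.pop()
            stack.append((el.get("op"), x, y))  # e.g. ob.c(x, y)
    assert len(stack) == 1, "stream is not a well-formed <ob>"
    return stack[0]

print(load_ob(SAMPLE))   # ('e', ('c', 'A', 'B'))
```

Run-time validation is just the final assertion: a well-formed stream leaves exactly one Ob on the stack.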

There are three interesting challenges for this structure.  These need to be explored to ensure that the choice of reverse-Polish Ob loading is as practical as I intend for it to be:

  1. It is more difficult to produce the loadable format than it is to consume it.  This desirable trade-off has to be a containable problem.  In particular, we need storage-efficient techniques for emitting the bottom-up reverse-polish construction of any Ob that we are given a handle to.
  2. Because Obs are inherently immutable, sharing of common parts is an important storage economizer in oMiser implementations.   Taking advantage of opportunities for sharing of substructure in a running implementation is an interesting challenge.  Preserving or even enhancing sharing in an exported loader format is an additional opportunity.  (It may well be the case that improved sharing and the consequent logical compression of the persistent form is better done by a separate post-processing optimization utility.)
  3. While the run-time validation of a well-formed reverse-Polish sequence of elements is trivial, its expression as a constraint on an XML schema is not so easy.  We don’t need an XML schema to implement a consumer of the format (and we can use namespace and DTD rules to signify that the format constraints apply).  Just the same, it is valuable to have some sort of schema that expresses the reverse-Polish constraint on the sequence of content elements within an XML <oMiser:ob> element.
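To make the first two challenges concrete, here is a hypothetical Python sketch of the producing side: a post-order (bottom-up) emitter that labels each completed subtree and links back to pointer-shared parts instead of re-emitting them.  The `label` operation, the `ref` attribute, and the tuple encoding of Obs are all illustrative assumptions; a practical emitter would label only the parts actually shared rather than every subtree:

```python
# Naive sketch of emitting the flat reverse-Polish stream for an Ob,
# preserving run-time sharing by labeling subtrees (assumed xml:id-style)
# and emitting a reference on later encounters.  All names hypothetical.
def emit_ob(ob, out=None, seen=None):
    out = [] if out is None else out
    seen = {} if seen is None else seen
    key = id(ob) if isinstance(ob, tuple) else None
    if key in seen:
        out.append('<term ref="%s"/>' % seen[key])   # link to earlier Ob
        return out
    if not isinstance(ob, tuple):
        out.append('<term ind="%s"/>' % ob)          # an individual
        return out
    if len(ob) == 2:                                 # unary, e.g. ob.e
        emit_ob(ob[1], out, seen)
        out.append('<un-op op="%s"/>' % ob[0])
    else:                                            # binary, e.g. ob.c
        emit_ob(ob[1], out, seen)
        emit_ob(ob[2], out, seen)
        out.append('<bin-op op="%s"/>' % ob[0])
    label = "ob%d" % len(seen)                       # label this subtree
    seen[key] = label
    out.append('<un-op op="label" xml:id="%s"/>' % label)
    return out

shared = ('c', 'A', 'B')
print('\n'.join(emit_ob(('c', shared, shared))))
```

The second operand of the outer construction comes out as a single `<term ref="ob0"/>` rather than a repeat of the whole subtree, which is the sharing economy the loader format is meant to preserve.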

I was reminded of all of this when I recently read the 2009-09-29 Rick Jelliffe post on context-free XML.   The shoe-horning of the context-free reverse-Polish flat stream for 〈ob〉 as content for an XML <oMiser:ob> element creates a context-sensitive problem for expression in some sort of XML schema.  This post is intended to elaborate on the comment that Rick’s post inspired from me.  I trust this is useful context for that.  (As usual, your mileage may vary.)

Discussing oMiser implementation considerations is long overdue in any case.




Alan Turing Birthday: June 23, Centenary: 2012

I just learned that June 23, 1912 is Alan Turing’s birth date. 

There will be a centenary celebration throughout 2012 with UK events held in Cambridge, Manchester, and Bletchley Park.  There are already many interesting links on the site and among the Turing Centenary Advisory Committee.

Bletchley Park, where the work on cryptography was instrumental in the conduct of World War II, has fallen into disrepair and neglect.  One hopes that renewed attention on the life of Alan Turing may bolster support through 2012 and beyond.

[I’m thinking maybe this is an interesting way to finally meet up with Charles Petzold.  I might even have more to offer on insights about computation that I have gained from the Miser Project by then.  This year is the time to renew my attention on that.]



Miser: Interpretations of Identity

[update 2008-06-17T16:08Z: I added t0' because I couldn't stand not to show the better refactoring of t0.  I added mention of equivalence classes in an interpretation and also came up with conditions other than invalidity for finding an interpretation unusable.]

In The Logic of Ot, I said that I would use informal expressions of Ot, the logical theory that applies to Miser Obs.   Now that there has been some use of the special characters and notations of First-Order Logic with equality, I want to take advantage of that to talk about interpretations of identity in models of Ot.  The ability to identify and distinguish has great bearing on computational systems, and identity as an interpretation is particularly useful to explore.

= as Equivalence Relation

With FOL=, identity and the relational operator, "=", are taken as given, and the following hold:

=1: ∀x(x = x)

=2: ∀xy(x = y → y = x)

=3: ∀xyz(x = y  ∧ y = z  → x = z)

≠0: ∀xy(x ≠ y ↔ ¬(x = y))

The first three are the common properties of equivalence relationships: "=" is reflexive (=1), symmetrical (=2), and transitive (=3).   The final condition is essentially the definition of "≠" in terms of "=".
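The three conditions are easy to confirm by brute force for any candidate interpretation of "=" over a small finite domain.  A minimal Python sketch, using sameness of sign over a few integers as the tested relation (the `sign` helper and the sampled domain are assumptions for illustration):

```python
# Brute-force check of =1 (reflexive), =2 (symmetric), =3 (transitive)
# for a candidate interpretation of "=" over a finite domain.
from itertools import product

def is_equivalence(domain, eq):
    refl = all(eq(x, x) for x in domain)                          # =1
    symm = all(eq(y, x)
               for x, y in product(domain, repeat=2) if eq(x, y)) # =2
    trans = all(eq(x, z)
                for x, y, z in product(domain, repeat=3)
                if eq(x, y) and eq(y, z))                         # =3
    return refl and symm and trans

domain = [-2, -1, 0, 1, 2]
sign = lambda n: (n > 0) - (n < 0)

# Same-sign is an equivalence relation; "less than or equal" is not.
print(is_equivalence(domain, lambda x, y: sign(x) == sign(y)))  # True
print(is_equivalence(domain, lambda x, y: x <= y))              # False
```

The second check fails on symmetry, which is exactly what distinguishes an order from an equivalence relation.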

What Are We Talking About?

In a first-order logical theory, the variables (x, y, and z as seen in ∀xyz) are understood to refer to objects in the domain of discourse.  We only know what there is to know about that domain from the introduction of constants and expression of conditions that are theoretically required to be satisfied over that domain.  For FOL=, we are given an equivalence relation (expressed with the symbol "="), which tells us very few things about conditions under which variables can be taken as referring to the same object of the domain of discourse.

It should be apparent that having "=" doesn't tell us much about the theoretical objects, although it is more than not having "=" (and its partner, "≠").

An intended interpretation could well be that objects in the domain of discourse be identifiable (we can tell when we are referring to the same one) and discernible (we can tell when we aren't).   Let's see how that might work.

A Tiny Domain

To provide some practice with ideas of practical interpretation, consider the logical theory obtained by adding the following conditions:

t0: ∃xyz∀u( x ≠ y ∧  y ≠ z ∧ x ≠ z ∧ (u = x ∨ u = y ∨ u = z) )

or the equivalent,

t0': ∃xyz( x ≠ y ∧  y ≠ z ∧ x ≠ z ∧ ∀u(u = x ∨ u = y ∨ u = z) )

The intended reading of this is as

    1. There are at least three different objects in the domain of discourse, and
    2. Any object in the domain of discourse is one of those

which is to say, there are exactly three objects in the domain of discourse.

If we only have that one additional condition (t0) in our logical theory, we know nothing beyond that. 

Notice that we have not labeled the three theoretical objects in any way.   All we have provided for is that there be exactly three.

It will be useful to appear to be more specific by naming them:

t1: A ≠ B ∧  B ≠ C ∧ A ≠ C

t2: ∀u(u = A ∨ u = B ∨ u = C)

Here, A, B, and C are constants for objects in the domain of discourse.   We haven't provided much more than what (t0) assures us of, although if there is more to say about the ways that the three objects differ, having the constants to refer to may be useful.

Illustrative Interpretations

The first interpretation will be in terms of numbers.  Assume the system of numbers and arithmetic.  Take that system as separate from the logical theory consisting of FOL= plus t0 and t1.  (t2 is actually a consequence of that much.)

One interpretation of our tiny theory in number theory would be by saying that the interpretation of A is any n < 0, the interpretation of B is 0, and the interpretation of C is any n > 0.   We could be specific, say, with interpretation of A as -1, B as 0, and C as +1.   We could also say that A is all n < 0, B is 0 only, and C is all n > 0.  That is, A, B, and C correspond to distinct classes.  Since they have no members in common, these are known as equivalence classes.  We'll explore that further when we return to exploration of Ot.
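That class-based reading can be spot-checked mechanically.  A small Python sketch, with a hypothetical helper `cls` assigning each integer to one of the three classes by sign:

```python
# A, B, C interpreted as the classes n < 0, n == 0, n > 0, with "="
# read as same-class.  The helper name `cls` is illustrative only.
def cls(n):
    return "A" if n < 0 else ("B" if n == 0 else "C")

sample = range(-5, 6)

# t1: the three constants denote pairwise-distinct (disjoint) classes.
assert len({cls(-1), cls(0), cls(1)}) == 3

# t2: every object of the (sampled) domain falls under A, B, or C.
assert all(cls(n) in {"A", "B", "C"} for n in sample)

print("t1 and t2 hold under this interpretation")
```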

It doesn't matter, here, how the interpretation is chosen, so long as, having made it, we stick to it.  The system in which we make the interpretation is a model (in the loose sense of Reality is the Model) provided that all deductions in our logical theory hold in the interpretation.

Because the logical theory says nothing about aspects of the model that are not accounted for in the logical theory, those matters are irrelevant to the conditions of the logical theory.  It does not matter how many different ways the interpretation could be made in the model, so long as when one is made, the logical theory is seen to hold for the interpretation.  There is a common fallacy involving reasoning about extra-theoretical characteristics of the model to argue that the theory is incorrect or inapplicable, when the disagreement is more-appropriately viewed as one over choice of interpretation.   It helps to carefully separate the theory from its interpretations and models to avoid that pitfall.  The abstract theory and the logical formalism is helpful in that regard, even if it feels quite unnatural.

"="/"≠" Are Interpreted Too

It is easy to overlook one important feature of an interpretation of our tiny theory: there must be an interpretation for "="/"≠" in the model as well.  That comes along almost for free in our choice of interpretations in the system of numbers and arithmetic, which is exactly why it goes unnoticed.  When we dig into computational systems and the details of Miser, the ability to discriminate "="/"≠" in particular interpretations becomes very important.

Finally, as an interpretation in reality: let A be earth, B be wind, and C be fire.  This is a questionable interpretation quite apart from the omission of water.  The difficulty is assuring that these are cleanly distinguishable concepts.  What do we do with flaming molten lava and the sucking wind of a forest fire?   We will stumble here, at least in an effort to have well-determined "="/"≠" and understandable communication of conditions that others can accept and apply.  The simple, practical conclusion may be that the interpretation is invalid (or simply meaningless/useless) and the theory is inapplicable in that case. 

In other cases, a certain conceptual sloppiness, if carefully circumscribed, may be tolerable in having useful interpretations in reality.  It remains to be seen whether that is ever very workable.



Turing Arrives: Petzold on Arithmetic

2008-05-26: Catching Up with Turing
2007-11-22: More Annotated Turing
2007-11-04: Petzold Annotates Turing!
2007-11-17: Different (Universal) Computation Models
2008-02-29: Miser: Frugalese for Applicative Operations
2008-05-06: Miser: The Logic of Ot
2008-05-29: Reality Is the Model

I pre-ordered my copy of Charles Petzold's The Annotated Turing on November 22, 2007.  On May 24, I had to authorize a delay in the estimated ship date.  I waited patiently.  When I saw that Petzold had his copies, I wondered if my order would fall through the cracks.  The site listed the book as in stock, but I had no word.  That was resolved this Monday, June 9, when I received notice of shipment.  The book arrived two days later by postal mail.

The problem with actually having The Annotated Turing in my possession is deciding when to start and clearing the time to do it.  I did start reading at the end of the book, and I have nosed into a few other sections.  Naturally, the book arrived at a moment when all of my projects are behind and I am already starting an important new one.  A systematic reading is yet to come.  I know I will love it if only for the historical threads and connections that Petzold traces in the book. 

As part of the tracing of connections, Petzold has been reading One to Nine by Turing's biographer, Andrew Hodges.  There are numerous connections traced there, and I like it that Petzold finds himself arguing with Hodges as he works through the book.

Yesterday, Petzold commented on Hodges' objection to memorization of arithmetic, recognizing his own experience in learning the multiplication tables.  The interesting idiosyncrasy is how Petzold failed to have automatic memory of certain multiplication combinations and would solve those cases by algebraic deduction when needed.  That resonated for me.  There are many cases where I did not remember a rule, but I could and did recreate it on demand.  I also share Petzold's having done that long after simply memorizing the result would have been more productive.  (This shows up in other activities of mine too, including re-inspection of already-written code to remind myself that it is sound and what the context is before adding more to it.) 

I wonder how much this ability to have abstracted an applicable principle (in my case, remembering the times-11 and times-12 cases in terms of times-10 plus times-1 or times-2) leads to algebraic facility and the handy use of identities and mathematical induction well before I developed anything like a fundamental understanding of number theory over the course of my adult years.  I recall re-derivation as being valuable in test-taking and yet it is not as direct as having embodied the result for immediate availability. 

I can't tell you how many times I have verified for myself what the correct formula for the sum of the first n integers is by redoing the constructive derivation.  My doubt is always between n(n+1)/2 and n(n-1)/2 and it is, of course [easy for me to say], the former.  I say that not because I have memorized it but because I know how to tell quickly another way, the first way I ever saw it "proved."  I suspect that I have just sped that up for myself by looking at it anew this time.  Then there's the one about the sum of the first n powers of 2 and what it looks like in binary, etc.  I suspect that our diminished respect for the teaching of arithmetic and how to verify arithmetic results is causing trouble for students and their teachers when it is time to approach algebra where one can't avoid dealing with ratios and fractions by using a calculator.
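Both of the identities recalled above are quick to re-verify by the same sort of constructive check, here as a short Python sketch:

```python
# Re-derivation checks: the sum of the first n integers is n(n+1)/2,
# and the sum of the first n powers of 2 (from 2^0) is 2^n - 1,
# which is n ones in binary.
for n in range(1, 10):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(2**k for k in range(n)) == 2**n - 1
    assert bin(2**n - 1) == "0b" + "1" * n

print("both identities check out")
```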

Seeing this latest post from Petzold has me thinking of the connection with a recent paper by Keith Devlin (via Richard Zack), "The Useful and Reliable Illusion of Reality in Mathematics."  Two connections that come to mind: how we might come to exercise our capacity for abstract, conceptual thinking as we develop our facility with language, and the tendency to see mathematical conceptions as real.  In the second case, Petzold has observed that Turing's machine was his idea for a "real" computer, and I am surprised by that.  There are deeper connections in the Devlin paper with how we end up regarding mathematical objects, and that is worthy of separate discussion with regard to what makes computers so successful and so devilishly difficult to deal with.



Reality Is the Model

[cross-posted to Orcmid's Lair.  This is at a level of abstract speculation that is more appropriate here than there.  However, I would like a broader audience, and reactions, to what strikes me as having practical importance in how we develop successful computer-based systems.]

During my regular Tuesday buddy call with colleague Bill Anderson, it suddenly occurred to me that I could account for reductionism, an error that scientists and others (software technologists and their masters, for example) make.  It is all captured in the following statement:

 Theories don't model reality; reality is the model.

I don't recall what prompted my exclamation on the subject during the call, unless it was something about how objectionable "code is the model," "code rules," and other developer slogans are, where the implementation of something becomes the specification, denying us access to any useful answer to the question "implementation of what?"

Now, it is not a new thing for Bill and me to be discussing these issues, including this view of the role of theory.   What hadn't landed so sharply before was how viewing theories as models for reality is the very pitfall that engenders reductionism.

There's much more careful development required as part of arguing for the usefulness of "reality is the model."   I have been looking at that in a setting where I am conducting a theory-driven implementation of some software in the Miser Project.  Bill and I discuss this even more where it matters a lot to information technology, in contrasting What Computers Know with what Programmers Know to Do and how there is an essential gap between how computer systems are built, the requirements that those systems are meant to satisfy, and the world of opportunity in which those systems are instruments of human purpose.

For now, I want to look at the statement in the context of how we appear to arrive at theories and then apply them as given.

A word of warning: the value of "reality is the model" is not that it is "true."  The value to be found is in having a more useful and powerful way of looking at what we do with theories in contrast with the limitations of imagining our theories to be modeling reality.

Where Theories Come From

There seems little doubt that theories started out as explanations of the regularity in our experience of reality, of the world.  Some of these theories were, and still are, very pre-scientific (as theories about theorizing might be as well, and that won't stop me).

At some point in the course of the scientific revolution, say around 1600, typified by the work of Francis Bacon, there was an important move to development of scientific theories via inductive generalization from observations of nature, not deduction from some principles of cause.  The reliance on experimental confirmation and empirical observation became important.   A consistent case of contradictory results could show where the theory is inapplicable or even completely incorrect.  One risk is that expression of a generalized (abstracted) theory might be taken as an explanation of the nature of nature, as in "objects at rest tend to remain at rest." 

Emergence of mathematical sciences, illustrated in the achievements of Isaac Newton, had a profound impact.  It permitted the deduction of consequences by calculation or proof, and it permitted the experimental confirmation of those deductions by natural experiments.  Notice, however, that the deduction occurs inside the theory, as it were, and the correspondence of the conclusion with reality is an empirical matter.

Theories on Their Own

The mathematical formulation of important theories, and the computational applications of those theories, are removed from reality.  Once we are operating in the formal, mathematical theory of a science, there is no reality there.  The connection to reality is accomplished by our interpretation of the mathematical theory as being about reality.  Being about something, especially reality, is not a feature of mathematical systems.  Being about something is how we interpret results in the mathematical formulation as applying to the world in accord with a scientific thesis.  In other words, the scientific-thesis part is not expressed in the mathematical formulation.  That is what we add ourselves (even though it is what led us to the formulation and why it might be of any value to us).   Some of these interpretations have been so reliable and so useful that we tend to speak of the expressions of the theories as laws (E = mc² being a popular one; force being proportional to rate of change of momentum being less familiar, although we experience its confirmation every day). 

There is a combination of deductive process (predicting via calculation, say) and inductive formulation in this approach.  We might say that (empirical) experience has shown that the interpreted deductions are reliable and that the theory is a good one in that sense.

The pitfall is to think of the theory as the truth, as somehow explaining why interpreted findings in the theory align with reality.  Perhaps the most current conceit of this nature is the treatment of what nature does as computation because computational processes have some similar characteristics.

Interpreting Theories and the Reductionism Pitfall

There is an area of mathematics called "model theory" or what, here, we might call the model-theoretic view of mathematically-expressed theories. 

In model theory, the idea is that a mathematical theory, expressed in a formal, logical way, is given an interpretation by identifying its mathematical elements with those in some other system (usually some other kind of mathematical one).  If the interpretation is such that deductions in the first theory have results that are true in the interpretation, we say that the interpretation is valid, and that the interpretation is a model of the theory.  The model satisfies the theory.  I am omitting many technicalities (and probably abusing model theory) in order to appropriate the basic idea for application to interpretations in the world in mathematical sciences.

An important feature of this view is that the theory need not account for everything in the model.  For the model to be a model, everything that is true in the theory must be true in the interpretation, but the model is not otherwise constrained.  The interpretation is only for that aspect of the model that corresponds to the theory.  There may be many other features and aspects of the model that are simply not captured by the interpretation (and hence the theory).   Under the particular interpretation, however valid (in either a mathematical or an empirical sense, as the case might be), the theory has nothing to offer about those other matters.  In particular, we are free from concluding that the theory explains the model or dictates its "working."  We are also free, in the case of the world and many mathematical and logical theories, to have quite different interpretations in reality for models of the same theory.  (Interpreting objects and phenomena as numbers is something we are able to do in innumerable ways.  I had to say that.)

The reductionist pitfall is treating the theory as the model (and therefore comprehensive), and claiming the theory to be "about" the world.  In that case, there is no way to countenance there being anything else about the world and even the obvious becomes inaccessible.  There's some other kind of pitfall in faulting a theory for not embracing all of reality in its interpretations, but I am less concerned about that, although "reality is the model" avoids that too.

Computational Manifestations of Theories

This apparently-backward way of looking at theories is about the application of theories in ways that are useful in approaching reality.  The perspective is also contrary to using computations as embodiments of theories and seeing them as somehow modeling the world.  Theories may have computational models (as interpretations), and there may be an intended interpretation in the world that is also a model, but that does not make the computational model a model of the world any more than the theory is.

I find this a very fruitful way to look at a variety of aspects of information technology as it is developed and used.  My continuing duty is to articulate this value in less-abstract and directly valuable terms.  One curiosity is how this view can still allow for the notion that the act of programming a computer is a case of theory building.



Catching Up with Turing

[update 2008-05-29T16:15Z: Added Petzold's 2008-05-28 post with its nice tie-in of logic and Turing.]

[update 2008-05-27T16:03Z: A little tweaking in my comment on Petzold's 2008-05-25 essay.]

I received an e-mail announcing that Charles Petzold's The Annotated Turing is delayed and that I needed to approve the shipping delay from May 23 to June 23.  A number of times I have received one of these announcements only to have it followed by my order shipping immediately.  Since Petzold's site reports the book is scheduled to launch on June 16, I will be content to wait. 

Meanwhile, Petzold has been posting interesting tid-bits that attracted his attention while working on the book.  In many cases, there will be deeper coverage when the text is available.  Here are some that I am particularly keen to dig further into:

2008-05-28: Babies Are Illogical: The "Lost" "Chapter" of "Code"
Starting out with a look at logical puzzles, Petzold looks at questions about truth, how connected to logic, and related challenges that did not make it into his book, Code: The Hidden Language of Computer Hardware and Software (recommended).  In this essay, Petzold illustrates Charles Dodgson's 1896 Symbolic Logic approach using George Boole's original 1854 notation and will tie it back to computability in Chapter 12 of The Annotated Turing.
2008-05-25: Reading Brian Rotman's "Ad Infinitum..."
There is an interesting tension between computations that do not stop but whose output, in the limit (as we were inclined to say when I was a school student), converges to some computable real number.  Turing allowed such cases, and then demonstrated that there are still far more real numbers that cannot be computed than those that can.   The way this is arrived at leaves some questions about how many real numbers we think there should be and whether we are mistaken in where conventional set theory takes us in that regard.
   Although I am not ready to concede to Petzold that any kind of Platonic commitment is necessary in conception of the transfinite, I definitely find this accompanying statement worthy of careful appraisal:
   "We like to pretend that mathematics is the most 'objective' and least human-bound intellectual endeavor, but our view of the natural numbers reveals mathematics to be founded on a very human metaphysical conceit. The natural numbers are not, in fact, "natural" — that is, intrinsically part of nature — but arise out of human discourse."
   In this essay, Petzold explores Brian Rotman's effort to avoid conception of the completed natural numbers (and certainly, in that case, the completed reals).  The focus is on tying Turing's work to philosophical issues regarding the foundation of mathematics, an unexpected connection and one that Turing did not explore over-much.
2008-05-22: Turing and Brouwer: The Unexplored Connection
Not expecting to find a connection between the work of Turing and L. E. J. Brouwer, Petzold was startled to find one in the short Correction that Turing made to his original paper.  Accounting for this leads to a chapter on "Conceiving the Continuum" in The Annotated Turing and this delightful essay on how that all unfolded.  This is more on the issue of infinitudes.
2008-05-18: Turing Machines that Run Forever
From the beginning, Petzold has intimated that he wants to illuminate the Turing Machine as Alan Turing conceived of it, with it being acceptable for the machine to run forever.  This is not the ordinary formulation that has survived into contemporary computation theory.  Petzold wants to bring Turing's full characterization to our attention.  This essay motivates the difference, the historical context at the time of Turing's formulation, and the subsequent revisions.
2008-05-10: The 300 Page Ideal
Considering some of the later blog posts, it is amusing to see this use of "ideal."  This is a wonderful essay on the quirky fascination with attempting to produce a 300-page book, since Petzold has never managed to keep one that short. 
2008-05-09: Letting Go of the Book
This is a touching essay on The Annotated Turing now being on its way to print, with no more opportunities for changes or misgivings.  We are reminded that Petzold had been working on the book since 1999.
2008-03-27: "The Annotated Turing" Typographical Triumph
Petzold provides an account of what he went through to reset the Turing paper so that he could match Turing's symbols and typeface when elements of the Turing paper are discussed in the "annotations."  This is something that Donald Knuth will love, since the desired result was obtained (as the editor confides in a comment) by using LaTeX.  It is delightful how the reset Turing paper was proofed against the original printing.
2008-03-01: Of a Book Entitled "Mr. Turing's Computing Machine"
Petzold provides an account of his 1999 inspiration to produce The Annotated Turing, and the project's early life.

template created 2004-05-31-22:34 -0700 (pdt) by orcmid
$$Author: Orcmid $
$$Date: 10-04-30 21:00 $
$$Revision: 22 $