Next: Beating the Interpretation Blues Up: Parallel Logic Programming Previous: Parallel Logic Programming

Sequential Implementation of Logic Programming

The use of logic as a ``programming'' language, or at least as a means to mechanically express and manipulate reasoning, has been implicit since its creation [33]. Nevertheless, it was not until the creation of Prolog, thanks to the brilliant intuitions of Colmerauer and Kowalski [16], that ``logic'' became ``logic programming''. The key insight in this evolutionary step was that a subset of first-order logic, namely Horn Clause Logic, is sufficiently expressive for practical programming purposes while still being efficiently implementable.

Even though reasonably fast interpreters for Prolog, such as the C-Prolog system developed by Pereira, Damas, and Byrd, had been available, a myth that ``logic programming = slow implementations'' developed and still survives. Only with the advent of Prolog compilers did this myth begin to decline; at present, extremely fast implementations are available--capable on some specific programs of beating state-of-the-art C compilers [90].

The real breakthrough was the introduction of the Warren Abstract Machine (WAM) [116, 2, 37], which has become the de facto standard for the implementation of Prolog and Logic Programming languages.

The WAM defines an abstract architecture whose instruction set allows an easy mapping from Prolog source code, while being sufficiently low-level to allow efficient emulation and/or translation to native machine code.
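The idea of emulating an abstract instruction set can be sketched with a minimal fetch-decode-execute loop. The opcodes below are invented for illustration only--they are not WAM instructions--but the dispatch structure is the same one an emulator for the WAM instruction set would use.

```python
# Hypothetical sketch of abstract-machine emulation: a fetch-decode-execute
# loop over a list of (opcode, operand) pairs. The opcodes are illustrative
# inventions, not actual WAM instructions.

def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]       # fetch
        pc += 1
        if op == "push":         # decode and execute
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "halt":
            break
    return stack

assert run([("push", 2), ("push", 3), ("add", None), ("halt", None)]) == [5]
```

Translation to native code, by contrast, replaces this interpretive dispatch with a direct expansion of each abstract instruction into machine instructions.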

The WAM is a stack-based architecture, sharing some similarities with the implementation schemes of imperative languages (e.g., the use of call/return instructions and of frames for maintaining a procedure's local environment), but extended to support the features peculiar to Logic Programming, namely unification and backtracking. The major data areas of the WAM (shown in figure 3) are:

• Heap: used to store the complex data structures (lists and Prolog compound terms) created during execution.
• Local Stack: serves the same purpose as the control stack in implementations of imperative languages--it contains control frames, called environments, which are created upon entering a new clause (i.e., a new ``procedure'') and are used to store the local variables of the clause and the control information required for ``returning'' from the call.
• Choice Point Stack: choice points encapsulate the execution state for backtracking purposes. A choice point is created whenever a call with multiple possible solution paths (i.e., more than one clause successfully matches the call) is encountered. Each choice point contains sufficient information to restore the state of execution at the time of the choice point's creation, and keeps track of the remaining unexplored alternatives.
• Trail Stack: variables are instantiated during execution, but on backtracking these bindings need to be undone to restore the previous state of execution. To make this possible, bindings that may have to be undone are registered in the trail stack. Each choice point records the point in the trail where the undoing activity must stop.
• Code Area: used to store the compiled code of the program.

Figure 3: Organization of the WAM
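The interplay between choice points and the trail can be sketched as follows. This is a deliberately minimal model, not the actual WAM data structures: bindings made after a choice point is created are registered on the trail, and backtracking unwinds the trail down to the mark saved in the choice point.

```python
# Minimal sketch of trail-based backtracking (illustrative, not WAM code):
# every binding is recorded on the trail; a choice point remembers the
# current trail top; backtracking pops bindings back to that mark.

bindings = {}   # variable name -> value
trail = []      # names of variables bound so far, in order

def bind(var, value):
    bindings[var] = value
    trail.append(var)            # register the binding for later undoing

def make_choice_point():
    return len(trail)            # save the current top of the trail

def backtrack(mark):
    while len(trail) > mark:     # undo bindings made after the choice point
        del bindings[trail.pop()]

bind("X", 1)
cp = make_choice_point()         # choice point created here
bind("Y", 2)
bind("Z", 3)
backtrack(cp)                    # Y and Z are unbound; X survives
assert bindings == {"X": 1}
```

A real implementation records addresses of bound heap or stack cells rather than names, and also trims the trail by trailing only bindings older than the most recent choice point, but the mechanism is the same.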

Being a dynamically typed language, Prolog requires type information to be associated with each data object. In the WAM, Prolog terms are represented as tagged words. Each word is composed of a tag, identifying the type of the object (e.g., atom, list, etc.), and a value (e.g., an atom name, a pointer to the first cell of a list, etc.).
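The tagged-word representation can be sketched as follows. The tag names, number of tags, and bit layout below are illustrative assumptions, not the layout of any particular WAM implementation; the point is only that one machine word packs both the type tag and the value.

```python
# Hypothetical sketch of WAM-style tagged words: a few low-order bits
# hold the type tag, the remaining bits hold the value (an integer, an
# atom index, or a pointer encoded as an offset). Tag names and widths
# are invented for illustration.

TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1

# illustrative tag values: unbound reference, atom, integer, list, structure
TAG_REF, TAG_ATOM, TAG_INT, TAG_LIS, TAG_STR = range(5)

def make_word(tag, value):
    """Pack a tag and a value into a single tagged word."""
    return (value << TAG_BITS) | tag

def tag_of(word):
    return word & TAG_MASK

def value_of(word):
    return word >> TAG_BITS

w = make_word(TAG_INT, 42)
assert tag_of(w) == TAG_INT and value_of(w) == 42
```

Dispatching on the tag (for instance, during unification) then reduces to a mask and a jump, which is one reason this representation emulates efficiently.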



Enrico
Tue Mar 19 14:37:09 MST 1996