
Or-parallelism

As described in the introduction, or-parallelism arises whenever a subgoal can unify with the heads of more than one clause, and the bodies of these clauses are concurrently executed by different computing agents, which from now on will be referred to as or-agents.

Or-parallelism is present in almost any application, although certain application areas seem to offer it on a larger scale; this is the case for areas such as parsing [38], optimization problems [102], databases [121], and others [68].

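As a simple illustration (the program below is a hypothetical sketch, not the original example of this section), consider a small database of flight connections; the subgoal flight(rome, X) unifies with the heads of all three clauses, and each alternative can be explored by a distinct or-agent:

   flight(rome, paris).
   flight(rome, london).
   flight(rome, madrid).

   % The query below matches all three clause heads; under or-parallelism
   % each binding for X (paris, london, madrid) can be computed by a
   % different or-agent, yielding the three solutions in parallel.
   ?- flight(rome, X).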

It should be noted that, by taking different alternatives from a non-deterministic choice point, each or-agent is actually computing a distinct solution to the original query. This has often led to the intuition that or-parallelism, at least in principle, could be easily implemented with straightforward modifications to existing sequential technology.

This is only true if we restrict our attention to very simple forms of parallelism (i.e., independent or restricted). Implementing or-parallelism in its most general form, however, poses a number of complex challenges.

In particular, it is important to observe that the independence of the alternatives explored by the different or-agents exists only at the theoretical level. This can be understood from the following example: let us consider the query ?- q(Y,X), p(Y,X) and let us assume that q is defined by a single fact which binds Y to the constant a (leaving X unbound), while p is defined by two clauses, each of which binds X to a different value.
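For concreteness, one possible program with this shape (the clauses and constants below are illustrative assumptions, not taken from the original example) is the following:

   q(a, _).      % q binds Y to the constant a and leaves X unbound
   p(_, b).      % first alternative for p: binds X to b
   p(_, c).      % second alternative for p: binds X to c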

The Or-tree for this part of the computation is represented in figure 4(i). On one hand, the binding for the variable Y (Y->a) should be visible to both alternative computations of p. On the other hand, these computations will produce different bindings for the variable X. This introduces a clear problem in the design of an implementation model: the standard approach to managing environments is no longer suitable. In a sequential system a single environment would be allocated for the query, containing a location for the variable X; since the two alternative computations are explored sequentially, they can easily reuse the same environment. In an or-parallel system the two alternatives are explored concurrently, and they cannot share the same environment (and, in particular, the same location for X), as illustrated in figure 4(ii).

Access, within parallel alternatives, to environments created before the branching point introduces dependencies between the parallel threads. Variables belonging to such environments are usually called conditional variables, since their bindings may vary across different alternatives. The problem can be tackled in different ways, either by imposing restrictions on the cases in which parallelism is allowed, or by devising new environment representation schemes.
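To make the notion of conditional variable concrete, with the program sketched above the two or-branches produce the following answers (again an illustration based on the assumed clauses):

   ?- q(Y, X), p(Y, X).
   Y = a, X = b ;      % branch exploring the first clause of p
   Y = a, X = c.       % branch exploring the second clause of p

   % Y receives the same binding on both branches, whereas X is a
   % conditional variable: its binding differs across the parallel
   % alternatives and therefore cannot be stored in a single shared
   % location.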
