
## Classification of Parallelism

In the previous section we briefly introduced the basic forms of parallelism commonly identified in Logic Programming languages. To better understand the problems connected with the exploitation of parallelism in Logic Programming, it is important to introduce an additional level of classification. The parameter used in this classification is the level of interaction between parallel threads of computation [82]. This classification is independent of the specific form of parallelism exploited (i.e., either or- or and-parallelism). To keep the description uniform, we generically talk about threads of computation to indicate potentially parallel pieces of computation (i.e., parallel subgoals in and-parallelism and alternatives from a parallel choice point in or-parallelism).

We will also use the generic term dependency between threads to indicate the potential ability of one thread to affect the execution of other threads. For example, an unbound variable accessible by different threads may represent a dependency.
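As a minimal illustration of this notion, consider the standard Prolog conjunction below (the predicate names are purely illustrative): the variable X is unbound when execution starts and is reachable from both subgoals, so a binding produced by one thread constrains the other.

```prolog
p(a).
q(b).

% X is shared by both subgoals: whichever thread binds it first
% affects the execution of the other. Here p/1 binds X = a,
% which causes the call q(a) to fail.
?- p(X), q(X).
```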

Using this parameter, we can develop the following classification of parallelism (each class is illustrated in the Prolog sketch that follows the list):

• Independent Parallelism: parallel execution is allowed only between threads which are independent of each other. This level of parallelism is clearly the simplest to implement, since the parallel execution requires little or no synchronization. The only mechanisms required are those to generate the parallel work (e.g., the forking of parallel computations) and, possibly, a final barrier to detect the termination of all the threads.

Independent Parallelism is the weakest form of parallelism; nevertheless, it has been shown to produce considerable speed-ups in many real-life applications [46, 94]. Furthermore, the relative simplicity of the mechanisms required to support independent parallelism guarantees a limited overhead, which translates into a very efficient exploitation of parallelism.

• Restricted Parallelism: parallel execution of threads having dependencies is allowed as long as there is a guarantee that no solvable conflicts will take place during the execution. In simpler terms, the dependencies allowed are such that either no conflict will occur during the computation (i.e., the different threads always agree on the dependency) or, if a conflict does occur, the whole computation fails. The implementation of restricted parallelism is more complex: the presence of possible dependencies requires some care, such as the use of mutual exclusion mechanisms to avoid race conditions. Nevertheless, the guarantee that no solvable conflicts will occur makes it possible to avoid introducing any mechanism to order the accesses to the sources of dependencies; i.e., there is no don't-know non-determinism associated with the presence of dependencies.

This level of parallelism (which strictly subsumes independent parallelism) makes it possible to obtain speedups on a larger class of programs, while hopefully maintaining a satisfactory degree of efficiency.

• Dependent Parallelism: in this class we collect all those models in which the parallel execution of dependent threads is allowed. The presence of dependencies requires a careful design; the main issue to be tackled is guaranteeing that the semantics of the language is respected. The purpose of our research is to obtain transparent exploitation of parallelism: in these terms, the user should be guaranteed that the outcome of the execution on a parallel machine is exactly identical to the outcome produced by a sequential execution (modulo variations in execution time). Arbitrary execution of parallel threads in the presence of dependencies may cause a different behaviour, leading to a loss of the original semantics of the computation. To avoid this, the system needs to enforce a certain amount of control on the order in which the threads access and execute operations related to the dependencies; i.e., it must reintroduce a slight amount of sequentiality, in the form of synchronization points between threads.

Implementing a dependent parallel system is considerably more complex, because of the problems just mentioned. Nevertheless, only the systems belonging to this class guarantee the possibility of extracting parallelism from almost any kind of application.
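The sketch below makes the three classes concrete with one and-parallel conjunction per level. It is only a sketch under stated assumptions: the parallel conjunction operator &/2 is borrowed from &-Prolog/Ciao-style systems (it is not part of standard Prolog), and all predicate names are illustrative.

```prolog
% Independent parallelism: the two recursive calls share no
% variables (F1 and F2 are distinct), so they can run in parallel
% with only a fork and a final barrier at the conjunction.
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1, N1 is N - 1, N2 is N - 2,
    fib(N1, F1) & fib(N2, F2),
    F is F1 + F2.

% Restricted parallelism: r/1 and s/1 share X, but each can only
% bind it to the constant a, so the threads either agree on the
% binding or the whole conjunction fails. There is no solvable
% conflict, hence no need to order the accesses to X.
r(a).
s(a).
restricted(X) :- r(X) & s(X).

% Dependent parallelism: produce/2 incrementally binds the list L
% while consume/1 traverses it; the consumer must suspend on the
% unbound tail of L until the producer extends it, which is
% exactly the kind of synchronization point discussed above.
produce(0, []).
produce(N, [N|T]) :- N > 0, N1 is N - 1, produce(N1, T).
consume([]).
consume([H|T]) :- write(H), nl, consume(T).
dependent :- produce(5, L) & consume(L).
```

Note how the implementation cost grows across the three fragments: the first needs only fork and join, the second additionally needs mutual exclusion on the binding of X (but no ordering), and the third needs full suspension and resumption machinery on the shared stream.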

The classification above applies to both or- and and-parallel systems; in the rest of this section we will see how these classes instantiate for or-parallelism, for and-parallelism, and for their combinations.
