
Order-sensitive Executions

We have frequently referred in the previous sections to the issue of maintaining Prolog semantics within the parallel computation. In simple words, the aim is to have the parallel implementation ``behave'' exactly like a sequential one, in terms of the solutions obtained, the order in which they are produced, and the general external behaviour of the execution.

A sequential implementation of Prolog is characterized by a depth-first, left-to-right traversal of the computation tree. This fixes the order in which each subgoal is processed. During a parallel execution, special actions are required in those cases in which the parallel computation may lead to a different behaviour. We have seen examples of this in the case of dependent and-parallelism, where an unrestricted parallel execution may lead to different solutions, since the order in which the variables get bound can be different--and this can affect the final outcome. Sequential behaviour can be restored by introducing synchronization points in the parallel execution, which guarantee that all the order-sensitive operations are performed in the correct sequence.

Clearly, the presence of order-sensitive operations leads to a degradation of parallelism (since we are forced to reintroduce a sequential component in the execution), and the components of the system in charge of deciding which parts of the program will be run in parallel (e.g., the schedulers) should be designed in such a way as to take this into account--e.g., by avoiding the parallelization of parts of the program which are heavily ``sequential''. On the other hand, the presence of order-sensitive operations does not require the sequentialization of the whole execution involved--only the order-sensitive operations need to be sequentialized, while the rest of the code in between can be run in parallel without any concern.

First of all, it is quite clear that only certain kinds of order-sensitive predicates will create problems when they occur in a parallel computation: typically side-effect built-ins (e.g., input/output, assert and retract) and extra-logical constructs (e.g., cut) whose outcome depends on the order in which goals are executed.

The simplest approach is to refuse parallelization of the parts of the code which contain these kinds of predicates. This option has been adopted in some of the earlier proposals (e.g., ROPM, AO-WAM), but it is unacceptable, as it leads to a considerable loss of parallelism.

A slightly improved solution has been proposed in other systems, such as PEPSys [118], where the program is statically subdivided into sequential and parallel modules, and side-effects are only allowed in the sequential ones. But, again, the loss of parallelism may be considerable.

Taking for granted that order-sensitive predicates also need to be allowed within the scope of parallel computations, the system should be supplied with special rules that guarantee that Prolog semantics is respected.

To maintain Prolog semantics in the presence of or-parallelism, an order-sensitive predicate should be executed only after all the preceding ones have terminated. Determining this condition a priori is impossible (it is akin to solving the halting problem), but approximations can be developed. The most commonly used one (e.g., in Muse, Aurora, and Andorra-I) is to suspend the execution of the order-sensitive predicate until its branch is leftmost in the whole or-tree--i.e., until all the branches on its left have been completed.
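As a rough illustration of this condition--a minimal Python sketch, not the data structures or algorithm of any actual system, with all class and function names purely hypothetical--a branch may execute an order-sensitive predicate only when, at every choice point on the path up to the root of the or-tree, all alternatives to its left have already completed; otherwise it suspends:

    # Hypothetical sketch of the "leftmost branch" condition; names are illustrative.
    class OrNode:
        def __init__(self, parent=None):
            self.parent = parent            # parent choice point (None at the root)
            self.children = []              # alternative branches, in left-to-right order
            self.completed = False          # set once this branch has been fully explored
            if parent is not None:
                parent.children.append(self)

    def is_leftmost(node):
        """True when, at every choice point on the path up to the root,
        all alternatives to the left of this branch have already completed."""
        while node.parent is not None:
            for sibling in node.parent.children:
                if sibling is node:
                    break
                if not sibling.completed:
                    return False            # an unfinished branch still exists to our left
            node = node.parent
        return True

    def execute_order_sensitive(node, run_builtin, suspend):
        """Run the built-in only when the branch is leftmost in the whole or-tree;
        otherwise suspend it until the branches on its left have completed."""
        if is_leftmost(node):
            run_builtin()
        else:
            suspend(node)

In an actual system the suspended branch would be resumed (and the check repeated) whenever a branch to its left terminates.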

Only a few techniques have been considered for handling order-sensitive predicates in the presence of and-parallelism [78, 14]. The basic idea is analogous to what was described in the case of or-parallelism: the execution of an order-sensitive predicate needs to be suspended until the branch (in the and-tree) to which it belongs is the leftmost active one (i.e., until all the branches on its left have completed their executions). The approaches referenced above differ in the way in which leftmostness in the tree is detected.
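The and-parallel check can be pictured in the same hypothetical style as the sketch above: the subgoal issuing the order-sensitive operation waits until every conjunctive goal to its left has finished (again, names are illustrative, not those of the systems cited):

    class AndGoal:
        def __init__(self, conjunction, index):
            self.conjunction = conjunction  # sibling subgoals executed in and-parallel
            self.index = index              # position of this subgoal in the conjunction
            self.finished = False           # set once the subgoal has completed

    def leftmost_active(goal):
        """True when every subgoal to the left in the same conjunction has finished,
        i.e. this is the leftmost branch of the and-tree that is still running.
        (A full implementation would repeat the check up through nested conjunctions.)"""
        return all(g.finished for g in goal.conjunction[:goal.index])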

The conditions under which it is possible to safely execute an order-sensitive predicate in the presence of and/or-parallelism can be obtained by combining the respective conditions for or- and and-parallelism. This has been studied by Gupta and Santos Costa [41]. Some improvements can be obtained by taking advantage of the characteristics of the different computational models (e.g., ACE proposed a slightly better algorithm for detecting the executability of order-sensitive predicates [47]).
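Continuing the hypothetical sketches above, the combined condition simply requires leftmostness along both dimensions before the predicate is allowed to run:

    def safe_to_execute(or_node, and_goal):
        """Sketch of the combined condition: the predicate may run only when its
        branch is leftmost in the or-tree and its subgoal is the leftmost active
        one in the and-tree; otherwise it must suspend."""
        return is_leftmost(or_node) and leftmost_active(and_goal)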



