
Overheads in And-Parallelism

The situation in terms of overhead is more serious in the case of and-parallelism. Or-parallelism has the advantage that the data structures describing the list of alternative computations (i.e., the choice points) are already present in the sequential model itself, and can be easily updated to suit parallel execution. For this reason the overhead incurred during sequential execution is almost zero.
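
To illustrate the point, the following C sketch shows the kind of minimal extension involved: a WAM-style choice point augmented with a lock and a count of untried alternatives, so that several or-agents can pick alternatives from the same node. The field and function names are illustrative assumptions, not those of any actual system.

    #include <pthread.h>

    typedef unsigned char code_t;

    struct choice_point {
        struct choice_point *prev;      /* previous choice point (the WAM B chain)  */
        code_t             **next_alt;  /* table of untried clause entry points     */
        int                  n_left;    /* how many alternatives remain             */
        pthread_mutex_t      lock;      /* added so several or-agents can share it  */
        /* ... saved argument registers, heap/trail pointers, etc. ...              */
    };

    /* An idle or-agent (or the owner, on ordinary backtracking) picks the
       next untried alternative; in a purely sequential run the only extra
       cost is an uncontended lock. */
    static code_t *take_alternative(struct choice_point *b)
    {
        code_t *alt = NULL;
        pthread_mutex_lock(&b->lock);
        if (b->n_left > 0) {
            alt = *b->next_alt++;
            b->n_left--;
        }
        pthread_mutex_unlock(&b->lock);
        return alt;
    }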

In and-parallelism the situation is more complex: the sequential model does not have an explicit representation of the list of subgoals (it is implicit in the next-instruction pointers saved in the environments and in the code structure itself). Thus, the sequential model (at least one based on the WAM structure) does not supply sufficient information to allow an easy adaptation to and-parallelism. And-parallelism requires an explicit representation of the current resolvent, or at least of the part of the resolvent that is meant to be executed in parallel, so that the different agents can easily identify the presence of parallel work and take advantage of it.

The majority of and-parallel systems introduce a form of goal stacking in their execution model (this will be described in detail in a later section; for now it is sufficient to know that goal stacking is a computation model in which the current resolvent is completely described by a sequence of goal descriptors stored in a stack), either to completely replace environment stacking or limited to the parts of the resolvent subject to parallel execution. Furthermore, this has to come together with additional data structures to allow proper control of the parallel execution (e.g., how many subgoals are still to be completed, where they are located, etc.).
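
A rough sketch of the data structures involved is given below. The layout and names are assumptions, loosely inspired by RAP-WAM-style designs, and are not a faithful reproduction of any particular system.

    typedef unsigned char code_t;   /* code addresses, as in the sketch above */

    struct parcall_frame;           /* forward declaration */

    /* One entry of the goal stack: enough information for any idle agent
       to pick up the subgoal and execute it. */
    struct goal_descriptor {
        code_t               *entry;  /* entry point of the subgoal's code    */
        void                 *args;   /* its argument bindings                */
        struct parcall_frame *pf;     /* back pointer to the parallel call    */
        int                   slot;   /* position within the parallel call    */
    };

    /* Control structure for one parallel conjunction. */
    struct parcall_frame {
        int                     n_goals;    /* subgoals in the conjunction    */
        int                     n_pending;  /* still to be completed          */
        struct goal_descriptor *goals;      /* one descriptor per subgoal     */
        /* per-subgoal status: which agent took it, where its execution lives */
    };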

The relevant point is that the switch to goal stacking, and its cost, must be paid a priori, at the moment the resolvent is started, without any knowledge of whether and-parallel execution will take place at all. This translates into pure overhead that may not be balanced by any speedup due to parallelism, and may even produce a slowdown with respect to sequential execution.
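
Continuing the sketch above, the following fragment makes the point concrete: the setup work is unconditional, performed before it is known whether any other agent will pick up the subgoals (push_parcall_frame, publish_goals, take_local_or_wait, run_subgoal and struct worker are hypothetical helpers introduced only for illustration).

    void execute_parallel_conjunction(struct worker *w,
                                      struct goal_descriptor *goals, int n)
    {
        struct parcall_frame *pf = push_parcall_frame(w, goals, n); /* always paid */
        publish_goals(w, pf);     /* make the subgoals visible to idle agents      */

        /* If no idle agent ever shows up, this same worker ends up running
           every subgoal itself: the frame, the descriptors and the
           publication were pure overhead compared with plain sequential
           calls. */
        while (pf->n_pending > 0) {
            struct goal_descriptor *g = take_local_or_wait(w, pf);
            if (g != NULL)
                run_subgoal(w, g);
        }
    }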

This has made and-parallel systems less than fully competitive with sequential technology: the amount of overhead can be considerable, especially in programs which offer a large amount of parallel work.

Backtracking is another source of overhead. And-parallelism distributes the execution across different agents, while backtracking must traverse the computation in a well-specified order (right-to-left) in order to guarantee the proper semantics. This imposes the requirement of being able to trace back the position of the different parts of the computation in the stacks of the various and-agents, which in general can be realized only by storing additional information to be used during backtracking. For example, in the RAP-WAM model [58] each parallel subgoal's execution is delimited by markers, and each parallel subgoal has a descriptor associated with it, stored in the descriptor of the parallel call. The subgoal's descriptor records the location of the markers delimiting its execution. On backtracking, the entry point into the subgoal is found by locating the marker which closes its execution.
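
The bookkeeping just described might be sketched as follows; again, the structures are illustrative assumptions in the spirit of the RAP-WAM, not its actual data structures.

    struct parcall_frame;             /* as in the earlier sketch */

    /* Pushed on an and-agent's stack at the start and at the end of a
       subgoal's execution. */
    struct marker {
        struct parcall_frame *pf;     /* the parallel call it belongs to  */
        int                   slot;   /* which subgoal of that call       */
    };

    /* Kept in the descriptor of the parallel call, one per subgoal. */
    struct subgoal_status {
        int            agent_id;      /* which and-agent ran the subgoal  */
        struct marker *begin, *end;   /* markers delimiting its execution */
    };

    /* Outside backtracking walks the subgoals right to left and re-enters
       each one at the marker closing its execution. */
    static struct marker *backtrack_entry(struct subgoal_status *s)
    {
        return s->end;
    }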

Moving on to dependent and-parallelism, further overheads arise from the complexity of managing accesses to shared variables (e.g., guaranteeing mutual exclusion).
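
A minimal illustration of this cost, assuming a shared variable represented as an atomic cell where NULL means unbound, is the need to install a binding atomically so that two agents cannot bind the same variable at once; real systems additionally need suspension and wakeup machinery, omitted here.

    #include <stdatomic.h>

    /* A shared (dependent) variable cell: NULL means unbound. */
    typedef _Atomic(void *) shared_var;

    /* Returns 1 if this agent installed the binding, 0 if another agent
       bound the variable first (in which case the caller must read, or
       suspend on, the existing value). */
    static int bind_shared(shared_var *v, void *value)
    {
        void *expected = NULL;
        return atomic_compare_exchange_strong(v, &expected, value);
    }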

Solutions to at least part of these problems exist, although engineering them in an actual implementation is far from straightforward. Parts of the design of the Warren Abstract Machine need to be reconsidered: the WAM was designed to be an efficient sequential model, and some of its choices are not suitable for parallel implementation. Moving away from the WAM as a base for sequential implementation does not necessarily translate into a loss of performance since, as described in the previous section on sequential efficiency, new models have recently been proposed that are competitive with the WAM and are more ``goal-stacking'' oriented.

