And-parallelism:

Compile-time analysis, and more specifically abstract interpretation, has been used heavily in the context of and-parallelism. In the following we sketch some of the main situations in which compile-time analysis can be used to improve and-parallel execution.
  1. independent and-parallelism relies on the knowledge that the subgoals considered for parallel execution are mutually independent, i.e., they will not affect each other's execution. As we have seen in a previous section, one of the best approaches to independent and-parallelism is based on the use of Conditional Graph Expressions (CGEs). At run time the conditions in a CGE need to be evaluated in order to verify whether parallel execution is possible. Even though these tests are very simple, their execution represents an overhead. On the other hand, an abstract interpretation focused on the detection of possible variable sharing may make it possible to determine statically that certain variables are independent, which in turn allows many of the tests in the CGEs to be removed.
  2. non-strict independent and-parallelism allows the and-parallel execution of subgoals that may share variables, provided that at most the rightmost of them attempts to bind them. Detecting cases of non-strict independence at compile time has been shown to be a very difficult problem [57]. This is because non-strict independence is not an a priori condition, i.e., it cannot be expressed directly in terms of tests on the status of certain variables (so run-time detection is unfeasible). Hermenegildo and others [13] have shown how to perform this analysis using sharing+freeness information.
  3. compile-time analysis may make it possible to identify even more cases of independent parallelism. For example, knowing that different subgoals sharing a variable will actually produce the same binding for that variable (in the absence, in the case of Prolog, of order-sensitive predicates) guarantees that they can be run in parallel.
  4. various forms of dependent and-parallelism require a certain amount of compile-time analysis. For example, instances of the Basic Andorra Model (as in the Andorra-I system) require knowledge regarding the determinacy of the subgoals to be run in and-parallel. This information is approximated in the Andorra-I system by verifying that the subgoal will match at most one clause, and compile-time analysis is used to generate decision trees capable of making this check very fast. Abstract interpretation may make it possible to relax this approximation and recognize more cases of determinacy, increasing the degree of parallelism exploited.
  5. various kinds of information obtainable at compile time can be used to optimize and-parallel execution. For example, knowledge about the determinacy of certain subgoals may make it possible, in the case of independent and-parallelism, to arbitrarily change the order of the subgoals. This makes it possible to ``merge'' the subgoals executed by the same agent, reducing the overhead (especially during backtracking) [84].
  6. in various and-parallel systems (e.g., Andorra-I) compile-time analysis is used to detect the presence of order-sensitive predicates and to insert proper annotations (e.g., sequential conjunctions) so that the system executes the program while respecting the Prolog semantics.
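To make the CGE-related discussion above more concrete, the following is a minimal sketch, in Python, of the kind of set-sharing abstract domain (in the style of Jacobs and Langen) that such an analysis can use. The representation and function names are illustrative assumptions, not the API of any actual system: an abstract substitution is a set of sharing groups, where each group collects the program variables that may be bound to terms containing a common run-time variable.

```python
# Sketch of a set-sharing abstract domain (hypothetical representation):
# an abstract substitution is a set of "sharing groups"; each group is a
# frozenset of program variables that may share a run-time variable.

def ground(sharing, v):
    """v is definitely ground if it occurs in no sharing group."""
    return all(v not in group for group in sharing)

def indep(sharing, x, y):
    """x and y are definitely independent if no group contains both."""
    return all(not ({x, y} <= group) for group in sharing)

# Suppose the analysis infers that X may share with Y, while Z is ground:
sharing = {frozenset({"X", "Y"})}

print(ground(sharing, "Z"))      # True  -> a ground(Z) test can be removed
print(indep(sharing, "X", "Z"))  # True  -> an indep(X,Z) test can be removed
print(indep(sharing, "X", "Y"))  # False -> an indep(X,Y) test must stay
```

Whenever a groundness or independence condition in a CGE is proved to hold for every possible run-time state, the corresponding run-time test can be dropped; if all conditions are proved, the CGE degenerates into an unconditional parallel conjunction.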
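The determinacy approximation used in Andorra-style systems, mentioned above, can also be sketched briefly. The code below is a simplified illustration under the assumption that clause heads are abstracted by the principal functor of their first argument (`None` standing for a variable); the function names are hypothetical, and real systems compile this check into decision trees rather than scanning clauses at run time.

```python
# Hypothetical sketch of the "matches at most one clause" determinacy test.
# Clause heads are abstracted by their first-argument principal functor;
# None represents a variable, which unifies with anything.

def matching_clauses(clause_functors, call_arg):
    """Clauses whose head can unify with the call's first argument.
    call_arg is a functor name, or None when the argument is unbound."""
    if call_arg is None:
        return list(clause_functors)  # an unbound argument matches all heads
    return [f for f in clause_functors if f is None or f == call_arg]

def determinate(clause_functors, call_arg):
    """The goal is (approximated as) determinate if at most one clause matches."""
    return len(matching_clauses(clause_functors, call_arg)) <= 1

# p(a). p(b). p(X).  represented as:
heads = ["a", "b", None]
print(determinate(heads, "c"))   # True: only p(X) can match
print(determinate(heads, None))  # False: an unbound argument matches all three
```

A sharper abstract-interpretation-based analysis could prove determinacy in cases this syntactic check misses, e.g. when argument types or guards exclude all but one clause, which is exactly how more parallelism can be recovered.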


Enrico
Tue Mar 19 14:37:09 MST 1996