Next: Sequential Efficiency
Up: Parallel Logic Programming
Previous: The Ideal System
The aim of developing parallel systems is to take advantage of parallel
architectures and obtain faster executions.
Unfortunately, research has often not focused in the proper direction. The main reason
is that the ``search'' for parallelism has often been taken to an extreme. This translates
into a very large amount of parallelism being extracted from programs, without taking into
account some fundamental aspects, such as:
- availability of resources: extracting more parallelism than the number of resources
available for execution (i.e., processors) leads to the need to assign multiple
pieces of work to the same agent. This typically involves a cost that may impact the
overall system efficiency. In these terms, it may be wiser to extract ``less'' but
``better'' parallelism (e.g., parallel tasks with larger granularity).
- system efficiency: this is an issue that, especially in older systems, has
not been taken sufficiently into account. It is well known that supporting parallel
execution imposes certain costs (scheduling costs, work management costs, etc.),
and often these costs are proportional to the amount of parallelism exploited.
Furthermore, certain design choices made in parallel systems may end up disrupting the
sequential efficiency of the system--i.e. the efficiency of the system while executing
sequential parts of the computation.
The ideal situation is the achievement of good speedups together with the no-slowdown
property [54]: the parallel system should never, in any case, take more
time than a corresponding sequential execution. In particular, this translates into
having systems capable of maintaining the efficiency of state-of-the-art
sequential implementations.
The logical question is ``how can good efficiency be achieved?'' There are various
issues that have to be tackled in the design of a parallel logic programming system
in order to guarantee good efficiency, and the rest of this document will mainly
focus on them. We can identify three major aspects that have to be analyzed:
- Control Management: control management is an extremely delicate issue
in parallel logic programming, especially with regard to scheduling. Scheduling,
i.e., deciding what should be executed in parallel, when, and by whom,
is the key to the success (or failure) of any parallel system.
Scheduling should ideally select tasks that are more likely to be useful
(e.g., or-branches containing a successful derivation), that have large grain, and
that can be executed with a minimal amount of communication between agents.
- Memory Management: many researchers have drawn attention to
the importance of efficient and effective memory management. The computation
should try to minimize the amount of memory employed, avoiding the
allocation of data structures that are not strictly required. This will not
only reduce the amount of memory used, but may also allow considerable reductions
in execution time (by removing the costs related to managing such
data structures).
Furthermore, the way in which information is organized in the abstract
machine has a profound impact on parallel execution. As we will see
in the next sections, adopting a different organization for certain data
areas may considerably improve execution time--by removing part of the cost
of searching for information spread across the various agents' stacks.
Logic programming languages are considerably eager in their memory consumption,
so garbage collection is also an issue that cannot be ignored.
- Optimizations: the mechanisms available to support parallelism are
typically designed to tackle the worst-case situation. The ability to identify
simpler cases allows the use of simpler execution schemes, with a consequent reduction in overhead
and improvement in speed. Optimizations can be divided into various categories:
- Pure Compile-time: the opportunities for optimization are identified
during compilation, and the optimization is directly exploited by the compiler,
producing improved compiled code;
- Pure Run-time: the possibility of improving the execution is detected during
the execution itself;
- Mixed Optimization: the compiler collects knowledge that is then used at
run time to verify the possibility of improving the execution.
In the rest of this document we will devote limited space to this topic, since it
remains largely unexplored in the area of Parallel Logic Programming
[23, 86, 56, 26].
The rest of this document concentrates on these issues: the problems, and the current
solutions, for achieving the most efficient implementations of parallel logic programming. The
first part reviews the current trends in sequential technology--this is relevant since
such technology can be transferred and easily adapted to parallel systems. The
second part concentrates on the issues related to the efficient exploitation of the various
forms of parallelism.
Enrico
Tue Mar 19 14:37:09 MST 1996