Combined systems exploiting and/or-parallelism may take considerable
advantage of information collected at compile-time. Clearly, all
the advantages described above for the individual forms of parallelism still apply.
More specifically, for and/or-parallel systems:
- compile-time analysis may make it possible to identify the points at which the different forms of
parallelism are most likely to occur, allowing the mechanisms used
in those parts of the execution to be simplified. Knowing, for example, that no and-parallelism will
occur in certain alternatives of an or-parallel choice point allows standard
or-parallel mechanisms to be used when exploiting parallelism from that choice point, avoiding all
the overhead associated with the additional layer of parallelism (a Prolog sketch of this situation follows the list).
- compile-time analysis may supply information to drive the scheduler during
execution; static knowledge about the distribution of work also allows
memory management to be simplified. For example, stack-copying (if used to support or-parallelism)
may exploit static knowledge about the distribution of work to identify the
areas to be copied.
- if binding arrays are used to support or-parallelism, static knowledge can
be employed to estimate the number of conditional variables and their
distribution, which is useful in establishing the size of the binding arrays and/or
the size of the pages into which they are partitioned (as in the Paged Binding Arrays scheme).
A sketch of the distinction between conditional and unconditional bindings also follows the list.
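The first point can be illustrated with a minimal, hypothetical Prolog fragment. The predicate
and helper names (process/2, split/3, solve/2, combine/3, base_case/1, direct_solution/2) are
placeholders, not taken from the original text, and the parallel conjunction is written with the
&-Prolog/Ciao-style '&' operator; this is only a sketch of the kind of situation compile-time
analysis can recognize, not the analysis itself.

% First alternative: the two calls to solve/2 are independent, so the
% compiler can annotate them with '&' for independent and-parallel
% execution.
process(Data, Result) :-
    split(Data, Left, Right),
    solve(Left, R1) & solve(Right, R2),
    combine(R1, R2, Result).

% Second alternative: a purely sequential body.  If compile-time
% analysis determines that no and-parallelism can arise in this
% branch, a worker exploring it only needs the standard or-parallel
% machinery, without the extra bookkeeping required for combined
% and/or-parallel execution.
process(Data, Result) :-
    base_case(Data),
    direct_solution(Data, Result).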
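For the binding-array point, the relevant compile-time information is an estimate of how many
conditional variables a region of the computation creates. The following hypothetical fragment
(main/1, p/1, use/2, aux/1 and check/1 are invented names) shows the distinction that such an
estimate is based on.

% The variable X already exists when the choice point for p/1 is
% created, and each alternative binds it differently: these bindings
% are *conditional* and require entries in the binding array.
% The variable Y is created and bound entirely inside one alternative,
% so its binding is unconditional and needs no such entry.
main(Result) :-
    p(X),
    use(X, Result).

p(a) :- aux(Y), check(Y).   % binds X to 'a' in this branch only
p(b).                       % binds X to 'b' in this branch only

A static estimate of how many variables behave like X (as opposed to Y), and of where they are
created, gives a bound on the number of binding-array entries needed, and hence on the size of
the binding arrays or of their pages in a paged scheme.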