Implicit parallelism allows programmers to write their programs without any concern for the exploitation of parallelism.
Exploitation of parallelism is instead performed automatically by the compiler and/or the runtime system. In this way parallelism is transparent to the programmer (except, one hopes, for an improvement in execution performance), keeping the complexity of software development at the same level as standard sequential programming.
Extracting parallelism implicitly is not an easy task.
For imperative programming languages (e.g., the FX Project, PARADIGM), the complexity of the problem is almost prohibitive, and positive results have been obtained only for restricted classes of applications (e.g., applications that perform intensive operations on arrays) [123].
The situation is brighter for declarative languages. Declarative programming languages, and in particular functional and logic languages, are characterized by a very high level of abstraction, allowing the programmer to focus on what the problem is while leaving implicit many details of how the problem should be solved. Declarative languages have opened new doors to the automatic exploitation of parallelism: their focus on a high-level description of the problem and their mathematical nature translate into properties favorable to the implicit exploitation of parallelism. In particular:
- they are referentially transparent, which means that variables are seen as mathematical entities whose value cannot be changed during execution (single-assignment languages);
- their operational semantics are based on some form of non-determinism (e.g., clause selection in logic languages, the apply operator in functional languages);
- the possibility of using eager evaluation schemes makes it possible to obtain dataflow-like computations, which are highly suitable for parallel execution (see the sketch after this list).
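As a concrete illustration, here is a minimal Haskell sketch of how referential transparency enables dataflow-like parallel evaluation. It uses the par and pseq combinators from GHC's Control.Parallel module (the "parallel" package); the functions f and combine are invented for the example and do not come from the works cited here.

    import Control.Parallel (par, pseq)

    -- Referential transparency: 'f x' always denotes the same value, so
    -- independent calls may be evaluated in any order, or in parallel,
    -- without changing the result.
    f :: Int -> Int
    f x = x * x + 1

    -- The two operands of (+) have no data dependency between them, so a
    -- compiler or runtime is free to evaluate them concurrently: 'par'
    -- sparks 'a' for parallel evaluation while 'pseq' forces 'b' first,
    -- yielding a dataflow-like schedule.
    combine :: Int -> Int -> Int
    combine x y =
      let a = f x
          b = f y
      in a `par` (b `pseq` (a + b))

    main :: IO ()
    main = print (combine 3 4)

Because f has no side effects, no synchronization is needed beyond the data dependencies themselves.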
For these reasons, efforts to parallelize declarative programming languages have been highly successful [80, 117, 36].
The obvious advantages (automatic parallelization, no additional effort for the programmer) are balanced by some non-trivial disadvantages:
- the system typically lacks knowledge of the size of the various components of a computation, and may exploit very fine-grained parallelism, leading to slow-downs instead of speed-ups (see the sketch below);
- the system may attempt to parallelize code which is only apparently parallel, being instead inherently sequential; this introduces large numbers of synchronization points and leads to inefficient execution.
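The granularity problem in the first point is commonly addressed with a cutoff that keeps small tasks sequential. The following Haskell sketch (again using par and pseq from Control.Parallel; the threshold value and function names are chosen purely for illustration) sparks a parallel task only when the work is coarse enough:

    import Control.Parallel (par, pseq)

    -- Sparking every recursive call creates tasks far smaller than the
    -- cost of managing them, which is exactly the slow-down described
    -- above. Below the threshold we therefore fall back to plain
    -- sequential evaluation.
    pfib :: Int -> Int -> Integer
    pfib threshold n
      | n < threshold = sfib n           -- task too small: stay sequential
      | otherwise     =
          let a = pfib threshold (n - 1)
              b = pfib threshold (n - 2)
          in a `par` (b `pseq` (a + b))  -- spark only coarse-grained tasks

    -- Ordinary sequential Fibonacci, used for the fine-grained cases.
    sfib :: Int -> Integer
    sfib n | n < 2     = fromIntegral n
           | otherwise = sfib (n - 1) + sfib (n - 2)

    main :: IO ()
    main = print (pfib 20 30)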