
Explicit Parallelism:

is characterized by the presence of explicit constructs in the programming language, aimed at describing (to a certain degree of detail) the way in which the parallel computation is to take place. A wide range of solutions exists within this framework. One extreme is represented by the ``ancient'' use of basic, low-level mechanisms to deal with parallelism, such as fork/join primitives and semaphores, added as extensions to existing programming languages. Although this allows the highest degree of flexibility (any form of parallel control can be implemented in terms of the basic low-level primitives), it leaves the additional layer of complexity entirely on the shoulders of the programmer, making the programming task extremely complicated.
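
To make the low-level style concrete, here is a minimal C sketch, assuming a POSIX system such as Linux (the work split and output are illustrative only): fork spawns a parallel task, a semaphore placed in shared memory provides the synchronization, and waitpid plays the role of join.

    /* Minimal sketch of fork/join plus a semaphore (POSIX, Linux). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/mman.h>
    #include <semaphore.h>

    int main(void) {
        /* The semaphore lives in shared memory so that both the
           parent and the child process can operate on it. */
        sem_t *done = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        sem_init(done, 1, 0);          /* pshared = 1: across processes */

        pid_t pid = fork();            /* "fork": spawn a parallel task */
        if (pid == 0) {                /* child: one piece of the work  */
            printf("child: computing...\n");
            sem_post(done);            /* signal completion             */
            _exit(0);
        }
        sem_wait(done);                /* synchronize on the semaphore  */
        waitpid(pid, NULL, 0);         /* "join": wait for the child    */
        printf("parent: child finished\n");

        sem_destroy(done);
        munmap(done, sizeof(sem_t));
        return 0;
    }

Every detail (spawning, signaling, reclaiming the child) is spelled out by hand, which is precisely the burden the text attributes to this approach.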

More sophisticated approaches have been proposed, supplying users with tools for dealing with parallel computations at a higher level of abstraction. These range from specialized libraries that supply a uniform set of communication primitives and hide the details of the computing environment (e.g., PVM [101] and Linda [39]) to sophisticated languages like PCN [35].
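
For a flavor of the library-based approach, the following is a minimal sketch of a PVM master process using the standard PVM 3 calls; the worker executable name ``worker'' and the message tags are placeholders, not taken from the text.

    /* Sketch of a PVM 3 master: spawn a worker, exchange one integer. */
    #include <stdio.h>
    #include "pvm3.h"

    int main(void) {
        int tid, n = 42, reply;

        /* Spawn one copy of the worker executable somewhere in the
           virtual machine; PVM decides where it actually runs. */
        pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &tid);

        pvm_initsend(PvmDataDefault);    /* fresh send buffer         */
        pvm_pkint(&n, 1, 1);             /* pack one integer          */
        pvm_send(tid, 1);                /* tag 1: the request        */

        pvm_recv(tid, 2);                /* tag 2: the reply          */
        pvm_upkint(&reply, 1, 1);
        printf("worker returned %d\n", reply);

        pvm_exit();
        return 0;
    }

The gain in abstraction is visible: the master names a task and exchanges typed messages, while the library hides which machine the worker runs on and how the data travels.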

Explicit parallelism has various advantages and disadvantages. The main advantage is its considerable flexibility, which allows a wide variety of execution patterns to be coded and gives the programmer considerable freedom in choosing what should be run in parallel and how. On the other hand, the management of the parallelism, a very complex task, is left to the programmer. Activities like detecting the components of the parallel execution and guaranteeing proper synchronization (e.g., absence of race conditions) can be more or less complex depending on the specific application.
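
A race condition of the kind mentioned above is easy to exhibit; the following C sketch (pthreads, illustrative only) has two threads incrementing a shared counter, with a mutex supplying the proper synchronization. Removing the lock/unlock pair makes the final count nondeterministic, since the read-modify-write of the counter can interleave across threads.

    /* Two threads increment a shared counter; the mutex prevents
       lost updates.  Compile with -pthread. */
    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* without these two calls   */
            counter++;                   /* the increments race and   */
            pthread_mutex_unlock(&lock); /* updates can be lost       */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000 with the lock */
        return 0;
    }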


