The evolution of computer technology has been driven by one major factor: the uninterrupted quest for increased processing power and speed. While sequential-processor technology is rapidly approaching the physical limits of chip fabrication, an explosive growth of interest in Parallel Computing has come to dominate the computer science community. In parallel computing, the traditional Von Neumann view of a computer as a single processing unit, capable of executing a single stream of instructions at a time, is replaced by a cooperative view, in which multiple processing units work together to solve a problem, leading to what has been called the ``Parallel Wave'' of computing.
On the other hand, while parallel hardware technology has improved at a rapid pace, the same cannot be said for parallel software technology, raising the spectre of a (parallel) software crisis. Mapping the parallelism implicit in an application onto a parallel computer, and coordinating the resulting parallel executions, add an extra layer of complexity to the task of programming. This complexity stems mainly from the exponential growth in the number of possible interactions among the threads into which a parallel computation is decomposed, which makes explicit management of parallel execution a daunting task.
These considerations justify the effort of developing programming paradigms aimed at making parallel programming an activity whose cost is comparable to that of sequential programming.