Exploitation of parallelism from logic programming languages needs to take into account the current trends in the evolution of this programming paradigm. In particular, some of the most relevant topics are the following.
Concurrency has been shown [92, 63] to be a powerful programming instrument, allowing one to write programs capable of coroutining and reactive behaviour.
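Coroutining of this kind can be sketched outside a logic language as well. The following Python fragment (an illustrative assumption, not taken from the cited systems) uses a generator so that control alternates between a producer and a consumer, the essence of coroutine-style interleaving:

```python
def producer(items):
    # Yield items one at a time; control transfers back to the
    # consumer after each yield, giving coroutine-style interleaving.
    for item in items:
        yield item

def consumer(source):
    results = []
    for item in source:  # each iteration resumes the producer
        results.append(item * 2)
    return results

print(consumer(producer([1, 2, 3])))  # [2, 4, 6]
```

Neither routine runs to completion before the other starts; execution is interleaved at each yield point, which is what enables reactive behaviour.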
Broadly speaking, Control Parallelism (often also referred to as MIMD parallelism) arises when different operations (or functions, procedures) are applied to different data items in parallel. No a-priori synchronization exists between the different parallel computations; any form of synchronization needs to be explicitly programmed.
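A minimal sketch of control parallelism, in Python rather than a logic language (the function names are hypothetical): two unrelated operations run on different data in separate threads, and the only synchronization is the one the programmer writes explicitly.

```python
import threading

results = {}

def square(x):
    results["square"] = x * x

def negate(y):
    results["negate"] = -y

# Control (MIMD) parallelism: different operations applied to
# different data items, with no implicit synchronization.
t1 = threading.Thread(target=square, args=(4,))
t2 = threading.Thread(target=negate, args=(7,))
t1.start()
t2.start()
# Synchronization must be programmed explicitly, here via join().
t1.join()
t2.join()
print(results)  # {'square': 16, 'negate': -7}
```

Note that without the explicit `join()` calls, nothing guarantees both computations have finished before the results are read.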
Data Parallelism, also known as SIMD/SPMD parallelism, arises when the same operation is applied in parallel to multiple data items. This involves a lock-step execution pattern in which, at each step, the desired operation is concurrently applied to the different data items.
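By contrast, data parallelism can be sketched as a single operation mapped over a collection; the example below (again an illustrative Python assumption, with a hypothetical `double` operation) uses a thread pool so that the same function is applied to every element, and the map construct itself supplies the lock-step structure:

```python
from concurrent.futures import ThreadPoolExecutor

def double(x):
    return 2 * x

data = [1, 2, 3, 4]

# Data (SIMD/SPMD) parallelism: the same operation is applied to
# every data item; synchronization is implicit in the map, which
# returns only when all elements have been processed.
with ThreadPoolExecutor() as pool:
    print(list(pool.map(double, data)))  # [2, 4, 6, 8]
```

Because every worker executes the same operation, no per-task coordination needs to be programmed, which is precisely what makes this pattern amenable to low-overhead implementation.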
In many cases data parallelism can be seen as a special instance of control parallelism, but the chief advantage of data parallelism is its more specialized nature, which makes it amenable to more efficient implementations incurring considerably less overhead.
An ideal parallel system should provide features for efficiently supporting data parallelism, identifying its occurrences and reducing the associated overhead as much as possible.