Models exploiting the forms of parallelism in logic programming described earlier have been developed and successfully implemented. Nevertheless, various issues remain open and are the subject of active research.
Efficiency: although the literature contains a large number of proposals for exploiting parallelism in logic programming languages, only a few of them are of practical significance. This is because a practical parallel system must be efficient, where by efficiency we mean the ability to (i) contain the amount of overhead introduced to extract parallelism; and (ii) achieve a sequential efficiency comparable to that of state-of-the-art sequential implementations.
The typical requirement we would like to place on a parallel implementation model is the no-slowdown requirement: under no circumstances should the parallel execution be slower than the corresponding sequential one. In practice the no-slowdown requirement can be hard to meet; exploiting parallelism may introduce overheads that in certain situations actually slow down the computation, and certain operations may become more expensive in the presence of parallelism (this is often the case for backtracking). Nevertheless, even if no-slowdown cannot be fully achieved, we should still aim at its best possible approximation, limiting the cases in which slowdown appears and maintaining good sequential efficiency. Designing efficient parallel logic programming systems is a considerably complex task that is still partly unresolved.
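One common way of approximating the no-slowdown requirement is granularity control: a goal is executed in parallel only when its estimated work can amortize the cost of task creation, and sequentially otherwise. The following Python sketch illustrates the idea under invented assumptions; `SPAWN_OVERHEAD`, `run_goal`, and `solve` are hypothetical names and the cost model is purely illustrative, not taken from any actual system:

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed fixed cost (in abstract work units) of creating a parallel task.
# In a real system this would be measured, not guessed.
SPAWN_OVERHEAD = 1000

def run_goal(work_units):
    """Stand-in for executing a goal whose estimated size is work_units."""
    return sum(range(work_units))

def solve(goal_sizes, executor):
    """Run each goal in parallel only if its estimated work exceeds the
    spawn overhead; small goals run sequentially to avoid a slowdown."""
    results, futures = [], []
    for work in goal_sizes:
        if work > SPAWN_OVERHEAD:
            # Large goal: parallel speedup can amortize the spawn cost.
            futures.append(executor.submit(run_goal, work))
        else:
            # Small goal: sequential execution avoids a likely slowdown.
            results.append(run_goal(work))
    return results + [f.result() for f in futures]
```

The design choice mirrors the text: rather than guaranteeing no-slowdown, the runtime limits the cases in which slowdown can appear by keeping fine-grained work on the sequential fast path.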
Combinations: although efficient implementations of logic programming languages exploiting a single form of parallelism are available [40, 36], systems concurrently exploiting different forms of parallelism still elude us. Systems combining different forms of parallelism have been proposed, but they suffer from various drawbacks, such as:
Optimizations: even models based on standard sequential implementation techniques (like the Warren Abstract Machine (WAM)) may suffer considerable overheads due to the support of parallel execution. On the other hand, the support for parallelism is often designed to tackle worst-case situations, which may seldom occur. Behind the poor performance of many implementations lies a broad lack of optimization principles that could tailor the cost of exploiting parallelism to the actual complexity of each specific instance of the problem.
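One instance of such an optimization principle is to allocate the machinery needed for the worst case lazily, so that executions which remain sequential never pay for it. The Python sketch below illustrates this with a choice-point-like record that only creates a lock once it is actually shared between workers; the class and method names (`ChoicePoint`, `next_alternative_sequential`, `next_alternative_shared`) are hypothetical and do not correspond to any particular WAM extension:

```python
import threading

class ChoicePoint:
    """Illustrative record of untried alternatives for a goal."""

    def __init__(self, alternatives):
        self.alternatives = list(alternatives)
        self._lock = None  # worst-case machinery, not allocated up front

    def next_alternative_sequential(self):
        # Fast path: a single worker owns the record, so no
        # synchronization cost is paid at all.
        return self.alternatives.pop() if self.alternatives else None

    def next_alternative_shared(self):
        # Slow path, taken only once the record is actually shared:
        # the lock is created on first use, tailoring the cost of
        # parallelism to this specific instance of the problem.
        if self._lock is None:
            self._lock = threading.Lock()
        with self._lock:
            return self.alternatives.pop() if self.alternatives else None
```

Under this scheme a program that happens to run entirely sequentially incurs none of the synchronization overhead designed for the (seldom occurring) worst case.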
Integration: logic programming has evolved rapidly in recent years, bringing attention to new computational models capable of solving various classes of problems more expressively and more efficiently. Issues like constraint handling, concurrency, and data-parallelism need to be tackled.