In the previous section we introduced the basic concepts of logic programming and intuitively justified why logic programming languages are extremely appealing for parallel execution.
In this section we give a more detailed description of the major forms of parallelism exploitable in logic programming languages, and analyze the main problems that must be solved to obtain efficient implementations.
As mentioned previously, two major forms of parallelism are commonly identified in logic programming: or-parallelism and and-parallelism. Exploiting these two forms of parallelism requires efficient solutions to different problems, related mainly to managing the control of the computation (for and-parallelism) and managing environments (for or-parallelism).
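As a concrete illustration of where these two forms of parallelism arise, consider the following sketch (the predicate names `path/2`, `edge/2`, and `process/2` are hypothetical, chosen only for this example):

```prolog
% Or-parallelism: the two clauses defining path/2 are alternative
% ways to solve the same goal. A parallel system may explore both
% clauses simultaneously, but each branch produces its own bindings
% for X and Y, so each needs a separate binding environment.
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% And-parallelism: the two goals in the body below share no
% variables, so they are independent and may, in principle, be
% executed concurrently within a single solution path.
process(In1, In2) :- compute(In1), compute(In2).
```

In the first case the parallel branches correspond to different potential solutions (hence the environment-management problem); in the second, the concurrently executed goals all contribute to the same solution (hence the problem of controlling and synchronizing the computation).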
Before entering into the details of these issues, we give a general overview of the principles on which the most common sequential implementations of logic programming are based, in order to make the presentation more self-contained.