The purpose of compile-time analysis is to statically derive (i.e., by considering only the syntax of the program) information that can be used to produce better compiled code and, more generally, to improve program execution. Both data and control information may be derived and used to increase speed and reduce code size. The issue is even more relevant when dealing with parallelism: compile-time analysis makes it possible to increase the degree of parallelism exploited, while at the same time reducing the run-time costs of performing the parallelization.
Various techniques have been adopted to extract knowledge about a program during compilation. It should be observed that logic programming languages, being based on a clear and well-defined mathematical semantics, allow relatively easy development of tools capable of performing semantically correct transformations of the program (something which is not always guaranteed in the case of procedural languages). The clean semantics of these languages, and its tight coupling with their operational behaviour, makes the task of collecting information about program behaviour simpler and theoretically well-founded.
Most of the analysis algorithms used so far are instances of a general method usually referred to as Abstract Interpretation [22, 1, 53, 20]. Abstract Interpretation is based on ``executing'' the program over a domain different from the Herbrand Universe, carefully selected so as to extract information about the behaviour of the execution.
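To make the idea of ``executing'' over an abstract domain concrete, the following is a minimal illustrative sketch (not taken from the works cited above, and using the classic sign domain for arithmetic expressions rather than a logic-programming domain): expressions are evaluated over the abstract values neg, zero, pos, and top (sign unknown) instead of over the integers, so a property such as positivity can be established without computing any concrete value.

```python
# Minimal abstract-interpretation sketch (illustrative only): evaluate
# arithmetic expressions over the abstract domain of signs instead of
# over the concrete integers.

NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abstract(n):
    """Abstraction function: map a concrete integer to its sign."""
    if n < 0:
        return NEG
    return ZERO if n == 0 else POS

def abs_add(a, b):
    """Abstract addition over signs."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b and a in (NEG, POS):
        return a          # pos+pos = pos, neg+neg = neg
    return TOP            # e.g. pos+neg: sign cannot be determined

def abs_mul(a, b):
    """Abstract multiplication over signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def analyse(expr):
    """Abstractly evaluate an expression tree: int or ('+'|'*', l, r)."""
    if isinstance(expr, int):
        return abstract(expr)
    op, left, right = expr
    f = abs_add if op == "+" else abs_mul
    return f(analyse(left), analyse(right))

# (3 * 4) + 5 is provably positive without ever computing its value;
# (-2) + 5 has unknown sign in this coarse domain.
print(analyse(("+", ("*", 3, 4), 5)))   # pos
print(analyse(("+", -2, 5)))            # top
```

In the logic-programming setting the abstract domain would instead describe, for instance, groundness or freeness of variable bindings, but the mechanism is the same: a sound, finite approximation of the concrete execution.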