Scheduling

While in sequential systems the order in which the execution proceeds is predetermined and unique (and it may characterize the operational semantics of the language), in parallel systems a new issue emerges: choosing the order in which the execution develops. Typically, the quantity of parallel work generated by the computation is larger than the actual number of executing resources (i.e., processors). This means that, whenever an agent completes its current activity, it is forced to look for a new piece of work to execute. This activity of searching for new work is generally called scheduling.

In principle, the main objective of scheduling should be to select work so that the resources--especially processors--are used efficiently and the execution completes in a time as close as possible to the earliest achievable one.

As we have mentioned before, execution in a parallel system can be viewed as the process of visiting/developing a computation tree (and-tree, or-tree, or and/or-tree). Since the nodes of the tree represent the different instances of (parallel) work available, scheduling can be described as the activity of selecting nodes in such a tree. In particular, in the case of and/or-parallelism, the selection of a node also implies the selection of a specific form of parallelism (and- or or-) to be assigned to the computing entity under consideration.
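
The following sketch (in Python, purely for illustration; the Node and pick_work names are hypothetical and do not belong to any of the systems discussed here) shows this view of scheduling as node selection: an idle agent is handed an unclaimed node of the and/or-tree, and the kind of the node determines which form of parallelism the agent will exploit.

    from collections import deque

    class Node:
        """A node of the computation tree; 'kind' records which form of
        parallelism ('and' or 'or') the node offers."""
        def __init__(self, kind, children=()):
            self.kind = kind
            self.children = list(children)
            self.taken = False           # True once some agent has claimed it

    def pick_work(root):
        """Breadth-first search for an unclaimed node.  The kind of the
        selected node tells the idle agent whether it will perform and- or
        or-parallel work."""
        frontier = deque([root])
        while frontier:
            node = frontier.popleft()
            if not node.taken:
                node.taken = True
                return node
            frontier.extend(node.children)
        return None                      # no parallel work left in the tree

A real scheduler would of course restrict the search to a convenient portion of the tree and protect the nodes with locks; the point is only that scheduling amounts to selecting nodes, and that in an and/or-tree the selection also fixes the kind of work assigned to the agent.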

It is customary to distinguish two categories of ``work''. Speculative work is work that lies within the scope of a pruning operator and may ultimately be pruned away--in which case any effort spent on its execution is wasted. Mandatory work is work that is not speculative.

Various criteria can be considered in developing a scheduling algorithm [10]. A first distinction [32] can be made depending on how work is made ``available'' to the rest of the system: in work-driven scheduling, the agent that generates parallel work actively makes it available to the other agents, while in demand-driven scheduling idle agents search the computation for work to pick up.

Most of the or-parallel schedulers developed are demand-driven (this is the case for the various schedulers developed for Muse and Aurora): or-parallel work is detected by searching for choice points with unexplored alternatives in the subtrees generated by other or-agents. The few and-parallel systems available mostly use work-driven scheduling, in which each agent generating a parallel call pushes the subgoals associated with the call on a work queue/stack, from which other agents pick them up for remote execution.
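
As an illustration of the work-driven style, the sketch below (hypothetical names, a single shared pool, Python only for readability; it is not the interface of any actual system) shows the agent executing a parallel call publishing its subgoals in a shared queue from which idle agents pick up work. A demand-driven scheduler would instead leave the work where it is generated and let idle agents go looking for it.

    import threading
    from collections import deque

    class WorkPool:
        """Shared pool of subgoals; producers push, idle agents steal."""
        def __init__(self):
            self._subgoals = deque()
            self._lock = threading.Lock()

        def push(self, subgoal):
            with self._lock:
                self._subgoals.append(subgoal)

        def steal(self):
            with self._lock:
                return self._subgoals.popleft() if self._subgoals else None

    pool = WorkPool()

    def parallel_call(subgoals):
        """Work-driven: the agent executing the parallel call publishes the
        subgoals itself, instead of waiting to be asked for work."""
        for goal in subgoals:
            pool.push(goal)

    def idle_agent(execute):
        """An idle agent repeatedly picks up published subgoals and runs them."""
        while (goal := pool.steal()) is not None:
            execute(goal)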

Another distinction that can be made is based on the frequency of invocation of the scheduler: in non-preemptive scheduling the scheduler is invoked only when an agent runs out of work, while in preemptive scheduling the scheduler is also invoked periodically and may move an agent away from the work it is currently executing.

Preemptive scheduling has been introduced in the most recent or-parallel schedulers (both for Muse [6] and Aurora [99, 10]) in order to deal with speculative work. At regular intervals the execution is interrupted and the or-agents are moved towards less speculative work, i.e., work that has a lower probability of being pruned.
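
The idea can be sketched as follows (a hedged illustration only: the speculativeness measure used here, the number of enclosing pruning operators, is an assumption made for the example and is not the actual Muse or Aurora policy):

    import heapq

    class WorkItem:
        def __init__(self, goal, spec_level):
            self.goal = goal
            # Number of pruning operators whose scope encloses the work;
            # 0 means mandatory work, higher values mean more speculative work.
            self.spec_level = spec_level
        def __lt__(self, other):
            return self.spec_level < other.spec_level

    class Agent:
        def __init__(self, current):
            self.current = current       # the WorkItem being executed

    def preemptive_tick(agents, available):
        """Invoked periodically: if less speculative work is available, an
        agent abandons its current (more speculative) work and takes the
        better one."""
        heapq.heapify(available)
        for agent in agents:
            if available and available[0].spec_level < agent.current.spec_level:
                better = heapq.heappop(available)
                heapq.heappush(available, agent.current)   # return the old work
                agent.current = better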

All the schedulers developed for and-parallel systems have been non-preemptive. The only case of preemption emerges in the implementation of dependent and-parallel systems, where and-agents are moved to different subgoals whenever the current execution suspends waiting for a binding of a shared variable (for which the current execution is not the producer). Only recently has the design of more effective schedulers for and-parallel systems, which also take the speculativeness of work into account, been started [82].

A further issue emerges when dealing with and/or-parallel systems. In these systems, the scheduler is not only concerned with selecting the best piece of work, but it must also decide which kind of work (and- or or-) should be taken. It has been shown [32] that maintaining a fixed configuration of the teams of agents may not lead to the best possible performance. This justifies the need for a top-level scheduler (known as the reconfigurer in Andorra-I), whose only task is to reconfigure the teams in response to variations in the distribution of or- and and-work in the computation. Principles for designing a reconfigurer have been studied by Dutra [32]. The presence of a reconfigurer is a key factor in the effectiveness of an and/or-parallel system.
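
The sketch below illustrates the kind of decision a reconfigurer takes; the work measures and the simple proportional policy are assumptions made here for illustration, not the actual Andorra-I algorithm.

    def reconfigure(num_agents, or_work, and_work):
        """Split num_agents between or- and and-work so that the share of
        agents doing or-work roughly tracks the observed share of or-work."""
        total = or_work + and_work
        if total == 0:
            half = num_agents // 2
            return half, num_agents - half
        or_agents = round(num_agents * or_work / total)
        or_agents = max(1, min(num_agents - 1, or_agents))   # keep both kinds alive
        return or_agents, num_agents - or_agents

    # With 8 agents and three times as much or-work as and-work the
    # reconfigurer assigns 6 agents to or-work and 2 to and-work.
    print(reconfigure(8, or_work=300, and_work=100))          # -> (6, 2)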



