Research

Constraint Programming

Constraint Programming (CP) is a software technology for the declarative description and effective solving of large, particularly combinatorial, problems, especially in the areas of planning and scheduling. Constraints arise in most areas of human endeavour: they formalise dependencies in the physical world and its mathematical abstractions. A constraint is simply a logical relation among several unknowns (or variables), each taking a value in a given domain. A constraint thus captures partial information about the variables of interest and restricts the possible values that the variables can take. The important feature of constraints is their declarative nature, i.e., they specify what relationship must hold without specifying a computational procedure to enforce that relationship.
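The ideas above can be illustrated with a minimal sketch: variables with finite domains, constraints as logical relations over subsets of the variables, and a generic backtracking search that enforces them. The map-coloring instance, variable names, and solver below are invented for illustration; they are not part of any particular CP system.

```python
# A minimal backtracking solver for a toy constraint satisfaction problem.
# Each constraint is a (scope, predicate) pair: the predicate states *what*
# must hold over the assignment, not *how* to enforce it.

def solve(variables, domains, constraints, assignment=None):
    """Return the first complete assignment satisfying all constraints, or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # A constraint is checked once all of the variables in its scope are assigned.
        if all(check(assignment) for scope, check in constraints
               if all(v in assignment for v in scope)):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Illustrative instance: color four regions so that adjacent regions differ.
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]
constraints = [((a, b), lambda asg, a=a, b=b: asg[a] != asg[b])
               for a, b in adjacent]

solution = solve(variables, domains, constraints)
```

Note that the constraints only state the required relation (inequality of neighboring colors); the search procedure that enforces them is entirely generic, which is the declarative character described above.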

View Constraint Programming publications.

 

Distributed Constraint Optimization

A distributed constraint optimization problem (DCOP) is a problem in which multiple agents coordinate with each other to take on values such that the sum of the resulting constraint costs, which depend on the values of the agents, is minimized. DCOPs are a popular way of formulating and solving multi-agent coordination problems such as the distributed scheduling of meetings, the distributed coordination of unmanned air vehicles, and the distributed allocation of targets in sensor networks. Privacy concerns in the scheduling of meetings and the limited communication and computation resources of each sensor in a sensor network make centralized constraint optimization difficult. The nature of these applications therefore calls for a distributed approach.
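The DCOP objective can be made concrete with a small sketch: each agent controls one variable, and the goal is the joint assignment minimizing the sum of the constraint costs. The agents, domain, and cost function below are invented for illustration, and the enumeration is centralized; actual DCOP algorithms (e.g., ADOPT, DPOP) compute the same optimum via distributed message passing among the agents.

```python
from itertools import product

# Toy DCOP: three agents, each choosing a value in {0, 1}.
agents = ["a1", "a2", "a3"]
domain = [0, 1]

def cost(assignment):
    """Sum of constraint costs for a joint assignment (illustrative costs)."""
    x = assignment
    return (  (x["a1"] == x["a2"])       # binary cost: a1 and a2 should differ
            + 2 * (x["a2"] != x["a3"])   # binary cost: a2 and a3 should agree
            + x["a1"])                   # unary cost: a1 prefers the value 0

# The DCOP optimum: the joint assignment with minimal total cost.
best = min((dict(zip(agents, values))
            for values in product(domain, repeat=len(agents))),
           key=cost)
```

Here the optimum assigns a1 = 0, a2 = 1, a3 = 1 with total cost 0; in the distributed setting each agent would hold only its own variable and the cost tables it participates in.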

View Distributed Constraint Optimization publications.

 


Probabilistic Planning

Many real-world planning problems occur in environments where there may be incomplete information or where actions may not always lead to the same results. Examples include planning for retirement, where the state of the economy in the future is uncertain, and planning in logistics, where the duration of travel between two cities is uncertain due to potential congestion.
Probabilistic planning is an extension of nondeterministic planning with information on the probabilities of nondeterministic events. A Markov decision process (MDP) is a popular framework for modeling decision making in these kinds of problems, where an agent needs to plan a sequence of actions that maximizes its chances of reaching its goal. A partially observable MDP (POMDP) is an extension where the world that the agent is operating in is only partially observable, and a decentralized (PO)MDP (Dec-POMDP) is an extension where a team of agents needs to collectively plan their joint actions.

View Probabilistic Planning publications.

 
Research Awards

RECENT GRANTS

INTERNATIONAL AWARDS

 

DEPARTMENTAL AWARDS

  • Outstanding Teaching Assistant Awards
    Chuan Hu and Reza Tourani — Spring 2014
    Ben Wright — Fall 2013
    Khoi Nguyen — Spring 2013
    Hieu Nguyen and Tiep Le — Fall 2012
    Ferdinando Fioretto — Spring 2012
    Hieu Nguyen — Spring 2011
    Nancy Alajarmeh — Fall 2010
  • Outstanding Research Assistant Awards
    Ferdinando Fioretto — 2013
    Son Thanh To — 2012
    Khoi Nguyen — 2011
    Son Thanh To — 2010