Abstract: Answer Set Programming (ASP) evolved from Logic Programming,
Deductive Databases, Knowledge Representation, and Nonmonotonic
Reasoning, and serves as a flexible language for solving problems in a
declarative way: the user does not need to provide an algorithm for
solving the problem; rather, (s)he specifies the properties of the
desired solutions by means of formal representations in a logic
program. The ASP system automatically computes the solutions having
the desired properties. ASP implements McCarthy's view of the
computation of intelligent artifacts, and it is considered a major
paradigm of logic-based Artificial Intelligence (AI). After more than
twenty years from the introduction of ASP, the core solving technology
has become mature, and a number of practical applications are
available.
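To make the declarative methodology concrete, here is a minimal sketch (not part of the abstract) that encodes graph 3-coloring as an ASP program and enumerates its answer sets. It uses the clingo Python API purely for scripting convenience rather than DLV, and the graph, predicate, and constant names are invented for illustration.

import clingo

# A tiny non-ground ASP encoding of graph coloring: the program states
# what a valid coloring is; the solver finds the colorings.
PROGRAM = """
node(1..3).
edge(1,2). edge(2,3).
color(red;green;blue).

% each node gets exactly one color
1 { assign(N,C) : color(C) } 1 :- node(N).

% adjacent nodes must not share a color
:- edge(X,Y), assign(X,C), assign(Y,C).

#show assign/2.
"""

ctl = clingo.Control(["0"])          # "0" = enumerate all answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("coloring:", m))

Each printed answer set is one valid coloring; the program only states what a coloring must satisfy, not how to compute it.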
In this talk, we illustrate our experience in bringing AI and ASP from
research to industry through the development of advanced applications
with DLV -- one of the major ASP systems. DLV has undergone
industrial exploitation by DLVSYSTEM, a spin-off company of the University
of Calabria, and has been successfully used in several real-world
applications. In the talk, we first present our
framework for AI application development, which is based on the latest
evolution of DLV into a server-like platform. Then, we focus on the
description of some industry-level applications of ASP, including
success stories and ongoing developments. Finally, we share the
lessons that we have learned from this experience, and discuss our
outlook on the possible roles of ASP-based technologies in the
modern AI spring.
Bio: Nicola Leone is a professor of Computer Science at the University of Calabria. He was previously a professor of Database Systems at Vienna
University of Technology until 2000. At the University of Calabria, he
served as Chair of the Computer Science Degree Program (2001-2008) and as
Head of the Department of Mathematics and Computer Science (2008-2018).
He has been elected Rector of the University of Calabria for the term 2019-2026.
He is internationally renowned for his research on Knowledge
Representation, Answer Set Programming (ASP), and Database Theory, and
for the development of DLV, a state-of-the-art ASP system that is
popular worldwide. He has published more than 250 papers in prestigious
conferences and journals and has more than 10,000 citations, with an
h-index of 54. He is area editor of the TPLP journal (Cambridge University Press)
for "Knowledge Representation and Nonmonotonic Reasoning", and has been
Keynote Speaker and Program Chair of several international
conferences, including JELIA and LPNMR. He is a fellow of ECCAI
(now EurAI), a recipient of two Test-of-Time awards from the ACM and
ALP, and a winner of many Best Paper Awards at top-level AI conferences.
Abstract: Resource allocation poses unique challenges ranging from load balancing in heterogeneous environments to privacy concerns and various service-level agreements.
In this tutorial, we highlight industrial applications from distinct problem domains that span both extremes of the optimization landscape: real-time operational decision making and resource provisioning for future needs. Motivated by real-world business requirements, we walk through how Constraint Programming delivers effective solutions in both settings and compare and contrast it with alternative approaches such as heuristics and mathematical programming.
While solving large-scale problems is of great practical importance, there is also a need for solutions that are not only efficient but also flexible and easy to update and maintain. We show how Constraint Programming neatly suits the needs of such dynamic environments with continually changing requirements.
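As a toy illustration of the kind of model the tutorial alludes to (not taken from the tutorial itself), the following sketch balances a few tasks across two machines with a constraint solver; it assumes Google OR-Tools CP-SAT is installed, and the task loads are invented.

from ortools.sat.python import cp_model

# Hypothetical data: assign 4 tasks to 2 machines, balancing the load.
task_load = [3, 5, 2, 4]
num_machines = 2

model = cp_model.CpModel()
# assign[t] is the machine chosen for task t
assign = [model.NewIntVar(0, num_machines - 1, f"assign_{t}") for t in range(len(task_load))]
# load[m] is the total load placed on machine m
load = [model.NewIntVar(0, sum(task_load), f"load_{m}") for m in range(num_machines)]

for m in range(num_machines):
    terms = []
    for t, w in enumerate(task_load):
        b = model.NewBoolVar(f"on_{t}_{m}")          # b <=> task t runs on machine m
        model.Add(assign[t] == m).OnlyEnforceIf(b)
        model.Add(assign[t] != m).OnlyEnforceIf(b.Not())
        terms.append(w * b)
    model.Add(load[m] == sum(terms))

# Minimize the maximum machine load (a simple load-balancing objective).
makespan = model.NewIntVar(0, sum(task_load), "makespan")
model.AddMaxEquality(makespan, load)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(a) for a in assign], "max load:", solver.Value(makespan))

The constraints state what a balanced assignment is; changing requirements (new tasks, extra machines, side constraints) only add or modify constraints rather than rewrite an algorithm.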
Bio: Serdar Kadioglu is the vice president of Artificial Intelligence at Fidelity Investments. He is also an adjunct faculty member in the Computer Science Department at Brown University, where he received his doctorate for work on solving combinatorial problems. Before financial services, he conducted research and development on industrial constraint solvers at Adobe and Oracle. His research interests are in discrete optimization, while his current work focuses on personalization and recommendation systems.
Abstract: In this tutorial we will discuss various Natural Language Question Answering challenges that have been proposed, including some that focus on the need for commonsense reasoning, and how knowledge representation and reasoning may play an important role in addressing them. We will discuss various aspects such as: which knowledge representation formalisms have been used, how they have been used, where to get the relevant knowledge, how to search for the relevant knowledge, how to know what knowledge is missing, and how to combine reasoning with machine learning. We will also discuss extraction of relevant knowledge from text, learning relevant knowledge from question answering datasets, and using crowdsourcing for knowledge acquisition. We will discuss the KR challenges that one faces when knowledge is extracted or learned automatically versus when it is manually coded. Finally, we will discuss using natural language as a knowledge representation formalism, together with natural language inference systems and semantic textual similarity systems.
Bio: Chitta Baral is a Professor of Computer Science at Arizona State University with research experience in various sub-fields of Artificial Intelligence (AI), such as Knowledge Representation and Reasoning, Natural Language Understanding, and Image Understanding, and their applications to Molecular Biology, Health Informatics, and Robotics. Chitta is the author of the book "Knowledge Representation, Reasoning and Declarative Problem Solving" published by Cambridge University Press. Chitta's research has been funded by various US federal agencies, including NSF, NASA, ONR, and IARPA. Chitta has been an Associate Editor of the Journal of AI Research and is currently an Associate Editor of the AI Journal, the two top journals in the field of AI. Chitta has been the Program Co-Chair (2014) and General Chair (2016) of the Knowledge Representation and Reasoning (KR&R) Conference and is a past President of KR Inc. Chitta has published extensively in major AI journals and conferences and has graduated many Ph.D. students in the field of AI. He has given invited talks at major AI conferences, including AAAI and KR&R. Some of the AI packages developed by Chitta and his students include NL2KR, a platform to create systems that can translate natural language to targeted formal and logical languages; Kparser, a semantic knowledge parser for natural language text; and DeepQA, a system to reason about biological pathways.
Abstract: Humans have evolved languages over thousands of years to provide useful abstractions for understanding and interacting with each other and with the physical world. Such languages include natural languages, mathematical languages and calculi, and most recently formal languages that enable us to interact with machines via human-interpretable abstractions. In this talk, I present the notion of a Reward Machine, an automata-based structure that provides a normal form representation for reward functions. Reward Machines can be used natively to specify complex, non-Markovian reward-worthy behavior. Furthermore, a variety of compelling human-friendly (formal) languages can be used as reward specification languages and straightforwardly translated into Reward Machines, including variants of Linear Temporal Logic (LTL), LDL, and a variety of regular languages. Reward Machines can also be learned and can be used as memory for interaction in partially-observable environments. By exposing reward function structure, Reward Machines enable reward-function-tailored reinforcement learning, including tailored reward shaping and Q-learning. Experiments show that such reward-function-tailored algorithms significantly outperform state-of-the-art (deep) RL algorithms, solving problems that otherwise can't reasonably be solved and critically reducing the sample complexity.
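A minimal sketch of the idea (not from the talk): a reward machine is an automaton whose states summarize the relevant history and whose transitions, triggered by propositional events, emit reward. The events "k" (got the key) and "d" (reached the door) and the reward values below are invented for illustration.

# A tiny reward machine: first observe event "k", then event "d".
# States: 0 = looking for the key, 1 = have the key, 2 = done (absorbing).
class RewardMachine:
    def __init__(self):
        # (state, event) -> (next_state, reward); unspecified pairs self-loop with 0 reward
        self.delta = {
            (0, "k"): (1, 0.0),   # picked up the key: progress, no reward yet
            (1, "d"): (2, 1.0),   # reached the door with the key: task complete
        }
        self.state = 0

    def step(self, event):
        next_state, reward = self.delta.get((self.state, event), (self.state, 0.0))
        self.state = next_state
        return reward

# The RL agent feeds the labelled events it observes into the machine and
# receives a reward signal that is non-Markovian with respect to the raw environment state.
rm = RewardMachine()
for event in ["", "k", "", "d"]:
    print(event or "-", rm.step(event), rm.state)

Because the machine's state is exposed to the learner, it can serve both as memory and as structure for reward shaping or per-state Q-functions.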
Bio: Sheila McIlraith is a Professor in the Department of Computer Science, University of Toronto. Prior to joining U of T, McIlraith spent six years as a Research Scientist at Stanford University, and one year at Xerox PARC. McIlraith is the author of over 100 scholarly publications on a variety of topics in artificial intelligence largely related in some way to sequential decision making, knowledge representation and reasoning, and search. McIlraith is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), associate editor of the Journal of Artificial Intelligence Research (JAIR), and is a past associate editor of the journal Artificial Intelligence (AIJ). McIlraith served as program co-chair of the 32nd AAAI Conference on Artificial Intelligence (AAAI-18), the 13th International Conference on Principles of Knowledge Representation and Reasoning (KR2012), and the International Semantic Web Conference (ISWC2004). In 2011 she and her co-authors were honoured with the SWSA 10-year Award, recognizing the highest impact paper from the International Semantic Web Conference, 10 years prior.
System PROJECTOR: An Automatic Program Rewriting Tool for Non-Ground Answer Set Programs - Yuliya Lierler
Abstract: Answer set programming is a popular constraint programming paradigm that has seen wide use across various industry applications. However, logic programs under answer set semantics often require careful design and nontrivial expertise from a programmer to obtain satisfactory solving times. In order to reduce this burden on a software engineer, we propose an automated rewriting technique for non-ground logic programs that we implement in the system PROJECTOR. We conduct rigorous experimental analysis, which shows that applying system PROJECTOR to a logic program can improve its performance, even after significant human-performed optimizations. This talk will present PROJECTOR and the conducted experimental analysis in detail.
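To give a flavour of the kind of rewriting involved, here is a hand-made illustration of the general projection idea rather than PROJECTOR's actual output: a variable that is irrelevant to a rule's head is projected away into an auxiliary predicate, which can shrink the grounding of the main rule. The clingo Python API (assumed to be installed) is used only to check that the original and rewritten programs yield the same answer sets on a small set of invented facts.

import clingo

FACTS = "q(1,1). q(1,2). r(2,7). r(2,8). s(7). s(8)."

# Original rule: variable Z joins r and s but does not appear in the head.
ORIGINAL = FACTS + " p(X) :- q(X,Y), r(Y,Z), s(Z). #show p/1."

# Projection-style rewriting: an auxiliary predicate projects Z away.
REWRITTEN = FACTS + """
aux(Y) :- r(Y,Z), s(Z).
p(X) :- q(X,Y), aux(Y).
#show p/1.
"""

def answer_sets(program):
    models = []
    ctl = clingo.Control(["0"])               # enumerate all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: models.append(sorted(str(a) for a in m.symbols(shown=True))))
    return sorted(models)

print(answer_sets(ORIGINAL) == answer_sets(REWRITTEN))   # expected: True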
Bio: Yuliya Lierler is an associate professor in the Computer Science Department at the University of Nebraska Omaha. Prior to coming to the University of Nebraska, Dr. Lierler was a Computing Innovation Fellow Postdoc at the University of Kentucky. She holds a Ph.D. in Computer Sciences from the University of Texas at Austin. Dr. Lierler’s research interests lie in the field of artificial intelligence, especially in the areas of knowledge representation, automated reasoning, declarative problem solving, and natural language understanding.
Abstract: Probabilistic models like Bayesian Networks enjoy a considerable amount of attention due to their expressiveness. However, they are generally intractable for performing exact probabilistic inference. In contrast, tractable probabilistic circuits guarantee that exact inference is efficient for a large set of queries. Moreover, they are surprisingly competitive when learning from data. In this tutorial, I present an excursus over the rich literature on tractable circuit representations, disentangling and making sense of the "alphabet soup" of models (ACs, CNs, DNNFs, d-DNNFs, OBDDs, PSDDs, SDDs, SPNs, etc...) that populate this landscape. I explain the connection between logical circuits and their probabilistic counterparts used in machine learning, as well as the connection to classical tractable probabilistic models such as tree-shaped graphical models. Under a unifying framework, I discuss which structural properties delineate model classes and enable different kinds of tractability. While doing so, I highlight the sources of intractability in probabilistic inference and learning, review the solutions that different tractable representations adopt to overcome them, and discuss what they are trading off to guarantee tractability. I will touch upon the main algorithmic paradigms to automatically learn both the structure and parameters of these models from data. Finally, I argue for high-level representations of uncertainty, such as probabilistic programs, probabilistic databases, and statistical relational models. These pose unique challenges for probabilistic inference and learning that can only be overcome by high-level reasoning about their first-order structure to exploit symmetry and exchangeability, which can also be done within the probabilistic circuit framework.
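As a toy illustration of the tractability claim (my own sketch, not from the tutorial): a smooth and decomposable sum-product circuit over two Boolean variables is evaluated bottom-up in a single pass, and marginalizing a variable amounts to setting both of its indicator leaves to 1. The structure and weights below are invented.

# Leaves are indicator nodes for Boolean variables; internal nodes are
# weighted sums (mixtures) and products over disjoint sets of variables.

def leaf(var, value):
    return ("leaf", var, value)

def product_node(*children):
    return ("prod", children)

def sum_node(*weighted_children):          # arguments are (weight, child) pairs
    return ("sum", weighted_children)

def evaluate(node, assignment):
    """Bottom-up evaluation, linear in the circuit size.
    assignment maps a variable to True/False, or to None to marginalize it out."""
    kind = node[0]
    if kind == "leaf":
        _, var, value = node
        observed = assignment.get(var)
        return 1.0 if observed is None or observed == value else 0.0
    if kind == "prod":
        result = 1.0
        for child in node[1]:
            result *= evaluate(child, assignment)
        return result
    return sum(w * evaluate(child, assignment) for w, child in node[1])

# A toy circuit over X1 and X2 (weights chosen arbitrarily, summing to 1).
circuit = sum_node(
    (0.4, product_node(leaf("X1", True),  leaf("X2", True))),
    (0.6, product_node(leaf("X1", False), sum_node((0.3, leaf("X2", True)),
                                                   (0.7, leaf("X2", False))))),
)

print(evaluate(circuit, {"X1": True, "X2": True}))    # joint probability: 0.4
print(evaluate(circuit, {"X1": False, "X2": None}))   # marginal P(X1 = False): 0.6
print(evaluate(circuit, {"X1": None, "X2": None}))    # partition function: 1.0

The same single pass answers complete evidence, marginal, and normalization queries, which is exactly the kind of tractability the structural properties buy.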
Bio: Guy Van den Broeck is an Assistant Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning (Statistical Relational Learning, Tractable Learning, Probabilistic Programming), Knowledge Representation and Reasoning (Probabilistic Graphical Models, Lifted Probabilistic Inference, Knowledge Compilation, Probabilistic Databases), and Artificial Intelligence in general. Guy is the recipient of the IJCAI-19 Computers and Thought Award. His work has been recognized with best paper awards from key artificial intelligence venues such as UAI, ILP, and KR, and an outstanding paper honorable mention at AAAI. Guy serves as Associate Editor for the Journal of Artificial Intelligence Research (JAIR).
Abstract: I will discuss a number of roles for logic in AI today, which include probabilistic reasoning, machine learning, and explaining AI systems. For probabilistic reasoning, I will show how probabilistic graphical models can be compiled into tractable Boolean circuits, allowing probabilistic reasoning to be conducted efficiently using weighted model counting. For machine learning, I will show how one can learn from a combination of data and knowledge expressed in logical form, where symbolic manipulations end up playing the key role. Finally, I will show how some common machine learning classifiers over discrete features can be compiled into tractable Boolean circuits that have the same input-output behavior, allowing one to symbolically explain the decisions made by these numeric classifiers.
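As a toy illustration of weighted model counting (my own example, not from the talk): every satisfying assignment of a propositional formula contributes the product of its literal weights, and the weighted model count is the sum of these contributions. The formula and weights below are invented, and the count is obtained by brute-force enumeration rather than by the circuit evaluation the talk describes.

from itertools import product

# Formula: (A or B) and (not A or C), encoded as a Python predicate.
def formula(a, b, c):
    return (a or b) and ((not a) or c)

# Literal weights: weight[var][True] and weight[var][False].
weight = {
    "A": {True: 0.3, False: 0.7},
    "B": {True: 0.6, False: 0.4},
    "C": {True: 0.5, False: 0.5},
}

# Weighted model count: sum over satisfying assignments of the product of literal weights.
wmc = 0.0
for a, b, c in product([True, False], repeat=3):
    if formula(a, b, c):
        wmc += weight["A"][a] * weight["B"][b] * weight["C"][c]

print(wmc)   # probability mass of the formula under the given literal weights

Compiling the formula into a suitable Boolean circuit lets the same quantity be computed in time linear in the circuit size instead of exponential enumeration.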
Bio: Adnan Darwiche is a professor and former chairman of the computer science department at UCLA. He directs the Automated Reasoning Group, which focuses on symbolic and probabilistic reasoning and their applications, including to machine learning. Professor Darwiche is an AAAI and ACM Fellow. He is a former editor-in-chief of the Journal of Artificial Intelligence Research (JAIR) and the author of "Modeling and Reasoning with Bayesian Networks" (Cambridge University Press). His group's YouTube channel can be found at: http://www.youtube.com/c/UCLAAutomatedReasoningGroup