Using Abstraction in Reasoning about Autonomous Agents and Multiagent Systems

Funding agency:  Natural Sciences and Engineering Research Council of Canada

Principal Investigator:
Yves Lespérance, Department of Electrical Engineering and Computer Science, York University

Duration:  5 years  (2022 -- 2027)
 

Abstract

When developing autonomous agents that perform tasks in complex dynamic environments, the use of abstraction is crucial in planning the agent's actions, in explaining the agent's behaviour, and in reinforcement learning. We can use a simplified abstract model to generate high-level solutions efficiently, and later refine these using the detailed concrete model. The abstract model may be expressed in terms that humans understand, while the concrete model can be used by the machine. More generally, we may have a multi-tier representation where various models are used to perform reasoning at different levels of detail and to address different kinds of contingencies.

In recent work with Banihashemi and De Giacomo, I developed a formal account of agent abstraction in the situation calculus, a well-known predicate logic framework for reasoning about action. We assume that we have both a high-level and a low-level specification of the agent, each represented as an action theory in the situation calculus. A refinement mapping specifies how each high-level action is implemented by a low-level ConGolog program and how each high-level predicate is translated into a low-level formula. We defined notions of sound/complete abstraction between such action theories and showed that sound/complete abstractions have many useful properties, ensuring that we can reason about the agent's actions (e.g., synthesize plans) at the abstract level and then refine the resulting solutions at the low level. The framework can also be used to generate high-level explanations of low-level behaviour.
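
As a minimal illustration (the domain and all names below are invented for this sketch, not drawn from our papers), consider a delivery robot. A refinement mapping m might implement the high-level action deliver(p) by a sequence of low-level actions (ConGolog's ";" denotes sequencing) and translate the high-level fluent Delivered(p) into a low-level formula:

\[
\begin{aligned}
m(\mathit{deliver}(p)) &= \mathit{goto}(\mathit{storage});\ \mathit{pickup}(p);\ \mathit{goto}(\mathit{dest}(p));\ \mathit{putdown}(p)\\
m(\mathit{Delivered}(p)) &= \mathit{At}(p, \mathit{dest}(p)) \land \lnot\mathit{Holding}(p)
\end{aligned}
\]

If the high-level theory is a sound abstraction of the low-level one under m, then, roughly, a plan found at the high level (here, the single action deliver(p)) is guaranteed to have an executable low-level refinement, namely the four-step program above.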

In this project, my students and I will extend this work and apply it to new problems. First, we will examine how to generalize our account to nondeterministic domains, where actions have many possible outcomes that are not under the agent's control. Second, we will study how abstraction can be exploited to perform explainable planning, bridging the gap between the human user's model and the system's model. Third, we will examine how to accommodate temporally extended goals and abstract plans in "practical reasoning", where an agent progressively refines/revises her intentions over time while keeping them consistent; the agent should not have to consider fully detailed plans and should be able to exploit knowledge about how goals/plans interact. Fourth, we will study how abstraction can be used in reasoning about other agents and about how they can help/interfere with the accomplishment of one's goals; this should yield a form of decentralized multi-agent epistemic planning, where each agent generates her own plan, delegates subgoals to other agents, and knows enough about the other agents' abilities, intentions, and willingness to cooperate to be confident that her goals will be achieved. Finally, we will look at how to synthesize abstractions that are useful for a given application/purpose.
