The logical foundations of goal-regression planning in autonomous agents

Cited by: 33
Author(s)
Pollock, JL [1]
Affiliation
[1] Univ Arizona, Dept Philosophy, Tucson, AZ 85721 USA
Funding
US National Science Foundation;
Keywords
autonomous agents; defeasible reasoning; goal regression; OSCAR; planning;
DOI
10.1016/S0004-3702(98)00100-3
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This paper addresses the logical foundations of goal-regression planning in autonomous rational agents. It focuses mainly on three problems. The first is that goals and subgoals will often be conjunctions, and to apply goal-regression planning to a conjunction we usually have to plan separately for the conjuncts and then combine the resulting subplans. A logical problem arises from the fact that the subplans may destructively interfere with each other. This problem has been partially solved in the AI literature (e.g., in SNLP and UCPOP), but the solutions proposed there work only when a restrictive assumption is satisfied. This assumption pertains to the computability of threats. It is argued that this assumption may fail for an autonomous rational agent operating in a complex environment. Relaxing this assumption leads to a theory of defeasible planning. The theory is formulated precisely and an implementation in the OSCAR architecture is discussed. The second problem is that goal-regression planning proceeds in terms of reasoning that runs afoul of the Frame Problem. It is argued that a previously proposed solution to the Frame Problem legitimizes goal-regression planning, but also has the consequence that some restrictions must be imposed on the logical form of goals and subgoals amenable to such planning. These restrictions have to do with temporal-projectibility. The third problem is that the theory of goal-regression planning found in the AI literature imposes restrictive syntactical constraints on goals and subgoals and on the relation of logical consequence. Relaxing these restrictions leads to a generalization of the notion of a threat, related to collective defeat in defeasible reasoning. Relaxing the restrictions also has the consequence that the previously adequate definition of "expectable-result" no longer guarantees closure under logical consequence, and must be revised accordingly. That in turn leads to the need for an additional rule for goal-regression planning. Roughly, the rule allows us to plan for the achievement of a goal by searching for plans that will achieve states that "cause" the goal. Such a rule was not previously necessary, but becomes necessary when the syntactical constraints are relaxed. The final result is a general semantics for goal-regression planning and a set of procedures that is provably sound and complete. It is shown that this semantics can easily handle concurrent actions, quantified preconditions and effects, creation and destruction of objects, and causal connections embodying complex temporal relationships. (C) 1998 Elsevier Science B.V. All rights reserved.
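The goal-regression step the abstract builds on can be sketched in the restricted STRIPS-style setting that the paper sets out to relax: an action is relevant to a conjunctive goal if it adds some conjunct, it poses a threat if it deletes another conjunct, and regression replaces the achieved conjuncts with the action's preconditions. This is an illustrative sketch only, not Pollock's formalism (which is defeasible and drops exactly these syntactic restrictions); the `Action` type and the blocks-world literals are hypothetical examples.

```python
from typing import FrozenSet, NamedTuple, Optional

class Action(NamedTuple):
    """A STRIPS-style operator: preconditions, add list, delete list."""
    name: str
    pre: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

def regress(goal: FrozenSet[str], act: Action) -> Optional[FrozenSet[str]]:
    """Regress a conjunctive goal through an action.

    Returns the subgoal that must hold before the action, or None if
    the action is irrelevant or destructively interferes with another
    conjunct (a 'threat' in the computable, STRIPS-style sense).
    """
    if not (act.add & goal):      # irrelevant: achieves no conjunct
        return None
    if act.delete & goal:         # threat: clobbers another conjunct
        return None
    return (goal - act.add) | act.pre

# Hypothetical blocks-world example: stack(A,B) achieves on(A,B).
stack = Action(
    name="stack(A,B)",
    pre=frozenset({"holding(A)", "clear(B)"}),
    add=frozenset({"on(A,B)", "clear(A)"}),
    delete=frozenset({"holding(A)", "clear(B)"}),
)

# Regressing {on(A,B), clear(C)} keeps the untouched conjunct clear(C)
# and replaces on(A,B) with the action's preconditions.
subgoal = regress(frozenset({"on(A,B)", "clear(C)"}), stack)
```

Note that the threat test here is decidable only because effects are finite literal lists; the paper's first problem is precisely that an autonomous agent in a complex environment cannot assume threats are computable in this way, which motivates treating them defeasibly instead.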
Pages: 267-334
Page count: 68
References
46 in total
[1] Allen J, 1987, Readings in Planning
[2] [Anonymous], 1994, RIDDLE PYRAMIDS
[3] Barrett A, Weld DS. Partial-order planning: evaluating possible efficiency gains. Artificial Intelligence, 1994, 67(1): 71-112
[4] Blum A, 1995, Proc. 14th Int. Joint Conf. on Artificial Intelligence, p. 1636
[5] Blum AL, Furst ML. Fast planning through planning graph analysis. Artificial Intelligence, 1997, 90(1-2): 281-300
[6] Brafman RI, 1998, 9806 FC BEN GUR U DE
[7] Chapman D. Planning for conjunctive goals. Artificial Intelligence, 1987, 32(3): 333-377
[8] Etzioni O, 1992, Principles of Knowledge Representation and Reasoning: Proc. 3rd Int. Conf. (KR '92), p. 115
[9] Ferguson G, 1994, Proc. 2nd Int. Conf. on AI Planning Systems
[10] Fikes RE, Nilsson NJ. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 1971, 2(3-4): 189-208