But this default is not normal [1].

[1] N. Roos, Reasoning by cases in Default Logic, Artificial Intelligence 99 (1998) 165-183.

Genesereth and Nilsson (1987) and Russell and Norvig (2009) explain how to compute predicate completion.

The default theory T = (∅, {true : p / p}) has the single extension E = Th({p}).

And then we can allow quantification over super-predicate symbols.

Moreover, in a revision process it is expected that all formulas independent of the validity of the input information should be retained in the revised state of belief; this intuitive idea can be given a precise meaning using a suitable definition of possibilistic independence between events [Dubois et al., 1999a]. Later work by [Baker 1991] and [Shoham 1988] addressed this problem.

CSM-122, Department of Computer Science, University of Essex, Wivenhoe Park, Colchester CO4 3SQ.

At the popular level it has produced a weird conception of the potential capabilities of machines in general.

Non-monotonic reasoning deals with incomplete and uncertain models; classical logic is monotonic, but human reasoning is not.

Non-Monotonic Logic I. Drew McDermott, Department of Computer Science, Yale University, New Haven, CT 06520, U.S.A.; Jon Doyle, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. (Artificial Intelligence, p. 41.)

Typically, high school dropouts are adults.

Normal default theories always have extensions.

Defaults of this form are called semi-normal; [Etherington, 1987a] studied this class of default theories and gave a sufficient condition for the existence of extensions.

Speaking generally, human reasoning is not reducible to collecting facts and deriving their consequences; it embodies an active epistemic attitude that involves making assumptions and wholesale theories about the world and acting in accordance with them.

S is assumed to be true as long as there is no evidence to the contrary.
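The extension E = Th({p}) above, and the "dropouts are adults" default, can be illustrated with a minimal sketch of extension computation for normal default theories. This is not any published implementation: it assumes a propositional language restricted to literals, represents a normal default "prereq : just / just" as a pair of strings, and writes negation of an atom p as "-p".

```python
def extension(w, defaults):
    """Fixpoint sketch for a *normal* propositional default theory (W, D).

    w        -- set of literal strings (the facts W); 'true' models truth
    defaults -- list of (prerequisite, justification) pairs, each standing
                for the normal default  prereq : just / just
    Returns the literals of the (unique, for literal-only normal theories)
    extension; closing under full classical consequence is omitted.
    """
    e = set(w)
    changed = True
    while changed:
        changed = False
        for prereq, just in defaults:
            # A normal default fires when its prerequisite is derived and
            # its justification is consistent with what is derived so far.
            neg = just[1:] if just.startswith('-') else '-' + just
            if prereq in e and neg not in e and just not in e:
                e.add(just)
                changed = True
    return e

# T = (emptyset, {true : p / p}) yields the single extension Th({p}):
print(extension({'true'}, [('true', 'p')]))
# "Typically, high school dropouts are adults":
print(extension({'true', 'dropout'}, [('dropout', 'adult')]))
```

Because every default here is normal, the loop can never block itself, which mirrors the theorem that normal default theories always have extensions.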
A connection like ϕ * ψ ⊨ σ iff ϕ ⊨ ψ > σ, for some kind of conditional operator >.

The basic formalisms of nonmonotonic reasoning could hardly be called formalizations of commonsense reasoning.

It can be argued that normal logic programs, with appropriate semantics for negation, are sufficient to solve the frame problem in artificial intelligence.

Basically, the problem was that the reasoning necessary for intelligent behavior and decision making in realistic situations turned out to be difficult, even impossible, to represent as deductive inference in a logical system.

[… 1997] are the basis for robotics research in the Cognitive Robotics Group at the University of Toronto [Scherl & Levesque 1993].

For non-monotonic reasoning, it is necessary to extend Horn clauses to clauses of the form

    A0 ← A1, …, Am, not B1, …, not Bn

where each Ai and Bi is an atomic formula, and "not" is read as negation as failure ("not provable").

The qualification problem was first pointed out by [McCarthy 1977]. Situations, basically, are the results of performing sequences of actions.

Belief revision: because new knowledge may contradict old beliefs.

This suppositional character of commonsense reasoning conflicts with the monotonic character of logical derivations. One of the major motivations came from reasoning about actions and events.

They prove that, under the assumption that the database consists of clauses whose length is bounded by some constant, default logic and autoepistemic logic can express all of the Σ^p_2-recognizable Boolean queries, while preference-based logics cannot. Results about the redundancy of certain versions of circumscription and default logic are presented.

Non-monotonic reasoning is based on default reasoning, or "most probable choice". Lehmann and Magidor [1992], p. 41, show that the decision procedure for rational closure is essentially as complex as the satisfiability problem for propositional logic.

Default non-monotonic logic, Volume 3, Issue 4, Peter Mott.
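The extended clause form A0 ← A1, …, Am, not B1, …, not Bn can be evaluated by forward chaining, treating "not B" as "B is not currently derived". The sketch below is illustrative only: it handles propositional atoms and assumes a stratified, non-recursive-through-negation program (like the classic birds-fly example), for which a single fixpoint pass gives the intended model.

```python
def closure(facts, rules):
    """Naive forward chaining with negation as failure.

    facts -- set of atom strings assumed true
    rules -- list of (head, positives, negatives) triples encoding
             head <- positives, not negatives
    'not b' succeeds iff b is absent from the model built so far; this is
    only sound for stratified, non-recursive examples like the one below.
    """
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in model \
                    and all(p in model for p in pos) \
                    and all(n not in model for n in neg):
                model.add(head)
                changed = True
    return model

rules = [('flies', ['bird'], ['abnormal'])]   # flies <- bird, not abnormal
print(closure({'bird'}, rules))               # birds fly by default

# Adding 'abnormal' retracts the conclusion: the hallmark of non-monotonicity.
print(closure({'bird', 'abnormal'}, rules))
```

Note how enlarging the fact set shrinks the set of conclusions, which is exactly the behavior classical (monotonic) consequence cannot exhibit.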