A Theory of Heuristic Reasoning About Uncertainty

Authors

  • Paul R. Cohen
  • Milton R. Grinberg
Abstract

This article describes a theory of reasoning about uncertainty, based on a representation of states of certainty called endorsements. The theory of endorsements is an alternative to numerical methods for reasoning about uncertainty, such as subjective Bayesian methods (Shortliffe and Buchanan, 1975; Duda, Hart, and Nilsson, 1976) and the Shafer-Dempster theory (Shafer, 1976). The fundamental concern with numerical representations of certainty is that they hide the reasoning that produces them and thus limit one's reasoning about uncertainty. While numbers are easy to propagate over inferences, what the numbers mean is unclear. The theory of endorsements provides a richer representation of the factors that affect certainty and supports multiple strategies for dealing with uncertainty.

Nothing is certain. People's certainty of the past is limited by the fidelity of the devices that record it, their knowledge of the present is always incomplete, and their knowledge of the future is but speculation. Even though nothing is certain, people behave as if almost nothing is uncertain. They are adept at discounting uncertainty, making it go away. This article discusses how AI programs might be made similarly adept.

Two types of uncertainty have been studied in AI. One arises from noisy data, illustrated in speech understanding and vision programs; the other is associated with the inference rules found in many expert systems. These types of uncertainty are managed by different methods. Noisy data are usually handled by a control structure that exploits converging evidence. The decision to attend to a hypothesis or ignore it is related to how certain the hypothesis is: systems are said to establish and extend "islands of certainty" (e.g., Erman, Hayes-Roth, Lesser, and Reddy, 1980). But these systems do not reason explicitly about their beliefs in their hypotheses.

When inference rules are themselves uncertain, some systems augment domain inferences with parallel certainty inferences. Systems such as EMYCIN (van Melle, 1980) associate certainty factors with the conclusions of inference rules. A rule of the form "IF A and B and C, THEN D" asserts D when A, B, and C are certain; additionally, a number may be associated with D to indicate one's belief that D follows from A, B, and C. It may be that A, B, and C, though certain, suggest but do not confirm D, in which case the number associated with D might be less than the 1.0 that usually represents certainty in such systems. If A, B, or C are uncertain, then the number associated with D is modified to account for the uncertainty of its premises. These numbers are given different names by different authors; we refer to them as degrees of belief (Shafer, 1976). The functions that propagate degrees of belief over inferences are called combining functions. Domain rules are assigned a priori degrees of belief, and the purpose of the combining functions is to faithfully represent the intent of each in the context in which it is eventually used. Some systems propagate not one degree of belief, but two, indicating a range of certainty. In all cases, one's certainty in a hypothesis is represented only by a numerical degree of belief.
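As a concrete illustration of such parallel certainty inferences, the sketch below shows one way a rule of this form might carry a degree of belief alongside its domain inference. This is a minimal sketch, not EMYCIN's actual implementation: taking the minimum of the premises' beliefs and attenuating it multiplicatively are simplifying assumptions in the spirit of the certainty-factor schemes the article describes, and all names are illustrative.

```python
# Minimal sketch of a rule with a parallel certainty inference.
# Assumptions (not EMYCIN's actual code): conjunctive premises combine
# by taking the minimum of their degrees of belief, and the rule's
# a priori degree of belief attenuates that minimum multiplicatively.

def apply_rule(premise_beliefs, rule_belief):
    """Derive a degree of belief in the rule's conclusion.

    premise_beliefs -- degrees of belief in A, B, C, ... (0.0 to 1.0)
    rule_belief     -- a priori degree of belief that the conclusion
                       follows when all premises are certain
    """
    # A conjunctive rule is only as believable as its weakest premise.
    weakest = min(premise_beliefs)
    # The combining function: scale the rule's a priori belief
    # by the uncertainty of the premises.
    return rule_belief * weakest

# "IF A and B and C, THEN D": A, B, C certain, but the rule itself only
# suggests D (a priori belief 0.9), so D is asserted with belief 0.9.
print(apply_rule([1.0, 1.0, 1.0], 0.9))   # 0.9

# If B is uncertain, the derived belief in D drops accordingly.
print(apply_rule([1.0, 0.7, 1.0], 0.9))   # approximately 0.63
```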
Problems with Current Approaches to Uncertainty

There are serious limitations to current numerical approaches to reasoning under uncertainty. Some approaches use just a single number to represent a degree of belief, but, as Quinlan (1982) points out, "the single value tells us nothing about its precision." Another problem with single numbers is that they combine evidence for and against a proposition, and so one cannot distinguish between disbelief and a lack of evidence pro or con (Shafer, 1976). Various schemes have been used to overcome these representational deficits, such as ranges instead of point values and separate measures of belief and disbelief (see Quinlan, 1982, for a review).

Most systems use a variant or generalization of Bayes's theorem to derive the degree of belief in a conclusion from the degrees of belief of the preconditions. Unfortunately, Bayes's theorem requires masses of statistical data in addition to the degrees of belief in preconditions. Almost always, subjective expert judgments are used in place of these data, with the attendant risks of inaccuracy and inconsistency (Shortliffe and Buchanan, 1975; Duda, Hart, and Nilsson, 1976; and Shortliffe, Buchanan, and Feigenbaum, 1979, discuss the reasons for success and failure in several medical programs that use Bayes's theorem).

These are well-known, documented problems with numerical approaches to reasoning about uncertainty. The remainder of this section discusses the representational problems in more detail. In particular, it proposes that numerical approaches to reasoning under uncertainty are restricted because the set of numbers is not a sufficiently rich representation to support considerable heuristic knowledge about uncertainty and evidence.

Numerical degrees of belief in current AI systems are of two kinds: those specified initially as qualifications of domain inference rules and those that are derived from the initial numbers as the system reasons. Initial numbers are usually supplied by domain experts; for example, an expert investment counselor may associate a degree of belief of 0.6 with the inference that advanced age implies low risk tolerance. It is not always clear what the 0.6 means. It may mean that 60% of the elderly people in a sample have low risk tolerance. More often, the 0.6 represents an expert's degree of belief that a person has low risk tolerance if he or she is old. The number is presumably a summary of the reasons for believing and disbelieving the inference; but once summarized, these reasons are inaccessible. This is one reason that explanations in expert systems are limited to a recitation of the inferences that led to a conclusion. Current systems explain how they arrived at a conclusion, but not why they tend to believe (or disbelieve) it. None of these systems appears able to interpret its degrees of belief.

The second kind of numerical degree of belief found in AI systems is the kind derived by reasoning. A general schematic of the derivation of degrees of belief is shown in Figure 1. At the top of the figure are two domain inferences from investment counseling: Rule 1 states that advanced age implies low risk tolerance, and Rule 2 infers that the client needs a high proportion of bonds if he or she has low risk tolerance. Associated with each rule is an initial degree of belief (0.6 and 0.8, respectively) supplied by the domain expert. These numbers are combined with the degrees of belief of their rule's premises to produce a derived degree of belief in the conclusion.
For example, if it is certain that risk tolerance is low, then 0.8 represents the undiluted degree of belief in the conclusion that the client should have a preponderance of bonds. But if, as in Figure 1, it is less than certain that risk tolerance is low, then the degree of belief in the conclusion about bonds should be less than 0.8. In Figure 1, this premise is not fully believed, and the certainty in the conclusion is represented by the product of 0.8 and 0.6. The function that combines these two numbers, in this case by multiplication, is called a combining function.
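Chaining the two rules of Figure 1 with this multiplicative combining function can be worked through directly. The sketch below assumes, as the figure does, that the client's advanced age is itself certain; the chaining is an illustrative reconstruction, with values taken from the text.

```python
# Worked version of Figure 1 under the multiplicative combining function
# described above. Rule beliefs follow the text; the chaining itself is
# an illustrative reconstruction.

RULE1_BELIEF = 0.6   # Rule 1: advanced age -> low risk tolerance
RULE2_BELIEF = 0.8   # Rule 2: low risk tolerance -> high proportion of bonds

belief_advanced_age = 1.0                                  # premise is certain
belief_low_tolerance = RULE1_BELIEF * belief_advanced_age  # 0.6
belief_bonds = RULE2_BELIEF * belief_low_tolerance         # 0.8 * 0.6 = 0.48

# Prints 0.6 and approximately 0.48 (up to floating-point rounding).
print(belief_low_tolerance, belief_bonds)
```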


Similar Articles

A Framework for Heuristic Reasoning About Uncertainty

This paper describes a theory of reasoning about uncertainty, based on a representation of states of certainty called endorsements (see Cohen and Grinberg, 1983, for a more detailed discussion of the theory). The theory of endorsements is an alternative to numerical methods for reasoning about uncertainty, such as subjective Bayesian methods (Shortliffe and Buchanan, 1975; Duda, Hart, and Nils...

Full text

A Unifying Model for Representing and Reasoning About Context under Uncertainty

Modeling and reasoning about context under uncertainty is a major challenge in context-aware computing. This paper proposes a novel approach to represent context in a unifying way and to perform reasoning about context represented with that model, under uncertainty. We develop a novel reasoning approach based on MultiAttribute Utility Theory as the means to integrate heuristics about the relati...

Full text

Uncertainty in Heuristic Knowledge and Reasoning

Heuristic knowledge, and reasoning based on that knowledge, is more or less uncertain, while computers behave logically, based on rigid principles of operation. In order for computers to mimic human intelligence using such uncertain heuristic knowledge, they must have a certain model to represent and process the uncertainty. The paper recalls how Artificial Intelligence has dealt with uncerta...

Full text

Probability Theory Ensues from Assumptions of Approximate Consistency: A Simple Derivation and its Implications for AGI

By slightly tweaking some recent mathematics by Dupré and Tipler, it is shown that, if an intelligence even approximately obeys certain simple consistency conditions in its reasoning about uncertainty, then its uncertainty judgments must approximately obey the rules of probability theory. It is argued that, while real-world cognitive systems will rarely be fully consistent, they will often poss...

Full text

Evidential Equilibria in Static Games: Heuristics and Biases in Strategic Interaction

Standard equilibrium concepts in game theory find it difficult to explain the empirical evidence for a large number of static games such as prisoner's dilemma, voting, public goods, oligopoly, etc. Under uncertainty about what others will do in one-shot games of complete and incomplete information, evidence suggests that people often use evidential reasoning (ER), i.e., they assign diagnostic ...

Full text

Uncertainty.

Almost all information is subject to uncertainty. Uncertainty may arise from inaccurate or incomplete information (e.g., how large are the current U.S. petroleum reserves?), from linguistic imprecision (what exactly do we mean by "petroleum reserves"?), and from disagreement between informat...

Full text


Journal:
  • AI Magazine

Volume 4, Issue

Pages -

Publication date: 1983