Intractability Principle and the Concept of Z-Hardness


Professor Lotfi A. Zadeh*

Almost all scientific theories are based on Aristotelian logic and probability theory. Use of these theories has led to brilliant successes that are visible to all. What is less visible is that alongside these successes lie many problem areas where progress has been slow or nonexistent. We cannot automate driving in city traffic; we cannot construct programs that summarize non-stereotypical stories; and our capacity to do economic forecasting without human intervention leaves much to be desired.

To understand the reasons for the mixed picture, it is germane to start with the thesis that, in general, a system for performing a specified task may be viewed as having three principal components: (1) hardware; (2) software; and (3) what may be called brainware--a complex of concepts, theories, methods, and algorithms that govern the functioning of hardware and software.

The capability of a system to perform a specified task is a function of the capabilities of its hardware, software, and brainware components. In general, there is a tradeoff between these capabilities. However, there are important classes of problems in which the overall capability is limited primarily by the structure of brainware and, more particularly, by the underlying systems for logical reasoning and dealing with uncertainty.

What may be called the Intractability Principle is a thesis that is focused on problems that are solvable by humans but difficult for machines. More specifically, the principle relates machine-solvability to the structure of brainware. A basic concept that underlies the principle is that of a qualitative complexity scale, or QCS for short. The low end of a QCS consists of problems/tasks that are easy to solve/perform; the opposite holds for the high end. An M-scale is a QCS that applies to machines; an H-scale applies to humans. For simplicity, a QCS is assumed to be linearly ordered; it can be partially ordered, if needed. An example of an M-scale is one for the automation of driving a car: at the low end is the problem of automating driving on a freeway with no traffic; at the high end is the automation of driving in Istanbul.

Stated informally, the thesis has two parts, (1) negative and (2) positive, which are summarized as follows. (1) As the complexity of a problem increases, a critical threshold is reached beyond which a solution cannot be achieved through the use of techniques based on two-valued logical systems and probability theory. On an M-scale, problems that lie beyond the critical threshold are said to be Z-hard. (2) Beyond the critical threshold, achievement of a solution necessitates the use of the fuzzy-logic-based methodology of computing with words (CW) and the perception-based theory of probabilistic reasoning (PTp).
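To make the structure of the thesis concrete, the following minimal Python sketch represents an M-scale as a linearly ordered list of tasks and marks a critical threshold beyond which tasks are Z-hard. The task list and the position of the threshold are illustrative assumptions, not part of the principle itself.

    # Minimal sketch of an M-scale; the tasks and the threshold
    # position are illustrative assumptions.
    M_SCALE = [
        "drive on a freeway with no traffic",   # low end
        "drive on a freeway in light traffic",
        "drive in suburban traffic",
        "drive in dense city-center traffic",
        "drive in Istanbul",                    # high end
    ]

    CRITICAL_THRESHOLD = 3  # index of the first Z-hard task (assumed)

    def is_z_hard(task: str) -> bool:
        """Negative thesis: tasks at or beyond the threshold cannot be
        solved with two-valued logic and standard probability theory."""
        return M_SCALE.index(task) >= CRITICAL_THRESHOLD

    print(is_z_hard("drive on a freeway with no traffic"))  # False
    print(is_z_hard("drive in Istanbul"))                   # True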

The basic idea underlying the Intractability Principle is that problems that lie beyond the critical threshold are intractable by conventional measurement-based methods. What is not widely recognized is that many problems that are simple for humans fall into this category. Here are a few examples of Z-hard problems.

(1) Automation of driving a car. In this case, as stated already, on one end of the complexity scale we have automation of driving on a freeway with no traffic. On the other end is automation of driving in Istanbul. Humans can drive in Istanbul without any measurements or any computations. At this juncture, no conceivable system can solve the problem of automation of driving in Istanbul.

(2) Machine-execution of the instruction "Make a right turn at the intersection." In this case, at one end of the scale we have an intersection on the outskirts of Princeton; at the other end, an intersection in the center of New York.

(3) Summarization. At one end of the scale is a short stereotypical story; at the other end is a book. As a task, summarization is an order of magnitude more complex than machine translation. In (1), (2), and (3), the problems at the high end of the scale are Z-hard.

There are two key points that require comment. First, in most real-world problems that are simple for humans, the critical threshold is not a remote limit of no practical concern; it is crossed routinely. This underscores the intrinsic limitation of techniques based on Aristotelian logic and probability theory in dealing with problems in which decision-relevant information is, for the most part, perception-based.

The second point relates to the pivotal role of the methodology of computing with words in making it possible to solve problems that lie beyond the critical threshold. The crux of the matter is that in most problems that lie beyond the critical threshold, perception-based information is described by propositions expressed in a natural language, e.g., "traffic is usually very heavy in late afternoon." In general, such propositions are f-granular in the sense that (1) the boundaries of perceived classes are unsharp; and (2) the values of attributes are granulated, with a granule being a clump of values drawn together by indistinguishability, similarity, proximity, and functionality. The problem with theories based on Aristotelian logic and probability theory is that they do not have the capability to deal with f-granular propositions drawn from a natural language. In large measure, the importance of computing with words derives from the fact that it does provide this capability.
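As a hedged illustration of f-granularity, the sketch below encodes the proposition "traffic is usually very heavy in late afternoon" with two fuzzy granules, "very heavy" over traffic flow and "usually" over proportions. All breakpoints and the sample readings are assumptions made for the example, not values drawn from the text.

    # Sketch of an f-granular proposition; all breakpoints and data
    # are assumptions made for illustration.

    def very_heavy(vph: float) -> float:
        """Granule 'very heavy traffic' over flow (vehicles/hour);
        the boundary is unsharp: membership ramps from 0 to 1."""
        if vph <= 2000:
            return 0.0
        if vph >= 3500:
            return 1.0
        return (vph - 2000) / 1500

    def usually(p: float) -> float:
        """Granule 'usually' over proportions in [0, 1]."""
        if p <= 0.6:
            return 0.0
        if p >= 0.85:
            return 1.0
        return (p - 0.6) / 0.25

    # "Traffic is usually very heavy in late afternoon": the proportion
    # of late-afternoon readings that are very heavy should be 'usually'.
    readings = [3600, 3400, 2900, 1800, 3700]  # hypothetical vph values
    proportion = sum(very_heavy(v) for v in readings) / len(readings)
    print(round(usually(proportion), 2))  # degree to which the proposition holds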

A key concept in computing with words is that of Precisiated Natural Language (PNL). Basically, PNL is a subset of a natural language, NL, which consists of propositions that are precisiable through translation into what is called the Generalized Constraint Language, GCL. PNL serves an important function as a definition language, with the language of fuzzy if-then rules being a sublanguage of PNL.
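To suggest what translation into GCL involves: a generalized constraint has the canonical form "X isr R", where X is a constrained variable, r identifies the modality of the constraint, and R is the constraining relation. The Python encoding and the membership function below are illustrative assumptions, not a rendering of the formal definition.

    # Sketch of a generalized constraint "X isr R"; the encoding and
    # the membership function are assumptions made for illustration.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class GeneralizedConstraint:
        variable: str                       # X, the constrained variable
        modality: str                       # r: "" possibilistic, "p" probabilistic, "u" usuality, ...
        relation: Callable[[float], float]  # R, as a membership function

    # Precisiation of "Carol is young" (hypothetical membership function):
    def young(age: float) -> float:
        return max(0.0, min(1.0, (40.0 - age) / 15.0))

    constraint = GeneralizedConstraint("Age(Carol)", "", young)
    print(constraint.relation(30))  # degree to which age 30 satisfies "young"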

Computing with words provides a basis for an important generalization of probability theory--from standard probability theory, PT, to perception-based probability theory, PTp. Separately and in combination, computing with words and perception-based theory of probabilistic reasoning open the door to a solution of problems that lie beyond the critical threshold. This is the principal rationale for the positive thesis of the Intractability Principle.
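To indicate the direction of the PT-to-PTp generalization in code: in PT a probability is a point in [0, 1], while in PTp it may itself be a fuzzy granule over [0, 1]. The granule shapes below are assumptions made for illustration.

    # Sketch of the PT -> PTp generalization; the granule shapes
    # are assumed for illustration.

    def pt_probability(p: float) -> float:
        """PT: the crisp probability 0.8, seen as a degenerate granule."""
        return 1.0 if p == 0.8 else 0.0

    def ptp_probability(p: float) -> float:
        """PTp: the perception 'likely', a granule centered near 0.8."""
        return max(0.0, 1.0 - abs(p - 0.8) / 0.2)

    # PT can only confirm or deny p = 0.8; PTp grades nearby values.
    for p in (0.65, 0.80, 0.95):
        print(p, pt_probability(p), round(ptp_probability(p), 2))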

* Professor in the Graduate School and Director, Berkeley Initiative in Soft Computing (BISC), Computer Science Division, Department of EECS, University of California, Berkeley, CA 94720-1776; Telephone: 510-642-4959; Fax: 510-642-1712; E-mail: zadeh@cs.berkeley.edu. Research supported in part by ONR Contract N00014-99-C-0298, NASA Contract NCC2-1006, NASA Grant NAC2-117, ONR Grant N00014-96-1-0556, ONR Grant FDN0014991035, ARO Grant DAAH 04-961-0341, and the BISC Program of UC Berkeley.

