Profile of AM, Automated Mathematician
GENERAL
Domain:
Mathematics
Main General Function:
Invent, create, and conjecture mathematical concepts.
System Name:
AM (Automated Mathematician)
Dates:
1975-1976
Researchers:
Douglas B. Lenat
Location:
Stanford Univ.
Language:
Interlisp, January 1975 release
Machine:
SUMEX, PDP-10, KI-10 uniprocessor, 256k core memory
Brief Summary:
AM is initially given a collection of 115 core concepts, with only a few facets filled in for each concept. Its sole activity is to choose some facet of some concept, and fill in that particular slot. In so doing, new notions will often emerge. Uninteresting ones are forgotten, mildly interesting ones are kept as parts of one facet of one concept, and very interesting ones are granted full concept-module status. Each of these new modules has dozens of blank slots; hence the space of possible actions grows rapidly. The same heuristics are used both to suggest new directions for investigation and to limit attention: both to sprout and to prune.
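A minimal sketch of this triage, in Python (AM itself was written in Interlisp); the thresholds, facet names, and data shapes here are assumptions for illustration only:

    # Sketch of the triage of newly emerged notions. Thresholds, names,
    # and the facet chosen for storage are assumptions, not AM's actual code.

    FORGET_BELOW = 200    # below this, the notion is simply forgotten
    PROMOTE_ABOVE = 700   # above this, it is granted full concept-module status

    def triage(notion_name, interestingness, parent_facets, concepts):
        """Decide the fate of a notion that emerged while filling a slot."""
        if interestingness < FORGET_BELOW:
            return                                   # uninteresting: forget it
        if interestingness > PROMOTE_ABOVE:
            # Very interesting: a new concept module with dozens of blank
            # facets, each a potential future task.
            concepts[notion_name] = {"Name": notion_name, "Worth": interestingness}
        else:
            # Mildly interesting: kept as part of one facet of one concept.
            parent_facets.setdefault("Conjectures", []).append(notion_name)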
Related Systems:
a) None
b) Eurisko
c) None
CATEGORY TWO
Characterization of Givens:
115 basic concepts from set theory
250 heuristics
Characterization of Output:
About 200 new concepts were synthesized, of which about 130 were acceptable and roughly two dozen were significant.
Is the data reliable?
N/A
Is the data complete?
N/A
Generic Tasks:
In one sense, AM can be thought of as a design process, i.e., designing the concept hierarchy of mathematics.
In another sense, AM can be thought of as a self-augmenting knowledge mass, like a snowball growing bigger and bigger.
In yet another sense, AM can be thought of as an automatic-programming process, in which Lisp code is manipulated and synthesized without a predefined goal.
Theoretical Commitment:
None
Reality:
Mathematics is viewed as a process of finding relationships in empirical data; concepts are then viewed as these relationships.
CATEGORY THREE
Completeness:
AM is fully implemented.
Use:
AM is a research vehicle; it is not used as a production system.
Performance:
Are there any performance measures available?
No.
How was the system evaluated? How did it fare?
It is very difficult to evaluate a system like AM, since there is nothing it must do and nothing it must not do.
The evaluation is therefore based on the concepts and conjectures that AM synthesized. In this sense, AM is remarkably successful.
CATEGORY FOUR
Phases:
Is the system organized into distinct phases of different activity?
Distinct subtasks? What are they?
No.
Subfunctions:
None
Use of Simulation or Analysis:
No.
System/Control Implementation Architecture:
The top-level control mechanism is implemented as an agenda: a list of tasks, each with a worth value. The task with the highest worth value is executed first.
The bottom-level control mechanism is implicit in the heuristic rules.
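A rough Python sketch of this two-level scheme; the task structure and the heuristics_for lookup are hypothetical stand-ins, not AM's actual code (the real agenda also merges duplicate tasks and combines their supporting reasons):

    import heapq
    import itertools

    class Agenda:
        """Priority queue of tasks, ordered by worth value (sketch)."""
        def __init__(self):
            self._heap = []
            self._tie = itertools.count()   # stable tie-break for equal worths

        def add(self, worth, concept, facet, reasons):
            heapq.heappush(self._heap,
                           (-worth, next(self._tie), concept, facet, reasons))

        def pop(self):
            neg_worth, _, concept, facet, reasons = heapq.heappop(self._heap)
            return -neg_worth, concept, facet, reasons

        def __bool__(self):
            return bool(self._heap)

    def run(agenda, heuristics_for):
        # Top level: always execute the highest-worth task first.
        while agenda:
            worth, concept, facet, reasons = agenda.pop()
            # Bottom level: the heuristic rules attached to (concept, facet)
            # decide how the slot actually gets filled, possibly adding tasks.
            for heuristic in heuristics_for(concept, facet):
                heuristic(agenda, concept, facet, reasons)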
CATEGORY FIVE
Characterization of Structure Knowledge:
Knowledge is represented as concepts, which are stored as frames. Each concept has about two dozen facets, such as Name, Definition, Examples, Generalizations, Specializations, Conjectures, Interest, and Worth.
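A rough rendering of such a frame as a Python dataclass; the field set is abridged and the layout is illustrative only:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        """A concept module: a frame of named facets (abridged sketch)."""
        name: str
        definitions: list = field(default_factory=list)   # alternative definitions
        examples: list = field(default_factory=list)
        generalizations: list = field(default_factory=list)
        specializations: list = field(default_factory=list)
        conjectures: list = field(default_factory=list)
        interest: list = field(default_factory=list)      # reasons it is interesting
        worth: int = 100                                  # numeric worth rating

    primes = Concept(
        name="Prime Numbers",
        examples=[2, 3, 5, 7, 11, 13, 17],
        generalizations=["Numbers"],
        specializations=["Odd Primes", "Prime Pairs"],
        worth=800,
    )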
Characterization of Process Knowledge:
The process consists of choosing some facet of some concept and filling in that particular slot.
Deep or Surface:
Some facets, such as Definitions, Examples, and Worth, carry deep knowledge: AM can look inside the facet and manipulate its contents.
Other facets, such as Name and Conjectures, carry surface knowledge: AM treats them as opaque strings.
CATEGORY SIX
Search Space:
The search space consists of the Cartesian product of concepts and facets (together with their sub-facets).
Space Traversal:
The space is traversed by selecting the most interesting task in the agenda. The interestingness of a task is determined by the task itself and the reasons supporting the task.
Search Control Strategy:
Intuitively, the more interesting a concept is, the more time the system will spend on it. This accords with the way human mathematicians work.
Standard Search Strategies:
AM uses a function to determine the interestingness of a task. The function takes as input the concept, the facet, the action, and the reasons supporting the task, and returns a number between 0 and 1000.
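A toy version of such a rating function; the weights and the way the reasons are combined are assumptions for illustration, not AM's actual formula:

    # Toy task-worth function in the spirit of AM's 0..1000 rating.
    # Weights and the reason-combination rule are assumptions.

    def task_worth(concept_worth, facet_worth, action_worth, reason_ratings):
        """Combine component ratings into a task worth in [0, 1000]."""
        if not reason_ratings:
            return 0
        # More independent supporting reasons raise the score.
        reasons = (sum(reason_ratings) / len(reason_ratings)
                   + 10 * len(reason_ratings))
        raw = (0.4 * concept_worth + 0.2 * facet_worth
               + 0.2 * action_worth + 0.2 * reasons)
        return max(0, min(1000, int(raw)))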
Search Control Characterization:
A best-first search strategy is used in the top-level control.
Subproblems:
N/A
Search Control Representation:
The search-control knowledge (i.e., the heuristics) is attached to sub-facets of concepts. A process of rippling, i.e., traversing the concept hierarchy, is used to find all relevant heuristics.
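A sketch of rippling, assuming (hypothetically) that each concept frame lists its generalizations and its locally attached heuristics:

    def relevant_heuristics(concept_name, concepts):
        """Collect heuristics from a concept and everything it specializes.

        Rippling: a heuristic attached to a general concept applies to all
        of its specializations, so we walk up the generalization links.
        Field names here are illustrative, not AM's actual slot names.
        """
        found, seen, frontier = [], set(), [concept_name]
        while frontier:
            name = frontier.pop()
            if name in seen:
                continue
            seen.add(name)
            frame = concepts[name]
            found.extend(frame.get("Heuristics", []))
            frontier.extend(frame.get("Generalizations", []))
        return found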
Search Control Strength:
AM uses domain-dependent yet general heuristics. That is, the heuristics are all about mathematics, but they do not apply to particular concepts; rather, they are generic rules of the kind used in human mathematical reasoning.
CATEGORY SEVEN
Failure Method:
N/A
Uncertainty:
N/A
Management of Uncertainty:
N/A
Management of Time:
No.
CATEGORY EIGHT
Knowledge Representation Method:
Knowledge is represented as concepts. Here is an example:
NAME: Prime Numbers
DEFINITIONS:
ORIGIN: Number-of-divisors-of(x) = 2
PREDICATE-CALCULUS: Prime(x) <=> (∀ z)(z|x ⇒ (z = 1 XOR z = x))
ITERATIVE: (for x > 1): For i from 2 to √x, ¬(i|x)
EXAMPLES: 2, 3, 5, 7, 11, 13, 17
BOUNDARY: 2,3
BOUNDARY-FAILURES: 0, 1
FAILURES: 12
GENERALIZATIONS: Nos., Nos. with an even no. of divisors
SPECIALIZATIONS: Odd Primes, Prime Pairs, Prime Uniquely-addables
CONJECS: Unique factorization, Goldbach's conjec., Extrema of Number-of-divisors-of
INTEREST: Conjectures tying Primes to Times, to Divisors-of, to related operations
WORTH: 800
Knowledge Representation Generality:
AM uses Lisp as its representation language; many lambda expressions are used to represent and manipulate knowledge.
Knowledge Structuring:
Concepts are structured as a hierarchy based on generalization/specialization relations.
CATEGORY NINE
Alternative Representations:
Sub-facets can be used as alternative representations of the same piece of knowledge. For example, a concept can have more than one definition, such as Origin, Predicate-calculus, or Iterative, each with its own range of applicability. The purpose of these alternative representations is to provide alternative methods for generating examples, as illustrated below.
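A small illustration using the Prime Numbers concept from Category Eight: two definition styles for one concept, rendered as hypothetical Python predicates:

    import math

    # Two definition styles for the same concept (illustrative renderings
    # of the Prime Numbers definitions shown in Category Eight).

    def prime_predicate(x):
        """Declarative style: x is prime iff every divisor of x is 1 or x."""
        return x > 1 and all(z == 1 or z == x
                             for z in range(1, x + 1) if x % z == 0)

    def prime_iterative(x):
        """Iterative style: test divisors only up to the square root;
        cheaper, hence better suited to generating examples in bulk."""
        return x > 1 and all(x % i != 0
                             for i in range(2, math.isqrt(x) + 1))

    # The iterative form generates examples quickly; the declarative form
    # is the more direct statement against which an example is checked.
    examples = [n for n in range(2, 20) if prime_iterative(n)]
    assert all(prime_predicate(n) for n in examples)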
Alternative Solution Methods:
A concept can be synthesized in more than one way. In that case, AM regards the coincidence as significant and increases the interestingness of the concept.
Optimization:
N/A
Multiple Results:
N/A
CATEGORY TEN
Interaction:
There are three ways a user can interact with AM:
- Rename a new concept.
- Ask for the reason behind an action.
- Choose one particular task from the system-generated candidate tasks.
Data collection:
N/A
Data format:
N/A
Acquisition:
Knowledge is encoded in the program. There is no formal process of knowledge acquisition in AM.
Learning:
No.
Explanation:
AM will explain the reasons for a particular task or a particular conjecture if asked. However, these reasons are hand-coded strings; AM cannot manipulate them.
CATEGORY ELEVEN
Strengths:
The system is coded in Lisp. As the author pointed out, the density of worthwhile mathematical concepts when represented in Lisp is one main factor in the system's success. Although the heuristics are relatively general, they are powerful enough to guide automated mathematical research, at least for the first two hours of a run.
Weaknesses:
One main weakness of AM is that the heuristics are hand-coded before the system runs. As a run grows longer, the original heuristics prove too general and not powerful enough to analyze the newly found concepts.
Another weakness is that there is no theoretical basis for the heuristics; thus it is difficult to tell whether the results are predetermined by them.
Other: