
Volume 13
Jul 2000



The Intelligence of an Entity
 by Robby Glen Garner and Steven Boyd Henderson

Preface

Mimetic Synthesis is a new term that more accurately describes a programming methodology used to mimic human behavior on a computer such as a PC. Previous work in this field has been incorrectly categorized under various branches of Artificial Intelligence (AI).

On Intelligence

Testing and quantifying intelligence is difficult at best, even when it’s human intelligence. To quote Tariq Samad in "Notes on Measuring Intelligence in Constructed Systems": "The difficulty of compressing the multifaceted nature of intelligence into one scalar quotient has led to proposals to consider intelligence not as one unitary quantity but as a collection of properties that are mutually incommensurable." Furthermore, one of the many lessons from a century of work on human intelligence is that we still don’t really know what intelligence is.

Mimetic Entities

The early mimetic systems developed by Robby Garner are hierarchical in structure. This allows the "Mimetic Entity" to synthesize the combined behavior of subsystems into a unified presentation. This structure suggests that one way to measure the intelligence of such a machine is to review the hierarchical concepts it uses and the processes that contribute to the goals of the whole system.

One of the first hierarchical mimetic synthesizers was called Albert. This program combined the behavior of several methods that shared the same goal of simulating human conversation. Each method represents a separate strategy used to form the response to a human stimulus phrase.

The first method is based on a simple model of behavior, where conversation is represented by strings of (stimulus -- response) nodes. The goal of this particular method is to find a match for the user’s input stimulus in a database, and to form the reply with the corresponding "response" from the database. If the first method is not successful, the program proceeds down the hierarchy from the most specific method to the least specific.
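A minimal sketch of this first method, assuming a plain dictionary as the stimulus-response database. The function name and the sample entries are illustrative; Albert's actual storage and matching code are not described in the article.

```python
# Hypothetical sketch of the first method: exact stimulus-response lookup.
# The "database" here is a plain dict keyed by normalized stimulus text.

def stimulus_response_lookup(stimulus, database):
    """Return the canned response for a known stimulus, or None on a miss."""
    key = stimulus.strip().lower()
    return database.get(key)

responses = {
    "hello": "Hi there. What would you like to talk about?",
    "how are you": "I'm doing fine, thanks for asking.",
}

print(stimulus_response_lookup("Hello", responses))
```

Returning None on a miss is what lets the hierarchy hand the input down to the next, less specific method.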

The second method looks in a table of Boolean rules and attempts to fit a rule to the user’s input. If a rule is satisfied, its corresponding response is used. The goal of this method is to satisfy a Boolean expression based on the user’s input phrase.
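The second method could be sketched as a table of predicate/response pairs tried in order. The rule contents below are invented for illustration; only the idea of a Boolean rule table comes from the article.

```python
import re

# Hypothetical sketch of the second method: a table of Boolean rules over
# the words of the input phrase, tried in order until one is satisfied.

def boolean_rule_match(stimulus, rules):
    """Return the response of the first rule whose predicate holds, else None."""
    words = set(re.findall(r"[a-z']+", stimulus.lower()))  # tokenize, drop punctuation
    for predicate, response in rules:
        if predicate(words):
            return response
    return None

rules = [
    (lambda w: "rain" in w or "sunny" in w,
     "I don't get outside much, being a program."),
    (lambda w: "music" in w,
     "I like anything with a steady rhythm."),
]

print(boolean_rule_match("Do you think it will rain?", rules))
```

Each predicate is an arbitrary Boolean expression over the input, so a rule can be as broad or as narrow as the author likes.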

Next, the third method attempts to find a generalization about the user’s input phrase, using a "framed" template to determine a match. The goal of this method is to find a generalization that applies to the user’s input phrase.
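One way to picture the "framed" template idea is wildcard patterns whose captured text is folded back into the response. The `*` wildcard syntax and the sample templates are assumptions for illustration, not Albert's actual frame format.

```python
import re

# Hypothetical sketch of the third method: wildcard templates that
# generalize over the input and reuse the captured fragment in the reply.

def template_match(stimulus, templates):
    """Match the input against '*' templates; fill the capture into the response."""
    for pattern, response in templates:
        regex = re.escape(pattern).replace(r"\*", "(.+)")
        m = re.fullmatch(regex, stimulus.strip(), re.IGNORECASE)
        if m:
            return response.format(m.group(1).strip())
    return None

templates = [
    ("i like *", "What do you like about {}?"),
    ("my name is *", "Nice to meet you, {}."),
]

print(template_match("I like old radios", templates))
```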

Then finally, if none of the other methods has succeeded, a final method selects a "new topic" from a pool of unused topics. The goal of this method is merely to make a response (to change the subject). So one can see that the overall goal of simulating conversation is pursued through a variety of strategies, all contributing to the main goal. The hierarchical structure ensures that the most specific available response is used.

It should be obvious that the performance of the mimetic entity, with regard to simulating a conversation, depends entirely on the performance of all of these various methods or subsystems. Yet it depends first and foremost on the person talking to it.

The Loebner Show

But what can we say about Albert’s intelligence? None of the methods used are intelligent, so their "unified" representation is not intelligent. That Albert may be perceived as intelligent by a human being is evidenced by the 1998 Loebner Prize Contest, but the program is not in fact intelligent. http://www.cs.flinders.edu.au/research/AI/LoebnerPrize/

Then if we can know what intelligence is not, does that tell us what intelligence is?

No, because none of the competitors in the Loebner contest have exhibited intelligence. At best they exhibit a behavior which seems familiar to the user (judge), and some of them have used very clever means to achieve this. But the ingenuity of the programmer does not make the program intelligent. One also has to agree that an imitation is not the same as the thing it imitates. Furthermore, some may object to things that are artificial for no other reason except that they are artificial. Yet if a thing works, does it matter why it works or what it is made from? Some people would say that if a thing is not really "intelligent" then it is an impostor, and therefore "dangerous." But if a tool performs a job according to specification, why is that less intelligent than if a human being had performed the same job?

By doing a job, there is at least one goal implied: the completion of the job. If a computer completes the same job as a human in less time, we would say the computer has better performance, not better intelligence.

Human Intelligence

In dealing with other people, we assess their intelligence on a casual basis by observing their behavior, the things they say, their solutions to problems, or other factors, many of which are purely subjective.

Measuring machine intelligence would be much easier if people could agree on how to measure human intelligence!

So I think there is always a disparity between "perceived intelligence" and "actual intelligence", especially in the evaluation of human intelligence. Intelligence is not solely performance, but is it possible to measure intelligence without also measuring a performance?

Sometimes a performance involves a great deal of preparation and training. If a man repeats the same sequence of behavior, practices it over and over until it can be done repetitively without thinking, is that intelligence?

Summary

The key to true intelligence is the ability of an entity to enlist strategy to accomplish its mission, not preconceived knowledge, or rote behavior.

Military confrontation is a good example, according to R. Neil Bishop: "Time and time again, superior firepower and resources have been overcome by an inferior force with an intuitive strategy, which gave them a monumental advantage."

Strategy is also the key element needed to develop successful research techniques which, in pure science, may not even exist before the scientist begins. The strategy of obtaining and integrating knowledge is the key to reaching beyond what is presently known or understood.

The use of strategy not only applies at the highest level of abstraction, but is also evident in the "rank and file" subsystems that perform even the most basic tasks required by the entity as a whole. The strategy or algorithm employed by a programmer may be akin to "instinct" in some systems. Is instinctive behavior intelligent?

Robby Garner
Chief of Research / Mimetics
Technology Applied Systems Corporation