A Mimetic Entity is a machine that may mimic the personality of a human
being. A Functional Response Emulation Device is an ME which synthesizes
the combined behavior of agents or subsystems that share a common goal or
goals into a unified representation or function, perhaps using a "human"
interface with us.
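As a rough illustration of that idea (every class, method, and agent name
below is hypothetical -- this is not any actual FRED implementation), an ME
can be pictured as a facade that routes a user's utterance to specialized
subsystems and merges their partial results into one unified reply:

```python
# Hypothetical sketch only: an ME as a facade over specialized agents.
# All names are illustrative; no real Mimetic Entity system is implied.

class Agent:
    """A subsystem handling one isolated piece of human behavior."""
    def __init__(self, category, handler):
        self.category = category  # e.g. "greeting", "echo"
        self.handler = handler    # function: utterance -> partial reply

class MimeticEntity:
    """Unifies agents sharing a common goal behind one 'human' interface."""
    def __init__(self, agents):
        self.agents = agents

    def respond(self, utterance):
        # Each agent contributes its specialized piece of behavior...
        parts = [agent.handler(utterance) for agent in self.agents]
        # ...and the ME synthesizes them into a single representation.
        return " ".join(part for part in parts if part)

me = MimeticEntity([
    Agent("greeting", lambda u: "Hello!" if "hello" in u.lower() else ""),
    Agent("echo", lambda u: f"You said: {u}"),
])
print(me.respond("Hello there"))  # -> Hello! You said: Hello there
```

The point of the sketch is only that the user sees one conversational
surface, while the work underneath is done by separate, narrow subsystems.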
I believe that "Intelligence" in the common usage is a very broad term. I
would prefer to think about these systems (most of AI) in terms of
ARTIFICIAL REASONING, which is a subset of human behavior.
It seems apparent that, since technology is only capable of producing
isolated pieces of human behavior, the metrics may have to be tailored for
each category of components. With that in mind, the first order of business
would be to define these categories.
Newell listed these properties that an intelligent system must have.
They are, in fact, categories of human behavior:
- recognize and make sense of a scene
- understand a sentence
- construct a correct response from the perceived situation
- form a sentence that is both comprehensible and carries the meaning of
the selected response
- represent a situation internally
- be able to do tasks that require discovering relevant knowledge
Yet human intelligence also has properties that are very hard to describe:
- creativity (ability to synthesize new ideas from old ones)
- attitude (determination vs. apathy for instance)
- comprehension of BOTH spoken and written language
Ray Dillinger of Neuromedia Inc. proposed some categories for future
chatterbot contests in a Wired magazine article. He says:
"I think that the Turing Test is focusing far too much attention on the
process of deceiving people. I'd much prefer to see individual competitions
for different events relating to the real obstacles involved -- 'best
semantic comprehension of user utterances', 'most useful chatterbot', 'best
ability to answer direct questions', 'best ability to infer user emotion
from user utterances', 'best ability to resolve references to previous
conversation', etc."
I agree with his idea for these categories, but he fails to understand the
nature of the deception in a Turing test. To me, it is like this:
Sensitivity Training is for people who want to *SEEM* sensitive, much like
mimetic training is for robots that we want to seem human.
There is no shame in only seeming human; in fact, it is admirable to strive
for this kind of advanced ergonomics. The usefulness of a program that only
seems human should not be measured in terms of human intelligence, but in
terms of the performance of the functional applications under its control,
and the ease with which they may be employed by us via our "human"
interface with the ME. Once again, each agent or subsystem would be
measured by the metrics of its particular category.
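To make the per-category measurement concrete, here is a minimal sketch
(the category names, scoring functions, and numbers are all invented for
illustration) in which each subsystem is judged only against its own
category's metric, rather than against one global intelligence score:

```python
# Illustrative only: categories, metrics, and scores are invented.

def score_comprehension(answers):
    """Fraction of test utterances the subsystem parsed correctly."""
    return sum(answers) / len(answers)

def score_usefulness(tasks_done, tasks_asked):
    """Fraction of user tasks the chatterbot actually completed."""
    return tasks_done / tasks_asked

# Each agent gets its own category's metric -- there is no single number
# summarizing the whole ME.
report = {
    "semantic comprehension": score_comprehension([1, 1, 0, 1]),
    "usefulness": score_usefulness(3, 4),
}
for category, score in report.items():
    print(f"{category}: {score:.2f}")
```

The design choice mirrors the argument above: the unit of evaluation is
the specialized subsystem, not the unified human-seeming surface.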
The Albus definition of intelligence is fine for systems that must use
ARTIFICIAL REASONING, but our Mimetic Entities would present a unified
representation of these functionally nonhuman systems to the user. The ME
works in the environment that we humans work best in, i.e., working with
other humans. And humans are indeed multi-faceted creatures. Likewise, in
our vision for the human-computer interface of an ME, there could be no
single metric to describe its behavior, since that behavior is composed of
very specialized, often goal-oriented, subsystems or agents.
Regards,
Robby.
For more information on Robby Garner's work, please visit
http://www.robitron.com/