Mind.Forth Robot AI Penultimate Release #27 
 Mind.Forth Robot AI Penultimate Release #27

Mind.Forth PD AI Release #27 of Tues.8.Jun.1999 is being uploaded
to the Web today in an ASCII text format linked from the end of
http://www.scn.org/~mentifex/aisource.html Archival Release #11
as described in the ACM SIGPLAN Notices 33(12):25-31 (Dec 1998).

From a Human-Computer Interaction (HCI) viewpoint, Mind.Forth #27
is the most interesting (~90% complete) version to date, because
on a scale of one to nine it lets the user/programmer invoke an
eerie diagnostic display of up to nine levels of visible thinking.

One level will become a standard display mode for AI exhibitions.

The other levels will permit the curious and the dubious to decide
whether Mind.Forth is actually thinking in its primitive AI-life.

At the least tedious and most informative HCI level of four (4),
you see Mind.Forth compose a sentence of thought to you and await
your response.  When you type in a strictly formatted three-word
sentence such as, "Robots have rights," you see Mind.Forth cycle
through its intricate cognitive hierarchy to recognize any known
concepts and to instantiate any new concepts, before answering.

The native Amiga code has already been uploaded to the Seattle-
area Gramma's Bulletin Board System at U.S. Tel. 425-744-1254
into the Files area #43 devoted to programming, as shown here:

Browsing the Programming library backwards from latest file.

   File: 11352 KeyWords: Mind.Forth 8jun1999 Release #27
Name: mind27.lha       Size: 11106 bytes   Downloads: 0
From: ARTHUR_T_MURRAY  Date: 08 Jun 1999 10:16AM  Lib: Programming
==================================================================
[C]ontents [D]ownload [E]dit [K]ill [M]ark [N]on-Stop [Q]uit [?] >

  /^^^^^^^^^^^\ Mind-grid Arrays{ } in Mind.Forth /^^^^^^^^^^^\
 /visual memory\                   _________     /  auditory   \
|      /--------|---------\       / LANG-UK \   |   memory      |
|      |  recog-|nition   |       \_________/---|-------------\ |
|   ___|___     |         | flush-vector|       |   ________  | |
|  /image  \    |     ____V_        ____V__     |  /        \ | |
| / percept \   |    /psi{ }\------/ uk{ } \----|-/ ear{ }   \| |
| \ engrams /---|---/concepts\----/ lexicon \---|-\ phonemes /  |
|  \_______/    |   \________/    \_________/   |  \________/   |



Sat, 24 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:

> Mind.Forth PD AI Release #27 of Tues.8.Jun.1999 is being uploaded
> to the Web today in an ASCII text format linked from the end of
> http://www.scn.org/~mentifex/aisource.html Archival Release #11
> as described in the ACM SIGPLAN Notices 33(12):25-31 (Dec 1998).

[...]

The good news is that if, after saving the referred-to HTML file
as a text file from the first: "Screen # 0    ram:Mind.Forth-27"
up to the last "--------------------------------"

and prepending to that file:

: Screen POSTPONE \ ; IMMEDIATE
: --------------------------------
  BEGIN DEPTH WHILE DROP REPEAT ; IMMEDIATE
: THRU 2DROP ; IMMEDIATE
: ?TERMINAL KEY? ;
: 2+ CELL+ ;
: 2* CELLS ;

the whole Mind.Forth compiles under the mostly ANS Power MacForth...

The bad news is that as soon as you invoke: MIND, the program
crashes just after issuing a: "(Clearing Memory...)" message ;-(

As the saying goes: "When you've seen one Forth..."

--



Sun, 25 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:

> : Screen POSTPONE \ ; IMMEDIATE
> : --------------------------------
>   BEGIN DEPTH WHILE DROP REPEAT ; IMMEDIATE
> : THRU 2DROP ; IMMEDIATE
> : ?TERMINAL KEY? ;
> : 2+ CELL+ ;
> : 2* CELLS ;

> the whole Mind.Forth compiles under the mostly ANS Power MacForth...

> The bad news is that as soon as you invoke: MIND, the program
> crashed just after issuing a: "(Clearing Memory...)" message ;-(

You must strip out the line numbers from the listing.  The way they are,
they will be compiled into every colon definition.  All heck breaks loose
when you try to execute words like ARRAY.

If you don't like stripping the line numbers, you could redefine ACCEPT
to ignore the first 2 or 3 characters after a carriage return.  Or write
some wildcard extensions to the editor's search & replace.

When all the line #'s are removed, it compiles and DOESN'T crash (at
least with a very light workout).
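That stripping step can be sketched in a few lines. This is a hypothetical Python helper, not part of any Forth system; the digit-width assumptions simply match the posted listing:

```python
import re

def strip_line_numbers(listing: str) -> str:
    """Remove leading screen-style line numbers (one or two digits
    plus a blank or two) from each line of a pasted Forth listing.
    Caution: a source line that legitimately *starts* with a number
    would lose it too, so inspect the output before compiling."""
    return "\n".join(
        re.sub(r"^\s{0,2}\d{1,2}\s{1,2}", "", line)
        for line in listing.splitlines()
    )
```

Running the saved text through something like this beats hand-editing fifteen screens, and beats redefining ACCEPT on every system you try.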

-- ward



Mon, 26 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27


Quote:
>Mind.Forth PD AI Release #27 of Tues.8.Jun.1999 is being uploaded
>to the Web today in an ASCII text format linked from the end of
>http://www.scn.org/~mentifex/aisource.html Archival Release #11
>as described in the ACM Sigplan Notices 33(12):25-31 (Dec 1998).

As author/reviewer of the SIGPLAN Notices article on Mind.Forth,
I have enjoyed watching Mr. Murray's progress over the last year.
Taking Mind.Forth from the Amiga to PC environment should stimulate
much more interest among potential enthusiasts.

Quote:
>From a Human-Computer Interaction (HCI) viewpoint, Mind.Forth #27
>is the most interesting (~90% complete) version to date, because
>on a scale of one to nine it lets the user/programmer invoke an
>eery diagnostic display of up to nine levels of visible thinking.

As a student of animal/human physiology and neuroanatomy, I am
amused by this "layering" approach to cognitive function. This
technique appears to recapitulate exactly what evolutionary
forces have accomplished in creating our brain/minds. One
interesting (assumed) difference between animals and men is not
self-awareness (because our pets are certainly self-aware), but
an awareness of our self-awareness; a kind of META-self-awareness.
This function would represent a higher level of organization of
brain/mind and could be added as another layer of software in
Mind.Forth.

Quote:
>The other levels will permit the curious and the dubious to decide
>whether Mind.Forth is actually thinking in its primitive AI-life.

Will "I think, therefore I am" be replaced by "I think I think,
therefore I think I am"?

Quote:
>At the least tedious and most informative HCI level of four (4),
>you see Mind.Forth compose a sentence of thought to you and await
>your response.  When you type in a strictly formatted three-word
>sentence such as, "Robots have rights," you see Mind.Forth cycle
>through its intricate cognitive hierarchy to recognize any known
>concepts and to instantiate any new concepts, before answering.

Once that process is stable, variations can be added to it. For
example, the ability to reformat inputs to search for meaning.

(shortened)

I would be interested in seeing how a multitasking Forth environment
might enhance the operation of Mind.Forth; many functions described
by Murray would seemingly best be defined as "background tasks". Of
those, a natural hierarchy will develop. For example, vision: the
eye/camera captures the light sources in a series of frames. The
frames are digitized and stored in RAM. A background task analyzes
these frames for recognizable shapes (line, semicircle, etc), storing
its findings in a non-graphic descriptor file. Another task reads
these files to look for probable objects (faces, furniture, houses,
trees, etc) and lists what it finds. Still higher-level tasks can try
to identify WHOSE-FACE? and WHAT-EXPRESSION? These objects could be
successfully handled in the conscious awareness of an artificial mind.
Now if that mind had SELF-AWARENESS, VOLITION, and PERSONALITY ....
well, it would be breathtaking, wouldn't it?
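That hierarchy of background tasks can be sketched with ordinary threads and queues. This is a toy Python model; the stage names and "recognition" results are purely illustrative stand-ins for real analyzers:

```python
import queue
import threading

def background_task(fn, inbox, outbox):
    """Run fn over items arriving on inbox, passing results downstream.
    A None sentinel shuts the stage down and propagates onward."""
    def run():
        while (item := inbox.get()) is not None:
            outbox.put(fn(item))
        outbox.put(None)
    threading.Thread(target=run, daemon=True).start()

frames, shapes, objects = queue.Queue(), queue.Queue(), queue.Queue()

# Illustrative transforms: frame -> shape descriptors -> probable objects.
background_task(lambda f: {"frame": f, "shapes": ["line", "semicircle"]},
                frames, shapes)
background_task(lambda d: {**d, "objects": ["face"]}, shapes, objects)

frames.put("frame-0")          # a digitized frame stored in RAM
frames.put(None)               # end of input

found = []
while (result := objects.get()) is not None:
    found.append(result["objects"])
```

Still-higher stages (WHOSE-FACE?, WHAT-EXPRESSION?) would just be further `background_task` calls chained onto the `objects` queue.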

Paul Frenger MD
Associate Editor (Forth)
ACM Sigplan Notices



Mon, 26 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27


[stuff]

So far I have ignored Mind.Forth and will continue to ignore it,
but I am glad to see at last an appreciative notice of Mr Murray's
project.  Thank you, Dr. Frenger.

--
Leo Wong

http://www.albany.net/~hello/
The Forth Ring: http://zForth.com/

Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.



Mon, 26 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27
What I see as the most impressive part of Mind.Forth is the
way that the approach is general enough to cover sensation
of all types, coordination, deductive reasoning, and awareness.
It will be even more impressive when dealing with more sensory
inputs than just ASCII strings.



Quote:
> I would be interested in seeing how a multitasking Forth environment
> might enhance the operation of Mind.Forth; many functions described

I am even more interested in how a multiprocessing Forth environment
will enhance the operation of Mind.Forth.  The multitasking version
could be compiled for parallel operation on an SMP.  The human
brain is an example of a _massively_ parallel implementation of
neurons.

Quote:
> by Murray would seemingly best be defined as "background tasks". Of
> those, a natural hierarchy will develop. For example, vision: the
> eye/camera captures the light sources in a series of frames. The
> frames are digitized and stored in RAM. A background task analyzes
> these frames for recognizable shapes (line, semicircle, etc), storing
> its findings in a non-graphic descriptor file. Another task reads
> these files to look for probable objects (faces, furniture, houses,
> trees, etc) and lists what it finds. Still higher-level tasks can try
> to identify WHOSE-FACE? and WHAT-EXPRESSION? These objects could be
> successfully handled in the conscious awareness of an artificial mind.

I have tried building some simple neural nets that perform the
functions of specialized ganglia in the optic nerve so that higher
level vision functions can be built with either combinations of
recognized features being fed into other neural nets being trained
to recognize faces, pets, whatever, or by an expert system using
rules expressed in natural language.  But feeding the output of
feature extractors to an expert system requires working out all
the expert system rules.  It is disturbing to many people to learn
how few expert rules are actually needed to perform jobs that
we think of as requiring a human expert.  Extracting and encoding
rules requires a human expert in the field and a human expert
to study and extract the rules being used.

Mind.Forth, however, does not require building a conscious entity
with formal knowledge of natural languages or the ever-elusive
common-sense program to get that holy grail of the Artificial Mind.
It seems to provide a shockingly simple way to construct
conscious machines.  It is a bit frightening that such a simple
mechanism can do this and the implications are difficult to
fathom.

Quote:
> Now if that mind had SELF-AWARENESS, VOLITION, and PERSONALITY ....
> well, it would be breathtaking, wouldn't it?

Let's just make sure it isn't too breathtaking. ;-)
But people are used to rug bots with the equivalent of
a couple of neurons.  With mind.forth even a rug bot might
seem pretty smart.

With a few more neurons in front of it I want to do sound
and speech as well as visual recognition and natural language,
and Mind.Forth seems to be a great new tool for the job.

I very much enjoyed the working group on Robotics that you
chaired at a Rochester Forth Conference three years ago.  I
loved the idea of a plug and play robotic extension to
something like open firmware that could allow robots to
have interchangeable parts.  Snap on a new limb, the limb
and the body have a little dialog, and go.  I loved
that idea.  Now we can put mind.forth into the brain.
Have there been any more discussions about Forth multiprocessing
and robotics like we had in Toronto in '96?

--
Jeff Fox   Ultra Technology
www.UltraTechnology.com




Mon, 26 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27
Quote:

> >whether Mind.Forth is actually thinking in its primitive AI-life.

> Will "I think, therefore I am" be replaced by "I think I think,
> therefore I think I am"?

I prefer the formulation: "I think, therefore I am, I think."

Cogito ergo sum credo.

Jerry



Mon, 26 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

--

----------

Quote:


>> : Screen POSTPONE \ ; IMMEDIATE
>> : --------------------------------
>>   BEGIN DEPTH WHILE DROP REPEAT ; IMMEDIATE
>> : THRU 2DROP ; IMMEDIATE
>> : ?TERMINAL KEY? ;
>> : 2+ CELL+ ;
>> : 2* CELLS ;

>> the whole Mind.Forth compiles under the mostly ANS Power MacForth...

>> The bad news is that as soon as you invoke: MIND, the program
>> crashed just after issuing a: "(Clearing Memory...)" message ;-(

> You must strip out the line numbers from the listing.  The way they are,
> they will be compiled into every colon defintion.  All heck breaks loose
> when you try and execute words like ARRAY.

Precisely not!

What -------------------------------- does is empty the
stack.  And under normal circumstances, : is not expected (allowed?)
to access data items on the stack that it didn't put there itself.


that would mess up the stack.

The same reasoning holds for ARRAY: since it is only concerned with
the top TWO items on the stack, and those two items HAVE been pushed
correctly and are at the right stack position, whatever else is on
the stack is irrelevant to ARRAY [that's what a stack is a neat
thing for, isn't it?]

I suspect the problem to be more subtle, in some assumptions that
simply don't hold with an ANS system, but I haven't had the time to
investigate further yet ;-(

Nice try, though.


Quote:
> If you don't like stripping the line numbers, you could redefine ACCEPT
> to ignore the first 2 or 3 characters after a carriage return.  Or write
> some wildcard extensions to the editor's search & replace.

> When all the line #'s are removed, it compiles and DOESN'T crash (at
> least with a very light workout)

> -- ward



Mon, 26 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:

> > You must strip out the line numbers from the listing.  The way they are,
> > they will be compiled into every colon defintion.  All heck breaks loose
> > when you try and execute words like ARRAY.

> Precisely not!

> what -------------------------------- does is that it empties the
> stack. And under normal circumstances, : is not expected (allowed?)
> to access data items on the stack that it didn't put there itself.


> that would mess up the stack.

> The same reasoning hold for ARRAY: since it is only concerned with
> the top TWO items on the stack, and those two items HAVE been pushed
> correctly and are at the right stack position, whatever else is on
> the stack is irrelevant to ARRAY [that's what a stack is a neat
> thing for, isn't it?]

NO NO NO.  COMPILATION action of a number appearing in the input stream
is to create a literal from it.

This code:

 1  : ARRAY ( #rows #columns --)
 2     CREATE  \ 1Brodie p. 207; returns address of new name.
 3     OVER    \ a b -- a b a )  ( #r #c -- #r #c #r )
 4             \ Make copy of 2nd item (#rows) and push it on top.
 5     ,       \ Store number of rows from stack to the array.
 6     * CELLS ( Feed product of columns X rows to ALLOT )
 7     ALLOT   ( Reserve given quantity of cells for array. )
 8     DOES>   ( member; row col -- a-addr ) \ e.g., 34 0 ear{ }get
 9             \ row col pfa ( contents of stack )

11     ROT *   \ row pfa col-index ( changes top 2 to a product )
12     ROT +   \ pfa index ( adds the product to the row# )
13     1 +     \ because first cell has #rows.
14     CELLS   \ from number of items to number of bytes in offset.

can NOT compile into the same code as produced by:

  : ARRAY ( #rows #columns --)
     CREATE  \ 1Brodie p. 207; returns address of new name.
     OVER    \ a b -- a b a )  ( #r #c -- #r #c #r )
             \ Make copy of 2nd item (#rows) and push it on top.
     ,       \ Store number of rows from stack to the array.
     * CELLS ( Feed product of columns X rows to ALLOT )
     ALLOT   ( Reserve given quantity of cells for array. )
     DOES>   ( member; row col -- a-addr ) \ e.g., 34 0 ear{ }get
             \ row col pfa ( contents of stack )

     ROT *   \ row pfa col-index ( changes top 2 to a product )
     ROT +   \ pfa index ( adds the product to the row# )
     1 +     \ because first cell has #rows.
     CELLS   \ from number of items to number of bytes in offset.

Only the number '1' remains on the stack in the first case.  All the
other numbers are compiled as literals.  Just as:

: AddTwenty
   20 + ;
compiles 20 in as a literal.

Your statement would be true (and your fix would work) if the line
numbers were bracketed like:
[ 1 ]

Try disassembling the Mentifex ARRAY word.  Or just use it to do:
 1 1 ARRAY test  .s

SEVEN items appear on the stack from creating the array.
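The distinction can be modeled outside Forth. Below is a toy Python sketch of an outer interpreter (hypothetical and hugely simplified; no real Forth works this way internally) showing that numbers met in interpret state pile up on the data stack, while numbers met inside a colon definition are compiled as literals:

```python
def toy_interpret(source):
    """Toy outer interpreter: ':' enters compile state, ';' exits.
    Returns the data stack plus each defined word's compiled body."""
    stack, words = [], {}
    compiling, name, body = False, None, []
    tokens = iter(source.split())
    for tok in tokens:
        if tok == ":":
            compiling, name, body = True, next(tokens), []
        elif tok == ";":
            words[name] = body
            compiling = False
        elif tok.isdigit():
            if compiling:
                body.append(("LIT", int(tok)))   # compiled as a literal
            else:
                stack.append(int(tok))           # interpret state: pushed
        elif compiling:
            body.append(tok)
    return stack, words

# Stray numbers outside the definition stay on the stack; the '20'
# inside the definition is compiled in as a literal, as with AddTwenty.
stack, words = toy_interpret("1 : AddTwenty 20 + ; 2")
```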

-- ward



Tue, 27 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:

>Your statement would be true (and your fix would work) if the line
>numbers were bracketed like:
>[ 1 ]

Good thought!  So ideally we might show Mentifex how to redefine LIST
so he could make it easier on people who want to use his listings.
Something like:

: LIST ( n -- )
   CR ." .( --------------------------------------------------------------- )"
   CR CR ." .( Screen " DUP . ."  )"
   CR BLOCK 16 0 DO
     CR ." .( " I . ."  ) " DUP 64 CHARS TYPE
     64 CHARS +
   LOOP DROP
   CR ." .( --------------------------------------------------------------- )"
   CR ;

Or maybe do ( instead of .( .

And redefine the words that use LIST to print a whole series of screens.

It ought to be easier to fix the problem at the source.



Tue, 27 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:

>>Your statement would be true (and your fix would work) if the line
>>numbers were bracketed like:
>>[ 1 ]

>Good thought!  So ideally we might show Mentifex how to redefine LIST
>so he could make it easier on people who want to use his listings.
>Something like:

[...]

>And redefine the words that use LIST to print a whole series of screens.

>It ought to be easier to fix the problem at the source.

                                  ^^^^^^^ ^^ ^^^ ^^^^^^
Yes, JET, but I may not know Forth well enough to understand the
solution.  (Most of my progress in 1999 has been thanks to your
sending me files of revamped code to work with, and thanks to your
and John Passaniti's strong suggestion to get rid of too many and
too idiosyncratic Forth variables.)

I mean, are you suggesting that I change the way the Forth word LIST
is implemented in Amiga FF977 MVP-Forth?  I was not even conscious
of using the LIST word.  I go through a grueling ordeal to post
the Mind.Forth code to the Web at
http://www.scn.org/~mentifex/aisource.html ( shameless plug! ) and
associated URL's.  First I "fake" send each of 15 Forth triads
to my non-existent Amiga serial printer, intercepting each
formatted print-out with the Amiga "CMD" command which redirects
a printed file.  Secondly, I run all fifteen print-files through
an Amiga Rexx program that I wrote for conversion to ASCII.
Finally I edit the margins of the text, prepend and append HTML,
and upload to http://www.scn.org/~mentifex/mind27.html (or higher).

So I would like to fix all problems at the source, but I still
don't feel very good at Forth.  Anyway, I am cross-posting this plug
into c.ai.ph and c.rob.misc to "show the flag" that Mind.Forth is
still on its way.  Thanks very much for your enormous quantity



Tue, 27 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27
What can I say?

Oooooppps!

Thanks for hammering that point home!
--

----------

Quote:

> NO NO NO.  COMPILATION action of a number appearing in the input stream
> is to create a literal from it.

[...]

> Only the number '1' remains on the stack in the first case.  All the
> other numbers are compiled as literals.

[...]

> Try disassembling the Mentifex ARRAY word.  Or just use it to do:
>  1 1 ARRAY test  .s

> SEVEN items appear on the stack from creating the array.

> -- ward



Tue, 27 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:
>Mind.Forth ... seems to provide a
>shockingly simple way to construct
>conscious machines. It is a bit frightening
>that such a simple mechanism can do
>this and the implications are difficult to
>fathom.

Actually, the implication (to me at least)
is that the billion neurons we carry around
in our heads are highly redundant for the
uses to which we put them. The argument
that "our inability to put a billion artificial
neurons into a robot brain means that the
robot can never attain consciousness" is
false, because it doesn't take anywhere
near that many neurons to do the job.
...

Quote:
>But people are used to rug bots with the
>equivalent of a couple of neurons.  
>With mind.forth even a rug bot might
>seem pretty smart.

Jonathan Connell built a pretty clever
room-navigating robot with just 11 analog
neurons in August 1991 (Popular Electronics,
Vol. 8 No. 8). It used Rodney Brooks's (MIT)
subsumption architecture to mimic the
movements of pond snails.

I gave a paper at a bioengineering meeting
in April of this year entitled "Linear Circuits
for Neural Networks and Affective Computing"
where I described how to add artificial
emotions to a robot/android using just two
operational amplifiers: one to sum the
weighted inputs of the individual emotion
generators (voltages), and one to detect
when the threshold for action was reached
(Schmitt trigger). A clever analog engineer
might get away with just one op-amp.
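Numerically, the two stages reduce to a weighted sum plus a comparator with hysteresis. A toy Python sketch of that arrangement; the thresholds and weights here are invented for illustration, not taken from the paper:

```python
def summing_stage(voltages, weights):
    """First op-amp: weighted sum of the emotion-generator outputs."""
    return sum(v * w for v, w in zip(voltages, weights))

class SchmittTrigger:
    """Second op-amp: output goes high when the input crosses the
    upper threshold and returns low only below the lower one, so
    small wobbles near the threshold don't cause chatter."""
    def __init__(self, high=2.5, low=1.5):
        self.high, self.low, self.fired = high, low, False

    def update(self, v):
        if not self.fired and v >= self.high:
            self.fired = True
        elif self.fired and v <= self.low:
            self.fired = False
        return self.fired

trigger = SchmittTrigger()
weights = [0.5, 1.0, 0.25]                    # per-emotion weights
samples = [[1, 1, 1], [2, 2, 2], [1, 1, 1], [0, 0, 0]]
actions = [trigger.update(summing_stage(s, weights)) for s in samples]
# off, then on (3.5 crosses 2.5), still on (1.75 above 1.5), then off
```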

It doesn't take a million neurons to decide
when unpleasant stimuli should make us
do something, or to tell when we are
happy.

Star Trek Commander Data's "emotion
chip" might be no more complicated than
a dual LM741 op-amp IC!

Paul Frenger



Wed, 28 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27

Quote:

>> Mind.Forth ... seems to provide a
>> shockingly simple way to construct  
>> conscious machines. It is a bit frightening
>> that such a simple mechanism can do
>> this and the implications are difficult to
>> fathom.
> Actually, the implication (to me at least)
> is that the billion neurons we carry around
> in our heads are highly redundant for the              
> uses to which we put them. The argument
> that "our inability to put a billion artificial          
> neurons into a robot brain means that the
> robot can never attain consciousness" is
> false, because it doesn't take anywhere
> near that many neurons to do the job.
> ...
>> But people are used to rug bots with the
>> equivalent of a couple of neurons.
>> With mind.forth even a rug bot might
>> seem pretty smart.

http://www.scn.org/~mentifex/aisource.html Mind.Forth Robot AI --
now nearing completion and release in its Version 1.0 format --
owes its existence mainly to the above quoted Jeff Fox, who
in 1998 stood up for Mentifex AI on the Net and inspired me
to resume the Mind.Forth coding that had stalled in 1995.

Quote:
> Jonathan Connell built a pretty clever    
> room-navigating robot with just 11 analog
> neurons in August, 1991(Popular Electronics,
> Vol.8 No.8). It used Rodney Brooks (MIT)  
> subsumption architecture to mimic the      
> movements of pond snails.                

[Dr. Frenger, who wrote the ACM Sigplan Notices paper on Mind.Forth:]

Quote:
> I gave a paper at a bioengineering meeting    
> in April of this year entitled "Linear Circuits
> for Neural Networks and Affective Computing"
> where I described how to add artificial
> emotions to a robot/android using just two
> operational amplifiers: one to sum the
> weighted inputs of the individual emotion
> generators (voltages), and one to detect
> when the threshold for action was reached
> (Schmitt trigger). A clever analog engineer
> might get away with just one op-amp.

These thoughts on "affective computing" prompt me to
remark that Mind.Forth, if successful as a teaching AI,
may serve as a primitive minimal framework upon which
to hang ad-hoc elaborations of almost any function.


Quote:
> It doesn't take a million neurons to decide
> when unpleasant stimuli should make us  
> do something, or to tell when we are        
> happy.
> Star Trek Commander Data's "emotion
> chip" might be no more complicated than
> a dual LM741 op-amp IC!
> Paul Frenger



Wed, 28 Nov 2001 03:00:00 GMT  
 Mind.Forth Robot AI Penultimate Release #27
Before I start, I should state that I am not a student of AI
techniques and technologies.  I follow it independently, mostly out of
curiosity, because there are spin-off technologies (neural nets, fuzzy
logic, genetic algorithms, etc.) that either have factored, or eventually
will factor, into the rest of my work.

In a recent message, Jeff Fox wrote the following (if you want the
full context, search for it):


Quote:
> Mind.Forth however does not require building a
> conscious entity with formal knowledge of natural
> languages or the ever elusive common sense program
> to get that holy grail of Artificial Mind.  It seems to
> provide a shockingly simple way to construct conscious
> machines.  It is a bit frightening that such a simple
> mechanism can do this and the implications are difficult
> to fathom.

This message isn't in response to Jeff, because others have echoed
what Jeff has written.  I'm asking a generic question here that
hopefully others can answer-- especially Arthur Murray.

The first question is about the use of the word "conscious" as used in
the above quote and elsewhere.  Jeff (and Arthur Murray) seem to have
a wildly different definition of what that word means than what most
of us have in mind.  When I have looked at Mr. Murray's work in the
past, what struck me is that he basically seems to be constructing
sets of weighted Markov chains of input chunks of language.  The
weights seem to be related to other stimuli at the time, and possibly
other kinds of events that occur.  So instead of having any real deep
understanding of language, he's just relying on a kind of dynamic
discovery of sequences of sensations, and relating them to some kind
of expected outcome.

Sounds a lot like the family dog.  Say you have a dog who enjoys
chewing on the furniture.  You don't like this, so you scream "Don't
chew on the furniture" and swat the dog with a rolled-up newspaper.
Dog hears a sequence of meaningless sounds, feels the sting of the
newspaper, and registers the event as something to avoid.  This
sequence plays itself out a few times, which causes the dog's brain to
increase the synaptic weights of the meaningless sequence of "Don't
chew on the furniture" and the sting of the rolled-up newspaper.  Now,
when dog attempts this, the owner utters the same meaningless sequence
of sound, but this time the dog makes the connection and doesn't chew
the furniture.

Does the dog understand English?  If you uttered "I'm a little
teapot!" to the dog with the same kind of vocal inflection as "Don't
chew on the furniture" the dog would likely react the same-- at least
all the dogs I've ever known would.  This tells me that the dog
clearly doesn't understand the meaning of the words spoken, but has
instead learned there is an association between (a) chewing on the
furniture, (b) hearing the owner yell at them, (c) feeling an
unpleasant sting.  There is no intelligence here-- no understanding of
what the individual words mean, or an understanding of what the whole
sentence means.

Likewise, what Arthur Murray seems to be promoting is an AI
architecture that doesn't understand the world, but instead
knows how to assign weights to certain sequences of stimuli that occur
within a context.

Is that "conscious?"  If it is, I suggest that Arthur, Jeff, and the
others who seem to find this interesting have a very low threshold of
what consciousness is.  And if that's what they are aiming for, then
fine-- but we aren't talking about an architecture that will (at least
without *tons* of training) gain anything near what we as humans
consider consciousness in ourselves.

What am I missing if I am wrong?

Years ago, I had great fun playing with "Travesty."  I first came
across it published in Byte magazine.  It basically is an algorithm
that looks at streams of input language, breaks it up into chunks,
builds a frequency table of sequences of those chunks, and then
outputs text using nothing but random numbers factored against those
frequencies.  The result is hilarious.  Feed such an algorithm text
from the Bible, Shakespeare, and William S. Burroughs (or all three),
and you get back out a weird stream of text that if done correctly,
looks like a convincing replication of the original.
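A minimal Travesty-style generator along those lines. This sketch uses an order-2 character model for brevity; the published BYTE version worked on larger, tunable chunks:

```python
import random

def travesty(text, order=2, length=60, seed=1):
    """Build a table of which characters follow each `order`-character
    chunk of the input, then emit new text by random walks over that
    table.  Frequent continuations in the source get chosen
    proportionally often, since every occurrence is appended to the
    follower list."""
    followers = {}
    for i in range(len(text) - order):
        followers.setdefault(text[i:i + order], []).append(text[i + order])
    rng = random.Random(seed)
    out = text[:order]
    while len(out) < length:
        nxt = followers.get(out[-order:])
        if not nxt:                  # dead end: reseed from the start
            out += text[:order]
        else:
            out += rng.choice(nxt)
    return out

sample = travesty("to be or not to be that is the question "
                  "whether tis nobler in the mind")
```

Nothing here but a frequency table and a random-number generator, yet short runs of the output read like the source.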

I later found out that the concept was also used in music.  Encode
music in some way (such as MIDI), and record details like note,
duration, and context, and you can build a table of frequencies that
you can randomly hop around on to produce what sounds like elements of
the original.  A music professor friend once played for me a hilarious
recording of what happens when you generate random music based on this
algorithm-- the inputs were some selections from Bach crossed with
Elvis Presley.

The thing about both of these cases is that in both, the output of the
algorithm is generating something that normally would need human
intelligence.  You can read the text or hear the music, and in some
cases wouldn't be able to tell that it is nothing but random numbers
and frequency tables.  It seems like there is a consciousness there--
but of course, there isn't.

Arthur Murray's architecture seems to do largely the same thing.
Unless I'm missing something, he seems to have crossed the family dog
with "Travesty," and claims this represents intelligence or even
consciousness.  I just don't see it.

Jeff's statement that you don't need to build in knowledge of the
formal rules of a language into a system seems obvious.  Children do
this implicitly, as they don't have to learn the structure of language
to be able to understand it.  But understanding a language is a long
way from *reacting* to a language.  Like the example of the family dog
above, the dog doesn't *understand* the language, he is only reacting
to it.

So there you go.  Fill in what are obviously the blanks.  Maybe then I
(and others) will see the value in what Arthur Murray has posted.
Until then, what I see is largely a different way to encode "Travesty"
that might give better results.


