Sorry to give the impression of being so arrogant. I had no way of
telling this was an MT parody--as such it's pretty good. Um Gottes
Willen (for God's sake), German is painful enough to read without
any parodies at all!
So it would appear that we just might be on the same side of the
fence (or closer to it) on this matter.
Quote:
>> http://www.tlg.uci.edu/~opoudjis/Work/KK.html
> This gave error message "No such group http://www.tlg.uci.edu "
> when clicked.
Yes, it just did that for me too. But when i entered it in the
"Open File" field, it went right through. The WWW is strange.
Anyway, it's an abridged version of Koerner's put-down of Chomsky.
I guess the main "Kontentionsbein" (bone of contention) between us
might be this: i think much ai natlang work is misguided--it hasn't
worked, can't work, and won't work.
just so all you busy people don't have to go scuttling all over the
Net to find my references, i'll append an article i wrote five years
ago right here. looking over postings here, i see no reason to
suppose it isn't as true and relevant now as it was then.
The Next Wave of MT Publicity
By Alex Gross
(Originally published in the ATA Chronicle,
July, 1994)
The year is 1986, and I am sitting on my floor with my
hacker genius friend, whom I'll call Mike. We are discussing the
imminent wave of Artificial Intelligence programs which will soon
take over the world and make us both vastly rich. In this
venture I will provide the practical knowledge, Mike the
programming skills. I am eager to put together a medical
application, and Mike has some ideas of his own. We both believe
there is no limit to the power of AI to harness ideas, learning,
knowledge. Mike idly picks up a Roget's Thesaurus lying on my
floor and leafs through it.
"You see how easy it is, Alex," he tells me. "All we need is
the French equivalent of this book, I link them together with a
program, and bingo: perfect Machine Translation!" I am hesitant
and attempt to express my doubts. I try to tell Mike that it
isn't that simple, but he will have none of it. He is supremely
sure that language is a pushover, what programmers call a
"trivial task."
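Mike's scheme can be sketched in a few lines. The toy lexicon and sample sentence below are my own illustration, not anything from the article, but they show why word-for-word substitution is exactly the "trivial task" fallacy: each word maps to one fixed equivalent, with no room for syntax or word sense.

```python
# A minimal sketch (illustrative only) of the "link two thesauri" idea:
# translate by substituting each word independently from a fixed table.
LEXICON = {
    "the": "le",
    "time": "temps",    # also "heure", "fois" -- the table holds just one
    "flies": "vole",    # the verb "to fly"; but "flies" may be a plural noun
    "like": "comme",    # the comparison; but it may be the verb "aimer"
    "an": "une",
    "arrow": "fleche",  # accent omitted for simplicity
}

def naive_translate(sentence: str) -> str:
    """Substitute word by word -- no grammar, no context, no disambiguation."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

print(naive_translate("Time flies like an arrow"))
# -> temps vole comme une fleche
```

The output happens to look plausible here, but only because every word was forced into a single sense in advance; the famous ambiguity of the sentence ("fruit flies like a banana") is simply invisible to the program.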
Mike never built his MT system, even though he did go on to
write an award-winning AI application that came closer than any
other to passing the Turing Test (more about that test later). So
there is no doubt about his programming skills, nor those of many
other programmers. What remains in doubt is the capacity of
these highly specialized technicians to assess the deepest
problems connected with MT, AI, and NLP (Natural Language
Processing) applications in general.
Publicity about MT has come in waves. The first wave was
launched by Turing, Weaver, Shannon, and other computer pioneers.
A later wave emanated from IBM around the time of the 1964
World's Fair. The most recent wave started in the mid-Eighties
and has culminated in the various micro and mainframe systems now
familiar to us. Each wave has publicized much the same
arguments:
1. MT will be faster than human translators.
2. MT will be more accurate than human translators.
3. MT will be cheaper than HT (though more
recently this claim has been slurred over).
4. MT will break the language barrier and open the
way to true and lasting human understanding (this
point has also been de-emphasized of late, though
early enthusiasts greatly stressed it).
Soon the next wave of MT publicity will burst upon us, and
the publicity mills are already gearing up. (1) In a year or two we
will be reading about the incredible breakthroughs achieved by
the "CYC" project, a unique Natural Language Processing
experiment using massively parallel processing to build the
supposed eight to ten million links embedded in human language.
CYC supposedly comprises an "EnCYClopedia" of what we have all
learned about the world around us. Once again all the familiar
arguments about MT are likely to resurface. Even though CYC is
not an MT system in itself, any success it enjoys will certainly
reach out to embrace MT and other branches of AI.
There can be no doubt that the CYC project is an important
one worthy of attention by all translators. For this reason--and
also because its home base is Austin, Texas--I have asked Peter
Krawutschke to determine if it will be possible for a group of
computer-oriented ATA members to look in on CYC while we are in
Austin this October. Perhaps it could also become possible for
representatives of CYC to take part in our conference program.
The arguments for and against MT seem to come and go in an
almost cyclical fashion, and some translators have come to view
this subject with apprehension. But we need to pay attention to
what is happening in MT and AI in general. Two unassailable
arguments in its favor remain: 1) no one opposes MT where it
really works, and 2) MT works quite well for those tasks where it
is suitable. The main questions concern which tasks these may
be, whether their number may grow, and how translators will come
to be integrated into the overall continuum of MT, Computer
Assisted Translation, and traditional techniques.
But as my friend Mike's attitude towards language shows,
there are still some larger concerns about MT, which have dogged
its development from its very beginnings and remain very much
with us. Underlying the basic assumptions of MT are much the
same notions often vocalized as "Why don't you just type it out
in Spanish?" or "Just look at it and say it in English." What MT
shares with such solecisms is the notion that the differences
between two languages can easily be predicted and routinized.
Noam Chomsky's concepts of "deep structure" or "universal
grammar" reflect the same fallacies, in this case beefed up with
many layers of academic terminology. Basic to all these
approaches is the half-truth that language is inherently
reasonable, which must be balanced against the other half-truth,
that it is not reasonable at all. It is altogether possible--as
I have argued elsewhere (2)--that on an evolutionary plane language
may be at least partially an outgrowth of the spray marks used by
animals to claim territory, attract mates, or repel rivals.
Similarities between MT and other coequal branches of AI--
"voice-writing," text retrieval, robotics, Machine Vision--also
cannot be overstressed: all have fallen behind schedule for
closely related reasons. Voice-writing--which was originally
supposed to catch every nuance of speech automatically--has now
settled for asking the speaker to confirm or correct it following
every word or phrase. Text retrieval has still not fully
recovered from the ambitious claims made surrounding its birth.
One thing the computer does best is to match up strings of text,
though this was never strictly speaking "AI." But anything less
than a perfect match requires what is called a "fuzzy search,"
which in turn often produces vast quantities of "quasi-results"
requiring highly qualified humans to determine their relevance.
This means we are still a long way from truly reliable research
based on a given text base. This is because searching according
to "key-words" is only as accurate as the key-words which have
been entered. In other words, a search through a legal data base
under the heading "teenage abortion" will not find:
JUDGE: Did you have the baby?
REPLY: No, I decided not to.
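The retrieval failure described above is easy to demonstrate. The records and query term below are hypothetical, but they show the mechanism: an exact string match can only find the words that were actually entered, so the most relevant passage is never retrieved.

```python
# A minimal sketch (not from the article) of the key-word problem.
records = [
    "JUDGE: Did you have the baby? REPLY: No, I decided not to.",
    "The statute on parental consent was reviewed by the court.",
]

def keyword_search(query: str, docs: list) -> list:
    """Return every document containing all of the query's words verbatim."""
    words = query.lower().split()
    return [d for d in docs if all(w in d.lower() for w in words)]

print(keyword_search("abortion", records))
# -> []  : neither record contains the query word, even though the
#          courtroom exchange is exactly on topic.
```

A "fuzzy" search widens the net by accepting partial matches, but as the article notes, it then returns quantities of quasi-results that still need a qualified human to sift.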
Robotics, envisioned as supplying us all with unlimited
household servants, did not even succeed completely in taking
over the factory floor--rather, the factory floor had to be
redesigned from scratch to allow robots to work. And one still
hears the story of two Japanese welding robots that, during a
lull, set about welding each other. Even Isaac
Asimov, the father of Robotics, expressed his disappointment that
these machines were not robots as he envisioned them. And as for
Machine Vision, how many people are ready to have a computer make
the next left turn for them, much less drive them off into the
sunset?
Like Asimov, even AI's primary advocate, Marvin Minsky, has
taken to writing science fiction to promote his ideas, which
begin to sound indeed more and more like SF and less like viable
proposals. And even Minsky is now hedging on the future of MT,
as this account of instant Japanese-English interpreting from his
The Turing Option (co-authored with Harry Harrison) illustrates:
"He touched the phone disconnect button and the voxfax
machine behind him instantly sprang to life, humming lightly
as it disgorged the printed record of their phone
conversation. His words were in black, while Mura's were in
red for instant identification. The translation system had
been programmed well, and as he glanced through it he saw no
more than the usual number of errors.... The staff
translator would later verify the correctness of the
translation the computer had made." (3)
Of course this wondrous machine had already made a real-time
onscreen "rough," though some may wonder what "the usual number
of errors" was and how the protagonist recognized them without
himself being an expert translator. (And if he were, why would
he have needed this device in the first place?)
Thus, even science fiction has partially given up on old-
fashioned, red-blooded AI. The whole point of the Turing Test
was to make a computer so lifelike that those communicating with
it via keyboard from another room would actually believe they
were talking to a human. No machine has yet fully passed this
test, which has since been subjected to many doubts and
objections--even Alan Turing himself never supposed such a ruse
could be maintained for more than a few minutes (4). And so the
days of the completely human computer may belong to the past
rather than
...