Turing, AI and ESP
Seb Thirlway
seb at thirlway.demon.co.uk
Thu Dec 30 15:04:55 CST 1999
I've been following this discussion, but can't say I've followed it
precisely enough to butt into a particular post and refute
it... backtracking a great deal, though, I don't really buy Turing's
ESP argument at all.
The way it's been presented suggests that Turing is saying
something along the lines of
"ESP (which I, Turing, believe does occur) is a major obstacle to
my goal, which is to argue for the possibility of X: a machine
may well (if not now, then in the future) participate as well in
the game I have described as a human would."
(This "game" may well be what I've heard of as the "Turing
test".) "As well", here, means "such that the human in the
observer role in this game would not be able to identify the
machine as a machine rather than a human being".
Turing sees a hypothetical "telepathy-proof room" as one solution
to this obstacle. So what he seems to be saying is
1. Humans have an ESP faculty which they could use in this test.
2. A machine (any machine) would not have this faculty.
3. Therefore, the human observer could ask questions that
require the use of ESP, and thus distinguish between a machine
participant and a human participant.
3. implies that Turing's contention (that a machine _could_
impersonate a human) is wrong - this is how Turing presents it,
as a counter-argument to his own position.
The thing that strikes me most obviously in this argument is 2.,
which seems to be a completely unwarranted assumption. OK it
does seem plausible, if we all imagine for a moment that we do
believe in ESP, that a machine would not have this faculty.
However, replace "has an ESP faculty" with "can enjoy strawberry
ice-cream", "can fall in love", or any of the other standard
attributes of humans in this type of argument, and then the
argument seems quite lame - in fact, an argument addressed and
disposed of elsewhere in the article.
Turing is right, I think, in identifying ESP as a major obstacle
to his contention. This is because his contention (as he makes
clear at the start) is not "can machines think" but "could a
machine participate in this game so as to give the impression
that it is human" - and the game is constructed such that any
human attributes (falling in love, liking strawberry ice-cream
etc) _could_ be simulated by the machine (using the word
"simulated" is already to invoke a whole shed-load of philosophy
of mind, but let it pass, please?). This is because the game is
based on verbal, very purely verbal (e.g. typed so as to appear
on a screen) responses to questions. A lot of interesting issues
arise about whether the machine could answer questions so as to
persuade the observer that it does fall in love, like ice-cream
etc. Even more interesting is the next question: could the
machine answer these questions "humanly" consistently, get it
"right" (i.e. "human") every time? If it can, there's
another question raised as to what exactly constitutes _really_
being capable of falling in love, liking ice-cream etc (easily
glossed as "what constitutes being _really_ human") beyond such
consistent behaviour.
Anyway, my main point is that ESP is a major obstacle to Turing
not because it's ESP, but because it's slightly outside the
framework of the game as he set it up. If ESP is not
allowed/doesn't exist/is prevented by a telepathy-proof room,
then the questions remain questions of a particular form:
questions that ask the respondent to report their response -
questions that are asked from a standpoint of ignorance: the
"observer/questioner" knows nothing about the respondent except
the respondent's answers, and the respondent knows nothing about
the observer except the questions.
In other words, in most cases all the observer is looking at is
the respondent's answer to the question and nothing but the
question (bearing in mind previous questions, to make the test
more difficult). An ESP question, in contrast, demands that the
respondent has additional knowledge, _beyond_ the question that
is asked - the psychic knowledge that the questioner is holding a
Star Zener card, for example.
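The shape of this restriction can be put in programmatic terms - a
sketch of my own, not anything from Turing's paper, with all the names
(`respondent`, `transcript`, the "star" card) purely illustrative. Any
legal player of the game is, in effect, a function of the transcript
of questions alone; an ESP question asks for an answer determined by
the questioner's hidden state, which never appears in that transcript:

```python
# Illustrative sketch (my own, not from Turing's paper) of the game's
# interface: a respondent sees ONLY the typed transcript of questions,
# and has no other channel to the questioner.

def respondent(transcript: list[str]) -> str:
    """Any legal player is a function of the transcript alone."""
    question = transcript[-1]
    # ...produce a typed answer from the words of the question...
    return "my answer to: " + question

# An ESP question breaks this model: the "right" answer depends on
# hidden state of the questioner that is not in the transcript.
questioner_hidden_card = "star"   # a Zener card the respondent never sees
transcript = ["What card am I holding?"]

answer = respondent(transcript)
# Nothing in the transcript determines "star", so no function of the
# transcript can reliably answer correctly: the question demands
# knowledge from outside the game's inputs.
```

The point of the sketch is only that the ESP question's correct answer
is not computable from anything the rules of the game supply.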
This is outside the rules of the game (as people have said on the
list). The rules of the game are all about simulating (OK, let
it pass again) intelligence/humanity through strictly verbal
responses. A satisfactory performance on ESP questions is not
about simulating intelligence/humanity through strictly verbal
responses - it brings in the additional criterion of knowledge at
a distance, or as it's put in the usual explanation of the ESP
mechanism, knowledge of another mind.
There's a lot packed into Turing's ESP argument. His assumption
that a machine could not have an ESP faculty is, I think, another
form of the "Head-in-the-Sand" argument he presents in the
article. Which goes something like "let's hope that a machine
can never perform this game so as to pass for a human...think of
the consequences...".
The fact that this assumption ("a machine could not do ESP") is
unexamined, and not even presented explicitly, makes me very
suspicious that it is another Head-in-the-Sand argument. The HITS
argument is based on fear of the enormous consequences of
admitting machines to membership of the community of intelligent
beings. I think that Turing's assumption is a safety mechanism
against this anxiety - something along the lines of "well, even
if machines do pass this test and many others, we'll still be
safe - there still is a difference, thank God: we can do ESP and
they, the machines, can't."
A very good reason to get het up about ESP, but not a reason to
believe in it (I'm sceptical but ignorant about ESP, though
admittedly it would make me breathe easier if it were proven real).
Assuming that humans do have an ESP faculty, who says a machine
couldn't? i.e. who says Turing's assumption is true? The best
argument that a machine _couldn't_ comes from considering only
determined machines: i.e. machines whose entire behaviour is
programmed in from the start. Are there not indeterminate
machines? (not to mean "beyond causality", but "not predictable
by the programmers"). Neural nets learn, but no-one quite knows
how. The idea of programming a machine with a teleology and a
set of "pain/pleasure", or more neutrally "correct/incorrect"
inputs from the environment, rather than with an explicit list of
operations to perform, seems to me to produce this sort of
indeterminacy.
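The kind of set-up I mean could be sketched like this - a toy of my
own invention in modern Python, where the reward rule standing in for
"correct/incorrect inputs from the environment" is entirely
hypothetical. The programmer writes only the goal signal, never the
behaviour itself; the behaviour that emerges is learned:

```python
import random

# Toy sketch (my own illustration) of "teleology plus
# correct/incorrect inputs": the program states a reward rule,
# but nowhere states which action to take.

def train(seed: int, steps: int = 500) -> dict:
    rng = random.Random(seed)
    actions = ["a", "b", "c"]
    value = {a: 0.0 for a in actions}   # learned preferences, all start equal
    for _ in range(steps):
        # explore occasionally, otherwise exploit what has been learned
        if rng.random() < 0.2:
            act = rng.choice(actions)
        else:
            act = max(actions, key=lambda a: value[a])
        # the environment says only "correct" (1) or "incorrect" (0);
        # here action "b" happens to be rewarded, but no line of the
        # program says "do b"
        reward = 1.0 if act == "b" else 0.0
        value[act] += 0.1 * (reward - value[act])
    return value

# The preference for "b" is discovered, not programmed in, and the
# path taken to it differs from run to run (different seeds).
print(train(seed=1))
print(train(seed=2))
```

Nothing deep is claimed for the toy; it only illustrates behaviour
that the programmer did not write down and could not have predicted
line-by-line, which is the sense of "indeterminate" I intend.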
So this is why I don't buy the argument:
1) ESP questions are asking for a performance beyond the rules of
the game.
2) If humans do have ESP, then the assumption that machines could
not have ESP has to be supported, beyond the intuition "well,
that ESP, that's a human thing, right? A MACHINE couldn't do
that...".
Another thing...
If a machine played Turing's game as well as a human, but failed
the ESP questions, how would we consider it? For best results,
assume that all human beings are brilliant at ESP.
This seems like a good question because Turing's assumption seems
very obviously (to me) to be a refuge from the conclusion that
machines may be intelligent/human.
There seem to be two questions conflated in Turing's article -
"can a machine show intelligence?" and "can a machine show
humanity?". If a machine failed the ESP questions, wouldn't the
questioner be justified in thinking that the (machine) respondent
was not a human intelligence, but still an intelligence?
regards
seb
seb at thirlway.demon.co.uk
More information about the Pynchon-l
mailing list