Rape (is also Re: Turing, A.I. and ESP)

rj rjackson at mail.usyd.edu.au
Mon Jan 3 22:09:38 CST 2000


jp wrote:
> What if telepathy required volitional behavior on the part of both the
> receiver and the sender?

Then we could call it a transaction, some form of intellectual commerce,
or intercourse, rather than rape. 

My point is simply that, without the sender's volition (so not a
"sender" at all, but a dupe), which seems the more probable scenario, it
*is* something like rape. If there are ESPies and non-ESPies, as
Turing's comments and *personal* uncertainty imply, then the fact that
the latter are not cognizant of the former somewhat alters the
volitionality set-up you propose. I'm not saying that all ESPies would
be evil, unscrupulous ESPies, mind you, just some. (Because they'd all
still be "human".)

> Further, what if it
> took work and practice to learn how to "clear" such a channel, make it
> operative, and send or receive telepathic communication between other
> willing individuals? How would you feel about it then?

We're moving a little into the realm of transcendentalism and
spiritualism here, I think. Like I said, I'm merely positing an alternate
scenario. I root for the good guys (the ESPies) in *The Time Machine*
and *The Chrysalids* too. But I think the relationship between an ESPie
and a non-ESPie is, or more often than not would be, an unequal one;
though they could also function in complicity with one another in
certain (empirically determined) situations. For instance,
non-volitional thoughts on the part of the non-ESPie pose another
variable in the example you cite. Also, once the channel has been
opened, is it permanent? Can it be closed by mutual consensus, or not? I
can see real problems at auto-teller machines. The happy-happy,
mutually-agreed-upon type of ESP is fine, but purely theoretical: I'm
not so certain about its practical operation for "real" human
relationships.

> But such a putative machine, unless programmed beforehand by intentional
> human agents, would need to have intentionality of its own: desire, will,
> etc.; otherwise, why would it bother?

But isn't this the A.I. argument, the learning paradigm Turing is
proposing?

> It would not care one way or the
> other. Unless it had self-interest, it would just be clever as directed.

Which is about where we are with A.I. now, aren't we?

> If
> it exhibited self-interest, it would come close to being animate, but its
> self-interest would have to extend to the desire to preserve its form:
> repair itself, or even reproduce, which would require eating, excreting,
> etc. In other words, it would be alive.

Exactly. Frankenstein's monster.

Norbert Wiener: "The hour is very late, and the choice of good and evil
knocks at our door." (*The Human Use of Human Beings*, 1950, p. 213)

best


