Rape (is also Re: Turing, A.I. and ESP
jporter
jp4321 at idt.net
Mon Jan 3 23:54:59 CST 2000
>Date: Tue, 4 Jan 2000 00:58:08 -0400
>To: rjackson at mail.usyd.edu.au
>From: jp4321 at idt.net (jporter)
>Subject: Re: Rape (is also Re: Turing, A.I. and ESP
>
>rj
>
>>My point is simply that, without the sender's volition (so not a
>>"sender" at all, but a dupe), which seems the more probable scenario,
>
>I don't see any evidence for the type of phenomenon you are describing,
>unless I have been duped!
>
>> it
>>*is* something like rape. If there are ESPies and non-ESPies,
>
>How about "adepts," it's easier to say.
>
>> as
>>Turing's comments and *personal* uncertainty imply, then the fact that
>>the latter are not cognizant of the former somewhat alters the
>>volitionality set-up you propose.
>
>I would suggest that the above is more an inference by you than an
>implication by Turing or Turing's uncertainty. But in a scenario where
>ability might be more a function of empathy, like the narrator's in GR for
>Pointsman and Slothrop, e.g., rather than of control, as Pirate's is-
>albeit because he's co-opted by Them- there would be a safeguard against
>abuse, not necessarily because of goodness or badness, but for physical
>reasons. I am agnostic w/r/t E.S.P., although there have been certain,
>call them coincidences, which I can deal with as either random, or as
>something more (less?) than random, and laugh either way; certainly
>non-threatening.
>
>>I'm not saying that all ESPies would
>>be evil, unscrupulous ESPies, mind you, just some. (Because, they'd all
>>still be "human".)
>
>Yes, but what if E.S.P. does turn out to require mutual volition?
>
>
>>> Further, what if it
>>> took work and practice to learn how to "clear" such a channel, make it
>>> operative, and send or receive telepathic communication between other
>>> willing individuals? How would you feel about it then?
>>
>>We're moving a little into the realm of transcendentalism and
>>spiritualism here I think.
>
>Not if the mutuality was necessary purely for physical reasons, i.e., to
>limit the noise.
>
>>Like I said, I'm merely positing an alternate
>>scenario. I root for the good guys (the ESPies) in *The Time Machine*
>>and *The Chrysalids* too.
>
>I'm not really sure I'm rooting for anyone, since we haven't established
>the existence of ESPies, either good or bad. But if E.S.P. happens to
>exist, why would it be any different from other human skills, which can go
>either way but, like language, are used best by the good guys, like us.
>
>>But I think the relationship between an ESPie
>>and a non-ESPie is, or more often than not would be, an unequal one;
>>though they could also function in complicity with one another in
>>certain (empirically-determined) situations.
>
>Again, you are assuming it can be a one way street...as if the "non-ESPie"
>had no will, in which case, why bother with something fuzzy like E.S.P.,
>Darth Vader could just Tell them what to do, or demand they tell him their
>thoughts, outright.
>
>>For instance,
>>non-volitional thoughts on the part of the non-ESPie pose another
>>variable in the example you cite. Also, once the channel has been opened
>>is it permanent, can it be closed by mutual consensus, or not? I can see
>>real problems at auto-teller machines. The happy-happy mutually-agreed
>>upon type of ESP is fine, but purely theoretical: I'm not so certain
>>about its practical operation for "real" human relationships.
>
>I was not claiming that E.S.P. existed in a particular way, but made those
>alternative proposals to see if you would still have reservations. I
>gather you wouldn't, but seem to hear you saying that the potential evil
>of such a phenomenon would cause you to slam shut any window that might be
>open. I was just suggesting that the window might not be able to be
>opened unless both sides wanted it opened, and had disciplined
>themselves to allow what must be a delicate link to be created.
>
>>
>>> But such a putative machine, unless programmed beforehand by intentional
>>> human agents, would need to have intentionality of its own: desire, will,
>>> etc., otherwise why would it bother?
>>
>>But isn't this the A.I. argument, the learning paradigm Turing is
>>proposing?
>>
>>> It would not care one way or the
>>> other. Unless it had self-interest it would just be clever as directed.
>>
>>Which is about where we are with A.I. now, aren't we?
>>
>
>Sort of, but Turing's test allows the humans to provide recognition of
>intelligence in the machine, because it is an "observer-relative"
>phenomenon. E.S.P., surprisingly, in the context of the test, is an
>objective, observer-independent phenomenon- i.e., it can be objectively
>measured rather than subjectively inferred from the logical coherence or
>context of the answers. E.S.P. need not be contextual or logical. It just
>can't be random.
>
>
>jody
>
>
>
More information about the Pynchon-l mailing list