AI Thinks Like a Corporation/Death of Insects
rich
richard.romeo at gmail.com
Wed Nov 28 09:15:07 CST 2018
Hi all,
Glad everyone was engaged by the article.
My only added comment is that no one referenced the NY Times piece re: the
disappearance of insects. I think that worries me more at the moment. And
what's interesting about that piece is the efforts of non-academics (I won't
call them amateurs), particularly in Germany, in furthering the research.
rich
On Wed, Nov 28, 2018 at 8:53 AM Arthur Fuller <fuller.artful at gmail.com>
wrote:
> Mark,
> I've played out the *Game of the Century*, and many other Fischer games, to
> my constant amazement. But all things considered, John Stuart Mill surely
> holds some sort of record; by the time he was three, he could read, write
> and speak English, Latin and Greek. I'm fluent in English, competent in
> French, and am now trying to learn Mandarin. At age 71, it's not easy. Ah
> well. As T.S. Eliot wrote:
> I grow old, I grow old,
> I shall wear the bottoms of my trousers rolled.
>
> On Wed, Nov 28, 2018 at 11:46 AM Mark Kohut <mark.kohut at gmail.com> wrote:
>
> > The anecdote of the Go champion being STUNNED, STUNNED I TELL YOU, by an
> > *early* move where the stone was placed way out of range of where the
> > action was happening is ... something.
> >
> > In chess, which I know a little, the AIs recapitulated the whole history
> > of various openings in days and hours as well, with increasingly
> > perfectible moves and, of course, never a blunder but "surprises" at the
> > grandmaster level only in the perfection of conceived sequences (hard at
> > that level to be in a position for a surprise as in Fischer's Game of the
> > Century).
> >
> > On Wed, Nov 28, 2018 at 6:32 AM Arthur Fuller <fuller.artful at gmail.com>
> > wrote:
> >
> >> From an article on GeeksAreSexy.net:
> >>
> >> "By starting without any human-like preconceptions, AlphaGo Zero was
> able
> >> to develop strategies more suited to its capabilities. It still needs
> to be
> >> tested against human players, but one expert who analyzed the
> >> inter-computer games says it used techniques he had never previously
> seen.
> >>
> >> Google’s hope is that such an approach might work in other areas of
> >> artificial intelligence, with computers that develop techniques and
> >> procedures that make best use of a computer’s capacity rather than
> trying
> >> to refine the way human brains approach tasks."
> >>
> >> This is analogous to the history of man's attempts to fly. Early attempts
> >> were modeled on birds; men with wings attached by harness, etc. Then came
> >> the Wright brothers, and look what happened in 115 years, 1903 to last week
> >> -- 852 feet back then, Mars last week (54.6 million km).
> >>
> >> Similarly, by teaching itself to play Go, AlphaGo Zero bypassed human
> >> preconceptions about the game, and came up with moves never before seen. AI
> >> will do the same, I think.
> >>
> >> On Wed, Nov 28, 2018 at 9:41 AM Mark Kohut <mark.kohut at gmail.com> wrote:
> >>
> >>> "Dave, I'm sorry. I can't do that, Dave"...
> >>>
> >>> On Wed, Nov 28, 2018 at 4:22 AM John Bailey <sundayjb at gmail.com> wrote:
> >>>
> >>>> Arthur's on the money. I was an AI skeptic like David for a long time,
> >>>> until I learned the current prevailing method of development:
> >>>> essentially pitting two AIs against one another, each trying to
> >>>> convince the other that it is "real", although the criteria for that
> >>>> will vary. And each learns from the other's failures, and does a bit
> >>>> better, and so on and so on in a reciprocal manner that is only
> >>>> limited by the available computing power and electricity. So yes, DM, they're
> >>>> already talking among themselves, so to speak. But they can have
> >>>> centuries of conversations in seconds.
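
To make that back-and-forth concrete, here is a minimal sketch of the
adversarial setup John is describing, assuming PyTorch is available. The names
(G for the forger, D for the critic), the tiny network sizes, and the toy
target distribution are illustrative assumptions, not anything from the
thread; real systems differ enormously in scale, but the loop has the same
shape: each side updates on the other's mistakes.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_data(n):
        # "reality" the forger is trying to imitate: samples from N(4.0, 1.5)
        return torch.randn(n, 1) * 1.5 + 4.0

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # forger
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # critic

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(5000):
        # the critic learns from the forger's current failures
        real = real_data(64)
        fake = G(torch.randn(64, 8)).detach()
        loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

        # the forger learns from the critic's verdicts: try to be scored as "real"
        fake = G(torch.randn(64, 8))
        loss_G = bce(D(fake), torch.ones(64, 1))
        opt_G.zero_grad()
        loss_G.backward()
        opt_G.step()

    # the forged samples should have drifted toward the real mean (~4.0)
    print(G(torch.randn(1000, 8)).mean().item())

After a few thousand rounds the forger's samples cluster near the real
distribution; scaled up, that reciprocal pressure is the "conversation"
limited only by compute and electricity.
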
> >>>> On Wed, Nov 28, 2018 at 8:00 PM Arthur Fuller <fuller.artful at gmail.com> wrote:
> >>>> >
> >>>> > There is an old religious/philosophical question, originally from old
> >>>> > Jewish theology I think: if God is all-powerful, can he create something
> >>>> > greater than Himself? Applied to AI, this question describes what Ray
> >>>> > Kurzweil calls The Singularity. One has only to look at AlphaGo to see
> >>>> > this. The original AlphaGo soundly thumped the world's best Go player,
> >>>> > after having taught itself to play the game in two weeks, playing against
> >>>> > itself. Its successor, AlphaGo Zero, played a 100-game match against its
> >>>> > progenitor, with a result of 100 games to zero.
> >>>> > One can generalize this phenomenon: an AI will design and build its
> >>>> > own successor, and once that happens, further growth will proceed
> >>>> > exponentially. Kurzweil defined The Singularity as the moment when AI
> >>>> > becomes smarter than its creators. Once that happens -- and I (and others)
> >>>> > believe it surely will -- then all bets, and all considerations about our
> >>>> > well-being, are off.
> >>>> >
> >>>> > Arthur
> >>>> >
> >>>> > On Wed, Nov 28, 2018 at 5:27 AM John Bailey <sundayjb at gmail.com> wrote:
> >>>> >>
> >>>> >> I think what the article makes clear is that what "we" want from AI
> >>>> >> doesn't matter - as far as I know nobody on the P-list is leading that
> >>>> >> charge, but certain people are, and we shouldn't talk about the
> >>>> >> "progress" or "evolution" of a particular technology as if it's
> >>>> >> ahistorical and inevitable.
> >>>> >>
> >>>> >> A practical example: there's a lot of talk about the ethics of
> >>>> >> automated cars, and what their algorithms should take into account
> >>>> >> when deciding who dies in a crash. From all I've read/heard, the
> >>>> >> discussion comes down to utilitarian ethics, and what would be the
> >>>> >> greater good in such a situation. But utilitarian ethics treats people
> >>>> >> as mathematical variables and is far from the only ethical model that
> >>>> >> could be applied; it's simply the model that makes most sense from a
> >>>> >> programming standpoint, and perhaps from the standpoint of a legal
> >>>> >> corporation trying to cover its posterior.
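
A deliberately crude sketch of John's point, with an invented scenario and
made-up probabilities and casualty counts: once people become variables, the
"ethical" decision is a one-line minimization, which is arguably exactly why
the utilitarian framing appeals from a programming and liability standpoint.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        probability: float  # chance this outcome occurs if the option is chosen
        casualties: int     # people harmed in that outcome

    def expected_harm(outcomes):
        # the whole "ethics" of the policy is this one line of arithmetic
        return sum(o.probability * o.casualties for o in outcomes)

    # invented scenario: the numbers are placeholders, not real risk estimates
    options = {
        "brake_straight": [Outcome("hits crossing group", 0.8, 3),
                           Outcome("stops in time", 0.2, 0)],
        "swerve_left": [Outcome("hits single pedestrian", 0.9, 1),
                        Outcome("misses everyone", 0.1, 0)],
    }

    best = min(options, key=lambda name: expected_harm(options[name]))
    print(best, {name: round(expected_harm(o), 2) for name, o in options.items()})
    # prints: swerve_left {'brake_straight': 2.4, 'swerve_left': 0.9}

The arithmetic is trivial; everything contestable is hidden in how the
outcomes, probabilities, and "casualty" counts were chosen in the first place.
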
> >>>> >>
> >>>> >> Maybe the problem with AI thinking like a corporation is that
> >>>> >> corporations are very good at a lot of things (perpetuating their own
> >>>> >> survival, decentralised functioning, reorganising themselves to adapt
> >>>> >> to challenges, reducing individual culpability) but not so good at
> >>>> >> others (pretty much everything covered in the history of ethics).
> >>>> >> On Wed, Nov 28, 2018 at 4:08 PM David Morris <fqmorris at gmail.com> wrote:
> >>>> >> >
> >>>> >> > Does anyone think AI would be better with a chaos quotient? I
> >>>> >> > don't think so. So Predictable Intelligence is our real goal. We want
> >>>> >> > *smart* servants, not intelligence. So, of course, predictable AI will
> >>>> >> > support corporate structures.
> >>>> >> >
> >>>> >> > It seems to me that AI is essentially imitative, not creative, not
> >>>> >> > spontaneous. It isn't really intelligent. We don't want it to talk back or
> >>>> >> > even question us. We won't ever tolerate that.
> >>>> >> >
> >>>> >> > David Morris
> >>>> >> >
> >>>> >> > On Tue, Nov 27, 2018 at 9:47 PM Ian Livingston <igrlivingston at gmail.com> wrote:
> >>>> >> >>
> >>>> >> >> Yep. Chiming in with gratitude, Rick. Thanks.
> >>>> >> >> My answer to the concluding question is pending, though I tend
> >>>> >> >> toward the latter proposition.
> >>>> >> >>
> >>>> >> >> On Tue, Nov 27, 2018 at 1:58 PM John Bailey <sundayjb at gmail.com> wrote:
> >>>> >> >>
> >>>> >> >> > Thanks Rich, great read.
> >>>> >> >> > On Wed, Nov 28, 2018 at 3:41 AM bulb <bulb at vheissu.net> wrote:
> >>>> >> >> > >
> >>>> >> >> > > Really excellent article, thank you Rich. Working for a
> >>>> >> >> > > company that is making massive investments in AI - this puts
> >>>> >> >> > > things in perspective.
> >>>> >> >> > >
> >>>> >> >> > > -----Original Message-----
> >>>> >> >> > > From: Pynchon-l <pynchon-l-bounces at waste.org> On Behalf Of rich
> >>>> >> >> > > Sent: Tuesday 27 November 2018 15:45
> >>>> >> >> > > To: "pynchon-l at waste.org" <pynchon-l at waste.org>
> >>>> >> >> > > Subject: AI Thinks Like a Corporation/Death of Insects
> >>>> >> >> > >
> >>>> >> >> > > thought you guys would be interested
> >>>> >> >> > >
> >>>> >> >> > > https://www.economist.com/open-future/2018/11/26/ai-thinks-like-a-corporation-and-thats-worrying
> >>>> >> >> > >
> >>>> >> >> > > like everything else these days we're dazzled by the science,
> >>>> >> >> > > not knowing or caring about context, origins
> >>>> >> >> > >
> >>>> >> >> > > and this
> >>>> >> >> > >
> >>>> >> >> > > https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html?action=click&module=Top%20Stories&pgtype=Homepage
> >>>> >
> >>>> >
> >>>> >
> >>>> > --
> >>>> > Arthur
> >>>> >
> >>>>
> >>>
> >>
> >> --
> >> Arthur
> >>
> >>
>
> --
> Arthur
> --
> Pynchon-L: https://waste.org/mailman/listinfo/pynchon-l
>