AI Thinks Like a Corporation/Death of Insects
Mark Kohut
mark.kohut at gmail.com
Thu Nov 29 00:02:22 CST 2018
https://www.nytimes.com/2017/11/05/technology/machine-learning-artificial-intelligence-ai.html
https://nypost.com/2017/12/06/google-ai-built-ai-child-more-advanced-than-anything-humans-have-ever-made/
https://www.express.co.uk/news/science/887864/artificial-intelligence-google-brain-AI-ray-kurzweil
On Wed, Nov 28, 2018 at 7:25 PM Joseph Tracy <brook7 at sover.net> wrote:
> I agree with David Morris (a rare but not unknown phenom) in his apparent
> doubt that anything like creative intelligence is going on. AI is problem
> solving, programmed, designed and directed by humans. It shows the amazing
> versatility and reach of binary code. But what Arthur proposes, an AI
> designing and building something on its own, is a huge leap. So far no
> computer program or robot has designed or built anything it has not been
> directed to do. Computer learning is just advanced calculation based on
> memory combined with programmed game strategy.
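> To make that concrete, a toy sketch in Python of what I mean -- a memoized
> game-tree search for one-pile Nim. The cache is the memory, the exhaustive
> lookahead is the programmed strategy, and the game is invented purely for
> illustration, not anyone's actual system:
>
> from functools import lru_cache
>
> @lru_cache(maxsize=None)
> def value(stones):
>     """Outcome (+1 win, -1 loss) for the player about to move in one-pile
>     Nim: take 1, 2 or 3 stones; whoever takes the last stone wins."""
>     if stones == 0:
>         return -1   # the previous player took the last stone, so we lost
>     # Programmed strategy: try every legal move, assume best play in reply.
>     return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)
>
> # e.g. value(4) == -1 (a losing position), value(5) == +1 (a winning one)
>
> No creativity anywhere in there: a lookup table plus brute calculation.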
> The final sentiment of the article, that we can make AI humane, is a
> rather clichéd notion about technology. Corporations, too, are a kind of
> technology; any bets on corporations breaking from violent competition to
> launch a new era of corporations for justice and sustainability? And who,
> one wonders, is this "we" that can do so much better?
> Also, the article mentions the role of the corporate model but almost
> entirely ignores the AI growth industries of warfare, spying and policing.
>
> >
> > There is an old religious/philosophical question, originally from old
> > Jewish theology, I think: if God is all-powerful, can He create something
> > greater than Himself? Applied to AI, this question describes what Ray
> > Kurzweil calls The Singularity. One has only to look at AlphaGo to see
> > this. The original AlphaGo soundly thumped the world's best Go player,
> > after having taught itself to play the game in two weeks, playing against
> > itself. Its successor, AlphaGo Zero, played a 100-game match against its
> > progenitor, with a result of 100 games to zero.
> > One can generalize this phenomenon: an AI will design and build its own
> > successor, and once that happens, further growth will proceed
> > exponentially. Kurzweil defined The Singularity as the moment when AI
> > becomes smarter than its creators. Once that happens -- and I (and
> > others) believe it surely will -- all bets, and all considerations about
> > our well-being, are off.
> >
> > Arthur
> >
> > On Wed, Nov 28, 2018 at 5:27 AM John Bailey <sundayjb at gmail.com> wrote:
> >
> >> I think what the article makes clear is that what "we" want from AI
> >> doesn't matter. As far as I know nobody on the P-list is leading that
> >> charge, but certain people are, and we shouldn't talk about the
> >> "progress" or "evolution" of a particular technology as if it's
> >> ahistorical and inevitable.
> >>
> >> A practical example: there's a lot of talk about the ethics of
> >> automated cars, and what their algorithms should take into account
> >> when deciding who dies in a crash. From all I've read/heard the
> >> discussion comes down to utilitarian ethics, and what would be the
> >> greater good in such a situation. But utilitarian ethics treats people
> >> as mathematical variables, and it is far from the only ethical model
> >> that could be applied; it is simply the model that makes the most
> >> sense from a programming standpoint, and perhaps from the standpoint
> >> of a legal corporation trying to cover its posterior.
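> >> To put that crudely in code: the utilitarian framing reduces the whole
> >> decision to picking whichever manoeuvre minimises a single expected-harm
> >> number, with people entering only as weights in a sum. A toy sketch in
> >> Python -- every name and number below is invented for illustration, not
> >> anyone's actual system:
> >>
> >> def expected_harm(outcome):
> >>     """outcome: list of (probability_of_fatality, people_affected) pairs."""
> >>     return sum(p * n for p, n in outcome)
> >>
> >> def choose_manoeuvre(options):
> >>     """options: dict mapping a manoeuvre name to its predicted outcome."""
> >>     return min(options, key=lambda m: expected_harm(options[m]))
> >>
> >> # Hypothetical split-second scenario:
> >> options = {
> >>     "brake_straight": [(0.9, 2)],   # likely hits two pedestrians
> >>     "swerve_left":    [(0.5, 1)],   # may hit one cyclist
> >>     "swerve_right":   [(0.2, 1)],   # may injure the passenger
> >> }
> >> print(choose_manoeuvre(options))    # -> swerve_right
> >>
> >> The hard questions -- whose harm counts, and how much -- are all hidden
> >> inside those weights, which is exactly what makes the model attractive
> >> to a programmer and to a corporate lawyer.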
> >>
> >> Maybe the problem with AI thinking like a corporation is that
> >> corporations are very good at a lot of things (perpetuating their own
> >> survival, decentralised functioning, reorganising themselves to adapt
> >> to challenges, reducing individual culpability) but not so good at
> >> others (pretty much everything covered in the history of ethics).
> >> On Wed, Nov 28, 2018 at 4:08 PM David Morris <fqmorris at gmail.com> wrote:
> >>>
> >>> Does anyone think AI would be better with a chaos quotient? I don't
> >>> think so. So Predictable Intelligence is our real goal. We want *smart*
> >>> servants, not intelligence. So, of course, predictable AI will support
> >>> corporate structures.
> >>>
> >>> It seems to me that AI is essentially imitative, not creative, not
> >>> spontaneous. It isn't really intelligent. We don't want it to talk back
> >>> or even question us. We won't ever tolerate that.
> >>>
> >>> David Morris
> >>>
> >>> On Tue, Nov 27, 2018 at 9:47 PM Ian Livingston <igrlivingston at gmail.com> wrote:
> >>>>
> >>>> Yep. Chiming in with gratitude, Rich. Thanks.
> >>>> My answer to the concluding question is pending, though I tend toward
> >>>> the latter proposition.
> >>>>
> >>>>> On Tue, Nov 27, 2018 at 1:58 PM John Bailey <sundayjb at gmail.com> wrote:
> >>>>
> >>>>> Thanks Rich, great read.
> >>>>> On Wed, Nov 28, 2018 at 3:41 AM bulb <bulb at vheissu.net> wrote:
> >>>>>>
> >>>>>> Really excellent article, thank you Rich. Working for a company
> >>>>>> that is making massive investments in AI - this puts things in
> >>>>>> perspective.
> >>>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Pynchon-l <pynchon-l-bounces at waste.org> On Behalf Of rich
> >>>>>> Sent: Tuesday, 27 November 2018 15:45
> >>>>>> To: "pynchon-l at waste.org" <pynchon-l at waste.org>
> >>>>>> Subject: AI Thinks Like a Corporation/Death of Insects
> >>>>>>
> >>>>>> thought you guys would be interested
> >>>>>>
> >>>>>>
> >>>>>> https://www.economist.com/open-future/2018/11/26/ai-thinks-like-a-corporation-and-thats-worrying
> >>>>>>
> >>>>>> like everything else these days we're dazzled by the science not
> >>>>>> knowing or caring about context, origins
> >>>>>>
> >>>>>> and this
> >>>>>>
> >>>>>> https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html?action=click&module=Top%20Stories&pgtype=Homepage
> >>
> >
> >
> > --
> > Arthur
>
> --
> Pynchon-L: https://waste.org/mailman/listinfo/pynchon-l
>
More information about the Pynchon-l mailing list