AI Thinks Like a Corporation/Death of Insects
Mark Kohut
mark.kohut at gmail.com
Wed Nov 28 16:54:22 CST 2018
Sorta like what we call living, eh?
Sent from my iPhone
> On Nov 28, 2018, at 5:15 PM, John Bailey <sundayjb at gmail.com> wrote:
>
> It's worth reading that link that Kevin posted from Wired - it's on a
> Nobel-tipped scientist who has come up with the model of "active
> inference", which I'm thinking is like curiosity for AI. So rather
> than just solving tasks we set the AI, it actively tries to explore its
> circumstances so it'll be better equipped when a task does come along.
> And building that curiosity into an AI makes it perform significantly
> better than a standard reward-based AI.
> Fascinating, especially since the scientist argues that every living
> thing functions in a similar way, trying to reduce the gap between our
> expectations and our perceptions.
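That curiosity mechanism can be sketched in a toy form (my own minimal illustration, not code from the Wired article): give the agent an intrinsic reward equal to how badly its internal model predicts the world, so it keeps steering toward whatever it understands least -- and ends up with a decent model of everything.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 5
true_next = rng.integers(0, n_states, size=n_states)   # fixed environment transitions
model = np.full((n_states, n_states), 1.0 / n_states)  # agent's learned P(next | state)

state = 0
for _ in range(500):
    nxt = int(true_next[state])
    # Learn: nudge the model toward the transition actually observed.
    model[state] *= 0.9
    model[state, nxt] += 0.1
    # Intrinsic "curiosity" reward per state: 1 - probability the model
    # assigns to what really happens there (i.e. its prediction error).
    # (A toy shortcut: we peek at true_next; a real agent would have to
    # estimate this from its own surprise.)
    surprise = 1.0 - model[np.arange(n_states), true_next]
    # A purely curious policy: visit the state it currently predicts worst.
    state = int(np.argmax(surprise))

# Curiosity alone spreads visits until the model predicts well everywhere.
print(np.round(model[np.arange(n_states), true_next], 3))
```

The point of the toy is only that minimizing surprise, not any external task reward, is what drives the exploration -- the gap-closing between expectation and perception described above.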
>> On Thu, Nov 29, 2018 at 8:19 AM Arthur Fuller <fuller.artful at gmail.com> wrote:
>>
>> .. The AI successes that impress me more are those that develop hypotheses from sections of our knowledge, hypotheses that can be usefully tested to grow real world knowledge.
>>
>> I wish that I could hope for humans to pass this tough test that you propose. In my experience, few if any can meet your measure, myself included. So what 1% of the 1% are you describing?
>>
>>> On Wed, Nov 28, 2018 at 5:58 PM Peter Brawley <peter.brawley at gmail.com> wrote:
>>>
>>>> On Wed, 28 Nov 2018 at 03:00, Arthur Fuller <fuller.artful at gmail.com> wrote:
>>>>
>>>> There is an old religious/philosophical question, originally from old Jewish theology I think: if God is all-powerful, can he create something greater than Himself? Applied to AI, this question describes what Ray Kurzweil calls The Singularity. One has only to look at AlphaGo to see this. The original AlphaGo soundly thumped the world's best Go player, after having taught itself to play the game in two weeks, playing against itself. Its successor, AlphaGo Zero, played a 100-game match against its progenitor, winning 100 games to zero.
>>>> One can generalize this phenomenon: an AI will design and build its own successor, and once that happens, further growth will proceed exponentially. Kurzweil defined The Singularity as the moment when AI becomes smarter than its creators. Once that happens -- and I (and others) believe it surely will -- all bets, and all considerations about our well-being, are off.
>>>
>>>
>>> We can generalise anything. The ones worth remembering will be those very few that prove out empirically.
>>>
>>> Some specific AI machines can build their own successors. AI in general is already continuously building its own successor, but AI in general is mainly human. And that AI isn't just far away from emulating human thought. It's hardly begun.
>>>
>>> Winning games is fun. A specific AI machine winning N games in a row is a far cry from analysing & managing the whole real world. The AI successes that impress me more are those that develop hypotheses from sections of our knowledge, hypotheses that can be usefully tested to grow real world knowledge.
>>>
>>> PB
>>>
>>>
>>>>
>>>> Arthur
>>>>
>>>>> On Wed, Nov 28, 2018 at 5:27 AM John Bailey <sundayjb at gmail.com> wrote:
>>>>>
>>>>> I think what the article makes clear is that what "we" want from AI
>>>>> doesn't matter - as far as I know nobody on the P-list is leading that
>>>>> charge, but certain people are and we shouldn't talk about the
>>>>> "progress" or "evolution" of a particular technology as if it's
>>>>> ahistorical and inevitable.
>>>>>
>>>>> A practical example: there's a lot of talk about the ethics of
>>>>> automated cars, and what their algorithms should take into account
>>>>> when deciding who dies in a crash. From all I've read/heard the
>>>>> discussion comes down to utilitarian ethics, and what would be the
>>>>> greater good in such a situation. Utilitarian ethics treats people
>>>>> as mathematical variables and is far from the only ethical model that
>>>>> could be applied, but it's the model that makes the most sense from a
>>>>> programming standpoint, and perhaps the standpoint of a legal
>>>>> corporation trying to cover its posterior.
>>>>>
>>>>> Maybe the problem in AI thinking like a corporation is that
>>>>> corporations are very good at a lot of things (perpetuating their own
>>>>> survival, decentralised functioning, reorganising themselves to adapt
>>>>> to challenges, reducing individual culpability) but not so good at
>>>>> others (pretty much everything covered in the history of ethics).
>>>>>> On Wed, Nov 28, 2018 at 4:08 PM David Morris <fqmorris at gmail.com> wrote:
>>>>>>
>>>>>> Does anyone think AI would be better with a chaos quotient? I don't think so. So Predictable Intelligence is our real goal. We want *smart* servants, not intelligence. So, of course predictable AI will support corporate structures.
>>>>>>
>>>>>> It seems to me that AI is essentially imitative, not creative, not spontaneous. It isn't really intelligent. We don't want it to talk back or even question us. We won't ever tolerate that.
>>>>>>
>>>>>> David Morris
>>>>>>
>>>>>>> On Tue, Nov 27, 2018 at 9:47 PM Ian Livingston <igrlivingston at gmail.com> wrote:
>>>>>>>
>>>>>>> Yep. Chiming in with gratitude, Rick. Thanks.
>>>>>>> My answer to the concluding question is pending, though I tend toward the
>>>>>>> latter proposition.
>>>>>>>
>>>>>>>> On Tue, Nov 27, 2018 at 1:58 PM John Bailey <sundayjb at gmail.com> wrote:
>>>>>>>>
>>>>>>>> Thanks Rich, great read.
>>>>>>>>> On Wed, Nov 28, 2018 at 3:41 AM bulb <bulb at vheissu.net> wrote:
>>>>>>>>>
>>>>>>>>> Really excellent article, thank you Rich. Working for a company that is
>>>>>>>>> making massive investments in AI - this puts things in perspective.
>>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Pynchon-l <pynchon-l-bounces at waste.org> On Behalf Of rich
>>>>>>>>> Sent: Tuesday 27 November 2018 15:45
>>>>>>>>> To: "pynchon-l at waste.org" <pynchon-l at waste.org>
>>>>>>>>> Subject: AI Thinks Like a Corporation/Death of Insects
>>>>>>>>>
>>>>>>>>> thought you guys would be interested
>>>>>>>>> https://www.economist.com/open-future/2018/11/26/ai-thinks-like-a-corporation-and-thats-worrying
>>>>>>>>>
>>>>>>>>> like everything else these days we're dazzled by the science not knowing
>>>>>>>>> or caring about context, origins
>>>>>>>>>
>>>>>>>>> and this
>>>>>>>>> https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html?action=click&module=Top%20Stories&pgtype=Homepage
>>>>>>>>> --
>>>>>>>>> Pynchon-L: https://waste.org/mailman/listinfo/pynchon-l
>>>>>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Arthur
>>>
>>>
>>> --
>>> Peter Brawley
>>> www.artfulsoftware.com
>>>
>>> Where money is speech, speech isn't free.
>>
>>
>>
>> --
>> Arthur