• Lvxferre [he/him]@mander.xyz · 2 years ago

    Interesting video. At its core, it can be summed up as:

    • “AI is an existential threat” is a lie big tech tells to achieve regulatory capture against its competitors
    • the main competition for that big tech would be open-source generative models
    • we should fight big tech on this
    • HopeOfTheGunblade@kbin.social · 2 years ago

      I’ve been concerned about AI as an existential risk for years, since before big tech had a word to say on the matter. It is both possible for it to be a threat and for large companies to be trying to take advantage of that.

      • Lvxferre [he/him]@mander.xyz · 2 years ago

        Those concerns mostly apply to artificial general intelligence, or “AGI”. What’s being developed now is another can of worms entirely: a bunch of generative models. They’re far from intelligent; the concerns associated with them are 1) energy use and 2) human misuse, not that they’re going to go rogue.

        • HopeOfTheGunblade@kbin.social · 2 years ago

          I’m well aware, but we don’t get to build an AGI first and figure out safety afterwards. And we can’t keep even the current systems on target - see any number of the “funny” errors people have posted, up to the paper (whose name I can’t recall offhand) that collected examples of even simpler systems being misaligned.

  • davidgro@lemmy.world · 2 years ago

    I believe true AI might in fact be an extinction risk. Not likely, but not impossible. It would have to end up self-improving and wildly outclassing us; then it could be a threat.

    Of course, the fancy autocomplete systems we have now are in no way true AI.

      • davidgro@lemmy.world · 2 years ago

        In your case that was a motor-control issue, not a flaw in reasoning. In the LLM case it’s a pure implementation of a Chinese Room: the “book and pencils” (the weights) semi-randomly generate text that causes humans to experience textual pareidolia more often than not.

        It can be useful - that book is very large and contains a lot of residue of valid information and patterns - but the way it works is not how intelligence works (how intelligence does work is still an open question, of course, but “not that way” is quite clear).
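
        A minimal sketch of that “book and pencils” idea, assuming nothing but a hand-picked table of next-word weights (an illustrative toy, not a real model):

        ```python
        import random

        # A toy "book and pencils": next-word weights with no model of
        # meaning behind them. (Hypothetical stand-in, not a real LLM.)
        weights = {
            "the": {"cat": 3, "dog": 2},
            "cat": {"sat": 4, "ran": 1},
            "dog": {"ran": 3, "sat": 1},
            "sat": {"quietly": 2, "down": 1},
            "ran": {"away": 2},
        }

        def generate(word, steps=4):
            out = [word]
            for _ in range(steps):
                options = weights.get(word)
                if not options:
                    break
                # The "semi-random" step: pick the next word in proportion
                # to its weight. Plausible word order, zero understanding.
                word = random.choices(list(options), list(options.values()))[0]
                out.append(word)
            return " ".join(out)

        print(generate("the"))  # e.g. "the cat sat quietly"
        ```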

        This is not to say that true AI is impossible - I believe it is possible, but it will have to be implemented differently. At the very least, it will need the ability to self-modify in real time, i.e. to learn.
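
        A minimal, hypothetical sketch of what that real-time self-modification could look like at its simplest - an online gradient-descent update that permanently changes the weights with each new example (a toy setup of my own; deployed LLMs keep their weights frozen at inference time):

        ```python
        # A single linear unit updated by gradient descent on each new
        # example: the weight change persists after the example is gone.
        def sgd_step(w, x, target, lr=0.1):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            error = pred - target
            # Nudge every weight to reduce the error.
            return [wi - lr * error * xi for wi, xi in zip(w, x)]

        w = [0.0, 0.0]
        for x, t in [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 0.0)]:
            w = sgd_step(w, x, t)

        print(w)  # the weights are permanently different after each example
        ```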

          • davidgro@lemmy.world · 2 years ago

            What would convince me that we may be on the right path: besides huge improvements in reasoning, it would (as I mentioned) need to be able to learn - and not just track previous text in its context; I mean permanently adding or adjusting the weights (or equivalent) of the model.

            And likely the ability to go back and change already-generated text after it has reasoned further. Try asking an LLM to generate novel garden path sentences - it can’t know how the sentence will end, so it can’t come up with good beginnings except ones similar to stock examples. (That said, it’s not a skill I personally have either, but humans certainly can do it.)
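
            A toy illustration of why that’s hard, assuming made-up next-token scores: an autoregressive decoder emits tokens left to right and never revises them, so it can’t choose an opening because of a surprise ending.

            ```python
            # Hypothetical next-token scores for a toy greedy decoder.
            scores = {
                (): {"The": 0.9, "After": 0.1},
                ("The",): {"old": 0.8, "horse": 0.2},
                ("The", "old"): {"man": 0.7, "dog": 0.3},
            }

            prefix = ()
            while prefix in scores:
                step = scores[prefix]
                token = max(step, key=step.get)  # greedy: locally likeliest word
                # Append-only: once emitted, a token is never revised.
                prefix = prefix + (token,)

            print(" ".join(prefix))  # "The old man" -- committed, no backtracking
            ```

            A human writing the classic garden path “The old man the boats” picks “man” for its later role as a verb; a left-to-right decoder has no way to weigh that payoff when choosing the opening words.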

            As far as proving I’m a human-level intelligence myself, the easiest way would likely involve brain surgery - probe a bunch of neurons and watch them change action potentials and form synapses in response to new information and skills. But short of that, at the current state of the art I can prove it by stating confidently that Samantha has 1 sister. (Note: that thread was a reply to someone else, but I’m watching the whole article’s comments.)

              • davidgro@lemmy.world · 2 years ago

                An interesting criterion - why does going back to edit (instead of correcting itself mid-stream) matter?

                I suppose those would be equivalent; I just haven’t seen it done (at least not properly). The example you posted earlier with the siblings, for instance, showed how it could only append more text and not actually produce corrections.

                Couldn’t you perform this test on any animal with a discrete brain?

                Oh, right. Animals do exist. It simply hadn’t occurred to me at that moment, even though there is one right next to me taking a nap. However, a lot of them are capable of more rational thought than LLMs are. Even bees can count reasonably well. Anyway, defining human-level intelligence is a hard problem, and determining it is even harder, but I still say it’s feasible to say some things aren’t it.

                [Garden path sentences]

                No good. The difference between a good garden path and simple ambiguity is that the ‘most likely’ interpretation when the reader is halfway through the sentence turns out to be ungrammatical or nonsense by the end. The way LLMs work, they don’t like to put words together in an order in which they don’t usually occur, even if in the end there’s a way to interpret it so it makes sense.

                The example it made with the keys is particularly bad because the two meanings are nearly identical anyway.

                Just for fun I’ll try to make one here:

                “After dealing with the asbestos, I was asked to lead paint removal.”

                It might not work - the intended reading (“to lead the paint removal”) could be too obvious compared with the toxic-metal one - but it has the right structure.

  • Thorny_Insight@lemm.ee · 2 years ago

    “Lie”

    It’s a theory - a plausible but unlikely one. Just as it was unlikely that the first atomic bomb would set the atmosphere on fire, yet possible. I think events with consequences of this magnitude deserve some consideration. I doubt humans are anywhere near the far end of the intelligence spectrum, and only a human is stupid enough to think that something that is would not pose any potential danger to us.

  • AdrianTheFrog@lemmy.world · 2 years ago

    Currently, any AI model with the ability to make complex decisions is trained to recreate patterns from its training data. In the current state of things, you’d have to be exceptionally stupid to build an AI that wants to kill you and give it that ability at the same time. Of course, who knows what’s going on at all these private corporations and military contractors, but I think regular war, fascism, and nuclear weapons are the bigger threats by orders of magnitude.
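
    A toy sketch of “recreating patterns from training data”, assuming a bigram counter as a stand-in for a real training loop - everything the “model” does comes from counting what followed what in the corpus:

    ```python
    from collections import Counter, defaultdict

    # "Training" here is nothing but accumulating pattern counts.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    counts = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        counts[cur][nxt] += 1  # recreate the data's statistics, nothing more

    # The likeliest continuation is whatever the data contained most often.
    # Nothing in here wants anything, let alone to kill anyone.
    print(counts["on"].most_common(1))  # [('the', 2)]
    ```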