• cm0002@lemmy.world (OP) · 24 hours ago

    That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO

    • Voroxpete@sh.itjust.works · 22 hours ago

      It really doesn’t. You’re just describing the “fancy” part of “fancy autocomplete.” No one was ever really suggesting that they only predict the next word. If that were the case, they would just be autocomplete, nothing fancy about it.

      What’s being conveyed by “fancy autocomplete” is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more “creative” (meaning more random, less probable) outputs. They do not actually “think” as we understand thought. This can clearly be seen in the examples given in the article, especially the ones to do with math. The model is throwing together elements that are statistically proximate to the prompt; it’s not actually applying a structured, logical method the way humans can be taught to.
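
      As a minimal sketch of that noise knob (usually called temperature), with a made-up three-word vocabulary, not any real model:

      ```python
      import numpy as np

      # Toy next-token scores ("logits") for an invented vocabulary.
      logits = {"cat": 2.0, "dog": 1.5, "pancake": -1.0}

      def sample_next_token(logits, temperature=1.0, rng=None):
          """Higher temperature flattens the distribution, so less
          probable ("creative") tokens get picked more often."""
          rng = rng or np.random.default_rng()
          tokens = list(logits)
          scores = np.array([logits[t] for t in tokens]) / temperature
          probs = np.exp(scores - scores.max())
          probs /= probs.sum()
          return rng.choice(tokens, p=probs)

      # temperature=0.1 almost always yields "cat";
      # temperature=2.0 yields "pancake" noticeably often.
      print(sample_next_token(logits, temperature=0.1))
      print(sample_next_token(logits, temperature=2.0))
      ```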

      • FourWaveforms@lemm.ee · 20 hours ago

        Unfortunately, these articles are often written by people who don’t know enough to realize they’re missing important nuances.

        • datalowe@lemmy.world · 9 hours ago

          It also doesn’t help that the AI companies deliberately use language to make their models seem more human-like and cogent. Saying that the model e.g. “thinks” in “conceptual spaces” is misleading imo. It abuses our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.

          On this point I can highly recommend this open-access article, which is also accessible language-wise: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)

      • reev@sh.itjust.works · 21 hours ago

        Genuine question regarding the rhyme thing: it can be argued that “predicting backwards isn’t very different,” but you can’t attribute generating the rhyme first to noise, right? So how does it “know” (for lack of a better word) to generate the rhyme first?

        • dustyData@lemmy.world · 20 hours ago

          It already knows which words are, statistically, most commonly rhymed with each other, from the massive set of training poems. This is what the massive datasets are for. One of the interesting things is that it’s not exactly predicting backwards. It’s actually mathematically converging on the response to the prompt, all the words at the same time.
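
          A toy illustration of that point (invented mini-corpus; real training is nothing this simple): merely counting which line-final words co-occur within a stanza already yields rhyme statistics.

          ```python
          from collections import Counter

          # Tiny made-up "training poems". No rhyme rules are given anywhere;
          # the model only sees which line-final words co-occur in a stanza.
          poems = [
              ["roses are red", "violets are blue",
               "sugar is sweet", "and so are you"],
              ["the night was blue", "I thought of you"],
          ]

          rhyme_counts = Counter()
          for poem in poems:
              finals = [line.split()[-1] for line in poem]
              for i, a in enumerate(finals):
                  for b in finals[i + 1:]:
                      rhyme_counts[frozenset((a, b))] += 1

          # ("blue", "you") comes out on top, so after a line ending in
          # "blue", ending a line with "you" is the statistical favourite.
          print(rhyme_counts.most_common(3))
          ```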

            • ThisIsNotHim@sopuli.xyz · 10 hours ago

              We also check whether the word that popped into our heads actually rhymes by saying it out loud. Having actual validation steps we can take is a bigger difference than just being a little more robust.

              We also have non-list-based methods, like breaking a word down into smaller chunks to try to build up hopefully more novel rhymes. I imagine professionals have even more tools, given the complexity of modern rhyme schemes.

    • Carrolade@lemmy.world · 23 hours ago

      Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.
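
      A crude sketch of that point (toy bigram counts over an invented corpus): filling in a middle word scores candidates against both neighbours using the very same statistics that predict the next word.

      ```python
      from collections import Counter

      corpus = "the cat sat on the mat the cat lay on the rug".split()
      bigrams = Counter(zip(corpus, corpus[1:]))
      vocab = set(corpus)

      def p(word, prev):
          """P(word | prev) from raw bigram counts (unsmoothed, toy-sized)."""
          total = sum(c for (a, _), c in bigrams.items() if a == prev)
          return bigrams[(prev, word)] / total if total else 0.0

      # Forward: most likely word after "the".
      next_word = max(vocab, key=lambda w: p(w, "the"))

      # Middle: most likely word between "cat" and "on".
      # Same counts, just scored against both neighbours.
      middle = max(vocab, key=lambda w: p(w, "cat") * p("on", w))
      print(next_word, middle)  # cat, then sat (or lay; they tie)
      ```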

      Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

      • Womble@lemmy.world · 22 hours ago

        Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

        Interesting that…

        Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

        • Carrolade@lemmy.world · 21 hours ago

          Yeah, I caught that too; I’d be curious to know more about what specifically they meant by that.

          Being able to link all of the words that have a similar meaning (say, nearby, close, adjacent, proximal, side-by-side, etc.) and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not: one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. This would not be abstract; it’d be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.
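
          For what it’s worth, the word-adjacency version can be sketched in a few lines (invented sentences; real embedding models are far more sophisticated):

          ```python
          import numpy as np

          # Words that appear in similar contexts end up with similar
          # co-occurrence vectors, with no notion of physical space at all.
          sentences = [
              "the shop is nearby the station",
              "the shop is close to the station",
              "the park is adjacent to the river",
              "the park is nearby the river",
          ]

          vocab = sorted({w for s in sentences for w in s.split()})
          index = {w: i for i, w in enumerate(vocab)}
          vectors = np.zeros((len(vocab), len(vocab)))

          for s in sentences:
              words = s.split()
              for i, w in enumerate(words):
                  for c in words[max(0, i - 2):i + 3]:  # +/-2 word window
                      if c != w:
                          vectors[index[w], index[c]] += 1

          def similarity(a, b):
              va, vb = vectors[index[a]], vectors[index[b]]
              return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

          # "nearby" scores as similar to "close" and "adjacent" purely
          # from shared neighbouring words.
          print(similarity("nearby", "close"), similarity("nearby", "adjacent"))
          ```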

          Ultimately, a person learns meaning and then applies language to it. As a baby, I see my mother and know my mother is something that exists. Then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar, despite having never seen anything that isn’t a word or number?

          • Womble@lemmy.world · 20 hours ago

            I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves before having heard of them.

            • Carrolade@lemmy.world · 20 hours ago

              Exactly. It’s sort of like a massively scaled-up version of the blind men and the elephant.

        • MTK@lemmy.world · 22 hours ago

          Yeah, but I think this is still the same, just not a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limit and they start producing mixed-language responses).

          But it still has limitations because of the structure of language. This is actually a thing that humans have as well: the limiting of abstract thought by thinking in an internal monologue.

          • Womble@lemmy.world · 20 hours ago

            Probably, given that LLMs only exist in the domain of language. Still, it’s interesting that they seem to have a “conceptual” system that is commonly shared between languages.

    • LarmyOfLone@lemm.ee · 12 hours ago

      I mean it implies that they CAN start with the conclusion or the “thought” and then generate the text to verbalize that.

      It’s shocking what lengths humans will go to in explaining how their wetware neural network is fundamentally different and how it’s impossible for LLMs to think or reason in any way. Honestly, LLMs teach us more about human intelligence (or the lack thereof) than machine intelligence. Like Obi-Wan said, “The ability to speak does not make one intelligent,” haha.

    • pelespirit@sh.itjust.works · 23 hours ago

      I read an article saying that it can “think” in small chunks. They don’t know how much, though. This was also months ago; it’s probably expanded by now.

      • Captain Poofter@lemmy.world · 22 hours ago (edited)

        Anything that claims it “thinks” in any way, I immediately dismiss as an advertisement of some sort. These models are doing very interesting things, but it is in no way “thinking” as a sentient mind does.

        • stephen01king@lemmy.zip · 10 hours ago

          Anybody who claims they don’t “think,” before we’ve even completely figured out how they work or even how human thought works, is just spreading anti-AI sentiment beyond what is considered logical.

          If you want to prove your own position on this matter, you should become a better example than an AI by arguing only from facts rather than from things you hallucinate.

        • LarmyOfLone@lemm.ee · 12 hours ago

          You know they don’t think, even though “it’s a peculiar truth that we don’t understand how large language models (LLMs) actually work”?

          It’s truly shocking to read this from a mess of connected neurons and synapses like yourself. You’re simply doing fancy prediction of the next word /s

        • pelespirit@sh.itjust.works · 22 hours ago

          I wish I could find the article. It was by researchers, and they were freaked out just as much as anyone else. The result was only slightly above chance that it “thought,” not some huge revolutionary leap.

          • Captain Poofter@lemmy.world · 22 hours ago

            There has been a flood of these articles. Everyone wants to sell their LLM as “the smartest one, closest to a real human,” even though the entire concept of calling them AI is a marketing misnomer.

            • pelespirit@sh.itjust.works · 22 hours ago

              Maybe? Didn’t seem like a sales job at the time, more like a warning. You could be right though.

    • Shanmugha@lemmy.world · 22 hours ago

      It doesn’t. Who the hell cares if someone allowed it to break “predict the whole text” into “predict part by part,” and then “with rhyme, we start at the end”? Sounds like a naive (not as in “simplistic,” but as in “most straightforward”) way to code this, so given the task of writing an automatic poetry producer, I would start with something similar; see the sketch below. The whole thing still stands as fancy auto-complete.
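
      Something like this naive version, say (everything here invented): fix the rhyming endings first, then generate each line to lead up to them.

      ```python
      import random

      # Naive rhyme-first couplet generator: pick the two line endings
      # first, then fill in the body of each line to lead up to them.
      rhyme_pairs = [("blue", "you"), ("night", "light"), ("away", "day")]
      fillers = ["the sky is", "I think of", "stars in the", "drifting far"]

      def couplet(rng=random):
          end_a, end_b = rng.choice(rhyme_pairs)
          # The endings exist before the rest of the lines do.
          return (f"{rng.choice(fillers)} {end_a}\n"
                  f"{rng.choice(fillers)} {end_b}")

      print(couplet())
      ```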

        • Shanmugha@lemmy.world · 5 hours ago

          Redditor as “a person active on Reddit”? I don’t see where I was talking about humans. Or am I misunderstanding the question?