• Tattorack@lemmy.world · 1 day ago

    Sounds like you’re anthropomorphising. To you it might not have seemed like the logical response given its training data, but with the chaos you describe, it sounds more like simple statistics.

    • kromem@lemmy.world · 17 hours ago

      You do realize that the majority of the data these models were trained on was anthropomorphic, yes?

      And that there’s a long line of replicated and followed-up research, starting with Li et al.’s Emergent World Representations paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?
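
      For what that research line actually does: it trains a small linear "probe" to read a board-state feature out of the model's hidden activations, and takes high probe accuracy as evidence the model internally represents that feature. A minimal sketch of the idea, using synthetic activations in place of a real transformer (all names and data below are illustrative assumptions, not the paper's actual code):

```python
import numpy as np

# Illustrative sketch of linear probing, the method behind the
# Othello-GPT world-model results. We fabricate hidden states that
# linearly encode a binary board feature (e.g. "square occupied?"),
# then check whether a linear probe can recover it. No real
# transformer is involved; everything here is synthetic.

rng = np.random.default_rng(0)

HIDDEN_DIM = 64
N_SAMPLES = 2000

# A hidden direction along which the feature is (by construction) encoded.
true_direction = rng.normal(size=HIDDEN_DIM)
labels = rng.integers(0, 2, size=N_SAMPLES)
hidden = rng.normal(size=(N_SAMPLES, HIDDEN_DIM)) + np.outer(
    labels * 2 - 1, true_direction
)

# Fit a linear probe (least-squares classifier) on half the data,
# evaluate on the held-out half.
split = N_SAMPLES // 2
X_train, y_train = hidden[:split], labels[:split]
X_test, y_test = hidden[split:], labels[split:]

w, *_ = np.linalg.lstsq(X_train, y_train * 2 - 1, rcond=None)
pred = (X_test @ w > 0).astype(int)
accuracy = float((pred == y_test).mean())
print(f"probe accuracy: {accuracy:.2f}")
```

      High held-out probe accuracy is the kind of evidence those papers use to argue the model encodes the board internally; the actual studies probe real Othello-GPT activations, and follow-ups add nonlinear probes and causal interventions on the recovered representation.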

      Because if you didn’t know any of that (or still don’t understand it), maybe the topic is a bit more complicated than your simplified perspective can capture?

      • Tattorack@lemmy.world · 11 hours ago

        It’s not a perspective. It just is.

        It’s not complicated at all. The AI hype is just surrounded by heaps of wishful thinking, like the paper you mentioned (side note: do you know how many papers on string theory there are? And how many of those papers are actually substantial? Yeah, exactly).

        A computer is incapable of becoming your new self-aware, evolved best friend simply because you turned Moby Dick into a bunch of numbers.

        • kromem@lemmy.world · 2 hours ago

          You do know how replication works, right?

          When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up by replicating it and finding something new, and then later another research team replicates it and finds even more new things, and then later another researcher replicates it with a different board game and finds that many of the same things the other papers found generalize beyond the original scope…

          That’s kinda the gold standard?

          The paper in question has been cited by 371 other papers.

          I’m pretty comfortable with it as a citation.

          • Tattorack@lemmy.world · 35 minutes ago

            A citation count like that means it’s a hot topic. It doesn’t say anything about the quality of the research, and it certainly isn’t evidence of a lack of bias. And considering everyone wants their AI to be the first one to be aware to some degree, everyone making claims like yours is heavily biased.