• AppleTea@lemmy.zip · 5 hours ago

    The AI being hyped right now is not AI at all. It’s really important that we all acknowledge this: the world is selling itself a multi-billion-dollar lemon. These are predictive text engines with nothing intelligent about them. They’re giant sorting machines, which is why they’re so good at identifying patterns in scientific research and could genuinely advance medicine in wonderful ways. But what they cannot do is think, and as such, it’s a mass delusion that these systems have any use in our day-to-day lives beyond plagiarism.
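
    To make “predictive text engine” concrete, here’s a toy sketch in Python (a hand-built bigram table standing in for a real model, which does the same thing at vastly larger scale): generation is just repeated probability lookup and sampling, with no reasoning anywhere in the loop.

    ```python
    import random

    # Toy bigram "language model": each word maps to candidate next words
    # with weights. A real LLM produces these probabilities with a neural
    # network over a huge vocabulary, but generation works the same way.
    BIGRAMS = {
        "the": [("cat", 0.5), ("dog", 0.5)],
        "cat": [("sat", 0.7), ("ran", 0.3)],
        "dog": [("ran", 0.6), ("sat", 0.4)],
        "sat": [("down", 1.0)],
        "ran": [("away", 1.0)],
    }

    def generate(start: str, max_words: int = 5) -> str:
        words = [start]
        for _ in range(max_words):
            options = BIGRAMS.get(words[-1])
            if not options:
                break
            # Sample the next word purely from the probability table.
            nxt, = random.choices(
                [w for w, _ in options],
                weights=[p for _, p in options],
            )
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down"
    ```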

    Goddamn, a gaming outlet saying what the serious grown-up press should have been saying from the start!

    • AFK BRB Chocolate (CA version)@lemmy.ca · 3 hours ago

      I’m an old fart - I got my degree in CS in 1985, and I’ve been paying attention to the predictions and advancements in AI for a very long time. I have at least as much issue with the way people think and talk about it as the author, but probably less of an issue with it being called AI. Remember that for decades, the informal working definition of AI was “A computer doing anything that usually requires a human.” So for ages, they said we’d have AI if a computer could read a page of printed text out loud in English. That seemed almost unattainable when it was first talked about, but now it’s so trivial that no one would consider it AI.
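
      For a sense of how trivial that benchmark has become, here’s a minimal sketch (assuming the pytesseract OCR and pyttsx3 text-to-speech packages, plus a local Tesseract install, and a hypothetical page.png): reading a printed page out loud is now a couple of library calls.

      ```python
      # Minimal "read a printed page out loud" sketch.
      # Assumes: pip install pytesseract pyttsx3 pillow, plus the
      # Tesseract OCR engine installed on the system.
      import pytesseract
      import pyttsx3
      from PIL import Image

      text = pytesseract.image_to_string(Image.open("page.png"))  # OCR the page
      engine = pyttsx3.init()  # system text-to-speech
      engine.say(text)
      engine.runAndWait()
      ```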

      People have tried to come up with crisper definitions than that, but few if any of them require anything we’d call “thinking.” The frustrating thing is that the general public talks about AI all the time as if it’s conscious. Even when we’re talking about its flaws, we use words like “hallucinating,” which is something only thinking beings can do.

      To me, LLMs are the worst of the bunch, because to so many people they seem like they are (or could be) thinking entities. They respond to questions in a lifelike manner and can construct (extrapolate?) somewhat novel responses. But they’re also the least useful to us as a society. I’m much more interested in the machine learning applications that distill gobs of data to develop new medicines, or that identify critical items in images humans don’t have the mental bandwidth to review. But LLMs get all the press.
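
      As a sketch of that kind of ML application (hypothetical data, with scikit-learn’s IsolationForest standing in for whatever a real pipeline would use): flag the handful of anomalous samples in a big dataset so a human only has to look at those.

      ```python
      # Sketch: surface anomalous samples from a large dataset for human
      # review. Hypothetical data; a real medical or imaging pipeline
      # would use domain-specific features and models.
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      normal = rng.normal(0, 1, size=(10_000, 8))  # bulk of the data
      odd = rng.normal(4, 1, size=(10, 8))         # a few outliers
      data = np.vstack([normal, odd])

      model = IsolationForest(contamination=0.001, random_state=0)
      labels = model.fit_predict(data)             # -1 marks anomalies

      print(f"{(labels == -1).sum()} samples flagged for human review")
      ```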

      • krunklom@lemmy.zip · 1 hour ago

        No arguments with anything you wrote.

        I’d add that LLMs are increasingly the only way I can find useful technical information anymore.

        Of course this is solving a problem that shouldn’t fucking exist in the first place, and I still need to take that information back to a search engine to verify it and do actual research, which may be the point.

        Search is so. Fucking. Broken.

        • AFK BRB Chocolate (CA version)@lemmy.ca · 16 minutes ago

          I honestly never look at the AI results because they’re flawed so often. I don’t have much trouble finding answers with a standard search and then scrolling past the sources that are obviously crap. Worth noting, by the way, that search results, especially Google’s, were far more accurate several years ago, before there were so many sponsored results and agendas to push. So technologically it’s a fixable situation; it’s just an enshittification problem.