• scintilla@beehaw.org
    3 days ago

    Can you not understand that not learning how to think for yourself is bad? Or did the AI you use to run your life tell you that not thinking anymore is more efficient?

    • P03 Locke@lemmy.dbzer0.com
      3 days ago

      I’m talking about the general sentiment from articles like this, not the article itself. The content of the article doesn’t really matter in the grand scheme of things.

      It’s the Constant. Fucking. Beratement. of the technology.

      Like, we fucking get it: You’re a technophobe and hate technology, and love to write articles that shit on LLMs, because that’s what gets clicks. And judging from the votes from this forum, most everybody falls for the clickbait, which then generates even more hateful articles because they know it gets them views.

      Meanwhile, out there in the real world, people go to work and use this sort of technology in their day-to-day jobs. There’s this extreme and jarring disconnect between public opinion, what the news reports, and what’s actually happening in real life. I feel like I’m watching Fox News half the time. It’s like all of these haters of LLMs suffer from a massive cognitive dissonance when they’re in the workplace. Or they’re so behind the times that they aren’t using this technology. Or they don’t even realize the things they use are running on this technology behind the scenes.

      • scintilla@beehaw.org
        3 days ago

        Like, we fucking get it: You’re a technophobe and hate technology.

        I don’t think you get it, or have even really bothered to read the articles, if that’s your takeaway.

        This is coming from a person who would probably get a gen 1 brain chip if they were actually useful like the people making them say they would be.

        It’s not that I hate technology; it’s that LLMs are effectively being forced on people, with bosses literally telling them they have to use the technology. There are genuinely companies that care more about their employees using LLMs than about them actually doing their jobs well, because they’re so invested in LLMs.

        There’s also evidence coming out suggesting that long-term use of LLMs is not good for brain health, because it involves offloading so much menial thinking. Without use your brain atrophies, and you probably won’t even notice it, because you are your brain.

        It will likely be a disaster for the already precarious state of literacy in the general public when people can just ask to have a message rewritten more simply whenever they can’t understand it, so they have no incentive to improve.

      • Cethin@lemmy.zip
        2 days ago

        Lol. Almost no one who has an opinion on AI is a technophobe. I’d argue it’s the opposite: to be informed enough to have an opinion, you must like technology. However, LLMs have been proven to be confidently inaccurate and misleading, and they create situations where people believe they’re correct when the model just made shit up.

        Sure, it helps you get an answer pretty quickly, but then you have to check it for accuracy (if you aren’t a fucking idiot who just trusts the thing innately). If you actually learn how to do things yourself, it doesn’t save any time most of the time. For example, I sometimes use a local model to write boilerplate code for me, but if I ask it to write anything that solves a problem, it’s almost never correct. Then I have to parse it and figure out what it was doing in order to fix it, when I could have just written it myself and been done.

        Yeah, it’s great if you’re an idiot who just wants to sound smart. It’ll give you an answer that seems reasonable, and you can move on with your day. However, there are very good odds it isn’t correct if it’s anything complex and/or niche. (I think I saw something not long ago claiming a 70% chance of being wrong.) And if it isn’t complex or is common, you didn’t need a fucking LLM to solve it.