• DaddleDew@lemmy.world · +11 · 2 days ago (edited)

    The whole technology is based on a flawed simulation of intelligence. It has no understanding of the meaning of what it is saying. It will always be bullshitting by design. It will just get better at it.

    • mindbleach@sh.itjust.works · +1 · 5 hours ago

      This kind of assertion wildly overestimates how well we understand intelligence.

      Higher levels of bullshitting require more abstraction and self-reference. Meaning must be inferred from observation, to make certain decisions, even when picking words from a list.

      Current models are abstract enough to see a chessboard in an Atari screenshot, figure out which pieces each jumble of pixels represents, and provide a valid move. Scoffing because it’s not actually good at chess is a bizarre line to draw, to say there’s zero understanding involved.

      Current models might be abstract enough to teach them a new game by explaining the rules.

      Current models are not abstract enough to explain why they’re bad at a game and expect them to improve.

  • grue@lemmy.world · +14/−1 · 3 days ago

    WDYM, “still?” That implies they’re trying to make it something else, but they’re not.

  • Perspectivist@feddit.uk · +8/−16 · 3 days ago

    It’s like any other tool; it requires a user who knows how to use it and what its limitations are. It’s not as competent as Sam Altman wants you to believe, but it’s not as incompetent as the haters want you to believe either.

    • MCasq_qsaCJ_234@lemmy.zip · +3 · 2 days ago

      In short, GPT-5 improves at coding but shows no improvement in other areas, and the model saves OpenAI resources because it does not require more computational power than its previous models.

  • Lembot_0004@discuss.online · +6/−29 · 3 days ago (edited)

    GPT-5 struggles

    GPT has helped me many more times, and incredibly faster, than forum humans.

    Most people are just too dumb to use GPT effectively. They assume that if the system lets them enter vague, typo-riddled queries, then that is how it should be used. Most people ask ChatGPT nonsense questions and expect something “wow”. No. It doesn’t work that way.

    Here is an example of a query that gets a useful answer:

    C++. SQLite. sqlite3_open() returns 14. Code is run with root privileges.

    Here is an example of how imbeciles ask:

    My program can’t create DB in Linux. Help, pretty please

    • pearcake@sh.itjust.works · +24/−2 · 3 days ago

      But LLMs are tools for imbeciles. If you can formulate your question correctly, you can just google it and get similar results from Stack Overflow or Reddit. LLMs search their internal training archive or use live Google search anyway, then output the results in a slightly different style, sometimes gluing answers together in a seemingly cohesive manner while leaving other context clues out of the picture.

      For example, when you search Google and visit the actual website with the source info, you can gauge how credible it is by looking at answer upvotes, comments, the date of the original answer, etc. LLMs strip that valuable information away and just hand you a castrated answer. Not to mention the limited context window of any LLM, which causes funny hallucinations if you overstretch it.

      LLMs are a solution in search of a problem: they cannot help dumb people, and they do not provide enough value to smart people.

      • Lembot_0004@discuss.online · +2/−19 · 3 days ago (edited)

        The 1st link has some useful information.
        The 2nd – useless.
        The 3rd – completely off-topic. You just didn’t understand the problem.
        So yes, ChatGPT is much, much better.
        Anyway, what’s your point? I’m not trying to compare you to ChatGPT.

        GPT-5 struggles with basic questions.

        I’m just saying that people struggle with using ChatGPT. ChatGPT doesn’t struggle with basic questions if you ask it in an adequate way.

        • pearcake@sh.itjust.works · +10/−2 · 3 days ago (edited)

          I just say that people struggle with using ChatGPT

          If people struggle with a product that is advertised as something that should help people talk to computers and get more productive with them, then it’s a failed product.

          • mindbleach@sh.itjust.works · +1/−1 · 5 hours ago

            Charles Babbage was once asked, ‘But if someone puts in the numbers wrong, how will your calculator get the right answer?’

            Using a chatbot to code is useful if you don’t know how to code. You still need to know how to chatbot. You can’t grunt at the machine and expect it to read your mind.

            Have you never edited a Google search, because the first try didn’t work?

            • dreadbeef@lemmy.dbzer0.com · +10 · 3 days ago (edited)

              How can chatgpt improve my software engineering life? I don’t mind the copilot PR stuff, but that is so small potatoes haha and saves me like an hour of my day reviewing junior code (which I should be doing by hand anyway, I don’t want to offload that to the toaster, I want my juniors knowing it was me reviewing their code). What else does chatgpt offer for me, an engineer who’s been doing this for over a decade now? I know how to create greenfield technology. I have created video conferencing applications that were HIPAA compliant, implemented authentication a bazillion times in every configuration, I’ve built a flight search and booking engine (with PCI compliance because American Express was a client), I’ve built CI tools for companies that automatically rewrote your PRs to use the company tech stack and committed on your behalf and fixed your PRs for you (we had a custom standard library that was easy to not use and our bot would rewrite your code using it).

              I’m just saying, I deal with problems that ChatGPT is horrible at solving. ChatGPT does not know how to build video conferencing (especially using web tech). It does not know how to do HIPAA compliance. It does not know about PCI compliance. It is only as good as its inputs, and for the problems I’m working on, what inputs does it have? It’s not like there are a million flight booking tools out there with source code available and extensive documentation (have you ever dealt with a GDS before? Sabre? Amadeus? Have you seen their docs? What inputs does ChatGPT have that I don’t?). These are all relatively solved problems, but ChatGPT’s inputs do not include closed-source software, which is where most of this stuff lives (and it’s often not on GitHub at all, so again, what are ChatGPT’s inputs?)

              It has to be able to do all of this while justifying its energy costs. I’m a human being first and foremost, and I see every day the destruction humans are doing to this planet. Lots of people are dying in floods in the US, and that is definitely human impact. I need to justify it.

            • snooggums@lemmy.world · +3/−1 · 2 days ago

              The point of chat bots is that you are not supposed to need to know how to use them.

        • dreadbeef@lemmy.dbzer0.com · +8/−1 · 3 days ago (edited)

          I don’t try to compare you to ChatGPT

          My boss does, and I’m a software engineer.

          I want chatgpt to remove code and keep tests passing. Why is that such a hard thing for AI? It cannot reliably remove useless code from my codebase without making tests fail. I want to see a PR from chatgpt with more lines removed than added. I want it to actually improve my codebase by removing the places where bugs can exist (lines of code). I want my tests to keep passing.