A specific example from me would be implementing an LLM into my code (generically), and without more details than that I’ll get people demanding that I not do that and offering suggestions for what I should do instead.

Suggestions are cool, but when I ask why I shouldn’t put an LLM in my code in a generic sense, my question gets ignored, or lies and insults get hurled my way.

It’s cool if you want to answer that question, but I’m mostly curious about other people’s similar stories of receiving resistance to follow-up questions. Did you just have to decide those people weren’t worth it, or did you feel like you missed something you shouldn’t have in those situations?

  • rbn@sopuli.xyz · 2 days ago

    Regarding your specific example, there are pretty good reasons not to use AI if there’s an adequate alternative, so I can absolutely understand people arguing against that.

    AI is resource intensive and thus bad for the environment. Results usually aren’t deterministic, so the behavior is no longer reproducible. If there is a defined algorithm to solve the issue in a correct way, AI will be less accurate. If you use cloud services, you may run into privacy issues.

    Not saying there aren’t any use cases for LLMs or other forms of AI. But just applying it everywhere 'cause it’s fancy, is not a good idea.

    In general, I appreciate if people question my work or come up with proposals for improvement as long as it’s polite and the person is at least qualified to some degree. However, that does not mean that I change my mind immediately and follow their advice.

    • PixelPilgrim@lemmings.worldOP · 2 days ago

      Yeah, if you have a better way of doing something with no drawbacks, you should do that; I’ll just say that out of pure reason.

      Thinking about deterministic results: I can imagine flawed code that deterministically gives a wrong result for 1 out of its thousands of potential outputs, and you can decide that 1 wrong answer is (a) not a big enough flaw to fix (the code is good enough), or (b) not worth fixing since it’s rare (too much effort to fix). How that applies to LLMs is that you can look at what the LLM outputs and determine whether its execution is good enough or not.

      Using a lot of resources at the cost of the environment is more of a values thing. Cyanobacteria didn’t care about poisoning the environment with oxygen. Ironically, I don’t think the electric grid should be restructured for AI, since I don’t think it’s doing anything important enough to warrant changing the electrical grid.

      I would care if someone was rude or unqualified on an issue. I’d want to know why something I did was wrong, either technically or morally, or if there’s a better way of doing it and why it’s better.

      • crusa187@lemmy.ml · 1 day ago

        I would care if someone was rude or unqualified on an issue

        Would you? Your tone reads as fairly rude in this post, and your qualifications seem quite lacking if you don’t even comprehend the dire environmental impact and obvious drawbacks of the vast majority of contemporary AI big compute. For that matter, most LLM outputs are not deterministic, especially with certain configurations, e.g. high temperature, so I don’t even follow your contrived example here. Consider that Cyanobacteria are unaware of their environmental impact — humans are not so ignorant, unless they choose to be.
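        To unpack the temperature point, here is a minimal sketch of how temperature sampling works in most LLM decoders (the function name and logit values are made up for illustration; this is not any particular model’s API). Temperature divides the logits before softmax: at temperature 0 (treated as greedy argmax) the output is fully deterministic, while higher temperatures flatten the distribution so repeated runs can pick different tokens.

        ```python
        import math
        import random

        def sample_with_temperature(logits, temperature, rng):
            """Sample a token index from logits after temperature scaling.

            temperature == 0 is treated as greedy argmax (deterministic);
            higher temperatures flatten the distribution, so repeated
            runs can choose different indices.
            """
            if temperature == 0:
                return max(range(len(logits)), key=lambda i: logits[i])
            scaled = [l / temperature for l in logits]
            m = max(scaled)  # subtract max for numerical stability
            exps = [math.exp(s - m) for s in scaled]
            total = sum(exps)
            r = rng.random()
            cumulative = 0.0
            for i, e in enumerate(exps):
                cumulative += e / total
                if r < cumulative:
                    return i
            return len(exps) - 1

        logits = [2.0, 1.0, 0.5]
        rng = random.Random()
        greedy = {sample_with_temperature(logits, 0, rng) for _ in range(100)}
        hot = {sample_with_temperature(logits, 2.0, rng) for _ in range(100)}
        # greedy always contains only index 0 (the argmax);
        # hot usually visits several different indices
        ```

        So a "deterministic LLM" is only the special case of greedy decoding; with the sampling settings most deployments actually use, the same prompt can yield different outputs on each run.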

        • PixelPilgrim@lemmings.worldOP · 21 hours ago

          I fucked up; I meant I wouldn’t care if someone is rude or unqualified. Also, forming attacks on me based on things I said is hilarious. I don’t even bother defending myself to people like you, mostly because you don’t want to hear me out.