• MTK@lemmy.world · 12 hours ago

    That’s the point though…

    Without censorship, it just does whatever it judges the best fit. That means if the AI decides that encouraging you to take drugs, commit suicide, or murder someone would fit best, then it will do exactly that.

    Any censored model would immediately catch this specific case and give a more “appropriate” response, such as “As an AI model I can’t help you with that…” But given a long and complex enough chat, even a censored model might slip past its censorship and give an inappropriate response.

    This was just an SFW example; the results would be the same even if I asked it truly terrible things.

    • Zetta@mander.xyz · 11 hours ago

      Brother, I’m aware of how it works. Most uncensored models made by the community, like the one you used, are made for sexual role playing; at least that’s the largest crowd of home users of uncensored LLMs, IMO. I’m not arguing with you about why the model does what it does; I’m saying this is the intended design for these models. No, it’s probably not great for wackos to play around with, but freedom is scary.

      • MTK@lemmy.world · 11 hours ago

        I agree. I guess my point was that people need to be aware of how erratic AI models can be, and always be careful about discussing sensitive topics with them.

        If I were to use an LLM as a therapist, I would be extremely skeptical of anything it says, and doubly so when it confirms my own beliefs.

        • Zetta@mander.xyz · 11 hours ago

          Fair enough. I wouldn’t even consider seeing a therapist who used an LLM in any capacity, let alone letting an LLM be the therapist. Sadly, I think the people who would make the mistake of doing just that probably won’t be swayed, but fair enough to raise awareness.

          • MTK@lemmy.world · 11 hours ago

            Sadly, with how this tech is going, I don’t think it’s possible to stop the masses from using it like that.

            I just hope that the people who do would at least be aware of its shortcomings.

            I myself would never use it like that, but I understand the appeal. There is no awkwardness because it isn’t a person, it tends to be extremely supportive and agreeable, and many people perceive it as intelligent. All of this combined makes it sound like a really good therapist, but that of course misses the core problems with this tech.