When it’s chats with LLMs trained on this very type of data, it’s mostly the user’s fault. Of course, executives of LLM companies should still rot in prison.
It’s totally avoidable if you don’t use it, but I think the onus is mostly on the companies for advertising these chatbots as like, a friendly personal assistant when that’s absolutely not what they are. Like all “AI” shit, it runs mostly on consumer deception.