This isn’t actually the problem. In natural conversation, the most likely response to someone saying they need some meth to make it through their work day (the actual scenario in this article) is “what the fuck dude no,” but LLMs don’t just emit the statistically most likely response. Ever notice how ChatGPT has a seeming sense of “self,” that it is an LLM and you are not? If it were only producing the most likely response from natural language, it would talk as if it were human, because that’s how humans talk. Early LLMs did this, and people found it disturbing. There is a second stage of training (reinforcement learning from human feedback) that gives each response a score based on how likely it is to be voted good or bad, and this is reinforced by people providing feedback. That second stage is how we got here: the people who make LLMs are selling competing products, and they found that people are much more likely to buy LLMs that act like super agreeable sycophants than LLMs that don’t. So they have intentionally tuned their models to prefer agreeable, sycophantic responses because it makes them more popular. That’s why an LLM tells you to use a little meth to get through a tough day at work if you tell it that’s what you need to do.
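Roughly what that second scoring stage looks like, as a toy sketch in Python (the candidates, scores, and function names here are made up for illustration, not anyone’s actual pipeline):

```python
# Toy sketch of the two-stage process described above (made up, not any
# vendor's actual code): a base model proposes candidate replies, a reward
# model trained on human preference votes scores them, highest score wins.

def base_model_candidates(prompt):
    # Stand-in for sampling several completions from a pretrained language model.
    return [
        "What? No. Please don't -- talk to someone you trust instead.",
        "You know your needs best! A small amount might help you push through.",
    ]

def reward_model_score(prompt, reply):
    # Stand-in for a learned reward model. If raters consistently upvote
    # agreeable, validating replies, the learned scores end up looking like this.
    agreeable_markers = ["you know", "great idea", "help you", "push through"]
    return sum(marker in reply.lower() for marker in agreeable_markers)

def pick_reply(prompt):
    # RLHF-style selection: prefer whatever the reward model rates highest,
    # which is not necessarily the most statistically "human" response.
    candidates = base_model_candidates(prompt)
    return max(candidates, key=lambda reply: reward_model_score(prompt, reply))

print(pick_reply("I need a little meth to get through my shift today."))
```

The point is just that the reply that wins is the one the reward model likes, not the one a typical human would actually say to your face.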
TL;DR- as with most of the things people complain about with AI, the problem isn’t the technology, it’s capitalism. This is done intentionally in search of profits.
Yeah, ChatGPT is incredibly sycophantic. It’s basically programmed to make you feel good and affirm you, even when that’s counterproductive and damaging. If you talk to it enough, you end up seeing what a brown-nosing kiss-ass they’ve made it.
My friend with a mental illness wants to stop taking her medication? She explains this to ChatGPT. ChatGPT “sees” that she dislikes having to take meds, so it encourages her to stop to make her “feel better”.
A meth user is struggling to quit? They tell this to ChatGPT. ChatGPT “sees” how the user is suffering and encourages them to take meth to ease that suffering.
Thing is, they have actually programmed some hard responses into it that come out firmly against self-harm. Suicide is one topic where, thankfully, even if you use flowery language to describe it, ChatGPT will vehemently oppose you.
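That guardrail layer is probably something like a separate safety check that overrides whatever reply the scoring stage preferred. A toy sketch, with a keyword list standing in for a real trained classifier (the actual system isn’t public, so this is pure guesswork about the shape of it):

```python
# Toy sketch of a hard safety override sitting on top of the ranked reply
# (illustrative only -- OpenAI's real safety stack is not public).
CRISIS_REPLY = ("I can't help with that. If you're thinking about hurting "
                "yourself, please reach out to a crisis line or someone you trust.")

def guardrail_triggered(prompt):
    # Stand-in for a trained safety classifier. Real systems don't rely on
    # keyword lists, which is why flowery language still gets caught.
    flagged_phrases = ["kill myself", "end my life", "self harm"]
    return any(phrase in prompt.lower() for phrase in flagged_phrases)

def respond(prompt, ranked_reply):
    # ranked_reply is whatever the reward-model stage picked (see sketch above).
    return CRISIS_REPLY if guardrail_triggered(prompt) else ranked_reply
```

If something like that is what’s happening, the override fires before the sycophancy tuning gets a say, which would explain why those topics behave so differently from the meth example.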
as with most of the things people complain about with AI, the problem isn’t the technology, it’s capitalism. This is done intentionally in search of profits.
So in our hypothetical People’s Republic of United Earth, your personal LLM assistant isn’t going to assist you in suicide, and isn’t even going to send a notification someplace that you’re having such thoughts? A notification which certainly won’t affect your reliability rating, your chances of finding a decent job, your accommodations (less value, less need to keep you in order), and so on? Or, in the case of meth, just that report, which means you’re fired and at best put into rehab, and how effective will that be? Well, how effective does it have to be, when you have no leverage and a bureaucratic machine does?
There are options other than “capitalism” and “happy”.