AI, ChatGPT included, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.
Something marketed as AGI should be judged as AGI when you're proving it isn't AGI.
Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, so why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it?
AI models aren’t programmed traditionally. They’re generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model’s calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.
Then someone asks it how many R’s are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some kind of prompt it doesn’t answer well.
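A toy sketch of that adjust-and-repeat loop, shrunk down to a single weight; the data, learning rate, and loss here are made up for illustration:

```python
# Toy version of the training loop described above: the "model" is one weight,
# the "test prompts" are (input, expected) pairs, and the "rating" is the
# squared error. Real models do the same thing with billions of weights.
def train(examples, lr=0.05, epochs=500):
    w = 0.0  # the model's single adjustable parameter
    for _ in range(epochs):
        for x, expected in examples:
            answer = w * x             # the model's answer to this prompt
            error = answer - expected  # how far off the rating says it is
            w -= lr * error * x        # nudge w so the answer gets closer
    return w

# Learns that the answer should be 3x the input:
print(train([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]))  # ~3.0
```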
There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn't the issue; it's trying to get one model to be good at absolutely everything.
Because they’re fucking terrible at designing tools to solve problems, they are obviously less and less good at pretending this is an omnitool that can do everything with perfect coherency (and if it isn’t working right it’s because you’re not believing or paying hard enough)
Or they keep telling you that you just have to wait it out. It’s going to get better and better!
This is where MCP comes in. It's a protocol for LLMs to call standard tools. Basically, the LLM figures out which tool to use from the context, figures out the parameters from the schema the MCP server advertises, sends the JSON, and parses the response.
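To make that concrete, roughly what such a call looks like on the wire. The JSON-RPC envelope and the "tools/call" method come from the MCP spec; the tool name "best_move" and its "fen" argument are hypothetical:

```python
import json

# Sketch of the JSON-RPC message an LLM client would send to an MCP server.
# "tools/call" is the MCP method; the tool and arguments are invented here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "best_move",  # hypothetical chess tool advertised by the server
        "arguments": {
            "fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
        },
    },
}
print(json.dumps(request, indent=2))
# The server runs the tool and sends back a result for the LLM to parse.
```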
…or a simple counter to count the R's in strawberry. That's harder for an LLM than one might think, and they are starting to do this now.
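Which really is about one line of tool:

```python
# The entire "tool" the model needs for that question:
print("strawberry".count("r"))  # 3
```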
Because the AI doesn’t know what it’s being asked, it’s just a algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.
“Why doesn’t the man in the Chinese room just use a calculator for math questions?”
They are starting to do this. Most new models support function calling and can generate code to work out math answers, etc.
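As a rough sketch, an OpenAI-style function declaration; the evaluate_math tool is made up for illustration. The model emits a call to it instead of guessing the arithmetic itself, and the application runs it and feeds the result back:

```python
# Hypothetical tool declaration in the OpenAI-style function-calling format.
# The model picks the function and fills in "expression"; the application
# evaluates it and returns the result to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "evaluate_math",  # made-up name for illustration
        "description": "Evaluate an arithmetic expression and return the result.",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "e.g. '2**10 + 7'",
                },
            },
            "required": ["expression"],
        },
    },
}]
```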
If you pay for ChatGPT, you can connect it with WolframAlpha and it relays the math to it.
I don't pay for ChatGPT and just used the Wolfram GPT. They made custom GPTs available to free users at some point.
Because the LLMs are now being used to vibe code themselves.
I think they’re trying to do that. But AI can still fail at that lol
why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff?
They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to calculate math for me using Python, but that doesn't seem to happen anymore, probably due to the security issues of letting the AI run arbitrary Python code.
ChatGPT, however, was not designed to play chess, so I don't see why OpenAI should invest resources into connecting it to a chess API.
I think that especially since custom GPTs were added, this kind of thing has become kind of unnecessary for base ChatGPT. If you want a chess engine, get a GPT that wraps a Stockfish API (there seem to be several that do). For math, get the Wolfram GPT, which uses Wolfram Alpha's API, or a different powerful math GPT.
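For the curious, a sketch of roughly what such a GPT would do behind that Stockfish API, here using the python-chess library; the engine path is an assumption for your system:

```python
import chess
import chess.engine

# Ask Stockfish for a move instead of letting the language model guess one.
# Assumes a Stockfish binary is installed locally at the given path.
board = chess.Board()  # starting position
with chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish") as engine:
    result = engine.play(board, chess.engine.Limit(time=0.1))
    print(result.move)  # e.g. e2e4
```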
From a technology standpoint, nothing is stopping them. From a business standpoint: hubris.
To put time and effort into creating traditional logic-based algorithms to compensate for this generic model would be to admit what mathematicians and scientists have known for centuries: models are good at finding patterns, but they do not explain why a relationship exists (if it exists at all). The technology is fundamentally flawed for the use cases OpenAI is claiming it can handle, and programming around it would be to acknowledge that.
I don't think AI is being marketed as awesome at everything. It's got obvious flaws. Right now it's not good at stuff like chess, probably not even tic-tac-toe. It's a language model; it's hard for it to keep track of the playing field. But AI is in development; it might not need much to start playing chess.
What the tech is being marketed as and what it's capable of are not the same, and likely never will be. In fact, very few things are marketed the way they truly behave, and that's intentional.
Everyone is still trying to figure out what these Large Reasoning Models and Large Language Models are even capable of; Apple, one of the largest companies in the world, just released a white paper this past week describing the "illusion of reasoning". If it takes a scientific paper to understand what these models are and are not capable of, I assure you they'll be selling snake oil for years after we fully understand every nuance of their capabilities.
TL;DR: Rich folks want them to be everything, so they'll be sold as capable of everything until we repeatedly prove they aren't.
I think in many cases people intentionally or unintentionally disregard the time component here. AI is in development. I think what is being marketed, just like in the stock market, is a piece of the future. I don't expect the models I use to be perfect and never make mistakes, so I use them accordingly. They are useful for what I use them for, and I wouldn't use them for chess. I don't expect the laundry detergent to be as perfect as in the commercial either.
Marketing does not mean functionality. AI is absolutely being sold to the public and to enterprises as something that can solve everything. Obviously it can't, but it's being sold that way. I would bet the average person would be surprised by this headline, based solely on what they've heard about the capabilities of AI.
I don't think anyone is so stupid as to believe current AI can solve everything.
And honestly, I didn’t see any marketing material that would claim that.
You are both completely overestimating the intelligence level of "anyone" and not living in the same AI-marketed universe as the rest of us. People are stupid. Really stupid.
I don't understand why this is so important; marketing is all about exaggerating, so why expect something different here?
It’s not important. You said AI isn’t being marketed to be able to do everything. I said yes it is. That’s it.
My point is that people aren't expecting AGI. People have already tried these models and understand their general capabilities; businesses even more so. I don't think exaggerating the capabilities is such an overarching issue that anyone could call the whole thing a scam.
The Zoom CEO, that is, the video calling software, wanted to train AIs on your work emails and chat messages to create AI personalities you could send to the meetings you're paid to sit through, while you drink a Corona on the beach and receive a "summary" later.
The Zoom CEO, that is, the video calling software, seems like a pretty stupid guy?
Yeah. Yeah, he really does. Really… fuckin’… dumb.
Same genius who forced all his own employees back into the office. An incomprehensibly stupid maneuver by an organization that literally owes its success to people working from home.
Really then why are they cramming AI into every app and every device and replacing jobs with it and claiming they’re saving so much time and money and they’re the best now the hardest working most efficient company and this is the future and they have a director of AI vision that’s right a director of AI vision a true visionary to lead us into the promised land where we will make money automatically please bro just let this be the automatic money cheat oh god I’m about to
Those are two different things.
They are cramming AI everywhere because nobody wants to miss the boat and because it plays well in the stock market.
The people claiming it's awesome and that they're doing who knows what with it, replacing people and so on, are mostly influencers and a few deluded people.
AI can help people in many different roles today, so it makes sense to use it. Even in roles where it's not particularly useful, it makes sense to prepare for when it is.
Pfft, okay.