Reads bit on dictator-propping via propaganda in Sudan.
Nods. Yep, that sounds like my government alright. And Big Tech. Cries inside.
Veers off to “Ukraine Proxy War” with no reference to ChatGPT as promised in the headline.
Sighs. Closes tab.
…iOS forces users to use Apple services, including getting apps through Apple…
Can’t speak to the rest of the claims, but Android practically does too. If an app has to be sideloaded, you’ve lost 99% of users, if not more.
It makes me suspect they’re not talking about the stock systems OEMs ship.
Relevant XKCD: https://xkcd.com/2501/
Nah, I meant the opposite. Journalistic integrity was learned through long, hard history.
Now that traditional journalism is dying, it’s kinda like influencers (and their younger viewers) have to relearn that history from scratch, heh.
Suppressing sponsors is a perverse incentive too; it gives creators all the more reason not to disclose who’s paying them.
And yeah, any ‘moral’ justification for web ads is dead like 100 times over. I hate how hard it makes life for ‘old web’ style sites with like one innocent banner ad, but still.
One thing about Anthropic/OpenAI models is that they go off the rails with lots of conversation turns or long contexts. Like when they need to remember a lot of vending-machine conversation, I guess.
A more objective look: https://arxiv.org/abs/2505.06120v1
https://github.com/NVIDIA/RULER
Gemini is much better. TBH the only models I’ve seen that are half decent at this are:
- “Alternate attention” models like Gemini, Jamba Large, or Falcon H1, depending on the iteration. Some recent versions of Gemini kinda lose this, then get it back.
- Models finetuned specifically for this, like roleplay models or the Samantha model trained on therapy-style chat.
But most models are overtuned for oneshots like “fix this table” or “write me a function”, and don’t invest much in long-context performance because it’s not very flashy.
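For context, benchmarks like RULER probe exactly this failure mode: they hide a fact (a “needle”) at varying depths in long filler text and check whether the model can retrieve it. A toy sketch of that idea in Python (the function names and crude substring scoring are mine, not RULER’s actual harness):

```python
# Toy needle-in-a-haystack prompt builder, sketching the idea behind
# long-context retrieval evals like RULER. Names are illustrative.
def build_haystack(needle: str, filler: str, depth: float, total_chars: int) -> str:
    """Embed `needle` at relative `depth` (0.0 = start, 1.0 = end)
    inside repeated `filler` text of roughly `total_chars` characters."""
    body = (filler * (total_chars // len(filler) + 1))[:total_chars]
    pos = int(depth * len(body))
    return body[:pos] + "\n" + needle + "\n" + body[pos:]

def retrieved(model_answer: str, expected: str) -> bool:
    """Crude case-insensitive substring scoring, as simple retrieval
    evals often use."""
    return expected.lower() in model_answer.lower()

# Build one probe with the needle buried halfway into ~2000 chars of filler.
prompt = build_haystack(
    needle="The magic number is 7481.",
    filler="The sky is blue. ",
    depth=0.5,
    total_chars=2000,
)
```

Sweeping `depth` and `total_chars` while scoring a model’s answers is roughly how the “lost in the middle” degradation gets measured.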
Yes, but it’s clearly a building block of Meta’s LLM training effort, and part of a pattern.
One implication I didn’t mention, and don’t have hard proof I can point to, is garbage in, garbage out. Meta let AI slop and human garbage proliferate on Facebook, squandering basically the biggest advantage (besides cash) they have. It’s often speculated that Twitter and Facebook training data is, as it turns out, kinda crap.
…And they’re at it again. Zuckerberg pours cash into corporate trash and gets slop back. It’s an internal disaster, like their own divisions.
On the other side, it’s often thought that Chinese models are so good for their size/compute because they’re ahem getting data from the Chinese government, and don’t need to worry about legal issues.
The research community already knows this.
Llama 4 (Meta’s flagship ‘AI’ project) was a bad release. That’s fine. This is iterative research; not every experiment works out.
…But it was also a messy and dishonest one.
The release was pushed early and full of bugs. They lied about its performance, especially at long context, going so far as to game Chat Arena with a finetune. Zuckerberg hyped the snot out of it, to the point I saw ads for it on Axios.
Instead of saying they’ll do better, Meta said they’re reorganizing their divisions to focus on ‘applications’ instead of fundamental research, aka exactly the wrong thing. They’ve hemorrhaged good researchers and kept the AI bros, as far as I can tell from the outside.
Every top LLM trainer has controversies. Just recently Qwen (Alibaba) closed off their top base models just to spite Deepseek, so they can’t distill them. Deepseek is almost certainly training on Google Gemini traces. Google hoards their best research for API models and has chased being sycophantic like ChatGPT. X’s Grok is a joke, and muddied by Musk’s constant lies about, for instance, open sourcing it. Some great outfits like 01ai (the Yi series) faded into the night.
…But I haven’t seen self-destruction quite like Meta’s. Especially considering the ‘f you’ money and GPU farm they have. They’re still pushing interesting research now, but the trajectory is awful.
ChatGPT (last time I tried it) is extremely sycophantic, though. Its high default sampling settings also lead to totally unexpected/random turns.
Google Gemini is now too.
And they log and use your dark thoughts.
I find that less sycophantic LLMs are way more helpful. Hence I bounce between Nemotron 49B and a few 24B–32B finetunes (or task vectors for Gemma).
…I guess what I’m saying is people should turn towards more specialized and “openly thinking” free tools, not something generic, corporate, and purposely overpleasing like ChatGPT or most default instruct tunes.
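“Task vectors” here refers to weight-delta arithmetic: subtract a base model’s weights from a finetune’s, then re-apply that delta (optionally scaled) to a base model. A minimal sketch, with plain dicts standing in for real state_dicts and made-up numbers; actual Gemma merging tools do the same thing tensor-by-tensor:

```python
def task_vector(base: dict, ft: dict) -> dict:
    """The weight delta (finetune minus base) that encodes a task."""
    return {k: ft[k] - base[k] for k in base}

def apply_task_vector(base: dict, tv: dict, alpha: float = 1.0) -> dict:
    """Add a scaled task vector back onto a compatible base model."""
    return {k: base[k] + alpha * tv[k] for k in base}

base = {"w": 1.0, "b": 0.5}    # stand-in for a base model's weights
ft   = {"w": 2.0, "b": 0.25}   # stand-in for a finetune's weights

tv = task_vector(base, ft)
half_strength = apply_task_vector(base, tv, alpha=0.5)
```

Scaling `alpha` below 1.0 is how you dial a finetune’s behavior (e.g. its tone) up or down without retraining.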
TBH this is a huge factor.
I don’t use ChatGPT, much less use it like it’s a person, but I’m socially isolated at the moment. So I bounce dark internal thoughts off of locally run LLMs.
It’s kinda like looking into a mirror. As long as I know I’m talking to a tool, it’s helpful, sometimes insightful. It’s private. And I sure as shit can’t afford to pay a therapist out the wazoo for that.
It was one of my previous problems with therapy: the payment depends on someone else, and sessions happen at preset times (not when I need them). Many sessions feel like they end when I’m barely scratching the surface. Yes, therapy is great in general and for deeper feedback/guidance, but still.
To be clear, I don’t think this is a good solution in general. Tinkering with LLMs is part of my living, I understand the gist of how they work, and I tend to use raw completion syntax or even base pretrains.
But most people anthropomorphize them because that’s how chat apps are presented. That’s problematic.
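To illustrate the difference: chat apps wrap your text in a role template that casts the model as an “assistant” persona, while raw completion just continues whatever text you give it. The wrapper below is a ChatML-style sketch for illustration, not any specific model’s exact format:

```python
def chat_prompt(user_msg: str) -> str:
    # ChatML-style wrapper (illustrative): the role tags are what push
    # the model into playing a conversational "assistant" persona.
    return ("<|im_start|>user\n" + user_msg + "<|im_end|>\n"
            "<|im_start|>assistant\n")

def completion_prompt(text: str) -> str:
    # Raw completion: no roles, no persona. The model just continues
    # the text, which reads more like a tool than a person.
    return text
```

The role tags are a big part of why chat UIs invite anthropomorphization in a way that raw completion doesn’t.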
Yes, we know, you are preaching to the choir here, lemmy.ml.
But I’d rather we not back out of NATO, decouple trade, and invade friendly neighbors, all while the US continues to screw over faraway countries first. At this point, backing out of NATO is not going to help our bad behavior.
After that, we can worry about not screwing with other countries so much, hopefully.
Good move TBH. Just like that, Trump likes NATO again.
Well not everyone in the machine learning space is an AI Bro, either. Many (most?) researchers see Altman et al. as snake-oil grifters.
Same with the P2P/networking junkies. They didn’t ask for a mountain of pyramid schemes.
They are GPUs.
All of them, even the H100, B100, and MI300X, have texture units, pixel shaders, everything. They are graphics cards at a low level. The MI300X is the only one missing ROPs; the Nvidia cards have them (and can run realtime games on Linux), and they all can be used in Blender and such.
The compute programming languages they use are, fundamentally, hacked up abstractions to map to the same GPU hardware in consumer stuff.
That’s the whole point: they’re architected as GPUs so that they’re backwards compatible, since everything is built on the days when consumer gaming GPUs were hacked into compute devices.
Are there more dedicated accelerators? Yes. They’re called ASICs, or application-specific integrated circuits. That’s technically a broad term, but its connotation is mostly purpose-built compute hardware.
TBH, if I had to be drafted, I’d feel 1000x better in Ukraine than Iran. Invading Iran would be like fighting against Ukraine in their straight up war of conquest, except even stupider.
Not that I want either.
Ideally not elect Big Tech to power?
Ehh…
It’s not so simple; there are papers on zero-data ‘self play’ and other schemes for using other LLMs’ output.
Distillation is probably the only one you’d want for a pretrain, specifically.
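Distillation usually means training the student to match the teacher’s output distribution, e.g. by minimizing the KL divergence between temperature-softened softmaxes (the classic Hinton-style loss). A minimal sketch of that loss term, with made-up logits:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions:
    the loss term a student minimizes to mimic the teacher."""
    p = softmax([x / T for x in teacher_logits])
    q = softmax([x / T for x in student_logits])
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Self-play and other synthetic-data schemes instead train on sampled text, which is why they suit later training stages better than a pretrain.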
Both can be true: the Democrats suck, and one should still vote strategically, especially if you’re gonna skip primaries (as most, statistically, do). The analogy holds, even if it’s fruity and won’t win anyone over.
The Republicans won because they have no problem swallowing bile; apparently that’s the game now.