A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 2 Posts
  • 139 Comments
Joined 11 months ago
Cake day: June 25th, 2024

  • Oh wow, since when do we lump CS and AI together? One is basically studying maths and logic and how computers, networks and databases work. The other one is how to tell a chatbot to quote a Wikipedia article back to you. I think those are fundamentally different things. And what students should learn first is how to do a PowerPoint presentation and write a letter. Or type a math formula into an electronic document, or use the spell checker. Because they rarely learn a lot about that in school.

    And yes. Investing in education would be a nice thing. It would have an immense effect on the economy and society if we did that.




  • I don’t think it is about that. The information collection is an added bonus they happily accept and make use of. I think it’s mainly about power and money, though. They get rid of everyone who isn’t completely in line and subservient. That’s from the playbook on how to become an autocratic regime. And they’re obviously interested in the money as well. Cut off everyone and everything they don’t like. Like weak people, poor people, your grandma and children. That money can then be funneled towards other people. Guess whom. I think the power and control aspect is the original idea, though. And money has power as well. So does information and data, so it’s more a combination of things.

    But the way they act, I’d say they had a look at other oligarchies and corrupt regimes and wanted in, too. They saw you need to remove all the people in any position of power and replace them with your own henchmen. Then they also hate a lot of people and always wanted to take their money. The AI and data thing looks more to me like something they discovered while at it. And I don’t believe the traditional MAGA people are smart enough to have anticipated that. But naturally, information is power. And AI can be used as a mindless slave to someone. I’d say it’s worth trying to foster it instead of relying on human clerks and officials. It’ll be a new form of administration. One that does away with a lot of middlemen, like the corrupt government workers other regimes have to pay.

    And Musk looks like he has his own motivation, which might or might not be aligned with the “grand plan”, if there even is one. He is (was) free to combine the useful with what’s enjoyable to him. Currently the tactic is mostly to break a lot of stuff. It doesn’t really matter how or what. So that’s what they’re doing right now. I think the struggle and in-fighting over who gets to replace what with exactly what kind of things hasn’t really started yet. It’s already there, but not the main concern as of now. So we can’t tell the exact dynamics we’re bound to see in the near future. I’d say mass surveillance plus yet more AI is likely a formula for success, though.




  • Thanks for your perspective. Sure, AI is here to stay and flood the internet with slop and arbitrary (mis)information phrased like a factual Wikipedia article, journalism, a genuine user review or whatever its master chose. And the negative sides of the internet were there long before we had AI to the current extent. I think it is extremely unlikely that the internet is going to move away from being powered by advertisements, though. That’s the main business model as of today, and I think it is going to continue that way. Maybe dressed in some new clothes, but social media platforms, Google etc. still need their income. I wonder how it’ll turn out for the AI companies, though. To my knowledge, they’re currently all powered by hype and investor money. And they’re going to have to find some way to make a profit at some point. Whether that’s going to be ads or having their users pay properly, unlike today, where the majority of people I know use the free tier.



  • Hehe, as the article says, there is an abundance of them. Dozens of (paid) online services… You can do it on your beefy graphics card… And, as per this article, to some degree with your Instagram account. I’ve tried it on my own and it’ll generate something like internet fanfiction, or have a dialogue with you. It’s a steep learning curve, though, and requires some fiddling. And it was text only, and I don’t own a gaming computer, so it was unbearably slow. Other than that, I try to avoid Meta’s social media services or paying for those kinds of “scientific” experiments, so I wouldn’t know what the voice conversation is like… Maybe someone can enlighten us.




  • If you’re talking about normal video surveillance, I think most of today’s cameras use regular light. These sunglasses would be effective against a biometric scanner, the iPhone’s depth camera, a Windows Hello screen unlock or an Xbox Kinect. But the average camera on the street uses visible light and likely has an IR filter in place (at daytime), so it won’t even see infrared.

    What works against those are ski masks, wearing a motorcycle helmet… or even a large hat or golf hat, depending on the camera’s perspective.



  • Wasn’t “error-free” one of the undecidable problems in maths / computer science? But I like how they also pay attention to semantics and didn’t choose a clickbaity title. Maybe I should read the paper, see how they did it and whether it’s more than an AI agent at the same intelligence level guessing whether the output is correct. I mean, surprisingly enough, the current AI models usually do a good job generating syntactically correct code one-shot. My issues with AI coding usually start once it gets a bit more complex. Then it often feels like poking at things and copy-pasting various stuff from StackOverflow without really knowing why it doesn’t deal with the real-world data or fails entirely.


  • I’ve also had that. And I’m not even sure whether I want to hold it against them. For some reason it’s an industry-wide effort to muddy the waters and slap “open source” on their products. From the largest company, which chose to have “Open” in their name but opposes transparency with every fibre of their body, to Meta, the current pioneer(?) of “open sourcing” LLMs, to the smaller underdogs who pride themselves on publishing their models that way… They’ve all homed in on the term.

    And lots of the journalists and bloggers also pick up on it. I personally think terms should be well-defined. And open source had a well-defined meaning. I get that it’s complicated with the transformative nature of AI, copyright… But I don’t think reproducibility is a question here at all. Of course we need that, that’s core to something being open. And I don’t even understand why the OSI claims it doesn’t exist… Didn’t we have the datasets available up until LLaMA 1, along with an extensive scientific paper that enabled people to reproduce the model? And LLMs aside, we sometimes have that with other kinds of machine learning…

    (And by the way, this is an old article, from the end of October last year.)




  • Exactly. This is directly opposed to why we do AI in the first place. We want something that drives the Uber without earning a wage. A cheap factory workforce. Generating images without paying some artist $250… If we wanted to pay, we could just use humans; that’s how the world has worked for quite some time now.

    I’d say us giving AI human rights and reversing 99.9% of what it’s intended for is less likely to happen than the robot apocalypse.