“Exists”, yeah, but it’s not really usable; it’s more raw than Pixelfed.
I use Newsblur; it’s a web site and will fetch feeds even if my computer is offline.
Newsblur is the one I migrated to, and I haven’t looked back.
The Instagram Reels and YouTube Shorts algorithms are utter shit. I’ve tried, but holy shit there’s some crap in there.
Same
The main thing I noticed is that the “advanced” stuff disappeared; I can’t debug what signal my Arc is receiving any more.
On the other hand, I’m not going to buy any more Sonos stuff either, and if I find a valid replacement (Atmos, wireless, minimal app involvement) I’ll switch.
It’d be cheaper to buy a power bank for every person in the house than this abomination
So you need to raid Battery Town and Gastown on your road trips while fighting off weirdos on the road? 😀
This doesn’t even replace the phone battery; it just swaps out an external charging case.
We have these in bars etc, they let you rent a charged power bank. This is just that with added complexity.
Anything that’s not Twitter is good
Imagine being an openly gay CEO who funds a party that hates gays
There was no “huge privacy issue”.
First of all: you could turn off the local scanning by turning off iCloud sync - and with sync on, the images would’ve been sent to the cloud for scanning anyway. That’s it, nothing else; nobody at Apple would’ve touched a single super-private file on your device.
The local scanning required MULTIPLE (where n>3, they didn’t say the exact number for obvious reasons) matches to known and human-verified CSAM. This database is the one that would’ve been loaded from iCloud if you had it turned on. This is the exact same database all cloud providers are using for legal reasons. Some have other algos on top - at least Microsoft had an is_penis algorithm that shut down a German dude’s whole Live account for his kid’s pics being on OneDrive.
After the MULTIPLE matches (you can’t get flagged by “accidentally” having one on your phone, nor would pics of your kids in the pool trigger anything) a human checker would have had enough data to decrypt just those images and see a “reduced resolution facsimile” (can’t remember the exact term) of the offending photos. This is where all of the brainpower spent on creating false matches would’ve ended up. You would’ve had to create multiple matches of known CP images that look enough like actual CP for the human to make an erroneous call multiple times to trigger anything.
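To make that threshold idea concrete, here’s a toy sketch in Python. Everything in it is made up for illustration (the threshold value, the function names, the “hashing”); the real system used NeuralHash plus private set intersection and threshold secret sharing, which this doesn’t even try to reproduce:

```python
# Toy illustration of threshold-gated matching, NOT Apple's actual design.
MATCH_THRESHOLD = 30   # hypothetical number; pick whatever "n" you like

def perceptual_hash(image_bytes: bytes) -> int:
    """Stand-in for a perceptual hash like NeuralHash."""
    return hash(image_bytes) & 0xFFFFFFFF   # not a real perceptual hash

def low_res_derivative(image_bytes: bytes) -> bytes:
    """Stand-in for the 'reduced resolution facsimile' a reviewer would see."""
    return image_bytes[:64]

def scan_library(images: list[bytes], known_hashes: set[int]):
    """Return reviewable derivatives only if matches exceed the threshold."""
    matched = [img for img in images if perceptual_hash(img) in known_hashes]
    if len(matched) <= MATCH_THRESHOLD:
        return None   # below the threshold nothing is decryptable or reported
    # Only the matched images - never the rest of the library - become
    # visible to a human reviewer, and only as low-resolution derivatives.
    return [low_res_derivative(img) for img in matched]
```

The point being: a single match, or even a handful, produces literally nothing a human can look at.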
If after that the human decided that yep, that’s some fucked up shit, the authorities would’ve been contacted.
Yes, a Bad Government could’ve forced Apple to add other stuff in the database. (They can do it right now for ALL major cloud storage providers BTW) But do you really think people wouldn’t have been watching for changes in the cloud-downloaded database and noticed any suspicious stuff immediately?
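And “watching for changes” isn’t hard to do. Purely as an illustration (placeholder digest, made-up paths, nothing official), this is the kind of check people would have been running against the shipped database:

```python
import hashlib

# Toy watchdog: pin the digest of the shipped hash database and yell if it
# changes between OS updates or differs between regions.
KNOWN_GOOD_DIGEST = "..."  # placeholder for a previously audited digest

def digest_of(db_path: str) -> str:
    with open(db_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_database(db_path: str) -> None:
    current = digest_of(db_path)
    if current != KNOWN_GOOD_DIGEST:
        print(f"database changed: {current} - time to start asking questions")
```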
Also, according to the paper, the probability of a false match was 1 in 1 trillion accounts - and this was not disputed even by the most hardcore activists, btw.
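Back-of-envelope, with completely made-up per-image numbers (Apple didn’t publish theirs), requiring multiple independent matches is exactly why the account-level number gets that small:

```python
from math import comb

p = 1e-6      # assumed per-image false-match probability (made up)
k = 10_000    # photos in the library (made up)
t = 30        # assumed match threshold before anything is reviewable

# Leading binomial term: probability of exactly t false matches among k photos.
# With k*p << t the later terms are vanishingly smaller, so this dominates
# the "t or more matches" tail.
p_account = comb(k, t) * p**t * (1 - p)**(k - t)
print(f"roughly {p_account:.1e}")   # astronomically small with these inputs
```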
tl;dr If you already upload your stuff to the cloud (like iOS does automatically) the only thing that would’ve changed is that nobody would’ve had a legit reason to peep at your photos in the cloud “for the children”. But if you’ve got cloud upload off anyway, nothing would’ve changed. So I still don’t understand the fervour people had over this - the only reason I can think of is not understanding how it worked.
The irony is that the Apple CSAM detection system was as good as we could make it at the time, with multiple steps to protect people from accidental positives.
But, as usual, I think I was the only one who actually read the paper and didn’t go “REEEE muh privacy!!!” after seeing the headline.
CapCut was THE app for editing video. It was made by ByteDance, and it was banned along with TikTok.
This is Zuck trying to take over that niche