

Watch time is pretty important on YouTube afaik, initial clicks themselves don’t count for that much
Kids will do stupid things sometimes, no avoiding that. In Germany you can pass a stopped bus on the other side of the road, but if it has its hazards on, you can’t go faster than walking speed.
I’ll do that for sure! Just gotta build it out a little more first, it’s too early right now to start inviting contributions. The core structures of the app are still changing too much for that to not just end up in chaos
So, after finishing my previous project, I have now actually started working on this, working title “Catana”. It’s not usable at all yet but I feel pretty good about where it’s going right now, so I thought I’d put something here to get some public accountability to help keep motivation up, lol.
Here’s a quick demo of what I have so far: https://www.youtube.com/watch?v=TyTTfCJxrRQ
I’ve got the core data model and editing actions down, the two next big steps are adding an equivalent to supertags, and actually being able to save things. Right now it’s in-memory only and resets on every restart, which makes it a lot easier to iterate on the data model quickly. The way it’s looking right now, I’m probably not going for full markdown-compatibility for the storage layer. That would bring with it some immense complexity that I don’t feel like tackling this early on. Instead, I’m planning to save data in a custom (but still open) format, and then in the future add markdown import/export separately, as well as general integration with the file system (representing arbitrary folders and files on your device as Nodes so you can link and manage them directly without leaving the app).
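To give a rough idea of what a node-based model like this can look like, here’s a minimal sketch. The names and shapes are made up for illustration, they’re not Catana’s actual code:

```typescript
// Hypothetical sketch of a node-based outline model (not Catana's real code).
// Everything is a Node; children are ordered id references, so the same node
// can appear in multiple places, and a file-system entry could be a Node too.
interface Node {
  id: string;
  content: string;
  children: string[]; // ordered child node ids
}

type Graph = Map<string, Node>;

// Resolve a node's children, tolerating dangling references.
function childrenOf(graph: Graph, id: string): Node[] {
  const node = graph.get(id);
  if (!node) return [];
  return node.children
    .map((childId) => graph.get(childId))
    .filter((n): n is Node => n !== undefined);
}
```

A graph of id-linked nodes like this is also easy to serialize to a custom (but open) format, since it’s just a flat map of records.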
I already have a rudimentary Tana import working though! Since Tana is the main inspiration for the data model, their export shape is pretty easy to map to Catana’s internal model. It still needs a lot of refinement to be actually useful beyond testing the app quickly with a large, existing dataset, but it’s a very good start.
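As a rough illustration of what such an import mapping can look like, assuming a JSON export of nested nodes (the field names here are hypothetical, not Tana’s or Catana’s actual ones):

```typescript
// Hedged sketch: assumes the export is JSON with nested { name, children }
// nodes. Field names are illustrative; a real export shape will differ.
interface TanaNode {
  name: string;
  children?: TanaNode[];
}

interface ImportedNode {
  id: string;
  content: string;
  childIds: string[];
}

// Recursively walk the export tree, assigning fresh ids and flattening
// into an id-keyed map; returns the id of the imported node.
function importTanaNode(
  node: TanaNode,
  out: Map<string, ImportedNode>,
  nextId: { value: number }
): string {
  const id = `node-${nextId.value++}`;
  const childIds = (node.children ?? []).map((c) =>
    importTanaNode(c, out, nextId)
  );
  out.set(id, { id, content: node.name, childIds });
  return id;
}
```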
So, uh, yeah, if you’re still interested, I’ll keep you posted!
What? Since when does Valve prohibit companies from redirecting customers to non-Valve purchasing flows? Because that’s what this ruling is about: it says Apple can’t prohibit apps from telling users to go buy off-platform for lower prices. Valve isn’t doing that with Steam afaik; actually, I’m not aware of any other platform that does this
I personally switched to wireless back when my phone still had a headphone jack. It’s just the better overall experience for me, and I suspect that I’m not alone in that. I’m going to continue arguing for manufacturers to keep including a headphone jack, but it’s not because I prefer wired headphones personally.
I feel like I’d forget to charge them
I thought that too, but it turned out to be a non-issue. Most earbuds come in a case that holds multiple full charges for the earbuds themselves, and the case starts complaining about low battery early enough that even if I ignore it the first time or two, I’ve so far never run into a situation where I wanted to use them and had no charge left
There’s nothing to configure with modern Android and Windows devices, it just works in my experience. Watching a video on YouTube or in the native media players, at worst you get a fraction of a second where the audio is out of sync, then it pauses the video for however long it takes to get back in sync, and there are no issues from there on out.
The only instances where I notice it doesn’t work are games and video editing software, but yeah, those are just not use cases where wireless audio is appropriate
“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you’re gonna say, and then just output the next token necessary to continue that sentence. It’s going to re-do that process for every single token which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and that’s something I felt was kinda obvious these models must be doing on one level or another.
I’d be interested to see if there are massive potentials for efficiency improvements by making the model able to access and reuse the “thinking” they have already done for previous tokens
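For what it’s worth, transformer implementations do already reuse part of that per-token work via key/value caches, which store each earlier token’s attention keys and values so they aren’t recomputed on every step. A toy sketch of why that matters for the amount of work done (the counters just tally “token-processing operations”, nothing model-specific):

```typescript
// Toy illustration of autoregressive decoding cost. "ops" stands in for the
// per-token computation a model would redo from scratch without a cache.

// Without caching: every step reprocesses the whole sequence so far.
function decodeNoCache(prompt: string[], steps: number): number {
  let ops = 0;
  const tokens = [...prompt];
  for (let s = 0; s < steps; s++) {
    ops += tokens.length; // reprocess every token each step
    tokens.push("tok");   // append the newly generated token
  }
  return ops;
}

// With caching: earlier tokens' work is kept, only the new token is processed.
function decodeWithCache(prompt: string[], steps: number): number {
  let ops = prompt.length; // process the prompt once
  for (let s = 0; s < steps; s++) {
    ops += 1; // only the new token; cached work is reused
  }
  return ops;
}
```

The uncached version grows quadratically with output length while the cached one grows linearly, which is exactly the kind of reuse-previous-thinking win the comment above is gesturing at.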
Admittedly, that is a pretty big “if”. But yeah, if I manage to do it I certainly will!
Same boat here, recently discovered Tana and its whole model is amazing. It’s fixing most of the things that bothered me a lot in Obsidian and Notion, respectively. I don’t want to go back to a service where I don’t have file-based control over my own data though, so now I’m seriously considering building something of my own that takes the mental model of Tana, but implements it local-first based on regular files like Obsidian
I’m German, and I’ve never heard that before. I’d be seriously weirded out by someone saying that or teaching it to their kids
Here we go, the app is now public on GitHub and has its first release that can be installed on Windows and Linux! https://github.com/roschlau/catana
The app is now in a state where it can technically be used, although lots of important features are still missing before it can be considered anywhere near production-ready. That’s gonna be a long march, but I’ll keep chipping away at it :)