• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: August 15th, 2023

  • Wait, was that a bug? I always figured it was just a reflection of how insanely difficult it is to keep cities clean as they grow massive. You can still easily hold on to those cities, even very distant ones, by recruiting lots of peasant units to garrison them. The security bonus is based on the number of men you have garrisoned versus the number of civilians, and since peasants are the largest units by manpower, they grant the biggest bonus. You wind up with two rows of peasants that are only useful as bait in an actual battle, but they give plenty of security bonus to offset the max squalor penalty.

    Edit: actually, it gets even easier if you keep recruiting peasants as a sort of population control even after the garrison is full. Send the excess peasant units to your most recently conquered cities to maintain control and free up militarily useful units from just standing guard, and for certain cities with super slow population growth you can disband the units as they arrive to boost the civilian numbers. It’s a makeshift but effective way to transfer population from overcrowded cities to empty ones.
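    The garrison trick above can be sketched as a toy model. To be clear, this is a hypothetical simplification: the actual Rome: Total War public-order formula is more involved, and the unit sizes and bonus cap below are assumptions for illustration only.

```python
# Toy model of the garrison-vs-population security bonus described above.
# The real game's formula is more complex; the linear ratio and the 80%
# cap are assumptions, not the actual mechanics.

def security_bonus(garrison_men: int, population: int, cap: float = 0.8) -> float:
    """Public-order bonus as the fraction of the population garrisoned, capped."""
    if population <= 0:
        return cap
    return min(cap, garrison_men / population)

# Peasants have the most men per unit, so each peasant unit raises the
# garrison headcount (and thus the bonus) more than an elite unit would.
PEASANT_MEN = 240     # assumed unit size
LEGIONARY_MEN = 160   # assumed unit size

big_city = 24000
print(security_bonus(2 * LEGIONARY_MEN, big_city))   # a couple of elite units: tiny bonus
print(security_bonus(16 * PEASANT_MEN, big_city))    # two rows of peasants: much larger
```

    The point the model captures: two full rows of cheap peasants contribute far more garrison headcount, and therefore far more security bonus, than the same number of militarily useful units would.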


  • Ehh. The broad strokes had the potential to be interesting, but the presentation and details are awful. Actually watching the prequels is such a chore, with 75% of the time spent thinking “why,” 24% “ooh pretty” (though a lot of the CGI hasn’t aged well), and maybe 1% is an actual “hmm yes interesting.”

    Palpatine and populism had a chance to be interesting, but it’s mostly handled completely off screen, leaving a lot of assumptions to a viewer who already has to know that this is the future Emperor. The closest we ever get to seeing the true corruption of the Senate is Palpatine’s speech denouncing the Jedi, and even that winds up being carried almost entirely by Palpatine’s actor.

    They completely ignore the moral, logistical, and spiritual questions raised by the use of a clone army. Coverage in the EU and Disney material doesn’t count in a discussion of the prequels, but even there it’s rarely explored. You’d think the whole point of clones vs. robots would be to raise interesting questions by contrasting the two, but no, it’s just so you don’t have to feel bad watching the armies blow each other up.

    Anakin and Padme. Good God.

    There’s so much more, but honestly I don’t want to write more of an essay. Apologies for the YouTube link, but this is a video I really like about what made the Jedi so special in the originals. I think most of the problems in the prequels parallel their mishandling of the Jedi: a superficial understanding that Thing Is Cool, but then missing the point thanks to a formulaic, blunt, needs-to-be-marketable approach to making the movies.

    I dunno, they’re more bearable than the sequels. I can even enjoy watching them; I grew up on them and can put on the nostalgia goggles to get through them, but under any examination they completely fall apart.



  • I recently read a neat little book called “Rethinking Consciousness” by Michael S. A. Graziano. It has nothing to do with AI, but is an attempt to describe the way our myriad neural systems come together to produce our experience, how that might differ between animals with various types of brains, and how our experience might change if some systems aren’t present. It sounds obvious, but the simpler the brain, the simpler the experience. For example, organisms like frogs probably don’t experience fear. Both frogs and humans have a set of survival instincts that help us detect movement, classify it as either threat or food or whatever, and immediately respond, but the emotional part of your brain that makes your stomach plummet just doesn’t exist in them.

    Humans automatically respond to a perceived threat in the same way a frog does–in fact, according to the book, the structures in our brains that dictate our initial actions in those instinctive moments are remarkably similar. You know how your eyes will automatically shift to follow a movement you see in the corner of your vision? A frog responds in much the same way. It’s not something you have to think about–often your eye will have darted over to the point of interest even before you realize you’ve noticed something. But your experience of that reaction is also much richer than it is possible for a frog’s to be, because we have far more layers of systems that all interact to produce what we call consciousness. We have a much deeper level of thought that goes into deciding whether that movement was actually important to us.

    It’s possible for us to continue to live even if we lose some parts of the brain–our personalities will change, our memory may get worse, or we may even lose things like our internal monologue, but we still manage to persist as conscious beings until our brains lose a large number of the overlying systems, or some very critical systems. Like the one that regulates breathing–though even that single function is somewhat shared between multiple systems, allowing you to breathe manually (have fun with that).

    All that to say the things we’re currently calling AI just don’t have that complexity. At best, these generative models could fill out a fraction of the layers that would be useful for a conscious mind. We have developed very powerful language processing systems, at least in terms of averaging out a vast quantity of data. Very powerful image processing. Audio processing. What we don’t have–what, near as I can tell, we haven’t made any meaningful progress on at all–is a system to coalesce all these processing systems into a whole. These systems always rely on a human to tell them what to process, for how long, and ultimately to check whether the result of a process is reasonable. Being able to process all of those types of input simultaneously, choosing which ones to focus on in the moment, and continuously choosing an appropriate response? Barely even a pipe dream. And even all of that would be distinct from a system to form anything like conscious thought.

    Right now, when marketing departments say “AI,” what they’re describing is like that automatic response to movement. Movement detected, eye focuses. Input goes in, output comes out. It’s one small piece of the whole that’s required when science fiction writers say “AI.”

    TL;DR no, the current generative model race is just tech stock market hype. The absolute best it can hope for is to reproduce a small piece of the conscious mind. It might be able to approximate the processing we’re capable of more quickly, but at a massively inflated energy expenditure, not to mention the research costs. And in the end it still needs a human double checking its work. We will need to develop a vast number of other increasingly complex systems before we even begin to approach a true AI.



  • I think it’s possible to pasteurize eggs

    For sure it’s possible. Like you said, they do it for eggnog. I used to work for an ice cream company, and we’d do it by thoroughly whisking the eggs and then slooooowly stirring them into a hot mix of cream, sugar, and whatnot. Not totally sure how you’d do it for this, but I’m sure there’s a way; maybe if you’re getting the butter hot you could use that? I’m also not sure what benefit eggs would impart here, though. Maybe an extremely subtle flavor, but as far as I can tell their main purpose in cookies is structure, which isn’t all that relevant for an edible dough.

    Browning the butter is an interesting idea; I might try that. I worry it could reduce the moisture content, though; the reason I add extra is to make up for the lack of moisture from eggs, and there’s already so much that I wouldn’t want to add even more butter or oil lol. Maybe I could straight up add water, but I usually freeze the dough and idk if that would be a problem long term.