

If you are assuming we live in a simulation, then you have to assume everything else about it too: there is no evidence pointing in any direction about anything above our own layer, so ours is the only layer we can do science on. Everything else would be imagination, so make up whatever you like.
I do agree though that a simulator can't fully simulate itself, so yeah, it would have to be bigger or more complex in at least some way, even if that extra capacity is simply runtime.
Why not just use the stored charge multiplied by the average cell discharge voltage at max load to get watt-hours? That might even discourage them from going overboard on max load ratings.
Sure, that could come out a bit higher than what the user gets after voltage conversion, but if they aren't pulling max load, they might actually do better?
I'm no electrical engineer, so this question isn't actually rhetorical: I'm genuinely wondering whether this would work.
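
To make the arithmetic I have in mind concrete, here's a minimal sketch. The function name and the example numbers are made up; the only claim is the unit conversion itself (mAh × V / 1000 = Wh):

```python
def rated_watt_hours(capacity_mah: float, discharge_voltage: float) -> float:
    """Watt-hours from stored charge (mAh) times the average cell
    discharge voltage at max load (V). Hypothetical rating scheme."""
    return capacity_mah / 1000.0 * discharge_voltage

# e.g. a hypothetical 20,000 mAh pack averaging 3.6 V under max load
print(rated_watt_hours(20000, 3.6))  # 72.0 Wh
```

Whatever the user actually gets out after the boost/buck converter would be somewhat less than this, which is the trade-off I'm asking about.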