Thinking specifically about AI here: if a process does not give a consistent or predictable output (and cannot reliably replace work done by humans) then can it really be considered “automation”?
In practice there's really no incentive to avoid stochastic or pseudorandom elements, so don't hold your breath, haha. Whether you could theoretically train an LLM without any randomness is a pretty academic question anyway.
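Worth noting that "pseudorandom" already means reproducible in principle: seed the generator and the "stochastic" sampling step picks the same token every time. A toy sketch (the `sample_token` helper and the weights are made up for illustration, not from any real LLM API):

```python
import random

def sample_token(weights, seed=None):
    """Sample an index from a list of weights.

    With a fixed seed, the 'stochastic' choice is fully
    reproducible -- pseudorandom, not truly random.
    """
    rng = random.Random(seed)
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

# Hypothetical next-token probabilities
weights = [0.1, 0.5, 0.2, 0.2]

# Same seed, same pick, every run
a = sample_token(weights, seed=42)
b = sample_token(weights, seed=42)
print(a == b)  # True
```

So the randomness in sampling isn't the real obstacle to consistent output; whether that's enough to call it "automation" is the more interesting question.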
Thanks for writing that up, I learned a few things.
Exactly!
Thanks for reading :) Realised I was going on a bit of a rant, but thought why not keep going lol