Thinking specifically about AI here: if a process does not give a consistent or predictable output (and cannot reliably replace work done by humans) then can it really be considered “automation”?
If it’s “good enough” for the task, yes.
Many tasks have loose success criteria and an acceptable failure rate. If the automation fits within those bounds, and it simplifies my day, then it's reasonable automation.