Thinking specifically about AI here: if a process does not give a consistent or predictable output (and cannot reliably replace work done by humans) then can it really be considered “automation”?
If you can tell it was produced in a certain way by the way it looks, then that means it cannot be materially equivalent to the non-AI stock image, no?
These are distinct hypotheticals.
In the first case, the question is if it is equivalent, does the use-value change? The answer is no.
In the second case, the question is “if we can tell, does it matter?” And the answer is yes in some cases, no in others. If the reason we want a painting is its artisanal creation, but it turns out it was AI-generated, then it fundamentally cannot satisfy the desire for an image appreciated for its artisanal creation. If the reason we want an image is to convey an idea, such that it would be faster, easier, and higher quality than an amateur sketch, but in no way needs to be appreciated for its artisanal creation, then it does not matter whether we can tell or not.
Another way of looking at it is a mass-produced chair vs. a hand-crafted one. If I want a chair that lets me sit, then it doesn’t matter to me which chair I have; both are equivalent in that they satisfy the same need. If I have a specific vision and a desire for the chair as an artisanal object, say, one created in a historical way, then they cannot be equivalent use-values for me.
This argument strikes me as a tautology. “If we don’t care if it’s different, then it doesn’t matter to us”.
But that ship has sailed. We do care.
We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.
And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.
“We” in this moment is you, right now. If the end product is the same, then it is the same. If the process is the use-value then it matters, but if not, it doesn’t.
Ideas and symbols matter, sure, but not because of any metaphysical value you ascribe to them; they matter because of the ideas they convey.
First you said “it doesn’t matter if we can tell or not”, which I responded to.
So I’m confused by your reply here.
I quite literally stated that it matters in some cases and not in others.
If the process is the purpose, then it matters. If the end product is the purpose, then it largely doesn’t.
I am saying that we can no longer meaningfully separate the two things.
Cognition and AI? We absolutely can separate them; just because some people fail to doesn’t mean the conflation is intrinsic.
No, I’m saying we can no longer meaningfully separate the product and the process.