Thinking specifically about AI here: if a process does not give a consistent or predictable output (and cannot reliably replace work done by humans) then can it really be considered “automation”?

  • patatas@sh.itjust.worksOP
    2 days ago

    If you can tell it was produced in a certain way by the way it looks, then that means it cannot be materially equivalent to the non-AI stock image, no?

    • Cowbee [he/they]@lemmy.ml
      2 days ago

      These are distinct hypotheticals.

      In the first case, the question is if it is equivalent, does the use-value change? The answer is no.

      In the second case, the question is “if we can tell, does it matter?” And the answer is yes in some cases, no in others. If the reason we want a painting is its artisanal creation, but it turns out it was AI-generated, then it fundamentally cannot satisfy the use of an image appreciated for being artisanally made. If the reason we want an image is to convey an idea, such that it would be faster, easier, and higher quality than an amateur sketch, and in no way needs to be appreciated for its artisanal creation, then it does not matter whether we can tell.

      Another way of looking at it is a mass-produced chair vs. a hand-crafted one. If I want a chair that lets me sit, then it doesn’t matter which chair I have; both are equivalent in that they satisfy the same need. If I have a specific vision and a desire for the chair as it exists artisanally, say, by being created in a historical way, then they cannot be equivalent use-values for me.

      • patatas@sh.itjust.worksOP
        2 days ago

        This argument strikes me as a tautology: “If we don’t care whether it’s different, then it doesn’t matter to us.”

        But that ship has sailed. We do care.

        We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.

        And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.