previous lemmy acct: @[email protected] see also: @[email protected]

  • 1 Post
  • 45 Comments
Joined 2 months ago
Cake day: June 13th, 2025



  • This argument strikes me as a tautology. “If we don’t care if it’s different, then it doesn’t matter to us”.

    But that ship has sailed. We do care.

    We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.

    And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.



  • I mean, it can’t really do ‘every random idea’ though, right? Any output is limited by how the system was trained to approximate certain stylistic and aesthetic features of imagery. For example, the banner image here has the stereotypical AI look in its texture, lighting, etc. This shows us that the system has at least as much control as the user does.

    In other words: it is incredibly easy to spot AI-generated imagery, so if the output is obviously AI, then can we really say that the AI generated a “stock image”, or did it generate something different in kind?



  • Will read your link, but when I saw the phrase “democratising creativity” I rolled my eyes hard and then grabbed this for you from my bookmarks. But I’ll read the rest anyway.

    https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything

    Edit: yeah, so that piece starts out by saying that art is about the development of what I’m taking to be a sort of ‘curatorial’ ability, but ends up arguing that as long as the slop machines are nominally controlled by workers, it’s fine, actually. I couldn’t disagree more.

    Elsewhere in a discussion with another user here, I attempted to bring up Ursula Franklin’s distinction between holistic and prescriptive technologies. AI is, to me, exemplary of a prescriptive process, in that its entire function is to destroy opportunities for decision-making by the user. The piece you linked admits this is the goal:

    “What distinguishes it is its capacity to automate aspects of cognitive and creative tasks such as writing, coding, and illustration that were once considered uniquely human.”

    I reject this as being worthwhile. The output of those human pursuits can be mimicked by this technology, but, because (as the link I posted makes clear) these systems do not think or understand, they cannot be said to perform those tasks any more than a camera can be said to be painting a picture.

    And despite this piece arguing that the people using these processes are merely incorporating a ‘tool’ into their work, and that AI will open up avenues for incredible new modes of creativity, I struggle to think of an example where the message conveyed by some GenAI output was anything other than “I do not really give a shit about the quality of the output”.

    These days our online environment suffers constantly from this stream of “good enough, I guess, who cares” stuff that insults the viewer by presuming they just want to see some sort of image at the top of a page, and don’t care about anything beyond this crass consumptive requirement.

    The banner image in question is a great example of this. The overall aesthetic is stereotypical of GenAI images, which supports the notion that control of the process was more or less ceded to the system (or, alternatively, that these systems provide few opportunities for directing the process). There are bizarre glitches that the person writing the prompt couldn’t be bothered to fix; the composition is directionless; the question marks have a jarring crispness that clashes with the rest of the image; the tablets (or are they signs?) are made from some unknown material, perhaps the same indistinct stuff as the ground these critters are standing on.

    It’s all actively hostile to a sense of community, as it pretends that communication is something that can just as well be accomplished by a statistical process, because who cares about trying to create something from the heart?

    These systems are an insult to human intelligence while also undermining it by automating our decision-making processes. I wrote an essay about this, if you’re interested; I’ll link it here and sign off, because I don’t want to be accused again of repeating myself unnecessarily: https://thedabbler.patatas.ca/pages/ai-is-dehumanization-technology.html