  • tehn00bi@lemmy.world · 11 hours ago

    I keep thinking our AI will lead us to something like the Eloi of The Time Machine, with the Morlocks being the machines that run everything.

  • liquefy4931@lemmy.world · 23 hours ago

    After I learned how LLMs function, I mentally recategorized the “AI” we use in reality as something entirely new and different from the fictional, cognizant, sapient artificial intelligence in my favorite novels.

  • diptchip@lemmy.world · 18 hours ago

    Hate is a strong word… I feel like humans and machines coexist a little too well in the movies, except when the lack of coexistence IS the plot.

  • lime!@feddit.nu · 2 days ago

    not at all, just like how boston dynamics’ atlas didn’t change how i viewed robocop.

    text generators just have very little in common with intelligent, autonomous artificial entities.

  • MourningDove@lemmy.zip · 1 day ago

    It is not now, nor will it ever be, anything like the way it’s depicted in sci-fi fantasy. We are never going to achieve anything close to a Star Trek level of symbiosis with tech. Everything we ever do will be weaponized, and whatever can’t be turned on our adversaries, and ultimately ourselves, will be used to make the less intelligent even more so.

    It’s going to drain our last vestige of creativity as it runs headlong through every culture, and in its wake will lie the unmotivated remains of whatever passion for the arts we once had, until one day we are nothing more than animals walking in and out of rooms.

    Trust that nothing good lies that way.

  • ℕ𝕖𝕞𝕠@slrpnk.net · 2 days ago

    No, but actually studying Artificial Intelligence a decade ago in college did.

    We had language models back then, too; they just weren’t as good.

  • NKBTN@feddit.uk · 2 days ago

    They’ve made fictional AI seem that much more far-fetched.

    Obviously, we all learn by imitation and instruction, but LLMs have shown that’s only part of the puzzle.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 1 day ago

      I think LLMs could provide a human-friendly interface for robots. There’s a lot of interesting work happening with embodied AI now, and in my opinion embodiment is the key ingredient for making AI intelligent in a human sense. A robot has to interact with the environment, and in doing so it builds an internal model of the world for making decisions. This creates a feedback loop where the robot can learn the rules of the world and interact meaningfully, and that’s precisely what’s missing with LLMs.
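
      As a loose illustration of that perceive/model/act feedback loop, here’s a minimal sketch. SimpleRobot, WorldModel, and the policy are all invented for the example, not any real robotics framework:

      ```python
      # Minimal sketch of the perceive -> model -> act feedback loop described above.
      # Everything here is hypothetical; it stands in for real sensors and actuators.

      class WorldModel:
          """The robot's internal picture of the environment."""
          def __init__(self):
              self.state = {}

          def update(self, observation: dict):
              # fold new readings into the internal model
              self.state.update(observation)


      class SimpleRobot:
          """Stand-in for real hardware: one sensor value, one actuator."""
          def __init__(self):
              self.position = 0

          def sense(self) -> dict:
              return {"position": self.position}

          def act(self, action: str) -> dict:
              # acting changes the world, which the next sense() call will reveal
              self.position += 1 if action == "forward" else 0
              return {"last_action": action}


      def policy(state: dict) -> str:
          # decisions are driven by the internal model, not by raw sensor input
          return "forward" if state.get("position", 0) < 3 else "stop"


      robot, model = SimpleRobot(), WorldModel()
      for _ in range(5):                           # bounded loop for the sketch
          model.update(robot.sense())              # perceive: ground the model in reality
          result = robot.act(policy(model.state))  # act: based on the model
          model.update(result)                     # close the feedback loop
      print(model.state)                           # {'position': 3, 'last_action': 'stop'}
      ```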

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 12 hours ago

          Not necessarily just an LLM on its own. The key part is that the internal model is coupled with reinforcement learning, so it becomes grounded in the behavior of the physical world. Real-time continuous learning is the way to get there, but it can be done using different approaches. For example, neurosymbolic AI combines deep neural networks with symbolic logic: the LLM is used to parse and classify noisy input data, while a logic engine is used to make decisions about it. My expectation is that going forward we’ll see more of these approaches where different machine learning techniques are combined, with LLMs just one part of the bigger whole.
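
          A toy sketch of that split, with a made-up extract_facts() standing in for the LLM call and a hand-written rule table as the logic engine:

          ```python
          # Toy sketch of the neurosymbolic idea: a neural model turns noisy input into
          # structured facts, then an explicit rule engine makes the decision.
          # extract_facts() is a placeholder, not a real LLM API.

          def extract_facts(raw_text: str) -> dict:
              # imagine an LLM parsing/classifying a messy observation into structured form
              if "tipped over" in raw_text:
                  return {"object": "mug", "state": "tipped_over"}
              return {"object": "unknown", "state": "unknown"}

          # the symbolic half: explicit, inspectable rules instead of learned weights
          RULES = [
              (lambda facts: facts["state"] == "tipped_over", "upright_object"),
              (lambda facts: facts["state"] == "unknown", "ask_for_clarification"),
          ]

          def decide(raw_text: str) -> str:
              facts = extract_facts(raw_text)    # neural: parse the noisy input
              for condition, action in RULES:    # symbolic: logic engine picks the action
                  if condition(facts):
                      return action
              return "do_nothing"

          print(decide("the mug on the counter is tipped over"))  # -> upright_object
          ```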

  • m532@lemmygrad.ml · 1 day ago

    I’ve noticed that authors are mostly completely wrong about everything. They can’t write machines. They can’t write animals either. And of course they can’t write aliens. They can only write humans and then use that for “the machine has feelings” BS. Those things in the stories are not machines; they are badly written humans.

  • Sibyls@lemmy.ml · 1 day ago

    It’s given me an idea of how we get there. Clearly, modern LLMs aren’t near the level seen in movies, but we will get there. We will move on from LLMs to a more adaptive model within a few years, as we further increase our understanding of AI and neural networks.

    I see modern LLMs as task tools: they can interpret our requests and pass them on to a more intelligent model type, which will save the processing power needed by the newer AIs.
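
    As a rough sketch of that dispatcher idea (every name here is hypothetical, not a real API): a cheap interpretation step routes the request to a heavier specialist, so the expensive model only runs when needed.

    ```python
    # Rough sketch of "LLM as task tool": a cheap model only interprets the request,
    # then hands it to a heavier specialized model. All names are invented.

    def interpret_intent(user_text: str) -> str:
        # placeholder for a small LLM call whose only job is classifying intent
        return "summarize" if "summary" in user_text.lower() else "reason"

    SPECIALISTS = {
        # placeholders for the more capable models the dispatcher delegates to
        "summarize": lambda text: f"[summarizer model] condensed: {text[:40]}...",
        "reason":    lambda text: f"[reasoning model] working through: {text[:40]}...",
    }

    def handle(user_text: str) -> str:
        intent = interpret_intent(user_text)    # cheap interpretation step
        return SPECIALISTS[intent](user_text)   # heavy lifting is delegated

    print(handle("Give me a summary of this thread about AI in fiction."))
    ```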

    People in this thread seem to have a lot of bias; they can’t see how the tech will evolve. You need to keep an open mind and look at where the tech is being developed: with AI, it will be new architectures.

    • DecaturNature@yall.theatl.social · 23 hours ago

      Their bias is a direct response to the rhetoric from the ‘leaders’ of the AI industry, who have collected billions of dollars and turned it into BS expectations.

  • slst@lemmy.blahaj.zone · 1 day ago

    My favorite character is a robot, and while she sometimes sounds like an LLM, she’s much more than that. She actually learns how humans are, and it’s beautiful and I love her.

  • FRYD@sh.itjust.works · 2 days ago (edited)

    AI in fiction is a boring concept to me. It’s presented either as “What is a person?” or “What if we create an evil god?”. To me, anything with feelings is a person, and the other is just a chrome paint job on the evil-god characters of non-sci-fi genres, so it’s a speculative dead end.

    AI in real life is much more interesting, and its proliferation makes fictional AI seem even more bland. Real-life AI is first and foremost not intelligent, and probably not even close, though we have no rubric to grade it by because we don’t even really know what intelligence is yet. That said, machine learning algorithms highlight patterns in the world and in our behaviors that are fascinating precisely because they show how complicated the world and people are in ways our brains just passively process. Kind of like how QWOP highlights just how difficult and complicated walking is.

  • tias@discuss.tchncs.de · 2 days ago

    In sci-fi, AI devices (like self-driving cars, ships, or androids) seem like integrated units where any controls or sensors they have are like human limbs and senses. The AI “wills” the engine to start. I always imagined AI would be like a single organism whose neurons are connected directly to the body.

    Given the development of LLMs and how they are used, it now seems more likely that AI will be an additional “smart layer” on top of the dumb machinery, with actions performed by emitting tokens/commands (“raise arm 35 degrees”) that are sent to APIs. The interaction will be indirect, the way we control a TV with a remote.
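
    A small sketch of that “smart layer over dumb machinery” idea, with an invented ArmAPI and command format standing in for whatever the real interface would be:

    ```python
    # The model only emits text commands like "raise arm 35 degrees"; a thin
    # adapter parses them and forwards them to the machine's API.
    import re

    class ArmAPI:
        """Stand-in for the dumb machinery's existing control interface."""
        def raise_arm(self, degrees: float) -> None:
            print(f"arm raised by {degrees} degrees")

    def dispatch(command: str, arm: ArmAPI) -> None:
        # the AI never touches the motor directly; it's the remote, not the TV
        match = re.fullmatch(r"raise arm (\d+(?:\.\d+)?) degrees", command.strip())
        if match:
            arm.raise_arm(float(match.group(1)))
        else:
            print(f"unrecognized command: {command!r}")

    dispatch("raise arm 35 degrees", ArmAPI())
    ```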