• 0 Posts
  • 257 Comments
Joined 2 years ago
Cake day: June 30th, 2023

  • Buddahriffic@lemmy.world to Lemmy Shitpost@lemmy.world · nostalgia · 19 days ago (+14/−4)

    I find the “AI slop” commenters as annoying as the “that didn’t happen / everyone clapped / Albert Einstein handed you a $100 bill” ones. Their contribution is shittier than whatever they’re commenting on even when they’re correct, and it’s not rare to see a comment like that followed by the dumbest logic for why they think it’s the case (especially on reddit).


  • Things get more violent. Wind tries to find the path of least resistance, but as a fluid it takes all paths at once, in inverse proportion to each path’s resistance (just like electricity). If you increase the resistance in one area, you reduce the relative resistance everywhere else, so you end up with increased airflow everywhere else and reduced airflow where you added resistance. That means more wind outside of the turbine’s path (because that pressure differential is going to equalize one way or another). More flow through the same volume means higher speeds and forces (think of turning up the pressure on a tap).
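    The electricity analogy is easy to make concrete: if a fixed total flow splits across parallel paths in proportion to each path’s conductance (1 / resistance), then raising one path’s resistance pushes flow onto the others. A toy sketch in Python (the numbers are invented for illustration, not real aerodynamics):

```python
def split_flow(total_flow, resistances):
    """Divide a fixed total flow among parallel paths in
    proportion to each path's conductance (1/resistance)."""
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_flow * g / g_total for g in conductances]

# Two equal paths: the flow splits 50/50.
print(split_flow(100.0, [1.0, 1.0]))  # -> [50.0, 50.0]

# Double one path's resistance (say, by adding a turbine) and
# flow shifts toward the unobstructed path.
print(split_flow(100.0, [2.0, 1.0]))  # roughly [33.3, 66.7]
```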

    But wind turbines don’t have a constant effect on wind resistance; it depends on how fast they’re spinning and how fast the wind is moving. When the wind slows, the resistance goes down, and when resistance goes down, wind speed increases. So you end up with an oscillating effect: the wind strengthens, loses more energy to the turbines and weakens, the turbines then take less energy, and the wind strengthens again. Though you’d need to be taking a significant amount of that energy to see an extreme effect like this.
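    That strengthen/weaken cycle can be illustrated with a deliberately crude discrete-time model: constant forcing plus a drag term that reacts to the current speed. With a strong enough drag response, each step overshoots the equilibrium, so the speed rings instead of settling smoothly. All numbers here are invented; this is a sketch of the feedback shape, not real physics.

```python
def simulate(steps, forcing=1.0, drag=1.5, v0=0.0):
    """Toy wind model: v[n+1] = v[n] + forcing - drag * v[n].
    Equilibrium is forcing / drag; with drag > 1 each step
    overshoots it, giving a damped oscillation rather than a
    smooth approach."""
    v, history = v0, []
    for _ in range(steps):
        v = v + forcing - drag * v
        history.append(v)
    return history

# Speeds alternate above and below the equilibrium (1 / 1.5 ≈ 0.667)
# while the swings shrink each step.
print(simulate(4))  # -> [1.0, 0.5, 0.75, 0.625]
```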

    Apparently taking more than 53.9% of the total wind energy in an area is enough to slow the wind to a stop (again, a violent, turbulent, oscillating stop, not a gentle end of wind).


  • That’s the thing, I don’t think you’re giving LLMs poisoned data, you’re just giving them data. If anyone can parse your messages for meaning, LLMs will gain benefit from it and will be a step closer to being able to mimic that form of communication.

    I don’t think you can truly poison data for LLMs while also having a useful conversation. If there’s useful information being conveyed in your text, then it’s just data that moves any LLM trained on it closer to being able to parse that information. I think only nonsense communication would be effective at actually making the LLMs worse.



  • Here’s GPT 5-mini:

    What is going on with the following text, I can’t understand it!
    <pasted the comment above>

    Output:
    This text uses archaic letter forms (þ = thorn, representing “th”) and some nonstandard spellings; read with modern equivalents and cleaned punctuation:

    “Yeah, the mechanics are entirely different. The spider is providing structural support, is extruding filament not under tension, then provides tension while attaching the filament so the result is under tension when connected. Then the support structure (El Spidré) moves to the next extrusion point. It’s not simply squirting a web into place; there’s a lot of manipulation of the extruded material and connection points going on all at once.”

    Summary of the meaning, briefly:

    • The spider first creates a filament without stretching it.
    • While attaching that filament it applies tension so the final strand is taut.
    • The support structure (nicknamed “El Spidré”) shifts to the next spot to repeat the process.
    • The process involves active handling and tensioning, not just passively releasing silk.

    A 1:1 text replacement is something LLMs are very good at handling and a waste of time and effort to target with an attack.
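    Undoing a 1:1 substitution like this doesn’t even need an LLM; assuming þ/Þ standing in for “th”/“Th” is the whole scheme, two string replacements reverse it:

```python
def deobfuscate(text):
    """Undo the archaic-letter substitution: thorn (þ/Þ)
    stands in for "th"/"Th"."""
    return text.replace("þ", "th").replace("Þ", "Th")

print(deobfuscate("Þe mechanics are entirely different, þough."))
# -> The mechanics are entirely different, though.
```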


  • Buddahriffic@lemmy.world to 3DPrinting@lemmy.world · Explain that (English) · 23 days ago (+14)

    Considering LLMs handle translation, synonyms, and homonyms pretty well, I don’t think replacing a letter combination with a different symbol is going to cause much confusion. I bet ChatGPT right now will understand that text perfectly fine and will present it with or without the dumb symbols when asked.


  • TPM is more about securing data from PC owners than for them. Since it’s there anyway, it gets used to support BitLocker, but the reason they’re pushing it so hard is that it might (depending on whether it’s actually secure) let content providers allow users to view their content without giving them the ability to copy or edit it.

    And there isn’t any guarantee that the uses that do benefit the user’s security don’t have some backdoor for approved crackers to get in. Like, doesn’t the MS account store a copy of the BitLocker recovery key? Which is nice for when the user needs it, but also comes in handy if MS wants to grant access to anyone else.




  • Buddahriffic@lemmy.world to cats@lemmy.world · Pikachu · 1 month ago (+2/−1)

    I put a link in another reply; dunno if you’d consider it credible or not, but I got it from a web search for “inhaling diatomaceous earth” if you want to look at the many other links.

    Food grade isn’t as bad as non-food grade, but it’s still not great. And the “asbestos-like” bit was in the context that it causes harm through a similar mechanism (small, sharp, jagged particles); severity and outcome might vary.




  • It’s because they are horrible at problem solving and creativity. They are based on word association from training purely on text. The technological singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.

    Even though GitHub Copilot has impressed me by implementing a three-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one to build it up. And even then, it would get parts I failed to specify completely wrong and initially implemented things in a very inefficient way.

    There are fundamental things that the technological singularity needs that today’s LLMs lack entirely. I think the changes required to get there would also turn them from LLMs into something else. The training is part of it, but fundamentally, LLMs are massive word-association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.