• havocpants@lemmy.world · 7 hours ago

    At least Scale AI isn’t 700 Indians in a trenchcoat like that company Microsoft poured money into.

    • UnderpantsWeevil@lemmy.world · 8 hours ago

      Step 1: Go to an Ivy League school

      Step 2: Make friends with a failson/daughter of a prevailing plutocrat

      Step 3: Put the Matrix-code screensaver on your laptop (apparently, this worked on Elon Musk in the early Twitter takeover days)

      Step 4: ???

      Step 5: Get a $10M Series A and a $100M Series B thanks to the families of your rich friends buying into your hare-brained Theranos knock-off.

      • grue@lemmy.world · 6 hours ago

        Somewhere in there, there’s a step “come up with the most sociopathic business plan possible.”

        • themeatbridge@lemmy.world · 2 hours ago

          You know how you have to pay extra to have insurance to pay to take care of your mouth bones and your face balls? Well, what if we did that but with all the bones and stuff? Like, why are your foot bones included in the same insurance that pays for you to have knee bones or neck giblets? Why not do all the bones and stuff à la carte? And then maybe skin can be a premium add-on. We could charge separately for the red goo that’s all on the inside everywhere, and then it’s like a subscription model for having parts. We can sell it like “don’t pay for the parts you don’t have,” and people will think they’re saving money because each part costs less than the whole, even though paying for everything costs more.

          -some Health Insurance board member somewhere, probably.

      • jjjalljs@ttrpg.network · 6 hours ago

        I read a book about startup stuff at the request of the CEO of my old company. Some of it was at least superficially interesting, but one part stood out as haunting. It casually mentioned how a “founder” at Zoom was trying to get money, and the investors thought it was a stupid idea. It was a “solved problem” that already had big players in the space. But they were personally friends with the guy, so they gave him a few hundred million dollars.

        That’s not something to be proud of. Nepotism and bro-driven investment aren’t the ideal.

        But these rich assholes pretend they’re such visionaries. Gatekeepers of the future. Fuck them. Fuck them all. The climate is a disaster and they’re pouring billions into “Cats in the metaverse”? Crimes against humanity.

  • brucethemoose@lemmy.world · edited · 9 hours ago

    The research community already knows this.

    Llama 4 (Meta’s flagship ‘AI’ project) was a bad release. That’s fine. This is iterative research; not every experiment works out.

    …But it was also a messy and dishonest one.

    The release was pushed early and full of bugs. They lied about its performance, especially at long context, going so far as to game Chat Arena with a finetune. Zuckerberg hyped the snot out of it, to the point I saw ads for it on Axios.

    Instead of saying they’ll do better, Meta said they’re reorganizing their divisions to focus on ‘applications’ instead of fundamental research, aka exactly the wrong thing. They’ve hemorrhaged good researchers and kept AI bros, as far as I can tell from the outside.

    Every top LLM trainer has controversies. Just recently Qwen (Alibaba) closed off their top base models just to spite Deepseek, so they can’t distill them. Deepseek is almost certainly training on Google Gemini traces. Google hoards their best research for API models and has chased being sycophantic like ChatGPT. X’s Grok is a joke, and muddied by Musk’s constant lies about, for instance, open sourcing it. Some great outfits like 01ai (the Yi series) faded into the night.

    …But I haven’t seen self-destruction quite like Meta’s. Especially considering the ‘f you’ money and GPU farm they have. They’re still pushing interesting research now, but the trajectory is awful.

      • brucethemoose@lemmy.world · edited · 9 hours ago

        Yes, but it’s clearly a building block of Meta’s LLM training effort, and part of a pattern.

        One implication I didn’t mention, and don’t have hard proof I can point to, is garbage in garbage out. Meta let AI slop and human garbage proliferate on Facebook, squandering basically the biggest advantage (besides cash) they have. It’s often speculated that, as it turns out, Twitter and Facebook training data is kinda crap.

        …And they’re at it again. Zuckerberg pours cash into corporate trash and gets slop back. It’s an internal disaster, much like their reorganized divisions.

        On the other side, it’s often thought that Chinese models are so good for their size/compute because they’re, ahem, getting data from the Chinese government, and don’t need to worry about legal issues.