So my theory is that, with the help of telemetry or some other channel, AI can learn from data stored on users’ computers. That means AI can steal your completed work, as well as your edits and corrections to it, even when you work offline, if you’re a Windows user for example.

In short, AI will be able to learn from you even as you edit your articles, rework your drawings, improve your music, and so on. In other words, AI will literally steal your soul.

What do you think about it?

  • brucethemoose@lemmy.world

    …No.

    The way “AI” is set up now, a model is a big block of “weights” that can run in two modes:

    • Inference: running a model to generate some kind of prediction from input, e.g. the next word in a block of text for an LLM. Typically this is not so hard, and it’s ‘batched’: a single GPU may serve 16 people at once, in parallel (though I am skipping many intricacies here).

    • Training: taking a bunch of data (like big blocks of specifically formatted, processed text for LLMs) and altering the weights to fit it with glorified linear regression. This is typically done on TONS of tightly networked GPUs in more specialized setups, usually at least 8 big ones in one server. And all the data selection/formatting is done by humans, by hand, though sometimes augmented with algorithms that, say, generate a thinking trace. There’s also a distinction between ‘pretraining’ (making the initial model) and ‘finetuning’ (slightly altering it with new data, which is tricky). See the sketch after this list for what the two modes look like in code.
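
    To make that concrete, here’s a minimal sketch of the two modes using PyTorch on a toy model. The layer, data, and loss are invented stand-ins; real LLM serving and training stacks are vastly more complex:

    ```python
    # Minimal sketch: the same weights, two very different modes.
    import torch
    import torch.nn as nn

    model = nn.Linear(16, 16)  # stand-in for billions of transformer weights

    # --- Inference: weights are frozen, nothing is learned ---
    model.eval()
    with torch.no_grad():                    # gradients are never even computed
        batch = torch.randn(16, 16)          # e.g. 16 users' requests served at once
        predictions = model(batch)           # weights are read, never written

    # --- Training: a separate, deliberate job that rewrites the weights ---
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    inputs, targets = torch.randn(16, 16), torch.randn(16, 16)  # a curated dataset
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()                          # compute gradients w.r.t. the weights
    optimizer.step()                         # only here do the weights change
    ```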


    The point I’m making is: you cannot do both at once.

    You can run inference on an AI, but internally it will never change. It never learns.

    You can train it with new data, but that is a huge, very manual, finicky, and infrequent endeavor. And it takes a long time.

    Theoretically, ‘learning on the go’ is a goal of machine learning, but right now Big Tech is acting about as innovative as a brick, just scaling up architectures and stoking egos rather than paying attention to this kind of research. The Chinese LLM companies are being relatively conservative too, though in a different way.

    Also, the recent fad (especially in China) is to make and use synthetic data instead, i.e. data some AI made up all by itself. In combination with smaller amounts of really clean, high-quality ‘real’ data (e.g. not random files stolen from your computer), this is actually quite effective.
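
    Roughly, the synthetic-data loop looks like this. A sketch using the Hugging Face transformers library; the model name and prompts are placeholders, and real pipelines add aggressive quality filtering on top:

    ```python
    # Sketch: an existing "teacher" model generates its own training data.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # placeholder model

    prompts = ["Explain why the sky is blue.", "Summarize the water cycle."]
    synthetic_examples = []
    for prompt in prompts:
        text = generator(prompt, max_new_tokens=100)[0]["generated_text"]
        synthetic_examples.append({"prompt": prompt, "response": text})

    # After heavy filtering, these generated pairs become the training set
    # for the *next* model; no files scraped off anyone's computer.
    ```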


    If you’re worried about privacy, the advertising ‘models’ that companies like Facebook and Google already make, and have been making for over a decade, are closer to what you describe. It’s already happened, and we’ve been living with it for years.

    Some of that is old-school machine learning, but not all of it.
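
    For a sense of what that old-school side looks like, here’s a toy click-prediction model with scikit-learn. The features and numbers are invented; real ad models use enormously more signals:

    ```python
    # Toy 'ad-targeting' model: logistic regression on behavioral features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [age, pages_visited, searched_for_shoes, clicked_shoe_ad_before]
    X = np.array([
        [34, 12, 1, 1],
        [22,  3, 0, 0],
        [45, 20, 1, 0],
        [29,  7, 0, 1],
    ])
    y = np.array([1, 0, 1, 1])  # did the user click the shoe ad?

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[31, 9, 1, 0]])[:, 1])  # P(click) for a new user
    ```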