

I see. Yeah, I agree with you there.
I think that was true as of a few years ago. However, as someone working in AI, I don’t think it is any longer. I’m not saying the Substack article is legit, btw, just that the fulcrum has shifted: fewer people can now do much more, aided by algorithms and boosted by AI system prompts. Especially if it’s a group internal to a company with database access, etc.
It’s tricky. There is code involved, and the code is open source. There is a neural net involved, and its weights are released openly. The part that is not available is the input that went into training. This seems to be a common way for models to be released as both “open source” and “open weights”, but you wouldn’t necessarily be able to replicate the outcome with $5M or whatever it takes to train the foundation model, since you’d have to guess at what they used as their training corpus.