nickwitha_k (he/him)

  • 1 Post
  • 83 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • It scares me to think what people are doing to themselves by relying on this, especially if they’re novices.

    Same here. There’s a lot of denial going on, but LLMs are not good for anything that requires factual information. They likely never will be, on account of being just statistical models for language. Summarizing long text where correctness isn’t an issue is really one of the only places where I still think that they’re good.

    Search? Not if you want anything factual with citations.

    Code? Fuck no. They constantly produce poor-quality code that may depend on non-existent libraries or functionality. More time gets spent debugging than writing code, and it leaves the dev with a poor understanding of what the code actually does and of ways to optimize/extend/etc.

    Generating literary smut? Well, it’s not going to do as good a job as a person who can create something completely novel, but it can be passable without likely harm to authors (I’d classify it as a tier below erotic fan fiction).

  • Oh it’s the same shit as feudalism, but with technology… Thanks for letting me know that’s what Techno-Feudalism means.

    Understanding the meaning and context of terms is very important.

    … I guess we could add “global” to the front of it so you know it’s not just happening in a castle in 14th century Europe, but all across the planet.

    I find “neo-feudalism” more appropriate. The previous incarnation already spanned the known world at the time.

    Like, how many castles were in Europe? Okay, compare that to how many Amazons there are? It’s not the same thing at all.

    That’s really a comparison that makes me think that, perhaps, learning more about feudal history would do us all good. A more apt comparison would be “how many Vaticans were there?” (depending on the time period, two).

    Rome was the seat of power through much of feudalism in the Common Era in Europe. Castles were extensions of the theocratic empire centered there, providing physical and visual/psychological enforcement of that power. Despite all of the war and megalomaniacal bickering, the feudal lords and kings all had the same boss.

    There’s less difference than you apparently think.

    Sorry, I don’t have time for this mind dulling discussion.

    I’m sorry that you don’t know enough about history to understand how nearly identical the two are and didn’t mean to cause distress, not knowing how attached to the term you were.

    G’luck.



    I’ve read Varoufakis and don’t find his claim that it’s anything new, beyond the technologies used, at all compelling.

    So no, the use of feudalism isn’t to indicate something about old-school mechanisms of war, weaponry, brutality, or repression. It’s a reference to the role of economic serfdom and the economic aspects of feudalism.

    Teotihuacan was the center of an empire but it had no military.

    What I’m saying is that they even go with divine mandate at this point. Just because they’re not jousting, and are using abstractions enabled by modern technology instead of castles, doesn’t make it fundamentally a different, new thing. Commerce, and who could engage in it, was heavily regulated by feudal lords and by organizations that they ran or allowed to operate.

    It’s literally just the same shit with better technology. The far-right isn’t that creative.



  • It sounds to me like you’re more strict about what you’d consider to be “the LLM” than I am; I tend to think of the whole system as the LLM.

    My apologies if it seems “nit-picky”. Not my intent. Just that, to my brain, the difference in semantic meaning is very important.

    I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone’s brain is sentient.

    In my thinking, that’s exactly what asking “can an LLM achieve sentience?” is, so I can see the confusion. Because I am strict in classification, it is, to me, literally like asking “can the parahippocampal gyrus achieve sentience?” (probably not by itself - though our meat-computers show extraordinary plasticity… so, maybe?).

    For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.

    Precisely. And I suspect that it is very much related to the constrained context available to any language model. The world, and thought as we know it, is mostly not language. Not everyone has an internal monologue that is verbal/linguistic (some don’t have one at all, and mine tends to be more abstract when not dealing with verbal things), so it follows that more than linguistic analysis is necessary.




  • Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.

    I’ll have to get back to you a bit later when I have a chance to fetch some articles from the library (public libraries providing free access to scientific journals is wonderful).

    Isn’t this analogous to short-term memory?

    As one with AuADHD, I think a good deal about short-term and working memory. I would say “yes and no”. It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems that we know of involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.
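
    To illustrate, something like this is all a chat session amounts to (the `generate` function is a made-up stand-in for any completion call); the whole “memory” is a flat text buffer that gets re-fed each turn, with nothing happening to it in between:

    ```python
    # The "memory" of a chat session is just a text buffer re-fed verbatim on
    # every turn; nothing is consolidated or re-encoded between turns, unlike
    # biological short-term memory.
    def generate(prompt: str) -> str:
        return "..."  # placeholder for any real LLM completion call

    history: list[str] = []  # the entire "memory" of the conversation

    def chat_turn(user_message: str) -> str:
        history.append(f"User: {user_message}")
        # The model sees only this flat buffer; when it overflows the
        # context window, old turns are simply truncated away.
        reply = generate("\n".join(history))
        history.append(f"Assistant: {reply}")
        return reply
    ```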

    Would you consider that LLM system to have persistent context?

    Potentially, yes. But that relies on other systems supporting the LLM, not just the LLM itself. It is also purely linguistic analysis, without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead-end towards an AGI. As a component of a system, it becomes much more promising.
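
    A hand-wavy sketch of what I mean by the persistence belonging to the supporting systems rather than to the LLM (the file name and `generate` function are made up for illustration):

    ```python
    # Persistent context lives in scaffolding around the model: notes are kept
    # on disk, and the stateless LLM just has them prepended to each prompt.
    import json
    from pathlib import Path

    MEMORY_FILE = Path("memory.json")  # hypothetical external store

    def generate(prompt: str) -> str:
        return "..."  # placeholder for any real LLM completion call

    def recall() -> list[str]:
        return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

    def remember(note: str) -> None:
        MEMORY_FILE.write_text(json.dumps(recall() + [note]))

    def ask(question: str) -> str:
        # Continuity comes from this wrapper, not from the model itself.
        prompt = "Known facts:\n" + "\n".join(recall()) + "\n\nQuestion: " + question
        return generate(prompt)
    ```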

    On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?

    This is a great question. Seriously. Thanks for asking it and making me contemplate. This would likely depend on how much development the person had prior to the anterograde amnesia. If they were hit with it prior to developing all the components necessary to demonstrate conscious thought (e.g. as a newborn), it’s a bit hard to argue that they are sentient (anthropocentric thinking is the only reason to do so that I can think of).

    Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience. Lack of long-term memory alone doesn’t impact that, for the individual or the LLM. It’s a combination of it and other factors (i.e. the afflicted individual was previously able to analyze and store enough data, and build the neural networks, to support the ability to synthesize and think abstractly; they’re just trapped in a hellish sliding window of temporal consciousness).

    Full disclosure: I want AGIs to be a thing. Yes, there could be dangers to our species due to how commonly-accepted slavery still is. However, more types of sentience would add to the beauty of the universe, IMO.


    LLMs, fundamentally, are incapable of sentience as we know it, based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse’s corpse.

    LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models that calculate likely responses based upon statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they’re not going to save the world for us.
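
    A toy illustration of what “calculating likely responses” means (the hard-coded bigram counts stand in for the statistics a real model learns from its training data):

    ```python
    # An autoregressive model repeatedly samples the next token from a
    # probability distribution conditioned on what came before. Pure
    # statistics; no understanding or synthesis involved.
    import random

    bigram_counts = {  # made-up stand-in for learned statistics
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 2},
        "dog": {"ran": 4},
        "sat": {"down": 5},
        "ran": {"away": 5},
    }

    def next_token(token: str) -> str:
        options = bigram_counts.get(token, {})
        if not options:
            return "<end>"
        # Sample proportionally to observed frequency.
        return random.choices(list(options), weights=list(options.values()))[0]

    def generate(start: str, max_len: int = 6) -> str:
        out = [start]
        while len(out) < max_len:
            tok = next_token(out[-1])
            if tok == "<end>":
                break
            out.append(tok)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat down"
    ```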

    They could be a part in a more complicated system that forms an AGI. There’s nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience but, there is enough data to show that it’s not a stochastic parrot.

    I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.