• 0 Posts
  • 248 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • 30 years ago the Internet was tiny, and to this day you can largely get the same experience if you opt to ignore the more frustrating parts of the Internet. In practice, the problem is that extremist views can come closer together without the moderating influence of those physically near you. I would definitely appreciate a harder push back towards federation and a break from subscription-based software, though compared to 30 years ago, the free software today is better than anything we had back then.

    Our cars were not durable back then; drivetrains can take a whole lot more negligence than they used to, and hoses and gaskets last longer than they did. There have been struggles with some cars adding turbos for efficiency, but even those are way less problematic than they used to be.

    We can interact in real life, we just largely don’t. As an adult I probably interact with peers about as much as my parents did when they were my age, not much at all. Constant hanging out goes away with age for most people.

    There’s a lot of regression in the world but that pendulum swings back and forth.



  • That’s fine; I’m just saying that so long as there are people pumping ridiculous amounts of money into the fiction that it can do anything and everything, I won’t fault folks for having the counter-reaction of being overly dismissive of, or repulsed by, mentions of it.

    I’m hopeful for the day when the hype subsides and it settles into the appropriate level of usefulness and expectations, complete with perhaps less ludicrous overspend on the infrastructure.


  • jj4211@lemmy.world to Comic Strips@lemmy.world · Four Eyes Principle · 2 days ago

    The “luddite” reaction is largely a reaction to the overhype applied by an industry that pretends the current wave of text/image generators is general intelligence and, in conjunction with robotics, can replace every job and allow the upper-class folks to live a full life without that pesky labor class.

    So it’s natural to expect a wave of such hype, pretending it’s unambiguously amazing and perfect, to get hit with a counter that’s overly dismissive and treats AI as a very bad brand. Also, in some contexts, even if it is a net win, it’s still kind of annoying. In my haystack example, a human would have reviewed 23 things confidently declared by the AI to be needles and said no to them. Practically speaking, that’s unimaginably better than reviewing millions of not-needles to get to some needles, but we are more annoyed because in our mind the things presented were supposed to be needles.

    The same applies to a lot of generative AI use: it might provide a decent chunk of content that’s nearly usable 20% of the time, quickly enough to be worth it, but it’s hard to ignore the 80% of suggestions it throws at you that are unusably bad. What the percentage is depends on your job and your niche. From a creative perspective, it generates milquetoast stuff, which may suffice for backgrounds and things that don’t matter, but is a waste of time when attempted for the key creative elements.

    Broadly, society has to navigate the nuanced middle ground, where it can be pretty good assistive technology without going all out on it. Except, of course, there are areas likely to be largely or fully automated, like customer support or food order taking (though I prefer kiosks/apps for more precise ordering, tapping my way through, but either way not a human).




  • If someone is claiming God is on their side, then absolutely they should not be trusted.

    A good example was Huckabee’s message to Trump, where he said Trump shouldn’t listen to humble old Huckabee, but should listen to God, who, coincidentally, is saying exactly the same thing as Huckabee.

    If you have your faith, but make no assertions about its validity over other opinions, nor that it confers divine authority on the words or deeds of any person, cool, I respect that faith. I’m inclined to have some faith myself, but I’m not about to claim any of it is more than my personal wild guesses and hope.

    However, organized religion is generally exploitable, and bad people take advantage…


  • Basically, AI is generally a decent answer to the needle-in-a-haystack problem. Sure, a human with infinite time and attention can find the needle, perhaps more accurately than an AI could, but practically speaking, if there are just 10 needles in a haystack, it’s considered a lost cause to find any of them.

    With AI, it might flag 30 needles in that same stack, of which only 7 are actual needles. That means the AI finds more wrong answers than right, but ultimately you end up finding 7 needles when you would have missed all 10 before, coming out ahead.

    So long as you don’t let an AI rule out review of a scan that a human really would have reviewed, it seems a win to potentially have more scans overall get a decent review, and maybe to catch things earlier through otherwise impractical preventative scans.
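
    To put rough numbers on that trade-off, here’s a minimal sketch in Python using only the made-up figures from this example (10 real needles, 30 items flagged, 7 of them correct), framed in standard precision/recall terms:

    ```python
    # Illustrative figures from the haystack example above (all made up)
    actual_needles = 10    # needles really in the stack
    flagged = 30           # items the AI confidently declares to be needles
    true_positives = 7     # flagged items that really are needles

    precision = true_positives / flagged        # 7/30, about 23%
    recall = true_positives / actual_needles    # 7/10, 70%
    false_positives = flagged - true_positives  # the 23 rejects a human reviews

    print(f"precision: {precision:.0%}")  # most flags are wrong...
    print(f"recall: {recall:.0%}")        # ...but 70% of needles found vs. 0% before
    print(f"human reviews {false_positives} rejects instead of millions of items")
    ```

    Low precision with decent recall is exactly the “more wrong answers than right, but still ahead” situation: the review burden shrinks from the whole haystack to 30 items.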


  • The issue here is that we’ve gone well into sharply exponential expenditure of resources for reduced gains, there’s a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and there’s no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.

    I anticipate a pullback of resources invested and a settling for some middle ground where the current state of the art is absolutely useful/good enough: mostly wrong, but very quick when it’s right, with relatively acceptable consequences for the mistakes. Perhaps society will get used to the sorts of things it fails at and reduce how much time we spend trying to make LLMs play in that 70%-wrong sort of use case.

    I see LLMs as replacing first-line support, maybe escalating to a human when actual stakes arise for a call (issuing a warranty replacement, a usage scenario that actually has serious consequences, a customer demanding human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds, like “generic forest that no one is going to actively look at, but it must be plausibly forest.” I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all of these is that they live in the mind-numbing sorts of things current LLMs can get right and/or have a high tolerance for mistakes, with ample opportunity for humans to intervene before the mistakes inflict much cost.


  • I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…

    So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.

    It’s exceedingly frustrating and annoying, but I’m not sure I can call it a net loss in time.

    So reviewing a proposal for relevance, cutting it off, and editing it adds time to my workflow. Let’s say that on average, for a given suggestion, I spend 5% more time determining whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% of suggestions that are useful make those scenarios 500% faster, then I come out ahead overall, though I’m annoyed 80% of the time. My guess as to whether a suggestion is even worth looking at improves with practice: if I’m filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), it has a high chance of a substantial match. If I’m doing something even vaguely esoteric, I just ignore the suggestions popping up.
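
    Putting rough numbers to that back-of-envelope claim (all figures are my guesses above, and I’m reading “500% faster” as the snippet taking one-sixth the time; both are assumptions, not measurements):

    ```python
    # My rough guesses from above, normalized so writing a snippet
    # unassisted costs 1.0 units of time.
    review_overhead = 0.05  # extra time spent judging every suggestion
    useful_rate = 0.20      # fraction of suggestions worth using
    speedup = 6.0           # "500% faster" read as 6x speed (an assumption)

    # Expected time per snippet with suggestions enabled
    with_ai = review_overhead + (1 - useful_rate) * 1.0 + useful_rate / speedup
    print(f"with suggestions: {with_ai:.3f} vs. 1.000 without")  # ~0.883

    # Roughly a 12% net saving, even though 80% of suggestions get trashed.
    ```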

    However, the 20% is still a problem, since I’m maybe too lazy and complacent: spending the 100 milliseconds glancing at a word that looks right in review will sometimes fail me, compared to spending the 2-3 seconds typing that same word out by hand.

    That 20% success rate, where I fix things up and dispose of most of the rest, works for code completion, but prompt-driven tasks seem so much worse for me that it’s hard to imagine them being better than the trouble they bring.



  • It’s not like the road test is particularly rigorous. It’s worthwhile to administer, but you have to be in super bad shape for the examiner to even notice you doing anything off, so it’s not like the risk is high.

    That written test, though: I took a practice one, and my driving experience did not keep me in shape to pass it… Of course, the questions are stupid, like “which of the following violations carries the harshest penalty” or “exactly how many feet away from an intersection must you park when street parking on an unmarked street.”


  • To reinforce this: I just had a meeting with a software executive who has no coding experience but is nearly certain he’s going to lay off nearly all his employees, because the value is all in the requirements he manages, and he can feed those to a prompt just as well as any human can.

    He does tutorial-fodder introductory applications and assumes all the work is like that. So he is confident that he will save the company a lot of money by laying off these obsolete computer guys and focusing on his “irreplaceable” insight. He’s convinced that all the negative feedback is just people trying to protect their jobs, or people stubbornly refusing to get on board with new technology.


  • jj4211@lemmy.world to Fuck Cars@lemmy.world · The dream · 14 days ago

    Sure, it’s just an interesting challenge for funding development with public money.

    You draw funds from people who can’t benefit unless they spend even more money to relocate. It’s hard to get initiatives passed when your tax base is largely not going to benefit. The chicken-and-egg effect is harsher than just the time it will take.