• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: October 4th, 2023


  • I’m okay with game prices going up – they’ve fallen far behind inflation over the decades – though personally I favor DLC rather than one big up-front purchase. Lower risk on both sides.

    And there are a lot of games out there that, when including DLC, run much more than $100. Think of The Sims series or a lot of Paradox games. Stellaris is a fun, sprawling game, but with all DLC, it’s over $300, and it’s far from the priciest.

    But if I’m paying more, I also want to get more utility out of what you’re selling. If a game costs $100, I expect to get twice what I get out of a competing $50 game.

    And to be totally honest, most of the games that I really enjoy have complex mechanics and have the player play over and over again. I think that most of the cost that game studios want is for asset creation. That can be okay, depending upon genre – graphics are nice, music is nice, realistic motion-capture movement is nice – but that’s not really what makes or breaks my favorite games. The novelty kind of goes away once you’ve experienced an asset a zillion times.


  • tal@lemmy.today to Technology@lemmy.world · Terminal colours are tricky

    Not to mention that the article author apparently likes dark-on-light coloration (“light mode”), whereas I like light-on-dark (“dark mode”).

    Traditionally, most computers were light-on-dark. I think it was the Mac that really shifted things to dark-on-light:

    My understanding from past reading is that the change was made because, at the time, people were generally working with computer representations of paper documents. For ink economy reasons, paper documents were normally dark-on-light: ink costs something, so you’d rather put it on 5% of the page than on 95% of it. If a computer showed a light-on-dark image of a document that would subsequently be printed dark-on-light on paper, that would really break the WYSIWYG paradigm emerging at the time. So word processors and the like drove the decision to move to dark-on-light:

    Prior to that, a word processor might have looked something like this (WordPerfect for DOS):

    Technically, I suppose it wasn’t the Mac where that “dark-on-light-following-paper” convention originated, just where it was popularized. The Apple IIgs had some kind of optional graphical environment that looked like a proto-Mac environment, though I rarely saw it used:

    Update: apparently that wasn’t actually released until after the Mac. This says that that graphical desktop was released in 1985, while the original 128K Mac came out in 1984. So it’s really a dead-end side branch, rather than a predecessor.

    The Mac derived from the Lisa at Apple (which never became very widespread):

    And that derived from the Xerox Alto:

    But for practical purposes, I think that it’s reasonably fair to say that the Mac was really what spread dark-on-light. Then Windows picked up the convention, and it was really firmly entrenched:

    Prior to that, MS-DOS was normally light-on-dark (with the basic command line environment being white-on-black, though with some apps following a convention of light on blue):

    Apple ProDOS, widely used on Apple computers prior to the Mac, was light-on-dark:

    The same was true of other early text-based PC environments, like the Commodore 64:

    Or the TRS-80:


    When I used VAX/VMS, it was normally via a VT terminal, which would have been light-on-dark – typically green, amber, or white on black, depending upon the terminal:

    And as far as I can recall, terminals for Unix were light-on-dark.

    If you go all the way back before video terminals to teleprinters, those were putting their output directly on paper, so the ink issue comes up again, and they were dark-on-light:

    But I think that there’s a pretty good argument that, absent ink economy constraints, the historical preference has been to use light-on-dark on video displays.

    There’s also some argument that OLED displays – and, one assumes, any future display where you only light up what needs to be lit, rather than the LCD approach of lighting the whole panel and then blocking (and converting to heat) whatever you don’t want to be light – draw somewhat less power with light-on-dark. That provides some battery benefit on portable devices, though in most cases, that’s probably not a huge issue compared to eye comfort.
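
    As a rough back-of-the-envelope illustration, assuming OLED power scales roughly with the fraction of the screen that’s lit, the 5%-vs-95% coverage difference mentioned above works out to something like this (every number is made up for the example, not a measurement of any real panel):

```python
# Back-of-the-envelope sketch: assume OLED panel power scales roughly with
# the fraction of pixels that are lit. All numbers are illustrative
# assumptions, not measurements of any real display.

FULL_WHITE_WATTS = 6.0   # assumed draw with every pixel at full white
BASE_WATTS = 0.5         # assumed always-on controller/driver overhead

def panel_watts(lit_fraction: float) -> float:
    """Estimated panel power for a given fraction of lit pixels."""
    return BASE_WATTS + FULL_WHITE_WATTS * lit_fraction

dark_mode = panel_watts(0.05)    # light-on-dark: ~5% of pixels lit
light_mode = panel_watts(0.95)   # dark-on-light: ~95% of pixels lit

print(f"dark mode:  ~{dark_mode:.1f} W")
print(f"light mode: ~{light_mode:.1f} W")
print(f"ratio:      ~{light_mode / dark_mode:.1f}x")
```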






  • In August 1993, the project was canceled. A year of my work evaporated, my contract ended, and I was unemployed.

    I was frustrated by all the wasted effort, so I decided to uncancel my small part of the project. I had been paid to do a job, and I wanted to finish it. My electronic badge still opened Apple’s doors, so I just kept showing up.

    I asked my friend Greg Robbins to help me. His contract in another division at Apple had just ended, so he told his manager that he would start reporting to me. She didn’t ask who I was and let him keep his office and badge. In turn, I told people that I was reporting to him. Since that left no managers in the loop, we had no meetings and could be extremely productive.

    They created a pretty handy app that was bundled with the base OS, and which I remember having fun using. So it’s probably just as well that Apple didn’t hassle them. But in all seriousness, that’s not the most amazing building security ever.

    reads further

    Hah!

    We wanted to release a Windows version as part of Windows 98, but sadly, Microsoft has effective building security.


  • And it’s not the battery itself because I’ve tried getting new batteries for it. It’s something in the charging circuitry. It works fine when it’s on wall power, but it just does not charge the battery.

    At least some Dell laptops authenticate to the charger so that only “authentic Dell chargers” can charge the battery, though they’ll run off third-party chargers without charging the battery.

    Unfortunately, it’s a common problem – and I’ve seen this myself – for the authentication pin on an “authentic Dell charger” to become slightly bent or otherwise damaged, at which point it will no longer authenticate and the laptop will refuse to charge the battery.

    I bet the charger on yours is a barrel charger with that pin down the middle.

    hits Amazon

    Yeah, looks like it.

    https://www.amazon.com/dp/B086VYSZVL?psc=1

    I don’t have a great picture for the 65W one, but the 45W charger here has an image looking down the charger barrel showing that internal pin.

    If you want to keep using that laptop and want to use the battery, I’d try swapping out the charger. If you don’t have an official Dell charger, make sure that the one you get is one of those (unless some “universal charger” has managed to break their authentication scheme in the intervening years; I haven’t been following things).
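
    If the machine happens to be running Linux (an assumption), one way to confirm the “on wall power but not charging” symptom from software is to read the kernel’s power-supply interface. A minimal sketch; the device names (BAT0, AC, ADP1, and so on) vary by model, so it just walks whatever is exposed:

```python
# Check for the "plugged in but the battery isn't charging" symptom on Linux
# by reading /sys/class/power_supply. Device names vary by model, so this
# simply walks whatever the kernel exposes.
from pathlib import Path

def read(path: Path) -> str:
    try:
        return path.read_text().strip()
    except OSError:
        return "?"

for supply in sorted(Path("/sys/class/power_supply").iterdir()):
    kind = read(supply / "type")    # "Mains", "Battery", "USB", ...
    if kind == "Mains":
        print(f"{supply.name}: online={read(supply / 'online')}")
    elif kind == "Battery":
        # A failed charger ID pin typically shows up as the adapter being
        # online while the battery sits at "Not charging".
        print(f"{supply.name}: status={read(supply / 'status')}, "
              f"capacity={read(supply / 'capacity')}%")
```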

    EDIT: Even one of the top reviews on that Amazon page mentions it:

    I have a DELL, that has the straight barrel plug with the pin in it. THEY REALLY made a BAD DECISION when they made these DELL laptops with that type of plug instead of making it with a dog leg style plug. I have to replace my charger cord A LOT because the pin gets bent inside and it stops charging at that plug, but the rest of the charger is still good…


  • Up until the early 2000s, serial computation speed doubled about every 18 months. That meant that virtually all software just ran twice as quickly with every 18 months of CPU advances. And since taking advantage of that was trivial, new software releases did: they traded CPU cycles for shorter development time or more functionality, and demanded current hardware to run at a reasonable clip.

    In that environment, it was quite important to upgrade the CPU.

    But that hasn’t been happening for about twenty years now. Serial computation speed still increases, but not nearly as quickly any more.

    This piece is over a decade old now:

    https://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance/

    Throughout the 80’s and 90’s, CPUs were able to run virtually any kind of software twice as fast every 18-20 months. The rate of change was incredible. Your 486SX-16 was almost obsolete by the time you got it through the door. But eventually, at some point in the mid-2000’s, progress slowed down considerably for single-threaded software – which was most software.

    Perhaps the turning point came in May 2004, when Intel canceled its latest single-core development effort to focus on multicore designs. Later that year, Herb Sutter wrote his now-famous article, The Free Lunch Is Over. Not all software will run remarkably faster year-over-year anymore, he warned us. Concurrent software would continue its meteoric rise, but single-threaded software was about to get left in the dust.

    If you’re willing to trust this line, it seems that in the eight years since January 2004, mainstream performance has increased by a factor of about 4.6x, which works out to 21% per year. Compare that to the 28x increase between 1996 and 2004! Things have really slowed down.

    We can also look at the roughly twelve years since then, over which the rate has been even slower:

    https://www.cpubenchmark.net/compare/2026vs6296/Intel-i7-4960X-vs-Intel-Ultra-9-285K

    This uses a benchmark to compare the single-threaded performance of the i7-4960X (Intel’s high-end processor from 2013) to that of the Intel Ultra 9 285K, the current one. In those ~12 years, the latest processor has managed single-threaded performance about (5068/2070) ≈ 2.448 times that of the 12-year-old processor. That’s (5068/2070)^(1/12) ≈ 1.0775, about a 7.7% performance improvement per year. The age of a processor doesn’t matter nearly as much in that environment.
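
    For anyone who wants to reproduce the arithmetic, here’s the annualized-growth calculation for the figures above: the 28x and 4.6x gains from the quoted article, plus the 2070 → 5068 benchmark scores from the comparison link (the year spans are the approximate ones given in the text):

```python
# Annualized single-threaded performance growth, computed from the figures
# quoted above: the article's 28x (1996-2004) and 4.6x (2004-2012) gains,
# plus the 2070 -> 5068 benchmark scores for the i7-4960X vs. Ultra 9 285K.

def annual_rate(total_gain: float, years: float) -> float:
    """Compound annual growth rate implied by a total gain over `years`."""
    return total_gain ** (1 / years) - 1

eras = {
    "1996-2004 (28x over 8 years)":         (28.0, 8),
    "2004-2012 (4.6x over 8 years)":        (4.6, 8),
    "2013-2025 (5068/2070 over ~12 years)": (5068 / 2070, 12),
}

for label, (gain, years) in eras.items():
    print(f"{label}: ~{annual_rate(gain, years) * 100:.1f}% per year")
# -> roughly 52%, 21%, and 7.7% per year, respectively
```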

    We still have had significant parallel computation increases. GPUs in particular have gotten considerably more powerful. But unlike serial compute, parallel compute isn’t a “free” performance improvement – software needs to be rewritten to take advantage of it, many problems are hard to parallelize, and some can’t be parallelized at all.

    Honestly, I’d say that the most-noticeable shift is away from rotational drives to SSDs – there are tasks for which SSDs can greatly outperform rotational drives.




  • I’ve kind of felt the same way, would rather have a somewhat-stronger focus on technology in this community.

    The current top few pages of posts are pretty much all just talking about drama at social media companies, which frankly isn’t really what I think of as technology.

    That being said, “technology” kind of runs the gamut in various news sources. I’ve often seen “technology news” basically amount to promoting new consumer gadgets, which isn’t exactly what I’d like to see here either. I don’t really want to see leaked photos of the latest Android tablet from Lenovo or whoever, either.

    I’d be more interested in reading about technological advances and changes.

    I suppose that if someone wants to start a more-focused community, I’d also be willing to join that, give it a shot.

    EDIT: I’d note that the current content here kind of mirrors what’s on Reddit at /r/Technology, which is also basically drama at social media companies. I suppose that there’s probably interest from some in that. It’s just not really what I’m primarily looking for.



  • I think that California should take keeping itself competitive as a tech center more seriously. I think that a lot of what has made California competitive for tech is that it already had tech there from earlier, and that past a certain threshold, it becomes advantageous to locate more companies in an area – you have a pool of employees and investors and such. But what matters is having a sufficiently-large pool, and if you let that advantage erode far enough, your edge goes away.

    We were just talking about high California electricity prices, for example. A number of datacenters have shifted out of California because the cost of electricity is a significant input. Now, okay – you don’t have to be right on top of your datacenters to be doing tech work. You can run a Silicon Valley-based company that has its hardware in Washington state, but it’s one more factor that makes it less appealing to be located in California.

    The electricity price issue came up a lot back when people were talking about Bitcoin mining more, since there weren’t a whole lot of inputs and it’s otherwise pretty location-agnostic.

    https://www.cnbc.com/2021/09/30/this-map-shows-the-best-us-states-to-mine-for-bitcoin.html

    In California and Connecticut, electricity costs 18 to 19 cents per kilowatt hour, more than double that in Texas, Wyoming, Washington, and Kentucky, according to the Global Energy Institute.

    (Prices are higher everywhere now, as this was written before the COVID-19-era inflation, but California is still expensive electricity-wise.)
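
    To put rough numbers on why that gap matters for a datacenter (the 1 MW continuous load is an arbitrary example figure, and the Texas rate is just inferred from the “more than double” wording in the quote):

```python
# Back-of-the-envelope annual electricity cost for a hypothetical datacenter
# drawing a continuous 1 MW (an arbitrary example figure), using the per-kWh
# rates implied by the quoted article.

LOAD_KW = 1_000                # assumed continuous draw: 1 MW
HOURS_PER_YEAR = 24 * 365

rates = {
    "California (~$0.19/kWh)": 0.19,
    "Texas (~$0.09/kWh)": 0.09,    # "more than double" implies roughly this
}

costs = {name: LOAD_KW * HOURS_PER_YEAR * rate for name, rate in rates.items()}
for name, cost in costs.items():
    print(f"{name}: ~${cost:,.0f}/year")

delta = costs["California (~$0.19/kWh)"] - costs["Texas (~$0.09/kWh)"]
print(f"difference: ~${delta:,.0f}/year")
```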

    I think that there is a certain chunk of California that is kind of under the impression that the tech industry in California is a magic cash cow that is always going to be there, no matter what California does, and I think that that’s kind of a cavalier approach to take.

    EDIT: COVID-19-era remote work also did a lot to seriously hurt California here, since a lot of people decided “if I don’t have to pay California cost-of-living and can still keep the same job, why should I pay those costs?” and just moved out of state. If you look at COVID-19-era population-change data for counties around the San Francisco Bay Area, the drop is pretty remarkable.

    https://www.apricitas.io/p/california-is-losing-tech-jobs

    California is Losing Tech Jobs

    The Golden State Used to Dominate Tech Employment—But Its Share of Total US Tech Jobs has Now Fallen to the Lowest Level in a Decade

    Nevertheless, many of the tech industry’s traditional hubs have indeed suffered significantly since the onset of the tech-cession—and nowhere more so than California. As the home of Silicon Valley, the state represented roughly 30% of total US tech sector output and got roughly 10% of its statewide GDP from the tech industry in 2021. However, the Golden State has been bleeding tech jobs over the last year and a half—since August 2022, California has lost 21k jobs in computer systems design & related, 15k in streaming & social networks, 11k in software publishing, and 7k in web search & related—while gaining less than 1k in computing infrastructure & data processing. Since the beginning of COVID, California has added a sum total of only 6k jobs in the tech industry—compared to roughly 570k across the rest of the United States.

    For California, the loss of tech jobs represents a major drag on the state’s economy, a driver of acute budgetary problems, and an upending of housing market dynamics—but most importantly, it represents a squandering of many of the opportunities the industry afforded the state throughout the 2010s.



  • tal@lemmy.today to Technology@lemmy.world · World's First MIDI Shellcode

    The thing the guy is poking at is a synthesizer, a device that lets you compose music and synthesizes the audio.

    He got hold of a service manual with technical information about a similar synthesizer, which indicated that some of the pins on one of the chips were used for JTAG, a standard interface for diagnosing problems on devices. He guessed, correctly, that his synthesizer used the same pins for this.

    He made some guesses about what functionality was present, and was able to identify the microprocessor and download the device firmware using this port.

    He then went looking for interesting bits of text in the firmware. What he ran across was something that appeared to be a diagnostic shell (i.e. you enter commands and can see a response), as well as the password to access it.

    He didn’t know how one reached the shell. He went digging in the firmware further and discovered that the device – which acted as a MIDI device over USB to a host computer – took in special MIDI commands that would go to this shell.

    Now he had a way to access the shell any time he had one of these synths plugged into his computer via USB – he didn’t need to physically connect to the diagnostic pins on the chip.
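
    As a very loose illustration of what driving a synth with vendor-specific messages over USB-MIDI looks like in practice, something like this with the mido library (the port-name match, manufacturer ID, and payload bytes are placeholders, not the actual commands from the write-up):

```python
# Generic sketch of sending a vendor-specific SysEx message to a synth over
# USB-MIDI using the mido library. The port-name match, manufacturer ID, and
# payload bytes are placeholders, NOT the real commands from the write-up.
import mido

# Pick the first MIDI output whose name mentions "Synth" (adjust for your device).
port_name = next(name for name in mido.get_output_names() if "Synth" in name)

payload = [0x7D, 0x01, 0x02, 0x03]         # hypothetical bytes; each must be 0-127
msg = mido.Message("sysex", data=payload)  # mido adds the F0 ... F7 framing

with mido.open_output(port_name) as port:
    port.send(msg)
```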

    One feature of the shell permitted modifying RAM on the synthesizer. It wasn’t intended to let one upload executable code, but he wrote his code into some unused memory anyway, then overwrote the frame pointer on the stack used by the shell program (which the processor uses to know where to continue executing after running a subroutine) so that it pointed at his code. When the routine returned, it returned into that code – so now he could not just upload code to the microprocessor but also run it.

    He wrote his own transfer program for high-speed data transfer over USB and modified the in-RAM code that displayed video.

    This then let him upload video to part of the display and display it at a relatively high frame rate, which is the anime video shown in the last section. I believe that the laptop in the foreground is showing the original frames.

    My understanding from two articles recently posted here is that it’s a fad for hardware hackers to play this “Bad Apple” anime video on all sorts of old and low-end devices.