• somedev@aussie.zone · +13/-1 · 21 hours ago

    I would not risk 36TB of data on a single drive, let alone a Seagate. I’ve never had a good experience with them.

    • ByteOnBikes@slrpnk.net · +6/-1 · 16 hours ago

      Ignoring the Seagate part, which makes sense… is there a particular concern with 36TB?

      I recall IT people losing their minds when we hit 1TB, back when the average hard drive was like 80GB.

      So this growth seems about right.

      • schizo@forum.uncomfortable.business · +6 · 14 hours ago

        It’s RAID rebuild times.

        The bigger the drive, the longer the rebuild takes.

        The longer the rebuild takes, the more likely it is to fail.

        That said, modern RAID is much more robust against this kind of fault, but still: if you have one parity drive, one dead drive, and a rebuild in progress, losing another drive means you’re fucked.
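
        Back-of-the-envelope, the second-failure odds look something like this (the AFR and rebuild window are made-up illustrative numbers, not specs):

        ```python
        # Rough odds of a second drive failing during a rebuild window.
        # All numbers here are assumptions for illustration.
        afr = 0.015           # assumed 1.5% annualized failure rate per drive
        rebuild_hours = 100   # assumed rebuild window for a very large drive
        remaining_drives = 7  # drives still in the array during the rebuild

        p_one = afr * rebuild_hours / (24 * 365)     # per-drive risk in the window
        p_any = 1 - (1 - p_one) ** remaining_drives  # risk that any of them dies
        print(f"{p_any:.3%} chance of a second failure mid-rebuild")
        ```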

        • notfromhere@lemmy.ml · +1 · 10 hours ago

          Just rebuilt onto Ceph and it’s a game changer. Drive fails? Who cares, replace it with a bigger drive and go about your day. If the total drive count is large enough, and depending on whether you use EC or replication, recovery can mean pulling data from tons of drives instead of a handful.

          • GamingChairModel@lemmy.world · +1 · 2 hours ago

            It’s still the same issue, RAID or Ceph. If a physical drive can only write 100 MB/s, a 36TB drive will take 360,000 seconds (6,000 minutes, or 100 hours) to fill. During that 100-hour window you’re down a drive and vulnerable to a second failure. Both RAID and Ceph can be configured for more redundancy at the cost of usable capacity, but even Ceph fails (dropping to read-only mode, or losing data) if too many physical drives die.
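
            Quick sketch of that math (the flat 100 MB/s is an assumption; real drives vary across the platter):

            ```python
            # Time to fill a 36 TB replacement drive at an assumed flat 100 MB/s.
            capacity_bytes = 36e12
            write_speed = 100e6  # bytes per second, assumed constant

            seconds = capacity_bytes / write_speed
            print(seconds, seconds / 60, seconds / 3600)  # 360000.0 s, 6000.0 min, 100.0 h
            ```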

            • notfromhere@lemmy.ml · +1 · 1 hour ago

              While true, Ceph can fill the replacement drive with data spread across way more drives than RAID can, so the point I was trying to make is that the risk of a second failure during resilvering can be greatly mitigated by a Ceph setup.
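
              As a toy model of the difference (assumed numbers; real Ceph backfill is throttled and depends on placement):

              ```python
              # Toy comparison: single-target RAID rebuild vs. Ceph-style
              # parallel recovery. Numbers are assumptions for illustration.
              failed_tb = 36
              per_drive_mb_s = 100   # assumed per-drive throughput

              # RAID: everything funnels into the one replacement drive.
              raid_hours = failed_tb * 1e6 / per_drive_mb_s / 3600

              # Ceph: the lost copies are rebuilt across many OSDs at once.
              recovery_streams = 20  # assumed drives sharing the recovery work
              ceph_hours = raid_hours / recovery_streams

              print(f"RAID: ~{raid_hours:.0f} h, Ceph-ish: ~{ceph_hours:.0f} h")
              ```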

      • katy ✨@lemmy.blahaj.zone · +4 · 16 hours ago

        I recall IT people losing their minds when we hit 1TB

        1TB? I remember when my first computer had a state-of-the-art 200MB hard drive.

        • Keelhaul@sh.itjust.works · +10/-1 · 14 hours ago

          Quick note: HDD storage doesn’t use transistors to store data, so it isn’t really directly related to Moore’s law. SSDs do use transistors/nanostructures (NAND) for storage, so their capacity is more closely tied to Moore’s law.

    • Kairos@lemmy.today · +4 · 16 hours ago

      The only thing I want is reasonably cheap 3.5" SSDs. SATA is fine, just let me pay $500 for a 12TB SSD please.

    • Jimmycakes@lemmy.world · +9 · edited · 19 hours ago

      You couldn’t afford this drive unless you’re enterprise, so there’s nothing to worry about. They don’t sell them one at a time; you have to buy enough for a rack at once.

    • boonhet@lemm.ee · +9 · 20 hours ago

      They seem to be very hit and miss, in that some models have very low failure rates while others have very high ones.

      That said, the 36 TB drive is most definitely not meant to be used as a single drive without any redundancy. I have no idea what the big guys, Backblaze for example, are doing, but I’d want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me. Still, I’d likely go with smaller drives, because however much a 36 TB drive costs, I don’t wanna feel like I’m spending 2x the cost of one of those just for redundancy lmao

      • BorgDrone@lemmy.one · +3 · 15 hours ago

        I’d want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me.

        Repeat after me: RAID is not a backup solution, RAID is a high-availability solution.

        The point of RAID is not to safeguard your data; you need proper backups for that (3-2-1 rule of backups: 3 copies of the data, on 2 different storage media, with 1 copy off-site). RAID will not protect your data from accidental deletion, user error, malware, OS bugs, or anything like that.
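
        If it helps make the rule concrete, here’s a toy 3-2-1 checker (the copy list is entirely made up):

        ```python
        # Toy 3-2-1 check: 3 copies, on 2 media types, 1 of them off-site.
        copies = [
            {"medium": "hdd",  "offsite": False},  # live NAS copy
            {"medium": "hdd",  "offsite": False},  # local backup disk
            {"medium": "tape", "offsite": True},   # off-site copy
        ]

        ok = (len(copies) >= 3
              and len({c["medium"] for c in copies}) >= 2
              and any(c["offsite"] for c in copies))
        print("3-2-1 satisfied" if ok else "backup plan falls short")
        ```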

        The point of RAID is so everyone can keep working if there is a hardware failure. It’s there to prevent downtime.

        • boonhet@lemm.ee · +2 · 14 hours ago

          These are 36 TB drives. Most people aren’t planning on keeping anything legal or self-produced on them. It’s going to be pirated media, and idk about you, but I’m not uploading that to any cloud provider lmao

          • BorgDrone@lemmy.one · +2 · 14 hours ago

            These are enterprise drives, they aren’t going to contain anything pirated. They are probably going to one of those cloud providers you don’t want to upload your data to.

            • boonhet@lemm.ee · +1 · 14 hours ago

              I can easily buy enterprise drives for home use. What are you on about?

      • sugar_in_your_tea@sh.itjust.works · +1 · 15 hours ago

        I use mirrors, so RAID 1 right now and likely RAID 10 when I get more drives. That’s the safest IMO, since you don’t need the rest of the array to resilver your new drive, only the ones in its mirror pool, which reduces the likelihood of a cascading failure.
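
        Rough sketch of why mirrors narrow the blast radius, using single-parity RAID for contrast (the per-drive failure probability is an assumption):

        ```python
        # Chance a resilver turns into data loss, with toy numbers.
        p = 0.002  # assumed chance a surviving drive dies during the resilver
        n = 8      # drives in the array

        # RAID 5 after one failure: losing ANY other drive kills the array.
        raid5_loss = 1 - (1 - p) ** (n - 1)

        # RAID 10 after one failure: only that drive's mirror partner matters.
        raid10_loss = p

        print(f"RAID 5: {raid5_loss:.3%}  vs  RAID 10: {raid10_loss:.3%}")
        ```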

  • Ugurcan@lemmy.world · +20 · 1 day ago

    I’m going to remind you that these fuckers are LOUD, like ROARING LOUD, so might not be suitable for your living room server.

  • iturnedintoanewt@lemm.ee · +23 · 1 day ago

    OK…what’s this HAMR technology and how does it play compared to the typical CMR/SMR performance differences?

    • JayleneSlide@lemmy.world · +19 · 1 day ago

      Heat-Assisted Magnetic Recording. It uses a laser to heat the drive platter, allowing for higher areal density and increased capacity.

      I’m ignorant of the CMR/SMR performance differences, though.

      • iturnedintoanewt@lemm.ee · +7 · 1 day ago

        I fear HAMR sounds like a variation on the same idea as SMR: a coarser method of preparing the data to be written. These kinds of hard drives are good for slow, predictable, sequential storage, but they suck at more random writes. They’re good for surveillance storage and things like that, but no good for daily use in a computer.

        • drosophila@lemmy.blahaj.zone · +3 · edited · 7 hours ago

          That sounds absolutely fine to me.

          Compared to an NVME SSD, which is what I have my OS and software installed on, every spinning disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.

          In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.

          If you need high R/W performance and huge capacity at the same time (like for editing gigantic high resolution videos) you probably want some kind of RAID array.

          • iturnedintoanewt@lemm.ee · +1 · 15 hours ago

            My point was that these are still not good for a RAID array, unless you’re just storing sequentially at a kinda slow rate. At least that’s the case for SMR; I fear HAMR might be similar (it reminds me of Sony’s MiniDisc idea, but applied to a hard drive).

        • stephen01king@lemmy.zip · +5 · 1 day ago

          My poor memory is telling me the heat is used to make the bits easier to flip, so you can use a weaker magnetic field that only affects a smaller area, allowing you to pack in bits more closely. It shouldn’t have the same problem as SMR.

      • JGrffn@lemmy.world · +5 · 20 hours ago

        I mean, newer server-grade models with independent actuators can easily saturate a SATA 3 connection. As far as speeds go, a RAID 5 or RAID 6 setup or equivalent should be pretty damn fast, especially if they start rolling out those independent actuators into the consumer market.

        As far as latency goes? Yeah, you should stick to solid state…but this breathes new life into the HDD market for sure.

    • cmnybo@discuss.tchncs.de · +2 · 20 hours ago

      The speed usually increases with capacity, but this drive uses HAMR instead of CMR, so it will be interesting to see what effect that has on the speed. The fastest HDDs available now can max out SATA 3 on sequential transfers, but they use dual actuators.
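
      Rough numbers on that ceiling (the per-actuator rate is an assumption, just in the ballpark of current dual-actuator drives):

      ```python
      # Rough check of dual-actuator throughput against the SATA 3 link.
      sata3_limit = 600   # MB/s, approximate usable SATA 3 bandwidth
      per_actuator = 280  # MB/s, assumed sequential rate per actuator

      combined = 2 * per_actuator
      print(f"{combined} MB/s combined vs ~{sata3_limit} MB/s link limit")
      ```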

      • Lost_My_Mind@lemmy.world · +23/-7 · 1 day ago

        I bought a Seagate. Brand new. 250GB, back when 250GB on one hard drive cost a fuckton.

        It sat in a box until I was done burning the files from my old 60GB hard drive onto DVD-Rs.

        Finally, like 2 months later, I opened the box. Installed the drive. Put all the files from the DVDs onto the hard drive.

        And after I finished, 2 weeks later it totally died. Outside the return window, but within the warranty period. Seagate refused to honor their warranty even though I still had the receipt.

        That was like 2005. Western Digital has gotten my business ever since. Multiple drives bought, not because the drives die, but because I outgrow them data-wise. My current setup is an 18TB and a 12TB. I figure by 2027 I’ll need to upgrade that 12TB to a 30TB. Which I assume will still cost $400 at that point.

        Return customer? No no. We’ll hassle our customer and send bad vibes. Make him frustrated for ever shopping our brand! Gotta protect that one-time $400 purchase! It’s totally worth losing 20 years of sales!

        • renegadespork@lemmy.jelliefrontier.net · +25/-1 · 1 day ago
          1. Seagate drives are generally way more reliable now than the pre-TB days.
          2. There is always a risk of premature failure with all hard drives (see the bathtub curve). You should never have only one copy of any data you aren’t okay with losing.

          FYI: Backblaze is a cloud storage provider that uses HDDs at scale, and they publish their statistics every year regarding which models have the highest and lowest failure rates.
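
          If anyone wants to eyeball the bathtub curve, a Weibull hazard sketch works (shape/scale values are illustrative, not fitted to real drive data):

          ```python
          # Bathtub-curve sketch via the Weibull hazard:
          # h(t) = (k / lam) * (t / lam) ** (k - 1)
          # k < 1 -> infant mortality, k = 1 -> constant rate, k > 1 -> wear-out.
          def hazard(t, k, lam):
              return (k / lam) * (t / lam) ** (k - 1)

          for years in (0.1, 1, 3, 5):
              early = hazard(years, k=0.5, lam=5.0)  # decreasing: infant mortality
              wear = hazard(years, k=3.0, lam=6.0)   # increasing: wear-out
              print(f"{years:>4} yr  infant={early:.3f}  wear-out={wear:.3f}")
          ```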

          • Lost_My_Mind@lemmy.world · +1 · 16 hours ago

            At this point it’s less about the current quality of the product, and more about the company. I had every right to have my item replaced. I was within warranty. It’s not MY warranty policy. I didn’t set the terms. I didn’t set the duration. They did. They said if any issues arise within a certain time of purchase, I could get a replacement. I had the proof. I sent them the proof. I was told something along the lines of “In this case we’re not able to replace the drive”. When I asked what was wrong, I was told it was a high-capacity drive with an electronic failure point. I even called on the phone, pulled up a PDF of their warranty, and asked them to show me where in the warranty there was an exclusion for this situation. They didn’t even attempt to try. They just argued that it couldn’t be done, because the drive failed. I said “Yes. The drive certainly did fail within the warranty period. That’s what’s covered by the warranty. That’s the whole purpose of the warranty: to provide reassurance to the customer that if they should happen to buy one of the 1% of drives with a malfunction beyond their control, the product they paid for will be replaced without worry.”

            They then told me I was wrong, transferred me to their boss, and hung up on me while I was on hold.

            I understand that if I buy a Western Digital, I run the risk of also buying a dud drive. However, I assume they will honor their warranty.

            Seagate doesn’t need to honor any warranty. They don’t need to offer any warranty. However, as the customer, I’m free to inquire about warranty terms before buying. If I see a product that doesn’t offer a warranty on new items, or doesn’t allow returns? That tells me the company doesn’t stand by their product. It’s then MY decision whether I want to gamble.

            Seagate DID offer a warranty that they set the terms for. That tells me they stand behind their product. So when they told me no, and gave no reason besides “the drive is dead”? That’s called bait and switch, which breaks trust between customer and business.

            They might have 36TB SSDs at $100 that they guarantee will last 100 years. I still won’t buy one, because I’ve lost trust in the company to stand behind its claims.

            And here we are, 20 years later. I still haven’t bought a single Seagate product since. Often I’ve been interested in a sale or an offer, until I saw the brand. Multiple times over 20 years I’ve gone out of my way to avoid Seagate.

            And if they had honored the warranty? I’d have moved on from any grudge. Back when Logitech was still a good company, I called and asked how much it would cost to repair an out-of-warranty mouse I had. I understood I’d have to pay; I was getting a price quote to see if it was worth it, as I LOVED that mouse model in 2000. Sad when it died in 2006. Dude on the phone just said, “Ah, here. Let’s not even repair it. I’m just going to send you the same model.”

            And he sent me a brand new (old stock) replacement of the same mouse I had. That mouse lasted until 2014.

            So I used the same model mouse from 2000 to 2014. And I still buy Logitech products, even though I recognize the company is not as high quality as it used to be. Call it nostalgia, call it brand loyalty, whatever. It still just feels right buying Logitech, and a huge part of that is what they did in the past.

          • sugar_in_your_tea@sh.itjust.works · +8/-4 · 1 day ago

            Backblaze… failure rates

            Take this data with a grain of salt. They buy consumer drives and run them in data centers, so unless your use case is similar, you probably won’t see similar results. A “good” drive from their data may fail early in a frequent spin-up/spin-down scenario, and a “bad” drive may last forever if you’re not writing very often.

            It’s certainly interesting data, but don’t assume it’s directly applicable to your use case.

              • sugar_in_your_tea@sh.itjust.works · +1 · 17 hours ago

                It’s absolutely useful data, but there are a bunch of caveats that are easy to ignore.

                For example, it’s easy to sort by failure rate and pick the manufacturer with the lowest number. But failures are clustered around the first 18 months of ownership, so this is more a measure of QC for these drives and less of a “how long will this drive last” thing. You’re unlikely to be buying those specific drives or run them as hard as Backblaze does.

                Also, while Seagate has the highest failure rates, their drives are also some of the oldest in the report. So for the average user, this largely reflects how likely they are to get a bad drive, not how long a good drive will last. The former question matters more for a storage company, because they have to pay people to handle drives, whereas a user cares more about the second question, and the study doesn’t really address it.

                The info is certainly interesting, just be careful about what conclusions you draw. Personally, as long as the drive has >=3 year warranty and the company honors it without hassle, I’ll avoid the worst capacities and pick based on price and features.

                • renegadespork@lemmy.jelliefrontier.net · +3 · 14 hours ago

                  You’re correct, but this is pretty much “Statistics 101”. Granted most people are really bad at interpreting statistics, but I recommend looking at Backblaze reports because nothing else really comes close.

            • boonhet@lemm.ee · +1 · 20 hours ago

              Is a home NAS a frequent spin up/down scenario though? I’d imagine you’d keep the drives spinning to improve latency and reduce spin-up count. Not that I own any spinning drives currently though - so that’s why I’m wondering.

              • sugar_in_your_tea@sh.itjust.works · +1 · 17 hours ago

                My drives are usually spun down because the array isn’t used a ton. Everything runs off my SSD except data access, so unless there’s a backup running or I’m watching a movie or something, the drives don’t need to be spinning.

                If I was running an office NAS or something, I’d probably keep them spinning, but it’s just me and my family, so usage is pretty infrequent.

            • Boomkop3@reddthat.com · +2 · 1 day ago

              Or just read their raw charts. Their claims don’t tend to line up with their data, but their data does show that Seagate tends to fail early.

              • sugar_in_your_tea@sh.itjust.works · +2 · 23 hours ago

                All that tells you is that Seagate drives fail more in their use case. You also need to notice that they’ve consistently had more Seagate drives than HGST or WD, which have lower failure rates on their data. Since they keep buying them, they must see better overall value from them.

                You likely don’t have that same use case, so you shouldn’t necessarily copy their buying choices or knee-jerk avoid drives with higher failure rates.

                What’s more useful IMO is finding trends, like failure rate by drive size. 10TB drives seem to suck across the board, while 16TB drives are really reliable.

                • Boomkop3@reddthat.com · +1/-2 · 21 hours ago

                  Ye, Seagate is cheap, that’s the value. I’ve had a tonne myself and they’re terrible for my use too

        • morbidcactus@lemmy.ca · +2 · edited · 1 day ago

          As @renegadespork@lemmy.jelliefrontier.net said, infant mortality is a concern with spinning disks. If I recall (I’ve been out of reliability work for a few years), things like bearings are super sensitive to handling and storage; vibration and the like can cause microscopic damage that leads to premature failure. Once they’re good, though, they’re good until they wear out. A lot of electronics follow that pattern or the infant-mortality curve. Stuff dying out of the box sucks, but it’s not unexpected from a reliability POV.

          Shitty of Seagate not to honour the warranty; that’d turn me off as well. Mine is pettier: when I was building my NAS/server, I initially bought some WD Reds, returned those, and went for some Seagate IronWolf drives, because the Reds made this really irritating whine you could hear across the room. At the time we had a single-room apartment, so that was no good.

        • Boomkop3@reddthat.com · +2/-2 · edited · 1 day ago

          I’ve had a lot of Seagates, simply because they’re the cheapest crap on the market and my budget was low. But unfortunately, crap is what you get.

      • ryan213@lemmy.ca · +16/-1 · 1 day ago

        I’ve bought 2 Seagate drives and both have failed. Meanwhile, my 2 15-year-old WD drives are still working.

        I hope I didn’t just jinx myself. Lol

        • deranger@sh.itjust.works · +11/-1 · 1 day ago

          I’ve got the opposite experience, with WD.

          You know who uses loads of Seagate drives? Backblaze. They also publish the stats. They wouldn’t be buying Seagate drives if they were significantly worse than the others.

          The important thing is to back up your shit. All drives fail.

        • neon_nova@lemmy.dbzer0.com · +3 · 1 day ago

          I get it. I’ve had the opposite experience with WD, but those were 2.5” portable drives. All my desktop stuff still works perfectly 🤞

        • ShepherdPie@midwest.social · +6/-2 · 1 day ago

          Same here. I have a media server and just spent an afternoon of my weekend replacing a failed Seagate drive, purchased maybe 4-5 years ago, that was only used to back up my more important files nightly. In the past 10 years, this is the third failed Seagate drive I’ve encountered (out of 5 total), while I have 9 WD drives that have had zero issues. One of them is even dedicated to torrents, with constant R/W, and it’s still chugging along just fine.

        • AlternateRoute@lemmy.ca · +19 · 1 day ago

          Nearly all brands have produced both reliable and unreliable series of hard drives.

          You really have to evaluate them by series / tech.

          None of the big spinning-rust brands can be labeled unreliable across the board.

            • deranger@sh.itjust.works · +5/-1 · edited · 1 day ago

              Why would Backblaze use so many Seagate drives if they’re significantly worse? Seagate also has some of the highest Drive Days on that chart. It’s clear Backblaze doesn’t think they’re bad drives for their business.

              • frezik@midwest.social · +1 · edited · 22 hours ago

                I can only speculate on why. Perhaps they come as a package deal with servers, and they would prefer to avoid them otherwise.

                There are plenty of drives with equivalent or greater runtime than the Seagate drives. They cycle their drives out every 10 years regardless of failures. The standout failure rate, the Seagate ST12000NM0007 at 11.77%, has less than half that average age.
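
                For reference, Backblaze annualizes failure rates from drive-days, roughly like this (example numbers are made up):

                ```python
                # AFR roughly as Backblaze reports it: failures per drive-day,
                # annualized and expressed as a percentage.
                def afr(failures: int, drive_days: int) -> float:
                    return failures / drive_days * 365 * 100

                print(f"{afr(failures=120, drive_days=1_500_000):.2f}% AFR")
                ```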

            • Baggie@lemmy.zip · +3/-2 · 1 day ago

              Seconding this. Anecdotally, from my last job in support, every drive failure we had was a Seagate. WDs and Samsungs never seemed to have an issue.

          • frezik@midwest.social · +3/-1 · 1 day ago

            I wouldn’t call those numbers okay. They have noticeably higher failure rates than anybody else. On that particular report, they’re the only ones with failure rates >3% (save for one Toshiba and one HGST), and they go as high as 12.98%. Most drives on this list are <1%, but most of the Seagate drives are over that. Perhaps you can say that you’re not likely to encounter issues no matter what brand you buy, but the fact is that you’re substantially more likely to have issues with Seagate.

          • Spacehooks@reddthat.com · +1 · 1 day ago

            Looks like another person commented above you with some stuff. I recall looking this up a year ago, and the SSD I was looking at was in the news for unreliability. It was just that specific model.