• Konstant@lemmy.world · 49 minutes ago

      if I’m still alive

      That goes without saying, unless you anticipate something. Do you?

  • mrvictory1@lemmy.world · 1 day ago

    Me who stores important data on seagate external HDD with no backup reading the comments roasting seagate:

  • needanke@feddit.org · 22 hours ago

    What is the use case for drives that large?

    I ‘only’ have 12 TB drives, and yet my ZFS pool already needs about two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.

    • Hadriscus@jlai.lu · 5 hours ago

      It’s like the Petronas Towers: every time they finish cleaning the windows, they have to start again.

    • tehn00bi@lemmy.world · 12 hours ago

      Jesus, my pool takes a little over a day, but I’ve only got around 100 TB. How big is your pool?

    • remon@ani.social · 12 hours ago

      Sounds like something is wrong with your setup. I have 20 TB drives (×8, RAID 6, 70+ TB in use) … scrubbing takes less than 3 days.

    • Appoxo@lemmy.dbzer0.com · 16 hours ago

      High-capacity storage pools for enterprises.
      Space is at a premium, so saving it should/could translate into better pricing/availability.

      • frezik@lemmy.blahaj.zone · 7 hours ago

        Not necessarily.

        The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back on whether the industry even wanted spinning platters >20 TB. Some are willing to give up density if it means less worry.

        I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.

        • Appoxo@lemmy.dbzer0.com · 7 hours ago

          I would assume that with arrays they will use a different way of calculating parity, or have higher redundancy, to compensate for the risk.

          • frezik@lemmy.blahaj.zone · 6 hours ago

            If there’s higher redundancy, then they are already giving up on density.

            We’ve pretty much covered the likely ways to calculate parity.

    • SuperUserDO@sh.itjust.works · 18 hours ago

      There is an enterprise storage shelf (aka a bunch of drives that hook up to a server) made by Dell which is 1.2 PB (yes, petabytes). So there is a use, but it’s not for consumers.

      • grue@lemmy.world · 17 hours ago

        That’s a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it’s hard to actually make use of it all.

        For example, let’s say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At the 190 MB/s max sustained transfer rate (the figure for a 28 TB Seagate Exos; I assume this new one is similar), you’re talking about over two days just to rebuild from the parity information and get the array out of degraded mode! At some point these big drives stop being suitable for that use case, just because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
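The rebuild-window math above is easy to check. A minimal sketch, assuming a 36 TB drive (this article's capacity is an assumption here) at the ~190 MB/s sustained rate quoted for the 28 TB Exos:

```python
# Back-of-the-envelope RAID rebuild window. Both numbers are assumptions:
# 36e12 bytes for the new drive, ~190 MB/s sustained from the comment above.
capacity_bytes = 36e12
rate_bytes_per_s = 190e6

rebuild_days = capacity_bytes / rate_bytes_per_s / 86400
print(f"best-case rebuild: {rebuild_days:.1f} days")  # → about 2.2 days
```

That is the best case: purely sequential writes and no other load on the array. Real rebuilds under production traffic take longer, which is the "vulnerability window" concern.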

          • SuperUserDO@sh.itjust.works · 7 hours ago

            I get it. But the moment we invoke RAID or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving Docker are well past what the bulk of the world understands.

        • Aermis@lemmy.world · 6 hours ago

          I’m not in the know about running your own personal data center, so I have no idea. … But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2 TB drive in my home PC. Is a scrub the equivalent of a disk cleanup?

          • Gagootron@feddit.org · 5 hours ago

            You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.

            Accessing the data does not need a scrub; it is only a routine maintenance task. A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, and maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot.

            There are many ways a drive can degrade. Sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.

            ZFS, on the other hand, is built on redundant storage: it spreads the data over multiple drives in a special way, allowing it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
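The checksum idea behind a scrub can be shown with a toy sketch (hypothetical helper names; real ZFS does this per block and uses its redundancy to repair what it finds, rather than just detect it):

```python
import hashlib

def store(block: bytes):
    """On write, record a checksum alongside the data (as ZFS does per block)."""
    return block, hashlib.sha256(block).digest()

def scrub_ok(block: bytes, checksum: bytes) -> bool:
    """A scrub re-reads every block and compares it to its stored checksum."""
    return hashlib.sha256(block).digest() == checksum

data, chk = store(b"important bytes")
print(scrub_ok(data, chk))                # → True: intact block passes
print(scrub_ok(b"important bytez", chk))  # → False: one flipped byte is caught
```

This is why a scrub has to touch every byte in the pool, and why its duration scales with pool size rather than with how much you actually use the data.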

    • Hugin@lemmy.world · 18 hours ago

      I worked on a terrain render of the entire planet. We were filling three 2 TB drives a day for a month. So this would have been handy.

    • ipkpjersi@lemmy.ml · 19 hours ago

      What drives do you have exactly? I have 7×6 TB WD Red Pro drives and I can do a scrub in less than 24 hours.

      • needanke@feddit.org · 13 hours ago

        I have 2×12 TB white-label WD drives (harvested from external drives, but datacenter drives according to the serial numbers) and one 16 TB white-label Toshiba (purchased directly, also meant for datacenters) in a raidz1.

        How full is your pool? Mine is about two-thirds full, which I think impacts scrubbing. I also frequently access the pool, which delays scrubbing.

        • ipkpjersi@lemmy.ml · 10 hours ago

          It’s like 90% full, scrubbing my pool is always super fast.

          Two weeks to scrub the pool sounds like something is wrong tbh.

  • zapzap@lemmings.world · 1 day ago

    This hard drive is so big that when it sits around the house, it sits around the house.

    • BarneyPiccolo@lemmy.today · 1 day ago

      Nah, the other stuff will all fit on your computer’s hard drive, this is only for porn. They should call it the Porn Drive.

      • Wolf@lemmy.today · 19 hours ago

        I “only” have a 1 TB SSD. If I wanted to download a new game I would have to delete one that’s already on here.

    • tempest@lemmy.ca · 1 day ago

      It isn’t as much as you think; high-resolution, high-bitrate video files are pretty large.

      • grue@lemmy.world · 16 hours ago

        high resolution, high bitrate video files are pretty large.

        Can it actually transfer data fast enough to save or play them back in real-time, though?

  • Ænima@feddit.online · 1 day ago

    Is it worth replacing within a year, only to be sent a refurbished one when it dies?

  • GreenKnight23@lemmy.world · 2 days ago

    No thanks, Seagate. The trauma of losing my data because of a botched firmware with a ticking time bomb kinda put me off your products for life.

    See you in hell.

    • muusemuuse@sh.itjust.works · 1 hour ago

      I can certainly understand holding grudges against corporations. I didn’t buy anything from Sony for a very long time after their fuckery with George Hotz, and Nintendo’s latest horseshit has me staying away from them. But that was a single firmware bug that locked down hard drives (note: the data was still intact) a very long time ago. Seagate even issued a firmware update to prevent the bug from biting users it hadn’t hit yet, but firmware updates at the time weren’t really something people thought to ever do, and operating systems did not check for them automatically back then like they do now.

      Seagate fucked up, but they also did everything they could to make it right. That matters. Plus, look at their competition. WD famously lied about their Red drives not being SMR when they actually were. And I’ve only ever had WD hard drives and SanDisk flash drives die on me. And guess who owns SanDisk? Western Digital!

      I guess if you must go with another company, there are the louder and more expensive Toshiba drives, but I have never used those before, so I know nothing about them aside from their reputation for being loud.

      • needanke@feddit.org · 22 hours ago

        And I’ve only ever had WD hard drives and sandisk flash drives die on me

        Maybe it’s confirmation bias, but almost all memory that failed on me has been SanDisk flash storage. The only exception was a Corsair SSD, which failed after 3 years as the main laptop drive plus another 3 as a server boot and log drive.

        • muusemuuse@sh.itjust.works · 1 hour ago

          The only flash drive I ever had fail me that wasn’t made by SanDisk was a generic Micro Center one, which was so cheap I couldn’t bring myself to care about it.

    • MystikIncarnate@lemmy.ca · 20 hours ago

      I had a similar experience with Samsung. I had a bunch of 870 Evo SSDs up and die for no reason. Turns out it was a firmware bug in the drive and they just needed an update, but the update needs to take place before the drive fails.

      I had to RMA the failures. The rest were updated without incident and have been running perfectly ever since.

      I’d still buy Samsung.

      I didn’t lose a lot of data, but I can certainly understand holding a grudge on something like that. From the other comments here, hate for Seagate isn’t exactly rare.

    • Vinstaal0@feddit.nl · 13 hours ago

      Some of Seagate’s drives have terrible scores in sources like Backblaze’s drive stats. They are probably the worst brand, but also generally the cheapest.

      I have been running a RAID of old Seagate Barracudas for years at this point, including a lot of boot cycles and me forcing the system off because TrueNAS has issues or whatnot, and for some fucking reason they won’t die.

      I have had a WD Green SSD that I use for TrueNAS boot die, I had a WD external drive have its controller die (the drive inside still works), and I had some crappy mismatched WD drives in a RAID 0 for my Linux ISOs, and those failed as well.

      Whenever the Seagates start to die, I guess I’ll be replacing them with Toshibas, unless somebody has another suggestion.

    • ZILtoid1991@lemmy.world · 1 day ago

      Can someone recommend me a hard drive that won’t fail immediately? Internal, not SSD (cheap ones of which will die even sooner), and I need it for archival reasons, not speed or fancy new tech; otherwise I have two SSDs.

      • AdrianTheFrog@lemmy.world · 3 hours ago

        I think refurbished enterprise drives usually have a lot of extra protection hardware that helps them last a very long time. Seagate advertises a mean time to failure on their Exos drives of ~200 years at a moderate level of usage. I feel like it would almost always be a better choice to get more refurbished enterprise drives than fewer new consumer drives.

        I personally found an 8 TB Exos on serverpartdeals for ~$100, which seems to be in very good condition after checking the SMART data. I’m just using it as a backup, so there isn’t any data on it that isn’t also somewhere else, which is why I didn’t bother with redundancy.

        I’m not an expert, but this is just from the research I did before buying that backup drive.
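The "~200 years" figure can be sanity-checked from the spec sheet. A sketch, assuming the 2.5-million-hour MTBF commonly quoted for Exos drives (an assumption; check the datasheet for the exact model):

```python
import math

mtbf_hours = 2.5e6      # assumed Exos spec-sheet figure
hours_per_year = 8766   # average year, including leap days

mttf_years = mtbf_hours / hours_per_year           # naive mean time to failure
afr = 1 - math.exp(-hours_per_year / mtbf_hours)   # annualized failure rate
print(f"MTTF ~ {mttf_years:.0f} years, AFR ~ {afr * 100:.2f}%")
```

That works out to roughly 285 years of mean time to failure, or an annualized failure rate of about 0.35% per drive; the lower "~200 years at moderate usage" figure presumably folds in a duty-cycle assumption. Note MTBF is a fleet statistic, not a promise about any individual drive.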

      • lightnsfw@reddthat.com · 1 day ago

        If you’re relying on one hard drive not failing to preserve your data, you are doing it wrong from the jump. I’ve got about a dozen hard drives in play from Seagate and WD at any given time (mostly Seagate, because they’re cheaper and I don’t need speed either) and haven’t had a failure yet. Backblaze used to publish stats about the hard drives they use; not sure if they still do, but that would give you some data to go off. Seagate did put out some duds a while back, but other models are fine.

        • tempest@lemmy.ca · 1 day ago

          The Backblaze stats were always useless because they would tell you what failed long after that run of drives was available.

          There are only 3 manufacturers at this point, so just buy one or two of each color and call it a day. ZFS in RAID-Z2 is good enough for most things at this point.

      • Ushmel@lemmy.world · 1 day ago

        My WD Red Pros have almost all lasted me 7+ years, but the best thing (and probably the cheapest nowadays) is a proper 3-2-1 backup plan.

      • daq@lemmy.sdf.org · 1 day ago

        Hard drives aren’t great for archival in general, but any modern drive should work. Grab multiple brands and make at least two copies. Look for sales; externals regularly go below $15/TB these days.

        • Ushmel@lemmy.world · 1 day ago

          Word to the wise: those externals usually won’t last 5+ years of constant use as an internal.

          • daq@lemmy.sdf.org · 1 day ago

            I’ve got 6 in a random mix of brands (Seagate and WD), 8–16 TB, that are all older than that, running 24/7 storing mostly random shit I download. I pulled one out recently because the USB controller died. The drive still works in a different enclosure now.

            I’d definitely have a different setup for data I actually cared about.

        • WhyJiffie@sh.itjust.works · 1 day ago

          They were selling WD Red (Pro?) drives with SMR tech, which is known to be disastrous for disk arrays, because both traditional RAID and ZFS tend to throw them out. The reason is that when you are filling the drive up, especially quickly, it can’t keep up with your writes after some time, and write operations take a very long time, because the disk needs to rearrange its data before writing more. But RAID solutions just see that the drive is not responding to the write command for a long time, and they conclude that the drive is bad.

          It was a few years ago, but it was a shitfest because they didn’t disclose it, and people were expecting that NAS drives would work fine in their NAS.
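The eviction behavior described above can be modeled as a toy sketch (all names and numbers are illustrative assumptions, not real controller or drive specs):

```python
# Toy model of why arrays kick out stalled SMR drives: once the drive's
# CMR cache region is full, writes stall while shingled zones are rewritten,
# and the controller interprets the long stall as a dead drive.
CONTROLLER_TIMEOUT_S = 8.0  # illustrative controller patience

def write_latency_s(cache_has_room: bool) -> float:
    # Illustrative latencies: fast while the cache absorbs writes,
    # orders of magnitude slower once zones must be rewritten.
    return 0.01 if cache_has_room else 30.0

def controller_verdict(latency_s: float) -> str:
    return "ok" if latency_s <= CONTROLLER_TIMEOUT_S else "marked failed"

print(controller_verdict(write_latency_s(True)))    # light load → "ok"
print(controller_verdict(write_latency_s(False)))   # sustained fill → "marked failed"
```

The drive isn’t actually dead in this scenario; it is just slower than the controller’s timeout, which is exactly why undisclosed SMR in NAS-branded drives caused such an uproar.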

          • IronKrill@lemmy.ca · 22 hours ago

            they were selling wd red (pro?) drives with smr tech

            Didn’t they use to have only one “Red” designation? Or maybe I’m hallucinating. I thought “Red Pro” was introduced after that kerfuffle to distinguish the SMR drives from the CMR ones.

            • WhyJiffie@sh.itjust.works · 19 hours ago

              I don’t know, because I haven’t been around long enough, but yeah, possibly they started using the Red Pro designation then.

          • Ushmel@lemmy.world · 1 day ago

            I’ve had a couple drives randomly drop from my array recently, but they were older, so I didn’t think twice about it. Does this permafry them, or can you remove them from the array and reinitialize them to get them working?

            • WhyJiffie@sh.itjust.works · 19 hours ago

              Well, it depends. If they were dropped just because they are SMR and were writing slowly, I think they are fine. But otherwise…

              What array system do you use? Some RAID software, or ZFS?

              • Ushmel@lemmy.world · 7 hours ago

                Windows Server storage solutions. I took them out of the array and they still weren’t recognized in Disk Management so I assume they’re shot. It was just weird having 2 fail the same way.

                • WhyJiffie@sh.itjust.works · 6 hours ago

                  I don’t have experience with Windows Server, but that indeed sounds like they’re dead. You could check whether a bootable live Linux on a pen drive sees them (GParted Live, for example), in case Windows just hides them because it blacklisted them or something.

      • GreenKnight23@lemmy.world · 1 day ago

        https://www.eevblog.com/forum/chat/whats-behind-the-infamous-seagate-bsy-bug/

        This thread has multiple documented instances of poor QA and firmware bugs Seagate has shipped at the cost of their own customers.

        My specific issue was even longer ago, 20+ years. There was a bug in the firmware where a runtime counter overflowed its int limit; that caused a cascade failure in the firmware and locked the drive up once it had been running for the maximum the counter could hold. This is my understanding of it, anyway.

        The only solution was to purchase a board online for the exact model of your HDD, swap it, and perform a firmware flash before time ran out. I think you could also use a clip and force-program the firmware.

        At the time a new board cost as much as a new drive, money I didn’t have at the time.

        Eventually I moved past the 1 TB of data I lost, but I will never willingly purchase another Seagate.

      • skankhunt42@lemmy.ca · 1 day ago

        In my case, 10+ years ago, I had 6×3 TB Seagate disks in a software RAID 5. Two of them failed, and it took me days to force the array back together and get some of the data off. Now I use WD and RAID 6.

        I read 3 or 4 years ago that it was just the 3 TB Reds I used that had a high failure rate, but I’m still only buying WDs.

        • HiTekRedNek@lemmy.world · 1 day ago

          I had a single Red 2 TB in an old TiVo Roamio for almost a decade.

          I pulled it out this weekend and finally tested it. Failed.

          I was planning to move my 1.5 TB music collection to it. Glad I tested it first, lol.

    • Psythik@lemmy.world · 2 days ago

      More like zero, because modern AAA games require an NVMe drive (or at least an SSD), and this is a good old-fashioned 7200 RPM drive.

        • Psythik@lemmy.world · 1 day ago

          A lot of modern AAA games require an SSD, actually.

          Off the top of my head: Cyberpunk 2077, Marvel’s Spider-Man 2, Hogwarts Legacy, the Dead Space remake, Starfield, Baldur’s Gate 3, Palworld, Ratchet & Clank: Rift Apart.

          • RisingSwell@lemmy.dbzer0.com · 20 hours ago

            Forza Horizon 4 and 5 don’t say they require an SSD, I think, but when I had them on my hard drive, any cars that did over 250 kph caused significant world-loading issues, as in I’d fall out of the world because it didn’t load the map.

            • Psythik@lemmy.world · 20 hours ago

              Forza Horizon 4 actually does include an SSD in its requirements. Thank you for reminding me about that.

              • RisingSwell@lemmy.dbzer0.com · 19 hours ago

                It does technically work without it; just don’t go over A class, don’t do sprints, and there was one normal circuit that’s a tad big in a forest bit.

                • Psythik@lemmy.world · 19 hours ago

                  If a game isn’t fully playable without an SSD, then I consider it a requirement.

                  Ever try playing Perfect Dark without an Expansion Pak back in the day? It’ll technically work, but you’ll get locked out of 90% of the game, including the campaign. It’s a similar thing with SSDs today.

          • Nalivai@discuss.tchncs.de · 1 day ago

            Both Cyberpunk and BG3 work flawlessly on the external USB hard drive that I use. The loading times suffer a bit, but not to an unplayable degree, not even close

          • TyrantTW@lemmy.ml · 2 days ago

            Indeed, as others have said, this isn’t a hard requirement. Anyone with a handheld (e.g. Steam Deck) playing off a microSD card uses a device that’s an order of magnitude slower for sequential I/O.

            • Psythik@lemmy.world · 2 days ago

              I can personally guarantee that it is a hard requirement for Spider-Man and Ratchet

                • Psythik@lemmy.world · 1 day ago

                  Okay well try telling that to my computer when the games wouldn’t run without constantly freezing to load assets every few seconds.

            • wewbull@feddit.uk · 2 days ago

              They stream data from the drive while you play, so if you don’t have an SSD you’ll get pauses in gameplay.

              • tobogganablaze@lemmus.org · 2 days ago

                Sure, you might.

                But Baldur’s Gate 3, for example, which claims to require an SSD in its system requirements, runs just fine on an HDD.

                It’s just the developer making sure you get optimal performance.