I've been wanting to get proper storage for my lil server running Nextcloud and a couple other things, but NC is the main concern. It's currently running on an old SSD I've had lying around, so I'd want a more reliable, longer-term solution.
So I'm thinking of a RAID1 (mirror) HDD setup with two 5400rpm 8TB drives, which brings the choices down to IronWolf or WD Red Plus, both of which are in the same price range.
I'm currently biased towards the IronWolfs because they're slightly cheaper and have a cool print on them, but from Reddit threads I've seen that WD drives are generally quieter, which is a concern since the server is in my bedroom.
Does anyone have experience with these two drives and/or know of better solutions?
Oh, and for the OS: being a simple Linux server, is it generally fine to have that on a separate drive, an SSD in this case?
Thanks! :3
I got myself some N300 Toshiba NAS drives.
I've been burned by WD Red SMR drives, so I'll just say: Fuck You, WD. That is all.
Backblaze reports HDD reliability data on their blog. Never rely on anecdata!
https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2024/
Kind of frustrating that Seagate is the only company to have multiple drives with ZERO failures, but then that 12TB model with over 12% failure… ouch.
That said, I’ve been on Team Seagate IronWolf for years without issues.
And as Backblaze says themselves every time, this data isn't the complete truth either: they have a large sample size, but it's still too small to make broad claims about reliability, especially about brand reliability.
I mean… It’s better than “I bought two drives for my homelab and they’re fine” reports on social media.
These all seem to be 7200rpm drives; would 5400rpm drives make a large difference in terms of longevity relative to that? Also seeing mixed results for Seagate there: first they mention zero failures for a couple of Seagate models, but then in the graph of annualized failure rates, Seagate overall has the highest failure rate.
The WD60EFRX is a 5400rpm drive with 0.00% failure. I think all the WD Reds were 5400rpm.
I wouldn't think so. 5400rpm drives might last longer if we're specifically thinking about mechanical wear.
My main takeaway is that WDC has the best numbers. I would use the largest sample available, which is the final chart that you also point out. One thing others have pointed out as well is that there's no metadata included with these results, for example the location of different drives (rack- and server-room-specific data), which would control for temperature, vibration, and other potential confounders. It's also likely that as new servers are brought online, different SKUs are bought in groups, e.g. all 10TB Seagate IronWolf. I don't know why they haven't tried linear or simple machine-learning models to provide some explanatory information for this data, but I'm nevertheless deeply appreciative that it's available at all.
My methodology is to look at BackBlaze, throw out any data with less than 100k hours, and pick the drive with the lowest AFR (annualized failure rate).
It’s maybe $50-$100 between the cheapest and best enterprise drive and I’m not buying 1,000 drives so I do not care about price.
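That filtering methodology can be sketched as a quick pipeline. This is just an illustration: the CSV below is a made-up mini-extract, not real Backblaze numbers, and the column layout (model, hours, AFR) is an assumption rather than Backblaze's actual export format.

```shell
# Hypothetical mini-extract of drive stats: model, cumulative hours, AFR %.
cat > /tmp/afr.csv <<'EOF'
model,drive_hours,afr
WDC_WUH721816ALE6L4,900000,0.30
ST12000NM0007,500000,1.20
TOSHIBA_MG07ACA14TA,700000,0.45
SMALL_SAMPLE_DRIVE,2000,0.00
EOF

# Throw out rows with fewer than 100k hours (too small a sample),
# then pick the drive with the lowest annualized failure rate.
awk -F, 'NR>1 && $2 >= 100000' /tmp/afr.csv | sort -t, -k3 -g | head -n1
```

Note how the 0.00% AFR drive gets discarded first: with only 2000 hours behind it, that zero is noise, which is the whole point of the hours cutoff.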
How often do you get new drives? I've been through 2 drives that have stored all my data over the last 15 or so years. I want to put a NAS together now, though I've no server experience.
Around every 6 years.
I'll be honest, I haven't used Seagate in 15 years, so maybe they've improved, but the only hard drives I've ever had fail on me have all been Seagate.
Never had any other drive hard fail. The ones that did were always Seagate.
After (ugh) 30 years of having PCs and many, many, drives, Seagate has been the worst.
But I’ve had WD fail too. Just not as much, and I’ve had far more WD drives. I currently have about 20 drives of varying ages, 98% of them are WD, because more of the Seagate drives have failed and been trashed.
I picked Toshiba drives personally. However, I know of a bunch of WD reds running for years with no issues.
I think since NAS cases aren't insulated for sound, you're going to hear them moving that head around…
Have you considered an SSD-populated NAS? It does cost more, however :(
WD is a shit company doing shit things, so Seagate. They are less shit, for now.
Can you elaborate?
Western Digital, among other things, was caught selling SMR drives under SKUs that had previously been CMR. So if you were building something, wanted specific performance, and selected some WD drives based on what the SKU said, you might end up with SMR drives, which are not nearly as performant. It was a bait-and-switch tactic, and they never really acknowledged it beyond introducing the "Red Plus" line of drives, which are CMR. Regular "Red" drives are SMR now.
I see. That’s good to know, thanks.
Look, there’s 2 things here:
- NAS - meaning storage
and
- NAS - meaning a virtualisation / container server that’s doing lots of fairly random disk access
Which are you wanting?
For the first, just consider capacity (you’ll fill it) and noise (spinning away all night)
For the 2nd, really really consider SSDs as they’re silent and fast.
RAID1 is just a convenience factor, so whatever you do, don’t get too caught up in the drive mechanics as you’ll have a full backup (right?) and can restore your data at a moment’s notice.
Honestly, honestly, just go for something large & quiet and you’ll be fine.
And yes, SSD for the OS
Who says NAS to mean anything other than Network-Attached Storage?!?
“When I say left, I kinda mean right half the time almost.”
I agree, the acronym NAS does indeed mean that.
But would you call a Hypervisor a NAS?
When I say NAS, I mean NAS. Bulk storage remotely accessible on the network.
When someone starts talking about all their VMs/Containers, I understand that to mean something else… I’d prefer to use a generic term like “server” instead.
So it will be the latter, as it will run Nextcloud, a couple websites I have, a Pi-hole, and possibly game servers when the need arises again. For the websites I plan on using the mass storage as a backup for the sites' data, so the actual running files will stay on the SSD running the OS; currently considering Proxmox, actually.
For Nextcloud I'm not sure, since I use it to sync and "backup" (I know it's not meant as a backup program), so it would need lots of random r/w, but it can't fit on the OS SSD as that would get too large; I currently only have a 250GB SSD on hand for the OS.
I'd love to get SSDs instead of mass-storage HDDs, but they're simply 3 times as expensive for the same capacity, which is out of budget for me.
Going seagate is a great way to save cash and lose data
So then save cash, set up a raid array, and get multiple proper backups running. You need to do steps 2 & 3 regardless.
And then have them all fail within a week of each other, inside a couple of months. They're garbage; get something decent like a Toshiba or something.
You’re not going to be looking at them. If one is quieter, get that one.
Also, be prepared to buy a replacement when, not if, one of them fails. It might be years from now, but it’ll happen.
And backup, proper backup.
Go with the drive with the best money/TB rate that meets your criteria. I would consider everything at or above IronWolf or Red Plus fitting for NAS use. Data-center drives are, in my experience, often cheaper than NAS drives (WD Ultrastar, Seagate Exos, or Toshiba Enterprise Capacity).
Look at the warranty. Sometimes another drive for just a couple of bucks gives you a much longer warranty.
Yes, it's fine to have your OS on a separate SSD and use your HDDs as data storage.
It's also important to maintain your drives. Be sure to have SMART alerts set up, and run SpinRite or badblocks occasionally to let the drive firmware take bad sectors out of use.
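For the SMART/badblocks routine mentioned above, a rough sketch of the usual commands looks like this. `/dev/sdX` is a placeholder; these run against real hardware, so substitute your own device (check `lsblk`) and double-check before running anything:

```shell
# Placeholder device name — substitute your actual drive (see lsblk).
DEV=/dev/sdX

# One-off health report: SMART attributes, error log, self-test history.
smartctl -a "$DEV"

# Kick off a long (full-surface) self-test in the background;
# check the result later with: smartctl -l selftest "$DEV"
smartctl -t long "$DEV"

# Read-only scan for unreadable sectors (non-destructive, but slow).
badblocks -b 4096 -sv "$DEV"
```

For ongoing alerts rather than one-off checks, the `smartd` daemon can watch all drives and email on failures; a minimal `/etc/smartd.conf` line for that would be something like `DEVICESCAN -a -m you@example.com`.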
One of each. There is a small chance that drives made in the same factory will fail at exactly the same time, for the same reason, when used in RAID1. While this probably won't happen (if it does, it would be in the first month, and you'll hear about others with the same failures), why risk it? Besides, you want hard drive makers to stay in business: all hard drives will fail eventually, the only question is when.
I didn't take my own advice for a RAID I built years ago. I just placed the order (one hour ago) to replace a WD Red with a Seagate. God only knows when the next drive will fail. I've overall been fine, but I only have one disk of redundancy in my ZFS system until Thursday.
That actually sounds really smart, but can that cause issues with the raid controller, since the drives will act slightly differently?
I have that exact same setup but with 4 TB disks on zfs in mirrored mode. Have not noticed any performance issues in my home lab setup mainly being used for immich and media serving. I had purposely chosen disks of different brands specifically for this reason. My vote goes to this setup.
ZFS, btrfs, and other software RAID solutions can use mixed drives w/o much issue as long as you make sure that the capacities match or that you set the array up with the smallest disk size in mind.
Do not use hardware RAID controllers. They provide no meaningful performance benefit over software RAID and make data recovery much more difficult (if not impossible) in the event of hardware failure.
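A minimal sketch of the mixed-brand ZFS mirror described above. Pool and device names here are placeholders (use the stable `/dev/disk/by-id/` paths for your actual drives); ZFS sizes the mirror vdev to the smaller of the two disks automatically:

```shell
# Mirror two whole disks under one pool; capacity = the smaller disk.
zpool create tank mirror \
    /dev/disk/by-id/ata-DRIVE_A \
    /dev/disk/by-id/ata-DRIVE_B

# Check pool health, and scrub periodically so latent errors
# on either drive surface before the other one fails.
zpool status tank
zpool scrub tank
```

Using `/dev/disk/by-id/` rather than `/dev/sdX` names keeps the pool importable even if the kernel enumerates the drives in a different order after a reboot.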
Haven't tried those drives in particular, but I got 4 Seagate Exos drives (can't recall if they're 16 or 18TB each); very happy with them. A bit loud, but they're also 7200rpm. You can get them at a really good price sometimes.
“A bit loud” is understating it, those drives rip and tear (we use exos X18 drives). I pity the person trying to sleep next to those.
They are good though; we had one (of 5) fail within the first week, but that was quickly resolved.
Yes, I need to do something about picking a location for them in the house. The loft sounds great, but it gets hot in summer. Downstairs would be nice and chilly, but you don't want to hear this… So much to think about.
Anecdotal plug: I have had the best luck with Toshiba drives. In my current NAS I'm using Seagate 12TB recertified IronWolf drives, but that only has a month of uptime so far.
Before those I ran Toshiba 4TB NAS drives, and before that, Toshiba 2TB red drives for 8 years with no issues and 100% uptime in a DrivePool Windows setup. My last couple of backup drives were 6TB WD drives, and I am 2 for 2 on premature drive failures with those.
I also refuse to run WD online-type devices since the My Book Live / My Cloud security issues that resulted in significant data loss, which WD refused to patch. Instead they gave people a $50 credit for a new drive.
@Know_not_Scotty_does Same experience with Toshiba; I've been using a 3TB drive for the past 10 years and it's still going strong!