I was pretty surprised to learn that Interac e-transfer or equivalent isn’t commonplace everywhere.
Found the Canadian.
“Create an Haitian cooking a tabby cat in a spit fire, the background should look like a typical Ohio town”
Oh, the units conversion looks promising. It might replace Ultra Measure Master for me, though I think it doesn’t quite work yet.
Is the author’s username a play on “jerkoff”? No, his name is Jost Herkenhoff.
It might also save it from shit controllers and cables, which ECC can’t help with. (It has for me.)
Unless you need RAID 5/6, which doesn’t work well on btrfs
Yes. Because they’re already using some sort of parity RAID, I assume they’d want parity RAID in ZFS/Btrfs, and as you said, that’s not an option for Btrfs. So LVMRAID + Btrfs is the alternative. LVMRAID because it’s simpler to use than mdraid + LVM, and the implementation is still mdraid under the covers.
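For the curious, the whole LVMRAID + Btrfs route is only a few commands. A minimal Python sketch of the steps (the device names, VG/LV names, and RAID level are assumptions; adapt to your disks):

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Hypothetical devices; lvcreate --type raid5 uses the md raid5
    # implementation under the covers, which is the point made above.
    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
    run(["pvcreate", *disks])
    run(["vgcreate", "vg0", *disks])
    run(["lvcreate", "--type", "raid5", "-l", "100%FREE", "-n", "data", "vg0"])
    run(["mkfs.btrfs", "/dev/vg0/data"])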
It is marketing and it does have a meaningful connection to the litho features, but the connection is not absolute. For example, Samsung’s 5nm is noticeably more power hungry than TSMC’s 5nm.
And you probably know that sync writes will shred NAND while async writes are not that bad.
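To make the distinction concrete, here’s a toy Python sketch (not a benchmark) of the two write paths: the sync path forces the drive to commit every small record individually, while the async path lets the page cache batch the same records into large flushes the controller handles cheaply.

    import os

    REC = b"x" * 4096

    # Sync path: fsync after every record forces a separate commit to
    # stable storage, so the SSD programs NAND per 4 KiB record.
    with open("sync.dat", "wb") as f:
        for _ in range(1000):
            f.write(REC)
            f.flush()
            os.fsync(f.fileno())

    # Async path: the same records sit in the page cache and get written
    # back in big batches, which is far gentler on the NAND.
    with open("async.dat", "wb") as f:
        for _ in range(1000):
            f.write(REC)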
This doesn’t make sense. SSD controllers have been able to handle any write amplification under any load since SandForce 2.
Also, most of the argument around speed doesn’t make sense, other than DC-grade SSDs being expected to be faster in sustained random loads. But we know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for most popular models. They’ll be as fast as those benchmarks on average. If that’s enough for the person’s use case, it’s enough. And they’ll handle as many TB of writes as advertised, and the amount of writes can be monitored through SMART.
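Monitoring that is straightforward with smartctl. A rough Python sketch for NVMe drives (the parsing and the device name are assumptions; SATA drives typically expose a Total_LBAs_Written attribute instead, with vendor-specific units):

    import subprocess

    def bytes_written(dev):
        out = subprocess.run(["smartctl", "-A", dev], capture_output=True,
                             text=True, check=True).stdout
        for line in out.splitlines():
            # NVMe reports "Data Units Written"; one unit is 512,000 bytes.
            if "Data Units Written" in line:
                units = int(line.split(":")[1].split("[")[0]
                            .replace(",", "").strip())
                return units * 512_000
        return None

    print(bytes_written("/dev/nvme0"))  # compare against the rated TBW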
And why would ZFS be any different from any other similar FS/storage system in regards to random writes? I’m not aware of ZFS generating more IO than needed. If that were the case, it would manifest as lower performance compared to other similar systems, when in fact ZFS is often faster. I think SSD performance characteristics are independent of ZFS.
Also OP is talking about HDDs, so not even sure where the ZFS on SSDs discussion is coming from.
Great news for EU auto workers!
Doesn’t uBlock Origin already have a Manifest V3 version of the extension?
To add a concrete example to this: I worked at a bank during a migration from a VMware-operated private cloud (own data center) to OpenStack. Over several years, the OpenStack cloud got designed, operationalised, tested, and made ready for production. In the following years some workloads moved to OpenStack. Most didn’t. Six years after the beginning of the whole hullabaloo, the bank cancelled the migration program and decided to keep the VMware infrastructure intact and upgrade it. They began phasing out OpenStack. If you’re in North America, you know this bank. Broadcom can probably extract a 1000% price increase and that DC will still be running VMware in a decade.
Why would MS not use this opportunity to also hike the prices of their equivalent offerings? A 1000% increase leaves a lot of room for their own increase while still being cheaper.
Not sure where you’re getting that. Been running ZFS for 5 years now on bottom-of-the-barrel consumer drives - shucked drives and old drives. I have used 7 shucked drives total. One died during a physical move. The remaining 6 are still in use in my primary server. Oh, and the speed is superb. The current RAIDz2, composed of those 6 shucked drives plus 2 IronWolfs, does 1.3 GB/s sequential reads and thousands of 4K write IOPS. Oh, and this is all happening over USB, in 2x 4-bay USB DAS enclosures.
That doesn’t sound right. Also, random writes don’t kill SSDs; total writes do, and you can see how much has been written to an SSD in its SMART values. I’ve used SSDs as swap memory for years without any breaking - heavily used swap, for running VMs and software builds. Their total-bytes-written counters increased steadily but never reached the limit, and the drives didn’t die despite the sustained random-write load. One was an Intel MacBook onboard SSD. Another was a random Toshiba OEM NVMe. Another was a Samsung OEM NVMe.
Yes we run ZFS. I wouldn’t use anything else. It’s truly incredible. The only comparable choice is LVMRAID + Btrfs and it still isn’t really comparable in ease of use.
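The ease-of-use gap shows in how little it takes to stand up a pool. A hedged Python sketch (pool name, layout, and device names are made up), versus the pvcreate/vgcreate/lvcreate/mkfs dance that LVMRAID + Btrfs needs:

    import subprocess

    # One command yields pooling, parity RAID, and checksumming; there is
    # no separate volume-manager layer to configure.
    subprocess.run(["zpool", "create", "tank", "raidz2",
                    "/dev/sda", "/dev/sdb", "/dev/sdc",
                    "/dev/sdd", "/dev/sde", "/dev/sdf"], check=True)
    subprocess.run(["zfs", "set", "compression=lz4", "tank"], check=True)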
Yup. All of these “solutions” that sound original are known. The reason we don’t apply them isn’t that we don’t know how to solve these issues; it’s that capital has pulled the handbrake. This is the problem we have to solve. All the other problems fall downstream and will magically start getting solved if we can release the handbrake. If we’re not talking about how to reduce regulatory capture, we’re not talking about real solutions.
Have you tried Llama? If so, is it useful according to your criteria?
Yup. You can grab any unencrypted data passed between the user’s browser and a server literally out of thin air when they’re connected to an open access point. You sit happily at the Starbucks with your laptop, sniffing them WiFi packets and grabbing things off of them.
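For a sense of how low the bar is, a sniffer for that scenario is a few lines of scapy. A minimal sketch (the interface name is an assumption, and only use it on networks you’re authorized to monitor):

    from scapy.all import sniff, Raw, TCP

    def show(pkt):
        # Port 80 traffic is plaintext HTTP: headers, cookies, form fields.
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            print(pkt[Raw].load[:200])

    # On an open AP the frames are unencrypted, so a card in promiscuous
    # or monitor mode picks them up "out of thin air".
    sniff(iface="wlan0", filter="tcp port 80", prn=show, store=False)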
Oh and you have no idea what the myriad of apps you’re using are connecting to and whether that endpoint is encrypted. Do not underestimate the ability of firms to produce software at the absolute lowest cost with corners and walls missing.
If I were someone out to make money off of scamming people, one thing I’d try is rigging portable sniffers at public locations with large foot traffic and open WiFi, like train stations, airports, etc. Throw em around, then filter for interesting stuff. Oh, here’s some personal info. Oh, there’s a session token for some app. Let me see what else I can get from that app for that person.