

Why? I can touch type Colemak, and whenever I need to (which is not often) I can still type on QWERTY like I did years ago.
I use characters from whichever book I’m reading at the time. Examples:
If all you care about is money, then it’s even cheaper on Hetzner at 48/year. The reason I recommended Borgbase is that it’s a bit better known and more trustworthy. $8 a year is a very small difference; sure, it will be more than that because, like you said, you won’t use the full TB on B2, but I still don’t think it’ll end up that different. However, there are some advantages to using a Borg-based solution:
And most importantly, migrating from one to the other is simple, just a config change, so you can start with Borgbase and, in a year, buy a mini computer to leave at your parents’ house and make all the config changes needed in seconds. Migrating away from B2, on the other hand, will involve a secondary tool. Personally I think that flexibility is worth way more than those $8/year.
Also, Borg has deduplication, versioning and encryption. I think B2 has all of that too, but I’m not entirely sure; my understanding is that it duplicates the entire file when something changes, so you might end up paying a lot more for it.
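To make that concrete, here’s a minimal sketch of the Borg side (repo URL and paths are placeholders): the provider-specific part is just the repo URL, which is why moving somewhere else later is only a config change.

```bash
# Placeholder repo URL; swapping Borgbase for a box at your parents' place is one line.
export BORG_REPO='ssh://xxxx@xxxx.repo.borgbase.com/./repo'

borg init --encryption=repokey-blake2 "$BORG_REPO"                  # encrypted client-side
borg create --stats "$BORG_REPO::{hostname}-{now}" /etc /home /srv  # deduplicated, versioned archive
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6          # keeps a history of snapshots
```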
As for the full-system backup, I still think it’s not worth it: how do you plan on restoring it? You would probably have to plug in a live USB and perform the steps there, which would involve formatting your disks properly, connecting to the remote server to get your data, chrooting into it and installing a bootloader. It just seems easier to install the OS and run a script, even if the other way could shave off 5 minutes when everything works correctly and you’re very fast at doing stuff.
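For reference, that live-USB path looks roughly like this (just a sketch; device names, repo URL and archive name are placeholders):

```bash
# from the live USB, as root:
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt
cd /mnt && borg extract ssh://user@backup-host/./repo::my-machine-2024-01-01   # pull everything back
mount --bind /dev /mnt/dev && mount --bind /proc /mnt/proc && mount --bind /sys /mnt/sys
chroot /mnt grub-install /dev/sda
chroot /mnt grub-mkconfig -o /boot/grub/grub.cfg
```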
Also, your system is constantly changing files, which means more opportunities for files to get corrupted (a similar reason why backing up a database’s data folder is a worse idea than backing up a dump of it), and some files are infinite, e.g. /dev/zero or /dev/urandom, so you would need to be VERY careful about what to back up.
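If you did try the full-system route anyway, the exclude list alone shows how careful you’d have to be; a sketch (reusing the placeholder $BORG_REPO from above):

```bash
borg create "$BORG_REPO::{hostname}-{now}" / \
    --exclude /dev --exclude /proc --exclude /sys \
    --exclude /run --exclude /tmp --exclude /var/cache \
    --one-file-system   # don't wander into other mounted filesystems
```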
At the end of the day I don’t think it’s worth it. How long do you think it takes you to install Linux on a machine? I would guess around 20 minutes. Restoring your 1TB backup will certainly take much longer than that (probably a couple of hours), and if the system is already up you can restore the critical stuff that doesn’t require the full backup early. That’s another reason why Borg is a good idea: you can have a small repository of critical stuff that restores in seconds, and another repository for the stuff that takes longer. So Immich might take a while to come back, but Authentik and Caddy can be up in seconds. Again, I’m sure B2 can also do this, but probably not as intuitively.
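Something like this, for example (repo and path names are made up for illustration):

```bash
# small "critical" repo: compose files, Caddy, Authentik — restores in seconds
borg create "ssh://xxxx@xxxx.repo.borgbase.com/./critical::{hostname}-{now}" \
    /srv/docker/compose /srv/docker/caddy /srv/docker/authentik

# big "bulk" repo: Immich photos and other large data — restores whenever it restores
borg create "ssh://xxxx@xxxx.repo.borgbase.com/./bulk::{hostname}-{now}" \
    /srv/immich /srv/media
```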
I figure the most bang for my buck right now is to set up off-site backups to a cloud provider.
Check out Borgbase. It’s very cheap and it’s an actual backup solution, so it offers features you won’t get from Google Drive or whatever you were considering, e.g. deduplication, recovering data at different points in time, and having the data encrypted so there’s no way for them to access it.
I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.
The vast majority of your system is the same as it would be if you installed fresh, so you’re wasting backup space storing data you can easily recover in other ways. You only need to store the changes you made to the system, e.g. which packages are installed (just keep the list of packages and run an install on them, no need to back up the binaries) and which config changes you made. Plus, if you’re using docker for services (which you really should), the services themselves are also very easy to recover, so if you back up the compose file and config folders for those services (and obviously the data itself) you can get back up in almost no time. Also, even if you do a full system backup you would need to chroot into that system to install a bootloader, so it’s not as straightforward as you think (unless your backup is a dd of the disk, which is a bad idea for many other reasons).
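A rough sketch of that idea (Arch-style commands and made-up paths; adapt to your distro):

```bash
# capture what you actually changed
pacman -Qqe > /backup/pkglist.txt           # explicitly installed packages, no binaries
cp -a /etc /backup/etc                      # system config you touched
cp -a /srv/docker /backup/docker            # compose files + per-service config dirs

# on a fresh install
pacman -S --needed - < /backup/pkglist.txt
docker compose -f /srv/docker/compose.yml up -d
```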
I then decided I would cherry-pick my backup locations instead. Then I started reading about backing up databases, and it seems you can’t just back up the data directory (or file in the case of SQLite) and call it good. You need to dump them first and back up the dumps.
Yes and no. You can back up the file directly, but it’s not good practice. The reason is that if the file gets corrupted you lose all the data, whereas a dump of the database contents is much less likely to get corrupted. But in practice there’s no reason why backing up the files themselves shouldn’t work (in fact, when you launch a docker container it’s always an entirely new database pointed at the same data folder).
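For example, something along these lines (container and file names are hypothetical):

```bash
# Postgres: dump through the running container instead of copying its data dir
docker exec -t my-postgres pg_dumpall -U postgres > /backup/postgres.sql

# SQLite: use the online backup command rather than cp'ing the live file
sqlite3 /srv/app/data.db ".backup '/backup/app.db'"
```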
So, now I’m configuring a docker-db-backup container to back each one of them up, finding database containers and SQLite databases and configuring a backup job for each one. Then, I hope to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I’ll have to restore the containers’ volumes, restore the backups, bring up new containers, and then restore each container’s backup into the new database. It’s pretty far from my initial hope of being able to restore all the files and start using the newly restored system.
Am I going down the wrong path here, or is this just the best way to do it?
That seems like the safest approach. If you’re concerned about it being too much work I recommend you write a script to automate the process, or even better an Ansible playbook.
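A very rough shell sketch of what such a restore script could look like (every name and path here is made up; an Ansible playbook would express the same steps more robustly):

```bash
#!/usr/bin/env bash
set -euo pipefail

cd / && borg extract "$BORG_REPO::my-machine-critical-2024-01-01"  # compose files, configs, db dumps
docker compose -f /srv/docker/compose.yml up -d postgres           # databases first
sleep 15                                                           # crude wait; a healthcheck loop is better
docker exec -i my-postgres psql -U postgres < /backup/postgres.sql # reload the dump
docker compose -f /srv/docker/compose.yml up -d                    # then everything else
```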
I use a 40% (a Corne keyboard specifically). Before that I had a 60% (Redragon K530). Neither is bad, but I prefer my Corne for typing and programming by a LONG shot.
When you look at it, it seems like a 40% would be too small for typing, but in reality it’s much more efficient because you have layers: for example, with one button my right hand is now sitting on a numpad, and with a different button it’s on symbols, both of which are much harder to reach on a 60% or even a full-sized keyboard. This video (and others from the same guy) pushed me over the edge to build my own keyboard, and I’m really glad I did.
Edit: since you asked about arrows: on my keyboard you press a button and esdf become arrows. Why not wasd, you might ask? The answer is that esdf is just like wasd but in the position where your hand rests when typing.
Oh no, a random person on the internet, who can’t read the list of issues and understand they’re all service-related and not deployment-related, who has no qualifications whatsoever and reads like an angry teen, thinks I don’t know about a technology I’ve been using since before he knew how to talk. I’m devastated.
¯\_(ツ)_/¯ what do I know, I only do this for a living, plus I’ve managed a couple of home servers with dozens of services for almost a decade.
I would bet that the problem is with Plex being inside docker. Might be one of those situations where being more experienced causes issues, because I’m trying to do things “right” and not run the service directly on my server, or as root, or in network host mode.
But if being inside a container causes this many issues, I can’t even begin to imagine what it would take to get it to do more complex stuff, like being accessible through Tailscale or sitting behind authorization.
Nope, Jellyfin works directly, same as it always has.
Even though they’re both on the same LAN? That sounds stupid, why would I need my videos to travel halfway across the globe to go from one room to the next?
Some of it, yes, the claim step for example, but the rest is still pretty bad UX (and even that is stupid, I shouldn’t need to claim a server to watch locally). I’m an experienced self-hosting person and I’m getting frustrated every step of the way; imagine someone who doesn’t know their way around docker or isn’t familiar with this stuff… Jellyfin might be less polished, as some claim, but setting it up is a breeze, I never had to look at documentation to do it.
Jellyfin has an app for the Fire Stick, and it works flawlessly.
It’s curious that I’m almost in the opposite boat: I’ve been using Jellyfin without issues for around 5 years, but recently I was considering trying Plex because Jellyfin is becoming too slow on certain screens (probably because I have too much stuff, but it shouldn’t be this slow).
Edit: this made me want to check out Plex, so I’ll leave my story here for people’s amusement:
My experience with Plex:
It’s now been an hour of trying to set this up and I give up. Jellyfin is much easier to set up, and even if Plex were instantaneous I could have loaded my TV library hundreds of times in the hour I just wasted trying to get this to work. I probably got similar results every other time I tried, which is why I have an account there even though I don’t remember ever using Plex.
Edit 2: after some more fiddling I managed to get it working, not sure what I changed, so now:
The renderer of your display server is completely unrelated to the renderer of the game. If you were using i3 before, you were using Xorg, and therefore not Vulkan anyway.
At least in 2013, when I started using Steam more seriously, if your connection dropped it would prompt you asking if you wanted to switch to offline mode. I know this because I had Steam on a laptop that I carried around hibernating in my bag, and I didn’t have internet in some of the places I went. So that has been fixed for over a decade.
But that is an apples-to-oranges comparison; just because you personally don’t care about those features doesn’t mean others don’t. For games without the features mentioned in the original comment (like Baldur’s Gate 3), not having join-by-IP is ridiculous, we agree there. But for games that do have them, it’s just not feasible: there’s too much of what makes the game the game in those features.

Don’t get me wrong, I personally think that companies should not just kill a game and should provide ways to make it playable offline after closing the servers, but it’s not as simple as allowing you to join by IP for the games being discussed here. What level would your character be? What loadout would it have? Which items would be unlocked? Etc., etc. The servers that control all of that are too ingrained into the fabric of the game, and that’s something that happened organically because people liked those features and wanted cross-progression, security, etc. Can all of that be removed? Sure, but then you’re left with a shell of what the game is/was. I still believe companies should make such a release before closing the servers, but again, this has absolutely nothing whatsoever to do with direct join-by-IP.
You’re again mixing things up: your friend’s IP doesn’t have authentication, progress, chat, etc., etc. You’re talking about a different kind of server.
This is the relevant bit of what you’re replying to:
I don’t see how modern games would function without that service running. Who am I playing against? What’s their name? How did I get my account progress?
None of that comes from the game-server but rather from the service-server. Even if social games that have those features allowed you to connect to a server directly, you would still need to connect to their servers for all of that stuff.
Direct IP connection has nothing to do with authentication and social flows (e.g. names and progress, like the comment you’re replying to mentioned) and would not help with them in the slightest.
You’re mixing things up: the direct connect option for multiplayer, where you put in an IP, has nothing to do with the authentication he’s talking about. Whenever you open a multiplayer game it authenticates you with PSN using the account you have on the PlayStation; then, if that succeeds, it authenticates with the game’s service-servers, which reply with things like your progression in the game, whether someone has sent you a message or a friend request, etc. Modern games are a platform in and of themselves; essentially they have an entire Discord-on-steroids built in, which you’re using before, during and after playing online matches. If PSN is down you can’t authenticate with those servers… I mean, they could allow you to log in with a username and password, but that’s (1) not needed, since PSN is almost never down, and (2) probably against some Sony TOS for releasing games on their platform. So if PSN is down you wouldn’t even be able to get to the multiplayer main screen, so there’s no place where you could input the IP of the game-server you want to connect to.
I’m not defending the system, but it is what it is. Games have organically evolved to have all of these social features, which people do use and like; it makes sense that Sony won’t allow you to go around them and authenticate directly with the game-specific service-servers, and it makes sense that if you’re relying on all of that for login you also rely on it for matchmaking (which is where the IP would come into play). Could it be better? Sure, but there’s no incentive for it to be: PSN is rarely down, and games (at least large ones) take forever to be sunset, by which point there’s almost nobody playing them anyway.
That’s a solved problem; the answer is Calibre. If you want a nicer interface and some other fluff you can install calibre-web as a frontend for it. Calibre-web is very interesting if you have a Kobo e-reader, because you can configure it as your store and have the books you add to Calibre magically appear on the e-reader with a nice download button next to them.
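If it helps, one common way to run calibre-web is the linuxserver.io container, something like this (the paths are examples; point /books at your existing Calibre library):

```bash
docker run -d --name calibre-web \
  -p 8083:8083 \
  -v /srv/calibre/config:/config \
  -v /srv/calibre/library:/books \
  lscr.io/linuxserver/calibre-web:latest
```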