• 6 Posts
  • 102 Comments
Joined 3 years ago
Cake day: June 19th, 2023

    • Bootstrap 2-5 sucked, but it was great for throwaway internal tools that would never become public.
    • Foundation died to bootstrap.
    • Pure was made by Yahoo and went unmaintained, so some other people wanted something like Pure but also wanted to monetize it; then Pure got re-maintained by its community.
    • Tailwind isn't great. It sold devs on a lie: limit choices to limit mistakes. But limiting choices also restricted the features devs needed to do their projects properly. They tried to figure out ways to add those features back, but suffered gradual scope creep while trying to maintain the original lie, and it has ended up more complicated than it needed to be while still being less featureful than what devs need. Instead of making everything a utility class, what they should have done was keep the build system (Lightning CSS), add a token system (e.g. Open Props, or their own version of it), let people write their own CSS instead of trying to shoehorn it into a class, and then go out of their way to teach people how to make things as classless as possible.


  • Yeah, I’m the lead on a bunch of websites that all have to be localised.

    There are a lot of weird footguns to watch out for, and a lot of retraining for devs who are used to only working with a single language/locale.

    The two biggest head scratchers I've had to deal with are computers treating "fr-FR" differently from "fr-fr" (due to file system case-sensitivity differences between developers), and having to undo the coded assumption that languages and locales follow an [a-z]{2}-[a-z]{2} pattern (e.g. "en-gb") once we stumbled upon Latin American Spanish: "es-419".
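
    A minimal sketch of one way to normalise tags that avoids both problems (assuming a runtime with the Intl API):

    // Canonicalise BCP 47 tags so "fr-fr" and "fr-FR" can't diverge on disk.
    const tag = Intl.getCanonicalLocales("fr-fr")[0]; // "fr-FR"

    // Regions aren't always two letters: UN M49 area codes like 419 are valid too.
    const locale = new Intl.Locale("es-419");
    console.log(locale.language); // "es"
    console.log(locale.region);   // "419"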


    EDIT: My left ear really loves the Erlang talk.

    This fixes it:

    // Route the <video>'s audio through the Web Audio API...
    const audioContext = new AudioContext();
    const source = audioContext.createMediaElementSource(document.querySelector("video"));
    // ...and force the destination down to mono so the left-only track reaches both ears.
    audioContext.destination.channelCount = 1;
    source.connect(audioContext.destination);
    



  • I’m a:

    • Gamer
    • Full stack web dev
    • Android/iOS/macOS/Windows Dev

    So I have a lot of machines.


    Machine 1

    • Purpose: macOS/iOS app builder/publisher
    • Usage: 100% work
    • Location: Work
    • OS: Modified macOS Sequoia
      • Sequoia to avoid the glass interface disaster that Apple released
      • Uses a custom window manager built in Hammerspoon because fuck macOS's window management
      • Modified firmware so Caps + IJKL = Arrows
    • Shell: ZSH
    • IDE: VSCode

    Machine 2

    • Purpose: Personal computer
    • Usage: 90% games / 10% work
    • Location: Home
    • OS: Modified Windows 11
      • All the ads and AI bloat are removed, but it takes increasing effort to keep them gone
    • Shell: ZSH through WSL Ubuntu
    • IDE: VSCode

    Machine 3:

    • Purpose: do everything on the go
    • Usage: 50% games / 50% work
    • Location: Wherever
    • OS: Modified Windows 11
      • All the ads and AI bloat are removed, but it takes increasing effort to keep them gone
    • Shell: ZSH through WSL Ubuntu
    • IDE: VSCode

    Machine 4:

    • Purpose: Disposable environments to test new things
    • Usage: 100% work
    • Location: Work
    • OS: Kubuntu 25.10 (the current Plasma version is great so far)
    • Shell: ZSH
    • IDE: VSCode

    Also:

    • Android Tablets
    • Android Phones
    • iPads
    • iPhones

    Future:

    • Helix
      • I want to learn Helix’s keyboard workflow
      • Helix’s lack of extensions has held me back.
        • The Helix team has been working on a plugin system for a while though, and I'll re-evaluate once it lands and the community has built the extensions I need.
      • Zed has some Helix commands, so I may switch to it from VSCode to get Helix commands + extensions.
    • OSs
      • I want to reduce my Windows 11 maintenance
      • I'm held back by anti-cheat games (PUBG, then Helldivers 2; I'll try Arc Raiders these holidays, and potentially Marathon next year)
      • I’ll experiment with KDE / Cosmic / Niri in 2026.
      • If no anti-cheat games have captured my attention in 2027, I’ll switch another one of my personal machines to Linux



  • Serious question for anyone who actually uses Bun:

    Why are you using Bun instead of Deno or Node?

    If you had asked me 10 years ago what the biggest problems with JS as a whole were, I would have said:

    1. Poor type safety

    2. No standard library, which leads people into dependency hell

    3. Poor security (installing a project should not even allow the possibility of key stealing or ransomware)

    4. No ergonomic immutable data structures with fast equality checks at runtime (it looked like this would be resolved by the Records and Tuples proposal, but that was withdrawn and discussion is continuing in the Composites proposal; there's a quick illustration after this list)
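
    Here's a minimal sketch of what I mean by point 4, using nothing but today's JS:

    // Plain objects and arrays compare by reference, not by value:
    console.log([1, 2, 3] === [1, 2, 3]); // false
    console.log({ a: 1 } === { a: 1 });   // false

    // The withdrawn Records & Tuples proposal would have given us immutable
    // #[1, 2, 3] / #{ a: 1 } values with fast, built-in value equality.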


    Today I consider point 1 mostly resolved and point 4 a problem for TC39 and engine implementers, and not resolvable by runtimes themselves.

    That leaves us with problems 2 and 3.

    I see Node having poor solutions for 2 and 3.

    I see Bun having poor solutions for 2 and 3.

    I see Deno having great solutions for 2 and 3.
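
    To make 2 and 3 concrete, this is roughly the shape of Deno's answer (the host and module below are just examples):

    // 2: an explicit standard library import instead of a tree of npm dependencies.
    import { assertEquals } from "jsr:@std/assert";

    // 3: nothing is granted by default; permissions come from flags like
    //    `deno run --allow-net=example.com main.ts`, and code can query them.
    const net = await Deno.permissions.query({ name: "net", host: "example.com" });
    assertEquals(["granted", "prompt", "denied"].includes(net.state), true);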


    As far as I can tell, people have chosen Bun for either hype or speed reasons.

    Hype doesn’t seem like an important reason to choose Bun since it’s always fleeting and there’s enough investment in the industry to keep each runtime going for a long time.

    I do see speed being a moderate issue with JS, but that’s mainly due to:

    • dependency install times which should be a one time cost, and which can be reduced anyway by using a standard library

    • slow framework slop, which isn’t really a runtime issue.

    So I’m not sure speed fits as a reason for choosing Bun.

    I’m not sure what the other reasons are, but I’m genuinely curious.

    If you're using Bun in projects today, why have you chosen Bun?







  • I'll repost my comment from yesterday about this very thing (link: https://news.ycombinator.com/item?id=45279384#45283636)

    It's also too early to worry about DOM APIs over wasm instead of JS.

    The whole problem with the DOM is that it has too many methods which can’t be phased out without losing backwards compatibility.

    A new DOM API for wasm would be better off starting with a reduced surface of only the good data structures and operations.

    The problem is that the DOM is still improving (even today); it hasn't stabilized, so we don't have that reduced set to draw from, and if you were to draw a line in the sand and say "this is our reduced set", it would no longer be what developers want within a year or two.

    New DOM stuff is coming out all the time; even right now there are two features landing that could completely change the way developers want to build applications:

    • being able to move DOM nodes without having to destroy and recreate them. This lets you keep the state inside that node intact, such as a video that keeps playing without being unloaded and reloaded. Now imagine that state being kept across the threshold of a multi-page view transition. (There's a small sketch of this after the list.)

    • the improved attr() API, which can move a lot of an app's complexity from the imperative side to the declarative side. Imagine a single CSS file that allows HTML content creators to dictate their own grid layouts, without needing to generate every possible grid layout at build time.
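
    A rough sketch of the first one, assuming the new atomic move API (moveBefore) that's starting to ship in browsers; the #sidebar target is just a made-up example, and the feature check matters because support is still limited:

    const video = document.querySelector("video");
    const sidebar = document.querySelector("#sidebar"); // hypothetical new parent
    if ("moveBefore" in sidebar) {
      // Atomic move: re-parents the node without the remove/insert cycle,
      // so playback position, focus, iframe state, etc. survive.
      sidebar.moveBefore(video, null); // null reference node = append at the end
    } else {
      sidebar.append(video); // old behaviour: the <video> unloads and reloads
    }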

    And in the near future, things are moving towards allowing HTML modules, which could be used with new web component APIs to remove the need for frameworks in large applications.

    Also, language features can inform API design. Promises were added to JS after a bunch of DOM APIs were already written, and now promise-based APIs can be made abortable via AbortSignal. Wouldn't we want the new reduced API set to also be built on abortable promises? Yes we would. But if we wait a bit longer, we could also take advantage of newer language features being worked on in JS, like structs and deeply immutable data structures.
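
    For example, the cancellation pattern that newer promise-based APIs like fetch already use, and which a reduced DOM API could adopt from day one (the URL here is just a placeholder):

    const controller = new AbortController();

    fetch("/data.json", { signal: controller.signal })
      .then((res) => res.json())
      .catch((err) => {
        if (err.name === "AbortError") console.log("request cancelled");
      });

    // One call later aborts the in-flight work and rejects the pending promise:
    controller.abort();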

    TL;DR: It's still too early to work on a DOM API for wasm. It's better to wait for the DOM to stabilize first.


  • I paid for it too!

    It’s the first piece of shareware I actually went out of my way to pay for because it was so good that I’d be genuinely pissed off if it died. I’d probably end up switching to pijul or something else for my projects if it ever did.


    I've seen a bunch of people in the Fork forums messing around with getting the Windows version running in Linux, so it may be usable there in an unsupported capacity.


  • Yeah, I use it when ssh’d into a server, but it’s just so awkward to use.

    Sometimes it just really doesn’t want to separate a hunk. Other times you want to stage all lines except one, and you have to do a million splits just to target the lines you want to keep.

    It’d be far easier if you could just select the lines you want to affect. It’s literally the first feature shown in lazygit’s readme. I think half the reason that people use lazygit is that partial commits are so awkward to perform in most other clients.

    Luckily Fork does it as well as lazygit.


  • Fork !!!

    It’s hands down the best git client.

    It's free as in Sublime Text or WinZip: it asks you once a month if you want to pay for it, but you can just select "I'm still trying it out" and it gets out of your way.

    • It's got a well-designed tree graph like GitKraken's, except it doesn't lag
    • Its interactive rebasing is as smooth as JJ / LazyGit, so you can edit/rename/reorder your commits, except you don't have to remember CLI flags since it has its own UI
    • It lets you commit individual lines by selecting them instead of adding/removing whole hunks like Sourcetree, except it isn't filled with paper cuts where a feature breaks in an annoying way for 2 years and you have to do extra steps to keep using it how you want.

    And one killer feature that I haven’t seen any other git clients handle: allowing me to stage only one side of the diff. As in: if I change a line (so it shows up as one removed line and one new line in git), I can decide to add the new line change while still keeping the old line.

    So changing this:

    doThing(1);
    

    into this:

    doThing(2);
    

    Shows up in git as:

    - doThing(1);
    + doThing(2);
    

    But if I still want to keep doThing(1);, I don't have to go back into my code to retype doThing(1);, or do any manual copy-pasting. I can just highlight and stage only the added doThing(2); line and discard the removal of doThing(1);.

    So now the code exists as:

    doThing(1);
    doThing(2);
    

    Now with a one-liner example like this, we could always re-enter the code again. But for larger code changes? It’s far easier to just highlight the code in the diff and say: yes to this and no to the other stuff.

    And when you get used to it, it makes it really easy to split what would be large git commits into smaller related changes keeping your git history clean and easy to understand.


  • The first problem is they’re letting AI touch their code.

    The second problem is they're relying on a human to pick up changes in moved code while using git's built-in diff tools. There are a whole bunch of studies that show how terrible git's diff algorithms are, and how swapping to newer diff algorithms improves things considerably.

    TL;DR on the studies:

    • Only supporting add/remove/move operations is really bad.
    • Adding syntax awareness, to decide whether differences in indentation should be brought to a reviewer's attention, improves code reviews and makes them more accurate. (But this is hard because it's language-dependent.)
    • Adding extra operations (indent/deindent/move/rename-symbol/comment/un-comment/etc…) makes code review easier, faster and more accurate. (But again, most of this requires syntax awareness.)

    There’s also a bunch of alternative diff algos you can use, but the best ones are paid, and the free ones have fewer features. See: