  • There is absolutely no way you’re using an LLM to rewrite the Linux kernel in any way. That’s not what they do, and whatever it produces wouldn’t be a fraction as effective as the current kernel.

    They’re text prediction machines. That’s it. Markov generators on steroids.
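
    To make the “Markov generators on steroids” analogy concrete, here’s a minimal sketch of a word-level Markov text generator in Python (the tiny corpus is made up for illustration). An LLM does the same basic job, predicting the next token from context, just with a learned neural network over a huge context window instead of a lookup table of word pairs:

    ```python
    import random
    from collections import defaultdict

    def build_chain(corpus: str) -> dict:
        """Map each word to the list of words observed right after it."""
        chain = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain: dict, start: str, length: int = 10) -> str:
        """Pure next-word prediction: repeatedly sample an observed successor."""
        out = [start]
        for _ in range(length):
            successors = chain.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = "the kernel is free software and the kernel is not an LLM"
    print(generate(build_chain(corpus), start="the"))
    ```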

    I’d also be curious where that 15-20% productivity increase comes from in aggregate. It’s an extremely misleading statistic. The truth is there is no consensus data on aggregate productivity improvements from LLMs today; anything anyone cites is made up. It also doesn’t account for the additional bugs and issues caused by LLMs, which are significant, and not something you want happening on every PR that touches kernel code, I promise.

    Regardless of all that, the companies behind these LLMs are using free software to train models they make money on, without making those models free and open source or providing a way for free/open source projects to use them. That’s a clear violation of every FOSS license model I’m familiar with (most commonly the Apache license).

    TL;DR: they are stealing code that was meant to stay free and public, along with any derivative works, profiting off it, and then refusing to honor the license of the code/projects they stole.

    This is illegal. The only reason we’re not hearing more about it is that these FOSS projects generally have no money and aren’t going to sue, since they could lose a substantial chunk of their negligible funds in court. That’s it. Otherwise, what these companies are doing is very illegal. It’s the sort of thing the legal team at any professional software company warns you about the second you start using an OSS project in your for-profit application codebase.

    LLMs get away with it because $$$$$$$$$$$$$$$$$. That’s it.

    Edit: added a link to a security article about LLMs


  • Essentially all FOSS software is under an OSS license of some sort, which allows anyone to re-use the code or software as long as whatever re-uses it also remains free and open source, or at least carries a license at least as open/permissive as the original work.

    LLM companies ignore that, hide the result behind a subscription, and use the code to train models sold to soulless corporate entities who will never allow their code into the FOSS world, thus breaking the contract.

    It’s not even an implicit contract; it’s explicit. LLM companies are ignoring it and using their investment money to squash any FOSS projects that want to challenge them on it in court.
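
    To see how explicit that contract is: FOSS source files typically declare their license right in the header, often with an SPDX identifier (the Linux kernel does this in its C files). A hypothetical example, sketched here in Python for illustration:

    ```python
    # SPDX-License-Identifier: GPL-2.0-only
    # Copyright (C) 2025 Example Project contributors
    #
    # Per the GPL, any distributed derivative work that includes this file
    # must itself be made available under GPL-2.0 terms. That requirement
    # is the explicit contract described above.

    def example() -> None:
        pass
    ```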


  • That’s not what I said.

    I said it’s not any more AI than things in the 90’s were. I didn’t say we haven’t improved things since then.

    Neural networks and GPUs alone are huge improvements to the paradigm and design that allow for LLMs to exist.

    They’re still as far from real AI as the chatbots in the 90’s were.

    Again, they are a vast, vast improvement over those in ways that nobody in the 90’s could have ever predicted. Nobody even knew what a neural network was or how to make one back then (I mean, a few researchers were working on it, to be fair, but we didn’t have the hardware to do much more than posture).

    We’re still light years away from real AI. LLMs do not bring us closer. They solve a different problem.



  • Your feelings and opinion are wrong in this case.

    They could mislead people into sharing your opinion/feeling, and then you’d all be wrong.

    You’re getting downvoted because you’re wrong, and you’re contributing the opposite of a benefit to a conversation about Signal’s security without any facts or proof other than your “gut”.

    That is not upvote worthy. People are correct to downvote your comment to let others know that they shouldn’t take it with any degree of seriousness. That’s how this works. That’s how the whole comment voting system is supposed to work.

    Your feelings are not special when they muddy the waters of facts.



  • If you’ve tried to build chatbots before, you’ll quickly understand how impressive LLMs are.

    We essentially solved the problem of a chatbot sounding human and having reasonably intelligent things to say by throwing insane amounts of hardware at it. That really wasn’t possible until now.

    The algorithms are impressive, but still naive compared to what people believed AI would be.

    This is not AI any more than the chatbots from the 90’s were.

    This is just the best chatbot from the 90’s we’ve made so far.
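
    For contrast, here’s roughly what a 90’s chatbot looked like under the hood: hand-written pattern/response rules, ELIZA-style (the rules below are invented for illustration). An LLM swaps this lookup for learned next-token prediction at enormous scale, but the job, producing a plausible reply to the last message, is the same:

    ```python
    import re

    # Hand-authored rules, ELIZA-style: the first pattern that matches wins.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
    ]

    def reply(message: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(*match.groups())
        return "Tell me more."  # canned fallback when nothing matches

    print(reply("I feel like LLMs are overhyped"))
    # -> Why do you feel like LLMs are overhyped?
    ```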