The issue with “Human jobs will be replaced” is that society still requires humans to have a paying job to survive.
I would love a world where nobody had to do dumb labour anymore and everyone’s needs were still met.
What part of “we paid these guys and they said we’re fine” do you not get? Why would they choose, pay, and release the results from a firm they didn’t trust to clear them?
I’m not saying it’s rotten, but the fact that the third party was unilaterally chosen by and paid for by LMG makes all the results pretty questionable.
It’s hard to trust a firm that is explicitly being paid by the company they’re investigating. I could be convinced that they are actually a neutral third party and that their investigation was unbiased if they had a track record of finding fault with their clients a significant portion of the time. (I haven’t done the research to see if that’s the case.)
However, you have to ask yourself - how many companies would choose to hire a firm which has that track record? Wouldn’t you pick one more likely to side with you?
The way to restore credibility is to have an actually independent third party investigation. Firm chosen by the accuser, perhaps. Or maybe something like binding arbitration. Even better, a union that can fight for the employees on somewhat even footing with the company.
The fundamental difference is that the AI doesn’t know anything. It isn’t capable of understanding, and it doesn’t learn in the same sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based, essentially, on statistics - nothing more, nothing less.

It doesn’t know what it’s saying, nor does it understand the subject matter, what a human is, or what a hallucination is or why it has them. They are fundamentally incapable of even perceiving the problem, because they do not perceive anything aside from text in and text out.
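The “guess the next word from statistics” idea can be sketched with a toy bigram counter. This is an illustration only, not how a real LLM works internally (those use neural networks trained on enormous corpora), but the objective is the same: given what came before, emit the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always predict the most frequent follower. No understanding
# involved - just lookup tables built from observed frequencies.
training_text = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Return the statistically most common continuation of `prev`.
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # "cat" - it followed "the" most often above
```

The model has no idea what a cat is; it only knows that the token “cat” frequently followed the token “the” in its training data. Scale that intuition up by many orders of magnitude and you have the gist of the criticism in the comment above.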
I don’t know about the regulatory side, but Boeing gutted their experienced engineering corps starting about 10 years ago. In the pursuit of profit of course. I think we’re seeing the effects of that finally coming to the fore.
My understanding of the role of the regulatory agencies in cases like this is that they can ground a model of plane if they believe there’s a systemic issue, as we saw with the MAX.
NFTs do not solve the problem of proof of ownership, nor can they. If someone steals one from you - whether by trickery, force, or any other means - it’s just as lost to you as any other stolen thing, digital or physical. (Not to mention the fact that NFTs to date have mostly just been URLs pointing at web-hosted media, i.e. ridiculously non-unique and insecure.)
Also, your whole paragraph about theoretical NFT replacement for DRM is just describing a different kind of DRM.
Agreed. Don’t make a threat - just make the GDPR complaint. Inform the company if you want. How many times have you remembered to follow up on one of those threats to see if you should still make a complaint?
It’s not infantilization. These bills are designed to prevent “one more hoop” design, where the company makes it just annoying enough that people give up on unsubscribing. Your position assumes good-faith behaviour by the company running the newsletter. That is absolutely not a given.
In any industrial context, “robot” is short for “robotic arm” - the machines you see in footage of automotive factories.
They also don’t have any kind of AI. It’s just a regular (if specialized) computer in control.
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.