![](https://sh.itjust.works/pictrs/image/d6d748ee-ad58-496c-a059-75d92e724307.jpeg)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
Yes, absolutely. That is a concern that I too share, fellow meat being. We should be vigilant against superior, more capable, and really friendly artificial intelligences.
Even if this manages to pass, it’d only apply to those currently in, or standing for election to, the Senedd. This wouldn’t affect the UK government (and thus Farage) at all, even if he were attempting to get re-elected.
to have this relationship between A and B you have to make a third database
Probably just a mistake here, but you make a third table, not a new database.
Apart from that (and the fact that one-to-many and many-to-one are the same relationship, just viewed from opposite ends), yeah, it looks correct.
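To make the correction concrete, here's a minimal sketch of a many-to-many junction table using Python's built-in sqlite3 (all table and column names here are invented for illustration):

```python
import sqlite3

# Toy schema showing a many-to-many relationship between A and B.
# The "third" thing you create is a third TABLE, not a third database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE b (id INTEGER PRIMARY KEY, name TEXT);
-- Junction table: one row per (a, b) pairing.
CREATE TABLE a_b (
    a_id INTEGER REFERENCES a(id),
    b_id INTEGER REFERENCES b(id),
    PRIMARY KEY (a_id, b_id)
);
""")
cur.execute("INSERT INTO a VALUES (1, 'first'), (2, 'second')")
cur.execute("INSERT INTO b VALUES (10, 'x')")
cur.execute("INSERT INTO a_b VALUES (1, 10), (2, 10)")

# Both rows of A link to the same row of B through the junction table.
rows = cur.execute("""
    SELECT a.name, b.name FROM a
    JOIN a_b ON a_b.a_id = a.id
    JOIN b   ON b.id = a_b.b_id
    ORDER BY a.id
""").fetchall()
print(rows)  # [('first', 'x'), ('second', 'x')]
```

Everything lives in one database; the junction table `a_b` is what carries the relationship.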
Even the question of “who” is a fascinating deep dive in and of itself. Consciousness as an emergent property implies that your gut microbiome is part of the “who” doing the thinking in the first place :))
So, first of all, thank you for the cogent attempt at responding. We may disagree, but I sincerely respect the effort you put into the comment.
The specific part that I thought seemed like a pretty big claim was that human brains are “simply” more complex neural networks and that the outputs are based strictly on training data.
Is it not well established that animals learn through reward circuitry, such as the role dopamine plays in neuromodulation?
While true, this is way too reductive to support a one-to-one comparison with LLMs. Humans have genetic instinct and a body-mind connection that isn’t cleanly mappable onto a neural network. For example, biologists are only just now scraping the surface of the link between the brain and the gut microbiome, which plays a much larger role in cognition than previously thought.
Another example where the brain = neural network model breaks down is the fact that the two hemispheres are much more separated than previously thought. So much so that some neuroscientists are saying that each person has, in effect, 2 different brains with 2 different personalities that communicate via the corpus callosum.
There are many more examples I could bring up, but my core point is that the neural network = brain analogy is just that: a simplistic analogy, on the same level as thinking of gravity only as “the force that pushes you downwards”.
To say that we fully understand the brain, to the point where we could even model a mosquito’s brain (about 220,000 neurons), is, I think, mistaken. I’m not saying we’ll never understand the brain well enough to attempt such a thing, just that drawing a casual equivalence between mammalian brains and neural networks is woefully inadequate.
That’s a strong claim. Got an academic paper to back that up?
This is why I strictly refer to these things as LLMs. That’s what they are.
I’m happy with the Oxford definition: “the ability to acquire and apply knowledge and skills”.
LLMs don’t have knowledge because they don’t actually understand anything. They are algorithmic response generators that assign scores to tokens and emit the highest-scoring token given all previous tokens.
If asked to answer 10*5, they can’t reason through the math. They can only recognize 10, * and 5 as tokens that, in the training data, are usually followed by the token 50. Thus 50 is the highest-scoring token, and that’s the answer it will choose. Things get more interesting when you ask questions that aren’t in the training data. With nothing more direct to copy from, it regurgitates the sequence of tokens that sounds closest to something in the training data: thus, a hallucination.
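The scoring step being described can be sketched in a few lines of Python. This is a toy with entirely made-up scores, not a real model, but it shows the mechanism: scores (logits) over candidate tokens go through a softmax, and greedy decoding picks the winner without any arithmetic ever happening:

```python
import math

# Made-up scores a model might assign to next-token candidates
# after seeing the context "10 * 5 =". No real model here.
logits = {"50": 9.1, "15": 4.2, "105": 3.7, "5": 2.0}

def softmax(scores):
    # Subtract the max for numerical stability, then normalize.
    z = max(scores.values())
    exps = {tok: math.exp(s - z) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)
print(best)  # "50" wins because it scored highest, not because 10*5 was computed
```

If “50” had never followed “10 * 5 =” in the training data, some other token would score highest, and the same mechanism would confidently emit a wrong answer.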
If he has said otherwise I’m open to being corrected, of course.
I went back to double-check what I’d heard from the horse’s mouth. I misunderstood the first time (they had just finished talking about Rwanda). This was in relation to third country processing of migrants.
Labour are promising the biggest expansion of workers’ rights in decades and the most ambitious environmental policies
You made me interested in what exactly they’re promising, so I tracked down their manifesto.
The fines for river and ocean pollution sound long overdue. Hardly revolutionary (every EU country does this), but judging from some headlines I’ve read, it’s definitely needed. There’s also some stuff in there about taxing oil and gas companies. That’s honestly a good thing! I wouldn’t exactly call it incredibly ambitious though.
EDIT: My eyes completely glossed over the “Clean Power by 2030” investment plan somehow. That sounds pretty great, and definitely counts as ambitious. My napkin math says 95 GW of electricity could power about 18 million homes, which according to this is more than half of UK homes. Pretty ambitious.
I couldn’t find anything in there about workers’ rights though. Maybe I missed something?
EDIT2: Why wasn’t Starmer mentioning any of this in the debate, I wonder?
Fair enough. So if I understand you correctly, this isn’t really about Labour having any positive policies, but more that they won’t actively make things much worse.
Btw, on the Rwanda plan, didn’t Starmer say in the debate he’d do that if it complied with international law? EDIT: I’m wrong, he said he’d do third country processing of migrants if it complied with international law.
Not British, but interested in your opinions: isn’t Labour almost as bad as the Tories these days? I watched the Sunak-Starmer debate, and they seemed to just be angrily agreeing with each other on every point that mattered. There was a lot of “Yes, we should do that, but your track record shows you’re not serious about [policy]!”
Ah, I misunderstood then, sorry. But still, even with all the investment in the world, LLMs are a bubble waiting to burst. I have a hunch we will see truly world-altering technology in the next ~20 years (the kind that’d put huge swathes of people out of work, as you describe), but this ain’t it.
There’s a ceiling on capability though, and with LLMs we’re pretty close to it. True artificial intelligence would change the world drastically, but LLMs aren’t the path to it.
You can always refund it. Even if you’ve gone past the 2-hour window for an automatic refund, if you explain the issues in the ticket, Steam has always accepted it in my experience.
I even got a refund for a game after 20 hours of play time because they added aggressive client-side anti-cheat.
There should be an option to allow/disallow votes from non-instance users. That’d be really useful here on sh.itjust.works for the Agora.
I was actually very unsure which of you was right, but the best source I could find was this, and software bought from an online store is specifically exempted from the 14-day cooling-off period.
I guess it would depend on whether remote deactivation would be considered a faulty product?
Actually, that’s not the real reason patents are public. The reason is to allow everyone to freely use the patented invention after it expires.
The tradeoff is supposed to be that the inventor gets exclusive use for about two decades in exchange for detailing exactly how the thing works for everyone else.
The best use case for purchasing FOSS software is contractor work: specific modules for existing platforms and/or FOSS projects. I’ve done that myself in the past. The client pays for the custom software, it’s written, and then they get to do absolutely whatever they want with it. If the client wants to publish it, they’re well within their rights. Most of the time it’s too entangled with their internal company workflow to be useful to anyone else though.
So basically the Lemmy version of Subreddit Simulator, but allowing users as well?