That’s a great line of thought. Take an algorithm of “simulate a human brain”. Obviously that would break the paper’s argument, so you’d have to find why it doesn’t apply here to take the paper’s claims at face value.
There are a number of major flaws with it:
IMO there are also flaws in the argument itself, but those are more relevant
This is a silly argument:
[…] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’
That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
EDIT: From the paper:
The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
Oh also, it was kind of strange how little he mentioned AI. I like how he wrote that the AI wasn’t trusted to be a Swordholder because it was “too logical”. Completely missed the boat on ChatGPT. “Write a play in which Harry Potter pressed the button to destroy humanity, and then act it out” or something like that would be all it takes.
It was also strange that hundreds of years in the future, humans are still doing hard manual labor. Where are the robots?
IMO Hacker News handles this better. Threads/comments are rarely deleted; they're mostly minimized, and you have to log in to expand them
Not sure if this is my app or a Lemmy setting, but I only see it once, because it’s properly cross-posted
I couldn’t view this with Firefox or GNOME. ImageMagick to the rescue, though:
convert https://pub-be81109990da4727bc7cd35aa531e6b2.r2.dev/weofihweiof.jpg meme.jpg
There’s a good chance there will be a virtuous cycle, where the Steam Deck’s popularity makes it easier to game on Linux for regular PC users too, which will help out everyone gaming on Linux. Especially as Microsoft keeps dicking around with Windows and trying to turn it into a subscription OS and people just get sick of it.
I have no idea, and that’s kind of my point. I’ve never bothered checking, because charging off of a regular outlet is enough for me, and it will be enough for a lot of other people too.
Those questions really don’t need to be asked. I charge off of a regular outlet, level 2 charging at home is nice but unnecessary. If those questions keep coming up, it’s likely from dealers that are fearmongering.
You don’t need a level 2 charger at home. You don’t need gas stations equivalents. EV companies won’t make infrastructure, because we’ve already built tons of infrastructure for EVs and it’s called the electric grid. Everywhere has electricity. I was recently in a very remote area for vacation in my EV, and just plugged my car into a regular outlet to charge it up. To get there, I stopped for lunch and plugged my car in at a supercharger while I ate.
Target is putting in superchargers at lots of their locations around me. Other places are or will follow suit. If you can’t charge at home, you’ll simply stop by the store/mall/whatever, do your normal shopping, and have your car charge in the meantime. Or you’ll charge at work, or any number of other places.
EVs aren’t hard, they just require a mindset shift. People worry about this and that, but it’s because they haven’t actually tried it and have given too much weight to FUD spread about EVs.
I’m not worried about that, but I’ve seen some more cautious people get the cable underneath one of their wheels so that you’d have to move the car to take it. I’m quite sure you could also find another way of attaching or securing it to your car to make it fairly difficult to walk away with. The chargers also aren’t really worth much, so it seems unlikely that even someone desperate for cash would put much effort into it.
I charge my car off of a regular outlet outside in a very cold climate, and slow charging like that is actually likely to help the battery last quite a while. The only way to find out for sure is to wait, but after 4 years the battery hasn’t lost any capacity. My car also has a 320 km range, so even in your scenario, if you charged 50 km away and drove home, you’d still have 270 km of range.
I think you may have given too much weight to FUD about EVs from companies that would like to see them fail. I’ve seen a lot of concerns posted online that just don’t practically matter, once you actually try it. There’s also some really nice minor things about owning an EV, like not having to breathe in toxic fumes when walking around the car. Especially nice if you have kids that are right at the level of the tailpipe.
It is also fine to wait a bit, of course. In my area chargers are springing up in lots of places, and I think we’re not far off from a tipping point away from ICE cars, which will spread even to rural areas pretty quickly when gas stations start becoming unprofitable.
There’s no goalpost-shifting, the evil maid is still getting your keys. I’m not sure what you’re not getting here.
The point is that the system is useful for exactly nobody, because you still have to trust that someone hasn’t had their private keys compromised via an evil maid attack, and publishing timestamps on a blockchain is irrelevant to the problem.
It doesn’t have to be a random person claiming that the first image is fake. You could get your private keys leaked, and then the attacker waits until you’re on vacation in a remote area without wifi/cell, and then they publish an image and say “oh, i got wifi for a bit and published this”. You then get back from vacation, see the fake image and claim that you didn’t have any wifi/cell service the whole time and couldn’t have published an image. Why should people trust you? Switch out vacation for “war zone” if you’d like for a relevant example. Right now many people in Gaza or Ukraine don’t exactly have reliable ways to use the internet, and that’s exactly the sort of situation where you’d want to be able to verify images.
Alternatively as I put in another comment, if it’s got the ability to publish stuff straight from the camera, it’s got the ability to be hacked and publish a fake image, straight from the camera.
Publishing things on the blockchain adds nothing here. The tech isn’t telling anyone anything useful, because the map is not the territory.
These are not implausible scenarios. They wouldn’t happen every day because they’re valuable attack vectors, but they’re 100% possible and would be saved to be used at the right time, like when it really matters, which is the worst possible time to incorrectly trust something.
The “incorrect information” is provably published before the supposed “correct information” was.
Rephrased, some information was published before some other information. Sure, that’s provable, but what of it? How do you know which is correct and which isn’t? You’re back to trust.
Physical access means all bets are off, but it’s not required for these attacks. If it’s got a way to communicate with the outside world, it can get hacked remotely. For example here’s an attack that silently took over iphones without the user doing anything. That was used for real to spy on many people, and Apple is pretty good at security. Most devices you own such as cameras with wifi will likely be far worse security-wise.
original image’s timestamp has already been published
“Oh the incorrect information was published, here’s the correct info”. Again, the map is not the territory.
the whole point of this technology is to remove the need for that trust.
And it utterly fails to achieve that here. I’ll put it another way: You have this fancy camera. You get detained by the feds for some reason. While you’re detained, they extract your private keys and publish a doctored image, purportedly from your camera. The image is used as evidence to jail you. The digital signature is valid and the public timestamp is verifiable. You later leave jail and sue to get your camera back. You then publish the original image from your camera that proves you shouldn’t have been jailed. The digital signature is valid and the public timestamp is verifiable. None of that matters, because you’re going to say “trust me, bro”. Introducing public signatures via the blockchain has accomplished absolutely nothing.
You’re trying to apply blockchain inappropriately. The one thing that publishing like this does is prove that someone knew something at that time. You can’t prove that only that person knew something. You can prove that someone had a private key at time X, but you cannot prove that nobody else had it. You can prove that someone had an image with a valid digital signature at time X, but you cannot prove that it is the unaltered original.
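To make the distinction concrete, here’s a minimal sketch of what a timestamp commitment on a public ledger actually proves. It uses plain SHA-256 and a Python list as stand-ins for the real signature scheme and blockchain; all names here are illustrative, not any real product’s API.

```python
import hashlib
import time

def publish(ledger, data):
    """Append a (hash, timestamp) commitment to an append-only ledger.

    This proves these exact bytes existed no later than the recorded
    time. It does NOT prove who produced them, or that they depict
    reality.
    """
    entry = (hashlib.sha256(data).hexdigest(), time.time())
    ledger.append(entry)
    return entry

def verify(ledger, data):
    """Check whether a commitment to these exact bytes is on the ledger."""
    digest = hashlib.sha256(data).hexdigest()
    return any(d == digest for d, _ in ledger)

ledger = []
doctored = b"doctored image bytes"
original = b"original image bytes"

publish(ledger, doctored)   # attacker with stolen keys publishes first
publish(ledger, original)   # real owner publishes later

# Both commitments verify equally well. The ledger fixes the order of
# publication, but deciding which image is "true" is still trust.
assert verify(ledger, doctored) and verify(ledger, original)
```

Both entries check out cryptographically, which is exactly the problem: ordering is provable, truth isn’t.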
The evil maid could take a copy of a legitimate image, modify it, publish it, and say that the original image was faked. If there’s a public timestamp of the original image, just say “Oh, hackers published it before I could, but this one is definitely the original”. The map is not the territory, and the blockchain is not what actually happened.
Digital signatures and public signatures via blockchain solve nothing here.
Not sure how ollama integration works in general, but these are two good libraries for RAG:
https://github.com/facebookresearch/faiss
https://pypi.org/project/chromadb/
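For context, here’s a minimal sketch of the retrieval step those libraries implement, using plain cosine similarity in place of faiss/chromadb. The vectors are hard-coded toy values; in a real setup they would come from an embedding model (e.g. one served by ollama), and the document structure here is purely illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, k=2):
    """Return the texts of the k docs whose vectors best match the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

docs = [
    {"text": "cats", "vec": [1.0, 0.0]},
    {"text": "dogs", "vec": [0.9, 0.1]},
    {"text": "stocks", "vec": [0.0, 1.0]},
]
print(retrieve([1.0, 0.05], docs, k=2))  # the two animal docs rank first
```

faiss and chromadb do essentially this, just with approximate-nearest-neighbor indexes so it stays fast at millions of vectors; the retrieved texts then get stuffed into the LLM prompt.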