• 1 Post
  • 27 Comments
Joined 1 year ago
Cake day: June 26th, 2023



  • There are a number of major flaws with it:

    1. Assume the paper is completely correct. It has only proved the algorithmic complexity of the problem, but so what? What if the general case is NP-hard, but not the cases we actually care about? That has been true for other problems; why not this one?
    2. It proves something in a model. So what? You would still need to show that the result applies to the real world.
    3. Replace “human-like” with something trivial like “tree-like”. Does the paper then prove that we’ll never achieve tree-like intelligence?

    IMO there are also flaws in the argument itself, but those are more relevant.
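    Point 1 can be made concrete with a classic example (my own illustration, not from the paper): SAT is NP-hard in general, yet the Horn-clause special case is decidable in polynomial time by simple unit propagation.

```python
def horn_sat(clauses):
    """Decide satisfiability of Horn clauses in polynomial time.

    Each clause is (body, head): the clause (a AND b -> c) is written
    ({'a', 'b'}, 'c'); a head of None means the body must not all be true.
    Returns a satisfying set of true variables, or None if unsatisfiable.
    """
    true_vars = set()
    changed = True
    while changed:                       # at most one pass per variable
        changed = False
        for body, head in clauses:
            if body <= true_vars:        # all premises currently hold
                if head is None:
                    return None          # forced contradiction: unsatisfiable
                if head not in true_vars:
                    true_vars.add(head)  # unit propagation
                    changed = True
    return true_vars
```

    For example, `horn_sat([(set(), 'a'), ({'a'}, 'b')])` returns `{'a', 'b'}`, while adding the constraint `({'a', 'b'}, None)` makes it return `None`. The instances that make SAT NP-hard simply never arise in this restricted case, which is exactly the "NP-hard in general, easy where it matters" situation.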


  • This is a silly argument:

    […] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

    That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

    ‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

    That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

    EDIT: From the paper:

    The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

    That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it doesn’t mean it has any relationship to the real world.


  • Oh also, it was kind of strange how little he mentioned AI. I like how he wrote that the AI wasn’t trusted to be a Swordholder because it was “too logical”. Completely missed the boat on ChatGPT: “Write a play in which Harry Potter presses the button to destroy humanity, and then act it out”, or something like that, would be all it takes.

    It was also strange that hundreds of years in the future, humans are still doing hard manual labor. Where are the robots?


  • You don’t need a level 2 charger at home. You don’t need gas-station equivalents. EV companies won’t need to build infrastructure, because we’ve already built tons of infrastructure for EVs: it’s called the electric grid. Everywhere has electricity. I was recently on vacation in a very remote area in my EV, and just plugged my car into a regular outlet to charge it up. To get there, I stopped for lunch and plugged the car in at a supercharger while I ate.

    Target is putting in superchargers at lots of their locations around me. Other places are or will follow suit. If you can’t charge at home, you’ll simply stop by the store/mall/whatever, do your normal shopping, and have your car charge in the meantime. Or you’ll charge at work, or any number of other places.

    EVs aren’t hard; they just require a mindset shift. People worry about this and that, but only because they haven’t actually tried one and have given too much weight to the FUD spread about EVs.
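    For a sense of scale, here is a back-of-envelope calculation of what a regular outlet delivers overnight. All the numbers are illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope: range added by overnight charging on a regular outlet.
# Every number below is an illustrative assumption.
volts = 120                 # standard North American household outlet
amps = 12                   # typical continuous draw allowed on a 15 A circuit
hours = 10                  # parked overnight
efficiency_km_per_kwh = 6   # ballpark for a mid-size EV

kwh_added = volts * amps * hours / 1000
km_added = kwh_added * efficiency_km_per_kwh
print(f"{kwh_added:.1f} kWh -> ~{km_added:.0f} km of range")  # 14.4 kWh -> ~86 km
```

    Roughly 85 km of range per night from an ordinary outlet covers most daily commutes, which is why a level 2 charger is a convenience rather than a necessity.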



  • I charge my car off a regular outlet outside in a very cold climate, and slow charging like that will likely help the battery last quite a while. The only way to find out for sure is to wait, but it has been 4 years and the battery hasn’t lost any capacity. My car also has a 320 km range, so even in your scenario, if you charged 50 km away and drove home, you’d still have 270 km of range.

    I think you may have given too much weight to FUD about EVs from companies that would like to see them fail. I’ve seen a lot of concerns posted online that just don’t practically matter, once you actually try it. There’s also some really nice minor things about owning an EV, like not having to breathe in toxic fumes when walking around the car. Especially nice if you have kids that are right at the level of the tailpipe.

    It is also fine to wait a bit, of course. In my area chargers are springing up in lots of places, and I think we’re not far off from a tipping point away from ICE cars, which will spread even to rural areas pretty quickly when gas stations start becoming unprofitable.



  • It doesn’t have to be a random person claiming that the first image is fake. Your private keys could get leaked, and the attacker waits until you’re on vacation in a remote area without wifi/cell, then publishes an image and says “oh, I got wifi for a bit and published this”. You get back from vacation, see the fake image, and claim that you didn’t have any wifi/cell service the whole time and couldn’t have published it. Why should people trust you? Swap “vacation” for “war zone” if you’d like a more relevant example. Right now many people in Gaza or Ukraine don’t exactly have reliable internet access, and that’s exactly the sort of situation where you’d want to be able to verify images.

    Alternatively as I put in another comment, if it’s got the ability to publish stuff straight from the camera, it’s got the ability to be hacked and publish a fake image, straight from the camera.

    Publishing things on the blockchain adds nothing here. The tech isn’t telling anyone anything useful, because the map is not the territory.

    These are not implausible scenarios. They wouldn’t happen every day, because they’re valuable attack vectors that would be saved for the moment when they really matter, which is the worst possible time to incorrectly trust something.




  • “original image’s timestamp has already been published”

    “Oh the incorrect information was published, here’s the correct info”. Again, the map is not the territory.

    “the whole point of this technology is to remove the need for that trust.”

    And it utterly fails to achieve that here. I’ll put it another way: You have this fancy camera. You get detained by the feds for some reason. While you’re detained, they extract your private keys and publish a doctored image, purportedly from your camera. The image is used as evidence to jail you. The digital signature is valid and the public timestamp is verifiable. You later leave jail and sue to get your camera back. You then publish the original image from your camera that proves you shouldn’t have been jailed. The digital signature is valid and the public timestamp is verifiable. None of that matters, because you’re going to say “trust me, bro”. Introducing public signatures via the blockchain has accomplished absolutely nothing.

    You’re trying to apply blockchain inappropriately. The one thing that publishing like this does is prove that someone knew something at that time. You can’t prove that only that person knew something. You can prove that someone had a private key at time X, but you cannot prove that nobody else had it. You can prove that someone had an image with a valid digital signature at time X, but you cannot prove that it is the unaltered original.
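    The key-possession point can be sketched in code. Using HMAC as a stand-in for a real signature scheme (the key and messages below are hypothetical), a forged message signed with a leaked key verifies exactly as well as the genuine one:

```python
import hashlib
import hmac

# Hypothetical camera signing key that has leaked to an attacker.
leaked_key = b"camera-private-key"

def sign(message: bytes, key: bytes) -> bytes:
    """HMAC-SHA256 as a stand-in for the camera's signature scheme."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign(message, key), signature)

genuine = sign(b"original photo bytes", leaked_key)
forged = sign(b"doctored photo bytes", leaked_key)  # attacker, same key

# Both verify: the math proves possession of the key, not who possessed it.
print(verify(b"original photo bytes", genuine, leaked_key))  # True
print(verify(b"doctored photo bytes", forged, leaked_key))   # True
```

    Nothing in the verification step can distinguish the camera owner from anyone else holding the key, and a public timestamp only records when a signature appeared, not who produced it.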


  • The evil maid could take a copy of a legitimate image, modify it, publish it, and say that the original image was faked. If there’s a public timestamp of the original image, just say “Oh, hackers published it before I could, but this one is definitely the original”. The map is not the territory, and the blockchain is not what actually happened.

    Digital signatures and public signatures via blockchain solve nothing here.