• 0 Posts
  • 13 Comments
Joined 23 days ago
Cake day: June 8th, 2024




  • justaderp@lemmy.world to aww@lemmy.world · Cow sleeping in someone's lap. (edited 5 days ago)

    I know it’s just a joke. But, black and brown bears are very intelligent and quite peaceful creatures. I’ve run into forty or fifty in the wilderness. I’ve never once felt the bear was considering an attack. They’re smart enough to recognize our complex behaviors as a large risk to their safety.

    The story of the vast majority of humans mauled by bears:

    Your dog has a perfect record of defending the pack. Every single time the target either runs or turns out to be friendly. No other pack member defends. Its primary reason to exist is to defend. A bear has a perfect record of fights with anything but another bear.

    One day the bear smells some food, good stuff it can’t find normally. It’s some campers with their dog. The dog smells the bear, gets a full adrenaline dump for its whole reason to exist, and defends the pack. The bear wins in about one second.

    The human defends the dog. The bear fights because that’s what it’s doing right now. Then, it reconsiders and runs away. Finally, the Forest Rangers track down and kill the bear quietly, preserving the tourism the community relies on.

    We’re really shitty to bears, at least here in the US. They’re not even very dangerous relative to a wild elk, moose, or even free-range livestock. It’s the big and dumb ones you need to watch out for. And marmots. Never disagree with a marmot.



  • I’m not actually asking for good faith answers to these questions. Asking seems the best way to illustrate the concept.

    Does the programmer fully control the extent of human meaning as the computation progresses, or is the value in leveraging ignorance of what the software will choose?

    Shall we replace our judges with an AI?

    Does the software understand the human meaning in what it does?

    The problem with the majority of the AI projects I’ve seen (and I’ve rejected many offers) is that the stakeholders believe they have significantly more influence over the human meaning of the results than the quality and nature of the data they have access to can support. A scope of data limits the resultant scope of information, which limits the scope of meaning. Stakeholders want to break those rules with “AI voodoo”. Then someone comes along and sells the suckers their snake oil.



  • The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.


  • Assuming you’re coming from a linear programming and OOP background, data (incl. SQL) kinda sucks because it’s not always clear how to apply existing concepts. But doing so is absolutely critical to success, perhaps more so than in most OOP environments. Your post isn’t funny to me because I’d be laughing at you, not with you.

    If a variable is fucked, the first questions you should answer are, “Where’d it come from?” and “What’s its value along the way?”. That looks a lot different in Python than SQL. But, the troubleshooting concept is the same.

    If object definitions were replaced by table/query definitions in planning, then you’d probably not have made the initial error. Again, it looks different, but the concept is the same.
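
    A minimal sketch of what that looks like on the SQL side, with made-up table and column names (orders, order_id, amount): split the query into CTEs so every intermediate step becomes a checkpoint you can read from, the way you’d print a variable between lines in Python.

    ```sql
    -- Sketch only: the orders table and its columns are hypothetical.
    -- Each CTE answers "what's its value along the way?"; to inspect one,
    -- point the final SELECT at that CTE instead, like printing a variable.
    WITH raw_orders AS (
        SELECT order_id, customer_id, amount
        FROM orders
    ),
    cleaned_orders AS (
        SELECT order_id, customer_id, amount
        FROM raw_orders
        WHERE amount IS NOT NULL   -- suspect filter: check its effect at this checkpoint
    )
    SELECT customer_id, SUM(amount) AS total_spent   -- the value that came out wrong
    FROM cleaned_orders
    GROUP BY customer_id;
    ```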


  • When under duress, Archer and Janeway are the shallow, emotional humans we often reveal ourselves to be. We throw away our principles as soon as the adrenaline hits, often before. It’s relatable.

    Picard’s how we think of ourselves. He’s privileged with mature procedure & technology, and has strong external support. He respects justice, law, and principle, then reasons out a way to honor all of them in the specific situation. It’s thematically Jesus, OT judge meets NT wisdom, and unsustainably inhuman.

    It’s not fair to compare Sisko. He has less Starfleet authority than Archer and Janeway, reasons as well as Picard, was left with little hope of a life worth living before the series even begins, and Jake’s youth prevents too much initial risk. Sisko by far has the best initial situation from which to be the best of the lot.