• 0 Posts
  • 123 Comments
Joined 7 months ago
Cake day: March 3rd, 2024

  • A reminder that this is still how they think.

    Here’s a fact check of a fact check about Project 2025, which people have recently claimed will gut the National Hurricane Center.

    USA Today’s fact check of that claim

    Now when I first ran across this link, I thought, hmmm…are liberal Youtubers making up stuff to sell their position as a hurricane approaches? Maybe so. Then I read the article and actual text from Project 2025.

    Project 2025 “does not call for the elimination of” the National Hurricane Center, Heritage Foundation spokesperson Ellen Keenan told USA TODAY.

    That much is correct; it’s not in the text. The text calls for a review of the NHC, along with other agencies, with resources downsized or moved around as needed. But then I see:

    Data collected by the department should be presented neutrally, without adjustments intended to support any one side in the climate debate.

    Well, that set off some alarm bells in my head. They aren’t actively proposing to shut it down, but there does seem to be an agenda here.

    Project 2025 accuses NOAA of “climate alarmism” and calls for it to be “broken up and downsized.” “That is not to say NOAA is useless, but its current organization corrupts its useful functions,” the playbook says of the agency.

    I read all this as exactly how MAGA Republicans in power have been treating anything tied to climate change. They aren’t completely cutting things out, only the parts that are inconvenient to their agenda. Which of course is terrible science, and will absolutely affect the ability to learn and respond to future threats.

    If USA Today is marking such claims as completely false, it’s being used as a tool for them.


  • Even a hypothetically true artificial general intelligence would still not be a moral agent

    That’s a deep rabbit hole that can’t be stated as a known fact. It’s absolutely true right now with LLMs, but at some point the line could be crossed. If and when that happens, how, and by what definition have been long-running debates that are nowhere near resolved.

    It’s highly possible that an AGI/ASI could come about, be both superintelligent and self-conscious, and still have no sense of morality. But how can we, at a human level, even comprehend what’s possible? That’s the real danger: we have no idea what we could be heading towards.



  • I can understand that, and it’s one of the many drawbacks of party systems. It’s also exactly what Republicans have done for decades for anything in government.

    In a world where corporations only care about the next quarter, and politicians begin their term by starting the next campaign, how can we get long-range plans completed? We’ve done many huge projects over many years in the past, but in today’s instant-gratification culture that seems impossible. Anything worth doing is going to cost a lot up front; it’s called an investment in the future.


  • Maybe you had a different history of development then, unlike what I mentioned in the second part. Lots of our 19th–20th century urban areas had trolleys and such, which “mysteriously” disappeared when the car came along. Even in recent decades the public has overwhelmingly wanted development of things like high-speed rail, and yet somehow even mandates voted for on ballots get refused for “reasons”.









  • I’m not comparing them; I’m saying it’s inaccurate to ignore the effects that solar has.

    The chemicals used in producing PV panels are toxic. Part of why production shifted to countries like China is that, without regulation of the waste disposal, the panels are much cheaper to make there. Sucks for the residents, but that’s capitalism.

    Energy is used to make PV. That’s true of everything, but solar advertising leans heavily on the free energy the device generates, not on how much it took to make it. At least that manufacturing energy can come from solar too…except it comes from fossil fuels.

    The heavy metals that make up part of the other 10% are the longer-term waste problem. I don’t know if you can consider those metals inert since they are classified as hazardous waste, but they can’t be discounted either. A recycling program to recover everything possible and then control the hazardous leftovers would make this less of an issue, but we’re not doing that fully yet, so things are going into landfills now that could leach into the environment.

    All of this can be improved of course. I’m just introducing the fact that solar, like anything we do to keep our society at its level, has drawbacks too.

    Nuclear has its problems, as I mentioned. I didn’t pretend that solar is bad and nuclear is all flowers. But the issues it faces are much different and have their own solutions, and nuclear’s energy density and flexibility are far better than solar’s could ever be.

    I never understand why people pick a side and then try to make the other choices look like the enemy to help their cause. Why not find the best solutions for all of the non-fossil-fuel sources, and use each of them where it makes the most sense? Diversity and redundancy are far better than a monopoly won by falsehoods.


  • Keep in mind that at its core an LLM is a probability-based autocompletion mechanism drawing on the vast training data it was fed. A fine-tuned coding LLM would have data more in line with producing coding solutions. So when you ask it to generate code for a very specific purpose, it’s much more likely to find a mesh of matches that will work well most of the time. Be more generic in your request and you could get all sorts of things, some that even look good at first glance but have flaws that will break them. The LLM doesn’t understand the code it gives you, nor can it reason about whether it will function.

    Think of an analogy where you Googled a coding question, took the first twenty hits, and merged all the results together into one answer. An LLM does a better job than this, but the idea is similar. If the data it was trained on was flawed from the beginning, such as some of the hits you might find on Reddit or Stack Overflow, how can it possibly give you perfect results every time? The analogy also shows why a much narrower query for coding may work more often: if you Google a niche question you will find more accurate, or at least more relevant, results than if you try a general search and paste together anything that looks close.

    Basically, if you can help the LLM home in on the better parts of its training data from the start, you’re more likely to get code that actually works (a rough toy sketch of the idea follows).
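
    To make the “probability autocompletion” idea concrete, here is a deliberately tiny toy sketch. It is not how a real LLM works internally (real models use learned neural representations, not raw word counts), and the mini “corpus” and function names are made up for illustration; it only shows the basic mechanic: the next token is picked by probability over whatever data the model was fed.

    ```python
    # Toy "probability autocompletion": a bigram model over a tiny made-up corpus.
    # Purely illustrative; real LLMs learn neural representations, not raw counts.
    import random
    from collections import Counter, defaultdict

    # Hypothetical mini training corpus (a real model sees billions of tokens).
    corpus = (
        "sort the list in python use sorted . "
        "sort the list in place use the sort method . "
        "reverse the list use reversed ."
    ).split()

    # For each token, count which tokens follow it in the "training data".
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def complete(start: str, steps: int = 6) -> str:
        """Autocomplete by repeatedly sampling the next token by observed probability."""
        out = [start]
        for _ in range(steps):
            counts = next_counts.get(out[-1])
            if not counts:  # nothing in the corpus ever follows this token
                break
            tokens, weights = zip(*counts.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return " ".join(out)

    print(complete("sort"))     # e.g. "sort the list in python use sorted"
    print(complete("reverse"))  # e.g. "reverse the list use reversed ."
    ```

    In this toy version, a more specific starting point narrows the set of likely continuations, which is roughly why a narrower, better-targeted prompt tends to steer a real model toward more relevant training examples.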