• 0 Posts
  • 88 Comments
Joined 1 year ago
cake
Cake day: June 21st, 2023

help-circle
  • When I was young I remember that banks often had large drive-thrus with pneumatic tube systems at each car stall.

    There would only be one teller but they could serve quite a few lanes.

    If you wanted a cash withdrawal, you might put your ID and your withdrawal slip in the tube, and a few minutes later it would come back with cash in it.

    It was pretty rad. But ATMs seem like a better bet overall.




  • One of the best things ever about LLMs is how you can give them absolute bullshit textual garbage and they can parse it with a huge level of accuracy.

    Some random chunks of html tables, output a csv and convert those values from imperial to metric.

    Fragments of a python script and ask it to finish the function and create a readme to explain the purpose of the function. And while it’s at it recreate the missing functions.

    Copy paste of a multilingual website with tons of formatting and spelling errors. Ask it to fix it. Boom done.

    Of course, the problem here is that developers can no longer clean their inputs as well and are encouraged to send that crappy input straight along to the LLM for processing.

    There’s definitely going to be a whole new wave of injection style attacks where people figure out how to reverse engineer AI company magic.







  • Well thought-out and articulated opinion, thanks for sharing.

    If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.

    When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don’t intend to suggest models trained on CSAM either, I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.

    Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.

    To a high degree, I think we can still label an individual who consumes this type of AI content to be a pedophile, and although being a pedophile is not in and of itself an illegal adjective to posses, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.



  • Well stated and explained. I’m not an AI researcher but I develop with LLMs quite a lot right now.

    Hallucination is a huge problem we face when we’re trying to use LLMs for non-fiction. It’s a little bit like having a friend who can lie straight-faced and convincingly. You cannot distinguish whether they are telling you the truth or they’re lying until you rely on the output.

    I think one of the nearest solutions to this may be the addition of extra layers or observer engines that are very deterministic and trained on only extremely reputable sources, perhaps only peer reviewed trade journals, for example, or sources we deem trustworthy. Unfortunately this could only serve to improve our confidence in the facts, not remove hallucination entirely.

    It’s even feasible that we could have multiple observers with different domains of expertise (i.e. training sources) and voting capability to fact check and subjectively rate the LLMs output trustworthiness.

    But all this will accomplish short term is to perhaps roll the dice in our favor a bit more often.

    The perceived results from the end users however may significantly improve. Consider some human examples: sometimes people disagree with their doctor so they go see another doctor and another until they get the answer they want. Sometimes two very experienced lawyers both look at the facts and disagree.

    The system that prevents me from knowingly stating something as true, despite not knowing, without some ability to back up my claims is my reputation and my personal values and ethics. LLMs can only pretend to have those traits when we tell them to.




  • There are a few things humans (and thus a healthy society) require for survival. Water, food, shelter.

    When we start to point unadulterated VC backed capitalism at those resources, I think we give up something in our society and culture that we don’t actually want to give away.

    I travel a lot worldwide and have used Airbnb quite a few times. However I’m now on the side of “Airbnb is evil”.

    A couple years ago had a horrific experience in a villa and Airbnb customer support didn’t give a rats ass. Fortunately, my bank did and my credit card chargeback for $4,000 was successful. While I was going through that experience I came across a multitude of communities of travelers who have had equally horrific, oftentimes more horrific experiences with Airbnb where they’ve failed to step in and assist in any way.

    Random dudes who own houses are on average unqualified in the hospitality business and not incentivized by maintaining a brand reputation. There are so many issues caused by shitty Airbnb hosts that hotels - real hotels - just don’t suffer from.

    So now we have this situation where a lot of spaces are allocated to hotel businesses, more space is allocated to residential housing, And any random dude who can qualify for a mortgage can take a house off the market, fill it for 10 or 15 days out of the month, and keep both a domicile unused for a resident and a hotel room empty.

    This is one of the few areas where I think hotel regulations are smart.


  • Will be interesting to read the arguments and hear what experts have to say.

    There is some precedence that corporations do have first amendment rights.

    A hypothetical argument from TikTok is they think they are allowed constitutional rights, in this case to publish whatever they want, in the act of doing a commercial activity and that the law which was passed to force a sale to a local owner is a violation of their right to speak freely.

    I suspect TikTok operates in the USA under an American registered entity that is wholly owned by a foreign entity. Whether that grants or removes any such constitutional rights seems unclear.

    Next, it doesn’t seem like the law intends to block TikTok’s “speech”, rather it specifically allows the executive branch to block this particular type of foreign entity from doing business on American soil on the grounds of security, enforced most likely by blocking it from doing business with the app stores. This also has precedence - a lot of it, in fact - when it comes to security. The US blocks all kinds of foreign businesses from trading with American businesses. Like arms dealers and drug dealers.

    So TikTok will need to defeat the idea that even as a foreign businesses they don’t need to be subject to the whims of the executive branches power to block foreign businesses AND that even congress doesn’t have the power to write a law that gives the executive branch this power (because, ya know, they just DID write that law).

    And then TikTok will need to win on the idea that somehow their rights have been suppressed.

    Seems like a long shot to me and the precedence that would be established by making it difficult for Congress to write laws that give the executive power to block foreign entities because it risks their unlikely right to speech in the US seems a bit whack.