• 0 Posts
  • 333 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • NaibofTabr@infosec.pub to People Twitter@sh.itjust.works · None. Suffer.
    22 up · 1 down · 2 days ago

    Essentially, verifiability (the token exists on the blockchain), de-duplication (each token can only exist once on the blockchain), and proof of ownership (only one account number can be associated with each token on the blockchain). There’s nothing wrong with this idea in a technical sense and it could be useful for some things.
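    Those three properties can be illustrated with a toy sketch (all names here are invented for illustration; a real blockchain enforces this through consensus rather than a single in-memory dict):

```python
class TokenLedger:
    """Toy ledger illustrating verifiability, de-duplication,
    and single ownership -- not an actual blockchain."""

    def __init__(self):
        self._owners = {}  # token_id -> account_id

    def mint(self, token_id, account_id):
        # De-duplication: each token can exist only once.
        if token_id in self._owners:
            raise ValueError(f"token {token_id} already exists")
        self._owners[token_id] = account_id

    def owner_of(self, token_id):
        # Verifiability: anyone can check that a token exists and who holds it.
        return self._owners.get(token_id)

    def transfer(self, token_id, from_account, to_account):
        # Proof of ownership: only the current owner can transfer.
        if self._owners.get(token_id) != from_account:
            raise PermissionError("not the owner")
        self._owners[token_id] = to_account
```

    The hard part a blockchain actually solves is making many mutually distrusting nodes agree on the contents of that dict, which is where the cost comes in.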

    But… the transaction process is computationally expensive. For the transaction to be trustworthy, many nodes on the blockchain network must process the same transaction, which creates a whole bunch of issues around network scaling and majority control and real-world resource usage (electricity, computer hardware, network infrastructure, cooling, etc).
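    A toy illustration of that redundancy (deliberately simplified, not a real consensus protocol): every validator re-runs the same verification, so total work grows linearly with the node count even though the useful output is a single confirmation.

```python
import hashlib

def verify(tx: bytes) -> str:
    # Stand-in for transaction validation (here, just a hash check).
    return hashlib.sha256(tx).hexdigest()

def network_confirm(tx: bytes, num_nodes: int):
    # Every node independently verifies the same transaction.
    results = [verify(tx) for _ in range(num_nodes)]
    # Simplified agreement check: all nodes must reach the same result.
    confirmed = all(r == results[0] for r in results)
    return confirmed, len(results)  # one confirmation, num_nodes units of work

confirmed, work_units = network_confirm(b"alice->bob:1", 1000)
```

    One transfer, a thousand identical verifications: that multiplier is the electricity, hardware, and cooling bill.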

    And beyond that, the nature of society and economics created a community around this unregulated financial market that was filled with… well, exactly the kind of people you’d expect to be most interested in an unregulated financial market: scammers, speculative investors, thieves, illegal bankers, exploitative gambling operators, money launderers, and criminals looking to get paid without the government noticing.

    The technology can solve some interesting problems around verifying that a particular digital file is unique/original (which can be useful, because it’s extremely easy to make copies of digital information) but it creates a long list of other problems as a side effect.


  • NaibofTabr@infosec.pub to Technology@lemmy.world · *Permanently Deleted*
    2 up · 2 down · 17 days ago

    I see, so your argument is that because the training data is not stored in the model in its original form, it doesn’t count as a copy, and therefore it doesn’t constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that, it’s a bit clearer now. It’s still wrong, but at least it makes some kind of sense.

    If the model “has no memory of training data images”, then what effect do the images have on the model? Why is the training data necessary - what is its function?


  • the presentation and materials viewed by 404 Media include leadership saying AI Hub can be used for “clinical or clinical adjacent” tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients’ personally identifying and protected health information. The demonstration also showed potential capabilities that included “detect pancreas cancer,” and “parse HL7,” a health data standard used to share electronic health records.

    Because as everyone knows, LLMs do a great job of getting specific details correct and always produce factually accurate output. I’m sure this will have no long term consequences and benefit all the patients greatly.
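    For context on the “parse HL7” capability mentioned in the quote: HL7 v2 is a pipe-delimited text format, with one segment per line and fields separated by `|`. A minimal parsing sketch (the message below is a made-up example, and real messages need proper handling of the encoding characters declared in MSH-2; this only splits segments and fields):

```python
def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into segments keyed by segment name.
    Each segment becomes a list of its '|'-separated fields."""
    segments = {}
    for line in message.strip().split("\r"):  # segments are CR-separated
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

# Hypothetical ADT message: MSH header segment plus a PID patient segment.
msg = ("MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|202301011200||ADT^A01|MSG001|P|2.5\r"
       "PID|1||12345||DOE^JOHN")
parsed = parse_hl7(msg)
```

    With field splitting like this, `parsed["PID"][0][5]` is the PID-5 patient name component - which is exactly the kind of protected health information the article says is being fed into the LLM.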


  • NaibofTabr@infosec.pub to Technology@lemmy.world · *Permanently Deleted*
    3 up · 2 down · 18 days ago

    We’re not talking about a “style”, we’re talking about producing finished work. The image generation models aren’t style guides; they output final images produced from the ingestion of other images as training data. The source material might be actual art (or not), but it is generally the product of a real person who is typically not compensated for their work (models ingesting their own output is very much a garbage-in, garbage-out system). So again, these generative ML models are ripoff systems, and nothing more. And no, typing in a prompt doesn’t count as innovation or creativity.