  • They aren’t. From a comment on https://www.reddit.com/r/ublock/comments/32mos6/ublock_vs_ublock_origin/ by u/tehdang:

    For people who have stumbled into this thread while googling “ublock vs origin”. Take a look at this link:

    http://tuxdiary.com/2015/06/14/ublock-origin/

    "Chris AlJoudi [current owner of uBlock] is under fire on Reddit due to several actions in recent past:

    • In a Wikipedia edit for uBlock, Chris removed all credits to Raymond [Hill, original author and owner of uBlock Origin] and added his name without any mention of the original author’s contribution.
    • Chris pledged a donation with overblown details on expenses like $25 per week for web hosting.
    • The activities of Chris since he took over the project are more business and advertisement oriented than development driven."

    So I would recommend that you go with uBlock Origin and not uBlock. I hope this helps!

    Edit: Also got this bit of information from here:

    https://www.reddit.com/r/chrome/comments/32ory7/ublock_is_back_under_a_new_name/

    TL;DR:

    • gorhill [Raymond Hill] got tired of dozens of “my facebook isnt working plz help” issues.
    • he handed the repository to chrismatic [Chris Aljioudi] while maintaining control of the extension in the Chrome webstore (by forking chrismatic’s version back to himself).
    • chrismatic promptly added donate buttons and a “made with love by Chris” note.
    • gorhill took exception to this and asked chrismatic to change the name so people didn’t confuse uBlock (the original, now called uBlock Origin) and uBlock (chrismatic’s version).
    • Google took down gorhill’s extension. Apparently this was because of the naming issue (since technically chrismatic has control of the repo).
    • gorhill renamed and rebranded his version of ublock to uBlock Origin.



  • Tons of laptops with top notch specs for 1/2 the price of a M1/2 out there

    The 13” M1 MacBook Air is $700 new from Best Buy. Better specced versions are available for $600 used (buy-it-now) on eBay, but the base specced version is available for $500 or less.

    What $300 or less used / $350 new laptops are you recommending here?


  • reasonable expectations and uses for LLMs.

    LLMs are only ever going to be a single component of an AI system. We’ve only had LLMs with their current capabilities for a very short time period, so the research and experimentation to find optimal system patterns, given the capabilities of LLMs, has necessarily been limited.

    I personally believe it’s possible, but we need to get vendors and managers to stop trying to sprinkle “AI” in everything like some goddamn Good Idea Fairy.

    That’s a separate problem. Unless it results in decreased research into improving the systems that leverage LLMs, e.g., by resulting in pervasive negative AI sentiment, it won’t have a negative impact on the progress of the research. Rather the opposite, in fact, as seeing which uses of AI are successful and which are not (success here being measured by customer acceptance and interest, not by the AI’s efficacy) is information that can help direct and inspire research avenues.

    LLMs are good for providing answers to well defined problems which can be answered with existing documentation.

    Clarification: LLMs are not reliable at this task, but we have patterns for systems that leverage LLMs that are much better at it, thanks to techniques like RAG, supervisor LLMs, etc…
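
    To illustrate, here’s a minimal sketch of the RAG pattern - retrieve relevant documentation first, then constrain the model to answer from it. The keyword-overlap retriever and the document list are toy stand-ins; a real system would use embeddings and an actual LLM call:

    ```python
    # Toy RAG sketch: ground the model's answer in retrieved docs.
    DOCS = [
        "To reset your password, open Settings > Account > Reset Password.",
        "Billing runs on the first of each month; invoices are emailed.",
        "API keys can be rotated from the developer dashboard.",
    ]

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        """Rank docs by naive keyword overlap with the query."""
        terms = set(query.lower().split())
        return sorted(docs, key=lambda d: len(terms & set(d.lower().split())),
                      reverse=True)[:k]

    def build_prompt(query: str, context: list[str]) -> str:
        """Constrain the model to answer only from the retrieved context."""
        joined = "\n".join(f"- {c}" for c in context)
        return ("Answer using ONLY the context below. If it's insufficient, "
                f"say so.\n\nContext:\n{joined}\n\nQuestion: {query}")

    # This prompt would be sent to the LLM; a supervisor LLM could then
    # review the answer against the same context before it's returned.
    print(build_prompt("How do I reset my password?",
                       retrieve("how do I reset my password", DOCS)))
    ```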

    When the problem is poorly defined and/or the answer isn’t as well documented or has a lot of nuance, they then do a spectacular job of generating bullshit.

    TBH, so would a random person in such a situation (if they produced anything at all).

    As an example: how often have you heard about a company’s marketing department over-hyping an upcoming product, resulting in unmet consumer expectations, a ton of extra work for the product’s developers and engineers, or both? This happens because the marketers don’t really understand the product - because they don’t have the information, didn’t read it, got conflicting information, or because the information they have is written for a different audience - i.e., a developer, not a marketer - and the nuance is lost in translation.

    At the company level, you can structure a system that marketers work within that will result in them providing more correct information. That starts with them being given all of the correct information in the first place. However, even then, the marketer won’t be solving problems like a developer. But if you ask them to write some copy to describe the product, or write up a commercial script where the product is used, or something along those lines, they can do that.

    And yet the marketer’s role here is still more complex than what our existing AI systems can handle, but those systems are already incorporating patterns very similar to the ones a marketer uses day-to-day. And AI researchers - academic, corporate, and hobbyist - are looking into more ways that this can be done.

    If we want an AI system to be able to solve problems more reliably, we have to, at minimum (a sketch of this structure follows the list):

    • break down the problems into more consumable parts
    • ensure that components are asked to solve problems they’re well-suited for, which means that we won’t be using an LLM - or even necessarily an AI solution at all - for every problem type that the system solves
    • have a feedback loop / review process built into the system
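
    Here’s a hypothetical sketch of that structure; the classifier, solvers, and reviewer are all toy stand-ins for real components:

    ```python
    # Toy decomposition/routing/review loop.
    def classify(task: str) -> str:
        """Route each task to the component best suited for it."""
        if all(ch in "0123456789+-*/. " for ch in task):
            return "calculator"  # deterministic code, not an LLM
        return "llm"

    def solve(task: str) -> str:
        if classify(task) == "calculator":
            return str(eval(task))  # placeholder; unsafe outside a sketch
        return f"[LLM draft answer for: {task}]"

    def review(task: str, answer: str) -> bool:
        """Feedback loop: accept or reject before returning to the user."""
        return bool(answer)  # a real reviewer might be a supervisor LLM

    def run(tasks: list[str]) -> list[str]:
        results = []
        for task in tasks:
            answer = solve(task)
            if not review(task, answer):
                answer = solve(task)  # retry once on rejection
            results.append(answer)
        return results

    print(run(["2 + 2", "summarize the meeting notes"]))
    ```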

    In terms of what they can accept as input, LLMs have a huge amount of flexibility - much higher than what they appear to be good at and much, much higher than what they’re actually good at. They’re a compelling hammer. System designers need to not just be aware of which problems are nails and which are screws or unpainted wood or something else entirely, but also ensure that the systems can perform that identification on their own.




  • And when you’re comparing two closed source options, there are techniques available to evaluate them. Based on the published results of people who have used those techniques, Apple is not as private as they claim. This is most egregious when it comes to first-party apps, which is concerning. However, when it comes to using any non-Apple app, they’re much better than Google is when using any non-Google app.

    There’s enough overlap in skillset that pretty much anyone performing those evaluations will likely find it trivial to configure Android to be privacy-respecting - e.g., by using GrapheneOS on a Pixel or some other custom ROM - but most users are not going to do that.

    And if someone is not going to do that, Android is worse for their privacy.

    It doesn’t make sense to say “iPhones are worse at respecting user privacy than Android phones” when by default and in practice for most people, the opposite is true. What we should be saying is “iPhones are better at respecting privacy by default, but if privacy is important to you, the best option is to put in a bit of extra work and install GrapheneOS on a Pixel.”





  • That’s still a single point of failure.

    So is TLS, or a compromise of a major root certificate authority, and neither of those has any bearing on whether an approach qualifies as using 2FA.

    The question is “How vulnerable is your authentication approach to attack?” If an approach is especially vulnerable, like using SMS or push notifications (where you tap to confirm vs receiving a code that you enter in the app) for 2FA, then it should be discouraged. So the question becomes “Is storing your TOTP secrets in your password manager an especially vulnerable approach to authentication?” I don’t believe it is, and further, I don’t believe it’s any more vulnerable than using a separate app on your mobile device (which is the generally recommended alternative).
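
    For context, a TOTP code is derived purely from the shared secret and the clock (RFC 6238), so wherever that secret lives - a separate authenticator app or a password manager vault - it’s just another piece of stored data. A minimal sketch (the base32 secret below is a common documentation example, not a real credential):

    ```python
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """RFC 6238 TOTP: HMAC the current time step with the shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one
    ```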

    What happens if someone finds an exploit that bypasses the login process entirely?

    Then they get a copy of your encrypted vault. If your vault password is weak, they’ll be able to crack it and get access to everything. This is a great argument for making sure you have a good vault password, but there are a lot of great arguments for that.

    Or do you mean that they get access to your logged in vault by compromising your device? That’s the most likely worst case scenario, and in such a scenario:

    • all of your logged in accounts can be compromised by stealing your sessions
    • even if you use a different app for your 2FA, those TOTP secrets and passkeys can be stolen - to be safe, they have to be on a different device
    • you’re also likely to be subject to a ransomware attack

    In other words, your only accounts that are not vulnerable in this situation solely because their TOTP secret is on a different device are the ones you don’t use on that device in the first place. This is mostly relevant if your computer is compromised - if your phone is compromised, then it doesn’t matter that you use a separate password manager and authenticator app.

    An account you use on your computer can be compromised even if its credentials aren’t stored on that device, so you might as well store them there. If you’re concerned about the device being compromised and want to protect an account that you don’t use on that device, then you can store the credentials in a different vault that isn’t stored on your device.

    Even more common, though? MITM phishing attacks. If your password manager verifies the URL, fills the password, and fills your TOTP, then that can help against those. Use a different device for your TOTP codes and those protections fall away: if your vault has been compromised and your passwords are known to an attacker, but they don’t have your TOTP secrets, you’re at higher risk of erroneously entering a one-time code into a phishing site.
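
    As a toy illustration of that URL check (the vault contents here are hypothetical): credentials saved for one origin are simply never offered on a lookalike domain, which is exactly the protection a human entering codes by hand doesn’t get:

    ```python
    from urllib.parse import urlparse

    # Hypothetical vault keyed by exact hostname.
    VAULT = {"accounts.example.com": ("alice", "hunter2")}

    def autofill(current_url: str):
        """Only fill credentials saved for this exact origin."""
        host = urlparse(current_url).hostname
        entry = VAULT.get(host)
        if entry is None:
            print(f"No match for {host!r}; refusing to fill.")
            return None
        return entry

    autofill("https://accounts.example.com/login")        # fills
    autofill("https://accounts-example.com.evil.io/login")  # refuses
    ```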

    Either approach (same app vs different app) has trade-offs and both approaches are vulnerable to different sorts of attacks. It doesn’t make sense to say that one counts as 2FA but the other doesn’t. They’re differently resilient - that’s it. Consider your individual threat model and one may be a better option than the other.

    That said, if you’re concerned about the resiliency of your 2FA approach, then look into using dedicated security keys. U2F / WebAuthn both give better phishing resistance than a browser extension filling a password or TOTP can, and having the private key inaccessible can help mitigate device compromise concerns.






  • Ah, fair enough. I was just giving people interested in that method a resource to learn more about it.

    The problem is that your method doesn’t consistently generate memorable passwords with anywhere near 77 bits of entropy.

    First, the example you gave ended up being 11 characters long. For a completely random password using alphanumeric characters + punctuation, that’s 66.5 bits of entropy. Your lower bound was 8 characters, which is even worse (48 bits of entropy). And when you consider that the process will result in some letters being much more probable, particularly in certain positions, that results in a more vulnerable process. I’m not sure how much that reduces the entropy, but it would have an impact. And that’s without exploiting the fact that you’re using quotes as part of your process.
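
    For reference, the arithmetic behind those figures: a fully random password of length L over an alphabet of S symbols has L × log2(S) bits of entropy, and the 66.5- and 48-bit numbers correspond to an alphabet of roughly 66 symbols:

    ```python
    from math import log2

    def entropy_bits(length: int, alphabet_size: int) -> float:
        """Entropy of a fully random string: length * log2(alphabet)."""
        return length * log2(alphabet_size)

    print(entropy_bits(11, 66))  # ~66.5 bits
    print(entropy_bits(8, 66))   # ~48.3 bits
    ```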

    The quote selection part is the real problem. If someone knows your quote and your process, game over, as the number of remaining possibilities at that point is quite low - maybe a thousand? That’s worse than just adding a word with the dice method. So quote selection is key.

    But how many quotes is a user likely to select from? My guess is that most users would be picking from a set of fewer than 7,776 quotes, but your set and my set would be different. Even so, I doubt that the set an attacker would need to discern from is higher than 470 billion quotes (the equivalent of three dice method words), and it’s certainly not 2.8 quintillion quotes (the equivalent of 5 dice method words).
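
    (These equivalences are just log2 of the set size - each uniform pick from a set of N possibilities contributes log2(N) bits:)

    ```python
    from math import log2

    print(log2(7776))  # ~12.9 bits: one dice-method word
    print(7776 ** 3)   # ~470 billion: the 3-word equivalent
    print(7776 ** 5)   # ~2.8 quintillion: the 5-word equivalent
    ```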

    If your method were used for a one-off, you could use a poorly known quote and maybe have it not be in that 470 billion quote set, but that won’t remain true at scale. It certainly wouldn’t be feasible to have a set of 2.8 quintillion quotes, which means that even a 20 character password has less than 77.5 bits of entropy.

    Realistically, since the user is choosing a memorable quote, we could probably find a lot of them in a very short list - on the order of thousands at best. Even with 1 million quotes to choose from, that’s at best 30 bits of entropy. And again, user choice is a problem, as user choice doesn’t result in fully random selections.

    If you’re randomly selecting from a 60 million quote database, then that’s still only 36 bits of entropy. When the database has 470 billion quotes, that’ll get you to 49 bits of entropy - but good luck ensuring that all 470 billion quotes are memorable.

    There are also things you can do, at an individual level, to make dice method passwords stronger or more suitable to a purpose. You can modify the word lists, for one. You can use EFF’s other lists. When it comes to password length restrictions, you can use the EFF short list #2 and truncate words after the third character without losing entropy - meaning your 8 word password only needs to be 31 characters long, or 24 characters if you omit word separators. You can randomly insert a symbol and a number and/or substitute them, sacrificing memorizability for a bit more entropy (mainly useful when there are short password length limits).

    The dice method also has baked-in flexibility when it comes to the necessary level of entropy. If you need more than 82 bits of entropy, just add more words. If you’re okay with having less entropy, you can generate shorter passwords - 62 bits of entropy is achieved with a 6 short-word password (which can be reduced to 18 characters) and a 4 short-word password - minimum 12 characters - still has 41 bits of entropy.
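
    For anyone who wants to try this, here’s a sketch of dice-method generation using a CSPRNG in place of physical dice; WORDS is a placeholder for a real EFF list, and the truncation option implements the short list #2 trick described above:

    ```python
    import secrets

    # Placeholder; load a real EFF list (7,776 words for the long list,
    # 1,296 for the short lists) from eff.org for actual use.
    WORDS = ["abacus", "crafty", "dial", "enzyme", "fiber", "glide"]

    def diceware(n_words: int, wordlist: list[str], sep: str = "-",
                 truncate: int = 0) -> str:
        """Pick n_words uniformly; entropy = n_words * log2(len(wordlist))."""
        words = [secrets.choice(wordlist) for _ in range(n_words)]
        if truncate:  # short list #2: first 3 chars stay unique
            words = [w[:truncate] for w in words]
        return sep.join(words)

    print(diceware(6, WORDS))                      # full words
    print(diceware(8, WORDS, sep="", truncate=3))  # 24-character form
    ```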

    With your method, you could choose longer quotes for applications you want to be more secure or shorter quotes for ones where that’s less important, but that reduces entropy overall by reducing the set of quotes you can choose from. What you’d want to do is to have a larger set of quotes for your more critical passwords. But as we already showed, unless you have an impossibly huge quote database, you can’t generate high entropy passwords with this method anyway. You could select multiple unrelated quotes, sure - two quotes selected from a list of 10 billion gives you 76.4 bits of entropy - but that’s the starting point for the much easier to memorize, much easier to generate, dice method password. You’ve also ended up with a password that’s just as long - up to 40 characters - and much harder to type.

    This problem is even worse with the method that the EFF proposes, as it’ll output passphrases with an average of 42 characters, all of them alphabetic.

    Yes, but as passphrases become more common, sites restricting password length become less common. My point wasn’t that this was a problem, but that many site operators felt it was fine to cap their users’ passwords’ max entropy at less than 77.5 bits, and few applications require more than that much entropy. (Those applications, for what it’s worth, generally use randomly generated keys rather than relying on user-generated ones.)

    And, as I outlined above, you can use the truncated short words #2 list method to generate short but memorable passwords when limited in this way. My general recommendation in this situation is to use a password manager for those passwords and to generate a high entropy, completely random password for them, rather than trying to memorize them. But if you’re opposed to password managers for some reason, the dice method is still a great option.