Article: https://proton.me/blog/deepseek

Calls it “Deepsneak”, failing to make it clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers, unlike most of the competing SOTA AIs.

I can’t speak for Proton, but the last couple of weeks have shown some very clear biases coming out.

  • ReversalHatchery@beehaw.org · 5 days ago

    I’m not an expert at criticism, but I think it’s fair on their part.

    I mean, can you remind me what the hardware requirements are to run DeepSeek locally?
    Oh, you need a high-end graphics card with at least 8 GB of VRAM for that? And that’s just for the highly distilled variants! For the more complete ones you need multiple such graphics cards interconnected! How do you even do that with more than 2 cards on a consumer motherboard?

    How many people do you think have access to such a system, or even a single high-end GPU with 8 GB of VRAM? More and more people only have a smartphone nowadays, and these cards are very expensive even for gamers.
    And as you’ll read in the second article below, memory size is not the only factor: even the distill that needs only 1 GB of VRAM still requires a high-end GPU for the model to be usable.

    https://www.tomshardware.com/tech-industry/artificial-intelligence/amd-released-instructions-for-running-deepseek-on-ryzen-ai-cpus-and-radeon-gpus

    https://bizon-tech.com/blog/how-to-run-deepseek-r1-locally-a-free-alternative-to-openais-o1-model-hardware-requirements#a6

    https://codingmall.com/knowledge-base/25-global/240733-what-are-the-system-requirements-for-running-deepseek-models-locally
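
    (For what it’s worth, here is a minimal sketch of the “easy” local path, assuming the official ollama Python client and a pulled deepseek-r1:1.5b tag; even this presumes an installed Ollama server and a machine that can turn tokens around at a usable speed.)

        # pip install ollama; assumes a local Ollama server is running and the
        # model was pulled first with:  ollama pull deepseek-r1:1.5b
        import ollama

        # one chat turn against the locally served 1.5B distill (tag assumed)
        response = ollama.chat(
            model="deepseek-r1:1.5b",
            messages=[{"role": "user", "content": "Why does VRAM size matter for LLM inference?"}],
        )
        print(response["message"]["content"])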

    So my point is that when talking about DeepSeek, you can’t ignore how they operate their online service, as that’s all most people will ever be able to try.

    I understand that it’s very trendy and cool to shit on Proton these days, but they have a very strong point here.

    • ImFineJustABitTired@lemmy.ml · 5 days ago

      Just because the average consumer doesn’t have the hardware to use it in a private manner does not mean it’s not achievable. The article straight up pretends self-hosting doesn’t exist.

      • ReversalHatchery@beehaw.org · 14 hours ago

        Of course, move the fucking goalposts! Buying a big yacht is also technically achievable! But very few people can actually do it.

        Don’t forget what OP said:

        failing to make it clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers

        Don’t believe me? Look at the post text. This is a misunderstanding as large as the Eiffel Tower. Virtually nobody can run it on their private devices; that fraction of a percent is basically a rounding error.

        I’m so tired of this fucking bullshit. But let’s hate Proton for it if that’s what’s trendy!

    • Danitos@reddthat.com · 5 days ago

      The 1.5B version can be run on basically anything. My friend runs it on his shitty laptop with a 512 MB iGPU and 8 GB of RAM (inference takes 30 seconds).

      You don’t even need a GPU with lots of VRAM, as you can offload the model to RAM (slower inference, though).
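
      (A sketch of that VRAM/RAM split, assuming llama-cpp-python and a hypothetical local GGUF file of one of the distills; n_gpu_layers controls how many layers go to VRAM, and the rest are served from system RAM.)

          # pip install llama-cpp-python
          from llama_cpp import Llama

          llm = Llama(
              model_path="./deepseek-r1-distill-llama-8b-q4_k_m.gguf",  # hypothetical local file
              n_gpu_layers=20,  # layers offloaded to VRAM; remaining layers run from RAM
              n_ctx=4096,       # context window size
          )
          out = llm("Why does offloading layers to RAM slow inference?", max_tokens=128)
          print(out["choices"][0]["text"])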

      I’ve run the 14B version on my AMD 6700XT GPU and it only takes ~9 GB of VRAM (inference over 1k tokens takes 20 seconds). The 8B version takes around 5-6 GB of VRAM (inference over 1k tokens takes 5 seconds).

      The numbers in your second link are waaaaaay off.

    • oktoberpaard@lemm.ee · 4 days ago

      There are plenty of other online platforms where you can use the unmodified model without siphoning your data to China. The model itself is just an offline blob and doesn’t need to be modified to make a “more secure” and “privacy-friendly” version, as the article claims, because the model is not tasked with collecting and sharing your data. The author doesn’t seem to be aware of that.
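
      (To make that concrete, a sketch of loading the weights entirely from local disk with the Hugging Face transformers library, with hub network access disabled; the directory name is hypothetical. The weights are inert data; any data collection happens in the service wrapped around them.)

          # pip install transformers torch
          import os
          os.environ["HF_HUB_OFFLINE"] = "1"  # forbid any network calls to the model hub

          from transformers import AutoModelForCausalLM, AutoTokenizer

          local_dir = "./DeepSeek-R1-Distill-Qwen-1.5B"  # hypothetical local copy of the weights
          tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
          model = AutoModelForCausalLM.from_pretrained(local_dir, local_files_only=True)

          inputs = tokenizer("Hello there.", return_tensors="pt")
          print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))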

      • ReversalHatchery@beehaw.org · 14 hours ago

        Except that the model is not offline if you access it through yet another online service that’s doing who knows what with your data.

        • oktoberpaard@lemm.ee · 11 hours ago

          The platform you’re using is indeed an online platform with its own data practices, but that’s common sense and has nothing to do with this model being Chinese. It’s up to the user to decide whether or not to use online AI services at all and which ones (not) to trust. The model itself isn’t doing anything with your data.