The selling point for M365 Copilot is that it is a turnkey AI platform that does not use data input by its enterprise customers to train generally available AI models. This prevents their internal data from being output to randos using ChatGPT. OpenAI, by contrast, does use consumer ChatGPT conversations to further train its models, so there is a real risk of data leakage.
Same situation with all other public LLMs. Microsoft’s investments in OpenAI aren’t really relevant in this situation.
So sorry to interrupt your circlejerk about this guy’s opinion on 3d V-Cache technology with a tangentially related discussion about 3d V-Cache technology here on the technology community.
I fully understand the point you’re trying to make here, but just as you think my comments added nothing to the discussion, your replies to them added even less.
I was comparing the 7950X and the 7950X3D because those are the iterations that are available right now and what I have been personally comparing, as I mentioned. I apologize if I wasn’t clear enough on that point.
My point was that the essence of the take, which I read to be, “CPUs with lower clocks but way more cache only offer major advantages in specific situations” is not particularly off base.
Ok so I am about to build a new rig, and looking at the specs the X3D does seem less powerful and more expensive than the regular 7950.
While I completely agree that this guy seems extremely biased and that he comes off like an absolute dickbag, I don’t think the essence of his take is too far off base if you strip off the layers of spite.
Really, it seems like the tangible benefit of the X3D that most people will realize is that it offers similar performance with lower energy consumption, and thus lower cooling requirements. Benchmarks from various sources seem to bear this out as well.
It seems like a chip that in general performs on par with the 7950x but with better efficiency, and if you have a specific workload that can benefit from the extra cache it might show a significant improvement. Higher end processors these days already have a fuckton of cache so it isn’t surprising to me that this doesn’t benchmark much better than the cheaper 7950x.
In your mind, do you really think that is the intention here? Seems more like a convenience for people who use both Linux and Windows.
I have to use both so I welcome it.
You would want to look for an R730, which can be had for not too much more. The x20 series was the “end of an era” and the x30 series was the beginning of the next era. Most importantly for this application, the x30s use DDR4 whereas the x20s use DDR3.
RAM bandwidth matters a lot for ML inference, and DDR4 offers roughly twice the bandwidth of DDR3 when you compare each generation’s typical speeds.
If you’re going to offload any part of these models to CPU, which you 99.99% will have to do for a model of this size with this class of hardware, skip the 20s and go to the 30s.
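The bandwidth point above can be sketched with back-of-envelope math. This is a rough illustration, not a benchmark: the memory speeds, channel counts, and model size below are assumed values for the sake of the example, and real systems achieve well under these theoretical peaks.

```python
# Back-of-envelope: peak memory bandwidth, which roughly bounds
# CPU-side token generation speed for large offloaded models.

def peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak in GB/s: transfer rate x 64-bit bus x channel count."""
    return mt_per_s * bus_bytes * channels / 1000

# Assumed configs: R720-class DDR3 at 1600 MT/s vs R730-class DDR4
# at 2400 MT/s, both with quad-channel memory controllers.
ddr3 = peak_bandwidth_gbs(1600, channels=4)  # 51.2 GB/s peak
ddr4 = peak_bandwidth_gbs(2400, channels=4)  # 76.8 GB/s peak

# CPU token generation is roughly bandwidth-bound: each token streams
# the offloaded weights through memory once. Model size is hypothetical
# (~70B parameters quantized to ~4 bits).
model_gb = 40
print(f"DDR3: ~{ddr3 / model_gb:.1f} tok/s upper bound")
print(f"DDR4: ~{ddr4 / model_gb:.1f} tok/s upper bound")
```

Either way the tokens-per-second ceiling is low, which is why the faster memory generation is worth the modest price difference.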
The Ford Mach-E is excellent. I have also heard great things about Kia/Hyundai, VW and Volvo EVs as well.
In 2016 I drove a Tesla Model S P85D and I was surprised at how crappy the interior was considering it was a six figure car. And I don’t mean minimalist, I mean poor quality.
Back then, Tesla was your only real option. Today, there’s a lot of great competition in the market.
I’ve had good luck recently with Gigabyte. I know it’s anecdotal, but my hope is that they are recovering.