brick

joined 1 year ago
[–] brick@lemm.ee 4 points 6 months ago (2 children)

I’ve had good luck recently with Gigabyte. I know it’s anecdotal, but my hope is that they are recovering.

[–] brick@lemm.ee 0 points 6 months ago (1 child)

Exactly. It is a rebrand of hatchbacks. Which is fine because hatchbacks are great.

[–] brick@lemm.ee 7 points 7 months ago* (last edited 7 months ago) (1 child)

Lemmy is just bursting at the seams with miserable fucks like this.

[–] brick@lemm.ee 1 point 7 months ago

They are saying that because Boeing doesn’t make the engines.

[–] brick@lemm.ee 0 points 8 months ago

But I want it ALL NOW!!!!!!

[–] brick@lemm.ee 30 points 8 months ago (1 child)

The selling point for M365 Copilot is that it is a turnkey AI platform that does not use data input by its enterprise customers to train generally available AI models. This keeps their internal data from being output to randos using ChatGPT. OpenAI does use ChatGPT conversations to further train its models, so there is a major risk of data leakage.

Same situation with all other public LLMs. Microsoft’s investments in OpenAI aren’t really relevant in this situation.

[–] brick@lemm.ee -1 points 8 months ago* (last edited 8 months ago)

So sorry to interrupt your circlejerk about this guy’s opinion on 3D V-Cache technology with a tangentially related discussion about 3D V-Cache technology here on the technology community.

I fully understand the point you’re trying to make here, but just as you think my comments added nothing to the discussion, your replies to them added even less.

[–] brick@lemm.ee -4 points 8 months ago (2 children)

I was comparing the 7950X and the 7950X3D because those are the iterations that are available right now and, as I mentioned, what I have been personally comparing. I apologize if I wasn’t clear enough on that point.

My point was that the essence of the take, which I read to be, “CPUs with lower clocks but way more cache only offer major advantages in specific situations” is not particularly off base.

[–] brick@lemm.ee 8 points 9 months ago (1 child)

In your mind, do you really think that is the intention here? Seems more like a convenience for people who use both Linux and Windows.

I have to use both so I welcome it.

[–] brick@lemm.ee 5 points 9 months ago

You would want to look for an R730, which can be had for not too much more. The 20 series was the “end of an era” and the 30 series was the beginning of the next era. Most importantly for this application, R30s use DDR4 whereas R20s use DDR3.

RAM speed matters a lot for ML applications, and DDR4 gives roughly 1.5–2x the memory bandwidth of DDR3 at the speeds these servers run (e.g. DDR4-2400 vs DDR3-1600).

If you’re going to offload any part of these models to CPU, which you 99.99% will have to do for a model of this size with this class of hardware, skip the 20s and go to the 30s.
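The reason RAM speed dominates here: CPU token generation is mostly memory-bandwidth bound, since each generated token reads through all the offloaded weights. A rough back-of-envelope sketch (the channel counts, DIMM speeds, and model size below are illustrative assumptions, not measured numbers):

```python
# Back-of-envelope: tokens/sec upper bound ~= memory bandwidth / bytes
# read per token, since each token touches every offloaded weight once.
def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Optimistic bandwidth-bound estimate of generation speed."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 4-channel, single-socket theoretical peaks:
ddr3_bw = 4 * 12.8  # ~51 GB/s (DDR3-1600, R720-class)
ddr4_bw = 4 * 19.2  # ~77 GB/s (DDR4-2400, R730-class)

model_gb = 40.0  # e.g. a large model quantized down to ~40 GB

print(f"DDR3: ~{tokens_per_sec(ddr3_bw, model_gb):.1f} tok/s")
print(f"DDR4: ~{tokens_per_sec(ddr4_bw, model_gb):.1f} tok/s")
```

Either way it’s slow for a big model, but the DDR4 box gets you proportionally more tokens per second for the same money class.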
