I like a very small amount of RGB.
I didn't always; I wanted no RGB at all, but the ONLY GPU I could find had just a smidge of RGB in the logo (an MSI 5060 Ti of some sort), and I like it as a highlight.
Bro, how much did DeSantis pay to get this angle on the header?
General Kenobi
(I can't help)
That's a shower thought of all time.
For simple productivity like Copilot or text gen like ChatGPT?
It absolutely is doable on a local GPU.
Source: I do it.
Sure, I can't do auto-running simulations to find new drugs or do protein sequencing or whatever. But it helps me code. It helps me digest software manuals. That's honestly all I want.
Also, massive compute projects like the @home projects are good, right?
Local LLMs run fine on a 5-year-old GPU, a 3060 12 GB. I'm getting performance on par with cloud-hosted models. I'm upgrading to a 5060 Ti just because I wanted to play with image gen.
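If anyone's curious what that looks like in practice, here's a minimal sketch of the kind of setup I mean, using llama-cpp-python with a quantized GGUF model offloaded to the GPU. The model file and parameters are placeholders, not a specific recommendation, so swap in whatever you've actually downloaded.

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder; point it at any quantized model you have.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=4096,        # context window size
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "What does `grep -rnw` do?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

An 8B model at 4-bit quantization is only around 5 GB of weights, so it fits in 12 GB of VRAM with room to spare for the context, which is why responses come back in seconds.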
Whack. I just set up a Forgejo too.
Which is funny since that does solve a lot of the problems.
If it's completely open source at least.
Like, open-source data sets and a model that can be run locally mean it's not trained on stolen data and it's not spying on people for more data.
And if it runs locally on a GPU, it's no worse for the environment than gaming. Really, the big problem with data center compute is the infrastructure needed to shuttle all that data around.
Weird. There used to be screenshot receipts I saw years ago.
Maybe she scrubbed it and turned over a new leaf? I hope so, at least, because B'Elanna was my favorite character.
That's crazy.
Anyways I'm gonna pitch never buying a Samsung phone again to the HR people if this comes true.
I am a fan of LLMs and what they can do, and as such have a server specifically for running AI models. However, I've been reading "Atlas of AI" by Kate Crawford and you're right. So much of the data that they're trained on is inherently harmful or was taken without consent. Even in the more ethical data sets it's probably not great considering the sheer quantity of data needed to make even a simple LLM.
I still like using it for simple code generation (this is just a hobby to me, so vibe coding isn't a problem in my scenario) and corporate tone policing. And I tell people nonstop that it's worthless outside of these use cases and maybe as a search engine, though I recommend Wikipedia as a better starting point almost every time.
I was honestly impressed with the speed and accuracy I was getting with DeepSeek, Llama, and Gemma on my 1660 Ti.
$100 used, and responses came back in seconds.
Fully agree. I tried to make the SC work and wrote off a lot of it as "I'm just not used to it", but it really is asking a lot. In its defence, it was a first-run product. The fact that it's still this usable, and this weird, is impressive enough to me. But it's better as a piece of gaming history than as a good product. It was just a good try.
I also agree that the Steam Deck controls are actually good. I want an SC2 that's just a Steam Deck without the screen or the computer.
So I guess the opposite of the Steam Brick.
I'd gladly pay $100 for a Steam Deck-like control scheme for my desktop. Rechargeable batteries and a Linux-first design would be awesome. I don't mind using cables all the time, but I would like better wireless options for Linux gamepads (though to be fair, I haven't tried connecting a wireless controller to a Linux box in five years).