this post was submitted on 01 Jan 2024
14 points (75.0% liked)

Buildapc


I have been teaching myself Linux on really old hardware. I am looking into building a new system so I can learn SDXL and maybe mess around a little with LLMs.

I have been reading as much as I can, but I get a lot of conflicting info. Ideally I would like a build I can get started with that isn't at the bare minimum, just the best value at a realistic starting point. I'm willing to save up more if it saves me from waiting forever on a maxed-out PC, and I'd like options to expand easily as I go. I don't mind used hardware. I have also read a bit about cheap enterprise hardware being an option that expands easily?

Any help would be awesome. Thank you in advance.

P.S. Happy New Year! Wishing everyone all the best. After the past few years, we could all use a better one.

top 4 comments
[–] MightEnlightenYou@lemmy.world 13 points 10 months ago (1 children)

I run a lot of LLMs locally, as well as doing image generation locally with Stable Diffusion.

The most important factor is the GPU. If you're gonna do AI stuff on your GPU, it basically has to be a CUDA (Nvidia) GPU. You'll get the most bang for the buck with a 3090 Ti (the amount of VRAM matters a lot). And get at least 64 GB of system RAM.
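
Once the card is in, a quick sanity check that the AI tooling can actually see it looks something like this (just a sketch, assuming you've installed PyTorch with CUDA support):

```python
import torch

# Confirms the Nvidia driver and CUDA runtime are visible to PyTorch
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # VRAM is the limiting factor for SDXL and local LLMs
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"VRAM: {vram_gb:.1f} GB")
```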

If you get this you'll be set for a year until you learn enough to want better hardware.

A lot of people try to buy their way out of a lack of knowledge and skill with this stuff; don't do that. I'm able to get better results with 7B models than many people get with 70B models.

Get LM Studio for the LLMs and get A1111 (or ComfyUI or Fooocus) for image generation.
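
Once you're comfortable in the LM Studio chat UI, you can also start its local server and hit it from a script through its OpenAI-compatible endpoint. A rough sketch (1234 is LM Studio's default port, and it serves whichever model you have loaded; adjust as needed):

```python
import requests

# LM Studio exposes an OpenAI-compatible API when its local server is running
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": "Explain VRAM in one sentence."}],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```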

[–] dm_me_your_boobs@lemm.ee 1 points 10 months ago (1 children)

How is ComfyUI these days? I use a similar node-based setup for my home automation and really liked the idea of using it for image gen. But I also kinda wanna just type and go for image gen, so StableDiffusionWebUI has been my go-to.

[–] MightEnlightenYou@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

I'd say that ComfyUI is superior in most ways (including speed and features), but I know A1111 much better than ComfyUI, so I only switch to ComfyUI when it can do something that A1111 can't.

[–] Starbuck@lemmy.world 3 points 10 months ago

I have an old Jetson Nano that's pretty neat for getting into ML. It's basically a Raspberry Pi with a GPU strapped to it. I've had it for a few years, so you could probably get one cheap.

Any bigger than that and I would say just look into paying for Google Colab. https://colab.google

You aren't going to want to buy dedicated hardware for local training just yet. Learn the skills to work with big hardware today; no need to wait. Only buy when you know what you need.
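
If you go the Colab route, you can start playing with SDXL right away using the diffusers library. A minimal sketch (assumes a CUDA runtime and pulls the stabilityai/stable-diffusion-xl-base-1.0 weights from Hugging Face; the prompt is just an example):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model in half precision so it fits in a typical Colab GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Generate a single image from a text prompt and save it
image = pipe(prompt="a cozy home lab with an old PC, photorealistic").images[0]
image.save("test.png")
```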