this post was submitted on 16 Apr 2026
35 points (97.3% liked)
Homelab
you are viewing a single comment's thread
view the rest of the comments
You're clearly an expert.
This is a scam and you know it.
The first joke was running an LLM for a company on a 5070 Ti... 16 GB is nowhere near enough VRAM for a production-scale model. Even two 3090s linked together (48 GB combined) would have been far more plausible.
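For anyone who wants to sanity-check the VRAM claim, here's a rough back-of-envelope sketch. The model sizes, quantization levels, and the ~20% overhead figure are my own assumptions for illustration, not numbers from this thread:

```python
# Rough back-of-envelope VRAM estimate for LLM inference.
# Assumption (mine, not from the thread): weights dominate usage,
# plus ~20% overhead for KV cache, activations, and CUDA context.

def vram_gb(params_billions: float, bytes_per_param: float, overhead: float = 0.2) -> float:
    """Approximate VRAM (GB) needed to serve a model."""
    weights_gb = params_billions * bytes_per_param  # 1B params at 1 byte/param ~= 1 GB
    return weights_gb * (1.0 + overhead)

cards = {"RTX 5070 Ti (16 GB)": 16, "2x RTX 3090 (48 GB)": 48}
models = [
    ("70B @ FP16", 70, 2.0),   # 2 bytes per parameter
    ("70B @ 4-bit", 70, 0.5),  # ~0.5 bytes per parameter
    ("8B @ 4-bit", 8, 0.5),
]

for name, params, bpp in models:
    need = vram_gb(params, bpp)
    fits = ", ".join(card for card, gb in cards.items() if gb >= need)
    print(f"{name}: ~{need:.0f} GB needed; fits on: {fits or 'neither'}")
```

By this math a 4-bit 70B lands around 42 GB, which is exactly why the 2x 3090 pairing is the classic homelab answer, while 16 GB rules out anything but small models.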
I'm not an expert at all; I've never built out an enterprise system before, which is why I wanted to verify.
I know you can run really small models on some 16 GB VRAM cards, so I wasn't sure whether people use tiny models for very basic automation systems or something like that. I haven't heard of that being viable, but I wanted to sanity-check myself.
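For what it's worth, a small quantized model on a 16 GB card is viable for exactly that kind of basic automation. A minimal sketch, assuming llama-cpp-python is installed and you have a 4-bit GGUF of a small model on disk (the file path and prompt below are hypothetical, just to show the shape of it):

```python
from llama_cpp import Llama

# An 8B model quantized to 4-bit is roughly 5 GB on disk and fits
# comfortably in 16 GB of VRAM with room for context.
llm = Llama(
    model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,
)

out = llm("Classify this ticket as 'billing' or 'technical': ...", max_tokens=32)
print(out["choices"][0]["text"])
```

Whether that's good enough depends entirely on the task; simple classification or routing is a very different ask than a general-purpose company chatbot.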