Local AI is a great option to look into, but I can't imagine that's going to go well... 16GB of VRAM is going to limit you to very small models and a small context window. For a law firm, you're going to want the AI reading lots of documents, so lots of context (rough math in the sketch below). Maybe it will be fine depending on how the 15-20 people access it, but I'm doubtful.
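For a rough sense of why 16GB gets tight, here's a back-of-envelope sketch. The layer/head/context numbers are assumptions for a Llama-3-8B-class model served with 4-bit weights and an fp16 KV cache (OP didn't specify a model), and the 1.5 GiB overhead figure is a guess:

```python
# Back-of-envelope VRAM estimate for a local LLM: weights + KV cache.
# Assumed config: Llama-3-8B-class model (32 layers, 8 KV heads,
# head_dim 128, grouped-query attention), Q4 weights, fp16 KV cache.
# Swap in your own model's config to re-run the math.

GIB = 1024 ** 3

def weights_gib(n_params: float, bytes_per_param: float) -> float:
    """Memory for model weights, e.g. 0.5 bytes/param for 4-bit quant."""
    return n_params * bytes_per_param / GIB

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 context_tokens: int, bytes_per_elem: int = 2) -> float:
    """KV cache: 2 tensors (K and V) per layer, per token."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * context_tokens / GIB

vram = 16.0                                         # 5070 Ti
w = weights_gib(8e9, 0.5)                           # 8B model at Q4: ~3.7 GiB
kv_per_session = kv_cache_gib(32, 8, 128, 32_768)   # 32k context: ~4.0 GiB
concurrent = (vram - w - 1.5) / kv_per_session      # ~1.5 GiB overhead (guess)

print(f"weights ~{w:.1f} GiB, KV cache per 32k session ~{kv_per_session:.1f} GiB")
print(f"~{concurrent:.1f} concurrent long-context sessions on 16 GiB")
```

Under those assumptions the card fits the weights easily, but every long-context session eats ~4 GiB of KV cache, so you get roughly two or three concurrent 32k-token sessions. That's the gap between "works in a demo" and "15-20 users".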
Is this machine just a proof of concept to start putting together a process and testing the waters? I wouldn't call total bullshit immediately, but I'd expect you'll eventually find that you need much more VRAM and probably a heavier development lift to integrate with n8n.
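To be fair, the n8n plumbing itself is usually just an HTTP Request node pointed at a local inference server; the lift is everything around it (getting the documents in, chunking, permissions). Here's a minimal sketch of what that call looks like, assuming Ollama as the local server since OP never named a serving stack; the model name and prompt are placeholders:

```python
# Minimal sketch of the request an n8n HTTP Request node would make to a
# local model server. Assumes Ollama's /api/generate endpoint on its
# default port; treat the stack choice as illustrative, not OP's setup.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:8b",   # placeholder local model name
        "prompt": "Summarize this contract clause: ...",
        "stream": False,        # return one JSON body instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```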
That's what I was thinking. They were saying that a 5070 Ti would be good enough for "whatever AI automations the firm wants to build," which is where I was calling BS.
Then yeah, that's for sure BS. For getting started and testing a PoC, it is "reasonable" depending on a lot of factors, but there's almost no way that a 5070 Ti will be a permanent production solution.