this post was submitted on 30 Jan 2026
71 points (90.8% liked)

Selfhosted

[–] scrubbles@poptalk.scrubbles.tech 5 points 1 day ago (2 children)

I am, and I do. I have no qualms with AI if I host it myself. I let it have read access to some things; I have one hooked up to my HomeAssistant that can do things like enable lighting or turn on devices. It's all gated: I control which items I expose and which I don't. I personally don't want it reading my emails, but since I host it, that's really not a big deal at all. I have one that gets the status of my servers, reads the metrics, and reports to me in the morning if there were any anomalies.
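The "gated" setup described above can be sketched roughly like this: the assistant only ever sees an explicit allowlist of exposed entities, so anything not on the list simply doesn't exist to it. This is an illustrative sketch, not a real HomeAssistant or assistant API; the entity IDs and the `gate`/`handle_tool_call` names are made up.

```python
# Hypothetical allowlist gate: the assistant can only touch entities
# that are explicitly exposed, and only with the actions granted here.
EXPOSED_ENTITIES = {
    "light.living_room": {"read", "write"},   # can report state and toggle
    "switch.desk_lamp": {"read", "write"},
    "sensor.server_rack_temp": {"read"},      # read-only: report, never act
}

def gate(entity_id: str, action: str) -> bool:
    """True only if the entity is exposed AND this action is allowed on it."""
    return action in EXPOSED_ENTITIES.get(entity_id, set())

def handle_tool_call(entity_id: str, action: str) -> str:
    """Entry point for the assistant's tool calls; denies anything ungated."""
    if not gate(entity_id, action):
        return f"denied: {entity_id} is not exposed for '{action}'"
    # ...here you would forward the call to the real HomeAssistant API...
    return f"ok: {action} on {entity_id}"
```

Unexposed entities (email, cameras, whatever you choose) never even appear in the allowlist, so a denial is the default rather than something you have to remember to configure.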

I'm really sick of the "AI is just bad because AI is bad" attitude. It can be incredibly useful - IF you know its limitations and understand what's wrong with it. I don't like corporate AI at scale for moral reasons, but running it at home has been incredibly helpful. I don't trust it to do whatever it wants; that would be insane. I do, however, let it have read permissions on services to help me sort through piles of information that I can't manage by myself. (And I know you keep harping on it, but MCP servers and APIs also have permission structures - even if it did attempt to write something, my other services would block it and it would be reported.) When I do allow write access, it's when I'm working directly with it, and I hit a button each time it attempts to write. Think spinning up or down containers on my cluster while I'm testing, or collecting info from the internet.
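The two layers described here - read-only by default, and every attempted write held for a human button press - could look something like the sketch below. This is not any specific MCP SDK; the `PermissionedService` class and its method names are invented for illustration.

```python
# Illustrative permission wrapper: reads always succeed, writes are either
# rejected outright (read-only service) or queued for manual approval.
from dataclasses import dataclass, field

@dataclass
class PermissionedService:
    name: str
    allow_write: bool = False          # read-only unless explicitly enabled
    pending_writes: list = field(default_factory=list)

    def read(self, query: str) -> str:
        """Reads are always permitted."""
        return f"{self.name}: results for {query!r}"

    def request_write(self, op: str) -> str:
        """Writes are blocked and reported, or queued for a human to approve."""
        if not self.allow_write:
            return f"{self.name}: write rejected and reported"
        self.pending_writes.append(op)  # held until someone hits the button
        return f"{self.name}: '{op}' queued for manual approval"

    def approve_next(self) -> str:
        """The 'button press': a human releases the oldest pending write."""
        op = self.pending_writes.pop(0)
        return f"{self.name}: executing '{op}'"
```

So a metrics service stays read-only forever, while a cluster service you're actively working with gets `allow_write=True` - and even then nothing executes until you approve it.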

AI, LLMs, agentic AI - they're tools. It's not the hype every AI bro thinks it is, but it is another tool in the toolbelt. Completely ignoring it is on par with ignoring Photoshop when it came out, or WYSIWYG editors when they arrived for designing UIs.

[–] non_burglar@lemmy.world 4 points 1 day ago

Fair enough.

I am trying to be careful not to disparage the technology, it's not the tech, it's the mad rush to AI everything that's the problem. And in our space, it is causing folks who normally think critically to abandon basic security and stability concerns.

It wasn't my intention to criticize your choice. Have a good one.

[–] Armillarian@pawb.social 1 points 1 day ago (1 children)

I think it would be better if their GitHub mentioned the minimum token count required to self-host this. I don't think it will ever reach something usable for the average self-hoster.

Based on your statement, I think most of your experience comes from corporate AI usage... which deploys multi-agent systems hosted in large data centers.

I do self-host my own, and I even tried my hand at building something like this myself. It runs pretty well; I'm able to have it integrate with HomeAssistant and kubectl. It can be done with consumer GPUs - I have a 4000 and it runs fine. You don't get as much context, but it's about minimizing what the LLM needs to know while calling agents. You have one LLM context that's running a todo list; it starts a new one that is in charge of step 1, which spins off more contexts for each subtask, and so on. It's not that each agent needs its own GPU; it's that each agent needs its own context.
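The context-splitting idea above can be sketched in a few lines: a small "planner" context holds only the todo list, and each step runs in a fresh context that sees just its own subtask. `llm()` here is a placeholder stand-in for any local model call, not a real library; the point is that no single context ever has to hold the whole job.

```python
# Sketch of hierarchical context delegation on limited-VRAM hardware:
# one planner context, one fresh minimal context per subtask.

def llm(context: list[str]) -> str:
    # stand-in for a real local-model call; just echoes its final line
    return f"done: {context[-1]}"

def run_plan(goal: str, steps: list[str]) -> list[str]:
    # the planner context holds only the goal and the todo list
    planner = [f"goal: {goal}"] + [f"todo: {s}" for s in steps]
    results = []
    for step in steps:
        # each subtask gets a fresh, minimal context -- not the
        # planner's full history, so context stays small per call
        sub_context = [f"goal: {goal}", f"task: {step}"]
        results.append(llm(sub_context))
    planner.append(f"completed {len(results)} steps")
    return results
```

Each `llm()` call only ever sees two lines of context here, regardless of how many steps the overall plan has - that's the trick that makes a single consumer GPU workable.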