[–] brucethemoose@lemmy.world 6 points 2 days ago* (last edited 2 days ago) (8 children)

> AI is a tool (sorry)

This should be a bumper sticker. Also, thanks for this, bookmarking 404, wish I had the means to subscribe.


My hope is that the "AI" craze culminates in a race to the bottom where we end up in a less terrible state: local models on people's phones, reaching out to reputable websites for queries and redirection.

And this would be way better for places like 404, as those local models would have to pull from individual sites and redirect users there.

[–] p03locke@lemmy.dbzer0.com 1 points 2 days ago (1 children)

> My hope is that the “AI” craze culminates in a race to the bottom where we end up in a less terrible state: local models on people’s phones, reaching out to reputable websites for queries and redirection.

We're already heading there. But it's not going to happen by sitting on your hands and waiting for the billionaires to hand you these local models on a silver platter. You honestly believe the overlords that own your phone will give you shit for free? They want you hooked on subscriptions that send all of your personal data and Social Security numbers to their huge databases until the day you die. And then they'll sell that data to your children and your grandchildren just to make even more profit.

You have to take it. You have to find it yourself. You found Lemmy. Good. So, go find other shit. Discover open source. Discover piracy. Discover Linux. Stay on top of it.

Google just killed uBlock Origin, but I'm using Firefox; the writing was on the wall at least a year ago.

[–] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

I mean, I run Nemotron and Qwen variants every day, in a couple of UIs, and am experimenting with Hunyuan and Falcon. You are preaching to the choir here :P

The NPU backends are not easy to write, though. They really need extensive support from the phone chip makers, but fortunately the HW makers are very interested in this exact future (and, specifically, in selling SoCs for people to use with it).

[–] p03locke@lemmy.dbzer0.com 1 points 1 day ago (1 children)

Have you used any good ComfyUI workflows specifically for chat LLMs?

[–] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Not specifically. Ultimately, ComfyUI would build prompts/API calls, which I tend to do in scripts.

I tend to use Mikupad or Open Web UI for more general testing.

There are some neat tools with 'lower level' integration into LLM engines, like SGLang, which leverages caching and constrained decoding to do things one can't do over standard APIs: https://docs.sglang.ai/frontend/frontend.html
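For a rough idea, here's a minimal sketch of SGLang's frontend language in the spirit of the linked docs (the port, question, and field name are placeholders; assumes a local SGLang server is already running):

```python
import sglang as sgl

@sgl.function
def qa(s, question):
    # Each += appends to the running prompt state; generation happens server-side
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=256))

# Point the frontend at a locally launched SGLang runtime
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = qa.run(question="What does KV cache reuse buy you?")
print(state["answer"])
```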

[–] p03locke@lemmy.dbzer0.com 1 points 13 hours ago (1 children)

ComfyUI is just a bunch of Python code tied into I/O nodes. I'm surprised there isn't a good set of nodes for SGLang yet.
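It really is just that: a custom node is a class with a declared I/O signature. A hypothetical chat node calling a local OpenAI-compatible server might look roughly like this (the class name, endpoint, and defaults are invented for illustration):

```python
import json
import urllib.request

class ChatCompletionNode:
    """Hypothetical ComfyUI node: sends a prompt to a local OpenAI-compatible server."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "prompt": ("STRING", {"multiline": True}),
            "endpoint": ("STRING", {"default": "http://localhost:8080/v1/chat/completions"}),
        }}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "chat"
    CATEGORY = "llm"

    def chat(self, prompt, endpoint):
        body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
        req = urllib.request.Request(endpoint, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)["choices"][0]["message"]["content"]
        return (reply,)

NODE_CLASS_MAPPINGS = {"ChatCompletionNode": ChatCompletionNode}
```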

[–] brucethemoose@lemmy.world 1 points 13 hours ago* (last edited 12 hours ago) (1 children)

SGLang is partly a scripting language for prompt building that leverages its caching/logprobs output to do stuff like filling in fields or branching on choices, so that kind of workflow is probably best done in SGLang itself. It also requires pretty beefy hardware for the model size (as opposed to backends like exllama or llama.cpp, which focus more on tight quantization and unbatched performance), so I suppose there's not a lot of interest from local tinkerers?
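The "filling in fields or branching" pattern looks something like this in the frontend language (a sketch along the lines of the docs linked above; the ticket fields and choices are made up):

```python
import sglang as sgl

@sgl.function
def triage(s, ticket):
    # Completion-style template; the shared prefix gets cached across calls
    s += "Support ticket: " + ticket + "\n"
    # Branching choice: the runtime scores each option via logprobs
    # instead of free-form generation
    s += "Category: " + sgl.select("category", choices=["billing", "bug", "feature request"]) + "\n"
    # Fill-in field: ordinary generation bounded by a stop string
    s += "Suggested reply: " + sgl.gen("reply", max_tokens=128, stop="\n")
```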

It would be cool, I guess, but ComfyUI does feel more geared toward diffusion. Image/video generation is more multi-model and benefits from dynamically loading/unloading/swapping all sorts of little submodels, LoRAs, and masks, applying them and piping them into each other.

LLM running is more monolithic: you have the one big model, maybe a text-embeddings model as part of the same server, and everything else is just string processing to build prompts, which one does linearly in Python or whatever. Stuff like CFG and purpose-built LoRAs does exist, but isn't used much.
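That linear string processing is usually nothing more than this, sketched here with the OpenAI Python client against a local engine (llama.cpp's llama-server and friends expose this OpenAI-style endpoint; the URL and model name are placeholders):

```python
from openai import OpenAI

# llama.cpp's llama-server, SGLang, vLLM, etc. all speak this API locally
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

history = [{"role": "system", "content": "You are a concise assistant."}]
for question in ["What is an NPU?", "How is it different from a GPU?"]:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="local", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # preserve chat history
    print(answer)
```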

[–] p03locke@lemmy.dbzer0.com 1 points 9 hours ago (1 children)

It's a shame, because ComfyUI can be so much more than just image generation. And just because there's a lot of string processing for LLMs doesn't mean that it isn't important to capture in an I/O interface, especially when it comes to preserving chat history. Save data, load data, ask new questions, etc.

ChatGPT is pretty damn powerful, I'll admit. But, all of its components need to be localized, especially since something like a Mixture of Experts model could be split down to base models and loaded/unloaded as necessary.

[–] brucethemoose@lemmy.world 1 points 8 hours ago* (last edited 8 hours ago)

> …especially since something like a Mixture of Experts model could be split down to base models and loaded/unloaded as necessary.

It doesn't work that way. All MoE experts are 'interleaved', and you need all of them loaded at once, for every token. Some API servers can hot-swap whole models, but it's not fast, and it's rarely done since LLMs are pretty 'generalized' and API servers tend to serve requests in parallel.
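To make the 'interleaved' point concrete, here's a stripped-down sketch of one MoE layer's routing (illustrative PyTorch, not any particular model's actual code):

```python
import torch

def moe_layer(x, gate, experts, k=2):
    """x: [tokens, dim]. The router picks top-k experts per token, per layer,
    so every expert must stay resident: any token may route to any of them."""
    scores = torch.softmax(gate(x), dim=-1)    # [tokens, num_experts]
    weights, idx = scores.topk(k, dim=-1)      # each token gets its own expert set
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for w, e in zip(weights[t], idx[t]):
            out[t] += w * experts[int(e)](x[t])  # can't swap experts out between tokens
    return out
```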

The closest thing to what you're describing is LoRAX, which hot-swaps LoRAs efficiently. But it needs an extremely specialized runtime derived from its associated paper, and it doesn't support quantization and some other features, so people tend not to use it: https://github.com/predibase/lorax

There is a good case for pure data processing, yeah... But it has little integration with LLMs themselves, especially with the API servers generally handling tokenizers/prompt formatting.

> But, all of its components need to be localized

They already are! Local LLM tooling and engines are great and super powerful compared to ChatGPT (which offers no caching, no raw completion, primitive sampling, hidden thinking, and so on).
