Open source models? https://huggingface.co/ + https://lmstudio.ai/
In the long run, deep-pocketed companies will not have any distinct advantage in developing core models, but they will always have an advantage in computing infrastructure (both for training and for serving queries) and in access to content (either by owning major content sources like social media properties or by holding exclusive licenses to key sources).
The useful ones are still provided by big companies because the rest of us can't afford the hardware to train them.
AI won't be "democratized" anytime soon the way the rest of the computer software world has been.
We have computing power in our pockets a million times more powerful than what we used to send man to the moon; why do you think we'll never have enough power?
I have already pointed out https://eurollm.io/
The EuroLLM project includes Instituto Superior Técnico, the University of Edinburgh, Instituto de Telecomunicações, Université Paris-Saclay, Unbabel, Sorbonne University, Naver Labs, and the University of Amsterdam. Together they created EuroLLM-22B, a multilingual AI model supporting all 24 official EU languages. Developed with support from Horizon Europe, the European Research Council, and EuroHPC, this open-source LLM aims to enhance Europe’s digital sovereignty and foster AI innovation. Trained on the MareNostrum 5 supercomputer, EuroLLM outperforms similar-sized models. It is fully open source and available via Hugging Face.
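For anyone wondering what "fully open source and available via Hugging Face" looks like in practice, here's a minimal sketch using the transformers library. The repo id is my assumption based on the smaller published checkpoints; check the project's Hugging Face page for the exact name of the 22B model.

```python
# Minimal sketch: loading a EuroLLM checkpoint from Hugging Face with transformers.
# The repo id below is an assumption -- look up the exact checkpoint name
# (9B, 22B, instruct vs base) on the project's Hugging Face page.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "utter-project/EuroLLM-9B-Instruct"  # assumed id for one of the smaller checkpoints

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # spreads layers over GPU/CPU

prompt = "Translate to Portuguese: open-weight models can be run locally by anyone."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```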
As long as some people don't want to rely on big tech, there will be people pushing for independence, just like Linux users such as myself.
There are 700B+ parameter open weight models now. Frontier models are in the trillions.
And even that model apparently took a supercomputer to train. I don't have a supercomputer, so I can't train my own models like I can compile my own software. This is not comparable to running Linux, where you can just compile your own kernel or even your whole operating system (former Gentoo user here).
I've tried running the models my 8 GB card can handle. They're OK for a quick question, but they won't be doing anything useful for me.
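(For anyone else with a small card, here's the back-of-the-envelope math I use, with an assumed overhead figure: the weights take roughly parameters × bytes-per-weight, plus a couple of GB of headroom for the KV cache and the runtime.)

```python
# Back-of-the-envelope check for whether a quantized model fits on a given GPU.
# The overhead figure is a rough assumption for KV cache, activations and the runtime.
def fits_in_vram(params_billion: float, bits_per_weight: int, vram_gb: float,
                 overhead_gb: float = 2.0) -> bool:
    weight_gb = params_billion * bits_per_weight / 8  # e.g. 7B at 4-bit ~= 3.5 GB of weights
    return weight_gb + overhead_gb <= vram_gb

# On an 8 GB card:
print(fits_in_vram(7, 4, 8))   # True  -- a 7B model at 4-bit fits with room to spare
print(fits_in_vram(13, 4, 8))  # False -- ~6.5 GB of weights plus overhead is too much
print(fits_in_vram(26, 4, 8))  # False -- ~13 GB of weights needs a bigger card or CPU offload
```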
Not the person you replied to, but I have thoughts on this point in particular:
Because companies are using so much computing power that it requires as much electricity as a city. Or you can take your pocket computing resources and see how long it takes to train an LLM.
So how would I create such an "Open Source" model? They don't share the data used to create them, do they? Let's not even get started on how much computing power I would need to train one of those things. These self-hosted models solve nothing except some data privacy issues. Sure, you no longer send all your code to a shady AI company, but you are still 100% dependent on them sharing their models.
No, and going by the OSI definition of "open source AI" they don't have to, acknowledging that the training material is often copyrighted and can't be shared.
It's a strange definition of "open source", one where you're not actually allowed to see the source.
https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html
There is also a move toward synthetic data and human-generated training material, so we will have to see where training data goes copyright-wise in the future.
Do you build your own Linux from scratch? If so why would you assume you can build an LLM from scratch?
It's mad easy to build your own Linux from scratch in comparison to building an LLM. You can have your own distro running in like an hour. With Buildroot you can have it in even less time than that.
I have no idea what you're talking about
... Then why did you use it as an example?
Because the average person is not building Linux from scratch nor would they know how to
The average person wouldn't be building an open source LLM either. I don't think I follow. I was just saying that your comparison wasn't going to land at all, given how easy it actually is to build Linux and a full Linux distribution.
Yeah that's why I'm saying:
The OP is basically saying it's not really open source unless I can personally build it! Which I am saying I don't think is a requirement of open source software (your personal ability to compile software does not detract from its open-sourceness).
tbh I wouldn't have any idea how to build either; they are way above my skill level. I have no idea how to make a Linux distro either, but I'm certain most are open source.
https://unsloth.ai/docs/new/studio
This was only recently released. Maybe in the future we'll have training material compressed down into an open source format that anyone with the skill and knowledge can use, along with different 'distro' releases of LLMs (see the rough fine-tuning sketch below). We already have tons of smaller models, especially from European universities and others.
https://digital-strategy.ec.europa.eu/en/policies/ai-factories
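To make the 'distro' analogy a bit more concrete (my own sketch, not from the links above): most people would not pretrain anything from scratch; they'd take an existing open-weight base model and fine-tune a small adapter on top of it with something like LoRA. Roughly, with transformers + peft, where the base model id is just a placeholder:

```python
# Rough LoRA fine-tuning sketch: "remix" an existing open-weight model instead of
# pretraining one. The base model id is a placeholder, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "Qwen/Qwen2.5-0.5B"  # placeholder: any small open-weight causal LM

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA freezes the base weights and only trains small low-rank update matrices.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here you'd feed tokenized examples to transformers.Trainer (or trl's SFTTrainer)
# and publish just the adapter weights -- a few hundred MB -- as your "distro".
```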
We are only like 3-4 years into AI going mainstream, if that. Afaik the heat death of the universe is at least 1000 years away, so we have lots of time to work on and improve them. I can only wonder where they will be in 100 years, so I try not to make any damning Facebook-boomer-tier statements about the future.
Look at the state of software today. Every corporation and government is blindly sticking with Microsoft, Google, or similar. Even though there are some ideas about moving away and embracing OSS, I doubt it will happen with governments, even less with corps. I foresee something similar happening with AI in the future.
Are you sure?
https://www.rfi.fr/en/france/20260417-france-to-remove-windows-from-government-computers-in-sovereignty-push
https://tuta.com/blog/countries-ditching-microsoft-choosing-linux-digital-sovereignty
It does not take much for things to change, you might like this:
We've Hit A Wall With Transport. Here's Why | Black Swans 3 | If You're Listening
https://youtu.be/o1R6Aq19A6Y?t=1281
Great, all we need is a few decades and a world superpower becoming world-threateningly corrupt
Sure, but it's mostly been that way for a while. The players on the board shift, but it's almost always Java, or Microsoft's flavor of the decade, or classic C, or Objective-C, or Swift, or whatever. Are you arguing that big tech will lock down their documentation on APIs and proprietary languages behind their own AIs, so that developers are forced to "vibe code" them through AI interaction only, and open source models will be unable to train on them?
For which you still need massive amounts of memory and compute to run reliably. That, and the fact that chatbots and agents nowadays rely on all sorts of proprietary customizations, outside of the realm of LLMs, to perform certain tasks.
The gap will take decades to close, if it ever does.
2026's average gaming PC has massive amounts of memory and compute, apparently
lol there are plenty of open source models in the top 100 with multiple SOTA models released in the last few months alone
There are also smaller LLMs being made, like https://eurollm.io/, which excel in their own ways
Funny that just came up: https://discourse.ubuntu.com/t/the-future-of-ai-in-ubuntu/81130?=0
😁
Any model that can run on 16 GB or less is not going to be anywhere close, in real-world tasks, to any cloud-based model. It just cannot be. There are people out there running Qwen on a Mac Studio with 96 GB, and it falls short of cloud-based models in both performance and speed.
The top 100 of what, exactly? Many blended benchmark results are notoriously biased, and LLMs "cheat" on benchmarks at every single opportunity, so it is still hard to tell, outside of real-world tasks and speed, which models are actually better than others.
But regardless, the main point of the gap is resources. Even if the average gaming computer was really enough to run meaningful models, the vast majority of the world wouldn’t have access to it, even more so in this day and age, where a single RAM stick couldn’t be bought with a whole monthly salary in most parts of the world.
What makes you think we won't have the resources in the future?
Well, you can compare Gemma 4 running in LM Studio on an average gaming PC to ChatGPT 3.5 and tell me yourself. Or is your benchmark purely a snapshot of open source models today vs cloud models today?
For reference, Gemma 4 is 26 billion parameters, while GPT-3 is thought to be over 175 billion and of course had no optimisations like MoE; it used every one of its parameters on every single question, so it was rather slow as well.
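Rough numbers to show why parameter count alone understates the difference (all figures are my own illustrations, using the usual ~2 FLOPs per parameter per generated token rule of thumb):

```python
# Illustrative numbers only: memory footprint and per-token compute for a dense fp16
# model vs a smaller quantized model vs a hypothetical MoE model.
def footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * bits_per_weight / 8  # weights only, in GB

def tflops_per_token(active_params_billion: float) -> float:
    return 2 * active_params_billion / 1000  # ~2 FLOPs per active parameter per token

print(footprint_gb(175, 16), tflops_per_token(175))  # GPT-3-scale dense fp16: ~350 GB, ~0.35 TFLOPs/token
print(footprint_gb(26, 4),   tflops_per_token(26))   # 26B at 4-bit: ~13 GB, ~0.05 TFLOPs/token
print(footprint_gb(120, 4),  tflops_per_token(5))    # hypothetical MoE: ~60 GB on disk, only ~5B active per token
```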
We also know that there is no slowdown in the push for optimisations; DeepSeek's initial release was the first big driver of the idea that you don't have to scale up using hardware alone.
https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/
They're also pushing Chinese native chips from Huawei, trying to diversify away from Nvidia holding the crown.
The problem I've got is that you all have a god-of-the-gaps argument. The conversation I was having 3 years ago was different from 2 years ago, which was different from 1 year ago. I was told AI could never make songs that were good enough, then suddenly people were worried they couldn't tell the difference; then they said it could never do movies, and now apparently not only is it good enough, it's hilarious:
https://www.youtube.com/watch?v=fgHn7PI55J4
The open source LLMs we have today are incredible, and in the last few months we've had Qwen, GLM, Nemotron/Nvidia, Mistral, Google, and heaps of others released. It feels like you're just looking for a reason to be dour and pessimistic, but that's just me.
Anyway, I'm off to sleep, have a good one :)
And I guess the problem I have with you is that you seem to think you can get results with 16 GB that are competitive with models running on a Blackwell 6000 with 96 GB, while ignoring the fact that the vast majority of people in the world are running GPUs with 4 to 8 GB of VRAM, if they have access to GPUs at all.
That's the gap. Most people don't have the kind of money you think they do, and even those who do will never achieve the same results as with cloud models, because if there's a state-of-the-art optimization that makes models 10 times smaller, cloud models will just become 10 times bigger with that same advantage. It's pretty simple.