I've heard this claim from several people, but you're the lucky one who finally prompted me to gather some references to refute it.
First, this argument originated with first-generation microkernels, and in particular MINIX, which, as a teaching OS, never tried to play the benchmark game. It has been repeated like dogma through several generations of microkernels that have, in the interim, largely erased the performance lead monolithic kernels once had. One paper notes that, once the working code exceeds the L2 cache size, the monolithic structure offers only a marginal advantage. A second paper, running benchmarks of L^4^Linux against Linux, concluded that the microkernel penalty was only about 5%-10% for applications compared with the monolithic Linux kernel.
That is not MUCH slower, and indeed, unless you're doing HPC applications, it's close enough to be unnoticeable.
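To make the scale of that overhead a bit more concrete (this is my own illustration, not a benchmark from either paper), here's a rough micro-benchmark one could run to get a feel for per-call kernel entry/exit cost, which is the kind of crossing a microkernel multiplies through extra IPC hops:

```c
/*
 * Rough sketch (assumes Linux + glibc): times a tight loop of raw getpid()
 * syscalls to estimate per-call kernel entry/exit cost. The microkernel
 * penalty discussed above comes from extra crossings like these on every
 * service request.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iterations = 1000000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; i++)
        (void)syscall(SYS_getpid);   /* raw syscall, avoids libc caching */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("~%.0f ns per syscall\n", ns / iterations);
    return 0;
}
```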
Edit: I was originally going to omit this, as it's propaganda from a vested interest and includes no concrete numbers, but this blog entry from a product manager at QNX specifically mentions using microkernels in HPC problem spaces, which I thought was interesting, so I'm including it after the fact.
Indeed, first-generation microkernels were so bad that Jochen Liedtke created L3 out of frustration, "to show how it's done". While it was faster than existing microkernels, it was still slow.
I'll mark quotes from the paper with double quotes.
Wait, what? Co-located in-kernel? So, like a loadable module?
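That's the comparison I'm making, anyway: a regular Linux loadable module is just code inserted into the running kernel's address space, along the lines of this minimal sketch (my own example, not from the paper):

```c
/* Minimal loadable kernel module sketch (my own example, not from the paper):
 * once insmod'ed, this code runs in the kernel's address space with full
 * kernel privileges, which is what "co-located in-kernel" evokes for me. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Trivial example of code co-located with the kernel");

static int __init colo_init(void)
{
    pr_info("example module: now running inside the kernel\n");
    return 0;
}

static void __exit colo_exit(void)
{
    pr_info("example module: unloading\n");
}

module_init(colo_init);
module_exit(colo_exit);
```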
Right now I've stopped at the end of the second page of this paper. Maybe I'll continue later.
Will read.