Rewrite the entire kernel exclusively in Rust!
-hehehe-
And that's how WW3 started..!
There is a separate kernel being written entirely in Rust from scratch that might interest you. I'm not sure if it's the main one, but https://github.com/asterinas/asterinas is the first one that came up when I searched.
By the tone of your post you might just want to watch the world burn, in which case I'd raise an issue in that repo saying "Rewrite in C++ for compatibility with a wider variety of CPU archs" ;)
I'm of the opinion that a full rewrite in Rust will eventually happen, but they need to be cautious and not risk alienating developers à la Windows Mobile, so right now it's still being done in pieces. I'm also aware that many of the devs who cut their teeth on the kernel's C code like it as it is and resist all change, and this causes lots of arguments.
Looking at that link, I'm not liking the MPL.
I'm not sure whether this should be a "standard", but we need a Linux distribution where the user never has to touch the command line. Such a distro would be useful to new users who don't want to learn command-line commands.
We also need a good app store where users can download and install software in a reasonably safe and easy way.
@gandalf_der_12te @original_reader
Linux Mint or some kind of Ubuntu flavour is the go-to, preferably the LTS versions: for Ubuntu that's 24.04, for Mint it's 22. You'd only ever need the command line for one short line, and only in 2029.
So for the next few years you don't need to touch the command line.
I really don't understand this. I put a fairly popular Linux distro on my son's computer and never needed to touch the command line. I update it by command line only because I think it's easier.
Sure, you may run into driver scenarios or things like that from time to time, but using supported hardware would never present that issue. And Windows has just as many random "gotchas".
I try to avoid using the command line as much as possible, but it still crops up from time to time.
Back when I used windows, I would legitimately never touch the command line. I wouldn't even know how to interact with it.
We're not quite there with Linux, but we're getting closer!
I try to avoid using the command line as much as possible
Why would you do that?
I think there are some that are getting pretty close to this. Like SteamOS (although not a traditional DE) and Mint.
Ubuntu as well. I wish I could say OpenSuse...
Mint is pretty good, but I found the update center GUI app would always fail to update things like Firefox with some mirror error (regardless of whether you told it to use that mirror or not). It happened on my old desktop (now my dad's main computer), my LG laptop, and my used HP EliteDesk G4. Running "sudo apt update" + "sudo apt upgrade" + Y (to confirm) on the command line was 10x easier and just worked. I do feel better/safer now that they use Linux for internet browsing instead of Windows, too.
Why do people keep saying this? If you don't want to use the command line then don't.
But there is no good reason to say people shouldn't. It's always the best way to get across what needs to be done and have the person execute it.
The Fedora laptop I have been using for the past year has never needed the command line.
On my desktop I use Arch. I use the command line because I know it and it makes sense.
It's sad people see it as a negative when it is really useful. But as of today you can get by without it.
It’s always the best way to get across what needs to be done and have the person execute it.
Sigh. If you want to use the command line, great. Nobody is stopping you.
For those of us who don't want to use the command line (most regular users) there should be an option not to, even in Linux.
It's sad people see it as a negative when it is really useful.
It's even sadder seeing people lose sight of their humanity when praising the command line while ignoring all of its negatives.
lose sight of their humanity
Ok this is now a stupid conversation. Really? Humanity?
Look, you can either follow a flowchart of a dozen different things to click on to get information about your Thunderbolt device, or type boltctl list
Do you want me to create screenshots of every step of the way to use a GUI, or just type 12 characters? That is why it is useful. It is easy to explain and easy to ask someone to do. Then they can copy and paste a response, instead of yet another screenshot.
Next thing you know you will be telling me it is against humanity to "right click". Or maybe we should all just get a MacBook Wheel.
Look, I am only advocating that it is a very useful tool. There is nothing "bad" about it, or even hard. What is the negative?
But I also said, I have been using a Fedora laptop for over a year and guess what? I never needed the command line. Not once.
Ok this is now a stupid conversation. Really? Humanity?
Yeah, humanity. The fact you think it's 'stupid' really just proves my point that you're too far gone.
or type boltctl list
Really? You have every command memorized? You never need to look any of them up? No copy-pasting!
Come on, at least try to make a decent argument to avoid looking like a troll.
I'm glad rational people have won out and your rhetoric is falling further and further by the wayside. The command line is great for development and developers. It's awful for regular users which is why regular users never touch it.
You lost sight of your humanity, which is why you don't even think about how asinine it is to say "just type this command!" as though people are supposed to know it intuitively.
Gonna block ya now. Arguing with people like you is tiresome and a waste of time.
Have fun writing commands. Make sure you don't use a GUI to look them up, or else you'd be proving me right.
You blocked me over a difference of opinion?
Wow.
All I am trying to say is that it is a tool in the toolbox. Telling people Linux needs it is not true; telling people it's bad is not true.
Quit trying to make it a negative. I would encourage anyone to explore how to use this tool. And when trying to communicate ideas on the internet it is a very useful one.
I have never blocked anyone; I find that so strange. It's like saying that because of our difference on this issue, we could never have common ground on any other.
And you ask me to remember my humanity?
Each monitor should have its own framebuffer device rather than only one app controlling all monitors at any time and needing each app to implement its own multi-monitor support. I know fbdev is an inefficient, un-accelerated wrapper of the DRI, but it's so easy to use!
Want to draw something on a particular monitor? Write to its framebuffer file. Want to run multiple apps on multiple screens without needing your DE to launch everything? Give each app write access to a single fbdev. Want multi-seat support without needing multiple GPUs? Same thing.
Right now, each GPU only gets 1 fbdev and it has the resolution of the smallest monitor plugged into that GPU. Its contents are then mirrored to every monitor, even though they all have their own framebuffers on a hardware level.
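For anyone who hasn't played with fbdev, this is roughly what "write to its framebuffer file" looks like today. A minimal sketch, assuming /dev/fb0 exists and the mode is 32 bits per pixel (XRGB8888); the wish above is that each monitor would expose its own /dev/fbN you could target the same way.

```c
/* Fill the first framebuffer device with a solid colour.
 * Sketch only: assumes 32 bpp; real code should check vinfo.bits_per_pixel.
 * build: cc fb_fill.c -o fb_fill  (usually needs root or the "video" group) */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vinfo; /* resolution, bits per pixel */
    struct fb_fix_screeninfo finfo; /* line stride in bytes */
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        perror("ioctl"); close(fd); return 1;
    }

    size_t len = (size_t)finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Paint the visible area green (0x0000FF00 in an XRGB8888 layout). */
    for (uint32_t y = 0; y < vinfo.yres; y++) {
        uint32_t *row = (uint32_t *)(fb + (size_t)y * finfo.line_length);
        for (uint32_t x = 0; x < vinfo.xres; x++)
            row[x] = 0x0000FF00;
    }

    munmap(fb, len);
    close(fd);
    return 0;
}
```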
This right here is why I moved to a single-display setup.
So this is why multi monitor support has been a never ending hot mess?!
Yes and no. It would solve some problems, but because it has no (non-hacky) graphics acceleration, most DEs wouldn't use it anyway. The biggest benefit would be from not having to use a DE in some circumstances where it's currently required.
While all areas could benefit in terms of stability and ease of development from standardization, the whole system and each area would suffer in terms of creativity. There needs to be a balance. However, if I had to choose one thing, I'd say package management. At the moment we have deb, rpm, pacman, flatpak, snap (the latter probably should not be considered, as the server side is proprietary) and more from some niche distros. This makes it very difficult for small developers to offer their work to all/most users. Otherwise, I think it is a blessing having so many DEs, APIs, etc.
Generally speaking, Linux needs better binary compatibility.
Currently, if you compile something, it's usually dynamically linked against dozens of libraries that are present on your system, but if you give the executable to someone else with a different distro, they may not have those libraries or their version may be too old or incompatible.
Statically linking programs is often impossible and generally discouraged, making software distribution a nightmare. Flatpak and similar systems made things easier, but it's such a crap solution: it basically involves having an entire separate OS installed in parallel, with its own problems, like shipping a version of Mesa that's too old for a new GPU and stuff like that. Applications must be able to be packaged with everything they need; there is no reason for dynamic linking to be so important in Linux these days.
I'm not in favor of proprietary software, but better binary compatibility is a necessity for Linux to succeed, and I'm saying this as someone who's been using Linux for over a decade and who refuses to install any proprietary software. Sometimes I find myself using apps and games in Wine even when a native version is available just to avoid the hassle of having to find and probably compile libobsoletecrap-5.so
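To make the "dozens of libraries" point concrete, here is a small sketch using glibc's dl_iterate_phdr() to print every shared object the running process has mapped; each entry is something the target distro has to provide at a compatible version before the binary will even start. (Output varies per system, of course.)

```c
/* Print every shared object mapped into the running process.
 * Even a trivial dynamically linked program pulls in several;
 * a desktop application easily reaches dozens.
 * build: cc list_libs.c -o list_libs */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int print_object(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size;
    (void)data;
    /* The main executable is reported with an empty name. */
    printf("%s\n", info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
    return 0; /* 0 = keep iterating */
}

int main(void)
{
    dl_iterate_phdr(print_object, NULL);
    return 0;
}
```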
This.
From the perspective of software preservation, we need this. Sometimes we won't have the source, and just need it to work while also getting security updates.
From the perspective of software delivery: read up on JangaFX's recent article about this topic and the problems they run into delivering software today.
Static linking is a good thing and should be respected as such for programs we don't expect to be updated constantly.
One that Linux should've had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries, just linked at runtime by ld.so instead of ld. That means that executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that exist are generally non-standard plug-in systems.
A proper dynamic library system (like in Darwin) would allow libraries to declare what API level they're backwards-compatible with, so new versions don't necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow processes to start running even if libraries declared by the program as optional weren't present, allowing programs to drop certain features gracefully, so we wouldn't need different executable versions of the same programs with different library support compiled in. If it were standard, compilers could more easily provide integrated language support for the system, too.
Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.
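As a rough sketch of the "optional libraries" point above: today the closest you can get is hand-rolling it with dlopen()/dlsym() and degrading when the library isn't there. The library libfancyfx.so.2 and the function fancy_blur() are invented names for illustration; the argument is that a proper dynamic library system would let the executable declare the dependency as optional and leave this dance to the loader.

```c
/* Graceful degradation by hand: try an optional library at runtime.
 * libfancyfx.so.2 and fancy_blur() are hypothetical, for illustration only.
 * build: cc optional.c -o optional -ldl */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*fancy_blur_fn)(const char *image_path);

int main(void)
{
    void *lib = dlopen("libfancyfx.so.2", RTLD_NOW | RTLD_LOCAL);
    fancy_blur_fn fancy_blur = NULL;

    if (lib)
        fancy_blur = (fancy_blur_fn)dlsym(lib, "fancy_blur");

    if (fancy_blur) {
        fancy_blur("input.png");   /* feature available: use it */
    } else {
        /* feature missing: keep running without it */
        fprintf(stderr, "fancyfx not available, blur disabled (%s)\n",
                dlerror());
    }

    if (lib)
        dlclose(lib);
    return 0;
}
```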
ARM support. Every SoC is a new horror.
Armbian does great work, but if you want another distro you’re gonna have to go on a lil adventure.