GnuLinuxDude

joined 2 years ago
[–] GnuLinuxDude@lemmy.ml 10 points 1 day ago (1 children)

The student who wants to actually go learn something and become an expert in a field has to contend with the fact that universities are just gilded vocational schools. And at least in the USA you will go into a lot of debt if you don’t come from wealth just to get through it. And there are no promises of stable income and employment when you do get through it.

So, while I think this person comes out the other end functionally no more informed than before, and I would not want to work with her, I can’t fault her for recognizing the bullshit that is the American education system and exploiting it.

For my own part, I busted my ass through university and now I’m seeing all my efforts get gobbled up into AI, cheapening everything I’ve ever done and worked for, and possibly evicting me from my career sometime in the coming few years. That wasn’t a concern when I graduated; LLMs didn’t exist yet. But they do now for current and future students.

[–] GnuLinuxDude@lemmy.ml 6 points 4 days ago

"We drove the movie industry out of Hollywood. Let's not do the same with car culture," Leno said in a Facebook video supporting the bill.

actually, let's kill car culture completely

[–] GnuLinuxDude@lemmy.ml 68 points 6 days ago (3 children)

Thought something this stupid was just for shits and giggles. Then I saw this is on LinkedIn and he’s a senior product manager.

Haha the joke is on the rest of us

[–] GnuLinuxDude@lemmy.ml 14 points 1 week ago

And permissively licensed utils have been around thanks to BSD and it’s never been an issue.

The distinction is that BSD coreutils are not attempting to be a drop-in, 1:1-compatible replacement for GNU coreutils. The Rust coreutils have already accomplished this with their inclusion in Ubuntu 26.04.

If I wanted a permissively licensed system, I'd use BSD. I don't, so I primarily use Linux. I think citing a proprietary OS like macOS as a reason why permissively licensed coreutils are OK is kind of funny. It's easy to forget that before the GPL there were many incompatible UNIX systems developed by different companies, and IMO the GPL has kept MIT- and BSD-licensed projects honest, so to speak. Without the GPL to keep things in check, we'd be back to how things were in the 80s.

So what's next on the docket for Ubuntu? A permissively licensed libc?

[–] GnuLinuxDude@lemmy.ml 22 points 1 week ago (3 children)

Not interested in an MIT-licensed coreutils. Thanks, but no thanks!

[–] GnuLinuxDude@lemmy.ml 50 points 1 week ago (1 children)

Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.

[–] GnuLinuxDude@lemmy.ml 32 points 1 week ago

If you want to simulate running Claude while it's offline, just go run the faucet in your kitchen.

[–] GnuLinuxDude@lemmy.ml 5 points 1 month ago

My media server, which is just my server generally, is an old thinkpad I have from 2014. For media I use Jellyfin and I ensure the content is already in a format that will not require transcoding on any device I care to serve to (typically mp4 1080p hevc + aac).
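For anyone wanting to do the same pre-conversion, here's a hedged sketch of the kind of ffmpeg invocation involved; the input filename, CRF, and bitrate are placeholders I've picked, not values from the post:

```shell
# Sketch: re-encode a source file to 1080p HEVC + AAC in an mp4 container
# so Jellyfin can direct-play it without transcoding on the fly.
# 'input.mkv' and the quality settings are assumptions; adjust to taste.
ffmpeg -i input.mkv \
  -vf "scale=-2:1080" \
  -c:v libx265 -crf 24 -preset medium \
  -c:a aac -b:a 160k \
  -movflags +faststart \
  output.mp4
```

The -movflags +faststart bit moves the index to the front of the file, which helps streaming start quickly.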

If you look at the used computer market, there are endless options to attain what you are asking for. My only real advice is make sure the computer doesn’t draw much power and, if possible, doesn’t emit much or any fan noise. A laptop is a decent choice because the battery kind of serves as an uninterruptible power supply. I just cap my charge limit at 80% since I never unplug it.
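On many ThinkPads that 80% cap can be set from Linux through sysfs; a sketch, assuming the battery shows up as BAT0 and the kernel exposes the charge-threshold attribute (the exact path varies by vendor):

```shell
# Assumption: battery is BAT0 and the driver exposes the ThinkPad
# charge-threshold attribute; needs root.
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold
```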

[–] GnuLinuxDude@lemmy.ml 3 points 1 month ago (1 children)

Vista called it SuperFetch, and preloading pages into memory is not a bad technique. macOS and Linux do it, too, because it's a simple technique for speeding up access to data that would otherwise have to be fetched from disk. You can see that Linux does it by checking the output of free and reading the buff/cache column. Freeing unused cache pages is also very fast: clean pages can simply be dropped, and only dirty ones need to be written back first.
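The figures that free rolls into its buff/cache column come straight from /proc/meminfo, so you can also read them directly:

```shell
# Show the kernel's page-cache accounting; 'free' sums (roughly)
# Buffers + Cached + SReclaimable into its buff/cache column.
grep -E '^(Buffers|Cached|SReclaimable):' /proc/meminfo
```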

[–] GnuLinuxDude@lemmy.ml 20 points 1 month ago (7 children)

For Windows, if 8 GB of RAM is not enough, that’s an own-goal. Because it is. Or it should be. Windows 11 is not so dramatically better than Windows Vista SP2 as to require a 10x better computer to use comfortably. Actually, in many ways Windows 11 is a massive downgrade from what came before it.

I’m glad the base MacBook is only 8 GB. That means they have to support it as a usable low-end target. That means we aren’t jumping the gun on saying “actually you need 12 gigs of RAM” as if that should be normal for a usable computer.

[–] GnuLinuxDude@lemmy.ml 32 points 1 month ago (4 children)

How are you supposed to make peace with a pedophile rapist who has staffed his office with racist warhawk morons who are convinced they can do whatever they want with complete impunity, and who start bombing you while you are under diplomatic negotiations (twice!)?

 

I'm installing 3x 2 TB HDDs into my desktop PC. The drives are like-new.

Basically they will replace an ancient 2 TB drive that is failing. The primary purpose will be data storage: media, torrents, and some installed games. Losing the drives to failure would not be catastrophic, just annoying.

So now I'm faced with how to set up these drives. I think I'd like to do a RAID to present the drives as one big volume. Here are my thoughts, and hopefully someone can help me make the right choice:

  • RAID0: Would have been fine with the risk with 2 drives, but 3 drives seems like it's tempting fate. But it might be fine, anyhow.
  • RAID1: Lose half the capacity, but pretty braindead setup. Left wondering why pick this over RAID10?
  • RAID10: Lose half the capacity... left wondering why pick this over RAID1?
  • RAID5: Write hole problem in event of sudden shutoff, but I'm not running a data center that needs high reliability. I should probably buy a UPS to mitigate power outages, anyway. Would the parity calculation and all that stuff make this option slow?

I've also ruled out things like ZFS and mdadm, because I don't want to complicate my setup. Straight btrfs is straightforward.

I found this page where the person benchmarked the performance of different RAID levels, though not with btrfs: https://larryjordan.com/articles/real-world-speed-tests-for-different-hdd-raid-levels/ (PDF link with harder numbers in the post). So I'm not even sure if his analysis is at all applicable to me.

If anyone has thoughts on what RAID level is appropriate given my use-case, I'd love to hear it! Particularly if anyone knows about RAID1 vs RAID10 on btrfs.
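For what it's worth, with btrfs the RAID profile is chosen at mkfs time, and RAID1 on three drives is allowed: btrfs keeps two copies of each chunk spread across the set, so three 2 TB drives give roughly 3 TB usable. A sketch with placeholder device names; double-check them first, since this wipes the drives:

```shell
# DANGER: destroys data on the listed devices. The /dev/sdX names are
# placeholders; verify which disks are which with 'lsblk' first.
mkfs.btrfs -L bigpool -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# After mounting, confirm how the data/metadata profiles landed:
btrfs filesystem usage /mnt/bigpool
```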

 

In some e-waste bin I found an entry level M1 MacBook Pro. The display didn’t work (visible crack lines on the panel and the screen lights up but only shows black), but everything else about the computer is totally fine, I determined after testing.

I managed to factory reset the thing and now I have an extra computer on my hands. And a good one, at that, because I think even an entry level M1 is still a good computer.

I already have a laptop, desktop, and an old server. So I feel like all my needs are met. But are there creative uses with an extra Mac?

 

I have, within the context of my job, things to do that will take various lengths of time and are of various priorities. If I get blocked on one it'd be useful to know what to switch to, and on.

I have, within the context of my personal life, things that I want to do that will take undetermined amounts of time and are of various priorities.

It'd also be nice to have a record to go back and reflect on when I did what. And it'd be nice to plan a little ahead so that I can decide what I hope to do next.

So... how do you do it? I am so bad at time management. Is there useful software I can use (if so, is it FOSS)? And is there a way to stay consistent with my planner so that I don't fall behind, without falling into the trap of building a time management system so elaborate that all my time is spent managing my time?

Send help :(

 

I was walking home yesterday and I just happened to come across an HP LaserJet P2035n sitting by the dumpster, waiting to be taken away. I've never owned a printer, but this thing looked like it came from an era when such devices were made to be reliable instead of forcing DRM-locked cartridges, so I picked it up and took it with me. After getting situated I did some online research, and I figure this model was manufactured from about 2008-2012; mine has a 2012 date.

As it turns out, this tossed printer works perfectly fine. I plugged it into power and ran a test sheet, and it prints almost perfectly. I plugged it in via USB-B to my PC running Fedora 41 and it immediately got picked up and added as a usable printer. I then connected the printer over Ethernet, and fortunately this thing is new enough to have Bonjour (i.e. mDNS) services, so once again my PC just immediately finds it and can print. Awesome!

My laptop is a MacBook. While it did detect the printer over the network, it couldn't add the printer because it couldn't find a driver to operate it. I honestly don't understand why that's a problem since I assume macOS also uses CUPS just like Linux. But at any rate, I found the solution:

With CUPS on Linux I can share the printer. After configuring firewall-cmd to allow the ipp service now my iPhone and my MacBook can also print to the shared printer using the generic PostScript driver. So, in conclusion, Linux helped me 1) use this printer with no additional effort of installing drivers, 2) share this printer to devices which were not plug-and-play ready, and 3) print pics of Goku and Vegeta. As always, I love Linux.
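In case anyone wants to replicate this, the sharing side boils down to a few commands; a sketch, where "LaserJet" is a placeholder queue name:

```shell
# Enable printer sharing in CUPS and mark the specific queue as shared.
# 'LaserJet' is a placeholder queue name; list yours with 'lpstat -p'.
sudo cupsctl --share-printers
sudo lpadmin -p LaserJet -o printer-is-shared=true

# Open the IPP port (631/tcp) in firewalld, as mentioned above.
sudo firewall-cmd --add-service=ipp
sudo firewall-cmd --add-service=ipp --permanent
```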

 

When I first set up my web server, I don't think Caddy was really a sensible choice. It was still immature (the big "version 2" rewrite was in beta). But it's been about five years since then, so I decided to give Caddy a try.

Wow! My config shrank to about 25% of what it was with Nginx. It's also a lot less stuff to deal with, especially from a personal hosting perspective. As much as I like self-hosting, I'm not like "into" configuring web servers. Caddy made this very easy.

I thought the automatic HTTPS feature was overrated until I used it. The fact is it works effortlessly. I do not need to add paths to certificate files in my config anymore. That's great. But what's even better is I do not need to bother with my server notes to once again figure out how to correctly use Certbot when I want to create new certs for subdomains, since Caddy will do it automatically.

I've been annoyed with my Nginx config for a while, and kept wishing to find the motivation to streamline it. It started simple, but as I added things to it over the years the complexity in the config file blossomed. But the thing that tipped me over to trying Caddy was seeing the difference between the Nginx and Caddy configurations necessary for Jellyfin. Seriously. Look at what's necessary for Nginx.

https://jellyfin.org/docs/general/networking/nginx/#https-config-example

In Caddy that became

jellyfin.example.com {
  reverse_proxy internal.jellyfin.host:8096
}

I thought no way this would work. But it did. First try. So, consider this a field report from a happy Caddy convert, and if you're not using it yet for self-hosting maybe it can simplify things for you, too. It made me happy enough to write about it.

 

For many, many years now, when I want to browse a man page about something I'll type man X into my terminal, substituting X for whatever it is I wish to learn about. Depending on the manual, it's short and therefore easy to find what I want, or I'm deep in the woods trying to find a specific flag that appears many times in a very long document. Woe is me if the switch is a bare letter, like x.

And let's say it is x. Now I am searching with /x followed by n n n n n n n n N n n n n n. Obviously I'm not finding the information I want: the search is literal (not fuzzy, nor "whole word"), and even when I find something, the pager might overshoot, because a found line gets moved to the top of the terminal while the information I really want may sit one or two lines above it.

So... there HAS to be a better way, right? There has to be a modern, fast, easily greppable version to go through a man page. Does it exist?

P.S. I am not talking about summaries like tldr because I typically don't need summaries but actual technical descriptions.
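One low-tech stopgap is to pipe the man page through grep with context lines, so each match prints with the lines above it instead of jumping to the top of the screen. A sketch; filter_flag is a made-up helper name, and the anchored pattern is my own guess at matching lines that introduce a flag rather than every stray letter:

```shell
# filter_flag: read man-page text on stdin and print numbered matches
# for a flag, with two lines of context above and below. The pattern
# matches lines that begin with the flag (e.g. '  -x  ...'), not every
# occurrence of the letter.
filter_flag() {
  grep -n -C 2 -E -- "^[[:space:]]*-$1([[:space:],]|\$)"
}

# Demo on inline text; a real use would be: man tar | col -bx | filter_flag x
printf '  -a  archive\n  -x  exclude files\n  other text\n' | filter_flag x
```

The col -bx step strips the backspace-overstrike bolding some pagers emit, so grep sees plain text.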

 

There are a lot of good improvements and fixes in this release. As a remorseful Nvidia on Linux user, I am extremely excited that GAMMA_LUT is finally making its debut in the Nvidia driver. This means I can actually try to use Gnome Wayland at night with the night shift feature, assuming other Wayland issues are also resolved.

 

tl;dr question: How do I get the Handbrake Flatpak to operate at a high niceness level in its own cgroup by default? I'm using Fedora Linux.


So if I understand things correctly, niceness in Linux affects how willing the process scheduler is to preempt a process. However, with cgroups, niceness only affects scheduling relative to other processes within the same cgroup. This means a process running with a high niceness in its own cgroup has the same priority as processes in sibling cgroups, and it will not in fact be preempted the way one would expect.

So why does this matter to me at all? I have a copy of Handbrake installed from Flatpak. And sometimes I want to encode a video in the background while still having a decently responsive desktop experience so I can do other things, and basically let Handbrake occupy the cpu cycles I'm not using. Handbrake and the video encoding process should be at the bottom priority of everything to the maximum extent possible.

But it does not appear to be enough to just go into htop and set the handbrake process's niceness level to 19 and then start an encode, because of the cgroup business I mentioned above.

Furthermore, in my opinion Handbrake should always be the lowest priority process without my having to intervene. I would like to be able to launch it without having to set its niceness. Does anybody have suggestions on this? Is my understanding of the overall picture even correct?
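One approach that matches the cgroup model: launch Handbrake in its own transient scope with a minimal cgroup CPU weight, and use nice for the within-cgroup side. A sketch via systemd-run, assuming the Flatpak app ID is fr.handbrake.ghb (check with flatpak list):

```shell
# Run Handbrake in a transient user scope whose cgroup only gets CPU
# time the rest of the system isn't using (CPUWeight=idle), while
# nice -n 19 deprioritizes it within that cgroup.
# The app ID is an assumption; verify with 'flatpak list'.
systemd-run --user --scope -p CPUWeight=idle \
  nice -n 19 flatpak run fr.handbrake.ghb
```

To make it the default without intervening each time, one option is a local copy of the app's .desktop file whose Exec line is wrapped in this command.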

222
PipeWire 0.3.77 Released (gitlab.freedesktop.org)
 

PipeWire 0.3.77 (2023-08-04)

This is a quick bugfix release that is API and ABI compatible with previous 0.3.x releases.

Highlights

  • Fix a bug in ALSA source where the available number of samples was miscalculated and resulted in xruns in some cases.
  • A new L permission was added to make it possible to force a link between nodes even when the nodes can't see each other.
  • The VBAN module now supports MIDI send and receive as well.
  • Many cleanups and small fixes.
184
submitted 2 years ago* (last edited 2 years ago) by GnuLinuxDude@lemmy.ml to c/linux_gaming@lemmy.ml
 

After approximately 10 months as a release candidate, OpenMW 0.48 has finally been released. A list of changes can be found in the link.

The OpenMW team is proud to announce the release of version 0.48.0 of our open-source engine!

So what does another fruitful year of diligent work bring us this time? The two biggest improvements in this new version of OpenMW are the long-awaited post-processing shader framework and an early version of a brand-new Lua scripting API! Both of these features greatly expand what the engine can deliver in terms of visual fidelity and game logic. As usual, we've also solved numerous problems major and minor, particularly pertaining to the newly overhauled magic system and character animations.

A full list of changes can be found in the link to Gitlab.

What is OpenMW?

"OpenMW is a free, open source, and modern engine which re-implements and extends the 2002 Gamebryo engine for the open-world role-playing game The Elder Scrolls III: Morrowind."

It is an excellent way to play Morrowind on modern systems, and on systems other than MS Windows. It requires a copy of the original game data from Morrowind, as OpenMW does not include assets or any other game data; it is simply a recreation of the game engine. OpenMW can be found on Flathub for Linux users here: https://flathub.org/apps/org.openmw.OpenMW
