this post was submitted on 05 Jan 2026
60 points (98.4% liked)

Linux


While trying to move my computer to Debian, after allowing the installer to do its task, my machine will not boot.

Instead, I get a long string of text, as follows:

Could not retrieve perf counters (-19)
ACPI Warning: SystemIO range 0x0000000000000B00-0x0000000000000B08 conflicts with OpRegion 0x0000000000000B00-0x0000000000000B0F (\GSA1.SMBI) (20250404/utaddress-204)
usb: port power management may be unreliable
sd 10:0:0:0: [sdc] No Caching mode page found
sd 10:0:0:0: [sdc] Assuming drive cache: write through
amdgpu 0000:08:00.0 amdgpu: [drm] Failed to setup vendor infoframe on connector HDMI-A-1: -22

And the system eventually collapses into a shell, that I do not know how to use. It returns:

Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)

Alert! /dev/sdb2 does not exist. Dropping to a shell!

The system has two disks mounted:

- an SSD, with the EFI, root, var, tmp and swap partitions, for speeding up the overall system
- an HDD, for /home

I had the system running on Mint until recently, so I know the hardware is sound. Unless the SSD stopped working, but then it would be reasonable to expect it not to accept partitioning. Under Debian, it booted once and then stopped booting altogether.

The installation I made was from a daily image, as I am/was aiming to put my machine on the testing branch, in order to have some sort of a rolling distro.

If anyone can offer some advice, it would be very much appreciated.

[–] qyron@sopuli.xyz 4 points 5 days ago* (last edited 5 days ago) (2 children)

@mvirts@lemmy.world @kumi@feddit.online @wickedrando@lemmy.ml @IsoKiero@sopuli.xyz @angband@lemmy.world @doodoo_wizard@lemmy.ml

Update - 2026.01.12

After trying to follow all the advice I was given and failing miserably, I caved in and reinstalled the entire system, this time using a Debian Stable live image.

The drives were there - sda and sdb - the SSD and the HDD, respectively. sda was partitioned 1 through 5, while sdb had one single partition, just as I had set during the installation. No error there.

However, when trying to look into /etc/fstab, the file listed exactly nothing. Somehow, the file was never written. I could list the devices through ls /dev/sd* but when trying to mount any one of them, it returned that the location was not listed under /etc/fstab. I even tried to update the file manually, yet the non-existence of the drives persisted.

Yes, as I write this from the freshly installed Debian, I am morbidly curious to go read the file now. See how much has changed.

Because at this point I understood I wouldn't get anywhere with my attempts, I opted for a full reinstall. And it was as I was, again, manually partitioning the disk the way I wanted that I found the previous installation had created a strange thing.

While all partitions had a simple sd* indicator, the partition that should have been / was instead named "Debian Forky" and was not configured as it should be. It had no root flag. It was just a named partition on the disk.

I may be reading too much into this, but most probably this simple quirk botched the entire installation. The system could not run what simply wasn't there, and it could not find an sda2 if that sda2 was named something completely different.

Lessons to be taken

I understood I wasn't clear enough about how experienced with Debian I was. I ran Debian for several years and, although not a power user, I gained a lot of knowledge about managing my own system by tinkering in Debian, something I lost when I moved towards more up-to-date distros that are more user-friendly but less powerful as learning tools. After this, I recognize I need that "demand" from the system in order to learn. So, I am glad I am back on Debian.

Thank you for all the help, and I can only hope I can return it some day.

[–] mvirts@lemmy.world 1 points 4 days ago

Sounds like the right choice! I'm glad you got Debian up and running.

[–] IsoKiero@sopuli.xyz 1 points 4 days ago

It wasn't for nothing, you got some learning out of the experience and a story to tell. Good luck with the new system; maybe hold off on upgrading it to testing for a while, there's plenty to break and fix even without the extra quirks of a non-stable distribution :)

Have fun and feel free to ask for help again, I and others will be around to share what we've learned on our journeys.

[–] okwhateverdude@lemmy.world 34 points 1 week ago (1 children)

Sounds like your /etc/fstab is wrong. You should be using UUID-based mounting rather than /dev/sdXY. Very likely you'll need to boot from a USB stick with a rescue image (the installer image should work) and fix up /etc/fstab using blkid.
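A rough sketch of what that fix amounts to (the device name and UUID below are made-up examples; on the real system you'd take both from lsblk and blkid):

```shell
# A fragile fstab entry mounts by bare device path, which breaks when the
# kernel enumerates the disks in a different order on the next boot.
line='/dev/sdb2 / ext4 errors=remount-ro 0 1'

# blkid /dev/sdb2 would report the partition's UUID; this one is invented.
uuid='e1f2a3b4-0000-1111-2222-333344445555'

# Rewrite the bare device path into a stable UUID= reference:
echo "$line" | sed "s|^/dev/sdb2|UUID=$uuid|"
```

The same substitution, done by hand in an editor against the real /etc/fstab from the rescue environment, is usually all it takes.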

[–] qyron@sopuli.xyz 11 points 1 week ago (5 children)

You made me think that perhaps the BIOS/EFI is fudging something up. I checked and I had four separate entries pointing towards the SSD.

[–] okwhateverdude@lemmy.world 20 points 1 week ago (1 children)

When you do fix it, the internet would appreciate a follow up comment on what you did to fix the problem

[–] qyron@sopuli.xyz 14 points 1 week ago (1 children)

I will. Don't know when, but I will.

[–] GNUmer@sopuli.xyz 12 points 1 week ago (1 children)

Can you run lsblk within the emergency shell? Sounds a bit like the HDD has taken the place of /dev/sdb, upon which there's no second partition nor a root filesystem -> root device not found.

[–] qyron@sopuli.xyz 4 points 1 week ago* (last edited 1 week ago)

Perhaps? It fell into a busybox. How can I do what you are requesting?

[–] just_another_person@lemmy.world 11 points 1 week ago
  1. Boot into a LiveUSB of the same version of distro you tried to install
  2. View the drive mappings to see what they are detected as
  3. Ensure your newly created partitions can mount correctly
  4. Check /etc/fstab on your root drive (not the LiveUSB filesystem) to ensure the entries match the devices detected while in the LiveUSB
  5. Try rebooting

Report changes here.

[–] doodoo_wizard@lemmy.ml 8 points 1 week ago

Since you don't know what's happening you don't need to be fucking around with busybox. Boot back into your USB install environment (was it the live system or netinst?) and see how fstab looks. Pasting it would be silly but I bet you can take a picture with your phone and post it itt.

What you’re looking for is drives mounted by dynamic device identifiers as opposed to uuids.

Like the other user said, you never know how quickly a drive will report itself to the UEFI, and drives with big caches like SSDs can have hundreds of operations in their queue before they "say hi to the nice motherboard".

If it turns out that your fstab is all fucked up, use ls -al /dev/disk/by-uuid to show you what the UUIDs are, fix your fstab on the system, then reboot.
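As a quick way to spot the problem entries, something like this works (the sample file and its contents are fabricated; point grep at the real fstab on the mounted root instead):

```shell
# Any fstab line that mounts by a bare /dev/sdXY path is vulnerable to the
# disks swapping names between boots; UUID= lines are immune to that.
cat > /tmp/fstab.sample <<'EOF'
UUID=19f7f728-962f-413c-a637-2929450fbb09 none swap sw 0 0
/dev/sdb2 / ext4 errors=remount-ro 0 1
EOF
grep -E '^/dev/sd' /tmp/fstab.sample
```

Only the /dev/sdb2 line gets flagged; that's the entry to convert to UUID= form.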

[–] wickedrando@lemmy.ml 6 points 1 week ago (1 children)

Can you reinstall? If possible, use the whole disk (no dual booting and bootloader to deal with).

[–] qyron@sopuli.xyz 5 points 1 week ago (6 children)

I can. I already did, before coming here, and I suspect I'm going to do it again, because people are telling me to do this and that and I'm feeling way over my head.

But not in the mood to quit. Yet.

I'm running a clean machine. No secondary OS. The only "unusual" thing I am doing is partitioning so different parts of the system exist separately, and putting /home on a disk all to itself.

[–] wickedrando@lemmy.ml 4 points 1 week ago* (last edited 1 week ago)

Ah, yes I saw all the comment suggestions and was hoping a fresh reinstall would work for you.

Did you format before reinstall? Definitely seems like fstab issue dropping you into initramfs that would need some manual intervention.

A format and fresh install should totally resolve this (assuming installation options and selections are sound).

What does 'ls /dev/sd*' run from the shell show you?

[–] IsoKiero@sopuli.xyz 3 points 1 week ago (1 children)

Just in case you end up reinstalling, I'd suggest using the stable release for installation. Then, if you want, you can upgrade that to testing (and have all the fun that comes with it) pretty easily. But if you want something more like a rolling release, Debian testing isn't really it, as it updates in cycles just like the stable releases; it just has slightly newer (and potentially broken) versions until the current testing is frozen and eventually released as the new stable, and the cycle starts again. Sid (unstable) is more like a rolling release, but it comes with even more fun quirks than testing.

I've used all of them (stable/testing/unstable) as a daily driver at some point, but today I don't care about rolling releases or bleeding-edge versions of packages; I don't have the time or interest anymore to tinker with my computers just for the sake of it. Things just need to work and stay out of my way, and thus I'm running either Debian stable or Mint Debian Edition. My gaming rig has Bazzite on it and it's been fine so far, but it's a pretty fresh installation so I can't really tell how it works in the long run.

[–] pinball_wizard@lemmy.zip 2 points 1 week ago (1 children)

One time I had two bad installs in a row, and it was due to my install media.

Many install media tools have an image checker (check-sum) step, which is meant to prevent this.

But corrupt downloads and corrupt writes to the USB key can happen.

In my case, I think it turned out that my USB key was slowly dying.

If I recall, I got very unlucky that it behaved during the checksums, but didn't behave during the installs. (Or maybe I foolishly skipped a checksum step - I have been known to get impatient.)

I got a new USB key and then I was back on track.

[–] qyron@sopuli.xyz 4 points 1 week ago (1 children)

I'm fairly confident at this point that the worst of my problems is to be found between the chair and the keyboard.

[–] JamesBoeing737MAX@sopuli.xyz 6 points 1 week ago (1 children)
[–] qyron@sopuli.xyz 6 points 1 week ago (1 children)

Not exactly the acknowledgement I was aiming for but definitely the one I needed.

[–] pinball_wizard@lemmy.zip 3 points 1 week ago (1 children)

Sorry for your headaches. The door prize is you get to tell this story - to the un-envy of peers - in the future.

[–] qyron@sopuli.xyz 3 points 1 week ago

Bragging rights of the bad kind.

[–] IsoKiero@sopuli.xyz 6 points 1 week ago (3 children)

Do you happen to have any USB (or other) drives attached? An optical drive, maybe? In the first text block the kernel suggests it found an 'sdc' device which, assuming you only have the SSD and HDD plugged in and haven't used other drives in the system, should not exist. It's likely your fstab is broken somehow, maybe a bug in the daily image, but it's hard to tell for sure. The other possibility is that you still have remnants of Mint on the EFI/whatever and it's causing issues, but assuming you wiped the drives during installation that's unlikely.

Busybox is pretty limited, so it might be better to start the system with a live-image on a USB and verify your /etc/fstab -file. It should look something like this (yours will have more lines, this is from a single-drive, single-partition host in my garage):

# / was on /dev/sda1 during installation
UUID=e93ec6c1-8326-470a-956c-468565c35af9 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=19f7f728-962f-413c-a637-2929450fbb09 none            swap    sw              0       0

If your fstab has things like /dev/sda1 instead of UUIDs the format is fine, but those entries are likely pointing to the wrong devices. My current drive is /dev/sde even though the comments in fstab mention /dev/sda. With the live image running you can list all the drives in the system by running 'lsblk', and from there (or by running 'fdisk -l /dev/sdX' as root, replacing sdX with the actual device) you can find out which partition should be mounted where. Then run 'blkid /dev/sdXN' (again, replace sdXN with sda1 or whatever you have) and you'll get the UUID of that partition. Then edit fstab accordingly and reboot.
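The blkid-to-fstab step can be sketched like this (the blkid output lines below are fabricated; on the real machine you'd pipe blkid itself instead of the sample):

```shell
# blkid prints one line per partition, e.g. '/dev/sda2: UUID="..." TYPE="ext4"'.
# This sed pulls out just the UUID and prefixes it the way fstab expects.
printf '%s\n' \
  '/dev/sda2: UUID="11111111-2222-3333-4444-555555555555" TYPE="ext4"' \
  '/dev/sdb1: UUID="66666666-7777-8888-9999-aaaaaaaaaaaa" TYPE="ext4"' |
sed -n 's/^[^:]*: UUID="\([^"]*\)".*/UUID=\1/p'
```

Each printed UUID=... string then replaces the /dev/sdXN field of the matching fstab line.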

[–] Bane_Killgrind@lemmy.dbzer0.com 1 points 1 week ago (1 children)

Tbf he said he doesn't know how to use the terminal, and he'll need to use at least sudo, vim and cat plus the stuff you mentioned. A drive getting inserted into the disk order is probably the correct thing, I thought UUID was the default on new installs for that reason...

[–] IsoKiero@sopuli.xyz 2 points 1 week ago (1 children)

I'd argue that if the plan is to run Debian testing it's at the very least beneficial, if not mandatory, to learn some basics of the terminal. Debian doesn't ship with sudo by default, so it's either logging in directly as root or 'su'. Instead of vim (which I'd personally use) I'd suggest nano, but with live setup it's also possible to use mousepad or whatever gui editor happens to be available.

I suppose it'd be possible to use GParted or something to dig up the same information over a GUI, but I don't have Debian testing (nor any other live distro) at hand to see what's available on it. I'm pretty sure at least stable Debian installs with UUIDs by default, but I haven't used the installer from testing in a "while", so it might be different.

The way I'd try to solve this kind of problem would be to manually mount stuff from busybox and start bash from there to get "normal" environment running and then fix fstab, but it's not the most beginner friendly way and requires some prior knowledge.

[–] Bane_Killgrind@lemmy.dbzer0.com 2 points 1 week ago* (last edited 1 week ago)

mandatory

Yes, but not in the first few weeks.

My holistic suspicion is that OP has his home folder on a USB/esata drive and he's not telling yet.

Edit

Apparently no


[–] angband@lemmy.world 4 points 1 week ago

unless the SSD stopped working but then it is reasonable to expect it would not accept partitioning

This happened to me. It still showed up in KDE's partition manager (when I plugged the SSD into another computer), with the drive named as an error code.
[–] Telorand@reddthat.com 4 points 1 week ago (3 children)

I think everyone here has offered good advice, so I have nothing to add in that regard, but for the record, I fucked up a Debian bookworm install by doing a basic apt update && apt upgrade. The only "weird" software it had was Remmina, so I could remote into work; nothing particularly wild.

I recognize that Debian is supposed to be bulletproof, but I can offer commiseration that it can be just as fallible as any other base distro.

[–] qyron@sopuli.xyz 8 points 1 week ago (1 children)

Debian is well known for its stability but it is also known for being tricky to handle when moving into the Testing branch and I did just that, by wanting to have a somewhat rolling distro with Debian.

I'm no power user. I know how to install my computer (which is a good deal more than most people), do some configurations and tinker a bit but situations like this throw me into uncharted territory. I'm willing to learn but it is tempting to just drop everything and go back to a more automated distro, I'll admit.

Debian is not to blame here. Nor Linux. Nor anyone. We're talking about free software in all the understandings of the word. Somewhere, somehow, an error is bound to happen. Something will fail, break or go wrong.

At least in Linux we know we can ask for help and eventually someone will lend a pointer, like here.

[–] IcyToes@sh.itjust.works 2 points 1 week ago (1 children)

OpenSUSE Tumbleweed is a great balance between stability and updates (rolling release). Worth considering if Debian doesn't work out.

[–] LeFantome@programming.dev 3 points 1 week ago (3 children)

Nothing that uses apt is remotely bullet-proof. It has gotten better but it is hardly difficult to break.

pacman is hard to break. APK 3 is even harder. The new moss package manager is designed to be hard to break but time will tell. APK is the best at the moment IMHO. In my view, apt is one of the most fragile.

[–] FooBarrington@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (5 children)

And that's why I immediately fell in love with immutable distros. While such problems are rare, they can and do happen. Immutable distros completely prevent them from happening.

[–] Eggymatrix@sh.itjust.works 4 points 1 week ago (2 children)

Congrats, you found the only debian that breaks regularly: testing

You can file a bug report and then install something that does not require you to debug early boot issues, like Debian 13, or, if you really want a rolling release, Arch or Tumbleweed.

[–] LeFantome@programming.dev 2 points 1 week ago* (last edited 1 week ago) (1 children)

It could be that /dev/sdb2 really does not exist. Or it could be mapped to another name. It is more reliable to use UUIDs, as others have said.

What filesystem though? Another possibility is that the required kernel module is not being loaded and the drive cannot be mounted.

[–] qyron@sopuli.xyz 4 points 1 week ago (1 children)

Ext4 on all partitions, except for swap space and the EFI partition, which autoconfigures the moment I set it as such.

At the moment, I'm tempted to just go back and do another reinstallation.

I haven't played around with manually doing anything besides setting the size of the partitions. Maybe I left some flag unset or something. I don't know how to set the disk identification scheme. Or I do, and just don't realize it.

Human error is the largest probability at this point.

[–] kumi@feddit.online 2 points 1 week ago* (last edited 1 week ago)

OP, in case you still haven't given up I think I can fill in the gaps. You got a lot of advice somewhat in the right direction but no one telling you how to actually sort it out I think.

It's likely that your /dev/sdb2 is now either missing (bad drive or cable?) or showing up with a different name.

You want to update your fstab to refer to your root (and /boot and others) by UUID= instead of /dev/sdbX. It looks like you are not using full-disk encryption but if you are, there is /etc/crypttab for that.
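For reference, with the partition layout described in the post (EFI, root, var, tmp and swap on the SSD, /home on the HDD), a UUID-based fstab would look roughly like this; every UUID below is a made-up placeholder to be replaced with real blkid output:

```
# <file system>                           <mount point> <type> <options>         <dump> <pass>
UUID=0000-AAAA                            /boot/efi     vfat   umask=0077        0      1
UUID=11111111-2222-3333-4444-555555555555 /             ext4   errors=remount-ro 0      1
UUID=22222222-3333-4444-5555-666666666666 /var          ext4   defaults          0      2
UUID=33333333-4444-5555-6666-777777777777 /tmp          ext4   defaults          0      2
UUID=44444444-5555-6666-7777-888888888888 none          swap   sw                0      0
UUID=55555555-6666-7777-8888-999999999999 /home         ext4   defaults          0      2
```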

First off, you actually have two /etc/fstabs to consider: one on your root filesystem and one embedded in the initramfs on your boot partition. It is the latter you need to update here, since it is used earlier in the boot process and is needed to mount the rootfs. It should be a copy of your rootfs /etc/fstab and gets automatically copied/synced when you update the initramfs, either manually or on a kernel installation/upgrade.

So what you need to do to fix this:

  1. Identify partition UUIDs
  2. Update /etc/fstab
  3. Update the initramfs (update-initramfs -u -k all, or reinstall the kernel package)

You need to do this every time you make changes in fstab that need to be picked up in the earlier stages of the boot process. For mounting application or user data volumes it's usually not necessary, since the rootfs fstab also gets processed after the rootfs has been successfully mounted.

That step 3 is a conundrum when you can't boot!

Your two main options are a) boot from a live image, chroot into your system and fix and update the initramfs inside the chroot, or b) from inside the rescue shell, mount the drive manually to boot into your normal system and then sort it out so you don't have to do this on every reboot.

For a), I think the Debian wiki instructions are OK.

For b), from the busybox rescue shell I believe you probably won't have lsblk or blkid like another person suggested. But hopefully you can ls -la /dev/disk/by-uuid /dev/sd* to see what your drives are currently named and then mount /dev/XXXX /newroot from there.

In your case I think b) might be the most straightforward but the live-chroot maneuver is a very useful tool that might come in handy again in other situations and will always work since you are not limited to what's available in the minimal rescue shell.

Good luck!

[–] mvirts@lemmy.world 1 points 1 week ago

Don't be afraid of the command line, breaking Linux is how you end up learning how to use it!

I haven't done this tutorial but if that kind of thing helps you this one looks pretty good.

My best guess is you need to do something like:

(In the shell, one line at a time, enter runs the command)

mkdir /mnt/tmp
mount /dev/sda2 /mnt/tmp
nano /mnt/tmp/etc/fstab

Nano is a text editor that uses your whole terminal, so you will see the contents of /mnt/tmp/etc/fstab (the file that controls where disks are mounted); replace 'sdb' with 'sda' on the line starting with /dev/sdb2. The bottom of nano's screen shows the keyboard shortcuts: Ctrl+O writes the file out, asking for confirmation of the filename, which should stay the same. Exit nano with Ctrl+X, then reboot with the command 'reboot'.

If you get any errors about access denied or permissions, run 'sudo bash' to get a shell with more power and try again.

Good luck!

What most likely happened is your disk order switched and, as others have mentioned, using /dev/sda1 or something similar to point to partitions is unstable and can't be trusted. Once your system is back up, look up how to specify partitions in /etc/fstab using UUIDs (something like /dev/disk/by-uuid/xxxx-xxxxxxxxxx-xxxx instead of /dev/sda2).
