Looking forward, I'm still more worried about state-backed threat actors targeting open-source projects via this social-engineering route than about the technical issues. I think the technical avenues the attacker used can be addressed, at least to some degree.
-
If autoconf output is a good place to hide things, maybe shift away from distributing autoconf-generated files in the source tarball. I don't know all of the problems that come up with that, but I understand that at least some of them have been backwards-compatibility breaks across autoconf versions in the past.
-
If maintainers are putting up bogus tarballs that differ from what's in git, have GitHub disallow this, and for projects that currently ship differing tarballs legitimately, have GitHub figure out how to address their needs otherwise. If this can't be done, at the very least highlight the differences.
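One mechanical version of "highlight the differences": diff the release tarball against a `git archive` of the corresponding tag and classify what only exists in the tarball. A minimal sketch in Python (function names are mine, not an existing tool's; a legitimate release will still add autoconf output, so the goal is a reviewable delta, not an empty one):

```python
"""Compare a release tarball against a `git archive` of the same tag.

The malicious xz code lived in files present only in the release
tarball, so the interesting set is what the tarball adds or changes
relative to the tagged tree.
"""
import tarfile

def tar_members(path):
    """Map member name -> content bytes for regular files in a tarball."""
    out = {}
    with tarfile.open(path) as tf:   # transparently handles .gz/.bz2/.xz
        for m in tf.getmembers():
            if m.isfile():
                out[m.name] = tf.extractfile(m).read()
    return out

def diff_file_maps(release, git):
    """Classify differences between two {name: bytes} maps."""
    return {
        "added":   sorted(set(release) - set(git)),   # tarball-only files
        "removed": sorted(set(git) - set(release)),
        "changed": sorted(n for n in release.keys() & git.keys()
                          if release[n] != git[n]),
    }

def diff_tarballs(release_path, git_archive_path):
    """Diff a release tarball against e.g. `git archive -o tag.tar vX.Y`."""
    return diff_file_maps(tar_members(release_path),
                          tar_members(git_archive_path))
```

A reviewer (or CI job) would then only need to eyeball the "added" and "changed" lists against what the release process is expected to generate.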
-
If ifunc hooks are a favored vector for hiding things, some kind of automated auditing could probably be added for them.
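Such auditing is cheap in principle, because ifunc resolvers show up in the dynamic symbol table with their own symbol type. A minimal sketch, assuming binutils `readelf` is installed and parsing its usual symbol-table layout (which can vary slightly across versions; function names are mine):

```python
"""Flag GNU ifunc symbols in an ELF shared object.

Ifunc resolvers run at dynamic-link time, before most of the program,
which is part of what made them attractive as a hook point.  They are
reported by `readelf` with symbol type IFUNC, so listing them for
review is straightforward.
"""
import subprocess

def ifunc_symbols(readelf_output):
    """Return symbol names that readelf reports with type IFUNC."""
    found = []
    for line in readelf_output.splitlines():
        fields = line.split()
        # Symbol rows look like: Num: Value Size Type Bind Vis Ndx Name
        if len(fields) >= 8 and fields[3] == "IFUNC":
            found.append(fields[7])
    return found

def audit_library(path):
    """List IFUNC symbols in the shared object at `path`."""
    out = subprocess.run(["readelf", "--dyn-syms", path],
                         capture_output=True, text=True, check=True).stdout
    return ifunc_symbols(out)
```

An automated check could diff this list between releases and demand justification for any new resolver, since legitimate uses (CPU-feature dispatch in libc, crypto libraries) change rarely.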
-
If, in a project takeover, attackers try to embargo any reporting of security holes and ask that nobody but them be notified, it'd be possible to require that some trusted third party also be notified.
-
If threat actors try to inject attacks right before freeze -- I understand that Ubuntu may have been a target -- then apply greater scrutiny to those changes.
-
If distros linking extra code into sshd via distro patches is exposing sshd to security threats that the OpenSSH team isn't looking at, disallow that practice in distro policy for some set of security-sensitive projects.
-
Systemd may be too ready to link libraries into things that don't require them.
Maybe it makes sense to have a small number of projects that are considered "security-critical" and then require that they only depend on other projects that are also security-critical. That's not a magic fix, but it might tamp down on the damage a supply-chain attack could cause. Still...my suspicion is that if an attacker could get code into something like xz, they could probably, even starting with only user-level privileges, figure out ways to escalate to control of a system. All it takes is for a user with admin privileges to run something under their account.

Maybe Linux and some other software projects just fundamentally don't have enough isolation. That is, maybe the typical software package should be expected to run in a sandbox, the way smartphone software or video game console software does. That doesn't solve everything, but it at least reduces the attack surface.
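Some building blocks for that kind of sandboxing already exist; systemd itself offers per-service confinement directives. A sketch of what hardening a hypothetical service unit might look like (the service name and path are made up; the directives are real systemd options):

```ini
# /etc/systemd/system/example-daemon.service (hypothetical service)
[Service]
ExecStart=/usr/bin/example-daemon
NoNewPrivileges=yes        # block setuid/setcap-based privilege escalation
ProtectSystem=strict       # mount the OS file hierarchy read-only for this service
ProtectHome=yes            # hide /home, /root, and /run/user
PrivateTmp=yes             # give the service its own /tmp and /var/tmp
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
SystemCallFilter=@system-service   # allow-list of typical service syscalls
```

This is opt-in per service today, which is roughly the opposite of the smartphone model where confinement is the default.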
But the social side of this is a pain. We don't want to break down the system of trust that lets open source work well any more than is necessary...but clearly, maintainers are being targeted by attackers who have a lot of time to spend on putting together tactics against them. I'm not sure that your typical open-source maintainer -- health issues or no -- can realistically be constantly on guard against coordinated social-engineering attacks.
The attacker came via a VPN (well, unless they messed up) and had no history. The probable sockpuppets also had no history. It might be a good idea to look for people entering open-source projects who have no history and are only visible from behind a VPN...but my guess is that if we lean more on reputation, attackers will just seek to subvert that as well. In this case, they apparently spent years making non-malicious commits to build reputation. If you're willing to put three years into building reputation on one project, I imagine you can do something similar to leave an account lying in wait for the next open-source project to attack. And realistically, my guess is that if we trust non-VPN machines, a state-backed attacker could get hold of one...bouncing through a VPN is maybe more convenient for them, not something they absolutely have to do.
But without some way to help flag potential attackers, this just seems really problematic from a social standpoint. It's a lot harder to run an open-source project if one is constantly having to think "okay, has this person spent the past three years building reputation just so they can go bad on me, along with a supporting host of other bogus accounts?" I'm not sure that's possible, even for really paranoid people.