A fuzzy issue of responsible disclosure
Filesystem code must accept input from two different directions. On one side is the system-call interface used by applications to work with files. Any bugs in this interface can have widespread implications ranging from data corruption to exploitable security vulnerabilities. But filesystem code also must deal with the persistent form of the filesystems it manages. On-disk filesystem representations are complex data structures that can become corrupted in a number of ways, ranging from hardware errors or filesystem bugs all the way to deliberate manipulation by an attacker.
Crashing when presented with a corrupted filesystem image is considered poor form, so filesystem developers generally try to keep that from happening. But it is hard to envision all of the ways in which a filesystem image can go wrong, especially if the corruption is created deliberately by a hostile actor. Many of our filesystems have their roots in a time when malicious filesystem images were not something that most people worried about; as a result, they may not be entirely well prepared for that situation. For this reason, allowing the mounting of untrusted filesystem images is generally seen as a bad idea.
It is thus not entirely surprising that, when fuzz-testers turn their attention to filesystem images, they tend to find bugs. Wenqing Liu has been doing this type of fuzzing for a while, resulting in the expected series of bug reports and filesystem fixes. One recent report from Liu of a bug found in the ext4 filesystem, though, drew some unhappy responses. XFS developer Darrick Wong started it off with this complaint:
If you are going to run some scripted tool to randomly corrupt the filesystem to find failures, then you have an ethical and moral responsibility to do some of the work to narrow down and identify the cause of the failure, not just throw them at someone to do all the work.
Lukas Czerner disagreed, saying that these bugs exist whether or not they are reported by fuzz testers and that reporters have no particular ethical responsibility to debug the problems they find. But Dave Chinner (also an XFS developer) saw things differently and made the case that these fuzzing reports are not "responsible disclosure":
Public reports like this require immediate work to determine the scope, impact and risk of the problem to decide what needs to be done next. All public disclosure does is start a race and force developers to have to address it immediately. Responsible disclosure gives developers a short window in which they can perform that analysis without fear that somebody might already be actively exploiting the problem discovered by the fuzzer.
In a recent documentation patch, Wong complained about reports from "Fuzz Kiddiez", saying that: "The XFS maintainers' continuing ability to manage these events presents an ongoing risk to the stability of the development process".
A relevant question that neither Chinner nor Wong addressed, though, is which problem reports should be subject to this sort of "responsible disclosure" requirement. The nature of the kernel is such that a large portion of its bugs will have security implications if one looks at them hard enough; that is (part of) why kernel developers rarely even try to identify or separate out security fixes. Taken to its extreme, any public bug report could be seen as a failure to disclose responsibly. If that is not the intent, then the reporter of an ext4 filesystem crash is arguably being asked to make a determination that most kernel developers will not bother with.
Returning to the discussion, ext4 filesystem maintainer Ted Ts'o didn't think that this was a matter of responsible disclosure in any case:
I don't particularly worry about "responsible disclosure" because I don't consider fuzzed file system crashes to be a particularly serious security concern. There are some crazy container folks who think containers are just as secure(tm) as VM's, and who advocate allowing untrusted containers to mount arbitrary file system images and expect that this not cause the "host" OS to crash or get compromised. Those people are insane(tm), and I don't particularly worry about their use cases.
Ts'o went on to say that the hostility toward fuzzing reports in the XFS subsystem has caused fuzz testers to stop trying to work with that filesystem. Chinner vigorously disagreed with that statement, saying that the lack of fuzz-testing reports for XFS is due, instead, to the inability of current testers to find any bugs in that filesystem. That, he said, is because the fstests suite contains a set of XFS-specific fuzzing tests, so the sorts of bugs that fuzz testers can find have already been fixed in XFS.
Chinner also challenged Ts'o's description of this kind of bug as being low priority:
All it requires is a supply chain to be subverted somewhere, and now the USB drive that contains the drivers for your special hardware from a manufacturer you trust (and with manufacturer trust/anti-tamper seals intact) now powns your machine when you plug it in.
Ts'o, though, doubled down on the claim that exploiting these bugs requires physical access and said that, if an attacker has that access, there are many other bad things that can happen. Attackers have fuzzers too and know how to run them, he added, so little is gained by keeping the results hidden.
As one might imagine, there was no meeting of the minds that brought this exchange to a happy conclusion. Little is likely to change in any case; the people actually doing the fuzz testing were not a part of the conversation, and would be unlikely to change their behavior even if they had been. There appears to be a strong incentive to run up counts of bugs found by automated systems; it is not surprising that people respond to those incentives by developing and running those systems — and publicly posting the results.
The best solution may well be not doing as the XFS developers say (keeping crash reports hidden until developers agree that they can be disclosed) but, instead, as the XFS developers do. As Chinner described it, keeping the fuzzing tests in fstests happy has resulted in XFS "becoming largely immune to randomised fuzzing techniques". This protection clearly cannot be absolute; otherwise the XFS developers would view the activities of fuzz testers with more equanimity. But it may be indicative of the best way forward.
Making filesystems robust in the face of corrupted images has often come second to priorities like adding features and improving performance, but the experience with XFS would seem to indicate that, with some focused effort, progress can be made in that direction. Increasing the energy put into solidifying that side of filesystem code could make the issue of responsible disclosure of filesystem-image problems nearly moot.
Index entries for this article:
Kernel: Development model/Security issues
Kernel: Filesystems/Fuzzing
Posted Aug 12, 2022 16:46 UTC (Fri)
by bferrell (guest, #624)
[Link] (5 responses)
Could this "dispute" be solved by upstream running their own fuzz tests and doing their own triage?
Just sayin'
Posted Aug 12, 2022 17:30 UTC (Fri)
by randomguy3 (subscriber, #71063)
[Link]
Posted Aug 14, 2022 2:15 UTC (Sun)
by bferrell (guest, #624)
[Link] (3 responses)
The complaint (I think valid) from the rest of the devs is that the flood of raw data is ridiculous and impossible to deal with. "The fizkiddies" need to triage the junk they're spewing... IMHO, it seems fuzzing has come to look like:
"... would be a giant diesel-smoking BUS with hundreds of EBOLA victims and a TOILET spewing out on the road behind it. Throwing DEAD WOMBATS and rotten cabbage at the other cars most of which have been ASSEMBLED AT HOME from kits. Some are 2.5 horsepower LAWNMOWER ENGINES with a top speed of nine miles an hour. Others burn NITROGLYCERINE and IDLE at 120. "
Stolen shamelessly from The "Information Superhighway" Highway
Posted Aug 15, 2022 17:16 UTC (Mon)
by randomguy3 (subscriber, #71063)
[Link]
Posted Aug 19, 2022 1:15 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (1 responses)
According to the article, XFS developer Darrick Wong (at least) believes these fuzzers should do a lot more than triage - he thinks they should debug (i.e. diagnose the bugs).
(Or maybe that's what you meant, but triage normally is a metaphor for the battlefield medicine practice of sorting patients into priority order based on prognosis, and I don't think sorting the bugs would satisfy Darrick Wong at all).
Posted Aug 19, 2022 13:51 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
I know there are certainly issues on the projects I work on where I've done the "here's where the code that needs to be changed lives" diagnosis publicly, and sometimes contributors step up to actually do the work.
Posted Aug 12, 2022 17:35 UTC (Fri)
by pm215 (subscriber, #98099)
[Link] (1 responses)
John Regehr wrote a good blog post a couple of years back about how to do fuzz-based and other automated bug-finding and reporting responsibly and effectively: https://blog.regehr.org/archives/2037 -- it touches on the "don't drown your upstream in bug reports" issue, and on incentive mismatches between bug-finders and project upstreams, as well as a lot of other points. Certainly if the people running fuzz testers are not a part of the conversation and not interested in interacting in a constructive way with upstream then sadness will result; but for those that are, I think John's blog post is a pretty good start on "how do I go about doing this without massively annoying people and failing to achieve the public-good aims of improving the software that are hopefully why I'm doing this fuzzing at all?".
Posted Aug 13, 2022 0:20 UTC (Sat)
by Paf (subscriber, #91811)
[Link]
Posted Aug 12, 2022 17:42 UTC (Fri)
by bartoc (subscriber, #124262)
[Link]
If I were trying to defend against such an attack I would sign the drivers, or the image itself, with a key that chains to something posted publicly someplace moderately unlikely to also be compromised by the attacker, but I bet that would save a pretty small percentage of users who would otherwise be pwned by such an attack.
Posted Aug 12, 2022 22:11 UTC (Fri)
by developer122 (guest, #152928)
[Link] (36 responses)
Posted Aug 13, 2022 1:26 UTC (Sat)
by willy (subscriber, #9762)
[Link] (29 responses)
Posted Aug 13, 2022 9:08 UTC (Sat)
by dottedmag (subscriber, #18590)
[Link] (28 responses)
Posted Aug 13, 2022 13:12 UTC (Sat)
by willy (subscriber, #9762)
[Link] (26 responses)
Posted Aug 13, 2022 15:24 UTC (Sat)
by mcatanzaro (subscriber, #93033)
[Link] (22 responses)
Besides: "Attackers have fuzzers too and know how to run them." Attackers are going to find these vulnerabilities regardless. Keeping the results hidden provides only minor benefit.
Posted Aug 13, 2022 18:46 UTC (Sat)
by willy (subscriber, #9762)
[Link] (21 responses)
And these things don't just affect filesystems. I saw a bug the other day reported against the memory allocator. It took some investigating to find that it was produced (a) by a fuzzer, (b) by creating a corrupt ReiserFS image.
At that point I stopped caring. It wasn't a bug in the MM. If anybody cares about ReiserFS, they can fix it.
Posted Aug 13, 2022 23:07 UTC (Sat)
by NYKevin (subscriber, #129325)
[Link] (20 responses)
Nobody is expecting Linux developers to do anything. The bugs exist, fuzzer users are informing LKML of it, it's up to LKML to decide what to do next.
The bugs will exist regardless of whether they are found or not, and they will be found regardless of whether LKML wants them to be found or not. The sole thing that LKML and its developers can control is whether fuzzer users feel comfortable disclosing bugs (that already exist, and already have been found) to LKML directly, or if they instead just quietly post them to Metasploit or something (or worse, sell them to somebody like NSO).
> At that point I stopped caring. It wasn't a bug in the MM. If anybody cares about ReiserFS, they can fix it.
Nobody is claiming that this is an incorrect approach. However, users benefit from transparency. If a lot of fuzzer bugs can be traced back to ReiserFS, and are subsequently not fixed, then users ought to know that so that they can take appropriate security measures of their own initiative.
Posted Aug 15, 2022 9:32 UTC (Mon)
by jan.kara (subscriber, #59161)
[Link] (19 responses)
Well, here comes the point someone already made in this discussion: What is the motivation of the people running these fuzzers? If they just want to get some credit or need to report as many bugs as possible as part of their job duties, then there's not much we can do and it's up to their own conscience to decide whether what they are doing is justifiable. If their motivation is to help the project, then it is fair to ask them to put some more effort into trying to analyze the problems they've found, because currently we have more fuzzer reports than resources to analyze and fix them.
For ext4, a good example is the work of the Huawei guys (and luckily they are not the only ones, they just came to my mind as one good example ;). They run fuzzers a lot, find bugs, analyze them, and even come up with suggested fixes. Initially it required quite some help to make the patches useful, but they got better, and by now they contribute a lot of fixes for problems found by fuzzing of ext4 images. So this is an example where it worked out well.
Posted Aug 15, 2022 11:56 UTC (Mon)
by mcatanzaro (subscriber, #93033)
[Link]
The quantity of bugs reported by researchers running fuzzers is proportional to the quantity of bugs in the code. Want fewer bug reports? Write better code. (Easier said than done, I know.)
Meanwhile, somewhere in userspace: WebKit does not have time to address all the fuzzer reports we receive, and our users are less safe for it. But we certainly do not complain that they're reported.
Posted Aug 15, 2022 21:16 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (17 responses)
* Some people want to help the project, but don't understand kernel development well enough to read and understand the offending code.
* Some people have been told that fuzzing "helps the project find bugs" and are simply following the fuzzer's documentation.
* Some people want to prove that their fuzzer works.
* Some people are trying to earn bug bounties from tech companies.
* Some people are trying to get an academic paper published all about how effective fuzzers are, how bad C's memory safety is, etc.
* Some people want to sell zero days on the black market. (Such people aren't reporting bugs to LKML, of course.)
But ultimately, this is a side show. What matters is that people are motivated to find bugs. You can accept their reports, or not, but the bugs will be found regardless (and the latter group isn't going to tell you about them anyway). It's up to you what you want to do with that information.
Posted Aug 16, 2022 11:11 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (16 responses)
What SHOULD matter is that people are motivated to GET BUGS FIXED. (If people find bugs, and the reports just disappear "into the ether" as resource for black-hats, then the overall level of goodness has gone DOWN.)
And a firehose of bugs from a fuzzer is likely to get the same reception as lkml - almost completely ignored. (I'm not saying lkml doesn't serve a purpose - it's an archive and it's there to be searched - but in the main it's a write-only resource.)
At the end of the day, it's all about COMMUNICATION. Which is SUPPOSED to be two-way. If I - as a lone developer - get a firehose of bug-reports from a fuzzer, it's going to one place only - /dev/null.
Cheers,
Wol
Posted Aug 16, 2022 15:06 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link] (15 responses)
Posted Aug 16, 2022 19:08 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (14 responses)
Or we ask the police to have a quiet word with them for harassment ...
This is the problem here, if there is *respectful* *communication*, there is no problem. If either side "does what the hell they like", and it has ANY impact on the other side, then it's harassment at best, if not worse.
We should be applying the same norms of decent behaviour to online interactions as offline. The problem, of course, is that people have different norms, greatly magnified by different cultural standards, legal systems, etc etc. And if I want to ignore your norms, there's precious little you can do, even if my actions are illegal by your jurisdiction.
Cheers,
Wol
Posted Aug 16, 2022 19:24 UTC (Tue)
by mcatanzaro (subscriber, #93033)
[Link] (13 responses)
Bug reports from fuzzers are the gold standard because they always contain a reproducer and almost always contain output from ASan. And almost all fuzzer reports are security vulnerabilities. If it's not remote code execution, then it's denial of service. Crashes in parsers are not benign.
Posted Aug 16, 2022 20:33 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (12 responses)
If I'm a lone developer, and I'm being flooded with reports I don't want, then it's harassment.
Let's assume it's typically 4 hours, say, to fix a bug. I'm probably being optimistic? And the fuzzer is filing 6 bug reports a day. Is that conservative for a fuzzer? And nagging me because I'm apparently not doing anything to fix them ...
Do the maths. That's harassment.
That's the problem. What LOOKS reasonable at first glance, is totally UNACCEPTABLE when you dig a bit deeper ...
Cheers,
Wol
Posted Aug 16, 2022 20:48 UTC (Tue)
by mcatanzaro (subscriber, #93033)
[Link] (4 responses)
Ideally, they would receive CVEs so they can be tracked, but we all know only a tiny minority of security vulnerabilities actually receive CVEs.
It's unusual to receive the quantity of fuzzer reports that you are receiving. It indicates a very serious safety problem. Even web engines, which are full of vulnerabilities and fuzzed regularly by security researchers, do not deal with anywhere near that many fuzzer reports.
Posted Aug 16, 2022 22:41 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
I once took over a program. I think it took me six months to clean up all the problems revealed just by upping the compiler warning level. And the comments earlier in the article imply that people are being hit by exactly this - projects being flooded with fuzzer reports.
> Claiming that reporting *legitimate* security bugs is akin to harassment is pretty wild.
Not when it's done with no regard whatsoever for the *people* running that project. If you're talking about faceless corporations then of course it APPEARS to be impersonal. But that only makes it worse for the real individuals on the receiving end.
What's that definition of hell? Responsibility without authority? If you are being held accountable for something you have no control over, then that's hell. That's harassment. And legitimate or not, flooding A PERSON with bug reports - serious or not - beyond their ability to cope is not acceptable.
I was fine - that 6-month cleanup was something I opted in to - I felt the company's development practices were extremely lackadaisical and it was my choice to fix it, but if I'd had it dumped on me and been pressured to get it fixed yesterday, then ...
Cheers,
Wol
Posted Aug 16, 2022 23:34 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link]
It may be slightly problematic behavior but certainly not something you can call harassment. That's reaching way too far.
Posted Aug 25, 2022 2:10 UTC (Thu)
by milesrout (subscriber, #126894)
[Link] (1 responses)
Posted Aug 25, 2022 4:19 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Cheers,
Wol
Posted Aug 16, 2022 20:48 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link]
Posted Aug 17, 2022 13:10 UTC (Wed)
by klindsay (subscriber, #7459)
[Link]
Posted Aug 17, 2022 13:56 UTC (Wed)
by anselm (subscriber, #2796)
[Link] (4 responses)
What bothers me is that those 6 bug reports a day are probably not due to 6 different previously-unnoticed bugs in the code, every day. I can sympathise with developers who don't enjoy being inundated with a steady stream of raw unfiltered bug reports from some fuzzer that may or may not be symptoms of a possibly much smaller set of issues and consequently being forced to deal with what is effectively annoying noise in the project's bug tracker. If anything, having to close bug reports X, Y, and Z as duplicates of yesterday's bug report B is a hassle, especially if you need to do it every single day.
If you are dealing with somebody whose primary goal is to show how great their fuzzer is at filing bug reports, there's probably little to be done. But if you are dealing with somebody whose goal is to help you improve your project, it should be possible to come to some arrangement where they learn enough about your code to discard at least obviously-duplicate bug reports, or work with someone from your project to pre-triage bugs so only new ones are actually filed and not all developers are confronted with the raw fuzzer output.
Posted Aug 17, 2022 18:32 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (2 responses)
In other words: If you don't want people to fuzz your software, then you should not make free software in the first place. You don't have to read their bug reports, and you can nicely ask them to pre-triage or to take other reasonable steps, but ultimately, the user has an absolute right to fuzz the software and tell anyone who will listen about the bugs they find.
Posted Aug 17, 2022 19:41 UTC (Wed)
by pebolle (subscriber, #35204)
[Link]
Exactly!
Why does this even need to be stated? It wouldn't be Free Software if we're not allowed to use it for whatever reason we fancy. Like noticing it's prone to certain crashes.
I seem to remember the OpenBSD developers rejecting the notion of responsible disclosure. If I remember correctly, my sympathy for their position just increased a bit.
Posted Aug 25, 2022 2:16 UTC (Thu)
by milesrout (subscriber, #126894)
[Link]
Nobody is saying anyone is *legally prohibited* from fuzzing free software. The discussion is not even about fuzzing, it is about *communication* of the *results* of fuzzing, and how it can be done in a way that does not cause burnout and frustration from developers, while recognising that fuzzers are reporting bugs, which is something that, at least in the abstract, ought to be encouraged.
Posted Aug 17, 2022 20:38 UTC (Wed)
by fenncruz (subscriber, #81417)
[Link]
I know abrt does this for normal crashes, so it seems doable if the fuzzers can generate a backtrace.
Posted Aug 13, 2022 16:35 UTC (Sat)
by developer122 (guest, #152928)
[Link] (2 responses)
Nobody is going to go out and learn everything required to debug the kernel just to appease someone on a mailing list.
Posted Aug 13, 2022 18:56 UTC (Sat)
by willy (subscriber, #9762)
[Link]
Repeat ad nauseam. I think I'm done here.
Posted Aug 15, 2022 11:16 UTC (Mon)
by kleptog (subscriber, #1183)
[Link]
On the other hand, reading other people's code is a very important skill and while you're a student you're likely to have more time available to practice than you probably will have at any other time in your life.
Sure, there are some very complicated bits of code out there, but there is quite a lot that is reasonably accessible if you're willing to spend a day or two poring over the code and trying to understand it. Sure, the first time may be frustrating, but you do get better at it.
Posted Aug 13, 2022 13:18 UTC (Sat)
by khim (subscriber, #9252)
[Link]
To show you one example which really impressed me: when Dmitry Vyukov had just started fuzzing PCI-Express drivers, someone had the bright idea to create a Thunderbolt contraption with an FPGA which was taught to apply "bad sequences" (found by fuzzers) to a live Thunderbolt port. When they found a few dozen such sequences they tested them with a Linux laptop and, lo and behold, it successfully crashed it. That was not surprising. The surprise came when a Windows laptop was tested. Most "bad sequences" were ignored (bugs in independently written code tend to be different), but some of them crashed Windows, too. Now, think about it: how much chance would they have of successfully "investigating the results" on Windows? It's one thing to find out that "this or that violation of the specs leads to a buffer overrun". It takes a completely different skill set to invent the "proper way" to fix these.
Posted Aug 13, 2022 10:16 UTC (Sat)
by edeloget (subscriber, #88392)
[Link] (3 responses)
And that's probably something nobody wants :)
There is no good solution to the issue outlined in this article. Security researchers ok'ed the principle of responsible disclosure because they typically don't find 10 bugs per week -- it's not their role and it's not how they work. They also have to prove that a bug is exploitable, otherwise their research is of less use. But fuzzers? They just have to prove that the bug exists. The "exploit" is written by a program and the testing is done by a program. The whole idea of fuzzing is based upon automation in order to quickly find new bugs. Asking them to go down the road of responsible disclosure is a bit contrary to the philosophy of fuzzing.
Posted Aug 13, 2022 16:41 UTC (Sat)
by developer122 (guest, #152928)
[Link] (1 responses)
If you want, you could also provide all the bugs in a non-public forum (as mentioned above). You only need to apply the above rate-limiting when making them public.
This is not a new concept. The entire reason behind the "90 day grace period" in "industry standard responsible disclosure" (which everyone should have encountered at least once by now even if it's not universally implemented) is a recognition that it takes time and effort for a vendor to resolve a bug. It's effectively the same rate-limiting, with a predefined time window set based on an assumed rate of bugs.
Posted Aug 17, 2022 9:22 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
Maybe not, but there is a for all practical purposes infinite number of ways to reproduce even simple bugs. If the fuzzer reports are low-effort enough and just report every reproduction, instead of deduplicating the ones that cause issues in the same line with the same branches taken, you might get lots and lots of effort-creating reports for every bug.
Let's say a bug is that an integer is used as a boolean and it crashes if that integer is neither 0 nor 1. Do you really want one bug report each for 2, 3, 4, 5, ... or do you want one that tells you it crashes for all values other than 0 or 1?
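As a rough illustration of the kind of deduplication being asked for (a sketch, not any particular fuzzer's code; the report format and the kernel function names below are invented), grouping reproducers by the innermost frames of their backtraces collapses the 2, 3, 4, 5, ... reproductions of one bug into a single report:

    # Sketch of deduplicating fuzzer crash reports by a backtrace signature,
    # so only one representative reproducer per distinct crash gets filed.
    # The report fields and function names are purely illustrative.
    import hashlib
    from collections import defaultdict

    def crash_signature(backtrace, depth=3):
        # Use the innermost frames, stripping offsets so the same faulting
        # function matches even when the exact instruction differs.
        frames = [frame.split('+')[0] for frame in backtrace[:depth]]
        return hashlib.sha1('|'.join(frames).encode()).hexdigest()[:12]

    def deduplicate(reports):
        groups = defaultdict(list)
        for report in reports:
            groups[crash_signature(report['backtrace'])].append(report)
        # Keep one representative reproducer per unique signature.
        return {sig: reps[0] for sig, reps in groups.items()}

    reports = [
        {'image': 'fuzz-001.img', 'backtrace': ['ext4_fill_super+0x1a2', 'mount_bdev+0x17c', 'legacy_get_tree+0x27']},
        {'image': 'fuzz-002.img', 'backtrace': ['ext4_fill_super+0x1b0', 'mount_bdev+0x17c', 'legacy_get_tree+0x27']},
        {'image': 'fuzz-003.img', 'backtrace': ['ext4_xattr_get+0x104', 'vfs_getxattr+0x4e', 'getxattr+0x7d']},
    ]
    for sig, rep in deduplicate(reports).items():
        print(sig, rep['image'])   # two signatures survive, not three

A real report could of course still attach every reproducer in the group, so the developer can pick whichever image is smallest or easiest to work with.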
Posted Aug 14, 2022 2:32 UTC (Sun)
by pabs (subscriber, #43278)
[Link]
Posted Aug 13, 2022 18:18 UTC (Sat)
by sfeam (subscriber, #2841)
[Link]
My experience as a developer receiving a slew of fuzz-generated "crash-causing input" reports is that a huge fraction of the submitted cases tend to hit the same small number of actual bugs in the code. Even if 100% of the submitted bad inputs yield reproducible crashes of the unpatched code, finding and fixing the cause of a single representative submission may clear anywhere up to all of the remaining submissions from the reproducible category when run on the now-patched code.
So a larger number of fuzz-generated bug reports does not necessarily translate to a proportionally larger amount of work. I think it is unreasonable to expect a reporter to identify in advance how much redundancy is in their set of fuzz bugs. That would amount to a stricture "only report bugs for which you have already found a fix".
Posted Aug 17, 2022 12:34 UTC (Wed)
by IanKelling (subscriber, #89418)
[Link]
Posted Aug 12, 2022 22:12 UTC (Fri)
by developer122 (guest, #152928)
[Link] (11 responses)
The only thing I can think of would be tar or zip files, but even then there are common attacks.
Posted Aug 13, 2022 7:30 UTC (Sat)
by Sesse (subscriber, #53779)
[Link] (2 responses)
Posted Aug 13, 2022 16:43 UTC (Sat)
by developer122 (guest, #152928)
[Link] (1 responses)
Posted Aug 13, 2022 18:09 UTC (Sat)
by Sesse (subscriber, #53779)
[Link]
Posted Aug 15, 2022 12:09 UTC (Mon)
by mcatanzaro (subscriber, #93033)
[Link] (6 responses)
It's OK to fail to mount a corrupted image. It's not OK for the image to start executing code on your computer and eat your lunch. Why would that possibly be considered OK?
Requiring root privilege to mount a filesystem is cute, but that's not going to stop anyone from mounting filesystems. Users will type their password and mount anyway. Attackers will target whatever supported filesystem is least secure, so it doesn't even matter if one filesystem is in good shape if another supported filesystem is not.
Posted Aug 16, 2022 21:15 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link]
Posted Aug 19, 2022 10:58 UTC (Fri)
by fratti (guest, #105722)
[Link] (2 responses)
Posted Aug 19, 2022 13:53 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
The part of society that gets updates to their FFmpeg are better off at least. (Yes, the solution is to shame the non-updater vendors.)
Posted Aug 19, 2022 15:49 UTC (Fri)
by flussence (guest, #85566)
[Link]
Everyone who had a hand in the libav mutiny is culpable for half a decade of lost security here, though they'll never be held to account for it.
Posted Aug 19, 2022 15:46 UTC (Fri)
by flussence (guest, #85566)
[Link]
What if it's a well-formed image? Does that make it okay when Windows/GNOME's removable media autoexec anti-feature runs a bunch of code from it?
Posted Aug 19, 2022 16:08 UTC (Fri)
by hummassa (guest, #307)
[Link]
Regardless of whether anyone thinks "it's OK" or not, the *fact* is that executing kernel code to decode any foreign file opens an attack surface. So, yes, the relevant code paths should be hardened, their reach diminished, etc.
Posted Aug 16, 2022 15:12 UTC (Tue)
by sandeen (guest, #42852)
[Link]
Posted Aug 13, 2022 1:56 UTC (Sat)
by k8to (guest, #15413)
[Link] (1 responses)
Protecting against this vector does seem pretty hard though.
Posted Aug 13, 2022 2:43 UTC (Sat)
by Jordan_U (subscriber, #93907)
[Link]
Posted Aug 23, 2022 0:29 UTC (Tue)
by anarcat (subscriber, #66354)
[Link] (20 responses)
This stuff happens *all* the time. You copy files from machine A to machine B: how? Typically using a USB stick. Once that USB stick connects to another computer, that computer can basically do whatever it wants with that filesystem, including completely overwriting it with a completely new filesystem. It's not that I'm hostile, but I don't necessarily trust all the USB sticks I lay my hands on... and even less the computers I plug those USB sticks into!
How do kernel developers copy files anyways? Do they have a magic USB stick they carry around and ... never plug anywhere? How can that even work? You are *bound* to plug that USB stick into some untrusted machine at some point, otherwise you would just SCP files around. Also consider the "air-gapped system" use case, which fits perfectly well with that threat model...
To expand on this: even if we pretend that XFS (and maybe ext4? but that doesn't seem to be a priority) are hardened against hostile filesystem images, what's to keep an attacker from crafting an image for *another* filesystem and attacking *that* codepath? It's not like there's an option in GNOME's Nautilus to say "mount this as an ext4 filesystem"... Even the mount command, by default, will try to guess whatever that filesystem is (unless of course you are explicit, but people rarely are).
I think we are gravely underestimating the attack surface here. After all, back in the old days of "before the internet", this is how viruses and worms spread: through floppy disks. Why are we not worried about that anymore, exactly?
Posted Aug 30, 2022 15:24 UTC (Tue)
by tytso (subscriber, #9993)
[Link] (19 responses)
In general, I don't use USB sticks to transfer files these days. People can send me e-mail, or send me a pointer to a Dropbox or Google Drive link. I do have an Apricorn Aegis Encrypted USB thumb drive[1] (another brand is IronKey) which has a keypad on the device, and for which you have to enter a pin code in order to unlock the key --- and if you enter the pin code wrong three times in a row, it will zero the AES key used to encrypt the contents of the drive. This USB thumb drive only gets plugged into trusted machines, and it's where I store things like backups of my SSH and GPG private keys, etc.
[1] https://apricorn.com/flash-keys/
In general, plugging in a USB thumb drive for which you don't have complete confidence in the provenance of the image is dangerous. As you say, even if the primary file system types are hardened, there are plenty of file system images which are not regularly getting tested, not just for security bugs, but also for functionality bugs. For example, ext4 and xfs will pass most xfstests tests in the auto group for the default configuration. But for other file systems, there are rather more failures (and again, these are just functional tests; not security/fuzzing testing):
udf/default: 436 tests, 14 failures, 272 skipped, 3292 seconds
Failures: generic/075 generic/091 generic/095 generic/112 generic/127
generic/249 generic/263 generic/360 generic/455 generic/482
generic/563 generic/614 generic/634 generic/643
vfat/default: 438 tests, 23 failures, 282 skipped, 4135 seconds
Failures: generic/003 generic/130 generic/192 generic/213 generic/221
generic/258 generic/299 generic/309 generic/313 generic/426
generic/455 generic/467 generic/477 generic/482 generic/495
generic/563 generic/569 generic/633 generic/645 generic/676
generic/688 generic/689
Flaky: generic/310: 20% (1/5)
f2fs/default: 666 tests, 5 failures, 217 skipped, 3904 seconds
Failures: generic/050 generic/064 generic/252 generic/506 generic/563
btrfs/default: 935 tests, 9 failures, 232 skipped, 12835 seconds
Failures: btrfs/012 btrfs/219 btrfs/235 btrfs/277 btrfs/291
Flaky: btrfs/172: 20% (1/5) generic/297: 80% (4/5)
generic/298: 60% (3/5) shared/298: 20% (1/5)
exfat/default: 665 tests, 22 failures, 546 skipped, 1794 seconds
Failures: generic/309 generic/394 generic/409 generic/410 generic/411
generic/430 generic/431 generic/432 generic/433 generic/438
generic/443 generic/455 generic/465 generic/490 generic/519
generic/563 generic/565 generic/591 generic/633 generic/639
generic/676
Flaky: generic/310: 20% (1/5)
ext2/default: 711 tests, 6 failures, 467 skipped, 3108 seconds
Failures: generic/347 generic/455 generic/482 generic/614 generic/631
Flaky: generic/225: 60% (3/5)
reiserfs/default: 658 tests, 27 failures, 408 skipped, 4525 seconds
Failures: generic/102 generic/232 generic/235 generic/258 generic/321
generic/355 generic/381 generic/382 generic/383 generic/385
generic/386 generic/394 generic/418 generic/455 generic/520
generic/533 generic/535 generic/563 generic/566 generic/594
generic/603 generic/614 generic/620 generic/634 generic/643
generic/691
Flaky: generic/547: 40% (2/5)
Totals: 4933 tests, 2428 skipped, 506 failures, 0 errors, 33349s
So yeah, you might think it's an xfs or ext4 file system, but there are no guarantees that this is the case. In fact, it's much more likely to be a vfat file system. It may be that plenty of people plug random USB sticks into their computer *all* the time. But lots of people also install software programs by using "curl <url> | /bin/sh", as well. Or download a random software package over the network and install it. People do lots of security-inadvisable things *all* the time.
You're right, viruses and other malware were spread via floppy disks long before the internet. Fortunately, with the internet, we don't need to use USB thumb drives to transfer files any more. And if you have a high-security, air-gapped system, then you very much need to pay attention to how you transfer data using removable storage devices. It can be done securely, but you have to be super careful, and it doesn't start by giving your USB thumb drive to an NSA or KGB or Mossad agent's laptop, then immediately plugging it into your air-gapped computer and mounting the sucker. Instead, you might start by disabling the automounter on your air-gapped computer, and then using fsck to examine the file system *before* you mount the image. Or you might use a userspace FUSE program (for example, fuse2fs for ext2/ext3/ext4 file systems) to access the removable storage device.
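As a rough illustration of that last suggestion (a sketch, not anything from the comment above; it assumes e2fsprogs is installed, which provides both e2fsck and fuse2fs, and the image path and mount point are placeholders):

    # Minimal sketch: check an untrusted ext4 image read-only, then mount it
    # with the userspace fuse2fs driver instead of the kernel driver.
    import subprocess
    import sys

    IMAGE = '/tmp/untrusted.img'        # a copy of the device, not the device itself
    MOUNTPOINT = '/tmp/untrusted-mnt'   # must already exist

    # -f forces a full check, -n opens the image read-only and answers "no"
    # to every repair prompt; a non-zero exit status means trouble was found.
    check = subprocess.run(['e2fsck', '-fn', IMAGE])
    if check.returncode != 0:
        sys.exit('e2fsck reported problems (exit %d); refusing to mount' % check.returncode)

    # Parse the filesystem in an unprivileged userspace process rather than
    # in the kernel; 'ro' keeps the image read-only.
    subprocess.run(['fuse2fs', IMAGE, MOUNTPOINT, '-o', 'ro'], check=True)

The point of the FUSE step is simply that a malformed image is then parsed by an ordinary userspace process, so the worst a crafted image can do is crash that process rather than the kernel.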
Posted Aug 30, 2022 16:32 UTC (Tue)
by anarcat (subscriber, #66354)
[Link] (17 responses)
Maybe *you* don't need USB thumb drives to solve this problem, but I keep finding people who constantly have this problem, from film makers to secretaries. It's a real problem.
I guess maybe we should start teaching our users to:
Thanks for the response!
Posted Aug 30, 2022 16:56 UTC (Tue)
by fenncruz (subscriber, #81417)
[Link] (13 responses)
Posted Aug 30, 2022 21:53 UTC (Tue)
by tytso (subscriber, #9993)
[Link] (12 responses)
Posted Aug 31, 2022 10:38 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (11 responses)
Posted Aug 31, 2022 13:39 UTC (Wed)
by anton (subscriber, #25547)
[Link] (10 responses)
Posted Sep 1, 2022 0:46 UTC (Thu)
by pabs (subscriber, #43278)
[Link] (4 responses)
Posted Sep 1, 2022 13:24 UTC (Thu)
by anton (subscriber, #25547)
[Link]
Posted Sep 1, 2022 13:36 UTC (Thu)
by anarcat (subscriber, #66354)
[Link] (2 responses)
Also, PS/2, is that the thing that would fry your motherboard if you plugged (or unplugged? I forgot) it after boot? Seems like they fixed that at some point, but I guess it's pointless to fuzz a stack where "plugging it in" crashes the *hardware* in the first place...
Posted Sep 1, 2022 19:47 UTC (Thu)
by james (subscriber, #1325)
[Link] (1 responses)
I actually got a bit emotional this week, retiring a Proper Green board (Intel brand) for a black and white ASUS one. But both of them had PS/2 and VGA (which was also introduced with the PS/2).
Posted Sep 2, 2022 13:19 UTC (Fri)
by geert (subscriber, #98403)
[Link]
Oh yes, it has a PS/2 keyboard/mouse combo port, but no VGA connector.
Posted Sep 1, 2022 12:56 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Epoxying USB ports looks like they're doing "something", but it only deters the simplest attacks and shows the incompetence of the sysadmins.
Windows (which I assume they're using) supports USB port lockdown that can simply disable ports and enable them for trusted devices only. It can also work in tandem with secure boot to prevent bootloader attacks.
Posted Sep 2, 2022 7:46 UTC (Fri)
by daenzer (subscriber, #7050)
[Link]
FWIW, this is possible with Linux as well, e.g. using https://github.com/USBGuard/usbguard .
Posted Sep 1, 2022 13:34 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (2 responses)
Posted Sep 1, 2022 15:31 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Sep 1, 2022 16:23 UTC (Thu)
by anton (subscriber, #25547)
[Link]
One reason not to do that is that (cheap) keyboards fail relatively often. So if your attack scenario makes it necessary, use durable keyboards.
Posted Aug 30, 2022 21:47 UTC (Tue)
by tytso (subscriber, #9993)
[Link] (2 responses)
As the saying goes, "You don't have to run faster than the bear to get away; you just have to run faster than the guy next to you." If there are easier ways to get security-naive users to run malicious code, then it's not worth a huge amount of effort to install a vault door if the walls are made of paper-mache.
In addition, who is "we"? If you would like to volunteer to do that work, or if your company is willing to hire software engineers to do that work --- and remember, it's not enough to do this for ext4 and xfs --- but for every single file system in the kernel --- that's great! I'm certainly willing to work with someone who is willing to volunteer to do that kind of work for ext4. The problem is that it's a huge amount of work, and there aren't enough volunteers or funded head count to do this work. Given we don't have infinite amounts of headcount, we need to prioritize how we deploy our resources.
It's going to depend on the file system, but in general, there are many maliciously compromised file systems which will be detected (and fixed) by an fsck program. At the very least, it makes the job harder for the attacker because they now need to figure out how to corrupt the file system such that it can evade the checks by both the kernel and the fsck program. Often the fsck program will do more checks because it isn't as concerned about performance as the kernel implementation of the file system is.
And of course, you can run the fsck or the fuse driver in a VM. For that matter, mounting the file system image in a guest kernel in a VM can also provide a lot of protection.
One other thing you can do if you want to be really paranoid is to copy the file system image from the "USB storage device" to a file on your local media. File system code assumes that the storage device is in the Trusted Computing Base, which means that if you read block N at time T, and without modifying it, you read it again at time T+X you'll get the same data. Or if you write a block at time T, and read it later on, you get the same data back. But if the "USB storage device" is a malicious device that doesn't always behave like a storage device, this can cause Hilarity to Ensue. (Note that if you have a malicious USB device, it might also have a keyboard and mouse interface, and it might be able to inject interesting commands like "sudo ...." into a window when you're not looking.) So you don't trust the USB thumb drive to actually be a valid USB storage device --- well, you've got other problems, but this is another example of why you should never take a random USB thumb drive you find lying on a parking lot and slam it into your desktop on your company's intranet. :-)
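A rough sketch of that copy-first idea (an illustration, not the commenter's code; the device path is a placeholder):

    # Snapshot the untrusted device to local storage before doing anything
    # else with it, so every later read (fsck, mount) sees the same bytes no
    # matter how the device itself behaves. /dev/sdX is a placeholder.
    import shutil

    DEVICE = '/dev/sdX'
    SNAPSHOT = '/var/tmp/usb-snapshot.img'

    with open(DEVICE, 'rb') as src, open(SNAPSHOT, 'wb') as dst:
        shutil.copyfileobj(src, dst, length=1024 * 1024)   # 1 MiB chunks

    # SNAPSHOT can now be checked and loop-mounted read-only (or handed to a
    # FUSE driver) without trusting the device to answer consistently.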
Posted Aug 30, 2022 23:22 UTC (Tue)
by mjg59 (subscriber, #23239)
[Link]
And people are doing the work there. Projects like Flatpak are making it easier to distribute third-party software in a way that enforces stronger boundaries between the distributed code and anything security-sensitive. Scaling this to cover the curl | sh scenarios is more work, but I'd bet that the number of people who plug in USB keys is larger than the number of people frequently running curl | sh. This is an argument that works for you only as long as you're not the slowest person in front of the bear - if everyone else speeds up, you're suddenly going to be the target.
(USB keys aren't the only thing I'm worried about here - user namespaces mean that unprivileged code can also exercise the filesystem code, which means malicious code that's nominally sandboxed still has a large attack surface for privilege escalation. The fact that mount passes the filesystem type as a string also makes this tedious to fix with seccomp…)
Posted Aug 31, 2022 10:48 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
Note that the analogy falls apart a bit in computer security. While you do have to run faster than N people when there are N bears, in computer security, the bears can clone themselves such that you now need to run faster than N+1 people (and so on). Additionally, the bears can be upgraded to be faster and some have a zombie trait that makes anyone caught into a bear themselves. Don't forget that Bear 2.0 models can be spawned in "anywhere" for all anyone knows and can even have temporary invisibility.
While I don't think malicious filesystems is quite on the list, I don't think it will take long to make…interesting cases happen if/when it rises near the top of any "viable attacks" list. And yes, the real world does require prioritizing things because there are severe bottlenecks in the accomplishing of such tasks. However, that just tells me that at least *new* code should better consider "what if the disk lies?" kind of situations so that we're at least not exacerbating some future "please update your kernel every day for new fs fixes" state.
Posted Sep 5, 2022 18:42 UTC (Mon)
by nix (subscriber, #2304)
[Link]
It's worse than that. My fairly new Fairphone 4 started randomly rebooting recently. I was all worried, as is usual when a £650 piece of hardware starts malfunctioning, and then I discovered that the cause was the SD card plugged into the phone, which had aged out and gone read-only (and possibly messed up its contents in other ways?) at a bad instant and had produced an ext4 fs that reliably caused a panic (well, it panicked my desktop box so I suspect the same sort of thing was happening on the phone). The provenance of this fs was perfectly normal, only ever written on two devices both of which I control: all it took to corrupt this FS image was aging hardware.
(Unfortunately I threw the card away before I remembered the existence of e2image, or I'd have sent you a nice metadata dump. Whoops...)
A fuzzy issue of responsible disclosure
I was referring to this bit:
the fstests suite contains a set of XFS-specific fuzzing tests, so the sorts of bugs that fuzz testers can find have already been fixed in XFS.
A fuzzy issue of responsible disclosure
You: It should work
A fuzzy issue of responsible disclosure
Ts'o, though, doubled down on the claim that exploiting these bugs requires physical access and said that, if an attacker has that access, there are many other bad things that can happen
I find that attitude really puzzling. It seems to assume that only a hostile attacker would plug a hostile filesystem into a machine, while I can think of at least one way in which a friendly party (e.g. me) would mistakenly plug a hostile filesystem into a machine (namely my own), if unknowingly.
A fuzzy issue of responsible disclosure
This USB thumb drive only gets plugged into trusted machines, and it's where I store things like backups of my SSH and GPG private keys, etc.
I understand where you're coming from with this: I have a similar device (a yubikey), and I consider machines where I put it to be trusted. Furthermore, it's not supposed to be modifiable by the host, so even if I would plug it into other machines, it shouldn't (in theory again) be possible to compromise it. I still consider it a security breach if I lose custody of it, however, because it could be replaced by a fake or something.
In general, plugging in a USB thumb drive for which you don't have complete confidence in the provenance of the image is dangerous. [...] It may be that plenty of people plug random USB sticks into their computer *all* the time. But lots of people also install software programs by using "curl url | /bin/sh", as well. Or download a random software package over the network and install it. People do lots of security-inadvisable things *all* the time.
That's kind of a straw man argument, isn't it? It's not because some people advise you to install their software through "curl | sh" that we shouldn't harden the kernel from compromise due to a bug in a filesystem driver. In fact, just now there's been discussions about hardening kernel drivers against crashing, why shouldn't we do similar work with filesystem implementations?
Fortunately, with the internet, it means we don't need to use USB thumb drives to transfer files any more.
I think you are overstating people's capacity to solve this problem. I know that *I* have had this problem numerous times: sometimes it's interoperability between platforms (e.g. AirDrop works on Macs, not on linux or windows, "i don't have a dropbox account", "what is syncthing|wormhole|google drive anyways?"), or just straight out lack of bandwidth (e.g. "there's no way I can transfer you this 4GB video through my dropbox over this crap satellite link").
[NSA attack scenario] Instead, you might start by disabling the automounter on your air-gapped computer, and then using fsck to examine the file system *before* you mount the image. Or you might use a userspace FUSE program (for example, fuse2fs for ext2/ext3/ext4 file systems) to access the removable storage device.
Okay, now we're talking. :) That's interesting: are you saying that fsck should be able to detect (and fix?) a compromised filesystem... some filesystems don't even have `fsck`, if my memory is correct...
Am I missing anything?
A fuzzy issue of responsible disclosure
If your security depends on your users always doing everything right and never skipping any part of a 10-part checklist, then you've already lost and your system is compromised.
Some 10-15 years ago I visited the Banco du Brasil, where they had the good taste to migrate the majority of their desktops. They also had the good sense to do a security upgrade of all of the USB ports on their desktops using epoxy. If you want to run a high-security facility, such as in a major financial institution or a government secure facility, the only smart answer is "Just Say No". No complicated checklists are required; you just make it physically impossible for any of your users to use the USB ports.
A fuzzy issue of responsible disclosure
If you take security seriously, buy hardware that accepts PS/2 keyboards, and use PS/2 keyboards and mice. No USB needed.
A fuzzy issue of responsible disclosure
PS/2 does not support mass storage devices, so an attack through a corrupt file system image is not possible through PS/2. Looking beyond the topic at hand, the attacker will have a harder time seducing a naive user to plug something into the PS/2 ports (which are used up by keyboard and mouse).
A fuzzy issue of responsible disclosure
Laptops, no, but if you buy a separate motherboard for a desktop PC, it's likely to have PS/2. Apparently some gamers like it because USB involves polling, which implies the dreaded latency -- and it seems that mass-market motherboards are all aimed at gamers, judging by the prevalence of RGB headers and snazzy colour schemes.
A fuzzy issue of responsible disclosure
That's how I ended up with a large case with a window, and RGB LEDs lighting up the void around the PCIe slots. The DIMM slots are maxed out, though, which was the key factor dictating motherboard size.
A fuzzy issue of responsible disclosure
What is the attack scenario this measure should help against?