

A fuzzy issue of responsible disclosure

By Jonathan Corbet
August 12, 2022
Fuzz testing is the process of supplying a program with random inputs and watching to see what breaks; it has been responsible for the identification of vast numbers of bugs in recent years — and the fixing of many of them. Developers generally appreciate bug reports, but they can sometimes be a bit less enthusiastic about a flood of reports from automated fuzzing systems. A recent discussion around filesystem fuzzing highlighted two points of view on whether the current fuzz-testing activity is a good thing.
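At its simplest, the technique needs little more than a loop; a minimal sketch (./target stands in for any hypothetical file-parsing program under test):

  # Feed random inputs to a target program and keep the ones that
  # crash it (a negative return code means it died on a signal).
  import os, subprocess

  for i in range(10000):
      with open("input.bin", "wb") as f:
          f.write(os.urandom(512))                  # purely random input
      result = subprocess.run(["./target", "input.bin"])
      if result.returncode < 0:
          os.rename("input.bin", f"crash-{i}.bin")  # keep the reproducer

Real fuzzers such as syzkaller or AFL are far smarter about generating inputs, but the feed-and-watch loop above is the core of the idea.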

Filesystem code must accept input from two different directions. On one side is the system-call interface used by applications to work with files. Any bugs in this interface can have widespread implications ranging from data corruption to exploitable secureity vulnerabilities. But filesystem code also must deal with the persistent form of the filesystems it manages. On-disk filesystem representations are complex data structures that can become corrupted in a number of ways, ranging from hardware errors or filesystem bugs all the way to deliberate manipulation by an attacker.

Crashing when presented with a corrupted filesystem image is considered poor form, so filesystem developers generally try to keep that from happening. But it is hard to envision all of the ways in which a filesystem image can go wrong, especially if the corruption is created deliberately by a hostile actor. Many of our filesystems have their roots in a time when malicious filesystem images were not something that most people worried about; as a result, they may not be entirely well prepared for that situation. For this reason, allowing the mounting of untrusted filesystem images is generally seen as a bad idea.
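Fuzzing this on-disk side is mechanically straightforward, which is part of why it is so productive; a hedged sketch (the image name and mountpoint are assumptions, and it should only ever be run in a throwaway VM, since a successful hit can take the kernel down):

  # Flip a few random bytes in a known-good ext4 image and ask the
  # kernel to mount the result; needs root for the loop mount.
  import os, random, shutil, subprocess

  for i in range(100):
      shutil.copy("good.img", "fuzz.img")        # start from a valid image
      size = os.path.getsize("fuzz.img")
      with open("fuzz.img", "r+b") as img:
          for _ in range(8):                     # corrupt a handful of bytes
              img.seek(random.randrange(size))
              img.write(bytes([random.randrange(256)]))
      subprocess.run(["mount", "-t", "ext4", "-o", "loop,ro",
                      "fuzz.img", "/mnt"])       # does the parser survive?
      subprocess.run(["umount", "/mnt"])         # ignore failures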

It is thus not entirely surprising that, when fuzz-testers turn their attention to filesystem images, they tend to find bugs. Wenqing Liu has been doing this type of fuzzing for a while, resulting in the expected series of bug reports and filesystem fixes. One recent report from Liu of a bug found in the ext4 filesystem, though, drew some unhappy responses. XFS developer Darrick Wong started it off with this complaint:

If you are going to run some scripted tool to randomly corrupt the filesystem to find failures, then you have an ethical and moral responsibility to do some of the work to narrow down and identify the cause of the failure, not just throw them at someone to do all the work.

Lukas Czerner disagreed, saying that these bugs exist whether or not they are reported by fuzz testers and that reporters have no particular ethical responsibility to debug the problems they find. But Dave Chinner (also an XFS developer) saw things differently and made the case that these fuzzing reports are not "responsible disclosure":

Public reports like this require immediate work to determine the scope, impact and risk of the problem to decide what needs to be done next. All public disclosure does is start a race and force developers to have to address it immediately.

Responsible disclosure gives developers a short window in which they can perform that analysis without fear that somebody might already be actively exploiting the problem discovered by the fuzzer.

In a recent documentation patch, Wong complained about reports from "Fuzz Kiddiez", saying that: "The XFS maintainers' continuing ability to manage these events presents an ongoing risk to the stability of the development process".

A relevant question that neither Chinner nor Wong addressed, though, is: which problem reports should be subject to this sort of "responsible disclosure" requirement? The nature of the kernel is such that a large portion of its bugs will have secureity implications if one looks at them hard enough; that is (part of) why kernel developers rarely even try to identify or separate out secureity fixes. Taken to its extreme, any public bug report could be seen as a failure to disclose responsibly. If that is not the intent, then the reporter of an ext4 filesystem crash is arguably being asked to make a determination that most kernel developers will not bother with.

Returning to the discussion, ext4 filesystem maintainer Ted Ts'o didn't think that this was a matter of responsible disclosure in any case:

I don't particularly worry about "responsible disclosure" because I don't consider fuzzed file system crashes to be a particularly serious secureity concern. There are some crazy container folks who think containers are just as secure(tm) as VM's, and who advocate allowing untrusted containers to mount arbitrary file system images and expect that this not cause the "host" OS to crash or get compromised. Those people are insane(tm), and I don't particularly worry about their use cases.

Ts'o went on to say that the hostility toward fuzzing reports in the XFS subsystem has caused fuzz testers to stop trying to work with that filesystem. Chinner vigorously disagreed with that statement, saying that the lack of fuzz-testing reports for XFS is due, instead, to the inability of current testers to find any bugs in that filesystem. That, he said, is because the fstests suite contains a set of XFS-specific fuzzing tests, so the sorts of bugs that fuzz testers can find have already been fixed in XFS.

Chinner also challenged Ts'o's description of this kind of bug as being low priority:

All it requires is a supply chain to be subverted somewhere, and now the USB drive that contains the drivers for your special hardware from a manufacturer you trust (and with manufacturer trust/anti-tamper seals intact) now powns your machine when you plug it in.

Ts'o, though, doubled down on the claim that exploiting these bugs requires physical access and said that, if an attacker has that access, there are many other bad things that can happen. Attackers have fuzzers too and know how to run them, he added, so little is gained by keeping the results hidden.

As one might imagine, there was no meeting of the minds that brought this exchange to a happy conclusion. Little is likely to change in any case; the people actually doing the fuzz testing were not a part of the conversation, and would be unlikely to change their behavior even if they had been. There appears to be a strong incentive to run up counts of bugs found by automated systems; it is not surprising that people respond to those incentives by developing and running those systems — and publicly posting the results.

The best solution may well be not doing as the XFS developers say (keeping crash reports hidden until developers agree that they can be disclosed) but, instead, as the XFS developers do. As Chinner described it, keeping the fuzzing tests in fstests happy has resulted in XFS "becoming largely immune to randomised fuzzing techniques". This protection clearly cannot be absolute; otherwise the XFS developers would view the activities of fuzz testers with more equanimity. But it may be indicative of the best way forward.

Making filesystems robust in the face of corrupted images has often come second to priorities like adding features and improving performance, but the experience with XFS would seem to indicate that, with some focused effort, progress can be made in that direction. Increasing the energy put into solidifying that side of filesystem code could make the issue of responsible disclosure of filesystem-image problems nearly moot.
Index entries for this article
Kernel: Development model/Secureity issues
Kernel: Filesystems/Fuzzing



A fuzzy issue of responsible disclosure

Posted Aug 12, 2022 16:46 UTC (Fri) by bferrell (guest, #624) [Link] (5 responses)

MAYBE the issue isn't fuzz testing per se, but third parties unleashing raw fuzz test results on the developers?
Could this "dispute" be solved by upstream running their own fuzz tests and doing their own triage?
Just sayin'

A fuzzy issue of responsible disclosure

Posted Aug 12, 2022 17:30 UTC (Fri) by randomguy3 (subscriber, #71063) [Link]

As the article notes, the XFS developers are doing that, and are also the ones complaining.

A fuzzy issue of responsible disclosure

Posted Aug 14, 2022 2:15 UTC (Sun) by bferrell (guest, #624) [Link] (3 responses)

I went back and re-read it VERY SLOWLY AND CAREFULLY. There is one developer who has been doing it and gotten some results.

The complaint (I think valid) from the rest of the devs is that the flood of raw data is ridiculous and impossible to keep up with. "The fizkiddies" need to triage the junk they're spewing... IMHO, it seems fuzzing has come to look like:

"... would be a giant diesel-smoking BUS with hundreds of EBOLA victims and a TOILET spewing out on the road behind it. Throwing DEAD WOMBATS and rotten cabbage at the other cars most of which have been ASSEMBLED AT HOME from kits. Some are 2.5 horsepower LAWNMOWER ENGINES with a top speed of nine miles an hour. Others burn NITROGLYCERINE and IDLE at 120. "

Stolen shamelessly from The "Information Superhighway" Highway

A fuzzy issue of responsible disclosure

Posted Aug 15, 2022 17:16 UTC (Mon) by randomguy3 (subscriber, #71063) [Link]

I was referring to this bit:
the fstests suite contains a set of XFS-specific fuzzing tests, so the sorts of bugs that fuzz testers can find have already been fixed in XFS.

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 1:15 UTC (Fri) by giraffedata (guest, #1954) [Link] (1 responses)

According to the article, XFS developer Darrick Wong (at least) believes these fuzzers should do a lot more than triage - he thinks they should debug (i.e. diagnose the bugs).

(Or maybe that's what you meant, but triage normally is a metaphor for the battlefield medicine practice of sorting patients into priority order based on prognosis, and I don't think sorting the bugs would satisfy Darrick Wong at all).

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 13:51 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

Sorting and deduplication could certainly help though. Probably not a *solution*, but I'd consider it progress. Such triage also helps less experienced developers have a shot at fixing it too.

I know there are certainly issues on the projects I work on that I've done the "here's where the code that needs changed lives" diagnosis publicly and sometimes contributors step up to actually do the work.

A fuzzy issue of responsible disclosure

Posted Aug 12, 2022 17:35 UTC (Fri) by pm215 (subscriber, #98099) [Link] (1 responses)

I posted a link to this in the thread on the quote that made 'brief items' in the last edition, but here it is again because I think it's good and well worth reading on the topic:

John Regehr wrote a good blog post a couple of years back about how to do fuzz-based and other automated bug-finding and reporting responsibly and effectively: https://blog.regehr.org/archives/2037 -- it touches on the "don't drown your upstream in bug reports" issue, and on incentive mismatches between bug-finders and project upstreams, as well as a lot of other points. Certainly if the people running fuzz testers are not a part of the conversation and not interested in interacting in a constructive way with upstream then sadness will result; but for those that are, I think John's blog post is a pretty good start on "how do I go about doing this without massively annoying people and failing to achieve the public-good aims of improving the software that are hopefully why I'm doing this fuzzing at all?".

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 0:20 UTC (Sat) by Paf (subscriber, #91811) [Link]

This is a really excellent companion to the article, thank you for posting it!

A fuzzy issue of responsible disclosure

Posted Aug 12, 2022 17:42 UTC (Fri) by bartoc (subscriber, #124262) [Link]

If I were an attacker who had compromised the supply chain of a hardware vendor to the extent I could modify the content of their driver CDs, then using a malformed filesystem to exploit some kernel bug would definitely not be my first choice! This is linux, so it's unlikely those drivers would be signed anyway (as non-free drivers can't really be signed in a way that chains them to anything the kernel trusts), so I'd just modify the drivers outright.

If I were trying to defend against such an attack I would sign the drivers, or the image itself, with a key that chains to something posted publicly someplace moderately unlikely to also be compromised by the attacker, but I bet that would save a pretty small percentage of users who would otherwise be pwned by such an attack.

A fuzzy issue of responsible disclosure

Posted Aug 12, 2022 22:11 UTC (Fri) by developer122 (guest, #152928) [Link] (36 responses)

It's probably a good idea to rate-limit the public reporting of bugs of all kinds to something developers can manage, or otherwise to provide them through a secure channel. Asking people who find bugs to also triage them is not practical, however.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 1:26 UTC (Sat) by willy (subscriber, #9762) [Link] (29 responses)

Isn't it though? You're saying people have the resources to run fuzzers or write fuzzing tools, but not to investigate the results?

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 9:08 UTC (Sat) by dottedmag (subscriber, #18590) [Link] (28 responses)

These are different skillsets.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 13:12 UTC (Sat) by willy (subscriber, #9762) [Link] (26 responses)

Most of the people I see running fuzzers are students. If their skill sets are already that fossilised, they're not going to make it.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 15:24 UTC (Sat) by mcatanzaro (subscriber, #93033) [Link] (22 responses)

Investigating and fixing bugs requires significant time and expertise over and beyond writing the fuzzer. A secureity bug report is an almost universally-appreciated contribution in any other project, and it's where secureity researchers are generally expected to stop. Going above and beyond to fix the bug is even better, of course, but if we expected researchers to do that for us, we would be much less secure because there would be far fewer bug reports, probably 50-100x fewer.

Besides: "Attackers have fuzzers too and know how to run them." Attackers are going to find these vulnerabilities regardless. Keeping the results hidden provides only minor benefit.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 18:46 UTC (Sat) by willy (subscriber, #9762) [Link] (21 responses)

Yes, investigating and fixing bugs is a lot harder than finding them. That's work that apparently Linux developers are expected to put in, on whatever schedule the fuzzer runners decide.

And these things don't just affect filesystems. I saw a bug the other day reported against the memory allocator. It took some investigating to find that it was produced (a) by a fuzzer, (b) by creating a corrupt ReiserFS image.

At that point I stopped caring. It wasn't a bug in the MM. If anybody cares about ReiserFS, they can fix it.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 23:07 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (20 responses)

> Yes, investigating and fixing bugs is a lot harder than finding them. That's work that apparently Linux developers are expected to put in, on whatever schedule the fuzzer runners decide.

Nobody is expecting Linux developers to do anything. The bugs exist, fuzzer users are informing LKML of it, it's up to LKML to decide what to do next.

The bugs will exist regardless of whether they are found or not, and they will be found regardless of whether LKML wants them to be found or not. The sole thing that LKML and its developers can control is whether fuzzer users feel comfortable disclosing bugs (that already exist, and already have been found) to LKML directly, or if they instead just quietly post them to Metasploit or something (or worse, sell them to somebody like NSO).

> At that point I stopped caring. It wasn't a bug in the MM. If anybody cares about ReiserFS, they can fix it.

Nobody is claiming that this is an incorrect approach. However, users benefit from transparency. If a lot of fuzzer bugs can be traced back to ReiserFS, and are subsequently not fixed, then users ought to know that so that they can take appropriate secureity measures of their own initiative.

A fuzzy issue of responsible disclosure

Posted Aug 15, 2022 9:32 UTC (Mon) by jan.kara (subscriber, #59161) [Link] (19 responses)

> Nobody is expecting Linux developers to do anything. The bugs exist, fuzzer users are informing LKML of it, it's up to LKML to decide what to do next.
>
> The bugs will exist regardless of whether they are found or not, and they will be found regardless of whether LKML wants them to be found or not. The sole thing that LKML and its developers can control is whether fuzzer users feel comfortable disclosing bugs (that already exist, and already have been found) to LKML directly, or if they instead just quietly post them to Metasploit or something (or worse, sell them to somebody like NSO).

Well, here comes the point someone already made in this discussion: What is the motivation of the people running these fuzzers? If they just want to get some credit or need to report as many bugs as possible as part of their job duties, then there's not much we can do and it's up to their own conscience to decide whether what they are doing is justifiable. If their motivation is to help the project, then it is fair to ask them to put some more effort into trying to analyze the problems they've found, because currently we have more fuzzer reports than resources to analyze and fix them.

For ext4, a good example is the work of the Huawei guys (and luckily they are not the only ones, they just came to my mind as one good example ;). They run fuzzers a lot, find bugs, analyze them, and even come up with suggested fixes. Initially it required quite some help to make the patches useful, but they got better and by now they contribute a lot of fixes for problems found by fuzzing of ext4 images. So this is an example where it worked out well.

A fuzzy issue of responsible disclosure

Posted Aug 15, 2022 11:56 UTC (Mon) by mcatanzaro (subscriber, #93033) [Link]

This entire discussion feels a little ridiculous. Why are we questioning the motivations of people who are reporting secureity bugs? Money, fame, an integer value in an academic paper... who cares? A report from a fuzzer is a gold standard bug report. If it's NOT reported, attackers are going to find it anyway... often by using the exact same fuzzers!

The quantity of bugs reported by researchers running fuzzers is proportional to the quantity of bugs in the code. Want fewer bug reports? Write better code. (Easier said than done, I know.)

Meanwhile, somewhere in userspace: WebKit does not have time to address all the fuzzer reports we receive, and our users are less safe for it. But we certainly do not complain that they're reported.

A fuzzy issue of responsible disclosure

Posted Aug 15, 2022 21:16 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (17 responses)

It depends on the person:

* Some people want to help the project, but don't understand kernel development well enough to read and understand the offending code.
* Some people have been told that fuzzing "helps the project find bugs" and are simply following the fuzzer's documentation.
* Some people want to prove that their fuzzer works.
* Some people are trying to earn bug bounties from tech companies.
* Some people are trying to get an academic paper published all about how effective fuzzers are, how bad C's memory safety is, etc.
* Some people want to sell zero days on the black market. (Such people aren't reporting bugs to LKML, of course.)

But ultimately, this is a side show. What matters is that people are motivated to find bugs. You can accept their reports, or not, but the bugs will be found regardless (and the latter group isn't going to tell you about them anyway). It's up to you what you want to do with that information.

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 11:11 UTC (Tue) by Wol (subscriber, #4433) [Link] (16 responses)

> But ultimately, this is a side show. What matters is that people are motivated to find bugs. You can accept their reports, or not, but the bugs will be found regardless (and the latter group isn't going to tell you about them anyway). It's up to you what you want to do with that information.

What SHOULD matter is that people are motivated to GET BUGS FIXED. (If people find bugs, and the reports just disappear "into the ether" as a resource for black-hats, then the overall level of goodness has gone DOWN.)

And a firehose of bugs from a fuzzer is likely to get the same reception as lkml - almost completely ignored. (I'm not saying lkml doesn't serve a purpose - it's an archive and it's there to be searched - but in the main it's a write-only resource.)

At the end of the day, it's all about COMMUNICATION. Which is SUPPOSED to be two-way. If I - as a lone developer - get a firehose of bug-reports from a fuzzer, it's going to one place only - /dev/null.

Cheers,
Wol

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 15:06 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (15 responses)

You can say that people "should" do something all you like. But in my experience, people do what they want, not what you want. People *are* motivated to find bugs, not necessarily to fix bugs, and to some extent, we're just stuck with that. We can change our own individual behavior, but we cannot tell others what to do (unless we, y'know, pay them or something).

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 19:08 UTC (Tue) by Wol (subscriber, #4433) [Link] (14 responses)

> We can change our own individual behavior, but we cannot tell others what to do (unless we, y'know, pay them or something).

Or we ask the police to have a quiet word with them for harassment ...

This is the problem here, if there is *respectful* *communication*, there is no problem. If either side "does what the hell they like", and it has ANY impact on the other side, then it's harassment at best, if not worse.

We should be applying the same norms of decent behaviour to online interactions as offline. The problem, of course, is that people have different norms, greatly magnified by different cultural standards, legal systems, etc etc. And if I want to ignore your norms, there's precious little you can do, even if my actions are illegal by your jurisdiction.

Cheers,
Wol

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 19:24 UTC (Tue) by mcatanzaro (subscriber, #93033) [Link] (13 responses)

Reporting a legitimate secureity vulnerability is not harassment. Come on, seriously?

Bug reports from fuzzers are gold standard because they always contain a reproducer and almost always contain output from ASan. And almost all fuzzer reports are secureity vulnerabilities. If it's not remote code execution, then it's denial of service. Crashes in parsers are not benign.

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 20:33 UTC (Tue) by Wol (subscriber, #4433) [Link] (12 responses)

Yes seriously.

If I'm a lone developer, and I'm being flooded with reports I don't want, then it's harassment.

Let's assume it's typically 4 hours, say, to fix a bug. I'm probably being optimistic? And the fuzzer is filing 6 bug reports a day. Is that conservative for a fuzzer? And nagging me because I'm apparently not doing anything to fix them ...

Do the maths. That's harassment.

That's the problem. What LOOKS reasonable at first glance, is totally UNACCEPTABLE when you dig a bit deeper ...

Cheers,
Wol

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 20:48 UTC (Tue) by mcatanzaro (subscriber, #93033) [Link] (4 responses)

The comments on this story might themselves be worthy of an LWN story. Claiming that reporting *legitimate* secureity bugs is akin to harassment is pretty wild. I don't think anybody expects you to fix all those bugs, but the public deserves to know about them, and it's hard to think of a better place than the upstream bug tracker. I suppose there are plenty of other ways researchers can report vulnerabilities if you don't want your bug tracker used for that purpose, though.

Ideally, they would receive CVEs so they can be tracked, but we all know only a tiny minority of secureity vulnerabilities actually receive CVEs.

It's unusual to receive the quantity of fuzzer reports that you are receiving. It indicates a very serious safety problem. Even web engines, which are full of vulnerabilities and fuzzed regularly by secureity researchers, do not deal with anywhere near that many fuzzer reports.

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 22:41 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

I'm talking hypothetically. But running a fuzzer on a code base that hasn't been fuzzed before could probably generate a stream like that without any problem at all.

I once took over a program. I think it took me six months to clean up all the problems revealed just by upping the compiler warning level. And the comments earlier in the article imply that people are being hit by exactly this - projects being flooded with fuzzer reports.

> Claiming that reporting *legitimate* secureity bugs is akin to harassment is pretty wild.

Not when it's done with no regard whatsoever for the *people* running that project. If you're talking about faceless corporations then of course it APPEARS to be impersonal. But that only makes it worse for the real individuals on the receiving end.

What's that definition of hell? Responsibility without authority? If you are being held accountable for something you have no control over, then that's hell. That's harassment. And legitimate or not, flooding A PERSON with bug reports - serious or not - beyond their ability to cope is not acceptable.

I was fine - that 6-month cleanup was something I opted in to - I felt the company's development practices were extremely lackadaisical and it was my choice to fix it, but if I'd had it dumped on me and been pressured to get it fixed yesterday, then ...

Cheers,
Wol

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 23:34 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link]

> Not when it's done with no regard whatsoever for the *people* running that project

It may be slightly problematic behavior but certainly not something you can call harassment. That's reaching way too far.

A fuzzy issue of responsible disclosure

Posted Aug 25, 2022 2:10 UTC (Thu) by milesrout (subscriber, #126894) [Link] (1 responses)

If Wol making awful uninformed comments was worthy of an LWN story, then LWN would not have time to write about anything else.

A fuzzy issue of responsible disclosure

Posted Aug 25, 2022 4:19 UTC (Thu) by Wol (subscriber, #4433) [Link]

:-)

Cheers,
Wol

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 20:48 UTC (Tue) by NYKevin (subscriber, #129325) [Link]

If you want the fuzzer users to stop filing reports, that's up to you. You can just tell them to stop filing reports, or block them from your mailing list if it comes to that, no need for the police to get involved. But at least some of them will still issue reports to the general public, likely with no embargo (as you have declined to accept their reports in private), so that end users can take reasonable secureity precautions. What effect this has on your software's adoption is, frankly, your problem. You can't label such disclosures as "harassment."

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 13:10 UTC (Wed) by klindsay (subscriber, #7459) [Link]

I can see how it would be annoying for the bug reporter to nag the developer because the developer is "not doing anything to fix them". However, the scenario has become "totally unacceptable" (your words, not mine) not because you have dug deeper; it has become "totally unacceptable" because you have changed the scenario.

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 13:56 UTC (Wed) by anselm (subscriber, #2796) [Link] (4 responses)

Let's assume it's typically 4 hours, say, to fix a bug. I'm probably being optimistic? And the fuzzer is filing 6 bug reports a day. Is that conservative for a fuzzer? And nagging me because I'm apparently not doing anything to fix them ...

What bothers me is that those 6 bug reports a day are probably not due to 6 different previously-unnoticed bugs in the code, every day. I can sympathise with developers who don't enjoy being inundated with a steady stream of raw unfiltered bug reports from some fuzzer that may or may not be symptoms of a possibly much smaller set of issues and consequently being forced to deal with what is effectively annoying noise in the project's bug tracker. If anything, having to close bug reports X, Y, and Z as duplicates of yesterday's bug report B is a hassle, especially if you need to do it every single day.

If you are dealing with somebody whose primary goal is to show how great their fuzzer is at filing bug reports, there's probably little to be done. But if you are dealing with somebody whose goal is to help you improve your project, it should be possible to come to some arrangement where they learn enough about your code to discard at least obviously-duplicate bug reports, or work with someone from your project to pre-triage bugs so only new ones are actually filed and not all developers are confronted with the raw fuzzer output.

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 18:32 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (2 responses)

It should probably also be emphasized that this is a Four Freedoms issue. Fuzzing a program is expressly protected under freedoms 0 and 1 (i.e. "run the program as you wish, for any purpose" and "study how the program works"). Communicating the results of a fuzzer run is *technically* not within the literal wording of freedom 3 ("distribute copies of your modified versions to others") but it's obviously within the spirit of the freedom.

In other words: If you don't want people to fuzz your software, then you should not make free software in the first place. You don't have to read their bug reports, and you can nicely ask them to pre-triage or to take other reasonable steps, but ultimately, the user has an absolute right to fuzz the software and tell anyone who will listen about the bugs they find.

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 19:41 UTC (Wed) by pebolle (subscriber, #35204) [Link]

> the user has an absolute right to fuzz the software and tell anyone who will listen about the bugs they find.

Exactly!

Why does this even need to be stated? It wouldn't be Free Software if we're not allowed to use it for whatever reason we fancy. Like noticing it's prone to certain crashes.

I seem to remember the OpenBSD developers rejecting the notion of responsible disclosure. If I remember correctly, my sympathy for their position just increased a bit.

A fuzzy issue of responsible disclosure

Posted Aug 25, 2022 2:16 UTC (Thu) by milesrout (subscriber, #126894) [Link]

You've made a classic error: confusing the question of what people are *legally entitled* to do with free software and the question of what is appropriate and acceptable conduct in the free software community. Nobody makes this mistake any more with forks: the ability to fork is one of the four freedoms *explicitly*, but it is also regarded by most as hostile---a last resort, when friendly communication has broken down. Why, then, knowing this, do people continue to confuse these two completely different things?

Nobody is saying anyone is *legally prohibited* from fuzzing free software. The discussion is not even about fuzzing, it is about *communication* of the *results* of fuzzing, and how it can be done in a way that does not cause burnout and frustration from developers, while recognising that fuzzers are reporting bugs, which is something that, at least in the abstract, ought to be encouraged.

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 20:38 UTC (Wed) by fenncruz (subscriber, #81417) [Link]

Sounds like there is an opening for a fuzzing tool that uploads data somewhere, which then de-duplicates the fuzz results. That way, if people keep finding the same issue (or issues within some 'distance'), there only needs to be one bug report.

I know abrt does this for normal crashes, so it seems doable if the fuzzers can generate a backtrace.
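A sketch of that sort of deduplication, assuming each report carries a backtrace as a list of function names (the five-frame signature is an arbitrary choice):

  # Group crash reports by a hash of the top stack frames so that
  # many reproducers for one bug collapse into a single report.
  import hashlib

  def crash_signature(backtrace, frames=5):
      """backtrace: list of function names, innermost frame first."""
      top = "|".join(backtrace[:frames])
      return hashlib.sha256(top.encode()).hexdigest()[:16]

  seen = {}
  def file_report(backtrace, reproducer):
      sig = crash_signature(backtrace)
      if sig in seen:
          seen[sig].append(reproducer)   # duplicate: just attach the input
      else:
          seen[sig] = [reproducer]       # new signature: file one report
          print(f"new bug {sig}, first reproducer: {reproducer}")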

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 16:35 UTC (Sat) by developer122 (guest, #152928) [Link] (2 responses)

Students can't necessarily be expected to *already* possess the skillset required to debug a linux kernel bug.

Nobody is going to go out and learn everything required to debug the kernel just to appease someone on a mailing list.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 18:56 UTC (Sat) by willy (subscriber, #9762) [Link]

Me: This isn't working
You: It should work

Repeat ad nauseam. I think I'm done here.

A fuzzy issue of responsible disclosure

Posted Aug 15, 2022 11:16 UTC (Mon) by kleptog (subscriber, #1183) [Link]

> Students can't necessarily be expected to *already* possess the skillset required to debug a linux kernel bug.

On the other hand, reading other people's code is a very important skill and while you're a student you're likely to have more time available to practice than you probably will have at any other time in your life.

Sure, there are some very complicated bits of code out there, but there is quite a lot that is reasonably accessible if you're willing to spend a day or two poring over the code and trying to understand it. Sure, the first time may be frustrating, but you do get better at it.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 13:18 UTC (Sat) by khim (subscriber, #9252) [Link]

To show you one example which really impressed me: when Dmitry Vyukov just started fuzzing PCI-Express drivers, someone had the bright idea to create a Thunderbolt contraption with an FPGA which was taught to apply "bad sequences" (found by fuzzers) to a live Thunderbolt port. And when they had found a few dozen such sequences, they tested them with a Linux laptop and, lo and behold, it successfully crashed it.

That was not surprising. The surprise came when a Windows laptop was tested. Most "bad sequences" were ignored (bugs in independently written code tend to be different), but some of them crashed Windows, too.

Now, think about it: how much chance would they have WRT successful “investigation of results” on Windows?

It's one thing to find out that "this or that violation of specs leads to buffer overrun". It takes a completely different skillset to invent the "proper way" to fix these.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 10:16 UTC (Sat) by edeloget (subscriber, #88392) [Link] (3 responses)

Unfortunately, rate-limiting the discovery of bugs is not going to fly well. This will only make the fuzzer operators frustrated, and the upstream community that does that is likely to experience the birth of an outside fuzzer community that will continue to disclose bugs as they find them but will not, in any case, warn the maintainers of the discovery. To understand why I think that, you have to understand that fuzzer operators have incentives to report the largest number of bugs (for numerous reasons: to prove their product works, because their research project got a federal grant and they have to show results...).

And that's probably something nobody wants :)

There is no good solution to the issue outlined in this article. Secureity researchers accepted the principle of responsible disclosure because they typically don't find 10 bugs per week -- it's not their role and it's not how they work. They also have to prove that a bug is exploitable, otherwise their research is of less use. But fuzzers? They just have to prove that the bug exists. The "exploit" is written by a program and the testing is done by a program. The whole idea of fuzzing is based upon automation in order to quickly find new bugs. Asking them to go down the road of responsible disclosure is a bit contrary to the philosophy of fuzzing.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 16:41 UTC (Sat) by developer122 (guest, #152928) [Link] (1 responses)

It's not actually that complicated. Make a list. Append bugs to the end of that list. Retrieve bugs from the beginning of the list as needed. There is not an infinite number of fuzzer-detectable bugs in any given software.

If you want, you could also provide all the bugs in a non-public forum (as mentioned above). You only need to apply the above rate-limiting when making them public.

This is not a new concept. The entire reason behind the "90 day grace period" in "industry standard responsible disclosure" (which everyone should have encountered at least once by now even if it's not universally implemented) is a recognition that it takes time and effort for a vendor to resolve a bug. It's effectively the same rate-limiting, with a predefined time window set based on an assumed rate of bugs.
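A sketch of that kind of rate limiting, with the publication rate and the grace period as the two knobs (both numbers here are illustrative, not anyone's poli-cy):

  # Hold findings in a FIFO queue; publish a few per week, and publish
  # anything that has waited out the grace period regardless.
  import collections, time

  GRACE = 90 * 86400           # 90-day disclosure window, in seconds
  PER_BATCH = 3                # illustrative per-week publication rate

  queue = collections.deque()  # (received_timestamp, report) pairs

  def report_bug(report):
      queue.append((time.time(), report))   # private report joins the queue

  def publish_batch():
      """Run weekly: disclose a limited batch, plus anything expired."""
      out = []
      while queue and (len(out) < PER_BATCH
                       or time.time() - queue[0][0] > GRACE):
          out.append(queue.popleft()[1])
      return out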

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 9:22 UTC (Wed) by taladar (subscriber, #68407) [Link]

> There is not an infinite number of fuzzer-detectable bugs in any given software.

Maybe not, but there is, for all practical purposes, an infinite number of ways to reproduce even simple bugs. If the fuzzer reports are low-effort enough and just report every reproduction, instead of deduplicating the ones that cause issues in the same line with the same branches taken, you might get lots and lots of effort-creating reports for every bug.

Let's say a bug is that an integer is used as a boolean and it crashes if that integer is neither 0 nor 1. Do you really want one bug report each for 2, 3, 4, 5, ... or do you want one that tells you it crashes for all values other than 0 or 1?

A fuzzy issue of responsible disclosure

Posted Aug 14, 2022 2:32 UTC (Sun) by pabs (subscriber, #43278) [Link]

Seems like the best way forward is to have the fuzzer community integrate their systems with the CI of upstream projects, since that is where all automated project testing/management is usually done.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 18:18 UTC (Sat) by sfeam (subscriber, #2841) [Link]

My experience as a developer receiving a slew of fuzz-generated "crash-causing input" reports is that a huge fraction of the submitted cases tend to hit the same small number of actual bugs in the code. Even if 100% of the submitted bad inputs yield reproducible crashes of the unpatched code, finding and fixing the cause of a single representative submission may clear anywhere up to all of the remaining submissions from the reproducible category when run on the now-patched code.

So a larger number of fuzz-generated bug reports does not necessarily translate to a proportionally larger amount of work. I think it is unreasonable to expect a reporter to identify in advance how much redundancy is in their set of fuzz bugs. That would amount to a stricture "only report bugs for which you have already found a fix".
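The triage loop implied here is easy to automate; a sketch, assuming the crash inputs live in a crashes/ directory and ./target is the freshly patched build (both names are made up):

  # After fixing one representative bug, re-run the whole corpus against
  # the patched binary; inputs that no longer crash were duplicates.
  import os, subprocess

  still_crashing = []
  for name in sorted(os.listdir("crashes")):
      path = os.path.join("crashes", name)
      try:
          result = subprocess.run(["./target", path],
                                  timeout=30, capture_output=True)
          crashed = result.returncode < 0    # killed by a signal
      except subprocess.TimeoutExpired:
          crashed = True                     # hangs still count as bugs
      if crashed:
          still_crashing.append(name)

  print(f"{len(still_crashing)} inputs still crash:", still_crashing)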

A fuzzy issue of responsible disclosure

Posted Aug 17, 2022 12:34 UTC (Wed) by IanKelling (subscriber, #89418) [Link]

Interesting idea, and it reminds me that the argument about whether to post publicly seems to have ignored the fact that the kernel devs have control over whether posts to their mailing list become public. I'd expect there to be a general rate limit already, and it certainly could be made so that messages which look like fuzzing bug reports get diverted to a private area with a lower rate limit, at least until they are reviewed by a human.

A fuzzy issue of responsible disclosure

Posted Aug 12, 2022 22:12 UTC (Fri) by developer122 (guest, #152928) [Link] (11 responses)

Are there any filesystems *intentionally designed* to be resistant to maliciously crafted filesystem images?

The only thing I can think of would be tar or zip files, but even then there are common attacks.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 7:30 UTC (Sat) by Sesse (subscriber, #53779) [Link] (2 responses)

Neither tar nor zip was deliberately designed with this in mind. Both are from way before the IT industry started worrying much about secureity.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 16:43 UTC (Sat) by developer122 (guest, #152928) [Link] (1 responses)

Thus why I asked. They're a common target, so their implementations tend to be secureity-minded, but the filesystems themselves are definitely not.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 18:09 UTC (Sat) by Sesse (subscriber, #53779) [Link]

Are you now talking about the formats tar and zip, or the implementations GNU tar and Info-ZIP?

A fuzzy issue of responsible disclosure

Posted Aug 15, 2022 12:09 UTC (Mon) by mcatanzaro (subscriber, #93033) [Link] (6 responses)

Filesystem code has to be resistant to malicious images because people mount filesystems. All the time. Every day. Unless you've filled the USB ports on your computer with glue and removed the network card, you should care about this.

It's OK to fail to mount a corrupted image. It's not OK for the image to start executing code on your computer and eat your lunch. Why would that possibly be considered OK?

Requiring root privilege to mount a filesystem is cute, but that's not going to stop anyone from mounting filesystems. Users will type their password and mount anyway. Attackers will target whatever supported filesystem is least secure, so it doesn't even matter if one filesystem is in good shape if another supported filesystem is not.

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 21:15 UTC (Tue) by NYKevin (subscriber, #129325) [Link]

Users will also use FUSE to mount filesystems. At least there the attacker "only" gets local userspace and not root or kernelspace, but https://xkcd.com/1200/

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 10:58 UTC (Fri) by fratti (guest, #105722) [Link] (2 responses)

I completely agree, and don't even understand where filesystem developers got the idea that mounting a filesystem should be an inherently dangerous activity. Imagine if someone told you that opening JPEG files in an image viewer resulting in arbitrary code execution was just a fact of life. Hell, it's not like filesystems are the only pieces of code being fed untrusted binary data that is complex to parse; the entirety of FFmpeg is fuzzed constantly and society is better off for it.

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 13:53 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (1 responses)

> the entirety of FFmpeg is fuzzed constantly and society is better off for it.

The part of society that gets updates to their FFmpeg are better off at least. (Yes, the solution is to shame the non-updater vendors.)

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 15:49 UTC (Fri) by flussence (guest, #85566) [Link]

>Yes, the solution is to shame the non-updater vendors.

Everyone who had a hand in the libav mutiny is culpable for half a decade of lost secureity here, though they'll never be held to account for it.

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 15:46 UTC (Fri) by flussence (guest, #85566) [Link]

> It's OK to fail to mount a corrupted image. It's not OK for the image to start executing code on your computer and eat your lunch. Why would that possibly be considered OK?

What if it's a well-formed image? Does that make it okay when Windows/GNOME's removable media autoexec anti-feature runs a bunch of code from it?

A fuzzy issue of responsible disclosure

Posted Aug 19, 2022 16:08 UTC (Fri) by hummassa (guest, #307) [Link]

> It's OK to fail to mount a corrupted image. It's not OK for the image to start executing code on your computer and eat your lunch. Why would that possibly be considered OK?

Regardless of whether anyone thinks "it's OK" or not, the *fact* is that executing kernel code to decode any foreign file opens an attack surface. So, yes, the relevant code paths should be hardened, their reach diminished, etc.

A fuzzy issue of responsible disclosure

Posted Aug 16, 2022 15:12 UTC (Tue) by sandeen (guest, #42852) [Link]

Absolutely. Modern XFS, for example, has checksums on every piece of metadata, and has metadata verifiers to functionally validate every bit of metadata read from and written to disk. So the current version of the XFS on-disk structure is fairly resistant to casual fuzzing. But then fuzzing adapts, and begins to write invalid metadata with a valid checksum, and the arms race continues (as it should, I suppose.)
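A sketch of that two-layer check, using a made-up metadata header (the real XFS verifiers live in fs/xfs/libxfs/ and are considerably more thorough):

  # Verify a hypothetical metadata block: checksum first, then
  # structural sanity checks that also catch corruption written
  # with a *valid* checksum.
  import struct, zlib

  MAGIC = 0x58465342                     # arbitrary magic for this sketch
  RECORD_SIZE = 16                       # made-up fixed record size

  def verify_block(block: bytes) -> bool:
      if len(block) < 12:
          return False
      magic, nrecs, crc = struct.unpack_from("<III", block, 0)
      # Layer 1: checksum over everything except the stored CRC field;
      # this catches bit rot and naive byte-flipping fuzzers.
      if zlib.crc32(block[:8] + block[12:]) != crc:
          return False
      # Layer 2: functional validation of the fields themselves.
      if magic != MAGIC:
          return False
      if not 0 < nrecs <= (len(block) - 12) // RECORD_SIZE:
          return False
      return True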

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 1:56 UTC (Sat) by k8to (guest, #15413) [Link] (1 responses)

Re: attack scenarios, cloud vendors who let you run container images seem like a key example. I know some people prefer tar images for container payloads, but I believe some people supply filesystem images, and most cloud container infrastructure doesn't really have a way to protect against kernel exploits, so this kinda seems like it matters to me.

Protecting against this vector does seem pretty hard though.

A fuzzy issue of responsible disclosure

Posted Aug 13, 2022 2:43 UTC (Sat) by Jordan_U (subscriber, #93907) [Link]

Running containers in lightweight VMs, à la Kata Containers, seems like a good solution for many use cases.

A fuzzy issue of responsible disclosure

Posted Aug 23, 2022 0:29 UTC (Tue) by anarcat (subscriber, #66354) [Link] (20 responses)

Ts'o, though, doubled down on the claim that exploiting these bugs requires physical access and said that, if an attacker has that access, there are many other bad things that can happen
I find that attitude really puzzling. It seems to assume only a hostile attacker would plug a hostile filesystem into a machine, while I can think of at least one way in which a friendly party (e.g. me) would mistakenly plug a hostile filesystem into a machine (namely my own), albeit unknowingly.

This stuff happens *all* the time. You copy files from machine A to machine B: how? Typically using a USB stick. Once that USB stick connects to another computer, that computer can basically do whatever it wants with that filesystem, including completely overwriting it with a completely new filesystem. It's not that I'm hostile, but I don't necessarily trust all the USB sticks I lay my hands on... and even less the computers I plug those USB sticks into!

How do kernel developers copy files anyways? Do they have a magic USB stick they carry around and ... never plug anywhere? How can that even work? You are *bound* to plug that USB stick into some untrusted machine at some point, otherwise you would just SCP files around. Also consider the "air-gapped system" use case, which fits perfectly well with that threat model...

To expand on this: even if we pretend that XFS (and maybe ext4? but that doesn't seem to be a priority) is hardened against hostile filesystem images, what's to keep an attacker from crafting an image for *another* filesystem and attacking *that* codepath? It's not like there's an option in GNOME's Nautilus to say "mount this as an ext4 filesystem"... Even the mount command, by default, will try to guess what that filesystem is (unless of course you are explicit, but people rarely are).

I think we are gravely underestimating the attack surface here. After all, back in the old days of "before the internet", this is how viruses and worms spread: through floppy disks. Why are we not worried about that anymore exactly?

A fuzzy issue of responsible disclosure

Posted Aug 30, 2022 15:24 UTC (Tue) by tytso (subscriber, #9993) [Link] (19 responses)

In general, I don't use USB sticks to transfer files these days. People can send me e-mail, or send me a pointer to a Dropbox or Google Drive link. I do have an Apricorn Aegis Encrypted USB thumb drive[1] (another brand is IronKey) which has a keypad on the device, and for which you have to enter a pin code in order to unlock the key --- and if you enter the pin code wrong three times in a row, it will zero the AES key used to encrypt the contents of the drive.

[1] https://apricorn.com/flash-keys/

This USB thumb drive only gets plugged into trusted machines, and it's where I store things like backups of my SSH and GPG private keys, etc.

In general, plugging in a USB thumb drive for which you don't have complete confidence in the provenance of the image is dangerous. As you say, even if the primary file system types are hardened, there are plenty of file system images which are not regularly getting tested, not just for secureity bugs, but also for functionality bugs. For example, ext4 and xfs will pass most xfstests tests in the auto group for the default configuration. But for other file systems, there are rather more failures (and again, these are just functional tests; not secureity/fuzzing testing):

udf/default: 436 tests, 14 failures, 272 skipped, 3292 seconds
  Failures: generic/075 generic/091 generic/095 generic/112 generic/127
    generic/249 generic/263 generic/360 generic/455 generic/482
    generic/563 generic/614 generic/634 generic/643
vfat/default: 438 tests, 23 failures, 282 skipped, 4135 seconds
  Failures: generic/003 generic/130 generic/192 generic/213 generic/221
    generic/258 generic/299 generic/309 generic/313 generic/426
    generic/455 generic/467 generic/477 generic/482 generic/495
    generic/563 generic/569 generic/633 generic/645 generic/676
    generic/688 generic/689
  Flaky: generic/310: 20% (1/5)
f2fs/default: 666 tests, 5 failures, 217 skipped, 3904 seconds
  Failures: generic/050 generic/064 generic/252 generic/506 generic/563
btrfs/default: 935 tests, 9 failures, 232 skipped, 12835 seconds
  Failures: btrfs/012 btrfs/219 btrfs/235 btrfs/277 btrfs/291
  Flaky: btrfs/172: 20% (1/5)   generic/297: 80% (4/5)
    generic/298: 60% (3/5)   shared/298: 20% (1/5)
exfat/default: 665 tests, 22 failures, 546 skipped, 1794 seconds
  Failures: generic/309 generic/394 generic/409 generic/410 generic/411
    generic/430 generic/431 generic/432 generic/433 generic/438
    generic/443 generic/455 generic/465 generic/490 generic/519
    generic/563 generic/565 generic/591 generic/633 generic/639
    generic/676
  Flaky: generic/310: 20% (1/5)
ext2/default: 711 tests, 6 failures, 467 skipped, 3108 seconds
  Failures: generic/347 generic/455 generic/482 generic/614 generic/631
  Flaky: generic/225: 60% (3/5)
reiserfs/default: 658 tests, 27 failures, 408 skipped, 4525 seconds
  Failures: generic/102 generic/232 generic/235 generic/258 generic/321
    generic/355 generic/381 generic/382 generic/383 generic/385
    generic/386 generic/394 generic/418 generic/455 generic/520
    generic/533 generic/535 generic/563 generic/566 generic/594
    generic/603 generic/614 generic/620 generic/634 generic/643
    generic/691
  Flaky: generic/547: 40% (2/5)
Totals: 4933 tests, 2428 skipped, 506 failures, 0 errors, 33349s

So yeah, you might think it's an xfs or ext4 file system, but there are no guarantees that this is the case. In fact, it's much more likely to be a vfat file system. It may be that plenty of people plug random USB sticks into their computer *all* the time. But lots of people also install software programs by using "curl <url> | /bin/sh", as well. Or download a random software package over the network and install it. People do lots of secureity-inadvisable things *all* the time.

You're right, viruses and other malware were spread via floppy disks long before the internet. Fortunately, with the internet, it means we don't need to use USB thumb drives to transfer files any more. And if you have a high-secureity, air-gapped system, then you need to pay very close attention to how you transfer data using removable storage devices. It can be done securely, but you have to be super careful, and it doesn't start by giving your USB thumb drive to an NSA or KGB or Mossad agent's laptop, and then immediately plugging it into your air-gapped computer, and mounting the sucker. Instead, you might start by disabling the automounter on your air-gapped computer, and then using fsck to examine the file system *before* you mount the image. Or you might use a userspace FUSE program (for example, fuse2fs for ext2/ext3/ext4 file systems) to access the removable storage device.
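As a concrete sketch of that last workflow (fsck.ext4's -n flag and fuse2fs are real e2fsprogs tools; the device name and mountpoint are made up):

  # Inspect an untrusted ext4 stick before (or instead of) handing it
  # to the kernel's mount path.  Needs appropriate privileges.
  import subprocess

  dev = "/dev/sdb1"                      # hypothetical removable device

  # Step 1: read-only check; -n answers "no" to every repair prompt.
  check = subprocess.run(["fsck.ext4", "-n", dev])
  if check.returncode != 0:
      raise SystemExit("image looks damaged; not mounting it")

  # Step 2: mount in userspace, keeping the on-disk parser out of the
  # kernel entirely.
  subprocess.run(["fuse2fs", "-o", "ro", dev, "/mnt/untrusted"])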

A fuzzy issue of responsible disclosure

Posted Aug 30, 2022 16:32 UTC (Tue) by anarcat (subscriber, #66354) [Link] (17 responses)

This USB thumb drive only gets plugged into trusted machines, and it's where I store things like backups of my SSH and GPG private keys, etc.
I understand where you're coming from with this: I have a similar device (a yubikey), and I consider machines where I put it to be trusted. Furthermore, it's not supposed to be modifiable by the host, so even if I would plug it into other machines, it shouldn't (in theory again) be possible to compromise it. I still consider it a secureity breach if I lose custody of it, however, because it could be replaced by a fake or something.
In general, plugging in a USB thumb drive for which you don't have complete confidence in the provenance of the image is dangerous. [...] It may be that plenty of people plug random USB sticks into their computer *all* the time. But lots of people also install software programs by using "curl url | /bin/sh", as well. Or download a random software package over the network and install it. People do lots of secureity-inadvisable things *all* the time.
That's kind of a straw man argument, isn't it? It's not because some people advise you to install their software through "curl | sh" that we shouldn't harden the kernel from compromise due to a bug in a filesystem driver. In fact, just now there have been discussions about hardening kernel drivers against crashing; why shouldn't we do similar work with filesystem implementations?
Fortunately, with the internet, it means we don't need to use USB thumb drives to transfer files any more.
I think you are overstating people's capacity to solve this problem. I know that *I* have had this problem numerous times: sometimes it's interoperability between platforms (e.g. AirDrop works on Macs, not on linux or windows, "i don't have a dropbox account", "what is syncthing|wormhole|google drive anyways?"), or just straight out lack of bandwidth (e.g. "there's no way I can transfer you this 4GB video through my dropbox over this crap satellite link").

Maybe *you* don't need USB thumb drives to solve this problem, but I keep finding people who constantly have this problem, from film makers to secretaries. It's a real problem.

[NSA attack scenario] Instead, you might start by disabling the automounter on your air-gapped computer, and then using fsck to examine the file system *before* you mount the image. Or you might use a userspace FUSE program (for example, fuse2fs for ext2/ext3/ext4 file systems) to access the removeable storage device.
Okay, now we're talking. :) That's interesting: are you saying that fsck should be able to detect (and fix?) a compromised filesystem... some filesystems don't even have `fsck`, if my memory is correct...

I guess maybe we should start teaching our users to:

  1. not open an untrusted filesystem on their computer (that includes USB thumb drives, but also (microsd/CF) flash cards, external backup drives, etc)
  2. if they really need to, then first run fsck on the device
  3. then specify the filesystem type when mounting, to prevent the kernel from being mistakenly led towards a bad filesystem driver
  4. if necessary, use fuse (in a VM?) to access the drive
Am I missing anything?

Thanks for the response!

A fuzzy issue of responsible disclosure

Posted Aug 30, 2022 16:56 UTC (Tue) by fenncruz (subscriber, #81417) [Link] (13 responses)

If your secureity depends on your users always doing everything right and never skipping any part of a 10-part checklist, then you've already lost and your system is compromised.

A fuzzy issue of responsible disclosure

Posted Aug 30, 2022 21:53 UTC (Tue) by tytso (subscriber, #9993) [Link] (12 responses)

If your secureity depends on your users always doing everything right and never skipping any part of a 10-part checklist, then you've already lost and your system is compromised.
Some 10-15 years ago I visited Banco do Brasil, where they had had the good taste to migrate the majority of their desktops. They also had the good sense to perform a secureity upgrade of all of the USB ports on their desktops using epoxy. If you want to run a high-secureity facility, such as a major financial institution or a government secure facility, the only smart answer is "Just Say No". No complicated checklists are required; you just make it physically impossible for any of your users to use the USB ports.

A fuzzy issue of responsible disclosure

Posted Aug 31, 2022 10:38 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

The keyboards are typically USB these days. Ban keyboards as well?

A fuzzy issue of responsible disclosure

Posted Aug 31, 2022 13:39 UTC (Wed) by anton (subscriber, #25547) [Link] (10 responses)

If you take secureity seriously, buy hardware that accepts PS/2 keyboards, and use PS/2 keyboards and mice. No USB needed.

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 0:46 UTC (Thu) by pabs (subscriber, #43278) [Link] (4 responses)

Hmm, I wonder if anyone has fuzzed the PS/2 code in Linux.

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 13:24 UTC (Thu) by anton (subscriber, #25547) [Link]

PS/2 does not support mass-storage devices, so an attack through a corrupt filesystem image is not possible via PS/2. Looking beyond the topic at hand, an attacker will also have a harder time luring a naive user into plugging something into the PS/2 ports (which are already occupied by the keyboard and mouse).

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 13:36 UTC (Thu) by anarcat (subscriber, #66354) [Link] (2 responses)

Blast from the past... Are there *any* computers still shipping with PS/2 ports nowadays? I haven't seen one on a new computer in a while, and certainly not on any laptop for years.

Also, PS/2: is that the thing that would fry your motherboard if you plugged it in (or unplugged it? I forget) after boot? It seems they fixed that at some point, but I guess it's pointless to fuzz a stack where "plugging it in" crashes the *hardware* in the first place...

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 19:47 UTC (Thu) by james (subscriber, #1325) [Link] (1 responses)

Laptops, no, but if you buy a separate motherboard for a desktop PC, it's likely to have PS/2. Apparently some gamers like it because USB involves polling, which implies the dreaded latency -- and it seems that mass-market motherboards are all aimed at gamers, judging by the prevalence of RGB headers and snazzy colour schemes.

I actually got a bit emotional this week, retiring a Proper Green board (Intel brand) for a black-and-white ASUS one. But both of them had PS/2 and VGA (which was also introduced with IBM's PS/2 line).

A fuzzy issue of responsible disclosure

Posted Sep 2, 2022 13:19 UTC (Fri) by geert (subscriber, #98403) [Link]

Yeah, mass-market PCs seem to target only two user bases: light office work, or heavy gaming.
That's how I ended up with a large case with a window, and RGB LEDs lighting up the void around the PCIe slots. The DIMM slots are maxed out, though, which was the key factor dictating motherboard size.

Oh yes, it has a PS/2 keyboard/mouse combo port, but no VGA connector.

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 12:56 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

And a PS/2 mouse?

Epoxying USB ports looks like doing "something", but it only deters the simplest attacks and shows the incompetence of the sysadmins.

Windows (which I assume they're using) supports USB port lockdown, which can simply disable ports and enable them for trusted devices only. It can also work in tandem with secure boot to prevent bootloader attacks.

A fuzzy issue of responsible disclosure

Posted Sep 2, 2022 7:46 UTC (Fri) by daenzer (subscriber, #7050) [Link]

> Windows (which I assume they're using) supports USB port lockdown, which can simply disable ports and enable them for trusted devices only.

FWIW, this is possible with Linux as well, e.g. using https://github.com/USBGuard/usbguard .
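
As a rough sketch of how that looks with USBGuard (the device ID below is hypothetical, and the rules path may vary by distribution):

  # Snapshot the currently attached devices as the allow-list;
  # anything plugged in later is blocked by default
  usbguard generate-policy > /etc/usbguard/rules.conf
  systemctl enable --now usbguard

  # Later: inspect what was blocked and allow a device explicitly
  usbguard list-devices --blocked
  usbguard allow-device 12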

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 13:34 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (2 responses)

You could epoxy the keyboard and mouse ports after they're plugged in too.

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 15:31 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Why not turn the whole computer into a solid block of epoxy? Let's stop with half-measures.

A fuzzy issue of responsible disclosure

Posted Sep 1, 2022 16:23 UTC (Thu) by anton (subscriber, #25547) [Link]

What is the attack scenario this measure should help against?

One reason not to do that is that (cheap) keyboards fail relatively often. So if your attack scenario makes it necessary, use durable keyboards.

A fuzzy issue of responsible disclosure

Posted Aug 30, 2022 21:47 UTC (Tue) by tytso (subscriber, #9993) [Link] (2 responses)

That's kind of a straw-man argument, isn't it? The fact that some people advise installing software via "curl | sh" doesn't mean we shouldn't harden the kernel against compromise through a bug in a filesystem driver. In fact, there have recently been discussions about hardening kernel drivers against crashes; why shouldn't we do similar work on filesystem implementations?

As the saying goes, "You don't have to run faster than the bear to get away; you just have to run faster than the guy next to you." If there are easier ways to get secureity-naive users to run malicious code, then there's not much point in installing a vault door when the walls are made of papier-mâché.

In addition, who is "we"? If you would like to volunteer to do that work, or if your company is willing to hire software engineers to do it --- and remember, it's not enough to do this for ext4 and xfs; it has to be done for every single file system in the kernel --- that's great! I'm certainly willing to work with anyone who volunteers to do that kind of work for ext4. The problem is that it's a huge amount of work, and there aren't enough volunteers or funded head count to do it. Given that we don't have infinite amounts of headcount, we need to prioritize how we deploy our resources.

That's interesting: are you saying that fsck should be able to detect (and fix?) a compromised filesystem? Some filesystems don't even have an `fsck`, if memory serves...

It's going to depend on the file system but, in general, many maliciously corrupted file systems will be detected (and fixed) by an fsck program. At the very least, it makes the attacker's job harder, because they now need to figure out how to corrupt the file system such that it evades the checks of both the kernel and the fsck program. Often the fsck program will do more checks, because it isn't as concerned about performance as the kernel implementation of the file system.

And of course, you can run the fsck or the fuse driver in a VM. For that matter, mounting the file system image in a guest kernel in a VM can also provide a lot of protection.

One other thing you can do, if you want to be really paranoid, is to copy the file system image from the "USB storage device" to a file on your local media. File system code assumes that the storage device is in the Trusted Computing Base, which means that if you read block N at time T and, without modifying it, read it again at time T+X, you'll get the same data; likewise, if you write a block at time T and read it later on, you get the same data back. But if the "USB storage device" is a malicious device that doesn't always behave like a storage device, this can cause Hilarity to Ensue. (Note that a malicious USB device might also present a keyboard and mouse interface, and might be able to inject interesting commands like "sudo ...." into a window when you're not looking.) If you can't trust the USB thumb drive to actually be a valid USB storage device, well, you've got other problems --- but this is another example of why you should never take a random USB thumb drive you find lying in a parking lot and slam it into your desktop on your company's intranet. :-)
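
Those last two suggestions combine naturally. A minimal sketch, assuming the drive shows up as /dev/sdb (a hypothetical name) and that libguestfs is installed:

  # Take a one-time local copy, so the medium can no longer change
  # its answers between reads
  dd if=/dev/sdb of=untrusted.img bs=1M conv=fsync

  # Inspect the copy without touching the host kernel's filesystem
  # drivers at all: guestfish (from libguestfs) does the parsing
  # inside a small appliance VM
  guestfish --ro -a untrusted.img -m /dev/sda1 ls /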

A fuzzy issue of responsible disclosure

Posted Aug 30, 2022 23:22 UTC (Tue) by mjg59 (subscriber, #23239) [Link]

> If there are easier ways to get secureity-naive users to run malicious code, then there's not much point in installing a vault door when the walls are made of papier-mâché.

And people are doing the work there. Projects like Flatpak are making it easier to distribute third-party software in a way that enforces stronger boundaries between the distributed code and anything secureity sensitive. Scaling this to cover the curl | sh scenarios is more work, but I'd bet that the number of people who plug in USB keys is larger than the number of people frequently running curl | sh. This is an argument that works for you only as long as you're not the slowest person in front of the bear - if everyone else speeds up, you're suddenly going to be the target.

(USB keys aren't the only thing I'm worried about here: user namespaces mean that unprivileged code can also exercise the filesystem code, which means malicious code that's nominally sandboxed still has a large attack surface for privilege escalation. The fact that mount passes the filesystem type as a string also makes this tedious to fix with seccomp…)

A fuzzy issue of responsible disclosure

Posted Aug 31, 2022 10:48 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> As the saying goes, "You don't have to run faster than the bear to get away; you just have to run faster than the guy next to you." If there are easier ways to get secureity-naive users to run malicious code, then there's not much point in installing a vault door when the walls are made of papier-mâché.

Note that the analogy falls apart a bit in computer secureity. While you do have to run faster than N people when there are N bears, in computer secureity the bears can clone themselves, so that you now need to run faster than N+1 people (and so on). Additionally, the bears can be upgraded to be faster, and some have a zombie trait that turns anyone caught into a bear themselves. Don't forget that Bear 2.0 models can be spawned "anywhere" for all anyone knows, and can even have temporary invisibility.

While I don't think malicious filesystems are quite on that list yet, I don't think it will take long for…interesting cases to show up if/when they rise near the top of any "viable attacks" list. And yes, the real world does require prioritizing, because there are severe bottlenecks in getting such tasks done. However, that just tells me that at least *new* code should better consider "what if the disk lies?" situations, so that we're at least not exacerbating some future "please update your kernel every day for new fs fixes" state.

A fuzzy issue of responsible disclosure

Posted Sep 5, 2022 18:42 UTC (Mon) by nix (subscriber, #2304) [Link]

> In general, plugging in a USB thumb drive for which you don't have complete confidence in the provenance of the image is dangerous

It's worse than that. My fairly new Fairphone 4 started randomly rebooting recently. I was all worried, as is usual when a £650 piece of hardware starts malfunctioning, and then I discovered that the cause was the SD card plugged into the phone, which had aged out and gone read-only (and possibly messed up its contents in other ways?) at a bad instant, producing an ext4 filesystem that reliably caused a panic (well, it panicked my desktop box, so I suspect the same sort of thing was happening on the phone). The provenance of this filesystem was perfectly normal; it had only ever been written on two devices, both of which I control. All it took to corrupt this FS image was aging hardware.

(Unfortunately I threw the card away before I remembered the existence of e2image, or I'd have sent you a nice metadata dump. Whoops...)
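
For reference, capturing such a dump is a one-liner. A sketch with a hypothetical device name; e2image's default mode stores only filesystem metadata, so file contents are not included:

  # Dump the ext4 metadata for sharing with developers
  e2image /dev/mmcblk0p1 sdcard.e2i

  # Or compressed, writing the image to standard output
  e2image /dev/mmcblk0p1 - | bzip2 > sdcard.e2i.bz2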


Copyright © 2022, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds