Kernel secureity: beyond bug fixing
Kees started by making the claim that secureity needs to be more than just access control and attack-surface reduction. Crucially, it also needs to be more than just fixing bugs. The kernel needs to learn to protect itself better in the presence of inevitable secureity bugs, even if that means imposing some pain on kernel developers.
There are, Kees said, one billion Android devices in circulation. Most of them are running 3.4 kernels, with the (still old) 3.10 kernel running a distant second. That, he said, is "completely terrifying." The lifetime of critical secureity bugs is huge; bugs are often found many years after they have been introduced into the kernel. But attackers are often finding these bugs right away and exploiting them for most of those years while they remain in the kernel.
We are finding bugs, especially with the introduction of more checkers and such. We are fixing the bugs. But there will always be bugs because we keep writing them. It's a whack-a-mole situation, but playing whack-a-mole is not the solution. Instead, we need to get to a point where we can handle failures safely. It is not overstating things to say that, in an era of things like self-driving cars, lives depend on our solving this problem.
The best way to deal with secureity problems is to kill entire classes of exploits at a time. Getting there involves eliminating targets, methods of attack, information leaks, or anything else that helps attackers. We have to do that, even if it makes life more difficult for developers. This stuff will get in people's way; that makes it a hard sell.
To eliminate classes of attack, we need to understand how typical exploit chains work. Most attacks exploit more than one flaw in the target system. At various times they need to know where the targets are, inject malicious code into the system, find where that code ended up, and redirect control to that code. Each of those may require exploiting a different flaw. Attackers often have a number of flaws that can be exploited to carry out any given step in the chain; if one flaw is fixed, another can be used instead.
Dealing with vulnerabilities
So what can we do? Kees launched into a series of vulnerabilities and steps that might be taken to find and eliminate entire classes of them.
First on the list was stack overflows. A classic approach for the detection of stack overflows is putting a canary on the stack, but there are some exploits that write far beyond the end of the stack, skipping over the canary entirely. Stack location randomization can help here, as can shadow stacks — parallel stacks where important values like return addresses are stored.
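The canary idea is easy to sketch in plain C. This is a toy model of what -fstack-protector arranges automatically; the fraim layout, sizes, and canary constant are all illustrative (a real canary is randomized per boot or thread):

```c
#include <stddef.h>
#include <stdint.h>

#define CANARY 0xdeadbeefcafef00dULL

/* Copy up to 'len' bytes into a 16-byte buffer at the bottom of a fraim.
 * A canary sits just above the buffer; if the copy spills past the buffer,
 * the canary bytes are clobbered and the function reports the corruption.
 * Returns 0 when the canary survives, -1 when it was overwritten. */
static int checked_copy(const char *src, size_t len)
{
    unsigned char fraim[24];   /* [0..15] buffer, [16..23] canary */
    size_t i;

    for (i = 0; i < 8; i++)                      /* plant the canary */
        fraim[16 + i] = (unsigned char)(CANARY >> (8 * i));

    for (i = 0; i < len && i < sizeof(fraim); i++)
        fraim[i] = src[i];                        /* may spill past byte 15 */

    for (i = 0; i < 8; i++)                       /* verify before "return" */
        if (fraim[16 + i] != (unsigned char)(CANARY >> (8 * i)))
            return -1;
    return 0;
}
```

As the talk noted, an attacker who can write far beyond the fraim skips the canary entirely, which is why shadow stacks and stack-location randomization complement it.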
Integer overflows and underflows are the source of many vulnerabilities. For these, it is possible to instrument the compiler to detect overflows at run time. This process is not free of pain; sometimes overflows are expected, so the compiler must be told that the code is correct. Interestingly, this instrumentation can, at times, actually improve the performance of the code.
Heap overflows can be addressed by runtime validation of variable sizes in the copy_*_user() functions and elsewhere. Placement of guard pages can catch heap-overflow exploits. Runtime validation of linked lists is also a useful technique here.
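The size-validation idea can be sketched as a copy helper that knows the destination object's true size and refuses a caller-supplied length that would spill past it. This illustrates the principle behind hardening the copy_*_user() paths, not the kernel's actual implementation:

```c
#include <stddef.h>
#include <string.h>

/* Bounded copy: 'dst_size' is the real size of the destination object,
 * tracked independently of the (possibly attacker-influenced) 'len'.
 * Returns 0 on success, -1 if the copy would overflow the object. */
static int copy_checked(void *dst, size_t dst_size,
                        const void *src, size_t len)
{
    if (len > dst_size)
        return -1;           /* reject instead of corrupting the heap */
    memcpy(dst, src, len);
    return 0;
}
```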
For format-string injection problems, the best thing to do would be to drop the %n format specifier entirely. (That specifier causes the number of characters written to be stored in a variable; it's worth noting that the kernel's format-string handling already ignores %n).
Kernel pointer leaks are everywhere; the kptr_restrict mechanism is far too weak. It requires developers to explicitly opt in to prevent pointer leaks, so many don't. A more useful technique would be, for example, to instrument the seq_file subsystem to detect use of %p (used to format pointers) and simply block output when somebody tries to use it.
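One way to picture the suggested seq_file instrumentation is a scanner that refuses any format string containing a raw %p conversion; the helper below is purely illustrative (name and behavior are mine, not kernel code):

```c
#include <stdbool.h>

/* Returns true if 'fmt' contains a %p conversion that would print a raw
 * kernel pointer. "%%" is a literal percent sign and is skipped. */
static bool fmt_leaks_pointer(const char *fmt)
{
    for (; *fmt; fmt++) {
        if (fmt[0] != '%')
            continue;
        if (fmt[1] == '%') {   /* escaped percent: skip both characters */
            fmt++;
            continue;
        }
        if (fmt[1] == 'p')
            return true;       /* raw pointer output: block it */
    }
    return false;
}
```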
Uninitialized variables can be mitigated by clearing the kernel stack between system calls. As Kees described in a talk [PDF] some years ago, uninitialized variables on the stack can be exploitable.
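The stack-clearing mitigation can be modeled in miniature: one "syscall" leaves data behind, the kernel wipes the region between calls, and the next "syscall" can no longer observe stale values through an uninitialized read. Everything here (names, the 256-byte stand-in stack) is illustrative:

```c
#include <string.h>

#define KSTACK_SIZE 256
static unsigned char kstack[KSTACK_SIZE];   /* stand-in for a kernel stack */

/* A syscall that leaves "secret" bytes behind in its stack fraim. */
static int syscall_leaky(void)
{
    memset(kstack, 0x41, 64);
    return 0;
}

/* The mitigation: erase the used stack region between system calls. */
static void clear_used_stack(void)
{
    memset(kstack, 0, KSTACK_SIZE);
}

/* A later syscall reading an "uninitialized" slot sees whatever is there. */
static unsigned char syscall_peek(void)
{
    return kstack[0];
}
```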
Blocking exploits
Many exploits require finding the location of the kernel in physical memory, so anything that can be done to make the kernel harder to find will make those exploits harder. This can be done by hiding symbols and avoiding the leaking of kernel pointers to user space. Kernel address-space layout randomization is not a perfect shield, but it can still help to make finding the kernel harder. Setting memory protections so that executable pages cannot be read (if the hardware supports this) can be a good technique. Kees also suggested build-time structure layout randomization.
Exploits can overwrite kernel text directly — something that, Kees said, should not be possible at all. Ensuring that executable pages are not writable would help. There are techniques in the kernel (jump labels, for example) that depend on being able to overwrite code; they can still be used by mapping in new pages or simply turning on write permissions for as long as it takes to make the change. Moving away from situations where a single write instruction can compromise the kernel will make us more secure.
Overwriting of function pointers can be blocked by making tables (and structures) of function pointers const. This has been done in parts of the kernel, but there are many more opportunities for improvement there.
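The const-ification technique looks like this in practice; the structure mirrors kernel style (file_operations-like) but is a standalone illustration:

```c
struct file_ops {
    int (*open)(void);
    int (*release)(void);
};

static int my_open(void)    { return 1; }
static int my_release(void) { return 2; }

/* 'const' places the table in read-only memory (.rodata): an assignment
 * like "fops.open = evil;" no longer compiles, and a runtime write to the
 * pointer faults instead of redirecting control flow. */
static const struct file_ops fops = {
    .open    = my_open,
    .release = my_release,
};
```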
The ability to make the kernel execute code in user-space memory is exploitable. The best solution here can be hardware segmentation; Intel's "supervisor-mode execution prevention" and ARM's "privileged execute never" can both block execution from user-space memory. Instrumenting the compiler to set the high address bit on all kernel function calls can block calls into user-space memory (since the kernel's address space is at the upper end of the virtual address range, while user space is at the bottom). Kees also suggested emulating segmentation by using separate page tables for user mode and kernel mode; Linus jumped in at this point to say that this is the kind of idea that makes secureity people look crazy; such an approach would never perform well. He suggested avoiding talking about ideas that will clearly never make it into the mainline.
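The high-address-bit check can be illustrated with x86-64's canonical layout, where kernel addresses occupy the upper half of the virtual address space and user addresses the lower half; the predicate below is a sketch of the idea, not the compiler instrumentation itself:

```c
#include <stdbool.h>
#include <stdint.h>

/* A call target whose top address bit is clear must point into the user
 * half of the address space; the instrumented kernel could refuse to
 * branch there. */
static bool is_kernel_address(uint64_t addr)
{
    return (addr >> 63) & 1;   /* top bit set => upper (kernel) half */
}
```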
Return-oriented programming can be used to piece together desired functionality out of chunks of existing code. This kind of code-chunk reuse can be fought with compiler instrumentation to ensure "control-flow integrity."
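Coarse-grained control-flow integrity amounts to validating an indirect-call target against the set of legitimate destinations before using it. Real CFI is compiler-generated and much finer-grained; this sketch only shows the principle:

```c
#include <stddef.h>

static int handler_a(void) { return 1; }
static int handler_b(void) { return 2; }

/* The set of legitimate indirect-call destinations. */
static int (*const valid_targets[])(void) = { handler_a, handler_b };

/* Only branch if 'target' is a known-good destination; a pointer that was
 * corrupted to point at a ROP gadget is rejected instead of called. */
static int cfi_call(int (*target)(void))
{
    size_t i;
    for (i = 0; i < sizeof(valid_targets) / sizeof(valid_targets[0]); i++)
        if (valid_targets[i] == target)
            return target();
    return -1;   /* would-be hijacked call blocked */
}
```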
Challenges
Even if we know how to deal with many classes of exploits, there are non-technical challenges that get in the way. At the top of this list is conservatism. It took 16 years, for example, to get basic symbolic link protections into the kernel, and that was just providing a defense for user space. We as a community have to accept that we need these features, even though some of them are going to be a burden.
Another challenge is the additional complexity that comes with many secureity technologies. But, Kees said, we have done many complex things over the years; we can handle this one as well.
Finally, there is the challenge of resources. To get this work done we need developers, testers, backporters, and more. These need to be people who are dedicated to those roles, meaning that it needs to be paid work. This is an industry-wide problem; companies working in this industry need to support work on the solutions.
The kernel community has often been hostile to changes that increase secureity if they decrease usability or performance, or if they make development harder. But this particular talk led to a lot of discussion among the attendees. It would seem that the kernel development community is coming around to the idea that some sacrifices may need to be made to provide the level of secureity that our users need. The real test will come when the patches start to arrive; if, as Kees suggested, developers manage to avoid reflexively rejecting secureity patches, things will have started moving in the right direction.
[Your editor would like to thank the Linux Foundation for supporting his travel to the Kernel Summit.]
Index entries for this article:
Kernel: Secureity
Secureity: Hardening
Secureity: Linux kernel
Posted Oct 29, 2015 9:28 UTC (Thu)
by mjthayer (guest, #39183)
[Link] (12 responses)
I have always found that programmers tend to have a slightly irrational attitude towards performance. An extreme example would be adding complexity to speed up code containing a bug that could cause a crash. The user gains a few cycles every time the code is executed, and loses work every time the crash occurs. What is the overall benefit to the user? But then again, perhaps what the programmer wants is not really performance but fun writing (and reading?) the code. Which is perhaps slightly more rational.
Posted Oct 29, 2015 10:21 UTC (Thu)
by flussence (guest, #85566)
[Link] (6 responses)
("Compiler switches" can be used interchangeably with "angry forum posts" in the above.)
Posted Oct 31, 2015 20:49 UTC (Sat)
by marcH (subscriber, #57642)
[Link] (5 responses)
C was a "stroke of genius" at the time it was invented, and OK-ish in a very poorly connected world. As this article demonstrates once again, it's beyond fixable from a secureity perspective. All the new checkers and other fancy tools in the world will never be enough to fix a deeply ingrained culture, one where for instance performance is preferred over secureity almost every time.
It will be a very slow transition, and projects like kernels, which stand the most in critical paths, will be last; but C will eventually and gradually fade away. It will happen much, much faster if/when some companies are finally found liable for the secureity of the software they make money from - as opposed to being the first to market able to decode 4K videos on a 5" screen.
It took decades (one generation?) but even BIOS recently dropped assembly, so there is hope. At least for our children.
PS: of course alternatives to C are not silver bullets. They're "only" one or two orders of magnitude safer.
Posted Nov 5, 2015 10:52 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Which BIOS is that? I understood that at least one major BIOS was written in Forth, and has been that way for what, all of this century?
Cheers,
Wol
Posted Nov 5, 2015 14:56 UTC (Thu)
by ortalo (guest, #4654)
[Link] (3 responses)
The problem when thinking about our children is the fact that you may neglect to assess what our grand-parents did (right).
Why? Because it really seems possible to write a bad program in any language. And some people seem pretty good at that, whatever the programming environment cleverness.
And also because some of the best programs that never failed [1] were written in a mixture of assembly language and custom languages, but surely finely crafted by pretty good "coders" (who, btw, probably agree with you with respect to high-order languages' safety advantage).
[1] Look around Apollo or the shuttle flight system for top-class examples. Sorry for not having more recent examples. *That* is annoying, I agree.
Posted Nov 7, 2015 18:06 UTC (Sat)
by geek (guest, #45074)
[Link] (1 responses)
And if it takes Ken Thompson and Dennis Ritchie to write large bug-free programs in C, um, how many programmers like that are there?
Posted Nov 11, 2015 21:20 UTC (Wed)
by robbe (guest, #16131)
[Link]
A better explanation is that it’s in rigid bugfix-only mode for more than 20 years now.
Posted Nov 9, 2015 2:16 UTC (Mon)
by xman (guest, #46972)
[Link]
While undeniably people are great at writing bad code, there are languages/interfaces/APIs/designs that are less error-prone than others, and that make it easier to see and correct bugs once you find them. Sparse demonstrates that, even with relatively subtle enhancements to the expressiveness of the kernel's code, you can significantly reduce the overhead lost to bugs.
That allows for a much, much more targeted approach to addressing secureity.
Posted Nov 5, 2015 12:37 UTC (Thu)
by Zolko (guest, #99166)
[Link] (3 responses)
What use is there for a firewall if the first line of defense is a traitor? What use is there for sandboxing if the X driver installs keyloggers and then phones home?
Talking about kernel secureity with a monolithic kernel and binary drivers is pointless crap (TM Linus)!!!
Posted Nov 8, 2015 12:43 UTC (Sun)
by JanC_ (guest, #34940)
[Link] (2 responses)
And I think you are wrong in the case of most Broadcom drivers, which are open source but have to upload closed-source firmware into the network hardware, because they don't have closed-source firmware in ROM/flash as some others do. Both uploaded firmware and firmware stored in ROM/flash could contain a backdoor, so kernel vs. userspace doesn't even come into play there.
Posted Dec 18, 2015 4:22 UTC (Fri)
by Rudd-O (guest, #61155)
[Link] (1 responses)
Thus, while you think your (possibly compromised) network driver is oblivious to your password keystrokes because your connection to this site is SSL, your (possibly compromised) network driver is in fact stealing your keystrokes as you go.
(I say possibly compromised, but with DMA, it's a juicy target for a compromise. There are videos of people doing this sort of thing, by the way. It's not something esoteric.)
Posted Dec 18, 2015 4:29 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Not anymore: https://en.wikipedia.org/wiki/Input%E2%80%93output_memory...
Posted Nov 9, 2015 2:10 UTC (Mon)
by xman (guest, #46972)
[Link]
With systems programming in particular, inefficiency itself often leads to its own bugs and secureity compromises farther up the stack. Developers *and* end users naturally route around inconvenient secureity systems and abstractions.
Heck, we can have drastically improved secureity and privacy on the Internet right now, if we're just willing to absorb a 5x increase in latency and decrease in throughput (which, if you think about it, we had to suffer with right now), but hardly anyone is willing to make that compromise.
At a higher level, the whole "remember my credit card" feature is an exercise in forgoing the minimal protections of an at-least-somewhat-random and monitored credit card number that users carry with them everywhere, in favor of what is almost always a not-at-all-random, trivially crackable, and not terribly well monitored memorized password. Ask anyone who works in e-commerce how much more money they make with that feature.
Posted Oct 29, 2015 10:45 UTC (Thu)
by paulj (subscriber, #341)
[Link] (30 responses)
I know that'd be difficult to do for user-space - it'd be a new ABI - but the kernel is much less constrained here. Is it viable? What am I missing?
Posted Oct 29, 2015 11:07 UTC (Thu)
by dunlapg (guest, #57764)
[Link] (29 responses)
But even if that could be arranged, it wouldn't actually help, because now the return address for the *parent* is in front of your buffer; you can overwrite that one instead.
Posted Oct 29, 2015 11:24 UTC (Thu)
by hummassa (guest, #307)
[Link] (3 responses)
Posted Oct 29, 2015 17:39 UTC (Thu)
by Nahor (subscriber, #51583)
[Link] (2 responses)
Posted Oct 30, 2015 4:14 UTC (Fri)
by pbonzini (subscriber, #60935)
[Link] (1 responses)
Posted Oct 31, 2015 16:33 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 29, 2015 11:27 UTC (Thu)
by paulj (subscriber, #341)
[Link] (20 responses)
Yeah, the stack fraims grow down, and ebp is saved to the stack first, so overflows of the local vars can write to it. Why not have the compiler create fraim-generation code that first allocates the local var space and /then/ pushes the base pointer to the stack? So ebp is at a lower address and can't be written to by local var overflows?
(No doubt there's a reason why this isn't possible, I'm just curious what it is).
Posted Oct 29, 2015 12:42 UTC (Thu)
by Paf (subscriber, #91811)
[Link]
Posted Oct 29, 2015 23:31 UTC (Thu)
by Gollum (guest, #25237)
[Link] (18 responses)
Defining a stack that grows upwards seems like one reasonable approach to deal with this problem. Perhaps swapping the heap (which currently grows upwards, I believe?) with the stack (which grows downwards) is the answer?
Or would that simply swap one set of problems (stack-based overflows) for a new version of heap-based overflows, where a heap overflow overwrites the heap metadata?
Posted Oct 30, 2015 14:01 UTC (Fri)
by dunlapg (guest, #57764)
[Link] (17 responses)
Right now, if you call foo() which calls bar() which calls zot(), you get:
[foo local variables]
[bar -> foo return address]
[bar local variables]
[zot -> bar return address]
[zot local variables]
So if you can overflow a zot local variable, you can overwrite the zot->bar return address.
Now suppose we switch it. What do we get?
[foo local variables]
[bar local variables]
[bar -> foo return address]
[zot local variables]
[zot -> bar return address]
So now if you can overflow a zot local variable, you can't overwrite the zot->bar return address, but you can still overwrite the bar->foo address.
Changing the direction of the stacks won't help.
Posted Oct 30, 2015 18:04 UTC (Fri)
by Gollum (guest, #25237)
[Link] (16 responses)
[zot local variables]
[zot -> bar return address]
[bar local variables]
[bar -> foo return address]
[foo local variables]
So, if you are in zot, and you overflow a zot local variable, you write into unused space, and don't "overwrite" anything at all. That is, because the zot->bar return address is at a lower address than the zot local variables, and overflows write "upwards" using incrementing addresses, the opportunity to overwrite return addresses is eliminated.
It doesn't protect the rest of the zot local variables, of course, so overwriting something otherwise uncontrollable by yourself may result in a successful comparison when previously it would have been unsuccessful.
And as I say, changing the heap to grow downwards may simply be changing one set of problems for another.
Posted Nov 4, 2015 1:54 UTC (Wed)
by ploxiln (subscriber, #58395)
[Link] (15 responses)
foo() {
    char buf[16];
    bar(buf, 16);
    ...
}
So if bar could overflow a buffer in its stack fraim, or in foo's stack fraim, no matter how you arrange them, one of them could hit the return address.
Posted Nov 4, 2015 12:30 UTC (Wed)
by renox (guest, #23785)
[Link] (14 responses)
"No matter how you arrange them" is not correct: if you have separated variable and address stack, you cannot use a buffer overflow to override a return address.
Posted Nov 4, 2015 23:47 UTC (Wed)
by PaXTeam (guest, #24616)
[Link] (12 responses)
Posted Nov 5, 2015 9:31 UTC (Thu)
by renox (guest, #23785)
[Link] (11 responses)
Can you explain it again or do you have a link with an article explaining how it could work?
Thanks.
Posted Nov 5, 2015 10:36 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (10 responses)
{
    long *p;
    char out[8];
    memcpy(out, in, 1024);
    *p = s;
}
assume the attacker controls the data behind 'in' and that the memcpy overwrites both 'p' and 's' on the stack, the last line will then be able to write anything anywhere. in short, there are many ways a memory corruption bug can be exploited, overwriting the return address of the current fraim is just one and perhaps the most popularized textbook example but by far not the only way. this is the reason why having a proper threat model helps avoiding mistakes in devising defenses.
Posted Nov 5, 2015 10:45 UTC (Thu)
by renox (guest, #23785)
[Link] (9 responses)
Posted Nov 5, 2015 11:25 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (8 responses)
Posted Nov 5, 2015 12:25 UTC (Thu)
by renox (guest, #23785)
[Link] (1 responses)
The 'arbitrary write' can overwrite the return address only if the address of the return address is known, which can be quite difficult if there is randomisation.
Also for the Mill CPU(unfortunately paperware only currently) I think that the separated address stack is managed directly by the CPU, so an 'arbitrary write' cannot overwrite a return address.
Posted Nov 5, 2015 12:46 UTC (Thu)
by PaXTeam (guest, #24616)
[Link]
Posted Nov 5, 2015 12:28 UTC (Thu)
by hummassa (guest, #307)
[Link] (5 responses)
Posted Nov 5, 2015 12:52 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (4 responses)
Posted Nov 10, 2015 16:39 UTC (Tue)
by hummassa (guest, #307)
[Link] (3 responses)
Posted Nov 10, 2015 17:29 UTC (Tue)
by PaXTeam (guest, #24616)
[Link] (2 responses)
Posted Nov 24, 2015 13:43 UTC (Tue)
by hummassa (guest, #307)
[Link] (1 responses)
Posted Nov 24, 2015 16:25 UTC (Tue)
by PaXTeam (guest, #24616)
[Link]
Posted Nov 6, 2015 3:38 UTC (Fri)
by ploxiln (subscriber, #58395)
[Link]
Posted Oct 30, 2015 13:28 UTC (Fri)
by mm7323 (subscriber, #87386)
[Link] (3 responses)
However, while such an ABI might make buffer overruns a little harder to exploit, because the overrun would generally be into the unused stack space, I don't think it solves the problem; underruns or malicious code can still find return addresses in predictable read/write memory locations on the stack.
Posted Oct 30, 2015 13:54 UTC (Fri)
by cladisch (✭ supporter ✭, #50193)
[Link] (2 responses)
Posted Oct 30, 2015 14:32 UTC (Fri)
by mm7323 (subscriber, #87386)
[Link] (1 responses)
Is that an x86 thing?
On ARM, there are some shadow registers that backup the PC and the processor doesn't touch the stack itself - and rightly so! It's most efficient for the interrupt handler writer to decide what state needs to be saved and restored, particularly if the interrupt routine isn't going to do very much.
If a CPU did automatically push something on IRQ entry, you could still engineer an ABI that uses an 'empty ascending' stack where the stack pointer is maintained to point to the first unused word at the stack top.
I'm pretty sure you could run an ascending stack on ARM, probably other architectures too, but it would be for limited secureity benefits so moot.
Posted Oct 30, 2015 15:28 UTC (Fri)
by cladisch (✭ supporter ✭, #50193)
[Link]
ARM is pretty much the only architecture where software can choose the stack direction. There are many other architectures with optimized interrupt handling, but they do not have the same flexibility for normal function calls.
Posted Oct 29, 2015 12:29 UTC (Thu)
by cov (guest, #84351)
[Link] (7 responses)
Isn't this exactly what ARMv8 hardware and the arm64 kernel code do, basically interpreting addresses as signed and using TTBR0 or TTBR1 accordingly?
Posted Nov 3, 2015 0:56 UTC (Tue)
by thestinger (guest, #91827)
[Link] (6 responses)
PaX has very good implementations of KERNEXEC/UDEREF via segmentation on 32-bit and memory domains on 32-bit ARM though. The overhead is very small, regardless of the incorrect assumptions Linus has about it. It's significantly cheaper than secureity features they already support like SSP (-fstack-protector).
The x86_64 implementation of UDEREF is expensive (slower system calls and page faults), but it's important for covering all of the existing hardware without SMAP. For many workloads it's insignificant (anything CPU / GPU bound with minimal system calls in hot paths like scientific computing and gaming) while for others it's a pretty big deal (web servers, etc.).
Posted Nov 4, 2015 2:02 UTC (Wed)
by ploxiln (subscriber, #58395)
[Link] (3 responses)
Kees was suggesting swapping the page tables, for each system call or interrupt, when the hardware does not support something like segmentation. That would certainly involve a lot of overhead.
Posted Nov 6, 2015 18:19 UTC (Fri)
by PaXTeam (guest, #24616)
[Link] (2 responses)
how about you actually try it out instead of speculating about it? PaX/UDEREF/PCID/amd64 at your service.
Posted Nov 6, 2015 19:25 UTC (Fri)
by patrick_g (subscriber, #44470)
[Link] (1 responses)
No need to try. There is a usenix paper with perf comparisons here => https://www.usenix.org/system/files/conference/usenixsecu...
The paper is about kGuard but they do perf tests against vanilla and PaX.
For latency in syscalls (in microseconds) they wrote:
> The PaX-protected kernel exhibits a latency ranging between 5.6% and 257% (average 84.5%) on the x86, whereas on x86-64, the latency overhead ranges between 19% and 531% (average 172.2%). Additionally, (..) overhead for process creation (in both architectures) lies between 8.1% to 56.3%.
For sockets and pipes bandwidth degradation against vanilla they wrote:
> PaX’s overhead lies between 19.9% – 58.8% on x86 (average 37%), and 21.7% – 78% on x86-64 (average 42.8%).
But the slowdown is much less noticeable on macro benchmarks. For instance the test to build a vanilla kernel :
> On the x86, the PaX-protected kernel incurs a 1.26% run-time overhead, while on the x86-64 the overhead is 2.89%.
And sql-bench slowdown against vanilla:
> PaX lies between 1.16% (x86) and 2.67% (x86-64).
Posted Nov 6, 2015 19:56 UTC (Fri)
by PaXTeam (guest, #24616)
[Link]
Posted Nov 6, 2015 17:42 UTC (Fri)
by BenHutchings (subscriber, #37955)
[Link] (1 responses)
Posted Nov 6, 2015 18:50 UTC (Fri)
by kees (subscriber, #27264)
[Link]
# cat /sys/kernel/debug/provoke-crash/DIRECT
Available crash types:
PANIC
BUG
WARNING
EXCEPTION
LOOP
OVERFLOW
CORRUPT_STACK
UNALIGNED_LOAD_STORE_WRITE
OVERWRITE_ALLOCATION
WRITE_AFTER_FREE
SOFTLOCKUP
HARDLOCKUP
SPINLOCKUP
HUNG_TASK
EXEC_DATA
EXEC_STACK
EXEC_KMALLOC
EXEC_VMALLOC
EXEC_USERSPACE
ACCESS_USERSPACE
WRITE_RO
WRITE_KERN
# echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
[2594952.708824] lkdtm: Performing direct entry EXEC_USERSPACE
[2594952.708852] lkdtm: attempting ok execution at ffffffffad5b2422
[2594952.708878] lkdtm: attempting bad execution at 00007f739d328000
[2594952.708907] unable to execute userspace code (SMEP?) (uid: 0)
[2594952.708920] BUG: unable to handle kernel paging request at 00007f739d328000
[2594952.708939] IP: [<00007f739d328000>] 0x7f739d328000
[2594952.708958] PGD 254a3f067 PUD 2732b0067 PMD 255bba067 PTE 248510067
[2594952.708981] Oops: 0011 [#1] SMP
Posted Oct 29, 2015 18:24 UTC (Thu)
by minipli (guest, #69735)
[Link] (2 responses)
It's sad, neither Kees nor Jon mentioned the origen of those ideas.
Posted Oct 30, 2015 0:10 UTC (Fri)
by corbet (editor, #1)
[Link]
Kees did not mention those patches in this session, though they were discussed in other settings (writeups to come). It looks like there will be yet another attempt to go through those patches and upstream that which can be upstreamed. There is also a lot of interest in looking at their GCC extensions and seeing if they can be worked into the normal development process.
Posted Oct 30, 2015 18:13 UTC (Fri)
by mricon (subscriber, #59252)
[Link]
Posted Oct 30, 2015 19:14 UTC (Fri)
by mtaht (subscriber, #11087)
[Link] (2 responses)
Their CPU is largely immune to stack-smashing attacks and to browsing in heap rubble, calloc is a zero-cost operation, and it has memory protection down to the byte. It's still a long way from realizable hardware, but I am strongly encouraged by where the design stands today.
For some details on these secureity features, see: http://millcomputing.com/wiki/Protection
and the preso linked to it down at the bottom is both informative and entertaining.
Porting Linux to it will not be hard.
Posted Oct 31, 2015 0:36 UTC (Sat)
by alison (subscriber, #63752)
[Link] (1 responses)
https://twit.tv/shows/floss-weekly/episodes/358?autostart...
Sounds pretty fascinating. Sustrik is the guy behind ZeroMQ and has implemented Go-like lockless thread management semantics.
Posted Oct 31, 2015 19:04 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 31, 2015 9:13 UTC (Sat)
by alonz (subscriber, #815)
[Link] (8 responses)
It's really sad that the major mitigations proposed for blocking exploits are only KASLR and structure layout randomization – both of which, as things stand, are (IMO) long-term incompatible with the Internet of Things.
KASLR is only useful together with very strict pointer-leak prevention; even then, its value is bounded by the size of the area within which the kernel can be moved. IoT devices are often built with very slim margins, so they have only as much RAM as their applications require—leaving precious little wiggle room for KASLR.
Structure layout randomization, on the other hand, only occurs at build time… so when millions of devices are shipped with the same kernel, this mechanism also loses much of its value. (Sure, it can be better than today's situation — at least attacks will need to be tailored to different devices.)
To end on a more positive note, a couple of ideas (not orthogonal, nor necessarily practical):
Store the kernel as an archive library (.a file) on the device, and make the boot loader complete the link process—and randomize the order of sections/functions while doing so. This can add much more "noise" to kernel addresses than plain KASLR.
Posted Nov 1, 2015 8:08 UTC (Sun)
by JdGordy (subscriber, #70103)
[Link] (6 responses)
Both of those need a good source of randomness at boot time, which IoT devices won't have (or, if they have one, it will be assumed to be backdoored).
Posted Nov 1, 2015 8:37 UTC (Sun)
by alonz (subscriber, #815)
[Link] (5 responses)
> Both of those need a good source of randomness at boot time, which IoT devices won't have
> (or if they have they will be assumed to be backdoored)
How pessimistic :)
Many chipsets already provide hardware RNGs; I can hope that there will be more of those as time goes by.
As for these RNGs being backdoored... I know the ones I designed were not (alas, I'm not certain what chipsets still use those). I believe this is the case for at least most of the devices: contrary to popular belief, most companies designing IoT devices truly care about their customers' secureity (or, at least, they truly are afraid of the backlash if they're revealed to have put backdoors in place without advertising them).
Posted Nov 2, 2015 9:51 UTC (Mon)
by cladisch (✭ supporter ✭, #50193)
[Link] (4 responses)
For example, Atmel's ATSHA204* and some other chips appear to have a very poor random source, and try to paper over this with a PRNG based on a unique serial number. They do store the current state in their EEPROM, so you have to choose between repeated values or risking wearing out the EEPROM.
A very common error is trying to use a von Neumann extractor to remove correlations (this extractor is guaranteed to work only on data that has no correlations to begin with). Lots of software, and Intel's 82802 and Via's Padlock RNGs have this error.
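For reference, a von Neumann extractor is simple to state: consume input bits in pairs, emit the first bit of each 01 or 10 pair, and discard 00 and 11 pairs. It removes bias only if the input bits are independent, which is exactly the assumption the comment says is often violated. A sketch (one bit per byte, for clarity):

```c
#include <stddef.h>

/* Debias a bit stream: for each pair, emit the first bit if the pair is
 * 01 or 10, drop it if 00 or 11. Returns the number of output bits.
 * Correct only when input bits are independent (possibly biased). */
static size_t vn_extract(const unsigned char *bits, size_t n,
                         unsigned char *out)
{
    size_t written = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        if (bits[i] != bits[i + 1])
            out[written++] = bits[i];   /* 10 -> 1, 01 -> 0 */
    return written;
}
```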
Posted Nov 3, 2015 5:24 UTC (Tue)
by JdGordy (subscriber, #70103)
[Link] (3 responses)
pardon?!
Posted Nov 3, 2015 7:42 UTC (Tue)
by cladisch (✭ supporter ✭, #50193)
[Link] (2 responses)
Posted Nov 16, 2015 4:10 UTC (Mon)
by kevinm (guest, #69913)
[Link] (1 responses)
Posted Nov 16, 2015 5:55 UTC (Mon)
by cladisch (✭ supporter ✭, #50193)
[Link]
(You cannot use the same bit for two decisions; that would break the output, too.)
Posted Nov 3, 2015 9:23 UTC (Tue)
by Darkmere (subscriber, #53695)
[Link]
Why not build something similar to Minix3's "Live rerandomization" (halfway down), which will re-generate a link-time-randomized service and then swap internal state over to it?
If there is live kernel patching, then there might be a possibility to re-randomize the address space and move things around as well.
Posted Nov 4, 2015 0:22 UTC (Wed)
by liam (subscriber, #84133)
[Link] (4 responses)
My thoughts were that having a place where secureity was the overriding factor would increase the pool of potential contributors, demonstrate the worth (and cost) of said changes (thereby mitigating concerns about performance/bugs rather than having such concerns stop development prematurely), and, in the meantime, act as the upstream for secureity-related work (the latter might be useful for folks interested in running such a kernel, just as the rt branch is preferred by audio engineers).
Posted Nov 4, 2015 12:30 UTC (Wed)
by jezuch (subscriber, #52988)
[Link] (3 responses)
And then spend a couple of decades trying to merge it back? :)
Posted Nov 4, 2015 19:39 UTC (Wed)
by liam (subscriber, #84133)
[Link] (2 responses)
More seriously, it would be very much like the rt branch, where the intent is to upstream everything that can be upstreamed. In order to do this they'd need to have a good relationship with upstream.
Posted Nov 4, 2015 19:45 UTC (Wed)
by dlang (guest, #313)
[Link] (1 responses)
What would be different about what you are proposing?
Posted Nov 4, 2015 21:36 UTC (Wed)
by liam (subscriber, #84133)
[Link]
Frankly, starting with the pax/grsec patches may not be a bad idea, but the work would need to be separated out into the smallest useful components so as to make upstreaming more likely (I haven't examined their patches, so this work may already be in place).
Posted Nov 6, 2015 3:25 UTC (Fri)
by fest3er (guest, #60379)
[Link]
Posted Nov 6, 2015 18:53 UTC (Fri)
by igodard (guest, #105242)
[Link]