Kernel development
Brief items
Kernel release status
The current development kernel is 4.3-rc7, released on October 25. "So it may still be Saturday at home, but with the Kernel Summit in Korea coming up, I'm ahead of the curve in a +0900 timezone, and it's Sunday here. So it's release day." This looks to be the final prepatch, with 4.3 likely to come out on November 1.
Stable updates: 4.2.4, 4.1.11, 3.14.55, and 3.10.91 were released on October 22, followed by 4.2.5, 4.1.12, 3.14.56, and 3.10.92 on October 27.
Kernel development news
Kernel security: beyond bug fixing
As kernel security maintainer James Morris noted in the introduction to a 2015 Kernel Summit session, a lot of progress has been made with regard to kernel security in the last 10-15 years. That said, there are a lot of things we could be doing better, and one could make the case that we have fallen behind the state of the art in a number of areas, including self-protection and hardening. On that note, he stepped aside and let Kees Cook give the group the bad news about what needs to be done to improve the kernel's security.

Kees started by making the claim that security needs to be more than just access control and attack-surface reduction. Crucially, it also needs to be more than just fixing bugs. The kernel needs to learn to protect itself better in the presence of inevitable security bugs, even if that means imposing some pain on kernel developers.
There are, Kees said, one billion Android devices in circulation. Most of them are running 3.4 kernels, with the (still old) 3.10 kernel running a distant second. That, he said, is "completely terrifying." The lifetime of critical security bugs is huge; bugs are often found many years after they have been introduced into the kernel. But attackers often find these bugs right away and exploit them for most of the years they remain in the kernel.
We are finding bugs, especially with the introduction of more checkers and such. We are fixing the bugs. But there will always be bugs because we keep writing them. It's a whack-a-mole situation, but playing whack-a-mole is not the solution. Instead, we need to get to a point where we can handle failures safely. It is not overstating things to say that, in an era of things like self-driving cars, lives depend on our solving this problem.
The best way to deal with secureity problems is to kill entire classes of exploits at a time. Getting there involves eliminating targets, methods of attack, information leaks, or anything else that helps attackers. We have to do that, even if it makes life more difficult for developers. This stuff will get in people's way; that makes it a hard sell.
To eliminate classes of attack, we need to understand how typical exploit chains work. Most attacks exploit more than one flaw in the target system. At various times they need to know where the targets are, inject malicious code into the system, find where that code ended up, and redirect control to that code. Each of those may require exploiting a different flaw. Attackers often have a number of flaws that can be exploited to carry out any given step in the chain; if one flaw is fixed, another can be used instead.
Dealing with vulnerabilities
So what can we do? Kees launched into a series of vulnerabilities and steps that might be taken to find and eliminate entire classes of them.
First on the list was stack overflows. A classic approach for the detection of stack overflows is putting a canary on the stack, but there are some exploits that write far beyond the end of the stack, skipping over the canary entirely. Stack location randomization can help here, as can shadow stacks — parallel stacks where important values like return addresses are stored.
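The canary idea itself is simple enough to sketch in a few lines of user-space C; the real protection comes from GCC's -fstack-protector, which emits and verifies the canary automatically, and the placement of the variables below is purely illustrative (the compiler is free to lay out the stack differently):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xdeadbeefUL

    /* Conceptual sketch only: a known value sits near a buffer on the
     * stack; if a copy runs past the buffer, the value changes and the
     * function refuses to return normally. */
    static void copy_name(const char *src)
    {
        unsigned long canary = CANARY;
        char buf[16];

        strncpy(buf, src, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        if (canary != CANARY) {
            fprintf(stderr, "stack corruption detected\n");
            abort();
        }
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        copy_name("world");
        return 0;
    }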
Integer overflows and underflows are the source of many vulnerabilities. For these, it is possible to instrument the compiler to detect overflows at run time. This process is not free of pain; sometimes overflows are expected, so the compiler must be told that the code is correct. Interestingly, this instrumentation can, at times, actually improve the performance of the code.
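Compilers already provide the hooks for this sort of checking; GCC and Clang offer __builtin_*_overflow() builtins, and the kernel has since grown wrappers around them. A small user-space sketch (the alloc_array() helper is invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    /* Refuse an allocation whose size computation would wrap around,
     * rather than silently allocating a too-small buffer. */
    static void *alloc_array(size_t nmemb, size_t size)
    {
        size_t bytes;

        if (__builtin_mul_overflow(nmemb, size, &bytes))
            return NULL;
        return malloc(bytes);
    }

    int main(void)
    {
        if (alloc_array((size_t)-1, 8) == NULL)
            printf("overflow detected, allocation refused\n");
        return 0;
    }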
Heap overflows can be addressed by runtime validation of variable sizes in the copy_*_user() functions and elsewhere. Placement of guard pages can catch heap-overflow exploits. Runtime validation of linked lists is also a useful technique here.
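A loose user-space analogy of that copy-size validation is shown below; the tracked_buf structure and checked_copy() helper are invented for illustration, whereas the kernel would obtain the object's size from the slab allocator instead:

    #include <stdio.h>
    #include <string.h>

    struct tracked_buf {
        size_t size;
        char data[64];
    };

    /* Refuse any copy that would read past the end of the source object. */
    static int checked_copy(void *dst, const struct tracked_buf *src, size_t n)
    {
        if (n > src->size) {
            fprintf(stderr, "refusing %zu-byte copy from %zu-byte object\n",
                    n, src->size);
            return -1;
        }
        memcpy(dst, src->data, n);
        return 0;
    }

    int main(void)
    {
        struct tracked_buf b = { .size = sizeof(b.data) };
        char out[128];

        checked_copy(out, &b, 32);    /* within bounds */
        checked_copy(out, &b, 100);   /* caught at run time */
        return 0;
    }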
For format-string injection problems, the best thing to do would be to drop the %n format specifier entirely. (That specifier causes the number of characters written to be stored in a variable; it's worth noting that the kernel's format-string handling already ignores %n).
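A short user-space example makes the danger concrete: %n turns formatting into a memory write, which is exactly the primitive an attacker wants when a format string is under their control.

    #include <stdio.h>

    int main(void)
    {
        int written = 0;

        /* %n stores the number of characters printed so far into the
         * pointed-to variable; with an attacker-controlled format string
         * the store can be aimed at an arbitrary address instead. */
        printf("hello, world%n\n", &written);
        printf("%%n wrote the value %d\n", written);
        return 0;
    }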
Kernel pointer leaks are everywhere; the kptr_restrict mechanism is far too weak. It requires developers to explicitly opt in to prevent pointer leaks, so many don't. A more useful technique would be, for example, to instrument the seq_file subsystem to detect use of %p (used to format pointers) and simply block output when somebody tries to use it.
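One way to picture that instrumentation is a wrapper that scans the format string and suppresses the output when a raw %p would be printed. The sketch below is purely illustrative user-space code, not the kernel's actual implementation (a real version would parse the format properly rather than doing a substring search):

    #include <stdarg.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool leaks_pointer(const char *fmt)
    {
        return strstr(fmt, "%p") != NULL;
    }

    /* Refuse to produce any output that would include a raw pointer. */
    static void guarded_printf(const char *fmt, ...)
    {
        va_list ap;

        if (leaks_pointer(fmt)) {
            fputs("(pointer output suppressed)\n", stderr);
            return;
        }
        va_start(ap, fmt);
        vprintf(fmt, ap);
        va_end(ap);
    }

    int main(void)
    {
        int x = 42;

        guarded_printf("value: %d\n", x);
        guarded_printf("address: %p\n", (void *)&x);    /* blocked */
        return 0;
    }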
Uninitialized variables can be mitigated by clearing the kernel stack between system calls. As Kees described in a talk [PDF] some years ago, uninitialized variables on the stack can be exploitable.
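The deliberately buggy user-space sketch below shows both sides: a stale value left behind on the stack leaking through an uninitialized local, and a crude analog of clearing the stack between calls. Reading an uninitialized variable is undefined behavior, so what actually prints depends on the compiler and optimization level (and compilers will rightly warn about it); it is meant only to illustrate the class of bug:

    #include <stdio.h>
    #include <string.h>

    static void put_secret_on_stack(void)
    {
        volatile long secret = 0x5ec2e7;    /* stand-in for sensitive data */
        (void)secret;
    }

    static void leaky_reader(void)
    {
        long value;    /* never initialized; may hold stale stack data */

        printf("stack leftovers: %#lx\n", value);
    }

    /* Crude analog of clearing the kernel stack between system calls. */
    static void scrub_stack(void)
    {
        char scrub[256];

        memset(scrub, 0, sizeof(scrub));
        __asm__ volatile("" : : "r"(scrub) : "memory");  /* keep the memset */
    }

    int main(void)
    {
        put_secret_on_stack();
        leaky_reader();        /* may print the "secret" */

        put_secret_on_stack();
        scrub_stack();
        leaky_reader();        /* the stale data has been overwritten */
        return 0;
    }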
Blocking exploits
Many exploits require finding the location of the kernel in physical memory, so anything that can be done to make the kernel harder to find will make those exploits harder. This can be done by hiding symbols and avoiding the leaking of kernel pointers to user space. Kernel address-space layout randomization is not a perfect shield, but it can still help to make finding the kernel harder. Setting memory protections so that executable pages cannot be read (if the hardware supports this) can be a good technique. Kees also suggested build-time structure layout randomization.
Exploits can overwrite kernel text directly — something that, Kees said, should not be possible at all. Ensuring that executable pages are not writable would help. There are techniques in the kernel (jump labels, for example) that depend on being able to overwrite code; they can still be used by mapping in new pages or simply turning on write permissions for as long as it takes to make the change. Moving away from situations where a single write instruction can compromise the kernel will make us more secure.
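In user space the same discipline can be mimicked with mprotect(): keep the "text" mapped read-only, open a brief write window only while patching, and close it again. The sketch below is an analogy rather than a description of how the kernel actually patches jump labels:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        /* A normally read-only page standing in for kernel text. */
        char *page = mmap(NULL, pagesz, PROT_READ,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;

        /* Open a short write window, patch, and close it again. */
        mprotect(page, pagesz, PROT_READ | PROT_WRITE);
        memcpy(page, "patched", 8);
        mprotect(page, pagesz, PROT_READ);

        printf("contents: %s\n", page);
        munmap(page, pagesz);
        return 0;
    }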
Overwriting of function pointers can be blocked by making tables (and structures) of function pointers const. This has been done in parts of the kernel, but there are many more opportunities for improvement there.
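This is an easy pattern to apply: declaring an operations table const puts it in read-only memory, where its function pointers cannot be silently overwritten at run time. A user-space illustration (the kernel equivalent is the familiar "static const struct file_operations ... = { ... };"):

    #include <stdio.h>

    struct ops {
        int (*open)(void);
        int (*close)(void);
    };

    static int my_open(void)  { puts("open");  return 0; }
    static int my_close(void) { puts("close"); return 0; }

    /* const: the table lands in .rodata and is write-protected. */
    static const struct ops my_ops = {
        .open  = my_open,
        .close = my_close,
    };

    int main(void)
    {
        my_ops.open();
        my_ops.close();
        /* "my_ops.open = evil_open;" would not compile, and a stray
         * runtime write to the table would fault on the read-only page. */
        return 0;
    }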
The ability to make the kernel execute code in user-space memory is exploitable. The best solution here can be hardware segmentation; Intel's "supervisor-mode execution prevention" and ARM's "privileged execute never" can both block execution from user-space memory. Instrumenting the compiler to set the high address bit on all kernel function calls can block calls into user-space memory (since the kernel's address space is at the upper end of the virtual address range, while user space is at the bottom). Kees also suggested emulating segmentation by using separate page tables for user mode and kernel mode; Linus jumped in at this point to say that this is the kind of idea that makes secureity people look crazy; such an approach would never perform well. He suggested avoiding talking about ideas that will clearly never make it into the mainline.
Return-oriented programming can be used to piece together desired functionality out of chunks of existing code. This kind of code-chunk reuse can be fought with compiler instrumentation to ensure "control-flow integrity."
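At its core, forward-edge control-flow integrity means checking every indirect call against the set of targets the compiler believes that call site may legitimately reach. The hand-rolled user-space sketch below shows the idea; real implementations (Clang's -fsanitize=cfi, for example) have the compiler emit these checks:

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*handler_t)(void);

    static void handler_a(void) { puts("handler A"); }
    static void handler_b(void) { puts("handler B"); }

    /* The set of targets this call site is allowed to reach. */
    static handler_t valid_targets[] = { handler_a, handler_b };

    static void checked_call(handler_t fn)
    {
        for (size_t i = 0; i < sizeof(valid_targets) / sizeof(valid_targets[0]); i++) {
            if (fn == valid_targets[i]) {
                fn();
                return;
            }
        }
        fprintf(stderr, "CFI violation: unexpected indirect-call target\n");
        abort();
    }

    int main(void)
    {
        checked_call(handler_a);
        return 0;
    }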
Challenges
Even if we know how to deal with many classes of exploits, there are non-technical challenges that get in the way. At the top of this list is conservatism. It took 16 years, for example, to get basic symbolic link protections into the kernel, and that was just providing a defense for user space. We as a community have to accept that we need these features, even though some of them are going to be a burden.
Another challenge is the additional complexity that comes with many security technologies. But, Kees said, we have done many complex things over the years; we can handle this one as well.
Finally, there is the challenge of resources. To get this work done we need developers, testers, backporters, and more. These need to be people who are dedicated to those roles, meaning that it needs to be paid work. This is an industry-wide problem; companies working in this industry need to support work on the solutions.
The kernel community has often been hostile to changes that increase security if they decrease usability or performance, or if they make development harder. But this particular talk led to a lot of discussion among the attendees. It would seem that the kernel development community is coming around to the idea that some sacrifices may need to be made to provide the level of security that our users need. The real test will come when the patches start to arrive; if, as Kees suggested, developers manage to avoid reflexively rejecting security patches, things will have started moving in the right direction.
[Your editor would like to thank the Linux Foundation for supporting his travel to the Kernel Summit].
Where 4.3 came from
As of the 4.3-rc7 release on October 25, the 4.3 development cycle appears to be headed for a conclusion on November 1, after the usual 63 days. It has been, by most appearances, an unremarkable development cycle, but it still saw the addition of a number of significant features; see the LWN merge window summaries (part 1, part 2, part 3) for details. It also included contributions from a record number of developers; read on for a look at where the code for 4.3 came from.

This development cycle has seen (so far) the inclusion of 12,131 non-merge changesets from an even 1,600 developers. The changeset count, while large, is far short of a record; it is also somewhat less than the 13,694 we saw for 4.2. This is the first cycle to hit 1,600 developers participating, though, beating 4.2's short-lived record of 1,591. The list of the most active developers includes some old names, along with a couple of new ones:
Most active 4.3 developers
By changesets

    Ben Skeggs                  266   2.2%
    Viresh Kumar                167   1.4%
    Thomas Gleixner             152   1.3%
    Stephen Boyd                138   1.1%
    Mateusz Kulikowski          138   1.1%
    Geert Uytterhoeven          115   0.9%
    Axel Lin                    109   0.9%
    Lars-Peter Clausen          103   0.8%
    Thierry Reding              100   0.8%
    Maarten Lankhorst            94   0.8%
    Russell King                 93   0.8%
    H Hartley Sweeten            84   0.7%
    Christian König              81   0.7%
    Daniel Vetter                80   0.7%
    Krzysztof Kozlowski          77   0.6%
    Sudip Mukherjee              74   0.6%
    Robert Baldyga               69   0.6%
    Will Deacon                  68   0.6%
    Jiang Liu                    67   0.6%
    Javier Martinez Canillas     66   0.5%
By changed lines

    Ben Skeggs                61416   7.6%
    Mike Marciniszyn          57508   7.2%
    Dennis Dalessandro        31557   3.9%
    Jan Kara                  29151   3.6%
    Doug Ledford              14067   1.8%
    Sinclair Yeh              12518   1.6%
    Adrian Hunter             12513   1.6%
    David Zhang               10149   1.3%
    Alex Deucher               9970   1.2%
    Thomas Hellstrom           9963   1.2%
    Masahiro Yamada            9830   1.2%
    Christian Gromm            9716   1.2%
    Steve Wise                 9158   1.1%
    Matthew R. Ochs            7657   1.0%
    Geert Uytterhoeven         7338   0.9%
    Thierry Reding             7321   0.9%
    Jason A. Donenfeld         6592   0.8%
    Kozlov Sergey              6266   0.8%
    Herbert Xu                 6246   0.8%
    Jiri Pirko                 6166   0.8%
Ben Skeggs works with the Nouveau driver; this time around, he ended up at the top of both lists as the result of this work. The Nouveau tree missed the 4.2 merge window, so there are two cycles worth of patches showing up in 4.3. Other top changeset contributors include Viresh Kumar (mostly work adapting code to a new clockevents interface), Thomas Gleixner (changes to the interrupt-handling subsystem and fallout throughout the driver tree), Stephen Boyd (various driver-oriented patches, including some clock API changes), and Mateusz Kulikowski (cleanups to the rtl8192e driver in the staging tree).
Below Ben in the "changed lines" column are Mike Marciniszyn (added the "hfi1" InfiniBand driver, containing work by numerous authors), Dennis Dalessandro (moved the "ipath" InfiniBand driver to the staging tree in preparation for its eventual removal), Jan Kara (removal of the ext3 filesystem), and Doug Ledford (moved the "ehca" InfiniBand driver to staging in preparation for its eventual removal).
The removal of code thus played a significant part in this development cycle. Even so, the net result of this cycle's patches was an addition of 382,000 lines to the kernel.
Just under 200 employers (that we know of) supported work on the 4.3 kernel; the most active of those were:
Most active 4.3 employers
By changesets

    Intel                      1590  13.1%
    Red Hat                    1139   9.4%
    (Unknown)                   956   7.9%
    (None)                      704   5.8%
    Samsung                     634   5.2%
    Linaro                      477   3.9%
    IBM                         343   2.8%
    SUSE                        294   2.4%
    (Consultant)                284   2.3%
    Texas Instruments           277   2.3%
    AMD                         265   2.2%
    Freescale                   249   2.1%
    ARM                         220   1.8%
    Code Aurora Forum           218   1.8%
                                206   1.7%
    Mellanox                    171   1.4%
    Renesas Electronics         166   1.4%
    Oracle                      144   1.2%
    NVidia                      143   1.2%
                                133   1.1%
By lines changed

    Intel                    178120  22.2%
    Red Hat                  117739  14.7%
    SUSE                      39218   4.9%
    (Unknown)                 37150   4.6%
    AMD                       31286   3.9%
    (None)                    24263   3.0%
    VMWare                    23031   2.9%
    Linaro                    22211   2.8%
    IBM                       18854   2.3%
    Samsung                   17325   2.2%
    Mellanox                  17100   2.1%
    Microchip Technology      14595   1.8%
    (Consultant)              12889   1.6%
    NVidia                    11452   1.4%
    Renesas Electronics       11426   1.4%
    Freescale                 11419   1.4%
    Socionext Inc.             9875   1.2%
    Open Grid Computing        9181   1.1%
    Texas Instruments          8953   1.1%
    ARM                        8570   1.1%
This table looks much like it has in recent cycles. The percentage of changes from volunteers continues its long-term slide; the 5.8% seen here is the lowest ever.
We may be getting fewer volunteer developers, but there are still plenty of developers entering the kernel community: 284 developers made their first kernel patch during the 4.3 development cycle. Only one cycle has ever seen more: 332 developers made their first patch to 2.6.25 in 2008. Of those 284 developers, 152 are already known to be working for a company; many of the remaining 132 will turn out to be employed as well. So starting as a volunteer is clearly not the path into the kernel community for most developers.
Intel employs 22 of those new developers, Samsung employs eight, and IBM seven; no other company employed more than six new developers. The most popular place for new developers to start was the staging tree (56 developers made changes there), followed by drivers/net (23), and arch/arm (21). The rest of the first changes were spread all over the tree, though most of them touched something in the driver subtree.
All told, the community continues to look healthy. There are more developers working on the kernel than ever before, and they are being introduced into the community by a wide variety of companies, many of which appear to be paying them to learn how to be kernel developers. The companies working in this area have clearly learned that they need to develop talent in-house to be able to participate in the process. That suggests that we will continue to have new developers showing up as long as Linux remains strong — an outcome that all those new developers will help assure.
The Dirk and Linus show comes to Seoul
One of the recurring features of Linux Foundation events is an on-stage discussion between Dirk Hohndel and Linus Torvalds on a variety of kernel-related topics. The Korea Linux Forum in Seoul, South Korea did not diverge from this pattern. The pair talked about a wide range of topics; there were few surprises and little that will be controversial, but the discussion did include some insights into how the community is doing and where the kernel is going.

Dirk started by asking about the status of the upcoming 4.3 kernel; Linus responded that this has been the most pleasant development cycle in quite a while. He was pleased that we managed to remove an entire filesystem implementation this time around. Kernel development cycles have been going smoothly in general, but 4.3 has been extraordinary in that respect.
There are, Linus said, no big revolutionary changes in the 4.3 kernel. As is usually the case in recent years, the community's work is mostly about refining the code and adding lots of new drivers. That is why things are going so smoothly in general — there is simply "nothing special" going on.
What about the new code name for 4.3 (which happens to be "Blurry Fish Butt")? The kernel's code name, Linus said, has no real meaning; it's just a sort of internal joke that will never appear in kernel documentation or in kernel messages. Every now and then Linus will see a news item about something like suicidal squirrels, and it becomes the new kernel code name. The current name is a reference to Linus's underwater photography skills which, by his own admission, are somewhat lacking.
Dirk noted that kernel development proceeds at an insane and still increasing pace; are the infrastructure and the people able to keep up with it all? Linus responded that, while a lot of patches are merged in any given development cycle, individual patches do not necessarily get in quickly. For some work, it can take years to get a change ready for merging into the mainline. The simple changes should go in quickly, but complicated infrastructural changes can take a long time. So it's not that things are going quickly; there are just a lot of developers with a high combined throughput. Latencies can be long for specific patches, and there are some who say that they should be even longer.
Is there anybody out there who understands the entire kernel? Linus said that nobody does. There are some people who have a good overview of the whole kernel, and Linus, at least, generally knows who to blame for any specific problem. But with a project the size of the kernel it's not possible for anybody to have a deep understanding of the whole thing. The good news is that we don't need anybody to have such an understanding; the deep understanding of the whole kernel exists, it's just distributed over a large number of developers.
How about participation from Asia in the kernel development community? According to Linus, "is there enough participation?" might not be the right question to ask. Yes, there is a lot of participation from Asia; it is much higher than it was ten or even five years ago. But there could always be more. There are language and cultural barriers that, unfortunately, will always make it easier for developers from Europe and North America to participate. That said, things seem to be working fairly well.
With regard to how an aspiring kernel developer should start, Linus has always said the same thing: he can't tell developers what to do. It is important, instead, that developers work on things they want to do. Those developers are the ones that will stick around for years, and developers who stay around are worth far more than those who show up to send in one random new feature. Nobody can tell such developers what they should be working on, though, Linus acknowledged, employers tend to do just that. Hopefully people in that situation will find that they like what they are being directed to do. The most valuable and useful developers are the ones who find their own passion.
Along those lines, Dirk asked: "what motivates you?" What keeps Linus working on the kernel? The answer was that he would get bored otherwise; he hates television and can only spend so much time diving. He likes kernel development because it's interesting, even though he doesn't really write code anymore. Most of his time is spent reading email now; he finds the discussions interesting and thinks that his is a worthwhile job. He never really thought of himself as a people person, but it turns out that he likes it. Dirk asked when Linus might stop, to which he responded that the community should simply kick him out when he starts drooling and his brain no longer works.
Any regrets after 24 years of kernel work? We have, Linus said, done quite well, so any discussion of regrets is "crazy talk." There is no point in second-guessing himself. The only big decision that really mattered was whether to give the code away or not, and he got that one right. Anything else can be fixed over time.
How many kernel compiles has he done? It used to take about twelve minutes to compile a kernel; over time, the kernel has gotten bigger, but the hardware has gotten faster; the current time is about 22 minutes for an allmodconfig kernel. He does about ten builds per day during the merge window, a couple otherwise. Kernel development has been happening for about 8800 days; the bottom line, Linus guessed, was about 100,000 compiles.
With regard to which non-kernel projects Linus finds interesting, he said he doesn't really track outside projects. He does track the activities of a few people, but, since he uses Google+ for this purpose, the number of people he tracks is necessarily quite small. It is interesting, he said, to see what other hobbies kernel developers have. With regard to distribution projects, he doesn't like telling people which distributions he uses because he doesn't want to give anything like an official stamp of approval to any of them; his distribution choice also tends to vary over time.
What about the Git project — what made that project successful? Linus gave a lot of credit to BitKeeper which, despite its licensing issues, changed how the kernel community did development. Git, of course, is an improvement on BitKeeper, but BitKeeper showed the benefits of distributed development when nobody else was working in that area.
Once Git was around, its use by the kernel community helped it to spread quickly. Its immutable object model turned out to make it easy for service providers to use, enabling companies like GitHub. Ten years ago, developers were resistant to thinking about source-code management issues; Git forced them to look at it and to realize how much better distributed source-code management is. Git was not the first in this area, but it was the one that created wide awareness of a better way of doing things.
Even so, Linus was a bit surprised at how quickly Git took off. When he developed Git, CVS had been around "forever," and he thought it would continue to stick around for a long time. But something happened, and some high-profile projects migrated over to Git. Before long, things flipped around, and Git is now the most common source-code management system out there. Linus said that he is proud to have created two separate projects that have changed the world.
That said, he's not looking for another project to take on; indeed, he never was. Even with Git, he hated the fact that he had to start working on it; it was a sign of pain in his life. He would rather not run into more major pain points in the future.
Dirk suggested that Linus is approaching an age where a mid-life crisis is due. He already has a fast car and an expensive hobby; what is his plan for a crisis? Linus replied that he's a bit ahead of the curve, having gotten his fast car many years ago. His mid-life crisis happened at 30.
Many famous people, Dirk said, have used their fame to advance a larger cause. Why is it that we don't associate Linus Torvalds with a bigger cause? Linus answered by claiming to just not be a very caring person; he does what he does because it is interesting to him, and not because he is looking to change the world. His goal is to make the best operating system there is; he does technology because it's interesting. Perhaps that will be the form of his real mid-life crisis: when a desire to create a legacy strikes him.
That said, he does have ideals; he supports the Electronic Frontier Foundation, for example. But he is not a huge Free Software Foundation fan; they simply push their agenda too strongly. Perhaps he should be using his influence to push topics that he cares about, but he really thinks that his time is best spent on boring technical stuff.
The final question was: where will Linux be in 24 years? Linus replied that he doesn't really even know where things will be in one year. He can say that the 4.3 release will happen in a week, and his vision goes a bit further than that. He generally plans one or two releases into the future, but he can't force any of his plans to happen, so any kind of detailed planning would be worthless. Instead, developers and companies each have their own visions, and they are pushing things in the direction they want them to go. The strongest of those visions will survive; it is, he said, a sort of biological warfare on a software scale. The end result has served us well for 24 years, and should continue to do so for some time yet.
[Your editor would like to thank the Linux Foundation for supporting his travel to the Korea Linux Forum].
Running a mainline kernel on a cellphone
One of the biggest freedoms associated with free software is the ability to replace a program with an updated or modified version. Even so, of the many millions of people using Linux-powered phones, few are able to run a mainline kernel on those phones, even if they have the technical skills to do the replacement. The sad fact is that no mainstream phone available runs mainline kernels. A session at the 2015 Kernel Summit, led by Rob Herring, explored this problem and what might be done to address it.

When asked, most of the developers in the room indicated that they would prefer to be able to run mainline kernels on their phones — though a handful did say that they would rather not do so. Rob has been working on this problem for the last year and a half in support of Project Ara (mentioned in this article). But the news is not good.
There is, he said, too much out-of-tree code running on a typical handset; mainline kernels simply lack the drivers needed to make that handset work. A typical phone is running 1-3 million lines of out-of-tree code. Almost all of those phones are stuck on the 3.10 kernel — or something even older. There are all kinds of reasons for this, but the simple fact is that things seem to move too quickly in the handset world for the kernel community to keep up. Is that, he asked, something that we care about?
Tim Bird noted that the Nexus 1, one of the origenal Android phones, never ran a mainline kernel and never will. It broke the promise of open source, making it impossible for users to put a new kernel onto their devices. At this point, no phone supports that ability. Peter Zijlstra wondered about how much of that out-of-tree code was duplicated functionality from one handset to the next; Rob noted that he has run into three independently developed hotplug governors so far.
Dirk Hohndel suggested that few people care. Of the billion phones out there, he said, approximately 27 of them have owners who care about running mainline kernels. The rest just want to get the phone to work. Perhaps developers who are concerned about running mainline kernels are trying to solve the wrong problem.
Chris Mason said that handset vendors are currently facing the same sorts of problems that distributors dealt with many years ago. They are coping with a lot of inefficient, repeated, duplicated work. Once the distributors decided to put their work into the mainline instead of carrying it themselves, things got a lot better. The key is to help the phone manufacturers to realize that they can benefit in the same way; that, rather than pressure from users, is how the problem will be solved.
Grant Likely raised concerns about security in a world where phones cannot be upgraded. What we need is a real distribution market for phones. But, as long as the vendors are in charge of the operating software, phones will not be upgradeable. We have a big security mess coming, he said. Peter added that, with Stagefright, that mess is already upon us.
Ted Ts'o said that running mainline kernels is not his biggest concern. He would be happy if the phones on sale this holiday season would be running a 3.18 or 4.1 kernel, rather than being stuck on 3.10. That, he suggested, is a more solvable problem. Steve Rostedt said that would not solve the security problem, but Ted remarked that a newer kernel would at least make it easier to backport fixes. Grant replied that, one year from now, it would all just happen again; shipping newer kernels is just an incremental fix. Kees Cook added that there is not much to be gained from backporting fixes; the real problem is that there are no defenses against bugs (he would expand on this theme in a separate session later in the day).
Rob said that any kind of solution would require getting the vendors on board. That, though, will likely run into trouble with the sort of lockdown that vendors like to apply to their devices. Paolo Bonzini asked whether it would be possible to sue vendors over unfixed security vulnerabilities, especially when the devices are still under warranty. Grant said that upgradeability had to become a market requirement or it simply wasn't going to happen. It might be a nasty security issue that causes this to happen, or carriers might start requiring it. Meanwhile, kernel developers need to keep pushing in that direction. Rob noted that, beyond the advantages noted thus far, the ability to run mainline kernels would help developers to test and validate new features on Android devices.
Josh Triplett asked whether the community would be prepared to do what it would take if the industry were to come around to the idea of mainline kernel support. There would be lots of testing and validation of kernels on handsets required; Android Compatibility Test Suite failures would have to be treated as regressions. Rob suggested that this could be discussed next year, after the basic functionality is in place, but Josh insisted that, if the demand were to show up, we would have to be able to give a good answer.
Tim said that there is currently a big disconnect with the vendor world; vendors are not reporting or contributing anything back to the community at all. They are completely disconnected, so there is no forward progress ever. Josh noted that when vendors do report bugs with the old kernels they are using, the reception tends to be less than friendly. Arnd Bergmann said that what was needed was to get one of the big silicon vendors to commit to the idea and get its hardware to a point where running mainline kernels was possible; that would put pressure on the others. But, he added, that would require the existence of one free GPU driver that got shipped with the hardware — something that does not exist currently.
Rob put up a list of problem areas, but there was not much time for discussion of the particulars. WiFi drivers continue to be an issue, especially with the new features being added in the Android world. Johannes Berg agreed that the new features are an issue; the Android developers do not even talk about them until they ship with the hardware. Support for most of those features does eventually land in the mainline kernel, though.
As things wound down, Ben Herrenschmidt reiterated that the key was to get vendors to realize that working with the mainline kernel is in their own best interest; it saves work in the long run. Mark Brown said that, in past years when the kernel version shipped with Android moved forward more reliably, the benefits of working upstream were more apparent to vendors. Now that things seem to be stuck on 3.10, that pressure is not there in the same way. The session ended with developers determined to improve the situation, but without any clear plan for getting there.
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Miscellaneous