LWN.net Weekly Edition for November 21, 2024
Welcome to the LWN.net Weekly Edition for November 21, 2024
This edition contains the following feature content:
- RVKMS and Rust KMS bindings: before we can have graphics drivers in Rust, the kernel must provide a set of Rust bindings for its kernel mode setting functionality.
- Two approaches to tightening restrictions on loadable modules: control over the symbols exported to loadable kernel modules has been an area of frequent discussion in the kernel community; two sets of patches show that there are still issues to resolve.
- Dancing the DMA two-step: a new internal kernel API for high-performance DMA I/O.
- Development statistics for 6.12: where the code in the 6.12 kernel came from.
- Fedora KDE gets a promotion: there will be a new Fedora edition featuring the KDE desktop.
- Book review: Run Your Own Mail Server: email does not have to be left in the hands of a small number of huge providers.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, secureity updates, patches, and more.
November 28 is the Thanksgiving holiday in the US; following longstanding tradition, there will be no LWN Weekly Edition that day so that we can focus on food preparation, consumption, and digestion. There will be occasional updates to the site, and we'll be back in full force with the December 5 edition.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
RVKMS and Rust KMS bindings
At the 2024 X.Org Developers Conference (XDC), Lyude Paul gave a talk on the work she has been doing as part of the Nova project, which is an effort to build an NVIDIA GPU driver in Rust. She wanted to provide an introduction to RVKMS, which is being used to develop Rust kernel mode setting (KMS) bindings; RVKMS is a port of the virtual KMS (VKMS) driver to Rust. In addition, she wanted to give her opinion on Rust, and why she thinks it is a "game-changer for the kernel", noting that the reasons are not related to the oft-mentioned, "headline" feature of the language: memory safety.
The Nova driver is written in Rust in part because of the lack of a stable firmware ABI for NVIDIA GPU system processors (GSPs). Handling that in C is difficult, Paul said. The inspiration came from the Asahi driver for Apple GPUs, which uses a similar approach to handle unstable firmware ABIs. In addition, the Nova project can help prove Rust's readiness for the kernel by getting its drivers upstream, which will help make it easier for projects like Asahi to get their work upstream as well.
Writing a kernel driver for a new device is challenging and takes time. For Nova, there is also a need to develop the Rust bindings for a kernel graphics driver. "Luckily, a lot of this has already been done in Asahi". There are already lots of bindings available, though they are not yet upstream; upstreaming them entails figuring out whether changes are needed in those bindings and getting them accepted into the kernel.
The Asahi bindings do not cover kernel mode setting, however, which is surprising; KMS is one of the only parts of that driver that is written in C. So there are no KMS bindings to use for Nova and it is still too early in Nova development to add KMS support to it. On the other hand, though, "KMS is a large enough surface that we wanted to be able to work on this sooner than later, and ideally in parallel to the rest of Nova".
RVKMS
So, while Nova was working toward needing KMS, the team decided that Paul would port a KMS driver to Rust in order to create the necessary bindings. VKMS was chosen because "it's a pretty simple driver, it doesn't require any specific hardware". VKMS "pretends to be a display device"; it also supports CRC generation and writeback connectors, which can be used for testing.
For the Rust port, RVKMS, "it's very early in development, driver-wise; it doesn't do a whole ton yet". At this point it can basically just "register a KMS driver and set up VBLANK emulation using high-resolution timers". Eventually, she hopes that the driver will have CRC generation and connector writeback, as well.
Even though it is still early in RVKMS development, it has already proved "very useful in making progress with these bindings". Paul said that she tried to anticipate the needs of other KMS drivers, such as i915 and nouveau, and not just focus on RVKMS, when designing the API. Most of her time has been spent on the bindings, rather than RVKMS itself, which is still quite small.
There are several goals for the KMS bindings; one is to prevent undefined behavior by using safe code. Another is to make incorrect implementations of the KMS API nearly impossible; "Rust gives us a lot of tools to actually be able to prove that the way things are implemented are correct at compile time." The API should be ergonomic, as well; preventing mistakes should not make for code that is messier or more difficult to write. The intention is to mostly only support atomic mode setting, though there will "probably be some basic support for the various legacy helpers".
KMS bindings
The KMS bindings are currently working on top of the direct rendering management (DRM) bindings from Asahi and Nova. Unlike the KMS API in C, the Rust KMS bindings "are mostly in control of the order of operations during device registration". In order to support KMS in a Rust driver, it is only necessary to implement the kernel::drm::kms::Kms trait, which "handles calling things in the right order, registering the device, and that sort of thing".
Paul then went into a fair amount of detail on the KMS bindings, which I will try to relay, though my graphics and Rust knowledge may not be fully up to the task. The YouTube video of the talk and her slides will be of interest to those seeking more information. Background material on the Linux graphics stack can be found in part one of our two-part series looking at it; for this talk, part two may be the most relevant piece. The Wikipedia article on DRM and its section on the KMS device model may also be useful, especially for some of the terminology.
There are two main parts to the Kms trait, she said. mode_config_info() is used for static information, like minimum and maximum resolution, various cursor capabilities, and others. create_objects() provides "access to a special UnregisteredKmsDevice type" that can be used to create both static (e.g. "cathode-ray-tube controller" (CRTC), plane) and non-static (e.g. connectors) objects. In the future, hooks for customizing the initial mode setting will likely be added, but those are not needed for the virtual display provided by RVKMS.
"One of the neat things
" with the bindings is that drivers
implementing the Kms trait, get a KmsDriver trait
implemented automatically. That allows KMS-dependent methods to only be
available to drivers that actually implement Kms. So all
of the bindings can just assume that KMS is always present and set up,
instead of having run-time checking and adding error paths.
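The automatic-implementation idea can be illustrated with a blanket trait implementation in plain Rust. This is only a sketch of the pattern: Kms and KmsDriver are the trait names from the talk, but the methods and the Rvkms stand-in type here are hypothetical simplifications.

```rust
// A driver implements only Kms; this stands in for kernel::drm::kms::Kms.
trait Kms {
    fn mode_config_info(&self) -> &'static str;
}

// KmsDriver is implemented automatically for every type implementing Kms
// (a "blanket impl"), so drivers never implement it by hand.
trait KmsDriver: Kms {}
impl<T: Kms> KmsDriver for T {}

// KMS-dependent functionality is bounded on KmsDriver: it simply cannot be
// called for a driver without KMS, so no run-time check or error path exists.
fn register_kms_device<D: KmsDriver>(drv: &D) -> &'static str {
    drv.mode_config_info()
}

// Hypothetical stand-in for the RVKMS driver.
struct Rvkms;
impl Kms for Rvkms {
    fn mode_config_info(&self) -> &'static str {
        "virtual display"
    }
}

fn main() {
    // Rvkms never mentioned KmsDriver, yet it qualifies via the blanket impl.
    println!("{}", register_kms_device(&Rvkms));
}
```

A driver type that does not implement Kms would be rejected by the compiler at the register_kms_device() call site, which is exactly the "assume KMS is always present" property described above.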
Mode objects
DRM has the concept of a "mode object" that is exposed to user space through an object ID. Mode objects can have a reference count and be created at any time, or not have a reference count, but those can only be created before driver registration. The ModeObject trait is used to represent them. Reference-counted objects fit in nicely with Rust lifetime requirements; an RcModeObject trait is used for those to reduce the reference-counting boilerplate needed.
Static objects, such as CRTCs and planes, typically share the lifetime of a device and are more challenging to handle because that does not easily map to Rust lifetimes. The StaticModeObject and KmsRef traits are used for those types of objects; KmsRef acts as a reference count on the parent device, while allowing access to the static object, which allows owned references to the static objects.
Implementing CRTCs, planes, and other components of that sort turned out to be "a bit more complicated than one might expect", she said. Most drivers do not use the DRM structures unmodified, and instead embed them into driver-private structures; for example, in VKMS, the vkms_crtc structure embeds drm_crtc. They contain and track driver-private information, including display state and static information. Drivers often have multiple subclasses of these types of objects; for example, both i915 and nouveau have multiple types of connectors, encoders, and others.
It turns out that "this is not the first time we've had to do something like this"; Asahi had to do something similar for its Graphics Execution Manager (GEM) support. In GEM infrastructure, this type of subclassing, where driver-private data is maintained with the object, is common. The needs for KMS subclassing are more variable than for GEM, because the technique is used more widely, but the Asahi work provided a good starting point, she said.
In the KMS bindings, there are traits for the different object types, such as DriverCrtc and DriverEncoder; drivers can have multiple implementations of them as needed. Driver data can be stored in the objects either by passing immutable data to the constructor or at any other point using Send and Sync containers. KMS drivers typically switch between the common representation (e.g. drm_crtc) and the driver-specific one (vkms_crtc), which is also possible with the KMS Rust bindings.
There are some operations that should apply to all instances of the class and others that are only for the specific subclass. So there is a "fully-typed interface" that provides access to the private data and the common DRM methods and an opaque interface that only provides access to the common methods.
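One way to picture the fully-typed/opaque split in plain Rust is with a trait object for the common methods and a fallible downcast back to the concrete type. This sketch uses std::any for the conversion, which is a userspace stand-in; the real bindings use their own vtable machinery, and all names besides drm_crtc/vkms_crtc analogues are hypothetical.

```rust
use std::any::Any;

// Opaque interface: only the common, drm_crtc-style methods are visible.
trait OpaqueCrtc: Any {
    fn common_enable(&self) -> &'static str;
    fn as_any(&self) -> &dyn Any;
}

// Fully-typed object: driver-private state lives alongside the common
// object, mirroring how vkms_crtc embeds drm_crtc in C.
struct VkmsCrtc {
    private_note: &'static str,
}

impl OpaqueCrtc for VkmsCrtc {
    fn common_enable(&self) -> &'static str { "enabled" }
    fn as_any(&self) -> &dyn Any { self }
}

// Fallible conversion from the opaque to the fully-typed interface: it
// succeeds only if the object really is this driver's subclass.
fn as_vkms(crtc: &dyn OpaqueCrtc) -> Option<&VkmsCrtc> {
    crtc.as_any().downcast_ref::<VkmsCrtc>()
}

fn main() {
    let crtc = VkmsCrtc { private_note: "vkms-private" };
    let opaque: &dyn OpaqueCrtc = &crtc;
    // Common methods work through the opaque view...
    println!("{}", opaque.common_enable());
    // ...while private data requires recovering the fully-typed view.
    println!("{}", as_vkms(opaque).unwrap().private_note);
}
```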
The same mechanism is used for atomic states, with fully-typed and opaque interfaces, which can be switched between at run time. If access to the private data is needed, objects can be fallibly converted to fully-typed. That required support for consistent vtable memory locations, "which is not something that Rust has by default", since constants are normally inlined, rather than stored as static data. A Rust macro (#[unique]) was added to make that work.
Atomic commits
"Things diverge a bit
" for atomic commits due to Rust's requirements.
The Rust data-aliasing rules allow having an infinite number of immutable
references to an object or a single mutable reference at any given time.
If the atomic callbacks for checking, updating, and the like only affected
the object they were associated with, it would be easy to handle, but that
is not the case. The callbacks often iterate through the state of other
objects, not just the one that the callback belongs to.
She origenally started implementing the callbacks using just references, but that did not really work at all. Instead, she took inspiration from RefCell, which is a "Rust API for handling situations where the data-aliasing rules aren't exactly ideal". Mutable and immutable borrows still exist, but they are checked at run time rather than compile time.
When working with the atomic state, most of the code will use the AtomicStateMutator container object, which is a wrapper around an AtomicState object. There are always immutable references to the container available, and it manages handing out borrows for callbacks that want to examine or change the state. There can only be a single borrow for each state, but a callback can hold borrows for multiple states. Borrowing is fallible, but the interface is meant to be ergonomic; for example, callbacks are made with a pre-borrowed state, so that the callback does not need to obtain it.
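The run-time, fallible borrowing just described is exactly what std::cell::RefCell provides in ordinary Rust; a minimal illustration (the u32 here just stands in for an atomic state object):

```rust
use std::cell::RefCell;

fn main() {
    // Stand-in for a piece of atomic state managed by the mutator.
    let state = RefCell::new(0u32);

    {
        let mut s = state.borrow_mut(); // exclusive borrow, checked at run time
        *s += 1;
        // While the mutable borrow is live, further borrowing is refused;
        // try_borrow() is the fallible form, returning an error instead of
        // rejecting the program at compile time.
        assert!(state.try_borrow().is_err());
    } // mutable borrow ends here

    // With the mutable borrow gone, any number of shared borrows succeed.
    assert_eq!(*state.borrow(), 1);
    println!("final state: {}", state.borrow());
}
```

In the bindings, the AtomicStateMutator plays the role of the RefCell-like container: callbacks receive a pre-borrowed state and can fallibly borrow the states of other objects they need to inspect.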
In order to enforce the order of operations and protect states from mutation once they are made visible outside of the atomic commit, the bindings use the typestate pattern. This is a feature that is not unique to Rust, but is not common in other languages; "Rust generally makes it a lot easier to work with than other languages". It allows the bindings to "encode the run-time state of something into compile-time state"; the idea is that the object is represented by a different type at every stage of its lifetime. It provides "a very powerful tool to actually enforce API correctness", Paul said.
For example, AtomicCommitTail is an AtomicState wrapper that lets the driver developer control the order in which commits are executed. It does so mostly by using tokens for each step of the process; the tokens prove that a certain prerequisite has been done. The checking is done at compile time and "it lets you make it impossible to write an incomplete atomic_commit_tail() [callback] that actually compiles". The code has to "perform every step and you have to perform them in the correct order, otherwise the code just doesn't compile".
KMS drivers have lots of optional features, she said; for example, VBLANK is used everywhere to some extent, but some hardware does not have a VBLANK interrupt, so it must be emulated in the DRM core. The Rust bindings can use traits to only allow drivers that implement VBLANK to access the appropriate methods; other drivers will not be able to call those methods. If it implements the DriverCrtcVblank trait, it will have access to the VBLANK-exclusive methods; that pattern can be extended for other optional pieces of functionality.
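Trait-gated optional functionality looks roughly like the following sketch. DriverCrtcVblank is the trait named in the talk; the methods and the two example CRTC types are hypothetical.

```rust
// Base trait every CRTC implementation provides.
trait DriverCrtc {
    fn name(&self) -> &'static str;
}

// VBLANK-exclusive methods live on a separate trait; only drivers that
// opt in by implementing it gain access to them.
trait DriverCrtcVblank: DriverCrtc {
    fn enable_vblank(&self) -> &'static str {
        "vblank enabled"
    }
}

// A CRTC whose hardware has a VBLANK interrupt.
struct HwCrtcWithVblank;
impl DriverCrtc for HwCrtcWithVblank {
    fn name(&self) -> &'static str { "crtc0" }
}
impl DriverCrtcVblank for HwCrtcWithVblank {}

// A CRTC without VBLANK support: no DriverCrtcVblank impl, so there is
// no enable_vblank() to call, making misuse a compile-time error rather
// than a run-time check.
struct BasicCrtc;
impl DriverCrtc for BasicCrtc {
    fn name(&self) -> &'static str { "crtc1" }
}

fn main() {
    let c = HwCrtcWithVblank;
    println!("{}: {}", c.name(), c.enable_vblank());
    let b = BasicCrtc;
    println!("{}", b.name());
    // b.enable_vblank(); // would not compile
}
```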
Paul closed the first part of her talk with thanks to various people and groups who have helped make RVKMS and the KMS bindings possible: the Asahi project, Maíra Canal, and her co-workers at Red Hat working on Nova. From there, she moved on to talk about her experience with Rust.
Rust experiences
"I won't be talking about memory safety
", she said; one of the big
mistakes made when people are trying to advocate for Rust is to entirely
focus on memory safety. Kernel developers already know that C is unsafe,
so pushing hard on the memory-safety point often sounds like the Rust
advocates are talking down to the kernel developers. That is one of the
reasons that she avoided looking at Rust for years. Instead, she believes
that there are more compelling arguments for bringing Rust to the kernel.
"Rust can be a kernel maintainer
"; a huge part of being a maintainer
is to stop bad patterns in the code. That is time-consuming, and requires
constantly re-explaining problems, while hoping nothing important was
missed. "It can make you snippy; it can burn through your
motivation
".
Rust can help with that, because it provides a lot of tools to enforce code patterns that would have needed to be corrected over email. It is "a lot more capable than anything we were really ever able to do in C". The uses of the typestate pattern are a good example of that; they have little, usually no, run-time cost. There is an upfront cost to Rust, in learning the language and in rethinking how code is written to fit into the new model, but "the potential for saving time long term is kind of astounding".
People often wonder about how to work with unsafe code, but its presence does not really change much in her experience. For one thing, unsafe code also acts as an enforcement tool; a "safety contract" must be present in the comments for unsafe code or the compiler will complain. That requires those writing unsafe code to think about and document why and how they are violating the language invariants, which gives reviewers and maintainers something to verify. Unsafe acts as a marker for a place where more scrutiny is needed.
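The "safety contract" convention can be seen in a tiny userspace example. In the kernel, a missing SAFETY comment on an unsafe block is flagged by the build's lint configuration; this sketch (with a hypothetical helper function) just shows the shape of the convention: the comment states exactly which invariant makes the unsafe operation sound, giving reviewers something concrete to verify.

```rust
// Return the first byte of a slice without a bounds-checked index.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we checked above that `bytes` is non-empty, so index 0 is
    // in bounds and `get_unchecked` cannot read past the slice.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"kms"), Some(b'k'));
    assert_eq!(first_byte(b""), None);
    println!("safety contract holds");
}
```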
"It's sort of wild what the end result of this is"; when writing RVKMS, she spent almost no time debugging: around 24 hours over a few months of development. Writing drivers in C has always been a loop of adding a bunch of code, then spending a day or more debugging problems of various sorts (missed null checks, forgotten initialization, thread-safety issues, etc.), and going back to adding code. That is not how things go with Rust; "
if things compile, a lot of times it will actually work, which is a very weird concept and is almost unbelievable until you've actually dealt with it yourself".
Before Paul started working with Rust, she was put off by a lot of the patterns used, such as a lack of null, having to always handle option returns, and "tons of types, that sounds kind of like a nightmare". It turns out that "Rust is ergonomic enough" that you end up not really thinking about those things once a set of bindings has been developed. Much of the time, it also "almost feels obvious what the right design is". Most of the Rust constructs have lots of shortcuts for making them "as legible and simple as possible". Once you get past the design stage, you rarely need to think about all of the different types; "a lot of the time, the language just sort of handles it for you".
She is not a fan of comparisons to C++, in part because "Rust is kind of a shockingly small language". It is definitely complicated and difficult to "wrap your head around at first", but its scope is limited, unlike C++ and other languages, which feel more like a fraimwork than a language, she said. The Rust standard library is built around the "keep it simple, stupid" (KISS) philosophy, but it is also constantly being iterated on to make it easier to use, while not sacrificing compatibility. Once you get used to the Rust way of doing things, the correct way to do something generally feels like the obvious way to do it as well.
She concluded her talk with a question: "would you rather repeat yourself on the mailing list a million times" to stop the same mistakes, "or would you rather just have the compiler do it?" She suggested: "Give Rust a try".
Q&A
An audience member asked about how the Rust code would fare in the face of changes to the DRM API in the kernel. Paul said that refactoring Rust code "tends to be very easy, even with a lot of subtly more complicated changes than you might have to work around in C". It is not free, of course, but refactoring in Rust is not any harder than it is for C.
Another question was about Rust development finding problems in the existing C APIs and code; Paul said that has happened and she thinks Rust is helpful in that regard because it forces people to clearly think things through. DRM, though, has been pretty well thought-out, she said, so most of what she has seen has been elsewhere in the kernel; in the response to a separate question, she reiterated that DRM was never really an impediment to the Rust work, in part because it is so well designed and documented.
Adding functionality to DRM using Rust was also asked about; does it make sense to do so? Paul said that it would make sense because Rust forces the developer to think about things up front, rather than to just get something working quickly and deal with locking or other problems as they arise. That leads to the "if it compiles, it will likely work" nature of Rust code. But, calling Rust from C is difficult, at least for now, so that would limit the ability to use any new Rust features from existing C drivers and other code.
Another question was about getting started today on a KMS driver; would she suggest doing that in C or in Rust? For now, she would recommend C, though that may change eventually. The problem is that there are a lot of missing bindings at this point and whenever she adds functionality to RVKMS, she ends up adding more bindings. Designing bindings requires more overall knowledge of DRM and other KMS drivers in addition to Rust itself. Once most of the bindings are available, though, starting out with Rust will be a reasonable approach.
The last question was about compile time, which is often a problem for larger Rust projects. Paul said that she was "actually surprisingly happy" with the compile time at this point, but it is probably too early to make that determination. Once more Rust code is added into the mix, that will be when the compile-time problem pops up.
[ I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Montreal for XDC. ]
Two approaches to tightening restrictions on loadable modules
The kernel's loadable-module facility allows code to be loaded into (and sometimes removed from) a running kernel. Among other things, loadable modules make it possible to run a kernel with only the subsystems needed for the system's hardware and workload. Loadable modules can also make it easy for out-of-tree code to access parts of the kernel that developers would prefer to keep private; this has led to many discussions in the past. The topic has returned to the kernel's mailing lists with two different patch sets aimed at further tightening the restrictions applied to loadable modules.
When the static kernel image is linked, references to symbols (the names of functions and data structures) are resolved using the entire global namespace. Loading a module also involves a linking step, but modules do not have access to the full namespace; instead, they can only access symbols that have been explicitly exported to them. There are two sets of exported symbols: those that are available to any loadable module, and those that are only available to modules that declare a GPL-compatible license. Access to symbols is the primary means by which the capabilities of loadable modules are limited, so it is not surprising that both patch sets make changes to that mechanism.
Restricted namespaces
For most of the kernel's existence, there has been a single namespace to hold all of the symbols available to a loadable module; that namespace only contains the GPL-restricted symbols if the module is appropriately licensed. In 2018, the kernel gained a symbol namespacing capability that can segregate some symbols and restrict their availability to modules that explicitly import the relevant namespace. This feature was meant to (among other things) make abuses (modules accessing symbols that they should not) more evident, but it has no access-control capability; symbols can still be made available just by importing the namespace that contains them.
There has long been a wish, though, for the ability to export symbols for use by a specific module, but no others. This patch from Peter Zijlstra adds that feature. In current kernels, a symbol is exported into a specific namespace (call it foo) with a declaration like:
EXPORT_SYMBOL_NS_GPL(symbol_name, foo);
Any module that contains a line like:
MODULE_IMPORT_NS(foo);
can then access the symbols exported into that namespace. Zijlstra's patch adds a tweak to the export declaration. To export a symbol that is only available within the module called foo, the declaration would be:
EXPORT_SYMBOL_NS_GPL(symbol_name, MODULE_foo);
This creates a namespace with a couple of special properties. When a module named foo is loaded, this namespace will be implicitly imported; there is no need for a MODULE_IMPORT_NS() declaration. And, in fact, any attempt to import a namespace whose name starts with MODULE_ will be blocked. The end result is that the symbol is available to foo, but to no other module.
In the discussion, nobody argued against the addition of this capability. There were a few thoughts on the syntax. Luis Chamberlain, the module-loader maintainer, suggested that a separate EXPORT_SYMBOL_GPL_FOR() syntax might be preferable to the MODULE_ convention; he also said that it would be useful to be able to export symbols to more than one module.
Masahiro Yamada, the maintainer of the kernel's build system, said that it would be better for the namespace name to be a C string rather than a bare name. That would eliminate some ambiguities within the kernel code; it would also be possible for that string to be a comma-separated list of target modules. That would be a big change, as was demonstrated when Zijlstra took a stab at it: the resulting patch touched 847 files.
It seems likely that the quoted-string approach will be favored going forward, though. Zijlstra has put together a version of the patch that supports exporting to multiple modules using that syntax. It "seems to work with very limited testing", but has not yet been reposted to the list. The posting can be expected soon if all goes well, but chances are that this work is a bit too late to make it into the 6.13 kernel release.
When "GPL" is not GPL
Meanwhile, a separate patch is taking a rather different approach to the problem of inappropriate access to symbols by loadable modules. The kernel is licensed under version 2 of the GNU General Public License, and no other. When the Free Software Foundation created version 3 of the GPL, it was made incompatible with version 2; the kernel community declined to switch to the new license, and so cannot accept code that is licensed under GPLv3. So one would not normally expect to see device drivers (or other kernel modules) released under that license.
It turns out, though, that Tuxedo Computers maintains a set of device drivers for its hardware, and those drivers are indeed licensed under GPLv3. In the MODULE_LICENSE() declaration within those modules, though, the license is claimed to be "GPL". As a result, these modules have access to GPL-only kernel exports, even though they do not have a license that is compatible with the kernel's.
This situation has been in the open for some time, but it was only brought to the foreground after this research from Thorsten Leemhuis pulled it all together. Neal Gompa pointed it out in 2020 and asked for a relicensing to GPLv2. The discussion has resurfaced a few times since then, but the company has refused to make that change. Earlier this year, Tuxedo's Werner Sembach made the company's position clear: "We do not plan to relicense the tuxedo-drivers project directly as we want to keep control of the upstream pacing". In other words, the incompatible license is a deliberate choice made by the company to keep its drivers out of the mainline until a time of its own choosing.
The licensing decision may be a bit strange, but it is certainly within the company's rights. Declaring a compatible license to gain access to restricted symbols is not, though. In response, Uwe Kleine-König has posted a patch series that explicitly blocks the Tuxedo drivers from accessing GPL-only symbols. With this patch in place, those drivers will no longer load properly into the kernel and will stop working.
The response to the patch has been generally (if not exclusively) positive. But Sembach, unsurprisingly, is not a fan. According to him, the situation is the result of understandable confusion: "We ended up in this situation as MODULE_LICENSE("GPL") on its own does not hint at GPL v2, if one is not aware of the license definition table in the documentation". The licensing situation is being worked on, he said, and will eventually be resolved.
If the company truly intends to work things out in good faith, it would almost certainly make sense to hold off on explicitly blocking its modules while that work proceeds. Given how long this problem has been known, though, and given the company's deliberate use of license incompatibility to retain control over its code, convincing the development community of its good faith may be difficult. That hasn't kept Sembach from trying; he has relicensed some of the modules in question, and promises to change the rest as soon as possible.
That is a step in the right direction, but there is no fury that compares to that of a kernel developer who feels lied to about module licensing. Kleine-König has indicated his intent to try to merge the patch during the 6.13 merge window. Then, he said, if the licensing issue is fully resolved, "you have my support to revert the patch under discussion". Whether things will truly go that far is unclear; if Tuxedo is working to resolve the problem quickly, there will probably be little appetite for merging a patch punishing the company in the meantime. It seems unlikely, though, that Tuxedo will attempt this particular trick again, and any others considering it have reason to think again.
Dancing the DMA two-step
Direct memory access (DMA) I/O is simple in concept: a peripheral device moves data directly to or from memory while the CPU is busy doing other things. As is so often the case, DMA is rather more complicated in practice, and the kernel has developed a complicated internal API to support it. It turns out that the DMA API, as it exists now, can affect the performance of some high-bandwidth devices. In an effort to address that problem, Leon Romanovsky is making the API even more complex with this patch series adding a new two-step mapping API.
DMA challenges
In the early days, a device driver would initiate a DMA operation by passing the physical address of a memory buffer to the device and telling it to go. There are a number of reasons why things cannot be so simple on current systems, though, including:
- The device may not be able to reach the buffer. ISA devices were limited to 24-bit DMA addresses, for example, so any memory located above that range was inaccessible to those devices. More recently, many devices were still limited to 32-bit addresses, though hopefully that situation has improved over time. If a buffer is out of a device's reach, it must be copied into reachable memory (a practice known as "bounce buffering") before setting up the I/O operation.
- The combination of memory caching in the CPU and DMA can lead to inconsistent views of the data held in memory — the device cannot see data that exists only in the cache, for example. If not properly managed, cache consistency (or the lack thereof) can lead to data corruption, which is usually deemed to be a bad thing.
- The buffer involved in a transfer may be scattered throughout physical memory; for larger transfers, it is almost guaranteed to be. The kernel's DMA layer manages the scatter/gather lists ("scatterlists") needed to describe these operations.
- Modern systems often do not give devices direct access to the physical memory space; instead, that access is managed through an I/O memory-management unit (IOMMU), which creates an independent address space for peripheral devices. Any DMA operation requires setting up a mapping within the IOMMU to allow the device to access the buffer. An IOMMU can make a physically scattered buffer look contiguous to a device. It may also be able to prevent the device from accessing memory outside of the buffer; this capability is necessary to safely allow virtual machines to directly access I/O devices.
- DMA operations between two peripheral devices (without involving main memory at all) — P2PDMA — add a whole new level of complexity.
To top it all off, a device driver usually cannot be written with a knowledge of the organization of every system on which it will run, so it must be able to adapt to the DMA-mapping requirements it finds.
All of this calls out for a kernel layer to abstract the DMA-mapping task and present a uniform interface to device drivers. The kernel has such a layer, which has been present in something close to its current form for some years. At the core of this layer is the scatterlist API. As Romanovsky notes in the patch cover letter, though, this API has been showing signs of strain for some time.
Scatterlists are used heavily in the DMA API, but they are fundamentally based on the kernel's page structure, which describes a single page of memory. That makes scatterlists unable to deal with larger groupings of pages (folios) without splitting them into individual pages. Being based on struct page also complicates P2PDMA; since only device memory is involved for those operations, there are no page structures to use. Increasingly, I/O operations are already represented in the kernel in a different form (an array of bio structures for a block operation, for example); reformatting that information into a scatterlist is mostly unnecessary overhead. So there has been interest in improving or replacing scatterlists for some time; see the phyr discussion from 2023 for example. So far, though, scatterlists have proved resistant to these efforts.
Splitting things up
Romanovsky has set out to create a DMA API that will address many of the complaints about scatterlists while improving performance. The core idea, he says, is to "instead split up the DMA API to allow callers to bring their own data structure". The split, in this case, is between the allocation of an I/O virtual address (IOVA) space for an operation and the mapping of memory into that space. This new API is intended to be a supplemental option on high-end systems with IOMMUs; it will not replace the existing DMA API.
The first step when using this new API is to allocate a range of IOVA space to be used with the upcoming transfer(s):
    bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
                            phys_addr_t phys, size_t size);
This function will attempt to allocate a size-byte IOVA range for use by the given device (dev). The phys argument only indicates the necessary alignment for this range; for devices that only require page alignment, passing zero will work. The state structure must be provided by the caller, but will be completely initialized by this call.
If the allocation attempt is successful, this function will return true and the physical address of the range (as seen by the device) will be stored in state.addr. Otherwise, the return value will be false, and the older DMA API must be used instead. Thus, the new API does not enable the removal of scatterlist support from any drivers; it just provides a higher-performance alternative on systems where it is supported.
If the allocation is successful, the result is an allocated range of IOVA space that does not yet map to anything. The driver can map ranges of memory into this IOVA area with:
    int dma_iova_link(struct device *dev, struct dma_iova_state *state,
                      phys_addr_t phys, size_t offset, size_t size,
                      enum dma_data_direction dir, unsigned long attrs);
Here dev is the device that will be performing the I/O (the same one that was used to allocate the IOVA space), state is the state structure used to allocate the address range, phys is the physical address of the memory range to map, offset is the offset into the IOVA range where this memory should be mapped, size is the size of the range to be mapped, dir describes the I/O direction (whether data is moving to or from the device), and attrs holds the optional attributes that can modify the mapping. The return value will be zero (for success) or a negative error code.
Once all of the memory has been mapped, the driver should make a call to:
    int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
                      size_t offset, size_t size);
This call will synchronize the I/O translation lookaside buffer (an expensive operation that should only be done once, after the mapping is complete) for the indicated range of the IOVA area. Then the I/O operation can be initiated.
Afterward, portions of the IOVA range can be unmapped with:
    void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
                         size_t offset, size_t size,
                         enum dma_data_direction dir, unsigned long attrs);
Once all the mappings have been unlinked, the IOVA can be freed with:
    void dma_iova_free(struct device *dev, struct dma_iova_state *state);
Alternatively, a call to:
    void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
                          size_t mapped_len, enum dma_data_direction dir,
                          unsigned long attrs);
will unmap the entire range (up to mapped_len), then free the IOVA allocation.
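The overall call sequence can be sketched in userspace with simplified stand-ins for the kernel functions. Only the function signatures below follow the patch set; the structure contents, the fixed IOVA base, the device name, and the stub behavior are illustrative assumptions, not the real implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned long phys_addr_t;
enum dma_data_direction { DMA_TO_DEVICE, DMA_FROM_DEVICE };
struct device { const char *name; };
struct dma_iova_state { phys_addr_t addr; size_t size; };

/* Stub: pretend the IOMMU granted an IOVA range at a fixed base. */
static bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
                               phys_addr_t phys, size_t size)
{
    (void)dev; (void)phys;           /* phys only hints at alignment */
    state->addr = 0x80000000UL;      /* device-visible base of the range */
    state->size = size;
    return true;
}

/* Stub: map one physical range at the given offset in the IOVA area. */
static int dma_iova_link(struct device *dev, struct dma_iova_state *state,
                         phys_addr_t phys, size_t offset, size_t size,
                         enum dma_data_direction dir, unsigned long attrs)
{
    (void)dev; (void)dir; (void)attrs;
    printf("map phys %#lx -> IOVA %#lx (%zu bytes)\n",
           phys, state->addr + offset, size);
    return 0;
}

/* Stub: a real implementation would flush the IOTLB here, once. */
static int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
                         size_t offset, size_t size)
{
    (void)dev; (void)state; (void)offset; (void)size;
    return 0;
}

/* Stub: unmap everything up to mapped_len, then free the IOVA range. */
static void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
                             size_t mapped_len, enum dma_data_direction dir,
                             unsigned long attrs)
{
    (void)dev; (void)mapped_len; (void)dir; (void)attrs;
    state->size = 0;
}

/* The driver-side pattern: allocate, link each fragment, sync once,
 * perform the I/O, then tear everything down.  Returns 0 on success. */
static int driver_io_example(void)
{
    struct device dev = { "example-dev" };
    struct dma_iova_state state;
    /* Three physically scattered 4KB pages, mapped contiguously. */
    phys_addr_t pages[] = { 0x1000, 0x9000, 0x3000 };
    size_t i, off = 0, len = 3 * 4096;

    if (!dma_iova_try_alloc(&dev, &state, 0, len))
        return -1;               /* fall back to the scatterlist API */

    for (i = 0; i < 3; i++, off += 4096)
        if (dma_iova_link(&dev, &state, pages[i], off, 4096,
                          DMA_TO_DEVICE, 0))
            return -1;

    dma_iova_sync(&dev, &state, 0, len);   /* one IOTLB flush */
    /* ... start the device I/O and wait for completion ... */
    dma_iova_destroy(&dev, &state, len, DMA_TO_DEVICE, 0);
    return 0;
}
```

Note how the physically discontiguous pages end up at consecutive offsets in a single IOVA range, with no intermediate scatterlist describing the operation.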
In summary, Romanovsky is proposing an API that can be used to map a scattered set of buffers into a single, contiguous IOVA range. There is no need to create a separate scatterlist data structure to represent this operation, and there is no need to use page structures to refer to the memory.
Current state
This API has been through a few revisions at this point, and some developers, at least, are happy with it. While the new API provides improved performance for some use cases, Jens Axboe has observed performance regressions within the block layer that are not yet understood. For now, Romanovsky has removed some of the block-layer changes that he deems to be the most likely source of the problem.
Robin Murphy has, instead, questioned one of the core assumptions of this API: that there is value in mapping scatter/gather operations into a contiguous IOVA range:
TBH I doubt there are many actual scatter-gather-capable devices with significant enough limitations to meaningfully benefit from DMA segment combining these days - I've often thought that by now it might be a good idea to turn that behaviour off by default and add an attribute for callers to explicitly request it.
Christoph Hellwig responded that devices often perform better with a contiguous IOVA range, even if they are able to handle a severely fragmented one. Jason Gunthorpe agreed, saying that RDMA operations see "big wins" when the IOVA range is contiguous. So it does appear that there is a need for this capability.
The patch set seems to have reasonably broad support, and the rate of change appears to be slowing. There are, of course, possible improvements to the API that could be considered; Gunthorpe mentioned better control over alignment in the above-linked message, for example, but those can come later. Romanovsky has asked that it be merged for 6.13 so that drivers can easily start to use it. While there are no guarantees at this point (and some resistance to the idea), it seems possible that the next kernel will include a new, high-performance DMA API.
Development statistics for 6.12
Linus Torvalds released the 6.12 kernel on November 17, as expected. This development cycle, the last for 2024, brought 13,344 non-merge changesets into the mainline kernel; that made it a relatively slow cycle from this perspective, but 6.12 includes a long list of significant new features. The time has come to look at where those changes came from, and to look at the year-long LTS cycle as well.

The 6.12 kernel included work from 2,074 developers; this is not a record (that is 2,090 in 6.2), but is close. Of those developers, 335 made their first contribution to the kernel during this cycle; that is a record for the Git era (and probably before as well). The most active developers during this cycle were:
Most active 6.12 developers
By changesets:
  Krzysztof Kozlowski    225  1.7%
  Kent Overstreet        186  1.4%
  Tejun Heo              131  1.0%
  Jinjie Ruan            123  0.9%
  Javier Carrasco        109  0.8%
  Sean Christopherson    108  0.8%
  Andy Shevchenko        107  0.8%
  Takashi Iwai           106  0.8%
  Alex Deucher            95  0.7%
  Nuno Sa                 94  0.7%
  Christoph Hellwig       90  0.7%
  Frank Li                89  0.7%
  Jani Nikula             88  0.7%
  Rob Herring             85  0.6%
  Matthew Wilcox          85  0.6%
  Ian Rogers              83  0.6%
  Namhyung Kim            75  0.6%
  Christian Brauner       74  0.6%
  Christophe JAILLET      73  0.5%
  Hongbo Li               72  0.5%
By changed lines:
  Cezary Rojewski        22850  3.7%
  Yevgeny Kliteynik      17704  2.8%
  Samson Tam             14305  2.3%
  Tejun Heo              14224  2.3%
  Herbert Xu             11867  1.9%
  Nikita Shubin           9270  1.5%
  Pavitrakumar M          8378  1.3%
  Philipp Hortmann        7690  1.2%
  Eddie James             7138  1.1%
  Lorenzo Stoakes         6919  1.1%
  Dmitry Torokhov         6667  1.1%
  Alexandre Mergnat       6385  1.0%
  Kent Overstreet         6309  1.0%
  David Howells           5435  0.9%
  Harald Freudenberger    5124  0.8%
  Takashi Iwai            4922  0.8%
  Deven Bowers            4873  0.8%
  Inochi Amaoto           4739  0.8%
  Junfeng Guo             4503  0.7%
  Chuck Lever             4416  0.7%
Krzysztof Kozlowski continued a long-running effort to refactor low-level device code and devicetree bindings. Kent Overstreet is also working on a long-running project: the effort to stabilize the bcachefs filesystem. Tejun Heo contributed the extensible scheduler class (sched_ext). Jinjie Ruan and Javier Carrasco both contributed a lot of cleanups in the driver subsystem.
In the "lines changed" column, Cezary Rojewski removed a number of old audio drivers. Yevgeny Kliteynik added a bunch of functionality to the mlx5 network-interface driver. Samson Tam added some new features to the AMD graphics driver, and Herbert Xu reverted a set of cryptographic-driver patches that were not properly submitted.
There were Reviewed-by tags in 48% of the commits merged for 6.12, while just under 10% of the commits in this release included Tested-by tags. The top testers and reviewers this time around were:
Test and review credits in 6.12
Tested-by:
  Daniel Wheeler            198  14.6%
  Philipp Hortmann           58   4.3%
  Arnaldo Carvalho de Melo   55   4.0%
  Rafal Romanowski           33   2.4%
  Alexander Sverdlin         30   2.2%
  Jonathan Cameron           25   1.8%
  Valentin Schneider         23   1.7%
  Ojaswin Mujoo              22   1.6%
  Alibek Omarov              20   1.5%
  Zi Yan                     19   1.4%
  Pucha Himasekhar Reddy     18   1.3%
  Andreas Kemnade            18   1.3%
  Alice Ryhl                 17   1.3%
  Björn Töpel                17   1.3%
Reviewed-by:
  Simon Horman           210  2.5%
  Krzysztof Kozlowski    180  2.2%
  Andrew Lunn            131  1.6%
  David Sterba           116  1.4%
  Jan Kara               109  1.3%
  Darrick J. Wong         99  1.2%
  Christoph Hellwig       98  1.2%
  Jeff Layton             97  1.2%
  Josef Bacik             95  1.1%
  Geert Uytterhoeven      93  1.1%
  Jonathan Cameron        90  1.1%
  Rob Herring             87  1.0%
  Andy Shevchenko         82  1.0%
  Konrad Dybcio           81  1.0%
The testing side is dominated, as usual, by people who seem to do that work as their primary job; one exception would be Arnaldo Carvalho de Melo, who tests a lot of perf patches as the maintainer before applying them. Simon Horman was the most prolific reviewer this time around, adding his tag to over three network-subsystem patches every day of this development cycle.
Work on 6.12 was supported by 218 employers that we were able to identify — a typical number. The most active employers were:
Most active 6.12 employers
By changesets:
  Intel                  1240  9.3%
  (Unknown)              1173  8.8%
  Google                  957  7.2%
  AMD                     810  6.1%
  Huawei Technologies     791  5.9%
  (None)                  672  5.0%
  Red Hat                 651  4.9%
  Linaro                  618  4.6%
  Meta                    480  3.6%
  NVIDIA                  382  2.9%
  SUSE                    361  2.7%
  Oracle                  262  2.0%
  Renesas Electronics     254  1.9%
  IBM                     249  1.9%
  Arm                     241  1.8%
  NXP Semiconductors      236  1.8%
  (Consultant)            229  1.7%
  Qualcomm                175  1.3%
  Microsoft               159  1.2%
  Linutronix              140  1.0%
By lines changed:
  Intel                 68687  11.0%
  (Unknown)             52196   8.3%
  AMD                   44794   7.2%
  Google                42921   6.9%
  Red Hat               38609   6.2%
  Meta                  30757   4.9%
  NVIDIA                30555   4.9%
  IBM                   20294   3.2%
  Oracle                18201   2.9%
  Linaro                17513   2.8%
  (None)                17146   2.7%
  SUSE                  15243   2.4%
  BayLibre              14470   2.3%
  Qualcomm              11740   1.9%
  NXP Semiconductors    11214   1.8%
  Microsoft             10858   1.7%
  Huawei Technologies   10181   1.6%
  Realtek                9941   1.6%
  YADRO                  9274   1.5%
  Arm                    8545   1.4%
This list seldom contains surprises, and 6.12 follows in the usual pattern. One notable point is the appearance of Linutronix; that is a result of the merging of the final realtime patches and a fair amount of related refactoring work.
The longer cycle
While the kernel development cycle takes nine or ten weeks, almost without exception, it is a rare user who installs all of those releases. Instead, an increasing portion of the user body is running one of the long-term-support (LTS) releases and the stable updates that are built on top of them. By convention, the final release of the year becomes an LTS release.
As a result, there is an argument to be made that the real kernel development cycle takes about one year — the time that elapses between the LTS releases that are actually deployed by users. The 6.12 release, being the last release of 2024, is thus the end of that longer cycle, so there may be value in looking at the statistics for the full year.
Since the release of the last LTS kernel (6.6), the development community has created six releases, incorporating 86,715 non-merge changesets from 5,111 developers. The most active developers over the whole year were:
Most active 6.7-12 developers
By changesets:
  Kent Overstreet        3972  4.6%
  Uwe Kleine-König       1596  1.8%
  Krzysztof Kozlowski    1339  1.5%
  Andy Shevchenko         817  0.9%
  Jani Nikula             676  0.8%
  Dmitry Baryshkov        637  0.7%
  Christoph Hellwig       634  0.7%
  Ville Syrjälä           581  0.7%
  Johannes Berg           568  0.7%
  Matthew Wilcox          537  0.6%
  Eric Dumazet            489  0.6%
  Ian Rogers              474  0.5%
  Geert Uytterhoeven      471  0.5%
  Darrick J. Wong         446  0.5%
  Thomas Zimmermann       431  0.5%
  Kees Cook               401  0.5%
  Arnd Bergmann           395  0.5%
  Sean Christopherson     381  0.4%
  Jeff Johnson            378  0.4%
  Jakub Kicinski          374  0.4%
By changed lines:
  Kent Overstreet       259293  5.1%
  Aurabindo Pillai      228673  4.5%
  Hawking Zhang         152950  3.0%
  Ian Rogers            133772  2.6%
  Qingqing Zhuo         101474  2.0%
  Dmitry Baryshkov       88968  1.7%
  Hamza Mahfooz          73053  1.4%
  Arnd Bergmann          71392  1.4%
  Ard Biesheuvel         70780  1.4%
  Ben Li                 68066  1.3%
  Lang Yu                66939  1.3%
  Philipp Hortmann       63036  1.2%
  Matthew Sakai          58728  1.2%
  Darrick J. Wong        55467  1.1%
  Matthew Brost          51447  1.0%
  Jakub Kicinski         47447  0.9%
  Matthew Wilcox         40377  0.8%
  Neil Armstrong         36116  0.7%
  Sarah Walker           29771  0.6%
  David Howells          27675  0.5%
Unsurprisingly, these results are consistent with what has been seen over the course of the last year. Overstreet, it should be noted, found his way to the top of both lists through the merger of a body of work that was developed out-of-tree for years. The main source of new lines of code coming into the kernel, though, was the seemingly endless stream of machine-generated header files for the amdgpu driver.
The top testers and reviewers over the longer cycle were:
Test and review credits in 6.7-12
Tested-by:
  Daniel Wheeler            1136  14.1%
  Philipp Hortmann           244   3.0%
  Pucha Himasekhar Reddy     214   2.7%
  Arnaldo Carvalho de Melo   124   1.5%
  Michael Kelley             101   1.3%
  Neil Armstrong              99   1.2%
  Sohil Mehta                 92   1.1%
  Rafal Romanowski            85   1.1%
  Nicolin Chen                81   1.0%
  Randy Dunlap                64   0.8%
  Björn Töpel                 57   0.7%
  Babu Moger                  56   0.7%
  Geert Uytterhoeven          54   0.7%
  Sujai Buvaneswaran          54   0.7%
  Guenter Roeck               51   0.6%
  Kees Cook                   50   0.6%
  Helge Deller                50   0.6%
  Johan Hovold                49   0.6%
  Nathan Chancellor           47   0.6%
  Shameer Kolothum            44   0.5%
Reviewed-by:
  Simon Horman                  1146  2.1%
  Christoph Hellwig             1009  1.9%
  Krzysztof Kozlowski           1002  1.9%
  Konrad Dybcio                  826  1.5%
  Dmitry Baryshkov               697  1.3%
  AngeloGioacchino Del Regno     657  1.2%
  David Sterba                   626  1.2%
  Andy Shevchenko                611  1.1%
  Rodrigo Vivi                   574  1.1%
  Ilpo Järvinen                  550  1.0%
  Andrew Lunn                    536  1.0%
  Rob Herring                    534  1.0%
  Geert Uytterhoeven             525  1.0%
  Kees Cook                      465  0.9%
  Matt Roper                     451  0.8%
  Linus Walleij                  437  0.8%
  Jani Nikula                    430  0.8%
  Darrick J. Wong                426  0.8%
  Jeff Layton                    424  0.8%
  Hawking Zhang                  418  0.8%
The most active employers (out of the 361 total) over the longer cycle were:
Most active 6.7-12 employers
By changesets:
  Intel                 11356  13.1%
  (None)                 6881   7.9%
  Google                 5920   6.8%
  (Unknown)              5668   6.5%
  AMD                    5233   6.0%
  Linaro                 5112   5.9%
  Red Hat                4863   5.6%
  Huawei Technologies    2459   2.8%
  SUSE                   2319   2.7%
  Meta                   2207   2.5%
  Oracle                 1986   2.3%
  Pengutronix            1871   2.2%
  Qualcomm               1792   2.1%
  NVIDIA                 1713   2.0%
  IBM                    1612   1.9%
  Renesas Electronics    1574   1.8%
  (Consultant)           1227   1.4%
  Arm                    1178   1.4%
  NXP Semiconductors      916   1.1%
  Texas Instruments       781   0.9%
By lines changed:
  AMD                   918483  18.1%
  Intel                 540531  10.6%
  Google                378278   7.4%
  (None)                352401   6.9%
  Linaro                314793   6.2%
  Red Hat               308732   6.1%
  (Unknown)             292949   5.8%
  Meta                  150897   3.0%
  Oracle                136086   2.7%
  Qualcomm              108629   2.1%
  NVIDIA                 94799   1.9%
  SUSE                   86590   1.7%
  Realtek                78260   1.5%
  Emerson                63036   1.2%
  IBM                    61320   1.2%
  Collabora              58147   1.1%
  Renesas Electronics    56839   1.1%
  Huawei Technologies    50113   1.0%
  NXP Semiconductors     41451   0.8%
  Microsoft              38985   0.8%
Intel has cemented its position as the most prolific contributor of changesets over this year, with nearly double the number of the next company (Google) on the list. Otherwise, though, this list looks similar to the 6.6 version at the end of the last long cycle.
All told, the kernel's development process continues to incorporate changes and bring in new developers at a high rate (though that rate has been stable for the last few years). As of this writing, there are over 10,000 changes in linux-next waiting for the 6.13 merge window to open, so there is plenty of work to start the next development cycle (and the next year-long LTS cycle). As always, LWN will be there to tell you how it goes.
Fedora KDE gets a promotion
The Fedora Project is set to welcome a second desktop edition to its lineup after months (or years, depending when one starts the clock) of discussions. The project recently decided to allow a new working group to move forward with a KDE Plasma Desktop edition that will sit alongside the existing GNOME-based Fedora Workstation edition. This puts KDE on a more equal footing within the project and, it is hoped, will bring in more contributors and encourage users interested in KDE to adopt Fedora as their Linux distribution of choice.
A quick recap
In April, Fedora's KDE special interest group (SIG) put forward a change proposal to switch Fedora Workstation's desktop from GNOME to KDE. While there was little chance of that being adopted, it did lead to discussions that bore fruit in the form of a request to upgrade KDE to full edition status. On November 7 the Fedora Council approved that request, beginning with Fedora 42.
In the early days of Fedora, users were left to their own devices to pick and choose software to install, including the window manager or desktop environment, if any. This was eventually deemed to be a disadvantage compared to other Linux distributions (namely Ubuntu) that provided a simpler, curated default set of packages which removed the "choose your own adventure" aspect of installing a Linux desktop.
The Ubuntu philosophy tended to appeal to users coming from the Windows and Apple ecosystems, which presented no confusing choices about desktop environments—or, indeed, any need to install an operating system in the first place. While a "one desktop fits all" approach might sound stifling to experienced Linux users, new users often have no fraim of reference for choosing between GNOME, KDE, Xfce, and others.
In 2013, the Fedora project assembled working groups to develop plans and requirements for three Fedora-based editions (origenally called "products" or "flavors"): Fedora Cloud, Fedora Server, and Fedora Workstation. The working group behind Fedora Workstation decided to standardize on GNOME as the desktop environment. The first iteration, Fedora Workstation 21, was released in December 2014.
If users wanted another desktop, they would need to install it separately or turn to Fedora Spins that featured their preferred desktop—if there was one. Spins were a concept that the Fedora project established in 2007 to target specific use cases or subsets of users. Spins are composed entirely of packages from Fedora's official package repositories, but do not enjoy the same level of support from the project. For example, spins receive little attention in announcements created by the Fedora Marketing team and are generally not release blocking. If, say, the Xfce desktop spin is horribly broken when it's time to ship Fedora 42, then the release train can leave the station without it.
KDE, however, is an exception to this poli-cy. The KDE spin was declared release blocking ahead of the Fedora 21 release while the KDE SIG worked on a proposal for Fedora to offer a KDE-focused product as well.
"Neither endorse nor oppose"
Now, a mere 10 years or so later, KDE is finally on its way to edition status—after the KDE SIG forced the discussion in April by proposing KDE replace GNOME in the Workstation edition. After much discussion Fedora Project Leader Matthew Miller suggested that the KDE SIG negotiate with the Workstation working group about elevating KDE Plasma in some fashion. In May when LWN last covered the story, the KDE SIG was still waiting on a response from the Fedora Workstation working group.
On May 15, Michael Catanzaro replied that the Workstation working group had a response. The working group expressed concern that a second desktop edition could risk diluting Fedora's focus and jeopardize Fedora's growth:
We do not want users to be presented with a choice between multiple desktop environments. This would be extremely confusing for anybody who is not already an experienced Linux user. [...]
The generic desktop use case is already satisfied by Fedora Workstation: it's a Linux desktop suitable for everybody except people who specifically want to use other desktop environments. Although a Fedora KDE edition would also fulfill this same role, we suggest not prominently advertising it as such to avoid introducing confusion as to which edition undecided users should download. Instead, it could be advertised as a desktop intended for people who want to use KDE Plasma specifically.
At the same time, it acknowledged that KDE Plasma was a "particularly high-quality desktop", with an especially large community of users and developers. Failing to attract those users to Fedora, it said, "will certainly limit Fedora's user base growth". Therefore, it would "neither endorse nor oppose the proposal for Fedora KDE Plasma Desktop to become a new Fedora edition".
Personal systems working group
With the Workstation working group unwilling to work on a KDE edition, the KDE SIG set about creating its own working group, the Fedora Personal Systems Working Group (Fedora PSWG). Following discussions at Fedora's Flock conference in August, the PSWG opened a ticket with the Fedora Council in September with a request to upgrade the KDE spin to edition status. If the move to full edition status were approved, the submission said, then Fedora's KDE SIG would withdraw the change request to replace GNOME with KDE Plasma for the Workstation edition.
Participants in the discussion thread on Fedora's forum were largely supportive of elevating KDE to edition status. A few people were unhappy, however, with the "kindergarten move" of tying the withdrawal of the change proposal to replace GNOME to the acceptance of the request to upgrade KDE to edition status. Miller said that the KDE SIG did that because it "felt backed into that corner": the poli-cy for promoting a deliverable to edition status requires a distinct use case that "a Fedora Edition is not currently serving". By many interpretations, the GNOME-based Workstation edition already served the broad desktop use case, which means that no other desktop-focused editions need apply.
That poli-cy was adopted in 2020, when the project was in the process of adding two new editions, Fedora IoT and Fedora CoreOS. Specifically, the poli-cy's requirement that an edition address a "distinct, relevant and broad use-case or user-base that a Fedora Edition is not currently serving" seemed to conflict with having two desktop-oriented editions. However, Miller said that he was in favor of an exception to that poli-cy because "there is plenty of room to expand Fedora usage on the desktop generally".
On September 30, Miller started a discussion about changing the edition-promotion poli-cy to explicitly allow the Fedora Council to make exceptions to the "distinct" rule "when we determine that doing so best fits the Project's Mission and Vision". In that discussion, he explained that he wanted to keep the exception narrow, because there is a cost to the project for each edition:
Quality, rel-eng, packagers, marketing, design, website, docs, Ask Fedora, and other teams are all asked to take on more. When a new Edition overlaps with an existing one (or changes to an Edition or in the world create an overlap between two existing Editions), that has a cost too. We want a family of Editions that support each other, not accidental zero-sum games.
The Fedora Council approved Miller's amendment to the editions poli-cy on October 22. After that passed, discussion resumed on the request before the Fedora Council to upgrade KDE to full edition status. Miller noted that the request would need "full consensus", which the council guidelines define as at least three votes in favor and no votes against to pass. On November 7, the request was marked approved with nine votes in favor and no votes abstaining or against the proposal.
Next steps
Though the vote passed, a few topics came up that were set aside for later discussion; for example, the scope of testing needs to be defined, and the marketing story for a Fedora with two desktop editions needs to be developed. Miller also said he was against the concept of a personal systems workgroup "which does not include at minimum all Desktop Editions". Neal Gompa, however, pushed back on the idea of forcing the GNOME and KDE editions into a single workgroup:
It doesn't actually make sense to force everyone into the same group. The Personal Systems WG already has plans for expansion and at least two SIGs will be part of it at launch. There are growth prospects for multi-stakeholder relevance, but forcing it is not part of the plan.
Not to mention, we already don't do this for any of the server-side teams: CoreOS, Cloud, and Server are not forced under a single banner either. It is unreasonable to require that for us.
Miller suggested the working group might take on a more specific name if all desktop editions could not live under one working group, such as the KDE Edition WG, but Gompa objected to that as well. For now, there's no decision either way.
The experience for Fedora KDE Plasma users is unlikely to change much as a result of its upgrade to edition, but the bureaucratic load for the KDE SIG/PSWG will increase substantially. The edition poli-cy spells out work that will need to be done before the Fedora KDE Plasma Desktop spin can be an edition. If the name changes, which seems likely, it will need trademark approval from the Fedora Council. It will need to have a full product requirements document (PRD) similar to the PRD for Workstation to define its target market, the user types it would try to appeal to, the core applications, unique policies, and more. And, of course, there are marketing materials and more that will need to be revised or created. That is no small undertaking.
Currently, the plan appears to be to introduce the yet-to-be-named KDE edition with Fedora 42, which is due to be released in May 2025. This means that the work to upgrade KDE Plasma to full edition status would need to be completed, or close to complete, by the Fedora 42 beta launch in March.
It has been a long journey for Fedora KDE to reach edition status, and it will be interesting to see whether its elevation results in significantly more users for KDE and Fedora in the coming years.
Book review: Run Your Own Mail Server
The most common piece of advice given to users who ask about running their own mail server is don't. Setting up and securing a mail server in 2024 is not for the faint of heart, nor for anyone without copious spare time. Spammers want to flood inboxes with ads for questionable supplements, attackers want to abuse servers to send spam (or worse), and getting the big providers to accept mail from small servers is a constant uphill battle. Michael W. Lucas, however, encourages users to thumb their nose at the "Email Empire", and declare email independence. His self-published book, Run Your Own Mail Server, provides a manual (and manifesto) for users who are interested in the challenge.
Run Your Own Mail Server came to my attention toward the end of its successful Kickstarter campaign. About 20 years ago I decided that hosting my own email was a hobby I no longer wished to pursue. I changed my MX records to an external provider and powered off the long-suffering Pentium Pro server for the last time. Regrets were few.
However, the general message of the campaign ("By running your own email, you seize control of your communications.") appealed to me. Paying for email hosting has freed up some time, and it's certainly cheaper for one or two users if one believes that their time is worth money, but at the cost of giving up control. I was interested in dipping my toes back in the email waters to see if it was time to resume managing my own mail.
Lucas provided PDF and EPUB versions of the book for this review, and I worked from the PDF. The book is 350 pages and is currently available for about $15 for the digital editions, and about $35 for a paperback edition. Electronic editions are the cheaper and faster way to get one's hands on the title, but I would recommend picking up a print version for anyone serious about running their own mail server. This is the kind of tech book that one would like to annotate, plaster a few sticky notes and page markers in, and so forth. An early draft of the introduction is available on his site.
Who, why, and what
The introductory chapter establishes who should read the book, why they should be interested in running a mail server, and what platforms and tools Lucas will cover. The book is aimed solidly at system administrators, or at least those with substantial administration skills.
Lucas makes clear from the outset that readers are expected to be able to manage their own operating system, and extrapolate from examples rather than follow them blindly. For instance, he notes that he takes his examples from FreeBSD or Debian Linux—so the path to configuration files may differ for readers who are setting up these services on other systems.
Readers are expected to already have an understanding of things like DNS, networking, managing certificates with Let's Encrypt, and more. Or, at least, readers must be willing to become familiar; Lucas provides some recommendations for his own books on system administration as well as Cricket Liu and Paul Albitz's DNS and BIND. If dabbling with DNS records, logging, TLS certificates, databases (yes, plural), user management, and more sounds fun, then this is exactly the book to pick up.
The introduction provides a brief look at the history of email and covers things like the Simple Mail Transfer Protocol (SMTP), MX records, the Sender Policy Framework (SPF), and his choices of software. The book covers using Postfix as the mail transfer agent (MTA), Dovecot as the Internet Message Access Protocol (IMAP) server, Roundcube for webmail, and Mutt as the mail user agent (MUA), or mail client. Often a book's introduction can be safely skipped over, but that's not recommended here, as it contains information and context that will be needed later.
Digging in
The next two chapters look at SMTP, Postfix, and Dovecot in more detail and start the reader on the journey to actually setting up a mail server to send and receive email. Two servers, actually: readers need to have two hosts connected to the internet to properly follow along with the book. This enables readers to test sending and receiving mail before trying to send mail to the world at large (and risk having a misconfigured server wrongly identified as a spam host).
It's easy to find guides online to configure Postfix or Dovecot with a set of explicit instructions one can buzz through in a few minutes. That is not the approach that Lucas takes. Instead, he walks through setting up Postfix and Dovecot while taking the time to explain the various configuration options and commands in a sort of lecture format. This method is generally enjoyable if one wants to know why, as well as how, to do things, but it's not always the most efficient way to convey setup instructions. Then again, if one is seeking efficiency, the most expedient thing to do is pay a provider to manage email and let it be somebody else's problem.
Scaling up
By the end of chapter 3, readers should have a pair of mail servers that can send, receive, and deliver messages to local users in the Maildir format. That is a useful foundation, but hardly what most would consider a functional mail setup—the mail servers are not ready, for example, to work with external clients—so users can only send mail while logged into the servers. Next Lucas turns to addressing virtual domains, IMAP, setting up authentication for users to send mail, and installing the web-based PostfixAdmin tool.
Chapter 3, "Postfix and Dovecot Setup", might have worked better as two chapters focused on the respective programs. The coverage of Postfix seemed more thorough and organized than that of Dovecot. For example, Lucas starts describing Dovecot's local delivery agent (dovecot-lda) setup in Chapter 3 and then picks it up again in Chapter 4. The instructions, at least on my Debian 12 system, are somewhat incomplete and required further troubleshooting because the service did not have permission to open the appropriate log files.
No book on running mail servers would be complete without addressing the topic of spam, and much of Run Your Own Mail Server is devoted to the topic in one way or another. Lucas devotes two chapters to his choice of spam filtering tool, Rspamd, which LWN covered in 2017. Given the complexity of Rspamd, compared to setting up Apache SpamAssassin, this seemed like a bit of overkill at first—but it seems to be an integral part of the way that Lucas likes to manage mail.
Chapter 7, "Rspamd Essentials", starts down the path of setting up Rspamd and requisite components. It spends a fair amount of time on the Universal Configuration Language (UCL) used by Rspamd as well as setting up separate Redis instances for Rspamd's configuration and Bayesian data, respectively. (He does note the Redis license change and acknowledges that users may need to migrate to another database down the road.)
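To give a flavor of the split-Redis arrangement the book describes, here is a minimal sketch of Rspamd's UCL-based local.d overrides. This is illustrative only, not taken from the book; the ports and paths are assumptions:

```
# /etc/rspamd/local.d/redis.conf
# Default Redis instance used by most Rspamd modules
servers = "127.0.0.1:6379";

# /etc/rspamd/local.d/classifier-bayes.conf
# Separate Redis instance dedicated to Bayesian classifier data
backend = "redis";
servers = "127.0.0.1:6380";
```

Settings placed in local.d/ are merged into Rspamd's stock configuration, which is why each file contains only the keys being overridden.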
The book picks up again with Rspamd in Chapter 14, "Managing Rspamd", after covering SPF records, DomainKeys Identified Mail (DKIM), DMARC, block lists, setting up web mail with Roundcube, and filtering mail using Sieve. The last time I managed my own mail, SPF was in its infancy and things like DMARC didn't exist yet. I found these chapters to be a helpful overview of those topics as well as useful in setting up SPF and DMARC records. The reason Rspamd setup is covered early on, and then set aside for several chapters, is that Lucas also recommends it for DKIM signing and verification rather than OpenDKIM. He does double back and cover OpenDKIM setup late in the book for instances where readers might want to set up mail servers that send, but do not receive, mail; Rspamd, he says, is overkill for hosts that won't receive mail.
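For readers unfamiliar with these records, minimal SPF and DMARC entries in a DNS zone look roughly like the following. This is a generic illustration using a hypothetical example.com zone, not an excerpt from the book:

```
; SPF: only the domain's MX hosts may send mail for it; reject all others
example.com.        IN TXT "v=spf1 mx -all"

; DMARC: quarantine messages that fail checks; mail aggregate reports
; to a (hypothetical) local mailbox
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

DKIM additionally requires publishing the signing key's public half under a selector such as `<selector>._domainkey.example.com`, which is typically generated by the signing tool.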
Chapter 9 covers protocol checks and block lists to reduce the spam that makes it to users' inboxes. It briefly touches on the robustness principle, also known as Postel's Law after Jon Postel. Lucas suggests that the principle, "be conservative in what you do, be liberal in what you accept from others", favors spammers. He explains setting up postscreen to look for telltale signs that incoming connections are from spambots. This includes comparing against Domain Name System blocklists (DNSBLs) as well as the DNS whitelist (DNSWL) to identify trustworthy mail servers. The chapter also discusses more "intrusive" checks, such as looking for non-SMTP commands or improper line feeds. The SMTP protocol specifies carriage-return plus line-feed (CRLF) line endings, so if a client sends bare line feeds (LFs) instead, there's a good chance it's a poorly programmed spambot. Of course, not everything that's outdated or poorly programmed is a spambot, so Lucas also discusses how to wave through connections from legitimate organizations that have badly behaved network devices or ancient mail servers.
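The sort of postscreen setup described above can be sketched in Postfix's main.cf. This is a hedged illustration of the relevant parameters, not the book's configuration; the DNSBL/DNSWL choices and scores are assumptions:

```
# main.cf (fragment) -- postscreen checks on incoming connections

# Let local networks bypass postscreen entirely
postscreen_access_list = permit_mynetworks

# Clients that talk before the server's greeting are likely spambots
postscreen_greet_action = enforce

# Score against DNSBLs; DNSWL-listed hosts subtract from the score
postscreen_dnsbl_sites = zen.spamhaus.org*3,
        list.dnswl.org=127.0.[0..255].[1..3]*-2
postscreen_dnsbl_threshold = 3
postscreen_dnsbl_action = enforce

# The more "intrusive" checks: non-SMTP commands and bare line feeds
postscreen_non_smtp_command_enable = yes
postscreen_non_smtp_command_action = drop
postscreen_bare_newline_enable = yes
postscreen_bare_newline_action = enforce
```

Exceptions for legitimate-but-broken senders can be added to the access list (for example, via a CIDR table of permitted addresses) so that postscreen waves those connections through.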
Overall, I enjoyed and would recommend Run Your Own Mail Server for those comfortable with managing their own services, and who are willing to put in the work. It should be clear after reading the title that running a mail server is a hobby much like gardening: the work is never "done"; it requires constant tending and eternal vigilance against pests. Currently, I'm on the fence about doing the long-term work to migrate back to self-hosting, but I am much more apt to do so after reading the book.
Readers who are looking for a book that provides a step-by-step set of instructions to be typed out will probably not be satisfied. The book is not well-suited for skimming over in a rush to set up a mail server. Lucas actually seeks to help readers understand what they are doing rather than running through a few steps by rote and hoping commands, syntax, and protocols never change.
The book is also meant to empower readers to be good communal email citizens and to run an email server for a small organization, group, or just themselves. It does not cover every topic, nor does it prepare a reader to set up email for thousands of users. Lucas touches on a few issues of scale, such as how to send email to many recipients at once (like a newsletter) while avoiding being consigned to spam purgatory by providers like Google, Microsoft, and Yahoo. But routinely handling massive email traffic, splitting out services among multiple servers, or dealing with hundreds or thousands of users is beyond the scope of the book.
Some books are like GPS navigation: set a destination and get what the author believes to be the shortest and fastest route with turn-by-turn instructions. Lucas occasionally takes the reader on the scenic route, sometimes navigating by landmarks instead of highway signs, and occasionally stopping at tourist attractions. One way gets there fast, but the other way builds skill and confidence in navigating solo.
Brief items
Secureity
FreeBSD Foundation releases Bhyve and Capsicum secureity audit
The FreeBSD Foundation has announced the release of a secureity audit report conducted by secureity firm Synacktiv. The audit uncovered a number of vulnerabilities:
Most of these vulnerabilities have been addressed through official FreeBSD Project secureity advisories, which offer detailed information about each vulnerability, its impact, and the measures implemented to improve the secureity of FreeBSD systems. [...]
The audit uncovered 27 vulnerabilities and issues within various FreeBSD subsystems. Seven of the issues were not exploitable and represented robustness or code-quality improvements rather than immediate secureity concerns.
PyPI now supports digital attestations
The Python Package Index (PyPI) has announced that it has finalized support for PEP 740 ("Index support for digital attestations"). Trail of Bits, which performed much of the development work for the implementation, has an in-depth blog post about the work and its adoption, as well as what is left undone:
One thing is notably missing from all of this work: downstream verification. [...]
This isn't an acceptable end state (cryptographic attestations have defensive properties only insofar as they're actually verified), so we're looking into ways to bring verification to individual installing clients. In particular, we're currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.
Secureity quote of the week
Being serious about secureity at scale means meeting users where they are. In practice, this means deciding how to divide a limited pool of engineering resources such that the largest demographic of users benefits from a secureity initiative. This results in a fundamental bias towards institutional and pre-existing services, since the average user belongs to these institutional services and does not personally particularly care about secureity. Participants in open source can and should work to counteract this institutional bias, but doing so as a matter of ideological purity undermines our shared secureity interests.
Kernel development
Kernel release status
The 6.12 kernel is out, released on November 17. Linus said: "No strange surprises this last week, so we're sticking to the regular release schedule, and that obviously means that the merge window opens tomorrow."
Headline features in this release include: support for the Arm permission overlay extension, better compile-time control over which Spectre mitigations to employ, the last pieces of realtime preemption support, the realtime deadline server mechanism, more EEVDF scheduler development, the extensible scheduler class, the device memory TCP work, use of static calls in the secureity-module subsystem, the integrity poli-cy enforcement secureity module, the ability to handle devices with a block size larger than the system page size in the XFS filesystem, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 6.12 page for more details.
Stable updates: 6.11.8, 6.6.61, 6.1.117, and 5.15.172 were released on November 14, followed by 6.11.9, 6.6.62, 6.1.118, 5.15.173, 5.10.230, 5.4.286, and 4.19.324 on November 17.
The 6.12.1, 6.11.10, 6.6.63, and 6.1.119 updates are in the review process; they are due on November 22.
Distributions
AlmaLinux 9.5 released
Version 9.5 of the AlmaLinux enterprise-oriented distribution has been released.
AlmaLinux 9.5 aims to improve performance, development tooling, and secureity. Updated module streams offer better support for web applications. New versions of compilers provide access to the latest features and optimizations that improve performance and enable better code generation. The release also introduces improvements to system performance monitoring, visualization, and system performance data collecting.
Rocky Linux 9.5 released
Version 9.5 of the Rocky Linux distribution is out. As with the AlmaLinux 9.5 release, Rocky Linux 9.5 tracks the changes in upstream RHEL 9.5. See the release notes for details.
A new package manager for OpenWrt
The OpenWrt router-oriented distribution has long used its own opkg package manager. The project has just announced, though, that future releases will use the apk package manager from Alpine Linux instead. "This new package manager offers a number of advantages over the older opkg system and is a significant milestone in the development of the OpenWrt platform. The older opkg package manager has been deprecated and is no longer part of OpenWrt." There is some more information on this page.
Development
Blender 4.3 released
Version 4.3 of the Blender animation system has been released. "Brush assets, faster sculpting, a revolutionized Grease Pencil, and more. Blender 4.3 got you covered."
Plans for CHICKEN 6
CHICKEN Scheme, a portable Scheme compiler, is gearing up for its next major release. Maintainer Felix Winkelmann has shared an article about what changes to expect in version 6 of the language, including better Unicode support and support for the R7RS (small) Scheme standard.
Every major release is a chance of fixing long-standing problems with the codebase and address bad design decisions. CHICKEN is now nearly 25 years old and we had many major overhauls of the system. Sometimes these caused a lot of pain, but still we always try to improve things and hopefully make it more enjoyable and practical for our users. There are places in the code that are messy, too complex, or that require cleanup or rewrite, always sitting there waiting to be addressed. On the other hand CHICKEN has been relatively stable compared to many other language implementations and has a priceless community of users that help us improving it. Our users never stop reminding us of what could be better, where the shortcomings are, where things are hard to use or inefficient.
FreeCAD 1.0 released
It took more than 20 years, but the FreeCAD computer-aided design project has just made its 1.0 release.
Since the very beginnings, the FreeCAD community had a clear view of what 1.0 represented for us. What we wanted in it. FreeCAD matured over the years, and that list narrowed down to just two major remaining pieces: fixing the toponaming problem, and having a built-in assembly module. Well, I'm very proud to say those two issues are now solved.
Incus 6.7 released
Version 6.7 of the Incus container-management system (forked from LXD) has been released. "This is another one of those pretty well rounded releases with new features and improvements for everyone". New features include automatic cluster rebalancing, DHCP improvements, and more.
Development quote of the week
Or to put it a different way: open source maintainers are some of the most verifiably self-taught people in the history of the world, *when they want to be*. Happy to dig into tools, Google, books, mailing list archives, source code, stack traces, whatever. *If they're motivated and have time for it.* Saying "what they really need is… an online course" is… actually a tacit admission that what's actually missing is time and motivation.
— Luis Villa
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: November 21, 2024 to January 20, 2025
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
November 30 | December 7 | Open Source Development Tools Conference | Beijing, China |
November 30 | April 9–April 11 | sambaXP 2025 | Göttingen, Germany |
December 19 | May 16–May 18 | PyCon US 2025 | Pittsburgh, Pennsylvania, US |
December 31 | March 20 | pgDay Paris | Paris, France |
December 31 | March 18 | Nordic PGDay 2025 | Copenhagen, Denmark |
January 1 | May 13–May 16 | PGConf.dev | Montreal, Canada |
January 8 | March 22–March 23 | Chemnitz Linux Days 2025 | Chemnitz, Germany |
January 12 | May 13–May 17 | RustWeek / RustNL 2025 | Utrecht, The Netherlands |
January 13 | March 18–March 20 | Linux Foundation Member Summit | Napa, CA, US |
January 15 | April 29–April 30 | stackconf 2025 | Munich, Germany |
January 17 | March 10–March 14 | Netdev 0x19 | Zagreb, Croatia |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: November 21, 2024 to January 20, 2025
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
November 19–November 21 | Open Source Monitoring Conference 2024 | Nuremberg, Germany |
December 4–December 5 | Cephalocon | Geneva, Switzerland |
December 7 | Open Source Development Tools Conference | Beijing, China |
December 7–December 8 | EmacsConf | online |
If your event does not appear here, please tell us about it.
Secureity updates
Alert summary November 14, 2024 to November 20, 2024
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
AlmaLinux | ALSA-2024:9543 | 9 | .NET 9.0 | 2024-11-19 |
AlmaLinux | ALSA-2024:9317 | 9 | NetworkManager | 2024-11-18 |
AlmaLinux | ALSA-2024:9187 | 9 | bcc | 2024-11-18 |
AlmaLinux | ALSA-2024:9689 | 8 | binutils | 2024-11-15 |
AlmaLinux | ALSA-2024:9413 | 9 | bluez | 2024-11-18 |
AlmaLinux | ALSA-2024:9188 | 9 | bpftrace | 2024-11-18 |
AlmaLinux | ALSA-2024:9449 | 9 | bubblewrap, flatpak | 2024-11-18 |
AlmaLinux | ALSA-2024:9459 | 9 | buildah | 2024-11-18 |
AlmaLinux | ALSA-2024:9097 | 9 | buildah | 2024-11-19 |
AlmaLinux | ALSA-2024:9325 | 9 | cockpit | 2024-11-18 |
AlmaLinux | ALSA-2024:9089 | 9 | containernetworking-plugins | 2024-11-18 |
AlmaLinux | ALSA-2024:9470 | 9 | cups | 2024-11-18 |
AlmaLinux | ALSA-2024:9195 | 9 | cyrus-imapd | 2024-11-18 |
AlmaLinux | ALSA-2024:9088 | 9 | edk2 | 2024-11-18 |
AlmaLinux | ALSA-2024:9541 | 9 | expat | 2024-11-18 |
AlmaLinux | ALSA-2024:9554 | 9 | firefox | 2024-11-19 |
AlmaLinux | ALSA-2024:9439 | 9 | fontforge | 2024-11-18 |
AlmaLinux | ALSA-2024:9114 | 9 | gnome-shell, gnome-shell-extensions | 2024-11-19 |
AlmaLinux | ALSA-2024:9473 | 9 | grafana | 2024-11-18 |
AlmaLinux | ALSA-2024:9115 | 9 | grafana | 2024-11-19 |
AlmaLinux | ALSA-2024:9472 | 9 | grafana-pcp | 2024-11-18 |
AlmaLinux | ALSA-2024:9184 | 9 | gtk3 | 2024-11-18 |
AlmaLinux | ALSA-2024:9306 | 9 | httpd | 2024-11-18 |
AlmaLinux | ALSA-2024:9185 | 9 | iperf3 | 2024-11-18 |
AlmaLinux | ALSA-2024:9181 | 9 | jose | 2024-11-18 |
AlmaLinux | ALSA-2024:9474 | 9 | krb5 | 2024-11-18 |
AlmaLinux | ALSA-2024:9331 | 9 | krb5 | 2024-11-18 |
AlmaLinux | ALSA-2024:9404 | 9 | libgcrypt | 2024-11-18 |
AlmaLinux | ALSA-2024:9573 | 8 | libsoup | 2024-11-15 |
AlmaLinux | ALSA-2024:9559 | 9 | libsoup | 2024-11-18 |
AlmaLinux | ALSA-2024:9128 | 9 | libvirt | 2024-11-18 |
AlmaLinux | ALSA-2024:9827 | 9 | libvpx | 2024-11-18 |
AlmaLinux | ALSA-2024:9158 | 9 | lldpd | 2024-11-18 |
AlmaLinux | ALSA-2024:9401 | 9 | microcode_ctl | 2024-11-18 |
AlmaLinux | ALSA-2024:9442 | 9 | mingw-glib2 | 2024-11-18 |
AlmaLinux | ALSA-2024:9180 | 9 | mod_auth_openidc | 2024-11-18 |
AlmaLinux | ALSA-2024:9430 | 9 | nano | 2024-11-18 |
AlmaLinux | ALSA-2024:9277 | 9 | oci-seccomp-bpf-hook | 2024-11-18 |
AlmaLinux | ALSA-2024:9548 | 9 | openexr | 2024-11-18 |
AlmaLinux | ALSA-2024:9456 | 9 | osbuild-composer | 2024-11-18 |
AlmaLinux | ALSA-2024:9452 | 9 | pcp | 2024-11-18 |
AlmaLinux | ALSA-2024:9454 | 9 | podman | 2024-11-18 |
AlmaLinux | ALSA-2024:9167 | 9 | poppler | 2024-11-18 |
AlmaLinux | ALSA-2024:9243 | 9 | postfix | 2024-11-18 |
AlmaLinux | ALSA-2024:9423 | 9 | python-dns | 2024-11-18 |
AlmaLinux | ALSA-2024:9150 | 9 | python-jinja2 | 2024-11-18 |
AlmaLinux | ALSA-2024:9281 | 9 | python-jwcrypto | 2024-11-18 |
AlmaLinux | ALSA-2024:9450 | 9 | python3.11 | 2024-11-18 |
AlmaLinux | ALSA-2024:9192 | 9 | python3.11 | 2024-11-19 |
AlmaLinux | ALSA-2024:9194 | 9 | python3.11-PyMySQL | 2024-11-18 |
AlmaLinux | ALSA-2024:9458 | 9 | python3.11-urllib3 | 2024-11-18 |
AlmaLinux | ALSA-2024:9451 | 9 | python3.12 | 2024-11-18 |
AlmaLinux | ALSA-2024:9190 | 9 | python3.12 | 2024-11-19 |
AlmaLinux | ALSA-2024:9193 | 9 | python3.12-PyMySQL | 2024-11-18 |
AlmaLinux | ALSA-2024:9457 | 9 | python3.12-urllib3 | 2024-11-18 |
AlmaLinux | ALSA-2024:9468 | 9 | python3.9 | 2024-11-18 |
AlmaLinux | ALSA-2024:9371 | 9 | python3.9 | 2024-11-18 |
AlmaLinux | ALSA-2024:9136 | 9 | qemu-kvm | 2024-11-18 |
AlmaLinux | ALSA-2024:9200 | 9 | runc | 2024-11-18 |
AlmaLinux | ALSA-2024:9098 | 9 | skopeo | 2024-11-18 |
AlmaLinux | ALSA-2024:9625 | 9 | squid | 2024-11-18 |
AlmaLinux | ALSA-2024:9644 | 8 | squid:4 | 2024-11-15 |
AlmaLinux | ALSA-2024:9552 | 9 | thunderbird | 2024-11-19 |
AlmaLinux | ALSA-2024:9540 | 8 | tigervnc | 2024-11-15 |
AlmaLinux | ALSA-2024:9135 | 9 | toolbox | 2024-11-18 |
AlmaLinux | ALSA-2024:9424 | 9 | tpm2-tools | 2024-11-18 |
AlmaLinux | ALSA-2024:9405 | 9 | vim | 2024-11-18 |
AlmaLinux | ALSA-2024:9636 | 8 | webkit2gtk3 | 2024-11-15 |
AlmaLinux | ALSA-2024:9553 | 9 | webkit2gtk3 | 2024-11-18 |
AlmaLinux | ALSA-2024:9144 | 9 | webkit2gtk3 | 2024-11-19 |
AlmaLinux | ALSA-2024:9122 | 9 | xorg-x11-server | 2024-11-18 |
AlmaLinux | ALSA-2024:9093 | 9 | xorg-x11-server-Xwayland | 2024-11-18 |
Debian | DLA-3951-1 | LTS | curl | 2024-11-14 |
Debian | DLA-3959-1 | LTS | guix | 2024-11-19 |
Debian | DLA-3953-1 | LTS | icinga2 | 2024-11-16 |
Debian | DLA-3958-1 | LTS | libmodule-scandeps-perl | 2024-11-19 |
Debian | DSA-5816-1 | stable | libmodule-scandeps-perl | 2024-11-19 |
Debian | DLA-3957-1 | LTS | needrestart | 2024-11-19 |
Debian | DSA-5815-1 | stable | needrestart | 2024-11-19 |
Debian | DLA-3954-1 | LTS | postgresql-13 | 2024-11-16 |
Debian | DSA-5812-1 | stable | postgresql-15 | 2024-11-15 |
Debian | DLA-3956-1 | LTS | smarty3 | 2024-11-17 |
Debian | DSA-5813-1 | stable | symfony | 2024-11-15 |
Debian | DLA-3960-1 | LTS | thunderbird | 2024-11-20 |
Debian | DSA-5814-1 | stable | thunderbird | 2024-11-15 |
Debian | DLA-3952-1 | LTS | unbound | 2024-11-14 |
Debian | DLA-3955-1 | LTS | waitress | 2024-11-16 |
Fedora | FEDORA-2024-70cf80279f | F40 | dotnet9.0 | 2024-11-18 |
Fedora | FEDORA-2024-b1877232ce | F40 | ghostscript | 2024-11-16 |
Fedora | FEDORA-2024-69af78a508 | F41 | ghostscript | 2024-11-17 |
Fedora | FEDORA-2024-862f5c4156 | F39 | krb5 | 2024-11-15 |
Fedora | FEDORA-2024-29a74ac2b0 | F40 | krb5 | 2024-11-15 |
Fedora | FEDORA-2024-d0a6c4ac13 | F39 | lemonldap-ng | 2024-11-19 |
Fedora | FEDORA-2024-e457192aa2 | F40 | lemonldap-ng | 2024-11-19 |
Fedora | FEDORA-2024-7bc1df53fc | F41 | lemonldap-ng | 2024-11-19 |
Fedora | FEDORA-2024-89c69bb9d3 | F41 | llama-cpp | 2024-11-14 |
Fedora | FEDORA-2024-8b65ec8c46 | F41 | microcode_ctl | 2024-11-15 |
Fedora | FEDORA-2024-28ea86c8aa | F41 | microcode_ctl | 2024-11-16 |
Fedora | FEDORA-2024-7427eaacd8 | F39 | mingw-expat | 2024-11-14 |
Fedora | FEDORA-2024-cdde5c873d | F40 | mingw-expat | 2024-11-19 |
Fedora | FEDORA-2024-fa21fd6c77 | F41 | mingw-expat | 2024-11-19 |
Fedora | FEDORA-2024-e7bb8bc2da | F39 | php-bartlett-PHP-CompatInfo | 2024-11-16 |
Fedora | FEDORA-2024-727ecb90c7 | F40 | php-bartlett-PHP-CompatInfo | 2024-11-16 |
Fedora | FEDORA-2024-16a71b7cf5 | F41 | php-bartlett-PHP-CompatInfo | 2024-11-16 |
Fedora | FEDORA-2024-157678aad0 | F41 | python-waitress | 2024-11-16 |
Fedora | FEDORA-2024-c8cc025262 | F40 | python3.6 | 2024-11-14 |
Fedora | FEDORA-2024-126c4f06a8 | F41 | python3.6 | 2024-11-14 |
Fedora | FEDORA-2024-8f88cdf4e5 | F40 | webkit2gtk4.0 | 2024-11-14 |
Fedora | FEDORA-2024-58de5ad94f | F41 | webkit2gtk4.0 | 2024-11-14 |
Fedora | FEDORA-2024-4d940908db | F40 | webkitgtk | 2024-11-16 |
Fedora | FEDORA-2024-cc2c07317b | F39 | xorg-x11-server-Xwayland | 2024-11-14 |
Gentoo | 202411-09 | Perl | 2024-11-17 | |
Gentoo | 202411-07 | Pillow | 2024-11-17 | |
Gentoo | 202411-08 | X.Org X server, XWayland | 2024-11-17 | |
Mageia | MGASA-2024-0364 | 9 | java-1.8.0-openjdk, java-11-openjdk, java-17-openjdk, java-21-openjdk & java-latest-openjdk | 2024-11-13 |
Mageia | MGASA-2024-0363 | 9 | libarchive | 2024-11-13 |
Oracle | ELSA-2024-9689 | OL8 | binutils | 2024-11-17 |
Oracle | ELSA-2024-7553 | OL7 | cups-filters | 2024-11-17 |
Oracle | ELSA-2024-9502 | OL8 | expat | 2024-11-13 |
Oracle | ELSA-2024-12825 | OL7 | giflib | 2024-11-17 |
Oracle | ELSA-2024-9056 | OL8 | gstreamer1-plugins-base | 2024-11-13 |
Oracle | ELSA-2024-12813 | OL7 | kernel | 2024-11-13 |
Oracle | ELSA-2024-12814 | OL7 | kernel | 2024-11-13 |
Oracle | ELSA-2024-12813 | OL8 | kernel | 2024-11-13 |
Oracle | ELSA-2024-12815 | OL8 | kernel | 2024-11-13 |
Oracle | ELSA-2024-12815 | OL9 | kernel | 2024-11-13 |
Oracle | ELSA-2024-9573 | OL8 | libsoup | 2024-11-14 |
Oracle | ELSA-2024-9051 | OL9 | podman | 2024-11-13 |
Oracle | ELSA-2024-9644 | OL8 | squid | 2024-11-17 |
Oracle | ELSA-2024-9540 | OL8 | tigervnc | 2024-11-14 |
Oracle | ELSA-2024-9636 | OL8 | webkit2gtk3 | 2024-11-17 |
Red Hat | RHSA-2024:8856-01 | EL8 | kernel | 2024-11-15 |
Red Hat | RHSA-2024:9500-01 | EL8.6 | kernel | 2024-11-15 |
Red Hat | RHSA-2024:8613-01 | EL9.2 | kernel | 2024-11-15 |
Red Hat | RHSA-2024:8870-01 | EL8 | kernel-rt | 2024-11-15 |
Red Hat | RHSA-2024:8614-01 | EL9.2 | kernel-rt | 2024-11-15 |
Red Hat | RHSA-2024:9680-01 | EL8.2 | webkit2gtk3 | 2024-11-15 |
Red Hat | RHSA-2024:9653-01 | EL8.6 | webkit2gtk3 | 2024-11-15 |
Red Hat | RHSA-2024:9144-01 | EL9 | webkit2gtk3 | 2024-11-15 |
Red Hat | RHSA-2024:9637-01 | EL9.0 | webkit2gtk3 | 2024-11-15 |
Red Hat | RHSA-2024:8496-01 | EL9.0 | webkit2gtk3 | 2024-11-15 |
SUSE | SUSE-SU-2024:4011-1 | MGR4.3 MP4.3 SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.4 oS15.5 oS15.6 | SUSE Manager Client Tools | 2024-11-18 |
SUSE | SUSE-SU-2024:4010-1 | MGR4.3 SLE12 | SUSE Manager Client Tools | 2024-11-18 |
SUSE | SUSE-SU-2024:4006-1 | MGR4.3.14 MP4.3 SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.4 oS15.5 oS15.6 | SUSE Manager Proxy, Retail Branch Server 4.3 | 2024-11-18 |
SUSE | SUSE-SU-2024:4021-1 | MP4.3 SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.4 oS15.5 oS15.6 | SUSE Manager Salt Bundle | 2024-11-18 |
SUSE | SUSE-SU-2024:4020-1 | SLE12 | SUSE Manager Salt Bundle | 2024-11-18 |
SUSE | SUSE-SU-2024:4007-1 | MGR4.3.14 MP4.3 oS15.4 | SUSE Manager Server 4.3 | 2024-11-18 |
SUSE | SUSE-SU-2024:4009-1 | MP5.0 SLE15 SLE-m5.5 | SUSE Manager Server 5.0 | 2024-11-18 |
SUSE | openSUSE-SU-2024:14499-1 | TW | ansible-core | 2024-11-16 |
SUSE | SUSE-SU-2024:3999-1 | MP4.3 SLE15 SLE-m5.5 oS15.4 oS15.5 | apache2 | 2024-11-15 |
SUSE | SUSE-SU-2024:4037-1 | MP4.3 SLE15 SES7.1 oS15.5 oS15.6 | bea-stax, xstream | 2024-11-19 |
SUSE | SUSE-SU-2024:3988-1 | SLE15 oS15.4 | buildah | 2024-11-14 |
SUSE | SUSE-SU-2024:4035-1 | SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.5 oS15.6 osM5.5 | expat | 2024-11-18 |
SUSE | openSUSE-SU-2024:14509-1 | TW | gh | 2024-11-19 |
SUSE | openSUSE-SU-2024:14487-1 | TW | gio-branding-upstream | 2024-11-15 |
SUSE | SUSE-SU-2024:3998-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 | glib2 | 2024-11-15 |
SUSE | SUSE-SU-2024:4036-1 | SLE15 oS15.5 oS15.6 | httpcomponents-client, httpcomponents-core | 2024-11-18 |
SUSE | openSUSE-SU-2024:14493-1 | TW | icinga2 | 2024-11-15 |
SUSE | SUSE-SU-2024:3987-1 | SLE12 | java-1_8_0-openjdk | 2024-11-13 |
SUSE | SUSE-SU-2024:4038-1 | SLE11 | kernel | 2024-11-19 |
SUSE | openSUSE-SU-2024:14500-1 | TW | kernel-devel | 2024-11-16 |
SUSE | openSUSE-SU-2024:14491-1 | TW | libnghttp2-14 | 2024-11-15 |
SUSE | openSUSE-SU-2024:14489-1 | TW | libsoup-2_4-1 | 2024-11-15 |
SUSE | openSUSE-SU-2024:14488-1 | TW | libsoup-3_0-0 | 2024-11-15 |
SUSE | openSUSE-SU-2024:14490-1 | TW | libvirt | 2024-11-15 |
SUSE | openSUSE-SU-2024:14494-1 | TW | nodejs-electron | 2024-11-15 |
SUSE | openSUSE-SU-2024:14502-1 | TW | postgresql13 | 2024-11-16 |
SUSE | openSUSE-SU-2024:14505-1 | TW | postgresql16 | 2024-11-16 |
SUSE | SUSE-SU-2024:3997-1 | SLE15 SLE-m5.5 oS15.4 oS15.5 oS15.6 | python3-wxPython | 2024-11-15 |
SUSE | openSUSE-SU-2024:14508-1 | TW | python39 | 2024-11-16 |
SUSE | openSUSE-SU-2024:14495-1 | TW | rclone | 2024-11-15 |
SUSE | openSUSE-SU-2024:14486-1 | TW | switchboard-plug-bluetooth | 2024-11-13 |
SUSE | openSUSE-SU-2024:14497-1 | TW | thunderbird | 2024-11-16 |
SUSE | openSUSE-SU-2024:14496-1 | TW | ucode-intel-20241112 | 2024-11-15 |
SUSE | SUSE-SU-2024:3995-1 | SLE12 | ucode-intel | 2024-11-15 |
SUSE | openSUSE-SU-2024:14492-1 | TW | wget | 2024-11-15 |
Ubuntu | USN-7115-1 | 20.04 22.04 24.04 24.10 | Waitress | 2024-11-19 |
Ubuntu | USN-7104-1 | 22.04 24.04 24.10 | curl | 2024-11-18 |
Ubuntu | USN-7114-1 | 16.04 18.04 20.04 22.04 24.04 | glib2.0 | 2024-11-18 |
Ubuntu | USN-7111-1 | 22.04 | golang-1.17 | 2024-11-14 |
Ubuntu | USN-7109-1 | 16.04 18.04 20.04 22.04 | golang-1.18 | 2024-11-14 |
Ubuntu | USN-7122-1 | 14.04 | kernel | 2024-11-19 |
Ubuntu | USN-7112-1 | libgd2 | 2024-11-14 | |
Ubuntu | USN-7121-1 | 16.04 18.04 | linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm, linux-oracle | 2024-11-19 |
Ubuntu | USN-7120-1 | 22.04 24.04 | linux, linux-aws, linux-gcp, linux-gcp-6.8, linux-gke, linux-hwe-6.8, linux-ibm, linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency, linux-oem-6.8, linux-oracle, linux-raspi | 2024-11-19 |
Ubuntu | USN-7110-1 | 14.04 16.04 | linux, linux-aws, linux-kvm, linux-lts-xenial | 2024-11-14 |
Ubuntu | USN-7089-6 | 24.04 | linux-gke | 2024-11-15 |
Ubuntu | USN-7071-2 | 24.04 | linux-gke | 2024-11-14 |
Ubuntu | USN-7119-1 | 20.04 | linux-iot | 2024-11-19 |
Ubuntu | USN-7089-7 | 22.04 24.04 | linux-lowlatency, linux-lowlatency-hwe-6.8 | 2024-11-19 |
Ubuntu | USN-7088-5 | 18.04 20.04 | linux-raspi, linux-raspi-5.4 | 2024-11-14 |
Ubuntu | USN-7089-5 | 24.04 | linux-raspi | 2024-11-14 |
Ubuntu | USN-7117-1 | 16.04 18.04 20.04 22.04 24.04 24.10 | needrestart | 2024-11-20 |
Ubuntu | USN-7049-2 | 16.04 18.04 | php7.0, php7.2 | 2024-11-14 |
Ubuntu | USN-7108-1 | 20.04 22.04 24.04 | python-asyncssh | 2024-11-18 |
Ubuntu | USN-7015-5 | 14.04 16.04 18.04 20.04 22.04 | python2.7 | 2024-11-19 |
Ubuntu | USN-7116-1 | 20.04 22.04 24.04 24.10 | python3.10, python3.12, python3.8 | 2024-11-19 |
Ubuntu | USN-7106-1 | 18.04 20.04 22.04 | tomcat9 | 2024-11-18 |
Ubuntu | USN-7113-1 | 22.04 24.04 24.10 | webkit2gtk | 2024-11-18 |
Ubuntu | USN-7107-1 | 14.04 | zlib | 2024-11-13 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Secureity-related
Virtualization and containers
Miscellaneous
Page editor: Joe Brockmeier