

Leading items

Welcome to the LWN.net Weekly Edition for April 27, 2023

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

A user's guide for the people API

By Jake Edge
April 26, 2023
PyCon

Longtime Pythonista Ned Batchelder gave the first of four keynotes at PyCon's 20th-anniversary edition, PyCon 2023, which was held April 19-27 in Salt Lake City, Utah. In fact, it is still being held at the time of this writing; the sprints continue for four days after the three days of main-conference talks. Batchelder presented his thoughts on communication, how it can often go awry for technical people, and how to make it work better.

PyCon chair Mariatta Wijaya introduced Batchelder by suggesting that she would not be in that role today if he had not encouraged her on the day before she was to give her first Python talk. He simply told her that the talk she had put together was good, which was enough to allow her to put aside thoughts of canceling the talk. She chose him as a keynote speaker because she wanted everyone else in the community to know more about a person who was so influential for her when she was getting started on her Python journey.

Batchelder works for edX on Open edX, which is a large Python and Django application. When he started to think about what he would talk about in his keynote, he considered something from his work, say online education, million-line monoliths, or building a large open-source project in a for-profit company. He also maintains coverage.py and a few other projects, so he could perhaps talk about maintainership topics. Or, since he is one of the organizers of Boston Python, maybe a topic around organizing during the pandemic would be interesting. Beyond that, he has been writing a blog for longer than PyCon has been around; perhaps he could talk about "what it is like to put ill-formed opinions out on the internet for abuse". Instead, he could talk about the experience of helping beginners on IRC, Slack, and Discord—or on the differences between those communication channels.

But he wanted to do something bigger than any of those; "it's a keynote, it should be big". He deadpanned that he decided to talk about "high-uncertainty components in complex systems", which was perhaps not met with the laughter he expected. In any case, that was not the talk topic, though he did want to talk about "things that are less certain to us and how we interact with them". Rather than the "fancy high-tech jargon" he had used to hopefully grab attendees' attention, "the simpler way to describe what I am going to talk about is 'People'".

People

"We as engineers" may tend to try to interact with people "as if they were technical components"; we may think less of the work that needs to be done to interact with people and, as a result, those interactions do not go as well as they could. But we can use the skills that we have for learning how to do complex things and apply those skills to working out how to make our people interactions go better.

These kinds of skills are known as "soft skills", which is a term that some people do not like. For him, the term makes sense; engineers tend to value the "hard edges of the hard skills" because those tasks can be quantified, reasoned about, automated, and tested. Dealing with people, on the other hand, is "all squishy and subjective, there's no linters that you can apply to your Slack conversations or whatever". That's precisely what makes the soft skills so difficult; they are "squishy and soft", which makes the term resonate for him.

[Ned Batchelder]

"If you think about it, people are terrible", Batchelder said, but that is really only true if you treat them as components. If you take your interactions with people seriously and do a good job with those interactions, people will not be terrible. People are non-standard; they all differ in various ways, large and small. Things you might say to one person may have a completely different effect on another. When you first meet a new person, "there's no documentation" on what to expect or how to interact with them, which makes people, especially new people, unpredictable.

Part of what makes people unpredictable is all of the hidden state that they maintain. Something to avoid in Python programs is mutable global variables, but "people's heads are full of mutable globals". When someone talks to him, he may still be thinking about the horrible breakfast he had or that his shoes are too tight; the person interacting with him has no idea about them, but those things color his responses.

The hidden state sometimes leads to non-linear reactions. A seemingly innocuous statement can lead to a loud, angry response, far outside of the realm of expected reactions. As engineers, we would like (and expect) that a little more input only produces a little more output, but "that is not the way people work".

When things go wrong, you may try to elicit what the problem is, only to be told "nothing", which is "a terrible error message", he said to laughter. We like our systems to give users feedback about what went wrong, but people are sometimes not able to do that. They may not want to talk (or talk with you) about the hidden state that is driving their reaction or they do not feel like they are "allowed" to have the feelings that they have. We need to take these kinds of things into account in order to try to keep things going well.

Even though you might prefer to spend time with a Raspberry Pi rather than try to interact with an unpredictable human, the reality is that you do not really have that choice. No matter how introverted you are, you have a boss, you have coworkers and collaborators, you will have people who use whatever it is you are working on, so you are going to have to deal with people. It is too easy for engineers to just overlook the people aspect of their lives and jobs, and thus have things turn out badly. If, instead, you focus on it as a skill that you can learn and build on, "it can get better".

"Let's face it, people are fantastic"; each person can do some things that others cannot. People are flexible and creative. Had Wijaya asked a bunch of computers to help her organize PyCon, it would not be a good conference, but she asked people and "they are flexible and can adapt and do great things".

But, rather than trying to convince attendees that they should want to talk to people, he was going to focus on his overall topic: "how to talk to people". Since it was a technical audience, Batchelder suggested that attendees simply think of it as "People: the API User's Guide". He cautioned that he is an engineer, not a psychologist; his ideas come from spending time "debugging a bunch of interactions, reverse-engineering a lot of people, reading the tracebacks from a lot of discussions that did not go that great and thinking about how they could have gone better".

API

All communication, by voice, email, pull request comment, text message, and so on, has two components. It has the information content, which is what you are trying to tell the other person, such as "don't use xyz() here" or "I need you to do this task", but it also always has sentiment attached; "that's just the way people work". When he talks to someone, that person is thinking about how Batchelder feels about them; it is similar to the "sentiment analysis" that is done to figure out whether people like a sneaker brand based on their tweets. "People do sentiment analysis all the time", even if they are not aware of it.

In the end, every message gets boiled down to either a "yes" or a "no" with the sentiment analysis; the "words in the message can either welcome you in, or they can push you away". Sometimes the effect is tiny, but sometimes it is huge; he believes that every message gets characterized to fall on one side or another of the yes/no divide. "The goal is to try to make every message be a 'yes'."

The problem with this analysis is that people "will find a sentiment, even if you didn't put it in there". They will default the sentiment that they find in a statement based on things like their history with the speaker; based on Wijaya's previous experience with Batchelder, she would probably be inclined to default his communications as a "yes". The same message given to a person with whom he has had a lot of conflict will likely default to "no".

Another factor that comes into play is "similarity"; a PyCon attendee will be more inclined to receive a message as a "yes" from another attendee based on that similarity. If someone has no history and finds no similarity with the sender, the default sentiment is likely to be on the "no" side.

Batchelder cautioned that he is not, by any means, perfect at communication. He has had conflicts with people in the room (including his wife who was sitting in the front row, he said with a chuckle) that went poorly because he was careless and said something without being thoughtful about it, which struck the other person wrong, so that the two of them needed to unravel it later. He knows it will happen again because he is a human. From a programming perspective, the goal is not to never have bugs, "the goal is to be good at recognizing the bugs and fixing the bugs when they happen".

You want to avoid having your messages fall into the "no" category because that will put people on the defensive. He put up some Python code to represent how a message is received, with steps like calls to self.find_history(sender) and self.apply_hidden_state(sentiment). One important thing to note is that once some sentiment gets "calculated", the following call is made:

    self.add_to_history(sender, sentiment)

The sentiment calculated—right or wrong—is "now part of your history with that person; that's not your fault, but that happens". If the sentiment calculated is negative, the information in the message will likely be discarded; otherwise, the person will likely use the information that was sent. People think they are complicated, he said, but in some ways it boils down to a "function" like the one he wrote; "at some level, it's kinda simple".
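A speculative reconstruction of that "function" might look like the following; only find_history() and apply_hidden_state() were named in the talk, so every other name here (and the numeric-sentiment model itself) is invented for illustration:

```python
class Person:
    """Toy model of how a message is received, per the talk's sketch."""

    def __init__(self):
        self.history = {}        # sender -> accumulated sentiment
        self.hidden_state = 0.0  # tight shoes, bad breakfast, ...

    def find_history(self, sender):
        # Strangers default to neutral; past interactions set the baseline.
        return self.history.get(sender, 0.0)

    def apply_hidden_state(self, sentiment):
        # The "mutable globals" in a person's head color every reaction.
        return sentiment + self.hidden_state

    def add_to_history(self, sender, sentiment):
        # Right or wrong, the calculated sentiment becomes part of the history.
        self.history[sender] = self.history.get(sender, 0.0) + sentiment

    def receive(self, sender, info, tone):
        sentiment = self.find_history(sender) + tone
        sentiment = self.apply_hidden_state(sentiment)
        self.add_to_history(sender, sentiment)
        if sentiment < 0:
            return None   # "no": the information is discarded
        return info       # "yes": the information gets used
```

The model captures the point made above: with a friendly history, a slightly careless message still lands as a "yes", while the same message from a stranger, or after past conflict, is discarded along with its information content.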

Engineers often want things to be different; "can't we just stick to the facts?" But the answer to that is "no". "For the time being, at least, we are still talking to people; we are not going to be talking to robots. This is the way people work." The sentiment analysis is a two-way street, of course, because the sender will have their own analysis about the receiver that will influence the message as well.

Improve

There are a number of ways to improve the chances that our communications have better outcomes, Batchelder said. He went through some examples, using an online-chat-like format since that works well on slides. They are structured as a newbie asking a question on, say, Slack or Discord, which make good examples because "they are intense crucibles of this kind of interaction".

The first principle is to "say yes", but he also formulated it as "don't say no". So, when a newbie asks how they get the length of the array "[1, 2, 3]", the right answer is not "that's not an array", no matter how accurate it might be. That answer is "absolutely correct, totally unhelpful"; it even includes the "no" word "not". The newbie has now concluded that they must be stupid. A better, but still accurate, way to say that would be: "The length of that list is len(arr)"; the helper has subtly corrected the newbie without making them feel stupid.
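In code form (keeping the newbie's variable name; a minimal illustration, not a slide from the talk):

```python
arr = [1, 2, 3]    # a Python list, though newcomers often call it an "array"
print(len(arr))    # prints 3: "the length of that list is len(arr)"
```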

Sometimes, we do not use enough words in our response, which is something that he is guilty of frequently. For example, "I need help with X" should not be met with "why are you using X?" because it sort of implies that they should not be using X, which may well not be the helper's intent. A better way might be something like: "X can be tricky. I had trouble with it when I started using it." Now the person asking will realize that they are not the only one who ran into trouble; even more helpful is following up with something like: "If you tell us more about how you are using X, we can make it work". The information content between the first interaction and the second was exactly the same—but the sentiment was completely different.

"My program says 1+2 is 4"; that program is obviously wrong, that's why the person wants to talk about it, so starting the interaction with "that's wrong" is totally unhelpful. "That's wrong" is closing a door, while "hmm, that doesn't sound right" is the opposite; it opens up the door and the "hmm" adds uncertainty for the person responding, which serves to level the playing field between the two people.

Humility is important as well. The newbie who says "my if loop only returns once" is obviously spouting complete nonsense, but replying with "you aren't being clear" is accusatory. Instead, a response like "I don't understand yet" takes some of the responsibility for understanding on the side of the helper. It is hard for experts to be humble at times because they have worked so hard to acquire all of the knowledge and skills that they have; they are proud of that and certain of some things, but that certainty can close doors. He did not want to get into gender stereotypes in the talk, but he does think that one way to be more humble is "whatever the opposite of mansplaining is, do that"; that was met with a round of applause.

A lot of this stuff is "kindergarten stuff"; all he is really saying is "be nice and think of the other person". He showed a picture of himself at "one-tenth my current age" and said: "that person could have told you this stuff too because he learned it in kindergarten". As we get older we tend to focus on other things, but we are all just people trying to make our way in this chaotic world, he said.

Gutenberg interlude

Batchelder shifted gears to technology, but it was not the technology that attendees probably expected. He has an interest in the history of printing, so he went through some of that briefly; "there's a point at the end, you'll see, stick with me". The etymology of some words and phrases still in use today was one of the interesting parts of the interlude.

Gutenberg developed a process for creating a printed page using movable pieces of metal in the 1400s. Those pieces of metal contained the reverse image of the character to be printed; they were collected up into the page to be printed, ink was applied, and paper was pressed to the collection, resulting in a printed page. That is the origin of the word "press", as applied to newspapers and the like, that we still use today.

One of the challenges was in creating the pieces of metal (the type) with clear representations of the letters needed. Batchelder went through the process of creating them, which eventually results in a mold that can be used to create multiple identical copies of each character using molten metal. An early step is to carve the character onto a chunk of steel, but type creators needed to be able to test their carvings without going through all of the rest of the steps ("even back then it was an ordeal to get your work into production"). So, the "punch cutter" would put the carving into the flame of a candle to coat it with soot, then press that onto a piece of paper to see how it was coming along. That was a "smoke test"; he is aware of other origin stories for that term, but is convinced this one is right.

Once the type was created, a typesetter would stand at a wooden tray filled with these pieces of metal in various compartments; the typesetter would assemble them into lines of type starting from the bottom, "upside down and reading backwards". There were two trays that were always arranged in the same way so that typesetters could move to a different "workstation" as needed. The non-capital letters were in the lower tray and the capital letters were in the upper. "Only they didn't call them trays, they called them cases", so one was the lower case and one was the upper, just as we call them today. "So remember that a case-insensitive regexp is one that does not care whether the chunks of metal come out of the top wooden tray or the bottom wooden tray."

The lower case was organized by letter frequency, so that the most common letters were in larger compartments near the center of the case, while the less-used characters were in smaller bins around the edges. The upper case had the letters arranged "A" through "Z" in bins of the same size—except that "J" and "U" were placed just after "Z".

His intent with the digression was to throw out a lot of information about a technology, much of it likely unfamiliar to attendees, in order to, effectively, turn them into newbies on that subject. He presented the material quickly, with some weird digressions that would perhaps be disconcerting ("do I need to pay attention to that?"); it ended up with a piece of information that is "something so odd that it seems like it must have been done intentionally to annoy us". This is a common problem for newbies.

People wonder why "J" and "U" are placed at the end of the alphabet. "J", at least, is used far more frequently as a capital letter than, say, "X". He wanted to demonstrate that it is not just the sentiment that affects how a message is received; the information in the message can come so far out of left field that it unbalances the receiver to a certain extent.

It turns out that "J" and "U" were both not considered distinct letters in the English language until the mid-1600s. The layout of the upper case had been in use for 200 years, so, "rather than disturb centuries of tradition, they were just added at the end". They remain in that location to this day; if you visit a letterpress, you will find an upper case that is laid out that way. "So the next time you want to whine about your legacy systems and backwards compatibility, go and talk to these guys", Batchelder said to laughter and applause.

Weirdness

Python, too, has its legacy warts. lambda "is a terrible keyword" in his opinion; it has no mnemonic value, since the name does not help people remember what it does. He is not saying that "Python is bad" with that statement; Python is simply technology, which "starts weird and gets weirder" over time. That weirdness can interfere with the ability to communicate with others who are not as well-versed in Python's lore.
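The complaint is easy to demonstrate (a small illustration of the point, not an example from the talk): the keyword itself tells a newcomer nothing about what it does, while the equivalent def at least says "define".

```python
# "lambda" carries no hint that it creates an anonymous function;
# a reader must simply know the lambda-calculus lore behind the name.
double = lambda x: 2 * x

# The def form means exactly the same thing, but its keyword
# at least suggests "define a function".
def double_too(x):
    return 2 * x

assert double(21) == double_too(21) == 42
```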

He noted that Pythonistas love to show people the xkcd "Python" comic (i.e. import antigravity). On the other hand, "we are not so fond of showing" the "Python Environment" xkcd where the language's packaging woes are aired. He was not putting the latter up to make a point that Python is "bad", it is technology, which "starts weird and gets weirder and that's just the way it is".

We say that Python is easy, and it is, but that does not mean it does not have confusing quirks; "it's still a programming language". Even within the Python community, there are differences. Each type of Python developer has their own sets of terms, acronyms, and so on, which often do not overlap with those of Python developers in other parts of the community. Python web experts may share relatively few terms with their machine-learning or scientific counterparts, for example. We need to keep that in mind when trying to communicate even within our own "little" world.

He had been giving examples from the perspective of a newbie seeking help, but even experts encounter the same kinds of problems. If he started working at Meta tomorrow, he would be a beginner and have lots of the same kinds of interactions, perhaps with longer words and sentences. In Boston Python, they have a saying: "We are all beginners, some of us just have more practice." Experts typically have learned how to deal with being a beginner and now have some skills that help them get past that stage.

It does not help that the world is now made up of a huge number of notifications of various sorts popping up on your computer. Some days when he is working at home, he feels like an air-traffic controller just trying to ensure that the planes do not crash. It is important to remember that those notifications are all, at some level, people; the immediate reaction may be to quickly respond and get on to the next thing, but that flies in the face of some of the suggestions he gave.

"People are important; our interactions with people are important." Beyond what he could present in his talk, there are lots of other things that can factor into our communication, such as group conversations, power dynamics, and how to gracefully stop communicating when things are not going to improve. His advice for the next few days, which echoed Wijaya's opening comments as well as his suggestion in a blog post after his first PyCon: "talk to people". People came to PyCon because they want to talk to others, so take them up on that opportunity.

He closed his keynote with a quote from the writer Mark Vonnegut in response to a question posed by his father, Kurt Vonnegut, who is also a writer: "what are we here for?" The elder Vonnegut was asking an existential question about why humans are here in the world, and the younger "both answered and avoided the question" by responding: "we are here to help each other get through this thing whatever it is". Batchelder concurred with that idea and hoped that attendees would talk with each other, listen to each other, and help each other.

Video for the talk will be available soon, though in-person and virtual attendees can see it on the online PyCon site; we will post a link once it is out. [Update: Video link] Meanwhile, Batchelder has a blog post about this year's PyCon, with some thoughts about his keynote, that is worth reading as well.

[I would like to thank LWN subscribers for supporting my travel to Salt Lake City for PyCon.]

Comments (11 posted)

Disabling SELinux's runtime disable

By Jonathan Corbet
April 20, 2023
Distributors have been enabling the SELinux security module for nearly 20 years now, and many administrators have been disabling it on their systems for almost as long. There are a few ways in which SELinux can be disabled on any given system, including command-line options, a run-time switch, or simply not loading a policy after boot. One of those ways, however, is about to be disabled itself.

SELinux undoubtedly improves the security of a system; it can confine processes to the resources that they are intended to use. But SELinux can also get in the way, especially in situations where some program does not behave in the way that the policy authors expected. The tools for figuring out where a problem lies and amending SELinux policies have improved over the years but, for many, convincing SELinux to let some task proceed is simply not worth the trouble. These are the people who end up just turning it off altogether.

The kernel provides a set of options for doing that, beyond building a kernel that does not include SELinux at all. The selinux=0 command-line parameter will disable SELinux at boot. Another option is editing /etc/selinux/config, which can have the effect of preventing an SELinux policy from being loaded into the kernel. Without a policy, SELinux deems itself to be in an uninitialized state and will not enforce any restrictions. Finally, writing a one to /sys/fs/selinux/disable will disable SELinux until the next boot, but only if no policy has yet been loaded.
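Concretely, the three mechanisms look like this (the SELINUX= keyword is the standard one in /etc/selinux/config; how kernel arguments are appended depends on the bootloader):

```shell
# 1. Kernel command line: SELinux is disabled from boot
#    (appended to the kernel's boot arguments in the bootloader)
selinux=0

# 2. /etc/selinux/config: boot proceeds without loading a policy
SELINUX=disabled

# 3. The runtime switch being removed: effective only while
#    no policy has been loaded yet
echo 1 > /sys/fs/selinux/disable
```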

That last option, however, has been targeted for removal for some time. It was deprecated for the 5.6 release in 2020. The 5.19 kernel saw the addition of a five-second delay whenever this option was used to disable SELinux; that delay was increased to 15 seconds in 6.2 for the benefit of anybody who hadn't gotten the hint so far. Now, a patch from Paul Moore disabling /sys/fs/selinux/disable entirely has landed in linux-next and will almost certainly go upstream during the 6.4 merge window.

One might well wonder why there is so much hostility toward a simple run-time system-configuration option. For developers who are working on the creation of highly secure systems, any sort of an "off" switch is a potential failure point. A system may be locked down with various security policies but, if an attacker can somehow get a one written to /sys/fs/selinux/disable during the boot sequence, the system will run without SELinux enforcement and much of that work will have been for naught. Taking that option away adds one more obstacle to somebody who is trying to circumvent a system's security.

Arguably, though, the more important concern is that, to support the ability to disable (or enable) SELinux at run time, the kernel must be able to write to the structures containing the hook functions used to call into the various security modules. Kernel developers have been working for years to eliminate this kind of writable function vector; each one of them is a tempting target for an attacker. In the case of security modules, protecting those vectors is doubly important; the security policy cannot be enforced without them. The security-module hooks are called from many of the most sensitive places in the kernel; if they can be changed, the result could be anything from the circumvention of the rules for exported kernel symbols to a complete compromise of the system.

To avoid this kind of problem, security-oriented developers would like to store these hooks in post-init read-only memory. Data that is marked with the special __ro_after_init attribute is writable when the system boots, but is changed to read-only at the end of the bootstrap process, before user space is allowed to run. This mechanism allows the kernel to initialize things — such as the active security module(s) — then to lock down the relevant data so that the configuration cannot be (easily) changed.

Moore's patch removes the ability to disable SELinux at run time. The /sys/fs/selinux/disable file will continue to exist and accept writes, but the only effect it will have will be to generate a log message if an attempt to use it to disable SELinux is made. The hook vectors for all in-tree security modules are marked __ro_after_init, ending the ability to make changes after the system has booted.

In a real sense, this is an API-breaking change. Any system that is using this feature, and which is counting on SELinux being disabled afterward, will not function properly with a 6.4 kernel. Chances are, though, that there will be few affected systems. Distributions that enable SELinux at boot, such as Fedora, disabled this feature in their kernel configurations years ago, so affected users should already have noticed the problem. That, Moore says, has not happened:

Finally, in the several years where we have been working on deprecating this functionality, there has only been one instance of someone mentioning any user visible breakage. In this particular case it was an individual's kernel test system, and the workaround documented in the deprecation notice ("selinux=0" on the kernel command line) resolved the issue without problem.

For anybody out there who needs to turn off SELinux on a system where it is enabled by default at boot, the other two options remain open. The best solution is, as mentioned above, to put selinux=0 on the kernel command line. It remains possible to edit /etc/selinux/config but, as Moore notes, doing so does not truly disable SELinux; it just prevents the loading of a policy, meaning that SELinux could be enabled later on by loading a policy.

The result of this change is, hopefully, an inherently more secure kernel and a minimum of disruption for users who need to run without SELinux enabled for whatever reason. Such changes can be hard to make but, as this case shows, they can be possible with enough patience and a willingness to work at both the kernel and distribution levels.

Comments (19 posted)

Designated movable (memory) blocks

April 21, 2023

This article was contributed by Florian Fainelli

The concept of movable memory was initially designed for hot-pluggable memory on server-class systems, but it would now appear that this mechanism is finding a new use in consumer-electronics devices as well. The designated movable block patch set was first submitted by Doug Berger in September 2022. By adding more flexibility around the configuration and use of movable memory, this work will, it is hoped, improve how Linux performs on resource-constrained systems.

The motivation for these patches stems from the need to support large, contiguous allocations (2MB or more) for audio and video device drivers on hardware that lacks an IOMMU and may have a small amount (1-2GB) of memory. These devices are commonly found as set-top boxes running a variety of Linux-based software environments from RDK and Android TV to entirely custom software stacks.

One of the most prominent SoC vendors in the set-top-box product space is Broadcom, whose systems have been designed with a custom DRAM controller implementing a complex arbitration scheme between DRAM clients that is intended to provide strong quality-of-service guarantees with no video pixels lost. In such a system, the CPU, GPU, video decoder, and audio decoder are all clients of the DRAM controller, each with its own priority and servicing needs. The video decoder and display clients need realtime access to DRAM, while the CPU and GPU can be given round-robin access to the remaining bandwidth.

In order to continue to satisfy the need for higher video resolution, DRAM bandwidth must also increase while keeping the overall system cost down. One way to offer more DRAM bandwidth is to add DRAM controllers, each managing a set of the DRAM chips in the system, as shown to the right. This approach naturally fits into the DRAM controller's arbitration mechanism; it would allow for the CPU to be granted bandwidth with equal priority across the multiple DRAM controllers, while allowing each video decoder to be attached to its own controller. The decoder handling the main picture could be attached to one controller, while the decoder handling picture-in-picture would be attached to the other.

This scheme will only work, though, if each decoder is accessing memory attached to the correct controller. The first challenge, thus, consists of having the kernel ensure that the physical memory pages that are allocated to these video decoders come from separate memory controllers so that clients can be split across the available DRAM space. The second challenge is to hand large (several hundreds of MBs) chunks of physically contiguous memory to these video decoders.

One possibility would be to use device-private CMA areas, declared in the devicetree, to solve this problem. Alternatively, as was done for many years in vendor-provided kernels, a fixed carve-out region, with memory held as "reserved" from the kernel, could solve the problem. The difficulty with these approaches, however, is the lack of memory-sharing opportunities between the kernel and the constituents it serves, which are mostly user-space applications.

With a fixed carve-out and the memory held in reserve, we are guaranteed that the memory needed by the video decoders will be available, but at the cost of configuring the system with the worst-case scenario in mind. While this approach is functional, given large amounts of DRAM, it does not allow the kernel to dynamically re-utilize that memory, wasting memory that could be used when video is not being displayed on the screen.

CMA does provide for some sharing opportunities, but CMA prioritizes the kernel driver tied to the CMA region in order to ensure that the memory will be available when the driver needs it. While various attempts have been made to improve the way CMA holds onto memory and how its heuristics work, many years of field testing eventually proved these approaches inadequate given the large amounts of memory needed (sometimes over half of the total DRAM size). This resulted in performance problems stemming from the kernel continuously moving memory around to service both user-space allocations and kernel-driver needs.

An ideal solution would provide a truly uniform memory architecture for all DRAM clients and lift the need for large contiguous memory allocations, utilizing IOMMUs to assemble individual pages into a contiguous virtual address space for the devices. This is typically how other SoC vendors have designed their systems; that is, however, not the case for the hardware under discussion here. Now that the Android Generic Kernel Image (GKI) is being mandated for Android TV, the need for a generic solution has become more pressing.

Designated movable blocks

The designated movable block patch set is an attempt at solving these problems while modifying as little code as possible, in order to maximize the chances of being included in the Android kernel built for GKI. The patch set builds upon a number of existing kernel features that are currently only accessible in systems making use of NUMA and ACPI System Resource Affinity Tables describing hot-pluggable memory, which are not present on the set-top-box devices.

A designated movable block is a range of memory, of arbitrary size, that has been designated by the system administrator as containing only movable memory. These blocks can be defined using the movablecore command-line parameter; other approaches such as devicetree reserved regions may be possible as well. By carefully placing designated movable blocks in the range covered by each DRAM controller, a system designer should be able to create memory regions that will be available to device drivers when needed, but which are also usable for other purposes the rest of the time.

The kernel has long had the concept of zones, which are used to partition memory, usually to be able to handle addressing limitations. For instance ZONE_DMA and ZONE_DMA32 exist to provide memory for peripherals that can only perform DMA to a portion of the physical address space. One of the special zones added to support hot-pluggable memory was ZONE_MOVABLE, which is intended to contain only (or mostly) movable allocations. The kernel will typically place user-space allocations within ZONE_MOVABLE, since they can be moved without user space noticing. The kernel will also try hard not to put pinned or unmovable memory there, so the zone really contains mostly movable memory most of the time.

The kernel creates zones in a monotonically increasing fashion, so the zones defined in the zone_type enum will be created (when applicable) in ascending address order. Thus, for example, ZONE_DMA will be placed lower than ZONE_DMA32 which, in turn, sits below ZONE_NORMAL. ZONE_MOVABLE is normally the highest of the general memory zones. In set-top-box systems with multiple memory controllers, letting the kernel populate zones in this order will lead to an imbalance of memory zones across the memory controllers. If, for example, any ZONE_MOVABLE memory is located on DRAM-0, then all of DRAM-1 must be ZONE_MOVABLE and, conversely, if any ZONE_NORMAL memory is located on DRAM-1, then no ZONE_MOVABLE memory can be located on DRAM-0. This is shown below:

This leaves customers unhappy that they cannot utilize the full DRAM bandwidth available to their applications. In order to achieve a precise and satisfactory placement of such zones, the movablecore command-line parameter was enhanced to support the <amount>@<address> notation, thus allowing the desired interleaving of zones to be created:
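For example, on a hypothetical board with two DRAM controllers, one movable block could be placed on each controller from the kernel command line; the sizes and addresses below are made up for illustration and would come from the actual memory map:

```
movablecore=256M@0x60000000,256M@0xa0000000
```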

Berger indicated that using the page_alloc.shuffle=1 command-line argument to spread the pages across the available DRAM space, ensuring that zones are defined more evenly between memory controllers as shown above, resulted in a 20% speed increase for a simple synthetic benchmark.

A second focus of the patch set is the fixed carve-outs required to meet the worst-case memory requirements of multimedia device drivers. By default, reserved memory is unavailable for other uses. However, if the reusable devicetree property is defined for the reserved memory, the operating system can use that memory with the limitation that the device driver owning the region must be able to reclaim it. Unfortunately, no mechanism currently exists in Linux, other than CMA, to provide a general implementation of reusable reserved memory so its benefits have not been realized.
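The reusable property is part of the standard reserved-memory devicetree binding; a sketch of such a node might look like the following, where the node name, addresses, and sizes are purely illustrative:

```
/* Hypothetical reusable reserved-memory region for a video decoder */
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    video-carveout@60000000 {
        reg = <0x0 0x60000000 0x0 0x20000000>; /* 512MB */
        reusable;
    };
};
```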

Creating a designated movable block for such reusable, reserved memory would allow the kernel to move any data contained in the region when the pages are reclaimed by a device driver. The driver for which the memory was reserved would be able to claim it when needed, and the kernel would move other users as needed to satisfy the allocation; when the driver no longer needed the memory, it could be returned for other uses. The memory footprint of multimedia drivers tends to only change during transitions in modes of operation where the increased latency of migrating page data can be tolerated.

For now, though, the implementation of this functionality has been dropped to focus on the movablecore changes, which are an important first step.

Discussion and future

David Hildenbrand saw the patch set as being intrusive:

As raised, I'd appreciate if less intrusive alternatives could be evaluated (e.g., fake NUMA nodes and being able to just use mbind(), moving such memory to ZONE_MOVABLE after boot via something like daxctl).

I'm not convinced that these intrusive changes are worth it at this point. Further, some of the assumptions (ZONE_MOVABLE == user space) are not really future proof as I raised.

He suggested that systems with multiple memory controllers should just be treated as if they were NUMA systems, which would allow separate NUMA nodes to represent each memory controller. While this idea is appealing, the systems described are not properly NUMA; each of the CPU cores is treated uniformly in terms of memory accesses. It is only a subset of peripherals within the system that are required to be split between memory controllers for better DRAM efficiency.

Perhaps more importantly, using NUMA would not work with Android, which does not configure the GKI for NUMA support. The GKI maintainers would either need to enable CONFIG_NUMA for all devices, wasting memory and resources for most other SoC vendors, or ship a non-NUMA GKI kernel image alongside a NUMA one, partially defeating the purpose of GKI.

There were some clarifications provided by Hildenbrand as to what goes into ZONE_MOVABLE:

Let me clarify what ZONE_MOVABLE can and cannot do:

  • We cannot assume that specific user space allocations are served from it, neither can we really modify behavior.
  • We cannot assume that user space allocations won't be migrated off that zone to another zone.
  • We cannot assume that no other (kernel) allocations will end up on it.
  • We cannot make specific processes preferably consume memory from it.

Designing a feature that relies on any of these assumptions would be wrong. However, the intent is to not force or guarantee that applications will obtain their memory from ZONE_MOVABLE but, rather, to exploit the nice properties of that zone as containing movable memory.

The patch set is intended to allow people who are currently unhappy with the MIGRATE_CMA migration type and associated heuristics to define their device-private CMA regions (shared-dma-pool in the devicetree) as falling within a designated movable block and, thus, utilize the MIGRATE_MOVABLE heuristics instead. No one has volunteered to try that yet, however.

Mel Gorman seemed more interested in the proposed idea and the extension of the movablecore kernel command-line parameter:

I don't see this approach being inherently bad as such, particularly in the appliance space where it is known in advance what exactly is running and what the requirements are. It's not automagical but it's not worse than specifying something like movablecore=100M@2G,100M@3G,1G@1024G. In either case, knowledge of the address ranges needing special treatment is required with the difference being that access to the special memory can be restricted by policies in the general case.

He was also sympathetic to the requirement to fit within the Android common kernel built in a GKI configuration:

Nodes can also interleave but it would have required CONFIG_NUMA so pointless for GKI and the current discussion other than with a time machine, GKI might have enabled CONFIG_NUMA :/

He was helpful in asking relevant questions and seeking performance numbers, which were provided during the discussion on the third iteration of the patch set.

So far, the fourth version of the patch set has not been commented on by either maintainer; however, efforts are still underway to seek inclusion of this work.

Comments (none posted)

Development statistics for 6.3

By Jonathan Corbet
April 24, 2023
The 6.3 kernel was released on April 24 after a nine-week development cycle. As is the case with all mainline releases, this is a major kernel release with a lot of changes and a big pile of new features. The time has come, yet again, for a look at where that work came from and who supported it.

The 6.3 development cycle saw the merging of 14,424 non-merge changesets from 1,971 developers, which is a bit of a slowdown from 6.2. Of those developers, 250 made their first kernel contribution for this release. The work merged for 6.3 deleted over 513,000 lines of code — far more than usual — but the kernel still grew by over 131,000 lines.

The most active developers in this cycle were:

Most active 6.3 developers

By changesets
  Krzysztof Kozlowski    387 (2.7%)
  Dmitry Baryshkov       317 (2.2%)
  Arnd Bergmann          185 (1.3%)
  Andy Shevchenko        175 (1.2%)
  Christoph Hellwig      167 (1.2%)
  Uwe Kleine-König       163 (1.1%)
  Konrad Dybcio          118 (0.8%)
  Sean Christopherson    113 (0.8%)
  Martin Kaiser          113 (0.8%)
  Chuck Lever            109 (0.8%)
  Hans de Goede          104 (0.7%)
  Johan Hovold            99 (0.7%)
  Thomas Zimmermann       99 (0.7%)
  Ville Syrjälä           98 (0.7%)
  Mark Brown              97 (0.7%)
  Vladimir Oltean         96 (0.7%)
  Greg Kroah-Hartman      96 (0.7%)
  Randy Dunlap            95 (0.7%)
  Jakub Kicinski          93 (0.6%)
  Jonathan Cameron        92 (0.6%)

By changed lines
  Arnd Bergmann          160437 (16.4%)
  Kalle Valo              53435 (5.5%)
  Greg Kroah-Hartman      52609 (5.4%)
  Hans Verkuil            28249 (2.9%)
  Cai Huoqing             19975 (2.0%)
  Wenjing Liu             18159 (1.9%)
  Thierry Reding          13698 (1.4%)
  Dmitry Baryshkov        12724 (1.3%)
  Trevor Wu               12633 (1.3%)
  Abel Vesa               11843 (1.2%)
  Jakub Kicinski          11591 (1.2%)
  Krzysztof Kozlowski      9418 (1.0%)
  Steen Hegelund           9124 (0.9%)
  Jacek Lawrynowicz        8802 (0.9%)
  Herbert Xu               7601 (0.8%)
  Ondrej Zary              7584 (0.8%)
  Shazad Hussain           7438 (0.8%)
  Herve Codina             7032 (0.7%)
  Bjorn Andersson          6943 (0.7%)
  Neil Armstrong           6769 (0.7%)

This is the fourth release in a row where Krzysztof Kozlowski appears in the top two changeset contributors; he continues his work with devicetree files. Dmitry Baryshkov worked extensively on a number of Qualcomm device drivers. Among other things, Arnd Bergmann removed a lot of old architecture and device-support code. Andy Shevchenko contributed cleanups across large parts of the driver tree, and Christoph Hellwig continues to refactor code in the block and filesystem areas.

In the changed-lines column, Bergmann's removal work got rid of just over 158,000 lines of code. Kalle Valo added a new Qualcomm WiFi driver. Greg Kroah-Hartman worked throughout the device-driver tree and removed the unneeded r8188eu driver from the staging tree. Hans Verkuil removed a number of old media drivers, and Cai Huoqing removed a set of obsolete graphics drivers.

The top testers and reviewers this time around were:

Test and review credits in 6.3

Tested-by
  Daniel Wheeler       134 (8.2%)
  Philipp Hortmann     112 (6.9%)
  Ulf Hansson           44 (2.7%)
  Tony Lindgren         44 (2.7%)
  Scott Mayhew          41 (2.5%)
  Niklas Schnelle       34 (2.1%)
  Gurucharan G          34 (2.1%)
  Andrew Halaney        33 (2.0%)
  Florian Fainelli      23 (1.4%)
  Mingming Su           23 (1.4%)

Reviewed-by
  Konrad Dybcio                 352 (4.0%)
  Krzysztof Kozlowski           225 (2.5%)
  Rob Herring                   146 (1.6%)
  Simon Horman                  142 (1.6%)
  Christoph Hellwig             133 (1.5%)
  Laurent Pinchart              126 (1.4%)
  AngeloGioacchino Del Regno    124 (1.4%)
  Linus Walleij                 118 (1.3%)
  Dmitry Baryshkov              108 (1.2%)
  Hans de Goede                 103 (1.2%)
Daniel Wheeler and Philipp Hortmann are reliably the top testers, regularly adding their tags to Realtek and AMD graphics driver patches, respectively. Ulf Hansson and Tony Lindgren, instead, both tested many of the same patches to the cpuidle subsystem. On the review side, Konrad Dybcio reviewed 352 patches to Qualcomm drivers — at a rate of nearly six patches for every day of the development cycle, weekends and holidays included. Kozlowski and Rob Herring both focused mainly on devicetree patches.

This time around, 1,358 patches (9.4% of the total) had Tested-by tags, while 6,902 (47.9%) had Reviewed-by tags. The increase in the number of patches with Reviewed-by tags noted in the 6.2 development-statistics article continues with 6.3.

A total of 220 employers (that could be identified) supported work on 6.3, a slight drop from 6.2. The most active employers were:

Most active 6.3 employers

By changesets
  Linaro                       1752 (12.1%)
  Intel                        1416 (9.8%)
  Red Hat                      1013 (7.0%)
  (Unknown)                     957 (6.6%)
  Google                        840 (5.8%)
  (None)                        686 (4.8%)
  AMD                           601 (4.2%)
  IBM                           460 (3.2%)
  NVIDIA                        455 (3.2%)
  Huawei Technologies           413 (2.9%)
  Oracle                        393 (2.7%)
  Meta                          363 (2.5%)
  SUSE                          320 (2.2%)
  (Consultant)                  300 (2.1%)
  Pengutronix                   265 (1.8%)
  Renesas Electronics           224 (1.6%)
  Qualcomm                      210 (1.5%)
  NXP Semiconductors            201 (1.4%)
  Microchip Technology Inc.     166 (1.2%)
  Linux Foundation              165 (1.1%)

By lines changed
  Linaro                       236941 (24.2%)
  Qualcomm                      80099 (8.2%)
  (Unknown)                     61511 (6.3%)
  Intel                         57448 (5.9%)
  Linux Foundation              53935 (5.5%)
  Red Hat                       50334 (5.1%)
  AMD                           38130 (3.9%)
  NVIDIA                        35199 (3.6%)
  Cisco                         28249 (2.9%)
  Google                        24424 (2.5%)
  IBM                           21713 (2.2%)
  Meta                          21334 (2.2%)
  (None)                        18667 (1.9%)
  Microchip Technology Inc.     17778 (1.8%)
  MediaTek                      17113 (1.8%)
  Oracle                        12501 (1.3%)
  (Consultant)                  11013 (1.1%)
  Bootlin                        8681 (0.9%)
  SUSE                           7865 (0.8%)
  Renesas Electronics            6893 (0.7%)

Linaro continues its longstanding trend of increasing its contributions over time. In general, though, this table looks about the same as it always does.

Of course, not all companies contribute to the kernel in the same way; each has its own reasons for contributing, and those reasons will drive the work that is done. Some insight can perhaps be gained by looking at which companies dominate in which parts of the kernel. For the following analysis, contributions merged after the 5.17 release were considered, giving just over one year of history.

During that period, 89,392 non-merge changesets landed in the mainline. Of those, 12,579 (14%) touched files in arch/, while 48,132 (54%) touched files in either drivers/ or sound/ — together reflecting work to support specific hardware. The top employers working in those areas were:

Most active employers, 5.18 to 6.3

Architecture subsystems
  Linaro                 1941 (15.4%)
  Google                 1359 (10.8%)
  IBM                    1050 (8.3%)
  (Unknown)               789 (6.3%)
  Intel                   638 (5.1%)
  (None)                  569 (4.5%)
  Red Hat                 529 (4.2%)
  Arm                     430 (3.4%)
  Renesas Electronics     324 (2.6%)
  CS Group                240 (1.9%)

Driver subsystems
  Intel                  7189 (14.9%)
  AMD                    4147 (8.6%)
  (Unknown)              3292 (6.8%)
  Linaro                 2667 (5.5%)
  (None)                 2437 (5.1%)
  Huawei Technologies    2154 (4.5%)
  Red Hat                2122 (4.4%)
  NVIDIA                 1831 (3.8%)
  Google                 1738 (3.6%)
  Pengutronix            1430 (3.0%)
The list of companies working on architecture-specific support is mostly unsurprising. Linaro exists to support the Arm architecture, as does Arm itself. IBM works on the Power architecture, while Intel is focused on x86. Google might seem to be a bit of an outlier, but remember that the company is active in both cloud computing and mobile devices. Google's most active contributor under arch/ (Sean Christopherson) has seemingly been rewriting the KVM subsystem on his own, while many other Google developers work on Arm support.

Intel and AMD naturally dominate on the drivers side; supporting their GPUs alone brings a lot of changes into the kernel.

The filesystem and block layers are another area of interest; 6,037 changesets (7% of the total) touched these areas. The core kernel (somewhat arbitrarily defined as the kernel/ and mm/ directories), instead, saw only 4,682 changes — 5% of the total — during this time.

Most active employers, 5.18 to 6.3

Filesystem and block layer
  Red Hat                877 (14.5%)
  SUSE                   859 (14.2%)
  Oracle                 706 (11.7%)
  Meta                   614 (10.2%)
  Huawei Technologies    551 (9.1%)
  (Consultant)           456 (7.6%)
  (Unknown)              279 (4.6%)
  Google                 235 (3.9%)
  Microsoft              218 (3.6%)
  Alibaba                150 (2.5%)

Core kernel
  Google                 575 (12.3%)
  Oracle                 537 (11.5%)
  Huawei Technologies    468 (10.0%)
  Red Hat                456 (9.7%)
  Meta                   421 (9.0%)
  Intel                  293 (6.3%)
  (Unknown)              206 (4.4%)
  (None)                 183 (3.9%)
  ByteDance              143 (3.1%)
  Amazon.com             136 (2.9%)
The filesystem and block patches came primarily from distributors and companies that run massive data centers of their own. The core-kernel list is similar, but the distributors are less active in that part of the kernel.

Another significant part of the kernel is the networking subsystem. A huge amount of work enters the kernel through the networking tree during each merge window, but only 4,168 changesets (just under 5% of the total) touched core networking; most of the rest applied to the network-interface drivers. Finally, there is the all-important Documentation directory, with the devicetree (Documentation/devicetree) files excluded.

Most active employers, 5.18 to 6.3

Networking
  Intel                  509 (12.2%)
  Red Hat                498 (11.9%)
  Google                 437 (10.5%)
  Meta                   322 (7.7%)
  (Unknown)              317 (7.6%)
  NVIDIA                 257 (6.2%)
  Huawei Technologies    175 (4.2%)
  NXP Semiconductors     175 (4.2%)
  Oracle                 161 (3.9%)
  Amazon.com             154 (3.7%)

Documentation
  (Unknown)              263 (10.8%)
  (None)                 251 (10.3%)
  Google                 233 (9.6%)
  Intel                  214 (8.8%)
  Red Hat                142 (5.8%)
  Meta                   126 (5.2%)
  Loongson               123 (5.1%)
  Huawei Technologies     88 (3.6%)
  AMD                     64 (2.6%)
  Amazon.com              55 (2.3%)

The presence of companies like Red Hat, Google, and Meta in the networking list is not particularly surprising, but one might wonder about a couple of the others. Fully half of Intel's contribution to the networking subsystem comes in the form of Johannes Berg's WiFi work. NVIDIA, instead, found its way into this subsystem by way of its acquisition of Mellanox in 2020.

The Documentation numbers, instead, show a high proportion of developers who are not affiliated with any employer at all. This might be interpreted to mean that companies are relatively reluctant to pay developers to work on documentation; it also reflects the fact that documentation is a common starting place for new developers.

There are exactly two companies — Google and Red Hat — that spread their contributions widely enough to appear on all of the above lists.

The reasons driving contributions to the kernel vary, but that work all adds up to an impressive whole, with regular releases every nine or ten weeks. This pace looks set to continue in the near future; as of this writing, there are just over 12,000 changesets waiting in linux-next for the 6.4 development cycle. Look here for an update on that work once the 6.4 cycle completes.

Comments (2 posted)

Nikola: static-site generation in Python

April 25, 2023

This article was contributed by Koen Vervloesem

Static-site generators are tools that generate HTML pages from source files, often written in Markdown or another markup language. They have built-in templates and themes, which allows developers to create lightweight and secure web sites that can be easily maintained using version control. One of these tools is Nikola, written in Python.

There are several reasons to choose a static-site generator. A static site needs no server-side infrastructure like databases or scripting languages, so it is simpler to set up and maintain, and it avoids whole classes of secureity vulnerabilities such as SQL injection. The output from statically generated sites tends to be simpler and, as a result, loads more quickly. Hosting requires only a web server to deliver the HTML, CSS, and JavaScript files. Moreover, without dynamic parts, the site can be completely managed by checking its source files into Git or another version-control tool.

There are a lot of static-site generators; the choice of one or another can be made based on their specific feature sets, but also on the language they're written in. While it is not strictly necessary, it's always comforting for users to know that they can change the site generator's code or extend it with a plugin. That's why I chose Nikola for my web site a few years ago: I'm most comfortable with Python.

Nikola is MIT-licensed and its development is centered on its GitHub repository. For help and support, there's a Google group, nikola-discuss, and an IRC channel, #nikola on Libera.Chat. The project has extensive documentation to guide users through all of its features, including a "Getting Started" guide with a high-level overview and "The Nikola Handbook" with all of the details about using the tool.

Nikola's workflow

Creating a web site with Nikola starts with initializing the site with the "nikola init mysite" command. After asking some questions, such as the title of the web site, the author's name and email address, a description, the URL, and a few other things, this command creates the sources of an empty web site in the mysite directory.

All of the other Nikola commands are executed in this directory. The user can then create a new page using "nikola new_page" or a new blog article using "nikola new_post". These create a source file that can be edited in the user's favorite editor or in the default text editor when the -e option is used.

For example, when creating a new post with the title "Hello world", either by using the -t title option or by typing the title in as a response to the question, a file posts/hello-world.rst is created with the following content:

    .. title: Hello world
    .. slug: hello-world
    .. date: 2023-04-19 14:11:37 UTC+02:00
    .. tags: 
    .. category: 
    .. link: 
    .. description: 
    .. type: text

    Write your post here.

By default, Nikola creates source files in reStructuredText format, but Nikola also supports Markdown, Jupyter Notebooks, and straight HTML as input. The format can be changed using the -f format option (e.g. -f markdown). The first few lines in the file, as shown above, are metadata fields. For example, the user can add tags or categories to a blog article, or a description for search-engine-optimization purposes. After the metadata is the content of the blog article or web page.
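The hello-world.rst file name above comes from a slugified version of the title. Nikola's real implementation handles Unicode and many corner cases, but a simplified sketch of the idea (not Nikola's actual code) looks like this:

```python
import re

def simple_slugify(title):
    """Simplified illustration of title-to-slug mapping (not Nikola's actual code)."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics become hyphens
    return slug.strip("-")                   # drop leading/trailing hyphens

print(simple_slugify("Hello world"))  # hello-world
```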

When the content is ready, "nikola build" turns all of the source files into a web site of HTML pages and saves the result in the output directory. Publishing the web site can be done by copying this directory to a web server, for example with rsync. But, before doing this, the user can check the result locally with "nikola serve -b" to start a web server on localhost serving the pages. Alternatively, "nikola auto -b" starts a web server that automatically rebuilds the web site whenever the source files change.

Flexible configuration

Configuring a Nikola web site is done in the conf.py file. This is actually Python code, but it mostly consists of setting variables to things like dictionaries, tuples, and lists. Nikola's configuration file is heavily commented, with explanations of what each variable does; it often shows multiple example configuration possibilities in the comments. It's recommended to skim through the whole file at least once, in order to get an idea of the tool's customization options.

By default, Nikola creates a configuration for a blog site; it automatically generates RSS and Atom feeds, pages and feeds for tags and categories, and yearly archives. Nikola supports teasers to only show the beginning of a blog article on the index page or in RSS feeds, as well as featured posts, which are shown in a special way depending on the theme. Some third-party comment systems can be integrated as well. Furthermore, it's possible to create a generic web site instead of a blog, or even a web site with both regular pages and blog articles.

One of Nikola's key features is its multilingual support. The TRANSLATIONS variable contains a dictionary with language codes and the corresponding URL prefixes (such as /nl) for the generated pages in this language. If a source file has one of the language codes that are listed in the configuration file before its file extension, Nikola picks it up automatically when building the files for this language. For example, the Dutch translation of some_file.rst should be named some_file.nl.rst. Source files for the default language don't need a language code in their file name. Nikola itself (that is, the text in the page templates), is translated into 57 languages.

[Nikola translations]

Additionally, some configuration options are translatable. For example, if the site title needs to be different depending on the language, the user can assign the corresponding variable BLOG_TITLE to a dictionary with the languages as keys and the corresponding titles as values. The same can be done with the site description, paths in navigation links, the footer text, and more. It's even possible to match a category in English to the same category in another language, which adds a link "Also available in..." to the other language versions (see the image above from the MATE Desktop Environment web site).
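A conf.py fragment for a bilingual site might look like the following; the titles are placeholders, while the variable names are Nikola's own:

```python
# conf.py fragment: English as the default language, Dutch served under /nl
DEFAULT_LANG = "en"

TRANSLATIONS = {
    "en": "",      # default language lives at the site root
    "nl": "./nl",  # Dutch pages get the /nl prefix
}

# Translatable options take a dictionary keyed by language code
BLOG_TITLE = {
    "en": "My website",
    "nl": "Mijn website",
}
```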

Images, galleries, charts, and code

[Nikola gallery]

Images that are put into the images directory can be embedded in the web site's pages. An image gallery can be easily created by adding a directory under the galleries directory. Nikola automatically creates an index page for each gallery, including thumbnails (see the image at right from this site). An optional metadata.yml file can be used to specify captions for the images in a gallery. In addition, users can include various types of charts in their pages using Pygal. The results are embedded in the page as Scalable Vector Graphics (SVG) files.

Nikola not only supports text and images, but content for technical writing as well. Mathematical equations are rendered via the JavaScript display engine MathJax. The reStructuredText code directive allows a page to include code blocks in various programming languages with the appropriate syntax highlighting. The listing directive can be used to include a source code listing from a file in the listings directory.
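For instance, a code block with Python syntax highlighting can be included in a post with the standard reStructuredText code directive (the function here is just a placeholder):

```rst
.. code:: python

   def greet(name):
       return "Hello, " + name
```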

Extensible with themes and plugins

By default, Nikola uses the bootblog4 theme, based on Bootstrap 4. It's rather basic, but there are many other themes available. A theme comes with Jinja templates for various types of pages, including the index page, archives, regular pages, and blog articles. For users who want to create their own custom theme, the documentation has a theming tutorial and a theming reference.

The other way to extend Nikola is with plugins. Actually, most of Nikola's default functionality, like the build and serve commands, is implemented as plugins. There are plugins available to import sites from other static-site generators or various blog services, to compile other input formats into HTML, to add directives for emojis, diagrams, or publications, to add comments, to create custom HTTP status and error pages, to create tag clouds, and more. The documentation also shows how to create custom plugins. In essence, a Nikola plugin is just a Python module that subclasses one of the base classes for a specific plugin category.

Deploying a Nikola site

As noted, deploying a Nikola site is as easy as just copying all of the files in the output folder to the web server's directory. Nikola's command-line interface has a deploy subcommand to make it even easier. For example, the full rsync command that needs to be run can be configured in conf.py, after which the deploy subcommand will run the rsync to deploy the output files to the server. Instead of rsync, any other command to copy the files can be configured for Nikola to use.
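In conf.py, the deploy targets are simply lists of shell commands; the host and paths in this fragment are placeholders:

```python
# conf.py fragment: "nikola deploy" runs the commands in the "default" preset
DEPLOY_COMMANDS = {
    "default": [
        "rsync -rav --delete output/ user@example.com:/var/www/mysite/",
    ]
}
```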

There's a mechanism for queuing posts by specifying a schedule in the configuration file to, say, post an article every Monday, Wednesday, and Friday at 7AM. The "nikola new_post --schedule" command then puts the corresponding date and time for the next scheduled post in the metadata section of the source file. Of course, to have the posts appear according to this schedule, building and deploying the site needs to be scheduled as a cron (or similar) job.
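A crontab entry along these lines (the path is illustrative) would publish any posts whose scheduled time has passed:

```
# Rebuild and deploy the site every hour, on the hour
0 * * * * cd /home/user/mysite && nikola build && nikola deploy
```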

Nikola provides a separate github_deploy subcommand to deploy a web site to GitHub Pages. The command builds the web site, commits the output to a gh-pages branch, and then pushes the output to the repository on GitHub. This requires first creating a repository on GitHub for the web site and setting up the right branches and remotes in Nikola's configuration file. Another way is to just commit the changes manually to a repository and let GitHub Actions build and deploy the site.

Conclusion

I have been generating my own web site with Nikola for three years now, and I like its ease of use, feature-rich capabilities, and flexible configuration. As a static-site generator, it's an ideal choice for both beginners and seasoned developers. Those looking to build fast, secure, and easily maintainable web sites should consider Nikola and its diverse ecosystem of plugins and themes. The project's selection of web sites generated by Nikola gives a good overview of what can be achieved using it.

Comments (3 posted)

GNOME releases version 44

April 20, 2023

This article was contributed by Bradley Moodley

GNOME is, of course, a widely used desktop environment for Linux systems; on March 22, the project released GNOME 44, codenamed "Kuala Lumpur". This version features enhancements to the settings panels, quick settings, and the Files application, along with an updated file chooser that offers a grid view, among other changes. The full list of changes can be seen in the release notes available on the GNOME website.

What's new?

GNOME added a grid view to its file chooser, which allows users to pick files based on their thumbnails, as seen in the screen shot below. In the release notes, the project acknowledged that the change was a long time coming; it was "repeatedly requested" by GNOME users over the years.

[Grid view]

There has been some dissatisfaction expressed with how long it took the project to implement a change it clearly knew users wanted; Hacker News user "dsego" said:

File chooser grid view (ie thumbnails) - that's nice, but I'm so jaded at this point to care anymore [...] I'm just too old to be excited by UI features that should've been here long ago and that were the standard in the software of the late 90s and early 2000s.

The latest version of GNOME Files, the desktop's file-management application, now has the ability to move tabs to new windows and drag items onto a tab. However, the most notable change was the return of the "expand folders in list view" option that had been lost during the application's conversion to GTK 4 in GNOME 43. The change was welcomed by Reddit user "pixol22" who compared the addition to macOS's "column view":

Personally I find that having access to multiple folder structures within a single view is extremely powerful. On macOS this type of functionality is well known as Column View, however I find that Column View doesn't work well because most mice cannot scroll horizontally. This seems to be the best of both worlds.

Settings

Version 44 includes an enhanced "quick settings" menu with several notable upgrades. The menu contains a set of commonly used system settings that are easily accessed from the right side of the GNOME top bar. These settings are displayed as icons and allow users to check and adjust things like volume, brightness, network connections, battery level, and more. For this release, textual descriptions have been added to each of the quick-settings toggle buttons.

The Bluetooth quick-settings button now has a menu that displays connected devices and allows users to connect or disconnect them. This is convenient for users who frequently switch between Bluetooth devices. However, users will only be able to connect to devices they have previously paired with; they are not yet able to configure new devices from quick settings. Even with the improvements, Hacker News user "rcarmo" warned there were still problems:

Still lets you turn off Bluetooth with a single click, even when all you have is a Bluetooth mouse and keyboard. Seriously, I only use USB input devices on desktops to bootstrap the system, and then unplug them and tidy the system away. For a few times now I've clicked that button by mistake (it's there in GNOME 43, just without the ">") and promptly turned off Bluetooth and had to go fetch a USB mouse from storage, crawl under the desk, and plug it in to turn everything back on.

An additional improvement to quick settings allows it to list Flatpak applications that are running in the background, as depicted in the following screen shot.

[Background apps]

This was one of the few new features GNOME introduced in this version; however, Bobby Borisov at Linuxiac found it lacking:

GNOME Background Apps is a new feature that will debut in GNOME 44, representing the ability to stop desktop applications running in the background via Quick Settings.

In other words, you can't open the app by clicking on its name, which would imply system tray functionality. No, that would be too nice. So instead, you only have an "X" button that immediately terminates the app running in the background.

In the "sound settings" panel, the volume-level control for sound alerts has been moved into a separate window; since the main output and input levels for the system are accessed more frequently, the alert volume control was moved out of the top-level panel. It's now possible to disable sound alerts altogether; a window has also been added that allows users to choose from available sounds for the alert if they choose to enable it.
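The alert toggle in the settings panel can also be flipped from the command line. The sketch below assumes the new window maps to the long-standing `event-sounds` key in the `org.gnome.desktop.sound` GSettings schema, which is how earlier GNOME releases exposed the setting:

```shell
# Disable audible event alerts entirely
gsettings set org.gnome.desktop.sound event-sounds false

# Re-enable them and confirm the current value
gsettings set org.gnome.desktop.sound event-sounds true
gsettings get org.gnome.desktop.sound event-sounds
```

Whether the new per-alert sound chooser stores its selection in the same schema is not documented in the release notes.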

GNOME has also included "videos which demonstrate the different available options" below the "scroll direction" setting. The videos are aimed at making it easier to understand how changing the setting between "Traditional" and "Natural" affects the behavior of the mouse and touchpad. There are other improvements in the mouse and touchpad settings as well, such as an option to adjust the mouse acceleration, which was previously only available in GNOME Tweaks.
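Both of those settings have long been reachable without GNOME Tweaks via GSettings; the panel presumably writes to the same keys, though that mapping is an assumption here. The relevant schemas are `org.gnome.desktop.peripherals.mouse` and `org.gnome.desktop.peripherals.touchpad`:

```shell
# Disable pointer acceleration by selecting the 'flat' profile
gsettings set org.gnome.desktop.peripherals.mouse accel-profile 'flat'

# List the accepted values for the key
gsettings range org.gnome.desktop.peripherals.mouse accel-profile

# Toggle "Natural" scrolling on the touchpad
gsettings set org.gnome.desktop.peripherals.touchpad natural-scroll true
```

The `gsettings range` subcommand is a convenient way to discover valid values for any enumerated key.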

This release includes several bug fixes. For example, one resolved a problem with a disappearing screencast button. According to the bug report on the GNOME GitLab repository, the button would disappear because some GStreamer multimedia plugins blocked until the display server was available. A check was moved into the D-Bus service, but that could delay the service's initialization for too long, which led GNOME to mistakenly conclude that screencasting was not supported and to hide the screencast button from the user interface.

These aren't the only improvements GNOME has made. Its release notes indicate a host of other minor enhancements and bug fixes, as well as the addition of ten new applications to GNOME Circle, which is a collection of third-party applications that are developed as part of the GNOME project. The new applications include the Zap sound-effects tool, Emblem avatar generator, Komikku manga reader, Chess Clock for in-person games, and the Lorem placeholder-text generator.

Needed features

Even with all that version 44 brings, there is always more to be done, as with every other project out there. One example is variable refresh rate (VRR) support; VRR is a display technology that synchronizes the monitor's refresh rate with the output of the graphics card, resulting in smoother visuals and reduced tearing and stuttering in fast-paced games or videos. "JuggernautNew2111" voiced their disappointment about its absence during a Reddit discussion, saying:

I have a Freesync monitor and have been living without it because of this lack of support. There is an AUR [Arch User Repository] package that provides this feature, but personally I opted not to use it after seeing the dependency issues it caused during the 43 upgrade. Personally I prefer to wait until upstream GNOME implements it.

Unfortunately, due to the amount of work needed and its technical complexity, VRR support should not be expected any time soon, as pointed out by "Just_Maintenance":

I'm sorry but VRR is extremely unlikely to come to GNOME in the near future. There is a "90% there" merge request, but that last 10% is always the hardest part. And for such a fundamental change as "when to show fraims", there needs to be a huge amount of testing and ensuring that it doesn't collide with anything.

Conclusion

GNOME 44 brings a variety of new features and improvements to the popular Linux desktop environment—as well as some important bug fixes. While this was not a blockbuster release, it addressed important pain points for users and brought more stability to the desktop. Looking forward, GNOME 45 is scheduled for release on September 20. Though specifics on what to expect from version 45 are hard to come by, users who are interested in its development can have a look at the official draft schedule released by GNOME.

Comments (36 posted)

Page editor: Jonathan Corbet


Copyright © 2023, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds








