Leading items
LGM: GIMP's new release, new über-core, and future
GIMP took center stage at Libre Graphics Meeting (LGM) 2012 in Vienna. Day three featured a block of GIMP-related talks that saw the official release of the new stable 2.8 version of the application, plus a look at three new developments that will impact the future of the raster editor in the coming months. The biggest change is that the 2.9 development series has already been ported to the Generic Graphics Library (GEGL) engine — the ease of which reportedly surprised even the developers — but there were interesting revelations about GPU-accelerated image processing and a new take on text handling, too.
FLOSS historians may recall that the very first LGM evolved out of a GIMP developers' summit. Still, in the past few years other application projects have grabbed the spotlight — Blender, Krita, Inkscape, and Scribus, for starters. Part of the reason has been GIMP's slower development cycle over the past few releases; as a long-established project it has a sizable codebase to maintain, and considerable effort in recent years has gone into building GEGL, the next-generation image processing library designed to bring high bit-depth image support, non-destructive operations, and other such niceties. Planning that transition while simultaneously implementing feature requests on the existing core added up to some multi-year waits between major revisions. The previous stable release, 2.6, landed in October 2008.
Thus it was a minor surprise when GIMP maintainer Michael Natterer and GEGL maintainer Øyvind Kolås announced the official release of GIMP 2.8 during their talk. Technically, news of the release had leaked out the day before when the files appeared on the official FTP site, but the announcement was still unexpected good news. Feature-wise, most of the new work in GIMP 2.8 was in place when we covered the 2.7.3 development build in November 2011. The headlines include the option of docking all of the tool palettes onto the image editor to function as a single window (with tabs for keeping multiple files open at once), layer groups, on-canvas text editing tools, a new "cage transform" tool, and numerous enhancements to the layout and manipulation of tool settings.
The GEGL has landed
Upstaging the release of 2.8 was Natterer and Kolås's session showcasing their recent work porting the GIMP development tree to GEGL. GEGL has been slated to replace the legacy GIMP core for close to a decade, and although recent GIMP releases have included an option to activate GEGL for specific functions (such as color transformation), as recently as December 2011 the official plan was still to integrate other work (such as Google Summer of Code projects) into GIMP 2.10 and make the transition to GEGL in GIMP 3.0. The GEGL transplant had been long planned, the maintainers said, but it took both of them being in the same room at the same time to jump-start it. As they reported, when that happened almost accidentally in March, the port took off, and it is now more than 90% complete.
As Natterer explained it, Kolås was in town for a week-long hacking session, and the two decided to attempt some GEGL porting just to verify their planned approach. Kolås added a GEGL feature that used GIMP's existing image-tile storage as its back-end. Natterer patched in the new feature, and it immediately worked so well that he began cutting out other bits of legacy code to replace them with GEGL buffer manipulation. The more legacy code he ported to GEGL, the more fifteen-year-old layers of abstraction in the existing code base simply "collapsed away". In addition to simplifying and shrinking the code, he said, migrating tile manipulation to GEGL buffers made it possible to replace many image filters (such as blurs) with primitive GEGL operations. The two continued to work and, roughly a month later, merged the result into the GIMP trunk. The 2.9 code not only replaces GIMP's core processing with a GEGL engine, but also provides a separate GEGL engine for GIMP plugins. In 2.10, the legacy plugin API will be officially deprecated.
In addition to telling the tale of its development, the two also demonstrated the GEGL-backed GIMP 2.9 live. First, they showed how compressing the range of an 8-bit-per-channel image resulted in serious color-banding. Then, they switched to 16-bit-per-channel mode, re-compressed the same image, and restored it to its original look without any discernible banding or quality loss. Another demonstration illustrated how 16-bit-per-channel mode allowed similarly higher-quality painting and gradients. Those examples are easily visible, but in practice the benefits of high bit-depth editing are not so immediate; rather, the errors accumulate over several steps — but they do accumulate, with every operation done in 8-bit mode.
There are still other benefits to the new image pipeline, however. Because GEGL operations are defined on abstract buffers, adding support for an entirely different image format is a matter of writing a new format for babl, the underlying pixel transformation layer. During the GEGL hack-a-thon, Kolås wrote such a back-end for indexed color images (such as those found in the GIF format). Natterer had originally planned to drop support for indexed images, but with the babl format defined, they work just as well as any other format in the GEGL-ified GIMP. GEGL also allows GIMP to apply painting and filter operations (such as smudging and blurring) to indexed images, even though such operations are typically not possible with indexed color.
The GEGL transition is still not complete; in particular, UI elements frequently do not yet match the high-bit-depth modes. For example, the levels and histogram sliders still show 0-255 as the range, rather than 0-100% (or some other bit-depth-neutral label). But the brave can already experiment with GIMP 2.9. More work remains to adapt GIMP's filters and plugins to GEGL, some of which will be tackled by 2012 GSoC students. But beyond its practical value for the GIMP project, Natterer pointed out that the ease and success of the port demonstrates that GEGL is stable, fast, and useful for "arbitrarily complex" applications.
Kolås went into more depth on the inner workings of GEGL, which has the potential to serve as the graphics engine for other editors (and indeed there is some work underway on GEGL usage in the MyPaint project). It could also support vector and path operations, or be used in other application classes altogether, such as map transformations. Because GIMP's plugin engine is separate from the main application core, it, too, could be usable by other applications, allowing them to call GIMP plugins without requiring them to adopt GEGL itself.
The modern sounds of the GIMP
The GEGL integration talk joined two other presentations about GIMP development. Peter Sikking and Kate Price of Man Machine Interface Works (MMIworks) presented a glimpse of their still-in-progress work to revamp the editor's text-handling capabilities. MMIworks is a user-interaction firm that methodically investigates UI challenges and develops solutions; its past work with GIMP includes the new scaling and transformation tools and the single-window-mode implementation.
MMIworks set out to redesign the text-manipulation tools for GIMP, hoping to unify the sometimes conflicting uses of text in GIMP images: some text elements are part of the design, while others are used as annotations; some text is high-resolution typography destined for print, while other text is small and used for icon creation. In addition, GIMP users frequently apply multiple filters and effects to text but want it to remain editable, which complicates the implementation.
Further complicating the task is one feature many users want but that is not yet implemented in GIMP: the ability to warp text along a path. Exactly where that warping takes place is another variable: at the top of the line, the bottom of the line, at the midpoint, or perhaps at the x-height. In addition, they said, there is no reason that text-warping could not be applied to the margins or the alignment of a paragraph. Throw in the fact that there are four possible writing directions, and text manipulation amounts to a substantial challenge.
The two did not have a final solution to present, but they did provide hints as to the direction the GIMP is heading. While experimenting with how to re-implement text manipulation from scratch, they decided that many of the parameters associated with text manipulation had features in common with vector or path editing tools. As a result, GIMP's eventual text-manipulation tool will be integrated with its path tools. As they explained it, whether the artist starts with a line of text and warps it to a path, or starts with a path and attaches text to it, the tools should offer the same control.
Victor Oliveira presented his work porting GEGL and GIMP functionality to the OpenCL framework. OpenCL is a cross-platform language for writing code blocks (called kernels) that can be executed on either CPUs or on GPUs (as well as other resources, such as FPGAs). The specification is managed by the Khronos Group (most widely known for OpenGL), and there are Linux implementations for all of the major GPU architectures as well as the Mesa software library.
Previous work had already ported specific GEGL functions to OpenCL; Oliveira ported more functions, including colorspace conversions. He also implemented device-to-device data transfer, wrote a filter API, and benchmarked OpenCL performance. The resulting work is already available; it was merged from the opencl-ops branch into GEGL 0.2.0 and GIMP 2.8 — Oliveira estimated that it encompassed around 15,000 lines of code.
For systems without a GPU-based OpenCL implementation, the OpenCL core also "automatically" makes GEGL multi-threaded, so there are still performance increases on multi-core CPUs. Some of the biggest gains, he said, come in OpenCL-based color transformations, which were historically a bottleneck. For any GPU-based system, however, the speed-up is quite significant. OpenCL is particularly fast with floating-point arithmetic, which is at the heart of GEGL's high-bit-depth operations. Thus, the more GEGL is integrated with GIMP (such as in the 2.9 series), the faster GIMP will become on multi-core and GPU-accelerated machines.
There are still more operations that could be ported to OpenCL, he said, but the next big step is to get more users to test the OpenCL code on a wider variety of CPU and GPU combinations. Intel, AMD, and Nvidia each have their own implementation, plus the open source Mesa implementation, which adds up to a lot of possible set-ups. Oliveira has already done some caching work to overcome the latency caused by transferring pixels from the CPU to the GPU and back; he suggested that adding an OpenGL output node to GEGL would further speed things up by allowing the image to be rendered directly on the graphics hardware rather than making yet another round trip.
GIMP interpretation
Together, the release of GIMP 2.8, the sudden port of GIMP 2.9 to GEGL, and the immediate availability of GPU-accelerated operations add up to more GIMP news this year than in the previous few LGMs combined. Granted, true GIMP fanatics have probably been running development builds for many months, but for everyone who depends on distribution-provided packages, GIMP has taken a big jump forward.
Looking to the future, MMIworks always produces solid tools, and the rebuilt text-manipulation tools suggest that more original features are still in the works — warping text along the x-height line or paragraph-alignment line is certainly a new idea. Taken together with GPU-accelerated operations, GIMP is moving forward on the user-facing tools and on the back-end simultaneously. But the most significant news is perhaps Natterer's comments about how much smaller and easier to maintain the GEGL-based GIMP is. As an aside during his talk, he mentioned that the simplified core is easier for new developers to dig into and understand. For an open source project, that is one of the best features one could wish for.
[Thanks to the Libre Graphics Meeting for assistance with travel to Vienna.]
Tasting the Ice Cream Sandwich
Owners of Android handsets can be forgiven for feeling frustration over how long it took to get an update from the 2.3 "Gingerbread" release. Google's flat-out effort to improve tablet support led to a 3.0 ("Honeycomb") release that was not deemed suitable for handset use—or for open-source release. It was only with the 4.0 "Ice Cream Sandwich" (ICS) cycle that all that new code became available for handsets—sort of. Six months after the 4.0 release, your editor finally got his hands on a device that can run it; what follows is a review of sorts.
The availability of the 4.0 release has not been as wide as a lot of users might have hoped for. Upgrades for existing handsets have been slow in coming. And for many handsets, including your editor's trusty Nexus One, there will never be an official 4.0 system. Even worse, the CyanogenMod developers have also decided that they will not be working on porting the upcoming 4.0-based CM9 release to the Nexus One. Their reasoning is understandable—in short, the Nexus One lacks the memory to run this release in a pleasing way—but it is still somewhat sad. Once upon a time, the Linux community prided itself on continuing to support older hardware long after its manufacturers had forgotten about it. As Linux moves into the consumer electronics world, the ability and the desire to support last year's devices are both falling by the wayside.
Interestingly, it is not just the core system that leaves older devices behind. Users of older Android releases who search for this week's hot application often get a surprising result: nothing. If an application does not run under a given Android version, the "Google Play" program simply will not show it at all. (Perhaps more disconcertingly, viewing an application's page with a desktop browser yields the message that one's handset—not currently in use—is not supported). The end result is that users of last year's hardware can find themselves in a situation where parts of the application landscape simply seem to disappear over time—updates stop happening, and new applications may never become available.
The inability to run an application for LWN's payroll service, combined with the availability of an unlocked version of the Galaxy Nexus from Google and the simple desire for a new toy, drove the acquisition of a 4.0-capable device. There is one thing about the Galaxy Nexus that immediately stands out to a Nexus One (or Nokia N9—your editor's other device) user: its size. The Galaxy Nexus could almost be considered to be an extra-small tablet; it is large enough to be an uncomfortable fit in a pants pocket.
That size brings some advantages, of course, starting with the larger screen with its 1280x720 resolution. The phone features five-band 3G connectivity, a dual-core processor, a front-facing camera, and even a built-in barometer. The extra processing power and memory are immediately evident when using the handset; it is far more responsive than any Android handset your editor has used previously. Given all that, it may just prove possible to get used to hauling a larger handset around.
Google's version of the Galaxy Nexus is fully unlocked, meaning it is a simple matter of a single "fastboot oem unlock" command to unlock the bootloader, which is the key to installing a different operating system on the device. There is one little surprise worth knowing about: on this phone, unlocking the bootloader will wipe the device. Anybody who wants to do the kinds of things enabled by an unlocked bootloader is presumably prepared to cope with an amnesiac handset, but this behavior is still a good thing to know about.
The ICS experience
To users of previous Android versions, the Ice Cream Sandwich release can be a little jarring at first. Some things just aren't where one expects them to be anymore. Certain ingrained behaviors—holding down the home key to get a list of running applications, for example—don't work in the same way anymore (in this case, the application list has been moved to its own dedicated key). The application directory now scrolls sideways instead of upward. One can no longer place widgets or contact icons on the background by holding a finger there; one must, instead, notice the little tab in the application directory and use that. The search and menu buttons are long gone. In the menu case, the button has been replaced by an icon that may appear at the bottom of an application's screen, except when it appears at the top instead; playing "find the menu" can be one of the more awkward parts of the ICS experience. That notwithstanding, the interface mostly works well once one gets used to the new ways of doing things.
One simultaneously good and bad feature of Android phones is the way they upload so much information to the Google mothership. The good side becomes immediately evident when one moves into a new handset; an awful lot of things Just Work like they did on the previous one. Contacts and calendar events are there, applications magically install themselves, and so on. Your editor was a little surprised to observe that Android handsets now pass wireless network passwords up to Google as well; the new handset associated itself with the local network without even asking. Searching through the menus turns up a mention of WiFi passwords in the "backup" option; they are stored with the list of installed applications and other bits of miscellaneous information. There is no apparent way to turn off the backing-up of these passwords, which might well be regarded as sensitive information, without turning off the backup feature entirely.
One other surprise that has clearly hit a number of Galaxy Nexus owners is that the handset cannot function as a USB mass storage device when plugged into a computer. Instead, it wants to talk the media transfer protocol (MTP), which gives it better control over what is shared with the host. Alas, MTP is not particularly well supported in Linux; there is a FUSE-based mtpfs module, but it failed to function properly on your editor's system. The best approach seems to be to use an application that has libmtp support built into it; nautilus, for example, is able to move files to and from the phone with relatively little trouble.
There is, as it turns out, a whole series of applications out there aimed at making it easier to move files back and forth. Most of them set up some sort of web server on the device that can then be accessed from elsewhere on the net; some have fairly slick JavaScript-based browser interfaces. These applications also must be given full access to the entire device, and one must trust that they will let only the intended user into the device. One of them demanded the ability to access location data, which was a bit disconcerting: it certainly does not need that information to carry out its intended task. Linux-based users may be most at home with an application like SSHDroid, which runs an SSH server accessible in the usual ways.
There are some other nice 4.0 features worth a quick note. It includes a reasonably advanced mechanism for controlling and limiting wireless data use that can, among other things, monitor and clamp the usage of specific applications. Internet telephony with SIP is a native Android feature now, but, in a move clearly intended to mollify carriers, the handset will not do SIP calls unless a WiFi network is available. Android can now use dm-crypt to encrypt all the storage on the device; an encrypted phone requires a password at power-on or it will not be able to function. Those curious about the details of how whole-phone encryption works on Android can find some information on this page.
One other thing one notices quickly with the 4.0 release is the presence of a number of user interface features that, previously, were only available with CyanogenMod. The ability to tweak the color of the notification LED, more home screens, the configurable "favorite applications" bar at the bottom of the home screen, and the ability to go straight to an application from the lock screen—though the latter is limited to the camera on official Android—are examples. CyanogenMod may not have any sort of special path into official Android, but it seems clear that Google's developers are paying attention to what CyanogenMod is doing. That is not how a typical open source system might work, but it's far better than nothing.
On the other hand, other CyanogenMod features are still very much missing. Your editor misses the configurable "power bar" widget, for example. CyanogenMod allows the application directory to be displayed more densely, even on the Nexus One's smaller screen. The CyanogenMod camera application is superior to what Android offers, though, it must be admitted, the new panorama mode in the 4.0 camera application is kind of fun. And, of course, Android just does not offer the sort of configurability provided by CyanogenMod.
The good news is that, of course, there is a version of CyanogenMod 9 in the works for the Galaxy Nexus. Experimenting with the CM9 nightly builds has not yet begun in the LWN laboratories; it seemed worthwhile to get a good sense for stock Android 4.0 first. But the truth of the matter is that one does not truly appreciate a shiny new gadget until one has attempted to brick it. So stay tuned for a look at CM9 on this device sometime in the near future.
In the meantime, it is clear that the development of the Android platform continues at a fast pace. It has become visibly slicker and more capable over a relatively short period of time. For better or for worse, Android represents a highly successful combination of fully free software, corporate-controlled open source, and fully proprietary code. The result may not be quite the 100% free device that we would like, but it has led to a series of nicely shiny toys with a lot of hackability, which is not an entirely bad result.
Highlights from the PostgreSQL 9.2 beta
The PostgreSQL project has just released a beta of its next major version, 9.2. As usual with its annual release, this version includes many new features, most of which are targeted at improving database performance. The developers have been hard at work improving response times, increasing multicore scalability, and providing for more efficient queries on large data. They also found time to include some other major features, so let's explore a few of the things 9.2 beta has to offer.
JSON support
If the new non-relational databases (or "NoSQL") have proved anything, it's that many application developers want to store JSON objects in a database. With version 9.2, that database can be PostgreSQL.
The JSON support in PostgreSQL 9.2 isn't complete JSON database functionality, but it goes a long way toward that. First, there's a validating JSON data type, so that you can create tables with a specific JSON field:
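For example, the table can be defined with ordinary DDL, with json as just another column type (this definition is inferred from the psql display that follows):

    CREATE TABLE users (
        user_id      integer NOT NULL,
        user_name    text,
        user_profile json
    );

In psql, the resulting table looks like this: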
Table "public.users" Column | Type | Modifiers --------------+---------+------------ user_id | integer | not null user_name | text | user_profile | json |
Better still, there are multiple JSON conversion functions, including row_to_json() and array_to_json(), which allow you to get the results of a query in JSON format:
    select row_to_json(names)
      from ( select schemaname, relname
               from pg_stat_user_tables ) as names;

                    row_to_json
    -------------------------------------------
     {"schemaname":"public","relname":"users"}
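The companion array_to_json() works similarly; as a minimal sketch (using the users table defined above), combining it with the array_agg() aggregate returns an entire column as a single JSON array:

    -- collect every user name into one JSON array
    select array_to_json(array_agg(user_name)) from users;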
This means that applications can now send queries to PostgreSQL, get back results in JSON format, and immediately act on those results without further conversion. Unfortunately, it is not yet possible to send your query as a JSON object or JavaScript code, but that's likely to come in some future version of PostgreSQL.
To make the JSON support really useful, though, you need two optional components, or "extensions" to PostgreSQL: hstore and PL/v8. Hstore is a data type that stores indexed key-value data and ships with PostgreSQL. PL/v8 is a stored procedure language based on Google's V8 JavaScript engine, sponsored by Heroku and NTT.
Hstore allows you to store flattened JSON objects as a fully indexed dictionary or hash. PL/v8 lets you write fast-executing JavaScript code which can run inside the database to do all kinds of things with your JSON data. One such use is to create expression indexes on specific JSON elements, giving you stored search indexes much like CouchDB's "views".
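As a rough sketch of that technique (the function, the "city" key, and the users table here are illustrative, and the PL/v8 extension must already be installed):

    -- Extract one element from a JSON profile; the value is passed in
    -- as text and parsed in JavaScript. IMMUTABLE allows the function
    -- to be used in an expression index.
    CREATE FUNCTION profile_city(profile text) RETURNS text AS $$
        return JSON.parse(profile).city;
    $$ LANGUAGE plv8 IMMUTABLE STRICT;

    CREATE INDEX users_profile_city
        ON users (profile_city(user_profile::text));

Searches such as WHERE profile_city(user_profile::text) = 'Vienna' can then use the index rather than scanning every row.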
The PostgreSQL project chose to implement its own JSON formatting rather than utilizing any external library, reducing external dependencies and improving portability.
Range types
Anyone who's ever written a calendaring application can tell you that there's no such thing as a "point in time". Time comes in blocks, and pretty much everything you want to do with times and dates involves spanning minutes, hours, or days of time. For a long time, the only way relational databases had to represent spans of time was as two endpoints in different fields, an unsatisfactory and error-prone method.
In 9.2, contributor Jeff Davis introduces "range types" which allow the representation of any one-dimensional linear range, including time, real numbers, alphabetical indexes, or even points on a line. Such ranges can be compared, checked for overlap, and even included in unique indexes to prevent conflicts. PostgreSQL is the first major relational database system to have this concept.
To give you a concrete example, imagine you're writing a conference scheduling application. You want to make sure that no room can be scheduled for two speakers at the same time. Your table might look something like this:
    CREATE TABLE room_reservations (
        room_no        TEXT NOT NULL,
        speaker        TEXT NOT NULL,
        talk           TEXT NOT NULL,
        booking_period TSTZRANGE,
        EXCLUDE USING gist (room_no WITH =, booking_period WITH &&)
    );
That odd "EXCLUDE USING gist" clause says not to let anyone insert a record for the same room at overlapping times. It utilizes two existing PostgreSQL features, GiST indexes and exclusion constraints. This substitutes for dozens of lines of application code in order to enforce the same constraint. Then you can insert records like this:
    INSERT INTO room_reservations
    VALUES (
        'F104',
        'Jeff Davis',
        'Range Types Revisited',
        '[ 2012-09-16 10:00:00, 2012-09-16 11:00:00 )'
    );
As you can see, PostgreSQL's range types support mathematical closed and open bracket notation, helping you define ranges which do or don't overlap with adjacent ranges.
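The overlap operator (&&) used in the exclusion constraint can also be used directly in queries; as a minimal sketch against the table above:

    -- which reservations overlap the lunch hour?
    SELECT room_no, speaker, talk
      FROM room_reservations
     WHERE booking_period &&
           '[2012-09-16 12:00:00, 2012-09-16 13:00:00)'::tstzrange;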
Scalability to 64 cores
Thanks to its Multiversion Concurrency Control (MVCC) architecture, PostgreSQL does not need to hold any locks for reading data, just for writing data. This should, in theory, allow for near-infinite multicore scalability of a read-only workload. But in reality, PostgreSQL 9.1 only scaled to around 24 cores before throughput per core fell off sharply. This really irritated PostgreSQL contributors Noah Misch and Robert Haas, so they decided to do something about it.
The main reason for that falloff was that PostgreSQL was actually holding locks for each read. The biggest of these was a unitary lock on the table to make sure that it didn't get dropped while the read query was still running. When you have a lot of very short queries doing reads against the same table, contention on this table lock becomes a major bottleneck. Through a combination of repartitioning the lock memory space and reducing the time required to get a lock, they largely eliminated that bottleneck.
Other contributors, such as lead developer Tom Lane, made other optimizations to the read-only workload, by, for example, reducing memory copying for cached query plans. The University of California at Berkeley donated the use of a high-memory 64-core server for testing. The results of all of these optimizations are gratifying and dramatic.
PostgreSQL now scales smoothly to 64 cores and over 350,000 queries per second, compared to topping out at 24 cores and 75,000 queries per second on PostgreSQL 9.1. Throughput is better even at low numbers of cores. Note that this is on an extreme workload: all primary-key lookups on a few large tables which fit in memory. While it may seem obscure, that workload describes many common web applications (such as Rails applications), as well as the kind of workload many of the new key-value databases are designed to handle.
Index-only access
One of the features other database systems have, which PostgreSQL has lacked, is the ability to look up data only in an index without checking the underlying table. In the databases which support it (such as MySQL and Oracle) this is a tremendously useful performance optimization, sometimes called "covering indexes".
The reason why it's useful is that, for very large tables, an index on one or two columns can be 1/100th the size of the table and is often cached in memory even when the table is not. Thus, if you can touch only the index, you can avoid I/O, potentially making your query return twenty times faster. It's even more useful if the table in question is only going to be used to join two other tables on their primary keys.
However, the same MVCC which makes read queries scale so well on PostgreSQL made index-only access difficult. Even if the data you wanted was in the index, you had to check the base table for concurrency information. But then contributor Heikki Linnakangas created a highly cacheable bitmap called the Visibility Map, so that the query executor only has to check the base table for data pages which have been recently updated.
This means that, in 9.2, you'll be able to get your query answered just by the index in many or most cases, speeding up queries against large tables. Yes, this also means an end to most "COUNT(*) is slow on PostgreSQL" issues.
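As a minimal sketch (again using the hypothetical users table from the JSON section), a query that touches only indexed columns can now be answered without visiting the table at all:

    CREATE INDEX users_name_idx ON users (user_name);

    -- In 9.2, EXPLAIN should show an "Index Only Scan" node here:
    EXPLAIN SELECT user_name FROM users WHERE user_name = 'alice';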
The caveat with this feature is that the table or tables in question need to be fairly up-to-date in database maintenance ("VACUUM"), which limits the ability to use the optimization on tables with a lot of "churn". Regardless, for many common use cases and for data warehouses, index-only access will be an order-of-magnitude performance improvement.
Cascading replication
Of course, these days horizontal scalability is a lot more popular than vertical scalability. The PostgreSQL developers, particularly Jun Ishiduka, Fujii Masao, and Simon Riggs, have continued to improve the binary replication introduced in PostgreSQL 9.0. Version 9.2 now contains support for cascading replication, which I will explain by example:
Imagine that you have three load-balancing PostgreSQL servers in Amazon US East, and another cluster of three replicated PostgreSQL servers in Amazon US West for failover in case of another AWS region-wide outage. If you want to use streaming replication, each server in US West needs to replicate directly from the master in US East, driving your transfer costs through the roof.
What you really want is the ability to replicate to database server 1 in US West, and have the two other servers in US West replicate from that server. With PostgreSQL 9.2, you can.
Configuration is fairly simple if you already have PostgreSQL 9.1 replication set up. Simply tell the cascading replica to accept replication connections by setting the max_wal_senders parameter, then connect to it from the downstream replicas.
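A minimal sketch of the relevant settings (the host name and values here are illustrative, not from the release notes):

    # postgresql.conf on the cascading standby in US West:
    max_wal_senders = 5     # allow downstream standbys to connect
    hot_standby = on

    # recovery.conf on each downstream standby, pointing at the
    # cascading standby rather than at the master:
    standby_mode = 'on'
    primary_conninfo = 'host=uswest-standby1 user=replication'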
Other features
This isn't everything in the PostgreSQL 9.2 beta. There are performance enhancements for writes such as group commit, a new class of index called SP-GiST, reductions in CPU power consumption, multiple enhancements for modifying database schemas at runtime, and even several new database-monitoring commands. You can read about the new features in the PostgreSQL 9.2 beta release notes and the beta documentation.
Some features, planned for 9.2, didn't make it into this release due to issues found during development and testing. These include checksums on data pages to detect faulty storage hardware, federated databases, regular expression indexing, and "command triggers" which can launch an action based on arbitrary database events. Hopefully we'll see all of these in PostgreSQL 9.3 next year.
The PostgreSQL project hopes you'll download and test the beta to help identify and fix bugs in version 9.2. If the project holds true to its timeline of the last couple of years, the final release of version 9.2 should arrive some time in September.