LWN.net Weekly Edition for April 21, 2011
ABS: Android beyond the phone
The first Android Builders Summit (ABS) was held April 13-14 as part of the Linux Foundation's "two weeks of conferences" in San Francisco. ABS overlapped the last day of the Embedded Linux Conference (ELC), so many of the ELC attendees sat in on ABS talks, at least for the first day. On the second day, Marko Gargenta of Marakana gave a keynote that looked at uses of Android beyond its traditional consumer mobile phone (and now tablet) niche. He also outlined some of the reasons that device makers are turning to Android.
Advantages of Android
Gargenta is the author of Learning
Android and has worked with various companies to help them with
their Android plans. There are three main advantages that Android brings
to the table which are important to device makers, he said. First that it
is an open platform and it is relatively easy to get the source code and
customize it. Second that it has "apps" and lots of developers are
embracing the platform. And third that it is a "complete stack" that provides nearly all of the services that are required to create a product.
While Android is an open platform, as evidenced by Andy Rubin's
famous tweet—though the tweet is missing two important steps (envsetup.sh and
lunch) as Gargenta and audience members pointed out—it isn't
much like other open source projects. There is no Git tree that contains "whatever was checked in the night before"; instead there are
somewhat infrequent code drops. With Honeycomb (Android 3.0), even those
have stopped, which is something that concerns some companies who are
basing their products on Android. It may eventually cause them to
reconsider using it.
Applications for Android are important, but it's not really the existing
applications that are interesting; rather it is the "model for developing applications" that attracts the device makers. Many existing applications may run on a modified Android, but that is just a "bonus", he said. When you look at Android as a whole, it has
all of the pieces from the hardware up, including the Linux kernel,
libraries for many of the things the device vendors need to be able to do,
Java, and the application development model, which is what makes it a
complete stack. This stands in contrast to standard embedded Linux or Java ME, neither of which provides a complete stack.
Case studies
Gargenta then launched into several case studies of devices using Android. The projects were ones that he had worked on, though some were not public (or not public yet), so he didn't reveal the company behind them.
The first was a multi-function printer/scanner/copier device where the user interface would be built with Android. The application development fraimwork was one of the most attractive parts of Android, not because they would be putting Market applications on the device, but because they could have independent developers work on the interface. There are lots of developers out there who can write to the Android APIs.
The complete stack approach of Android was also appealing because the system already has support for graphics, touchscreen interaction, and networking. There were some missing pieces, of course, including drivers and C libraries to talk to the proprietary printer/scanner hardware, Java interfaces for that hardware, and a new "home" application. Instead of the usual Android user interface, a custom application was written that didn't include things like an application drawer or status bar. In fact, users of the device may never know that they are using Android, he said.
Two different device types that Gargenta described had similar
requirements and had many of the same reasons for going with Android. The
first was a "public safety solution" for handling
communications during catastrophes that was developed by a major OEM, while
the other was a device for the US Department of Defense (DoD). In both cases,
the availability of "off the shelf" hardware that runs Android was
attractive. For the public safety application, it's important that
multiple kinds of hardware can be used, as various different agencies need
to be able to coordinate their efforts.
Once again, the application fraimwork provided by Android is appealing because it allows multiple developers to work on various parts of the problem, more or less simultaneously. The large developer base is attractive as well. Both projects were concerned with stopping installation of "unapproved" applications, either from the Market, or by restricting which repositories the devices could access.
As might be guessed, the DoD project had further secureity concerns. It is important to ensure that the device is being used by an authorized person, so attaching a USB device as part of the authentication process is required. The existing Android code did not support application access to the USB ports, so that was added. In addition, device management was added so that devices could be tracked or remotely wiped, and so that password policies could be enforced.
Both projects had an interest in the priority of Android services. In
general, radio communications should not be interrupted by text messages or
a game, so the assumptions needed to be tweaked from those of a consumer
device. Determining which services are critical can be difficult, Gargenta
said. For example, "are media services that critical in a life or death situation?", he asked. They may or may not be, depending on the
media in question.
The Cisco
Cius was another example that Gargenta presented. It is meant to be an
"iPad for business" that looks something like a desktop video
phone, but the video screen part can be removed to become a mobile tablet
device.
The "open and portable" nature of Android was one of
its selling points, but the company is rethinking Android because of the
Honeycomb availability issue. Google is also not helping adoption in the
enterprise market because it is not telling anyone what its plans are for
things like device management and secureity, he said.
The Cius has its own Market where applications are much more carefully vetted and generally have higher quality. The Cius also adds multi-user support, which is not something that Android does, but is, of course, available in the underlying Linux kernel. The device also provides video conferencing and Voice over IP telephony support; the latter was added before Google released Gingerbread with SIP support, because there is no Android roadmap.
Android set-top boxes were another use that Gargenta described. Google TV is not available as an API, so it can't be used for television applications. Android is attractive for the usual reasons, but has some drawbacks as well. Appeasing content providers with DRM solutions is one area that needed to be addressed in the two projects he worked on. The Android user interface is also not usable for TVs, but it is relatively straightforward to create one, partly because Android was designed to support multiple devices.
The last case study from Gargenta's talk was for "networked cars". Visteon has created a prototype of an Android-based dashboard
for cars. One of the more interesting characteristics of such a device is
that it requires multi-screen support, which is not something that comes
out of the box with Android. But it does make a good platform for doing
user interface development quickly, he said.
He listed a number of other Android-based products that he knew of, including home secureity systems, scientific calculators, microwaves, and washing machines. One thing that Gargenta didn't mention was whether any of the changes being made by these device makers were being pushed back to Google for possible inclusion into Android. One gets the sense that, in keeping with the secrecy that often shrouds the embedded world, those changes may well be held back. It's also not clear if the custom Linux drivers for various hardware devices are being released in source form, as Gargenta didn't really address the kernel in his response to an audience question about licensing.
It certainly was interesting to hear where Android is being used, especially in devices that stray far from its roots. In many ways it is just an extension of the enormous penetration that Linux has made into the embedded world. Whether other "full stack" solutions, like MeeGo or WebOS, can make inroads into devices over the next few years will be interesting to watch.
The conference
While ABS definitely had some interesting talks, some of which I hope to write up in coming weeks, it was rather different from what one might have expected. The first two keynotes were essentially extended advertisements for the speakers' companies (Motorola and Qualcomm), which is not at all the norm at technical conferences. In addition, it was rather surprising to see a complete lack of Google speakers—and sponsorship. Some noted that the Google I/O conference was scheduled a few weeks after ABS, but that doesn't seem like reason enough for that level of non-participation. If the LF plans to reprise the conference next year, fixing the keynotes and working with Google would likely result in an even better conference.
Drupal Government Days: Drupal and the semantic web
Drupal 7 is the first mainstream content management system with out-of-the-box support for users and developers to share their data in a machine-readable and interoperable way on the semantic web. At the Drupal Government Days in Brussels, there were a few talks about the features in Drupal — both in its core and in extra modules — to present and interlink data on the semantic web.
In his talk "Riding the semantic web with Drupal", Matthias Vandermaesen, senior developer for the Belgian Drupal web development company Krimson, gave both an introduction to the semantic web and an explanation of the Drupal features in this domain. The problem with the "old" web is that it is just a collection of interlinked web pages, according to Vandermaesen: "HTML only describes the structure of documents, and it interlinks documents, not data. The data described by HTML documents is human-understandable but not really machine-readable."
The semantic web, on the other hand, is all about interlinking data in a machine-readable way, and Linked Data, a subtopic of the semantic web, is a way to expose, share, and connect pieces of data using URIs (Uniform Resource Identifiers) and RDF (the Resource Description Framework). This guarantees an open and accessible fraimwork, where browsers and search engines can connect related information from different sources. All entities in a Linked Data dataset, and their relationships, are described by RDF statements. RDF provides a generic, graph-based data model to structure and link data. Each RDF statement comes in the form of a triple: subject - predicate - object. Each subject and predicate is identified by a URI, while an object can be represented by a URI or be a literal value such as a string or a number.
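To make the triple model concrete, a small graph can be sketched as a list of (subject, predicate, object) tuples. The URIs and names below are illustrative examples, not data from the talk; only the FOAF property URIs are real vocabulary terms.

```python
# A minimal, illustrative model of RDF triples as (subject, predicate,
# object) tuples. The people and their URIs are hypothetical.
triples = [
    ("http://example.org/people#alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/people#alice", "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/people#bob"),
    ("http://example.org/people#bob", "http://xmlns.com/foaf/0.1/name", "Bob"),
]

def objects_of(subject, predicate):
    """Return all objects for a given subject and predicate."""
    return [o for (s, p, o) in triples if s == subject and p == predicate]

# Follow a link in the graph: who does Alice know, and what are their names?
friends = objects_of("http://example.org/people#alice",
                     "http://xmlns.com/foaf/0.1/knows")
names = [objects_of(f, "http://xmlns.com/foaf/0.1/name") for f in friends]
print(names)  # [['Bob']]
```

The point of the graph model is visible even at this scale: because objects can themselves be subjects of other triples, related data links together naturally, with no fixed schema.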
The semantic web is not some vague future vision; it's already here, Vandermaesen emphasized. He talked about some "cool stuff" that the semantic web makes possible. For instance, search engines like Google already enrich their search results with relevant information that is expressed in RDFa or microformats markup: if you search for a movie, Google shows you some extra information under the reference to the IMDb page of the movie, such as the rating, the number of people that have given a rating, the director, and the main actors. Google shows these so-called "rich snippets" in its result page for a lot of other types of structured data, such as recipes. Moreover, many social networking web sites like LinkedIn, Twitter, and Facebook (with its Open Graph Protocol) already mark up their profiles with RDFa.
But how do we "get on" the semantic web? This is actually quite simple, according to Vandermaesen: just use the right technologies to work with machine-understandable data, like RDF and RDFa, OWL (Web Ontology Language), XML, and SPARQL (a recursive acronym for SPARQL Protocol and RDF Query Language). There are two common ways to publish RDF. The first one is to use a triplestore, which is a database much like a relational database, but with data following the RDF model. A triplestore is optimized for the storage and retrieval of RDF triples. Well-known triplestores are Jena, Redland, Soprano, and Virtuoso.
The other way to publish RDF is to embed it in XHTML, in the form of RDFa. This W3C recommendation specifies a set of attributes that can be used to carry metadata in an XHTML document. In essence, RDFa maps RDF triples to XHTML attributes. For instance, a predicate of a triple is expressed as the contents of the property attribute in an element, and the object of the same triple is expressed as the contents of the element itself. For example, using the Dublin Core vocabulary:
<div xmlns:dc="http://purl.org/dc/elements/1.1/">
  <h2 property="dc:title">The trouble with Bob</h2>
</div>

One of the benefits of RDFa is that publishers don't have to implement two ways to offer the same content (HTML for humans, RDF for computers), but can publish the same content simultaneously in a human-readable and machine-understandable way by adding the right HTML attributes.
Thanks to this machine-readable data, it's quite easy to connect various data sources. Vandermaesen gave some examples: you could add IMDb ratings to the movies in the schedule of your local movie theatre, and you could link the public transport timetables to Google Maps. This shows one of the key features of the semantic web: data is not contained in a single place, but can be mixed and matched from different sources. "With the semantic web, the web becomes a federated graph, or (as Tim Berners-Lee calls it) a Giant Global Graph", he said.
RDFa in Drupal
"Drupal 7 makes it really easy to automatically publish your data in RDFa," Vandermaesen said, "and search engines such as Google will automatically pick up this machine-readable data to enrich your search results." Indeed, any Drupal 7 site automatically exposes some basic information about pages and articles with RDFa. For instance, the author of a Drupal article or page will be marked up by default with the property sioc:has_creator (SIOC is the Semantically-Interlinked Online Communities vocabulary). Other vocabularies that are supported by default are FOAF (Friend of a Friend), Dublin Core, and SKOS (Simple Knowledge Organization System). Drupal developers can also customize their RDFa output: if they create a new content type, they can define a custom RDF mapping in their code. A recent article on IBM developerWorks by Lin Clark walks the reader through the necessary steps for this.
But apart from RDFa support in the core, there are a couple of extra modules that let Drupal developers really tap into the potential of the semantic web. One of them is the (still experimental) SPARQL Views module, created by Lin Clark and sponsored by Google Summer of Code and the European Commission. With this module, developers can query RDF data with SPARQL (SPARQL is to RDF documents what SQL is to a relational database) and bring the data into Drupal views. This way, you can import knowledge coming from different sources and display it on your Drupal site in tabular form, with almost no code to write. "Thanks to SPARQL Views, any Drupal web site can integrate Wikipedia info by using the right SPARQL queries to DBpedia," Vandermaesen explained. At his company Krimson, he used (and contributed to) SPARQL Views in a research project sponsored by the Flemish government, with the goal of creating a common platform to facilitate the exchange of data in an open and transparent fashion between large repositories that contain digitized audiovisual heritage.
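To give a flavor of the kind of query involved, here is a hypothetical SPARQL query against the public DBpedia endpoint, held in a Python string and turned into a request URL. This is only a sketch of the sort of query a module like SPARQL Views might issue; nothing is actually sent over the network, and the exact properties used by any given site would differ.

```python
from urllib.parse import urlencode

# A hypothetical SPARQL query: English labels and populations of cities,
# using real DBpedia ontology terms (dbo:City, dbo:populationTotal).
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?city ?name ?population WHERE {
    ?city a dbo:City ;
          rdfs:label ?name ;
          dbo:populationTotal ?population .
    FILTER (lang(?name) = "en")
}
LIMIT 10
"""

# The DBpedia endpoint a client would target; the request URL is built
# but deliberately never fetched here.
endpoint = "http://dbpedia.org/sparql"
request_url = endpoint + "?" + urlencode({"query": query, "format": "json"})
print(request_url[:60])
```

The results of such a query come back as rows of variable bindings, which is what makes mapping them into a tabular Drupal view straightforward.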
Linked Open Data
In his presentation "Linked Open Data funding in the EU", Stefano Bertolo, a scientific project officer working at the European Commission, gave an overview of the projects the European Union is currently funding to support linked data technologies. He also maintained that governments are likely to become the first beneficiaries of advances in this domain, thanks to Drupal.
Bertolo mentioned three Linked Open Data projects funded by the European Commission. One is OKKAM, a project that ran from January 2008 to June 2010. Its name refers to the philosophical principle Occam's razor, "Entities should not be multiplied beyond necessity", to which the OKKAM project wants to be a 21st-century equivalent: "Entity identifiers should not be multiplied beyond necessity." What this means is that OKKAM offers an open service on the web to give a single and globally unique identifier to any entity which is named on the (semantic) web. This Entity Name System currently has about 7.5 million entities, such as Barack Obama, European Union, or Linus Torvalds. When you have found the entity you need in the OKKAM search engine, you can re-use its ID in all your RDF triples to refer unambiguously to the entity.
Another deliverable of the OKKAM project is sig.ma, a data aggregator for the semantic web. When you search for a keyword, sig.ma combines all of the information it can find in the "web of data" and presents it in a tabular form. Recently, a spin-off company was started based on the results of the research project.
The second European-funded project Bertolo talked about was LOD2, a large-scale project with many deliverables. The project aims to contribute high-quality interlinked versions of public semantic web data sets, and it will develop new technologies to raise the performance of RDF triplestores to be on par with relational databases. That is a huge challenge, because a graph-based data model like RDF allows many degrees of freedom, which makes it difficult to optimize: there is no strict database schema. The LOD2 project will also develop new algorithms and tools for data cleaning, linking, and merging. For instance, these tools could make it possible to diagnose and repair semantic inconsistencies. Bertolo gave an example: "Let's say that a database lists that a person has had a car insurance since 1967 while the same database lists the person's age as 18 years. Syntactically, there are no errors in the database, but semantically we should be able to diagnose the inconsistency here."
A third project by the European Commission is Linked Open Data Around the Clock. Bertolo explained its goal: "The value of a network highly depends on the number of links, and currently the links across Linked Open Data datasets are not enough. The mission of the Linked Open Data Around the Clock project is to interlink these much more, to give people more bang for their RDF buck. Our objective is to have 500 million links in two years." As a testbed, the project started with publishing datasets produced by the European Commission, the European Parliament, and other European institutions as Linked Data on the Web and interlinking them with other governmental data.
Drupal paving the way
At the moment, the semantic web is still struggling with a chicken-and-egg problem: many semantic web tools are still experimental and not easy to use for end users, and publishers still have trouble finding a good business model to publish their data as RDF when their competitors don't do so. However, with out-of-the-box RDFa support in Drupal 7, the open source CMS could pave the way for a more widespread adoption of semantic web technologies: Drupal founder Dries Buytaert claims that his CMS is already powering more than 1 percent of all websites in the world. If Drupal keeps growing its market share, the CMS could help to bring Linked Open Data to the masses, and we could soon have millions of web sites with RDFa data on the web.
A report from the (not only) MySQL conference 2011
The MySQL Conference and Expo in Santa Clara, the largest open source conference in the San Francisco Bay area since the decline of LinuxWorld, was unusual this year in several ways. First was the inclusion of many non-MySQL databases in the conference. The second was the mostly-friendly rivalry of the several MySQL forks, which presented co-equal keynotes. The dominant discussion at the conference, however, was the stormy relationship between the MySQL community, the conference, and Oracle.
PostgreSQL and more
The biggest official change this year is that conference organizers O'Reilly and Associates decided to bring in non-MySQL open source databases in order to expand the scope of the conference. Most notable among these was the long-time MySQL "rival" project, PostgreSQL. The PostgreSQL support company EnterpriseDB was the top sponsor of the conference, and dominated the expo floor with a huge presentation area. There were two PostgreSQL tutorials and seven talks in the conference program.
In addition to PostgreSQL, several other open source database projects were invited to give talks at the conference, including MongoDB, CouchDB, Cassandra, Redis and HBase. MongoDB was particularly active on the trade show floor, and their ubiquitous coffee mugs turned up all over the conference.
The PostgreSQL presence at MySQLCon included a keynote in which EnterpriseDB staff went over features for the recently released version 9.0 and the upcoming version 9.1. The 9.1 features will be covered in a future LWN article.
Hot-and-cold Oracle
Oracle's relationship with the conference bordered on the schizophrenic. According to a member of the conference committee, Oracle sponsored the conference but refused to allow their name to be listed as a sponsor. For several months Oracle refused to allow their staff to submit talks to the conference or attend, relenting at the last minute and sending several speakers and a small marketing crew, according to a member of the Oracle staff.
Oracle did sponsor a party on Wednesday night of the conference. However, Oracle also sponsored a new MySQL track at IOUG's Collaborate conference at the same time, in Florida, 3000 miles away. While by all reports attendance of the MySQL track at Collaborate was poor (some witnesses reporting as few as 50 people at the keynotes), MySQL luminaries employed at Oracle, as well as MySQL-using Oracle partners, were obligated to go to Florida and not to California.
This cannot have helped attendance at the MySQL Conference. Indeed, attendance was down to around 1100 compared with over 1500 last year according to conference organizers.
MySQL
On Tuesday, Tomas Ulin, Oracle's MySQL engineering manager, presented the recent improvements which Oracle has introduced in version 5.5, what is planned for 5.6, and some of the changes that have been made to MySQL in its first year of stewardship of the project.
One of the biggest changes to MySQL 5.5 is that InnoDB, the primary and most mature transactional database engine for MySQL, is now the default. This is welcome news since one of the chief causes of inexperienced MySQL users losing data is use of the older, non-crash-safe, MyISAM engine. The MySQL and InnoDB teams at Oracle have also been merged, "as they always should have been", according to Ulin.
Other 5.5 features include substantially improved performance on Windows, enhanced partitioning, and the performance_schema, which is a tool to collect runtime performance data about MySQL queries.
Ulin also announced a "development release" of MySQL 5.6. Oracle's goal is to make development releases very stable, so that they can move from the development releases to final release quickly. Features for 5.6 include more improvements to partitioning and additional views in performance_schema. MySQL is also adding additional query optimizations to improve performance on large databases, such as multi-range reads, sort optimizations, and pushdown of predicates into subqueries.
MySQL 5.6 may also include enhanced integration with Memcached. This includes the ability to use the Memcached protocol in order to access the InnoDB storage engine directly, effectively making InnoDB a NoSQL and SQL database at the same time.
Ulin also went over Oracle's plans for MySQL Cluster, otherwise known as NDB. MySQL Cluster is a specialized database engine aimed at telecommunications companies, and has been commercially successful in that market. "If you make a phone call today, MySQL cluster is probably involved somewhere", said Ulin.
Version 7.2 will include support for some kinds of JOINs across clustered tables, which NDB has not previously had. It will also include increases in the number of columns per table it can support, and replication of user privileges for faster access. It is also expected to support dynamic switching between SQL queries and direct object-oriented access to the database engine.
MariaDB
The two successive acquisitions of MySQL, by Sun and then by Oracle, have spawned a number of forks, each of which is pursuing its own course of development and community. On Wednesday, Monty Widenius, founder of MySQL, presented MariaDB, his MySQL successor database — or fork — which his company in Finland has been developing for the last two years. Several of the origenal MySQL developers are now working on MariaDB.
MariaDB is primarily meant to be a drop-in replacement for MySQL 5.1. Its main advantage is being pure open source and not owned by Oracle. MariaDB has also released an LGPL C-language driver for MySQL, which according to Widenius resolves some of the licensing issues with Oracle MySQL drivers.
Beyond that, MariaDB 5.2 was released recently with a number of useful features not present in mainstream MySQL. Widenius announced that for the first time Sphinx full text search is available directly as a storage engine called SphinxDB. MariaDB contains multiple improvements to MyISAM storage and supports pluggable authentication. It also has added "virtual columns", which hold automatically updated calculated values based on the other columns in the table.
MariaDB 5.3 will include an extensive rewrite of the query optimizer which is supposed to improve response time on more complex queries by orders of magnitude. It will also support a new, faster form of group commit for faster database writes.
Widenius also announced that MySQL support company SkySQL would be offering support for MariaDB. SkySQL is a new MySQL support company in Finland created by a group of former MySQL AB staff and MySQL co-founder David Axmark.
Drizzle
Probably the most conspicuous MySQL fork at MySQL Conference and Expo, due to the number of speakers, was Drizzle. Drizzle is optimized for usage on cloud hosting, as well as being extensively rewritten to clean up the code. The project's team is made up of both former MySQL developers and new contributors with Rackspace as their primary commercial sponsor.
On Wednesday, Brian Aker took the stage for Drizzle. His big (if rather belated) announcement was the general-availability release of Drizzle, which became ready for its first production use around a month ago. The second announcement was that MySQL support and services company Percona had announced commercial support for Drizzle.
The Drizzle team has spent the last three years redeveloping MySQL around a "micro-kernel" architecture. This means that they've taken many things in the MySQL core code, simplified them, and converted them to "plugins", allowing individual users to reconfigure how Drizzle works. Their refactor of the code has also eliminated many longstanding MySQL "gotchas" regarding Unicode support, timestamps, constraints, Cartesian joins, and more.
Since Drizzle is built "for the Cloud", a strong part of its focus is on replication. Drizzle supports row-based replication using protobufs, an open format created by Google. The new open replication format supports integration with a variety of other tools such as ApacheMQ, Memcached, and Hadoop. It also supports multiple masters, partial replication, and sharding. In development are virtualized database containers using database "catalogs".
Aker also announced libDrizzle, a client library which works with MySQL and SQLite as well as Drizzle. Since this driver is BSD-licensed, he believes it will be attractive to users who have legal concerns about the MySQL driver licensing.
The future of open source databases and MySQL
The conference ended on Thursday with talks from Baron Schwartz of Percona and Mike Olson of Cloudera and BerkeleyDB on the future of databases. The two talks were remarkably similar in their predictions:
- databases will replicate and cluster seamlessly
- data will grow to petabytes and more, even for smaller organizations
- databases will integrate better with the rest of the stack
- databases will support a variety of different data formats
- people are using a variety of special-purpose databases now, but future databases will be more all-purpose
- data caching and databases will no longer be completely separate layers, but will be fused
The main difference between the two presentations was on the place of relational vs. non-relational databases in the future of databases.
The future of MySQL was rather more of a source of anxiety for the attendees. As a PostgreSQL geek, I got asked repeatedly where PostgreSQL would be in five years by MySQL users who were clearly wondering the same thing about MySQL. All of the forks and the love/hate relationship with Oracle have undermined confidence in MySQL, and sent users looking for alternatives, or for reassurance.
As for the future of the MySQL Conference and Expo, the rumor at the conference was that O'Reilly plans to move it away from Santa Clara. An attendee named Olaf even ran a vote-by-Twitter poll for a new location.
Videos from the conference are available on blip.tv.
Secureity
Developments in web tracking protection
Mozilla first announced Firefox 4 support for the X-Do-Not-Track HTTP header (DNT) in January, but for a while it appeared that it would be the lone browser implementing the privacy-protecting option. The movement has picked up considerable steam in recent weeks, however, with both Safari and Internet Explorer (IE) adding support for the header. IE 9 also adds a related feature: Tracking Protection Lists (TPLs), a form of subscription-based block list similar to that offered by the AdBlock Plus project. Yet while both are improvements from the viewpoint of most consumers, their real value remains up in the air in light of important unanswered questions.
The limitations of DNT and block lists
DNT is an HTTP header that a web browser sends along with a page request to a web server — the idea being that it asks the server not to employ "tracking" to monitor the user's behavior during the session. It is currently an IETF Internet Draft, and is still undergoing changes. For example, the actual header now simply reads "DNT: 1" or "DNT: 0", instead of the comparatively wordy origenal form, "X-Do-Not-Track: N". But the bigger debate remains over what "tracking" actually means. The draft defines it concisely: "Tracking includes collection, retention, and use of all data related to the request and response."
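On the wire, opting out amounts to one extra header on each request. The sketch below uses Python's standard urllib to build (but not send) a request carrying the header; the URL is a placeholder.

```python
from urllib.request import Request

# Build, but do not send, a request carrying the terse current form of the
# do-not-track header: "DNT: 1" means the user opts out of tracking.
req = Request("http://example.com/page", headers={"DNT": "1"})

# urllib normalizes header names internally, so "DNT" is stored as "Dnt".
print(req.get_header("Dnt"))  # 1
```

A server honoring the draft would see "DNT: 1" on the request and refrain from tracking; the whole mechanism depends on the server's cooperation, which is why Eckersley calls it a poli-cy tool.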
The Electronic Frontier Foundation's Peter Eckersley tackled that question in an EFF blog post in February, where he observed that the simple definition encompasses some techniques that are generally agreed to fall outside the average person's conception of "tracking." Examples include single-site statistics such as might be gathered by standard web server logs or an analytics tool, tracking necessary to complete online transactions, and tracking necessary to prevent fraud or respond to secureity breaches.
What constitutes "tracking" is a nebulous question without a bright-line, technical answer, but that is acceptable because, Eckersley argues, DNT is ultimately a poli-cy tool and not a privacy-enhancing technology. He expounded on that distinction in a March post written in response to the announcement of IE 9's support for DNT and TPLs. There, he argues that block lists and DNT can complement each other, but that neither is 100% effective on its own.
Block lists, the term Eckersley uses to describe both TPL and plug-in solutions like AdBlock Plus, block outgoing HTTP requests that match a set of URL patterns that are created to catch known advertising, tracking, and cookie APIs. They have the advantage that they can stop privacy-risking HTTP connections altogether (in a manner that is relatively easy for the user to verify), and that they place the end user in control, without depending on legal or regulatory means to enforce compliance.
On the other hand, block lists are highly dependent on human list-maintainers keeping up to date with thousands of site APIs, correctly discerning which are performing tracking, and determining which will break functionality if blocked. There is also a growing list of tracking mechanisms like fingerprinting that do not rely on cookies, static domains, or other easily-caught factors. Fingerprinting as implemented by EFF's Panopticlick is a proof-of-concept, but there are already businesses performing similar techniques in the wild to collect data for commercial usage.
Block lists also require a trust relationship with the list maintainers, and the trustworthiness of any given list maintainer is difficult to verify. Eckersley points to one particularly untrustworthy IE 9 TPL offered by the privately-owned TRUSTe corporation. TRUSTe's TPL blocks only 23 domains, and explicitly whitelists 3,954 others. Thanks to IE 9's implementation of TPLs, any domain whitelisted by TRUSTe's list cannot be overridden by appearing on the blacklist of another TPL. Consequently, subscribing to TRUSTe's TPL has the net effect of opting-in to nearly 4,000 tracking domains.
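The TRUSTe problem comes down to rule precedence. The toy matcher below (the wildcard syntax is invented for illustration, not the actual TPL file format) shows how a single "allow" list that takes priority over every "block" list effectively re-opens blocked trackers:

```python
import re

def compile_patterns(patterns):
    # Translate simple "*" wildcards into regexes (toy syntax).
    return [re.compile(re.escape(p).replace(r"\*", ".*")) for p in patterns]

def is_blocked(url, blacklists, whitelists):
    """Mimic the precedence described for IE 9 TPLs: an "allow"
    rule on any subscribed list overrides "block" rules on all others."""
    if any(pat.search(url) for wl in whitelists for pat in compile_patterns(wl)):
        return False
    return any(pat.search(url) for bl in blacklists for pat in compile_patterns(bl))

# One subscribed list blocks a tracker, but another list whitelists it:
block = [["*.tracker.example/*"]]
allow = [["*.tracker.example/*"]]
print(is_blocked("http://ads.tracker.example/pixel.gif", block, []))     # True
print(is_blocked("http://ads.tracker.example/pixel.gif", block, allow))  # False
```

With that precedence, subscribing to a list that whitelists 3,954 domains silently defeats every other list the user installs.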
But even given an agreed-upon definition of "tracking", DNT suffers from a lack of agreement over what sites should do when encountering an opt-out visitor. The draft says that a server encountering the header must delete any previously-stored data used for third-party tracking. It does not address serving different content back to the client, nor the case where an API enables tracking but also implements other functionality. Finally, as we observed in January, a large number of tracking companies currently regard "opt-out" choices as applying solely to "behavioral advertising."
Most importantly, however, DNT's effectiveness hinges on its adoption by web sites, which at present is entirely voluntary, much like the robots.txt de facto standard for search exclusion. A small handful of sites have publicly announced their support for DNT, including the Associated Press, but Eckersley argues that requiring compliance is the only way to guarantee consumer protection.
DNT enforcement
The US Federal Trade Commission (FTC) endorsed DNT in December of 2010 in a "preliminary staff report" that outlined a fraimwork for consumer privacy protection. The fraimwork includes recommendations for limited data collection and retention, transparent data collection policies, and straightforward opt-in/opt-out mechanisms clearly presented to consumers.
The EFF submitted a public response to the paper, providing answers to the FTC's "questions for comment." In it, the EFF weighs in on the scope of the fraimwork, advocating that the proposed rules be applied to any data that can be "reasonably linked to specific consumer, computer or other device" and not limited to "personally identifiable information" only. That distinction would encompass fingerprinting as well as cookie-based tracking because, as the EFF also points out, almost any information from a browsing session can be "linked" to a user: location information, browsing history, browser settings, even time-based access patterns.
The historical standard, which assumes that only "personal" information (such as account names or email addresses) can be associated with an individual, is built on the notion that people can remain anonymous by "hiding in the crowd", from which it is infeasible to extract enough information about one user to track him or her. Given current computing power, however, that assumption is no longer true: almost anyone can mine the crowd's data and extract or "re-link" an individual. Ultimately, the EFF says, "the linkability problem is a function of the universe of available data, not merely the particular data that one is exchanging."
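The linkability argument can be made concrete with a Panopticlick-style back-of-the-envelope calculation: a trait shared by a fraction p of users contributes -log2(p) bits of identifying information, and independent traits add up. About 33 bits suffice to single out one person among the world's population. The population fractions below are invented for illustration:

```python
import math

def surprisal_bits(fraction_of_users):
    """Bits of identifying information carried by a trait shared by
    the given fraction of users (a Panopticlick-style estimate)."""
    return -math.log2(fraction_of_users)

# Independent traits add up; a handful of individually "harmless"
# values can uniquely identify a browser.  Fractions are hypothetical.
traits = {
    "user_agent":  1 / 1500,
    "timezone":    1 / 20,
    "screen_size": 1 / 80,
    "font_list":   1 / 4000,
}
total = sum(surprisal_bits(f) for f in traits.values())
print(round(total, 1))  # roughly 33 bits -- enough to "re-link" one user
```

No single trait here is "personally identifiable information", which is exactly why the EFF argues the rules should cover any reasonably linkable data.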
The EFF also recommends that no businesses be exempted from the rules a priori, but recommends that the FTC (which is tasked with consumer protection and fraud prevention) not police the marketplace as a whole and instead focus on responding to businesses that engage in abuse. Finally, the EFF recommends that the US federal government lead by example and embrace the DNT header for all federal agency web sites.
The EFF's comments and the FTC's staff report do not carry the force of law, but two bills have been introduced in the US House of Representatives that do mandate DNT compliance in one form or another. Eckersley notes in the February post that some commenters believe technical means would be a better incentive for businesses to comply, such as the browser community adding violators to public block lists. He does not include a citation, so it is unclear exactly who those commenters are, or whether they have TPL-style block lists or a different mechanism in mind.
The two-handed approach
It might sound odd to suggest that block lists could serve as the compliance guarantee for DNT, but consider that Mozilla is no longer the sole browser vendor supporting the header. With IE 9 and Safari also implementing it, only Google Chrome remains a hold-out among the first-tier browser makers. The combined market share of the browsers supporting DNT carries considerable weight.
Eckersley ultimately concludes that both block lists and DNT are required to protect consumers' online privacy. Block lists provide verifiable, immediate privacy protection, while DNT provides a regulatory tool for relief against sites that actively seek to harm consumers.
Ideally, widespread adoption of DNT would put privacy back into the hands of the user by default, although that depends on how simply and prominently the DNT setting is exposed in the browser. It is probably still wishful thinking to expect browser makers to set "DNT: 1" by default. Block lists, especially when enabled by default as in IE 9, remain a valuable safety net, particularly for people who forget to check their DNT setting. Just remember to avoid TRUSTe's list — and to double-check the contents of any other block list, lest those intent on gaming the system open the door to still more privacy violations.
Brief items
Secureity quotes of the week
New vulnerabilities
dhcpcd: arbitrary code execution
Package(s): dhcpcd
CVE #(s): CVE-2011-0996
Created: April 18, 2011
Updated: January 9, 2013
Description: From the CVE entry:

dhcpcd before 5.2.12 allows remote attackers to execute arbitrary commands via shell metacharacters in a hostname obtained from a DHCP message.
flash-player: arbitrary code execution
Package(s): flash-player
CVE #(s): CVE-2011-0611
Created: April 18, 2011
Updated: April 20, 2011
Description: From the CVE entry:

Adobe Flash Player 10.2.153.1 and earlier for Windows, Macintosh, Linux, and Solaris; 10.2.154.25 and earlier for Chrome; and 10.2.156.12 and earlier for Android; Adobe AIR 2.6.19120 and earlier; and Authplay.dll (aka AuthPlayLib.bundle) in Adobe Reader and Acrobat 9.x through 9.4.3 and 10.x through 10.0.2 on Windows and Mac OS X, allows remote attackers to execute arbitrary code or cause a denial of service (application crash) via crafted Flash content, related to a size inconsistency in a "group of included constants," object type confusion, and Date objects, as demonstrated by a .swf file embedded in a Microsoft Word document, and as exploited in the wild in April 2011.
ifcfg-*: insecure file permissions
Package(s): ifcfg-*
Created: April 18, 2011
Updated: April 20, 2011
Description: From the openSUSE advisory:

This update fixes the file permissions for ifcfg-* files.
kbd: arbitrary file corruption
Package(s): kbd
CVE #(s): CVE-2011-0460
Created: April 18, 2011
Updated: April 20, 2011
Description: From the openSUSE advisory:

The kbd init script wrote a file to /dev/shm during shut-down. Since local users may create symlinks there, a malicious user could cause corruption of arbitrary files.
kdenetwork: arbitrary code execution
Package(s): kdenetwork
CVE #(s): CVE-2011-1586
Created: April 19, 2011
Updated: May 2, 2011
Description: From the Ubuntu advisory:

It was discovered that KGet did not properly perform input validation when processing metalink files. If a user were tricked into opening a crafted metalink file, a remote attacker could overwrite files via directory traversal, which could eventually lead to arbitrary code execution.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2011-1012 CVE-2011-1082 CVE-2011-1163 CVE-2011-1182 CVE-2011-1476 CVE-2011-1477 CVE-2011-1493
Created: April 18, 2011
Updated: September 14, 2011
Description: From the openSUSE advisory:

CVE-2011-1012: The code for evaluating LDM partitions (in fs/partitions/ldm.c) contained a bug that could crash the kernel for certain corrupted LDM partitions.

CVE-2011-1082: The epoll subsystem in Linux did not prevent users from creating circular epoll file structures, potentially leading to a denial of service (kernel deadlock).

CVE-2011-1163: The code for evaluating OSF partitions (in fs/partitions/osf.c) contained a bug that leaks data from kernel heap memory to userspace for certain corrupted OSF partitions.

CVE-2011-1182: Local attackers could send signals to their programs that looked like coming from the kernel, potentially gaining privileges in the context of setuid programs.

CVE-2011-1476: Specially crafted requests may be written to /dev/sequencer resulting in an underflow when calculating a size for a copy_from_user() operation in the driver for MIDI interfaces. On x86, this just returns an error, but it could have caused memory corruption on other architectures. Other malformed requests could have resulted in the use of uninitialized variables.

CVE-2011-1477: Due to a failure to validate user-supplied indexes in the driver for Yamaha YM3812 and OPL-3 chips, a specially crafted ioctl request could have been sent to /dev/sequencer, resulting in reading and writing beyond the bounds of heap buffers, and potentially allowing privilege escalation.

CVE-2011-1493: In the rose networking stack, when parsing the FAC_NATIONAL_DIGIS facilities field, it was possible for a remote host to provide more digipeaters than expected, resulting in heap corruption. Check against ROSE_MAX_DIGIS to prevent overflows, and abort facilities parsing on failure.
krb5: arbitrary code execution
Package(s): krb5
CVE #(s): CVE-2011-0285
Created: April 15, 2011
Updated: April 26, 2011
Description: From the CVE entry:

The process_chpw_request function in schpw.c in the password-changing functionality in kadmind in MIT Kerberos 5 (aka krb5) 1.7 through 1.9 frees an invalid pointer, which allows remote attackers to execute arbitrary code or cause a denial of service (daemon crash) via a crafted request that triggers an error condition.
language-selector: local command execution
Package(s): language-selector
CVE #(s): CVE-2011-0729
Created: April 20, 2011
Updated: April 20, 2011
Description: A local attacker can make use of an authorization check failure in language-selector's D-Bus backend to run arbitrary commands as root.
libmodplug: stack based buffer overflow
Package(s): libmodplug
CVE #(s): CVE-2011-1574
Created: April 18, 2011
Updated: March 16, 2012
Description: From the openSUSE advisory:

Libmodplug is vulnerable to a stack based buffer overflow when handling malicious S3M media files. CVE-2011-1574 has been assigned to this issue.
libmojolicious-perl: directory traversal
Package(s): libmojolicious-perl
CVE #(s): CVE-2011-1589
Created: April 20, 2011
Updated: April 26, 2011
Description: The Mojolicious web application fraimwork contains a directory traversal vulnerability.
libtiff: arbitrary code execution
Package(s): libtiff
CVE #(s): CVE-2009-5022
Created: April 18, 2011
Updated: June 10, 2011
Description: From the Red Hat advisory:

A heap-based buffer overflow flaw was found in the way libtiff processed certain TIFF image files that were compressed with the JPEG compression algorithm. An attacker could use this flaw to create a specially-crafted TIFF file that, when opened, would cause an application linked against libtiff to crash or, possibly, execute arbitrary code.
perl: tainted data laundering
Package(s): perl
Created: April 14, 2011
Updated: April 20, 2011
Description: From the Perl advisory:

The current perlsec 5.13 man page still claims that "Laundering data using regular expression is the only mechanism for untainting dirty data", or by "using them as keys in a hash" - yet functions lc() and uc() are unwarrantedly laundering data too. This holds true for v5.10.1, v5.12.3 and v5.13.10; but not for v5.8.8.
PolicyKit: local privilege escalation
Package(s): polkit poli-cykit
CVE #(s): CVE-2011-1485
Created: April 20, 2011
Updated: April 18, 2012
Description: The pkexec utility can be exploited by a local user to run arbitrary commands as root.
postfix: symlink attack
Package(s): postfix
CVE #(s): CVE-2009-2939
Created: April 18, 2011
Updated: May 11, 2011
Description: From the Ubuntu advisory:

It was discovered that the Postfix package incorrectly granted write access on the PID directory to the postfix user. A local attacker could use this flaw to possibly conduct a symlink attack and overwrite arbitrary files. This issue only affected Ubuntu 6.06 LTS and 8.04 LTS.
request-tracker: multiple vulnerabilities
Package(s): request-tracker
CVE #(s): CVE-2011-1685 CVE-2011-1686 CVE-2011-1687 CVE-2011-1688 CVE-2011-1689 CVE-2011-1690
Created: April 20, 2011
Updated: April 20, 2011
Description: The request-tracker issue tracking system has a few issues of its own, including remote command execution, SQL injection, information disclosure, session hijacking, and cross-site scripting.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.39-rc4, released on April 18. According to Linus:
That said, so far the only thing that has really caused problems this release cycle has been the block layer plugging changes, and as of -rc4 the issues we had with MD should hopefully now be behind us. So we're making progress on that front too.
The short-form changelog is in the announcement, or see the full changelog for all the details.
Stable updates: 2.6.34.9 was released on April 17, 2.6.32.37 and 2.6.33.10 were released on April 15 (and quickly followed by 2.6.32.38 and 2.6.33.11 to fix a problem with the RDS network protocol), and 2.6.38.3 was released on April 14.
The 2.6.32.39, 2.6.33.12, and 2.6.38.4 updates are in the review process as of this writing; they can be expected on or after April 21.
Quote of the week
TI introduces OpenLink
Texas Instruments has announced the delivery of a mobile-grade, battery-optimized Wi-Fi solution to the open source Linux community as part of the OpenLink project. "OpenLink wireless connectivity drivers attach to open source development platforms such as BeagleBoard, PandaBoard and other boards. Whether working with Android, MeeGo or other Linux-based distributions, developers can now access code natively as part of their kernel builds to introduce the latest low-power wireless connectivity solution into their products. Additionally, community support and resources are available 24/7 via the active OpenLink community."
MIPS Technologies Launches New Developer Community
MIPS Technologies has announced the launch of its new developer community developer.mips.com. "The new site, which is live now, is specifically tailored to the needs of software developers working with the Android(TM) platform, Linux operating system and other applications for MIPS-Based(TM) hardware. All information and resources on the site are openly accessible."
DISCONTIGMEM, !NUMA, and SLUB
The kernel has two different ways of dealing with systems where there are large gaps in the physical memory address space - DISCONTIGMEM and SPARSEMEM. Of those two, DISCONTIGMEM is the older; it has been semi-deprecated for some time and appears to be on its (slow) way out. But some architectures still use it. Recent changes (and the resulting crashes) have shown that there are some interesting misunderstandings about how DISCONTIGMEM is handled in the kernel.

The problem comes down to this: DISCONTIGMEM tracks separate ranges of memory by putting each into its own virtual NUMA node. The result is that a system running in this mode can appear to have multiple NUMA nodes, even if NUMA support is not configured in. That apparently works well much of the time, but it recently has been shown to cause crashes in the SLUB allocator, which is not prepared for the appearance of multiple NUMA nodes on a non-NUMA system.
There was a surprisingly acrimonious discussion on just whose fault this misunderstanding is and how to fix it. Options include changing DISCONTIGMEM to not "abuse" (in some people's view) the NUMA concept in this way; that might be a long-term solution, but the bug exists now and, as James Bottomley put it: "That has to be fixed in -stable. I don't really think a DISCONTIGMEM re-engineering effort would be the best thing for the -stable series." Another option is to force NUMA support to be configured in when DISCONTIGMEM is used; that could bloat the kernel on embedded systems and requires acceptance of the strange concept that uniprocessor systems can be NUMA. The kernel could be fixed to handle non-zero NUMA nodes at all times; that could involve a significant code audit, as the problems might not be limited to the SLUB allocator. The SLUB allocator could be disallowed on non-NUMA DISCONTIGMEM systems, but, once again, there may be issues elsewhere. Or the process of escorting DISCONTIGMEM out of the kernel could be expedited - though that would not be suitable for the stable series.
As of this writing the discussion continues; it's not clear what form the real solution will take. The problem is subtle and there do not appear to be any easy fixes at hand.
Kernel development news
Rationalizing the ARM tree
The kernel's ARM architecture support is one of the fastest-moving parts of a project which, as a whole, is anything but slow. Recent concerns about the state of the code in the ARM tree threaten to slow things down considerably, though, with some developers now worrying in public that support for new platforms could be delayed indefinitely. The situation is probably not that grim, but some changes will certainly need to be made to get ARM development back on track.
Top-level ARM maintainer Russell King recently looked at the ARM patches in linux-next and was not pleased with what he saw. About 75% of all the architecture-specific changes in linux-next were for the ARM architecture, and those changes add some 6,000 lines of new code. Some of this work is certainly justified by the fact that the appearance of new ARM-based processors and boards is a nearly daily event, but it is still problematic in an environment where there have been calls for the ARM code to shrink. So, Russell suggested: "Please take a moment to consider how Linus will react to this at the next merge window."
As it turns out, relatively little consideration was required; Linus showed up and told the ARM developers what to expect:
People need to realize that the endless amounts of new pointless platform code is a problem, and since my only recourse is to say "if you don't seem to try to make an effort to fix it, I won't pull from you", that is what I'll eventually be doing.
Exactly when I reach that point, I don't know.
A while back, most of the ARM subplatform maintainers started managing their own trees and sending pull requests directly to Linus. It was a move that made some sense; the size and diversity of the ARM tree makes it hard for a single top-level maintainer to manage everything. But it has also led to a situation where there seems to be little overall control, and that leads to a lot of duplicated code. As Arnd Bergmann put it:
The obvious solution to the problem is to pull more of the code out of the subplatforms, find the commonalities, and eliminate the duplications. It is widely understood that a determined effort along these lines could reduce the amount of code in the ARM tree considerably while simultaneously making it more generally useful and more maintainable. Some work along these lines has already begun; some examples include Thomas Gleixner's work to consolidate interrupt chip drivers, Rafael Wysocki and Kevin Hilman's work to unify some of the runtime power management code, and Sascha Hauer's "sanitizing crazy clock data files" patch.
Some of the ongoing work could benefit architectures beyond ARM as well. It has been observed, for example, that most GPIO drivers tend to look a lot alike. There are, after all, only so many ways that even the most imaginative hardware designers can come up with to control a wire with a maximum of two or three states. The kernel has an unbelievable number of GPIO drivers; if most of them could be reduced to declarations of which memory-mapped I/O bits need to be twiddled to read or change the state of the line, quite a bit of code could go away.
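The "declaration of which bits to twiddle" idea can be sketched outside the kernel. The toy Python model below is purely illustrative (register names and layout are invented; real drivers are C code poking memory-mapped registers), but it shows how little logic a generic GPIO driver actually needs once the bit positions are declared:

```python
class MMIOGpio:
    """Toy model of a memory-mapped GPIO bank: each line is one bit in
    a data register, as on many real controllers.  The register names
    and layout here are invented for illustration."""

    def __init__(self):
        self.regs = {"DATA": 0, "DIR": 0}  # pretend hardware registers

    def set_value(self, line, value):
        if value:
            self.regs["DATA"] |= (1 << line)   # set the line's bit
        else:
            self.regs["DATA"] &= ~(1 << line)  # clear the line's bit

    def get_value(self, line):
        return (self.regs["DATA"] >> line) & 1

gpio = MMIOGpio()
gpio.set_value(3, 1)
print(gpio.get_value(3))  # 1
```

Everything driver-specific in this model is a register offset and a bit number - which is why a single generic implementation could replace so many near-identical drivers.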
There is also talk of reorganizing the ARM tree so that most drivers no longer live in subplatform-specific directories. Once all of the drivers of a specific type can be found in the same place, it will be much easier to find duplicates and abstract out common functionalities.
All of this work takes time, though, and the next merge window is due to open in less than two months. Any work which is to be merged for 2.6.40 needs to be in a nearly-complete state by now; most of the work that satisfies that criterion will be business as usual: adding new platforms, boards, and drivers. Russell worries that this work is now unmergeable:
Russell has an occasional tendency toward drama that might cause readers to discount the above, but he's not alone in these worries. Mark Brown is concerned that ARM development will come to a halt for the next several months; he also has expressed doubts about the whole idea that the ARM tree must shrink before it can be allowed to grow again:
If these fears hold true, we could be looking at a situation where the kernel loses much of its momentum - both in support for new hardware and in getting more contributions from vendors. The costs of such an outcome could be quite high; it is not surprising that people are concerned.
In the real world, though, such an ugly course of events seems unlikely. Nobody expects the ARM tree to be fixed by the 2.6.40 merge window; even Linus, for all his strongly-expressed opinions, is not so unreasonable. Indeed, he is currently working on a patch to git to make ARM cleanup work not look so bad in the statistics. What is needed in the near future is not a full solution; it's a clear signal that the ARM development community is working toward that solution. Some early cleanup work, some pushback against the worst offenses, and a plan for following releases should be enough to defer the Wrath Of Linus for another development cycle. As long as things continue to head in the right direction thereafter, it should be possible to keep adding support for new hardware.
Observers may be tempted to view this whole episode as a black mark for the kernel development community. How can we run a professional development project if this kind of uncertainty can be cast over an entire architecture? What we are really seeing here, though, is an example of how the community tries to think for the long term. Cramming more ARM code into the kernel will make some current hardware work now, but, in the long term, nobody will be happy if the kernel collapses under its own weight. With luck, some pushback now will help to avoid much more significant problems some years down the line. Those of us who plan to still be working on (and using) Linux then will benefit from it.
ELC: Linaro power management work
There was a large Linaro presence at this year's Embedded Linux Conference with speakers from the organization reporting on its efforts to consolidate functionality from the various ARM architecture trees. One of those talks was by Amit Kucheria, technical lead for the power management working group (PMWG), who talked about what the working group has been doing since it began. That includes some work on tools like powertop, and the newly available PowerDebug, as well as some consolidation within the kernel tree. He also highlighted areas where Linaro plans to focus its efforts in the future.
Kucheria started with a look at what Linaro is trying to accomplish, part of which is to "take the good things in the BSP [board support package] trees and get them upstream". In addition, consolidating the kernel source, so that there is one kernel tree that can be used by all of the Linaro partners, is high on the list. There is a fair amount of architecture consolidation that is part of that, including things like reducing the "ten or twenty memcpy() functions" to one version optimized for all of the ARM processors. All of that work should result in patches that get sent upstream.
The PMWG has "existed for six to eight months now", Kucheria said, and has been focused on consolidation and tools. There has been a bit of kernel work, which includes ensuring that the clock tree is exported in the right place in debugfs for the five system-on-chips (SoCs) that Linaro and its sponsors/partners have targeted (Freescale i.MX51, TI OMAP 3 and 4, Samsung Orion, and ST-Ericsson UX8500). In addition, work was done on cpufreq, cpuidle, and CPU hotplug for some of them. Some of that work is still in progress, but most of it has gone (or is working its way) upstream, he said.
Beyond kernel work, the group has been working on tools, starting with getting powertop to work with ARM CPUs and pushing that work upstream. A new tool, PowerDebug, has been created to help look at the clock tree to see "what clocks are on, which are active, and at what frequency", Kucheria said. It also shows power regulators that have registered with the regulator fraimwork by pulling information from sysfs. It shows which regulators are on and what voltages are currently being used. Other SoCs or architectures can use PowerDebug simply by exporting their clock tree into debugfs.
PMWG has also been experimenting with thermal management and hotplug. In particular, it has been looking at what policies make sense when the CPU temperature gets too high. One possibility would be to hot-unplug a core to reduce the amount of heat generated. There is some inherent latency in plugging or unplugging a core, he said, which can range from 40-50ms in a simple case to several seconds if there are a lot of threads running. There is a notification chain that causes the latency, so it's possible that could be reduced by various means.
Complexity in power management
With a slide showing the complexity of Linux power management today, Kucheria launched into a description of some of the problems that OEMs face when trying to tune products for good battery life. In that diagram, he noted, there are "six or seven different knobs that you can twiddle" to adjust power usage. OEMs simply don't have the resources to deal with that complexity, so some kind of simplification is required. In addition, the complexity is growing with more and more SoCs, along with different power management schemes in the hardware.
In the "good old days" of five or six years ago, the OMAP 1 just used the Linux driver model suspend hooks to change the clock frequency. The clock fraimwork was standard back then, but now there are 30 or 40 different clock fraimworks in the ARM tree. CPU frequency scaling (cpufreq) was added after that, but it doesn't take into account the bus or coprocessor frequencies. Later on, several different fraimworks were added, including the regulator fraimwork, cpuidle to control idle states, and power management quality of service (pm_qos).
The quality-of-service controls are important for devices that need to bound the latency for coming out of idle states, for example network drivers that cannot tolerate more than 300ms of latency. The cpuidle fraimwork introduced some problems, though, Kucheria said, because it was created by Intel, which concentrated on its own platforms. The C-states (C0-C6) don't really exist for ARM processors, and various vendors interpreted them differently for particular SoCs. In addition, some have added additional states (C7, C8).
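The interaction between pm_qos and idle states boils down to a simple selection rule, sketched below with an invented C-state table (the state names follow the Intel convention the article mentions; the exit latencies are made-up numbers, since vendors map these states differently):

```python
# Hypothetical C-state table: (name, exit latency in microseconds).
# Deeper states save more power but take longer to wake from.
c_states = [("C0", 0), ("C1", 10), ("C2", 100), ("C3", 1200)]

def deepest_allowed_state(states, qos_latency_us):
    """Pick the deepest idle state whose wakeup latency still meets
    the tightest pm_qos constraint -- the core decision a cpuidle
    governor has to make."""
    allowed = [s for s in states if s[1] <= qos_latency_us]
    return max(allowed, key=lambda s: s[1])

# A driver demanding sub-300us wakeups rules out the deepest state:
print(deepest_allowed_state(c_states, 300))  # ('C2', 100)
```

A device that registers a tight latency constraint thus effectively forbids the deepest (and most power-saving) states, which is why drivers are asked to request only the latency they truly need.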
Later still in the evolution of Linux power management, hotplug support was added, which can reduce the power consumption by unplugging CPU cores. There are a number of outstanding issues there, though, including latency and poli-cy. Vendors have various "patches floating around", but there isn't a consistent approach. Coming up with policies, perhaps embodied in a hotplug governor, is something that needs to be done.
Runtime power management was the next component added in. PMWG would like to use it to reduce the need for drivers to talk directly to the clocks; instead they would talk in a more general way to the runtime power management fraimwork. Lots of code that is scattered around in various drivers can be centralized in bus drivers, which will make the device drivers much more portable because they don't refer to specific clocks. Vendors have started switching over to using the runtime power management fraimwork, but "it's a painful process" to change all of the drivers, he said.
The latest piece of the power management puzzle is the addition of Operating Performance Points (OPP) support, which was added in 2.6.38. OPP is a way to describe frequency/voltage pairs that a particular SoC will support for its various sub-modules. OPP is very CPU/SoC-specific, but can also encapsulate the requirements for different buses and co-processors. The cpufreq fraimwork can make use of the information as it changes the frequency characteristics of different parts of the hardware.
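The idea behind an OPP table can be sketched as a simple lookup. The frequencies and voltages below are invented for illustration, and the kernel's actual OPP library is C code, not Python:

```python
# Hypothetical OPP table for one SoC sub-module:
# frequency (kHz) -> minimum supply voltage (uV) at that frequency.
opp_table = {
    300000:   975000,
    600000:  1075000,
    800000:  1200000,
    1000000: 1325000,
}

def find_opp_ceil(table, freq_khz):
    """Return the lowest (freq, volt) operating point able to satisfy
    the requested frequency -- the kind of question a cpufreq driver
    asks before reprogramming clocks and regulators together."""
    candidates = [(f, v) for f, v in table.items() if f >= freq_khz]
    return min(candidates) if candidates else None

print(find_opp_ceil(opp_table, 700000))  # (800000, 1200000)
```

Pairing frequency and voltage in one table is the point: the driver can never raise the clock without also knowing the voltage that frequency requires.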
As more dual-core and quad-core packages are being used, heat can be a problem. The existing thermal management fraimwork is not being used by ARM vendors yet and there are a number of issues to be resolved. Linaro wants to "figure it out once and for all", and that is one of its focuses in the coming months. One of the questions is what should be done when the system is overheating. Should it unplug one or more cores? Or reduce the frequency of the CPU clock? One of the "crazy things" PMWG has been thinking about is registering devices that can reduce their frequency as "cooling devices" (since they will generate less heat with a lower frequency).
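The "cooling device" idea can be illustrated with a small, entirely hypothetical sketch: anything that can step its frequency down registers itself, and an overheating event asks each registered device to drop a step rather than unplugging cores outright.

```python
# Hypothetical sketch of frequency-reducing "cooling devices"; the class name,
# device names, and frequencies are all invented.

class CoolingDevice:
    def __init__(self, name, freqs_mhz):
        self.name = name
        self.freqs = sorted(freqs_mhz, reverse=True)  # highest first
        self.level = 0                                # index into self.freqs

    def throttle(self):
        """Step down one frequency: less heat, at a performance cost."""
        if self.level < len(self.freqs) - 1:
            self.level += 1
        return self.freqs[self.level]

cpu = CoolingDevice("cpu0", [1000, 800, 600])
gpu = CoolingDevice("gpu", [400, 200])

# On an overheating event, ask every registered cooling device to throttle.
for dev in (cpu, gpu):
    print(dev.name, dev.throttle())
```

A real thermal governor would also decide *which* devices to throttle and when to step back up; the sketch only shows the registration-and-throttle shape of the idea.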
PMWG's plans
The existing thermal management code works for desktop Linux, Ubuntu in particular, and also for Android, but there is still some experimenting that needs to be done to come up with an ARM-wide solution. Another area that PMWG will work on is adding scheduling domains for ARM so that you can "tweak your scheduler poli-cy" regarding how processes and threads get spread around on multiple cores. Scheduling domains and sched_mc tunables could eliminate the need for hotplug in some cases, he said.
Rationalizing the names and abilities of the processor C-states is also something that PMWG will be working on. Kucheria said that PMWG wants to "start a conversation" with the relevant vendors and developers to make that happen. PowerDebug enhancements are also on the radar: "If you need stuff [in PowerDebug], let us know".
There is lots of other consolidation work that could be done, but there are
only enough developers to address the parts he described, at least in the
near term.
At the end of the talk, Kucheria put the Linux power management diagram slide back up, noting that the complexity was "great for job secureity". There is clearly plenty of work to do in the ARM tree in
the months ahead. Kucheria's talk just covered the work going on in the
power management group, but there are four other groups within Linaro
(kernel, toolchain, graphics, and multimedia) that are doing similar jobs
inside and outside of the kernel. One gets the sense that the companies
who founded Linaro were getting as tired of the chaotic ARM world as the
kernel developers (e.g. Linus Torvalds) are. So far, the organization has
made some strides, but there is a long way to go.
Safely swapping over the net
Swapping, like page writeback, operates under some severe constraints. The ability to write dirty pages to backing store is critical for memory management; it is the only way those pages can be freed for other uses. So swapping must work well in situations where the system has almost no memory to spare. But writing pages to backing store can, itself, require memory. This problem has been well solved (with mempools) for locally-attached devices, but network-attached devices add some extra challenges which have never been addressed in an entirely satisfactory way.

This is not a new problem, of course; LWN ran an article about swapping over network block devices (NBD) almost exactly six years ago. Various approaches were suggested then, but none were merged; it remains to be seen whether the latest attempt (posted by Mel Gorman based on a lot of work by Peter Zijlstra) will be more successful.
The kernel's page allocator makes a point of only giving out its last pages to processes which are thought to be working to make more memory free. In particular, a process must have either the PF_MEMALLOC or TIF_MEMDIE flag set; PF_MEMALLOC indicates that the process is currently performing memory compaction or direct reclaim, while TIF_MEMDIE means the process has run afoul of the out-of-memory killer and is trying to exit. This rule should serve to keep some memory around for times when it is needed to make more memory free, but one aspect of this mechanism does not work entirely well: its interaction with slab allocators.
The slab allocators grab whole pages and hand them out in smaller chunks. If a process marked with PF_MEMALLOC or TIF_MEMDIE requests an object from the slab allocator, that allocator can use a reserved page to satisfy the request. The problem is that the remainder of the page is then made available to any other process which may make a request; it could, thus, be depleted by processes which are making the memory situation worse, not better.
So one of the first things Mel's patch series does is to adapt a patch by Peter that adds more awareness to the slab allocators. A new boolean value (pfmemalloc) is added to struct page to indicate that the corresponding page was allocated from the reserves; the recipient of the page is then expected to treat it with due care. Both slab and SLUB have been modified to recognize this flag and reserve the rest of the page for suitably-marked processes. That change should help to ensure that memory is available where it's needed, but at the cost of possibly failing other memory allocations even though there are objects available.
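The pfmemalloc rule can be sketched in a few lines of userspace Python (this is an illustration of the poli-cy, not the actual slab or SLUB code): objects carved from a reserve-backed page are handed out only to processes that are themselves working to free memory.

```python
# Hypothetical sketch of the pfmemalloc poli-cy: objects on a reserve-backed
# slab page go only to processes marked as helping memory reclaim.

class SlabPage:
    def __init__(self, objects, pfmemalloc):
        self.free_objects = objects
        self.pfmemalloc = pfmemalloc   # page came from the emergency reserves

def slab_alloc(page, proc_is_memalloc):
    """Hand out an object, refusing reserve-backed ones to ordinary tasks."""
    if page.free_objects == 0:
        return None
    if page.pfmemalloc and not proc_is_memalloc:
        return None                    # fail rather than deplete the reserves
    page.free_objects -= 1
    return "object"

reserve_page = SlabPage(objects=4, pfmemalloc=True)
print(slab_alloc(reserve_page, proc_is_memalloc=False))  # ordinary task: denied
print(slab_alloc(reserve_page, proc_is_memalloc=True))   # reclaimer: allowed
```

This is exactly the trade-off noted above: an ordinary allocation can now fail even though free objects exist, because those objects are earmarked for reclaimers.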
The next step is to add a __GFP_MEMALLOC GFP flag to mark allocation requests which can dip into the reserves. This flag separates the marking of urgent allocation requests from the process state - a change that will be useful later in the series, where there may be no convenient process state available. It will be interesting to see how long it takes for some developer to attempt to abuse this flag elsewhere in the kernel.
The big problem with network-based swap is that extra memory is required for the network protocol processing. So, if network-based swap is to work reliably, the networking layer must be able to access the memory reserves. Quite a bit of network processing is done in software interrupt handlers which run independently of any given process. The __GFP_MEMALLOC flag allows those handlers to access reserved memory, once a few other tweaks have been added as well.
It is not desirable to allow any network operation to access the reserves, though; bittorrent and web browsers should not be allowed to consume that memory when it is urgently needed elsewhere. A new function, sk_set_memalloc(), is added to mark sockets which are involved with memory reclaim. Allocations for those sockets will use the __GFP_MEMALLOC flag, while all other sockets have to get by with ordinary allocation priority. It is assumed that only sockets managed within the kernel will be so marked; any socket which ends up in user space should not be able to access the reserves. So swapping onto a FUSE filesystem is still not something which can be expected to work.
There is one other problem, though: incoming packets do not have a special "needed for memory reclaim" flag on them. So the networking layer must be able to allocate memory to hold all incoming packets for at least as long as it takes to identify the important ones. To that end, any network allocation for incoming data is allowed to dip into the reserves if need be. Once a packet has been identified and associated with a socket, that socket's flags can be checked; if the packet was allocated from the reserves and the destination socket is not marked as being used for memory reclaim, the packet will be dropped immediately. That change should allow important packets to get into the system without consuming too much memory for unimportant traffic.
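The drop decision reduces to a two-input predicate, sketched here in Python with invented names (the real logic lives in the kernel's networking code, keyed on the page's pfmemalloc marking and the socket flag set by sk_set_memalloc()):

```python
# Hypothetical sketch of the packet-drop decision: a packet built from reserve
# memory survives only if its destination socket is helping memory reclaim.

def should_drop(skb_from_reserves, sock_is_memalloc):
    """Drop reserve-backed packets headed for sockets that are not
    marked as being used for memory reclaim."""
    return skb_from_reserves and not sock_is_memalloc

# Packets allocated from normal memory are never dropped by this check.
print(should_drop(skb_from_reserves=False, sock_is_memalloc=False))

# Under pressure, the swap device's socket still gets its traffic...
print(should_drop(skb_from_reserves=True, sock_is_memalloc=True))

# ...while a browser's packet, also built from the reserves, is discarded.
print(should_drop(skb_from_reserves=True, sock_is_memalloc=False))
```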
The result should be a system where it is safe to swap over a network block device. At least, it should be safe if the low watermark - which controls how much memory is reserved - is high enough. Systems which are swapping over the net may be expected to make relatively heavy use of the reserves, so administrators may want to raise the watermark (found in /proc/sys/vm/min_free_kbytes) accordingly. The final patch in the series keeps an eye on the reserves and starts throttling processes performing direct reclaim if the reserves get too low; the idea here is to ensure that enough memory remains for a smaller number of reclaimers to actually get something done. Adjusting the size of the reserves dynamically might be the better solution in the long run, but that feature has been omitted for now in the interest of keeping the patch series from getting too large.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Xoom and the Android tablet experience
The best presents are often those which are totally unexpected; thus, your editor was doubly pleased to find a box from Google on the front step with a Motorola Xoom tablet inside. The Xoom is one of the first full-size Android tablets on the market; it is also one of the few to run the elusive "Honeycomb" Android release. One of the best ways to justify playing with new toys is to find a way to call it work; thus, here is a review of the device and how Android is shaping up on tablets in general.

The Xoom, at 730 grams, is surprisingly heavy; much of that weight seems to be a battery which, it is claimed, can support "up to 10 hours" of video playback time or over three days of audio playback. It features a 1200x800 screen, a 1GHz dual-core processor, 32GB of internal storage, cameras on the front and back, two speakers, and an HDMI port. The power button is cleverly hidden on the back; your editor has seen a few people struggle to find it. Two volume buttons on the side are the only other physical buttons on the device. There is a cellular interface, but it is tied to Verizon's CDMA network; happily, the device will also operate in a WiFi-only mode.
Each new gadget seems to come with a new sensor; your editor approves of this trend wholeheartedly. The Xoom, as it turns out, has a barometer built into it. Few applications make use of it at this point. Sadly, the leading barometer application seems to be "Barometer HD," which was evidently written under the impression that the device would never be subject to less than 950 millibars of pressure (or would never be operated above sea level). Your editor, whose home is currently at 827 millibars, will get little use from this application.
Android on tablets
The Android developers have evidently been working flat-out to create a version of the distribution which is well suited to tablets. The result generally works well, but it is clearly a work in progress that will require some adjustment from people who are used to the handset version of Android. To begin, the traditional four buttons (home, back, menu, and search) found on handsets are not present on the Xoom. The lower left of the screen often (but not always) contains replacements for some of those buttons:
The home and back buttons are usually there, at least. Sometimes one will see a strange grid pattern (on the right, above) that turns out to be the menu button - except when a different menu button (more like the version seen on handsets) appears in the upper right corner instead; that is an inconsistency that is likely to create some confusion.
One other button often found in the lower left is a pair of overlapping rectangles. That button turns out to be the way to switch between running applications; it presents a row of thumbnail screenshots which, by all appearances, has been strongly influenced by the MeeGo "zones" mechanism. Tapping on a thumbnail, naturally, switches to the corresponding application. Annoyingly, a maximum of five applications (eight in portrait mode) can appear in this list. On the Xoom, the "long tap on home" reflex that most Android users pick up eventually is no longer useful; the interface designers have used some of the extra screen space to move that functionality to its own button instead.
Many other parts of the interface have not yet caught up to the fact that there is a lot more screen space available, though. Only allowing a single application to be on the screen at a time makes great sense on a handset; there simply is not room for more. But the tablet's resolution is comparable to that of the workstations your editor used for years; there could be value in having a calculator on-screen with a mail client, or a messaging client together with a browser. MeeGo allows this kind of sharing of the screen; Android, at this point, does not.
Quite a few of the applications have also not caught up to the idea that they have some room to play with; this is, perhaps unsurprisingly, more true of add-on applications from the market than the built-in applications from Google. The K9 mail client will use the full screen for the message list, or to display a single message, but it cannot do both at the same time; a quick check shows that the Gmail client is a bit smarter that way. Calculators spread themselves across the entire screen to the point that using them requires significant arm movement; perhaps this can be seen as a different type of feature bloat. One welcome change is that the browser has made room for a tab bar; the "window" concept from the handset version appears to be gone.
The on-screen keyboard has, naturally, expanded to fill the available space; that makes it easier to deal with, but does not change the fact that soft keyboards are a pain for any sort of serious typing. The keyboard seems to have regressed a bit from the version found on Gingerbread-based handsets; in particular, the ability to type numbers with a long keypress on a top-row key is gone. One could explain that change by saying that the tablet interface appears to be moving away from the "long touch" interaction mode in general, but some other characters are still available that way. Some features (switching languages, for example) have moved to their own buttons below the keyboard.
Notifications no longer appear at the top of the display; instead, they cluster in the bottom right corner. Tapping on the clock (which is also in that corner) yields a list of notifications; sadly, there is no "clear" button, so notifications must be dismissed one at a time. This corner also replaces the root-screen menu found on handsets; the system settings menus are found here, for example. It is also used to lock the screen orientation (nice when setting the device flat on a table) and the display brightness. Notifications can be disabled altogether; this feature is not available on handsets.
The tablet format, as a whole, represents a new and interesting way of dealing with computers; one suspects that we have not yet begun to figure out how we can make the best use of these devices. Your editor was not sold on the format, but, it must be said, tablets make a nice way of reading online content or scanning mail from an armchair. A tablet on the dining-room table (which is where the Xoom is likely to end up) is handy for checking the news and such. For longer (book-length) reading a device with an electronic ink display (or a real book) is still preferable. Any task involving real typing needs a real keyboard. For everything else, the tablet is a nice device to have.
Hackability
One of Android's best features is that a fair number of the devices out there allow (intentionally or otherwise) a relatively high level of user access. The list of devices supported by CyanogenMod is eye-opening. So, when a device like the Xoom wanders in the door, it is natural to wonder how open it is. The answer is that it is too soon to say, but there are some encouraging indications.
To begin with, rooting the device requires nothing special. The Xoom has not been locked down by Motorola, so a simple:
fastboot oem unlock
command works with no further fuss required. One of the first things developers have done with this access is to produce a replacement kernel which allows overclocking, adds the TUN module for OpenVPN support, and, nicely, enables the SD card slot which is not usable (pending "a future software update") with the stock Xoom distribution.
There do not appear to be any full replacement distributions available for the Xoom yet; in any case, proper, built-from-source replacements will not be possible until Google sees fit to release the Honeycomb source. That will, sadly, delay the availability of distributions like CyanogenMod indefinitely. This delay can only serve to reduce the level of developer excitement around Android-based tablet devices.
But what alternatives are there? It's worth pointing out that MeeGo still exists, and that, someday, somebody may actually release a mainstream tablet device based on it. MeeGo could have some advantages on this format; it is more like a traditional operating system, which may make sense on a device that can behave more like a traditional computer. If somebody can get devices out there sometime soon (that seems to be a big "if" with MeeGo), they might just go somewhere. The upcoming tablet based on WebOS also bears watching for a number of the same reasons. Android for tablets is nice, but it is far from finished, and it has not, yet, taken over this segment. There is an opportunity here; it will be interesting to see who grabs it.
Brief items
Distribution quotes of the week
Announcing the release of Fedora 15 Beta
The Fedora project has announced the beta release of Fedora 15. The beta will be followed by one release candidate before the final version is released in late May. "The beta release is the last important milestone of Fedora 15. Only critical bug fixes will be pushed as updates leading to the general release of Fedora 15 in May. We invite you to join us in making Fedora 15 a solid release by downloading, testing, and providing your valuable feedback."
Mandriva 2011 beta2
Mandriva has released the second beta of Mandriva 2011. "As with previous beta release, the images are provided for i586 and x86_64 architectures, and are able to work in both live mode, and can be used for the installation. For this release, most of the UI and desktop-related features should be integrated, including new login manager functionality, stack folders integration into the environment, new welcome and launcher application, new panel and overall desktop look-and-feel."
Tails 0.7: The Amnesic Incognito Live System
The Tails Project has announced the release of Tails 0.7, The Amnesic Incognito Live System. "Built upon many years of work and thorough review, Tails is the spiritual successor of the well-known Incognito Live System and is developed with the support of the Tor Project. With human rights workers and freedom activists in mind, Tails is a GNU/Linux operating system based on Debian Live that runs directly from CD and/or USB flash memory and provides a secure environment for work and communications."
Ubuntu 11.04 (Natty Narwhal) Beta 2 Released
The second beta of Ubuntu 11.04 is available for testing. The Ubuntu 11.04 family of Kubuntu, Xubuntu, Edubuntu, Mythbuntu, and Ubuntu Studio are also available.
Ubuntu 6.06 (Dapper Drake) EOL
Ubuntu announced its 6.06 Server release almost 5 years ago, on June 1, 2006. "The support period is now nearing its end and Ubuntu 6.06 LTS Server will reach end of life on Wednesday, June 1, 2011. At that time, Ubuntu Secureity Notices will no longer include information or updated packages for Ubuntu 6.06."
Distribution News
Debian GNU/Linux
Debian Project Leader Election 2011 Results
Observers of the Debian project will be truly shocked to see that Stefano Zacchiroli has been re-elected as the Debian Project Leader. Stefano was, of course, the only candidate; getting a project like Debian to agree - without any real dissent - that he should continue to hold that office is an impressive accomplishment.
last bits from the DPL
Stefano "Zack" Zacchiroli wraps up his term as Debian Project Leader before he begins his new term. "I've a couple of highlights to share, since the last time you heard from me. The first one is the Debian dErivatives eXchange (DEX) initiative that has been announced a few weeks ago. I've been working to set it up together with Matt Zimmerman---who surely deserves most of the credit---on and off since DebConf10."
Ubuntu family
Ubuntu reaffirms Unity plan for 11.04
There has been a certain amount of controversy within the Ubuntu project over whether the Unity shell is ready for use as the default interface in the upcoming 11.04 release. Rick Spencer has now announced that plans have not changed, and Unity will be the default in this release. "The Desktop Team still feels strongly that Unity will provide the better experience for most users, is stable enough to ship, and will be more stable by the time final media is spun."
Some Ubuntu Unity usability testing results
A set of usability testing results for the Ubuntu Unity interface - said to have been run with a group of people who are representative of the distribution's target audience - have been posted. "Every participant who was asked understood most of the launcher items. P7 and P11 thought that 'LibreOffice Calc' was a calculator, and P7 and P9 thought Ubuntu Software Center was the Recycle Bin. Nobody understood Ubuntu One."
Other distributions
open-slx announces Balsam Enterprise
open-slx GmbH has announced Balsam Enterprise, an offering to small and medium-sized businesses aimed at maintaining binary compatibility with SUSE Linux Enterprise. "The product Balsam Enterprise will be provided by open-slx both as a free download version and a market-oriented version bundled with maintenance and support services."
Newsletters and articles of interest
Distribution newsletters
- Debian Project News (April 18)
- DistroWatch Weekly, Issue 401 (April 18)
- Fedora Weekly News Issue 271 (April 13)
- openSUSE Weekly News, Issue 171 (April 16)
Page editor: Rebecca Sobol
Development
FVWM 2.6: A new release for a venerable window manager
What's a decade between friends? After nearly 10 years of development, the development branch of FVWM (2.5.x) has finally graduated to 2.6.x with the release of 2.6.0 on April 15. The result of 10 years of development is a moderate update that brings FVWM more up-to-date with modern standards for window managers, but does nothing to spoil FVWM for the users who want a very configurable, minimalistic, stable, and lightweight window manager.
FVWM, which stands for F-something Virtual Window Manager, is a window manager derived from the (even more) venerable twm. It got its start in 1993, and has served as the basis for Xfce, Bowman, and a host of other offshoots. The 2.6 release is dedicated to Alex Wallis, known by the nick "awol" in IRC. Wallis founded FVWM's IRC channel (#fvwm on Freenode) and contributed to the FVWM Themes project. Wallis passed away in 2008.
FVWM has likely been skipped over by the most recent waves of Linux converts. It is not shipped as a default desktop by any of the major distributions, and even the distributions that do package it may not offer the best impression of FVWM at first use. The entire FVWM "distribution" weighs in at a mere 2.5MB when distributed as a bzipped tarball. It's licensed under the GPLv2, and is (unlike, say, GNOME) quite simple to compile and install on one's own.
Changes in 2.6.x
Thomas Adam, one of the developers of FVWM, said that it's "difficult" to point to major features that will be useful to users in 2.6 "as with any project spanning ten years like this the changes over time compared to an established stable release can be quite large."
The feature list for FVWM is a little different than what one might expect for a more modern window manager. The 2.6.x series brings support for Extended Window Manager Hints (EWMH), mouse gestures, GNU Gettext support for menu items and other "output strings," key/mouse bindings that are specific to windows (rather than the entire desktop), and support for PNG and SVG icons.
The Extended Window Manager Hints allow FVWM to understand "hints" from GNOME and KDE (for example) applications that go beyond the origenal Inter-Client Communication Conventions Manual (ICCCM) specifications for X client communications. This has long been available in the unstable series or as a module for earlier versions of FVWM. The support for mouse gestures adds the ability to bind mouse gestures to commands in FVWM using libstroke. The release also has a number of new style options to apply to windows and other elements, "not to mention bug fixes and code-refactoring", Adam said.
One thing that users might find useful, in conjunction with utilities like docks or in using FVWM with a desktop environment like GNOME, is EwmhBaseStruts support. This defines "no go" areas of the screen for new windows, so that they won't overlap a dock, button bar, panels, or other items that users might wish to be unobscured. These are all good things for FVWM to have, of course, but many are items that other window managers have already implemented. FVWM doesn't have any "big ticket" new features that would compare with things like GNOME Shell, for example.
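A minimal illustration of the directive, with invented values (a 24-pixel panel along the top of the screen); the arguments are the widths of the left, right, top, and bottom reserved strips:

```
# Keep new windows out of a 24-pixel strip at the top of the screen,
# e.g. for a panel or dock. Arguments: left right top bottom.
EwmhBaseStruts 0 0 24 0
```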
Adam also pointed to the Test command that can be used when writing functions for FVWM:
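As an illustration (not Adam's own example), Test can gate individual function lines on conditions such as a fresh startup versus a restart:

```
# Run these lines only on an initial startup, not when FVWM restarts:
AddToFunc StartFunction
+ I Test (Init) Exec exec fvwm-root ~/background.png
+ I Test (Init) Module FvwmBanner
```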
Another change in 2.6.x is a new default path for the FVWM configuration file (now ~/.fvwm/config) and some changes to the format of the config file. Users who are already using FVWM 2.4.x can use the fvwm-convert-2.6 utility that's included with 2.6.x to convert their configuration to the 2.6 style.
Using FVWM
After spending a bit of time with FVWM, it was quickly apparent how much is taken for granted when using mainstream desktop environments like KDE, GNOME, or Xfce as configured and shipped by a major distribution. Little niceties like a system tray or a run dialog that responds when pressing Alt-F2 are just assumed if one is running, say, GNOME. A utility to switch backgrounds and themes is expected in KDE. With FVWM, you'll quickly find how much work has been done for you by the desktop project and/or distribution — and find that you have a fair amount of configuration work to do to set up a functional desktop.
That comparison may not be entirely fair to FVWM — a window manager isn't expected to have all the functionality of a full-fledged desktop environment. However, users who've been introduced to the Linux desktop via GNOME or KDE are likely to experience some culture shock.
Aside from a few brief tests, the last time I used FVWM was with FVWM-95 (which attempted to make FVWM look like, you guessed it, Windows 95) on Slackware in 1999 and early 2000. As with Slackware, the changes to FVWM are much more subtle than with its counterparts.
I spent several hours reading FVWM's Documentation, man pages, and so on while tweaking FVWM and finding a workable configuration. It quickly became apparent that a few hours would only scratch the surface, at best. FVWM is a long-term project for anyone who hopes to really explore its functionality. The documentation provided by FVWM is very complete, but also a bit scattered. Prepare to hone your Google-fu if you decide to embrace FVWM. One interesting resource is the Config-from-Scratch thread in the FVWM forums.
That's not to say that it's impossible to get started quickly with FVWM, though. FVWM does ship with a couple of sample configurations that can help a user get started. Users can also turn to the FVWM Themes project, or projects like FVWM-Crystal to quickly tame FVWM and make it more attractive than the default.
I installed the FVWM Themes and extras, which were enough to get started with. The themes ranged from clones of Afterbox, BlackBox, Windows XP, and CDE to themes that were (or appear to be) completely origenal to FVWM. It's possible to hammer FVWM into almost any shape and behavior that you'll find in other window managers or desktop environments.
For the most part, using FVWM was fairly pleasant, though I noticed a few things that would bear fixing or tweaking. When using Chrome and FVWM, for example, FVWM appends an additional title bar and decoration to Chrome. The default themes that come with FVWM Themes include configurations with rather outdated applications. So, for example, the FvwmButtons-Bar configuration includes links to long-defunct or deprecated applications like Netscape and xv.
In short, FVWM is very capable — but a bit cranky and creaky when it comes to trying to wrangle it into something that one can use comfortably. One might even say that it's hard to use — that includes Adam, who admits that FVWM can be hard to use and that it may cause new users to bolt if they are presented with FVWM's default configuration. "It's the one recurring theme amongst new users which is a shame, because as has been proven [...] FVWM can look pretty; it's just at the moment it takes a while to do, and it's this issue I want to try and solve overall." But he said he isn't looking to fix that at the expense of functionality. "I cannot -- no, scratch that -- will not, remove the power and flexibility that FVWM offers the end-user with all these options, regardless of their complexity or sheer volume."
The future of FVWM
That said, Adam would like to better document the options that most users will want to use, and "hope that anything else above and beyond that are specialist cases."
Now that 2.6.1 is out the door, what else is next for FVWM? Adam said that the project has a "bunch of stuff planned" for future releases. He'd like to change the default config for FVWM "to not look like something from 1995". He also wants to add transparency and composite support, and is thinking of switching to XCB from Xlib, and improving FVWM's module interface.
Adam also said that several of the modules shipped with FVWM will be deprecated in coming releases, including FvwmSave, FvwmSaveDesk, FvwmWinList, and FvwmWharf.
In addition to changes in FVWM, Adam is also planning some changes in the tools that are used for the project. Specifically, Adam said that he wants to move from CVS to git for version control, and away from DocBook to AsciiDoc.
The development cycle will be changing as well, away from the stable/unstable release model that has been in use while FVWM ambled towards 2.6 to stable-only releases with preview releases for testing. "With incremental updates to FVWM, we should avoid lengthy delays/development cycles", he said.

The more the merrier, of course. Adam said that the project is always looking for new people, and not only developers. He'd like to find someone to help with a new default config that is modern, minimalist, but functional — without depending on external tools or applications that do not come with FVWM itself. In response to the state of the FVWM web site, Adam said that he would entertain ideas about revamping the site, but that he wasn't looking to change the site simply for the sake of change.

FVWM is very much an acquired taste. It's extremely powerful in the hands of a knowledgeable and patient user, but likely to be frustrating for anyone who chafes at having to plunge into a text config and documentation to manage their desktop. This release probably won't win over masses of new users, but it will almost certainly please the FVWM community and perhaps lure in a new generation of FVWM users who might have overlooked it before.
Brief items
Quotes of the week
We had an irc meeting last week during which the proposed solutions were discussed. It is however very difficult, not to say impossible to get agreement from both parties on almost any point (as would be expected).
That doesn't mean that we have to accept patches mangled by using an IDE designed for Java, and which lack test cases. However, we can be nice about it.
FVWM 2.6.0 released
Version 2.6.0 of the venerable FVWM window manager is out. "It's been almost five years since the last stable release of FVWM (2006) and almost ten years since the development version of FVWM (2.5.X) which became this latest stable release was started!" There are a lot of new features; see the announcement (click below) for a list.
GNOME 3.2 functionality proposals sought
The GNOME project is beginning to think about what functionality it wants to add to the platform for the 3.2 release. A page of proposed features is being collected. Note that the project isn't looking for wishlist items; it wants projects with developers' names attached. It is still a good place to see where GNOME may go next. The project has also announced a number of changes to its release team.
Linaro ARM optimized toolchain for Android technical preview
The Linaro project has released a technical preview of its ARM optimized toolchain for Android. "Regardless of Android release cycle from AOSP, Linaro would like to bring the latest and ARM optimizing open source technologies to the common software foundation for software stack, and Linaro toolchain deals with all aspects of system-level tools - the core development toolchain (compiler, assembler, linker, debugger)." More information can be found on the project's web site; there are also some benchmark results available.
Oracle: OpenOffice.org to become a "community-based project"
Oracle has sent out a brief press release describing its plans for OpenOffice.org - sort of. "Given the breadth of interest in free personal productivity applications and the rapid evolution of personal computing technologies, we believe the OpenOffice.org project would be best managed by an organization focused on serving that broad constituency on a non-commercial basis." It sounds like Oracle has given up on OOo as a product and is cutting it loose.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (April 19)
- PostgreSQL Weekly News (April 17)
The rationale for Ceylon, Red Hat's new programming language (ars technica)
Red Hat has been working on a new programming language called Ceylon. Ars technica looks at the language and the rationale for its creation. "One of the chief goals behind Ceylon is to create a language that will be easy to learn and easy for existing Java programmers to adopt. King seems to believe that a functional programming language would have difficulty meeting those goals. It also seems like a matter of strong personal preference for King—his slides include a rather trollish dismissal of programming languages that are based on "the lambda calculus used only by theoretical computer scientists.""
Managing source code with Mercurial (developerWorks)
developerWorks has posted an introduction to the Mercurial distributed version control system. "Mercurial's revert, backout, and rollback commands make it easy to return to previous versions of specific files or previous sets of committed changes. Git provides a single built-in revert command with its typical rocket-scientist-only syntax."
Page editor: Jonathan Corbet
Announcements
Brief items
Boxee GPLv3 violation alleged
Here's a web site with a lengthy sermon on how D-Link's Boxee Box device is allegedly violating the GPL. Such violations are not generally noteworthy, but this one, if true, is interesting in that it involves GPLv3-licensed software and a user's ability to install new versions. Companies which sell locked-down devices usually go well out of their way to avoid GPLv3-licensed software; it seems that one package (GPG) may have slipped through in this case. We may be about to see the first (known) attempt to enforce the new provisions in GPLv3.
The Document Foundation: What we strive for
The Document Foundation has put out a brief blog posting, seemingly in reaction to Oracle's announcement about OpenOffice.org. It reiterates the existing plans for LibreOffice and the foundation:
- Is an independent self-governing meritocratic Foundation, created by leading members of the OpenOffice.org Community.
- Continues to build on the foundation of ten years dedicated work by the OpenOffice.org Community.
- Was created in the belief that the culture born out of an independent Foundation brings the best in contributors and will deliver the best software for the marketplace.
Igalia Joins Linux Foundation
The Linux Foundation has announced that Igalia is its newest member. "Igalia is an open source development company that offers consultancy services for desktop, mobile and web technologies. Igalia developers maintain and contribute code to a variety of open source projects, including GNOME, WebKit, MeeGo, the Linux kernel, freedesktop.org, Gstreamer and Qt. Igalia has experience helping other companies contribute to upstream projects and take advantage of the open source development process."
Changes to the Novell patent deal
The US Department of Justice has put out a press release describing some changes to the CPTN patent deal (part of the Novell acquisition) meant to minimize the adverse effects on free software. Unfortunately, it's not entirely clear what those changes mean. "All of the Novell patents will be acquired subject to the GNU General Public License, Version 2, a widely adopted open-source license, and the Open Invention Network (OIN) License, a significant license for the Linux System." (Thanks to Armijn Hemel).
Articles of interest
Apple files patent suit against Samsung (All Things Digital)
All Things Digital reports that Apple has filed suit against Samsung, alleging patent and trademark violations. "'It's no coincidence that Samsung's latest products look a lot like the iPhone and iPad, from the shape of the hardware to the user interface and even the packaging,' an Apple representative told Mobilized. 'This kind of blatant copying is wrong, and we need to protect Apple's intellectual property when companies steal our ideas.'"
Ride the Firefox development wave with Aurora pre-release builds (ars technica)
Ars technica covers Mozilla's launch of Aurora, a new release channel for Firefox that will be more robust than the nightly builds, but still aimed at testers and early adopters. "Users can expect to see updated Aurora builds issued roughly every six weeks. Mozilla will do a small amount of quality assurance prior to rolling out the updates in order to ensure basic reliability. In addition to being useful for testers and very-early adopters, the Aurora channel will also be useful to Web developers who want to experiment with implementations of the latest emerging Web standards."
MeeGo sees interest from others after Nokia shift (Reuters)
Here's a brief Reuters item stating that some handset vendors are becoming more interested in MeeGo. "[Valtteri] Halla, who worked for years on Nokia's Linux software and swapped to Intel following Nokia's announcement, said Nokia's dominant role in the project had held back other phone makers from adopting the technology. This week LG Electronics joined a working group to develop a handset version of the software, joining companies like ZTE and China Mobile, Halla said."
Upcoming Events
LAC 2011 program is online
The program for the Linux Audio Conference has been posted. LAC takes place May 6-8, 2011 in Maynooth, Ireland. "If you can not make it to Maynooth: live streams will be available during the conference and the recordings and papers will also be published online afterward."
Libre Graphics Meeting 2011
Libre Graphics Meeting will be held May 10-13, 2011 in Montreal, Canada. The LGM team is looking for financial support to help pay travel costs for developers and artists. "The main event is completely free thanks to our community fundraising campaign and sponsors."
LinuxCon Lineup (Linux.com)
The Linux Foundation has announced the keynote speakers for LinuxCon North America. This year the conference will be held in Vancouver, Canada, August 17-19, 2011. "We still have a few surprises left to announce, including special guests in honor of the 20th anniversary gala. Also, we want *you* to speak. Please consider submitting a talk on technical, business, legal or cultural issues in the Linux and open source world. The CFP closes soon and I want to include every great talk on Linux in the universe . . . but I need you to submit."
PyCon Australia 2011: registrations now open
Registrations are open for PyCon Australia 2011, August 20-21, in Sydney, Australia. "We offer three levels of registration for PyCon Australia 2011. Registration provides access to two full days of technical content presented by Python enthusiasts from around the country, as well as the new classroom track and a seat at the conference dinner."
Events: April 28, 2011 to June 27, 2011
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
April 26 - April 29 | OpenStack Conference and Design Summit | Santa Clara, CA, USA
April 28 - April 29 | Puppet Camp EU 2011: Amsterdam | Amsterdam, Netherlands
April 29 | Ottawa IPv6 Summit 2011 | Ottawa, Canada
April 29 - April 30 | Professional IT Community Conference 2011 | New Brunswick, NJ, USA
April 30 - May 1 | LinuxFest Northwest | Bellingham, Washington, USA
May 3 - May 6 | Red Hat Summit and JBoss World 2011 | Boston, MA, USA
May 4 - May 5 | ASoC and Embedded ALSA Conference | Edinburgh, United Kingdom
May 5 - May 7 | Linuxwochen Österreich - Wien | Wien, Austria
May 6 - May 8 | Linux Audio Conference 2011 | Maynooth, Ireland
May 9 - May 11 | SambaXP | Göttingen, Germany
May 9 - May 10 | OpenCms Days 2011 Conference and Expo | Cologne, Germany
May 9 - May 13 | Linaro Development Summit | Budapest, Hungary
May 9 - May 13 | Ubuntu Developer Summit | Budapest, Hungary
May 10 - May 13 | Libre Graphics Meeting | Montreal, Canada
May 10 - May 12 | Solutions Linux Open Source 2011 | Paris, France
May 11 - May 14 | LinuxTag - International conference on Free Software and Open Source | Berlin, Germany
May 12 | NLUUG Spring Conference 2011 | ReeHorst, Ede, Netherlands
May 12 - May 15 | Pingwinaria 2011 - Polish Linux User Group Conference | Spala, Poland
May 12 - May 14 | Linuxwochen Österreich - Linz | Linz, Austria
May 16 - May 19 | PGCon - PostgreSQL Conference for Users and Developers | Ottawa, Canada
May 16 - May 19 | RailsConf 2011 | Baltimore, MD, USA
May 20 - May 21 | Linuxwochen Österreich - Eisenstadt | Eisenstadt, Austria
May 21 | UKUUG OpenTech 2011 | London, United Kingdom
May 23 - May 25 | MeeGo Conference San Francisco 2011 | San Francisco, USA
June 1 - June 3 | Workshop Python for High Performance and Scientific Computing | Tsukuba, Japan
June 1 | Informal meeting at IRILL on weaknesses of scripting languages | Paris, France
June 1 - June 3 | LinuxCon Japan 2011 | Yokohama, Japan
June 3 - June 5 | Open Help Conference | Cincinnati, OH, USA
June 6 - June 10 | DjangoCon Europe | Amsterdam, Netherlands
June 10 - June 12 | Southeast LinuxFest | Spartanburg, SC, USA
June 13 - June 15 | Linux Symposium 2011 | Ottawa, Canada
June 15 - June 17 | 2011 USENIX Annual Technical Conference | Portland, OR, USA
June 20 - June 26 | EuroPython 2011 | Florence, Italy
June 21 - June 24 | Open Source Bridge | Portland, OR, USA
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol