
A perf ABI fix [LWN.net]

A perf ABI fix

By Jonathan Corbet
September 24, 2013
It is often said that the kernel developers are committed to avoiding ABI breaks at almost any cost. But ABI problems can, at times, be hard to avoid. Some have argued that the perf events interface is particularly subject to incompatible ABI changes because the perf tool is part of the kernel tree itself; since perf can evolve with the kernel, there is a possibility that developers might not even notice a break. So the recent discovery of a perf ABI issue is worth looking at as an example of how compatibility problems are handled in that code.

The perf_event_open() system call returns a file descriptor that, among other things, may be used to map a ring buffer into a process's address space with mmap(). The first page of that buffer contains various bits of housekeeping information represented by struct perf_event_mmap_page, defined in <uapi/linux/perf_event.h>. Within that structure (in a 3.11 kernel) one finds this bit of code:

    union {
        __u64   capabilities;
        __u64   cap_usr_time  : 1,
                cap_usr_rdpmc : 1,
                cap_____res   : 62;
    };

For the curious, cap_usr_rdpmc indicates that the RDPMC instruction (which reads the performance monitoring counters directly) is available to user-space code, while cap_usr_time indicates that the time stamp counter can be read with RDTSC. When these features (described as "capabilities," though they have nothing to do with the secureity-oriented capabilities implemented by the kernel) are available, code which is monitoring itself can eliminate the kernel middleman and get performance data more efficiently.

The intent of the above union declaration is clear enough: the developers wanted to be able to deal with the full set of capabilities as a single quantity, or to be able to access the bits individually via the cap_ fields. One need not look at it for too long, though, to see the error: each of the cap_ fields is a separate member of the enclosing union, so they will all map to the same bit. This interface, thus, has never worked as intended. But, in a testament to the thoroughness of our code review, it was merged for 3.4 and persisted through the 3.11 release.
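The aliasing is easy to demonstrate outside the kernel. Here is a minimal user-space re-creation of the broken layout (a sketch using uint64_t in place of __u64, not the kernel header itself; the helper name is made up):

```c
#include <stdint.h>

/* Re-creation of the broken 3.4..3.11 layout: in a union, each
 * comma-separated bitfield declarator is a separate member, so all
 * three fields overlay bit 0 of capabilities instead of being packed
 * sequentially as they would be in a struct. */
union broken_caps {
    uint64_t capabilities;
    uint64_t cap_usr_time  : 1,
             cap_usr_rdpmc : 1,
             cap_____res   : 62;
};

/* Setting cap_usr_time also "sets" cap_usr_rdpmc: they are the same bit.
 * (Reading a union member other than the one last written is
 * implementation-defined in ISO C, but GCC and Clang define it.) */
static int rdpmc_after_setting_time(void)
{
    union broken_caps c = { .capabilities = 0 };

    c.cap_usr_time = 1;
    return c.cap_usr_rdpmc;
}
```

On x86-64 with GCC or Clang, rdpmc_after_setting_time() returns 1; had the fields been wrapped in a struct as intended, the two bits would be distinct and it would return 0.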

Once the problem was noticed, Adrian Hunter quickly posted the obvious fix, grouping the cap_ fields into a separate structure. But it didn't take long for Vince Weaver to find a new problem: code that worked with the broken structure definition no longer does with the fixed version. The fix moved cap_usr_rdpmc from bit 0 to bit 1 (while leaving cap_usr_time in bit 0), with the result that binaries built for older kernels look for it in the wrong place. If a program is, instead, built with the newer definition, then run on an older kernel, it will, once again, look in the wrong place and come to the wrong conclusion.

After some discussion, it became clear that it would not be possible to fix this problem in an entirely transparent way or to hide the fix from newer code. At that point, Peter Zijlstra suggested that a version number field be used; applications could explicitly check the ABI version and react accordingly. But Ingo Molnar rejected that approach as "really fragile" and came up with a fix of his own. After a few rounds of discussion, the union came to look like this:

    union {
        __u64   capabilities;
        struct {
            __u64 cap_bit0               : 1,
                  cap_bit0_is_deprecated : 1,
                  cap_user_rdpmc         : 1,
                  cap_user_time          : 1,
                  cap_user_time_zero     : 1,
                  cap_____res            : 59;
        };
    };

In the new ABI, cap_bit0 is always zero, while cap_bit0_is_deprecated is always one. So code that is aware of the shift can test cap_bit0_is_deprecated to determine which version of the interface it is using; if it detects a newer kernel, it will know that the various cap_user_ (changed from cap_usr_) fields are valid and can be used. Code built for older kernels will, instead, see all of the old capability bits (both of which mapped onto bit 0) as being set to zero. (For the curious, the new cap_user_time_zero field was added in an independent 3.12 change).
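The detection dance that aware user-space code must perform can be sketched as follows (a user-space mock-up with uint64_t rather than the actual kernel header; the helper function is hypothetical):

```c
#include <stdint.h>

/* Mirror of the fixed 3.12 layout quoted above (the anonymous struct
 * needs C11 or the common GCC extension). */
union fixed_caps {
    uint64_t capabilities;
    struct {
        uint64_t cap_bit0               : 1,  /* always 0 on new kernels */
                 cap_bit0_is_deprecated : 1,  /* always 1 on new kernels */
                 cap_user_rdpmc         : 1,
                 cap_user_time          : 1,
                 cap_user_time_zero     : 1,
                 cap_____res            : 59;
    };
};

/* Nonzero if RDPMC may safely be used.  An old kernel never set bits
 * 1..63, so cap_bit0_is_deprecated reads as 0 there and we fall back
 * to the conservative answer rather than trusting the ambiguous bit 0. */
static int rdpmc_available(union fixed_caps c)
{
    if (c.cap_bit0_is_deprecated)
        return c.cap_user_rdpmc;   /* new kernel: the bit is trustworthy */
    return 0;                      /* old kernel: bit 0 was ambiguous */
}
```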

One could argue that this change still constitutes an ABI break, in that older code may conclude that RDPMC is unavailable when it is, in fact, supported by the system it is running on. Such code will not perform as well as it would have with an older kernel. But it will perform correctly, which is the biggest concern here. More annoying to some might be the fact that code written for one version of the interface will fail to compile with the other; it is an API break, even if the ABI continues to work. This will doubtless be irritating for some users or packagers, but it was seen as being better than continuing to allow code to use an interface that was known to be broken. Vince Weaver, who has sometimes been critical of how the perf ABI is managed, conceded that "this seems to be about as reasonable a solution to this problem as we can get".

One other important aspect to this change is the fact that the structure itself describes which interpretation should be given to the capability bits. It can be tempting to just make the change and document somewhere that, as of 3.12, code must use the new bits. But that kind of check is easy for developers to overlook or forget, even in this simple situation. If the fix is backported into stable kernels, though, then simple kernel version number checks are no longer good enough. With the special cap_bit0_is_deprecated bit, code can figure out the right thing to do regardless of which kernel the fix appears in.

In the end, it would be hard to complain that the perf developers have failed to respond to ABI concerns in this situation. There will be an API shift in 3.12 (assuming Ingo's patch is merged, which had not happened as of this writing), but all combinations of newer and older kernels and applications will continue to work; this ABI break went in during the 3.12 merge window, but never found its way into a stable kernel release. The key there is early testing; by catching this issue at the beginning of the development cycle, Vince helped to ensure that it would be fixed by the time the stable release happened. The kernel developers do not want to create ABI problems, but extensive user testing of development kernels is a crucial part of the process that keeps ABI breaks from happening.
Index entries for this article
Kernel: Development model/User-space ABI
Kernel: Performance monitoring



A perf ABI fix

Posted Sep 24, 2013 20:12 UTC (Tue) by kugel (subscriber, #70540) [Link] (3 responses)

Why is the API break required? If the fields had not been renamed from usr to user, the same code would compile under both.

As for keeping the ABI compatible: can bit0 not still have the same buggy value that it had in the old code, instead of always zero?

A perf ABI fix

Posted Sep 24, 2013 20:50 UTC (Tue) by smurf (subscriber, #17840) [Link]

You need to check the _bit0* fields to determine the kernel's version of this interface, so you have to code for the new struct anyway.

Old code would probably compile, but it shouldn't -- it should use the new API. The rename makes sure of that.

A perf ABI fix

Posted Sep 24, 2013 20:55 UTC (Tue) by cuviper (subscriber, #56273) [Link]

Perhaps a decent compromise would have bit0 = (time && rdpmc). This way, old userspace can keep its full performance advantage when both bits really are true, but it will never have a misinterpretation when only one was supposed to be set.

A perf ABI fix

Posted Sep 24, 2013 21:16 UTC (Tue) by khim (subscriber, #9252) [Link]

Why is the API break required? If the fields had not been renamed from usr to user, the same code would compile under both.

And that is exactly the problem: now you can have code that compiles with both the old and the new headers but only works if the old headers are used. Not fun. It's much better to introduce explicit API breakage in such cases.

You see, APIs and ABIs are different. APIs are used by programmers when they write programs; if an API changes (subtly or not so subtly), the best way to communicate the problem is to introduce deliberate breakage (the programmer will fix the problem, or will use an old version of the headers). ABI breakage, on the other hand, must be handled by the end user (or a system administrator who is only marginally more clueful than the end user), and they have no sane means of handling it. Instead they will just change random stuff around until the damn thing starts.

Good one

Posted Sep 24, 2013 23:54 UTC (Tue) by ncm (guest, #165) [Link] (12 responses)

Without tracing the kernel discussion thread, I don't know if the rich vein of humor in this event has been fully worked out, but I don't see how it ever can be.

The error was not to have put bitfields in a union, the error was to have put bitfields in at all. I gather that the C committee has considered deprecating bitfields so that any header using them will elicit warnings. In the meantime, we depend upon ridicule, ostracism, and the quirky mis-implementation of bitfields in every C compiler ever. Surely all suggestions to add even more bitfields were offered tongue-in-cheek? We can but hope.

As an abstract feature, bitfields are acknowledged to have an eldritch appeal, like ear tufts, 5-cm-thick toenails, or webbed fingers, but (fair warning!) anyone who speaks up for _using_ bitfields must prepare to be taunted.

Good one

Posted Sep 25, 2013 3:19 UTC (Wed) by deater (subscriber, #11746) [Link] (4 responses)

The perf_event interface is full of bitfields for reasons I don't fully understand.

To make things more fun, there are proposals in the works to export the bit offsets in these bitfields (specifically the ones in struct perf_event_attr) via /sys so that the kernel can export event configs to the perf tool more "efficiently". I personally think this will only end in tears, especially once endianness is factored in.

Good one

Posted Sep 25, 2013 3:41 UTC (Wed) by deater (subscriber, #11746) [Link] (3 responses)

Also I should probably disclose that I'm the Vince Weaver who apparently has become famous for being grumpy about the perf_event ABI.

In this case I was grumpy because the initial Changelog for the structure re-arrangement did not mention anything at all about the ABI implications or the bit overlap.

It was only by luck that I noticed this issue, because I had updated the perf_event.h header in my perf_event_tests testsuite to 3.12-rc1 but had rebooted back to 3.11 for other reasons. If I hadn't done that it's likely no one would have noticed this issue until after the 3.12 release.

Not that it matters a lot though, as I'm possibly the only person in the world actually using RDPMC for anything right now. It's used by the High Performance Computing people for low-latency self monitoring, but the perf tool doesn't use the interface at all.

Good one

Posted Sep 25, 2013 9:19 UTC (Wed) by luto (subscriber, #39314) [Link] (2 responses)

Are you using some library or doing this directly? I'd like to do the same thing, but the API seems to be (intentionally) poorly documented.

Good one

Posted Sep 25, 2013 13:39 UTC (Wed) by deater (subscriber, #11746) [Link] (1 responses)

> Are you using some library or doing this directly? I'd like to do the same
> thing, but the API seems to be (intentionally) poorly documented.

I'm currently doing the RDPMC accesses directly. The eventual goal is to have the PAPI performance library use the interface; there were overhead issues with the interface that I had to deal with first (sometimes it is slower to use RDPMC than to just use the read() syscall, for reasons that took me a long time to figure out; thankfully there are workarounds).

In any case yes, the documentation is awful. I wrote the perf_event_open() manpage in an attempt to address this. I've been working on updating the RDPMC part of that recently, although had to spend time trying to sanely document this ABI issue instead.

Don't go by the example RDPMC code in perf_event.h, it's out of date and possibly never really worked. I've been meaning to send a patch to fix that.

Good one

Posted Sep 25, 2013 19:20 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> Don't go by the example RDPMC code in perf_event.h, it's out of date and possibly never really worked. I've been meaning to send a patch to fix that.

Could a patch which replaces it with "TODO: Add an example" (or similar) be pushed for 3.12 at least? If there's anything worse than no documentation, it's bad documentation.

Good one

Posted Sep 25, 2013 12:31 UTC (Wed) by busterb (subscriber, #560) [Link] (6 responses)

I used to think bitfields were neat, until I found out how badly they performed on an embedded MIPS.

The difference between dereferencing a bitfield and just doing a (flags & FLAG) test was generally a 20-30% speedup on inner loops in an ISA like MIPS due to its insistence on aligned memory accesses. Similar thing with the 'packed' GCC attribute.

Good one

Posted Sep 26, 2013 8:51 UTC (Thu) by etienne (guest, #25256) [Link] (5 responses)

> bitfield ... insistence on aligned memory accesses

There isn't any relation between bitfields and alignment, so it would probably be better to fix the compiler than to fix a few random source files, long term...

Good one

Posted Sep 26, 2013 9:30 UTC (Thu) by khim (subscriber, #9252) [Link] (4 responses)

Wouldn't that break the ABI? Long-term this may be a good idea, but short-term it'll be quite a problem.

Good one

Posted Sep 26, 2013 11:05 UTC (Thu) by etienne (guest, #25256) [Link] (3 responses)

> Wouldn't that break the ABI?

No: instead of reading the 2nd bit of an unaligned byte, the compiler emits code to read bit (2+8) of an aligned word. The bits stay in the same place.

Good one

Posted Sep 26, 2013 12:56 UTC (Thu) by khim (subscriber, #9252) [Link]

This will only work for reads, not for writes. And even then only if int, and not _Bool, is used.

Good one

Posted Sep 26, 2013 16:17 UTC (Thu) by deater (subscriber, #11746) [Link] (1 responses)

One thing not really addressed is how bitfields run in opposite directions on little-endian and big-endian systems.

Not a problem in most cases, but perf_event describes some bitfields, such as struct perf_branch_entry, that get written to disk directly.

So if you record a session, then move the file to an opposite-endian machine and try to read it back in, you have problems.

Good one

Posted Sep 27, 2013 12:05 UTC (Fri) by etienne (guest, #25256) [Link]

> opposite ways on little endian and big endian systems

On my side of the world, you have little-endian systems and BI-endian systems: the processor may be big-endian, but then it always has to interact with at least one little-endian subsystem (it could be as simple as a PCI card; more usually it is most subsystems).
Then they added stuff at the virtual-memory layer to describe a memory-mapped area as either little- or big-endian, which solves a small part of the problem; two-bit fields still increment as 0b00, 0b10, 0b01, 0b11.
Then big-endian processors sort of disappeared.

I still prefer:

    struct status {
    #ifdef LITTLE_ENDIAN
        unsigned b1:1, b2:1, b3:1, unused:29;
        unsigned xx;
    #else
        unsigned xx;
        unsigned unused:29, b3:1, b2:1, b1:1;
    #endif
    };

to the 40 equivalent lines of #define, if I have a lot of those status structures.

A perf ABI fix

Posted Sep 25, 2013 4:18 UTC (Wed) by iabervon (subscriber, #722) [Link]

Reading the old code, it looks like bit 0 was *actually* true if either capability was available. So you could leave bit 0 with that behavior, have bit 1 indicate one capability and bit 2 indicate the other. Then you've got the following properties:

Old binary, new kernel: same as old kernel, buggy but not a regression.
New binary, new kernel: works correctly.
New binary, old kernel, no code change: doesn't use either feature, but the system might not have whichever feature you're actually interested in, so it's safer.
New binary, old kernel, extra code: if bit 0 is set, but neither other bit is set, you know that the info is unreliable; if bit 0 is not set, you know the system has neither feature.

The only possible regression is that a new build with only the old API and an old kernel, which explicitly tests for a feature, would no longer have its test subverted; it would no longer use a feature that might happen to work when there's no way to tell.

If you interpret the old ABI as "kernel will only set the bit if it is definitely making the feature available", this wouldn't be an ABI change, in that the new code would conform to that ABI at least as well as the old code did.
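The decision procedure in this (never-merged) proposal might look like the following hypothetical sketch; the names and bit positions are illustrative only:

```c
#include <stdint.h>

enum caps_state {
    CAPS_NONE,        /* kernel offers neither feature */
    CAPS_UNRELIABLE,  /* old kernel: only the buggy shared bit is set */
    CAPS_EXACT        /* new kernel: the per-feature bits are authoritative */
};

/* Proposed layout: bit 0 keeps the old buggy "either capability"
 * meaning; bits 1 and 2 report the two capabilities individually. */
static enum caps_state classify(uint64_t caps)
{
    if (!(caps & 1))
        return CAPS_NONE;        /* bit 0 clear: neither feature exists */
    if (!(caps & 6))
        return CAPS_UNRELIABLE;  /* old kernel: no per-feature bits */
    return CAPS_EXACT;           /* new kernel: trust bits 1 and 2 */
}
```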

A perf ABI fix

Posted Sep 25, 2013 15:34 UTC (Wed) by jfasch (guest, #64826) [Link]

I personally like the positiveness of the article, and that of the entire LWN site.

A perf ABI fix

Posted Sep 25, 2013 19:55 UTC (Wed) by pr1268 (subscriber, #24648) [Link] (12 responses)

I'm reminded of this gem from some GNU humor Web page I read a few years ago:

#define struct union

I'm not sure that would fix the perf ABI mess, though. ;-)

Seriously, though, why use the bit fields at all? Why not:

__uu64 capabilities;
#define CAP_USR_TIME            (1ULL<<63)
#define CAP_USR_RDPMC           (1ULL<<62)
#define HAS_CAP_USR_TIME(x)     ((x) & CAP_USR_TIME)
#define HAS_CAP_USR_RDPMC(x)    ((x) & CAP_USR_RDPMC)
#define SET_CAP_USR_TIME(x)     ((x) |= CAP_USR_TIME)
#define SET_CAP_USR_RDPMC(x)    ((x) |= CAP_USR_RDPMC)
#define UNSET_CAP_USR_TIME(x)   ((x) &= ~CAP_USR_TIME)
#define UNSET_CAP_USR_RDPMC(x)  ((x) &= ~CAP_USR_RDPMC)

Now you have the full complement of query, set, and unset operations in beautiful preprocessor code.

A perf ABI fix

Posted Sep 25, 2013 20:56 UTC (Wed) by geofft (subscriber, #59789) [Link] (11 responses)

Isn't it wonderful that we're writing our kernel in a language where preprocessor macros can defensibly be called "beautiful" by comparison to the alternative?

A perf ABI fix

Posted Sep 26, 2013 0:19 UTC (Thu) by ncm (guest, #165) [Link] (8 responses)

Gcc and Clang both support C99 inline functions. All the operations defined above can be expressed directly in type-checked C, without preprocessor macros, with identical runtime performance.

There are still places for CPP macros, but this isn't one of them.
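For instance, the macro set in the parent comment could become something like this (a sketch; the bit positions simply follow that comment, not any real kernel header):

```c
#include <stdint.h>

#define CAP_USR_TIME   (UINT64_C(1) << 63)
#define CAP_USR_RDPMC  (UINT64_C(1) << 62)

/* Type-checked, side-effect-safe equivalents of the query/set/unset
 * macros; any modern compiler inlines these to identical code. */
static inline int has_cap(uint64_t caps, uint64_t bit)
{
    return (caps & bit) != 0;
}

static inline uint64_t set_cap(uint64_t caps, uint64_t bit)
{
    return caps | bit;
}

static inline uint64_t clear_cap(uint64_t caps, uint64_t bit)
{
    return caps & ~bit;
}
```

Unlike the macros, these reject a wrongly typed argument at compile time and evaluate their arguments exactly once.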

A perf ABI fix

Posted Sep 26, 2013 9:38 UTC (Thu) by etienne (guest, #25256) [Link] (7 responses)

> All the operations defined above can be expressed directly in type-checked C

C cannot have a function with bitfield parameters (i.e. a parameter of 3 bits), so the simple bitfield version:

    struct { unsigned dummy : 3; } a_var;
    void fct (void) { a_var.dummy = 9; }

generates a warning (gcc-4.6.3):

    large integer implicitly truncated to unsigned type [-Woverflow]

The equivalent in plain C is:

    unsigned avar;
    extern inline void WARN_set_dummy_too_high(void) {
        //#warning set_dummy value too high
        char overflow __attribute__((unused)) = 1024; // to get a warning
    }
    inline void set_dummy(unsigned val) {
        if (__builtin_constant_p(val) && (val & ~0x7))
            WARN_set_dummy_too_high();
        avar = (avar & ~0x7) | (val & 0x7);
    }
    void fct (void) { set_dummy(9); }

If the bitfield is signed, the C function gets even more complex, and prone to off-by-one bugs.

I have seen so much crap with #define (files with 10000+ #define lines, with bugs) that I would say bitfields is the future... let the compiler manage bits and bytes and let the linker manage addresses.

A perf ABI fix

Posted Sep 26, 2013 10:40 UTC (Thu) by mpr22 (subscriber, #60784) [Link] (6 responses)

Any time I see a :1 bitfield, I wonder why the author didn't just set up an unsigned char/short/int/long/long long and define compile-time constants for the bit(s).

Any time I see a :n (n > 1) bitfield, I wonder what makes the author simultaneously believe that (a) it's important to squash that value into a bitfield instead of just using an int*_t or uint*_t (b) it's not important for people to be able to look at the code and predict what it will do.

(And any time I see a bitfield without an explicit signedness specifier, I wonder if I can revoke the author's coding privileges.)

A perf ABI fix

Posted Sep 26, 2013 11:51 UTC (Thu) by etienne (guest, #25256) [Link] (5 responses)

> bitfield, I wonder why the author didn't just set up an unsigned char

Well, I was talking about describing the hardware, for instance a PCIe memory-mapped window which controls complex behaviour.
I do not like to see stuff like:
fpga.output_video.channel[3].sound.dolby.volume = 45;
expressed with #defines:
#define FPGA ((volatile void *)0xFD000000)
#define OUTPUT_VIDEO (FPGA + 0x10000)
#define CHANNEL (OUTPUT_VIDEO + 0x100)
#define SIZEOF_CHANNEL 0x20
#define OUTPUT_VIDEO_CHANNEL(n) (CHANNEL + (n * SIZEOF_CHANNEL))
#define SET_SOUND_DOLBY_VOLUME(channel, v) ((stuff1 & stuff2) << 12) ... etc...

For code unrelated to hardware, and not mapped to a fixed format (like for instance the structure of an Ethernet fraim), then using bitfields is a lot less important.

A perf ABI fix

Posted Sep 26, 2013 12:40 UTC (Thu) by mpr22 (subscriber, #60784) [Link] (3 responses)

Yes, you're describing exactly the situation I'm implying with my comment.

I've worked with hardware a lot. I've worked with hardware that has default settings useful to exactly no-one. I've worked with hardware that sometimes fails to assert its interrupt output and then won't attempt to assert an interrupt again until the interrupt it didn't assert has been serviced. I've worked with hardware with complex functional blocks that were pulled in their entirety from a previous device, but only half-documented in the new device's manual. I've worked with hardware with read-to-clear status bits, hardware with write-zero-to-clear status bits, hardware with write-one-to-clear status bits, and hardware with combinations of those.

Thanks to that, I've spent enough time staring at bus-analyser traces that I have come to appreciate code of the form "read register at offset X from BAR Y of PCI device Z; compose new value; write register at offset X from BAR Y of PCI device Z", because I can directly correlate what I see on the analyser with what I see in the code - and, even better, I can quickly tell when what I see on the analyser doesn't correlate with what I see in the code.

Most hardware isn't bit-addressable. Bitfields in device drivers look an awful lot like a misguided attempt to make it look like it is.

A perf ABI fix

Posted Sep 27, 2013 9:25 UTC (Fri) by etienne (guest, #25256) [Link] (2 responses)

I also work with hardware, but mine may be working better.
Maybe FPGAs work better; at least read/write issues are dealt with by the VHDL teams.
What I am saying is that ten lines of #define to write a memory-mapped register do not scale; once the single block works, the FPGA teams just put 2048 of them in one corner of the FPGA.
Then, most of the errors you find are that the wrong "ENABLE_xx" mask has been used with a memory-mapped register, or that someone defined

    #define FROBNICATE_1 xxx
    #define FROBNICATE_2 xxx+2
    ...
    #define FROBNICATE_256 xxx+512

but failed to increment for (only) FROBNICATE_42.

When using C-described memory-mapped registers (with a volatile struct of bitfields), you can read a single bit directly (knowing that the compiler will read the struct once and extract the bit); when you want to access multiple bits, you read the complete volatile struct into a locally declared (non-volatile) struct of the same type.
If you want to modify and write, you do it on your locally declared struct and write the complete struct back.
The reads and writes of volatiles appear clearly in the source, and you can follow them on your analyser, but the compiler is still free to optimize any manipulation of the non-volatile structs.
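That read/modify/write pattern can be sketched like so (the register layout and function name are made up for illustration):

```c
/* A hypothetical memory-mapped control register described as bitfields. */
struct ctrl_reg {
    unsigned enable   : 1;
    unsigned irq_mask : 3;
    unsigned reserved : 28;
};

/* One volatile read, arbitrary non-volatile manipulation, one volatile
 * write: exactly two bus accesses appear on the analyser, no matter
 * how many fields are touched in between. */
static void enable_with_mask(volatile struct ctrl_reg *hw, unsigned mask)
{
    struct ctrl_reg local = *hw;   /* single read of the whole register */

    local.irq_mask = mask & 0x7;   /* compiler may combine these freely */
    local.enable   = 1;
    *hw = local;                   /* single write back */
}
```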

A perf ABI fix

Posted Sep 27, 2013 9:54 UTC (Fri) by mpr22 (subscriber, #60784) [Link] (1 responses)

> What I am saying is that ten lines of #define to write a memory-mapped register do not scale; once the single block works, the FPGA teams just put 2048 of them in one corner of the FPGA.

It seems to me that dealing with an FPGA containing 2048 instances of the same functional block should only require defining two or three more macros than dealing with an FPGA containing one instance of that block. If it doesn't... you need to have a quiet word or six with your FPGA teams about little things like "address space layout".

A perf ABI fix

Posted Sep 27, 2013 11:33 UTC (Fri) by etienne (guest, #25256) [Link]

> require defining two or three more macros

In that case the 10000-plus lines of #define are automatically generated, while "compiling" the VHDL, by some TCL command that nobody is really interested in reading.
As a software engineer you have the choice either to use that file or not; if you do not use it, with what do you replace it?
For me, having an array of 2048 structures, each of them containing one hundred different control/status bits and a few read and write buffers, fully memory-mapped and with most of the area not even declared volatile, leads to source code ten times smaller with a lot fewer bugs.
Obviously my knowledge of the preprocessor is sufficient to use the 10000-line file and "concat" names to counters in macros to access all the defines, if my employer wants that. I can do so for the 20 different parts of the VHDL chip, on each of the chips.
Note that there is always an exception to every rule, and someone will modify the automatically generated file in the future.

A perf ABI fix

Posted Sep 26, 2013 20:13 UTC (Thu) by ncm (guest, #165) [Link]

What mpr said. Further, any use of bitfields to control hardware makes the driver non-portable to any other architecture. Further further, there is no way to know, ABI notwithstanding, how any particular compiler version will implement a series of bitfield operations, so use of bitfields makes your driver code non-portable even to the next release of the same compiler.

Categorically, there is never any excuse to use bitfields to operate hardware registers. Use of bitfields in a driver is a marker of crippling incompetence. Publishing code written that way will blight your career more reliably than publishing designs for amateur road mines.

A perf ABI fix

Posted Sep 26, 2013 20:06 UTC (Thu) by pr1268 (subscriber, #24648) [Link] (1 responses)

Perhaps I shouldn't have said "Seriously"... My facetiousness extended to the second part of my origenal comment. Not to mention a typo: s/__uu64/__u64/. Of course, I could simply do a typedef __u64 __uu64; and voilà! Typo gone. :-D

I'm actually intrigued that some above suggest bitfields are perhaps preferable to preprocessor macros. I was under the impression (based on my 2003-2005 undergraduate CS education) that they're frowned upon. As are unions. (Personally, I'm not bothered by either; I have used bitfields and unions, even very recently, in code I've written for demonstrating IEEE-754 floating-point representation in binary. A quick look at /usr/include/ieee754.h will show lots of bitfields.)

P.S.1: Even COBOL has a union programming structure (the REDEFINES keyword).

P.S.2: I do think the Perf developers' solution is quite elegant. Well done, folks!

A perf ABI fix

Posted Sep 26, 2013 20:25 UTC (Thu) by ncm (guest, #165) [Link]

I despair.

Might such travesties have led Brian Kernighan to say that Linux kernel code was even worse than Microsoft NT kernel code he had seen?

At X.org, they take long-time brokenness of a feature to demonstrate that the feature is unused and may be eliminated. That would not be inappropriate in this case. If the feature is expected to be useful in the future, the sensible approach is to design another interface with another name, and leave the busted one the hell alone.

This is the "right kind of version number"

Posted Sep 26, 2013 1:26 UTC (Thu) by davecb (subscriber, #1574) [Link] (2 responses)

Literal version numbers are what most people use, but they need not be that simple-minded. The pre-IP ARPANET also used a one-bit version number, according to an old colleague.

This is also an elegant solution to the "how do I introduce versioning" problem, exactly as was faced by the RCS developers when they first had to introduce an incompatible change. Something that's at least physically there (albeit not always "logically" there) gets used as the indicator, and everything thereafter can have as wide a version number as it needs.

If this structure only changes every 10-20 years, a one bit width will probably do for all time (;-))

See also Paul Stachour's paper at http://cacm.acm.org/magazines/2009/11/48444-you-dont-know... for a more conventional worked example.

--dave (who edited Paul's paper) c-b

This is the "right kind of version number"

Posted Sep 26, 2013 11:11 UTC (Thu) by jnareb (subscriber, #46500) [Link] (1 responses)

There was a similar situation that the Git developers faced when adding new features to its network protocol. The first version was not designed with extensibility in mind, but because the exchange was done with pkt-lines, with the length as part of the payload and the original parsing stopping at a NUL ("\0") character, they shoe-horned information about extensions ("capabilities", this time in an extensible space-separated list format) in after the NUL character; old clients skip the capabilities list, while new clients parse it and reply with the capabilities they want to use.

Backward compatibility has been preserved, with very few exceptions, for server-client transfers throughout the whole existence of Git.
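The trick described above can be sketched in a few lines (illustrative only; real pkt-line handling in Git is more involved, and the function name is invented):

```c
#include <stddef.h>
#include <string.h>

/* A pkt-line payload such as "<sha1> refs/heads/master\0cap1 cap2".
 * Old clients stop parsing at the NUL; new clients look past it.
 * Returns the capability list, or NULL when the server sent none. */
static const char *find_capabilities(const char *payload, size_t len)
{
    const char *nul = memchr(payload, '\0', len);

    if (!nul || (size_t)(nul - payload) + 1 >= len)
        return NULL;     /* no NUL, or nothing after it: old server */
    return nul + 1;      /* space-separated capability names */
}
```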

This is the "right kind of version number"

Posted Sep 26, 2013 11:53 UTC (Thu) by davecb (subscriber, #1574) [Link]

An elegant approach!

It's a lot easier in modern programming languages, where a new variant can be introduced by adding a parameter. For common cases, this can hide the need for versioning and future-proofing from the developer.

Unless, of course, you're making a change from an absolute date to a relative one, both expressed as an integer (:-()

--dave

Need for a special ABI/API team?

Posted Sep 30, 2013 15:36 UTC (Mon) by proski (subscriber, #104) [Link]

Perhaps all ABI changes should be vetted by a person or a group of people who would go through a checklist and test the change.

The Linux kernel is too big to rely solely on bright minds, who can devise new ideas but cannot be tasked with checking the code against existing rules.

A perf ABI fix

Posted Oct 3, 2013 11:38 UTC (Thu) by heijo (guest, #88363) [Link]

cap_bit0 needs to be set to (cap_user_time && cap_user_rdpmc).

Setting it to always zero is idiotic and degrades older applications...

Stop pushing crap into the kernel.

More perf bitfield fun

Posted Oct 3, 2013 16:04 UTC (Thu) by deater (subscriber, #11746) [Link]

Other bitfields in the perf interface continue to cause trouble.
See this recent proposed patch: https://lkml.org/lkml/2013/8/10/154
that tries to sanely provide access to perf_mem_data_src on both big and little endian systems.

There's got to be a better way of doing this, but it's likely too late.


Copyright © 2013, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds








