Bit Hacking

July 21st, 2011

Lion is the first operating system to require, and to fully take advantage of, 64-bit addressing modes in the Intel chips that power Apple’s Macintosh computers. One of the side effects of this is that every object identifier in Mac OS X’s Cocoa programming framework (typically an address in memory) is now twice as long as it was in a 32-bit environment.

Apple has apparently taken advantage of the 64-bit runtime in Lion by optimizing the Objective-C runtime itself to use some of these extra bits for, shall we say, clever purposes. Bavarious describes an optimization through which Apple is able to replace previously full-fledged opaque objects such as NSNumber with an object placeholder that exists entirely as the 64-bit “object address” itself. This means that, for a wide range of “simple” objects, no additional memory allocation is required, and no retain/release memory management is required for the “object.”

The trick relies on an implementation detail of the system: allocated blocks of memory are always aligned at 16-byte offsets into the address space. This leaves a large set of values, representable in 64 bits, that can never be the address of a real object. To understand this practically, imagine that your neighborhood’s postal addresses are all assigned at offsets of 10: 30, 40, 50, etc. A clever postal service could institute an addressing system that uses an “invalid” address such as “31” to mean, perhaps, “deliver to 30 with expedited afternoon delivery.”

Cleverness like this, encoding extra information in memory addresses, is a time-honored tradition. I recall the days of 24-bit addressing on classic Mac OS, when Apple, and many third-party developers, observed that the high 8 bits of a typical memory address could be tweaked and used to store additional information, because the system would never reference those bits when resolving a particular address.

In those days, using those extra bits turned out to be a pretty significant headache when 32-bit addressing ultimately came along, and lots of code had this “crufty” treatment of addresses to clean up. Perhaps it is a memory of situations like this that caused Jon “Wolf” Rentzsch to comment in his bookmarking of the above-referenced blog post:

“Every tagged pointer has its lowest bit set, hence tagged pointers are odd integers” Strikes me as a really bad idea. [Emphasis Mine]

But the difference now, in this scenario, is that the “cute hacking” is all being done by a central power, with, and in terms of, opaque objects that only Apple has the authority to change. I think this is a really clever hack that will undoubtedly lead to some serious performance gains in Lion and beyond. It’s hard to imagine specific outcomes that will make Apple regret adopting this strategy. In the worst case, the addressing system of some future Mac will not leave any “spare” bits to be exploited, and the runtime will simply revert to its previous behavior.

 

2 Responses to “Bit Hacking”

  1. Michel Fortin Says:

    There’s a big difference between hacking the lowest bits of the pointer versus hacking the highest bits. Since those are pointers to *objects*, you can assume they will always be memory-aligned, and unless we go back to 8- or 16-bit processors the last bit is always going to be zero for non-packed data structures longer than one byte, such as Objective-C objects.

  2. johne Says:

    This “hack” is not just a “really bad idea”, it ranks as “Not Even Wrong” on the “Right, Wrong, and Not Even Wrong” scale.

    This change breaks code that depends on the previously documented fact that the first (and absolutely required) thing a pointer to an Objective-C object points to is that object's `isa` Class.

    Can you guess what happens when you use `id object; Class objectClass = object->isa;` with these new, "fantastic" tagged pointers? Make no mistake about it, this was a well-documented fact in the official documentation, including the official public headers (i.e., `objc/*.h`, and the `typedef` / declaration for `id`), with no warnings anywhere that `id` was a private, opaque type.

    Setting aside those fundamental problems, this is a /horrific/ idea when examined from the perspective of the C99 standard. There’s no equivocating or prevaricating on the fact that the “clever tagged pointers hack” results in all kinds of formal “undefined behavior” according to the C99 standard. A few of the obvious violations:

    6.2.5 Types

    27 A pointer to void shall have the same representation and alignment requirements as a pointer to a character type. Similarly, pointers to qualified or unqualified versions of compatible types shall have the same representation and alignment requirements. All pointers to structure types shall have the same representation and alignment requirements as each other. All pointers to union types shall have the same representation and alignment requirements as each other. Pointers to other types need not have the same representation or alignment requirements.

    Specifically, “All pointers to structure types shall have the same representation and alignment requirements as each other.”

    6.3.2.3 Pointers

    5 An integer may be converted to any pointer type. Except as previously specified, the result is implementation-defined, might not be correctly aligned, might not point to an entity of the referenced type, and might be a trap representation.

    6 Any pointer type may be converted to an integer type. Except as previously specified, the result is implementation-defined. If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type.

    Michel Fortin comments that "There's a big difference between hacking the lowest bits of the pointer versus hacking the highest bits." I violently disagree: both make use of very specific architecture- and implementation-specific details. Both break when the initial assumptions later turn out to be wrong. What they break may be different, but breaking is still breaking.

    You then say

    “Since those are pointers to *objects*, you can assume they will always be memory-aligned, and unless we go back to 8- or 16-bit processors the last bit is always going to be zero for non-packed data structures longer than one byte, such as Objective-C objects.”

    … which is a very strong argument as to why this is a horrific idea. Anything that deals with pointers to objects is allowed to make that very same assumption. Anything that critically depends on this assumption will break when given a 10.7 tagged pointer. In fact, some of these assumptions are not actually assumptions at all, but requirements mandated by the standard. This means that some of the "extra bits" that tagged pointers so cleverly use "because they are zero" are /required/ by the standard to be zero.

    This means that anything that breaks because the lower bits are not zero in 10.7's tagged pointers cannot, by definition, be "buggy" or "incorrect".

    As a practical example of where this could go horribly wrong, one need only consider the case of the optimizer for C. Optimizers make extensive use of the pedantic details in the C standard. This can result in unbelievably complicated corner cases that are shockingly non-obvious. Any optimization done by the C compiler that depends on the lower bits being zero as required by the C standard will break when used on a 10.7 tagged pointer.

    Although this is an admittedly contrived example, assume that for some reason the optimizer decides that some particular optimization and instruction selection requires that the pointer be logically shifted right by two places, and then eventually logically shifted back left two places at the end. The optimizer can take advantage of the fact that, in this particular context, `((p>>2)<<2)` must be identical to `p`.

    Now just having a 10.7 tagged pointer pass through code that has nothing to do with Objective-C (i.e., the standard C library, or any of the numerous C-based shared libraries) is enough to break it.

    The fact that this is such an unbelievably bad idea from any number of compelling technical standpoints makes it all the more shocking that this massive, ABI-compatibility-breaking change to the way pointers are dealt with isn't even mentioned as a footnote in the 10.7 developer release notes.
