It’s not the dynamic dispatch!

Joel has decided to blame the use of dynamic dispatch for Ruby’s speed issues:

> Without knowing much about the implementation of Ruby, I would guess that the biggest issue is around late binding and especially duck typing, which prevents type inference or strong typing, which means that function calls will always be slow because you can never get something compiled down to the point where a function call is just a single CALL instruction (on x86)… you always have to be exploring the object, possibly even scanning a hash table to find the function you want to call.

In other words, “it’s always going to be slow because if it’s not strongly-typed it can’t be statically compiled to the most minimal possible instruction sequence!” Which is, simply put, a bullshit argument even if you ignore the fact that he said right up front it was bullshit.

There’s a lot that you can do to optimize dynamic dispatch. If you don’t believe me, take a look at the implementation of the Apple/NeXT Objective-C runtime, objc4. Go ahead, it’s Open Source. (The link is for the version that’s part of Mac OS X 10.4.7 for Intel processors.) It implements full Smalltalk-style dynamic dispatch — the same type of dynamic dispatch that Ruby uses. And what’s more, Ridiculous Fish also wrote a great article on how dynamic dispatch is implemented in the objc_msgSend() primitive — with particular attention focused on its performance characteristics!
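To make that concrete, here's a toy sketch of the trick that keeps this kind of dispatch cheap: hash the selector, probe a small cache, and only fall back to a full method-table walk on a miss. This is my own illustration, not the objc4 source; the cache, its size, and the hash are made up, and class_getMethodImplementation() is just a real runtime call standing in for the slow path.

#import <Foundation/Foundation.h>
#import <objc/runtime.h>
#import <stdint.h>

// Toy illustration only -- NOT the objc4 implementation. The real runtime
// keeps a cache like this per class and probes it in a few instructions of
// hand-tuned assembly inside objc_msgSend().
typedef struct { SEL selector; IMP imp; } CacheEntry;

static CacheEntry cache[64];   // made-up size; real caches grow as needed

static IMP lookup(Class cls, SEL sel) {
    unsigned idx = ((uintptr_t)(void *)sel >> 3) & 63;    // hash the selector
    if (cache[idx].selector == sel)
        return cache[idx].imp;                             // fast path: cache hit
    IMP imp = class_getMethodImplementation(cls, sel);     // slow path: full lookup
    cache[idx].selector = sel;                             // remember it for next time
    cache[idx].imp = imp;
    return imp;
}

int main(void) {
    @autoreleasepool {
        NSString *name = @"Chris";
        SEL sel = @selector(length);
        IMP imp = lookup([name class], sel);
        // Call through the cached IMP, just as a dispatched message would.
        NSUInteger len = ((NSUInteger (*)(id, SEL))imp)(name, sel);
        NSLog(@"length = %lu", (unsigned long)len);
    }
    return 0;
}

On a cache hit, that's roughly all a message send costs; Ridiculous Fish's article walks through how objc_msgSend() does the same thing far more efficiently. (Compile it as a .m file against Foundation if you want to try it.)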

No, it’s not message-based dynamic dispatch or “duck typing” (runtime polymorphism) that makes Ruby slow. It’s the fact that Ruby is a single-pass interpreted language. It’s not compiled to bytecode. It’s not compiled to native code. It’s scanned, parsed, and then immediately executed.

Imagine if your C compiler, or your Fortran compiler, or your Java compiler — or your Visual Basic compiler, for that matter — had to be invoked every time you ran your program. Imagine how slow that would be! That’s essentially what Ruby is doing, and that’s why it’s slow. Ruby 2.0 is planned to run on the YARV virtual machine, and there has also been work to compile Ruby code for LLVM. There’s nothing in Ruby’s nature that makes this a particularly difficult problem, especially since all of the issues of efficiently compiling dynamic languages with reflection and self-modification features were solved by Lisp in the 1960s and Smalltalk in the 1970s.
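For a feel of the difference, here's a tiny, made-up contrast between the two execution models (it has nothing to do with Ruby's or YARV's actual internals): evaluating 2 * 3 + 4 by walking a parse tree, versus running it as flat bytecode.

/* Toy contrast only -- not Ruby's or YARV's internals. */
#include <stdio.h>

/* Tree-walking: every evaluation re-traverses the parsed structure,
   branching on node types and recursing as it goes. */
typedef struct Node {
    char op;                     /* '+' or '*', or 'n' for a number literal */
    int  value;                  /* used only when op == 'n' */
    struct Node *left, *right;
} Node;

static int evalTree(const Node *n) {
    if (n->op == 'n') return n->value;
    int l = evalTree(n->left), r = evalTree(n->right);
    return n->op == '+' ? l + r : l * r;
}

/* Bytecode: the same expression flattened, once, into a linear program
   that a simple stack machine can run over and over. */
enum { PUSH, ADD, MUL, HALT };

static int runBytecode(const int *code) {
    int stack[16], sp = 0;
    for (;;) {
        switch (*code++) {
            case PUSH: stack[sp++] = *code++;             break;
            case ADD:  sp--; stack[sp - 1] += stack[sp];  break;
            case MUL:  sp--; stack[sp - 1] *= stack[sp];  break;
            case HALT: return stack[sp - 1];
        }
    }
}

int main(void) {
    /* 2 * 3 + 4, first as a tree, then as bytecode. */
    Node two = {'n', 2, NULL, NULL}, three = {'n', 3, NULL, NULL}, four = {'n', 4, NULL, NULL};
    Node mul = {'*', 0, &two, &three};
    Node add = {'+', 0, &mul, &four};
    int code[] = { PUSH, 2, PUSH, 3, MUL, PUSH, 4, ADD, HALT };

    printf("tree: %d  bytecode: %d\n", evalTree(&add), runBytecode(code));
    return 0;
}

A bytecode (or native) compiler pays the scanning, parsing, and flattening cost once, up front; a pure interpreter like Ruby 1.8 re-scans and re-parses the source and then walks the resulting tree on every single run.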

Incidentally, this is why I react so vehemently when people talk about “Lisp interpreters.” Lisp is almost never interpreted, specifically to avoid these kinds of performance issues. At the very least, most Lisp systems compile to a custom bytecode and then use an optimized bytecode engine to execute it. That way they can eliminate the scanning and parsing overhead — yes, it does exist for Lisp, because contrary to what Lispers may say, the language does have syntax — while still staying portable.

Others have also been piling on, such as Avi Bryant and Obie Fernandez. As Obie points out, Avi knows what he’s talking about. And so do folks who work with Objective-C intimately, day in and day out.

Code Width

There’s a question about wrapping code at 80 columns on the Joel on Software Forum. Most of the time, when I see 80 columns as a standard, it’s there to make the code easy to print. However, in this day and age, why are you printing code? I occasionally print code when I need to read it in depth, but otherwise I do my reading on-screen; printing certainly isn’t a regular occurrence for me.

I also use mostly large or widescreen displays for coding. The smallest display I use regularly is my laptop, which is 1152 by 768. Most of the time I use my 19-inch monitor, which I run at 1600 by 1200. As a result, if I want to write really long lines, I can. But I don’t.

I generally limit the length of lines I write to 80 to 120 characters. Most of the time this is easy, because Objective-C’s syntax lends itself to very readable multi-line expressions:

NSDictionary *d = [NSDictionary dictionaryWithObject:@"Chris"
                                              forKey:@"name"];

The main reason I do it, though, is that I use multiple windows while I code. I usually have at least six windows open in Xcode: the project window itself, the source and header file for the class I’m working on, the source and header file for the unit test I’m working on, and the build results window (which also shows my unit test results). Often it’s many more, with additional class and unit test sources & headers open. And it’s much easier to fit all of these windows on a screen when they don’t need to be as wide as the screen for their contents to be visible…

Joel Spolsky’s Bionic Office

In the latest installment of Joel on Software, Joel talks about his new office.

I like a lot of what he’s done with it. It’s largely the kind of space I’d design. I might or might not have private offices, now that I’m starting to buy into XP more, but I definitely like a lot of what I see.

This amused me, though:

> The monthly rent for our offices, when fully occupied, will run
> about $700 per employee. The build-out was done on budget and paid
> for almost entirely by the landlord. I suspect that $700 per person
> is on the high side for software developers throughout the world,
> but if it means we can hire from the 99.9 percentile instead of the
> 99 percentile, it’ll be worth it.

I strongly expect it’s actually on the low side, at least in large corporations.

Joel says each employee is getting about 425 rentable square feet, at a cost of $700 per month. That’s about $20 per square foot per year ($700 × 12 ÷ 425 ≈ $19.75). That’s low for Class A office space, at least in some markets. Like, say, Schaumburg.

(Though with 53,255 square feet available in Zurich II, and a couple of newly-built office towers standing vacant, and a bunch of other Class A space vacant, perhaps Schaumburg rents have come down since I last looked in 2001…)

I knew it wasn’t all puppies and rainbows!

[Joel on Software][1], “[Local Optimization, or, The Trouble with Dell][2]”

> Unfortunately, the dirty little secret about Dell is that all they have
> really done is push the pain of inventory up to their suppliers and down
> to their customers. Their suppliers end up building big warehouses right
> next to the Dell plants where they keep the inventory, which gets
> reflected in the cost of the goods that Dell consumes. And every time
> there’s a little hiccup in supplies, Dell customers just don’t get their
> products.

This isn’t the only problem with Dell. Their other big problem is that they’re constantly starting price wars, which keep driving down their margins further and further, which makes not only their products but all of their competitors’ products crappier and crappier. It also fuels the delusion that price is the only factor that customers use when selecting a supplier. It’s a race to the bottom that nobody wins.

[1]: http://www.joelonsoftware.com/
[2]: http://www.joelonsoftware.com/news/20030115.html

Joel still doesn’t get it

A couple days ago in [Joel on Software][3], Joel claimed that in order for it to make economic sense to develop a Macintosh product, you had to be able to sell *25 times as many* copies as you would a Windows product.

**Bullshit.**

First of all, you can’t just assume that the relative market sizes between the Macintosh and Windows are accurately represented by their market shares. This is partly because market share is a measurement of new computer sales rather than installed base, and partly because there are broad swaths of each market that *aren’t* in the market for your application.

Secondly, it presumes that it costs the same to develop and bring to market a Macintosh product as it does a Windows product. It doesn’t. It costs substantially less. The development tools on Mac OS X are the best on any platform, and they speed development significantly; very small teams can create high-end applications in very short timeframes. There is a far smaller test matrix when you’re dealing with Macintosh software, and within that matrix there are far fewer bizarre interactions. There is significantly less competition in the Macintosh market, so you don’t have to spend as much on marketing and promotion of your product. Consumers also don’t have to wade through nearly as much complete garbage to discover the good applications.

Finally, you have to consider post-sales support. The support load presented by Macintosh software is also far lower than for the typical Windows product. This means lower post-sales costs, which means you get to keep more of the revenue generated by the product.

All this adds up to an excellent ROI story for Mac OS X development. You may still have the potential for a higher return on a Windows product, but you’ll also have substantially higher costs, a longer development timeline, and correspondingly greater project risk. All of these factors need to be weighed before deciding whether it’s worth pursuing one platform or another – you can’t just do a couple of bogus back-of-the-envelope calculations and decide you need to sell 25 times as many units to make Macintosh development worthwhile.

[3]: http://www.joelonsoftware.com/news/20020910.html

Open Source Human Interface Design

Peter Trudelle, “Shall We Dance? Ten Lessons Learned from Netscape’s Flirtation with Open Source UI Development” (CHI 2002 – Getting to Know You):

> We wound up spending longer getting requirements in the form of bug reports, and doing design by accretion, backout and rework. Don’t let rushed coders drive things, design the product.

[via Joel on Software]

I don’t agree with Joel that it’s impossible to do good design with a geographically distributed team. I think it’s entirely possible for distributed collaboration to work. The big problem with Mozilla was too many cooks spoiling the broth; everyone thought they were a human interface expert, and everyone had different goals, and everyone pushed as hard as they possibly could to achieve their goals. A small, tightly focused team would have done just fine, even if they were distributed.

I really, honestly believe in the effectiveness of distributed collaboration, when it’s done properly. But trying to do human interface design in an open, ever-changing, and questionably-qualified group doesn’t sound like a recipe for success.

Joel Climbs Into the Trunk

Joel is smoking the good stuff! Or is he drinking the purple stuff?

Joel Spolsky of Joel on Software is actually claiming “.NET appears so far to be one of the most brilliant and productive development environments ever created.” He goes on to say “ASP.NET is as big a jump in productivity over ASP as Java is to C. Wow.”

Living inside Microsoft’s locked trunk for too long obviously ruins your sense of perspective. They’ve been steadily improving their tools over the years, but they started in such a truly awful state that what they have now seems like absolute luxury to Windows developers. If these people would bother to look outside their cramped quarters occasionally, they’d see that they really, honestly don’t have it very good at all.

Here’s a choice quote:

> I love the fact that you can use an ASP.NET calendar widget, which generates HTML that selects a date on a monthly calendar, and be confident that the “date” class you get back from that thing (System.DateTime, I believe) will be the exact same date class that the SQL Server classes expect. In the past you would not believe how much time we wasted doing things like reformatting dates for SQL statements, or converting COleDateTimes to someOtherKindOfDateTime.

I’d believe it, because I’ve seen the kind of crap you have to go through to get anything done with Microsoft’s development tools and programming interfaces. In fact, I’ve ported Macintosh software to Windows with them, and even tried to build Macintosh software with them (using their terrible Macintosh MFC SDK). It’s amazing Windows developers are ever able to get anything done.

Contrast that with OpenStep, WebObjects, and the Enterprise Objects Framework circa late 1995. There was one date class, NSGregorianDate. You didn’t have to write low-level code to access databases, so you didn’t have to worry about turning dates into strings you could embed in SQL statements. Instead, your Enterprise Objects – objects transparently backed by your database – could just have attributes containing NSGregorianDate objects, and The Right Thing would happen automatically.

As I read through this, I find more and more absolutely laughable comments. For instance:

> First, they had the world’s best language designer, the man who was responsible for 90% of the productivity gains in software development in the last 20 years: Anders Hejlsberg, who gave us Turbo Pascal (thank you!), Delphi (thank you!), WFC (nice try!) and now .NET (smacked the ball outta the park).

Yes, the creator of Turbo Pascal is responsible for all of our productivity gains in software development. Especially since, even though Joel credits him as a “language designer,” he didn’t really “design” Pascal or Object Pascal. Not, say, Alan Kay, who led the team that invented true Object-Oriented Programming at Xerox in the 1970s and popularized it in 1980 with Smalltalk-80. Then again, that’s 22 years ago – maybe it doesn’t count (since Joel said “20 years”), even though the industry-wide move to OOP didn’t really reach critical mass until the 1990s…

Perhaps with .NET, Microsoft is only 5 years behind where NeXT was in 1995 with OpenStep, WebObjects, and the Enterprise Objects Framework. But they’re still 5 years behind where NeXT was, and now that NeXT has funding – in the form of millions of Apple Macintosh users running Mac OS X – they’re starting to move ahead quickly. If they can bring back the Objective-C version of EOF for Cocoa, they’ll be way ahead of the game.

Joel gets it this time

In the latest installment of Joel on Software, Joel talks about the importance of design in software projects.

In short, a small amount of up-front design work can result in massive savings down the line. If you don’t do that design work, not only do you wind up paying for it later, you wind up paying for it many many times over. Everybody I know has seen it happen, yet for some reason people still spout bullshit lines like “We don’t have time to do a spec.”

This is a lot better than some of Joel’s past rants, wherein he insists that it’s never better, faster, or cheaper to scrap a codebase and start over than it is to refactor it. I suspect Joel has never actually worked on truly horrible code. I’ve seen firsthand how, if you can manage to combat the Second System Effect, rewriting a horrible codebase can actually be faster and cheaper, and can result in better software, than trying to refactor it into something manageable.

Funny thing: The rewrite was one of those projects where the power of a detailed, written specification was supremely evident.