Without knowing much about the implementation of Ruby, I would guess that the biggest issue is around late binding and especially duck typing, which prevents type inference or strong typing, which means that function calls will always be slow because you can never get something compiled down to the point where a function call is just a single CALL instruction (on x86)… you always have to be exploring the object, possibly even scanning a hash table to find the function you want to call.
In other words, “it’s always going to be slow because if it’s not strongly-typed it can’t be statically compiled to the most minimal possible instruction sequence!” Which is, simply put, a bullshit argument even if you ignore the fact that he said right up front it was bullshit.
There’s a lot that you can do to optimize dynamic dispatch. If you don’t believe me, take a look at the implementation of the Apple/NeXT Objective-C runtime,
objc4. Go ahead, it’s Open Source. (The link is for the version that’s part of Mac OS X 10.4.7 for Intel processors.) It implements full Smalltalk-style dynamic dispatch, the same type of dynamic dispatch that Ruby uses. And what’s more, Ridiculous Fish also wrote a great article on how dynamic dispatch is implemented in the
objc_msgSend() primitive, with particular attention focused on its performance characteristics!
No, it’s not message-based dynamic dispatch or “duck typing” (runtime polymorphism) that makes Ruby slow. It’s the fact that Ruby is a single-pass interpreted language. It’s not compiled to bytecode. It’s not compiled to native code. It’s scanned, parsed, and then immediately executed.
Imagine if your C compiler, or your Fortran compiler, or your Java compiler (or your Visual Basic compiler, for that matter) had to be invoked every time you ran your program. Imagine how slow that would be! That’s essentially what Ruby is doing, and that’s why it’s slow. Ruby 2.0 is planned to run on the YARV virtual machine, and there has also been work to compile Ruby code for LLVM. There’s nothing in Ruby’s nature that makes this a particularly difficult problem, especially since all of the issues of efficiently compiling dynamic languages with reflection and self-modification features were solved by Lisp in the 1960s and Smalltalk in the 1970s.
Incidentally, this is why I react so vehemently when people talk about “Lisp interpreters.” Lisp is almost never interpreted, specifically to avoid these types of performance issues. At the least, most Lisp systems compile to a custom bytecode and then use an optimized bytecode engine to execute it. That way they can eliminate the scanning and parsing overhead (yes, it does exist for Lisp, because contrary to what Lispers may say, the language does have syntax) while still staying portable.
Others have also been piling on, such as Avi Bryant and Obie Fernandez. As Obie points out, Avi knows what he’s talking about. And so do folks who work with Objective-C intimately, day in and day out.