It's Moose vs MooseX::Declare in a speed showdown!!
This benchmark uses MooseX::App::Cmd to create some simple command-line apps, then uses Dumbbench to run them over and over to see which one is fastest. Well, actually we already know that plain Moose is going to win. The real question is: is it enough faster to be significant amongst all the overhead of starting up Perl, running App::Cmd, loading the Moose guts, etc.?
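The harness itself is simple. Here's a rough sketch using Dumbbench's API — the script names, command name, and precision settings below are my own invented placeholders, not the exact values from the benchmark:

```perl
use Dumbbench;
use Dumbbench::Instance::Cmd;

my $bench = Dumbbench->new(
    target_rel_precision => 0.005,      # keep running until we're within 0.5%
    initial_runs         => 20,         # minimum number of timings per command
);

# each instance spawns a fresh perl, so compile-time dominates the timings
$bench->add_instances(
    Dumbbench::Instance::Cmd->new(
        name    => 'MXD',
        command => [qw( perl bin/mxd_app frobnicate )],
    ),
    Dumbbench::Instance::Cmd->new(
        name    => 'NoMXD',
        command => [qw( perl bin/nomxd_app frobnicate )],
    ),
);

$bench->run;
$bench->report;
```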
A few notes:
- In the results, "MXD" indicates a command that uses MooseX::Declare at every level (app, base command, and command subclass). "NoMXD" indicates a command that does not use MooseX::Declare at all.
- Since each run in our benchmark is spawning a shell, we're going to be comparing compile-time speed and run-time speed, but mostly compile-time (because our command doesn't do very much actual work). That's mainly what I wanted to get at with this benchmark.
- Importantly, MooseX::Method::Signatures is not used with MXD, because that's what actually kills MXD's run-time speed. Instead, Method::Signatures::Modifiers is used for the MXD version (as recommended in my first Method::Signatures talk). Method::Signatures is used with the NoMXD version, mainly to ensure that we compare apples to apples.
- You may wonder why I have the commands spit out output, but then the benchmark just throws that output away. Having the output makes it easy to test each command individually to make sure they work before I benchmark them. However, the benchmark is going to run the commands over and over, and I don't want to drown in all that output. Of course, I could go back and comment out all the output, but leaving it in means the commands are doing some actual work, which, again, makes for a better benchmark.
- If you're wondering why I've done anything else the way I've done it, it's because I was hacking code from an active project I'm working on, so it's similar to real working code. I cut most of the extraneous bits out, but I left some of the structure in to help this be a more realistic benchmark.
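To make the module mix concrete, here's roughly what the use lines look like in each variant. The class and method names below are invented placeholders, not the actual benchmark code:

```perl
# MXD version: Method::Signatures::Modifiers replaces the method keywords
# that MooseX::Declare would otherwise pull in from MooseX::Method::Signatures.
use MooseX::Declare;
use Method::Signatures::Modifiers;

class My::App::Command::frobnicate extends My::App::Command
{
    method execute ($opt, $args)
    {
        print "frobnicating\n";
    }
}

# NoMXD version: plain Moose plus Method::Signatures, for apples to apples.
package My::App::Command::frobnicate;
use Moose;
use Method::Signatures;
extends 'My::App::Command';

method execute ($opt, $args)
{
    print "frobnicating\n";
}
```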
Here are the results:
            Rate               MXD          NoMXD
MXD         1.3791+-0.0013/s   --           -1.3%
NoMXD       1.3969+-0.0014/s   1.29+-0.14%  --
As you can see, we're talking about a difference of less than 1.5%, so I feel pretty comfortable that no one's going to notice so small a slowdown. Certainly I feel like the added readability of my code is worth that tiny speed penalty.
The code I used for the benchmark is below. However, gists refuse to deal with directories, so, if you're trying to re-create this test, just make sure you replace any _ in a filename below with a /, and make directories as appropriate for those to land in.
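If it helps, here's one way to sketch that reconstruction in the shell. The flattened filename below is invented for illustration, not one of the actual gist files:

```shell
# hypothetical flattened filename from the gist; turn every _ into /
f="lib_MyApp_Command_commands.pm"
path="${f//_//}"                  # becomes lib/MyApp/Command/commands.pm
mkdir -p "$(dirname "$path")"     # create the directories for it to land in
echo "$path"
```

Then move each file to its reconstructed path before running the benchmark.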