Monthly Archives: March 2012

Charge for bad code?

So let’s throw out a quick idea that we have batted around a bit internally. What if Mighty Moose were to be free? What if we were only to charge you for two features inside of the software that are not needed if you are maintaining a decent code base?

The basic idea is if you write good code, you will never need these features and thus Mighty Moose will be free for you (We will even throw in support!). If however you need these two areas you will pay for them.

The two areas in question are the minimizer (the thing that figures out which tests to run) and the cascading + incremental compilation build providers. If your tests are fast and your builds small, Mighty Moose is painless to use without these features. You would still get graphs, sequence diagrams, all the unit testing frameworks, Gary, etc. It's just that it would build your whole solution and run "all" your tests (you could still use ignore + ignore categories, etc.).

We could look at this as being similar to a cigarette tax (cost shifting). We provide people with good code the tool for free by charging the people who need the features that help them in bad situations. There is also a large support cost difference we have measured (quite similar to smokers using up more medical resources). We would be helping push you towards doing the right thing. We would of course also provide you with guidance on how to fix these issues in your system. If, however, you choose to take the easy way out and have the tool alleviate your pain, there would be a cost associated with that decision.

From a managerial perspective we are also giving you a bit of ammunition. I can make the case very easily as to why these two problems are very expensive within an organization. This sets a cost on one solution, which can then be used in further decision making. From an options pricing perspective, we are pricing the option of not resolving the issues now.

I understand that this is a fairly unusual idea and may seem odd to some people at first, so I wanted to drop it up here as an RFC to get some feedback.

How to make your build faster

[Screenshot: survey results]
I did an informal survey a while ago and the results were frankly abysmal. I cannot understand how people work in such environments. I am ADD; if I have a feedback cycle of more than a few seconds, I have already hit ctrl+right arrow and am reading my email or sitting in Twitter. This however has a very serious effect on my work. I am constantly pushing and popping my mental stack of what was going on (more often than not having cache misses as well). The sample size was pretty decent as well, with over 700 respondents.

This post is going to look at some ways we can make our build times faster. There are loads of ways aside from buying SSDs. I work in a VM on a non-SSD drive on a laptop and rarely see a build over 5 seconds, mostly because I focus very heavily on not having solutions with 20,000 projects in them, but there are some other ways as well. Most codebases I see in "enterprise" environments should be split into more than one solution.

Project Structures

To start with we can use smarter project layouts. Patrick Smacchia has some good posts on it:

Advices on partitioning code through .NET assemblies

Hints on how to componentize existing code

Keeping well-factored projects can go a long way, and building one larger project will be faster than building many smaller ones. Why? As one example, many small projects have a tendency to reuse many of the same references, which need to be loaded repeatedly, causing far more I/O.

Parallel Builds

These are good reading to start, but there are many more options available. Let's start with one of the comments, from Ben: "I think unless you go down the parallel build/test route (like NCrunch) then this issue of build times is not going to go away."

What a great selling point. Luckily for builds everyone can already do this. Did you know that Visual Studio and MSBuild already do parallel builds? All you have to do is drop into your configuration and you will see an options screen like the one here

[Screenshot: Visual Studio parallel build options]

Put in the max number of cores that you want. Working with MSBuild? Just use /maxcpucount:4. Of course this is still rather limited: if project 1 references project 2, which references project 3, you are in a serial situation anyway. The maxcpucount value represents the number of projects that can be built concurrently (provided they have no dependencies on each other). Both MM and NCrunch support this option as well. This can make your builds quicker in some scenarios, though I tend to see only about a 20-25% improvement on average.
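To make the dependency limit concrete, here is a small illustrative sketch (in Python for brevity, and a simplification rather than MSBuild's actual scheduler) of how reference chains force projects into sequential "waves" no matter how many cores you allow:

```python
# Illustrative sketch: a project can only start building once everything it
# references has finished, so the build proceeds in "waves". A reference
# chain serializes the build even with a high cpu count.

def build_waves(deps):
    """deps maps each project to the set of projects it references."""
    waves = []
    done = set()
    while len(done) < len(deps):
        # Everything whose references are already built can go in parallel.
        ready = {p for p in deps if p not in done and deps[p] <= done}
        waves.append(sorted(ready))
        done |= ready
    return waves

# Three projects in a reference chain build serially, whatever maxcpucount is:
print(build_waves({"P1": {"P2"}, "P2": {"P3"}, "P3": set()}))
# Three independent projects build in one parallel wave:
print(build_waves({"A": set(), "B": set(), "C": set()}))
```

The number of waves is the critical path of the reference graph, which is why flatter solution layouts parallelize better.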

Mighty Moose

We did a quick survey of Mighty Moose users about a week ago, and I was amazed to see that very few people were using this feature (about 10%, with a survey size of roughly 100-150 people). Mighty Moose has a smart build system inside of it that is actually much better than what MSBuild can do, because it has more information. I know .NET Demon from Red Gate has something similar.

Basically, since I know what is changing and I know how that affects the dependency graph, I can be much more intelligent with my builds. I also have further knowledge through static analysis about what the effects of that change were (are public contracts changed, etc.?). Of course it's a bit more complicated than this in practice (lots of edge cases), but it can make a huge difference in the build times for the typical builds in your TDD cycle.
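As a rough illustration of the idea (a hypothetical sketch, not Mighty Moose's actual implementation), a change only forces rebuilds of the projects downstream of it in the dependency graph, and even then only when the change is visible to dependents:

```python
# Hypothetical sketch: given a reverse dependency graph, a change forces a
# rebuild of the changed project, and of its dependents only when the change
# is visible to them (e.g. a public contract changed).

def projects_to_rebuild(changed, dependents, public_contract_changed):
    """dependents maps a project to the projects that reference it."""
    to_build = {changed}
    if not public_contract_changed:
        return to_build  # internal change: dependents can keep their binaries
    queue = [changed]
    while queue:
        p = queue.pop()
        for d in dependents.get(p, ()):
            if d not in to_build:
                to_build.add(d)
                queue.append(d)
    return to_build

graph = {"Core": {"App", "Tests"}, "App": {"Tests"}, "Tests": set()}
print(sorted(projects_to_rebuild("Core", graph, public_contract_changed=True)))
print(sorted(projects_to_rebuild("Core", graph, public_contract_changed=False)))
```

A solution-level build rebuilds everything regardless; the win here comes from pruning both untouched subtrees and contract-invisible changes.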

[youtube: 05Z5sPrGEeo]

You can see the difference in the video above. MSBuild on an incremental build in AutoTest.Net: about 4.5 seconds. MM for the same build, plus locating the tests to run, plus running the tests: about 2.5-3 seconds. Much bigger solutions will see far more gain (what's also odd is that the opposite advice applies with the MM build system: more projects tend to be faster than few, so long as they are intelligently laid out and not a dependency nightmare). AutoTest.Net itself is not a tiny piece of code either; it has a reasonable amount of code in it, so even the incremental build is doing pretty well when you consider my dev machine is a VM on a non-SSD drive. MM can do better because we have more information than MSBuild does, as we stay running while it runs and then dies. You can enable the setting on the first screen in config (Build Setup), shown here.

[Screenshot: Build Setup configuration screen]

Some people have asked: if it's so much faster, why do we make you opt in to using it? There are a few reasons for this. The largest is compatibility. Most people do not generally build their projects directly, and often have reasons why they don't actually build successfully. Building the solution gives the highest level of compatibility with existing VS projects. As such we make you opt in for it, but it's worth opting in for!

Contest Winner

OK, so there is a winner to the naming contest (though I liked quite a few of the entries). My first pick would have been NowNow (a South African expression that I remember quite well; it's right up there with "Yes, no"). However, it was submitted too late, and although I have the god-like powers of changing the rules to the contest, I will keep with what I wrote.

My second favourite was Valdemar, because there is a great story behind it about the Danish king. However, he turns out to be not such a savoury character when you look back at history. Then again, to the victors goes the ability to write propaganda history.


The winner was one of the simplest answers: "later". This describes exactly what the little project is doing. I will hopefully finish it up pretty quickly and get a demo posted up. So the phone will go to GK. Even though the screen is cracked it still works 🙂 and it has a 3D camera that sort of works when you view pictures on the cracked 3D screen. GK, drop me a line and we will figure out how to get it to you. I can even leave some drum and bass + dubstep on it for you, as it makes a great music player (and with airplane mode the battery life is actually pretty good).

Hopefully you guys will all get to see “later” some time next week.

Naming Contest! Free Phone

I mentioned a few weeks ago that I had an extra phone. I am working on a skunkworks project and am a bit stumped on a good name.

[Photo: HTC Evo 3D GSM]

So I will give the phone (I know, it's super nice, eh!) to whoever can come up with a good name for the project.

Some background (without going too far):

The project is about delaying things until later. When I think about it, I think about the "No problem car wash" in Jamaica (it's where you drive your car into the river and they wash it for you).

Oh, and if I pick your name (submissions must be in by 2000 GMT+2), the phone is yours!

Legal disclaimer: I am the sole judge of the contest. This contest is in no way just to get rid of my broken phone; it is for a valuable prize. I retain the rights to change the rules as I see fit, or just to end the contest if I don't like the stupid hat you are wearing, don't fancy the cut of your jib, or for whatever reason I may deem necessary. What I say goes. If you don't like that, don't enter.

Code Coverage [2]

Yesterday I wrote about some of the issues I find with code coverage being shown in a UI. More often than not, displaying code coverage leads to a false sense of security. We have made a conscious decision not to show line-by-line code coverage in Mighty Moose and instead have taken a different path.

Let’s go through a quick example from yesterday in the “blank slate” path.

public void can_multiply_by_0() {
    Assert.AreEqual(0, Multiply(5, 0));
}

int Multiply(int a, int b) {
    return 0;
}
Simplest possible thing. Now let's add another test.

public void correctly_multiplies_two_numbers() {
    Assert.AreEqual(6, Multiply(2, 3));
}

It fails. When I change the code to

int Multiply(int a, int b) {
    return a * b;
}

It says not covered, then after running it says covered. Does this mean my test was good? Would I see a visual difference if my test had been

public void correctly_multiplies_two_numbers() {
}

They would both just mark the line as being green.

This situation works quite differently in Mighty Moose. Mighty Moose does not show line-by-line coverage. Instead it shows you method-level coverage in the margin (gutter). When you add the second test, you will see the number in the margin go up by one. The number in the margin is the number of tests covering your method at runtime; in other words, you can see the coverage occur as you are working. You can see this process in this video. With just line-by-line coverage, as discussed in the last post, you would not see that the new test actually covers the method.
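The idea behind the margin number can be sketched in a few lines (a hypothetical sketch in Python, not MM's actual code): record which methods each test reaches at runtime, and the margin is simply the count of distinct tests that reached the method.

```python
# Hypothetical sketch of the margin number: track, per method, the distinct
# tests that reached it during a run.
from collections import defaultdict

tests_covering = defaultdict(set)  # method name -> set of covering tests

def record_test_run(test_name, methods_touched):
    """Called after a test run with the methods the profiler saw it hit."""
    for method in methods_touched:
        tests_covering[method].add(test_name)

record_test_run("can_multiply_by_0", ["Multiply"])
print(len(tests_covering["Multiply"]))  # margin shows 1

record_test_run("correctly_multiplies_two_numbers", ["Multiply"])
print(len(tests_covering["Multiply"]))  # margin ticks up to 2 as you work
```

Counting tests per method, rather than colouring lines, is what lets the number visibly change the moment a new test starts exercising the method.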

[youtube: hC8XP0LreG8]

Of course this does not allow you to see which lines are covered by those tests. It only tells you that those tests are covering the method in question. You need to understand what the tests are actually covering. This is by design. A common question I get about this is "well, how could I know what code the tests are covering?" It's this thing we do occasionally as developers called "thinking". If your code is so complex that you can't figure this out by looking at the tests, you probably have bigger problems.

[Screenshot: risk margin indicator]

Going along with the number, there is also a colour in a circle around it. This represents a risk analysis MM is doing on your code (it's pretty naive right now, but it actually works surprisingly well, to me anyway). We may actually include line-by-line coverage in this metric shortly, but we still won't show you the line-by-line coverage. This is something that you can key off of to get a relative idea of safety. It does not pre-empt your responsibility to actually look at tests before you start, say, refactoring; it is just something to give you an idea of your comfort level.

These "risk margins" are very important because I tend to find two common situations: either this thing is very poorly tested, or it is tested pretty well. There are lots of things to improve the situations in the middle (code reviews and pairing are good strategies, as is running an occasional code coverage report and going over it with the developers on your team during a code review; really, I don't hate code coverage, just when it's used heavily in my IDE 🙂). The margins, however, give you a quick indicator of whether you are in a good or a bad situation.

The margins also tell you to go look at the graphs when you don't feel comfortable. This really helps with the other big problem of coverage: what on earth is that thing covering this, and how far away is it? Does it make a difference if something is 40 method calls away vs a unit test calling directly?

[Screenshot: test coverage graph]

You can see the tests (they are yellow; interfaces are blue) and the paths they take to cover this particular method. Graphs are one of the most powerful things in Mighty Moose; I was surprised to see via the analytics that not a lot of people were using them. You can also use your arrow keys inside the graph to navigate to any node in the graph (maybe you are refactoring and want to look at a test?).

The basic idea here is that simple code coverage is not enough. There is more involved with being comfortable than just coverage. Distance is important as is ensuring that the test actually does something.

As they say, to assume makes an ass out of u and me. Line-by-line code coverage has a tendency of giving us false security. The goal when putting this stuff together in MM was to assist you in identifying your situation and getting more knowledge as quickly as possible, not to give people a false sense of security. Even a green circle in the margin is just us saying this "seems" to have reasonable coverage. No tool as of today can tell you that this thing actually has reasonable coverage.

Code Coverage

One of our most frequently asked questions about Mighty Moose is why we do not do line-by-line code coverage. We have the capability of doing it; we already have an entire profiler implementation, and it would take only a few weeks to make sure sequence points are right. We choose not to do it.

I have a personal issue with code coverage. I don’t believe it offers much value either showing me information as I am typing or looking through reports. I also believe that there is a downside to using code coverage that most people do not consider.

Today I started espousing some of these thoughts on Twitter with Phillip Haydon, to whom I had promised this blog post a few weeks ago. He is one of the many people wanting line-by-line code coverage support built into Mighty Moose.

[Screenshot: Twitter discussion]

This is a very normal discussion that I have with people. Let's look at some of the usage scenarios here. There are mainly three: the first is that I am writing new code going through normal TDD cycles on a blank slate, the second is that I am coming through and adding to existing code, and the last is that I am refactoring code.

Blank Slate

The first use case is the one most people see in demos (look at me, wow, I can do SuperMarketPricing with this super awesome tool :P). And of course code coverage looks very good here. You write your test. You go write some code. You see the code get covered. But was it covered by one test or more than one test? Let's try a simple example (yes, very simplified).

public void can_multiply_by_0() {
    Assert.AreEqual(0, Multiply(5, 0));
}

int Multiply(int a, int b) {
    return 0;
}
Simplest possible thing. Now let’s add another test.

public void correctly_multiplies_two_numbers() {
    Assert.AreEqual(6, Multiply(2, 3));
}

It fails. When I change the code to

int Multiply(int a, int b) {
    return a * b;
}

It says not covered, then after running it says covered. Does this mean my test was good? Would I see a visual difference if my test had been

public void correctly_multiplies_two_numbers() {
}

They would both just mark the line as being green. Basically I just got some eye-candy that made me feel good when it wasn't really doing anything for me. Maybe I could mouse over the eye-candy to get the count and list of tests, but do you actually do that? I am too busy on my next test.

Adding to Existing Code

When I am adding to existing code, it already has some test coverage. This is where test coverage is really supposed to shine, as I can see that the code I am changing has good test coverage.

Of course, do you trust the tests that are covering your code? Do you check that they are good tests and actually test what they are supposed to? Working mostly on teams, I find so many bad tests that I almost always look around to see what the tests are and what they are doing before I rely upon them as a safety net against breaking things. Hell, they could all be meaningless. And of course, as I said on Twitter, I find my past self to be quite guilty of having bogus tests occasionally. He is, just like my boss, a real !@#hole who makes it hard for me to do things now (yes, I work for myself).

Knowing that a test "covers" a line of code cannot remove the need to look around. And if I can avoid the need to look around, I probably already know I am in a high-coverage situation and am very familiar with the tests (so telling me this line is covered is not that valuable).


Refactoring

The last one here is refactoring. Here I should get a sense of security from looking at my code coverage that I can safely refactor this piece of code.

This should sound fairly similar to the issue above with adding to existing code: I still need to look around. The tests could be completely bogus. They could be a slew of integration tests coming through. They could be calling into the code yet never actually asserting anything relevant to the section of code they are covering. There are countless reasons why I need to look around.

To me, all of these scenarios add up to code coverage on its own being eye-candy that has a tendency of making me feel more secure than I really am. Bad tests happen. I don't want to give people a false sense of security. The fact that *something* covers this code is not all that valuable without knowing where that thing is, what its goal is, and how it relates to the code here.

Another issue that I have in general with code coverage is that I find (especially amongst relatively inexperienced developers) that people write tests to reach code coverage, not to write expressive tests. Even worse is a team that has made the asinine decision to have "100% code coverage for our whole system". Better make sure those autoproperties have tests, boys; those will be high value later! You may laugh, but I worked with a team who was up in arms over the fact that the closing brace after a return statement was not considered "covered" and was "messing up their otherwise perfect metric".

In the next post we will look at what was done in Mighty Moose instead of line-by-line code coverage.



Some people have asked that I drop up some occasional posts about places I go. I will only post them here, not to CodeBetter.

Last week we went to Nepal for a quick vacation. It is gorgeous; I highly recommend visiting. We went Kathmandu -> Chitwan -> Bhaktapur. We are definitely going back.

Nepal is definitely a third world country. I would recommend being a very experienced traveller if you intend to go "on the cheap" the way that we did (staying at guest houses, no real plans, etc.), but it's quite worth it if you can handle the not-so-good parts: no hot water, electricity being out for the better part of most days, seeing abject poverty.

So now, on with pictures! The first is Boudhanath in Kathmandu. This is an amazing landmark and worth visiting. It is a UNESCO World Heritage site as well.

In Chitwan you can take a safari on the back of an elephant.


You can also take a bath with the elephant after your safari


Cows are a pretty common sight in Nepal.


Lots of breathtaking scenery (this is just off the road between Kathmandu and Chitwan).


Beautiful sunsets


Amazing architecture (this is the main square in Bhaktapur)


You might even find a bit of interesting vegetation on the side of the road.


Again the infrastructure is not the best.


But this place is so worth going to.

Events and Generic Formats

There was some interesting discussion before I left for Nepal about event stores. The general question is: can you have a generic event log, similar to a transaction log in a database? A related question is: what is the difference between an event log and a transaction log?

Having an event log is not a new idea; it's been around for decades. Databases do something very similar inside their own transaction log. The major difference between an event log and a transaction log is that, by definition, an event log also captures intent. Consider the difference between:

RecordType: Customer
Id: 17
Name: Greg
Status: Normal


EventType: CustomerCreated
Id: 17
Name: Greg
Status: Normal

There are many semantic and linguistic differences between these two concepts. The first would be a transaction log and the second an event log. With a create these differences can be very subtle. Let’s try something less subtle.

RecordType: Customer
Status: Gold
Id: 17

EventType: CustomerPromotedToGoldStatus
Id: 17

Here intent is quite obviously different between the two messages. There could be a second event CustomerManuallyOverridenToGoldStatus which represents a manual override of our algorithm for dealing with customer promotions. A subscriber may care *why* the customer was promoted. This concept of intent (or “why”) is being represented in the event log.

This is an important concept: if two events can share the same transition, then by storing only the transition you are losing information.

Things get to be a bit hairy though, and this is where the discussion started falling apart. I wish I could have dropped in a longer response, but I was travelling at the time. Can't we model the first to be equivalent to the second? We see something similar in RESTful APIs.

RecordType: Customer
Action: AutomaticPromotion
Id: 17

YES, you can do this! This produces a record that captures the intent as well. In fact this is how my second event store worked. There are lots of reasons you may want to do this (such as the ability to use a generic state box on top in certain HA scenarios with the event store).

We can just consider this a different serialization mechanism. The key is that everything still maps directly back to a single event.
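As a sketch of that point (with hypothetical record and event names, and Python used purely for illustration), the generic serialization can be mapped back to the same intent-bearing event, so no information is lost:

```python
# Hypothetical sketch: an explicit event and a generic record that both
# capture the same intent. The names are made up for illustration. The key
# property is that the generic form deserializes back to exactly one
# business event.

explicit_event = {"EventType": "CustomerPromotedToGoldStatus", "Id": 17}

generic_record = {
    "RecordType": "Customer",
    "Action": "AutomaticPromotion",
    "Id": 17,
}

def to_event(record):
    """Map the generic serialization back to the business event."""
    intent_map = {
        ("Customer", "AutomaticPromotion"): "CustomerPromotedToGoldStatus",
    }
    event_type = intent_map[(record["RecordType"], record["Action"])]
    return {"EventType": event_type, "Id": record["Id"]}

# The intent survives the round trip:
print(to_event(generic_record) == explicit_event)  # True
```

Drop the Action field, though, and the mapping collapses: a manual override and an automatic promotion would serialize identically, and the "why" is gone.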

Now let's get back to that original question of "event log vs transaction log". An event log includes business-level intent; this is not needed with a transaction log. An event log is a stricter definition of a transaction log. I don't need to store intent in order to be a transaction log, though we can have a really interesting discussion about what the "business events" are in a database domain 🙂.

Is an event log a type of journal or transaction log? Yes. I like to think, though, that even if you use the generic update as in the third example above, it requires that you specify intent. Intent is a valuable thing. Can I build a transaction log that completely captures intent and does not lose information? Sure: think about a database with a "Transaction" table. I would say this is actually just a serialization mechanism with the intent of being an event log.

If I don’t store intent there are an entire series of questions I can no longer ask the data.

QCon London (and QCon background)

Tomorrow I will be one of the keynote speakers at QCon London. I am really looking forward to the conference.

I go quite a ways back with InfoQ and QCon. The first time I spoke about CQRS at a reasonably sized conference was at QCon San Francisco, in I believe 2007. I was petrified. I was to talk about using messaging to build systems that were considered non-applicable to Domain-Driven Design.

My front row included Martin Fowler, Eric Evans, and Gregor Hohpe. I had never even really talked with any of them before; scary people, their eyes can penetrate you 🙂 I had also not given very many talks at this point.

I must have had 7 cups of coffee before my talk. I met up with Aino (from TriFork, a regular at all the QCon/TriFork events) as she was my track host. She even commented that I was bouncing off the walls.

I will skip over the talk. Suffice to say that Eric (who is one of the nicest guys you will ever meet; really, it's hard to tease a disagreement out of him :)) came to me afterwards with some pros (and cons) about the talk. I have always liked that we say "pros and cons" and never the other way around; it makes it feel like it's an overall positive experience.

It wasn’t. The harshest words I believe were “I think I might have understood 20-30% of what you said which means everyone else was likely below that”. Ouch.

However, this became a chance to improve. I spent the next year doing user group talks, refining how I explained things, taking some workshops on speaking, etc. When I gave the talk a second time, people thought it was quite a good presentation.

It goes to show you that even what you may think is a bad talk may still have really valuable information that is just not being packaged or explained properly. This is an important realization both as a presenter and as an audience member. Often times asking the right questions at the end can help to crystallize concepts.

Fast-forward to 2012, and I will keynote QCon London, invited by Aino, whom I am sure I made sift through a pile of red cards only five years ago. I will try to avoid having seven cups of coffee before this talk.


So I introduced Gary last night. Gary lets you know when tests are slow. There are a metric crapload of new pieces of functionality in the release coming out today. Tooltip support over much of the system, giving information that we have had (such as execution times of code); this has also been put into sequence diagrams. The improved project-based system is also finally there (it's been there for a while, but better). From running it a bit, I would stay in Mighty Moose Mode; I just can't dig Maniac Moose, even though he has a cooler icon.

I have to say I love working on the last little bits of things, stabilizing and watching them come together. We still have some tricks up our sleeve as well but you will just have to wait to see those.

btw, Gary is not an official name; it's just what I call him.