
Mighty Moose Free

As some of you may know, Svein (@ackenpacken) and I have decided to make Mighty Moose free. This is not a decision that was taken lightly and much thought has gone into it. This post explains why we went free and what the future holds.

Almost two years ago now, sitting in the Dubliner in Oslo, we started a discussion with Einar about AutoTest.Net. Wouldn’t it be cool if AutoTest.Net could run only the tests that needed to be run when a piece of code changed? We looked at the market and there was no software that did it well (we actually found only one, Infinitest in the Java space, and at the time it was very easy to trick; I have not tried it lately).

We discussed the idea with quite a few people and were continually told that “it’s impossible”. We like “impossible problems”. After some quick spikes we even described it as “non trivial but not hard” and figured a prototype could be working in about a month (given the usual software +-50% “fluff”). I love that it is now no longer considered impossible but an expected behaviour of such tools.

I returned to Montreal and started working on the bit that did the selection. It was dubbed the “minimizer” and worked off of static analysis (we realized very early on that code coverage is not a good solution to the problem of which tests to run, as it describes the last run, which may or may not have anything to do with the current run). After one month there was a prototype, but unfortunately it ran way too many tests. Often it would produce results of 200 tests when you would really expect 8 to run. After another 4-6 months of work it started to give pretty reasonable results in most situations. Sounds like a classical software estimation process to me 🙂
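
To give a feel for the approach (a hypothetical sketch, not the actual minimizer), test selection by static analysis boils down to building a caller graph up front and then walking it backwards from whatever changed to the tests that can reach it:

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch (not the Mighty Moose minimizer): select tests
// by walking a statically-built caller graph backwards from changed methods.
class Minimizer
{
    // callers[m] = methods that directly call m (built up front by static analysis).
    private readonly Dictionary<string, HashSet<string>> _callers;
    private readonly HashSet<string> _testMethods;

    public Minimizer(Dictionary<string, HashSet<string>> callers, HashSet<string> testMethods)
    {
        _callers = callers;
        _testMethods = testMethods;
    }

    public IEnumerable<string> TestsAffectedBy(IEnumerable<string> changedMethods)
    {
        var affected = new HashSet<string>();
        var queue = new Queue<string>(changedMethods);
        while (queue.Count > 0)
        {
            var method = queue.Dequeue();
            if (!affected.Add(method)) continue;   // already visited
            HashSet<string> callers;
            if (_callers.TryGetValue(method, out callers))
                foreach (var caller in callers)
                    queue.Enqueue(caller);
        }
        return affected.Where(m => _testMethods.Contains(m));
    }
}

A naive transitive walk like this is exactly what over-selects (the 200-tests-instead-of-8 problem); most of the following months of work went into making the selection tighter.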

But enough reminiscing …

Mighty Moose is from this point forward free. Not “free in beta” or “free with a bunch of functionality turned off” but free (the license is being updated as I write this). We announced this rather quietly at NDC.

Many people have asked me “why would you go free? You could charge for what you have.” This is true. Continuous testing tools will be taking off, and yes, we could charge for one, but it’s not quite as simple as that.

To start with, support is generally included with a paid product such as a dev tool. Let’s try some really quick math. Say we were wildly successful the first year and got 2000 paying daily users (this would mean we would likely have at minimum double that in actual users, as you often give away free versions to user groups etc). This would suggest very deep penetration into the .NET TDD community (how many TDD’ers do you think there actually are?).

So 4000 users. What could we possibly charge for such a tool? USD $250 seems to be about the average our pricing research suggested. So in the first year we would have a top-line revenue of USD $500k. That sounds like a lot, but let’s start breaking it down further. For 4000 users, how many dedicated support people would be needed?

One interesting thing I have found is that very often a tool such as Mighty Moose (or any continuous testing tool) is the messenger of a problem, not the problem itself. This could be versioning issues with mSpec (I believe this is fixed now), never having run a redirected build before (this needs to be done as VS loves to lock files), or even just a new release of the CLR/unit testing framework/etc. If you look at the support surface of a continuous testing tool, it is build automation + unit test running + profiling/analysis. This is very wide considering the price of the tool.

Along with this, because dev tools are built to be very flexible, you will quite often need to do some level of configuration on an existing project to get things working properly (out of the box can only go so far; even if it works you likely need to optimize things). It is not hard to trick any such tool: just add a post-build step that manipulates your output, e.g. moves files around (every such post-build step would need custom integration written to understand it).

Even worse, the support people you have need to be not only developers but good developers. In other words the role is basically 50% support, 50% developing new features (the dev team does support). You could try regular support staff with front-line vs second-line support, but that will probably just annoy many users, though it could lower the blended cost of support (e.g. front line + dev team). It also annoys your developers, as they are now doing direct support.

In order to get around the support cost question you are left with three possible options. The first is a no-go from the start: support contracts. Developers don’t want to pay yearly support fees. The second is to reach high levels of growth, but the issue here is how many people actually write tests (let alone do TDD) in .NET. This brings us to the third model, the one most developer tool vendors use: release a new major version of the software once per year.

This “major version release” strategy basically allows people to “upgrade” to the newest version. Generally I will also sunset support with this model to force upgrades (or at least lower support costs for older users). This model has been quite successful for many companies but comes with its own drawbacks. The largest is that I as a company need to release a version of the software whether or not I can come up with things that are really of value to my customers. Often I will end up with a few really valuable things, but just listing those is not enough to get people to upgrade (think about a change log with 7 items on it…), so I start piling in crap to make a more impressive change log. This crap comes at a cost to the 80% of my users who don’t care about the particular feature, and I end up building “mega tool”. This pattern can be seen over and over again, whether we talk about Visual Studio itself, Microsoft Word, or any of the popular VS add-in products.

This actually leads to a second issue as well. While in beta we focused very heavily on features that only really interest the top 5% of developers (not a good base for a product; for a product we would be much more interested in the early/late majority areas of the s-curve). One question we get quite often is “how well will things work on my solution with 300 projects?”. Now that we are free I can honestly tell you “I don’t care; when you have a solution with 300 projects in it you have bigger problems to worry about than getting continuous testing working”. Flashy new tools are not the answer; you should stop trying to hide your pain and start doing root cause analysis.

If we were to go down the lines of a paid product we would basically be forced to stop innovating. Our business goals would not be well aligned with our personal goal of changing the way people code. Our bread and butter would be the early/late majorities, who want to use things in a much different way (they want slow incremental change) than the early adopters, and on much different looking code bases. An example of this is a common feature request for our graphs: “make them look better and come up faster when there are more than 1000 nodes to show” (they come up with very small nodes). Yes, we could do this (and spend 3 months of developer time on it), but then I am alleviating pain instead of dealing with its underlying cause (how on earth do you understand code that puts 1000 nodes in a graph? the graph will not solve that issue for you). This is also 3 months of developer time not spent on something actually useful.

I personally believe that the future is not in making tools that alleviate pain that many have, but in helping people identify the root causes of that pain and resolve them.

Continuous Testing

On my build times post, “Don” commented:

And you want to run your tests _every time_ you hit ctrl+s !?

Actually I have run into this viewpoint quite a bit. I see it most commonly from people who are not doing TDD. The reasoning seems to be that in the process of working on a feature they may save 20 times over a few hours and then try to build. The “save” mode is 100% for TDD workflows; if you are not doing TDD workflows then it probably is not for you. That said, after having this discussion many times, there are other modes that better match non-TDD workflows. Let’s go through all four of them.

Realtime / Saving modes

The realtime (as you type) and saving (when you save) modes are the defaults and are focused on people who want very short TDD feedback cycles. Both automate your build, figure out which tests to run, and run them once the build completes. If you type (or save) again over the top, existing runs are fully or partially cancelled depending on where they are in the run.
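
The cancellation behaviour is roughly this shape (a minimal sketch, not MM’s actual internals):

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch: when a new keystroke or save arrives, cancel whatever
// run is in flight and start a fresh build-and-test run.
class RunScheduler
{
    private CancellationTokenSource _current;

    public void OnChange(Func<CancellationToken, Task> buildAndTest)
    {
        if (_current != null) _current.Cancel();   // abandon the stale run
        _current = new CancellationTokenSource();
        var token = _current.Token;
        Task.Run(() => buildAndTest(token), token);
    }
}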

Auto

Auto does not automate your builds. Basically it waits for you to build in Visual Studio and, when the build succeeds, automates finding and running the necessary unit tests. This uses the build in VS as the “commit”, not the saving of a file.

Manual

The last mode available is manual mode. In this mode no automation will occur (except that we still watch your builds and keep our graphs updated). The software will not automate any builds or tests.

Many people have asked “Why does manual mode exist?”.

Manual mode actually has a fairly powerful workflow associated with it that may be beneficial in scenarios that are not good candidates for continuous testing. In this mode Mighty Moose works just like most manual test runners, with a small twist: there is a feature called “Run Related Tests” (ctrl+shift+y,r).

Run Related Tests is quite interesting for those who are not in a situation to benefit from continuous testing. Let’s try a quick walk through.

type:

[Test]
public void MyFirstTest() {
    Assert.AreEqual(3, Foo.Bar(3));
}

static class Foo {
    public static int Bar(int x) { throw new NotImplementedException(); }
}

Now hit ctrl+shift+y,u with the cursor inside the test. It will tell you that the test has run (and failed).

With the cursor on “Bar”, hit f12 (go to definition)… and replace the method body with

return 3;

From the current cursor location, ctrl+shift+y,r (run related tests). One test runs, one test passes. Imagining they are in different files, you can now use ctrl+f6 to get back to your tests. You can just keep flipping back and forth like this.

Charge for bad code?

So let’s throw out a quick idea that we have batted around a bit internally. What if Mighty Moose were free? What if we only charged you for two features inside the software that are not needed if you are maintaining a decent code base?

The basic idea: if you write good code, you will never need these features and thus Mighty Moose will be free for you (we will even throw in support!). If, however, you need these two areas, you will pay for them.

The two areas in question are the minimizer (the thing that figures out which tests to run) and the cascading + incremental compilation build providers. If your tests are fast and your builds small, Mighty Moose is painless to use without these features. You would still get graphs, sequence diagrams, all the unit testing frameworks, Gary, etc. It’s just that it would build your whole solution and run “all” your tests (you could still use ignores + ignore categories etc).

We could look at this as being similar to a cigarette tax (cost shifting). We provide people with good code the tool for free by charging people who need the features that help them in bad situations. There is also a large support cost difference we have measured (quite similar to smokers using up more medical resources). We would be helping push you towards doing the right thing, and we would of course also provide guidance about how to fix these issues in your system. If, however, you choose to take the easy way out and have the tool alleviate your pain, there would be a cost associated with that decision.

From a managerial perspective we are also giving you a bit of ammunition. I can make the case very easily for why these two problems are very expensive within an organization. This sets a cost on one solution, which can then be used in further decision making. From an options pricing perspective, we are pricing the option of not resolving the issues now.

I understand that this is a fairly unusual idea and may seem odd to some people at first, so I wanted to put it up here as an RFC to get some feedback.

How to make your build faster

[Screenshot: survey results, 2012-03-26 5:47 PM]
I did an informal survey a while ago and the results were frankly abysmal. I cannot understand how people work in such environments. I have ADD; if a feedback cycle takes more than a few seconds I have already hit ctrl+right arrow and am reading my email or sitting in twitter. This has a very serious effect on my work: I am constantly pushing and popping my mental stack of what was going on (more often than not having cache misses as well). The sample size was pretty decent as well, with over 700 respondents.

This post is going to look at some ways to make our build times faster. There are loads of ways aside from buying SSDs. I work in a VM on a non-SSD drive on a laptop and rarely see a build over 5 seconds, mostly because I focus very heavily on not having solutions with 20,000 projects in them, but there are some other ways as well. Most codebases I see in “enterprise” environments should have more than one solution.

Project Structures

To start with we can use smarter project layouts. Patrick Smacchia has some good posts on this:

Advices on partitioning code through .NET assemblies

Hints on how to componentize existing code

Keeping projects well factored can go a long way. Note that building one larger project will be faster than building many smaller ones (why? as an example, smaller projects have a tendency to reuse many of the same references, which then need to be loaded repeatedly, causing far more I/O).

Parallel Builds

These are good reading to start, but there are many more options available. Let’s start with one of the comments from Ben: “I think unless you go down the parallel build/test route (like NCrunch) then this issue of build times is not going to go away.”

What a great selling point. Luckily for builds everyone can already do this. Did you know that Visual Studio and MSBuild already do parallel builds? All you have to do is drop into your configuration and you will see an options screen like the one here

[Screenshot: Visual Studio build options, 2012-03-26 4:39 PM]

Put in the maximum number of cores that you want. Working with msbuild directly? Just use /maxcpucount:4. Of course this is still rather limiting: if project 1 references project 2 which references project 3, you are in a serial situation anyway. The maxcpucount represents the number of projects that can be built concurrently (i.e. when there are no dependencies between them). Both MM and NCrunch support this option as well. This can make your builds quicker in some scenarios, though I tend to see only about a 20-25% improvement on average.

Mighty Moose

We did a quick survey of Mighty Moose users about a week ago and I was amazed to see that very few people were using this feature (about 10% of a survey of roughly 100-150 people). Mighty Moose has a smart build system inside it that is actually much better than what msbuild can do, because it has more information. I know .NET Demon from Red Gate has something similar.

Basically, since I know what is changing and I know how that affects the dependency graph, I can be much more intelligent with my builds. I also have further knowledge, through static analysis, about what the effects of that change were (are public contracts changed, etc?). Of course it’s a bit more complicated than this in practice (lots of edge cases), but it can make a huge difference in build times for the typical builds in your TDD cycle.
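
As a rough illustration of the idea (hypothetical names, not the actual MM build system), the core decision is something like: walk the project dependency graph from the changed project, but only pull in dependents when the change could actually affect them:

using System.Collections.Generic;

// Hypothetical sketch of dependency-graph-driven rebuilds. If a change does
// not alter public contracts, dependent projects can be skipped entirely.
class IncrementalBuildPlanner
{
    // dependents[p] = projects that reference project p.
    private readonly Dictionary<string, List<string>> _dependents;

    public IncrementalBuildPlanner(Dictionary<string, List<string>> dependents)
    {
        _dependents = dependents;
    }

    public IEnumerable<string> ProjectsToBuild(string changedProject, bool publicContractsChanged)
    {
        var toBuild = new HashSet<string> { changedProject };
        if (!publicContractsChanged)
            return toBuild;               // internal change: rebuild only this project

        var queue = new Queue<string>(new[] { changedProject });
        while (queue.Count > 0)
        {
            var project = queue.Dequeue();
            List<string> dependents;
            if (!_dependents.TryGetValue(project, out dependents)) continue;
            foreach (var dependent in dependents)
                if (toBuild.Add(dependent))
                    queue.Enqueue(dependent);
        }
        return toBuild;
    }
}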

[youtube: 05Z5sPrGEeo]

You can see the difference in the video above. MSBuild on an incremental build of AutoTest.Net: about 4.5 seconds. MM for the same build + locating the tests to run + running them: about 2.5-3 seconds. Much bigger solutions will see far more gain (what’s also odd is that the opposite advice applies with the MM build system: more projects tend to be faster than few, so long as they are laid out intelligently and are not a dependency nightmare), though AutoTest.Net itself is not a tiny piece of code either. It has a reasonable amount of code in it, and even the incremental build is doing pretty well when you consider my dev machine is a VM on a non-SSD drive. MM can do better because we have more information than msbuild does, as we stay running rather than starting up and then dying. You can enable the setting on the first screen in the config (Build Setup), shown below.

[Screenshot: Build Setup configuration screen, 2012-03-26 6:17 PM]

Some people have asked: if it’s so much faster, why do we make you opt in to using it? There are a few reasons. The largest is compatibility. Most people do not generally build their projects directly and often have reasons why they don’t actually build successfully. Building the solution gives the highest level of compatibility with existing VS projects. As such we make you opt in, but it’s worth opting in for!

Code Coverage [2]

Yesterday I wrote about some of the issues I find with code coverage being shown in a UI. More often than not, displaying code coverage leads to a false sense of security. We have made a conscious decision not to show line-by-line code coverage in Mighty Moose and have instead taken a different path.

Let’s go through a quick example from yesterday in the “blank slate” path.

[Test]
public void can_multiply_by_0() {
    Assert.AreEqual(0, Multiply(5,0));
}

int Multiply(int a, int b) {
    return 0;
}

Simplest possible thing. Now let’s add another test.

[Test]
public void correctly_multiplies_two_numbers() {
    Assert.AreEqual(6, Multiply(2,3));
}

It fails. When I change the code to

int Multiply(int a, int b) {
    return a * b;
}

it says not covered, and then after running it says covered. Does this mean my test was good? Would I see a visual difference if my test had been

[Test]
public void correctly_multiplies_two_numbers() {
    Multiply(2,3);
    Assert.AreEqual(1,1);
}

They would both just mark the line as being green.

This situation works quite differently with Mighty Moose. Mighty Moose does not show line-by-line coverage; instead it shows you method-level coverage in the margin (gutter). When you add the second test you will see the number in the margin go up by one. The number in the margin is the number of tests covering your method at runtime. In other words, you can see the coverage happen as you work. You can see this process in the video below. With just line-by-line coverage, as discussed in the last post, you would not see that the new test actually covers the method.

[youtube: hC8XP0LreG8]
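
Conceptually, the margin number is just a count of distinct tests observed entering a method while the tests run. A minimal sketch of that bookkeeping (hypothetical, not MM’s profiler code):

using System.Collections.Generic;

// Hypothetical sketch: remember which tests were observed entering which
// methods at runtime, so the margin can show a per-method test count.
class MethodCoverageIndex
{
    private readonly Dictionary<string, HashSet<string>> _testsByMethod =
        new Dictionary<string, HashSet<string>>();

    // Called whenever the runner observes a test's execution entering a method.
    public void RecordEntry(string method, string test)
    {
        HashSet<string> tests;
        if (!_testsByMethod.TryGetValue(method, out tests))
            _testsByMethod[method] = tests = new HashSet<string>();
        tests.Add(test);
    }

    // The number shown in the margin for a given method.
    public int CoveringTestCount(string method)
    {
        HashSet<string> tests;
        return _testsByMethod.TryGetValue(method, out tests) ? tests.Count : 0;
    }
}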

Of course this does not let you see which lines are covered by those tests; it only tells you that those tests cover the method in question. You need to understand what the tests are actually covering, and this is by design. A common question I get is “well, how could I know what code the tests are covering?”. It’s this thing we do occasionally as developers called “thinking”. If your code is so complex that you can’t figure this out by looking at the tests, you probably have bigger problems.

[Screenshot: coverage margin in the editor, 2012-03-22 1:17 PM]

Going along with the number, there is also a colour in a circle around it. This represents a risk analysis MM is doing on your code (it’s pretty naive right now but actually works surprisingly well, to me anyways). We may include line-by-line coverage in this metric shortly, but we still won’t show you the line-by-line coverage. This is something you can key off of to get a relative idea of safety. It does not preempt your responsibility to actually look at the tests before you start, say, refactoring; it is just something to give you an idea of your comfort level.
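
The inputs to that risk analysis aren’t spelled out here, so purely as an illustration, a naive scoring might combine the covering-test count with how far away those tests are (the thresholds below are entirely made up):

// Purely hypothetical thresholds: map a method's covering-test count (and,
// perhaps, the shortest call distance to a covering test) to a traffic-light
// risk level for the margin.
enum RiskLevel { Red, Yellow, Green }

static class RiskMargin
{
    public static RiskLevel For(int coveringTests, int shortestTestDistance)
    {
        if (coveringTests == 0) return RiskLevel.Red;
        if (coveringTests < 2 || shortestTestDistance > 10) return RiskLevel.Yellow;
        return RiskLevel.Green;
    }
}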

These “risk margins” are very important because I tend to find two common situations: either the thing is very poorly tested or it is tested pretty well. There are lots of things to improve the situations in the middle (code reviews and pairing are good strategies, as is running an occasional code coverage report and going over it with the developers on your team during a code review; really, I don’t hate code coverage, just its heavy use in my IDE 🙂). The margins, however, give you a quick indicator of whether you are in a good or a bad situation.

The margins also tell you to go look at the graphs when you don’t feel comfortable. This really helps with the other big problem of coverage: what on earth is that thing covering this, and how far away is it? Does it make a difference if something is 40 method calls away vs a unit test calling directly?

[Screenshot: coverage graph, 2012-03-22 1:22 PM]

You can see the tests (they are yellow; interfaces are blue) and the paths they take to cover this particular method. Graphs are one of the most powerful things in Mighty Moose; I was surprised to see in the analytics that not a lot of people are using them. You can also use your arrow keys inside the graph to navigate to any node in it (maybe you are refactoring and want to look at a test?).

The basic idea here is that simple code coverage is not enough. There is more involved in being comfortable than just coverage. Distance is important, as is ensuring that the test actually does something.

As they say, to assume makes an ass out of u and me. Line-by-line code coverage has a tendency to give us false security. The goal when putting this stuff together in MM was to assist you in identifying your situation and getting more knowledge as quickly as possible, not to give people a false sense of security. Even a green circle in the margin is just us saying this “seems” to have reasonable coverage. No tool today can tell you that something actually has reasonable coverage.

Code Coverage

One of our most frequently asked questions about Mighty Moose is why we do not do line-by-line code coverage. We have the capability of doing it (it would take a few weeks to make sure sequence points are right, and we already have an entire profiler implementation). We choose not to do it.

I have a personal issue with code coverage. I don’t believe it offers much value, whether showing me information as I am typing or in reports. I also believe there is a downside to using code coverage that most people do not consider.

Today I started espousing some of these thoughts on twitter with Phillip Haydon, to whom I had promised this blog post a few weeks ago. He is one of the many people wanting line-by-line code coverage support built into Mighty Moose.

[Screenshot: twitter conversation, 2012-03-21 3:28 PM]

This is a very normal discussion for me to have with people. Let’s look at some of the usage scenarios here. There are mainly three. The first is writing new code, going through normal TDD cycles on a blank slate; the second is coming through and adding to existing code; and the last is refactoring code.

Blank Slate

The first use case is the one most people see in demos (look at me, wow, I can do SuperMarketPricing with this super awesome tool :P). And of course code coverage looks very good here. You write your test. You go write some code. You see the code get covered. But was it covered by one test or more than one test? Let’s try a simple example (yes, very simplified).

[Test]
public void can_multiply_by_0() {
    Assert.AreEqual(0, Multiply(5,0));
}

int Multiply(int a, int b) {
    return 0;
}

Simplest possible thing. Now let’s add another test.

[Test]
public void correctly_multiplies_two_numbers() {
    Assert.AreEqual(6, Multiply(2,3));
}

It fails. When I change the code to

int Multiply(int a, int b) {
    return a * b;
}

it says not covered, and then after running it says covered. Does this mean my test was good? Would I see a visual difference if my test had been

[Test]
public void correctly_multiplies_two_numbers() {
    Multiply(2,3);
    Assert.AreEqual(1,1);
}

They would both just mark the line as green. Basically I just got some eye-candy that made me feel good when it wasn’t really doing anything for me. Maybe I could mouse over the eye-candy to get the count and list of tests, but do you actually do that? I am too busy on my next test.

Adding to Existing Code

When I am adding to existing code, it already has some test coverage. This is where test coverage is really supposed to shine, as I can see that the code I am changing has good test coverage.

Of course, do you trust the tests that are covering your code? Do you check that they are good tests and actually test what they are supposed to? Working mostly on teams, I find so many bad tests that I almost always look around to see what the tests are and what they are doing before I rely on them as a safety net against breaking things. Hell, they could all be meaningless. And of course, as I said on twitter, I find my past self to be quite guilty of having bogus tests occasionally. He is just like my boss, a real !@#hole who makes it hard for me to do things now (yes, I work for myself).

Knowing that a test “covers” a line of code does not remove the need to look around. If I can avoid the need to look around, I probably already know I am in a high-coverage situation and am very familiar with the tests (so telling me this line is covered is not that valuable).

Refactoring

The last one is refactoring. Here I should supposedly get a sense of security from my code coverage that I can safely refactor this piece of code.

This should sound fairly similar to the issue above about adding to existing code: I still need to look around. The tests could be completely bogus. They could be a slew of integration tests coming through. They could be calling into the code yet never actually asserting anything relevant to the section of code they are covering. There are countless reasons why I need to look around.

To me all of these scenarios add up to code coverage on its own being eye-candy that has a tendency to make me feel more secure than I really am. Bad tests happen. I don’t want to give people a false sense of security. The fact that *something* covers this code is not all that valuable without knowing where that thing is, what its goal is, and how it relates to here.

Another issue that I have in general with code coverage is that I find (especially among relatively inexperienced developers) that people write tests to reach a code coverage number, not to write expressive tests. Even worse is a team that has made the asinine decision to have “100% code coverage for our whole system”. Better make sure those autoproperties have tests, boys; those will be high value later! You may laugh, but I worked with a team who was up in arms over the fact that the closing brace after a return statement was not considered “covered” and was “messing up their otherwise perfect metric”.

In the next post we will look at what was done in Mighty Moose instead of line by line code coverage.

Gary

So I introduced Gary last night. Gary lets you know when tests are slow. There are a metric crapload of new pieces of functionality in the release coming out today: tooltip support over much of the system, surfacing information we already have (such as execution times of code), which has also been put into the sequence diagrams; and the improved project-based system is finally there (it has been there for a while, but better). From running it a bit, I would stay in Mighty Moose mode. I just can’t dig Maniac Moose even though he has a cooler icon.

I have to say I love working on the last little bits of things, stabilizing and watching them come together. We still have some tricks up our sleeve as well but you will just have to wait to see those.

btw Gary is not an official name, it’s just what I call him.

Application Analytics

Mighty Moose is nearing release. One of the new things we have built into it is some user tracking. The tracking is extremely benign (we are tracking various usage scenarios of how people use the software). This feature was implemented in about two hours, so I want to discuss the tracking a bit and how we implemented it, because it’s a pretty cool way of doing things that many other applications could get value from. So yes, keep reading, as there is technical stuff on how it’s done as well 🙂

To begin with, we do not track who you are. We only track certain events that happen in the software, such as bringing up a graph or failing a build. No user information is stored or correlated. As of this point there is no software switch to turn off the tracking. We are a small group in beta and we will add the ability to turn it off in the future. If you don’t want tracking, don’t upgrade to the newest version (we will make this clear in the release notes as well).

Now how did we add all sorts of analytics to Mighty Moose in just a few hours? Well, in Good Enough Software fashion we leveraged something existing: Google Analytics. Basically we made Mighty Moose pretend to be a web browser. It sends information directly to Google’s tracking pixels. Doing this allows us to look at users the same way you would for a web site. We made a whole series of “urls” that represent things happening in the software. After implementing it, I saw that some people have done this in mobile apps, but I had never previously seen anyone do it in a desktop app, so the approach is fairly interesting. Just because I have not seen someone doing it does not mean that nobody has, but it does not seem to be very popular.

Code is included at the end of the post, but basically we just push pixel hits to urls like continuoustests.com/events/BuildCompleted. From there we can easily jump into the real-time view (or analyze historical data; the flows are particularly interesting to us!).

[Screenshot: Google Analytics real-time view, 2012-03-04 5:38 AM]

Again, this is very simple to add to your application and you can get quite good analytics out of it. We spent all of two hours doing it. How do users use your app?

Code: just use SendEvent… Feel free to change it however you see fit; much of it is adapted from Google’s server-side ASPX examples. One note: this code is not called very often in MM, so the ThreadPool.QueueUserWorkItem is probably good enough. If you fired a lot of events you would probably want a queue plus a thread that reads from it (and probably detection of failures 😀 so you can stop sending); see the sketch after the code listing below.

If you prefer gist here you go https://gist.github.com/1972785


using System;
using System.Net;
using System.Text.RegularExpressions;
using System.Threading;

public class Analytics
{
    // Tracker version.
    private const string Version = "4.4sa";

    private const string CookieName = "__utmmobile";

    // The path the cookie will be available to, edit this to use a different
    // cookie path.
    private const string CookiePath = "/";

    // Two years in seconds.
    private readonly TimeSpan CookieUserPersistence = TimeSpan.FromSeconds(63072000);

    // 1x1 transparent GIF
    private readonly byte[] GifData = {
        0x47, 0x49, 0x46, 0x38, 0x39, 0x61,
        0x01, 0x00, 0x01, 0x00, 0x80, 0xff,
        0x00, 0xff, 0xff, 0xff, 0x00, 0x00,
        0x00, 0x2c, 0x00, 0x00, 0x00, 0x00,
        0x01, 0x00, 0x01, 0x00, 0x00, 0x02,
        0x02, 0x44, 0x01, 0x00, 0x3b
    };

    private static readonly Regex IpAddressMatcher =
        new Regex(@"^([^.]+\.[^.]+\.[^.]+\.).*");

    // A string is empty in our terms if it is null, empty or a dash.
    private static bool IsEmpty(string input)
    {
        return input == null || "-" == input || "" == input;
    }

    // Get a random number string.
    private static String GetRandomNumber()
    {
        Random RandomClass = new Random();
        return RandomClass.Next(0x7fffffff).ToString();
    }

    // Make a tracking request to Google Analytics from this server.
    // Copies the headers from the original request to the new one.
    // If the request contains the utmdebug parameter, exceptions encountered
    // communicating with Google Analytics are thrown.
    private static void SendRequestToGoogleAnalytics(string utmUrl)
    {
        try
        {
            WebRequest connection = WebRequest.Create(utmUrl);

            ((HttpWebRequest)connection).UserAgent = "";
            connection.Headers.Add("Accept-Language", "EN-US");

            using (WebResponse resp = connection.GetResponse())
            {
                // Ignore response
            }
        }
        catch (Exception ex)
        {
            throw new Exception("Error contacting Google Analytics", ex);
        }
    }

    // Track a page view, updates all the cookies and campaign tracker,
    // makes a server side request to Google Analytics and writes the transparent
    // gif byte data to the response.
    private static void TrackPageView(string path)
    {
        TimeSpan timeSpan = (DateTime.Now - new DateTime(1970, 1, 1).ToLocalTime());
        string timeStamp = timeSpan.TotalSeconds.ToString();
        string domainName = "continuoustests.com";
        if (IsEmpty(domainName))
        {
            domainName = "";
        }

        var documentReferer = "-";

        string documentPath = path;
        var userAgent = "";

        // Try and get visitor cookie from the request.
        string utmGifLocation = "http://www.google-analytics.com/__utm.gif";

        // Construct the gif hit url.
        string utmUrl = utmGifLocation + "?" +
            "utmwv=" + Version +
            "&utmn=" + GetRandomNumber() +
            "&utmhn=" + "continuoustests.com" +
            "&utmr=" + "moose" +
            "&utmp=" + path.Replace("/", "%2F") +
            "&utmac=" + "MO-29683017-1" +
            "&utmcc=__utma%3D999.999.999.999.999.1%3B" +
            "&utmvid=" + (visitor - DateTime.Today.GetHashCode());

        SendRequestToGoogleAnalytics(utmUrl);
    }

    static int visitor = Guid.NewGuid().GetHashCode();

    private const string GaAccount = "MO-29683017-1";
    private const string GaPixel = "/ga.aspx";

    private static string GoogleAnalyticsGetImageUrl(string _url)
    {
        System.Text.StringBuilder url = new System.Text.StringBuilder();
        url.Append(GaPixel + "?");
        url.Append("utmac=").Append(GaAccount);
        Random RandomClass = new Random();
        url.Append("&utmn=").Append(RandomClass.Next(0x7fffffff));
        url.Append("&utmr=").Append("moose");
        url.Append("&utmp=").Append(_url.Replace("/", "%2F"));
        url.Append("&guid=ON");
        // HTML-escape '&' so the url can be embedded in an <img> tag.
        return url.ToString().Replace("&", "&amp;");
    }

    public static void SendEvent(string name)
    {
        ThreadPool.QueueUserWorkItem(x =>
        {
            try
            {
                TrackPageView("/event/" + name);
            }
            catch { }
        });
    }
}
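
Calling the tracker from anywhere in the app is then just Analytics.SendEvent("BuildCompleted"), with whatever event name you want to see in the reports. And if you did push a lot of events, here is a minimal sketch of the queue-plus-reader-thread approach mentioned above (hypothetical; ideally you would swap the inner call for a direct synchronous send):

using System.Collections.Concurrent;
using System.Threading;

// Hypothetical sketch: for a higher event volume, a single background thread
// drains a queue rather than scheduling one thread-pool work item per event.
public static class QueuedAnalytics
{
    private static readonly BlockingCollection<string> Events = new BlockingCollection<string>();

    static QueuedAnalytics()
    {
        var worker = new Thread(() =>
        {
            foreach (var name in Events.GetConsumingEnumerable())
                Analytics.SendEvent(name);   // illustrative; a direct synchronous send would fit better here
        }) { IsBackground = true };
        worker.Start();
    }

    // e.g. QueuedAnalytics.Send("BuildCompleted")
    public static void Send(string name)
    {
        Events.Add(name);
    }
}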

Mighty Moose LOLCats?!

Did you know Mighty Moose had LOLCATS inside?

Sometimes dev teams do odd things. This was ours but I really feel weird coding without it now.

Mighty Moose Demo Up


I put up a quick demo of Mighty Moose this weekend. Longer videos are in the process of uploading today so there will be more to come.