Sublime is Sublime Part 1

This is the first post in a series about getting Sublime working for .NET development. We will look at how to get nice color schemes, git integration, command line integration, handling project and solution files, and both manually and automatically building/running unit tests. Before getting into the details of how to get everything set up, one of the most important questions to look at is *why*.

Why not just use Visual Studio and Resharper? Are you just some kind of hipster?


I was for a long time a fan of VS+R#. It is quite a nice IDE. Over time, however, I started seeing some of the downsides of this particular setup.

The first is that it's a Windows/Microsoft-only experience. Currently the majority of my machines do not have Windows installed on them; they are running Linux. I am not saying that Windows is a bad platform, but having my dev tools not work on Linux is, for me, a downside of the tool.

VS + R# is also an extremely heavy tool chain. It is common for devs to need an i7 and 16GB of RAM to be effective with it, especially those doing things like solution-wide analysis with R#. I have never met a developer using VS who has not had the "I just want to open a file, what are you doing?!" moment while VS somehow takes 20 seconds to open a text file that a normal text editor could open in under 500ms.

Another advantage of not using VS is that the tooling can be used for other things. How many keystrokes have you memorized in Visual Studio? How applicable is that knowledge to working in Erlang? Lately I have been writing a lot of code in Erlang and in C. One positive aspect of just using a text editor is that my tooling is portable across environments and languages. How many companies do you know that have programs to help developers learn the shortcut keys of their editor du jour? Is that knowledge portable?

There is also the issue of cost. We in the western world often forget how much things really cost from a global perspective. My entire investment in my tooling was $70 for a Sublime Text license, though you can use Sublime for free on an "evaluation" period; it will occasionally nag you about licensing but will still work (I was actually using it this way even though I had a license, simply because I hadn't thought to enter it). $70 gets you roughly 1/3 of the way to a Resharper license (with paid upgrades roughly once per year), and forget about tooling like Visual Studio Ultimate at $13k; I could equip a team of twenty for roughly the same cost as one developer.

But won’t I miss Resharper and all the great things it does?

Yes. You will. Resharper has some great functionality, and when I use it I really feel like I am coding significantly faster. Sublime will not optimize using statements for you. Sublime will not see a type and offer to add a reference for you. Sublime does not have intellisense (although this can be built into it reasonably easily). Sublime does not have automatic refactorings. I would, however, challenge you to try using it anyway.

A while ago (roughly two years) I decided to actually measure my usage of Resharper and how much faster it made me. To measure, I wrote a program that tracked keystrokes going to Visual Studio (obviously Resharper won't be helping me in Chrome). I then added a hot key to mark when I started and stopped coding. The measurements showed Resharper did make me much faster at inputting or changing code; the problem was that even during code-heavy times this was a rather small percentage of my overall work. Most of my time was spent thinking about, or researching, the code I was writing. For me it turned out to be a micro-optimization (try measuring; your results may differ).

There are also downsides to tools such as Resharper. One I noticed was that I had basically forgotten which namespaces things live in, as the tool would automatically add references for me. Resharper also does some things that you would never do yourself: consider a rename that affects domain objects, DTOs, and your client. In doing such a "refactor" you have also just broken your wire protocol without realizing it. On many teams I would see developers routinely checking in 70+ files. Why? They were renaming things. Renames that bubble out this much are generally a sign of a problem.

Said simply, Resharper can help hide many design problems in your code, especially problems with encapsulation. Being forced to type your code in will often lead to better ways of doing things. I gave an example of this in my 8 Lines of Code talk when dealing with an application service.

public class InventoryItemDeactivationService {
    private readonly IInventoryItemRepository _repository;
    private readonly IDriversLicenseLookupService _driversLicenseLookupService;

    public InventoryItemDeactivationService(IInventoryItemRepository repository, IDriversLicenseLookupService driversLicenseLookupService) {
        Ensure.NotNull(repository);
        Ensure.NotNull(driversLicenseLookupService);
        _repository = repository;
        _driversLicenseLookupService = driversLicenseLookupService;
    }

    public void DeactivateInventoryItem(Deactivate d) {
        var item = _repository.GetById(d.id);
        item.Deactivate(d.reason, _driversLicenseLookupService);
    }
}

compared to a more functional approach:


public static void Deactivate(IInventoryItemRepository repository, IDriversLicenseLookupService driversLicenseLookupService, Deactivate d) {
    var item = repository.GetById(d.id);
    item.Deactivate(d.reason, driversLicenseLookupService);
}

Action<Deactivate> handler = x => Deactivate(new OracleRepository(), new CALookup(), x);

In the top piece of code a tool like Resharper helps me greatly, because everything I am doing I have to type three times (field, constructor parameter, assignment). In the second example I get much less help from my tooling, as I am not repeating myself. I would suggest that if you are actually typing your code by hand you are less likely to use the first style and more likely to be interested in the second.
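To see the second style end to end, here is a minimal, self-contained sketch of the same idea. Everything in it (InMemoryRepository, FakeLookup, the shape of InventoryItem) is a hypothetical stand-in invented for this sketch, not code from the talk; the point is only that the "handler" is a closure that partially applies the dependencies once, at the edge of the system:

```csharp
using System;

// Hypothetical stand-in types, named only for this sketch.
public interface IInventoryItemRepository { InventoryItem GetById(Guid id); }
public interface IDriversLicenseLookupService { bool IsValid(string license); }

public class InventoryItem {
    public bool Active { get; private set; } = true;
    public void Deactivate(string reason, IDriversLicenseLookupService lookup) { Active = false; }
}

public class Deactivate {
    public Guid id;
    public string reason;
}

public class InMemoryRepository : IInventoryItemRepository {
    private readonly InventoryItem _item = new InventoryItem();
    public InventoryItem GetById(Guid id) { return _item; }
}

public class FakeLookup : IDriversLicenseLookupService {
    public bool IsValid(string license) { return true; }
}

public static class Handlers {
    // The free function from the post: dependencies are plain parameters.
    public static void Deactivate(IInventoryItemRepository repository,
                                  IDriversLicenseLookupService lookup,
                                  Deactivate d) {
        var item = repository.GetById(d.id);
        item.Deactivate(d.reason, lookup);
    }

    public static void Main() {
        var repo = new InMemoryRepository();
        // Partial application: close over the dependencies once.
        Action<Deactivate> handler = x => Deactivate(repo, new FakeLookup(), x);
        handler(new Deactivate { id = Guid.NewGuid(), reason = "damaged" });
        Console.WriteLine(repo.GetById(Guid.Empty).Active); // item is now inactive
    }
}
```

Note there is no class to construct and nothing to inject: the composition happens in one line, and swapping the real OracleRepository for the in-memory fake in a test is just a different closure.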

I also find that people doing TDD in static languages with such tooling often do not understand what a refactor is compared to a "refuctor", whereas dynamic language people working in plain text editors do. A refactor is when you change either your code but not your tests, or your tests but not your code. One side stays stable and you predict what will happen (one side stays as your pivot point). If you change both your code and your tests concurrently, you no longer have a stable point to measure from. Such tooling often forces changes to happen on both sides, and then it is no longer "refactors" that people are doing.
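The pivot-point idea can be shown with a tiny made-up example (this code is illustrative, not from the post). The check in Main is the test side and stays untouched; the body of Total is the code side, which was rewritten from an accumulator loop to LINQ. Because only one side moved, the unchanged side tells us the behaviour was preserved:

```csharp
using System;
using System.Linq;

public static class Pricing {
    // Code side: before the refactor this was a foreach loop
    // accumulating a running total; the rewrite to LINQ changed
    // only this body, not the callers or the test.
    public static decimal Total(decimal[] lineItems) {
        return lineItems.Sum();
    }
}

public static class Program {
    public static void Main() {
        // Test side: untouched across the refactor, acting as the pivot.
        decimal total = Pricing.Total(new[] { 1.50m, 2.25m });
        if (total != 3.75m) throw new Exception("refactor changed behaviour");
        Console.WriteLine(total);
    }
}
```

Had the refactor also required editing the expected value in the check, we would have been changing both sides at once and could no longer predict the outcome from a stable point.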

In summary

I am not in any way saying that VS and R# are bad tools; I have found both to be quite effective. I am simply suggesting that they have their downsides, like any other tool. A plain text editor can be a wonderful development environment and in some cases has advantages over heavier tooling.

In this series we will look at how to get everything set up for a reasonable code editing experience. We will look at integrating tools such as git. We will look at how to manage many of your day-to-day tasks. We will not get into things like how to edit a WinForm with a WYSIWYG editor, as frankly you can't do that.

5 Comments

  1. Posted March 20, 2014 at 9:07 am | Permalink | Reply

    I would say that the downside to heavy tooling/refactoring set-ups is that, without thinking, it is very easy to generate codebases that are hard to navigate and change without them. Simply trying this kind of set-up (as you describe) for a few months can help challenge that default and lead to better use of that tooling when it is available.

    OR so I’ve found anyway.

  2. Posted March 20, 2014 at 9:30 am | Permalink | Reply

    With regard to running on Linux, any motivation for not choosing/looking at something like MonoDevelop?
    IMO, it depends on what you are building and what kind of (corporate) environment you happen to land in. The closer you are (or need to be) to the “Windows” metal the less you’ll be inclined to use this approach. As the number of PCLs increases some of those issues might be mitigated, but it’s still a long way from home before the entire ecosystem starts to think this way (if ever).
    For most line of business systems, a lot of the core behavior could be written this way, with a ports-n-adapters approach to keep external/not portable deps to a minimum (and only dev those integration points in a Windows environment).

  3. andyclap
    Posted March 22, 2014 at 4:20 pm | Permalink | Reply

    The handler is calling Deactivate, rather than DeactivateInventoryItem … is that deliberate irony?!
