CLion review

I have been using CLion on and off for about two months now. This is just a quick review of it.

The good.

I really like it as an editor. It is quick and intuitive to get into.
If you have background with intellij/R# the learning curve is quite low.
CMake support
Does a great job picking up subtle little issues in code
Auto add includes :)

The bad.

Compared with Sublime, for example, there are far fewer plugins available for CLion
Getting existing projects working is a PITA
It feels really weird to install the JVM so I can edit my C code

I want to expand on the last point a little. Like many tools, if you start off from scratch using it, everything works wonderfully. If, though, you have an old project and try to move it to CLion, it’s not so simple. As an example, in private eye everything is done in makefiles, including a bunch of cross-platform type stuff. Getting this working in CLion is non-trivial, so I tended to just use it as an editor, editing particular files (luckily there are only a few). It may be that I am just not an advanced enough user.


It’s a nice tool. I haven’t fully moved over from vim, but in a new larger project I am working on I am starting with CLion and sometimes editing in vim. Ask me my opinion again in 3-6 months, after I’ve become a more advanced user.

Your Tools Control You

It’s been a while since I have blogged. I will be writing a few posts over the next few weeks.

Hopefully over time you as a developer change your opinions. One thing I have drastically changed my opinion on over time is tooling. As many know, I used to work heavily on a tool called Mighty Moose (it still works and is OSS). One of the things I built into Mighty Moose was a pretty cool graphing feature that I thought was the bee’s knees at the time I built it. You can see it and a lot of other cool features Svein and I built into it here:

One thing that was interesting with the graphs was that people would bring them up on their code and file support complaints that the nodes were too small, because there were 500 boxes in the graph (one user even got an OutOfMemory; debugging showed a graph with over 50k nodes in it). This was never a problem with the graph; it was a problem with the code.

I still believe such tools can have value and help make better software however I don’t really use such tools any more. In fact I have gone from being very pro-tooling to being very anti-tooling. Tools control your thinking process. I can probably look at the output of your process and figure out what tools you were using!

A classic example of this, which many have worked with, is a typical .NET 1.1 win/webforms app written in Visual Studio. VS has a great (I mean seriously wonderful) step debugger built into it. The problem is that people tend to use it. A typical workflow would be: change some code, hit F5, start stepping through. One of the other cool features, truly innovative at the time, was the ability to easily change code on the fly.

Speaking of ASP.NET, have you ever looked at what came out over HTTP from it? If you have ever wondered what a dementor’s kiss feels like, I reckon it’s similar.


The problem in this example is when you are then given the code that was developed in this way. You suddenly find out that this is the only workflow that will actually work with the code. You will find other artifacts as well, such as nested loops and longer functions, as the tools work better that way! It’s quite annoying to think step-into vs step-over.

This is a classic case of tools controlling your output.

Other examples of this can be seen in tools like IntelliJ or ReSharper. A typical smell of such tooling is that interfaces are backwards in a domain model. Instead of the domain model defining the contract it wants and an implementer adapting to that interface, they “extract interface” from the implementer. This is quite backwards, but a typical smell.

Another example can be seen in the code of people using containers. Ask yourself: do you use almost exclusively constructor injection? Have you constructor-injected a dependency that you only use in one method? Does the granularity actually match?
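To make the granularity question concrete, here is a hypothetical sketch; the service, repository, and mailer are invented for illustration, not from any real codebase:

```python
class Repository:
    def load(self, report_id):
        return "report-%s" % report_id

class Mailer:
    def __init__(self):
        self.sent = []
    def send(self, address, report):
        self.sent.append((address, report))

# Smell: 'mailer' is constructor-injected but only one method ever uses it.
class ReportService:
    def __init__(self, repository, mailer):
        self._repository = repository
        self._mailer = mailer      # only email_report() touches this

    def build_report(self, report_id):
        return self._repository.load(report_id)

    def email_report(self, report_id, address):
        self._mailer.send(address, self.build_report(report_id))

# Granularity matches: the dependency is passed to the one method that needs it.
class LeanReportService:
    def __init__(self, repository):
        self._repository = repository

    def build_report(self, report_id):
        return self._repository.load(report_id)

    def email_report(self, report_id, address, mailer):
        mailer.send(address, self.build_report(report_id))

mailer = Mailer()
LeanReportService(Repository()).email_report(1, "a@example.com", mailer)
print(mailer.sent)  # [('a@example.com', 'report-1')]
```

In the first version the container happily satisfies the constructor, hiding the fact that the dependency is needed by only one method; passing it as a parameter makes the real granularity visible.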

Granted, most of these things are small. But these small changes start adding up.

Back to Mighty Moose. I found it changed the way I was writing code. One thing I got lazy with was the structure of my code in terms of files, because I had graphs to navigate. I also got lazy about tests because the minimizer would not run them most of the time. I even got a bit dangerous trusting my risk metrics.

Today I have gone back to a plain text editor.
I write unit tests but don’t TDD (TDD has its effects)
I like REPLs
I try to log like I am debugging a problem (code should normally be debuggable without a debugger; let’s just say when running involves burning EPROMs you think more about it first)

What’s interesting is I find I am almost back to where I was 15 years ago, writing embedded code on 68030/68040s with a VT terminal plugged in for log messages. I see more and more push back on big tooling, and it’s a good thing (of course the “pushbacks” keep adding more and more functionality and becoming IDEs themselves! It kind of reminds me of the three-year-old 50 kloc “micro ORMs”)

p.s. I’m still looking for a nice text editor setup for F# with support for type providers etc., if someone knows of one.

The Test that Cried Wolf

There was an interesting post on the BDD list today with a pretty common question:

TL;DR: I want to automate receiving an SMS in my test to verify that my SMS send with <vendor> worked. What is the best way to do this?

An answer came back that you can use Twilio and receive the message through their API.
This is in general a terrible idea and you should avoid it.
The argument quickly came back: it’s easy and relatively cheap to automate, so why not?


People have a mistaken view that something being cheap and simple to automate makes that thing a good idea to automate. The reason it’s so terrible to automate the sending of a text message has nothing to do with the cost of the initial automation (though it’s not as simple as people think, I have done it!). The reason it’s so terrible is that it will become the Test-That-Cried-Wolf.

Let’s start with the service you will use to receive text messages (in this case Twilio):

1 day, 23 hours ago     This service is operating normally at this time.
2 days ago      We are investigating a higher than normal error rate in TwiML and StatusCallback webhooks
1 week, 6 days ago      This service is operating normally, and was not impacted by the POST request issue.
1 week, 6 days ago      We are investigating an issue with POST requests to /Messages and /SMS/Messages.
2 weeks, 1 day ago      Twilio inbound and outbound messaging experienced an outage from 1.30 to 1.34pm PDT. The service is operating normally at this time.
2 weeks, 1 day ago      Our messaging service is currently impacted. We are investigating and will provide further updates as soon as possible.
2 weeks, 1 day ago      All queued messages have been delivered. All inbound messages are being delivered normally.
2 weeks, 1 day ago      All inbound messages are being delivered normally. Our engineers are still working on delivering queued messages. We expect this to be resolved before 6pm PDT
2 weeks, 1 day ago      A percentage of incoming long code messages, that were received between 3.02pm and 3.45pm are queued for delivery. Our engineers are actively investigating the situation.
2 weeks, 2 days ago     A number of Twilio services experienced degraded network connectivity from 8:47am PT to 8:50am PT.  All services are now operating normally.
2 weeks, 2 days ago     This service is operating normally at this time.
2 weeks, 2 days ago     We are getting reports of elevated errors. Our Engineering Team is aware and are working to resolve.
2 weeks, 5 days ago     This service is operating normally at this time.
2 weeks, 5 days ago     We are investigating a problem where webhooks in response to incoming SMS or MMS messages may be delayed or may be made multiple times.

What happens when your service that you only use for receiving SMS in your test is having a problem? Test Fails.
What happens when your service sending the SMS is having a problem? Test Fails.
There are at minimum two other providers here. Test Fails.
Anyone who has owned a phone knows that SMS are not always delivered immediately. How long do you wait? Test Fails.
Anyone who has owned a phone knows that SMS is not guaranteed delivery. Test Fails.

Start adding these up, and if you run your tests on a regular basis you can easily expect 1-2 failures/week. On most teams I deal with, a failed test gets looked at immediately to figure out why it’s failing. In all of these cases the failure will have nothing to do with anything in your code; it is a transient issue (quite likely not even impacting production). How many times will you research this problem before you say “well, it does that all the time”?
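To make the adding-up concrete, here is a small back-of-the-envelope calculation. The per-run failure probabilities are made-up illustrative numbers, not measurements:

```python
# Hypothetical per-run probabilities that each external factor fails
# the test for reasons having nothing to do with your code.
failure_modes = {
    "receiving service (e.g. Twilio) outage": 0.002,
    "sending service outage":                 0.002,
    "intermediate carrier problems":          0.004,
    "SMS delivered after the test timeout":   0.010,
    "SMS never delivered at all":             0.002,
}

# Probability that at least one mode trips on a single run.
p_pass = 1.0
for p in failure_modes.values():
    p_pass *= (1.0 - p)
p_false_failure = 1.0 - p_pass

runs_per_week = 10 * 7  # say the suite runs 10 times a day
expected_false_failures = p_false_failure * runs_per_week
print(round(expected_false_failures, 2))  # roughly 1.4 false failures per week
```

Even with each individual risk looking tiny, the product works out to the 1-2 spurious failures per week described above.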

The cost of such tests is not in their initial implementation but in their false positives. When >90% of the test failures have nothing to do with your system, the failures will GET IGNORED. What’s the point of having a test when you ignore its failures? These are the tests-that-cry-wolf and they should be avoided. There is a place for such tests: on the operations side, where any crying-wolf is a possible production issue and WILL be investigated.

Another Security Model

I had an interesting question when sitting with a client today. Event Store internally supports role-based security through ACLs; they would prefer to use a claims-based system with it. An interesting idea: is there a reasonably easy way to do this? Well, yes, but it requires a bit of coding around (it would be a nice thing to have in a library somewhere *wink wink*).

The general idea with claims-based security is that something else does the authentication and the application acts only on a series of claims that it is given. In this particular example they want to control access to streams based upon claims about the user, and to do it in a reasonably generic way.

As an example, for a user you may receive the following claims:

    organization : 37,
    department : 50,
    team : 3,
    user : 12

What they want is to be able to use these in conjunction with streams to determine whether or not a given user should have access to the stream (and to be reasonably dynamic about it).

Obviously we will not be able to easily do this with the internal security (well, you could, but it would be very ugly), but it can be built relatively easily on top. It is quite common, for instance, to run Event Store only on localhost and to only expose a proxy publicly. This kind of thing can be done in the proxy; while not an ideal solution, it can get us pretty close to what we want.

If we just wanted to work with, say, a single claim “can read event streams”, we could simply check the claim in the proxy before routing the request. Chances are, however, you want to do quite a bit more with this and make it more dynamic, which is where the conversation went. In particular: what about per-stream and per-stream-type setup, done dynamically? Well, we could start using the stream metadata for this.

For a resource (stream metadata):

    organization : 37,
    department : 50,
    team : 13,
    user : 121

Now we could try taking the intersection of this with the claims provided on the user.

The intersection would result in

    organization : 37,
    department : 50,

We might include something along the lines of

    approve = "organization,department",
    somethingelse = "organization,department,team",
    delete = "user"

Where the code would then compare the intersection to the verb you were trying to perform (you must have all of the listed claims). This is a reasonably generic way of handling things, but we can go one step further and add in a bit more.


    approve = "organization,department",
    somethingelse = "organization,department,team",
    delete = "user"

This is now defined in a default which will be merged with the stream metadata (much like how ACLs work). If a value is provided in the stream metadata it will override the default (based on type of stream). This allows us to easily setup defaults for streams as well. The logic in the proxy is roughly as follows:

Read {type}-defaultsetting (likely cached)
Read stream metadata
Merge stream metadata + {type}-defaultsetting into effective metadata
Calculate intersection of effective metadata with user claims
Check intersection vs required permission for the operation
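A minimal sketch of that proxy logic, assuming metadata and claims are simple dictionaries; the function names are illustrative, not Event Store APIs:

```python
def effective_metadata(default_meta, stream_meta):
    """Merge: values in the stream metadata override the per-type default."""
    merged = dict(default_meta)
    merged.update(stream_meta)
    return merged

def intersection(resource_claims, user_claims):
    """Claims present on both resource and user, with matching values."""
    return {k: v for k, v in resource_claims.items()
            if user_claims.get(k) == v}

def allowed(operation, permissions, resource_claims, user_claims):
    """The user must match ALL claims the operation requires."""
    required = [c.strip() for c in permissions[operation].split(",")]
    matched = intersection(resource_claims, user_claims)
    return all(c in matched for c in required)

user = {"organization": 37, "department": 50, "team": 3, "user": 12}
stream = {"organization": 37, "department": 50, "team": 13, "user": 121}
permissions = {"approve": "organization,department", "delete": "user"}

print(allowed("approve", permissions, stream, user))  # True
print(allowed("delete", permissions, stream, user))   # False
```

Running this against the example user and stream metadata, “approve” is allowed (the intersection contains both organization and department) while “delete” is not (the user values differ).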

This provides a reasonably generic solution that is quite useful in many circumstances. The one issue with it is that if someone roots your box they can access the data directly, without permissions, as they can bypass the proxy and talk directly on localhost (to be fair, you probably have bigger problems at this point). It is however a reasonable solution for many situations.

Ouro’s Birthday

We have now been working on Event Store for over 3 years! Ouro’s second birthday will be happening in London on Sept 17. In general Ouro’s birthday is always a lot of fun!

This year we will have some short talks by various contributors on the OSS side (what they have done, what they are working on, etc.) and I will do a talk on major functionality completed/on the way, including showing off some new goodies.

There will be plenty of people from the community there and I am sure lots of good discussions. Afterwards there will be a bit of a celebration with free food/beer and of course some Ouro swag. Come on out and hang out for the evening! RSVP required:

Sublime is Sublime Closing

Well, it’s an early morning. I can blame the travel from London for that. I managed to struggle through to the end of the second period watching the Canadiens game last night. I was a bit worried entering the third, but was quite happy to see they had won when I woke up :)

In this post I just want to sum up the other posts from the Sublime series as well as add a few tidbits. In the post series we have learned how to set up Sublime for .NET development. We have covered how to set up project/solution support, how to get intellisense and some basic refactoring, and even how to get automated builds and tests running (all on Linux).

We have also looked at a lot of other things that are built on top of sublime that are fairly useful if you are doing other types of development such as javascript or html5. Many of these tools far outclass the Visual Studio equivalents and are usable with many other environments (such as using a ruby backend).

I have personally given up on using Visual Studio as a whole. I will however keep a VM with it for some very specific tasks that it does well (such as line-by-line debugging). These are not things I use in my daily workflow, but they are nice to have when you absolutely need them.

Some other changes have come about in the use of Sublime as my primary editor. A big one is that when I am writing one-off code (which I do a lot) I do not bother creating project or solution files any more. I instead just create C# files, then either invoke the compiler directly from the command line or create a small makefile. It sounds odd, but it’s actually much simpler than creating project/solution files overall.

There will also be much going on in this space coming up. As of now the Sublime plugin supports maybe 20% of what OmniSharp is capable of. There will be quite a bit of further support coming in. As an example, I was looking the other day at supporting run-tests-in-context from inside of Sublime (on a test: run test; on a fixture: run fixture). There is also much coming for refactoring support, and my guess is that you will see even more coming in on this due to NRefactory moving to Roslyn. I think within a year you will find most of this tooling built in.

Another thing that I added to Sublime, though there isn’t really an official plugin for it yet, is SublimeREPL + scriptcs. I find it quite common to grab a function, work it out in the REPL first, and then move it back into the code. A perfect example of this happened to me while in London: I was trying to combine two Uris and was getting some odd behaviour. Three minutes in the REPL showed exactly what the issue was.

Moving to Sublime will change the way that you work a bit, but it is definitely worth trying. Remember that a primary benefit of working in this way is that everything you are doing is a composition of pieces that will also apply to any other code you happen to be working on (whether it’s C/Ruby/Erlang/even F#).

Banking Example Again

I was reading through this yesterday on my way out of London. Go on take a minute and read it.

I do find it funny that the bitcoin exchanges were taken down by such things, but the article is pretty ridiculous in how it presents its problem/solution. Banks don’t actually work as described in this post. There is not a “balance” column in an account table as presented, unless the developers just had no clue what they were doing.

mybalance = database.read("account-number")
newbalance = mybalance - amount
database.write("account-number", newbalance)
dispense_cash(amount) // or send bitcoins to customer

This is absurd. Your balance, while perhaps denormalized onto your account, is really the result of an equation (a summation of the value of your transactions). All of the problems discussed would just go away if the system had been designed to record a journal properly (and as the journal is append-only, most other issues would go away too).
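A minimal sketch of the journal-based design being argued for, with made-up names: the balance is never stored and mutated, it is derived from an append-only journal:

```python
class Account:
    def __init__(self):
        self._journal = []  # append-only list of signed transaction amounts

    def record(self, amount):
        # Transactions are only ever appended, never updated in place.
        self._journal.append(amount)

    @property
    def balance(self):
        # The balance is an equation over the journal, not a mutable column.
        return sum(self._journal)

acct = Account()
acct.record(100)     # deposit
acct.record(-30)     # withdrawal / dispense cash
print(acct.balance)  # 70
```

Because nothing is read-modify-written, there is no lost-update race on a balance column; concurrent withdrawals become appended facts you can detect and compensate for, rather than silently overwritten state.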

I have always hated that the typical example of distributed transactions is transferring between two accounts. Banks don’t work this way!

Sublime is Sublime 12

OK, so we are at the last blog post in the Sublime series. I have, I hope, saved the best one for last. One of the largest questions I have received during this series is how to get intellisense, along with “R# is awesome because it supports things like go-to-definition and rename…”

In this post we will add all of these features to Sublime. There is a great project out there called OmniSharp that supports most of them (and in the future can support many, many more!). Let’s get going then and add our Sublime support.

So to install:

Go to your packages directory (Linux here, so it may be different on Windows or Mac; just browse your packages in Sublime to find the folder):

cd ~/.config/sublime-text-3/Packages/
git clone
cd OmniSharpSublime
git submodule update --init 

Now you have all the needed files. The next thing we will need to do is build OmniSharp.


Now edit your project file for Sublime and add at the root level:

 "solution_file": "./EventStore.sln"

Remember the path is relative to your project file! Restart Sublime if it’s running. Try typing out variable names and you will see you have intellisense. If you hit a '.' you will notice that the completion does not come up :( By default the auto-complete keystroke is alt+/; you can remap this to ctrl+space if you want by editing your keymap in Sublime.

Want to go to the definition of a method? Try F12 by default (again, it can be remapped; it’s up to you!)

In the next post we will recap everything that we have done so far!

Sublime is Sublime 11

So we are now into the 11th post. There are only two more to go after this one: one with some more functionality and one as a summary. In the last post we installed OpenIDE and showed the very basics of its functionality, adding a file to an existing project.

OpenIDE can do much more than this. It has most of the sln/csproj support that you will need. Let’s start by making a new project.

greg@goblin:~/src/foo$ oi create console src/HelloConsole
Created src/HelloConsole
greg@goblin:~/src/foo$ cd src/HelloConsole
greg@goblin:~/src/foo/src/HelloConsole$ ls
HelloConsole.csproj  Program.cs  Properties

This will create a project from a template. The following are the available templates (listed from help).

	create : Uses the create template to create what ever project related specified by the template
		console : Creates a new C# console application
			ITEM_NAME : The name of the Project/Item to create
		library : Creates a new C# library project
			ITEM_NAME : The name of the Project/Item to create
		service : Creates a new C# windows service
			ITEM_NAME : The name of the Project/Item to create

You could remove Program.cs with oi deletefile foo/Program.cs if you wanted, and it would be removed from the project file as well.

You can create your own templates as well; they are just scripts. This applies to both new items and project templates, for example if you wanted to make a custom template for a new item (say a custom xunit test fixture).

Go to your OpenIDE release: cd .OpenIDE/languages/C#-files/

You will see here that there are create and new directories. These hold the templates for the create and new commands. They are implemented in Python but can be scripted in any language.

As an example, here is the template for a new interface:

#!/usr/bin/env python
import sys

if __name__ == "__main__":
	if sys.argv[1] == 'get_file_extension':
		print(".cs")	# the file extension this template generates
	elif sys.argv[1] == 'get_position':
		print("0|0")	# cursor position (line|column) after creation
	elif sys.argv[1] == 'get_definition':
		print("Creates a new C# interface")
	else:
		classname = sys.argv[1]
		namespace = sys.argv[2]
		parameterfile = sys.argv[3]
		print("using System;")
		print("")
		print("namespace " + namespace)
		print("{")
		print("	interface " + classname)
		print("	{")
		print("	}")
		print("}")

And here is the template for a new console application:

#!/usr/bin/env python
import sys
from files.copydir import copy as copydir

if __name__ == "__main__":
	if sys.argv[1] == 'get_file':
		print("Program.cs")	# the file to open after the project is created
	elif sys.argv[1] == 'get_position':
		print("0|0")	# cursor position (line|column)
	elif sys.argv[1] == 'get_definition':
		print("Creates a new C# console application")
	else:
		copydir("console", sys.argv[1])	# copy the console template directory to the target

There is still much that could be added to OpenIDE (and it does a ton of other things we have not covered), but in general it can get you around the issues of dealing with project and solution files, including references.

Sublime is Sublime 10

OK, we have been moving right along through the Sublime features and getting set up for .NET development. I have been half saving the next few posts deliberately until the end, as they cover the largest arguments I hear against using editors other than VS when dealing with .NET code.

But my team uses Visual Studio, I can’t just give up on using project/solution files and use some hipster editor.

This has for a long time been the single largest hurdle to using non-VS editors. If you want to get an idea of how bad it is and you have been following along with the posts, try adding a file to a project or referencing another project in Sublime. Ouch, manually editing project files. How do you know that your manual edit will work when opened in Visual Studio?

To be fair, even if it took you 15 seconds per file/reference added in Sublime, the overall time on the project would be minimal, but it is a serious pain in the ass. Nothing makes you feel slower than having to manually edit XML that was previously maintained automatically for you.

To get some of this functionality we will install a new tool, though a whole new tool is not really needed for this; it could be done with some basic shell scripts. The tool is OpenIDE by @ackenpacken. OpenIDE does a whole lot more than what we need it for. I have been chatting with him recently about maybe making it more modular; hell, even Mighty Moose is contained within it as of now.

OpenIDE supports some of the generic things you would want when working with .NET code: the ability to edit project/solution files, templating for new additions, and reference management. There are also some other tools out there as well, such as OmniSharp, but I fear all of them are too complex and not modular enough, as there hasn’t been much of a push for this kind of tooling. Part of this post series is to help mold demand for these kinds of tools.

Now for the OpenIDE install. You can grab the sources for OpenIDE here. Svein has recently added a binary repository here. Pull the binaries repository or build from sources, and put the output into your $PATH. OpenIDE also comes with bash completion, which can help greatly if you install it! Now you are good to start.

Let’s make sure OpenIDE works: oi

You should get help.

oi package install C-Sharp

In the root of your project type oi init C#

Now oi is set up and ready to go. From the command line, let’s try:

greg@goblin:~/src/EventStore/src/EventStore$ oi new class esquery/bar
Created class
Full path /home/greg/src/EventStore/src/EventStore/esquery/bar.cs

Note that I did not type bar.cs, just esquery/bar, and yes, you get tab completion on this.

If I now look at what changed:

greg@goblin:~/src/EventStore/src/EventStore$ git status
# On branch dev
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#	modified:   esquery/esquery.csproj
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#	esquery/bar.cs
greg@goblin:~/src/EventStore/src/EventStore$ git diff esquery/esquery.csproj
diff --git a/src/EventStore/esquery/esquery.csproj b/src/EventStore/esquery/esqu
index dec282f..9f3c95f 100644
--- a/src/EventStore/esquery/esquery.csproj
+++ b/src/EventStore/esquery/esquery.csproj
@@ -84,6 +84,7 @@
     <Compile Include="CommandProcessor.cs" />
     <Compile Include="Program.cs" />
     <Compile Include="Properties\AssemblyInfo.cs" />
+    <Compile Include="bar.cs" />
     <None Include="app.config" />

You can also run this command directly inside of Sublime. Just use ctrl+shift+c and type in your command. This is just the beginning though. OpenIDE and tools like it can support most of your integration with things like project/solution files.

