After using NCrunch for the last couple of weeks, this is exactly how I have felt: what a great add-on.
If you are writing unit tests (which you should be), you simply have to try NCrunch; it is a wonderful add-on for Visual Studio. It works with all reasonably recent versions of Visual Studio, from 2008 through the 2012 RC.
The Visual Studio 2012 RC Test Explorer has improved the built-in testing features of Visual Studio, but I much prefer the automatic test execution that NCrunch provides, with direct feedback on the state of my tests right in the code as I write or modify it. It is simply wonderful to get almost instant feedback if I break something while working on a piece of code.
Best of all, it is still free, so there's really no reason not to give it a spin and see if it works for you too.
It can be downloaded from http://www.ncrunch.net/ and there's both a tutorial video there and some pretty decent documentation.
Anyway, here are my notes on how I have been using NCrunch so far.
You will now have an 'NCrunch' menu on your menu bar. Open it and select 'Enable NCrunch' to start configuring NCrunch.
I set the max number of threads used for NCrunch to two. The right value depends of course on how powerful your CPU is, but two seems to work well for me.
I like fast, and memory hasn't been an issue on the solutions and hardware I have been working with, but again this will depend on the size of your solution and your hardware.
I have been using dark Visual Studio 2010 themes lately, and I think I prefer the dark Visual Studio 2012 theme. This means I have to remember to change the "lines with no coverage" colour from black to a light grey, or else I will have difficulty seeing lines that aren't covered.
I have played it safe with respect to parallel test execution. Maybe that caution is unnecessary, but NCrunch has performed fast enough for my solutions so far, and maybe I will go parallel when I have more experience with NCrunch and larger solutions.
I prefer to run all my tests so I chose the first option.
Starting with all tests has been the best option for me so far. I want to run most tests anyway.
Let's make a unit test for the SayHello method. We start out by making the simplest test possible: we just instantiate a robot, call SayHello with null, and see what happens.
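The post does not show the Robot and Person classes, so here is a minimal sketch of what the class under test and this first naive test might look like (the class and member names besides SayHello are assumptions):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical types; the post never shows their implementation.
public class Person
{
    public string Name { get; set; }
}

public class Robot
{
    public string SayHello(Person person)
    {
        // No parameter validation yet, so passing null
        // will blow up with a NullReferenceException here.
        return "Hello " + person.Name;
    }
}

[TestClass]
public class RobotTests
{
    [TestMethod]
    public void SayHello_WithNullPerson()
    {
        var robot = new Robot();
        robot.SayHello(null); // NCrunch marks this call with a red cross
    }
}
```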
The dots to the left of the test turn red, and there's a red cross over the SayHello call on the robot.
If we hover the mouse over the red cross, a tooltip will tell us that the method we are testing throws a NullReferenceException. Clearly the method doesn't work with a null parameter, nor does it validate its parameter. Let's fix that.
We have now added parameter checking, but we still have a red cross, indicating that there is a test not expecting to see this exception. Let's go back to the test and fix it.
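The parameter check could look something like this; a guard clause that throws ArgumentNullException is the conventional fix, though the post doesn't show the exact code (the Person class here is the same assumed sketch):

```csharp
using System;

public class Person
{
    public string Name { get; set; }
}

public class Robot
{
    public string SayHello(Person person)
    {
        // Guard clause: fail deliberately with an ArgumentNullException
        // instead of an accidental NullReferenceException.
        if (person == null)
        {
            throw new ArgumentNullException("person");
        }

        return "Hello " + person.Name;
    }
}
```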
We add an ExpectedException attribute to the test, and we are green. Switching back to the SayHello method, we observe that we still aren't testing all of the method; only the first two lines are executed by our current test.
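With MSTest, the attribute-decorated test might look like this (assuming the guard clause in SayHello throws ArgumentNullException, and Robot is the sketched class under test):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RobotTests
{
    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void SayHello_WithNullPerson_Throws()
    {
        var robot = new Robot(); // Robot as sketched in the post's walkthrough
        robot.SayHello(null);    // the declared exception makes the test pass
    }
}
```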
We need to add another test. Let's test the method by passing in a person named Olav and assert that the robot responds with the string "Hello Olav". We quickly notice that the test fails. If we hover the mouse over the red cross, we can see the error message. The really cool part of this is that the test runner is fully automatic: we don't have to build, and we don't even have to save our changes, to see that the test is not quite passing yet. Fix the expected text, or the method, so that the expectation and the implementation match, and we go green again.
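That second test could be written along these lines (again assuming the sketched Person class with a Name property):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RobotGreetingTests
{
    [TestMethod]
    public void SayHello_WithOlav_ReturnsHelloOlav()
    {
        var robot = new Robot();                   // class under test from the walkthrough
        var person = new Person { Name = "Olav" }; // assumed Person shape

        // NCrunch re-runs this automatically as we type,
        // no build or save required.
        Assert.AreEqual("Hello Olav", robot.SayHello(person));
    }
}
```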
Note also how we are constantly switching back and forth between the method we are testing and the tests. Since we aren't even building our code, this switching can happen really quickly once we get into the rhythm of it, and it promotes a more TDD-like approach to writing tests, even if we aren't starting with the tests. Of course you can do real TDD as well, but for those of us who haven't yet fully adopted that way of coding, this may bring us closer to it.
Some tests may not be possible, or desirable, to run on our build server.
Suppose we implement a Save method on the robot that saves the state of the robot to some database (the Save method should probably not be on Robot in a real system, but for simplicity let's just assume that it is). We want to test this functionality, but database-dependent tests often take longer to run, so we don't want to run them all the time.
One way to control which tests should be run on the build server is to use TestCategory attributes on the test methods. We can mark all our real unit tests with the category UnitTest and the integration tests with IntegrationTest, as shown below.
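The categorized test methods might look like this; the test names and bodies are placeholders, but TestCategory is the real MSTest attribute that NCrunch's engine-mode filter matches on:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RobotTests
{
    [TestMethod]
    [TestCategory("UnitTest")]
    public void SayHello_WithOlav_ReturnsHelloOlav()
    {
        // fast, isolated unit test: runs in the filtered engine mode
    }

    [TestMethod]
    [TestCategory("IntegrationTest")]
    public void Save_PersistsRobotStateToDatabase()
    {
        // slower, database-dependent test: excluded by an engine
        // mode that selects only the 'UnitTest' category
    }
}
```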
Click 'Add Engine Mode' and name the new Engine mode. Then select HasCategory UnitTest as shown below, then click OK.
To activate the new mode, select it from the NCrunch Set Engine Mode menu.
You will then be notified by a message in the bottom right-hand corner of Visual Studio that NCrunch is switching engine mode.
Now NCrunch will only run tests that are decorated with the UnitTest TestCategory attribute.
There are three levels of configuration: 'All solutions', which applies to every solution; then solution-wide settings; and finally project-level settings.
The dot colours are set at the topmost level, 'All solutions'.
Those are the most basic features of NCrunch. There are a few more interesting features, such as Metrics and the Risk/Progress Bar, but those will have to wait for a future post.