Wednesday, July 14, 2010

Introducing Cone

NUnit is one of the most widely used and mature .NET unit testing frameworks out there. Anyone doing TDD on .NET has heard of it and probably used it. I've been tempted by other excellent alternatives like MbUnit, and someone once forced me to use the Visual Studio Unit Testing Framework; the only good thing about that one is that the name tells you a lot about the experience: it's long, it's integrated, and it's going in the opposite direction of everyone else.

But for one reason or another I always come back to my old buddy NUnit. It's probably the familiarity, and the fact that the features that once drew me to MbUnit, like parameterized/data-driven tests, are now available in NUnit.

All that being said, there's some stuff that always kinda irked me with NUnit, and most of it applies to the others too.

  1. Test cases are named after their fixture, not after the thing they're verifying.
  2. Nesting fixtures to provide context and logical grouping doesn't really work.
  3. Test names tend to duplicate part of the code, since assertion messages tell me the values but not how they were obtained.

1) Test case names

The majority of unit tests written either target a single class, or target that class in some context. Fixture names tend to be akin to "FooTests" or "FooWhenBarTests" and are dutifully rendered as such, often including a namespace with "Tests" in it. Quite typically a single test case ends up looking like this:
Acme.Tests.BowlingTests.Score_should_return_0_for_gutter_game

Now that's just plain silly! What happens is that for every single such case we go and mentally remap the name back to the class under test, "Acme.Bowling", so why not show that instead, maybe with some extra context on the side?
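
For reference, here is the kind of plain NUnit fixture that produces the name above; the Bowling class, with its Roll method and Score property, is just an illustrative stand-in:

    using NUnit.Framework;

    namespace Acme.Tests
    {
        [TestFixture]
        public class BowlingTests
        {
            [Test]
            public void Score_should_return_0_for_gutter_game()
            {
                var bowling = new Bowling();
                for(var i = 0; i != 20; ++i)
                    bowling.Roll(0);
                // Reported as Acme.Tests.BowlingTests.Score_should_return_0_for_gutter_game,
                // even though everything interesting here is about Acme.Bowling.
                Assert.AreEqual(0, bowling.Score);
            }
        }
    }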

2) Nested Fixtures

If you've ever tried to nest fixtures in NUnit to create a logical grouping for tests then you're familiar with test names like
"Acme.Tests.FooTests+BarTests.oh_my_god"
Oh, and these fixtures also end up not nested but as siblings to all your other tests.
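
For the record, this is the sort of nesting that produces that name (the classes and the test here are just placeholders):

    using NUnit.Framework;

    namespace Acme.Tests
    {
        [TestFixture]
        public class FooTests
        {
            [TestFixture]
            public class BarTests
            {
                [Test]
                public void oh_my_god()
                {
                    // Reported as "Acme.Tests.FooTests+BarTests.oh_my_god" and listed
                    // as a sibling of FooTests instead of nested beneath it.
                }
            }
        }
    }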

3) Test names duplicate assertions, because assertions only give you the values

If you lean toward classicist, state-based testing this happens quite a lot: in order to give enough context to make a failing test immediately obvious you either have to write a custom assert message or add something to your test name. Depending on your asserts and tests either works, but both feel like a hack.
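
A typical example, again using the illustrative Bowling class: the bare assert only reports the values, so the "how" ends up restated in the message or in the test name:

    [Test]
    public void Score_should_return_0_for_gutter_game()
    {
        var bowling = RollGutterGame(); // hypothetical helper
        // Without the message the failure just reads "Expected: 0  But was: 8",
        // which says nothing about where that value came from.
        Assert.AreEqual(0, bowling.Score, "bowling.Score after a gutter game");
    }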

Since I'm on vacation and renovating, I spent a few hours watching the paint dry and hacking together an NUnit addin to solve those issues. The result, henceforth called Cone, solves them for the majority of cases. The picture below should give you a feel for how it looks and feels. Things worth noting:

  • Fixture names are derived from the Described type, not from the fixture name
  • Underscores are translated to whitespace to aid scannability
  • The actual failing expression "bowling.Score" is shown in the Assertion message
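
To make that concrete, the kind of fixture I'm after looks roughly like this; the Describe attribute and Verify helper below are an illustrative sketch of the idea rather than Cone's settled API:

    [Describe(typeof(Bowling))]
    public class BowlingSpec
    {
        Bowling bowling = new Bowling();

        public void zero_score_for_gutter_game()
        {
            for(var i = 0; i != 20; ++i)
                bowling.Roll(0);
            // On failure the offending expression, "bowling.Score", is reported
            // together with its actual value.
            Verify.That(() => bowling.Score == 0);
        }
    }
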
My initial experience has been very pleasant, but that could just be me being enamored with my own idea. That's where you come in. Please tell me what you think about the idea: is it worth pursuing further, packaging up properly and releasing? If so, are there any obvious use cases that you couldn't live without?

Wednesday, May 5, 2010

The Fallacy of the Ever-Green Build

The fallacy of the ever-green build is built on top of good intentions and hyperbole. Let's start by getting some of the preliminaries clear; these behaviors are strictly "not ok":
  • Checking in code that is known to be broken.
  • Not taking responsibility for fixing the build once broken, by yourself or by asking for help.
  • Not helping a team-mate stuck on how to fix the build.
Given that these clearly defective behaviors aren't present, and assuming you have a reasonably stable build process and a decent build machine, "buy the fastest machine you can afford" is good advice. But given the commandment "thou shalt never break the build", or "when you break the build God kills a kitten", most people don't respond with enlightenment; they figure out "good" ways to not break the build:
  • Writing fewer tests; tests that don't exist never break the build.
    • A prime example is not verifying the behavior of common components, or leaving them underspecified.
  • Delaying check-ins far longer than necessary "to make sure", which is wasteful since it squanders time that could have been spent on better things.
  • Not working, no work = no code = no broken build.
If you're using the fastest machine in the building just to display a green light, you're not making good use of that investment. On the other hand, if you use it to find valuable but seldom-occurring problems in a way that is less costly than the alternative, then it's a good investment. That basically entails loosening up a bit on "run every test we've got! never break the build" in favor of "run the fast, high-coverage tests most likely to yield sufficient certainty about the soundness of your commit".

There's learning to be had from every build failure; the only thing you learn from "green" is that you're probably hiding problems. To be clear, I'm talking about a small, controlled amount of high-information build failures; missing files, and not bothering to spend a couple of minutes running the tests for the area you just changed, don't qualify.
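
One way to carve out such a fast subset with plain NUnit is categories; assuming the NUnit 2.x console runner, the slow but seldom-failing tests get tagged and excluded from the quick pre-commit run while the build server still runs everything:

    // Tag the slow, rarely failing tests...
    [Test, Category("Slow")]
    public void full_database_roundtrip()
    {
        // long-running integration test
    }

    // ...and leave them out of the quick pre-commit run:
    //   nunit-console.exe Acme.Tests.dll /exclude:Slow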

Friday, January 8, 2010

Did I just μBDD in NUnit?


Have a look at this picture; what do you see?

[Screenshot: uBDD-NUnit]


Yup, that’s the old familiar “no frills, we mean business and are professional about our testing” looking NUnit GUI. But have a look at the test names: they look like some dynamic-typing-loving hippie wrote them! What are they doing in my NUnit!?

The answer is, they’re lending a BDD-like Given/When/Then flavor to our old workhorse, in a way that’s fully compatible with and complementary to the standard NUnit model. Does it require magic and fairy dust? Nope, not really; just some creative use of the TestCaseSource attribute. The test fixture looks like this:

[Screenshot: uBDD-Fixture]
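
Since the fixture only appears as a screenshot above, here is a rough sketch of the kind of thing I mean; the Scenario helper and the Bowling class are illustrative assumptions, not the actual ~40-line implementation:

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class BowlingScenarios
    {
        // Illustrative stand-in: turns Given/When/Then steps into named test cases.
        class Scenario : IEnumerable<TestCaseData>
        {
            readonly List<TestCaseData> steps = new List<TestCaseData>();

            public Scenario Given(string what, Action step) { return Add("Given " + what, step); }
            public Scenario When(string what, Action step) { return Add("When " + what, step); }
            public Scenario Then(string what, Action step) { return Add("Then " + what, step); }

            Scenario Add(string name, Action step)
            {
                steps.Add(new TestCaseData(step).SetName(name));
                return this;
            }

            public IEnumerator<TestCaseData> GetEnumerator() { return steps.GetEnumerator(); }
            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
        }

        Bowling bowling;

        public IEnumerable<TestCaseData> GutterGame()
        {
            return new Scenario()
                .Given("a new game", () => bowling = new Bowling())
                .When("every roll is a miss", () => { for(var i = 0; i != 20; ++i) bowling.Roll(0); })
                .Then("the score is 0", () => Assert.AreEqual(0, bowling.Score));
        }

        // Each step shows up as its own named entry in the NUnit GUI;
        // this relies on the cases executing in the order they were yielded.
        [TestCaseSource("GutterGame")]
        public void a_gutter_game(Action step) { step(); }
    }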

GWT doesn’t always make sense, but when it does it can lead to some really communicative tests.

If you want the small (~40 lines) source for the Scenario class just leave a comment.