Wednesday, July 14, 2010

Introducing Cone

NUnit is one of the most widely used and mature .NET unit testing frameworks out there. Anyone doing TDD on .NET has heard of it and probably used it. I've been tempted by other excellent alternatives like MbUnit, and someone once forced me to use the Visual Studio Unit Testing Framework; the only good thing about that one is that the name tells you a lot about the experience: it's long, it's integrated, and it's going in the opposite direction of everyone else.

But for one reason or another I always come back to my old buddy NUnit. It's probably the familiarity, and the fact that the features that once drew me to MbUnit, like parameterized/data-driven tests, are now available in NUnit.

All that being said, there's some stuff that has always kinda irked me about NUnit, and most of it applies to the others too:

  1. Test cases are named after their fixture, not after the thing they're verifying.
  2. Nesting fixtures to provide context and logical grouping doesn't really work.
  3. Test names tend to duplicate parts of the code, since assertion messages tell me the values but not how they were obtained.

1) Test case names

The majority of unit tests written either target a single class, or target that class in some context. Fixture names tend to be akin to "FooTests" or "FooWhenBarTests" and are dutifully rendered as such, often including a namespace with "Tests" in it. Quite typically a single test case ends up looking like this:
Acme.Tests.BowlingTests.Score_should_return_0_for_gutter_game

Now that's just plain silly! What happens is that for every single such case we mentally remap the class under test to "Acme.Bowling", so why not show that instead, maybe with some extra context on the side?
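
To make that concrete, a fixture along these lines is what produces that name; the Bowling class here is just a minimal stand-in for the usual bowling-kata class under test:

    using NUnit.Framework;

    namespace Acme.Tests
    {
        // Minimal stand-in for the class under test; the real bowling-kata
        // scoring rules don't matter for the point being made here.
        public class Bowling
        {
            int score;
            public void Roll(int pins) { score += pins; }
            public int Score { get { return score; } }
        }

        [TestFixture]
        public class BowlingTests
        {
            [Test]
            public void Score_should_return_0_for_gutter_game()
            {
                var bowling = new Bowling();
                for (int i = 0; i != 20; ++i)
                    bowling.Roll(0);

                // A failure here only reports the expected and actual values;
                // it says nothing about bowling.Score being what produced them.
                Assert.AreEqual(0, bowling.Score);
            }
        }
    }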

2) Nested Fixtures

If you've ever tried to nest fixtures in NUnit to create a logical grouping for tests, then you're familiar with test names like
"Acme.Tests.FooTests+BarTests.oh_my_god"
Oh, and these also end up not as nested but as siblings to all your other tests.
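
For reference, this is the kind of nesting I mean. NUnit runs it just fine, but the inner fixture gets the "+" name and ends up as a sibling in the tree:

    using NUnit.Framework;

    namespace Acme.Tests
    {
        [TestFixture]
        public class FooTests
        {
            // Intended as a logical sub-group of FooTests...
            [TestFixture]
            public class BarTests
            {
                // ...but listed as "Acme.Tests.FooTests+BarTests.oh_my_god",
                // next to, not under, the outer fixture's tests.
                [Test]
                public void oh_my_god()
                {
                    Assert.AreEqual(2, 1 + 1);
                }
            }
        }
    }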

3) Test names duplicate assertions, because assertions only give you the values

If you lean towards classicist, state-based testing this happens quite a lot: in order to give enough context to make a failing test immediately obvious, you either have to write a custom assert message or add something to your test name. Depending on your asserts and tests either one works, but both feel like a hack.

Since I'm on vacation and renovating, I spent a few hours watching the paint dry and hacking together an NUnit addin to tackle those issues. The result, henceforth called Cone, solves them for the majority of cases. The picture below should give you a feel for how it looks and feels. Things worth noting:

  • Fixture names are derived from the Described type, not from the fixture name
  • Underscores are translated to whitespace to aid scannability
  • The actual failing expression, "bowling.Score", is shown in the assertion message
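
To illustrate that last point, here's a toy version of the underlying trick. This is not Cone's actual code, just the general idea: take the assertion as an expression tree, so the failure message can include the expression itself and not only the values. Bowling is the same minimal stub as in the first sketch above.

    using System;
    using System.Linq.Expressions;
    using NUnit.Framework;

    namespace Acme.Tests
    {
        static class Verify
        {
            // Toy expression-based assert: on failure the message contains the
            // text of the expression tree, so you can see what was evaluated.
            // (The raw ToString output includes some closure noise that a real
            // implementation would want to clean up.)
            public static void That(Expression<Func<bool>> expr)
            {
                if (!expr.Compile()())
                    throw new AssertionException("Expected: " + expr.Body);
            }
        }

        [TestFixture]
        public class BowlingFeature
        {
            [Test]
            public void score_is_0_for_gutter_game()
            {
                var bowling = new Bowling();
                for (int i = 0; i != 20; ++i)
                    bowling.Roll(0);

                // A failure now mentions bowling.Score, not just the raw values.
                Verify.That(() => bowling.Score == 0);
            }
        }
    }
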
My initial experience has been very pleasant, but that could just be me being enamored with my own idea. That's where you come in. Please tell me what you think about the idea: is it worth pursuing further, packaging up properly and releasing? If so, are there any obvious use cases that you couldn't live without?

Wednesday, May 5, 2010

The Fallacy of the Ever-Green Build

The fallacy of the ever-green build is built on top of good intentions and hyperbole. Let's start by getting some preliminaries clear; these behaviors are strictly "not ok":
  • Checking in code that is known to be broken.
  • Not taking responsibility for fixing the build once it's broken, either by yourself or by asking for help.
  • Not helping a team-mate stuck on how to fix the build.
Given that these clearly defective behaviors aren't present, and assuming you have a normally stable build process and a decent build machine (the fastest machine affordable is good advice), then given the commandment "thou shalt never break the build", or "when you break the build God kills a kitten", most people don't respond with enlightenment; they figure out "good" ways to not break the build:
  • Writing fewer tests; tests that aren't there never break the build.
    • A prime example is not verifying the behavior of common components, or leaving them underspecified.
  • Delaying check-ins far longer than necessary "to make sure"; wasteful, since it squanders time that could have been spent on better things.
  • Not working; no work = no code = no broken build.
If you're using the fastest machine in the building to display a green light, you're not making good use of that investment. If, on the other hand, you use it to find valuable but seldom-occurring problems in a way that is less costly than the alternatives, then it's a good investment. That basically entails loosening up a bit on "run every test we've got! never break the build" in favor of "run the fast, high-coverage tests most likely to yield sufficient certainty about the soundness of your commit".

There's learning to be had from every build failure; the only thing you learn from "green" is that you're probably hiding problems. To be clear, I'm talking about a small, controlled amount of high-information build failures: missing files, or not bothering to spend a couple of minutes running the tests for the area you just changed, don't qualify.

Friday, January 8, 2010

Did I just μBDD in NUnit?

 

Have a look at this picture, what do you see?

uBDD-NUnit

 

Yup, that’s the old familiar “no frills, we mean business and are professional about our testing” looking NUnit GUI. But have a look at the test names: they look like some dynamic-typing loving hippie wrote them! What are they doing in my NUnit!?

The answer is, they’re lending a BDD-like Given/When/Then flavor to our old workhorse, in a way that’s fully compatible with and complementary to the standard NUnit model. Does it require magic and fairy dust? Nope, not really; just some creative use of the TestCaseSource attribute. The test fixture looks like this:

uBDD-Fixture
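
In case the image doesn't come through, here's a rough sketch of the shape of it. This is not the actual ~40-line Scenario class, just an illustration of the trick: a tiny helper that turns Given/When/Then steps into named TestCaseData items and feeds them to NUnit through TestCaseSource. The stack example is made up for the sketch.

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using NUnit.Framework;

    // Illustrative helper: each Then becomes a TestCaseData item whose name
    // spells out the full Given/When/Then sentence.
    public class Scenario : IEnumerable
    {
        readonly List<TestCaseData> thens = new List<TestCaseData>();
        readonly string context;

        public Scenario(string given, string when) {
            context = "Given " + given + ", when " + when;
        }

        public Scenario Then(string outcome, Action check) {
            thens.Add(new TestCaseData(check).SetName(context + ", then " + outcome));
            return this;
        }

        public IEnumerator GetEnumerator() { return thens.GetEnumerator(); }
    }

    [TestFixture]
    public class StackScenario
    {
        public static IEnumerable PushingAnItem
        {
            get
            {
                var stack = new Stack<int>();
                stack.Push(42);

                return new Scenario("an empty stack", "pushing an item")
                    .Then("count is 1", () => Assert.AreEqual(1, stack.Count))
                    .Then("peek returns the item", () => Assert.AreEqual(42, stack.Peek()));
            }
        }

        [Test, TestCaseSource("PushingAnItem")]
        public void Verify(Action check) { check(); }
    }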

GWT doesn’t always make sense, but when it does it can lead to some really communicative tests.

If you want the small (~40 lines) source for the Scenario class, just leave a comment.

Tuesday, November 17, 2009

Visual Resharper

I've been using Visual Studio 2010 Beta 2 at home for a while and it truly rocks. But something has been nagging me: there just seems to be so much stuff borrowed from everyone's favourite plugin, Resharper. And I believe I've found the reason. If you take a screenshot of the new logo and reverse the red and blue channels, something quite familiar appears.

Wednesday, November 4, 2009

How to reset registry and file permissions.

If you somehow manage to get into trouble with registry or file permissions and feel that the only sane way out is to reset them, here's the magic incantation to put into the black box:
secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose

Friday, July 10, 2009

Using NUnit with VS2010 Beta and .NET Framework 4.0

I've been test driving the Visual Studio 2010 Beta recently. It comes with, and defaults to, .NET Framework 4.0. Exciting stuff all around, until you realize that if you target the 4.0 Framework you end up with this when trying to run your tests. Let's call this less than helpful. Some googling turns up one solution: rebuild NUnit from source. Now, while that is a viable solution, you should never just go for the first one that enters your mind. After some pondering I came to think of the metadata storage signature definition present in all .NET assemblies, and how it actually contains the desired framework version.

Using your hex editor of choice (I like XVI32), simply open "nunit.exe" and search for "v2"; it should turn up something like the screenshot below:

Notice the "BSJB" just preceding the version string; that's the metadata signature, basically telling us we're in the right place. Now change "v2.0.50727" into "v4.0.20506", save, and start NUnit. It will now run under the 4.0 framework instead, happily running your tests.

Oh, and if you think that both rebuilding from source and hacking metadata are maybe not really "the right solution (tm)", you could just configure it instead.
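
For the record, the configuration route amounts to telling the loader which runtime nunit.exe should run on, via nunit.exe.config. Something along these lines should do it; the exact version string needs to match whatever your 4.0 beta install reports:

    <!-- nunit.exe.config: run NUnit itself on the 4.0 beta runtime -->
    <configuration>
      <startup>
        <supportedRuntime version="v4.0.20506" />
      </startup>
    </configuration>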

Sunday, July 5, 2009

Thinking in context.

There has been some general hubbub about establishing WIP limits in the Kanban community lately; some have gone so far as to claim that it is wasteful. Or more exactly, that the limits themselves are wasteful, since if you're mindful you can see the same bottlenecks without them. And theoretically I think that is true. But this is one of those times theory just won't help.

In my, not so humble, opinion, establishing WIP limits helps us the same way a budget does. It helps us set clear priorities and makes us think about our general goals. It also works as a clear leading indicator for when things are getting out of hand. One could make the claim that having a budget is waste, and if you're really disciplined I guess that is a viable option. But for me, it's not that I strictly need it, it's just way simpler than the alternatives.

Twitter via @hiranabe provided this gem: Kuroiwa-san(ex-Toyota mgr) concluded speech by emphasizing "Thinking for yourself in your context" is the heart of Lean

This is another very tangible positive effect of imposing limits: they help us establish context. The heart of lean and agile processes is thinking in context, and anything that helps us establish context faster and be present has a great positive impact on the speed of communication, thereby helping us improve, reflect, adjust and evaluate. That in turn helps us deliver more value faster, and sustain those improvements over time.

Thinking is not the key, thinking about the right stuff is. Establishing context is vital for that.