Tuesday, November 17, 2009

Visual Resharper

I've been using Visual Studio 2010 Beta 2 at home for a while and it really rocks. But something has been nagging me: there just seems to be so much stuff borrowed from everyone's favourite plugin, Resharper. And I believe I've found the reason. If you take a screenshot of the new logo and reverse the red and blue channels, something quite familiar appears.

Wednesday, November 4, 2009

How to reset registry and file permissions.

If you somehow manage to get into trouble with registry or file permissions and feel that the only sane way out would be to reset them, here's the magic incantation to put into the black box:
secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose

Friday, July 10, 2009

Using NUnit with VS2010 Beta and .NET Framework 4.0

I've been test driving Visual Studio 2010 Beta recently and it comes with, and defaults to, .NET Framework 4.0. Exciting stuff all around, until you realize that if you target the 4.0 Framework you end up with this when trying to run your tests. Let's call this less than helpful. Some googling turns up one solution: rebuild NUnit from source. Now, while that is a viable solution, you should never just go for the first solution that enters your mind. After some pondering I came to think of the metadata storage signature definition present in all .NET assemblies, and how it actually does contain the desired framework version.

Using your hex editor of choice (I like XVI32), simply open "nunit.exe" and search for "v2". It should turn up something like the screenshot below:

Notice the "BSJB" just preceding the version string; that's the metadata signature, basically telling us we're in the right place. Now change "v2.0.50727" into "v4.0.20506", save, and start NUnit. It will now run under the 4.0 framework instead, happily running your tests.

Oh, if you think that both rebuilding from source and hacking metadata are maybe not really "the right solution (tm)", you could just configure it instead.
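For the record, the configuration route means telling the runner which runtime to load via its .config file. Something along these lines in nunit.exe.config should do it; the version string v4.0.20506 is the same beta build number used in the metadata hack above:

```xml
<!-- nunit.exe.config (sketch): ask the loader for the 4.0 beta runtime -->
<configuration>
  <startup>
    <supportedRuntime version="v4.0.20506" />
  </startup>
</configuration>
```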

Sunday, July 5, 2009

Thinking in context.

There has been some general hubbub about establishing WIP limits in the Kanban community lately; some have gone so far as to claim that it is wasteful. Or, more exactly, that they are wasteful, since if you're mindful you can see the same bottlenecks without them. And theoretically I think that is true. But this is one of those times theory just won't help.

In my, not so humble, opinion establishing WIP limits helps us the same way budgets do. They help us set clear priorities and make us think about our general goals. They also work as a clear leading indicator for when things are getting out of hand. One could make the claim that having a budget is waste, and if you're really disciplined I guess that is a viable option. But for me, it's not that I strictly need it, it's just way simpler than the alternatives.

Twitter via @hiranabe provided this gem: Kuroiwa-san(ex-Toyota mgr) concluded speech by emphasizing "Thinking for yourself in your context" is the heart of Lean

This is another very tangible positive effect of imposing limits: they help us establish context. The heart of lean and agile processes is thinking in context; anything that helps us establish context faster and be present has a great positive impact on the speed of communication, thereby helping us improve, reflect, adjust and evaluate. That in turn helps us deliver more value faster, and sustain those improvements over time.

Thinking is not the key, thinking about the right stuff is. Establishing context is vital for that.

Tuesday, June 30, 2009

Ownership, Responsibility and Sharing.

I'm a big fan of collective code ownership. This is my attempt to clear up one common point of confusion: responsibility does not imply ownership. Making everyone responsible for the whole codebase most often fails miserably, for the same reason very few can keep any sufficiently large system wholly in their heads. If everyone is responsible, most often nobody, or a select few, is. My current thinking is that people should be responsible for a slice of functionality or a module, depending on your circumstances. But they're responsible, they don't own it. Anyone can, and should be encouraged to, change the code. The responsible guardian for that part should monitor changes, clarify conceptual misunderstandings, and know their part of the system in sufficient depth to be able to detect, at a structural and conceptual level, when duplication is creeping into other parts of the system. Having responsibility does not mean solely working on that part; it means shepherding it and making sure that peers adhere to agreed conventions.

I've seen tremendous benefits from this. It lets people get really good at one part of the system, giving them both pride and the ability to draw parallels to other parts at a much deeper level during pairing sessions and work on other parts. It avoids the "everyone knows almost nothing about everything" problem, thereby reducing waste due to relearning and rediscovery. It sets expectations and builds team spirit: if we know that someone will be checking our work we tend to take greater care, and no one wants to let a team-mate down. These effects augment conventional practices like pair programming in a very positive way.

Wednesday, May 27, 2009

Microsoft - You're doing it right!

I'm generally not known for being a big Microsoft fanboy. But I have to say that my recent experiences have left me happy. I've been using the F# CTP since it came out and am currently running the Visual Studio 2010 beta. Both are great products, but that's not what this post is about.

This post is about how pleased I am with their handling of submitted bugs. The F# team is extremely friendly, quick to respond, and equally good at providing updates in a timely manner. Maybe not shocking, given that it seems to be a small dedicated team. What mandates this post is that the VS2010 beta team also exhibits these qualities. Submitting a bug is easy and pain-free, and bugs seem to be handled in a very good manner, with timely updates as they flow through the process.

So all thumbs up for the awesome F# and VSEditor teams!

Friday, April 24, 2009

Fake - The future of .NET build tools?

Tired of XML based build systems?
Thought so. That's why I spent a few minutes hacking together the basis for my own build system. I call it Fake. And it looks like this:
let clean = task "Clean" (fun () ->
    Console.WriteLine("Cleaning."))

[<Default>]
let build =
    task "Build the lot" (clean => fun () ->
        Console.WriteLine("Building.."))

let loadTestData =
    task "Load some test data" (fun () ->
        Console.WriteLine("Loading test data..."))

let test =
    task "Test it" ([build; loadTestData] => fun () ->
        Console.WriteLine("Running Tests...."))
Then simply
x:\..\>fake test
Cleaning.
Building..
Loading test data...
Running Tests....
x:\..\>
Is this idea worthwhile? Tell me in the comments.

Tuesday, April 21, 2009

Team Foundation vs Subversion and Bazaar - Round 1: Update my workspace.

I usually work with Subversion or Bazaar but currently I'm on a project using Team Foundation Server. Today I got the silly idea of updating my workspace using the command line interface. Assuming that you are standing in the directory you want to update this task can be accomplished as follows:
Subversion: svn up
Team Foundation: tf get .\* /version:T /recursive
Now, one of these is sane; the other is completely insane. I won't tell you which is which.

Wednesday, April 8, 2009

RED - Re Evolutionary Development

Something is happening in developerland. If you put your ear to the tubes of the blogosphere you can hear a faint message. The Software Craftsmanship movement is gathering momentum with a simple message, and one of the high priests of XP has abdicated.

There's a new focus coming, and it cuts right through all excuses; the message is simple.
YOU are responsible.

No one else is going to give you the mandate to work in a fashion you know you really ought to, no one else is going to educate your peers for you, and no one else is going to fix your broken process. Yes, I know it's horrible. The business demands the impossible and your cow orkers are all a bunch of imbeciles. That's exactly why it's your problem. You're the only sane, educated, competent, levelheaded person around; it's your responsibility to do something about the madness.

We need to stop trying to blame everyone else for our problems, we need to stop discussing what's wrong with "other people" and actually start taking action. Every single day, strive to improve, learn and share!

I'll dub this "Re Evolutionary Development", or RED for short, and as all good philosophies it needs a set of principles. The first one simply is:

RED Principle #1

Each day ask yourself.
  • What did I learn?
  • What did I share?

Monday, March 9, 2009

The Care and Feeding of your Build - Stability.

Having an automated, fast, repeatable build provides the heartbeat of the project. Sadly it's often neglected: viewed as tedious to set up, boring to maintain, and the only time it actually does get any attention is when it doesn't work. Good build systems exhibit a few key characteristics; today I'm going to talk about one of the finer points, stability.

What is stability?

The stability of a build can be stated as: unless something changed, do nothing.
Or as Ant best practices item "14. Perform the Clean Build Test" puts it:

Assuming your buildfile has clean and compile targets, perform the following test. First, type ant clean. Second, type ant compile. Third, type ant compile again. The third step should do absolutely nothing. If files compile a second time, something is wrong with your buildfile.

Why stability?

Stability is important since it has a direct effect on the length of your build/test cycle. Any inefficiency introduced grows both with the project and with the number of team members. This means that unless you keep your build stable, every compile will cost you a small amount of time, for every team member; over time this adds up to a substantial amount.
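To see how fast this compounds, here's a back-of-the-envelope calculation; every number in it is my own assumption, picked only to illustrate the scale:

```shell
# Assumed: 30 wasted seconds per build, 20 builds a day per developer,
# 8 developers, 220 work days a year.
echo "$(( 30 * 20 * 8 * 220 / 3600 )) hours wasted per year"
# prints "293 hours wasted per year"
```

Tweak the numbers for your own team; even modest figures land in the hundreds of hours.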

Easy ways to fail.

There are a few ways that almost every build system I've worked with failed the stability test. The most common offenders I've found are:

Unconditional Post Build xcopy

It's often convenient, and sometimes necessary, to copy output files to some other directory. Often this is done to create smaller solutions/project files for the IDE; more stable projects are simply built and copied to a folder with precompiled binaries and referenced from there. This is a good strategy. The problem arises when the copy is unconditional, since this often forces a rebuild of all dependent projects even though the shared library did not change! If you're using Visual Studio, unless you have a really, really good reason, always use the "When the build updates the project output" option for "Run the post-build event:". If you're using Ant/NAnt/Rake and have a target that does compile+copy, always check before copying.

Unguarded build targets

Common offenders in this category are test/coverage targets, creation of installation packages, and "zip tasks"; care should always be taken to ensure that something actually did change before redoing these procedures. Often this can be accomplished by comparing timestamps for the source and destination files. If no tangible output is generated by default, it can make sense to introduce a marker file and touch it on completion of the task.
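As an illustration, a guarded zip target in Ant might look like this; the file and property names (dist.zip, src, zip.uptodate) are made up for the sketch:

```xml
<!-- Set zip.uptodate only when dist.zip is newer than everything under src. -->
<target name="check-zip">
  <uptodate property="zip.uptodate" targetfile="dist.zip">
    <srcfiles dir="src" includes="**/*"/>
  </uptodate>
</target>

<!-- The unless attribute skips the work when nothing changed. -->
<target name="zip" depends="check-zip" unless="zip.uptodate">
  <zip destfile="dist.zip" basedir="src"/>
</target>
```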

Summary

Keep your build fast by avoiding redundant work, take care to never do work unless something actually did change.

Saturday, March 7, 2009

Manifesto for Software Craftsmanship

Ever felt that there's something missing from the Agile Manifesto? Feeling left out in all the management fluff? Ever wondered, but where's the focus on my craft? I sure have.

There's an old, new movement on the rise: a movement for software craftsmanship. Sign the Manifesto for Software Craftsmanship and help us raise the bar.

Saturday, February 21, 2009

Pizza Points - story estimation made round!

There are two commonly used methods for agile estimation: Story Points and Ideal Programming Somethings, commonly days or hours. Both methods have their merits, with some bias towards Story Points (SP) from Mike Cohn and various other well-known names, although Ideal Programming Somethings seems to be more commonly used by the teams I've spoken to.

There seems to be quite a bit of confusion regarding how they relate, and how both terms relate to hours. To remedy some of this I want to propose a new system, Pizza Points, that is driven by an easy-to-explain, intuitive and rich metaphor.

Let's look at the similarities between work and pizza, as used in the following discussion.

  • Pizza is round, work tends to be circular.
  • Pizza can be filling and deeply satisfying, as can work.
  • Pizza comes in different sizes, as do stories and tasks.
  • Pizza can have lots of varying and interesting fillings; work can be filled with many interesting things.
  • As we mature we can eat more pizza and as we learn a domain and tools we can tackle bigger tasks.

Depending on the peculiarities of your favourite pizza parlour the size and form may vary: maybe you have children, normal and family; maybe the range is small, medium, large, and extra-large; often it's round and sometimes you get oddly square bites. There's no guarantee that a small pizza is the same size between two different places, and there's likewise no sense in assuming that a pizza point is equally sized between two teams. That said, if you stick to one place and keep your team intact, any given size will over time be quite consistent.

So how do we get started using Pizza Points?

We have to start by establishing some sort of baseline size, no different from the initial sizing of story points. Find a fairly small, easily graspable story, discuss the criteria for done, and label it "children", "small" or, why not, one. Continue estimation by thinking about the relative *size*, not filling, and give them descriptive names: standard, family, 2, 3, 5, 8, 13, 20, 40, 100, xxx-large.

It's as easy as that. The thing to remember is that size is actually a constant, but the filling might greatly influence how much we can eat. I like pizza and can eat quite a lot given toppings like different cheeses, ham or pineapple. Give me anchovies and you'll have me struggling an evening to come close to finishing even a child-size bite. The size hasn't changed; my aptitude and motivation did.

If you're the one placing orders and want to get as much pizza eaten as possible during any period of time, it can be wise to ask your team for their taste preferences. But real life sometimes dictates that we put anchovies on their plate; that's a big responsibility.

To summarize: think size, not filling; match fillings to team; expect size to vary depending on team. Also expect mature, adult teams to eat more than children.

And don't forget to order planning pizza as a reminder during long estimation sessions.

Monday, February 9, 2009

The story about TypeMock.

To mock or not. That's the question. Here's how I think some BDDers and Mockists labeled their Kool-Aid before drinking it.

Given TypeMock
When I want to test
Then everything looks like a mock object.

Tuesday, February 3, 2009

Pencil.Unit and Micro Lightweight Unit Testing

Joe Armstrong of Erlang fame has the following to say on how he writes unit tests: Micro Lightweight Unit Testing
Here's a condensed retrace of his steps using F# and Pencil.Unit:
Step 1) Write Micro Unit Test
Theory "Fib should work for known values from Wikipedia"
    [(0,0); (1, 1); (2, 1); (3, 2); (20, 6765)]
    (fun (n, e) -> Fib n |> Should Equal e)
Step 2) Implement Fib
let rec Fib = function
    | 0 -> 0
    | 1 -> 1
    | _ as n -> Fib(n - 1) + Fib(n - 2)
Step 3) Theorize about FastFib
Theory "FastFib should give same result as Fib"
    [0; 1; 2; 3; 25]
    (fun n -> FastFib n |> Should Equal (Fib n))
Step 4) Implement FastFib
let FastFib n =
    let rec loop n a b =
        match n with
        | 0 -> a
        | _ -> loop (n - 1) b (a + b)
    loop n 0 1

Monday, February 2, 2009

Taking Pencil.Unit for a spin, F# syntax highlighting.

Decided to take the current iteration of the testing code posted earlier for a spin by trying to actually build something useful with it. Since I'm yet to find a decent syntax highlighter that supports F# and doesn't generate utterly disgusting HTML, I decided to try to test and hack my way to one that at least suits my quite humble needs. The result was this:
(* Building a (very simple) syntax highlighter with Pencil.Unit *)
#light

open System
open System.Text
open System.IO
open Pencil.Unit

type Token =
    | Comment of string
    | Keyword of string
    | Preprocessor of string
    | String of string array
    | Text of string
    | WhiteSpace of string
    | NewLine
    | Operator of string

let Classify x =
    match x with
    | "abstract" | "and" | "as" | "assert"
    | "base" | "begin"
    | "class"
    | "default" | "delegate" | "do" | "done" | "downcast" | "downto"
    | "elif" | "else" | "end" | "exception" | "extern"
    | "false" | "finally" | "for" | "fun" | "function"
    | "if" | "in" | "inherit" | "inline" | "interface" | "internal"
    | "lazy" | "let"
    | "match" | "member" | "module" | "mutable"
    | "namespace" | "new" | "null"
    | "of" | "open" | "or" | "override"
    | "private" | "public"
    | "rec" | "return"
    | "static" | "struct"
    | "then" | "to" | "true" | "try" | "type"
    | "upcast" | "use"
    | "val" | "void"
    | "when" | "while" | "with"
    | "yield" -> Keyword x
    | _ when x.[0] = '#' -> Preprocessor x
    | _ -> Text x

let IsKeyword = function
    | Keyword _ -> true
    | _ -> false

let IsPreprocessor = function
    | Preprocessor _ -> true
    | _ -> false

Theory "Classify should support all F# keywords"

    ("abstract and as assert base begin class default delegate do done
    downcast downto elif else end exception extern false finally for
    fun function if in inherit inline interface internal lazy let
    match member module mutable namespace new null of open or
    override private public rec return static struct then to
    true try type upcast use val void when while with yield"
    .Split([|' ';'\t';'\r';'\n'|], StringSplitOptions.RemoveEmptyEntries))

    (fun x -> Classify x |> IsKeyword |> Should Equal true)

Fact "Classify should treat leading # as Preprocessor"
    (Classify "#light" |> IsPreprocessor |> Should Equal true)

let Tokenize (s:string) =
    let start = ref 0
    let p = ref 0
    let next() = p := !p + 1
    and hasMore() = !p < s.Length
    and sub() = s.Substring(!start, !p - !start)
    and current() = s.[!p]
    and prev() = s.[!p - 1]
    let peek() = if (!p + 1) < s.Length then
                    s.[!p + 1]
                 else
                    (char)0
    and isWhite() =
        match current() with
        | ' ' | '\t' -> true
        | _ -> false
    and isOperator() = "_(){}<>,.=|-+:;[]".Contains(string (current()))
    and isNewLine() = current() = '\r' || current() = '\n'
    let notNewline() = not (isNewLine())
    and notBlockEnd() = not(current() = ')' && prev() = '*')
    let inWord() = not (isWhite() || isNewLine() || isOperator())
    let read p eatLast =
        while hasMore() && p() do
            next()
        if eatLast then
            next()
        sub()
    let readWhite() = WhiteSpace(read isWhite false)
    and readNewLine() =
        next()
        if isNewLine() then
            next()
        NewLine
    and readWord() = Classify(read inWord false)
    and readOperator() = Operator(read isOperator false)
    and readString() =
        let isEscaped() = prev() = '\\'
        let inString() = isEscaped() || current() <> '\"'
        next()
        let s = read inString true
        String(s.Split([|'\r';'\n'|], StringSplitOptions.RemoveEmptyEntries))
    seq {
        while hasMore() do
            start := !p
            let token =
                match current() with
                | '\"' -> readString()
                | '/' when peek() = '/' -> Comment(read notNewline false)
                | '(' when peek() = '*' -> Comment(read notBlockEnd true)
                | _ when isWhite() -> readWhite()
                | _ when isOperator() -> readOperator()
                | _ when isNewLine() -> readNewLine()
                | _ -> readWord()
            yield token}

let ToString x =
    let encode = function
        | Comment _ -> "c"
        | Keyword _ -> "k"
        | Preprocessor _ -> "p"
        | String _ -> "s"
        | Text _ -> "t"
        | WhiteSpace _ -> "w"
        | NewLine -> "n"
        | Operator _ -> "o"
    x |> Seq.fold (fun (r:StringBuilder) -> encode >> r.Append) (StringBuilder())
    |> string

Fact "Tokenize should categorize"(
    Tokenize "#light let foo" |> ToString |> Should Equal "pwkwt")

Fact "Tokenize should handle string"(
    Tokenize "\"Hello World\"" |> ToString |> Should Equal "s")

Fact "Tokenize should split string into lines"(
    let lines = function
        | String x -> x
        | _ -> [||]
    Tokenize "\"Hello\r\nWorld\"" |> Seq.hd |> lines |> Seq.length |> Should Equal 2)

Theory "Tokenize should separate start on operators"
    ("_ ( ) { } < > [ ] , = | - + : ; .".Split([|' '|]))
    (fun x -> Tokenize x |> ToString |> Should Equal "o")

Fact "Tokenize should end on separators"(
    Tokenize "foo)" |> ToString |> Should Equal "to")

Fact "Tokenize should handle escaped char in string"(
    Tokenize "\"\\\"\"" |>  ToString |> Should Equal "s")

Fact "Tokenize should handle //line comment"(
    Tokenize "//line comment" |> ToString |> Should Equal "c")

Fact "Tokenize should handle (* block comments *)"(
    Tokenize "(* block comment )*) " |> ToString |> Should Equal "cw")

Fact "Tokenize should handle newline"(
    Tokenize "\r\n" |> ToString |> Should Equal "n")

Fact "Tokenize should separate whitespace and newline"(
    Tokenize "    \r\n" |> ToString |> Should Equal "wn")

let Sanitize (s:string) = s.Replace("&", "&amp;").Replace("<", "&lt;").Replace(" ", "&nbsp;")

Fact "Sanitize should replace < with &lt;"(
    Sanitize "<" |> Should Equal "&lt;")

Fact "Sanitize should replace & with &amp;"(
    Sanitize "&" |> Should Equal "&amp;")

Fact "Sanitize should replace ' ' with &nbsp;"(
    Sanitize " " |> Should Equal "&nbsp;")

type IHtmlWriter =
    abstract Literal : string -> unit
    abstract Span : string -> string -> unit
    abstract NewLine : unit -> unit

let HtmlEncode (w:IHtmlWriter) =
    let span style s =
        w.Span style s
    function
    | Comment x -> span "c" x
    | Keyword x -> span "kw" x
    | Preprocessor x -> span "pp" x
    | String x ->
        span "tx" x.[0]
        for i = 1 to x.Length - 1 do
            w.NewLine()
            span "tx" x.[i]
    | Operator x -> span "op" x
    | Text x | WhiteSpace x -> w.Literal x
    | NewLine -> w.NewLine()

let AsHtml s =
    let r = StringBuilder("<div class='f-sharp'>")
    let encode = HtmlEncode {new IHtmlWriter with
        member this.Literal s = r.Append(Sanitize s) |> ignore
        member this.Span c s = r.AppendFormat("<span class='{0}'>{1}</span>", c, Sanitize s) |> ignore
        member this.NewLine() = r.Append("<br>") |> ignore}
    Tokenize s |> Seq.iter encode
    string (r.Append("</div>"))

Fact "AsHtml sample"(
    let sample = "#light\r\nlet numbers = [1..10]"
    let expected =
        String.Concat [|"<div class='f-sharp'><span class='pp'>#light</span><br>"
        ;"<span class='kw'>let</span>&nbsp;numbers&nbsp;<span class='op'>=</span>&nbsp;"
        ;"<span class='op'>[</span>1<span class='op'>..</span>"
        ;"10<span class='op'>]</span></div>"|]
    sample |> AsHtml |> Should Equal expected)

//Render myself.
File.ReadAllText(__SOURCE_FILE__)
|> AsHtml |> (fun x -> File.WriteAllText(__SOURCE_FILE__ + ".html", x))
As output from itself. I really like how "Fact" and "Theory" turned out, and it seems to suit my current needs just fine.

Friday, January 30, 2009

Fact about Pizza.

I find this test as amusing as it is silly:
#light

open Pencil.Unit

Fact "Pizza should have cheese."
    ("Pizza" |> Should (Contain "Cheese"))
And the output:
Pizza should have cheese. Failed with "Pizza" doesn't contain "Cheese".

Craftsmanship over Crap

Uncle Bob makes a compelling case for adding one more value to the Agile Manifesto. Initially, and for effect, it was "Craftsmanship over Crap"; he later changed it to the less dramatic "Craftsmanship over Execution" and asked others if they could find an even better phrasing. Many good points were raised; for the full story look here. I've been mulling over this a bit, looking at the proposals and in the meantime hoping that my green wristband will guide me, but then it struck me. The original formulation is, from a clean code perspective, absolutely perfect, with one minor detail. It's not Crap the word, it's CRAP the acronym, and that's an honest mistake. Let's look a bit closer: CRAP is Coupled Redundant Arbitrary Duplication, and in essence that's exactly the antithesis of clean code. The marvelous thing about it is that even the acronym in itself exhibits a distinctly CRAP quality. Coupled, because it doesn't make sense without the other parts. Redundant, because it duplicates what crap is. Arbitrary, since it could be something else. Duplication is redundant.

Tuesday, January 27, 2009

Minimal Unit Tests in F#

It's easy to get caught up in always building bigger, cooler, more complex thingamajigs. Sometimes we forget our roots. Take unit testing for example: there are numerous frameworks and doodahs to facilitate it, but how slim could it be and still provide value? Over a cup of hot cocoa I decided to find out. Here's the result:
#light

namespace Pencil.Unit

open System
open System.Diagnostics
open System.Collections.Generic

type IMatcher =
    abstract Match<'a> : 'a -> 'a -> bool
    abstract Format<'a> : 'a -> 'a -> string

module Unit =
    let (|>) x f = let r = f x in r |> ignore; r //work-around for broken debug info.
    let mutable Count = 0
    let Errors = List<String>()
    let Should (matcher:IMatcher) = fun e a ->
        Count <- Count + 1
        if matcher.Match e a then
            Console.Write('.')
        else
            let frame = StackTrace(true).GetFrame(1)
            let trace = String.Format(" ({0}({1}))",
                frame.GetFileName(),
                frame.GetFileLineNumber())
            Console.Write('F')
            Errors.Add((matcher.Format e a) + trace)

    let Equal = {
        new IMatcher with
            member x.Match e a = a.Equals(e)
            member x.Format e a =
                String.Format("Expected:{0}, Actual:{1}", e, a)}
open Unit
//Tests goes here
2 * 4 |> Should Equal 8
2 + 1 |> Should Equal 2

//Report the result
Console.WriteLine("{2}{2}{0} tests run, {1} failed.", Count, Errors.Count, Environment.NewLine)
if Errors.Count <> 0 then
    Errors |> Seq.iter
        (fun e -> Console.WriteLine("{0}", e))
And the output from the above:
.F

2 tests run, 1 failed.
Expected:2, Actual:3 (F:\Pencil.Unit.fs(34))
Less than 40 lines of F# and we're on our way to unit testing goodness.

Monday, January 26, 2009

Why done should be "in use".

Done. Funny word, that. When is something done? Is it when it's checked into version control? When it passes all our automated acceptance tests? When QA says so? Or is it when we have real users deriving value from it? I would say the latter, based upon this simple observation. As a user, these things are all equal:

  • Feature not implemented.
  • Feature not deployed.
  • Don't know about the feature.
  • Can't find the feature.

The implication is that unless we figure out how to build, deploy and educate users, we simply haven't delivered business value. Scary, isn't it?

Thursday, January 15, 2009

Falling into the pit of success, by design.

Jeff doesn't care about resource cleanups. And I can agree in part, but the problem is that his solution, "forget it and write an article making fun of overzealous disposers", seems a bit, well, short sighted. The thing is that I agree on the point that it's silly that our best mainstream "solution" to disposal thus far is "using". Come on, the best we could do was invent sugar so that our try/finally blocks look prettier? Let us have a quick look at the problem using File instead of SqlConnections; the semantics are the same, but it's possible to illustrate something that people actually do with fewer lines using it. It seems that Jeff thinks that we should be fine with doing this:
public void JeffIsWingingIt()
{
    var target = File.CreateText("output.file");
    target.WriteLine("Very important stuff.");
}
The sad part here is that code like this often *seems* to be working, but in reality you'll quite often end up with truncated output and a long hunt to find the responsible party. So the correct version looks like this:
public void CorrectButNoFun()
{
    using(var target = File.CreateText("output.file2"))
        target.WriteLine("Very important stuff.");
}
Sadly that's not at all as appealing, but at least we get all the data written to disk and the file handle reclaimed in an orderly manner. But I would say the API is to blame in this case; it encourages us to do the wrong thing (forgetting to Dispose when done). It's quite easy to fix this and make it like this instead:
public void DesignedForSuccess()
{
    FileUtil.Write("output.file3",
        target => target.WriteLine("Very important stuff."));
}

static class FileUtil
{
    public static void Write(string path,
    Action<TextWriter> action)
    {
        using(var file = File.CreateText(path))
            action(file);
    }
}
Here resource handling and work are cleanly separated and abstracted. Using a similar approach you can easily and safely work with SqlConnection, SqlCommand, SqlDataReader and a huge variety of other error-prone but Disposable classes. Once and only once. Don't repeat yourself. It applies to resource handling too.

Fixing a bug, more than a local hack.

This is the true story of a bug, and the process of solving it. For reasons not relevant to the discussion I had the need to determine if a given 'Type' was part of the "System" namespace and hence could be considered a Framework type; just checking that the full name began with "System." was deemed good enough for our purpose. The implementation is obvious and self-explanatory:
static bool IsFrameworkType(Type type)
{
        return type.FullName.StartsWith("System.");
}
Simple, elegant and to the point. But since I said this was the story of a bug, something went astray. The thing is that under certain quite arcane circumstances "Type.FullName" can return null; the details aren't really important, the problem is that the code above crashes with a NullReferenceException if that happens. And that's important for me, since then I won't get to see the result of my program run. The quick, obvious fix is to check for null, as seen below:
static bool IsFrameworkType(Type type)
{
        var name = type.FullName;
        return name != null && name.StartsWith("System.");
}
And that's how the majority of bugs get "fixed": a local fix at the point of failure, a pat on the back for a now passing test, and away we dart to the next challenge. And that, I think, is part of the problem why software takes so long. If we step beyond our local focus, what can we do to both fix this bug and design safety in? We can reengineer our API to handle this gracefully. In languages with open classes we could fix our string API; in C# we have extension methods; and if we work in an environment without them, utility classes and free functions can aid us. So what's the root cause here? I would say that "StartsWith" is the culprit in this case. Because we know that we will have something to check against, "System.", but we're not sure that we have a target for our StartsWith call. For this situation "System.".IsStartOf would make more sense, since that way we know that we have an instance to start with. Using extension methods we arrive at this:
static class StringExtensions
{
    public static bool IsStartOf(this string prefix, string s)
    {
            return s != null && s.StartsWith(prefix);
    }
}

static bool IsFrameworkType(Type type)
{
        return "System.".IsStartOf(type.FullName);
}
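A self-contained sanity check of the extension method (the demo class and sample inputs are mine, not from the original post):

```csharp
using System;
using System.Collections.Generic;

static class StringExtensions
{
    public static bool IsStartOf(this string prefix, string s)
    {
        return s != null && s.StartsWith(prefix);
    }
}

class Demo
{
    static bool IsFrameworkType(Type type)
    {
        return "System.".IsStartOf(type.FullName);
    }

    static void Main()
    {
        // System.String has FullName "System.String":
        Console.WriteLine(IsFrameworkType(typeof(string))); // True
        // A generic type parameter has a null FullName, but no longer crashes:
        Console.WriteLine(IsFrameworkType(typeof(List<>).GetGenericArguments()[0])); // False
    }
}
```

Note how the null-FullName case now simply yields false instead of throwing.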
Hunting through the code-base I found a couple of other places where this could be used to solve similar problems. How often do you take this extra step to not only fix the problem locally but ponder if you can modify the API or system to prevent it from happening somewhere else? Make it part of your routine, and you're one "why" closer to a clean and productive code base. Further analysis also made it possible to ensure that all types that eventually got sent to this function indeed did have proper FullNames. But that's another story.

Saturday, January 10, 2009

How to write a good commit comment.

When using version control, one common problem area is the dreaded commit comment. Most teams and individuals seem to gravitate towards simply not writing them, which is really sad. My best advice to remedy this situation is really, really simple: don't write your commit comment after you've done your changes, write it *before* you start working. That way it's already done when you're done, and you get the added benefit of clearly articulating what you're currently working on. As Stephen Covey puts it, "Begin with the end in mind." It's true for highly effective people; it ought to be true for highly effective programmers.
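A minimal sketch of that workflow with git; the repository, file name, and commit message here are purely illustrative:

```shell
# Sketch of the write-the-message-first workflow (names are illustrative).
git init -q demo && cd demo
git config user.name "Example" && git config user.email "example@example.com"

# 1. Before starting, write down what you intend to do:
echo "Add input validation to the signup form" > commit-msg.txt

# 2. ...do the actual work...
echo "validated" > signup.txt
git add signup.txt

# 3. When you're done, the commit message is already written:
git commit -q -F commit-msg.txt
git log -1 --format=%s   # prints: Add input validation to the signup form
```

The same idea works with `git commit --template` (or the `commit.template` config option) if you prefer git to open your drafted message in the editor.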

Thursday, January 8, 2009

Clean Code and Mom

If you've ever seen code rot you know that few substances can so quickly go from shining examples of good design to festering, bug-infested piles that everyone is afraid to touch. Why does this happen, and what can we do to combat it?

There's much good advice out there for prevention; "Clean Code" by Robert C. Martin is a good starting point, and if things have already begun to stink have a look at Michael Feathers' "Working Effectively with Legacy Code". They're well worth your time, but since you're reading a blog I'm guessing that right now you're searching for a quick snack.

It's quite easy to explain how we manage to get ourselves into this mess: basically it's one portion human nature and an equal serving of wisdom from mom.

We follow examples.

As humans and developers we're extremely good at following examples; it's what we've evolved and trained to do. We learn to talk and behave by following the examples of our parents and peers, we acquire new skills by copying those who mastered them before us, and we invent new ideas, most of the time, by misunderstanding something we set out to copy. Based on this it's deceptively simple to conclude that, in theory, achieving and maintaining high code quality should simply be a matter of establishing a good set of practices and having people learn from and copy them. And as far as theory goes I'm quite sure it does actually hold true, apart from when it doesn't, and in practice that is most of the time.

Why does such a wonderful theory fail so miserably in practice? We could speculate that it's because we don't provide the right examples, or that people are ignorant of the wonderful code we've written and therefore don't emulate our flawless style. Or could it be that they simply lack the skill and aptitude for it? There's probably truth to be found in all of these, but I find it easier to lean on what mom used to tell me when I was little.

You become what you eat. Now, before you dismiss me by saying that this actually reinforces the previous point (which would be true), note that the underlying assumption in that dismissal is that we, on average, eat high-quality code. If that were the case, bad, stinking code with wtf/minute ratios approaching positive infinity would not be a problem. Basically, I think the core problem is that most of the code we work with on a daily basis, through maintenance and enhancement, is the code that didn't work right! In any given codebase, changes and work tend to cluster in the worst-written, badly designed pockets of crap ever to compile, and that's where we send our young and untrained. They come out of it producing sub-par code from the bad examples that have now become ingrained in their previously healthy minds; the examples they've seen are all terrible examples of what not to do. With experience we learn to see the difference: we become hardened and learn that the code we see most often is the example not to follow, we go on expeditions to find the code that has been quietly working and elegantly solving our needs without us even realizing it was there, and we learn to emulate that.

So how does this answer the question of how things can go downhill so fast? The answer is that on average people will produce code looking much like the code they last saw. Most of the code people see is horrible; that's why they're staring at it. And as the amount of bad code increases, so does the probability of seeing bad code.

How to break the cycle is left as an exercise for the reader.