
Tuesday, November 17, 2009
Visual Resharper

Wednesday, November 4, 2009
How to reset registry and file permissions.
secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose
Friday, July 10, 2009
Using NUnit with VS2010 Beta and .NET Framework 4.0
Using your hex editor of choice (I like XVI32), simply open "nunit.exe" and search for "v2"; it should turn up something like the screenshot below:

Oh, and if you think that both rebuilding from source and hacking metadata are maybe not really "the right solution (tm)", you could just configure it instead.
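For reference, the "configure it" route means telling nunit.exe that it's allowed to load the 4.0 runtime via its nunit.exe.config. Something along these lines should do; the exact version string is a guess on my part and has to match the beta build you actually have installed:
<!-- nunit.exe.config -->
<configuration>
  <startup>
    <!-- the build number differs between betas; check %windir%\Microsoft.NET\Framework -->
    <supportedRuntime version="v4.0.20506" />
  </startup>
</configuration>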
Sunday, July 5, 2009
Thinking in context.
There has been some general hubbub about establishing WIP limits in the Kanban community lately; some have gone so far as to claim that it is wasteful. Or more exactly, that they are wasteful, since if you're mindful you can see the same bottlenecks without them. And theoretically I think that is true. But this is one of those times theory just won't help.
In my, not so humble, opinion establishing WIP limits helps us the same way budgets do. It helps us set clear priorities and makes us think about our general goals. It also works as a clear leading indicator for when things are getting out of hand. One could make the claim that having a budget is waste, and if you're really disciplined I guess that is a viable option. But for me, it's not that I strictly need it, it's just way simpler than the alternatives.
Twitter via @hiranabe provided this gem: Kuroiwa-san(ex-Toyota mgr) concluded speech by emphasizing "Thinking for yourself in your context" is the heart of Lean
This is another very tangible positive effect of imposing limits: they help us establish context. The heart of lean and agile processes is thinking in context, and anything that helps us establish context faster and be present has a great positive impact on the speed of communication. Thereby helping us improve, reflect, adjust and evaluate. That in turn helps us deliver more value faster, and sustain those improvements over time.
Thinking is not the key, thinking about the right stuff is. Establishing context is vital for that.
Tuesday, June 30, 2009
Ownership, Responsibility and Sharing.
I've seen tremendous benefits from this: it lets people get really good at one part of the system, giving them both pride and the ability to, at a much deeper level, draw parallels to other parts during pairing sessions and work on other parts. It avoids the "everyone knows almost nothing about everything" problem, thereby reducing waste due to relearning and rediscovery. It sets expectations and builds team spirit; if we know that someone will be checking our work we tend to take greater care. And no one wants to let a team-mate down. These effects augment conventional practices like pair programming in a very positive way.
Thursday, June 25, 2009
Wednesday, May 27, 2009
Microsoft - You're doing it right!
I'm generally not known for being a big Microsoft fanboy. But I have to say that my recent experiences have left me happy. I've been using the F# CTP since it came out and am currently running the Visual Studio 2010 beta. Both are great products, but that's not what this post is about.
This post is about how pleased I am with their handling of submitted bugs. The F# team is extremely friendly, quick to respond and equally good at providing updates and responding in a timely manner. Maybe not shocking given that it seems to be a small, dedicated team. What mandates this post is that the VS2010 beta team also exhibits these qualities. Submitting a bug is easy and pain-free, and bugs seem to be handled in a very good manner, with timely updates as they flow through the process.
So all thumbs up for the awesome F# and VSEditor teams!
Friday, April 24, 2009
Fake - The future of .NET build tools?
Thought so. That's why I spent a few minutes hacking together the basis for my own build system. I call it Fake. And it looks like this:
Console.WriteLine("Cleaning."))
[<Default>]
let build =
task "Build the lot" (clean => fun () ->
Console.WriteLine("Building.."))
let loadTestData =
task "Load some test data" (fun () ->
Console.WriteLine("Loading test data..."))
let test =
task "Test it" ([build; loadTestData] => fun () ->
Console.WriteLine("Running Tests...."))
x:\..\>fake test
Cleaning.
Building..
Loading test data...
Running Tests....
x:\..\>
Is this idea worthwhile? Tell me in the comments.
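For the curious, the task and => combinators could be wired up roughly like this. This is a simplified sketch, not the actual implementation, and dependencies are always given as a list here:
// Hypothetical sketch only; the real combinators may look quite different.
open System
open System.Collections.Generic

type Task = { Name : string; Deps : Task list; Body : unit -> unit }

// Pair a list of prerequisite tasks with the action that should run after them.
let (=>) (deps : Task list) (body : unit -> unit) = (deps, body)

// Give a (dependencies, action) pair a name, turning it into a task.
let task name (deps, body) = { Name = name; Deps = deps; Body = body }

// Run a task after its prerequisites, executing each task at most once.
let run (t : Task) =
    let seen = HashSet<string>()
    let rec exec t =
        if seen.Add t.Name then
            t.Deps |> List.iter exec
            t.Body()
    exec t

let clean = task "Clean" ([] => fun () -> Console.WriteLine("Cleaning."))
let build = task "Build the lot" ([clean] => fun () -> Console.WriteLine("Building.."))
run build // prints "Cleaning." followed by "Building.."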
Tuesday, April 21, 2009
Team Foundation vs Subversion and Bazaar - Round 1: Update my workspace.
Subversion:       svn up
Team Foundation:  tf get .\* /version:T /recursive
Wednesday, April 8, 2009
RED - Re Evolutionary Development
There's a new focus coming and it cuts right through all excuses; the message is simple.
YOU are responsible.
No one else is going to give you the mandate to work in a fashion you know you really ought to, no one else is going to educate your peers for you, and no one else is going to fix your broken process. Yes, I know it's horrible. The business demands the impossible and your cow orkers are all a bunch of imbeciles. That's exactly why it's your problem. You're the only sane, educated, competent, levelheaded person around; it's your responsibility to do something about the madness.
We need to stop trying to blame everyone else for our problems, we need to stop discussing what's wrong with "other people" and actually start taking action. Every single day, strive to improve, learn and share!
I'll dub this "Re Evolutionary Development", or RED for short, and as all good philosophies it needs a set of principles. The first one simply is:
RED Principle #1
Each day, ask yourself:
- What did I learn?
- What did I share?
Monday, March 9, 2009
The Care and Feeding of your Build - Stability.
What is stability?
The stability of a build can be stated as: "Unless something changed, do nothing." Or as Ant best practices item "14. Perform the Clean Build Test" puts it:
Assuming your buildfile has clean and compile targets, perform the following test. First, type ant clean. Second, type ant compile. Third, type ant compile again. The third step should do absolutely nothing. If files compile a second time, something is wrong with your buildfile.
Why stability?
Stability is important since it has a direct effect on the length of your build/test cycle. Any inefficiency introduced grows both with the project and with the number of team members. This means that unless you keep your build stable, every compile will cost a small amount of time, for every team member; over time this adds up to a substantial amount.
Easy ways to fail.
There are a few ways that almost every build system I've worked with has failed the stability test. The most common offenders I've found are:
Unconditional Post Build xcopy
It's often convenient and sometimes necessary to copy output files to some other directory. Often this is done to create smaller solutions/project files for the IDE: more stable projects are simply built and copied to a folder with precompiled binaries and referenced from there. This is a good strategy. The problem arises when the copy is unconditional, since this often forces a rebuild of all dependent projects even though the shared library did not change! If you're using Visual Studio, unless you have a really, really good reason, always use the "When the build updates the project output" option for "Run the post-build event:".
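In the project file that IDE option corresponds, as far as I know, to the RunPostBuildEvent property; a sketch with made-up paths:
<!-- Sketch: project-file equivalent of the IDE option above; the copy target is made up. -->
<PropertyGroup>
  <RunPostBuildEvent>OnOutputUpdated</RunPostBuildEvent>
  <PostBuildEvent>xcopy /y "$(TargetPath)" "$(SolutionDir)lib\"</PostBuildEvent>
</PropertyGroup>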
Unguarded build targets
If you're using Ant/NAnt/Rake or similar, common offenders in this category are test/coverage targets, creation of installation packages and "zip tasks". Care should always be taken to ensure that something actually did change before redoing these procedures. Often this can be accomplished by comparing timestamps for the source and destination files. If no tangible output is generated by default, it can make sense to introduce a marker file and touch it on completion of the task.
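To make the idea concrete, here's a rough sketch of such a guard, assuming an F#-scripted build; all the names are made up:
// Hypothetical sketch of a change guard for a build step.
open System
open System.IO

// Run the action only when some source file is newer than the output file.
// For steps with no natural output, the output path doubles as a marker file.
let whenChanged (sources : string list) (output : string) (action : unit -> unit) =
    let newestSource = sources |> List.map File.GetLastWriteTimeUtc |> List.max
    let upToDate = File.Exists output && File.GetLastWriteTimeUtc output >= newestSource
    if not upToDate then
        action()
        if not (File.Exists output) then
            File.WriteAllText(output, "") // "touch" a marker when nothing else was produced

// Usage: repackage only when a source actually changed.
whenChanged ["src/a.fs"; "src/b.fs"] "out/package.zip" (fun () ->
    Console.WriteLine("Creating package..."))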
Summary
Keep your build fast by avoiding redundant work; take care to never do work unless something actually did change.
Saturday, March 7, 2009
Manifesto for Software Craftsmanship
There's an old, new, movement on the rise, a movement for software craftsmanship. Sign the Manifesto for Software Craftsmanship and help us raise the bar.
Saturday, February 21, 2009
Pizza Points - story estimation made round!
There seems to be quite a bit of confusion regarding how they relate and how both terms relate to hours. To remedy some of this I want to propose a new system, Pizza Points, that is driven by an easy-to-explain, intuitive and rich metaphor.
Let's look at the similarities between work and pizza as used in the following discussion.
- Pizza is round, work tends to be circular.
- Pizza can be filling and deeply satisfying, as can work.
- Pizza comes in different sizes, as do stories and tasks.
- Pizza can have lots of varying and interesting fillings, work can be filled with many interesting things.
- As we mature we can eat more pizza and as we learn a domain and tools we can tackle bigger tasks.
Depending on the peculiarities of your favourite pizza parlour the sizes and forms may vary: maybe you have children, normal and family, maybe the range is small, medium, large and extra-large; often it's round and sometimes you get oddly square bites. There's no guarantee that a small pizza is the same size between two different places, and there's likewise no sense in assuming that a pizza point is equally sized between two teams. That said, if you stick to one place and keep your team intact, any given size will over time be quite consistent.
So how do we get started using Pizza Points?
We have to start by establishing some sort of baseline size, no different from the initial sizing of story points. Find a fairly small, easily graspable story, discuss the criteria for done, and label it "children", "small" or, why not, one. Continue estimation by thinking about the relative *size*, not filling, and give the sizes descriptive names: standard, family, 2, 3, 5, 8, 13, 20, 40, 100, xxx-large.
It's as easy as that. The thing to remember is that size is actually a constant, but the filling might greatly influence how much we can eat. I like pizza and can eat quite a lot given toppings like different cheeses, ham or pineapple, for example. Give me anchovies and you'll have me struggling for an evening to come close to finishing even a child-size bite. The size hasn't changed; my aptitude and motivation did.
If you're the one placing orders and want to get as much pizza eaten as possible during any period of time it can be wise to ask your team for their taste preferences. But real life sometimes dictates that we put anchovies on their plate, that's a big responsibility.
To summarize, think size not filling, match fillings to team, expect size to vary depending on team. Also expect mature, adult, teams to eat more than children.
And don't forget to order planning pizza as a reminder during long estimation sessions.
Monday, February 9, 2009
The story about TypeMock.
Given TypeMock
When I want to test
Then everything looks like a mock object.
Tuesday, February 3, 2009
Pencil.Unit and Micro Lightweight Unit Testing
Here's a condensed retrace of his steps using F# and Pencil.Unit:
Step 1) Write Micro Unit Test
Theory "Fib should match the known values"
    [(0,0); (1, 1); (2, 1); (3, 2); (20, 6765)]
    (fun (n, e) -> Fib n |> Should Equal e)
Step 2) Make it pass
let rec Fib = function
    | 0 -> 0
    | 1 -> 1
    | _ as n -> Fib(n - 1) + Fib(n - 2)
Step 3) Write a test for a faster implementation
Theory "FastFib should agree with Fib"
    [0; 1; 2; 3; 25]
    (fun n -> FastFib n |> Should Equal (Fib n))
Step 4) Make it pass
let FastFib n =
    let rec loop n a b =
        match n with
        | 0 -> a
        | _ -> loop (n - 1) b (a + b)
    loop n 0 1
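Theory and Fact themselves aren't shown here. On top of the minimal Pencil.Unit core from the "Minimal Unit Tests in F#" post further down they could be as simple as this sketch; names and details are assumed, not the actual library code:
// Hypothetical sketch: Theory runs one assertion per data point,
// Fact merely labels an assertion that was already evaluated as its argument.
open System

let Theory (description : string) data assertion =
    Console.WriteLine(description)
    data |> Seq.iter assertion

let Fact (description : string) (assertion : unit) =
    Console.WriteLine(description)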
Monday, February 2, 2009
Taking Pencil.Unit for a spin, F# syntax highlighting.
#light
open System
open System.Text
open System.IO
open Pencil.Unit
type Token =
| Comment of string
| Keyword of string
| Preprocessor of string
| String of string array
| Text of string
| WhiteSpace of string
| NewLine
| Operator of string
let Classify x =
match x with
| "abstract" | "and" | "as" | "assert"
| "base" | "begin"
| "class"
| "default" | "delegate" | "do" | "done" | "downcast" | "downto"
| "elif" | "else" | "end" | "exception" | "extern"
| "false" | "finally" | "for" | "fun" | "function"
| "if" | "in" | "inherit" | "inline" | "interface" | "internal"
| "lazy" | "let"
| "match" | "member" | "module" | "mutable"
| "namespace" | "new" | "null"
| "of" | "open" | "or" | "override"
| "private" | "public"
| "rec" | "return"
| "static" | "struct"
| "then" | "to" | "true" | "try" | "type"
| "upcast" | "use"
| "val" | "void"
| "when" | "while" | "with"
| "yield" -> Keyword x
| _ when x.[0] = '#' -> Preprocessor x
| _ -> Text x
let IsKeyword = function
| Keyword _ -> true
| _ -> false
let IsPreprocessor = function
| Preprocessor _ -> true
| _ -> false
Theory "Classify should support all F# keywords"
("abstract and as assert base begin class default delegate do done
downcast downto elif else end exception extern false finally for
fun function if in inherit inline interface internal lazy let
match member module mutable namespace new null of open or
override private public rec return static struct then to
true try type upcast use val void when while with yield"
.Split([|' ';'\t';'\r';'\n'|], StringSplitOptions.RemoveEmptyEntries))
(fun x -> Classify x |> IsKeyword |> Should Equal true)
Fact "Classify should treat leading # as Preprocessor"
(Classify "#light" |> IsPreprocessor |> Should Equal true)
let Tokenize (s:string) =
let start = ref 0
let p = ref 0
let next() = p := !p + 1
and hasMore() = !p < s.Length
and sub() = s.Substring(!start, !p - !start)
and current() = s.[!p]
and prev() = s.[!p - 1]
let peek() = if (!p + 1) < s.Length then
s.[!p + 1]
else
(char)0
and isWhite() =
match current() with
| ' ' | '\t' -> true
| _ -> false
and isOperator() = "_(){}<>,.=|-+:;[]".Contains(string (current()))
and isNewLine() = current() = '\r' || current() = '\n'
let notNewline() = not (isNewLine())
and notBlockEnd() = not(current() = ')' && prev() = '*')
let inWord() = not (isWhite() || isNewLine() || isOperator())
let read p eatLast =
while hasMore() && p() do
next()
if eatLast then
next()
sub()
let readWhite() = WhiteSpace(read isWhite false)
and readNewLine() =
next()
if isNewLine() then
next()
NewLine
and readWord() = Classify(read inWord false)
and readOperator() = Operator(read isOperator false)
and readString() =
let isEscaped() = prev() = '\\'
let inString() = isEscaped() || current() <> '\"'
next()
let s = read inString true
String(s.Split([|'\r';'\n'|], StringSplitOptions.RemoveEmptyEntries))
seq {
while hasMore() do
start := !p
let token =
match current() with
| '\"' -> readString()
| '/' when peek() = '/' -> Comment(read notNewline false)
| '(' when peek() = '*' -> Comment(read notBlockEnd true)
| _ when isWhite() -> readWhite()
| _ when isOperator() -> readOperator()
| _ when isNewLine() -> readNewLine()
| _ -> readWord()
yield token}
let ToString x =
let encode = function
| Comment _ -> "c"
| Keyword _ -> "k"
| Preprocessor _ -> "p"
| String _ -> "s"
| Text _ -> "t"
| WhiteSpace _ -> "w"
| NewLine -> "n"
| Operator _ -> "o"
x |> Seq.fold (fun (r:StringBuilder) -> encode >> r.Append) (StringBuilder())
|> string
Fact "Tokenize should categorize"(
Tokenize "#light let foo" |> ToString |> Should Equal "pwkwt")
Fact "Tokenize should handle string"(
Tokenize "\"Hello World\"" |> ToString |> Should Equal "s")
Fact "Tokenize should split string into lines"(
let lines = function
| String x -> x
| _ -> [||]
Tokenize "\"Hello\r\nWorld\"" |> Seq.hd |> lines |> Seq.length |> Should Equal 2)
Theory "Tokenize should separate start on operators"
("_ ( ) { } < > [ ] , = | - + : ; .".Split([|' '|]))
(fun x -> Tokenize x |> ToString |> Should Equal "o")
Fact "Tokenize should end on separators"(
Tokenize "foo)" |> ToString |> Should Equal "to")
Fact "Tokenize should handle escaped char in string"(
Tokenize "\"\\\"\"" |> ToString |> Should Equal "s")
Fact "Tokenize should handle //line comment"(
Tokenize "//line comment" |> ToString |> Should Equal "c")
Fact "Tokenize should handle (* block comments *)"(
Tokenize "(* block comment )*) " |> ToString |> Should Equal "cw")
Fact "Tokenize should handle newline"(
Tokenize "\r\n" |> ToString |> Should Equal "n")
Fact "Tokenize should separate whitespace and newline"(
Tokenize " \r\n" |> ToString |> Should Equal "wn")
let Sanitize (s:string) = s.Replace("&", "&amp;").Replace("<", "&lt;").Replace(" ", "&nbsp;")
Fact "Sanitize should replace < with &lt;"(
Sanitize "<" |> Should Equal "&lt;")
Fact "Sanitize should replace & with &amp;"(
Sanitize "&" |> Should Equal "&amp;")
Fact "Sanitize should replace ' ' with &nbsp;"(
Sanitize " " |> Should Equal "&nbsp;")
type IHtmlWriter =
abstract Literal : string -> unit
abstract Span : string -> string -> unit
abstract NewLine : unit -> unit
let HtmlEncode (w:IHtmlWriter) =
let span style s =
w.Span style s
function
| Comment x -> span "c" x
| Keyword x -> span "kw" x
| Preprocessor x -> span "pp" x
| String x ->
span "tx" x.[0]
for i = 1 to x.Length - 1 do
w.NewLine()
span "tx" x.[i]
| Operator x -> span "op" x
| Text x | WhiteSpace x -> w.Literal x
| NewLine -> w.NewLine()
let AsHtml s =
let r = StringBuilder("<div class='f-sharp'>")
let encode = HtmlEncode {new IHtmlWriter with
member this.Literal s = r.Append(Sanitize s) |> ignore
member this.Span c s = r.AppendFormat("<span class='{0}'>{1}</span>", c, Sanitize s) |> ignore
member this.NewLine() = r.Append("<br>") |> ignore}
Tokenize s |> Seq.iter encode
string (r.Append("</div>"))
Fact "AsHtml sample"(
let sample = "#light\r\nlet numbers = [1..10]"
let expected =
String.Concat [|"<div class='f-sharp'><span class='pp'>#light</span><br>"
;"<span class='kw'>let</span> numbers <span class='op'>=</span> "
;"<span class='op'>[</span>1<span class='op'>..</span>"
;"10<span class='op'>]</span></div>"|]
sample |> AsHtml |> Should Equal expected)
//Render myself.
File.ReadAllText(__SOURCE_FILE__)
|> AsHtml |> (fun x -> File.WriteAllText(__SOURCE_FILE__ + ".html", x))
Friday, January 30, 2009
Fact about Pizza.
#light
open Pencil.Unit

Fact "Pizza should have cheese."
    ("Pizza" |> Should (Contain "Cheese"))
And the output:
Pizza should have cheese. Failed with "Pizza" doesn't contain "Cheese".
Craftsmanship over Crap
Tuesday, January 27, 2009
Minimal Unit Tests in F#
#light
namespace Pencil.Unit
open System
open System.Diagnostics
open System.Collections.Generic

type IMatcher =
    abstract Match<'a> : 'a -> 'a -> bool
    abstract Format<'a> : 'a -> 'a -> string

module Unit =
    let (|>) x f = let r = f x in r |> ignore; r //work-around for broken debug info.
    let mutable Count = 0
    let Errors = List<String>()
    let Should (matcher:IMatcher) = fun e a ->
        Count <- Count + 1
        if matcher.Match e a then
            Console.Write('.')
        else
            let frame = StackTrace(true).GetFrame(1)
            let trace = String.Format(" ({0}({1}))", frame.GetFileName(), frame.GetFileLineNumber())
            Console.Write('F')
            Errors.Add((matcher.Format e a) + trace)
    let Equal =
        { new IMatcher with
            member x.Match e a = a.Equals(e)
            member x.Format e a = String.Format("Expected:{0}, Actual:{1}", e, a)}

open Unit

//Tests goes here
2 * 4 |> Should Equal 8
2 + 1 |> Should Equal 2

//Report the result
Console.WriteLine("{2}{2}{0} tests run, {1} failed.", Count, Errors.Count, Environment.NewLine)
if Errors.Count <> 0 then
    Errors |> Seq.iter (fun e -> Console.WriteLine("{0}", e))
And the output from the above:
.F
2 tests run, 1 failed.
Expected:2, Actual:3 (F:\Pencil.Unit.fs(34))
Less than 40 lines of F# and we're on our way to unit testing goodness.
Monday, January 26, 2009
Why done should be "in use".
Thursday, January 15, 2009
Falling into the pit of success, by design.
public void JeffIsWingingIt()
{
    var target = File.CreateText("output.file");
    target.WriteLine("Very important stuff.");
}
The sad part here is that code like this often *seems* to be working, but in reality you'll quite often end up with truncated output and a long hunt to find the responsible party. So the correct version looks like this:
public void CorrectButNoFun()
{
    using(var target = File.CreateText("output.file2"))
        target.WriteLine("Very important stuff.");
}
Sadly that's not at all as appealing, but at least we get all the data written to disk and the file handle reclaimed in an orderly manner. I would say the API is to blame in this case: it encourages us to do the wrong thing (forgetting to Dispose when done). It's quite easy to fix this and make it look like this instead:
public void DesignedForSuccess()
{
    FileUtil.Write("output.file3", target => target.WriteLine("Very important stuff."));
}

static class FileUtil
{
    public static void Write(string path, Action<TextWriter> action)
    {
        using(var file = File.CreateText(path))
            action(file);
    }
}
Here resource handling and work are cleanly separated and abstracted. Using a similar approach you can easily and safely work with SqlConnection, SqlCommand, SqlDataReader and a huge variety of other error-prone but Disposable classes. Once and only once. Don't repeat yourself. It applies to resource handling too.
Fixing a bug, more than a local hack.
static bool IsFrameworkType(Type type)
{
    return type.FullName.StartsWith("System.");
}
Simple, elegant and to the point. But since I said this was the story of a bug, something went astray. The thing is that under certain quite arcane circumstances "Type.FullName" can return null; the details aren't really important. The problem is that the code above crashes with a NullReferenceException if that happens. And that's important for me, since then I won't get to see the result of my program run. The quick, obvious fix is to check for null, as seen below:
static bool IsFrameworkType(Type type)
{
    var name = type.FullName;
    return name != null && name.StartsWith("System.");
}
And that's how the majority of bugs get "fixed": a local fix at the point of failure, a pat on the back for a now passing test, and away we dart to the next challenge. And that, I think, is part of the problem of why software takes so long. If we keep our local focus, what can we do to both fix this bug and design safety in? We can reengineer our API to handle this gracefully. In languages with open classes we could fix our string API; in C# we have extension methods, and if we work in an environment without them, utility classes and free functions can aid us. So what's the root cause here? I would say that "StartsWith" is the culprit in this case, because we know that we will have something to check against ("System."), but we're not sure that we have a target for our StartsWith call. For this situation "System.".IsStartOf would make more sense, since that way we know that we have an instance to start with. Using extension methods we arrive at this:
static class StringExtensions
{
    public static bool IsStartOf(this string prefix, string s)
    {
        return s != null && s.StartsWith(prefix);
    }
}

static bool IsFrameworkType(Type type)
{
    return "System.".IsStartOf(type.FullName);
}
Hunting through the code-base I found a couple of other places where this could be used to solve similar problems. How often do you take this extra step to not only fix the problem locally but ponder whether you can modify the API or system to prevent it from happening somewhere else? Make it part of your routine, and you're one "why" closer to a clean and productive code base. Further analysis also made it possible to ensure that all types that eventually got sent to this function indeed did have proper FullNames. But that's another story.
Saturday, January 10, 2009
How to write a good commit comment.
Thursday, January 8, 2009
Clean Code and Mom
If you've ever seen code rot you know that few substances can so quickly go from shining examples of good to festering, bug-infested piles that everyone is afraid to touch. Why does this happen and what can we do to combat it?
There's much good advice out there for prevention; "Clean Code" by Robert C. Martin is a good starting point, and if things have already begun to stink have a look at Michael Feathers' "Working Effectively With Legacy Code". They're well worth your time, but since you're reading a blog I'm guessing that right now you're searching for a quick snack.
It's quite easy to explain how we manage to get ourselves into this mess, and basically it's one portion human nature and an equal serving of wisdom from mom.
We follow examples.
As humans and developers we're extremely good at following examples; it's what we've evolved and trained to do. We learn to talk and behave by following the examples of our parents and peers, we acquire new skills by copying those who mastered them before us, and we invent new ideas most of the time by misunderstanding something we set out to copy. Based on this it's deceptively simple to conclude that, in theory, achieving and maintaining high code quality should simply be a matter of establishing a good set of practices and having people learn from and copy them. And as far as theory goes I'm quite sure that it does actually hold true, apart from when it doesn't, and in practice that is most of the time.
Why does such a wonderful theory fail so miserably in practice? We could speculate that it's because we don't provide the right examples, or that people are ignorant of the wonderful code we've written and therefore don't emulate our flawless style. Or could it be that they simply lack the skill and aptitude for it? There's probably truth to be found in all of these, but I find it easier to lean on what mom used to tell me when I was little.
You become what you eat. Now, before you dismiss me by saying that this actually reinforces the previous point (which would be true), the underlying assumption in that dismissal would be that we on average eat high quality code. If that was the case, bad stinking code with wtf/minute ratios approaching positive infinity would not be a problem. Basically I think the core problem is that most of the code we work with on a daily basis, through maintenance and enhancement, is the code that didn't work right! In any given codebase, changes and work tend to cluster in the worst written, badly designed pockets of crap to ever compile, and that's where we send our young and untrained. They come out of it producing sub-par code from the bad examples that have now ingrained themselves in their previously healthy minds; the examples they've seen are all terrible examples of what not to do. With experience we can learn to see the difference; we become hardened and learn that the code we see most often is the example not to follow, we go on expeditions to find the code that has been working and elegantly solving our needs without us even realizing it was there, and we learn to emulate that.
So how does this answer the question of how things can go downhill so fast? The answer is that on average people will produce code looking quite a lot like what they last saw. Most of the code people see is horrible; that's why they're staring at it. As the amount of bad code increases, so does the probability of seeing bad code.
How to break the cycle is left as an exercise for the reader.