Historical Debugger and Test Impact Analysis in Visual Studio Team System 2010

I was browsing through some of the long-awaited Visual Studio 2010 features and found a video about two of them that relate to what we do every day (and I do not mean coding ;))

The first feature is related to debugging “history”. Normally, when you find a bug while running your program, you insert breakpoints on the statements you suspect might have caused it, then repeat the scenario or restart the debug session.

With historical debugging you don’t need to do so: the historical debugger “logs” every event in your application, so you can “pause” execution, inspect all the previous events, and maybe check whether any exceptions were “swallowed”.

The second feature helps when you want to modify some “unit-tested” code, either for bug fixing or for refactoring, and then re-run the impacted tests. Normally it is hard to determine which tests are affected, so you run all of them; with Test Impact Analysis in VS 2010 you get the list of impacted tests as soon as you save your modifications, so you can decide whether to commit your changes or undo them.

Here is the 19-minute video showing these two features on Channel 9:

Historical Debugger and Test Impact Analysis in Visual Studio Team System 2010

Duck Typing

While browsing the net, I found that Typemock Isolator 5.1.1 has been released. It is a mocking framework that I should try one day.

What I’m interested in is its ability to “swap” calls between real and fake objects even if they don’t implement the same interface, like this:

var duck = new Duck();   // The real, untestable class, with a Talk() method
var dummy = new Dummy(); // The fake, testable class, with its own Talk() method
Isolate.Swap.CallsOn(duck).WithCallsTo(dummy);
Assert.AreEqual("Quack", duck.Talk()); // Calls on the duck now go to the dummy
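In a dynamic language you can get a crude version of this swap by hand, simply by replacing the method at runtime. Typemock does it transparently for .NET code; the Python sketch below, with made-up Duck and Dummy classes, only illustrates the idea:

```python
class Duck:
    """The real class, whose talk() we want to avoid in a test."""
    def talk(self):
        return "real quack from the untestable duck"

class Dummy:
    """The fake, testable class, with its own talk() method."""
    def talk(self):
        return "Quack"

duck = Duck()
# Redirect calls on this duck instance to the dummy's implementation.
duck.talk = Dummy().talk

print(duck.talk())  # Quack
```

The two classes share no interface or base class; the swap works anyway because the call site only looks the method up by name.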

I guess the Isolate class is “borrowed” from Ruby, but anyway, it makes testing easier.

This is made possible by applying the concept of “Duck Typing”, which takes its name from a saying attributed to James Whitcomb Riley:


If it walks like a duck and quacks like a duck, I would call it a duck


So, regardless of the full set of properties and methods of the dummy and the real duck, the dummy can be considered a duck if it satisfies the properties and methods of a real duck that matter in the current context. In other words, it does not have to inherit from Duck, nor reproduce all the behavior of a real duck; it only has to provide what we need right now [Talk()].
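The concept is easiest to see in a dynamically typed language. Here is a minimal Python sketch (the classes and the make_it_talk() helper are made up for illustration): the dummy counts as “a duck” only because it has the one method this context needs.

```python
class Duck:
    def talk(self):
        return "Quack"

class Dummy:
    """Shares no base class or interface with Duck."""
    def talk(self):
        return "Quack"
    def recharge(self):  # extra behavior a real duck lacks
        return "recharging..."

def make_it_talk(anything):
    # No type check here: whatever has a talk() method is "a duck".
    return anything.talk()

print(make_it_talk(Duck()))   # Quack
print(make_it_talk(Dummy()))  # Quack
```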

Duck typing is a principle of Dynamic Typing (which I plan to blog about next), and it is heavily used in dynamic languages like Python and Ruby.

It is sometimes used in static languages too, maybe more often than we expect. Here is a blog post about Duck Notation in C#, in which we find out that iterators in a foreach statement use duck typing. Also, in Boo you can declare a variable as duck.
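Python’s for loop relies on the same kind of duck typing that the foreach post describes for C#: it never checks for a specific interface, it just calls __iter__() on whatever object it is given. A small sketch (the Countdown class is made up):

```python
class Countdown:
    """Inherits from nothing special, yet works in a for loop."""
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        # The for loop only cares that this method exists.
        n = self.start
        while n > 0:
            yield n
            n -= 1

print(list(Countdown(3)))  # [3, 2, 1]
```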

Currently there is an open source project to enable duck typing in C#, and similar support is planned for the upcoming C# 4.0 (through the dynamic keyword).

I have failed a sprint!

Welcome back everyone. I have been busy for the last month working hard to achieve a substantial success in this sprint, as it had special value to me: first, because it came after a long idle period, and second, because we had just resolved some external issues that held us back in the previous sprints.

At the beginning of the sprint we moved with high velocity, without any overtime and without affecting quality, which impressed me. Later, some tasks required more effort than expected, so we started losing velocity. We were still in line with the optimal burn-down, but not for long.

I was still convinced that the remaining tasks could be squeezed into the final few days of the sprint, so we kept up the hard work, crossing our fingers to finish on time. Unfortunately, we had to extend the sprint by one day, drop some features, and jeopardize quality, and finally (as expected) the final build was rejected.

It is hard to admit failure, but the bitterness of failure is the medicine that heals your weaknesses. Our sprint retrospective was really fruitful and full of lessons learned, so here are some:

  1. Pairing on tasks does not always divide the effort (the famous example: nine women cannot give birth to a baby in one month).
  2. Instead of pairing, you can divide tasks into smaller ones, which has several benefits: more accurate estimation, a checklist that defines what is meant by “done”, and fewer dependencies between tasks.
  3. It is better to be pessimistic in your estimation until your team proves otherwise than to commit to what you are not 100% sure you can achieve.

And the personal lesson I learned was:

  1. It feels good to be Superman, but it feels worse when you fail to prove it.

Random Data Generator Tool

Have you ever needed to fill a table with random data and started to get bored after the 10th or 20th row?

And have you tried entering rubbish data just to save time, because you cannot come up with meaningful names for 50 or 100 people? (You could use your friends’ names, but that won’t be quick either, and what if you need their phone numbers too? 🙂)

Here is a quick and easy tool that can generate full tables with as many rows as you need and whatever number and type of columns you want. The output can be HTML, XML, Excel, CSV, or SQL INSERT statements.

http://www.generatedata.com/#generator

Best of all, it is totally free.
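And if you only need a handful of quick SQL inserts and would rather script them yourself, the same idea takes a few lines of Python (the Person table and the name lists below are made up for illustration):

```python
import random

FIRST_NAMES = ["Alice", "Omar", "Mona", "John", "Sara"]
LAST_NAMES = ["Smith", "Hassan", "Lee", "Brown", "Diab"]

def random_person_insert(row_id):
    """Build one random INSERT statement for a hypothetical Person table."""
    name = f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}"
    phone = "".join(random.choice("0123456789") for _ in range(10))
    return (f"INSERT INTO Person (Id, Name, Phone) "
            f"VALUES ({row_id}, '{name}', '{phone}');")

for i in range(1, 6):
    print(random_person_insert(i))
```

Of course, the online tool handles far more column types and output formats; this sketch is only for the simplest cases.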