C# 4.0: What’s next

Since I have talked before about Visual Studio 2010 and Dynamic Typing, it is now time to talk about the upcoming C# 4.0.

To understand how C# evolved and what drives the design of C# 4.0, let's get a quick overview of the previous versions.

In 1998, C# began as a simple, modern, object-oriented, and type-safe programming language for the .NET platform. Since its launch in the summer of 2000, C# has become one of the most popular programming languages in use today.

With version 2.0 the language evolved to provide support for generics, anonymous methods, iterators, partial types, and nullable types.

When designing version 3.0 of the language the emphasis was to enable LINQ (Language Integrated Query) which required the addition of:

  • Implicitly Typed Local Variables.
  • Extension Methods.
  • Lambda Expressions.
  • Object and Collection Initializers.
  • Anonymous types.
  • Implicitly Typed Arrays.
  • Query Expressions and Expression Trees.

Although C# is an object-oriented programming language, C# 3.0 included some capabilities of functional programming to enable LINQ.

In version 4.0 the C# programming language continues to evolve, although this time the C# team was inspired by dynamic languages such as Perl, Python, and Ruby.

Another paradigm that is driving language design and innovation is concurrency and that is a paradigm that has certainly influenced the development of Visual Studio 2010 and the .NET Framework 4.0.

Essentially the C# 4.0 language innovations include:

  • Dynamically Typed Objects.
  • Optional and Named Parameters.
  • Improved COM Interoperability.
  • Safe Co-variance and Contra-variance.

To demonstrate these features, I'll borrow some examples posted by Doug Holland in his blog post The C# Programming Language Version 4.0.

Dynamically Typed Objects

In C# today you might need to get an instance of a class and then call the Add method on that class to get the sum of two integers:

Calculator calc = GetCalculator();
int sum = calc.Add(10,20);

Our code gets all the more interesting if the Calculator class is not statically typed but rather is written in COM, Ruby, Python, or even JavaScript. Even if we knew that the Calculator class is a .NET object but didn't know specifically which type it is, we would have to use reflection to discover attributes about the type at runtime and then dynamically invoke the Add method.

object calc = GetCalculator();
Type type = calc.GetType();
object result = type.InvokeMember("Add", BindingFlags.InvokeMethod, null,
    new object[] { 10, 20 });
int sum = Convert.ToInt32(result);

With C# 4.0 we would simply write the following code:

dynamic calc = GetCalculator();
int sum = calc.Add(10,20);

In the above example we are declaring a variable, calc, whose static type is dynamic. We then use dynamic method invocation to call the Add method, and a dynamic conversion to convert the result of the invocation to a statically typed integer.
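To make this runnable on its own, here is a minimal sketch; the Calculator class and the GetCalculator helper are hypothetical stand-ins, since the original snippet doesn't define them:

```csharp
using System;

// Hypothetical stand-in for the calculator object used in the snippets above.
public class Calculator
{
    public int Add(int x, int y) { return x + y; }
}

public static class Program
{
    // Returns the calculator as object, so its static type is unknown to the caller.
    public static object GetCalculator() { return new Calculator(); }

    public static void Main()
    {
        dynamic calc = GetCalculator();   // member resolution is deferred to runtime
        int sum = calc.Add(10, 20);       // dynamic invocation, then dynamic conversion to int
        Console.WriteLine(sum);           // prints 30
    }
}
```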

Optional and Named Parameters

Another major benefit of using C# 4.0 is that the language now supports optional parameters (like C++ default arguments) and named parameters.

One design pattern you'll often see is that a particular method is overloaded because it needs to be called with a variable number of parameters.

In C# 4.0 a method can be refactored to use optional parameters as the following example shows:

public StreamReader OpenTextFile(string path, Encoding encoding = null,
    bool detectEncoding = false, int bufferSize = 1024) { }

Given this declaration it is now possible to call the OpenTextFile method omitting one or more of the optional parameters.

OpenTextFile("foo.txt", Encoding.UTF8);

It is also possible to use C# 4.0's support for named parameters, so the OpenTextFile method can be called omitting one or more of the optional parameters while specifying another parameter by name.

OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 1024);

Named arguments must come after any positional arguments, although among themselves they can appear in any order.
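As a self-contained illustration, here is a minimal sketch using a hypothetical Describe method (not from the original post) with both optional and named parameters:

```csharp
using System;

public static class Program
{
    // Optional parameters get their default values in the declaration.
    public static string Describe(string name, int age = 0, string city = "Unknown")
    {
        return name + "/" + age + "/" + city;
    }

    public static void Main()
    {
        // Positional call, omitting both optional parameters.
        Console.WriteLine(Describe("Alice"));                       // Alice/0/Unknown

        // Named arguments, supplied in any order after the positional one.
        Console.WriteLine(Describe("Bob", city: "Cairo", age: 30)); // Bob/30/Cairo
    }
}
```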

Improved COM Interoperability

When working with COM interop methods, you had to pass a reference to Missing.Value for unneeded parameters, like the following:

object filename = "test.docx";
object missing = System.Reflection.Missing.Value;

doc.SaveAs(ref filename,
    ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing,
    ref missing, ref missing, ref missing);

Now, in C# 4.0, you can simply write:

doc.SaveAs(filename);

Notice that you are able to omit the ref modifier here, although the ref modifier is still required when not performing COM interoperability.

Previously it was also necessary to ship a Primary Interop Assembly (PIA) along with your managed application. This is no longer necessary with C# 4.0, because the compiler instead embeds the interop types directly into the assemblies of your managed application, and it embeds only the types you're actually using, not all of the types found within the PIA.

Safe Co-variance and Contra-variance

Co-variance means that a more derived type can be used where a less derived type is expected, while contra-variance means that a less derived type can be used where a more derived type is expected. When neither is possible, the type is said to be invariant.

Since version 1.0, arrays in .NET have been co-variant, meaning that an array of string can be assigned to an array of object.

string[] strings = new string[] { "Hello" };
object[] objects = strings;

Unfortunately, arrays in .NET are not safely co-variant. Since the objects variable actually refers to an array of string, the following will succeed:

objects[0] = "Hello World";

However, if an attempt is made to assign an integer to the objects array, an ArrayTypeMismatchException is thrown at runtime:

objects[0] = 1024;
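Putting the above together, here is a minimal, self-contained sketch of that runtime check, using the same values as the snippets above:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        string[] strings = new string[] { "Hello" };
        object[] objects = strings;   // legal: array co-variance

        objects[0] = "Hello World";   // fine: a string stored into a string[]

        try
        {
            objects[0] = 1024;        // a boxed int into a string[]: checked at runtime
        }
        catch (ArrayTypeMismatchException)
        {
            Console.WriteLine("ArrayTypeMismatchException thrown");
        }
    }
}
```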

To achieve safety, generics in C# 2.0 and C# 3.0 are invariant, so a compiler error results from the following code:

List<string> strings = new List<string>();
List<object> objects = strings; // compiler error

For the Liskov Substitution Principle (LSP) to be achieved, methods of a derived class need to have contra-variant inputs and co-variant outputs.

Generics with C# 4.0 now support safe co-variance and contra-variance through the use of the in (contra-variant) and out (co-variant) contextual keywords. Let’s take a look at how this changes the definition of the IEnumerable<T> and IEnumerator<T> interfaces.

public interface IEnumerable<out T>
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<out T>
{
    T Current { get; }
    bool MoveNext();
}

Given that an IEnumerable<T> collection is read-only (there is no ability specified within the interface to insert new elements), it is safe to treat a more derived element as something less derived. With the out contextual keyword we are contractually affirming that IEnumerable<out T> is safely co-variant. We can now write the following code without a compiler error:

IEnumerable<string> strings = GetStrings();
IEnumerable<object> objects = strings;

Using the in contextual keyword we can achieve safe contra-variance; that is, treating something less derived as something more derived.

public interface IComparer<in T>
{
    int Compare(T x, T y);
}

Given that IComparer<in T> is safely contra-variant we can now write the following code:

IComparer<object> objectComparer = GetComparer();
IComparer<string> stringComparer = objectComparer;

It is important to notice that co-variance in IEnumerable<object> refers to the fact that its Current property can return a string instead of an object as output, while contra-variance in IComparer<string> refers to the fact that its Compare method can accept an object instead of a string as input.
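The same in and out annotations were also applied to the framework's generic delegate types in .NET 4.0, so variance works with delegates too. A minimal sketch using the built-in Func and Action delegates:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // Co-variance on the return type: a delegate returning string
        // can be used where a delegate returning object is expected.
        Func<string> getString = () => "Hello";
        Func<object> getObject = getString;
        Console.WriteLine(getObject());          // Hello

        // Contra-variance on the parameter type: a delegate taking object
        // can be used where a delegate taking string is expected.
        Action<object> printObject = o => Console.WriteLine(o);
        Action<string> printString = printObject;
        printString("World");                    // World
    }
}
```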


Agile and CMMI

I wanted to talk about this issue a long time ago, but I didn’t have much to share with you.

My first trigger for this post came a few months ago, when the Software Engineering Institute (SEI) of Carnegie Mellon University published a report under the name CMMI or Agile: Why Not Embrace Both!, which is a confirmation by the CMMI authors that you can satisfy both.

The report summarizes the history of both paradigms and why CMMI is perceived as waterfall when it isn't. It also summarizes the misconceptions on both sides and how to overcome them, and it ends with a call to action for both practitioners and trainers to bridge the gap between the two.

The report also addresses other issues such as:

  • The Origins of Agile Methods
  • The Origins of CMMI
  • Lack of Accurate Information
  • Terminology Difficulties
  • There Is Value in Both Paradigms
  • Challenges When Using Agile
  • Challenges When Using CMMI
  • Problems Not Solved by CMMI nor Agile

Yesterday I finished a two-day workshop on the same issue, where we conducted some real-life case studies to discover how to come up with processes that satisfy both, and got useful feedback from experts in both schools.

If I can summarize what I learnt in those two days, it is that you first have to be convinced of why you need Agile and why you need CMMI; once you are, you will be able to select to which level you want to embrace each, and which practices would "add value" to your organization (not only to individual projects).

Finally, I would like to refer you to this two-year-old post by Jeff Sutherland in which he says that Scrum supports CMMI Level 5.

Thinking out of the “box”

Ever got into a dead-end with no one to ask for help, or new ideas to try?

In 1975, one creative guy (not from an IT background) came up with a "box" of idea cards to give him clues whenever he got puzzled and started looking around without a purpose. He called it Oblique Strategies.

These cards don't actually include answers; rather, they contain phrases that can trigger your mind to try new solutions you might not have thought of.

The funny thing is that you can use these cards in software engineering and still find them really useful. Here is how.

SQL Server data types

Actually, I thought of writing this post mainly for myself, as I always forget the differences between the different (yet similar) data types in SQL Server.

Here is what I need to remember:

  • numeric is the same as decimal and they are both exact decimal values
  • float and real are approximate values
  • float(n) complies with ISO SQL; n is the number of bits used to store the mantissa
  • float(25) through float(53) are stored in 8 bytes, while float(1) through float(24) are stored in 4 bytes
  • real is the same as float(24)
  • money is an exact decimal value stored in 8 bytes
  • nchar and nvarchar are the Unicode versions (2 bytes per character), while char and varchar use only 1 byte per character and are limited to the characters of a single code page (so they cannot store Unicode-only text such as Japanese or Chinese)
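The exact-versus-approximate distinction is the same one C# makes between decimal and double, which is easy to demonstrate:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // double (like SQL Server's float/real) is a binary approximation:
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False: the sum is not exactly 0.3

        // decimal (like SQL Server's numeric/decimal) is an exact base-10 value:
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
    }
}
```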

Those were some short notes; for the full details you can check:

http://msdn.microsoft.com/en-us/library/ms187752.aspx

StyleCop for ReSharper has been released

StyleCop for ReSharper was released last week as a plugin for ReSharper, bringing StyleCop code styling issues into ReSharper's real-time syntax highlighting, quick-fixes, and Code CleanUp.

Although I personally don't like many of StyleCop's rules, they can, like ReSharper's rules, be turned off individually. The plugin's performance can also be tuned from within the ReSharper options window.

More about StyleCop:

Microsoft StyleCop analyzes C# source code to enforce a set of style and consistency rules. It can be run from inside Visual Studio or integrated into an MSBuild project. It is another static analysis tool from Microsoft, like FxCop, but the latter analyzes the compiled assemblies for design issues, not the source code.

More about ReSharper:

ReSharper is a Visual Studio add-in created by JetBrains (the creator of many software productivity tools like IntelliJ®IDEA the Java IDE, TeamCity™ the build integration server, and dotTrace Profiler)

I have been using ReSharper almost since I started using Visual Studio, and it has always been a complement to it. Even as Visual Studio evolved from 2003 to 2005, and finally 2008, ReSharper still had much to add.

One of the features I like most in ReSharper is its ability to detect syntax and styling errors while you type, refactor code quickly, suggest variable names based on class names, highlight redundant code, make code more readable, and find references and jump directly to one when there is only a single usage.

There has been some talk that Visual Studio 2010 is actually 2008 + ReSharper, because of a mistake Microsoft made when they posted some screenshots of Visual Studio 2010 with ReSharper menus and screens in them.

Microsoft, on the other hand, has been promoting one of ReSharper's competitors and announced a cooperation with DevExpress to license a free version of CodeRush Express exclusively for C# developers working on Visual Studio.


//Comments about comments

Although code comments are usually enforced by coding standards, which I strongly agree with, I believe they should be minimized for many reasons:

  • Except for XML comments of methods and classes, most comments are not affected by automatic refactoring tools
  • Most developers forget to update comments after updating code
  • Sometimes they mix with temporarily commented code
  • Each comment adds one or more lines to the source code
  • They might make the code ugly
  • They implicitly admit that your code cannot express your ideas
  • They cannot be validated by type checking or unit testing
  • Most of the time, there are better alternatives to comments

Yet I believe they are highly needed in these cases:

  • Writing important messages to self or others
  • Writing code review comments
  • Writing //TODO tasks for self or others
  • Writing failed trials to avoid reusing them later while refactoring
  • Writing justification for code that might not appear logical at first look

And finally, here are some cases where comments can be replaced by better alternatives:

if (i >= 5) // number of orders
if (numOrders >= 5) // <-- better option: use meaningful variable names

if (custType == 5) // credit customer
if (custType == CustomerTypes.Credit) // <-- better option: use enums

if (custCredit >= 30) // credit over 30%
if (OverCredit()) // <-- better option: use methods (or properties)

private bool OverCredit()
{
    return custCredit >= 30;
}

if (a == b)
{
    if (d == c)
    {
        // ... long lines of code ...
    } // if (d == c)
} // if (a == b)

// Most decent IDEs highlight matching braces when you select one of them
// Also, long blocks of code should be avoided; extract methods instead

// begin saving data
    // ... long lines of code ...
// end saving data

#region Saving data               // <-- better option: use regions
    // ... long lines of code ...
#endregion

SaveData();         // <-- another better option: extract as a method

// Modified by modeeb @ Feb 6, 2009 02:32:12 PM
// better option: use a version control system that has "blame" or "annotate" functionality

The Economic Crisis and Software

 

A week ago there was news about Google laying off recruiters, and yesterday I read More rumors of Microsoft job cuts; it is clear that the global economic crisis is going to have its impact on the software industry.

The end goal is to continue in business and to cut costs. One way to do so is by outsourcing, which is the trend of most American and European software companies now.

On a personal level, you need to save your employer's money by doing the right job right the first time. I know it is easier said than done, but it can be achieved with a little concentration and a good focus on quality.

By quality I mean quality in every aspect of the process; it is not enough that it works. Quality in code includes readability, simplicity, reducing load, traffic, and waiting time.

Sometimes developers feel relaxed knowing that a code review or a senior developer will discover their bugs. Testers are always considered a second line of defense, and hence written code is not tested thoroughly by developers. In the coming years, this will not be accepted.

One of the most inspiring lines in my life as a developer was in the movie Antitrust, in which a software company owner says "… this business is binary, you are a 1 or a 0 … alive or dead … there is no second place". You need to watch this 3-minute video, or watch the full movie; it is really worth it 🙂

As you heard in the video, "… there is no room for idle time or second guesses … new discoveries are made hourly …", so you need to read constantly. Always read and always keep track of what's new in your field.

Equally important, you should build on others' experiences. There is no need to go through the same learning curve; always start where others have reached. Best practices, patterns, frameworks, and libraries really are time and effort savers, leaving you enough time to add something new … to innovate.

Good luck.