Using the Remote Debugger

By jay at July 22, 2010 20:05

This article is also available in French.

Continuing the series of articles about Visual Studio features that have been available for a while now, but are commonly under-used, I'll talk in this post about the Remote Debugger.

 

Local Debugging

Visual Studio has a debugger that allows you to debug a program by running it with F5, or "Debug / Start Debugging". Visual Studio then starts in a special mode that allows step-by-step execution of the program, and provides features like breakpoints, tracepoints, watches, IntelliTrace, minidump creation and many more.

The debugger runs the program on the local machine, and uses the permissions of the locally logged on user.

Nothing out of the ordinary. Well, maybe the Reverse Debugging with IntelliTrace in VS2010, which is very cool.

 

Hardware Specific and CrapWare

I don't know about you, but I keep my development PC as stable as possible. I rarely install new software, so that I keep the overall performance stable over time. I will most of the time install new software versions only after having tested them on other PCs to determine their behavior.

Call me a maniac, that's what it is :)

But then, what to do when the need for testing an installation program comes up? Or when you need to debug plugins for NI TestStand or LabVIEW? Or when the software needs a very specific kind of hardware that cannot be installed on your development PC? (Rainbow keys, anyone?)


The answer is simple: the Remote Debugger! When possible, I test and debug my software on a virtual machine, or on a physical machine that has the appropriate environment to execute the software.

That way, the development environment stays stable, and I don't need to install software that could add some crapware and eat up the few bytes of RAM left :)

The Remote Debugger ?

The idea is to keep using the development machine, where the source code is, and to connect via the network to a machine that will execute the program. After that, the remote debugging session is very similar to a local session, with the exception of "Edit and Continue", which is not supported. But most of the time, we can live without it.

 

Running the debugger from Visual Studio

It is possible to run the program on the remote machine by using the "Use Remote Machine" option in the "Debug" tab of a C# project. It is important to note that checking this option implies that all paths specified in "Working Directory" or "External Program" are those of the remote machine.

Additionally, Visual Studio will not copy the binaries and PDB files to the remote machine. You have to copy the files to the appropriate location yourself, for instance by using a "Post Build Action" with a UNC path in the form of "\\mymachine\c$\temp".
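As a minimal sketch, such a post-build event could look like this (the machine name and share are illustrative; $(TargetDir) is a standard Visual Studio build macro):

    xcopy /y "$(TargetDir)*.*" "\\mymachine\c$\temp\"

This copies the assemblies and the PDB files together, which also matters for the symbols discussion at the end of this post.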

 

Attach to a Running Process

It is also possible to attach to a running process, by using the "Debug / Attach To Process" option. You just need to fill in the "Qualifier" field with the name of the remote debugger, and choose the process to debug.

Quick hint: the option "Show processes from all users" is not enabled by default. This means that if you want to debug a Windows Service, you will not see it in the list until you enable it.

Finally, the "Attach To Process" window is also very useful with local processes. It is a very handy way to create a memory dump of a process that takes too much memory, and analyze it.

 

Installing the Remote Debugger

The Remote Debugger is an additional Visual Studio component that is located on the installation media, in the "Remote Debugger" folder. Three versions exist: x86, x64 and ia64 (RIP, Itanium...). If you have to debug a 32-bit process on a 64-bit machine, I advise that you install both the x86 and x64 versions. You will have to choose which remote debugger to run depending on the .NET runtime that is used. You can see which version to use in the "Type" column of the "Attach to Process" window.

Here's what to do :

  • If you are using VS2008 SP1, you can download it here; for VS2010, you can use the installer located on the DVD.
  • Once it is installed on the remote machine, install the RDBG service with the wizard, using the LocalSystem account.
  • You may get a message about a security issue. If you do, follow these steps:
    • Open the "Local Security Policy" section of the "Administrative Tools" control panel
    • Go to "Local Policies" / "Security Options"
    • Double click on "Network access: Sharing and security model for local accounts" and set the value to "Classic: Local users authenticate as themselves"
    • Close the window
  • If the remote machine is not on the same domain as your development machine, or is not on a domain at all, add a local user account on the remote machine that has the same name as your current username, and make it a member of the Administrators group. The password also has to be the same.
  • Start the remote debugger on the remote machine. Note that to debug a 32-bit process, you have to run the 32-bit version of the debugger.
  • On the development machine, open the "Attach to process" window, and type the identifier of the remote debugger (shown in the remote debugger window). It should look like this: administrator@my-machine.

Note that the firewall on both the development and the remote machine can prevent the remote debugger from working properly. You can temporarily disable it, but make sure to enable it back afterward. If you only want to open specific ports, port 135/TCP is used, as the Remote Debugger uses DCOM as its communication protocol.
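If you prefer to only open that port, something like this should work on recent versions of Windows (a sketch using the netsh advfirewall syntax, to be run on the remote machine):

    netsh advfirewall firewall add rule name="Remote Debugger (DCOM)" dir=in action=allow protocol=TCP localport=135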

 

And what if my breakpoints stay hollow red circles?

This is a very common situation, and it means that the PDB files do not match the loaded binaries. Make sure that you've copied the PDB files at the same time as the DLLs.

The "Debug / Windows / Modules" shows if the debug symbols have been loaded properly, and if it's not the case, the "View / Output / Debug" window will most of the time show why.


Happy debugging !

Version Properly using AssemblyVersion and AssemblyFileVersion

By jay at July 18, 2010 15:25

This article is also available in French.

We geeks talk quite easily about new technologies and things we just learned about, because that's the way we work. But for newcomers, this is not always easy. This is a big recurring debate, but I find that it is good to step back from time to time and talk about good practices for those newcomers.

 

The AssemblyVersion and AssemblyFileVersion Attributes

When we want to give a version to a .NET assembly, we can choose one of two common ways:

  • The AssemblyVersion attribute, which is the version the CLR uses when resolving assembly references
  • The AssemblyFileVersion attribute, which is stored in the Win32 file version resource

Most of the time, and by default in the Visual Studio 2008 templates, we can find the AssemblyInfo.cs file in the Properties section of a project. This file will generally only contain an AssemblyVersion attribute, which forces the AssemblyFileVersion to the AssemblyVersion's value. The AssemblyFileVersion attribute is now added by default in the Visual Studio 2010 C# project templates, and this is a good thing.

It is possible to see the value of the AssemblyFileVersion attribute in the file properties window of the Windows file explorer, or by adding the "File Version" column, still in the Windows Explorer.

We can also use the automatic numbering provided by the C# compiler, with:

[assembly: AssemblyVersion("1.0.0.*")]

 

Each new compilation will create a new version.

This feature is enough at first, but when your projects start to get somewhat complex, you may need to introduce continuous integration that provides nightly builds. You will then want to version the assemblies in such a way that it is easy to find which revision in the source control system was used to compile them.

You can then modify the Team Build scripts to use tasks such as the AssemblyInfo task of MSBuild Tasks, and generate a new AssemblyInfo.cs file that will contain the proper version.
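As a sketch, the generated AssemblyInfo.cs could contain something like this (the version numbers are illustrative; the build would inject the actual revision):

    // Generated by the build, do not edit manually.
    using System.Reflection;

    [assembly: AssemblyVersion("1.0.0.458")]
    [assembly: AssemblyFileVersion("1.0.0.458")]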

 

Publishing a new version of an Assembly

To come back to the subject of versioning an assembly properly: when a project build has been published, we generally want to know quickly which version has been installed on the client's systems. Most of the time, we want to know which version is used because there is an issue, and we will need to provide an updated assembly that contains a fix. Particularly when the software cannot be reinstalled completely on those systems.

A somewhat real-world example

Let's consider a solution with two strong-named assemblies, Assembly1 and Assembly2, where Assembly1 uses types from Assembly2, and both are versioned with an AssemblyVersion set to 1.0.0.458. These assemblies are part of an official build published on the client's systems.

If we want to provide a fix in Assembly2, we will create a branch in the source control from revision 1.0.0.458, and make the fix in that branch, which will give revision 460, so the version 1.0.0.460.

If we let the build system compile that revision, we will get assemblies that are marked as 1.0.0.460. If we take only Assembly2 and place it on the client's systems, the CLR will refuse to load this new version of the assembly, because Assembly1 requires Assembly2 with the version 1.0.0.458. We can use a bindingRedirect in the configuration file to get around that, but this is not always convenient, particularly when we update a lot of assemblies.
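For reference, such a redirect could look like this in the application's configuration file (a sketch; the publicKeyToken is elided on purpose):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Assembly2" publicKeyToken="..." culture="neutral" />
            <bindingRedirect oldVersion="1.0.0.458" newVersion="1.0.0.460" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>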

We can also compile this new version with the AssemblyVersion of Assembly2 set back to 1.0.0.458, but this will have the disadvantage of lying about the actual version of the file, and that will make diagnostics more complex in case another issue happens later.

Adding AssemblyFileVersion

To avoid having those issues with the resolution of assembly dependencies, it is possible to keep the AssemblyVersion constant, but use the AssemblyFileVersion to provide the actual version of the assembly.

The version specified in the AssemblyFileVersion is not used by the .NET Runtime, but is displayed in the file properties in the Windows Explorer.

We will then have the AssemblyVersion set to the original published version of the application, set the AssemblyFileVersion to the same version, and later change only the AssemblyFileVersion when we publish fixes for these assemblies.

Microsoft uses this technique to version the .NET runtime and BCL assemblies: if we take a look at System.dll for .NET 2.0, we can see that the AssemblyVersion is set to 2.0.0.0, and that the AssemblyFileVersion is set, for instance, to 2.0.50727.4927.
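A quick way to check both values yourself, sketched here with a type that lives in System.dll:

    using System;
    using System.Diagnostics;

    class VersionCheck
    {
        static void Main()
        {
            // System.Uri lives in System.dll
            var assembly = typeof(Uri).Assembly;

            // AssemblyVersion, the one the CLR binds to: 2.0.0.0
            Console.WriteLine(assembly.GetName().Version);

            // AssemblyFileVersion, read from the Win32 version resource: e.g. 2.0.50727.4927
            Console.WriteLine(FileVersionInfo.GetVersionInfo(assembly.Location).FileVersion);
        }
    }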

 

Other examples of versioning issues

We can find other loading issues caused by a mismatch between the version of the loaded assembly and the version that was expected.

Custom Behaviors in WCF

WCF gives the developer a way to provide custom behaviors to alter the default behaviors of out-of-the-box bindings, and it is necessary to provide the fully qualified name of the behavior type, without errors. This is a pretty annoying bug in WCF 3.x, because it is somewhat complex to debug, and it is a very good use case for the deactivation of "Just My Code" to find out why the assembly is not being loaded.

Good news though, this pretty old bug has been fixed in WCF 4.0!

Dynamic Proxy Generators

Some dynamic proxy generators like Castle DynamicProxy 2 or Spring.NET use fully qualified type names to generate the code for the proxies, and loading issues can occur if the assembly referenced by the proxy is not exactly the one being loaded, with or without a strong name. These frameworks are heavily used for AOP, or by frameworks like NHibernate, ActiveRecord or iBatis.

To be a bit more precise, the ProxyGenerator.CreateInterfaceProxyWithTarget method generates a proxy that targets the assembly that was referenced when the code for the proxied interface was compiled.
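As a minimal sketch of that API (assuming Castle DynamicProxy 2; IMyService, MyService and LoggingInterceptor are illustrative names, and the namespaces vary a bit across versions):

    using Castle.Core.Interceptor;
    using Castle.DynamicProxy;

    public interface IMyService
    {
        string GetValue();
    }

    public class MyService : IMyService
    {
        public string GetValue() { return "value"; }
    }

    public class LoggingInterceptor : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            // Forward the call to the real target
            invocation.Proceed();
        }
    }

    public static class ProxyFactory
    {
        public static IMyService CreateProxy()
        {
            var generator = new ProxyGenerator();

            // The generated proxy references the assembly of IMyService,
            // with the exact version it was compiled against.
            return generator.CreateInterfaceProxyWithTarget<IMyService>(
                new MyService(), new LoggingInterceptor());
        }
    }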

To give an example, let's take an interface I1 in an assembly A1(1.0.0.0), which has a method that uses a type T1 from an assembly A2(1.0.0.0). If we change assembly A2 so that its version becomes A2(2.0.0.0), the proxy will not be generated properly, because the reference T1/A2(1.0.0.0), compiled into A1(1.0.0.0), will be used regardless of the fact that we loaded A2(2.0.0.0).

The best practice of not changing the AssemblyVersion avoids loading issues of this kind. These issues are not blocking, but they are more work to get around.

And You ?

This is only an example of a "Best Practice", which seems to have worked properly so far.

And you ? What do you do ? Which practices do you use to version your assemblies ?

[VS2010] On the Impacts of Debugging with “Just My Code”

By jay at July 05, 2010 19:58

This article is also available in French.

The “Just My Code” feature has been there for a while in Visual Studio. Since Visual Studio 2005 actually. And it's fairly easy to miss its details...

At a high level, this feature only shows you the part of the stack that contains your code: mostly the assemblies that are compiled in debug mode and have debugging symbols (PDB files). Most of the time, this is interesting, particularly if you’re debugging fairly simple code.

But if you’re debugging somewhat complex issues, where you want to intercept exceptions that may be rethrown in some parts of the code that are not “Just Your Code”, then you have to disable it.

If you’re an experienced .NET developer, chances are you disabled it because it annoyed you at some point. I did, until a while back.

 

Debugger Exception Handling

The “Just my Code” (I’ll call it JMC for the rest of the article) feature changes a few things in the way the debugger handles exceptions.

If it is enabled, you’ll notice two columns in the “Debug / Exceptions” menu :

  • Thrown, which means that if you check that box, the debugger will break on the least deep rethrow in the exception's stack
  • User-unhandled, which means that if you check that box, the debugger will break if the exception has not been handled by any user-code exception handler in the current stack

 

If it is not enabled, then the same dialog box will display only one column:

  • Thrown, which means that the debugger will break as soon as the exception is thrown

 

You’ll probably notice a big difference in the way the debugger handles the “Thrown” option. To make that difference a bit clearer, let’s consider this code sample:

    static void Main(string[] args) 
    { 
        try 
        { 
            var t = new Class1(); 
            t.Throw(); 
        } 
        catch (Exception e) 
        { 
            Console.WriteLine(e); 
        } 
    }
    

Main executable, in debug configuration with debugging symbols enabled

    public class Class1 
    { 
        public void Throw() 
        { 
            try 
            { 
                Throw2(); 
            } 
            catch (Exception e) 
            { 
                throw; 
            } 
        }
        private void Throw2() 
        { 
            throw new InvalidOperationException("Test"); 
        } 
    }

Different assembly, in debug configuration without debugging symbols.

If we execute this code under the debugger with JMC enabled and with the “Thrown” column checked for “InvalidOperationException”, here is the stack trace:

     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x51 bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

And here is the stack trace without the JMC feature :

     NotMyCode.dll!NotMyCode.Class1.Throw2() + 0x46 bytes
     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x3d bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

You’ll notice the impact of the “least deep rethrow in the stack”, which means that if you enable JMC, you will not get the original location of the exception.

Then you may wonder why it may be interesting to have the original location of the exception in the debugger. It is a debugging technique commonly used to find tricky issues that throw exceptions deep in code you do not own, and one of these exceptions is often TypeInitializationException. It can be useful to break at the original location to have the proper context, or the stack that led to the exception.

Lately, I’ve been using this technique of “Break on all exceptions” without JMC to troubleshoot the loading of 32-bit assemblies in a 64-bit CLR. You don’t exactly know which exception you’re looking for in the first place, and having JMC “hide” some exceptions is not of great help.

Also, to be fair, deeper and more intense debugging often leads to the use of WinDBG and the SOS extension (and here is a good SOS cheat sheet). But that’s another topic.

 

Step Into “Debugging Experience” with JMC

If you’ve read this far, you may now ask yourself why you would ever want to enable JMC. After all, you can handle your code yourself, and with enough experience, you can easily mentally ignore the pieces of the stack that are not yours. Actually, the gray font used for code that does not have debugging symbols helps a lot with that.

Well, there’s one example of a good use of JMC: the debugger’s “Step Into” feature. A very simple feature that allows step-by-step debugging of the software.

If you’re in debugging mode, you’ll step into the code that is called on the next line, if that’s possible, and see what’s in there.

To demonstrate this, let’s consider this example:

    static void Main(string[] args) 
    { 
        var myObject = new MyObject();

        Console.WriteLine(myObject); 
    }
    
    class MyObject 
    { 
        public override string ToString() 
        { 
            return "My object";
        } 
    }
      

This is a very simple program that uses the fact that Console.WriteLine calls the ToString method on the object that is passed as a parameter.

The point of this sample is to make “My Code” (Main) call some of “Not My Code” (Console.WriteLine), which will call “My Code” (MyObject.ToString). Easy.

Now if you run this sample under the debugger with JMC disabled and try to “Step Into” Console.WriteLine, you’ll actually step over. This is not very helpful from the point of view of debugging your own code.

A very concrete example of that lack of “Step Into” can be found with proxies like the ones in Spring.NET or Castle's DynamicProxy: they get in the way of simple debugging. You can’t step into objects that have been proxied to perform some AOP, for instance.

But if you enable JMC, well, you can actually “Step Into” your own code, even if the method actually called next is not one of yours.

 

Final Words

Using JMC in this context is very useful and, I would say, natural. The feature has been there for so long that I missed its original goals. It originally got in my way for deep debugging purposes, and I dismissed it as a “junior” feature, even a cosmetic one. Well, I was wrong…

Anyway, in Visual Studio 2010, JMC has been improved a bit, and the way to enable and disable it is now far easier to reach: it is available from the IntelliTrace “Show Calls View”.

Time to switch to Visual Studio 2010, people ! :)

Hyper-V VM Mover 1.0.2.0 on CodePlex

By jay at June 29, 2010 19:37

I've decided, after a long time, to publish the source code of my little utility on CodePlex : http://vmmove.codeplex.com

It allows performing attach and detach operations on Hyper-V VMs.

A while back, I discussed the origin of Hyper-V VM Mover, and as of now, Microsoft still has no official method for attaching and detaching VMs without export and import operations.

Feel free to submit updates and comments on the tool !

[WP7Dev] Using the WebClient with Reactive Extensions for Effective Asynchronous Downloads

By jay at June 22, 2010 21:07

There’s a very cool framework that has slipped into the Windows Phone SDK : The Reactive Extensions.

It's actually a quite misunderstood framework, mainly because it is a bit hard to harness, but when you get a handle on it, it is very handy! I particularly like the MemoizeAll extension, a tricky one, but very powerful.

But I digress.

 

A Non-Reactive String Download Sample

On Windows Phone 7, the WebClient class only has an asynchronous DownloadStringAsync method and a corresponding DownloadStringCompleted event; there is no synchronous DownloadString. That means you're forced to be asynchronous: be nice to the UI and don't make the application freeze on the user because of the bad coding habit of making remote calls synchronously.

In a world without the reactive extensions, you would use it like this :

public void StartDownload()
{
    var wc = new WebClient();
    wc.DownloadStringCompleted += 
      (e, args) => DownloadCompleted(args.Result);
                  
    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public void DownloadCompleted(string value)
{
    myLabel.Text = value;
}

Pretty easy. But you soon find out that the DownloadStringCompleted event is raised on the UI thread. That means that if, for some reason, you need to perform some expensive calculation after you’ve received the string, you’ll freeze the UI for the duration of that calculation. Since Windows Phone 7 is all about fluidity, and you don't want to be the bad guy, you then have to queue the work on the ThreadPool.

But you also have to update the UI in the dispatcher, so you have to come back from the thread pool.

You then have :

public void StartDownload()
{
    WebClient wc = new WebClient();
    wc.DownloadStringCompleted += 
        (e, args) => ThreadPool.QueueUserWorkItem(d => DownloadCompleted(args.Result));

    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public void DownloadCompleted(string value)
{
    // Some expensive calculation
    Thread.Sleep(1000);

    Dispatcher.BeginInvoke(() => myLabel.Text = value);
}

That’s a bit more complex. And then you notice that you also have to handle exceptions because, well, it’s the Web. It’s unreliable.

So, let’s add the exception handling :

public void StartDownload()
{
    WebClient wc = new WebClient();

    wc.DownloadStringCompleted += (sender, args) => {

        // The exception, if any, is exposed through args.Error; reading
        // args.Result after a failed download would just rethrow it,
        // and a try/catch around QueueUserWorkItem would never see it.
        if (args.Error != null) {
            myLabel.Text = "Error !";
            return;
        }

        // Capture the result on the UI thread, then queue the expensive work
        var result = args.Result;
        ThreadPool.QueueUserWorkItem(d => DownloadCompleted(result));
    };

    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public void DownloadCompleted(string value)
{
    // Some expensive calculation
    Thread.Sleep(1000);
    Dispatcher.BeginInvoke(() => myLabel.Text = value);
}

That’s starting to be a bit complex. And then you have to wait for another call from another WebClient to end, and show both results.

Oh well. Fine, I'll spare you that one.

 

The Same Using the Reactive Extensions

The reactive extensions treats asynchronous events like a stream of events. You subscribe to the stream of events and leave, and you let the reactive framework do the heavy lifting for you.

I’ll spare you the explanation of the duality between IObservable and IEnumerable, because Erik Meijer explains it very well.

So, I’ll start again with the simple example; after adding the System.Observable and System.Reactive references, I can download a string:

public void StartDownload()
{
    WebClient wc = new WebClient();

    var o = Observable.FromEvent<DownloadStringCompletedEventArgs>(wc, "DownloadStringCompleted")

                      // When the event fires, just select the string and make
                      // an IObservable<string> instead
                      .Select(newString => newString.EventArgs.Result);

    // Subscribe to the observable, and set the label text
    o.Subscribe(s => myLabel.Text = s);


    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

This does the same thing the very first example did. You’ll notice the use of Observable.FromEvent to transform the event into an observable, built from the DownloadStringCompleted event args. For this example, the event stream will only contain one event, since the download only occurs once. Each occurrence of the event is then “projected”, using the Select statement, to a string that represents the result of the web request.

It’s a bit more complex for the simple case, because of the additional plumbing.

But now we want to handle the thread context changes. The Reactive Extensions have the concept of a scheduler, used to observe an IObservable in a specific context.

So, we use the scheduler like this :

public void StartDownload()
{
    WebClient wc = new WebClient();

    var o = Observable.FromEvent<DownloadStringCompletedEventArgs>(wc, "DownloadStringCompleted")

                      // Let's make sure that we’re on the thread pool
                      .ObserveOn(Scheduler.ThreadPool)

                      // When the event fires, just select the string and make
                      // an IObservable<string> instead
                      .Select(newString => ProcessString(newString.EventArgs.Result))

                      // Now go back to the UI Thread
                      .ObserveOn(Scheduler.Dispatcher)

                      // Subscribe to the observable, and set the label text
                      .Subscribe(s => myLabel.Text = s);

    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public string ProcessString(string s)
{
    // A very very very long computation
    return s + "1";
}
 

In this example, we’ve changed contexts twice to suit our needs, and now it’s getting a bit less complex than the original sample.

And if we want to handle exceptions, well, easy :

    .Subscribe(s => myLabel.Text = s, e => myLabel.Text = "Error ! " + e.Message);

And you have it !

 

Combining the Results of Two Downloads

Combining two or more asynchronous operations can be very tricky: you have to handle exceptions, rendez-vous and complex state. That makes for a very complex piece of code that I won’t write here, I promised, but instead I’ll give you a sample using the Reactive Extensions:

public IObservable<string> StartDownload(string uri)
{
    WebClient wc = new WebClient();

    var o = Observable.FromEvent<DownloadStringCompletedEventArgs>(wc, "DownloadStringCompleted")

                      // Let's make sure that we're not on the UI Thread
                      .ObserveOn(Scheduler.ThreadPool)

                      // When the event fires, just select the string and make
                      // an IObservable<string> instead
                      .Select(newString => ProcessString(newString.EventArgs.Result));

    wc.DownloadStringAsync(new Uri(uri));

    return o;
}

public string ProcessString(string s)
{
    // A very very very long computation
    return s + "<!-- Processing End -->";
}

public void DisplayMyString()
{
    var asyncDownload = StartDownload("http://bing.com");
    var asyncDownload2 = StartDownload("http://google.com");

    // Take both results and combine them when they'll be available
    var zipped = asyncDownload.Zip(asyncDownload2, (left, right) => left + " - " + right);

    // Now go back to the UI Thread
    zipped.ObserveOn(Scheduler.Dispatcher)

          // Subscribe to the observable, and set the label text
          .Subscribe(s => myLabel.Text = s);
}

You’ll get a very interesting combination of Google and Bing :)

[LINQ] Finding the next available file name

By jay at June 10, 2010 20:16

This article is also available in French.


Sometimes, the most simple examples are the best.

 

Let’s say you have a configuration file, but you want to make a copy of it before you modify it. Easy: you copy that file to “filename.bak”. But what happens if that file already exists? Well, either you replace it, or you create an auto-incremented file.

 

If you want to do the latter, you could do it using a for loop. But since you’re a happy functional programming guy, you want to do it using LINQ.

 

You then can do it like this :

    public static string CreateNewFileName(string filePath)
    {
        if (!File.Exists(filePath))
            return filePath;

        // Compute these once, not for each candidate file.
        var directory = Path.GetDirectoryName(filePath);
        var name = Path.GetFileNameWithoutExtension(filePath);
        var extension = Path.GetExtension(filePath);

        // Now find the next available file
        var fileQuery = from index in Enumerable.Range(2, 10000)

                        // Build the full file path
                        let fileName = Path.Combine(directory, string.Format("{0} ({1}){2}", name, index, extension))

                        // Does it exist ?
                        where !File.Exists(fileName)

                        // No ? Select it.
                        select fileName;

        // Return the first one.
        return fileQuery.First();
    }

Note the use of the let operator, which allows the reuse of what is called a “range variable”. In this case, it avoids repeating the string.Format call.

 

The case of Infinity

There’s actually one problem with this implementation, which is the arbitrary “10000”. This might be fine if you don’t intend to make more than 10000 backups of your configuration file. But if you do, to lift that limit, we could write this iterator method:

    public static IEnumerable<int> InfiniteRange(int start)
    {
         while(true)
         {
             yield return start++;
         }
    }

Which basically returns a new value each time you ask for one. To use that method, you have to make sure that you have an exit condition (the file does not exist, in the previous example), or you may well be enumerating until the end of time... Actually up to int.MaxValue, for the nit-pickers, but .NET 4.0 adds System.Numerics.BigInteger to be sure to get to the end of time. You never know.

 

To use this iterator, just replace :

        var fileQuery = from index in Enumerable.Range(2, 10000)

by

        var fileQuery = from index in InfiniteRange(2)

And you’re done.

[VS2010] Configure Code Analysis for the Whole Solution

By jay at March 06, 2010 17:38

This article is also available in French.

In previous versions of Visual Studio, configuring Code Analysis was a bit cumbersome. If you had more than a handful of projects (say 10), it could take a long time to manage a single set of rules for your whole solution. You had to resort to updating all the project files by hand, or use a small tool that would edit each csproj file to set the same rules everywhere.

Not very pleasant, nor efficient. Particularly when you have hundreds of projects.

In Visual Studio 2010, the product team added two things :

  1. Rules are now in external rule set files, not embedded in the project file. That makes the rules reusable across the projects of the solution (a sketch of such a file is shown after this list). Nice.
  2. There’s a new section in the Solution properties named “Code Analysis Settings”, which allows setting the rule file to use for individual projects and, even better, for all projects! Very nice.
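For reference, a rule set file is a small XML document that looks like this (a sketch; the name and the single rule are illustrative):

    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="Solution Rules" Description="Rules shared by the whole solution" ToolsVersion="10.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <Rule Id="CA1001" Action="Warning" />
      </Rules>
    </RuleSet>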

That option is also available from the “Analyze” menu, with “Configure Code Analysis for Solution”.

One gotcha there though: to select all the projects, you can’t use Ctrl+A; you have to select the first item, then hold Shift while selecting the last item. Maybe the Product Team will fix that for the release...

Migrating Rules from VS2008

If you’re migrating your projects from VS2008 and were using Code Analysis there, you’ll notice that the converter generates a file named “Migrated rules for MyProject.ruleset” for every project in the solution. That’s nice if your projects don’t all have the same rules. But if they do, you’ll have to manage all of them...

Like all programmers, I’m lazy, and I wrote a little macro that will remove all generated ruleset files for the current solution, and use a single rule set.

This is not a very efficient macro, but since it won’t be used that often... you’ll probably live with the bad performance, and the bad VB.NET code :)

Here it is :

Sub RemoveAllRuleset()

    For Each project As Project In DTE.Solution.Projects
        FindRuleSets(project)
    Next

End Sub

Sub FindRuleSets(ByVal project As Project)

    For Each item As ProjectItem In project.ProjectItems

        If Not item.SubProject Is Nothing Then
            If Not item.SubProject.ProjectItems Is Nothing Then

                ' Recurse into solution folders and nested projects
                FindRuleSets(item.SubProject)

                ' Collect the generated rule set files first...
                Dim ruleSets As List(Of ProjectItem) = New List(Of ProjectItem)

                For Each subItem In item.SubProject.ProjectItems
                    If subItem.Name.StartsWith("Migrated rules for ") Then
                        ruleSets.Add(subItem)
                    End If
                Next

                ' ... then remove them, to avoid altering the collection while iterating it
                For Each ruleset In ruleSets
                    ruleset.Remove()
                Next
            End If
        End If
    Next

End Sub

Reactive Framework: MemoizeAll

By jay at February 04, 2010 18:21

This article is also available in French.

For some time now, with the release of the Rx Framework and reactive/interactive programming, some new features have been highlighted through a very good article by Bart De Smet dealing with System.Interactive and “lazy caching”.

When using LINQ, one can find two sorts of operators: the “lazy” operators that take elements one by one and forward them when they are requested (Select, Where, SelectMany, First, …), and the operators that I would call “rendez-vous”, for which the entirety of the elements of the source enumerator needs to be enumerated (Count, ToArray/ToList, OrderBy, …) to produce a result.

 

“Lazy” Operators

Lazy operators are pretty useful as they offer good performance when it is not required to enumerate all the elements of an enumerable. This can also be useful when it may take a very long time to enumerate each element, and we only want to get the first few.

For instance this :
         

static IEnumerable<int> GetItems()
{
    for (int i = 0; i < 5; i++)
    {
        Console.WriteLine(i);
        yield return i + 1;
    }
}

static void Main()
{
   Console.WriteLine(GetItems().First());
   Console.WriteLine(GetItems().First());
}

Will output :


0
1
0
1

Only the first element of the enumerator returned by GetItems() will be enumerated.

However, these operators expose a behavior that is important to know about: each time they are enumerated, they also enumerate their source again. That can either be an advantage (enumerating a changing source multiple times) or a problem (enumerating a resource-intensive source multiple times).

 

“Rendez-vous” Operators

These operators are also interesting because they force the enumeration of all the elements of the enumerable, and in the particular case of ToArray, this allows the creation of an immutable copy of the content of the enumerable. They are useful in conjunction with lazy operators, to prevent them from enumerating their source again when enumerated multiple times.

If we take the previous sample and update it a bit:


static void Main()
{
   var items = GetItems().ToArray();

   Console.WriteLine(items.Count());
   Console.WriteLine(items.Count());
}

We get this result :


0
1
2
3
4
5
5

Because ToArray() needs to enumerate all the elements of the source enumerator to build the array; Count() then operates on the cached copy, which is why the side effects only appear once.

These operators also enumerate their source with each use, but using ToArray/ToList prevents their result from enumerating the original source again.

The case of multiple enumerations

A concrete example of the problem posed by multiple enumerations is the creation of an enumerable partitioning operator. In this example, we can see that the enumerable passed as the source is used by two different "Where" operators, which implies that the source enumerable will be enumerated twice. Storing the whole content of the source enumerable by means of a ToArray/ToList is possible, but that would be a potential waste of resources, mainly because we can't know whether the output enumerable will be enumerated completely (and in the case of an infinite enumerable, ToArray is not even applicable).
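To make the problem concrete, here is a minimal sketch of what such a partitioning operator could look like (the Partitioned<T> type and the exact shape are illustrative, not Mark Needham's actual implementation):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Partitioned<T>
    {
        public IEnumerable<T> Matches { get; set; }
        public IEnumerable<T> DoesNotMatch { get; set; }
    }

    public static class PartitionExtensions
    {
        public static Partitioned<T> Partition<T>(this IEnumerable<T> source, Func<T, bool> predicate)
        {
            // Both halves are built on "source" independently: enumerating
            // Matches and then DoesNotMatch runs the source enumerable twice.
            return new Partitioned<T>
            {
                Matches = source.Where(predicate),
                DoesNotMatch = source.Where(x => !predicate(x))
            };
        }
    }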

An intermediate operator between "Lazy" and "Rendez-vous" would be useful.

EnumerableEx.MemoizeAll

The EnumerableEx class brings us an extension, MemoizeAll (built on the memoization concept), that is just the middle ground we're looking for: it caches the elements of the source enumerator as they are requested. A sort of "lazy" ToArray.

If we take the example of Mark Needham, we would modify it like this :


var evensAndOdds = Enumerable.Range(1, 10)
                             .MemoizeAll()
                             .Partition(x => x % 2 == 0);

In this example, MemoizeAll does not have a real benefit on the performance side, since Enumerable.Range is not a very expensive operator. But in the case where the source of the "Partition" operator is a more expensive enumerable, like a LINQ to SQL query, the lazy caching can be very effective.

One of the comments suggests that a GroupBy-based implementation could be written, but this operator also evaluates its source each time a group is enumerated. MemoizeAll is then again appropriate for better performance, but as always, this is a tradeoff between processing and memory.

By the way, Bart De Smet discusses the elimination of side effects linked to the multiple enumeration of enumerables by using Memoize and MemoizeAll, which is not really an issue in the previous example, but is nonetheless a very interesting subject.

 

.NET 4.5 ?

On a side note, I find it regrettable that the EnumerableEx extensions did not make their way into .NET 4.0... They are very useful, and not very complex. They may have arrived too late in the development cycle of .NET 4.0... Maybe in .NET 4.5 :)

WinForms, DataBinding and Updates from multiple Threads

By jay at January 02, 2010 23:08

This article is also available in French.

When one is trying to use the MVC model with WinForms, it is possible to use the INotifyPropertyChanged interface to allow data binding between the controller and the form.

It is then possible to write a controller like this :

    

public class MyController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    public void ChangeStatus()
    {
        Status = DateTime.Now.ToString();
    }

    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;

            // Notify that the property has changed
            PropertyChanged(this, new PropertyChangedEventArgs("Status"));
        }
    }
}

The form is defined like this :


public partial class MyForm : Form
{
    private MyController _controller = new MyController();

    public MyForm()
    {
        InitializeComponent();

        // Make a link between labelStatus.Text and _controller.Status
        labelStatus.DataBindings.Add("Text", _controller, "Status");
    }

    private void buttonChangeStatus_Click(object sender, EventArgs e)
    {
        _controller.ChangeStatus();
    }
}


The form will update "labelStatus" when the "Status" property of the controller changes.

All of this code is executed in the main thread, where the message pump of the main form is located.

 

A touch of asynchronism

Let’s imagine now that the controller is going to perform some operations asynchronously, using a timer for instance.

We update the controller by adding this :


private System.Threading.Timer _timer;

public MyController()
{
    _timer = new Timer(
        d => ChangeStatus(),
        null,
        TimeSpan.FromSeconds(1), // Start in one second
        TimeSpan.FromSeconds(1)  // Every second
    );
}


By altering the controller this way, the "Status" property is going to be updated regularly.

The operation model of System.Threading.Timer implies that the ChangeStatus method is called from a different thread than the one that created the main form. Thus, when the code is executed, the update of the label is halted by the following exception:

   Cross-thread operation not valid: Control 'labelStatus' accessed from a thread other than the thread it was created on.

The solution is quite simple, the update of the UI must be performed on the main thread using Control.Invoke().

That said, in our example, it's the DataBinding engine that hooks onto the PropertyChanged event. We must make sure that the PropertyChanged event is raised "decorated" by a call to Control.Invoke().

We could update the controller to invoke the event on the main Thread:


set
{
    _status = value;

    // Notify that the property has changed
    Action action = () => PropertyChanged(this, new PropertyChangedEventArgs("Status"));
    _form.Invoke(action);
}


But that would require adding WinForms-dependent code in the controller, which is not acceptable. Since we want to put the controller in a unit test, calling the Control.Invoke() method would be problematic, as we would need a Form instance that we do not have in this context.

 

Delegation by Interface

The idea is to delegate to the view (here the form) the responsibility of placing the call to the event on the main thread. We can do so by using an interface passed as a parameter of the controller’s constructor. It could be an interface like this one :


public interface ISynchronousCall
{
    void Invoke(Action a);
}


The form would implement it:


void ISynchronousCall.Invoke(Action action)
{
    // Call the provided action on the UI Thread using Control.Invoke()
    Invoke(action);
}


We would then raise the event like this :


_synchronousInvoker.Invoke(
    () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
);

But like every efficient programmer (read: lazy), we want to avoid writing an interface.

 

Delegation by Lambda

We will try to use lambda functions to call the Control.Invoke() method. For this, we will update the constructor of the controller, and instead of taking an interface as a parameter, we will use:


public MyController(Action<Action> synchronousInvoker)
{
    _synchronousInvoker = synchronousInvoker;
    ...
}

To clarify, we give the constructor an action whose responsibility is to invoke the action passed to it as a parameter.

It allows building the controller like this:


_controller = new MyController(a => Invoke(a));

Here, there is no need to implement an interface; we just pass a small lambda that invokes an action on the main thread. And it is used like this:


_synchronousInvoker(
    () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
);

This means that the lambda specified as a parameter will be called on the UI Thread, in the proper context to update the associated label.

The controller is still isolated from the view, but still adopts the behavior of the view when updating "databound" properties.

If we wanted to use the controller in a unit test, we would construct it this way:


_controller = new MyController(a => a());

The lambda passed here only needs to call the action directly.

 

Bonus: Easier writing of the notification code

A drawback of using INotifyPropertyChanged is that you are required to write the name of the property as a string. This is a problem for many reasons, mainly when using refactoring or obfuscation tools.

C# 3.0 brings expression trees, a pretty interesting feature that can be used in this context. The idea is to use expression trees to make a hypothetical "memberof" that would get the MemberInfo of a property, much like typeof gets the System.Type of a type.

Here is a small helper method that raises the event:


private void InvokePropertyChanged<T>(Expression<Func<T>> expr)
{
    var body = expr.Body as MemberExpression;

    if (body != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
    }
}

A method that can be used like this :


_synchronousInvoker(
    () => InvokePropertyChanged(() => Status)
);

The “Status” property is used as a property in the code, not as a string. It is then easier to rename it with a refactoring tool without breaking the code logic.

Note that the lambda () => Status is never called. It is only analyzed by the InvokePropertyChanged method as being able to provide the name of a property.

 

The Whole Controller


public class MyController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    private System.Threading.Timer _timer;
    private readonly Action<Action> _synchronousInvoker;

    public MyController(Action<Action> synchronousInvoker)
    {
        _synchronousInvoker = synchronousInvoker;

        _timer = new Timer(
            d => Status = DateTime.Now.ToString(),
            null,
            1000, // Start in one second
            1000  // Every second
        );
    }

    public void ChangeStatus()
    {
        Status = DateTime.Now.ToString();
    }

    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;

            // Notify that the property has changed
            _synchronousInvoker(
                () => InvokePropertyChanged(() => Status)
            );
        }
    }

    /// <summary>
    /// Raise the PropertyChanged event for the property "get" specified in the expression
    /// </summary>
    /// <typeparam name="T">The type of the property</typeparam>
    /// <param name="expr">The expression to get the property from</param>
    private void InvokePropertyChanged<T>(Expression<Func<T>> expr)
    {
        var body = expr.Body as MemberExpression;

        if (body != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
        }
    }
}

SharePoint WebDAV, IIS 7.5 and Windows Server 2008 R2

By jay at December 02, 2009 18:02

A neat feature of SharePoint 2007 (or WSS 3.0) is the ability to browse the content of a site as if it were a network drive. Under the hood, this is done using WebDAV, a standard protocol that Microsoft used to implement this feature.

If you happen to install WSS 3.0 on a Windows Server 2008 R2 box, you'll quickly find out that this feature does not work properly, with interesting messages like "Access Denied" or "The network path could not be found" when trying to map a folder.

Using IIS 6.0, you’d simply need to make sure that the WebDAV Web Service Extension is “Prohibited”.

With IIS 7.5, there are multiple places dealing with WebDAV, but only one to look at:

  • Open the “Modules” configuration section for the SharePoint web site
  • Find the “WebDAVModule” entry
  • Remove it, you’re done! (the equivalent web.config change is sketched below)
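For reference, removing the module through the UI amounts to this change in the site's web.config (a sketch; depending on how the server is locked down, the same removal may need to be done at the server level):

    <system.webServer>
      <modules>
        <remove name="WebDAVModule" />
      </modules>
    </system.webServer>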

The interesting bit about this is that even though the WebDAV component is disabled in every possible section of the site, the module seems to intercept the WebDAV PROPFIND verb and returns a 405 (Not Allowed) error.

Since the verb is supposed to be handled by an ASP.NET httpHandler, that handler never gets the chance to deal with it... and you can’t see your files in the Windows Explorer.

About me

My name is Jerome Laban, I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.