Configuring Multiple TFS 2012 Build Services on one Machine

By jay at October 27, 2012 12:18

Since the introduction of project collections in TFS 2010, build agents have been quite a source of trouble for IT admins.

On one hand, putting every project in a single collection can be a maintenance and project isolation nightmare; on the other hand, there can officially be only one build controller per machine, tied to a single project collection, which forces you to have many machines to support continuous integration build environments.

Luckily, a workaround on how to run multiple build services on the same machine was published two years ago, and it works pretty well.

 

However, with TFS 2012, this hack needs to be adjusted to work properly.

Prior to running the configuration tool, instead of setting this:

set TFSBUILDSERVICEHOST=buildMachine-collection2

The new environment variable is:

set TFSBUILDSERVICEHOST.2012=buildMachine-collection2

I’m guessing that this can be used to run both 2010 and 2012 build services on the same machine without interference, though I haven’t tried it.

Just as a reminder, to use this feature, you need to set non-overlapping build working folders and different listening ports for every new instance. This will avoid conflicts…

Works pretty great, now with TFS 2012!

To be fair when comparing Rx to C# 5.0 Async...

By Admin at August 03, 2011 20:45

After reading the 900th Morning Brew, an article by Mike Taulty comparing Rx, TPL and async caught my attention.

Mike tries to explain the history, differences and similarities between all these frameworks, and kudos, that's not an easy thing to do.

Asynchrony (and I'm not talking about parallelism) is a complex topic that fools even the best of us.

 

Comparing Rx and Async

Rx and Async are much more similar in that regard, because going off to another thread is not mandatory, and most operators use either the CurrentThread (time-based priority queue) scheduler or the Immediate (passive wait) scheduler.

This means that the code you are writing is doing some cooperative multi-threading, or fiber-like processing. Everything happens on the same thread, except that work is optionally being queued up, and that thread works as long as there's work left to do, then waits for outstanding operations. These outstanding operations can be I/O completion port bound, like Stream.BeginRead/EndRead.
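As a small illustration of that trampoline behavior, here is a minimal sketch (assuming the Rx IScheduler Schedule extension methods are in scope):

Scheduler.CurrentThread.Schedule(() =>
{
    Console.WriteLine("outer start");

    // The nested action is queued on the current thread's trampoline,
    // not executed immediately
    Scheduler.CurrentThread.Schedule(() => Console.WriteLine("inner"));

    Console.WriteLine("outer end");
});

// Prints "outer start", "outer end", then "inner", all on the same thread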

But back to the comparison: Mike is doing buffered reading of content from a web response stream, and using the Async CTP's ReadAsync and WriteAsync greatly eases the writing of that kind of code. (Also, the Rx example does not work correctly, but I'll talk about that later in this post.)

These two functions are not tied to the complexity of BeginRead/EndRead, and behave very much like an IObservable would. Call ReadAsync, you get a Task and wait on it.
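For reference, here is roughly what the await-based version of such a buffered read looks like. This is only a sketch against the .NET 4.5-era stream APIs (the Async CTP exposed some of these as extension methods, so names may differ slightly), assuming the usual System.Net, System.IO and System.Text usings:

static async Task DumpAsync(string url)
{
    var webRequest = WebRequest.Create(url);

    using (var response = await webRequest.GetResponseAsync())
    using (var stream = response.GetResponseStream())
    {
        var buffer = new byte[24];
        int read;

        // ReadAsync returns the number of bytes read, and 0 at the end of the stream
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            Console.WriteLine(Encoding.Default.GetString(buffer, 0, read));
        }
    }
}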

Let's jump to the end result: we can get this with Rx-based composed operators, using methods similar to ReadAsync and GetResponseAsync :

string url = "http://www.microsoft.com";

var webRequest = WebRequest.Create(url);

webRequest.ToResponse()
          .SelectMany(wr => wr.GetResponseStream().ToBytes())
          .ForEach(
            b => Console.WriteLine(Encoding.Default.GetString(b))
          );

That way, it is a lot easier to read. ToResponse maps to GetResponseAsync and ToBytes maps to ReadAsync.

I'll concede the complexity of the SelectMany operator, which is related to the fact that IObservable deals with sequences (the duality with IEnumerable). What we would need is more like an IObservableValue that returns only one value. At that point, an appropriate operator would be something like SelectOne, but that's another topic I'll discuss soon.

 

The ToResponse operator

This one is easy, and is pretty much an encapsulation of the code provided in Mike's Rx example :

public static IObservable<WebResponse> ToResponse(this WebRequest request)
{
    var asyncGetResponse = Observable.FromAsyncPattern<WebResponse>(
                            request.BeginGetResponse, request.EndGetResponse);

    return Observable.Defer(asyncGetResponse);
}

The use of Defer delays the actual call to BeginGetResponse until someone subscribes to the deferred observable.
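A quick sketch of what that means for the caller:

var response = webRequest.ToResponse();   // no network call is made yet

// The actual call to BeginGetResponse only happens here, at subscription time
response.Subscribe(r => Console.WriteLine(((HttpWebResponse)r).StatusCode));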

 

The ToBytes operator

That one is a bit trickier :

public static IObservable<byte[]> ToBytes(this Stream stream)
{
    return 
        Observable.Create<byte[]>(
            observer =>
            {
                byte[] buffer = new byte[24];

                var obsReadFactory = Observable.Defer(() => stream.AsReader()(buffer, 0, buffer.Length));

                return Observable
                         .Repeat(obsReadFactory)
                         .Select(i => buffer.Take(i).ToArray())

                         // Subscribe on the thread pool, otherwise the repeat operator will operate during the 
                         // call to subscribe, preventing the whole expression to complete properly
                         .SubscribeOn(Scheduler.ThreadPool)

                         .Subscribe(
                             _ =>
                             {
                                 if (_.Length > 0)
                                 {
                                     observer.OnNext(_);
                                 }
                                 else
                                 {
                                     observer.OnCompleted();
                                 }
                             },
                             observer.OnError,
                             observer.OnCompleted
                         );
            }
        );
}

and needs a bit of explaining.

The ToBytes extension creates an Observable that finely controls the OnNext/OnCompleted events, which is necessary because of the loopy nature of the BeginRead API. Loopy means that in synchronous mode, you need to call Read in a loop until you get all you need. BeginRead/EndRead still expose this loopy nature, but in an asynchronous way.

With Rx, that loop can be introduced with the use of Repeat, the same way a while(true) would do.

The Select operator is pretty straightforward; even if it may not be as fast as a Buffer.BlockCopy, it's pretty concise.

SubscribeOn is the tricky part of this method, and is very important for the OnCompleted events to get through. If this operator is not present, the call to the ToBytes method blocks in the Subscribe of the SelectMany operator in the final example. This means that events like OnCompleted get buffered and never interpreted, and the Repeat operator keeps looping indefinitely, getting nothing, because no one unsubscribes. This is both CPU and memory consuming, because the observable expression never gets disposed.

Then in the Subscribe, we notify the observer either that there's a new buffer, or that we're done because EndRead returned 0 (or raised an exception).

 

Continuing the comparison

All this to say that .NET 5.0 (or whatever it will be called) will have a Task-friendly BCL that makes it easy to write asynchronous code.

I'm definitely not saying that Rx is as easy as Async will be, but, with a minimum of understanding and abstraction, it can be as powerful, and even more powerful, because of its ability to compose observables.

Now, for the issues in Mike's Rx sample :

  • The TakeWhile operator only completes when the source observable completes, and the source is a Repeat observable, which never ends, meaning that the whole subscription will never get disposed
  • The Observable.Repeat operator runs on the Immediate scheduler, meaning that the OnNext method will be called in the thread context of the original Subscribe, and the expression will not get disposed.

The sample actually shows something, but the CPU will stay at 100% until the process ends.

 

A word on Asynchrony

Asynchrony is a complex topic, very easy to get wrong, even with the best intentions. Parallelism is even more complex (and don't get me started on the lock() keyword).

I'm expecting that people are going to have a hard time grasping it, and I'm worried that Async will make it too easy to make parallelism (not asynchrony this time) mistakes, because it will be easy to introduce mutating state in a loop, hence hard-to-reproduce transient-state bugs. Calling "new Thread()" was scary, and for good reasons; using await will not be, but with mostly the same bad consequences.

I'd rather have better support for immutable structures, method or class purity, and some more functional concepts baked into C#, where the language makes it harder to make mistakes, than trying to bend (or abstract) asynchrony to make it work with the current state of C#.

Then again, I'm not saying Rx is better; it's trying to work around the fact that the BCL and C# 3.0 don't have asynchrony baked in, so the complexity argument still stands.

 

On the other side, the more developers use Async, the more they will need async-savvy consultants as firemen :)

 

 

Why using a timer may not be the best idea

By jay at July 25, 2011 00:00

TLDR: The Reactive Extensions provide a TestScheduler class that abstracts time and allows precise control of a virtual time base. Using the Rx scheduler mechanism instead of real timers enables the creation of fast-running unit tests that validate time-related algorithms without the inconvenience of actual time.

 

Granted, the title is a bit provocative, but nonetheless it's still a bad idea to use timer classes like System.Threading.Timer.

Why is that? Because classes like this one are based on actual time, and that's a problem because it is non-deterministic. Each time you test a piece of code that depends on time, you'll get a somewhat different result; and particularly if you're using very long delays, like a few hours, you probably will not want to wait that long to make sure your code works as expected.

What you actually want, to avoid the side effects of "real" time passing by, is virtual time.

 

Abstracting Time with the Reactive Extensions

The reactive extensions are pretty good at abstracting time, with the IScheduler interface and TestScheduler class.

Most operators in the Rx framework have an optional scheduler they can use, like the Dispatcher or the ThreadPool schedulers. These schedulers are used to change the context of execution of the OnNext, OnCompleted and OnError events.

But for the case of time, the point is to freeze the time and make it "advance" to the point in time we need, and most importantly when we need it.

Let's say that we have a view model with a command that performs a lengthy action on the network, and that we need that action to timeout after a few seconds.

Consider this service contract :

public interface IRemoteService
{
    /// <summary>
    /// Gets the data from the server,
    /// returns an observable call to the server
    /// that will provide either no values, one
    /// value or an error.
    /// </summary>
    IObservable<string> GetData(string url);
}

This contract exposes an observable-based API, where the consumer must take into account the fact that getting data is performed asynchronously, because it may not provide any value for a long time, or not return anything at all.

Next, a method of a view model that is using it :

public void GetServerDataCommand()
{
    IsQueryRunning = true;
    ShowError = false;

    Service.GetData(_url)
           .Timeout(TimeSpan.FromSeconds(15))
           .Finally(() => IsQueryRunning = false)
           .Subscribe(
               s => OnQueryCompleted(s),
               e => ShowError = true
           );
}

And we can test it using Moq like this :

var remoteService = new Mock<IRemoteService>();

// Returns far in the future
remoteService.Setup(s => s.GetData(It.IsAny<string>()))
             .Returns(
                Observable.Return("the result")
                          .Delay(TimeSpan.FromDays(1))
             );

var model = new ViewModel(remoteService.Object);

// Call the fake server
model.GetServerDataCommand();
Assert.IsTrue(model.IsQueryRunning);
Assert.IsFalse(model.ShowError);

// Sleep for a while, before the timeout occurs
Thread.Sleep(TimeSpan.FromSeconds(5));
Assert.IsTrue(model.IsQueryRunning);
Assert.IsFalse(model.ShowError);

// Sleep beyond the timeout
Thread.Sleep(TimeSpan.FromSeconds(11));

// Do we have an error ?
Assert.IsFalse(model.IsQueryRunning);
Assert.IsTrue(model.ShowError);

The problem with this test is that it depends on actual time, meaning that it takes at least 16 seconds to complete. This is not acceptable in an automated test scenario, where you want your tests to run as fast as possible.

 

Adding the IScheduler in the loop

We can introduce the use of an injected IScheduler instance into the view model, like this :

.Timeout(TimeSpan.FromSeconds(15), _scheduler)

Meaning that both Start and Timeout will get executed on the scheduler we provide, for which the time is controlled.
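For reference, here is a minimal sketch of the scheduler-aware view model. The constructor shape is an assumption; the point is simply that the injected IScheduler is the one handed to Timeout:

public class ViewModel
{
    private readonly IRemoteService _service;
    private readonly IScheduler _scheduler;

    public ViewModel(IRemoteService service, IScheduler scheduler)
    {
        _service = service;
        _scheduler = scheduler;
    }

    // GetServerDataCommand is unchanged from the previous listing,
    // except for the Timeout line:
    //     .Timeout(TimeSpan.FromSeconds(15), _scheduler)
}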

We can update the test like this :

var remoteService = new Mock<IRemoteService>();
var scheduler = new TestScheduler();

// Never returns
remoteService.Setup(s => s.GetData(It.IsAny<string>()))
             .Returns(
                Observable.Return("the result", scheduler)
                          .Delay(TimeSpan.FromDays(1), scheduler)
            );

var model = new ViewModel(remoteService.Object, scheduler);

// Call the fake server
model.GetServerDataCommand();
Assert.IsTrue(model.IsQueryRunning);
Assert.IsFalse(model.ShowError);

// Go before the failure point
scheduler.AdvanceTo(TimeSpan.FromSeconds(5).Ticks);
Assert.IsTrue(model.IsQueryRunning);
Assert.IsFalse(model.ShowError);

// Go beyond the failure point
scheduler.AdvanceTo(TimeSpan.FromSeconds(16).Ticks);

// Do we have an error ?
Assert.IsFalse(model.IsQueryRunning);
Assert.IsTrue(model.ShowError);

When the scheduler is created, it holds all the scheduled operations until AdvanceTo is called. Then the scheduled actions are executed according to the current virtual time.
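As a side note, AdvanceTo moves the virtual clock to an absolute tick count, while its companion AdvanceBy moves it by a relative amount; both execute every action due up to the new time. A quick sketch, assuming the Rx Schedule extension methods are in scope:

var scheduler = new TestScheduler();

// Schedule an action 10 virtual seconds from now
scheduler.Schedule(TimeSpan.FromSeconds(10), () => Console.WriteLine("fired"));

scheduler.AdvanceBy(TimeSpan.FromSeconds(9).Ticks);  // nothing happens yet
scheduler.AdvanceBy(TimeSpan.FromSeconds(1).Ticks);  // prints "fired"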

That way, your tests run at full speed, and you can properly test your time-dependent code.

Using the Remote Debugger

By jay at July 22, 2010 20:05

This article is also available in French.

To continue the series of articles about Visual Studio features that have been available for a while now but are commonly under-used, I'll talk in this post about the Remote Debugger.

 

Local Debugging

Visual Studio has a debugger that allows debugging a program when running it using F5, or "Debug / Start Debugging". Visual Studio will then start in a special mode that allows step-by-step execution of the program, the use of features like breakpoints, tracepoints, watches and IntelliTrace, the creation of minidumps, and many more.

The debugger runs the program on the local machine, and uses the permissions of the locally logged on user.

Nothing out of the ordinary. Well, maybe the Reverse Debugging with IntelliTrace in VS2010, which is very cool.

 

Hardware Specific and CrapWare

I don't know about you, but I keep my development PC as stable as possible. I rarely install new software, so that I keep the overall performance stable over time. I will most of the time install new software versions only after having tested them on other PCs to determine their behavior.

Call me a maniac, that's what it is :)

But then, what to do when the need to test an installation program comes up? Or when you need to debug plugins for NI TestStand or LabVIEW? Or when the software needs a very specific kind of hardware that cannot be installed on your development PC? (Rainbow keys, anyone?)


The answer is simple : The Remote Debugger ! When possible, I will test and debug my software on a virtual machine, or on a physical machine that has the appropriate environment to execute the software.

That way, the development environment stays stable, and I don't need to install software that could add some crapware and eat up the few bytes of RAM left :)

The Remote Debugger ?

The idea is to continue using the development machine, where the source code is, and to connect via the network to a machine that will execute the program. After that, the remote debugging session is very similar to a local session, with the exception of "Edit and Continue", which is not supported. But most of the time, we can live without it.

 

Running the debugger from Visual Studio

It is possible to run the execution on the remote machine by using the "Use Remote Machine" option in the "Debug" tab of a C# project. It is important to note that checking this option implies that all paths specified in "Working Directory" or "External Program" are those of the remote machine.

Additionally, Visual Studio will not copy the binaries and PDB files to the remote machine. You have to copy the files to the appropriate location yourself, for instance with a "Post Build Action" targeting a UNC path in the form of "\\mymachine\c$\temp".

 

Attach to a Running Process

It is also possible to attach to a running process by using the "Debug / Attach To Process" option. You just need to fill in the "Qualifier" field with the name of the remote debugger, and choose the process to debug.

Quick hint: The option "Show processes from all users" is not enabled by default. This means that if you want to debug a Windows Service, you will not see it in the list until you enable this option.

Finally, the "Attach To Process" window is also very useful with local processes. It is a very handy feature to create a memory dump of a process that takes too much memory, and analyze it.

 

Installing the Remote Debugger

The Remote Debugger is an additional Visual Studio component that is located on the installation media, in the "Remote Debugger" folder. Three versions exist : x86, x64 and ia64 (RIP, Itanium...). If you have to debug a 32 bits process on a 64 bits machine, I advise that you install both the x86 and x64 versions. You will have to choose which remote debugger to run depending on the .NET runtime that is used. You can see which version to use in the "Type" column of the "Attach to Process" window.

Here's what to do :

  • If you are using VS2008 SP1, you can download it here, and for VS2010 you can use the install located on the DVD
  • Once installed on the remote machine, install the RDBG service with the wizard, using the LocalSystem account.
  • You may have a message about a security issue. If you do, follow these steps :
    • Open the "Local Security Policy" section of the "Administrative Tools" control panel
    • Go to the "Local Policies" / "Security Options"
    • Double click on "Network access: Sharing and security model for local accounts" and set the value to "Classic : Local users authenticate as themselves"
    • Close the window
  • If your machine is not on the same domain as your development machine, or even if it's not on a domain at all, add a local user account on the remote machine that has the same name as your current username, and make it a member of the administrators group. The password also has to be the same.
  • Start the remote debugger on the remote machine. Note that to debug a 32 bits process, you have to run the 32 Bits version of the debugger.
  • On the development machine, open the "Attach to process" window, and type the identifier of the remote debugger (shown in the remote debugger window). It should look like this: administrator@my-machine.

Note that the firewall on both the development and the remote machine can prevent the remote debugger from working properly. You can temporarily disable it, but make sure to enable it back afterward. If you only want to open specific ports, port 135/TCP is used. The Remote Debugger uses DCOM as its communication protocol.

 

And if my breakpoints stay empty red circles?

This very common situation means that the PDB files do not match the loaded binaries. Make sure that you've copied the PDB files at the same time as the DLLs.

The "Debug / Windows / Modules" shows if the debug symbols have been loaded properly, and if it's not the case, the "View / Output / Debug" window will most of the time show why.


Happy debugging !

Version Properly using AssemblyVersion and AssemblyFileVersion

By jay at July 18, 2010 15:25

This article is also available in French.

We talk quite easily about new technologies and things we just learned about, because that's the way geeks work. But for newcomers, this is not always easy. This is a large recurring debate, but I find that it is good to step back from time to time and talk about good practices for these newcomers.

 

The AssemblyVersion and AssemblyFileVersion Attributes

When we want to give a version to a .NET assembly, we can choose one of two common ways : the AssemblyVersion attribute, and the AssemblyFileVersion attribute.

 

Most of the time, and by default in Visual Studio 2008 templates, we can find the AssemblyInfo.cs file in the Properties section of a project. This file will generally only contain an AssemblyVersion attribute, which forces the AssemblyFileVersion to the AssemblyVersion's value. The AssemblyFileVersion attribute is now added by default in the Visual Studio 2010 C# project templates, and this is a good thing.

It is possible to see the value of the AssemblyFileVersion attribute in the file properties window of the Windows file explorer, or by adding the "File Version" column, still in the Windows Explorer.

We can also use the automatic numbering provided by the C# compiler, by the use of :

[assembly: AssemblyVersion("1.0.0.*")]

 

Each new compilation will create a new version.

This feature is enough at first, but when you start having somewhat complex projects, you may need to introduce continuous integration to provide nightly builds. You will then want to version the assemblies in such a way that it is easy to find which source control revision was used to compile them.

You can then modify the Team Build scripts to use tasks such as the AssemblyInfo task of MSBuild Tasks, and generate a new AssemblyInfo.cs file that will contain the proper version.

 

Publishing a new version of an Assembly

To come back to the subject of versioning an assembly properly, we generally want to know quickly, once a project build has been published, which version has been installed on the client's systems. Most of the time, we want to know which version is used because there is an issue, and we will need to provide an updated assembly containing a fix; particularly when the software cannot be completely reinstalled on those systems.

A somewhat real-world example

Let's consider a solution with two strong-named assemblies, Assembly1 and Assembly2, where Assembly1 uses types from Assembly2, and both are versioned with an AssemblyVersion set to 1.0.0.458. These assemblies are part of an official build published on the client's systems.

If we want to provide a fix in Assembly2, we will create a branch in the source control from revision 1.0.0.458, and make the fix in that branch, which will give revision 460, so version 1.0.0.460.

If we let the build system compile that revision, we will get assemblies marked as 1.0.0.460. If we take only Assembly2 and place it on the client's systems, the CLR will refuse to load this new version of the assembly, because Assembly1 requires Assembly2 with version 1.0.0.458. We can use a bindingRedirect in the configuration file to get around that, but this is not always convenient, particularly when we update a lot of assemblies.
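For completeness, such a redirect looks something like this in the application's configuration file (the publicKeyToken is elided here; it would be the token of the strong-named Assembly2):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Assembly2" publicKeyToken="..." culture="neutral" />
        <!-- Redirects the reference compiled into Assembly1 to the fixed build -->
        <bindingRedirect oldVersion="1.0.0.458" newVersion="1.0.0.460" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>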

We can also compile this new version of Assembly2 with the AssemblyVersion set back to 1.0.0.458, but this will have the disadvantage of lying about the actual version of the file, which will make diagnostics more complex in case of another issue later.

Adding AssemblyFileVersion

To avoid having those issues with the resolution of assembly dependencies, it is possible to keep the AssemblyVersion constant, but use the AssemblyFileVersion to provide the actual version of the assembly.

The version specified in the AssemblyFileVersion is not used by the .NET Runtime, but is displayed in the file properties in the Windows Explorer.

We will then have the AssemblyVersion set to the original published version of the application, set the AssemblyFileVersion to the same version, and later change only the AssemblyFileVersion when we publish fixes of these assemblies.
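With the sample's numbers, the AssemblyInfo.cs of the fixed Assembly2 would contain something like this:

using System.Reflection;

// Stays at the originally published version, so Assembly1's reference still resolves
[assembly: AssemblyVersion("1.0.0.458")]

// Carries the actual fix revision, visible in the Windows Explorer file properties
[assembly: AssemblyFileVersion("1.0.0.460")]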

Microsoft uses this technique to version the .NET runtime and BCL assemblies: if we take a look at System.dll for .NET 2.0, we can see that the AssemblyVersion is set to 2.0.0.0, and that the AssemblyFileVersion is set, for instance, to 2.0.50727.4927.

 

Other examples of versioning issues

There are other cases of loading issues linked to a mismatch between the version of a loaded assembly and the expected assembly version.

Custom Behaviors in WCF

WCF gives the developer a way to provide custom behaviors that alter the default behaviors of out-of-the-box bindings, and it is necessary to provide the fully qualified type name, without errors. This is pretty annoying in WCF 3.x because it is somewhat complex to debug, and it is a very good use case for deactivating "Just My Code" to find out why the assembly is not being loaded.

Good news though, this pretty old bug has been fixed in WCF 4.0!

Dynamic Proxy Generators

Some dynamic proxy generators like Castle DynamicProxy 2 or Spring.NET use fully qualified types to generate the code for the proxies, and loading issues can occur if the assembly referenced by the proxy is not exactly the one being loaded, with or without a strong name. These generators are heavily used with AOP, or by frameworks like NHibernate, ActiveRecord or iBATIS.

To be a bit more precise, the use of the ProxyGenerator.CreateInterfaceProxyWithTarget method generates a proxy that targets the assembly that is referenced during the generation of the code for the proxied interface.

To give an example, let's take an interface I1 in an assembly A1(1.0.0.0), which has a method that uses a type T1 from an assembly A2(1.0.0.0). If we change assembly A2 so that its version becomes A2(2.0.0.0), the proxy will not be properly generated, because the reference T1/A2(1.0.0.0) compiled into A1(1.0.0.0) will be used, regardless of whether we loaded A2(2.0.0.0).

The best practice of not changing the AssemblyVersion avoids loading issues of this kind. These issues are not blocking, but they are more work to get around.

And You ?

This is only an example of a "best practice", which seems to have worked properly so far.

And you? What do you do? Which practices do you use to version your assemblies?

[VS2010] On the Impacts of Debugging with “Just My Code”

By jay at July 05, 2010 19:58

This article is also available in French.

The “Just My Code” feature has been there for a while in Visual Studio. Since Visual Studio 2005 actually. And it's fairly easy to miss its details...

At a high level, this feature only shows you the part of the stack that contains your code, mostly those assemblies that are built in debug mode and have debugging symbols (PDB files). Most of the time, this is what you want, particularly if you’re debugging fairly simple code.

But if you’re debugging somewhat complex issues, where you want to intercept exceptions that may be rethrown in parts of the code that are not “Just Your Code”, then you have to disable it.

If you’re an experienced .NET developer, chances are you disabled it because it annoyed you at some point. I did, until a while back.

 

Debugger Exception Handling

The “Just my Code” (I’ll call it JMC for the rest of the article) feature changes a few things in the way the debugger handles exceptions.

If it is enabled, you’ll notice two columns in the “Debug / Exceptions” menu :

  • Thrown, which means that if you check that box, the debugger will break on the least deep rethrow in the stack of the exception
  • User-unhandled, which means that if you check that box the debugger will break if the exception has not been handled by any user code exception handler in the current stack.

 

If it is not enabled, then the same dialog box will display one column :

  • Thrown, which means that the debugger will break as soon as the exception is thrown

 

You’ll probably notice a big difference in the way the debugger handles the “Thrown” option. To be a bit more clear about that difference, let’s consider this code sample :

    static void Main(string[] args)
    {
        try
        {
            var t = new Class1();
            t.Throw();
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
        }
    }
    

Main executable, in debug configuration with debugging symbols enabled

    public class Class1
    {
        public void Throw()
        {
            try
            {
                Throw2();
            }
            catch (Exception e)
            {
                throw;
            }
        }

        private void Throw2()
        {
            throw new InvalidOperationException("Test");
        }
    }

Different assembly, in debug configuration without debugging symbols.

If we execute this code with the debugger with JMC enabled and with the “Thrown” column checked for “InvalidOperationException”, here is the stack trace :

     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x51 bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

And here is the stack trace without the JMC feature :

     NotMyCode.dll!NotMyCode.Class1.Throw2() + 0x46 bytes
     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x3d bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

You’ll notice the impact of the “least deep in the stack rethrow”, which means that if you enable JMC, you will not have the original location of the exception.

Then you may wonder why it may be interesting to have the original location of the exception in the debugger. It is a debugging technique that is commonly used to find tricky issues that throw exceptions deep in code you do not own, and one of these exceptions is often TypeInitializationException. It can be useful to break at the original location to have the proper context, or the stack that led to the exception.

Lately, I’ve been using this technique of “Break on all exceptions” without JMC to troubleshoot the loading of 32 bits assemblies in a 64 bits CLR. You don’t exactly know which exception you’re looking for in the first place, and having JMC “hide” some exceptions is not of great help.

Also, to be fair, deeper and more intense debugging often leads to the use of WinDbg and the SOS extension (and here is a good SOS cheat sheet). But that’s another topic.

 

Step Into “Debugging Experience” with JMC

If you’ve read this far, you may now ask yourself why you would ever want to enable JMC. After all, you can handle your code yourself and with enough experience, you can easily mentally ignore pieces of the stack that are not yours. Actually, the gray font used for code that does not have debugging symbols helps a lot for that.

Well, there’s one example of a good use of JMC : the debugger “Step Into” feature. A very simple feature that allows step-by-step debugging of the software.

If you’re in debugging mode, you’ll step into the code that is called on the next line, if that’s possible, and see what’s in there.

To demonstrate this, let’s consider this example :

    static void Main(string[] args) 
    { 
        var myObject = new MyObject();

        Console.WriteLine(myObject); 
    }
    
    class MyObject 
    { 
        public override string ToString() 
        { 
            return "My object";
        } 
    }
      

This is a very simple program that will use the fact that Console.WriteLine will call the ToString method on the object that is passed as a parameter.

The point of this sample is to make “My Code” (Main) call some of “Not My Code” (Console.WriteLine) that will call back into “My Code” (MyObject.ToString). Easy.

Now if you run this sample under the debugger with JMC disabled and try to “Step Into” Console.WriteLine, you’ll actually step over it. This is not very helpful from the point of view of debugging your own code.

A very concrete example of that lack of “Step Into” can be found with proxies like the ones in Spring.NET or Castle's DynamicProxy: they get in the way of simple debugging. You can’t step into objects that have been proxied to perform some AOP, for instance.

But if you enable JMC, well, you can actually “Step Into” your own code, even if the next actual method in the call was not one of yours.

 

Final Words

Using JMC in this context is very useful, and natural I would say. The feature has been there for so long that I missed its original goals. It originally got in my way for deep debugging purposes, and I dismissed it as a “junior”, even cosmetic, feature. Well, I was wrong…

Anyway, in Visual Studio 2010, JMC has been improved a bit: the way to enable and disable it is now far easier to reach, as it is in the IntelliTrace “Show Calls View”.

Time to switch to Visual Studio 2010, people ! :)

[VS2010] How to disable the Power Tools Ctrl+Click Go to Definition

By Admin at June 13, 2010 17:17

Last week, Microsoft released the Visual Studio 2010 Productivity Power Tool Extensions, which include a lot of features that probably should have made it into the VS2010 RTM, but somehow did not.

 

A must install, really. Just for the fixed Add Reference dialog that includes a search filter. A big time saver.

 

But there’s also another feature, the Ctrl+Click Go to Definition, that allows going to the definition with a simple Ctrl + left click (the F12 key in the default keyboard bindings).

 

If you’re like me, you may be lazy enough to let the text editor select complete words, and you’re probably using Ctrl + left click to select words so you don’t have to aim too much with the mouse pointer. That particular Power Tools feature conflicts directly with it, and you end up constantly going to the definition of types when you just want to select text... And that’s pretty annoying.

 

There does not seem to be a way to disable that feature from the IDE, so you may well disable it the hard way :

  • Go to C:\Users\USER_NAME\AppData\Local\Microsoft\VisualStudio\10.0\Extensions\Microsoft\Visual Studio 2010 Pro Power Tools\10.0.10602.2200
  • Remove or rename the GoToDefProPack.dll file.
  • Enjoy your complete word selection again !

Have fun :)

[VS2010] Configure Code Analysis for the Whole Solution

By jay at March 06, 2010 17:38

This article is also available in French.

Before Visual Studio 2010, configuring Code Analysis was a bit cumbersome. If you had more than a bunch of projects (say 10), it could take a long time to manage a single set of rules for your whole solution. You had to resort to updating all the project files by hand, or use a small tool that would edit the csproj files to set the same rules everywhere.

Not very pleasant, nor efficient. Particularly when you have hundreds of projects.

In Visual Studio 2010, the product team added two things :

  1. Rules are now in external files, and not embedded in the project file. That makes the rules reusable across other projects in the solution. Nice.
  2. There’s a new section in the Solution properties named “Code Analysis Settings”, that allows setting the rule files to use for single projects and, even better, for all projects! Very nice.

That option is also available from the “Analyze” menu, with “Configure Code Analysis for Solution”.

One gotcha there though: to select all files, you can’t use Ctrl+A; you have to select the first item, then hold Ctrl while selecting the last item. Maybe the Product Team will fix that for the release...

Migrating Rules from VS2008

If you’re migrating your projects from VS2008, and were using code analysis there, you’ll notice that the converter will generate a file named “Migrated rules for MyProject.ruleset” for every project in the solution. That’s nice if all your projects don’t have the same rules. But if they do, you’ll have to manage all of them...

Like all programmers, I’m lazy, so I wrote a little macro that will remove all generated ruleset files in the current solution, so that a single rule set can be used.

This is not a very efficient macro, but since it won’t be used that often... You’ll probably live with the bad performance, and bad VB.NET code :)

Here it is :

Sub RemoveAllRuleset()

    For Each project As Project In DTE.Solution.Projects
        FindRuleSets(project)
    Next

End Sub

Sub FindRuleSets(ByVal project As Project)

    Dim ruleSets As List(Of ProjectItem) = New List(Of ProjectItem)

    For Each item As ProjectItem In project.ProjectItems

        ' Recurse into solution folders and nested projects
        If Not item.SubProject Is Nothing Then
            If Not item.SubProject.ProjectItems Is Nothing Then
                FindRuleSets(item.SubProject)
            End If
        End If

        ' Collect the ruleset files generated by the VS2008 converter,
        ' including those sitting directly in top-level projects
        If item.Name.StartsWith("Migrated rules for ") Then
            ruleSets.Add(item)
        End If
    Next

    ' Remove the files outside of the enumeration,
    ' to avoid modifying the collection while iterating over it
    For Each ruleset In ruleSets
        ruleset.Remove()
    Next

End Sub

About me

My name is Jerome Laban, I am a Software Architect, C# MVP and .NET enthustiast from Montréal, QC. You will find my blog on this site, where I'm adding my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.