[Rx] Using the ObserveOn and SubscribeOn operators

By jay at July 24, 2011 20:04

TLDR: This post talks about how the Reactive Extensions ObserveOn operator changes the execution context (the thread) on which the IObserver OnNext/OnError/OnCompleted methods are called, whereas the SubscribeOn operator changes the execution context of the Subscribe methods in the chain of operators. Both methods can be useful to improve the perceived performance of an application's UI by putting work on background threads.

 

When developing asynchronous code, or consuming asynchronous APIs, you often find yourself forced to call specific methods on a specific thread.

The most prominent examples are WPF/Silverlight and WinForms, where you cannot use UI-bound types outside of the UI thread. In the context of WPF, you'll find yourself forced to use the Dispatcher.Invoke method to manipulate the UI from the proper context.

However, you don't want to execute everything on the UI thread, because the UI's responsiveness relies on it. Doing too much on the UI thread can lead to very bad perceived performance, and angry users...

 

Rx framework's ObserveOn operator

I've discussed the use of ObserveOn a few times in the context of WP7, where it is critical to leave the UI thread alone and avoid choppy animations, for instance.

The ObserveOn operator changes the execution context (the scheduler) of a chain of operators, until another operator changes it again.

To be able to demonstrate this, let's write our own scheduler :

public class MyScheduler : IScheduler
{
    // Warning: ThreadStatic is not supported on Windows Phone 7.0 and 7.1
    // This code will not work properly on this platform.
    [ThreadStatic]
    public static int? SchedulerId;

    private int _schedulerId;
    private IScheduler _source;
        
    public MyScheduler(IScheduler source, int schedulerId)
    {
        _source = source;
        _schedulerId = schedulerId;
    }

    public DateTimeOffset Now { get { return _source.Now; } }

    public IDisposable Schedule<TState>(
              TState state, 
              Func<IScheduler, TState, IDisposable> action
           )
    {
        return _source.Schedule(state, WrapAction(action));
    }

    private Func<IScheduler, TState, IDisposable> WrapAction<TState>(
              Func<IScheduler, TState, IDisposable> action)
    {
        return (scheduler, state) => {

            // Set the TLS with the proper ID
            SchedulerId = _schedulerId;

            return action(_source, state);
        };
    }
}

This scheduler's purpose is to intercept calls to the IScheduler methods and tag them with a custom scheduler ID stored in thread local storage; the remaining Schedule overloads can be wrapped the same way, as sketched below. That way, we'll know which scheduler is executing our code.
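
For completeness, here is a minimal sketch of the two missing overloads, assuming the same IScheduler shape as the one used above (one relative due time, one absolute) :

public IDisposable Schedule<TState>(
          TState state,
          TimeSpan dueTime,
          Func<IScheduler, TState, IDisposable> action)
{
    // Same wrapping, for actions scheduled after a relative due time
    return _source.Schedule(state, dueTime, WrapAction(action));
}

public IDisposable Schedule<TState>(
          TState state,
          DateTimeOffset dueTime,
          Func<IScheduler, TState, IDisposable> action)
{
    // Same wrapping, for actions scheduled at an absolute time
    return _source.Schedule(state, dueTime, WrapAction(action));
}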

Note that this code will not work properly on Windows Phone 7, since ThreadStaticAttribute is not supported. And it's still not supported in 7.1... It seems like not enough people are using ThreadStatic for it to make its way into the WP7 CLR...

Anyway, now if we write the following Rx expression :

Observable.Timer(TimeSpan.FromSeconds(1), new MyScheduler(Scheduler.ThreadPool, 42))
          .Do(_ => Console.WriteLine(MyScheduler.SchedulerId))
          .First();

We force the timer to raise OnNext on the ThreadPool through our scheduler, and we'll get the following :

42

Which means that the lambda passed as a parameter to the Do operator got executed in the context of the Scheduler used when declaring the Timer operator.

If we go a bit farther :

Observable.Timer(TimeSpan.FromSeconds(1), new MyScheduler(Scheduler.ThreadPool, 42))
          .Do(_ => Console.WriteLine("Do(1): " + MyScheduler.SchedulerId))
          .ObserveOn(new MyScheduler(Scheduler.ThreadPool, 43))
          .Do(_ => Console.WriteLine("Do(2): " + MyScheduler.SchedulerId))
          .ObserveOn(new MyScheduler(Scheduler.ThreadPool, 44))
          .Do(_ => Console.WriteLine("Do(3): " + MyScheduler.SchedulerId))
          .Do(_ => Console.WriteLine("Do(4): " + MyScheduler.SchedulerId))
          .First();

We'll get the following :

Do(1): 42
Do(2): 43
Do(3): 44
Do(4): 44

Each time a scheduler is specified, the following operators' OnNext delegates are executed on that scheduler.

In this case, we're using the Do operator, which does not take a scheduler as a parameter. There are some operators though, like Delay, that implicitly use a scheduler and change the context.

Using this operator is particularly useful when the OnNext delegate performs a context-sensitive operation, like manipulating the UI, or when the source scheduler is the UI and the OnNext delegate is not related to the UI and can be executed on another thread.

You'll find that operator handy with the WebClient or GeoCoordinateWatcher classes, which both execute their handlers on the UI thread. Watch out for Windows Phone 7.1 (Mango) though, this may have changed a bit.
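
For instance, here is a minimal sketch with a GeoCoordinateWatcher, assuming the WP7 flavor of Rx; the watcher field, the ComputeDistance method and the distanceTextBlock control are made up for the illustration. The expensive work is pushed to the thread pool, and only the final UI update comes back to the dispatcher :

Observable.FromEvent<GeoPositionChangedEventArgs<GeoCoordinate>>(watcher, "PositionChanged")
          // PositionChanged is raised on the UI thread; move the work away from it
          .ObserveOn(Scheduler.ThreadPool)
          .Select(e => ComputeDistance(e.EventArgs.Position.Location))
          // Come back to the dispatcher only to update the UI
          .ObserveOnDispatcher()
          .Subscribe(distance => distanceTextBlock.Text = distance.ToString());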

 

An Rx Expression's life cycle

Using an Rx expression is performed in at least 5 stages :

  • The construction of the expression,
  • The subscription to the expression,
  • The optional execution of the OnNext delegates passed as parameters (whether they are observers or explicit OnNext delegates),
  • The observer chain gets disposed either explicitly or implicitly,
  • The observers can optionally get collected by the GC.

The third stage's execution context is covered by ObserveOn. But for the first two, this is different.

The expression is constructed like this : 

var o = Observable.Timer(TimeSpan.FromSeconds(1));

Almost nothing has been executed here, just the creation of the observables for the entire expression, in a way similar to how IEnumerable expressions work. Until you call IEnumerator.MoveNext, nothing is performed. In Rx expressions, until the Subscribe method is called, nothing happens.

Then you can subscribe to the expression :

var d = o.Subscribe(_ => Console.WriteLine(_));

At this point, the whole chain of operators gets its Subscribe method called, meaning they can start sending OnNext/OnError/OnCompleted messages.

 

The case of Observable.Return and SubscribeOn

Then you meet this kind of expression :

Observable
   .Return(42L)
   // Merge both observables into one, whatever the order of appearance
   .Merge(
      Observable.Timer(
         TimeSpan.FromSeconds(1), 
         new MyScheduler(Scheduler.ThreadPool, 42)
      )
   )
   .Subscribe(_ => Console.WriteLine("Do(1): " + MyScheduler.SchedulerId));

Console.WriteLine("Subscribed !");

This expression will merge the two observables into one that will provide two values, one from Return and one from the timer.

And this is the output :

Do(1):
Subscribed !
Do(1): 42

The Observable.Return OnNext was executed during the call to Subscribe, and since that thread has no SchedulerId, this means that a whole lot of code has been executed in the context of the caller of Subscribe. You can imagine that if that expression is complex, and the caller is the UI thread, this can become a performance issue.

This is where the SubscribeOn operator comes in handy :

Observable
   .Return(42L)
   // Merge both observables into one, whatever the order of appearance
   .Merge(
      Observable.Timer(
         TimeSpan.FromSeconds(1), 
         new MyScheduler(Scheduler.ThreadPool, 42)
      )
   )
   .SubscribeOn(new MyScheduler(Scheduler.ThreadPool, 43))
   .Subscribe(_ => Console.WriteLine("Do(1): " + MyScheduler.SchedulerId));

Console.WriteLine("Subscribed !");

You then get this :

Subscribed !
Do(1): 43
Do(1): 42

The first OnNext is now executed under a different scheduler, making the call to Subscribe a whole lot faster from the caller's point of view.

 

Why not always Subscribe on another thread ?

That might come in handy, but you may not want it as the default behavior, because of this scenario :

Observable.FromEventPattern(textBox, "TextChanged")
          .SubscribeOn(new MyScheduler(Scheduler.ThreadPool, 43))
          .Subscribe(_ => { });

Console.WriteLine("Subscribed !");

You'd get an "Invalid cross-thread access." System.UnauthorizedAccessException, because you would be trying to add an event handler to a UI element from a different thread.

Interestingly though, this code does not work on WP7 but does on WPF 4.

Another scenario may be one where delaying the subscription could lose messages, so you need to make sure you're completely subscribed before events are raised.
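
Here is a minimal sketch of that situation, assuming a hot source such as a Subject<long> that starts publishing right away :

var source = new Subject<long>();

source.SubscribeOn(new MyScheduler(Scheduler.ThreadPool, 43))
      .Subscribe(_ => Console.WriteLine(_));

// If the actual subscription has not yet been performed on the thread pool,
// this value is simply never observed.
source.OnNext(42);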

 

So there you have it :) I hope this helps you understand a bit better those two operators.

Team Build and Windows Phone 7

By jay at May 01, 2011 00:00

Building Windows Phone 7 applications in an agile way encourages the use of Continuous Integration, and that can be done using Team System 2010.

There are a few pitfalls to avoid to get there, but this can be achieved quite easily, with great results.

I won't cover the goodness of automated builds, this has already been covered a lot.

 

Adding Unit Tests

Along with the continuous integration that creates hopefully successful builds out of every check-in of source code, you'll also find the automated execution of unit tests. Team System has the ability to provide nice code coverage and unit test success rates in Reporting Services reports, where statistics can be viewed, which gives good health indicators of the project.

Unfortunately for us, at this point there is no way to automatically execute tests using the WP7 .NET runtime. But if you successfully use an MVVM approach, your view models and non-UI code can be compiled for multiple platforms, because they do not rely on the UI components that may be specific to the WP7 platform. That way, we are still able to test our code on the .NET 4.0 runtime with MSTest and Visual Studio 2010 test projects.

To avoid repeating code, multi-targeted files can be integrated into single-target projects either by :

  • Using the Project Linker tool to link multiple projects,
  • Creating project files in the same folder and using the "include file" command when showing all files in the solution explorer. Make sure to change the output assembly name to something like "MyAssembly.Phone.dll" to avoid conflicts.

Multi-targeted files use the #if directive and the WINDOWS_PHONE define, or the lack thereof, to compile code for the current runtime target, as in the sketch below.
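
Here is a minimal, hypothetical example of such a multi-targeted file (the class and property names are made up for the illustration) :

public static class PlatformInfo
{
    public static string RuntimeName
    {
        get
        {
#if WINDOWS_PHONE
            // Only compiled in the Windows Phone 7 project
            return "Windows Phone 7 (Silverlight)";
#else
            // Only compiled in the .NET 4.0 project used by the MSTest projects
            return ".NET 4.0";
#endif
        }
    }
}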

There is also the option of creating projects with the Portable Library add-in, but there are some caveats and a few constraints when using this method. You may need to externalize code that is not supported, like UrlDecode.

Testing with MSTest ensures that your code runs successfully on the .NET 4.0 runtime, but this does not test it on the WP7 runtime. To be sure that your code also runs successfully there, and since Windows Phone 7 is built on Silverlight 3, tools like the Silverlight 3 Unit Test framework can be used to manually run tests in the emulator. This cannot be integrated into the build for now, unfortunately; you'll have to place that in your QA tests.

 

TeamBuild with the WP7 toolkit

To be able to build WP7 applications on your build machine, you need to install the WP7 SDK on it, along with a Team Build agent and/or controller.

Creating a build definition is done the same way as for any other build definition, except for one detail: you need to set the MSBuild platform to x86 instead of Auto if your build machine is running a 64 bits Windows. This forces MSBuild to use the 32 bits runtime instead of the 64 bits one, under which the SDK does not work properly.

If you don't, when building your WP7 application you'll get this intriguing message :

Could not load file or assembly 'System.Windows, Version=2.0.5.0'

Which is particularly odd considering that you've already installed the SDK, and that dll is definitely available.

You may also find that if you install that DLL from the SDK in the GAC, you'll get that other nice message :

Common Language Runtime detected an invalid program.

Which is most commonly found when mixing 32 bits and 64 bits assemblies, for which the architecture has been explicitly specified instead of "Any CPU". So don't install that DLL in the GAC; just set the MSBuild platform to x86.

 

That's it for now, and happy WP7 building !

[WP7] HttpWebRequest and the Flickr app "Black Screen" issue

By jay at April 22, 2011 14:54

TL;DR: While trying to fix the "Black Screen" issue of the Windows Phone 7 Flickr app 1.3, I found out that HttpWebRequest internally makes a synchronous call to the UI thread, making any network call negatively impact the UI. The entire building of an asynchronous web query is performed on the UI thread, and you can't do anything about it.

Edit: This post was formerly named "About the UI Thread performance and HttpWebRequest", but was in fact about Yahoo's Flickr application and was enhanced accordingly.

When programming on Windows Phone 7, you'll often hear that to improve the perceived performance, you need to get off the UI thread (i.e. the dispatcher) to perform non-UI related operations. By good perceived performance, I mean having the UI respond immediately and not stall when some background processing is done.

To achieve this, you'll need to use the common asynchrony techniques, like queueing work in the ThreadPool, creating a new thread, or using the Begin/End pattern.

All of this is very true, and one very good example of bad UI thread use is the processing of the body of a web request, particularly when using the WebClient, whose events are raised in the context of the dispatcher. From a beginner's perspective, not having to care about changing contexts when developing a simple app that updates the UI provides a particularly good and simple experience.
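
As a minimal sketch of what that looks like (ParseBody and listBox are placeholders for this illustration, not part of any API), everything below, including the potentially expensive parsing, runs on the UI thread :

var client = new WebClient();

client.DownloadStringCompleted += (s, e) =>
{
    // Raised on the dispatcher: both the parsing and the UI update
    // happen on the UI thread.
    var items = ParseBody(e.Result);
    listBox.ItemsSource = items;
};

client.DownloadStringAsync(new Uri("http://www.example.com"));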

But that has the annoying effect of degrading the perceived performance of the application, because many parts of the application end up running on the UI thread.

 

HttpWebRequest to the rescue ?

You'll find that the HttpWebRequest is a better choice in that regard. It uses the Begin/End pattern, and the execution of the AsyncCallback is performed in the context of the ThreadPool. The code in that callback therefore does not impact the perceived performance of the application.

Using the Reactive Extensions, this can be written like this :

var request = WebRequest.Create("http://www.google.com");

var queryBuilder = Observable.FromAsyncPattern(
                                (h, o) => request.BeginGetResponse(h, o),
                                ar => request.EndGetResponse(ar));

queryBuilder()
    /* Perform the expensive work in the context of the AsyncCallback */
    /* from the WebRequest. This will be the ThreadPool. */
    .Select(response => DoSomeExpensiveWork(response))

    /* Go back to the UI Thread to execute the OnNext method on the subscriber */
    .ObserveOnDispatcher()
    .Subscribe(result => DisplayResult(result));

That way, you'll get most of your code to execute off the UI thread, where it does not impact the perceived performance of the application.

 

Why would it not be to the rescue then ?

Actually, it will always be (as of Windows Phone NoDo), but there's a catch. And that's a big deal, from a performance perspective.

Consider this code :

public App()
{
    /* some application initialization code */

    ManualResetEvent ev = new ManualResetEvent(false);

    ThreadPool.QueueUserWorkItem(
        d =>
        {
            var r = WebRequest.Create("http://www.google.com");
            r.BeginGetResponse((r2) => { }, null);

            ev.Set();
        }
    );

    ev.WaitOne();
}

This code basically begins a request on the thread pool, while blocking the UI thread in the App.xaml.cs file. This makes the construction (but not the actual call on the network) of the WebRequest synchronous, and makes the application wait for the request to begin before showing any page to the user.

While this code is definitely not a best practice, there was a code path in the Flickr 1.3 application that was doing something remotely similar, in a more convoluted way. And if you try it for yourself, you'll find that the application hangs in a deadlock during startup, meaning that our event is never set.

 

What's happening ?

If you dig a bit, you'll find that the stack trace for a thread in the thread pool is the following :

  mscorlib.dll!System.PInvoke.PAL.Threading_Event_Wait() 
  mscorlib.dll!System.Threading.EventWaitHandle.WaitOne() 
  System.Windows.dll!System.Windows.Threading.Dispatcher.FastInvoke(...) 
  System.Windows.dll!System.Net.Browser.AsyncHelper.BeginOnUI(...)
  System.Windows.dll!System.Net.Browser.ClientHttpWebRequest.BeginGetResponse(...) 
  WindowsPhoneApplication2.dll!WindowsPhoneApplication2.App..ctor.AnonymousMethod__0(...)

The BeginGetResponse method is trying to execute something on the UI thread. And in our example, since the UI thread is blocked by the manual reset event, the application hangs in a deadlock between a resource in the dispatcher and our manual reset event.

This is also the case for the EndGetResponse method.

But if you dig even deeper, you'll find in the version of the System.Windows.dll assembly from the WP7 emulator (the one in the SDK is a stub for all public types) that the BeginGetResponse method does all the work of actually building the web query on the UI thread !

That is particularly disturbing. I'm still wondering why that network-only code would need to be executed on the UI thread.

 

What's the impact then ?

The impact is fairly simple : the more web requests you make, the less responsive your UI will be, both when processing the beginning and the end of a web request. Each call to BeginGetResponse and EndGetResponse implicitly goes through the UI thread.

In the case of Remote Control applications like mine, which try to provide remote mouse control, all are affected by the same lagging behavior of the mouse. The UI thread is particularly busy processing Manipulation events, and this explains a lot about the performance issues of web requests performed at the same time, even when using HttpWebRequest instead of WebClient. It also explains why the web requests are strongly slowed down until the user stops touching the screen.

 

The Flickr 1.3 "Black Screen" issue

In the Flickr application I've been able to work on, a lot of people were reporting a "black screen" issue, where the application stopped working after a few days.

The application was actually trying to update a resource at startup, in an asynchronous fashion, using HttpWebRequest. Because of a race condition between another lock in the application and the UI thread that was waiting during the app's initialization, this resulted in an infinite "Black Screen" that could only be bypassed by reinstalling the application.

Interestingly enough, at this point in the application's initialization, in the App class constructor, the application is not killed after 10 seconds if it is not showing a page to the user. However, if the application stalls in the constructor of the first page, it is automatically killed by the OS after something like 10 seconds.

Regarding the use of the UI thread inside the HttpWebRequest code, for applications that are network intensive and fetch a lot of small web resources like images, this has a negative impact on performance. The UI thread is constantly interrupted to process network queries and responses.

 

Can I do something about it ?

During the analysis of the emulator version of the System.Windows.dll assembly, I noticed that the BeginGetResponse method checks whether the current context is already the UI thread, and in that case does not push the execution onto the dispatcher.

This means that if you can group the BeginGetResponse calls on the UI thread, you'll spend less time switching between contexts. That's not a panacea, but at the very least you can gain on this side.
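
Here is a minimal sketch of that idea (the URL and the processing are placeholders): the request is started from the dispatcher so that BeginGetResponse does not need an extra context switch, and the actual processing still happens in the callback, on the thread pool :

Deployment.Current.Dispatcher.BeginInvoke(() =>
{
    // Already on the UI thread: BeginGetResponse does not need to marshal
    // its internal work to the dispatcher.
    var request = WebRequest.Create("http://www.example.com");

    request.BeginGetResponse(
        ar =>
        {
            // Thread pool context here; note that EndGetResponse still
            // bounces through the UI thread internally (as of NoDo).
            using (var response = request.EndGetResponse(ar))
            {
                // ... process the response off the UI thread
            }
        },
        null);
});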

 

What about future versions of Windows Phone ?

On the good news side, Scott Gu announced at Mix 11 that the manipulation events will be moved off the UI thread, making the UI "buttery smooth", to take his words. A lot of applications will benefit from this change.

Anyway, let's wait for Mango. I'm guessing this will change things in a very positive way, and allow us to have high performance apps on the Windows Phone platform.

[WP7] A nasty concurrency bug in the bundled Reactive Extensions

By Admin at April 21, 2011 20:05

The Reactive Extensions are included in Windows Phone 7 out of the box: you just have to reference the Microsoft.Phone.Reactive.dll assembly.

 

This is a good thing, because it participates in the democratization of the Rx framework, and aside from the fact that the namespace is not exactly the same as in the desktop version, Microsoft.Phone.Reactive instead of System.Concurrency and System.Linq, there were no major bugs until recently.

In a few applications I've worked on that use the Rx framework, a very interesting unhandled exception was popping up from time to time in my exception tracker. Unlike the double tap issue I found the same way in my Remote Control application, this one was a bit more tricky, see for yourself :

Exception : System.NullReferenceException: NullReferenceException
at Microsoft.Phone.Reactive.CurrentThreadScheduler.Trampoline.Run()
at Microsoft.Phone.Reactive.CurrentThreadScheduler.EnsureTrampoline(Action action)
at Microsoft.Phone.Reactive.AnonymousObservable`1.Subscribe(IObserver`1 observer)
at Microsoft.Phone.Reactive.ObservableExtensions.Subscribe[TSource](IObservable`1 source, Action`1 onNext, Action`1 onError, Action onCompleted)
at Microsoft.Phone.Reactive.ObservableExtensions.Subscribe[TSource](IObservable`1 source, Action`1 onNext)

This exception popped up out of nowhere, with nothing from the caller that would make that "Trampoline" method fail. No actual parameter passed to the Subscribe method was null; I made sure of that by using a precondition for calling Subscribe.

 

Well, it turns out that it's actually a bug related to the use of the infamous not-supported-but-silent ThreadStaticAttribute, which I've already had to work around to make Umbrella work properly on Windows Phone 7.

The lack of a Thread Local Storage creates a concurrency issue around a priority queue that is kept by the CurrentThreadScheduler to perform delayed operations. The system wide queue was accessed by multiple threads at the same time, creating random NullReferenceExceptions.

This means that any call to the Subscribe method may fail if another call to that same method is being made at the same time.

 

In short, do not use the bundled version of Rx in Windows Phone 7 (as of NoDo), but prefer using the latest release from the DevLabs, which does not have this nasty bug.

[Reactive] Being fluent with CompositeDisposable and DisposeWith

By jay at April 04, 2011 13:48

When you're writing a few queries with the Reactive Extensions, you'll probably end up writing a lot of this kind of code :

moveSubscription = Observable.FromEvent<MouseEventArgs>(this, "MouseMove")
                             .Subscribe(_ => { });

clickSubscription = Observable.FromEvent<MouseEventArgs>(this, "MouseClick")
                              .Subscribe(_ => { }); 

You'll probably call the Dispose method on both subscriptions in some Dispose method of the class that creates the subscriptions. But this is a bit too manual for my taste.

Rx provides the CompositeDisposable class, which is basically a list of IDisposable instances that all get disposed when CompositeDisposable.Dispose() is called. So we can write it like this :

var cd = new CompositeDisposable();

var moveSubscription = Observable.FromEvent<MouseEventArgs>(this, "MouseMove")
                                 .Subscribe(_ => { });
cd.Add(moveSubscription);

var clickSubscription = Observable.FromEvent<MouseEventArgs>(this, "MouseClick")
                                  .Subscribe(_ => { });
cd.Add(clickSubscription);

That's better in the sense that I don't need to keep the subscriptions in fields, but this is still too "manual": there are two lines for each subscription, and we still need temporary variables.

That's where a simple DisposeWith extension comes in handy :

public static class DisposableExtensions
{
    public static void DisposeWith(this IDisposable disposable, CompositeDisposable disposables)
    {
        // Register the subscription so it gets disposed along with the composite
        disposables.Add(disposable);
    }
}

A very simple extension, a bit more fluent, that avoids creating a temporary variable just to be able to dispose of a subscription:

var cd = new CompositeDisposable();

Observable.FromEvent<MouseEventArgs>(this, "MouseMove")
          .Subscribe(_ => { })
          .DisposeWith(cd);

Observable.FromEvent<MouseEventArgs>(this, "MouseClick")
          .Subscribe(_ => { })
          .DisposeWith(cd);

This is a method with a side effect, but that's acceptable in this case, and it always ends the declaration of an observable query.
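
When the owner of these subscriptions goes away, a single call tears everything down; a sketch, assuming the cd instance above is kept as a field :

public void Dispose()
{
    // Disposes every subscription that was registered with DisposeWith
    cd.Dispose();
}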

[WP7Dev] Double tap when you expect only one

By jay at March 27, 2011 19:24

I've been developing a free application to do some PC remote control on Windows Phone 7, and it's been very instructive in many ways.

To improve the quality of the software, and be notified when an unhandled exception occurs somewhere in my code, or in someone else's code executed on my behalf, I've added a small opt-in unhandled exception reporting feature. This basically sends me back information about the device, most of what's available in DeviceExtendedProperties for per-device aggregation of exceptions, plus some information like the culture and, of course, the exception stack trace and details.

 

The MarketplaceDetailTask exception

A few recurring exceptions have popped up a lot recently, and one coming up often is the following :

Exception : System.InvalidOperationException: Navigation is not allowed when the task is not in the foreground. Error: -2147220989 
at Microsoft.Phone.Shell.Interop.ShellPageManagerNativeMethods.CheckHResult(Int32 hr) 
at Microsoft.Phone.Shell.Interop.ShellPageManager.NavigateToExternalPage(String pageUri, Byte[] args) 
at Microsoft.Phone.Tasks.ChooserHelper.Navigate(Uri appUri, ParameterPropertyBag ppb) 
at Microsoft.Phone.Tasks.MarketplaceLauncher.Show(MarketplaceContent content, MarketplaceOperation operation, String context) 
at Microsoft.Phone.Tasks.MarketplaceDetailTask.Show() 

This code is called when a user clicks on the purchase image located on some page of the software, and it looks like this :


void PurchaseImage_ManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
{
    var details = new MarketplaceDetailTask();
    details.ContentIdentifier = "d0736804-b0f6-df11-9264-00237de2db9e";
    details.Show();
}

The call is performed directly in the image's ManipulationCompleted event handler.

I've been trying to reproduce it for some time and I finally got it: the user is tapping more than once on the image.

I can see a few reasons why:

  • The ManipulationCompleted event is fairly sensitive and can be raised multiple times even when the user did not tap twice,
  • The user actually tapped twice because the action did not respond fast enough,
  • The user tapped twice because he is used to always tapping twice, as some PC users do... (you know, double clicking on hyperlinks in browsers, things like that).

 

What do we do about it ?

There may actually be more reasons, but this already tells me a lot.

First, I should have some kind of visual feedback on the tap of that image, to tell the user that he has done something (and also to actually follow the design guidelines).

Second, even with that feedback in place, there may always be two subsequent taps, the second one being executed after the first has called MarketplaceDetailTask.Show() and the application has been deactivated. I cannot do much about it, except handle the exception silently, or track the actual application state and not call the method.

I'll go with the exception handler for now, as it is not a very critical piece of code, but I'd rather have some way for the API to tell me that the application cannot do that reliably, instead of having to handle an exception. The API is rather limited on that side: the PhoneApplicationService only raises events and does not expose the current "activation" state.
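
Here is a minimal sketch of that workaround, swallowing the InvalidOperationException raised when a second tap arrives after the first Show() call has started to deactivate the application :

void PurchaseImage_ManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
{
    try
    {
        var details = new MarketplaceDetailTask();
        details.ContentIdentifier = "d0736804-b0f6-df11-9264-00237de2db9e";
        details.Show();
    }
    catch (InvalidOperationException)
    {
        // A second tap made it through after the first Show() call
        // deactivated the application; there is nothing useful to do here.
    }
}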

 

Any other examples of exceptions ?

I'll talk more about some other findings this opt-in exception reporting feature has brought me, some of which seem to be pretty tricky.

Revisited with the Reactive Extensions: DataBinding and Updates from multiple Threads

By Admin at July 26, 2010 18:42

Cet article est disponible en francais.

Recently, I wrote an article about WinForms, DataBinding and Updates from multiple threads, where I showed how to externalize the execution of event handler code on the UI thread.

I used a technique based on Action<Action> that takes advantage of closures, and the fact that an action will carry its context down to the point where it is executed. All this with no strings attached.

This concept of externalization can be revisited with the Reactive Extensions, and the IScheduler interface.

 

The Original Sample

But let's get right to the original code sample :

    public MyController(Action<Action> synchronousInvoker)
    {
        _synchronousInvoker = synchronousInvoker;
        ...
    }

This code is the constructor of the controller for my form, and the synchronous invoker parameter will be something like this :

    _controller = new MyController(a => Invoke(a));

And the invoker lambda is used like this :


    _synchronousInvoker(
        () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
    );

Where Invoke is actually Control.Invoke(), used to execute code on the UI Thread where updates to the UI controls can be safely executed.

While the Action<Action> trick works just fine in achieving the separation of concerns, it is not very obvious, just by looking at the constructor signature, what you are supposed to pass to it.

 

Using the IScheduler interface

To be able to abstract the location used to execute the content of Reactive operators, the Rx team introduced the concept of Scheduler, with a bunch of default scheduler implementations.

It basically exposes an abstracted way for users of the IScheduler instance to schedule the execution of a method in the specific way defined by the scheduler. In our case, we want to execute our handler code on the WinForms message pump, and "there's a scheduler for that".

The sample can be easily updated to use the IScheduler instead of the Action<Action> delegate, and make use of the IScheduler.Schedule() method.

    public MyController(IScheduler scheduler)
    {
        _scheduler = scheduler;
        ...
    }

And replace the call by :


    _scheduler.Schedule(
        () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
    );

Not a very complex modification, but it is far more readable.

And we can use the scheduler provided for WinForms, the yet undocumented System.Concurrency.ControlScheduler, which is not exposed through the Scheduler class because it cannot be created statically and requires a Control instance :

    _controller = new MyController(new ControlScheduler(this));

where this is an instance of a control.

This is much better, and for the unit testing of the Controller, you can easily use the System.Concurrency.CurrentThreadScheduler, because you don't need to switch threads in this context.
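
As a minimal sketch of such a test (MSTest here; DoSomethingThatChangesStatus is a hypothetical trigger, not part of the original controller) :

[TestMethod]
public void ChangingStatus_RaisesPropertyChangedSynchronously()
{
    // CurrentThreadScheduler executes the scheduled actions on the calling
    // thread, so the notification is observed before the assert runs.
    var controller = new MyController(Scheduler.CurrentThread);

    var raised = false;
    controller.PropertyChanged += (s, e) => raised |= e.PropertyName == "Status";

    controller.DoSomethingThatChangesStatus();

    Assert.IsTrue(raised);
}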

 

What about the Reactive Extensions and Silverlight for Windows Phone 7 ?

In a very strange move, the WP7 team moved the IScheduler interface from System.Concurrency to the very specific Microsoft.Phone.Reactive namespace.

I do not quite understand the change of namespace, and it makes code that used to compile easily on both platforms incompatible.

Maybe they considered the Reactive Extensions implementation for Windows Phone too different from the desktop implementation... But the Compact Framework was built around that same assertion, and most of the common code stays in the same namespaces.

If someone has an explanation for that strange refactoring, I'm listening :)

Using the Remote Debugger

By jay at July 22, 2010 20:05

Cet article est disponible en francais.

To continue in the same vein of articles about Visual Studio features that have been available for a while now but are commonly under-used, I'll talk in this post about the Remote Debugger.

 

Local Debugging

Visual Studio has a debugger that allows the debugging of a program when running it using F5, or "Debug / Start Debugging". Visual Studio will then start in a special mode that allows step by step execution of the program, and the use of features like BreakPoints, TracePoints, Watches, IntelliTrace, MiniDump creation and many more.

The debugger runs the program on the local machine, and uses the permissions of the locally logged on user.

Nothing out of the ordinary. Well, maybe the Reverse Debugging with IntelliTrace in VS2010, which is very cool.

 

Hardware Specific and CrapWare

I don't know about you, but I keep my development PC as stable as possible. I rarely install new software, so that the overall performance stays stable over time. Most of the time, I will install new software versions only after having tested them on other PCs to determine their behavior.

Call me a maniac, that's what it is :)

But then, what do you do when the need to test an installation program comes up ? Or when you need to debug plugins for NI TestStand or LabVIEW ? Or when the software needs a very specific kind of hardware that cannot be installed on your development PC ? (Rainbow keys, anyone ?)


The answer is simple : The Remote Debugger ! When possible, I will test and debug my software on a virtual machine, or on a physical machine that has the appropriate environment to execute the software.

That way, the development environment stays stable, and I don't need to install software that could add some crapware and eat up the few bytes of RAM left :)

The Remote Debugger ?

The idea is to continue using the development machine, where the source code is, and to connect via the network to a machine that will execute the program. After that, the remote debugging session is very similar to a local session, with the exception of "Edit and Continue", which is not supported. But most of the time, we can live without it.

 

Running the debugger from Visual Studio

It is possible to run the execution on the remote machine by using the "Use Remote Machine" option in the "Debug" tab of a C# project. It is important to note that checking this option implies that all paths specified in "Working Directory" or "External Program" are those of the remote machine.

Additionally, Visual Studio will not copy the binaries and PDB files to the remote machine. You have to copy the files to the appropriate location yourself, for instance with a "Post Build Action" that uses a UNC path in the form of "\\mymachine\c$\temp".

 

Attach to a Running Process

It is also possible to attach to a running process by using the "Debug / Attach To Process" option. You just need to fill in the "Qualifier" with the name of the remote debugger, and choose the process to debug.

Quick hint: the option "Show processes from all users" is not enabled by default. This means that if you want to debug a Windows Service, you will not see it in the list until you enable it.

Finally, the "Attach To Process" window is also very useful with local processes. It is a very handy feature to create a memory dump of a process that takes too much memory, and analyze it.

 

Installing the Remote Debugger

The Remote Debugger is an additional Visual Studio component that is located on the installation media, in the "Remote Debugger" folder. Three versions exist : x86, x64 and ia64 (RIP, Itanium...). If you have to debug a 32 bits process on a 64 bits machine, I advise that you install both the x86 and x64 versions. You will have to choose which remote debugger to run depending on the .NET runtime that is used; you can see which version to use in the "Type" column of the "Attach to Process" window.

Here's what to do :

  • If you are using VS2008 SP1, you can download it here, and for VS2010 you can use the install located on the DVD
  • Once installed on the remote machine, install the RDBG service with the wizard, using the LocalSystem account.
  • You may have a message about a security issue. If you do, follow these steps :
    • Open the "Local Security Policy" section of the "Administrative Tools" control panel
    • Go to the "Local Policies" / "Security Options"
    • Double click on "Network access: Sharing and security model for local accounts" and set the value to "Classic : Local users authenticate as themselves"
    • Close the window
  • If your machine is not on the same domain as your development machine, or if it's not on a domain at all, add a local user account on the remote machine that has the same name as your current username, and make it a member of the Administrators group. The password also has to be the same.
  • Start the remote debugger on the remote machine. Note that to debug a 32 bits process, you have to run the 32 Bits version of the debugger.
  • On the development machine, open the "Attach to process" window, and type the identifier of the remote debugger (shown in the remote debugger window). It should look like this: administrator@my-machine.

Note that the firewall on both the development and the remote machine can prevent the remote debugger from working properly. You can temporarily disable it, but make sure to enable it back afterwards. If you only want to open specific ports, port 135/TCP is used; the Remote Debugger uses DCOM as its communication protocol.

 

And what if my breakpoints stay empty red circles ?

This is a very common situation which means that the pdb files do not match the loaded binaries. Make sure that you copied the pdb files at the same time as the dlls.

The "Debug / Windows / Modules" shows if the debug symbols have been loaded properly, and if it's not the case, the "View / Output / Debug" window will most of the time show why.


Happy debugging !

Version Properly using AssemblyVersion and AssemblyFileVersion

By jay at July 18, 2010 15:25

Cet article est disponible en francais.

We talk quite easily about new technologies and things we just learned about, because that's the way geeks work. But for newcomers, this is not always easy. This is a large recurring debate, but I find that it is good to step back from time to time and talk about good practices for these newcomers.

 

The AssemblyVersion and AssemblyFileVersion Attributes

When we want to give a version to a .NET assembly, we can choose one of two common ways :

 

Most of the time, and by default in Visual Studio 2008 templates, we can find the AssemblyInfo.cs file in the Properties section of a project. This file will generally only contain an AssemblyVersion attribute, which forces the AssemblyFileVersion to the AssemblyVersion's value. The AssemblyFileVersion attribute is now added by default in the Visual Studio 2010 C# project templates, and this is a good thing.

It is possible to see the value of the AssemblyFileVersion attribute in the file properties window of the Windows file explorer, or by adding the "File Version" column, still in the Windows Explorer.

We can also get an automatic numbering provided by the C# compiler, by the use of :

[assembly: AssemblyVersion("1.0.0.*")]

 

Each new compilation will create a new version.

This feature is enough at first, but when your projects get somewhat complex, you may need to introduce continuous integration to provide nightly builds. You will then want to version the assemblies in such a way that it is easy to find which revision of the source control system has been used to compile them.

You can then modify the Team Build scripts to use tasks such as the AssemblyInfo task of MSBuild Tasks, and generate a new AssemblyInfo.cs file that will contain the proper version.

 

Publishing a new version of an Assembly

To come back to the subject of versioning an assembly properly: we generally want to know quickly, when a project build has been published, which version has been installed on the client's systems. Most of the time, we want to know which version is used because there is an issue, and we will need to provide an updated assembly that contains a fix, particularly when the software cannot be reinstalled completely on those systems.

A somewhat real-world example

Let's consider a solution with two assemblies signed with a strong name, Assembly1 and Assembly2, with Assembly1 using types available in Assembly2, and both versioned with an AssemblyVersion set to 1.0.0.458. These assemblies are part of an official build published on the client's systems.

If we want to provide a fix in Assembly2, we will create a branch in the source control from revision 1.0.0.458, and make the fix in that branch, which will give revision 460, hence version 1.0.0.460.

If we let the build system compile that revision, we will get assemblies that are marked as 1.0.0.460. If we only take Assembly2 and place it on the client's systems, the CLR will refuse to load this new version of the assembly, because Assembly1 requires Assembly2 to be of version 1.0.0.458. We can use the bindingRedirect element in the configuration file to get around that, but this is not always convenient, particularly when we update a lot of assemblies.

We can also compile this new version with the AssemblyVersion forced back from 1.0.0.460 to 1.0.0.458 for Assembly2, but this has the disadvantage of lying about the actual version of the file, which will make diagnostics more complex if another issue happens later.

Adding AssemblyFileVersion

To avoid these issues with the resolution of assembly dependencies, it is possible to keep the AssemblyVersion constant, and use the AssemblyFileVersion to carry the actual version of the assembly.

The version specified in the AssemblyFileVersion is not used by the .NET Runtime, but is displayed in the file properties in the Windows Explorer.

We will then keep the AssemblyVersion set to the originally published version of the application, initially set the AssemblyFileVersion to the same version, and later change only the AssemblyFileVersion when we publish fixes for these assemblies.
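
Applied to the Assembly1/Assembly2 example above, the AssemblyInfo.cs of the patched Assembly2 would look like this (a sketch) :

using System.Reflection;

// Stays at the originally published version so that Assembly1 keeps loading it
[assembly: AssemblyVersion("1.0.0.458")]

// Carries the actual build revision of the fix, visible in the Windows Explorer
[assembly: AssemblyFileVersion("1.0.0.460")]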

Microsoft uses this technique to version the .NET runtime and BCL assemblies: if we take a look at System.dll for .NET 2.0, we can see that the AssemblyVersion is set to 2.0.0.0, and that the AssemblyFileVersion is set, for instance, to 2.0.50727.4927.

 

Other examples of versioning issues

We can find other cases of loading issues linked to a mismatch between the version of a loaded assembly and the version that was expected.

Custom Behaviors in WCF

WCF gives the developer a way to provide custom behaviors to alter the default behaviors of out-of-the-box bindings, and it is necessary to provide the fully qualified name of the behavior type, without errors. This is a pretty annoying bug in versions of WCF prior to 4.0, because it is somewhat complex to debug, and it is a very good use case for the deactivation of "Just My Code" to find out why the assembly is not being loaded.

Good news though, this pretty old bug has been fixed in WCF 4.0 !

Dynamic Proxy Generators

Some dynamic proxy generators like Castle DynamicProxy 2 or Spring.NET use fully qualified type names to generate the code for the proxies, and loading issues can occur if the assembly referenced by the proxy is not exactly the one being loaded, with or without a Strong Name. These frameworks are heavily used for AOP, or by frameworks like NHibernate, ActiveRecord or iBATIS.

To be a bit more precise, the use of the ProxyGenerator.CreateInterfaceProxyWithTarget method generates a proxy that targets the assembly that is referenced during the generation of the code for the proxied interface.

To give an example, let's take an interface I1 in an assembly A1(1.0.0.0), which has a method that uses a type T1 from an assembly A2(1.0.0.0). If we change assembly A2 so that its version becomes A2(2.0.0.0), the proxy will not be properly generated, because the reference T1/A2(1.0.0.0) compiled into A1(1.0.0.0) will be used, regardless of the fact that we loaded A2(2.0.0.0).
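
Here is a minimal sketch of that scenario with Castle DynamicProxy 2; the names I1, T1, A1, A2 and the target instance are hypothetical, as above :

// In assembly A2 (AssemblyVersion 1.0.0.0 at the time A1 was compiled)
public class T1 { }

// In assembly A1 (1.0.0.0), compiled against A2 1.0.0.0
public interface I1
{
    T1 GetValue();
}

// At runtime, in the application; target is an existing I1 implementation
var generator = new Castle.DynamicProxy.ProxyGenerator();

// The generated proxy references T1 as [A2, Version=1.0.0.0]; if the A2 actually
// loaded is the strong-named 2.0.0.0 and there is no binding redirect, the
// proxied type fails to load.
var proxy = generator.CreateInterfaceProxyWithTarget<I1>(target);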

The best practice of not changing the AssemblyVersion avoids loading issues of this kind. These issues are not blocking, but they mean more work to get around them.

And You ?

This is only an example of a "Best Practice", which seems to have worked properly so far.

And you ? What do you do ? Which practices do you use to version your assemblies ?

[VS2010] On the Impacts of Debugging with “Just My Code”

By jay at July 05, 2010 19:58

Cet article est disponible en francais.

The “Just My Code” feature has been there for a while in Visual Studio. Since Visual Studio 2005 actually. And it's fairly easy to miss its details...

At a high level, this feature only shows you the part of the stack that contains your code, mostly those assemblies that are in debug mode and have debugging symbols (pdb files). Most of the time, this is interesting, particularly if you’re debugging fairly simple code.

But if you’re debugging somehow complex issues, where you want to intercept exceptions that may be rethrown in some parts of the code that are not “Just Your Code”, then you have to disable it.

If you’re an experienced .NET developer, chances are you disabled it because it annoyed you at some point. I did, until a while back.

 

Debugger Exception Handling

The “Just my Code” (I’ll call it JMC for the rest of the article) feature changes a few things in the way the debugger handles exceptions.

If it is enabled, you’ll notice two columns in the “Debug / Exceptions” menu :

  • Thrown, which means that if you check that box, the debugger will break on the least deep rethrow in the stack of the exception
  • User-unhandled, which means that if you check that box the debugger will break if the exception has not been handled by any user code exception handler in the current stack.

 

If it is not enabled, then the same dialog box will display one column :

  • Thrown, which means that the debugger will break as soon as the exception is thrown

 

You’ll probably notice a big difference in the way the debugger handles the “Thrown” option. To be a bit more clear about that difference, let’s consider this code sample :

    static void Main(string[] args) 
    { 
        try 
        { 
            var t = new Class1(); 
            t.Throw(); 
        } 
        catch (Exception e) 
        { 
            Console.WriteLine(e); 
        } 
      }
    

Main executable, in debug configuration with debugging symbols enabled

    public class Class1 
    { 
        public void Throw() 
        { 
            try 
            { 
                Throw2(); 
            } 
            catch (Exception e) 
            { 
                throw; 
            } 
        }
        private void Throw2() 
        { 
            throw new InvalidOperationException("Test"); 
        } 
      }

Different assembly, in debug configuration without debugging symbols.

If we execute this code under the debugger with JMC enabled and with the “Thrown” column checked for “InvalidOperationException”, here is the stack trace :

     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x51 bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

And here is the stack trace without the JMC feature :

     NotMyCode.dll!NotMyCode.Class1.Throw2() + 0x46 bytes
     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x3d bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

You’ll notice the impact of the “least deep rethrow in the stack”, which means that if you enable JMC, you will not have the original location of the exception.

Then you may wonder why it may be interesting to have the original location of the exception in the debugger. It is a debugging technique commonly used to find tricky issues that throw exceptions deep in code you do not own, and one of these exceptions is often TypeInitializationException. It can be useful to break at the original location to have the proper context, or the stack that led to the exception.

Lately, I’ve been using this technique of “Break on all exceptions” without JMC to troubleshoot the loading of 32 bits assemblies in a 64 bits CLR. You don’t exactly know which exception you’re looking for in the first place, and having JMC “hide” some exceptions is not of great help.

Also, to be fair, deeper and more intense debugging often leads to the use of WinDBG and the SOS extension (and here is a good SOS cheat sheet). But that’s another topic.

 

Step Into “Debugging Experience” with JMC

If you’ve read this far, you may now ask yourself why you would ever want to enable JMC. After all, you can handle your code yourself, and with enough experience, you can easily mentally ignore the pieces of the stack that are not yours. Actually, the gray font used for code that does not have debugging symbols helps a lot with that.

Well, there’s one example of good use of JMC : The debugger “Step into” feature. A very simple feature that allows step by step debugging of the software.

If you’re in debugging mode, you’ll step into the code that is called on the next line, if that’s possible, and see what’s in there.

To demonstrate this, let’s consider this example :

    static void Main(string[] args) 
    { 
        var myObject = new MyObject();

        Console.WriteLine(myObject); 
    }
    
    class MyObject 
    { 
        public override string ToString() 
        { 
            return "My object";
        } 
    }
      

This is a very simple program that will use the fact that Console.WriteLine will call the ToString method on the object that is passed as a parameter.

The point of this sample is to make “My Code” (Main) call some of “Not My Code” (Console.WriteLine), which in turn calls “My Code” (MyObject.ToString). Easy.

Now if you run this sample under the debugger with JMC disabled, and you try to “Step Into” Console.WriteLine, you’ll actually step over. This is not very helpful from the point of view of debugging your own code.

A very concrete example of that lack of “Step Into” can be found when you have proxies like the ones found in Spring.NET or Castle's DynamicProxy: they get in the way of simple debugging. You can’t step into objects that have been proxied to perform some AOP, for instance.

But if you enable JMC, well, you can actually “Step Into” your own code, even if the next actual method when you step into was not one of yours.

 

Final Words

Using JMC in this context is very useful and natural, I would say. And the feature has been there for so long that I missed its original goals. It originally got in my way for deep debugging purposes, and I dismissed it as a “junior” feature, even a cosmetic one. Well, I was wrong…

Anyway, in Visual Studio 2010, JMC has been improved a bit, as the way to enable and disable it is now far easier to reach: it is in the IntelliTrace “Show Calls View”.

Time to switch to Visual Studio 2010, people ! :)

About me

My name is Jerome Laban, I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I'm adding my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.