Improving the Startup Time of Xaml Metro Style Apps with Multicore JIT

By jay at June 11, 2012 05:15

This post is also available in French.

TL;DR: Microsoft introduced Multicore JIT, a .NET 4.5 feature that records the methods JITted during the startup of an app. This recording can be packaged in a Metro Style app for faster startup on multicore CPUs, by performing background compilation. Improvements range from 20% to 50%.

 

Since the beginning of the year, I’ve had the chance to work with some very interesting people at Microsoft, and one of the features that came out of that work is the use of a new .NET 4.5 feature called Multicore JIT in Metro Style apps.
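For reference, the desktop flavor of this feature is exposed through the System.Runtime.ProfileOptimization class (Metro Style apps get the recording packaged by the tooling instead). A minimal sketch, with made-up paths:

```csharp
using System.Runtime;

static class Program
{
    static void Main()
    {
        // Folder where the JIT profile file is stored and updated.
        ProfileOptimization.SetProfileRoot(@"C:\MyApp\ProfileCache");

        // Replays the recorded profile: methods JITted during previous runs
        // are compiled in the background on another core while Main executes.
        ProfileOptimization.StartProfile("startup.profile");

        // ... normal application startup ...
    }
}
```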

More...

No Threads for you! (in metro style apps)

By jay at March 17, 2012 13:06

This article is also available in French.

As this guy would say, since you’ve most probably been using threads the wrong way (or so Microsoft seems to think), you won’t be able to use the Thread class anymore in Metro Style applications. The class is simply not available anymore, and neither are Timer or ThreadPool.

That may come as a shock to you, but it actually makes a lot of sense. Don’t worry, though: the concept of parallel execution is still there, but it takes the form of Tasks.

 

Why using Threads is not good for you

Threads are very powerful, but there are a lot of terrible gotchas that come with them:

  • Unhandled exceptions in thread handlers, whether raised from a Timer, a Thread or a ThreadPool thread, lead to the termination of the process
  • Using Abort is quite bad for the process, and should be avoided
  • People tend to use Thread.Sleep to arbitrarily wait for some constant time that will most probably be incorrect, wasting CPU resources to manage a thread that does nothing while it waits
  • People tend to come up with complex designs to chain operations on threads, which most of the time fail miserably

There are some more, but these are the main scenarios where Threads fall short.
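As a rough sketch of the Task-based equivalents of the patterns above (the method names here are mine, not from the post):

```csharp
using System;
using System.Threading.Tasks;

class Sketch
{
    static async Task WorkAsync()
    {
        // Instead of new Thread(DoWork).Start():
        await Task.Run(() => DoWork());

        // Instead of Thread.Sleep(1000), which blocks a thread:
        await Task.Delay(1000);

        // An exception thrown in DoWork is captured in the Task and
        // rethrown here on await, instead of tearing down the process.
    }

    static void DoWork() { /* expensive computation */ }
}
```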

More...

Windows 8 Event Viewer’s Immersive-Shell and Metro Style apps

By jay at March 16, 2012 20:41

TL;DR: This article talks about an app startup error that can happen with Metro Style apps in Windows 8, how the presence of an app.config file can prevent the app from starting and how the Windows event log viewer’s new Immersive-Shell section can help.

 

The Windows 8 Metro style Xaml/C# application development is an interesting experience.

Since .NET merely sits on top of WinRT and its native infrastructure, you’re sometimes left in the dark when it comes to debugging problems that originate in WinRT.

Silverlight and Windows Phone also have their fair share of opaque issues of this kind, whether the application exits for no apparent reason (when it is in fact a StackOverflowException) or because you’ve given two xml namespaces the same name.

You’re basically left guessing, particularly on Windows Phone and Silverlight for the desktop. If you’re lucky, you get an error code specific enough to narrow your solution down to the dozen results Google can find for you. If you’re not, well, you get an E_ERROR. Fail, as they say.

Windows 8 is actually a bit better at this, thanks to the Event Viewer. A lot of details appear there, and it’s very informative.

More...

Xaml integration with WinRT and the IXamlMetadataProvider interface

By jay at March 07, 2012 21:04

TL;DR: This article talks about the internals of the WinRT/Xaml implementation in Windows 8 and how it deals with databinding, its use of the IXamlMetadataProvider interface, tips & tricks around it, and how to extend the resolver to create dynamic databinding scenarios.

 

Xaml has been around for a while, and it’s been a big part of Silverlight and WPF. Both frameworks are mostly managed, and use a CLR feature known as Reflection, or type introspection.

This is a very handy feature used by Silverlight/WPF to enable late binding to data types, where strings can be mapped to their actual class counterparts, whether for value converters, UserControls, Data-Binding, etc...

 

The burden of .NET reflection

It comes with a cost, though. Reflection is a very expensive process, and until very recently in Silverlight there was no way to avoid it. The recent addition of the ICustomTypeProvider interface allows for late binding without the use of reflection, which is a big step in what I think is the right direction. This kind of interface allows custom types to expose pre-computed lists of fields and properties, without the runtime having to load all the metadata available for an object and perform expensive type-safety checks.

This burden of reflection is particularly visible on Windows Phone, where it is suggested to limit the use of DataBinding, which is performed on the UI thread. The Silverlight runtime needs to walk the type metadata to find observable properties so that it can properly perform one- or two-way bindings, and this is very expensive.

There are ways to work around this without ICustomTypeProvider, mainly by generating code that does everything the Xaml parser and DataBinding engines do. It is still mainly experimental, but it gives great results.

 

WinRT, native code and the lack of Reflection

In Windows 8, WinRT is pure native code, and now integrates what used to be the WPF/Xaml engine. This new engine can be seen as the crossroads of Silverlight, WPF and Silverlight for Windows Phone: it takes a bit of every framework, with some tweaks.

These tweaks are mainly related to the fact that WinRT is a native COM-based API that can be used equally from C# or C++.

For instance, xml namespaces have changed form and cannot reference assemblies anymore. Declarations that used to look like this:

     xmlns:common="clr-namespace:Application1.Common"

Now look like this:

     xmlns:common="using:Application1.Common"

Here, the using prefix only defines the namespace used to find the types referenced in the inner Xaml.

Additionally, WinRT does not know anything about .NET and the CLR, meaning it cannot do reflection. This means that the Xaml implementation in WinRT, to be compatible with the Xaml we all know, needs to be able to do some kind of reflection.

 

Meet the IXamlMetadataProvider interface

To be able to do some kind of reflection, the new Metro Style Applications profile generates code based on the types that are used in the Xaml files of the project. It takes the form of a hidden file named XamlTypeInfo.g.cs.

That file can be found in the “obj” folder under any Metro Style project that contains a Xaml file. To find it, just click on the “Show All Files” button at the top of the Solution Explorer. You may need to compile the project for it to be generated.

In the entry assembly, the file contains a partial class that extends the App class to make it implement the IXamlMetadataProvider interface. WinRT uses this interface to query for the details of types it found while parsing Xaml files.

This type acts as a map for every type used in all the Xaml files of a project, so that WinRT can get a definition it understands, in the form of IXamlType and IXamlMember instances. This takes the form of a big switch/case construct that contains string representations of fully qualified type names. See this example:

private IXamlType CreateXamlType(string typeName) 
{ 
  XamlSystemBaseType xamlType = null; 
  XamlUserType userType;

  switch (typeName) 
  { 
    case "Windows.UI.Xaml.Controls.UserControl": 
      xamlType = new XamlSystemBaseType(typeName, typeof(Windows.UI.Xaml.Controls.UserControl)); 
      break;

    case "Application1.Common.RichTextColumns": 
      userType = new XamlUserType(this, typeName, typeof(Application1.Common.RichTextColumns), GetXamlTypeByName("Windows.UI.Xaml.Controls.Panel")); 
      userType.Activator = Activate_3_RichTextColumns; 
      userType.SetContentPropertyName("Application1.Common.RichTextColumns.RichTextContent"); 
      userType.AddMemberName("RichTextContent", "Application1.Common.RichTextColumns.RichTextContent"); 
      userType.AddMemberName("ColumnTemplate", "Application1.Common.RichTextColumns.ColumnTemplate"); 
      xamlType = userType; 
      break; 

  } 
  return xamlType; 
} 

It also creates hardcoded methods that can explicitly get or set the value of every property of a DependencyObject, like this:

case "Application1.Common.RichTextColumns.RichTextContent": 
    userType = (XamlUserType)GetXamlTypeByName("Application1.Common.RichTextColumns"); 
    xamlMember = new XamlMember(this, "RichTextContent", "Windows.UI.Xaml.Controls.RichTextBlock"); 
    xamlMember.SetIsDependencyProperty(); 
    xamlMember.Getter = get_1_RichTextColumns_RichTextContent; 
    xamlMember.Setter = set_1_RichTextColumns_RichTextContent; 
    break;

Note that if you want to step into this code without the debugger ignoring you, you need to disable the “Just my code” feature in the debugger options.

Also, in case you wonder, the code generator scans all referenced assemblies for implementations of the IXamlMetadataProvider interface, and generates code that queries these providers to find Xaml type definitions.

 

Code Generation is good for you

Now, this code generation approach is very interesting, for several reasons.

The first and foremost is performance: the runtime does not need to use reflection to rediscover information that is known at compile time. This is an enormous performance gain, and it benefits perceived performance too, as the runtime does not waste precious CPU cycles computing data that can be determined at compile time.

More generally, in my projects I've been using this approach of generating as much code as possible, to avoid using reflection and wasting time and battery on something that can be done once and for all.

The second reason is extensibility, as this IXamlMetadataProvider can be extended to add user-provided types that are not based on DependencyObject. This is another good impact on performance.

 

Adding a custom IXamlMetadataProvider

It is possible to extend the lookup behavior for standard types that are not dependency objects. This opens the same range of scenarios that ICustomTypeProvider provides.

All that is needed is to implement the IXamlMetadataProvider interface somewhere in an assembly, and the code generator used for XamlTypeInfo.g.cs will pick it up and add it to the Xaml type resolution chain. Note that, for some unknown reason, this does not work in the main assembly but only in referenced assemblies.

Every time the databinding engine needs to get the value behind a databinding expression, it calls the IXamlMetadataProvider.GetXamlType method to get the definition of that type, then gets the databound property value.
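To make the shape of the interface concrete, here is a minimal skeleton of such a provider. The type names are mine, and the hand-written IXamlType implementation itself, which would describe the members to WinRT much like the generated XamlUserType does, is left out for brevity:

```csharp
using System;
using Windows.UI.Xaml.Markup;

// A plain class, not a DependencyObject, that we want to databind to.
public sealed class MyPoco
{
    public string Name { get; set; }
}

public sealed class CustomTypeProvider : IXamlMetadataProvider
{
    // Called by WinRT with a CLR type, e.g. when databinding to a POCO.
    public IXamlType GetXamlType(Type type)
    {
        return type == typeof(MyPoco) ? BuildXamlTypeForMyPoco() : null;
    }

    // Called by WinRT with a full type name found while parsing Xaml.
    public IXamlType GetXamlType(string fullName)
    {
        return fullName == "MyAssembly.MyPoco" ? BuildXamlTypeForMyPoco() : null;
    }

    // No extra xmlns mappings to declare.
    public XmlnsDefinition[] GetXmlnsDefinitions()
    {
        return new XmlnsDefinition[0];
    }

    // Would return a hand-written IXamlType describing MyPoco's properties.
    private IXamlType BuildXamlTypeForMyPoco() { /* ... */ return null; }
}
```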

A very good feature, if you ask me.

 

The case of hidden DependencyObject

By hidden DependencyObject types, I’m talking about DependencyObject types that are not directly referenced in Xaml files. This can be pretty useful for complex controls that generate convention-based databinding, such as the SemanticZoom control, which relies on an implicit “Key” property to build the zoomed-out view.

Since this XamlTypeInfo.g.cs code is generated from all known Xaml files, these hidden DependencyObject types do not have code generated for them. This forces the CLR to intercept the failed lookups and fall back on actual .NET reflection-based property searching for databinding, which is not good for performance.

This fallback behavior was not implemented in the Developer Preview, where the binding would just fail with a NullReferenceException, without any specific reason given to the developer.

 

The case of Xaml files located in another assembly

If you’re architecting your solution a bit, you’re probably using MVVM or a similar pattern, and you’re probably putting your views in another assembly.

If you do that, there will not be any Xaml file in your main assembly (aside from App.xaml), leading to an empty XamlTypeInfo.g.cs file. This will make any type resolution requested by WinRT fail, and your application will most likely not run.

In this case, all you need to do is create a dummy Xaml file that will force the generation of the XamlTypeInfo.g.cs, and basically make your layer separation work.

 

Until next time, happy WinRT'ing!

[WPDev] The hidden cost of IL Jitting

By jay at December 02, 2011 22:35

We’ve gotten used to it. Method jitting is negligible. Or is it, really?

 

IL JITing

The compilation from IL to the native architecture assembly language (or JITting) has been part of the CLR from the very beginning. That’s an operation that was added to make the code execute faster, as interpreting the IL was too slow. By default, it’s happening on the fly, when the code path comes to a method that needs to be jitted, and that impacts the code speed when executing the method the first time.

That compilation step is not exactly free. A lot of code analysis and CPU specific optimizations are performed during this step. This is what arguably makes already JITted code run faster than generic compiled code, where the compiler has no knowledge of the target architecture.

This analysis takes a bit of time, but it is taking less and less time to execute, due to CPUs getting faster, or multi-core JITting features like the one found in .NET 4.5.

We’ve come to a point, on desktop and server machines, where the JIT time is negligible, since it’s gotten fast enough not to be noticed, or be an issue, in the most common scenarios.

Still, if there were times when JITting would be an issue, like it used to be around .NET 1.0, NGEN would come to the rescue. This tool (available in a standard .NET installation) pre-compiles assemblies for the current platform, and creates native images stored on disk. When an assembly is NGENed, it appears in the debugger’s “Modules” window named “your_assembly.ni.dll”, along with some other fancy decorations.

There are some caveats, though, like cross-assembly method inlining being ignored. It always comes down to a balance between start-up speed and code execution speed.

 

JITing on Windows Phone

On the phone, though, the CPU is very limited, especially on generation 1 (NoDo) phones. The platform is limited too, considering its relatively young age. At least on the surface.

We’ve gotten used to creating quite a bit of code to ease development, to add levels of abstraction for many common concepts and, lately, for asynchrony.

I’ll take the example of Reactive Extensions (Rx) in this article, just to make a point.

If you execute the following code on a Samsung Focus:

    
    List<TimeSpan> watch = new List<TimeSpan>();

    var objectObservable = Observable.Empty<object>();

    var w = Stopwatch.StartNew();
    Observable.Timeout<object>(objectObservable, TimeSpan.FromSeconds(1));
    watch.Add(w.Elapsed);

    w = Stopwatch.StartNew();
    Observable.Timeout<object>(objectObservable, TimeSpan.FromSeconds(1));
    watch.Add(w.Elapsed);

    output.Text = string.Join(", ", watch.Select(t => t.TotalMilliseconds.ToString()));

You'll consistently get something similar to this:

    20.60, 1.19


Calling an Rx method like this does almost nothing; it’s basically just setup. But 20ms is a long time! Particularly when spent on the UI thread, or any other thread for that matter.

These rough measurements tend to show that the Windows Phone platform (as of Mango, at least) does not perform any NGEN-like pre-jitting, leaving the app with the burden of jitting code on the fly.

Still, not everything can be attributed to JITting; there is also type metadata loading, and type constructors being called.

 

Generating code with T4

So, to sort that out a bit more, let’s use a T4 template to generate code and isolate the JIT a bit more:

<#@ template language="C#" #>
using System;
using System.Collections.Generic;

public class Dummy
{
   public static void Test()
   {
      List<int> list = new List<int>();

      <#for (int i = 0; i < 100; i++) { #>
	  list.Add(<#= i.ToString() #>);
      <#} #>
   }
}

 

For different numbers of iterations, here's what comes out when timing the call to the method:

    Calls    First call    Subsequent calls
    100      1.6 ms        > 0.03 ms
    1000     15.7 ms       > 0.09 ms
    5000     72.8 ms       > 2 ms
    10000    148 ms        > 2 ms

 

While this type of code is not exactly a good real-life scenario, it gives an idea of the cost of the IL JITting step. These are very simple method calls, no branching instructions, no virtual calls, … in short, nothing complex.

But with real code, the IL is a bit more intricate, and there’s got to be more logic involved in the JIT when generating the native code.

 

Wrapping up

Unfortunately, there’s not much that can be done here, except reducing the amount of IL that is generated. That can be a tough job, particularly when customers expect a lot from applications.

One could suggest pushing as much code as possible to a background thread, even code that seemingly does nothing particularly expensive. But that cannot always be done, particularly if the code depends on UI elements.

Finally, pre-jitting assemblies when installing the applications could be an interesting optimization for the Windows Phone platform, and I’m wondering why this has not made its way to the platform yet…

NuGet package customizations and optional references

By jay at November 25, 2011 20:55

This article describes a bit of what NuGet does and why you should take a look at it, but also a package installation customization to work around a problem with packages that contain optional assemblies.

 

NuGet is a fantastic tool. Its ability to ease package discovery, installation and update is a big time saver. As a solution maintainer, you can spend less time deploying and updating your external dependencies, particularly when they’re often updated.

 

Private NuGet Repositories

It can be used to easily install public packages exposed via Microsoft’s servers, but it can also be used to create private package repositories.

I've been using it to publish an internal framework that is often updated and used in many projects. The automatic creation of packages in a build script and the ease of deployment using the NuGet OData server is, again, a big time saver.

Each time a check-in is performed to update the internal framework, a new package is automatically created and it appears as a package update for the projects that use it.

 

Existing libraries and NuGet

The package model is built around some basic rules:

  • Each package is installed using its version as the directory name.
    The NuGet engine will update the “HintPath” nodes of all project references so they point to the updated package location.
  • By default, the install action of a package adds references to all the available assemblies in the target project.
    There’s a pretty good reason for that; you don’t want to download and install assemblies that you don’t need. There is no support for “sub-packages”, even with reference exclusions. Big frameworks, like Enterprise Library, get chunked into small dependent packages, and you can install only those needed by your projects.
  • A project that installed a package and manually referenced package-excluded assemblies will not have these manual references updated to the latest package.
    That is a side effect of the second rule. If you add reference-excluded assemblies to a package, updating that package will not update the manual references you created to these assemblies.
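For illustration, here is the kind of “HintPath” node the NuGet engine rewrites in a .csproj file (the package name and version are made up):

```xml
<Reference Include="MyFramework.Core">
  <!-- NuGet updates this path when the package version changes -->
  <HintPath>..\packages\MyFramework.1.2.0\lib\net40\MyFramework.Core.dll</HintPath>
</Reference>
```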

     

For existing libraries that may not easily be split into small pieces, primarily for time constraints, moving to NuGet can be a tough job. If you make a package with all your assemblies and only reference a few of them by default, then updating the package can quickly become an annoying “Find and Replace” job to manually change the references that did not get updated automatically by the NuGet engine.

 

Using the Install.ps1 script

Fortunately, there is a file that can be included in the package, which by convention is named and located in tools\install.ps1.

It is a PowerShell script that gets called automatically when the package is installed. The interesting part is that this script gets called with a DTE.Project instance that can be used to manipulate the actual project in VS2010!

This means that when the package is installed, it is possible to update the references that were manually created to the previous package assemblies.

 

Updating the manual references

This is not as straightforward as it may seem, but after working around a HintPath update issue, here is a small helper to do the job:

    param($installPath, $toolsPath, $package, $project)
        
        $packageName = $package.Id + '.' + $package.Version;
        $packageId = $package.Id;
    
        # Full assembly name is required
        Add-Type -AssemblyName 'Microsoft.Build, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
    
        $projectCollection = [Microsoft.Build.Evaluation.ProjectCollection]::GlobalProjectCollection
        
        # There is no indexer on ICollection<T> and we cannot call
        # Enumerable.First<T> because Powershell does not support it easily and
        # we do not want to end up MethodInfo.MakeGenericMethod.
        $allProjects = $projectCollection.GetLoadedProjects($project.Object.Project.FullName).GetEnumerator(); 
    
        if($allProjects.MoveNext())
        {
            foreach($item in $allProjects.Current.GetItems('Reference'))
            {
                $hintPath = $item.GetMetadataValue("HintPath")
                
                $newHintPath = $hintPath -replace $packageId + ".*?\\", "$packageName\\"
            
                if ($hintPath -ne $newHintPath)
                {
                    Write-Host "Updating $hintPath to $newHintPath"
                    $item.SetMetadataValue("HintPath", $newHintPath);
                }
            }
        }

This script is called for each project the package is installed into, and it scans all the references of the project that match the current package and updates them.

You’ll notice that the ICollection<T> interface is not particularly PowerShell friendly. Too bad the PowerShell syntax does not allow the use of generic methods, otherwise that nasty GetEnumerator / MoveNext could have gone away. Still, PowerShell is dynamically typed, so using IEnumerator.Current is fine.

Asynchronous Programming with the Reactive Extensions (while waiting for async/await)

By jay at November 25, 2011 20:43

This article was originally published on the MVP Award Blog in December 2011.

Nowadays, with applications that use more and more services that are in the cloud, or simply perform actions that take a user noticeable time to execute, it has become vital to program in an asynchronous way.

But we, as developers, feel at home when thinking sequentially. We like to send a request or execute a method, wait for the response, and then process it.

Unfortunately for us, an application just cannot wait synchronously for a call to end anymore. Reasons can be that the user expects the application to continue responding, or because the application joins the results of multiple operations, and it is necessary to perform all these operations simultaneously for good performance.

Frameworks that are heavily UI dependent (like Silverlight or Silverlight for Windows Phone) try to force the developer's hand into programming asynchronously by removing all synchronous APIs. This leaves the developer alone with either the Begin/End pattern or plain old C# events. Both patterns are not flexible, not easily composable, often lead to memory leaks, and are just plain difficult to use or, worse, to read.

C# 5.0 async/await

Taking a quick look at the not-so-distant future, Microsoft has taken the bold approach of augmenting its new .NET 4.5 with asynchronous APIs and, in the case of the Windows Runtime (WinRT), of restricting some APIs to be asynchronous only. These are based on the Task class, and are backed by language features to ease asynchronous programming.

In the upcoming C# 5.0 implementation, the async/await pattern is trying to handle this asynchrony problem by making asynchronous code look synchronous. It makes asynchronous programming more "familiar" to developers.

If we take this example:

    static void Main(string[] args)
    {
        // Some initialization of the DB...
        Task<int> t = GetContentFromDatabase();

        // Execute some other code when the task is done
        t.ContinueWith(r => Console.WriteLine(r.Result));

        Console.ReadLine();
    }

    public static async Task<int> GetContentFromDatabase()
    {
        int source = 22;

        // Run starts the execution on another thread
        var result = (int) await Task.Run(
            () => { 
                // Simulate DB access
                Thread.Sleep(1000);
                return 10; 
            }
        );

        return source + result * 2;
    }

The code in GetContentFromDatabase looks synchronous, but under the hood, it's actually split in half (or more) where the await keyword is used.

The compiler is applying a technique used many times in the C# language, known as syntactic sugar. The code is expanded to a form that is less readable, but is more of a plumbing code that is painful to write – and get right – each time. The using statement, iterators and more recently LINQ are very good examples of that syntactic sugar.

Using a plain old thread pool call, the code actually looks a lot more like this, once the compiler is done:

    public static void Main()
    {
        GetContentFromDatabase(result => Console.WriteLine(result));
        Console.ReadLine();
    }

    public static void GetContentFromDatabase (Action<int> continueWith)
    {
        // The first half of the async method (with QueueUserWorkItem)
        int source = 22;

        // The second half of the async method
        Action<int> onResult = result => {
            continueWith(source + result * 2);
        };

        ThreadPool.QueueUserWorkItem(
            _ => {
                // Simulate DB access
                Thread.Sleep(1000);

                onResult(10);
            }
        );
    }

This sample is somewhat more complex, and does not properly handle exceptions. But you probably get the idea.

Asynchronous Development now

Nonetheless, you may not want to, or be able to, use C# 5.0 soon enough. A lot of people are still using .NET 3.5 or even .NET 2.0, and new features like async will take a while to be deployed in the field. Even when the framework has offered a feature for a long time, adoption takes time: the awesome LINQ (a C# 3.0 feature) is still being adopted and is not that widely used.

The Reactive Extensions (Rx for friends) offer a framework that is available from .NET 3.5 onward and provides functionality similar to C# 5.0's, but with a different, more functional approach to asynchronous programming. More functional means fewer variables to maintain state, and a more declarative approach to programming.

But don't be scared. Functional does not mean abstract concepts that are not useful for the mainstream developer. It just means (very roughly) that you're going to be more inclined to separate your concerns using functions instead of classes.

But let's dive into some code that is similar to the two previous examples:

    static void Main(string[] args)
    {
        IObservable<int> query = GetContentFromDatabase();

        // Subscribe to the result and display it
        query.Subscribe(r => Console.WriteLine(r));

        Console.ReadLine();
    }

    public static IObservable<int> GetContentFromDatabase()
    {
        int source = 22;

        // Start the work on another thread (using the ThreadPool)
        return Observable.Start<int>(
                   () => {
                      Thread.Sleep(1000);
                      return 10;
                   }
               )

               // Project the result when we get it
               .Select(result => source + result * 2);
    }

From the caller's perspective (the main), the GetContentFromDatabase method behaves almost the same way a .NET 4.5 Task would, and the Subscribe pretty much replaces the ContinueWith method.

But this simplistic approach only works well for an example. At this point, you could still choose to use the basic ThreadPool example shown earlier in this article.

A word on IObservable

An IObservable is generally considered a stream of data that can push zero or more values to its subscribers, followed by either an error or a completion message. This “push”-based model allows the observation of a data source without blocking a thread. This is opposed to the pull model provided by IEnumerable, which performs a blocking observation of a data source. A very good video with Erik Meijer on Channel 9 explains these concepts.

To match the .NET 4.5 Task model, an IObservable needs to provide at most one value, or an error, which is what the Observable.Start method is doing.

A more realistic example

Most of the time, scenarios include calls to multiple asynchronous methods. And if they're not called at the same time and joined, they're called one after the other.

Here is an example that does task chaining:

    public static void Main()
    {
        // Use the observable we've defined before
        var query = GetContentFromDatabase();

        // Once we get the token from the database, transform it first
        query.Select(r => "Token_" + r)

             // When we have the token, we can initiate the call to the web service
             .SelectMany(token => GetFromWebService(token))

             // Once we have the result from the web service, print it.
             .Subscribe(_ => Console.WriteLine(_));
    }

    public static IObservable<string> GetFromWebService(string token)
    {
        return Observable.Start(
            () => new WebClient().DownloadString("http://example.com/" + token)
        )
        .Select(s => Decrypt(s));
    }

The SelectMany operator is a bit strange when it comes to the semantics of an IObservable that behaves like a Task. It can be thought of as a ContinueWith operator: GetContentFromDatabase only pushes one value, meaning that the lambda provided to SelectMany is only called once.

Taking the Async route

A peek at WinRT and the Build conference showed a very interesting rule used by Microsoft when moving to asynchronous API throughout the framework. If an API call nominally takes more than 50ms to execute, then it's an asynchronous API call.

This rule is easily applicable to existing .NET 3.5 and later frameworks by exposing IObservable instances that provide at most one value, as a way to simulate a .NET 4.5 Task.

Architecturally speaking, this is a way to enforce that the consumers of a service layer API will be less tempted to synchronously call methods and negatively impact the perceived or actual performance of an application.

For instance, a "favorites" service implemented in an application could look like this, using Rx:

    public interface IFavoritesService
    {
        IObservable<Unit> AddFavorite(string name, string value);
        IObservable<bool> RemoveFavorite(string name);
        IObservable<string[]> GetAllFavorites();
    }

All the operations, including the ones that alter content, are executed asynchronously. It is always tempting to think that only a select operation will take time, but we easily forget that an Add operation could take just as long.

A word on Unit: the name comes from functional languages, and literally represents the void keyword. A deep .NET CLR limitation prevents the use of System.Void as a generic type parameter, so Unit was introduced to be able to express a void return value.

Wrap up

Much more can be achieved with Rx, but for starters, using it to perform asynchronous single method calls seems to be a good way to learn it.

Also, a note to Rx experts: shortcuts have been taken to explain this in its simplest form, and there surely are many tips and tricks to know to use Rx effectively, particularly when it is used all across the board. The omission of the Completed event is one of them.

Finally, explaining the richness of the Reactive Extensions is a tricky task. Even the smart guys of the Rx team have a hard time doing so... I hope this quick start will help you dive into it!

WinRT and the syntactic sugar around .NET event handlers

By jay at October 17, 2011 19:48 Tags: , , , , ,

If you've watched the great number of videos available from the Build conference, you've probably noticed that the layer between .NET and WinRT is very thin.

So thin that it permeates through to C# 5.0, even though it's not immediately visible to the naked eye.

 

Also, that Windows 8 developer preview is pretty stable... I'm writing this blog post using it, and it's pretty good :) (lovin' the inline spell checker, everywhere !!)

 

What about WinRT ?

The Windows Runtime has been explained here, there and by Miguel de Icaza (and there too by Julien Dollon), but to summarize in other words, WinRT is (at least for now) the new way to communicate with the Windows core, with an improved developer experience. It's the new preferred (and only, as far as I know) way to develop Metro style applications, in many languages like C#/F#/VB, C++, JavaScript and more...

The API is oriented toward developing tablet applications, with the power and connectivity limitation that kind of platform has, plus the addition of what makes Windows Phone so interesting. That means Live Tiles, background agents, background transfers, XAML, background audio, social APIs, camera, sensors, location, and new features like sharing and search contracts, ...

My favorite part of all this is the addition of a new rule that makes a LOT of sense: if an API call nominally takes more than 50ms to execute, then it's an asynchronous API call. But not using the ugly Begin/End pattern, rather through the nice async/await pattern, WinRT style (I'll elaborate on that in a later post). I've even started to apply that rule to my existing development with the Reactive Extensions (and that's yet another later post).

Microsoft has taken the approach of cleaning up the .NET framework with the ".NET Core" profile. For instance, the new TypeInfo class now separates the introspection part from the type-safety part that were historically merged in System.Type. This segregation loads type metadata only when necessary, and not when just doing a simple typeof(). Now, the System.Type type is fairly simple, and to get back all the familiar members like GetMethods() or GetProperties(), there's an extension method called GetTypeInfo() in System.Reflection that gives back the whole reflection side.
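As a small sketch against the .NET 4.5 reflection API, the split looks like this:

```csharp
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // typeof() stays cheap: System.Type no longer carries the
        // whole reflection surface in the .NET Core profile.
        Type type = typeof(string);

        // GetTypeInfo() is the extension method that bridges over
        // to the introspection side of the type.
        TypeInfo info = type.GetTypeInfo();

        foreach (MethodInfo method in info.DeclaredMethods)
        {
            Console.WriteLine(method.Name);
        }
    }
}
```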

There are a lot of other differences, which I'll discuss in a later post. (Yeah, that's a lot to talk about!)

For the .NET developer, WinRT takes the form of *.winmd files that follow the .NET standard metadata format (kind of like TLB files on steroids, if you know what I mean...). These files can be directly referenced from .NET code like any other assembly, it's then very easy to call the underlying Windows platform. No more P/Invoke.

Just before you start freaking out: WinRT does not replace the standard .NET 4.5 full platform you already know, remember that. It's just a new profile, much like Windows Phone or Xbox 360 are profiles, but targeted at Metro style applications. (They're not applications anymore, they're apps :) just so you know...)

 

But how thin is the layer, really ?

To accommodate all these languages, compromises had to be made and underneath, WinRT is native code. Native code means no garbage collection, limited value types, a pretty different exception handling (SEH), and so on.

The CLR and C# compiler teams have done a great job of trying to hide all this, but there are still some corner cases where those differences show through.

For instance, you'll find that there are two EventHandler types: the existing System.EventHandler, and the new Windows.UI.Xaml.EventHandler. What's the difference? See for yourself:

namespace System
{
    [ComVisible(true)]
    public delegate void EventHandler(object sender, EventArgs e);
}

And the other one :

namespace Windows.UI.Xaml
{
    // Summary:
    //     Represents a basic event handler method.
    [Version(100794368)]
    [WebHostHidden]
    [Guid(3817893849, 19739, 19144, 164, 60, 195, 185, 8, 116, 39, 152)]
    public delegate void EventHandler(object sender, object e);
}

The difference is subtle, but it's there : the second parameter is an object. This is kind of troubling, and having to juggle between the two is going to be a bit messy. That's going to be the forced return of conditional compilation and the myriads of #if and #endif...

But the difference does not stop here though. Let's look at how the WinRT handler can be used :

public class MyCommand : Windows.UI.Xaml.Input.ICommand
{
    public event Windows.UI.Xaml.EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) { return true; }

    public void Execute(object parameter) { }
}

Translates to this, after the compiler does its magic :

using System.Runtime.InteropServices.WindowsRuntime;
public class MyCommand : Windows.UI.Xaml.Input.ICommand
{
    public event Windows.UI.Xaml.EventHandler CanExecuteChanged
    {
        add
        {
            return this.CanExecuteChanged.AddEventHandler(value);
        }
        remove
        {
            this.CanExecuteChanged.RemoveEventHandler(value);
        }
    }

    public bool CanExecute(object parameter) { return true; }

    public void Execute(object parameter) { }

    public MyCommand()
    {
        this.CanExecuteChanged = 
           new EventRegistrationTokenTable<Windows.UI.Xaml.EventHandler>();
    }
}

The delegates are not stored in a multicast delegate instance like they used to be, but in an EventRegistrationTokenTable instance, and the add handler now provides a return value! Also, the remove handler's "value" parameter is an EventRegistrationToken instance.

That construct is so new that even the IntelliSense engine is confused by this syntax if you try to write it by yourself, but it compiles correctly.

The return value is of type EventRegistrationToken, and I'm guessing that it must be something WinRT must keep track of to call marshaled managed delegates.

The calling part is also very interesting, if you try to register to that event :

// Before
MyCommand t = new MyCommand();
t.CanExecuteChanged += (s, e) => { };
// After
MyCommand t = new MyCommand();
WindowsRuntimeMarshal.AddEventHandler(
   new Func<Windows.UI.Xaml.EventHandler, EventRegistrationToken>(t.add_CanExecuteChanged)
 , new Action<EventRegistrationToken>(t.remove_CanExecuteChanged)
 , delegate(object s, object e) { }
);

Quite different, isn't it ?

But this syntactic sugar only seems to apply when the WinRT EventHandler delegate type is exposed as an implemented interface member, like in ICommand. It does not appear if the delegate is used somewhere else.

 

Cool. Why should I care?

Actually, you may not care at all, unless you write ICommand implementations.

If you write a command, and particularly ICommand wrappers or proxies, you may want to write your own add/remove handlers. To be able to do so, you'll need to return that EventRegistrationToken too, and map that token back to your delegate.

Here's what I came up with :

public class MyCommand : Windows.UI.Xaml.Input.ICommand
{
    EventRegistrationTokenTable<EventHandler> _table = new EventRegistrationTokenTable<EventHandler>();
    Dictionary<EventRegistrationToken, EventHandler> _reverseTable = new Dictionary<EventRegistrationToken, EventHandler>();
        
    public event EventHandler CanExecuteChanged
    {
        add
        {
            var token = _table.AddEventHandler(value);
            _reverseTable[token] = value;

            // do something with value

            return token;
        }

        remove
        {
            // Unregister value 
            RemoveMyHandler(_reverseTable[value]);

            _table.RemoveEventHandler(value);
        }
    }
}

All this because the EventRegistrationTokenTable does not expose a bi-directional mapping between event handlers and their tokens.

But remember, WinRT and Dev11 are in Developer Preview state. That's not even beta. This will probably change !

[wpdev] Tips and tricks about updating live tiles in Mango

By jay at September 29, 2011 19:10 Tags: , , , ,

Cet article est aussi disponible en francais.

The last published applications I've worked on, like Foursquare, Flickr or TuneIn (and more are coming), all have live tiles, in both the Pull and locally generated forms. But there are a few things to know to have a great experience with them, and you'll find them out by reading this article.

This is a very powerful feature, letting users choose how to customize their own very personal experience, with no one forcing them into having a tile they do not want. This is the very same reasoning behind the absence of an API to add items to the Windows 7 task bar.

 

Live Tiles in Pull mode

In the Foursquare app there is the main tile, updated via the "pull" model, every hour or so (and the "or so" has a very strong meaning).

That tile, which displays the leaderboard, is built in an Azure cloud service using WPF offscreen rendering, based on the requests of the tile Pull engine. It was built this way because of the limited capabilities of NoDo, where background agents were not available to render it locally on the device.

With Windows Phone NoDo, many users were complaining about the main tile not updating, and quite frankly, this has remained a mystery to the end. It seems like tiles would update on some devices but not on others, and would only update if the battery power was above 50%.

Also, these tiles seemed not to update while the device was in standby, but only when the user was looking at the home screen, and once the suggested refresh delay had expired. I say "seemed" because the rules behind these tile updates were either unclear, or broken in some way.

This has changed with Mango though. Pull tiles are now updating almost all the time, but the 50% battery rule still seems to apply.

There's also the 80KB JPEG file size rule: if your image goes over it, your tile won't be displayed.

 

Programmatic Live Tiles

In Foursquare, the user may choose to pin a specific place as a secondary tile to the home screen for easy access.

Updating these tiles can be achieved with the ShellTile API, and with it you can set four things:

  • A title for the front and back
  • An image URL on the front and back
  • Four lines of text on the back
  • A number on the front
  • (and you can forget about the animated tiles like the ones in the People hub)

 

While all these features are interesting, only one of them is actually very useful: the image URLs.

All the other properties are not stylable; they only follow the system's colors, and do not fit very well with user generated content. In the case of Foursquare, Flickr and TuneIn, the displayed images are user provided content, and white-on-white text is not very useful.

On the subject of image URLs, setting an external URL sets the image of the tile, but only as long as the device does not reboot. If the device is rebooted, the tile loses its content. A pretty strange behavior, if you ask me.

 

Using the new isostore uri schema

Fortunately, it is now possible to store the image locally in a special folder of the isolated storage named /Shared/ShellContent, and use the new "isostore" URI prefix, like this: "isostore:/Shared/ShellContent/MyTile.jpg".

This means that you can download the image to display to the isolated storage, and use it from there.
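As a sketch (the file name, helper class and byte-array input are illustrative assumptions), saving the downloaded image and pointing the tile at it could look like this:

```csharp
using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Linq;
using Microsoft.Phone.Shell;

public static class TileUpdater
{
    public static void UpdateTileImage(byte[] imageBytes)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            // The image must live under /Shared/ShellContent to be usable by a tile.
            if (!store.DirectoryExists("Shared/ShellContent"))
            {
                store.CreateDirectory("Shared/ShellContent");
            }

            using (var file = store.OpenFile("Shared/ShellContent/MyTile.jpg", FileMode.Create))
            {
                file.Write(imageBytes, 0, imageBytes.Length);
            }
        }

        // The first active tile is always the application's main tile.
        var tile = ShellTile.ActiveTiles.FirstOrDefault();

        if (tile != null)
        {
            tile.Update(new StandardTileData
            {
                // The isostore prefix points inside the app's isolated storage.
                BackgroundImage = new Uri("isostore:/Shared/ShellContent/MyTile.jpg", UriKind.Absolute)
            });
        }
    }
}
```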

But there's a big problem with this technique: you do not control the size of the downloaded image. So if it is bigger than 80KB, you're stuck with the accent color background.

On a side note, I'd be curious to know the story behind this isostore prefix, because there are only two places that can use it, SQL CE Databases and Live Tiles. This prefix cannot be used as a Source property for Image controls, even though it would be very useful. But I digress.

 

Generating Live Tiles

Fortunately, it's very much possible to generate a complete tile's content using the WriteableBitmap.Render method. This method allows the offscreen rendering of any UIElement, which can then be saved with the SaveJpeg extension method to persist it.

The tiles for Foursquare, Flickr and TuneIn are generated this way, using a user control that a real designer person created. This gives great looking tiles, and the layout and style can be updated depending on the dynamic content.

Here are a few things to know when generating tiles:

  • The "new" (kinda) Silverlight 4 Viewbox control is very handy to resize text to fit the 173x173 layout,
  • You can use an Image control in your render source, but you need to wait for the BitmapImage (not the Image) to raise its ImageOpened event (the Reactive Extensions can be very handy for that),
  • You'll also need to set CreateOptions to None on your BitmapImage to make sure the image is downloaded immediately,
  • If you download images, make sure you have a local fallback image underneath, just in case the remote image cannot be downloaded,
  • Before rendering the content, make sure to call the Measure and Arrange methods to force the layout to the 173x173 size required by the tiles.
  • You may need to call Measure and Arrange multiple times, because for some obscure reason, the control to be rendered may not honor these commands. Check the ActualHeight and ActualWidth properties to see if they are correct.
  • Make sure to render your tile before pinning it to the home screen! The app is basically halted when you call the pin command, and the user may not come back to your app for you to finish the image rendering.
  • Don't take too long to render your tile though; if you wait too much, the user experience is pretty bad. That can definitely be a challenge when downloading content to be displayed on the tile.
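The rendering steps above can be sketched as a small helper (the control passed in is assumed to be a 173x173 user control, and error handling is omitted):

```csharp
using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Windows;
using System.Windows.Media.Imaging;

public static class TileRenderer
{
    public static void RenderTile(UIElement control, string fileName)
    {
        // Force the layout pass to the 173x173 size required by tiles.
        // This may need to be repeated if ActualWidth/ActualHeight are wrong.
        control.Measure(new Size(173, 173));
        control.Arrange(new Rect(0, 0, 173, 173));

        // Render the control offscreen...
        var bitmap = new WriteableBitmap(173, 173);
        bitmap.Render(control, null);
        bitmap.Invalidate();

        // ...and persist it where live tiles can read it.
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile("Shared/ShellContent/" + fileName, FileMode.Create))
        {
            bitmap.SaveJpeg(stream, 173, 173, 0, 90);
        }
    }
}
```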

But then, you may only refresh your tiles when the application is running, unless you use the new Background Agents mango feature.

 

Updating the tiles with Background Agents

Background agents are Microsoft's way of letting third party apps run code in the background, but with some big restrictions on memory (4MB), schedule (30 minutes) and duration (15 seconds) for periodic tasks.

Here are a few tricks about background agents :

  • Periodic agents run at a 30 minute interval, and that is not configurable. So be gentle; you may want to add logic to avoid doing work too often, like not refreshing tiles during the night, and actually updating the tile only every 3 to 6 hours.
  • Don't take too long to generate the tile, 15 seconds is very short. And your task may get killed by the OS before that.
  • Don't rely solely on the agent to update your tiles; the user may disable your agent using the Settings / Applications / Background Agents page. And the OS may prevent it from running, if it needs to.
  • Abuse ScheduledActionService.LaunchForTest to test your background agent,
  • A background agent runs your code in a different process than your application, meaning that both your app and the agent can run at the same time. Watch out for shared resources, like a SQL CE database or an isolated storage file.
  • If you are updating your tiles in both your application and your background agent, you may need to add some IPC using an old fashioned named Mutex (ahh, the good old days) and synchronize access to your resources.
  • Avoid referencing too many assemblies in your background agent; there are a lot of unsupported APIs that can make your app fail certification. You can validate your app using the Marketplace Test Kit automated tests.
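For reference, registering a periodic agent and forcing it to run while debugging could be sketched like this (the task name and description are assumptions):

```csharp
using System;
using Microsoft.Phone.Scheduler;

public static class AgentRegistration
{
    private const string TaskName = "TileRefreshAgent";

    public static void RegisterAgent()
    {
        // Remove any previous registration before re-adding the task,
        // otherwise Add throws for an already-registered name.
        if (ScheduledActionService.Find(TaskName) != null)
        {
            ScheduledActionService.Remove(TaskName);
        }

        ScheduledActionService.Add(new PeriodicTask(TaskName)
        {
            // The description is shown in the Settings / Background Agents page.
            Description = "Refreshes the application's live tiles."
        });

#if DEBUG
        // Force the agent to run in 30 seconds, instead of waiting
        // for the regular 30 minute schedule.
        ScheduledActionService.LaunchForTest(TaskName, TimeSpan.FromSeconds(30));
#endif
    }
}
```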

About the first point: while I understand the power consumption concerns about running more often than every 30 minutes, I still don't get why that interval cannot be set higher, to avoid that very same power consumption issue. There must also be a story behind this...

Then about the last point: during the beta phase of the Mango SDK, the StandardTileData class was considered an unsupported API, making the automatic background update of tiles impossible. Fortunately, this changed with the RC of the SDK, and it is now possible to update tiles from background agents.

 

That's it for now. Have fun with the tiles ! 

[wp7dev] Images and cache control in Windows Phone 7.1 (Mango)

By jay at August 20, 2011 16:39 Tags: , , ,

TL;DR: Windows Phone 7.1 Mango's Image control now respects HTTP/1.1 server cache directives, particularly max-age, meaning better performing apps. And it does this better than you could ever do :)

Image loading is one of the weakest parts of third party apps on Windows Phone 7.0 (NoDo).

The Flickr app on WP7 is a good demonstration of this. The app basically stalls while loading images, and there are (obviously) a lot of images loaded by this app. (The Mango release will make this app really awesome and responsive, I can tell you :))

There are two main reasons for this: image loading happens on the UI thread, and the cache is only partially persisted.

 

Hacking around image loading in NoDo

So if you look around to mitigate those two issues, you'll find a few things like the LowProfileImageLoader from a Microsoftee. It removes a lot of burden from the UI thread by not using the WebClient, and queues requests to avoid having too many downloads at the same time.

But as I've discussed before, this is still not the perfect solution, because HttpWebRequest still goes back to the UI thread, and when many images are loaded the UI easily becomes sluggish.

For the image cache part, Silverlight caches BitmapImage instances based on the URL and persists them across application runs, but ignores the HTTP/1.1 max-age directive. This means that each time you run the application, it will try to refresh the image again. The image may not be downloaded again, but it is still checked. This can significantly delay the display of the image, because of the wait for the server to check whether the image has changed.

If you still want to do some sort of caching without asking the server every time, then you need to handle the storage of downloaded streams yourself, use BitmapSource.SetSource, and perform some in-memory caching of BitmapSource instances to still benefit from the Silverlight cache even though you can't provide a URL. And all of this has to be performed on the UI thread. It really does not help.
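To give an idea, such a NoDo-era workaround might look like this sketch (the cache dictionary, class name and callback shape are assumptions; error handling and persistence to isolated storage are omitted):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Windows;
using System.Windows.Media.Imaging;

public static class ManualImageCache
{
    // In-memory cache of already-decoded images, keyed by URL.
    private static readonly Dictionary<string, BitmapSource> _cache =
        new Dictionary<string, BitmapSource>();

    public static void GetImage(string url, Action<BitmapSource> callback)
    {
        BitmapSource cached;
        if (_cache.TryGetValue(url, out cached))
        {
            callback(cached);
            return;
        }

        var request = (HttpWebRequest)WebRequest.Create(url);

        request.BeginGetResponse(asyncResult =>
        {
            // Copy the response to a seekable stream before decoding.
            var buffer = new MemoryStream();
            using (var response = request.EndGetResponse(asyncResult).GetResponseStream())
            {
                var chunk = new byte[4096];
                int read;
                while ((read = response.Read(chunk, 0, chunk.Length)) > 0)
                {
                    buffer.Write(chunk, 0, read);
                }
            }
            buffer.Position = 0;

            // SetSource must run on the UI thread, which is part of the problem.
            Deployment.Current.Dispatcher.BeginInvoke(() =>
            {
                var bitmap = new BitmapImage();
                bitmap.SetSource(buffer);
                _cache[url] = bitmap;
                callback(bitmap);
            });
        }, null);
    }
}
```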

These are many roadblocks that hurt badly the perceived performance of the application.

 

Images in Mango

If you try to do the same in Mango, doing the background download and caching by yourself, you end up making matters worse.

Mango has changed everything on that front, and the product team has addressed these issues in a very nice way. Loading images is now seamless; you can download as many as you want and the UI will not lag a bit.

If you observe the loading of images in Mango, you'll quickly see that cached images are displayed almost instantly, primarily because the cache engine respects server cache directives. This means that an image will not be checked for a refresh, nor downloaded again, if the cache duration has not expired.

All this means that you pretty much don't need to do anything to display images in Mango, unless you need to bypass server caching directives.

This is good news :)

Also, Silverlight seems to be doing some work off the UI thread that "user code" (us, mere mortals) cannot do, because our code needs to run on the UI thread. This means you have to let Silverlight do its magic to load images in the fastest and most seamless way possible.

 

Image caching in Mango

By looking closely at the way Mango does caching, I've noticed a few things:

  • Images seem to be downloaded once per application run, meaning that server cache directives are ignored until the application restarts (fast application resume does not seem to count as a restart),
  • Images that need to be refreshed are checked for modifications on the server, and if an HTTP 304 (Not Modified) is sent back, the cached image stays.
  • ETag is supported, the If-None-Match header is sent when the max-age has been reached.
  • If-Modified-Since is also sent when the max-age time span has been reached,
  • When using BitmapCreateOptions.IgnoreImageCache
    • Server cache directives do not seem to be bypassed, the cache is not refreshed until max-age has been reached
    • If the Cache-Control max-age and Expires headers are not specified, the cache does not seem to ever be refreshed
    • If Expires is specified but not max-age, then the server is called to check for a newer version with If-Modified-Since

This is very good news, since most web server implementations support and respect the HTTP/1.1 Cache-Control directives, meaning that images will be displayed and refreshed properly by default.
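On the server side, emitting a proper max-age is all it takes to benefit from this. A sketch of a classic ASP.NET image handler (the handler name and file path are assumptions) could look like this:

```csharp
using System;
using System.Web;

public class TileImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Let clients (including Mango's image cache) keep the image
        // for 6 hours before checking back with the server.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromHours(6));
        context.Response.Cache.SetLastModified(DateTime.UtcNow);

        context.Response.ContentType = "image/jpeg";
        context.Response.WriteFile(context.Server.MapPath("~/Images/tile.jpg"));
    }

    public bool IsReusable { get { return true; } }
}
```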

About me

My name is Jerome Laban. I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.