On the Startup Performance of a WPF ElementHost in Winforms

By Jay at August 09, 2009 17:34 Tags:

This post is also available in French.

Imagine you're dealing with a relatively complex WinForms application, and you've been tempted for a while by the charms of WPF, so you want to integrate a WPF control somewhere in a form buried deep in the application. The form in question is opened and closed a lot.

A solution is to develop a WPF control and integrate it into the WinForms application using an ElementHost, that little piece of "magic" that saves a great deal of time.

Soon enough, you'll discover that loading the WPF control takes a lot of time... On my machine, a Core 2 Duo at 3GHz, it takes something like 3 to 4 seconds before the form that contains the ElementHost is displayed properly. And as if that were not enough, the form is painted in chunks... Not that fancy... Simple intuition suggests that it is the initialization of WPF itself that takes too long.

The solution to speed things up is rather simple: just keep an "empty" ElementHost visible at all times. Place a control of the ElementHost type, sized 1x1, somewhere on a form that stays visible for the duration of the application.

The result: a form that shows up in no time and without any visual glitch. Of course, the "initial" form that contains the empty ElementHost will still take some time to load, but after that, all other forms that contain WPF controls will show up instantly.

From a more technical point of view, it seems that WPF is initialized when the first ElementHost of the application is initialized, and torn down when the last ElementHost of the application is closed. A small analysis using Reflector did not show the existence of a method named "InitializeAndKeepWPFInitialized()", and it is probably just a matter of instantiating the proper WPF type to initialize WPF... But the empty ElementHost is more than enough!
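As a sketch of the trick, here is what the warm-up host could look like. This is illustrative code, not from the original post: "MainForm" is a hypothetical form that stays open for the whole lifetime of the application, and the project is assumed to reference WindowsFormsIntegration, PresentationCore and PresentationFramework.

```csharp
using System.Windows.Forms;
using System.Windows.Forms.Integration;

public class MainForm : Form
{
    private readonly ElementHost _wpfWarmUp;

    public MainForm()
    {
        // Creating the first ElementHost initializes WPF; keeping this
        // 1x1 host alive prevents WPF from being torn down when other
        // forms hosting WPF content are closed.
        _wpfWarmUp = new ElementHost
        {
            Width = 1,
            Height = 1,
            Child = new System.Windows.Controls.TextBlock(),
        };

        Controls.Add(_wpfWarmUp);
    }
}
```

Any other form containing an ElementHost then opens instantly, since WPF is already up.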

WCF Streamed Transfers, IIS6 and IIS7 HTTP KeepAlive

By Jay at July 10, 2009 21:18 Tags: ,

This post is also available in French.

A while back, I was working on a client issue where I was getting an unusual socket exception from a WCF client connecting to an IIS6-hosted WCF service.

To make a long story short: if you're using .NET 3.5 WCF streamed transfers on IIS6 and making a lot of transfers in a short time, disable the KeepAlive feature on your web site. The performance will be lower, but it will last longer (and without a client support call).

Still here with me? :) If you have a bit more time to read, here are some details about what I found on this issue...

The setup is pretty simple: a WCF client sends a stream to a WCF service whose transferMode is set to Streamed. This allows the transfer of a lot of data using genuine streaming, which means that the client writes to a System.IO.Stream instance, the server reads from another System.IO.Stream, and the data does not need to be transferred all at once, as in a "normal" SOAP exchange. I'm using the required basicHttpBinding on both ends.
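For reference, a minimal sketch of such a binding configuration could look like this (the binding name and quota value are illustrative):

```xml
<basicHttpBinding>
  <binding name="streamedHttpBinding"
           transferMode="Streamed"
           maxReceivedMessageSize="2147483647" />
</basicHttpBinding>
```

The same transferMode must be configured on both the client and the server bindings.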

The strange thing is that after having made more than 15000 requests to transfer streams, I was receiving this exception:

System.ServiceModel.CommunicationException: Could not connect to http://server/streamtest/StreamServiceTest.Service1.svc.
TCP error code 10048: Only one usage of each socket address (protocol/network address/port) is normally permitted 
---> System.Net.WebException: Unable to connect to the remote server
---> System.Net.Sockets.SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted

This is a rather common issue, mostly found when an application tries to bind to a TCP port but cannot, either because the port is already bound by another application, or because the application does not use the SO_REUSEADDR socket option and the port was closed very recently.

What is rather unusual is that this exception is raised on the client side and not on the server side!

After a few runs of netstat -an, I found that an awful lot of sockets were lingering in the following state:

TCP    the.client:50819     the.server:80          TIME_WAIT

There were something like 15000 lines of this, with incrementing local port numbers. This state is normal, it's meant to be that way, but it's usually found lingering on a server, much less often on a client.

That could mean only one thing, considering that IIS6.0 is an HTTP/1.1-compliant web server: WCF is requesting that the connection be closed after a streamed transfer.

Wireshark being my friend, I started looking at the content of the dialog between IIS6 and my client application:

POST /streamtest/StreamServiceTest.Service1.svc HTTP/1.1
MIME-Version: 1.0
Content-Type: multipart/related; type="application/xop+xml";start="<http://tempuri.org/0>";boundary="uuid:41d2cf74-aaa6-4a80-a6c4-0ec37692a437+id=1";start-info="text/xml"
SOAPAction: "http://tempuri.org/IService1/Operation1"
Host: the.server
Transfer-Encoding: chunked
Expect: 100-continue
Connection: Keep-Alive

The server answers this:

HTTP/1.1 100 Continue

Then the stream transfer takes place, the SOAP response comes back, and at the end:

HTTP/1.1 200 OK
Date: Sat, 11 Jul 2009 01:40:16 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Connection: close
MIME-Version: 1.0
Transfer-Encoding: chunked
Cache-Control: private

I quickly found out that IIS6.0, or the WCF handler, forces the connection to close on this last request. That's not particularly unusual, since a server may explicitly refuse an HTTP client's request to keep the connection alive.

What's even more unusual: when, out of sheer luck, I tried deactivating the IIS6.0 keep-alive setting on my web site, I noticed that all the connections were properly closed on the client...!

I tried analyzing the dialog between the client and the server a bit deeper, and I noticed two differences:

  1. The final answer from IIS contains two "Connection: close" headers, which could mean one from the WCF handler and one from IIS itself. I'm not sure whether repeating headers is forbidden by the RFC; I'd have to read it again to be sure.
  2. It looks like the order of the FIN/ACK, ACK packets is a bit different, but I'm not sure where that stands either. Both the client and the server send FIN packets to the other side, probably the result of calling Socket.Close().

But then I found out something even stranger: it all works on IIS7! And best of all, the KeepAlive status is honored by the web server. That obviously means that the overall performance of the web service is better on IIS7 than on IIS6, since a single connection is reused for all my 15000 calls, which is rather good. Too bad my client cannot switch to IIS7 for now...

It also seems that the WCF client does not behave the same way as it does with IIS6: when keep-alive is disabled, at the TCP level only the client sends a TCP FIN packet, and the server does not.

I think I'll be posting this on Microsoft Connect soon. I'm not sure whether the problem lies in IIS6, the WCF client or the WCF server handler, but there is definitely an issue here.

Working with Bill Graziano's ClearTrace to optimize SQL queries

By Jay at May 27, 2009 19:37 Tags: ,

This article is also available in French.

After listening to the RunAs Radio show #103 with Bill Graziano, I decided to give his tool ClearTrace, an SQL trace analysis tool, a try.

It turns out that I've been on an SQL optimization spree recently, and the project I'm working on had a complete sequence of operations that took more than 24 hours to finish. Analyzing traces with the SQL Profiler can be time consuming, needle-in-a-haystack consuming, particularly when the log is over 7GB in size, as it was in this case.

Small queries that are executed thousands of times are rather hard to track down, and finding proper candidates for optimization is a bit complex. You don't want to spend time optimizing a query that has a small impact.

This is where Bill's tool comes into play. Give it a trace file, and it analyzes it and gives you aggregate information about what takes the most CPU/Duration/Reads/Writes. Pick your victim.

After a few hours and a few runs of ClearTrace to find which stored procedures needed rework, I found a bunch of stored procedures that were executed thousands of times and were using a lot of cumulative I/O. After optimizing these procedures, the whole process that took more than 24 hours is now down to about 7 hours.

Nothing magic here; the key is to find what to optimize in long-running processes executing millions of queries. Bill's tool does that perfectly!

On a side note, at first ClearTrace threw an out-of-memory exception when trying to import my big trace file. After exporting the code with Reflector and debugging it, I spotted a small refactoring issue that Bill fixed very quickly. Thanks Bill!

As Carl Franklin says in .NET Rocks' "Better Know a Framework", learn it, use it, love it!

A C# Traverse extension method, with an F# detour

By Jay at May 17, 2009 09:10 Tags: , ,

This article is also available in French.

The Traverse extension method in C#

Occasionally, you'll come across data structures that take the form of singly linked lists, such as the MethodInfo class and its GetBaseDefinition method.

Let's say that, for a virtual method on a specific type, you want to discover which overridden method in the hierarchy is marked with a specific attribute. I assume in this example that the expected attribute is not inheritable.

You could implement it like this:


    private static MethodInfo GetTaggedMethod(MethodInfo info)
    {
        MethodInfo ret = null;

        do
        {
            var attr = info.GetCustomAttributes(typeof(MyAttribute), false) as MyAttribute[];

            if (attr.Length != 0)
                return info;

            ret = info;
            info = info.GetBaseDefinition();
        }
        while (ret != info);

        return null;
    }

This method has two state variables and a loop, which makes it a bit harder to stabilize. It could easily be expressed as a LINQ query, but (as far as I know) there is no built-in way to enumerate a data structure that is part of a linked list.

To be able to do this, which is to "traverse" a list of objects of the same type where each links to the next, an extension method containing a generic iterator can be written like this:

    public static class Extensions
    {
        public static IEnumerable<T> Traverse<T>(this T source, Func<T, T> next)
        {
            while (source != null)
            {
                yield return source;
                source = next(source);
            }
        }
    }

This is a really simple iterator method: it calls a delegate to get the next element from the current one, and stops when the next value is null.

It can easily be used like this, with the GetBaseDefinition example:

   var methodInfo = typeof(Dummy).GetMethod("Foo");

   IEnumerable<MethodInfo> methods = methodInfo.Traverse(m => m != m.GetBaseDefinition() ? m.GetBaseDefinition() : null);

To be precise, the lambda is not exactly perfect, as it calls GetBaseDefinition twice. It can definitely be optimized a bit.
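For instance, one way to do that is to hoist the call into a statement-bodied lambda, so that GetBaseDefinition is only called once per element:

```csharp
IEnumerable<MethodInfo> methods = methodInfo.Traverse(m =>
{
    // A method is its own base definition when it is not an override,
    // so returning null here ends the traversal.
    var baseDefinition = m.GetBaseDefinition();
    return baseDefinition != m ? baseDefinition : null;
});
```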

Anyway, to go back to the first example, the GetTaggedMethod function can be written as a single LINQ query, using the Traverse extension:

    private static MethodInfo GetTaggedMethod(MethodInfo info)
    {
        var methods = from m in info.Traverse(m => m != m.GetBaseDefinition() ? m.GetBaseDefinition() : null)
                      let attributes = m.GetCustomAttributes(typeof(MyAttribute), false)
                      where attributes.Length != 0
                      select m;

        return methods.FirstOrDefault();
    }

I, for one, find this code more readable... But this is a question of taste :)

Nonetheless, the MethodInfo linked list is not the perfect example, because the end of the chain is not a null reference but the same method we're testing. Most of the time, a chain will end with null, which is why the Traverse method uses null to end the enumeration. I've been using this method to perform queries on a hierarchy of objects that each have a parent of the same type, with the root's parent set to null. It has proven quite useful and concise when used in a LINQ query.
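As an illustration of that scenario, here is a sketch with a hypothetical Node type (not from the original post) whose Parent is null at the root, using the Traverse extension defined above:

```csharp
public class Node
{
    public string Name { get; set; }
    public Node Parent { get; set; }
}

// A two-level chain: a leaf whose parent is the root.
var leaf = new Node
{
    Name = "Leaf",
    Parent = new Node { Name = "Root", Parent = null }
};

// Enumerates the node itself, then each ancestor, and stops
// when Parent is null: "Leaf", then "Root".
var names = leaf.Traverse(n => n.Parent).Select(n => n.Name);
```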

An F# Detour

As I was here, I also tried to find out what an F# version of this code would be. So, with the help of recursive functions, I came up with this :

    let rec traverse(m, n) =
       let next = n(m)
       if next = null then
           [m]
       else
           [m] @ traverse(next, n)

The interesting part here is that F# does not require specifying any type. "m" is actually an object, "n" an (obj -> obj) function, and the function returns a list of objects. It's used like this:

    let testMethod = typeof<Dummy>.GetMethod("Foo")

    for m in  traverse(testMethod, fun x -> if x = x.GetBaseDefinition() then null else x.GetBaseDefinition()) do
       Printf.printfn "%s.%s" m.DeclaringType.Name m.Name

Actually, the F# traverse function is not exactly like the C# Traverse method: it is not an extension method, and it is not lazily evaluated. It is also a bit more verbose, mainly because I did not find an equivalent of the ternary operator "?:".

After digging a bit into the F# language spec, I found that there is a rough equivalent of the yield keyword. It is used like this:

    let rec traverse(m, n) =
       seq {
           let next = n(m)
           if next = null then
               yield m
           else
               yield m
               yield! traverse(next, n)
       }

It is used the same way, but the return value is no longer a list but a sequence.

I also find it interesting that F# can return tuples out of the box; for my attribute lookup, I'd get both the method and the attribute instance that was found. Umbrella also defines tuples usable from C#, but it's an add-on.

F# is getting more and more interesting as I dig into its features and capabilities...

Hyper-V Virtual Machine Mover

By Jay at May 10, 2009 15:20 Tags:

Thanks to the guys at Lakewood Communications, I've updated my tool to move Hyper-V Virtual Machines so that it now supports VMs that do not have any snapshots.

This version also has a minor fix to better guess the original VM path and replace it correctly in the configuration file. Previously, that could mean you had detached a VM but could not attach it back.

On the Windows 2008 R2 front, now that the RC is out: since I don't have any spare hardware to test it on for the time being, I don't know whether the tool works there, or whether it is even still needed. If you have tested my tool on this OS, please let me know.

Download the latest version here.

SharePoint : The database connection string is not available. (0xc0041228)

By Jay at March 31, 2009 20:21 Tags:

A SharePoint Services 3.0 setup I'm managing had a few issues lately, and I had to bring back an old version of the system. The original setup had Search Server Express 2008 installed; the backup I restored did not, even though I had its databases.

After reinstalling everything that was needed and getting the Search Server properly indexing content, I kept getting a lot of messages like "The database connection string is not available." in the event log, and "Your search cannot be completed because of a service error." in the search tool of every SharePoint site. The content database was properly associated with the correct indexer.

I did not notice at first that the service named "Windows SharePoint Services Search" was not started, and when I tried to start it, I got a nice "The handle is invalid." error message... Not very helpful.

A few posts around the web suggested stopping that service, then restarting it. One suggested checking the service's user account, which was "Network Service" in my case. I changed it to the same domain account that the "Windows SharePoint Services Timer" service uses. At that point, the service started properly, but I was still getting the "The database connection string is not available." message.

In the "Services on Server", I tried stopping the "Windows SharePoint Services Search" service, (telling me that it was going to delete the index content), which succeeded. But trying to restart the service gave me an error saying that the database already had content, and that I had to create a new one.

I did create a new database, but the service would still not start, this time giving another error message that I enjoy so much: "An unknown problem occurred".

I went back to some forum posts and came across a command to "initialize" the service from the command line with STSADM:

 stsadm -o spsearch -action start -farmserviceaccount [domain_account] -farmservicepassword [domain_account_password]

Which at first gave me this:

 The specified database has an incorrect collation.  Rebuild the database with the Latin1_General_CI_AS_KS_WS collation or create a new database.

I re-created the database with the proper collation, then ran stsadm again, and it gave me this:
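Re-creating the database with the expected collation boils down to something like this (the database name is illustrative):

```sql
CREATE DATABASE WSS_Search_MyServer
COLLATE Latin1_General_CI_AS_KS_WS;
```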

 The Windows SharePoint Services Search service was successfully started.

Hurray! That did the trick, and indeed my searches in SharePoint sites were no longer returning any error. I just had to wait for the service to rebuild its index, and my search was running again!

This is a long and verbose post, but I hope it will help someone facing this cryptic message...



Hyper-V Virtual Machine Mover and Hyper-V Server

By Jay at March 28, 2009 14:47 Tags:

I've had some hardware trouble lately, with hard drives failing with some VMs on them, and my Hyper-V Mover tool has saved me a great deal of time.

I've had some time to improve it, and it is now possible to attach and detach VMs on remote machines, particularly those running Hyper-V Server.

I've created a page for this tool: the Hyper-V Virtual Machine Mover.

Still no sources available, but they'll be on CodePlex soon.

Google Transit and Montreal's STM

By Jay at March 15, 2009 21:53 Tags: , ,

A while ago, Montréal's STM transit system announced that it is now supported by Google Transit.

While it can compute proper routes, Google has the same problem as I do: the STM updates its schedules every quarter. And since it's the STM that provides the data, and the data has not been updated since January 1st, 2009, schedules have been incorrect ever since.

To be perfectly fair, I have not updated the schedules in my application since then either, for lack of time to create a proper update procedure, but then I'm not paid for it either...

Now that I've given it some thought, I'm streamlining the schedule updates stop after stop, as long as they are out of date. Previously, I updated the database all at once, but that does not scale... Now the updates are progressive, which is far more manageable for me.

Anyway, there may now be a simple message saying that the displayed schedule is outdated, which is better than trusting the times and blaming the STM for no reason :)

A tool to move a Hyper-V Virtual Machine without exporting it

By jay at February 20, 2009 20:26 Tags:

This article is also available in French.

Hyper-V is a wonderful tool, providing great performance and stability. But on the administration side, the available tools are a bit scarce, and even though most common operations are available, some are a bit hard to use. One can guess that this will improve in Windows Server 2008 R2.

But for now, the administration tools do not provide any means to import a VM that has not previously been exported. Exporting a VM can only be done on the original host server while it is still running. In the case of a crashed server, exporting a VM becomes a bit more complex.

Some techniques do exist, here and there, that explain, by means of mklink and icacls, how to recreate the symbolic links and file permissions for the VM configuration files. But it remains a particularly complex task, mostly because all the files must be modified, and a specific order must be respected for the modifications. This is especially true for a running host server.

After digging into Hyper-V symlinks and the WMI interface, I created a GUI tool that can attach and detach VMs that have not previously been exported.

Some thoughts on this tool:

  • A VM can only be detached if it is in the "Saved" or "Stopped" state.
  • It is not necessary to stop the Hyper-V service; all modifications are picked up live by the service.
  • A VM can only be imported if it contains at least one HDD on the IDE 0 controller.
  • All the VM files, HDDs and snapshots included, must be under the same directory.
  • All modified files are backed up next to the originals; all other files are neither modified nor moved.
  • .NET 3.5 must be installed.

I'll provide the sources for this tool in the near future, as well as a console version.

Of course there will be bugs; do not hesitate to report them to me. I may not be able to do anything about them, because this tool performs an operation that is (I assume) not supported by Microsoft.

You can download the tool here.

Using Multiple Where Clauses in a LINQ Query

By Jay at December 06, 2008 15:32 Tags: , , ,
This article is also available in French.
After writing my previous article, where I needed to intercept exceptions in a LINQ query, I found out that it is possible to specify multiple where clauses in a LINQ query.

Here is the query:

var q = from file in Directory.GetFiles(@"C:\Windows\Microsoft.NET\Framework\v2.0.50727", "*.dll")
        let asm = file.TryWith(f => Assembly.LoadFile(f))
        where asm != null
        let types = asm.TryWith(a => a.GetTypes(), (Exception e) => new Type[0])
        where types.Any()
        select new { asm, types };

This query finds the assemblies for which it is possible to list the types. The point of using multiple where clauses is to avoid evaluating later parts of the query when earlier parts can rule them out. By the way, the TryWith around Assembly.GetTypes() is there to intercept exceptions raised when loading types, in case dependencies are not available at the moment of the enumeration.
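Since the query above depends on it, here is a sketch of what the TryWith helper from that previous article might look like. This is my reconstruction from the call sites, not necessarily the original code: run a delegate against a value, and return a fallback (or the result type's default value) if it throws.

```csharp
using System;

public static class TryWithExtensions
{
    // Returns default(TResult) (null for reference types) on any exception.
    public static TResult TryWith<TSource, TResult>(
        this TSource source, Func<TSource, TResult> selector)
    {
        return source.TryWith(selector, (Exception e) => default(TResult));
    }

    // Returns the fallback's value when an exception of type TException is thrown.
    public static TResult TryWith<TSource, TResult, TException>(
        this TSource source,
        Func<TSource, TResult> selector,
        Func<TException, TResult> fallback)
        where TException : Exception
    {
        try
        {
            return selector(source);
        }
        catch (TException e)
        {
            return fallback(e);
        }
    }
}
```

With these two overloads, both call sites in the query compile: the single-argument form swallows the load failure and returns null, and the two-argument form substitutes an empty Type array.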

A useful LINQ trick to remember!

About me

My name is Jerome Laban. I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. On this site you will find my blog, where I add my thoughts on current events and the things I'm working on, such as the Remote Control for Windows Phone.