tl;dr: Memoization can be combined with the ConditionalWeakTable class, which allows memoized computation results to be attached to immutable types. This makes the memoized results live as long as the instances that were used to create them.
In the first part of this article, we discussed the Memoization pattern in C#. In this second part, we will discuss how to alleviate the memory management issue for memoized computation results.
ConditionalWeakTable to the rescue
In .NET 4.0 – quite a while ago now – a new class was added, the ConditionalWeakTable, to help in the creation of dynamic languages based on the DLR, where there was a need to attach data to existing type instances, much like a hypothetical extension property would. This topic is not covered much, and since it has to do with the GC, it is often misunderstood.
The idea is pretty simple: it is a dictionary that takes a type instance as a key and associates a value with it. The key is stored as a weak reference, and the value is kept strongly referenced only as long as the key is alive. When the key is collected by the GC, the strong reference to the value is removed, making it available for collection if it is not referenced anywhere else.
Here’s how to use it:
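The sample below is a minimal sketch of the pattern, assuming a hypothetical immutable `Person` type and a memoized `GetDisplayName` computation (both names are illustrative, not from the original post):

```csharp
using System;
using System.Runtime.CompilerServices;

// A hypothetical immutable type whose expensive computation we want to memoize.
public sealed class Person
{
    private readonly string _name;
    public Person(string name) { _name = name; }
    public string Name { get { return _name; } }
}

public static class PersonExtensions
{
    // The table keeps each value alive only as long as its key instance;
    // once a Person is collected, its cached result becomes collectible too.
    private static readonly ConditionalWeakTable<Person, string> _cache =
        new ConditionalWeakTable<Person, string>();

    public static string GetDisplayName(this Person person)
    {
        // GetValue runs the factory once per instance, then returns the cached value.
        return _cache.GetValue(person, p => p.Name.ToUpperInvariant());
    }
}
```

Calling `GetDisplayName` twice on the same instance returns the same cached string, yet nothing needs to be cleaned up manually: the cache entry dies with the instance.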
TL;DR: Immutable data and memoization are functional programming concepts that can be applied to C# programming. These patterns have their strengths and weaknesses, which this article discusses.
I’ve grown not to be a great fan of data mutability.
Data mutability can introduce a lot of side effects in the code, and it can be pretty complex to go back in time to know what a specific state was before the code failed. This gets worse when multiple threads are involved, and that tends to happen a lot more these days, now that even phones have multiple cores.
Sure, we can use IntelliTrace to ease that kind of debugging, but that’s pretty much limited to issues you already know about. That means you’re reacting to issues that already happened, you’re not proactively preventing those issues from happening.
So, to address this more reliably, there’s the concept of immutability. Once a set of data is built, it cannot change anymore. This means that you can pass it around, do computations with it, and use it anywhere you want; there will be no subtle concurrency issues caused by the data changing under your feet.
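As a minimal illustration (a sketch, not taken from the original article), an immutable type in C# sets all of its state at construction and exposes it read-only; "mutation" returns a new instance instead:

```csharp
// All state is assigned once in the constructor and only exposed through
// read-only properties, so instances can be shared freely across threads.
public sealed class Point
{
    private readonly int _x;
    private readonly int _y;

    public Point(int x, int y) { _x = x; _y = y; }

    public int X { get { return _x; } }
    public int Y { get { return _y; } }

    // "Changing" X produces a new instance; the original is untouched.
    public Point WithX(int x) { return new Point(x, _y); }
}
```

Any thread holding a `Point` can rely on it never changing, which removes a whole class of concurrency bugs by construction.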
TL;DR: C# 5.0 async/await does not include implicit support for cancellation, and requires passing CancellationToken instances to every async method. F# and the Reactive Extensions offer solutions to this problem, with both implicit and explicit support for cancellation.
My development style has slowly shifted to a more functional approach during the past year. I’ve been peeking at F# for a while, and that shift to a more functional mindset in C# helps me understand much better the concepts behind core features of F#, and more specifically the async “support” in F#.
It’s well known that F# heavily inspired the implementation of C# async, but looking at the way it’s been implemented in F# gives me a few more points against the “unfinished” implementation in C#.
Recently, now that people are effectively using async in real-world scenarios, problems are starting to bubble up: async void, async void lambdas, the fact that continuations mostly run on the UI thread when not handled properly, obscure exception handling scenarios, the “magic” relation to the SynchronizationContext, the fact that it does not address parallelism, and one that’s stayed pretty low-key: cancellation.
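To illustrate the explicit plumbing C# requires, here is a sketch with a hypothetical `GetPageAsync` method (the name and the simulated I/O are illustrative): every async method in the chain must accept and forward a CancellationToken by hand.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Downloader
{
    // C# async has no ambient cancellation: the token must be passed
    // explicitly through every method of the call chain.
    public static async Task<string> GetPageAsync(Uri uri, CancellationToken ct)
    {
        await Task.Delay(100, ct);          // stands in for real I/O; Task.Delay honors the token
        ct.ThrowIfCancellationRequested();  // cooperative check before continuing
        return "content of " + uri;
    }
}
```

A caller that wants a timeout has to create a CancellationTokenSource, pass its Token down, and catch OperationCanceledException; forget the token anywhere in the chain and that portion of the work simply cannot be cancelled.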
TL;DR: Using an upgraded (and fixed) Parallel Build Process Template makes it possible to use multiple TFS2012 build agents simultaneously, which is more than welcome when building Metro apps that target all three supported platforms. A build that took 11 minutes can go down to 3.5 minutes.
Download the Parallel Build Process Template for TFS2012 here.
CI is a wonderful feature, especially when associated with Gated Checkins.
You’re certain that what’s in your source control is in line with your build definition and constraints, and that there is always a binary that respects a minimum set of rules. This does not ensure that your app is bug free, but still, that’s a minimum.
Build time matters
The downside of this validation is that there cannot be multiple builds running at the same time. This can become a bottleneck when multiple developers checkin within the duration of a single build run.
This means that the longer your build gets, the longer a developer might wait for their check-in to complete because of a long build queue, increasing their task-switching cost. If a build fails, the developer needs to unshelve their changes, make the necessary adjustments, then check in again.
Below 4 minutes of build time, this stays in the acceptable range where the developer’s task context may not be lost if the build fails.
TL;DR: It is possible to disable the Static Analysis phase in VS2012 projects by setting the DevDivCodeAnalysisRunType environment variable to “Disabled”.
This will be a quick post, but which might save you a lot of time if you rely heavily on Code Analysis (FxCop).
FxCop is definitely not known for its analysis speed, and when run on every build, it takes a lot of time. I usually work on projects where FxCop is only enabled in the Release configuration, which helps during development in the Debug configuration.
But if you’re bound to run in the Release configuration, such as when profiling the app, or for any other task that requires building in that configuration, then having FxCop run every single time can be time consuming.
To avoid this, there are multiple choices:
- Use find and replace to change <RunCodeAnalysis>true</RunCodeAnalysis> to <RunCodeAnalysis>false</RunCodeAnalysis>, but then you have to remember to not check that into your source control (even if the Perform Code Analysis setting is set to Always in your CI Build definition),
- Create an alternate configuration similar to Release that does not have the Static Code Analysis enabled, but changing configurations in VS2012 can take time (even with the Update 2) and you’ll have to maintain that configuration with the others,
- Or you can use a little trick to disable the static code analysis for a whole Visual Studio instance.
That trick is a bit hidden, but here’s how you can do this:
- Open a Developer Command Prompt for VS2012
- Type set DevDivCodeAnalysisRunType=Disabled
- Type devenv
Build your solution and the static analysis will not run, without any modification to your solution’s configurations or projects. Easy.
That said, remember that this is not documented and might change at any point in the future so don't rely on it too much.
TL;DR: Writing Xaml/C++ attached properties sometimes gives a 30% improvement over the C# version, which can be caused by the use of events. This article shows code samples for both versions.
Since it is possible to write a XAML application entirely in C++/CX, I decided to test the performance of some simple code.
There is, after all, some marshaling involved when communicating from C# to native code, particularly with events.
WinRT’s BitmapImage class supports, as do WPF and Silverlight, the DecodePixelWidth and DecodePixelHeight properties.
These very useful properties force the in-memory surface used to store the image to fit a certain size, avoiding the memory waste induced by large downscaled images. This is a very common performance issue for applications that display variable-sized images, where memory usage can grow very quickly.
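Using the property is straightforward; here is a sketch assuming a hypothetical asset path:

```csharp
using System;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media.Imaging;

// Decode the image at 200 pixels wide instead of its native size, so the
// backing surface only consumes the memory actually needed for display.
var bitmap = new BitmapImage(new Uri("ms-appx:///Assets/photo.jpg"))
{
    DecodePixelWidth = 200
};

var image = new Image { Source = bitmap };
```

Without DecodePixelWidth, a multi-megapixel photo displayed in a small thumbnail still allocates a surface for its full native resolution.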
TL;DR: Expanding data-bound item templates in Xaml/WinRT in Windows 8 is about a hundred times slower than with Xaml/WPF. This article details how this was measured and a possible explanation.
In Windows 8, Microsoft has introduced a whole new Xaml stack, codenamed Jupiter, completely re-written to be native only.
This allows the creation of Xaml controls using C++ as well as C#.
I will not discuss the philosophical choice of ditching managed WPF in favor of a native rewrite, but make a simple comparison of the performance between the two.
Template Expansion Performance
I worked on a project that had performance issues with a UI-virtualized control, where the initial data binding, as well as the realization of item templates, had a significant impact on the scrolling fluidity of a GridView control.
To isolate this, I created a simple UI.
Not in everyone’s mind, though: only around 7% according to the PYPL index.
TL;DR: Don't use the ReaderWriterLockSlim class on Windows Phone 8 RTM, it has a bug that appears only under contention.
Windows Phone 8’s move to the NT kernel has had a lot of advantages for the developer, such as the move to the same .NET CLR as Desktop Windows, but also the ability to run in a multi-core environment.
More specifically, there is one synchronization primitive – the ReaderWriterLockSlim – which makes a lot of sense in a real multi-core environment.
I’ve been using this class to synchronize access to a dictionary abstraction, for performance reasons and also for legacy reasons, even though the Concurrent Collections are available. Note that we do have a new tool in the toolbox, the BCL Immutable Collections, which are becoming my preferred way of creating collections.
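The dictionary abstraction looked something like the following simplified sketch (the `SynchronizedCache` name and its `TryGet`/`Set` members are illustrative); it is exactly this read-mostly pattern that puts the lock under contention:

```csharp
using System.Collections.Generic;
using System.Threading;

public sealed class SynchronizedCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _map = new Dictionary<TKey, TValue>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public bool TryGet(TKey key, out TValue value)
    {
        _lock.EnterReadLock();           // many readers may enter concurrently
        try { return _map.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    public void Set(TKey key, TValue value)
    {
        _lock.EnterWriteLock();          // writers get exclusive access
        try { _map[key] = value; }
        finally { _lock.ExitWriteLock(); }
    }
}
```

On a single core the read and write paths rarely overlap, which is why the Windows Phone 8 RTM bug only shows up once multiple cores put readers and writers in actual contention.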
Agreed, that’s a lot of keywords. Yet, they fit one another in a very interesting way.
I find more and more that F#, given the direction my development habits are taking, is an increasingly good fit for my immutability and flexibility needs.
Given that, I thought I’d try running some F# Query Expressions using custom Rx operators, on Windows Phone 8, using Portable Libraries.