The Disposable Pattern, Determinism in the .NET World

By Jerome at August 16, 2004 11:43 Tags: ,
One of the numerous features of the CLR is garbage collection. Depending on the programmer's background, this can be a little disturbing for people used to managing memory by hand (especially plain C programmers). C++, for instance, even though its memory management can be abstracted by the STL, has a pretty strong model that provides a deterministic way of handling memory allocations. Once an object has reached the end of its life, either because the control flow leaves its scope or because its container is destroyed, the object is destroyed immediately: the C++ compiler emits the calls to the object's destructor and releases the memory.

The CLR model is weaker than that. The Garbage Collector (GC) is generally more efficient than the programmer and handles object destruction asynchronously. When an object is created, it can provide the GC with a special method named the Finalizer, which the GC calls when it needs to reclaim the memory used by the object. This can happen at any time, on an internal CLR thread, and only when necessary. Memory management is therefore fast and efficient, and it can adapt to the environment the program runs in by making good use of the available memory. The downside of this asynchronous behavior is that there is no deterministic destruction of objects, which is one of the most frequent criticisms from programmers new to .NET. The biggest trap in this area for C++ developers is the syntax of the finalizer in managed C++ and C#.
For instance, in C#:

    class Program
    {
      static void Main(string[] args)
      {
        {
          Dummy dummy = new Dummy();
        }

        Console.WriteLine("End of main.");
      }
    }

    public class Dummy
    {
      ~Dummy()
      {
        Console.WriteLine("Dummy.~Dummy()");
      }
    }

Since the instantiation of the Dummy class is enclosed in braces, a C++ programmer would expect the so-called destructor to be called right at the end of that scope, before the last WriteLine. In reality, the GC calls the Finalizer when the memory is reclaimed: at the end of the program execution.

A concrete instance of this problem is often found with file IO:

    class Program
    {
      static void Main(string[] args)
      {
        StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt"));
        writer.Write("Some value");

        Console.ReadLine();
      }
    }

The problem with this program is that the writer is never closed. It will eventually be closed when the GC calls the finalizer of the writer instance, thus closing the file. This is a common problem in C# programs: file handles are left open, preventing the files from being opened by other programs. To fix this problem, there are two methods:
  • Call the StreamWriter.Close method when the stream is not used anymore,
  • Use the using keyword to limit the scope of the object.
The using keyword is a shortcut in the C# language that calls a method of the System.IDisposable interface at the end of its scope. In practice, this:

    static void Main(string[] args)
    {
      using (StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt")))
      {
        writer.Write("Some value");
      }
    }

is expanded by the C# compiler to:

    static void Main(string[] args)
    {
      StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt"));

      try
      {
        writer.Write("Some value");
      }
      finally
      {
        writer.Dispose();
      }
    }

This is straightforward: the Dispose method is called at the end of the "using" scope. One thing though, this does not mean that the memory allocated for the StreamWriter instance is reclaimed right after the Dispose call. It only means that the instance releases the "unmanaged" objects it holds, a file handle in this case.

One might object: "But if the programmer forgets to call Dispose or Close, the file is never closed". Actually, no. This is where the Disposable pattern enters the scene. The nice thing about it is that you can combine the Dispose method and the GC, the GC being the safekeeper of the unmanaged resources of the object, even if the programmer forgets to call the Close or Dispose method. Here is an example of a type that implements the Disposable pattern:

    public class MyDisposable : IDisposable
    {
      public void Dispose()
      {
        Dispose(true);
      }

      ~MyDisposable()
      {
        Dispose(false);
      }

      private void Dispose(bool disposing)
      {
        // If we come from the Dispose method, suppress the
        // finalize method, so this instance is only disposed
        // once.
        if (disposing)
          GC.SuppressFinalize(this);

        // Release any unmanaged resource
        // ...
      }
    }

This class implicitly implements the IDisposable interface by defining the Dispose method.
Here, both the Dispose method and the Finalizer call an overload of the Dispose method, which contains the code that actually releases unmanaged resources. Note the use of GC.SuppressFinalize, which prevents the GC from calling Dispose a second time through the Finalizer. It also has an additional benefit: it removes some pressure from the GC, as finalizing objects is rather expensive. The pattern can also be completed with an "already disposed" check, to guard against multiple Dispose calls. Two behaviors are possible there: either silently ignore any subsequent call, or throw an ObjectDisposedException. Using one or the other is a matter of context.

Not every object needs to be finalizable (and disposable), but each time you add a finalizer, you should also implement the System.IDisposable interface and the Disposable pattern.


Don't get C# volatile the wrong way

By Jerome at August 05, 2004 11:45 Tags: ,

Don't get the C# volatile keyword the wrong way. There is a lot of blurriness around synchronization issues in .NET, especially around memory barriers, System.Threading.Monitor, the lock keyword and the like.

It is common to have objects that create unique identifiers by means of an index incremented each time a new value is retrieved. A simplistic implementation would look like this:

    public class UniqueIdentifier
    {
      private static int _currentIndex = 0;

      public int NewIndex
      {
        get
        {
          return _currentIndex++;
        }
      }
    }

This is pretty straightforward: Each time the NewIndex property is called, a new index is returned.

But there is a problem in a multithreaded environment, where multiple threads can call NewIndex at the same time. If we look at the IL generated for the getter, here is what we have:

  IL_0000:  ldsfld     int32 UniqueIdentifier::_currentIndex
  IL_0005:  dup
  IL_0006:  ldc.i4.1
  IL_0007:  add
  IL_0008:  stsfld     int32 UniqueIdentifier::_currentIndex

One thing about multithreading in general: the system can suspend a thread anywhere it wants, in particular between the load at IL_0000 and the store at IL_0008. The effect is pretty obvious: if, during that window, some other thread executes that same piece of code, both threads end up with the same "new" index, each one incrementing from the same value. This scenario assumes a uniprocessor system, which interleaves the execution of running threads. On multiprocessor systems, threads do not even need to be suspended to hit this exact same problem. While this kind of race condition is harder to trigger on a uniprocessor, it is far easier to fall into with multiple processors.

This is a very common problem in multithreaded environments, and it is generally fixed by means of synchronization mechanisms such as mutexes or critical sections. The whole operation needs to be atomic, which means executed by at most one thread at a time.

In the native world, in C/C++ for instance, the language does not provide any "built-in" synchronization mechanisms and programmers have to do all the work by hand. The .NET framework with C#, on the other hand, integrates that kind of mechanism into the language: the volatile and lock keywords.

A common and incorrect interpretation of the volatile keyword is to think that all operations (as opposed to mere accesses) on a volatile variable are synchronized. This generally leads to this kind of code:

    public class UniqueIdentifier
    {
      private static volatile int _currentIndex = 0;

      public int NewIndex
      {
        get
        {
          return _currentIndex++;
        }
      }
    }

While this code is valid, it does not fix the synchronization problem. The correct interpretation of the volatile keyword is that read and write operations on a volatile field must not be reordered, and that the value of the field must not be cached.

On a single x86 processor system, the only effect of the volatile keyword is that the value is never cached in something like a register and is always fetched from memory. Since there is only one set of caches and one processor, there is no risk of inconsistencies where memory would have been modified elsewhere. (This is called processor self-consistency.)
But on a multiprocessor system, each processor has its own data cache, and depending on the cache policy, an updated value for the variable might not be written back immediately to main memory, where other threads could see it. In fact, it may never be written back, depending on the cache policy. In practice, this kind of situation is really hard to reproduce because of the high utilization of the cache and the frequent flushes.

Back to volatile: it means that read and write operations always target main memory. In practice, a volatile read or write acts as a memory barrier, so a thread using a volatile variable is sure to see the latest value.
Back to our example: while we are sure to have the latest value, the read/increment/write sequence is still not atomic and can be interrupted right in the middle.

To have a correct implementation of this UniqueIdentifier generator, we have to make the operation atomic:

    public class UniqueIdentifier
    {
      private static int _currentIndex = 0;
      private static object _syncRoot = new object();

      public int NewIndex
      {
        get
        {
          lock (_syncRoot)
          {
            return _currentIndex++;
          }
        }
      }
    }

In this example, we are using the lock keyword. This is pretty nice because it uses the System.Threading.Monitor class to create a scope that can be entered by only one thread at a time. While this solves the atomicity problem, you might notice that the volatile keyword is not present anymore. This is where the Monitor does something under the hood: it performs a memory barrier at the acquisition of the lock and another one at its release.

The CLR does a lot of implicit work, and it can be pretty hard to catch up on all of it. Besides, the x86 memory model is pretty strong and does not expose many race conditions that a weaker memory model, like the Itanium's, would.

As a conclusion, use lock and forget volatile. :)

By the way, this article is pretty interesting on the subject.

Obscure abstraction stories

By Jerome at April 14, 2004 11:55 Tags:

Today I was grading some student groups on one of their projects (called Zia, a third-year project about creating an HTTP server) and they told me they could not understand some very strange behavior of their software. To keep it short, they were seeing some really strange jumps from some functions to others, in unrelated places. This is unusual, particularly when the function actually called is not the one that should be called in a virtual function context.

The thing about this project is that it is required to have an extensible way of dealing with the functionalities of an HTTP server, in the form of modules or plug-ins. In an object-oriented way of seeing things, two methods exist:

  • The first one uses static referencing of types in a DLL, by means of directives like __declspec(dllimport) set on classes. This is impracticable for plug-in enumeration, as it is "dynamic static" linking of functions, much like linking against a static library. Another problem with this kind of implementation is that it cannot be used to efficiently achieve abstraction with C#-style (or Java) interfaces, because concrete types are explicitly referenced from the dynamic library. Generally speaking, this is not a good choice (and it is not portable either).
  • Another one uses a common interface (or fully abstract class: only pure virtual functions and no data members) shared by both the plug-ins and the host, allowing plug-ins to create concrete instances of the common interface. This has multiple advantages:
    • The host only imports a few "C-style" functions (generally declared extern "C") used, for example, to create instances of concrete classes or to enumerate the types the module can create. This also avoids the issues of symbol decoration (or name mangling) generated by the C++ compiler. (By the way, Dependency Walker is a great tool to see this.)
    • The types' virtual tables are automatically built in the DLL's memory space, letting the C++ plumbing do the job for the user,
    • It is also possible to enumerate, load or unload plug-ins on the fly,
    • And obviously, the host relies only on the interface exposed by a plug-in to use its services. This is not specific to this method, but it is another way of using interfaces with a greater level of abstraction, because concrete types are not known to the host.

In many ways, true dynamic loading of DLLs using this method is better than the static one. But we’re not in a perfect world and this method also has its drawbacks, some being really vicious.


There are multiple ways of including the Runtime Library (RT), which is a kind of libc as found on Unices. Actually, there are three:

  • Including the “SingleThreaded” (ST) version of the library, which is mainly suited for applications that do not use threads,
  • Including the “MultiThreaded” (MT) version, used when the application is multithreaded. The RT functions are then made thread-safe, which is not the case in the ST version. This is a static version of the RT that is completely included in the final binary, minus, of course, the functions you do not use.
  • Including the “MultiThreaded DLL” (MD) version – the most commonly used – which references the msvcr7x.dll file. (This changes depending on the C++ compiler version. Actually, it used not to change, but that created havoc, so…) This is, so far, the best way of including the RT, as it lightens the binaries and has other implications I will discuss later on.

This parameter can be found in :


Project Properties / C/C++ / Code Generation / Runtime Library


There is one thing to know about DLLs and static variables: they are local to their module (understand: DLL, in this context), not to the current process. In other words, including the RT as a static library in a plug-in/host context creates multiple “instances” of the static variables found in the RT. In particular, it creates independent versions of the internal lists used to track heap memory allocations. These lists are used, for instance, by malloc and the other memory allocation functions.


Knowing this, it becomes obvious that allocating an object in one module (some plug-in) and freeing it in another module (say, the host) amounts to freeing a pointer that the destination module's heap knows nothing about. Several behaviors can be observed:

  • The first – and the most common – is the RT assertion dialog box showing up, saying that the pointer being freed is not valid. Most programmers don’t really understand this message and choose to ignore it, which is not that good…
  • The second, found when the debugging features of the compiler are deactivated, is a plain crash, and a really hard bug to spot.

Back in the object oriented world, where everything is encapsulated, you can find that kind of code in the shared interfaces:


class IObject
{
public:
       virtual std::vector<int>   GetRefs() = 0;
};


Although this is completely correct, the problem with this kind of code is that the vector object itself is generally allocated on the stack (although it depends on the lvalue used in the assignment), while the data it contains is allocated on the heap. This is a hidden memory allocation that can cause trouble when using the static MultiThreaded version of the RT, leading – when you are lucky – to a crash, and when you are not, to the kind of behavior my students have been experiencing, like a partial corruption of the heap and/or the stack…


In other words: use the “MultiThreaded DLL” version of the Runtime Library in all the binaries and static libraries of your projects. Note that I insist heavily on the fact that the same runtime must be used everywhere; otherwise you’ll get a lot of strange linker warnings and errors. (This is mainly because the same symbols are not exposed in the same way when static and when DLL-imported.)


By the way, in the .NET world this is mandatory when using Managed C++ extensions, probably for the same reason…

SyncProxy, an implicit synchronizer

By Jerome at April 04, 2004 11:51 Tags: , ,

A few days ago, I ran into an MVP article on MSDN about synchronizing asynchronous calls to web services heading back to the GUI. Since asynchronous calls are processed on separate threads, it is not safe to call GUI methods directly from them. The article described how to create some sort of helper class (some call this an Agent, or a pattern) that hides the call to BeginInvoke, freeing the developer from creating many small methods that would only contain calls to BeginInvoke.

Back in dotnetSoul, I did not have to call web services – at least not in a repetitive way – but I had to synchronize calls from events generated by the netsoul core. These events are fired from asynchronous read operations on a network stream, which implies that consuming them from the GUI requires synchronizing the calls that update controls.

The old-fashioned way is to register a "standard" method which only calls the synchronous method via BeginInvoke with the same parameters:

    _chatRoom.UserJoined += new EventHandler(OnChatRoomUserJoined);


    private void OnChatRoomUserJoined(object sender, EventArgs args)
    {
         BeginInvoke(new SyncEventHandler(SyncChatRoomUserJoined), new object[]{ sender, args });
    }

    private void SyncChatRoomUserJoined(object sender, EventArgs args)
    {
         // call the UI from there...
    }

What a waste of time, and an error-prone way of making many synchronous GUI calls, since the netsoul core exposes about 20+ events to consumers. You might notice I'm using a SyncEventHandler delegate; it keeps BeginInvoke from altering the parameters passed in.

So, remembering the class used to synchronize the calls to the UI, I thought I could create some sort of proxy class that could be used both to register the event and to create the instance of the proxy. I then came up with this:

using System;
using System.Windows.Forms;

namespace Epitech.NetSoul.UI
{
    public delegate void SyncEventHandler(object sender, EventArgs args);

    public class SyncProxy
    {
        private Control             _control;
        private SyncEventHandler    _syncHandler;
        private EventHandler        _asyncHandler;

        public SyncProxy(Control control, SyncEventHandler syncHandler)
        {
            _control        = control;
            _syncHandler    = syncHandler;
            _asyncHandler   = new EventHandler(AsyncHandler);
        }

        // Implicit operator to allow an easy registering on events
        public static implicit operator EventHandler(SyncProxy proxy)
        {
            return proxy._asyncHandler;
        }

        // The asynchronous delegate, which calls the synchronous delegate
        // through BeginInvoke, if required.
        private void AsyncHandler(object sender, EventArgs args)
        {
            if (_control.InvokeRequired)
                _control.BeginInvoke(_syncHandler, new object[]{ sender, args });
            else
                _syncHandler(sender, args);
        }
    }
}

Which can be used like this :

    _chatRoom.UserJoined += new SyncProxy(this, new SyncEventHandler(OnChatRoomUserJoined));

This call creates the proxy, then registers the asynchronous delegate from the proxy through an implicit cast to EventHandler. This way, there is only one event handler to create inside the destination form per handled event.

Windows Installer CleanUp Utility

By Jerome at April 02, 2004 11:54 Tags:

Microsoft has made available a utility that cleans the registry of problems related to the use of Windows Installer... A solution for when some software refuses to install and throws strange errors...

News Source: The Windows Installer CleanUp Utility

Using Precompiled Headers

By Jerome at March 01, 2004 11:50 Tags: ,
(How to speed up the C/C++ compilation step with Visual Studio .NET 2003)

Visual C++ Pre compilation feature

For a long time now, the Microsoft C/C++ compiler has featured something called header precompilation, also known as PCH. You may already have encountered it, or used it without knowing it.

The concept is quite simple: why parse header files again for each C/C++ file that includes them? For a given compilation, the header files used are unlikely to change, even across subsequent compilations. Standard include files like stdio.h or stdlib.h and many system header files do not need to be parsed at each inclusion.

The C/C++ compiler uses a special pair of files (by default stdafx.cpp and stdafx.h) where it can find all the files to precompile, and then reuses these precompiled headers efficiently in any subsequent compilation. This avoids recompiling all those header files, especially the ones from the STL, which can be really huge.

1. Using precompiled headers, the default way

First, the C/C++ compiler searches for a file named stdafx.cpp and compiles it. This file only includes the stdafx.h file. This compilation generates a file called $(ProjectName).pch that the other files being compiled will use to find precompiled symbols quickly.

Any other C/C++ file must then have a line like this one :

#include "stdafx.h"

Be aware that this line must be the very first line of your C/C++ file. Anything placed before it will be ignored, not even parsed.

2. Using precompiled headers, from scratch

The PCH feature can also be activated from an empty project, by adding files one by one. Here is how.

First, create an empty project and add 3 new files : main.cpp, stdafx.h and stdafx.cpp.

  • File stdafx.h :

    #ifndef __STDAFX_H
    #define __STDAFX_H

    #include <iostream>
    #include <string>

    #endif // __STDAFX_H
  • File stdafx.cpp, quite simple :

    #include "stdafx.h"
  • File main.cpp, also quite simple :

    #include "stdafx.h"

    int main()
    {
      std::string myString("Hello, World !");
      std::cout << myString.c_str() << std::endl;
      return 0;
    }
  • Once these files are added to your C++ project in the solution explorer, follow these steps:

  • Select the stdafx.cpp file, right click and select Properties, then under C/C++ / Precompiled Headers choose "Create Precompiled Header (/Yc)":

    The two fields "Create/Use PCH Through File" and "Precompiled Header File" will be filled automatically.

  • Then select the project item in the solution explorer, right click and select Properties, and choose "Use Precompiled Header (/Yu)":

    Again, the two fields "Create/Use PCH Through File" and "Precompiled Header File" will be filled automatically.

    Note that changing the C/C++ properties at the project level propagates them to the C/C++ files that have not been customized. Here that means main.cpp, but not stdafx.cpp, because we customized its settings in the previous step.

  • After setting all this, the first file to be compiled is stdafx.cpp and then the other files in the project.

    You will see that big projects compile much faster when the PCH feature is enabled. Also note that you can use multiple precompiled header files in one project, although this is not recommended. If you feel like you need multiple PCH files, it is time to make a static or a dynamic library.

SEH and Exceptions in C++

By Jerome at February 03, 2004 11:49 Tags: ,

Who has never found themselves facing a horrible program-crash dialog box looking like this one:

"The memory cannot be 'written' at 0x404459FF".

This dialog box, well known to Windows users and developers, is the result of a hardware exception, often the consequence of a software problem such as a null pointer dereference. There are of course many reasons why this box can appear – Access Violation, Divide By Zero, Invalid Operation, Float Overflow and Underflow, Privileged Instruction... – as many potential problems that can stop the execution of a program.

C++ provides an exception handling mechanism through the try and catch keywords, including a default handler. This handler (catch(...)) of the standard mechanism is however not the most interesting one, since it gives access neither to the processor's execution context at the time the exception was raised, nor to its cause.

Windows offers the developer a specific API for using Structured Exception Handling. Among other things, it makes it possible to show the user a dialog box with a minimum of debug information. It is also possible to get a stack trace for an executable by using separate debug information. Contrary to popular belief, Visual Studio can generate debug information for an executable optimized with /Ox, for instance.

The C++ compiler provides a number of additional keywords to protect a section of code, or a whole program, and intercept its exceptions more effectively. In particular, __try, __except and __finally, which are specific to the Microsoft compiler, can be used. This method is however not the simplest to use.

The use of SEH here is coupled with the imagehlp.dll library. This library makes it possible to explore symbol files (pdb) to determine, for example, the functions found while walking the stack frames.

This article from the 1997 Microsoft Systems Journal, as well as this one (yes, 1997...), describes quite well how SEH works and how to use it through a singleton class instance, but this mode does not make it easy to hand control back to the previously installed exception handler.

In this code, a mix of SEH and C++ exceptions makes it possible to wrap pieces of code in standard C++ exceptions. The _set_se_translator function registers a function that is called when an exception occurs and transforms these Win32 exceptions into C++ exceptions. The example here is not perfect, since the original is used to protect a whole program (the whole main, so to speak). An SEH exception thrown outside a protected block might still bring up the generic crash dialog box.

In any case, using this code is quite simple; just note that it is necessary to enable the generation of stack frames (disable Omit Frame Pointers) as well as debug information as an external file (Debug Information Format: Program Database). Also note that it is essential to ship the imagehlp.dll file and the pdb (Program Database) files along with the executable to get the stack trace displayed when an exception occurs.

Of course, for the public release of a program, you would not ship the pdb files, which contain an enormous amount of information.

Also note that it is possible to use SEH in its "native" form, in C. You will simply have to remove the C++-specific features from the previous code. (I mention it since some people still use that language... :p)

Happy debugging!

About me

My name is Jerome Laban, I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.