WPF DataContext and CurrentItem

By Jerome at February 13, 2007 15:23

DataBinding is one of the technologies I like the most. 

Winforms did a fine job introducing the concept, but it was pretty easy to get stuck when trying to perform complex databinding. .NET 2.0 brought the concept of BindingSource, which eased the management of multiple "current items" on a single form. You might have encountered this when creating multiple master/detail views on the same form. That was the case when you wanted to walk through a set of tables through their relations, say down to three levels.

WPF has a far more advanced DataBinding engine and introduces the concept of DataContext. There's the notion of a hierarchy of DataContexts and a databound control uses the nearest DataContext available.

An example is better than a thousand words. Here's a data source :


<XmlDataProvider x:Key="ops" XPath="/data/level1">

  <x:XData>

    <data xmlns="">

      <level1 name="1">

        <level2 name="1-1">

          <level3 name="test">Some Value</level3>

          <level3>Yet another value from level 3</level3>

        </level2>

        <level2 name="1-2">

          <level3>Some other Value</level3>

          <level3>Yet another Value</level3>

        </level2>

      </level1>

      <level1 name="2">

        <level2 name="2-1">

          <level3>Some Value</level3>

          <level3>Yet another value from level 3</level3>

        </level2>

        <level2 name="2-2">

          <level3>Some other Value</level3>

          <level3>Yet another Value</level3>

        </level2>

      </level1>

    </data>

  </x:XData>

</XmlDataProvider>


It's a three-level hierarchy, and I want to display this data by recursively selecting each level in a ListBox to see its content.

Now, let's bind this data to a list box, contained in a GroupBox to be a bit more readable :


<GroupBox Header="Level 1" DataContext="{Binding Source={StaticResource ops}}">

  <ListBox ItemsSource="{Binding}"

           DisplayMemberPath="@name"

           IsSynchronizedWithCurrentItem="True" />

</GroupBox>

This binds the ListBox to the "name" attribute of the level1 node list. Setting IsSynchronizedWithCurrentItem to true tells the ListBox to update the CurrentItem of the current DataContext, which can then be used to fill the ListBox for the next level.

Now, to add a new DataContext level, let's add a new GroupBox, and a stack panel to have a nice layout :


<GroupBox Header="Level 1"

          DataContext="{Binding Source={StaticResource ops}}">

  <StackPanel Orientation="Horizontal">

    <ListBox ItemsSource="{Binding}"

             DisplayMemberPath="@name"

             IsSynchronizedWithCurrentItem="True" />

    <GroupBox Header="Level2"

              DataContext="{Binding Path=CurrentItem}">

      <ListBox ItemsSource="{Binding}"

               DisplayMemberPath="@name"

               IsSynchronizedWithCurrentItem="True" />

    </GroupBox>

  </StackPanel>

</GroupBox>

Now there is a new DataContext for all children of the second GroupBox, and it has the CurrentItem of the outer DataContext as its root. This is fairly easy to do, so let's do the same for the final level.


<GroupBox Header="Level 1"

          DataContext="{Binding Source={StaticResource ops}}">

  <StackPanel Orientation="Horizontal">

    <ListBox ItemsSource="{Binding}"

             DisplayMemberPath="@name"

             IsSynchronizedWithCurrentItem="True" />

    <GroupBox Header="Level2"

              DataContext="{Binding Path=CurrentItem}">

      <StackPanel Orientation="Horizontal">

        <ListBox ItemsSource="{Binding}"

                 DisplayMemberPath="@name"

                 IsSynchronizedWithCurrentItem="True" />

        <GroupBox Header="Level3"

                  DataContext="{Binding Path=CurrentItem}">

          <StackPanel Orientation="Horizontal">

            <ListBox ItemsSource="{Binding}"

                     IsSynchronizedWithCurrentItem="True" />

            <Label Content="{Binding Path=CurrentItem}" />

          </StackPanel>

        </GroupBox>

      </StackPanel>

    </GroupBox>

  </StackPanel>

</GroupBox>

Each time a new DataContext is set, a new CurrentItem is created. That kind of behavior was hard to reproduce using DataBinding in WinForms; WPF allows it with a simple declarative syntax. Easy and powerful.

There is also a fine feature that comes with this use of DataContext and CurrentItem: as long as the data source does not change, the CurrentItem "chain" for a specific path is kept if you change the selection and later come back to that particular path. Pretty interesting.

Here is a working xaml sample of this post.

Did I say that I really like WPF already ? :)

Playing with WCF and NuSOAP 0.7.2

By Jerome at January 25, 2007 10:24

.NET 3.0 has now been released, along with Windows Communication Foundation (WCF). I thought I could give it a shot for one of my projects at work, where I have to create an internal web service. The problem is that the client at the other end will be using NuSOAP 0.7.2 to contact this WebService, so I had to make sure it would work fine.

First observation, compared to ASP.NET generated WebService, the WCF wsdl is much more complex. Actually, it has a little bit more information such as the validation rules for a guid, but it also has its schema split up into multiple XSD files. I was a bit worried that NuSOAP wouldn't handle that well, but it does fine... I also wanted to be able to expose a signature like this :

    [DataContract]
    public class Parameter
    {
        private string _key;
        private object _value;

        [DataMember]
        public string Key
        {
            get { return _key; }
            set { _key = value; }
        }

        [DataMember]
        public object Value
        {
            get { return _value; }
            set { _value = value; }
        }
    }

You'll notice that the second property is an object, and that implies that the serializer/deserializer handles properly types defined at runtime, by using xsd:anyType in the schema.

So, after a few attempts at finding a working Linux LiveCD distro, none of which had PHP compiled with the --enable-soap flag, I decided to fall back from the native PHP SOAP extension to the open-source SOAP library NuSOAP, and to use EasyPHP on Windows.

First, I had to change NuSOAP's response encoding to UTF-8 using soap_defencoding, which seems to be the default for WCF. Then I had to figure out how to pass an array of structures to call my method.

So, for the following signature :

    void MyMethod(string a, string b, Parameter[] parameters)

The caller's php parameter structure should be :

    $params = array(
        'a' => $a,
        'b' => $b,
        'parameters' => array(
            'Parameter' => array(
                array('Key' => 'abc', 'Value' => 10),
                array('Key' => 'def', 'Value' => 42)
            )
        ),
    );

Notice that you have to place a "Parameter" element inside the "parameters" parameter.

Then, to call the method, use the following line :

    $client->call("MyMethod", array($params));

by encapsulating the parameter array once again. That makes a lot of levels to call one little function... I prefer the .NET way of doing things :)

Another problem came up right at this point: the array, although present in the SOAP body, was not filled on the .NET side. After comparing with a .NET-to-WCF call, I figured out that a namespace was missing. This is what is generated by default with the code I've presented above using NuSOAP :

    <parameters>
      <Parameter>
        <Key>abc</Key>
        <Value>10</Value>
      </Parameter>
      <Parameter>
        <Key>def</Key>
        <Value>42</Value>
      </Parameter>
    </parameters>

And this is what .NET is generating :

    <parameters>
      <Parameter xmlns="http://schemas.datacontract.org/2004/07/MyService">
        <Key>abc</Key>
        <Value>10</Value>
      </Parameter>
      <Parameter xmlns="http://schemas.datacontract.org/2004/07/MyService">
        <Key>def</Key>
        <Value>42</Value>
      </Parameter>
    </parameters>

For some reason, the WCF basicHttpBinding endpoint generates two different namespaces for the service contract and for the data contract. To fix this, you just have to explicitly specify a namespace on each ServiceContract and DataContract attribute of your service :

    [DataContract(Namespace = "http://mywebservice.example.com")]

The other problem was about using an unspecified data type as a member of a structure, a System.Object in my case. It turns out that NuSOAP does not support it: it does not include the data type of the serialized element, so the WCF deserializer cannot interpret it. I changed the data type back to string, unfortunately losing the type information. I can get it from somewhere else, but this can still lead to serialization problems related to culture (the decimal separator being a comma or a dot depending on the system, for instance).
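Since the Value now crosses the wire as a plain string, the culture issue can at least be contained by pinning the culture whenever numbers are converted to and from text. A minimal sketch (the names and values are mine, not part of the service):

```csharp
using System;
using System.Globalization;

class CultureSafeParsing
{
    static void Main()
    {
        // "3.14" would fail or silently mis-parse under a culture whose
        // decimal separator is ',' (fr-FR, for instance).
        string wire = "3.14";

        // Pin the culture on both Parse and ToString when round-tripping
        // numbers as strings.
        double value = double.Parse(wire, CultureInfo.InvariantCulture);
        string back = value.ToString(CultureInfo.InvariantCulture);

        Console.WriteLine(back); // 3.14
    }
}
```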

Anyway, there are a few things to remember to get things to work fine with NuSOAP :

  • Change the encoding to UTF-8 or whatever encoding you choose to use,
  • Don't forget to specify the name of the type of an element in an array in PHP, 
  • Do not expose unspecified parameters or attributes,
  • Explicitly specify the namespace of each DataContract and ServiceContract attribute of your service.

It's been a while since I've written a line of PHP code, and I didn't miss it at all. I'm going back to WCF now :)

C# 3.0, a sneak peek

By Jerome at July 22, 2006 10:22

If you've used both DataSet and DataTable, you must have seen the DataTable.Select method. This is an interesting method that allows you to select rows using a set of criteria, such as IS NULL and comparison operators referencing columns of the current table, as well as columns from other tables through relations. The problem with this method is that it returns a DataRow[], on which you cannot perform another Select.

The solution is actually quite simple, you'll tell me: just copy the rows into a new DataTable. Yes, but you can't reference the same rows from two DataTable instances, so you also have to perform a deep copy of the rows. With a little digging into the DataTable methods, here is what you get :

public static DataTable Select(DataTable table, string filter, string sort)
{
   DataRow[] rows = table.Select(filter, sort);
   DataTable outputTable = table.Clone();

   outputTable.BeginLoadData();

   foreach(DataRow row in rows)
      outputTable.LoadDataRow(row.ItemArray, true);

   outputTable.EndLoadData();
   return outputTable;
}

Clone copies the table schema only, BeginLoadData/EndLoadData disable event processing during the load operation, and LoadDataRow effectively loads each row. This seems to be a fairly fast way to copy a table's data.
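A quick usage sketch of the helper above, showing the point of getting a real DataTable back: you can Select again on the result. The table and column names are mine:

```csharp
using System;
using System.Data;

class SelectDemo
{
    static void Main()
    {
        DataTable people = new DataTable("People");
        people.Columns.Add("Name", typeof(string));
        people.Columns.Add("Age", typeof(int));
        people.Rows.Add("Alice", 31);
        people.Rows.Add("Bob", 25);
        people.Rows.Add("Carol", 42);

        // First filter returns a DataTable, not a DataRow[]...
        DataTable adults = Select(people, "Age > 30", "Age ASC");

        // ...so we can filter it again.
        DataTable older = Select(adults, "Age > 40", "Name ASC");

        Console.WriteLine(older.Rows.Count);      // 1
        Console.WriteLine(older.Rows[0]["Name"]); // Carol
    }

    // The helper from the post, unchanged.
    public static DataTable Select(DataTable table, string filter, string sort)
    {
        DataRow[] rows = table.Select(filter, sort);
        DataTable outputTable = table.Clone();

        outputTable.BeginLoadData();

        foreach (DataRow row in rows)
            outputTable.LoadDataRow(row.ItemArray, true);

        outputTable.EndLoadData();
        return outputTable;
    }
}
```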

Now, I wondered how they would do this in C# 3.0, since the new LINQ syntax involves a lot of data manipulation. This version is quite interesting because instead of evolving the runtime, they chose to upgrade only the language, adding features that generate a lot of code under the hood. That was the case in C# 2.0 with iterators and anonymous methods, and C# 1.0 already had this with foreach, using or lock, for instance.

In the particular case of LINQ, C# 3.0 translates a LINQ query into a chain of method invocations, producing standard C# 3.0 code with the help of lambda expressions. For example, these two lines are equivalent :

   var query = from a in test where a > 2 select a;
   var query2 = Sequence.Where(test, a => a > 2);

This ties the compiler a little more to the system assemblies, but that does not really matter anymore.

By the way, you can apply queries to standard arrays and join them :

static void Main(string[] args)
{
   var names = new[] {
      new { Id=0, name="test" },
      new { Id=1, name="test1" },
      new { Id=2, name="test2" },
      new { Id=4, name="test2" },
   };

   var addresses = new[] {
      new { Id=0, address="address" },
      new { Id=1, address="address1" },
      new { Id=2, address="address2" },
      new { Id=3, address="address2" },
   };

   var query = from name in names
      join address in addresses on name.Id equals address.Id
      orderby name.name
      select new {name = name.name, address = address.address};

   foreach(var value in query)
      Console.WriteLine(value);
}

I've joined the two arrays on the Id field, creating a new anonymous type that extracts both name and address. I really like inline querying, because you can query anything that implements IEnumerable.

I'm also wondering how it'll fit into eSQL (Entity SQL)...

But back to the original subject of this post. They had to do some kind of DataTable copy in the C# 3.0 helper library, which uses extension methods :

   DataTableExtensions.ToDataTable<T>(IEnumerable<T>)

And with some further digging, I found that the LoadDataRow method for copying data is the fastest way to go.

I also found out, using the great Reflector, that there is an expression compiler in System.Expressions.Expression<T>. Maybe they finally did expose an expression parser that we can use... I'll try this one too !
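For reference, in the released bits this type ended up in the System.Linq.Expressions namespace. A minimal sketch of compiling an expression tree into a callable delegate:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // The lambda is captured as a data structure (an expression tree),
        // then Compile turns it into a regular delegate.
        Expression<Func<int, int>> expr = x => x + 1;
        Func<int, int> compiled = expr.Compile();

        Console.WriteLine(compiled(41)); // 42
    }
}
```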

Precision Timer in .NET 2.0

By Jerome at November 18, 2005 16:25

If you've been using the .NET Framework since the beginning, you must have had to do some early code profiling, or framerate computation for realtime graphics.

The first idea that pops up to achieve this is to use the DateTime.Now property and subtract two instances. This is not a good idea, since the resolution of this timer is around 10ms, which is clearly not enough (your framerate counter may not go higher than 100 FPS, or worse, may not work at all).

If you've been in the business long enough, doing some plain old "native" code in, say, C++ on Win32, you have probably used the QueryPerformanceFrequency/QueryPerformanceCounter pair to get the job done. The same goes for .NET 1.0/1.1. I don't know about you, but each time a project of mine reaches a certain critical size, I always need this kind of timer, and I end up writing the P/Invoke wrapper to reach these two functions.

Good news: .NET 2.0 already integrates this class in the form of System.Diagnostics.Stopwatch, so you don't have to write it from scratch again and again because you can't find the right "free" class on the net that does enough for you.
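A minimal sketch of the class in action (the Sleep stands in for whatever code is being measured):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class StopwatchDemo
{
    static void Main()
    {
        Stopwatch watch = Stopwatch.StartNew();

        // ... the code being measured ...
        Thread.Sleep(100);

        watch.Stop();

        // When IsHighResolution is true, Stopwatch is backed by the
        // high-resolution performance counter.
        Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
        Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
    }
}
```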

The BCL team has added some other nice utility classes like this one, and this saves quite some time.

Reflective Visitor using C#

By Jerome at April 04, 2005 11:35

There is a well known design pattern in the Object Oriented world: The Visitor pattern.

This pattern, among other things, allows you to extend an object without actually modifying it. It is fairly easy to implement in any good OO language such as C++, Java or C#.

However, there is a limitation in the implementation of this pattern: it requires the base interface or abstract class of all visitors to define one method for each type that may be visited. This is not a problem by itself, but it means modifying the visitor base each time you add a new type. A better way would be to call the appropriate method based on the actual type of the caller.

.NET provides that kind of behavior through reflection, as it is possible to find a method based on its parameters at runtime.

I decided to try this out with C# 2.0 and its generics :)

Here is what I came up with :

 

public interface IOperand
{
  IOperand Accept(Visitor visitor, IOperand right);
}

public class Operand<T> : IOperand
{
  private T _value;

  public Operand(T value)
  {
    _value = value;
  }

  public IOperand Accept(Visitor v, IOperand right)
  {
    return v.Visit(this, right);
  }

  public T Value
  {
    get { return _value; }
  }

  public override string ToString()
  {
    return string.Format("{0} ({1})", _value, GetType());
  }
}

This is the definition of an operand, used in an abstract machine to perform operations on abstract types. The class is generic, as I did not want to implement all the possible types.

Then here is the visitor :

 

public class Visitor
{
  public virtual IOperand Visit(IOperand left, IOperand right)
  {
    MethodInfo info = GetType().GetMethod("Visit", new Type[] { left.GetType(), right.GetType() });

    if (info != null && info.DeclaringType != typeof(Visitor))
      return info.Invoke(this, new object[] { left, right }) as IOperand;

    Console.WriteLine("Operation not supported");
    return null;
  }
}

This method searches the current type for a "Visit" method whose parameters match the actual runtime types of left and right. To avoid looping back into the very method we're in, since it matches everything, there is a test on the type declaring the method.

Now the AdditionVisitor :

 

public class AdditionVisitor : Visitor
{
  public IOperand Visit(Operand<int> value, Operand<int> right)
  {
    return new Operand<int>(value.Value + right.Value);
  }
  public IOperand Visit(Operand<int> value, Operand<short> right)
  {
    return new Operand<int>(value.Value + right.Value);
  }
  public IOperand Visit(Operand<double> value, Operand<int> right)
  {
    return new Operand<double>(value.Value + right.Value);
  }
}

It defines a bunch of visitable methods used to add different combinations of IOperand-based types.

And finally to use it :

 

class Program
{
  static void Main(string[] args)
  {
    Operand<int> a = new Operand<int>(21);
    Operand<short> b = new Operand<short>(21);
    Console.WriteLine(Add(a, b));
  }

  static IOperand Add(IOperand a, IOperand b)
  {
    AdditionVisitor addVisitor = new AdditionVisitor();
    return a.Accept(addVisitor, b);
  }
}

Using this reflective visitor, modifying the base visitor class is no longer needed, which limits the modifications to one class only. Of course, there's room for optimization, for instance by caching the method lookup done through the System.Reflection namespace, but you get the picture.
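As a sketch of that optimization, the MethodInfo lookup can be cached per pair of operand runtime types, so reflection runs only once per combination. The types below are simplified stand-ins for the IOperand hierarchy above, and the names are mine:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public class CachingVisitor
{
    // Cache keyed on the runtime types of both operands.
    private readonly Dictionary<string, MethodInfo> _cache =
        new Dictionary<string, MethodInfo>();

    public virtual object Visit(object left, object right)
    {
        string key = left.GetType().FullName + "|" + right.GetType().FullName;

        MethodInfo info;
        if (!_cache.TryGetValue(key, out info))
        {
            // The reflection lookup happens only on the first call
            // for this pair of types.
            info = GetType().GetMethod(
                "Visit", new Type[] { left.GetType(), right.GetType() });
            _cache[key] = info;
        }

        if (info != null && info.DeclaringType != typeof(CachingVisitor))
            return info.Invoke(this, new object[] { left, right });

        return null;
    }
}

public class AddVisitor : CachingVisitor
{
    public object Visit(int left, int right) { return left + right; }
}

class Demo
{
    static void Main()
    {
        AddVisitor v = new AddVisitor();
        Console.WriteLine(v.Visit((object)20, (object)22)); // 42
        Console.WriteLine(v.Visit((object)1, (object)2));   // cached lookup: 3
    }
}
```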

Some people have asked me what can be done in .NET that cannot be done in C++; this is an example of it :)

C# 2.0, Closures and Anonymous Delegates

By Jerome at April 04, 2005 11:33

I was looking around the web for new features in C# 2.0, and I came across an article about the support for closures in C# 2.0. The article explains that this support takes the form of anonymous delegates.

There are some examples of closures like this one :

public List<Employee> Managers(List<Employee> emps)
{
  return emps.FindAll(
    delegate(Employee e)
    {
      return e.IsManager;
    }
  );
}

Which is interesting, but less so than this one :

public List<Employee> HighPaid(List<Employee> emps)
{
  int threshold = 150;
  return emps.FindAll(
    delegate(Employee e)
    {
      return e.Salary > threshold;
    }
  );
}

The interesting part here is that the delegate is allowed to use a variable that is local to the method where it is defined. You might wonder how the C# compiler implements this.
It may become even less obvious with this example :

public Predicate<Employee> PaidMore(int amount)
{
  return delegate(Employee e)
  {
    return e.Salary > amount;
  };
}

Ok, where does the compiler store the value of "amount", since the delegate is only returned to be executed later... ?

In fact, the compiler generates a "DisplayClass" that contains amount as a field, initialized when the anonymous delegate is created, along with the implementation of the delegate itself.
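A readable sketch of what the compiler generates for the PaidMore example above. The actual generated names are compiler-mangled (something like "<>c__DisplayClass1"), so the names below are stand-ins, and Employee is reduced to the one field we need:

```csharp
using System;

public class Employee
{
    public int Salary;
}

// Stand-in for the compiler-generated closure class: the captured local
// becomes a public field, and the anonymous delegate body becomes a method.
public sealed class DisplayClass
{
    public int amount;

    public bool PaidMoreImpl(Employee e)
    {
        return e.Salary > amount;
    }
}

public class Payroll
{
    // PaidMore then becomes roughly this:
    public Predicate<Employee> PaidMore(int amount)
    {
        DisplayClass closure = new DisplayClass();
        closure.amount = amount;
        return new Predicate<Employee>(closure.PaidMoreImpl);
    }
}

class Demo
{
    static void Main()
    {
        Predicate<Employee> p = new Payroll().PaidMore(100);

        Employee e = new Employee();
        e.Salary = 150;

        Console.WriteLine(p(e)); // True
    }
}
```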

Easy.

The Disposable Pattern, Determinism in the .NET World

By Jerome at August 16, 2004 11:43
One of the numerous features of the CLR is garbage collection. Depending on the programmer's background, this can be a little disturbing for those used to managing memory by hand (especially plain C programmers). For C++ developers, even though memory management can be abstracted by the use of the STL, C++ has a pretty strong model that provides a deterministic way of handling memory allocations. Once an object has reached the end of its life, either because the control flow leaves its scope or because its container object is destroyed, the object is immediately destroyed: the C++ compiler calls the object's destructor and releases the memory.

The CLR model is, however, weaker than that. The Garbage Collector (GC) is generally more efficient than the programmer, and it handles object destruction asynchronously. When an object is created, it can provide the GC a special method, the Finalizer, which the GC calls when it needs to reclaim the memory used by the object. This can happen at any time, on an internal CLR thread, and only when necessary. This means that memory management is really efficient and fast, and that it can adapt to the environment the program runs in by using the available memory efficiently.

The effect of this asynchronous behavior is that there is no deterministic destruction of objects. This is one of the most frequent criticisms from programmers beginning with .NET. The biggest trap in this area for C++ developers is the syntax of the finalizer in managed C++ and C#.
For instance, in C# :

      class Program
      {
        static void Main(string[] args)
        {
          {
            Dummy dummy = new Dummy();
          }

          Console.WriteLine("End of main.");
        }
      }

      public class Dummy
      {
        ~Dummy()
        {
          Console.WriteLine("Dummy.~Dummy()");
        }
      }

Since the instantiation of the Dummy class is enclosed in braces, a C++ programmer would think that the so-called destructor is called right at the end of the scope, before the last WriteLine. In reality, the GC will call the Finalizer when the memory is reclaimed: at the end of the program execution. A concrete view of this problem is often found when using file IO :

  class Program
  {
    static void Main(string[] args)
    {
      StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt"));
      writer.Write("Some value");

      Console.ReadLine();
    }
  }

The problem with this program is that the writer is never closed. It will eventually be closed when the GC calls the finalizer of the writer instance, thus closing the file. This is a common problem in C# programs: file handles are left open, preventing the files from being opened by some other program. To fix this problem, there are two methods :
  • Call the StreamWriter.Close method when the stream is not used anymore,
  • Use the using keyword to limit the scope of the object.
The using keyword is a shortcut in the C# language that calls a method of the System.IDisposable interface at the end of its scope. In practice, this is what it means :

  static void Main(string[] args)
  {
    using (StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt")))
    {
      writer.Write("Some value");
    }
  }

Which is in fact expanded by the C# compiler to :

  static void Main(string[] args)
  {
    StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt"));

    try {
      writer.Write("Some value");
    }
    finally {
      writer.Dispose();
    }
  }

This is straightforward: the Dispose method is called at the end of the "using" scope. One thing though, this does not mean that the memory allocated for the StreamWriter instance is reclaimed right after the Dispose. It only means that the instance releases the "unmanaged" objects it holds; a file handle in this case.

But one might say: "If the programmer forgets to call Dispose or Close, the file is not closed at the end". Actually, no. This is where the Disposable pattern enters the scene. The good thing about it is that you can combine the Dispose method and the GC, the GC being the safekeeper of the unmanaged resources of the object, even if the programmer forgets to call the Close or Dispose method. Here is an example of a type that implements the Disposable pattern :

  public class MyDisposable : IDisposable
  {
    public void Dispose()
    {
      Dispose(true);
    }

    ~MyDisposable()
    {
      Dispose(false);
    }

    private void Dispose(bool disposing)
    {
      // If we come from the Dispose method, suppress the
      // finalize method, so this instance is only disposed
      // once.
      if (disposing)
        GC.SuppressFinalize(this);

      // Release any unmanaged resource
      // ...
    }
  }

This class implicitly implements the IDisposable interface by defining the Dispose method.
Here, both the Dispose method and the Finalizer call an overload of the Dispose method. This overload is the code that actually releases unmanaged resources. You might note the use of the GC.SuppressFinalize method, which prevents the GC from running the Finalizer later on and disposing the instance a second time. This also has an added benefit: it removes some pressure from the GC, as finalizing objects is rather expensive.

This pattern can also be completed with an "already disposed" check, to avoid the effects of multiple Dispose calls. There are two possible behaviors there: either silently ignore any subsequent call, or throw an ObjectDisposedException. Using one or the other is a matter of context.

While not every object needs to be finalizable (and disposable), each time you add a finalizer, you should also implement the System.IDisposable interface and the Disposable pattern.


Don't get C# volatile the wrong way

By Jerome at August 05, 2004 11:45

Don't get the C# volatile keyword the wrong way. There is a lot of blurriness around synchronization issues in .NET, especially around memory barriers, System.Monitor, the lock keyword and the like.

It is common to have objects that create unique identifiers by means of an index incremented each time a new value is retrieved. In a simplistic form, it would look like this :

   public class UniqueIdentifier
   {
      private static int _currentIndex = 0;

      public int NewIndex
      {
         get
         {
            return _currentIndex++;
         }
      }
   }

This is pretty straightforward: Each time the NewIndex property is called, a new index is returned.

But there is a problem in a multithreaded environment, where multiple threads can call NewIndex at the same time. If we look at the code generated for the getter, here is what we have :

  IL_0000:  ldsfld     int32 UniqueIdentifier::_currentIndex
  IL_0005:  dup
  IL_0006:  ldc.i4.1
  IL_0007:  add
  IL_0008:  stsfld     int32 UniqueIdentifier::_currentIndex

One thing about multithreading in general: the system can suspend a thread anywhere it wants, in particular between the load at IL_0000 and the store at IL_0008. The effect is pretty obvious: if some other thread executes that same piece of code during this window, both threads end up with the same "new" index, each one incrementing from the same value. This scenario assumes a uniprocessor system, which interleaves the execution of running threads. On multiprocessor systems, threads do not even need to be suspended to hit this exact same problem. While this kind of race condition is harder to produce on a uniprocessor, it is far easier to fall into with multiple processors.

This is a very common problem when programming in multithreaded environments, generally fixed by means of synchronization mechanisms like mutexes or critical sections. The whole operation needs to be atomic, which means executed by at most one thread at a time.

In the native world, in C/C++ for instance, the language does not provide any built-in synchronization mechanism, and programmers have to do all the work by hand. The .NET Framework with C#, on the other hand, integrates that kind of mechanism into the language: the volatile and lock keywords.

A common and incorrect interpretation of the volatile keyword is to think that all operations (as opposed to mere accesses) on a volatile variable are synchronized. This generally leads to this kind of code :

   public class UniqueIdentifier
   {
      private static volatile int _currentIndex = 0;

      public int NewIndex
      {
         get
         {
            return _currentIndex++;
         }
      }
   }

While this code is valid, it does not fix the synchronization problem at hand. The correct interpretation of the volatile keyword is that read and write operations on a volatile field must not be reordered, and that the value of the field must not be cached.

On a single-processor x86 system, the only effect of the volatile keyword is that the value is never cached in something like a register and is always fetched from memory. Since there is only one set of caches and one processor, there is no risk of inconsistencies where memory would have been modified elsewhere (this is called processor self-consistency).
But on a multiprocessor system, each processor has its own data cache, and depending on the cache policy, an updated value for the variable might not be written back immediately to main memory, where the other threads requesting it could see it. In fact, it might never be, depending on the cache policy. In practice, this kind of situation is really hard to reproduce because of the high utilization of the cache and its frequent flushes.

Back to volatile: it means that reads and writes will always target main memory; in practice, a volatile read or write implies a memory barrier. So when using a volatile variable, a thread is sure to see the latest value.
Back to our example: while we are sure to read the latest value, the read/increment/write sequence is still not atomic and can be interrupted right in the middle.

To have a correct implementation of this UniqueIdentifier generator, we have to make that sequence atomic :

   public class UniqueIdentifier
   {
      private static int _currentIndex = 0;

      // Static, because it guards a static field: all instances
      // must share the same lock.
      private static object _syncRoot = new object();

      public int NewIndex
      {
         get
         {
            lock(_syncRoot)
            {
               return _currentIndex++;
            }
         }
      }
   }

In this example, we are using the lock keyword. This is pretty nice because it uses the System.Threading.Monitor class to create a scope that only one thread at a time can enter. While this solves the atomicity problem, you might notice that the volatile keyword is gone. This is where the Monitor does something under the hood: it performs a memory barrier at the acquisition of the lock and another at its release.

That's a lot of implicit work done by the CLR, and it can be pretty hard to catch up on all of it. Besides, the x86 memory model is pretty strong and does not expose many race conditions of this kind, while a weaker memory model, like the Itanium's, would.

As a conclusion, use lock and forget volatile. :)
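For a plain counter like this one, there is an even lighter option than lock: System.Threading.Interlocked, which makes the increment atomic and comes with the appropriate memory barriers. A minimal sketch (note that Interlocked.Increment returns the incremented value, where the original getter returned the previous one):

```csharp
using System;
using System.Threading;

public class UniqueIdentifier
{
    private static int _currentIndex = 0;

    public int NewIndex
    {
        get
        {
            // Atomic read-modify-write, safe across threads and
            // processors, no lock object needed.
            return Interlocked.Increment(ref _currentIndex);
        }
    }
}

class Demo
{
    static void Main()
    {
        UniqueIdentifier id = new UniqueIdentifier();
        Console.WriteLine(id.NewIndex); // 1
        Console.WriteLine(id.NewIndex); // 2
    }
}
```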

By the way, this article is pretty interesting on the subject.

SyncProxy, an implicit synchronizer

By Jerome at April 04, 2004 11:51

A few days ago, I ran into an MVP article on MSDN about synchronizing asynchronous calls to web services heading back to the GUI. Since asynchronous calls are processed on separate threads, it is not safe to call GUI methods directly from them. The article described how to create some sort of helper class (some call this an agent, or a pattern) that hides the call to BeginInvoke, freeing the developer from writing many small methods that would only contain calls to BeginInvoke.

Back in dotnetSoul, I did not have to call web services, at least not in a repetitive way, but I had to synchronize calls from events generated by the netsoul core. These events are fired from asynchronous read operations on a network stream, which means that consuming them from the GUI requires synchronizing the calls that update controls.

The old-fashioned way is to register a "standard" method which only forwards to the synchronous method via BeginInvoke, with the same parameters :

    _chatRoom.UserJoined += new EventHandler(OnChatRoomUserJoined);

...

    private void OnChatRoomUserJoined(object sender, EventArgs args)
    {
         BeginInvoke(new SyncEventHandler(SyncChatRoomUserJoined), new object[]{ sender, args });
    }

    private void SyncChatRoomUserJoined(object sender, EventArgs args)
    {
         // call the UI from there...
    }

What a waste of time, and an error-prone way of doing many synchronous GUI calls, since the netsoul core exposes 20+ events to consumers. You might notice I'm using a SyncEventHandler delegate; I'm using it so that BeginInvoke does not alter the parameters passed in.

So, recalling the class used to synchronize the calls to the UI, I thought I could create some sort of proxy class that could be used both to register the event and to create the instance of the proxy. I then came up with this :

using System;
using System.Windows.Forms;

namespace Epitech.NetSoul.UI
{
    public delegate void SyncEventHandler(object sender, EventArgs args);

    public class SyncProxy
    {
        private Control             _control;
        private SyncEventHandler    _syncHandler;
        private EventHandler        _asyncHandler;

        public SyncProxy(Control control, SyncEventHandler syncHandler)
        {
            _control        = control;
            _syncHandler    = syncHandler;
            _asyncHandler   = new EventHandler(AsyncHandler);
        }

        // Implicit operator to allow an easy registering on events
        public static implicit operator EventHandler(SyncProxy proxy)
        {
            return proxy._asyncHandler;
        }

        // The asynchronous delegate, which calls the synchronous delegate through
        // BeginInvoke, if required.

        private void AsyncHandler(object sender, EventArgs args)
        {
            if(_control.InvokeRequired)
            {
                _control.BeginInvoke(_syncHandler, new object[]{ sender, args });
            }
            else
            {
                _syncHandler(sender, args);
            }
        }
    }
}

Which can be used like this :

    _chatRoom.UserJoined += new SyncProxy(this, new SyncEventHandler(OnChatRoomUserJoined));

This call creates the proxy, then registers the asynchronous delegate from the proxy through an implicit cast to EventHandler. This way, there is only one event handler to create inside the destination form per handled event.

About me

My name is Jerome Laban. I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.