Playing with WCF and NuSOAP 0.7.2

By Jerome at January 25, 2007 10:24

.NET 3.0 has now been released, along with Windows Communication Foundation (WCF). I thought I could give it a shot for one of my projects at work, where I have to create an internal web service. The problem is that the client at the other end will be using NuSOAP 0.7.2 to contact this WebService, so I had to make sure it would work fine.

First observation : compared to an ASP.NET-generated web service, the WCF WSDL is much more complex. It actually carries a bit more information, such as the validation rules for a GUID, but it also has its schema split across multiple XSD files. I was a bit worried that NuSOAP wouldn't handle that well, but it does fine... I also wanted to be able to expose a signature like this :

    [DataContract]
    public class Parameter
    {
        private string _key;
        private object _value;

        [DataMember]
        public string Key
        {
            get { return _key; }
            set { _key = value; }
        }

        [DataMember]
        public object Value
        {
            get { return _value; }
            set { _value = value; }
        }
    }

You'll notice that the second property is an object, which implies that the serializer and deserializer must properly handle types defined at runtime, by using xsd:anyType in the schema.

So, after a few attempts at finding a working Linux LiveCD distro, none of which had PHP compiled with the --enable-soap flag, I decided to fall back from the native PHP SOAP extension to the open-source SOAP library NuSOAP, and to use EasyPHP on Windows.

First, I had to change NuSOAP's response encoding to UTF-8 using soap_defencoding, since that seems to be the default for WCF; then I had to figure out how to pass an array of structures when calling my method.

So, for the following signature :

    void MyMethod(string a, string b, Parameter[] parameters)

The caller's php parameter structure should be :

$params = array(
    'a' => $a,
    'b' => $b,
    'parameters' => array(
        'Parameter' => array(
            array('Key' => 'abc', 'Value' => 10),
            array('Key' => 'def', 'Value' => 42)
        )
    )
);

Notice that you have to place a "Parameter" element inside the "parameters" parameter.

Then, to call the method, use the following line :

    $client->call("MyMethod", array($params));

by encapsulating the parameter array once again. That is a lot of nesting levels to call one little function... I prefer the .NET way of doing things :)

Another problem came up right at this point : the array, although the data was present in the SOAP body, was not filled on the .NET side. After comparing with a .NET-to-WCF call, I figured out that a namespace was missing. This is what NuSOAP generates by default with the code I've presented above :


And this is what .NET is generating :

<Parameter xmlns="">
<Parameter xmlns="">

For some reason, the WCF basicHttpBinding endpoint generates two different namespaces for the service contract and for the data contract. To fix this, you just have to explicitly specify a namespace on each ServiceContract and DataContract attribute of your service :
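A minimal sketch of what that looks like; the namespace URI and the contract names here are placeholders of my own, not the ones from the project described above :

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical names; the point is the explicit Namespace on both attributes.
[ServiceContract(Namespace = "http://example.com/myservice")]
public interface IMyService
{
    [OperationContract]
    void MyMethod(string a, string b, Parameter[] parameters);
}

[DataContract(Namespace = "http://example.com/myservice")]
public class Parameter
{
    private string _key;
    private string _value;

    [DataMember]
    public string Key
    {
        get { return _key; }
        set { _key = value; }
    }

    [DataMember]
    public string Value
    {
        get { return _value; }
        set { _value = value; }
    }
}
```

With both attributes sharing the same namespace, the array serialized by NuSOAP matches what the WCF deserializer expects.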


The other problem was about using an unspecified data type as a member of a structure, a System.Object in my case. Well, it turns out that NuSOAP does not support it : it does not include the data type of the serialized element, so the WCF deserializer cannot interpret it. I changed the data type back to string, unfortunately losing the type information. I can get it from somewhere else, but this can still lead to culture-related serialization problems (a decimal comma versus a decimal point depending on the system, for instance).

Anyway, there are a few things to remember to get everything working fine with NuSOAP :

  • Change the encoding to UTF-8, or whatever encoding you choose to use,
  • Don't forget to specify the name of the element type for arrays in PHP,
  • Do not expose parameters or members of an unspecified type,
  • Explicitly specify the namespace of each DataContract and ServiceContract attribute of your service.

It's been a while since I've written a line of PHP code, and I didn't miss it at all. I'm going back to WCF now :)

ODP.NET Connection Pool Race Condition

By Jerome at December 22, 2006 12:56

Aaah, the joys of Oracle. My favorite database... along with its parade of bugs...

OK, I'll stop the sarcasm there, but it's hard :)

On a project I am working on, I ran into a very, very annoying bug : Oracle's ODP.NET provider can connect to a schema it is not supposed to connect to...

Let me explain : in the same AppDomain of the same process, I create, on two different threads, two OracleConnection objects with two different connection strings. Nothing unusual so far. The problem is that, from time to time, one of the two connections will use the connection string of the other thread !

After a quick check of the contents of the OracleConnection object, it turns out that a variable named "m_InternalConStr" sometimes contains a string that does not match at all the one passed as a parameter to the constructor... This is very annoying, because if you connect to one database while thinking you are connecting to another... I'll let you imagine the damage if you run updates.

So, after many attempts, I came to the conclusion that all calls to OracleConnection, from the constructor through Open, must be synchronized with a Mutex, across the current AppDomain. You can easily imagine the bottleneck. I could just as well have disabled the connection pool, but that also hurts performance.
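A sketch of the workaround : serialize construction and Open behind a single Mutex. OracleConnection is not usable here, so a stand-in class takes its place; only the synchronization pattern matters.

```csharp
using System;
using System.Threading;

// A stand-in for OracleConnection; only the synchronization pattern matters here.
class FakeConnection
{
    public readonly string ConnectionString;
    public bool IsOpen;

    public FakeConnection(string connectionString)
    {
        ConnectionString = connectionString;
    }

    public void Open()
    {
        IsOpen = true;
    }
}

class SafeConnectionFactory
{
    // One Mutex for the whole AppDomain : the constructor and Open are both
    // inside the critical section, since the race occurs between the two.
    private static readonly Mutex _connectionMutex = new Mutex();

    public static FakeConnection OpenConnection(string connectionString)
    {
        _connectionMutex.WaitOne();
        try
        {
            FakeConnection conn = new FakeConnection(connectionString);
            conn.Open();
            return conn;
        }
        finally
        {
            _connectionMutex.ReleaseMutex();
        }
    }
}
```

Every thread goes through OpenConnection, which is exactly the bottleneck mentioned above.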

Of course, this kind of problem shows up more often on a multi-processor machine. (In production, in my case; always a pleasure...)

I cannot reproduce the race condition systematically, but I am attaching to this post some sample code that tests all of this. Usually, the error shows up after a few tries. To check whether the connection pool is consistent, I use a few lines of reflection to fetch some internal variables... not very clean, but deterministic enough.

If anyone feels motivated enough to test it... :)

IIS, HTTP 401.3 and ASP.NET directories ACLs

By Jerome at September 01, 2006 22:09

A few days ago, on a newly installed web server with all the appropriate security patches applied, I kept having the same error on every ASP.NET 1.1 application I was running :

HTTP Error 401.3 - Unauthorized: Access is denied due to an ACL set on the requested resource.

At first, the reflex is to check all the permissions on the mapped physical directory : that they match the application pool identity, the guest identity (IUSR_Machine on my server) and, for some configurations, the impersonated identity from the ASP.NET configuration. Even with all these checks, every ASP.NET application kept returning the same 401.3 error to anonymous users...

Well, it turns out that the ACL of the %SystemRoot%\Microsoft.NET\Framework\v1.1.4322 directory is important too... I don't know how that ACL got changed in the first place, nor how I came to check it, but it can waste a lot of time...

C# 3.0, a sneak peek

By Jerome at July 22, 2006 10:22

If you've used both DataSet and DataTable, you must have seen the DataTable.Select method. This is an interesting method that allows selecting rows using a set of criteria, like IS NULL and comparison operators referencing columns of the current table, as well as columns from other tables through relations. The problem with this method is that it returns a DataRow[], on which you cannot perform another select.

The solution is actually quite simple : just copy the rows into a new DataTable, you'll tell me. Yes, but you can't just reference rows from two DataTable instances, so you also have to perform a deep copy of the rows. So, with a little digging into the DataTable methods, here is what you get :

public static DataTable Select(DataTable table, string filter, string sort)
{
   DataRow[] rows = table.Select(filter, sort);
   DataTable outputTable = table.Clone();

   outputTable.BeginLoadData();

   foreach(DataRow row in rows)
      outputTable.LoadDataRow(row.ItemArray, true);

   outputTable.EndLoadData();

   return outputTable;
}

Clone is used to copy the table schema only, BeginLoadData/EndLoadData to disable any event processing during the load operation, and LoadDataRow to effectively load each row. This seems to be a fairly fast way to copy a table's data.

Now, I wondered how they would do this in C# 3.0, since there is a lot of data manipulation with the new LINQ syntax. This version is quite interesting because, instead of evolving the runtime, they chose to upgrade only the language, adding features that generate a lot of code under the hood. That was the case in C# 2.0 with iterators and anonymous methods, and C# 1.0 also had this with foreach, using or lock, for instance.

In the particular case of LINQ, C# 3.0 translates a LINQ query into a chain of method invocations, producing standard C# 3.0 code with the help of lambda expressions. For example, these two lines are equivalent :

   var query = from a in test where a > 2 select a;
   var query2 = Sequence.Where(test, a => a > 2);

This ties the compiler a little more to the system assemblies, but that does not really matter anymore.

By the way, you can apply queries to standard arrays and join them :

static void Main(string[] args)
{
   var names = new[] {
      new { Id=0, name="test" },
      new { Id=1, name="test1" },
      new { Id=2, name="test2" },
      new { Id=4, name="test2" },
   };

   var addresses = new[] {
      new { Id=0, address="address" },
      new { Id=1, address="address1" },
      new { Id=2, address="address2" },
      new { Id=3, address="address2" },
   };

   var query = from name in names
      join address in addresses on name.Id equals address.Id
      select new { name = name.name, address = address.address };

   foreach(var value in query)
      Console.WriteLine(value);
}

I've joined the two arrays using the Id field, creating a new anonymous type that extracts both the name and the address. I really like inline querying, because you can query anything that implements IEnumerable.

I'm also wondering how it'll fit into eSQL (Entity SQL)...

But back to the original subject of this post. They had to do some kind of a DataTable copy in the C# 3.0 helper library, which uses extension methods :


And with some further digging, I found that using the LoadDataRow method is the fastest way to copy data.

I also found out, using the great Reflector, that there is an expression compiler in System.Expressions.Expression&lt;T&gt;. Maybe they finally did expose an expression parser that we can use... I'll try that one too !
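For reference, in the released framework this compiler surfaced under System.Linq.Expressions; a minimal sketch of compiling an expression tree into a delegate at runtime, using the shipped namespaces rather than the preview ones mentioned above :

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build an expression tree from a lambda, then compile it into a delegate.
        Expression<Func<int, int>> square = x => x * x;
        Func<int, int> compiled = square.Compile();

        Console.WriteLine(compiled(5)); // prints 25
    }
}
```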

Precision Timer in .NET 2.0

By Jerome at November 18, 2005 16:25

If you've been using the .NET Framework since the beginning, you must have had to do some early code profiling, or framerate computation for realtime graphics.

The first idea that pops up to achieve this is to use the DateTime.Now property and subtract two instances. This is not a good idea, since the resolution of this timer is around 10 ms, which is clearly not enough (your framerate counter may not go above 100 FPS, or worse, may not work at all).

If you've been in the business long enough, doing some plain old "native" code in, say, C++ on Win32, you have probably used the QueryPerformanceFrequency/QueryPerformanceCounter pair to get the job done. The same went for .NET 1.0/1.1. Well, I don't know about you, but each time one of my projects reaches a certain critical size, I need this kind of timer, and I end up writing the P/Invoke wrapper to reach these two functions.

The good news is that .NET 2.0 already has this class integrated, in the form of System.Diagnostics.Stopwatch, so you don't have to write it from scratch again and again because you can't find the right "free" class on the net that does enough for you.
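A quick sketch of the class in action :

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class StopwatchDemo
{
    static void Main()
    {
        Stopwatch watch = Stopwatch.StartNew();
        Thread.Sleep(50);            // the code being timed
        watch.Stop();

        // Stopwatch uses the high-resolution performance counter when available.
        Console.WriteLine("IsHighResolution: {0}", Stopwatch.IsHighResolution);
        Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
    }
}
```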

The BCL team has added some other nice utility classes like this one, and this saves quite some time.

Reflective Visitor using C#

By Jerome at April 04, 2005 11:35

There is a well known design pattern in the Object Oriented world: The Visitor pattern.

This pattern, among other things, allows extending an object without actually modifying it. It is fairly easy to implement in any good OO language such as C++, Java or C#.

However, there is a problem with the implementation of this pattern, or rather an implementation limitation : it requires the base interface or abstract class of all visitors to define one method for each type that may be visited. This is not a problem by itself, but it means modifying the visitor base each time you add a new type. The best way around this would be to call the appropriate method based on the runtime type of the arguments.

.NET provides that kind of behavior through reflection, as it is possible to find a method based on its parameters at runtime.

I decided to try this out with the C# 2.0 and its generics :)

Here is what I came up with :


public interface IOperand
{
  IOperand Accept(Visitor visitor, IOperand right);
}

public class Operand<T> : IOperand
{
  T _value;

  public Operand(T value)
  {
    _value = value;
  }

  public IOperand Accept(Visitor v, IOperand right)
  {
    return v.Visit(this, right);
  }

  public T Value
  {
    get { return _value; }
  }

  public override string ToString()
  {
    return string.Format("{0} ({1})", _value, GetType());
  }
}

This is the definition of an operand, used in an abstract machine to perform operations on abstract types. The class is generic, as I did not want to implement all the possible types.

Then here is the visitor :


public class Visitor
{
  public virtual IOperand Visit(IOperand left, IOperand right)
  {
    MethodInfo info = GetType().GetMethod("Visit", new Type[] { left.GetType(), right.GetType() });

    if (info != null && info.DeclaringType != typeof(Visitor))
      return info.Invoke(this, new object[] { left, right }) as IOperand;

    Console.WriteLine("Operation not supported");
    return null;
  }
}

This method searches the current type for a method named "Visit" that takes the actual runtime types of the left and right parameters, and tries to match a method with them. Also, to avoid looping back into the very method we're in, since it matches everything, there is a test on the type declaring the method.

Now the AdditionVisitor :


public class AdditionVisitor : Visitor
{
  public IOperand Visit(Operand<int> value, Operand<int> right)
  {
    return new Operand<int>(value.Value + right.Value);
  }

  public IOperand Visit(Operand<int> value, Operand<short> right)
  {
    return new Operand<int>(value.Value + right.Value);
  }

  public IOperand Visit(Operand<double> value, Operand<int> right)
  {
    return new Operand<double>(value.Value + right.Value);
  }
}

Which defines a set of visit methods used to add different combinations of IOperand-like types.

And finally to use it :


class Program
{
  static void Main(string[] args)
  {
    Operand<int> a = new Operand<int>(21);
    Operand<short> b = new Operand<short>(21);

    Console.WriteLine(Add(a, b));
  }

  static IOperand Add(IOperand a, IOperand b)
  {
    AdditionVisitor addVisitor = new AdditionVisitor();
    return a.Accept(addVisitor, b);
  }
}

Using this reflective visitor, modifying the base visitor class is no longer needed, which limits the modifications to one class only. Of course, there is room for optimization, for instance by avoiding the repeated method lookup through the System.Reflection namespace, but you get the picture.
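As a sketch of that optimization (my own addition, not from the original code) : memoize the MethodInfo per pair of runtime types, so reflection only runs once per combination. A simplified stand-alone visitor is used here instead of the IOperand hierarchy above.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Caches the reflective lookup : the first call for a (left, right) type pair
// pays the GetMethod cost, subsequent calls hit the dictionary.
public class CachingVisitor
{
    private readonly Dictionary<KeyValuePair<Type, Type>, MethodInfo> _cache =
        new Dictionary<KeyValuePair<Type, Type>, MethodInfo>();

    public virtual object Visit(object left, object right)
    {
        KeyValuePair<Type, Type> key =
            new KeyValuePair<Type, Type>(left.GetType(), right.GetType());

        MethodInfo info;
        if (!_cache.TryGetValue(key, out info))
        {
            info = GetType().GetMethod("Visit", new Type[] { key.Key, key.Value });
            _cache[key] = info; // a null result is cached too
        }

        if (info != null && info.DeclaringType != typeof(CachingVisitor))
            return info.Invoke(this, new object[] { left, right });

        return null;
    }
}

// A tiny concrete visitor to exercise the dispatch.
public class ConcatVisitor : CachingVisitor
{
    public object Visit(string left, string right)
    {
        return left + right;
    }
}
```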

Some asked me what could be done in .NET that could not be done in C++, this is an example of it :)

C# 2.0, Closures and Anonymous Delegates

By Jerome at April 04, 2005 11:33

I was looking around the web for new features in C# 2.0, and I came across an article about the support for closures in C# 2.0. The article explains that this support takes the form of anonymous delegates.

There are some examples of closures like this one :

public List<Employee> Managers(List<Employee> emps)
{
  return emps.FindAll(
    delegate(Employee e)
    {
      return e.IsManager;
    }
  );
}

Which is interesting, but less than this one :

public List<Employee> HighPaid(List<Employee> emps)
{
  int threshold = 150;

  return emps.FindAll(
    delegate(Employee e)
    {
      return e.Salary > threshold;
    }
  );
}

The interesting part here is that the delegate is allowed to use a variable that is local to the method where it is defined. You might wonder how the C# compiler implements this.
It may become even less obvious with this example :

public Predicate<Employee> PaidMore(int amount)
{
  return delegate(Employee e)
  {
    return e.Salary > amount;
  };
}

OK, so where does the compiler store the value of "amount", since the delegate method is only returned, to be executed later... ?

In fact, the compiler generates a "DisplayClass" that contains "amount" as a field, initialized when the anonymous delegate is created, along with the implementation of the delegate itself.
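To make this concrete, here is a hand-written sketch of roughly what the compiler emits for the PaidMore example; the "DisplayClass" name is illustrative, as the real generated names are compiler-internal and unspeakable in C# :

```csharp
using System;

public class Employee
{
    public int Salary;
}

// Roughly what the compiler generates : the captured local becomes a field.
public class PaidMoreDisplayClass
{
    public int amount;

    public bool PredicateBody(Employee e)
    {
        return e.Salary > amount;
    }
}

public static class Demo
{
    public static Predicate<Employee> PaidMore(int amount)
    {
        // The closure instance is created and initialized where the
        // anonymous delegate appeared in the source.
        PaidMoreDisplayClass closure = new PaidMoreDisplayClass();
        closure.amount = amount;
        return new Predicate<Employee>(closure.PredicateBody);
    }
}
```

The returned delegate keeps the closure instance alive, which is how "amount" survives after PaidMore returns.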


Mono 1.0.5 support for NetBSD 2.0

By Jerome at December 15, 2004 11:37

As I promised earlier, here is the patch for Mono 1.0.5 to run on NetBSD 2.0-Release.

I've managed to get MonoDoc 1.0.5 to run on my box, and came close to running MonoDevelop too. It seems that somewhere a mutex is unlocked twice, and since libpthread asserts on that kind of invalid behavior, MonoDevelop stops. But even with assertions disabled, MonoDevelop stops, unable to read a perfectly valid file... Well :) There's some work to be done here.

Earlier, I talked about the fact that I forgot to save the stack address of suspended threads. Under Linux, where signals are used to "suspend" threads, the signal handler looks like this :

   void GC_suspend_handler(int sig)
   {
      int dummy;

      /* some not important stuff... */

      pthread_t my_thread = pthread_self();

      /* "me" is the GC's descriptor for the current thread */
      me -> stop_info.stack_ptr = (ptr_t)(&dummy);  /* Get the top of the stack address */

      sigsuspend(&suspend_handler_mask); /* Wait for the signal to resume */
   }

The thread being stopped is held inside its signal handler, waiting for the resume signal to exit the handler. The main reason for this "trick" is that there is no standard way to suspend a thread with libpthread.

However, NetBSD's libpthread has two nice functions, pthread_suspend_np and pthread_resume_np, which can arbitrarily suspend and resume a specific thread. So in the GC code, instead of raising signals at specific threads, it is only necessary to call these two functions.

The thing I missed in the first patch is the storage of the top-of-stack address, retrieved via the dummy variable in the signal handler. So, to work around this problem, I used a third non-standard function that retrieves the base address, not the top address, of the suspended thread's stack, and gives it to the GC.

Although the patch seems to please Mono, which runs fine, the stack address given to the GC is not the perfect one. Unfortunately, there is no way, as far as I know, to get the top-of-stack pointer of a suspended thread. I can't tell for now what the impact of this modification is, but Mono does not seem to complain, and neither does the GC.

I also tried to mix signals and thread suspension, but it seems that signals in general break the GC's operations.

Maybe some pthread/GC guru could enlighten me :)

Update : it also works nicely with Mono 1.1.3. Actually, it runs better, as a little bug in System.Timers.Timer has been fixed, but that has nothing to do with NetBSD ;)

ASP.NET Remote Debugging, Windows XP SP2 and .NET Framework 2.0

By Jerome at December 13, 2004 11:38

I did not have to create and debug any ASP.NET application for a long time, but since I'm creating an online Questions/Answers application, I had to use the really nice debugging features brought by Visual Studio .NET.

To be specific, I did not have to debug any remote web site since I installed the Windows XP SP2. My configuration is quite simple, I host my application on a development Windows 2003 Server and I design with VS.NET on my Windows XP machine.

So when I tried to debug my web application, all I could get was : "Unable to debug the application", or "The remote debugger could not connect to the local machine", or a really helpful "Cannot debug process".

The first reaction when seeing things like this is to check the local and remote VS Developers and Debugger Users security groups. But everything was fine there... In fact, the problem lies in the DCOM security configuration. Installing SP2 removed the right of the Anonymous account to use DCOM remotely... but not of the Everyone account. Odd.
The only thing to do is to go to : Run / dcomcnfg.exe / Component Services / Computers / My Computer / Properties / COM Security / Edit Limits, and to check "Remote Access / Allow". Easy.

This solved my first problem, the remote debugging. The second one is still ASP.NET debugging, but locally this time.

I have both VS.NET 2003 and 2005 installed on my local machine, along with both the 1.1 and 2.0 .NET Framework versions. Installing 2.0 over 1.1 changes the default framework used by the Windows XP IIS to version 2.0, which breaks the 1.1 debugger :)
The simple thing to do here is this : IIS / Web Sites / [WebSite] / Properties / ASP.NET, then select 1.1.4322 in the ASP.NET Version field, and that's it :)

Man, I really like being able to debug my web applications; I really missed it ! (And so did Karouman actually, who was debugging blindly by guessing exceptions :p)

The Disposable Pattern, Determinism in the .NET World

By Jerome at August 16, 2004 11:43
One of the numerous features found in the CLR is garbage collection. Depending on the programmer's background, this can be a little disturbing for those in the habit of managing memory by hand (especially plain C programmers). For C++ developers, for instance, even though memory management can be abstracted away by using the STL, C++ has a pretty strong model that provides a deterministic way of handling memory allocations : once an object has reached the end of its life, either because the control flow leaves its scope or because its container is destroyed, the object is destroyed immediately. The C++ compiler does this by calling the object's destructor and releasing the memory.

The CLR model is, however, weaker than that. The Garbage Collector (GC) is generally more efficient than the programmer, and it handles object destruction asynchronously. When an object is created, it has the ability to provide the GC with a special method, the Finalizer, which the GC calls when it needs to reclaim the memory used by the object. This can happen at any time, on an internal CLR thread, and only when necessary. This makes memory management efficient and fast, and it lets the program fit its environment by using the available memory efficiently. The effect of this asynchronous behavior is that there is no deterministic destruction of objects, which is one of the most frequent criticisms from programmers new to .NET. The biggest trap in this area for C++ developers is the syntax of the finalizer in managed C++ and C#.
For instance, in C# :

  class Program
  {
    static void Main(string[] args)
    {
      {
        Dummy dummy = new Dummy();
      }

      Console.WriteLine("End of main.");
    }
  }

  public class Dummy
  {
    ~Dummy()
    {
      Console.WriteLine("Dummy.~Dummy()");
    }
  }

Since the instantiation of the Dummy class happens inside its own pair of braces, a C++ programmer would think that the so-called destructor is called right at the end of that scope, before the last WriteLine. In reality, the GC calls the finalizer when the memory is reclaimed : at the end of the program's execution.

A concrete view of this problem is often found when using file IO :

  class Program
  {
    static void Main(string[] args)
    {
      StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt"));
      writer.Write("Some value");

      Console.ReadLine();
    }
  }

The problem with this program is that the writer is never closed. It will eventually be closed when the GC calls the finalizer of the writer instance, thus closing the file. This is a common problem found in C# programs : file handles left open prevent the files from being opened by other programs. To fix this problem, there are two methods :
  • Call the StreamWriter.Close method when the stream is not used anymore,
  • Use the using keyword to limit the scope of the object.
The using keyword is a shortcut in the C# language that calls a method of the System.IDisposable interface at the end of its scope. In practice, this :

  static void Main(string[] args)
  {
    using (StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt")))
    {
      writer.Write("Some value");
    }
  }

is expanded by the C# compiler to :

  static void Main(string[] args)
  {
    StreamWriter writer = new StreamWriter(File.OpenWrite("Test.txt"));

    try {
      writer.Write("Some value");
    }
    finally {
      writer.Dispose();
    }
  }

This is straightforward : the Dispose method is called at the end of the "using" scope. One thing though, this does not mean that the memory allocated for the StreamWriter instance is reclaimed right after the Dispose call. It only means that the instance releases the "unmanaged" objects it holds, a file handle in this case.

But one might say : "If the programmer forgets to call Dispose or Close, the file is never closed". Actually, no. This is where the Disposable pattern enters the scene. The good thing about it is that you can combine the Dispose method and the GC, the GC being the safekeeper of the object's unmanaged resources, even if the programmer forgets to call Close or Dispose. Here is an example of a type that implements the Disposable pattern :

  public class MyDisposable : IDisposable
  {
    public void Dispose()
    {
      Dispose(true);
    }

    ~MyDisposable()
    {
      Dispose(false);
    }

    private void Dispose(bool disposing)
    {
      // If we come from the Dispose method, suppress the
      // finalize method, so this instance is only disposed
      // once.
      if (disposing)
        GC.SuppressFinalize(this);

      // Release any unmanaged resource
      // ...
    }
  }

This class implicitly implements the IDisposable interface, by defining the Dispose method.
Here, both the Dispose method and the finalizer call an overload of the Dispose method; that overload is the code that actually releases unmanaged resources. You might note the use of the GC.SuppressFinalize method, which prevents the GC from calling the finalizer later, so the instance is only disposed once. This also has an alternate objective : relieving some pressure on the GC, as finalizing objects is rather expensive. This pattern can also be completed with an "already disposed" check, to avoid multiple Dispose calls. There are two possible behaviors there : either silently ignore any subsequent calls, or throw an ObjectDisposedException. Using one or the other is a matter of context.

While not every object needs to be finalizable (and disposable), each time you add a finalizer, you should also implement the System.IDisposable interface and the Disposable pattern.


About me

My name is Jerome Laban, I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.