[LINQ] Finding the next available file name

By jay at June 10, 2010 20:16

This article is also available in French.


Sometimes, the simplest examples are the best.

 

Let’s say you have a configuration file, but you want to make a copy of it before you modify it. Easy: you copy that file to “filename.bak”. But what happens if that file already exists? Well, either you replace it, or you create a new file with an auto-incremented name.

 

If you want to do the latter, you could do it using a for loop. But since you’re a happy functional programming guy, you want to do it using LINQ.

 

You can then do it like this:

    public static string CreateNewFileName(string filePath)
    {
        if (!File.Exists(filePath))
            return filePath;

        // Compute these once, not for each candidate file name.
        var directory = Path.GetDirectoryName(filePath);
        var name = Path.GetFileNameWithoutExtension(filePath);
        var extension = Path.GetExtension(filePath);

        // Now find the next available file
        var fileQuery = from index in Enumerable.Range(2, 10000)

                        // Build the candidate file name, keeping the original directory
                        let fileName = Path.Combine(directory, string.Format("{0} ({1}){2}", name, index, extension))

                        // Does it exist?
                        where !File.Exists(fileName)

                        // No? Select it.
                        select fileName;

        // Return the first one.
        return fileQuery.First();
    }

Note the use of the let operator, which introduces what is called a “range variable”. In this case, it avoids calling string.Format twice, once in the where clause and once in the select clause.
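For comparison, here is a sketch of the same query without let (simplified, ignoring the directory handling): the file name has to be formatted twice.

    var fileQuery = from index in Enumerable.Range(2, 10000)
                    where !File.Exists(string.Format("{0} ({1}){2}", name, index, extension))
                    select string.Format("{0} ({1}){2}", name, index, extension);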

 

The case of Infinity

There’s actually one problem with this implementation: the arbitrary “10000”. This might be fine if you don’t intend to make more than 10,000 backups of your configuration file. But if you do, to lift that limit, we could write this iterator method:

    public static IEnumerable<int> InfiniteRange(int start)
    {
         while(true)
         {
             yield return start++;
         }
    }

This basically returns a new value each time you ask for one. To use this method, you have to make sure that you have an exit condition (the file does not exist, in the previous example), or you may well be enumerating until the end of times... Actually up to int.MaxValue, for the nit-pickers, but .NET 4.0 adds System.Numerics.BigInteger to be sure to get to the end of times. You never know.

 

To use this iterator, just replace:

        var fileQuery = from index in Enumerable.Range(2, 10000)

with:

        var fileQuery = from index in InfiniteRange(2)

And you’re done.
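For instance, a quick sketch of how this could be used for the backup scenario from the introduction (the file names are hypothetical):

    // If "app.config.bak" already exists, this returns "app.config (2).bak",
    // or "app.config (3).bak" if that one exists too, and so on.
    var backupPath = CreateNewFileName("app.config.bak");
    File.Copy("app.config", backupPath);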

Thoughts on Migrating from WSS 3.0 to SharePoint Foundation 2010

By Admin at June 03, 2010 21:31

This article is also available in French.


I recently upgraded a WSS 3.0 farm to SharePoint Foundation 2010, and I thought I’d share some of the notes and pitfalls from the upgrade.

My setup is built around two Windows Server 2008 R2 64-bit VMs hosted on Hyper-V Server R2: one VM for the frontend, and one for the database (SQL Server 2008 SP1 64-bit) and the Search Server Express.

 

Hyper-V Assisted Upgrade

Having the setup built on Hyper-V saved me a great deal of time, primarily through the use of snapshots taken at the same time on both machines. This made it possible to experiment directly on the production system during a planned downtime for the users.

The snapshots allow a trial-and-error process that leads to a somewhat “perfect” environment where mistakes can be reversed pretty easily. Upgrading using this snapshot technique is however practical only with enough disk space and a reasonable content database size, depending on the physical hardware.

 

Pre-Requisites

Here are the steps I followed to perform the upgrade:

  • Made clones of both machines in a VM library, just to be safe in case Hyper-V messed up the VMs because of the snapshots (you never know)
  • Upgraded WSS 3.0 with the latest Cumulative Updates (KB978396)
  • Downloaded the SharePoint Foundation and Search Server Express packages
  • Installed the SharePoint Foundation prerequisites on both VMs (not the Search Server prerequisites, which do not install properly, and are seemingly the same as the SPF package)
  • Installed SQL Server 2008 SP1 Cumulative Updates KB970315 and KB976761 (in that order)

That’s the easy part, where updates do not impact the running farm.

I took a snapshot of both VMs at this point.

 

Upgrading SharePoint and Search Server

You may want to read the TechNet documentation on this subject, which is very extensive.

Now, the SharePoint upgrade:

  • Put a site lock in place (just in case a user might try to update content he’d probably lose):
    • stsadm -o setsitelock -url http://site -lock readonly
  • Detached the content databases using the following (see later for the explanation of this step):
    • stsadm.exe -o deletecontentdb -url http://site -databasename WSS_Content
  • Backed up the content DBs so they could be upgraded on a freshly installed SPF 2010 setup.
  • Executed the Search Server Express setup on both machines, without configuring it
  • Executed the SharePoint Foundation setup, without configuring it
  • On the frontend VM (so the admin site can track the update jobs), ran the SharePoint Configuration Wizard to perform the upgrade. I selected the Visual Upgrade so the site collection templates would use the new visual style (Ribbon powered!)
  • After the configuration ended on the frontend machine, ran the same wizard on the database VM.
  • Let the jobs run and finish properly.
  • On a temporary empty SPF 2010 setup on a spare VM, mounted the backed-up content DB and ran this PowerShell command:
    • Mount-SPContentDatabase -Name WSS_Content -DatabaseServer db.server.com -WebApplication http://site -UpdateUserExperience
    • You may need to install the templates used on your production environment to perform the upgrade properly.
  • After the content DB upgrade ended, detached the content database using the admin site (beware of the dependencies between content DBs if you have more than one)
  • Backed up the content DB.
  • On the production farm, dropped and recreated the appropriate Web Application without any site collection. I did this step to make sure the AppPools and sites were configured properly.
  • Restored and attached the content DB on the production server using the SPF admin site.

Now, I performed the “attach/detach” procedure because it can be done in parallel on multiple farms when the content databases are huge, and because on my production setup, the in-place upgrade did not work properly. The image libraries were not upgraded properly (images were not displayed), and the default pages did not render properly for some obscure reason.

 

A few additional gotchas

  • I had a few other issues with the search SSP, where I needed to completely remove the Search SSP and recreate it to avoid this error:
    • CoreResultsWebPart::OnInit: Exception initializing: System.NullReferenceException
  • The search SSP makes use of the Security Token Service Application, which by default uses the “Extended Security” setting; this needs to be turned off in IIS.
  • Since I use Search Server Express, content databases must not use the search provider named “Sharepoint Foundation Search Server” for the search to work properly.

 

Upgraded Wikis

You’ll find that the wiki editor has been greatly enhanced, and you’ll find it even more powerful when you select the “Convert to XHTML” option in the HTML menu of the ribbon. The original pages were using the very loose HTML 4.0, which does not seem to work very well.

Other than that, everything else works fine in this area.

 

Upgraded Discussion Boards

I had a few discussion boards that had issues with the threaded view of individual conversations. The conversations fell back to the default “Subject” view, which is not applicable to viewing single threads. To fix this, make a new “Subject” view and delete the previous one; the normal behavior will come back.

 

Happy SharePointing! Now I can go back to my immutability and F# stuff :)

Remote Control for PowerPoint 1.0 !

By Admin at April 10, 2010 17:36

After an intense (and long) certification process for my remote control application on the Windows Phone MarketPlace, the app has been accepted!

You can find the app on the MarketPlace, simply called "Remote Control" for PowerPoint. The application has been published in the US English market for now; if you're not in the US, just choose US English in the "World View" at the bottom.

The published app is only a sort of test run for me, and only has the capability to control PowerPoint. I'll be adding more controllable applications in the future, depending on the market reception of the app.

As I previously said on this blog, here are the new features and enhancements of this new version:

  • Improved performance
  • Support for High DPI and VGA devices
  • Windows 7 support
  • 64 Bits compatibility
  • Support for Wifi, which is a bit of marketing abuse, as it is actually TCP support
  • Support for touch screen devices, those that do not have enough hardware keys

The US English market is the only one available for now. Other markets are still waiting to be approved, and will be:

  • Canada, UK, India, Australia for English
  • France and Canada for the French localized version

The French certification process is somewhat obscure to me, and I'm expecting certification surprises...

I'll also add Portuguese and Spanish versions in the near future.

If you were waiting for this new version, visit http://jaylee.org/rc and please let me know what you think!

[VS2010] Configure Code Analysis for the Whole Solution

By jay at March 06, 2010 17:38

This article is also available in French.

In previous versions of Visual Studio, configuring Code Analysis was a bit cumbersome. If you had more than a handful of projects (say 10), it could take a long time to manage them and keep a single set of rules for the whole solution. You had to resort to updating all the project files by hand, or use a small tool that would edit each csproj file to set the same rules everywhere.

Not very pleasant, nor efficient. Particularly when you have hundreds of projects.

In Visual Studio 2010, the product team added two things:

  1. Rules are now in external files, not embedded in the project file. That makes the rules reusable across projects in the solution. Nice.
  2. There’s a new section in the Solution properties named “Code Analysis Settings” that allows you to set the rule files to use for single projects, and even better, for all projects! Very nice.

That option is also available from the “Analyze” menu, with “Configure Code Analysis for Solution”.
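For reference, a rule set is just a small XML file that can be shared between projects. A minimal sketch could look like this (the rule IDs and names here are only examples):

    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="Solution Rules" Description="Rules shared by the whole solution" ToolsVersion="10.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <Rule Id="CA1001" Action="Warning" />
        <Rule Id="CA2100" Action="Error" />
      </Rules>
    </RuleSet>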

One gotcha there though: to select all files, you can’t use Ctrl+A; you have to select the first item, then hold Shift while selecting the last item. Maybe the Product Team will fix that for the release...

Migrating Rules from VS2008

If you’re migrating your projects from VS2008 and were using Code Analysis there, you’ll notice that the converter generates a file named “Migrated rules for MyProject.ruleset” for every project in the solution. That’s fine if your projects don’t all have the same rules. But if they do, you’ll have to manage all of them...

Like all programmers, I’m lazy, and I wrote a little macro that will remove all generated ruleset files for the current solution, and use a single rule set.

This is not a very efficient macro, but since it won’t be used that often... You’ll probably live with the bad performance, and bad VB.NET code :)

Here it is:

Sub RemoveAllRuleset()

    For Each project As Project In DTE.Solution.Projects
        FindRuleSets(project)
    Next

End Sub

Sub FindRuleSets(ByVal project As Project)

    If project.ProjectItems Is Nothing Then
        Return
    End If

    Dim ruleSets As New List(Of ProjectItem)

    For Each item As ProjectItem In project.ProjectItems

        ' Recurse into solution folders and nested projects
        If Not item.SubProject Is Nothing Then
            FindRuleSets(item.SubProject)
        End If

        ' Collect the generated ruleset files of this project
        If item.Name.StartsWith("Migrated rules for ") Then
            ruleSets.Add(item)
        End If
    Next

    ' Remove the files outside of the enumeration, to avoid
    ' modifying the collection while it is being iterated
    For Each ruleset As ProjectItem In ruleSets
        ruleset.Remove()
    Next

End Sub

Reactive Framework: MemoizeAll

By jay at February 04, 2010 18:21

This article is also available in French.

For some time now, with the release of the Rx Framework and reactive/interactive programming, some new features have been highlighted in a very good article by Bart De Smet dealing with System.Interactive and “lazy caching”.

When using LINQ, one can find two sorts of operators: the “lazy” operators that take elements one by one and forward them when they are requested (Select, Where, SelectMany, First, …), and the operators that I would call “rendez-vous”, for which the entirety of the elements of the source enumerable needs to be enumerated (Count, ToArray/ToList, OrderBy, …) to produce a result.

 

“Lazy” Operators

Lazy operators are pretty useful as they offer good performance when it is not required to enumerate all the elements of an enumerable. This is also useful when it may take a very long time to enumerate each element, and we only want to get the first few.

For instance, this:

static IEnumerable<int> GetItems()
{
    for (int i = 0; i < 5; i++)
    {
        Console.WriteLine(i);
        yield return i + 1;
    }
}

static void Main()
{
   Console.WriteLine(GetItems().First());
   Console.WriteLine(GetItems().First());
}

Will output:


0
1
0
1

Only the first element of the enumerator will be enumerated from GetItems().

However, these operators expose a behavior that is important to know about: each time they are enumerated, they also enumerate their source again. That can either be an advantage (enumerating a changing source multiple times) or a problem (enumerating a resource-intensive source multiple times).

 

“Rendez-vous” Operators

These operators are also interesting because they force the enumeration of all the elements of the source, and in the particular case of ToArray, this allows the creation of an immutable copy of the content of the enumerable. They are useful in conjunction with lazy operators, to prevent them from enumerating their source again when they are enumerated multiple times.

If we take the previous sample and update it a bit:


static void Main()
{
   var items = GetItems().ToArray();

   Console.WriteLine(items.Count());
   Console.WriteLine(items.Count());
}

We get this result:


0
1
2
3
4
5
5

Because Count() needs to know all the elements of the source enumerator to determine the count.

These operators also enumerate their source with each use, but using ToArray/ToList prevents their result from enumerating the original source again.

The case of multiple enumerations

A concrete example of the problem posed by multiple enumerations is the creation of an enumerable partitioning operator. In this example, we can see that the enumerable passed as the source is used by two different "Where" operators, which implies that the source enumerable will be enumerated twice. Storing the whole content of the source enumerable by means of a ToArray/ToList is possible, but that would be a potential waste of resources, mainly because we can't know whether the output enumerable will be enumerated completely (and in the case of an infinite enumerable, ToArray is not even applicable).
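Here is a minimal sketch of what such a partitioning operator could look like (an assumption for illustration; Mark Needham's actual implementation may differ):

    public static class EnumerableExtensions
    {
        public static Tuple<IEnumerable<T>, IEnumerable<T>> Partition<T>(
            this IEnumerable<T> source, Func<T, bool> predicate)
        {
            // Each Where query enumerates "source" independently:
            // the source ends up being enumerated twice.
            return Tuple.Create(
                source.Where(predicate),
                source.Where(x => !predicate(x)));
        }
    }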

An intermediate operator between "Lazy" and "Rendez-vous" would be useful.

EnumerableEx.MemoizeAll

The EnumerableEx class brings us an extension, MemoizeAll (built on the memoization concept), that is just the middle ground we're looking for: it caches elements from the source enumerator as they are requested. A sort of "lazy" ToArray.
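To illustrate, reusing the GetItems method from above, here is a quick sketch of the effect (assuming the System.Interactive assembly is referenced to get EnumerableEx):

    var items = GetItems().MemoizeAll();

    Console.WriteLine(items.First()); // Enumerates the source up to the first element
    Console.WriteLine(items.First()); // Replays the cached element; GetItems is not enumerated again

The side effect in GetItems is then only printed once, instead of twice as in the first sample.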

If we take the example from Mark Needham, we would modify it like this:


var evensAndOdds = Enumerable.Range(1, 10)
                             .MemoizeAll()
                             .Partition(x => x % 2 == 0);

In this example, MemoizeAll does not have a real benefit on the performance side, since Enumerable.Range is not a very expensive operator. But in a case where the source of the "Partition" operator is a more expensive enumerable, like a Linq2Sql query, the lazy caching can be very effective.

One of the comments suggests that a GroupBy-based implementation could be written, but this operator also enumerates its source each time a group is enumerated. MemoizeAll is then again appropriate for better performance, but as always, this is a tradeoff between processing and memory.

By the way, Bart De Smet also discusses the elimination of side effects linked to the multiple enumeration of enumerables by using Memoize and MemoizeAll, which is not really an issue in the previous example, but is nonetheless a very interesting subject.

 

.NET 4.5 ?

On a side note, I find it regrettable that the EnumerableEx extensions did not make their way into .NET 4.0... They are very useful, and not very complex. They may have arrived too late in the development cycle of .NET 4.0... Maybe in .NET 4.5 :)

WinForms, DataBinding and Updates from multiple Threads

By jay at January 02, 2010 23:08

This article is also available in French.

When trying to use the MVC pattern with WinForms, it is possible to use the INotifyPropertyChanged interface to allow data binding between the controller and the form.

It is then possible to write a controller like this:


public class MyController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    public void ChangeStatus()
    {
        Status = DateTime.Now.ToString();
    }

    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;

            // Notify that the property has changed
            PropertyChanged(this, new PropertyChangedEventArgs("Status"));
        }
    }
}

The form is defined like this:


public partial class MyForm : Form
{
    private MyController _controller = new MyController();

    public MyForm()
    {
        InitializeComponent();

        // Make a link between labelStatus.Text and _controller.Status
        labelStatus.DataBindings.Add("Text", _controller, "Status");
    }

    private void buttonChangeStatus_Click(object sender, EventArgs e)
    {
        _controller.ChangeStatus();
    }
}


The form will update the “labelStatus” when the “Status” property of the controller changes.

All of this code is executed in the main thread, where the message pump of the main form is located.

 

A touch of asynchronism

Let’s imagine now that the controller is going to perform some operations asynchronously, using a timer for instance.

We update the controller by adding this:


private System.Threading.Timer _timer;

public MyController()
{
    // Fully qualified to avoid ambiguity with System.Windows.Forms.Timer
    _timer = new System.Threading.Timer(
        d => ChangeStatus(),
        null,
        TimeSpan.FromSeconds(1), // Start in one second
        TimeSpan.FromSeconds(1)  // Every second
    );
}


By altering the controller this way, the “Status” property is going to be updated regularly.

The operation model of System.Threading.Timer implies that the ChangeStatus method is called from a different thread than the one that created the main form. Thus, when the code is executed, the update of the label fails with the following exception:

   Cross-thread operation not valid: Control 'labelStatus' accessed from a thread other than the thread it was created on.

The solution is quite simple: the update of the UI must be performed on the main thread, using Control.Invoke().

That said, in our example, it’s the data binding engine that hooks onto the PropertyChanged event. We must therefore make sure that the PropertyChanged event is raised “decorated” by a call to Control.Invoke().

We could update the controller to invoke the event on the main thread:


set
{
    _status = value;

    // Notify that the property has changed
    Action action = () => PropertyChanged(this, new PropertyChangedEventArgs("Status"));
    _form.Invoke(action);
}


But that would require adding WinForms-dependent code in the controller, which is not acceptable. Since we want to use the controller in a unit test, calling the Control.Invoke() method would be problematic, as we would need a Form instance that we would not have in this context.

 

Delegation by Interface

The idea is to delegate to the view (here, the form) the responsibility of placing the call to the event on the main thread. We can do so by using an interface passed as a parameter of the controller’s constructor. It could be an interface like this one:


public interface ISynchronousCall
{
    void Invoke(Action a);
}


The form would implement it:


void ISynchronousCall.Invoke(Action action)
{
    // Call the provided action on the UI thread using Control.Invoke()
    Invoke(action);
}


We would then raise the event like this:


_synchronousInvoker.Invoke(
    () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
);

But like every efficient programmer (read: lazy), we want to avoid writing an interface.

 

Delegation by Lambda

We will use lambda functions to call the Control.Invoke() method. For this, we will update the constructor of the controller, and instead of taking an interface as a parameter, we will use this:


public MyController(Action<Action> synchronousInvoker)
{
    _synchronousInvoker = synchronousInvoker;
    ...
}

To clarify: we give the constructor an action whose responsibility is to invoke the action passed to it as a parameter.

This allows us to build the controller like this:


_controller = new MyController(a => Invoke(a));

Here, there is no need to implement an interface; we just pass a small lambda that invokes an action on the main thread. And it is used like this:


_synchronousInvoker(
    () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
);

This means that the lambda specified as a parameter will be called on the UI Thread, in the proper context to update the associated label.

The controller is still isolated from the view, yet adopts the behavior required by the view when updating “databound” properties.

If we wanted to use the controller in a unit test, it would be constructed this way:


_controller = new MyController(a => a());

The passed lambda would only need to call the action directly.
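For instance, a minimal sketch of such a test (the NUnit-style attribute and assertion are an assumption here):

    [Test]
    public void ChangeStatus_RaisesPropertyChanged()
    {
        // The invoker simply runs actions inline, no UI thread involved
        var controller = new MyController(a => a());

        string changedProperty = null;
        controller.PropertyChanged += (s, e) => changedProperty = e.PropertyName;

        controller.ChangeStatus();

        Assert.AreEqual("Status", changedProperty);
    }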

 

Bonus: Easier writing of the notification code

A drawback of using INotifyPropertyChanged is that the name of the property must be written as a string. This is a problem for many reasons, mainly when using refactoring or obfuscation tools.

C# 3.0 brings expression trees, a pretty interesting feature that can be used in this context. The idea is to use expression trees to build a hypothetical “memberof” that would get the MemberInfo of a property, much like typeof gets the System.Type of a type.

Here is a small helper method that raises events:


private void InvokePropertyChanged<T>(Expression<Func<T>> expr)
{
    var body = expr.Body as MemberExpression;

    if (body != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
    }
}

A method that can be used like this:


_synchronousInvoker(
    () => InvokePropertyChanged(() => Status)
);

The “Status” property is used as a property in the code, not as a string. It is then easier to rename it with a refactoring tool without breaking the code logic.

Note that the lambda () => Status is never called. It is only analyzed by the InvokePropertyChanged method as being able to provide the name of a property.

 

The Whole Controller


public class MyController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    private System.Threading.Timer _timer;
    private readonly Action<Action> _synchronousInvoker;

    public MyController(Action<Action> synchronousInvoker)
    {
        _synchronousInvoker = synchronousInvoker;

        _timer = new System.Threading.Timer(
            d => Status = DateTime.Now.ToString(),
            null,
            1000, // Start in one second
            1000  // Every second
        );
    }

    public void ChangeStatus()
    {
        Status = DateTime.Now.ToString();
    }

    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;

            // Notify that the property has changed
            _synchronousInvoker(
                () => InvokePropertyChanged(() => Status)
            );
        }
    }

    /// <summary>
    /// Raises the PropertyChanged event for the property “get” specified in the expression
    /// </summary>
    /// <typeparam name="T">The type of the property</typeparam>
    /// <param name="expr">The expression to get the property from</param>
    private void InvokePropertyChanged<T>(Expression<Func<T>> expr)
    {
        var body = expr.Body as MemberExpression;

        if (body != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
        }
    }
}

SharePoint WebDAV, IIS 7.5 and Windows Server 2008 R2

By jay at December 02, 2009 18:02

A neat feature of SharePoint 2007 (or WSS 3.0) is the ability to browse the content of a site as if it were a network drive. Under the hood, this is done using WebDAV, a standard protocol that Microsoft used to implement this feature.

If you happen to have to install WSS 3.0 on a Windows Server 2008 R2 box, you’ll quickly find out that this feature does not work properly, with interesting messages like “Access Denied” or “The network path could not be found” when trying to map a folder.

Using IIS 6.0, you’d simply need to make sure that the WebDAV Web Service Extension is “Prohibited”.

With IIS 7.5, there are multiple places dealing with WebDAV, but only one to look at:

  • Open the “Modules” configuration section for the SharePoint web site
  • Find the “WebDAVModule” entry
  • Remove it, and you’re done!

The interesting bit about this is that even though the WebDAV component is disabled in every possible section of the site, the module still seems to intercept the WebDAV PROPFIND verb and returns a 405 (Not Allowed) error.

Since the verb is supposed to be handled by an ASP.NET httpHandler, the handler never gets the chance to deal with it... and you can’t see your files in Windows Explorer.

Some news about Remote Control for Windows Mobile

By jay at November 08, 2009 20:38

The long-standing latest release, 0.9.0, has been out for a while now, more than a year and a half. It’s been downloaded a lot, over 150,000 times.

I’m not sure about the actual usage, but judging by the steady flow of comments and suggestions I’m receiving, I’m guessing quite a few people are using it.

Since the middle of October, when Microsoft pushed out its Marketplace for Windows Mobile, I have decided to give this method of publication a try. The software will not be free this time, but it will be at a price that will not break the bank. I’ll leave the current free application available, but I will not update it anymore. I’m sure this new paid version will disappoint some of the current users, but that’s a “price” to pay and I’m willing to take that risk.

I’m hoping that the Marketplace will broaden the audience. The app will be available at first in French, English, Spanish and Portuguese.

About the new features and enhancements of this new version:

  • Wifi support for all the users that do not have Bluetooth hardware
  • Improved performance
  • Support for High DPI and VGA devices
  • Windows 7 support, obviously
  • 64 Bits compatibility

I’m also planning on improving the “touch” support, since the app was originally designed back in 2003 for devices that had many hardware keys. Most recent devices hardly have any hardware keys, and for good reason.

Anyway, thanks to all the regular users of Remote Control for Windows Mobile!

[VS2010] “Object reference not set to an instance of an object” when opening a file

By admin at October 23, 2009 20:19

If you’re trying out Visual Studio 2010 and you try to open a source code file from the “Solution Explorer”, you might encounter a nice exception like this one:

---------------------------
Microsoft Visual Studio
---------------------------
Object reference not set to an instance of an object.
---------------------------
OK
---------------------------

That could not be more vague...

After a small debugger analysis, it seems that VS2010 only supports TrueType fonts, but does not prevent the selection of non-TrueType fonts in the font selection dialog.

I was using “Proggy Opti Small” with VS2008, which is not TrueType... So to be able to open your source files, use a font like Consolas, and restart VS2010.

A bug report exists on Connect.

Hyper-V, CPU Load and System Clock Drift

By jay at October 14, 2009 20:39

This article is also available in French.

Using Hyper-V Server, you may find that the time drifts a lot from the actual time, especially when guest virtual machines are using the CPUs heavily. The host OS is also virtualized, which means that the load of the host also makes the clock drift.

How to prevent the clock from drifting

  1. Disable the Time Synchronization in the Integration Services. (Warning, this setting is defined per snapshot)
  2. Import the following registry file:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\W32Time\Config]
    "MaxAllowedPhaseOffset"=dword:00000001
    "SpecialPollInterval"=dword:00000005
    "SpecialInterval"=dword:00000001


    Note: If you are using Notepad, make sure to save the file using a Unicode encoding.

  3. If the guest OS (and the host OS) is not in a domain, type the following to set the time source:

    w32tm /config /manualpeerlist:"time.windows.com,0x01 1.ca.pool.ntp.org,0x01 2.ca.pool.ntp.org,0x01" /syncfromflags:MANUAL /update

    Note: host FQDNs are separated by spaces.
  4. Run the following command to force a time synchronization

    w32tm /resync
  5. Check that the clock is not drifting anymore by using this command:

    w32tm /monitor /computer:time.windows.com

 

A bit of background ...

The system I’m currently working on is heavily time-based. It relies a lot on timestamps taken at various steps of the process. If for some reason the system clock is unstable, the data generated by the system is unreliable. It sometimes generates corrupt data, and this is not good for the business.

I was investigating a sequence of events stored in the database in an order that could not have happened, because the code cannot generate it that way.

After loads of investigation looking for code issues, I stumbled upon something rather odd in my application logs, considering that each line from the same thread should be time-stamped later than the previous one:

2009-10-13T17:15:26.541T [INFO][][7] ...
2009-10-13T17:15:26.556T [INFO][][7] ...
2009-10-13T17:15:24.203T [INFO][][7] ...
2009-10-13T17:15:24.219T [INFO][][7] ...
2009-10-13T17:15:24.234T [INFO][][7] ...

All the lines above were generated from the same thread, which means that the system time changed radically between the second and the third line. From the application’s point of view, the time went backward by about two seconds, which also means that during those two seconds, data was generated in the future. This is not very good...
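As a side note, a small sketch like the following can be run inside a guest to spot this kind of backward step (the polling interval and output format are arbitrary):

    // Polls DateTime.Now and reports any backward step of the system clock
    var last = DateTime.Now;

    while (true)
    {
        var now = DateTime.Now;

        if (now < last)
        {
            Console.WriteLine("Clock went backward by {0:0} ms", (last - now).TotalMilliseconds);
        }

        last = now;
        System.Threading.Thread.Sleep(100);
    }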

 

The Investigation

Looking at the log4net source code, I confirmed that the time is grabbed using a System.DateTime.Now call, which rules out any code issue.

Then I looked at the Windows Time Service utility, and ran the following command:

w32tm /stripchart /computer:time.windows.com

I found out that the time difference from the NTP source was very large, something like 10 seconds. But the most disturbing part was not the time difference itself, but the evolution of that difference.

Depending on the load of the virtual machine, the difference would grow very large, up to a second behind in less than a minute. Both the host and the guest machines were exhibiting this behavior. Since Hyper-V Integration Services by default synchronize the clocks of all the virtual machines with the host OS, that means the load of a single virtual machine can influence the clock of all the other virtual machines. The host machine’s CPU load can also influence the overall clock rate, because the host is virtualized as well.

 

Trying to explain this behavior

To make an educated guess, the time source used by Windows seems to be the TSC of the processor (through the RDTSC opcode), which is virtualized. The preemption of the CPU by other virtual machines seems to have a negative effect on the counter used as a reference by Windows.

The more the CPU is preempted, the more the counter drifts.

 

Correcting the drift

By default, the Time Service has a “phase adjustment” process that slows down or speeds up the system clock rate to match a reliable time source. The TSC counter on the physical CPU is clocked by the system quartz (if it still works like this). The “normal” drift of that kind of component is generally not very important, and may be related to external factors like the temperature of the room. The Time Service can deal with that kind of slow drift.

But the default configuration does not seem to be a good fit for a time source that drifts this quickly and rather unpredictably. We need to shorten the phase adjustment process.

Fixing this drift is rather simple: the Time Service needs to correct the clock rate more frequently, to cope with the load of the virtual machines that slows down the clock of the host.

Unfortunately, the default parameters on Hyper-V Server R2 are those of a default domain member, which are defined here. The default polling period from a reliable time source, 3600 seconds, is way too long considering the drift faced by the host clock.

A few parameters need to be adjusted in the registry for the clock to stay synchronized:

  • Set the SpecialInterval value to 0x1 to force the use of SpecialPollInterval.
  • Set SpecialPollInterval to 5, to force the NTP source to be polled every 5 seconds (as in the registry file above).
  • Set MaxAllowedPhaseOffset to 1, to force the maximum drift to 1 second before the clock is set directly, if adjusting the clock rate fails.

Using these parameters does not mean that the clock will stay perfectly stable, but at the very least it will correct itself very quickly.

It seems that there is a hidden boot.ini parameter for Windows 2003, /USEPMTIMER, which forces Windows to use the ACPI timer and avoids that kind of drift. I have not been able to confirm that it has any effect, and I cannot confirm whether the OS is actually using the PM Timer or the TSC.

About me

My name is Jerome Laban. I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events and the things I’m working on, such as the Remote Control for Windows Phone.