Microsoft’s Plans for the Future of .NET

March 25, 2017

by Jeff Martin

Microsoft’s Mads Torgersen has shared an updated strategy for the .NET family of languages, providing insight into the company’s thinking about future functionality.  Although the development of C#, VB .NET, and F# happens in public on GitHub, Microsoft’s long-term plans have frequently been kept private.  Torgersen’s announcement is useful in that Microsoft’s current thinking is now available for public review and commentary.

Torgersen notes that according to StackOverflow, only Python and C# appear on both the top-ten most-used and most-loved programming language lists.  C# is used in a wide variety of application types:  business, gaming, and web, among several others.  Recognizing this, Microsoft wants C#’s design to “innovate aggressively, while being very careful to stay within the spirit of the language”.  Another aspect of this is to support all of C#’s platforms, so that no one platform is emphasized at the expense of the others.

When it comes to Visual Basic, its user base is not as large as C#’s, but it does include a larger percentage of new developers.  Since Visual Basic has a smaller, less experienced developer base in Microsoft’s eyes, future design plans will see VB decoupled from C#’s design.  VB will add new language features where they make sense for that language, rather than merely adding them because C# is getting something similar.  That said, Torgersen says Microsoft will continue to maintain VB as a first-class citizen on .NET that remains welcoming to new developers.

Of the three languages mentioned, F# has the smallest user base, but it is one that is very passionate about the language.  Torgersen says that Microsoft intends to “make F# the best-tooled functional language on the market” while ensuring it interoperates well with C# where appropriate.

Reader commentary on this announcement is mixed.  F# and C# developers are mostly happy, as their languages will continue to hold a place of prominence.  VB developers are more concerned that their language will be left behind or stagnate.  However, Torgersen insists that VB will continue to be a point of investment for Microsoft.

WCF message headers with OperationContext and with MessageInspector and Custom Service Behavior

April 14, 2014


In this post we will look at some possible options for adding message headers to the messages sent from the client to the service.

Our scenario:

We have a WCF service provided as Software as a Service (SaaS). Customers who have an active subscription with our company are able to invoke methods on our service and retrieve information. To successfully invoke methods and retrieve information, the client invoking the method must add a message header named “SubscriptionID” which contains the subscription id of that customer. If the subscription id does not match a valid and active subscription, access to the operations is denied.

1. Setup and configuration of the Service

Our solution setup looks as follows:
Solution overview

  • The “WCF.MessageHeaders.Client” project is a Console Application used to mimic a client
  • The “WCF.MessageHeaders.Service” project is a WCF Service Application, holding our SaasService


Our service is called SaasService and represents our SaaS service, which provides certain operations that return information to external clients.
Our service interface looks as follows:

WCF Service interface

And our service implementation as follows:

WCF Service implementation

We have one public demo method on our SaasService, namely InvokeMethod(), which returns a string (or any kind of information). When the method is invoked, we check whether we can find a header called “SubscriptionID” with an empty namespace (for demo purposes we used an empty namespace; less hassle). If the message header is not found in the incoming message headers, access is denied. If the SubscriptionID message header is found, it is validated against the subscription store, and if it is a valid subscription id, the requested information is returned to the client.

Message headers at the service side can be found at OperationContext.Current.IncomingMessageHeaders. The method FindHeader allows you to check whether a header exists, and the method GetHeader allows you to retrieve the header information.
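A sketch of that service-side check, assuming the header name “SubscriptionID” with an empty namespace as described above (the class shape and the IsValidSubscription helper are illustrative, not the article’s exact code):

```csharp
using System.ServiceModel;

public class SaasService
{
    public string InvokeMethod()
    {
        var headers = OperationContext.Current.IncomingMessageHeaders;

        // FindHeader returns the header's index, or -1 when it is absent
        int index = headers.FindHeader("SubscriptionID", "");
        if (index < 0)
            throw new FaultException("Access denied: no SubscriptionID header.");

        string subscriptionId = headers.GetHeader<string>("SubscriptionID", "");
        if (!IsValidSubscription(subscriptionId))
            throw new FaultException("Access denied: invalid SubscriptionID.");

        return "Information for subscription " + subscriptionId;
    }

    // Hypothetical stand-in for the real subscription-store lookup
    private bool IsValidSubscription(string id)
    {
        return id == "123-456789-098";
    }
}
```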

Our web.config looks as follows:

WCF Service configuration

Nothing special: we just use a basicHttpBinding and leave the address unspecified, as we will host this service in our local IIS. I’m no fan of hosting services in managed applications unless it’s really necessary.
To host the service easily in IIS, go to the service project properties, open the “Web” tab, and instead of using the Visual Studio Development Server select the local IIS Web Server and press the “Create Virtual Directory” button.

WCF Service hosted at IIS

Your service should be available at:

2. Setting up and configuring the client

We simply add a service reference to the service url mentioned above.

This is the code we put at our client console application:

WCF Service client proxy

If we run our client console application:

WCF Client console application

This behaves as intended: since we didn’t add a SubscriptionID header to the outgoing messages at our client, no SubscriptionID message header is found in the message at the service, so access is denied. If we are a valid customer, we have a subscription id which we can pass on to the service, which will grant us access.

The code should look as follows:

WCF add message header to message

Basically we start by creating the WCF client proxy, which will invoke the operation. Then we create a new OperationContextScope based on the proxy’s InnerChannel. The new OperationContextScope at the client has to be created before you can access OperationContext.Current.OutgoingMessageHeaders, which we need in order to add a message header to the outgoing messages. If you do not use the OperationContextScope, OperationContext.Current will be null and an error will occur when you try to access OperationContext.Current.OutgoingMessageHeaders.

Next we create a MessageHeader of type string with the subscription id value to pass in the message header. This is the subscription id which should grant us access to the SaasService. After that we can invoke MessageHeader&lt;T&gt;.GetUntypedHeader(string name, string ns), which returns an untyped message header with the given message header name and namespace. In our case the name is “SubscriptionID”, which is the name of the message header the service looks for. Our namespace in this demo is just empty.

After having created the message header, we can simply add it to the OutgoingMessageHeaders and invoke the method on the proxy.
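The client-side steps above can be sketched as follows (SaasServiceClient is the proxy generated from the service reference; the subscription id value is the demo one used later in the article):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

class Program
{
    static void Main()
    {
        using (var proxy = new SaasServiceClient())
        // The scope makes OperationContext.Current available on the client
        using (new OperationContextScope(proxy.InnerChannel))
        {
            // Typed header, then converted to an untyped header with the
            // name "SubscriptionID" and an empty namespace
            var header = new MessageHeader<string>("123-456789-098");
            OperationContext.Current.OutgoingMessageHeaders
                .Add(header.GetUntypedHeader("SubscriptionID", ""));

            Console.WriteLine(proxy.InvokeMethod());
        }
    }
}
```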

The result, which we expect:

WCF Message headers added to outgoing or incoming messages
All in all, adding message headers to incoming or outgoing messages and validating them is not that difficult. However, when we have a service with a few tens or hundreds of operations, we do not want to validate the message header in every operation. The solution for that is a message inspector and a custom service behavior.

3. Use a MessageInspector and add a custom behavior to our service to validate all incoming messages

Suppose our SaasService exposes a few hundred operations.
For our first method we used quite some code to find the header, get the header, and validate whether the header content is a valid subscription id.

However, if I have to write this exact code 99 more times for the other 99 methods I need to implement, that would be silly. We could put the message header validation code in one method and have every operation call that validation method, which does the validation and, for example, returns a boolean. That would save us from duplicating our validation code, but it still requires 100 methods that all call the same validation method, with each operation executing only based on the result returned by that validation method.

Another possibility is to add a message inspector, which inspects the incoming message before it is handed off to the service operation. In the message inspector we inspect the message headers to check for the subscription id and validate it if it is provided. If it is not provided, or an incorrect subscription id is provided, we return an error stating that the service consumer does not have access to the service. This saves us from writing any validation code at all in our service operations. We can simply write operations and assume that if an operation is invoked, the client invoking it has valid access. Quite a nice solution if you ask me.

How do we implement this:

1. Create an IDispatchMessageInspector at the service side

We start by creating a class “SaasServiceMessageInspector”, which is the class that will inspect each message sent to our SaasService, checking whether the subscription id message header is present and validating it.
To create a message inspector at the service side, we need to implement IDispatchMessageInspector, an interface provided for creating message inspectors at the service side. The method we need to implement, in our case, is AfterReceiveRequest, which gives us access to the message after it has been received.

The implementation:

WCF IDispatchMessageInspector check message headers

The message is passed into the AfterReceiveRequest method, so we can just use request.Headers to find out whether our SubscriptionID header is present and whether a valid subscription id is provided. If no subscription id, or no valid subscription id, is provided, we return a FaultException to the client notifying them that access to the service is denied.
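A sketch of that inspector, using the hardcoded demo subscription id described below (the exact fault messages are illustrative):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class SaasServiceMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        int index = request.Headers.FindHeader("SubscriptionID", "");
        if (index < 0)
            throw new FaultException("Access denied: no SubscriptionID header.");

        string subscriptionId = request.Headers.GetHeader<string>(index);
        if (subscriptionId != "123-456789-098")
            throw new FaultException("Access denied: invalid SubscriptionID.");

        return null; // correlation state, not needed here
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Nothing to do on the way out
    }
}
```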

Instead of validating the subscription ids against a database or any other store, we just check whether the subscription id matches “123-456789-098”. If it does not match, the client is denied access to the service before any method gets invoked. An ideal scenario to easily test our code in a demo.

Now that we have implemented the IDispatchMessageInspector, we need to write some code so that this inspector is linked to our SaasService and the message inspection happens at the service level. Any operation of our service that might get invoked needs to have its message inspected first by the message inspector.

2. Create an IServiceBehavior and attach the custom behavior by attribute

To attach our new SaasServiceMessageInspector to our SaasService, we need to create a new behavior (like the ones we define in our web.config, for example).
There are multiple behaviors we can extend:

  • IServiceBehavior: Applies to the entire service
  • IEndpointBehavior: Applies to a specific endpoint
  • IContractBehavior: Applies to a specific contract
  • IOperationBehavior: Applies to a specific operation

Since we want to attach our MessageInspector to every operation of our service, we will extend IServiceBehavior. The method to implement is ApplyDispatchBehavior, which allows us to add components to the WCF runtime:

Attach wcf message inspector by IServiceBehavior

For each ChannelDispatcher we get each EndpointDispatcher and add a SaasServiceMessageInspector to the DispatchRuntime’s message inspectors.
Notice that we also inherit from Attribute, which allows our SaasServiceMessageInspectorBehavior to be used as an attribute.
Also make sure you have the System.ServiceModel.Dispatcher namespace imported for the code above.
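Put together, the behavior can be sketched roughly as shown below (the inspector type is the one described in the previous step; the remaining IServiceBehavior members are left empty, as they are not needed here):

```csharp
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Inheriting from Attribute lets this be applied as
// [SaasServiceMessageInspectorBehavior] on the service class
public class SaasServiceMessageInspectorBehavior : Attribute, IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Add the inspector to every endpoint's dispatch runtime
        foreach (ChannelDispatcher channelDispatcher
                 in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher
                     in channelDispatcher.Endpoints)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors
                    .Add(new SaasServiceMessageInspector());
            }
        }
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase) { }
}
```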

We can attach our new behavior SaasServiceMessageInspectorBehavior to our service as follows:

Extend WCF servicebehavior

We only attach the custom behavior as an attribute on our service; the custom behavior is automatically attached to our SaasService, and we have a message inspector attached to each endpoint’s DispatchRuntime.
Also notice that our InvokeMethod no longer contains any validation code. Only the method’s functionality remains.

Our service solution looks as follows:

WCF Service with custom behavior and message inspector

Our client console application remains the same. The subscriptionID used at our client console currently is:

This is the valid subscriptionID we check for in our message-inspector, so we should be able to invoke the method:
Executing the client console application:

WCF Message Inspector Custom behavior

Works as intended. If we change the subscriptionid at our client to an incorrect subscription and try to invoke a service method:

WCF Messageheader

WCF Message header message inspector

Again working as intended. So we created a custom message inspector and a custom service behavior, which perform the message header validation for the entire service in one single, isolated place.

4. What about using a message inspector and custom behavior at the client to add message headers to the outgoing messages

Now we did this for the service, which is a great solution. But what about the client? If the client has to invoke a few hundred or thousand methods on the SaasService, it has to create an OperationContextScope, create a message header, and attach it to the message headers before invoking each operation. (Yes, you could create only one proxy, attach the message header there, and always return that proxy for invoking operations.) Well, it is possible to create the same behavior on the client side: create a message inspector that adds a message header to the request, instead of checking for it, and create a custom behavior to attach to your client proxy.

I will not walk through all the code in detail again, as it closely resembles the previous section.

We start by creating our message inspector at the client side:

IClientMessageInspector WCF message inspector

Notice that we now implement IClientMessageInspector, and not IDispatchMessageInspector! The IDispatch interfaces are for the service side, while the IClient interfaces are for the client side.
We implement the BeforeSendRequest method, which is invoked before the message is handed over to the channel that will route it to the service. In this method we create the message header and attach it to our request.
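A sketch of the client-side inspector, mirroring the manual header code from section 2 (the hardcoded subscription id is the demo value):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class MessageHeaderInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Add the SubscriptionID header to every outgoing request
        var header = new MessageHeader<string>("123-456789-098");
        request.Headers.Add(header.GetUntypedHeader("SubscriptionID", ""));
        return null; // correlation state, not needed here
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        // Nothing to inspect on the reply
    }
}
```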

Next we create a custom behavior, just as in the previous section for the service side. This time we implement IEndpointBehavior, not IServiceBehavior:

IEndpointBehavior custom wcf behavior

The IServiceBehavior is for the service side, so it is not applicable to the client. The IEndpointBehavior is ideal, as its scope is a specific endpoint, which in our case is the endpoint to the SaasService. We use ApplyClientBehavior. Notice there is also an ApplyDispatchBehavior, in case you use this behavior on the service side. So the IEndpointBehavior can be used for the client as well as the service.
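The endpoint behavior can be sketched roughly like this (the inspector is the client-side one from the previous step; the other members are intentionally empty):

```csharp
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class SaasEndpointBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint,
        ClientRuntime clientRuntime)
    {
        // Register the client-side inspector for this endpoint
        clientRuntime.MessageInspectors.Add(new MessageHeaderInspector());
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher) { }

    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceEndpoint endpoint) { }
}
```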

In the previous section we attached the behavior to the service by attribute. However, for an endpoint behavior this is not possible; it can only be attached via the configuration file. To attach custom behaviors by configuration file, we need to create an extension deriving from BehaviorExtensionElement.

BehaviorExtensionElement WCF custom behavior
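A minimal sketch of that extension element, assuming the SaasEndpointBehavior described above:

```csharp
using System;
using System.ServiceModel.Configuration;

public class MessageHeaderInspectorExtension : BehaviorExtensionElement
{
    // The behavior type this element creates
    public override Type BehaviorType
    {
        get { return typeof(SaasEndpointBehavior); }
    }

    // Called by WCF when the config system instantiates the behavior
    protected override object CreateBehavior()
    {
        return new SaasEndpointBehavior();
    }
}
```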

You will also need to add a reference to System.Configuration. Now that we have created the behavior extension, we can add it in our app.config:

Behavior extension WCF

To add our behavior extension to our app.config, we need to add an &lt;extensions&gt; node under &lt;system.serviceModel&gt; and add a &lt;behaviorExtensions&gt; node in which we register our custom behavior extension.
We now create an endpoint behavior that uses our MessageHeaderInspector extension:

WCF EndpointBehavior extension

Notice that in the behavior we use the “MessageHeaderInspector”, which is defined in our behaviorExtensions with the name “MessageHeaderInspector”.
We now attach this endpoint behavior to our client endpoint:

WCF Client endpoint behavior
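The three configuration pieces together look roughly like the sketch below. The extension’s `type` attribute must be the assembly-qualified name of the BehaviorExtensionElement subclass, and the namespace, assembly, address, and contract names here are illustrative assumptions, not the article’s exact values:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="MessageHeaderInspector"
           type="WCF.MessageHeaders.Client.MessageHeaderInspectorExtension, WCF.MessageHeaders.Client" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="SaasEndpointBehavior">
        <MessageHeaderInspector />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <client>
    <endpoint address="http://localhost/WCF.MessageHeaders.Service/SaasService.svc"
              binding="basicHttpBinding"
              contract="SaasServiceReference.ISaasService"
              behaviorConfiguration="SaasEndpointBehavior" />
  </client>
</system.serviceModel>
```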

This way our endpoint is linked to the endpoint behavior “SaasEndpointBehavior”, which is our endpoint behavior using the MessageHeaderInspector, which is our custom code to add a subscriptionid message header to each outgoing message for the endpoint we defined.
Our client console code looks as following:

WCF client proxy with message header

So we no longer add the message header in our code. Our message inspector and custom behavior will take care of this for every operation we invoke with our proxy on the defined endpoint.
Our solution now looks like this:

WCF Message inspector and custom behavior client and service

When executing our client console application:

Client with message inspector and custom behavior
Any suggestions, remarks or improvements are always welcome.
If you found this information useful, make sure to support me by leaving a comment.

Real-Time Web Enablement with SignalR in .Net

June 24, 2013

Here is a good article on SignalR in .NET from Sonal Arora:


We know that Twitter is the classic example of the real-time web, and we can find the latest ideas, stories, trends, and what not on Twitter as it happens. But then again, what is the real-time web, and what is so gripping about it? The real-time web is fundamentally different from real-time computing, since there is no knowing when, or if, a response will be received [1]. And HTTP being a stateless protocol, the server does not have any connection with the client once a response is sent back. So how can a server update a connected client with some information when the client has not asked for it, and can I achieve this for my web application to make it trendy? The answer is yes… read on to learn more about the enabler, SignalR.

So What is SignalR

Real-time web functionality is the ability to have server-side code push content to connected clients instantly, as and when required [2], and ASP.NET SignalR is a library that enables applications to include real-time web functionality as a feature in no time. It offers a simple-to-use, high-level API for making server-to-client RPC calls in ASP.NET applications, through which we can call JavaScript functions in clients’ browsers from our server-side .NET code. Extending it for connection management, e.g. connect/disconnect actions, grouping connections, and authorization, is also doable with SignalR, although authorization is not offered out of the box.

It is an Open Source library and currently licensed under the Apache License, Version 2.0. One may obtain a copy of the License at

How SignalR Works

SignalR is something that sits a layer above all of the different techniques of transport whether it’s Web Sockets or AJAX long polling or server-sent events. I, as a developer, code against the SignalR API, and SignalR takes care of ensuring that the appropriate persistent connection passage is set up and maintained between the browser and the server. So it is like layered abstraction over a connection.

When an application has a SignalR connection and it wants to send some data to the server, the data is not sent in raw form; SignalR wraps it, along with other information, into a JSON payload before sending it to the server. Similarly on the server side, when an app broadcasts data to all connected clients, it does not just broadcast the raw data but also a bunch of framing information that contains connection information as well.

A typical request header sent through SignalR client could be something like the following.

POST /signalr/send?transport=foreverFrame&connectionId=ba0c3fe4-1a34-48b4-9a3a-8c7cdc02b5d9 HTTP/1.1

It has transport type (Web socket/Forever frame etc.), Connection Id and HTTP protocol information.

Most of SignalR’s pieces are replaceable with customized implementations. You can check more at

SignalR consists of a client-side library and a server-side library that work together; let’s check these one by one.


There are two programming prototypes of servers possible with SignalR.

Two programming prototypes of servers

  • Persistent Connections
    Persistent connection, the simpler of the two possibilities, provides mechanisms to alert the connection and disconnection of users, and to manage asynchronous messages to linked users, both individually and collectively. In this kind of implementation, your endpoint or server derives from PersistentConnection class.
  • Hubs
    Hubs provide a development interface which is much easier to use. The integration between client and server is almost seamless. Hubs provide a higher-level RPC framework over a PersistentConnection. They are advisable when the application has different types of messages that need to be sent between server and clients. We can create the same applications using persistent connections, but with hubs it will be simpler.

And Self Host

Any .NET application (console, forms, Windows services…) can act as the host of SignalR services. In case you want to run your host outside IIS, you can code for having Self host.


The SignalR JavaScript client comes in the form of a jQuery plugin. However, SignalR does not just have a JavaScript client library. It is also possible to find client components for Windows Phone, Silverlight, WinRT, or generic .NET clients, which extends the range of SignalR’s applicability as a framework for real-time applications in any kind of scenario.

Different client components currently available are:

  • JavaScript
  • .NET 4.0
  • .NET 4.5
  • Silverlight 5
  • WinRT
  • Windows Phone 8

SignalR features


SignalR is built upon the idea of transports; every transport mode decides how data is sent/received and how it connects and disconnects. SignalR supports multiple transport modes with assigned priority. If the client’s browser supports the WebSocket transport then communication happens in this mode; otherwise it falls back to Server-Sent Events or Forever Frame, and if the browser does not support these two then it makes use of long polling for communication.

Multiple transport modes

Though SignalR tries to choose the “best” connection supported by the server and client, nevertheless you can also force it to use a specific transport. At the time of starting the connection, you specify transport mode as:

  // try only longPolling
  connection.start({ transport: 'longPolling' });
  // try longPolling then webSockets
  connection.start({ transport: ['longPolling', 'webSockets'] });

The Magical Connection ID

When an app is using SignalR, every connected client is assigned a connection id, which is regenerated at every page refresh. The server identifies the calling client and other connected clients with the help of this connection id only. Any connected client can invoke a server method from script with the help of a proxy handler. And at the server end, along with the relevant processing, a client-side method can be invoked. That makes it a bi-directional RPC, where one client invokes a server method defined on the hub, and from this server method you can invoke methods on all/selected/calling clients.

Where can I use SignalR?

SignalR works delightfully for simple notifications where we need to update all connected users with some news feed or broadcast. However, it can also handle complex scenarios such as chat, co-creating, gaming, etc. Here is a small snapshot of the usability scenarios.





Notify All or selected clients.

Notification could be of numerous types, like incoming message, some alert, progress bar, reminders, comment or feedback on your blog or post, etc.

It is easy to implement chat with the help of SignalR. Chat can be one-to-one or group chat. You can try out a simple chat application as described here: /getting-started/tutorial-getting-started-with-signalr. Two or more connected users can enter into co-create mode similar to the one described here: /417502/Online-Whiteboard-using-HTML5-and-SignalR. It also enables applications like gaming, which require high-frequency pushes from the server. You can check out the example given on the SignalR site:


SignalR and its usability look very promising; it lets developers embed the real-time web in their applications without getting their hands dirty. It’s on GitHub and there is a dedicated team working tirelessly to make it even better. New users get instant support on the forums as well. To get hands-on with this very promising library, read Part 2 of this article to learn how to actually do that.


1. “Real-Time Web.” Wikipedia.

2. “Introduction.” SignalR.


Getting Started with SignalR

Your app should be on .NET Framework 4.0 or higher in order to use SignalR. You can find it on NuGet. To install SignalR in your .NET application, click Tools | Library Package Manager | Package Manager Console in your VS IDE and run the command:

  install-package Microsoft.AspNet.SignalR

This command will add Server and Client side libraries in your application, making you ready to go.

Add Server and Client side libraries

Follow these steps to get hands on with SignalR with one simple example.

Adding Reference to SignalR

Add reference to SignalR in your class file.

  using Microsoft.AspNet.SignalR;

Implementing the Endpoint

The endpoint, or SignalR service, can be implemented using persistent connection or hubs. I will be implementing Hubs in the example.

Example Code
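The original code listing is not reproduced here; a minimal Hub along these lines would fit the example, assuming SignalR 1.x (the hub name DevxHub and the Send method are illustrative, while broadcastMessage is the client-side method defined later in the article):

```csharp
using Microsoft.AspNet.SignalR;

public class DevxHub : Hub
{
    // Called from client script; pushes a greeting to every connected client
    public void Send(string userName)
    {
        // broadcastMessage is resolved dynamically and must be
        // defined in the client-side script
        Clients.All.broadcastMessage("Hello " + userName + " Welcome to Devx");
    }
}
```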

Route Registration

Once the endpoint is implemented, we must register it in the routing system, which will allow us to access it. The best place to do this is in the Application_Start() of Global.asax, so that it runs during initialization of the application.

Example Code
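For SignalR 1.x (matching the article’s 2013 time frame), the registration would look roughly like this; MapHubs maps all Hubs to the default “/signalr” URL:

```csharp
using System;
using System.Web.Routing;
using Microsoft.AspNet.SignalR; // provides the MapHubs extension

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register the SignalR hub route during application start-up
        RouteTable.Routes.MapHubs();
    }
}
```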

Implementing the Web Client

Implementing the web client begins by including a reference to the client library of this component on our page or view. In order to support older clients that do not natively support JSON deserialization, you will have to add a reference to the json2.js script library before referencing SignalR related script files.

Example Code

Initiate the Connection

To initiate a connection from client to server, you need to start the hub in script. Along with initiating connection you would want to define the event that triggers the server push; it could be anything from a button click or some status change. To keep it simple, in the example it’s a button click on your page.

Example Code

Defining Client Procedure to be Invoked by Server

If you recall from the “Implementing the Endpoint” step, I had called a method broadcastMessage() on all the clients. The definition of this method is done in the script at the client side as shown below.

Example Code

Now run the application in multiple browser windows and click the button control on one page. This will display “Hello <User Name> Welcome to Devx” in <YOUR HTML CONTROL> in all opened browser windows, even those that were not active.

So far so good: without worrying about the rawness of transport and real-time web intricacies, you made your site real-time web enabled in no time with SignalR. But is that all you wanted for your site? If not… then read on.

Unraveling Real World Problems with SignalR

Applications in the real world are not as simple as the example we just saw. To implement the real-time web with SignalR, you will have to play around with the connection ids of connected clients. And if authorization is enabled for a web site, then you will have to write your own code to save the connection id along with the logged-in user details, as authorization is not offered out of the box. Today authorization is more or less an integral part of web sites, and I am not taking up some unexplored topic here. Nevertheless, I would like to detail different strategies to help you choose the best one for your requirements.

Diverse Approaches for Managing Connection IDs

  • DEPRECATED: In its initial releases, SignalR provided a way to override its Connection Factory, where a programmer could assign a unique connection id to each connected client which could then be used when invoking client methods; however, this is no longer supported.
  • Now you will have to save the connection id along with the unique key of the authorized user in order to enable your code to invoke methods on a selected client. A very commonly used design is to maintain a user list on the server that maps connection ids to users and registers the users as they connect and disconnect [1], as proposed at the link
  • For application sanity, and to keep the app server’s memory free for processing, I personally suggest using DB tables instead of accumulating things in some variable on the app server. The above mentioned concept can be implemented using a DB table as well. I will elaborate on this approach in this article and at the end suggest some enhancements based on the requirements.

Step 1: Boot up

Here I assume that:

1. You have installed the SignalR lib in your .NET application, on framework 4.0 or higher (ref).

2. You have defined your end point by deriving your class from Hub (ref).

3. You have made the necessary changes to your Global.asax (ref).

Step 2: Change in DB Schema

Create one db table to save the logged-in user’s id along with the connection id. The entity class would look something like what is shown below.

Example Code
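Since the original listing is not shown, a minimal entity along these lines would fit (property names are illustrative):

```csharp
// One row per (user, connection) pair in the online-users table
public class OnlineUser
{
    public int Id { get; set; }               // surrogate key
    public string UserName { get; set; }      // unique key of the logged-in user
    public string ConnectionId { get; set; }  // SignalR connection id
}
```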

Step 3: Client and Connection

After including references to the SignalR client library as we did in the previous example, you will have to initiate the connection. When a client connects to the Hub, you can invoke a method to save the user information along with the connection id. So here we write a client method which is invoked when the master page or frame window loads.

Example Code

Here markonline() is a server-side method in your Hub class to store the user and connection id info in the table.

Example Code
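A sketch of that server-side method, assuming SignalR 1.x (the hub name and the SaveOnlineUser helper are hypothetical; Context.ConnectionId is the real SignalR property identifying the caller):

```csharp
using Microsoft.AspNet.SignalR;

public class DevxHub : Hub
{
    // Called from client script when the page or frame loads
    public void markonline(string userName)
    {
        // Context.ConnectionId identifies the calling client
        SaveOnlineUser(userName, Context.ConnectionId);
    }

    // Hypothetical data-access helper: insert a (userName, connectionId)
    // row into the online-users table
    private void SaveOnlineUser(string userName, string connectionId)
    {
        // INSERT INTO OnlineUsers (UserName, ConnectionId) VALUES (...)
    }
}
```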

Once we are done with “setting up the connection,” which includes saving rows to the db table, we should implement logic to clear the table as and when the user disconnects from the application.

Now disconnection can happen in two ways: the user can close the browser, or can log off. To handle both scenarios, you can write methods similar to those shown below.

In the class that derives from Hub, override OnDisconnected().

Example Code
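A sketch of that override, using the SignalR 1.x signature in which OnDisconnected returns a Task (the hub name and DeleteOnlineUser helper are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class DevxHub : Hub
{
    public override Task OnDisconnected()
    {
        // Remove this connection's row(s) from the online-users table
        DeleteOnlineUser(Context.ConnectionId);
        return base.OnDisconnected();
    }

    // Hypothetical data-access helper
    private void DeleteOnlineUser(string connectionId)
    {
        // DELETE FROM OnlineUsers WHERE ConnectionId = @connectionId
    }
}
```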

And when a user logs off from the application, invoke a method similar to the one shown below.

Example Code

Step 4: Updating DB Schema

The code for the updateOnlineUser() method is shown below.

Example Code

Step 5: Fetching a User’s Connection ID/s

Now write code that fetches all connection ids for a specific user.

Example Code
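A sketch of that lookup. The rows would normally come from the db table; a minimal in-memory row shape is used here for illustration, and the method name follows the isUserOnline(string username) mentioned later in the article:

```csharp
using System.Collections.Generic;
using System.Linq;

public class ConnectionStore
{
    // Minimal row shape for illustration (UserName, ConnectionId)
    private class Row { public string UserName; public string ConnectionId; }
    private readonly List<Row> rows = new List<Row>();

    // Returns every connection id currently stored for the given user
    public List<string> isUserOnline(string userName)
    {
        return rows.Where(r => r.UserName == userName)
                   .Select(r => r.ConnectionId)
                   .ToList();
    }
}
```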

Step 6: Client-Server BI Directional RPC

Now whenever a server method is invoked, you can fetch the list of available connection ids for the target user(s) and invoke the method on those connection ids.

Example Code
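A sketch of that server-side call, assuming SignalR 1.x; GetHubContext lets code outside a Hub reach connected clients, the hub name “DevxHub” is illustrative, and broadcastMessage must be defined in the client-side script:

```csharp
using System.Collections.Generic;
using Microsoft.AspNet.SignalR;

public static class Notifier
{
    // Invoke the client-side broadcastMessage on each target connection id
    public static void NotifyUser(IEnumerable<string> connectionIds,
                                  string message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext("DevxHub");
        foreach (string connectionId in connectionIds)
        {
            // Clients.Client targets one specific connection
            context.Clients.Client(connectionId).broadcastMessage(message);
        }
    }
}
```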

Where broadcastMessage is a client side method defined as:

Example Code

The above mentioned code will work even if you have multiple connection id scenarios like notification, chat, and co-create on the same screen, working in parallel with each other.

Multiple connection id scenarios

How to Make Multiple Connection ID Scenario More Scalable

Until now we were fetching all connection ids for a user and invoking the client method on all of them. This way, methods meant for chat would be called on the co-create or notification frames as well, which eventually slows down the application with unnecessary processing.

To make it better, you can store the origin of the connection id along with it. When invoking markonline(), you can send a string, say “source”, as a parameter: for example from the co-create frame as markonline(“cocreate”), from the chat window markonline(“chat”), and from the master page markonline(“master”); this will make your table look like this:




User  | Connection ID                         | Source
User1 | 24b11640-XXXX-4069-XX96-8389223418ac  | cocreate
User1 | eaa2de3d-f7df-46ce-XXXX-32bdfe612794  | master
User2 | 8d2XXcc5-XX61-4f8d-a9f4-d3bcXXX1f14c  | cocreate
User2 | 3b900a78-4c1e-4XX8-be2c-XXXXXXXX      | chat
User3 | XXXXXX-af57-4cc0-a2c0-4075947e3915    | cocreate
User3 | 7f946c23-XXXX-4b49-b7d8-ee97cac5eXXX  | chat
User4 | b2136fe5-XXXX-48d3-XXXX-047b77f34b09  | master
……    | ……                                    | ……

Now, while fetching the list of connection IDs from the table with the isUserOnline(string username) method, you can specify the source based on the method you are invoking it for, by passing it in a WHERE clause to the query. So when you want connection IDs for chat methods, you will fetch records only for source = “chat”.
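A sketch of such a source-filtered overload; only the WHERE clause differs from a plain per-user lookup (table and column names are illustrative assumptions):

```csharp
// Returns the user's connection IDs restricted to one source,
// e.g. "chat", "cocreate" or "master".
public IList<string> isUserOnline(string userName, string source)
{
    var ids = new List<string>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT ConnectionId FROM OnlineUsers " +
        "WHERE UserName = @user AND Source = @source",
        connection))
    {
        command.Parameters.AddWithValue("@user", userName);
        command.Parameters.AddWithValue("@source", source);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                ids.Add(reader.GetString(0));
        }
    }
    return ids;
}
```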

How to Enhance the Same Code for Multiple Sessions of the Same User

Today a user can run the application from multiple devices as well. To make the above approach work for multiple-session scenarios, you can store the session ID along with the user ID, source, and connection ID, and fetch results based on these parameters. This ensures that when you invoke a client method from the server, it reaches only the destination it is meant to reach. Similarly, when deleting records as a user logs off or closes a browser window, you delete only the rows belonging to that session.


As I said earlier, SignalR and its usability look very promising, and you saw how you can embed the real-time web in an application without getting your hands dirty. Yes, there are tricks you have to play while working on a selected list of connection IDs, but that is restricted to some “ifs” here and some “wheres” there, and it saves us from dealing with the rawness of the underlying communication APIs.



Top JavaScript MVC Frameworks

June 19, 2013 § 3 Comments

As more and more logic ends up being executed in the browser, JavaScript front-end codebases grow larger and more difficult to maintain. As a way to solve this issue developers have been turning to MVC frameworks which promise increased productivity and maintainable code. As part of the new community-driven research initiative, InfoQ is examining the adoption of such frameworks and libraries by developers.

  • Backbone.js: Provides models with key-value binding and custom events, collections, and connects it all to your existing API over a RESTful JSON interface.
  • AngularJS: A toolset based on extending the HTML vocabulary for your application.
  • Ember.js: Provides templates written in the Handlebars templating language, views, controllers, models and a router.
  • Knockout: Aims to simplify JavaScript UIs by applying the Model-View-View Model (MVVM) pattern.
  • Agility.js: Aims to let developers write maintainable and reusable browser code without the verbose or infrastructural overhead found in other MVC libraries.
  • CanJS: Focuses on striking a balance between size, ease of use, safety, speed and flexibility.
  • Spine: A lightweight framework that strives to have the most friendly documentation for any JavaScript framework available.
  • Maria: Based on the original MVC flavor as it was used in Smalltalk – aka “the Gang of Four MVC”.
  • ExtJS: Amongst other things offers plugin-free charting, and modern UI widgets.
  • Sammy.js: A small JavaScript framework developed to provide a basic structure for developing JavaScript applications.
  • Stapes.js: A tiny framework that aims to be easy to fit in an existing codebase, and because of its size it’s suitable for mobile development.
  • Epitome: An MVC* (MVP) framework for MooTools.
  • soma.js: Tries to help developers write loosely-coupled applications to increase scalability and maintainability.
  • PlastronJS: MVC framework for Closure Library and Closure Compiler.
  • rAppid.js: Lets you encapsulate complexity into components which can be easily used like HTML elements in your application.
  • Serenade.js: Tries to follow the ideas of classical MVC more closely than competing frameworks.
  • Kendo UI: Combines jQuery-based widgets, an MVVM framework, themes, templates, and more.

Design Pattern Automation

March 12, 2013 § Leave a comment

Design Pattern Automation

posted by Gael Fraiteur and Yan Cui  (Source : Infoq)



Software development projects are becoming bigger and more complex every day. The more complex a project, the more likely it is that the cost of developing and maintaining the software will far outweigh the hardware cost.

There’s a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it and good engineers are hard to come by and expensive to keep around.

Despite the high total cost of ownership per line of code, a lot of boilerplate code is still written, much of which could be avoided with smarter compilers. Indeed, most boilerplate code stems from the repetitive implementation of design patterns. But some of these design patterns are so well understood that they could be implemented automatically, if only we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the basis of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, the Observable interface) and .NET (2001, the INotifyPropertyChanged interface). Although the interfaces are part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }

    public string FirstName
    {
        get { return this.firstName; }
        set { this.firstName = value; this.OnPropertyChanged("FirstName"); this.OnPropertyChanged("FullName"); }
    }

    public string LastName
    {
        get { return this.lastName; }
        set { this.lastName = value; this.OnPropertyChanged("LastName"); this.OnPropertyChanged("FullName"); }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Properties eventually depend on a set of fields, and we have to raise the PropertyChanged for a property whenever we change a field that affects it.

Shouldn’t it be possible for the compiler to do this work automatically for us? The long answer is that detecting dependencies between fields and properties is a daunting task if we consider all the corner cases that can happen: properties can have dependencies on fields of other objects, they can call other methods, or, even worse, they can call virtual methods or delegates unknown to the compiler. So there is no general solution to this problem, at least if we expect compilation times in seconds or minutes rather than hours or days. However, in real life, a large share of properties is simple enough to be fully understood by a compiler. So the short answer is yes: a compiler could generate notification code for more than 90% of all properties in a typical application.

In practice, the same class could be implemented as follows:

[NotifyPropertyChanged]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName
    {
        get { return string.Format("{0} {1}", this.FirstName, this.LastName); }
    }
}


This code tells the compiler what to do (implement INotifyPropertyChanged) and not how to do it.

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that usually causes a lot of boilerplate code in large applications. But a typical source base is full of patterns generating boilerplate. Even if they are not always recognized as “official” design patterns, they are patterns because they repeat massively across a code base. The most common causes of code repetition are:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling

These features are difficult to encapsulate using normal OO techniques, which is why they are so often implemented with boilerplate code. Is that such a bad thing?


Addressing cross-cutting concerns using boilerplate code leads to violations of fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors code repetition coming out of manual implementation of design patterns.
  • The Loose Coupling principle is infringed when a pattern is implemented manually because it is difficult to alter the implementation of this pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Trading a library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications of source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason about when trying to understand what the code is doing to address the functional requirement. This added layer of complexity has a huge bearing on the cost of maintenance, considering that software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a boilerplate (to fix a bug, perhaps) requires changing every place where the boilerplate has been applied. How do you even accurately identify where the boilerplate is used throughout a codebase that potentially spans many solutions and/or repositories? Find-and-replace?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method, until eventually you end up with a large codebase almost entirely covered by boilerplate. In one of my previous teams, a simple data-access-layer class had over a thousand lines of code, of which 90% was boilerplate to handle different types of SQL exceptions and retries.

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In so many cases the struggle with making common boilerplate code reusable stems from the lack of native meta-programming support in mainstream statically typed languages such as C# and Java.

The compiler is in possession of an awful lot of information about our code normally outside our reach. Wouldn’t it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, you need look no further than the using and lock keywords. At first sight they seem purely redundant additions to the language, but the language designers recognized their importance and created specific keywords for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects that consume an external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (called destructor in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).
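To make point 6 concrete, here is roughly the expansion the C# compiler performs for a using block (a sketch of the documented try/finally form):

```csharp
// What you write:
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
}

// Roughly what the compiler generates:
{
    var connection = new SqlConnection(connectionString);
    try
    {
        connection.Open();
    }
    finally
    {
        if (connection != null)
            ((IDisposable)connection).Dispose();
    }
}
```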

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block).
  • FxCop can enforce a rule that says any disposable class also implements a finalizer, and the Dispose method calls GC.SuppressFinalize.

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (every class, typically) should be designed as a “cell”;
  • every cell is responsible for its own health; therefore,
  • every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.

Microsoft Code Contracts is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of preconditions, postconditions, and object invariants. However, this API is not just a class library: it translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

public Book GetBookById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Author>(id);
}

Its binary rewriter can (based on your configurations) rewrite your built assembly and inject additional code to validate the various conditions that you have specified. If you inspect the transformed code generated by the binary rewriter you will see something along the lines of:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
        __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
        __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article.

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, it’s impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions because they expose the very details of abstract syntax tree. However, these technologies operate at an excessive level of abstraction, making it very complex to implement any transformation but the simplest ones.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.

2. Provide a way to express where primitives should be applied: it’s good to tell the compiler extension you want to intercept some methods, but it’s even better if we can specify which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When you’re able to apply multiple transformations simultaneously some transformations might need to occur in a specific order in relation to others. Therefore the ordering of transformations needs to follow a well-defined convention but still allow us to override the default ordering where appropriate.

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may define instance fields can only be accessed from instance methods of the same object. The framework must offer a mechanism to query methods accessing a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advices), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, which greatly depend on each AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a domain-specific language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines advice with the original source code at the specified locations (not less barbarically called joinpoints). It has access to meta-data about the original source code so, for compiled languages such as C# or Java, there is opportunity for the static weaver to perform static analysis to ensure the validity of the advice in relation to the pointcuts where they are applied.

Although aspect-oriented programming and design patterns have been independently conceptualized, AOP is an excellent solution to those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Task.Run(args.Proceed);
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
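For example, the same interception point could back a naive caching aspect. The sketch below is my own illustration of what MethodInterceptionArgs makes possible, not part of PostSharp's library; it is not thread-safe and uses a crude cache key:

```csharp
[Serializable]
public sealed class CacheAttribute : MethodInterceptionAspect
{
    // Shared cache keyed by method name plus argument values.
    private static readonly Dictionary<string, object> cache =
        new Dictionary<string, object>();

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        string key = args.Method.Name + ":" +
                     string.Join(",", args.Arguments.ToArray());

        object value;
        if (cache.TryGetValue(key, out value))
        {
            args.ReturnValue = value;   // skip the intercepted method
            return;
        }

        args.Proceed();                 // run the original method
        cache[key] = args.ReturnValue;  // remember its result
    }
}
```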

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void' and has no out/ref arguments.
public override bool CompileTimeValidate( MethodBase method )
{
    MethodInfo methodInfo = (MethodInfo) method;

    if ( methodInfo.ReturnType != typeof(void) ||
         methodInfo.GetParameters().Any( p => p.ParameterType.IsByRef ) )
    {
        ThreadingMessageSource.Instance.Write( method, SeverityType.Error,
            method.DeclaringType.Name, method.Name );
        return false;
    }

    return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.
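As a sketch (an assumption on my part, not the article's code), the WPF flavor might look like this:

```csharp
[Serializable]
public sealed class ForegroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Marshal the intercepted call onto the WPF UI thread.
        Application.Current.Dispatcher.BeginInvoke(new Action(args.Proceed));
    }
}
```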

The above aspect can be applied just like any other attribute, for example:

[BackgroundThread]
private void ReadFile(string fileName)
{
    DisplayText( File.ReadAllText(fileName) );
}

[ForegroundThread]
private void DisplayText(string content)
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and is a good example of the C# team identifying a recurring problem that they decided to address with a design pattern implemented directly in the compiler and in core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it’s rarely touched at master level. This lack of education has contributed towards a lack of general awareness about AOP amongst the developer community.

Despite being 20 years old, AOP is misperceived as a ‘new’ paradigm, which often proves to be the stumbling block for adoption by all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is recent. We cited some meaningful precedents in this article involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not grown into a general call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible in the source code. Note: what is surprising is the intended effect of AOP, namely that the compiler does more than usual, and not any side effect.

There can also be some surprise of an unintended effect, when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be subtle, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At time of writing only two AOP frameworks provide correct IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer – aspects, as well as the fact that aspects have been applied properly, must be unit tested as any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has been recently gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager, and may injure the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns (exception handling, transaction handling, and security, for instance) that can be automated easily by anyone with an AOP framework.

After many years of market experience, the PostSharp team began to provide highly sophisticated and optimized ready-made implementations of the most common design patterns after they realized most customers were implementing the same aspects over and over again.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.


So-called high-level languages such as Java and C# still force developers to write code at an irrelevant level of abstraction. Because of the limitations of mainstream compilers, developers are forced to write a lot of boilerplate code, adding to the cost of developing and maintaining applications. Boilerplate stems from massive implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers to allow us to automate implementation of the most common patterns. Hopefully, future language designers will understand design patterns are first-class citizens of modern application development, and should have appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.

Eighteen years after the Gang of Four’s seminal book, isn’t it time for design patterns to become adults?

Improvements to Parallelism in .Net Framework 4.5

January 9, 2013 § Leave a comment

Here is an article I read that I thought worth sharing, as parallelism will be the future of application development.

source :

Parallelism has become a buzz word in the .NET Framework world, making it important for every developer to keep up to date on the latest and greatest. The .NET Framework, version 4.5 has shipped with quite a number of updates and improvements to the parallelism features, many of which Microsoft implemented based on user feedback and common feature requests. This article walks you through a few of these new improvements.


In .NET Framework 4.5, a lot of work has been done to improve the performance of parallel computing. Developers will not notice any difference between code written for versions 4.0 and 4.5, but there will be an increase in the performance of parallel computing operations. Listed below are some areas with a significant amount of performance improvement.

1. Long chains of dependent task execution

2. PLINQ

3. Concurrent collections
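To make the first item concrete, here is a small illustration (not from the original article) of a long chain of dependent tasks. The code is identical on 4.0 and 4.5; only the runtime's scheduling of long continuation chains got faster:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Each continuation depends on the previous task's result,
        // forming a long chain of dependent tasks.
        Task<int> task = Task.Factory.StartNew(() => 0);
        for (int i = 0; i < 1000; i++)
        {
            task = task.ContinueWith(t => t.Result + 1);
        }
        Console.WriteLine(task.Result); // prints 1000
    }
}
```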

Below is a code sample where the OrderBy and Take operations are performed using PLINQ. The same code was run on both the 4.0 and 4.5 versions; Fig 1.0 shows the comparison of the results in milliseconds.

  static void Main(string[] args)
  {
      Stopwatch stopWatch = new Stopwatch();
      stopWatch.Start();
      var students = GetStudentRecords();
      var result = students.AsParallel().OrderBy(student => student.Name).Take(5000);
      int count = result.Count();
      stopWatch.Stop();
      Console.WriteLine("Time taken in .net <version>: {0}ms", stopWatch.ElapsedMilliseconds);
      Console.ReadLine();
  }

Fig 1.0: Comparison Results (chart not reproduced here)

Values Property in the ThreadLocal Class

ThreadLocal<T> is a class introduced in .NET Framework 4.0 to store per-instance, per-thread resources. The value stored in a ThreadLocal<T> object is normally discarded when the corresponding thread exits or when the ThreadLocal<T> instance is disposed. .NET Framework 4.5 adds a new property to the ThreadLocal<T> class named Values: an IEnumerable<T> holding the values stored by the different threads. These values remain available even after the respective threads exit.

Tracking is opt-in: the Values property is populated only when you pass the trackAllValues parameter as true to the ThreadLocal<T> constructor (reading Values otherwise throws an InvalidOperationException). The code below demonstrates the Values property of the ThreadLocal<T> class.

  static void Main(string[] args)
  {
      var threadLocal = new ThreadLocal<string>(() => String.Empty, trackAllValues: true);
      var tasks = new Task[2];
      tasks[0] = Task.Factory.StartNew(() =>
      {
          threadLocal.Value = "Dummy value 1";
      });
      tasks[1] = Task.Factory.StartNew(() =>
      {
          threadLocal.Value = "Dummy value 2";
      });
      Task.WaitAll(tasks);
      // Note that the values set by the different threads remain available
      // even after those threads exit.
      foreach (var value in threadLocal.Values)
      {
          Console.WriteLine(value);
      }
  }

New TaskCreation and TaskContinuation Options

Two new options, DenyChildAttach and HideScheduler, have been introduced in .NET Framework 4.5 for both the TaskCreationOptions and TaskContinuationOptions enumerations. These options give you more control when calling third-party code inside your tasks.


A child task created with the AttachToParent option normally prevents its parent task from completing until the child task completes. If you are calling third-party code inside your task and do not want any task it creates to attach to yours, use the DenyChildAttach option. Even if the third-party code specifies AttachToParent, it will find no parent task available and therefore will not attach.

  var task = Task.Factory.StartNew(() =>
  {
      // Perform the operation calling third-party code
  }, TaskCreationOptions.DenyChildAttach);


The HideScheduler option hides your scheduler from the third-party code you call inside your task: within the task, TaskScheduler.Current always returns TaskScheduler.Default. In other words, your TaskScheduler will not be applied to child tasks that do not specify a scheduler of their own; the default scheduler will be used instead.
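A small console sketch of this behavior (using ConcurrentExclusiveSchedulerPair, which is also new in 4.5, as a stand-in for a custom scheduler):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var pair = new ConcurrentExclusiveSchedulerPair();

        var task = Task.Factory.StartNew(() =>
        {
            // The task really runs on pair.ExclusiveScheduler, but because of
            // HideScheduler, code in here (e.g. a third-party library) sees
            // TaskScheduler.Current as the default scheduler and cannot
            // accidentally capture ours.
            Console.WriteLine(TaskScheduler.Current == TaskScheduler.Default);
        }, CancellationToken.None, TaskCreationOptions.HideScheduler, pair.ExclusiveScheduler);

        task.Wait(); // prints True
    }
}
```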

Another useful feature: .NET Framework 4.5 now lets you specify a timeout for a cancellation token source. The cancellation request is issued automatically after the specified timeout elapses.
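For example (a minimal sketch, not from the original article), the timeout can be supplied either through the CancellationTokenSource constructor or via its CancelAfter method:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // The constructor overload takes the timeout directly...
        var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(500));

        // ...or CancelAfter schedules (or reschedules) the timeout on an
        // existing source:
        // cts.CancelAfter(TimeSpan.FromSeconds(5));

        // Block until the timeout fires and cancellation is requested.
        cts.Token.WaitHandle.WaitOne();
        Console.WriteLine(cts.IsCancellationRequested); // prints True
    }
}
```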

Happy reading!

How to Check for Application Inactivity in .NET 2010

December 4, 2012 § Leave a comment

Posted by Hannes Du Preez


A nice feature to build into your application is allowing your application to check for inactivity. Inactivity means that the program is just “standing still” – it is open, but it seems to be forgotten. Some programs check for inactivity in order to release important resources. Some programs rely on activity in order to keep database connections open, etc. In this article we will let our program check for inactivity.


There are actually a few ways to accomplish this: You could use a normal timer and check for mouse movements, clicks, or keyboard presses. You could determine this through scanning the active processes running on your computer. The question is: How complicated do you want to get?

My method uses the IMessageFilter interface. This interface allows an application to capture a message before it is dispatched to a control or form. It may sound more involved than checking mouse movements and key presses one by one, but it takes a lot less code, accomplishes the same thing, and is far less error-prone.

The IMessageFilter also checks mouse movements and key presses, but it does so through the use of the actual mouse messages and key messages being sent. Sound complicated? No, don’t worry – as you’ll see shortly, it is quite a breeze.


Start up Visual Studio and choose either VB.NET or C#. Create a Windows Forms project. There will be some differences between our VB and C# projects, because C# will implement this interface differently than VB. Add a few controls to your form, and add a Timer (the most important control here). Set the Timer's Interval property to 1000 (one second).


As to be expected, there is not much code involved here, but that doesn’t mean that the code won’t have us scratch our heads :). For simplicity’s sake, let us cover VB.NET and C# separately.

Open the code window for your VB.NET project, and add the following code:

  Public Class Form1
      Implements IMessageFilter 'This interface allows an application to capture a message before it is dispatched to a control or form.

Here, we are letting our form know that we will handle IMessageFilter messages. The filter must also be registered with the application (otherwise PreFilterMessage is never called), and we need a form-level counter that any activity resets. Now we write the function responsible for listening to the sent messages:

  Private SecondsCount As Integer 'Counts the seconds of inactivity

  Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
      Application.AddMessageFilter(Me) 'Register the filter so PreFilterMessage receives the messages
  End Sub

  ' Filters out a message before it is dispatched.
  Public Function PreFilterMessage(ByRef m As System.Windows.Forms.Message) As Boolean Implements System.Windows.Forms.IMessageFilter.PreFilterMessage
      'Check for mouse movements and / or clicks
      Dim mouse As Boolean = (m.Msg >= &H200 And m.Msg <= &H20D) Or (m.Msg >= &HA0 And m.Msg <= &HAD)
      'Check for keyboard button presses
      Dim kbd As Boolean = (m.Msg >= &H100 And m.Msg <= &H109)
      If mouse Or kbd Then 'any of these events means the user is active
          If Not Timer1.Enabled Then MessageBox.Show("Waking up") 'wake up
          SecondsCount = 0 'activity resets the inactivity counter
          Timer1.Enabled = True
      End If
      Return False 'return False so the message is still dispatched to the control
  End Function

This function sees every message sent to the form: mouse clicks, mouse movements, key presses, and so on. Each of these messages counts as activity and resets the inactivity counter.

The final piece of code we need to add is the Timer's Tick event, which counts the seconds of inactivity. If no activity has been registered for two minutes, we quit. Add this code now:

  Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
      SecondsCount += 1 'one more second without activity
      If SecondsCount > 120 Then 'Two minutes have passed since being active
          Timer1.Enabled = False
          MessageBox.Show("Program has been inactive for 2 minutes.... Exiting Now.... Cheers!")
          Application.Exit()
      End If
  End Sub

When the counter reaches 120 (2 minutes), the program quits.

C# Code

Apart from the syntactical changes between VB.NET and C#, there are some structural differences too. In the C# version, we will implement the IMessageFilter interface in a separate class instead of in the form itself, and then make use of that class from within our form. In your C# project, add a class named FilterMess and add the following code to it:

  using System;
  using System.Collections.Generic;
  using System.Linq;
  using System.Text;
  using System.Windows.Forms; //Necessary

  namespace Inactivity_C //Name of my program
  {
      class FilterMess : IMessageFilter //This interface allows an application to capture a message before it is dispatched to a control or form
      {
          private Form1 FParent; //instance of the form in which you want to handle this pre-processing

          public FilterMess(Form1 RefParent)
          {
              FParent = RefParent;
          }

          public bool PreFilterMessage(ref Message m)
          {
              //Check for mouse movements and / or clicks
              bool mouse = (m.Msg >= 0x200 && m.Msg <= 0x20d) || (m.Msg >= 0xa0 && m.Msg <= 0xad);
              //Check for keyboard button presses
              bool kbd = (m.Msg >= 0x100 && m.Msg <= 0x109);
              if (mouse || kbd)
              {
                  //wake up (in a real application you would also reset the
                  //form's inactivity counter here, e.g. via FParent)
                  MessageBox.Show("Waking up");
              }
              //Return false so the message is still dispatched to the control;
              //returning true would swallow every mouse and keyboard message.
              return false;
          }
      }
  }

It is more or less the same as the VB.NET version; I just added the ability to connect this class to my form (named Form1). All we need to do now is make use of this class inside our form. Change your form's constructor as follows:

  public Form1()
  {
      InitializeComponent();
      Application.AddMessageFilter(new FilterMess(this)); //Connect to the FilterMess class
  }

Finally, add your Timer_Tick event:

  static int SecondsCount; //Counts each second

  private void timer1_Tick(object sender, EventArgs e)
  {
      SecondsCount += 1; //Increment every second
      if (SecondsCount > 120) //Two minutes have passed since being active
      {
          timer1.Enabled = false;
          MessageBox.Show("Program has been inactive for 2 minutes.... Exiting Now.... Cheers!");
          Application.Exit();
      }
  }

When run and left inactive for two minutes, a messagebox will pop up informing you that your application has been inactive for too long, and exits. If your application (form) didn’t become inactive, you’d get a message each time you did something. That can get a tad annoying, but this is obviously just an example (which you will be able to download) for you to use as you wish.


Not too complicated now was it? Nope. I hope you have enjoyed this article and that you can benefit from it. Until next time, cheers!

About the Author:

Hannes du Preez is a Microsoft MVP for Visual Basic for the fifth year in a row. He is a trainer at a South African-based company providing IT training in the Vaal Triangle. You could reach him at hannes [at] ncc-cla [dot] com.
