Microsoft’s Plans for the Future of .NET

March 25, 2017


by Jeff Martin

Microsoft’s Mads Torgersen has shared an updated strategy for the .NET family of languages, providing insight into the company’s thinking about future functionality.  Although the development of C#, VB .NET, and F# happens in public on GitHub, Microsoft’s long-term plans have frequently been kept private.  Torgersen’s announcement is useful in that Microsoft’s current thinking is now available for public review and commentary.

Torgersen notes that according to StackOverflow, only Python and C# are both found on the top ten lists for most used and most loved programming languages.  C# is used in a wide variety of application types:  business, gaming, and web, among several others.  Recognizing this, Microsoft wants C#’s design to “innovate aggressively, while being very careful to stay within the spirit of the language”.  Another aspect of this is to support all of C#’s various platforms, so that one is not emphasized at the expense of others.

When it comes to Visual Basic, its user base is not as large as C#’s, but that user base does have a larger percentage of new developers than C#.  Since Visual Basic has a smaller, less experienced developer base in Microsoft’s eyes, future design plans will see VB decoupled from C#’s design.  VB will add new language features where they make sense for that language, rather than merely adding them because C# is getting something similar.  That said, Torgersen says Microsoft will continue to maintain it as a first-class citizen on .NET that remains welcoming to new developers.

Of the three languages mentioned, F# has the smallest user base, but its users are very passionate about the language.  Torgersen says that Microsoft intends to “make F# the best-tooled functional language on the market” while ensuring it interoperates well with C# where appropriate.

Reader commentary on this announcement is mixed.  F# and C# developers are mostly happy, as their languages will continue to hold a place of prominence.  VB developers are more concerned that their language will be left behind or stagnate.  However, Torgersen insists that VB will continue to be a point of investment for Microsoft.


ASP.NET MVC, Dependency Injection and The Bliss of Truly Decoupled Applications

December 1, 2014


In traditional software and web applications, application layering is one of the tenets of good application design and architecture. It stems from the separation of concerns principle in computer science, which is the process of separating applications and programs into distinct features that overlap in functionality as little as possible. Hence the single responsibility principle and DRY concepts in software engineering and object-oriented programming.

Typically, in layered applications, a given application layer should only communicate with and depend on the layer directly below it.

Take the following diagram for example, taken from the Microsoft Application Architecture Guide v2 (I added the red arrow indicators that typically represent where dependencies are):

[Diagram: application layers with red arrows indicating one-way dependencies between adjacent layers]

In this example, the Presentation Layer would talk to and depend on the Business Layer. The Business Layer would talk to and depend on the Data Layer. The Data Layer would talk to and depend on the database.

This is generally considered good application design despite there being dependencies between layers. Traditionally, however, layer n-1 dependency (one-layer dependency) has been the normal and accepted practice. But experience tells us that all too often we find dependencies on more than just one layer. In fact, we may find dependencies between multiple layers – or worse, between all layers – ending up with something like this:

[Diagram: application layers with dependencies between every layer]

The application may work fine. In fact, it may work perfectly. But the net effect, in the case of either diagram, is that, with each dependency, change becomes more difficult. Our apps become more difficult to maintain, more difficult to test, and more difficult to change which can result in longer development cycles, higher costs and increased risk of failure on our projects.

Usually, these dependencies come in the form of concrete class names. For example:

MyService service = new MyService();

As Scott Hanselman would say, “anytime you find yourself writing code like this, quit your job and go do something else.” No, seriously: you should at least stop and acknowledge what this really is – a dependency on an implementation. Sure, factory classes and singletons come in handy to remove the need to new things up, but we still end up with a dependency – only now we’re dependent on the Singleton or the Factory.

Ideally, we want to declare a dependency on “some thing” without saying what that “some thing” is.

A simplified analogy

If you are hungry, you could ask your friend, “do you have a hamburger?” in which case you are expressing a need for an explicit implementation of something that satisfies hunger (a hamburger). If your friend has a hamburger, great! He would give it to you, you would eat it, and you wouldn’t be hungry anymore. Problem solved! But if your friend does not have a hamburger, you’d still be hungry – mainly because you were too specific in your request. Now, instead of asking for the hamburger, you could ask your friend, “do you have anything to eat?” In this case, you are expressing the “need” (food or something that satisfies hunger) vs. a concrete implementation. This way, if your friend has anything to eat he can give it to you (regardless of whether it is a hamburger or not) and you won’t go hungry. Any food would suffice.

What’s the point?

The point is that by being less specific in our request for food, we are more adaptable. The same is true with software. If we simply declare dependencies on contracts (interfaces), rather than implementations, our software becomes more adaptable and easier to change. Dependency Injection exists to help you do just that.

The goal of Dependency Injection (DI for short) is to separate behavior (or implementation) from dependency resolution, which is really just encapsulation – one of the main principles of computer science and object-oriented programming. I like to think of Dependency Injection as “intra-app SOA”; the end result being a highly decoupled application composed of “services” with explicit service boundaries and contracts (or service interfaces) where any given application layer has no knowledge of any other layers. It cares about neither the number of layers nor the implementation within each layer. Each layer simply depends on a contract and can be reasonably sure that at runtime there will be at least one implementation available to satisfy that contract. With Dependency Injection on our side, the above diagram might change to look something like this:

[Diagram: application layers depending only on contracts/interfaces]

At first glance, this doesn’t look much different from the first diagram. We still have “dependencies.” However, now we are dependent on a contract, not an actual implementation. This provides enormous benefits to us as application developers because our application layers are now plug-n-play. They are hot-swappable like hard drives in a RAID configuration. We can change the implementation of a layer and as long as we implement the agreed upon interface, we can rest assured we won’t break something in another layer.

Of course, we still have to unit test our new layer to make sure we don’t have any internal bugs, but as long as other layers only depend on the interface (not the implementation) we know we can reliably swap out an implementation without affecting other parts of an application or system. Ideally, each implementation of an application layer becomes a “black box” to the other layers with which it interacts.

In the case of our MyService above, instead of writing our code like:

MyService service = new MyService();

using Dependency Injection we would instead write IService service { get; set; } as a property on our class, or we would use constructor injection and have something like:

public class HomeController : Controller
{
    private readonly IService _service;
    public HomeController(IService service) { _service = service; }
}

As you can see, we are now expressing a dependency on a contract (an interface) rather than an implementation and we are now “wired” for a Dependency Injection/IoC framework to resolve these dependencies for us without explicitly identifying them in our code.

You might say, “that’s all fine and good, but how do I make sure that my application is only dependent on contracts/interfaces?” More importantly, for existing applications that might not have been written this way, how do I find all the application dependencies and extract them into interfaces in order to move to a DI-friendly application architecture?

This is where being a .NET developer in this day and age makes your life much easier. Thanks to some new features in Visual Studio 2010, you can now answer those questions fairly easily. If you have one of the higher-level VS2010 SKUs (Premium or Ultimate), you have the ability to create application architecture layer diagrams. While you may have known that, you may not be aware that you can also validate an application against a layer diagram and have Visual Studio generate the dependencies between your layers for you.

Using this feature, not only can you say, “my application should look like this” by creating an application layer diagram, but with the validation feature, you can ask the question, “does my application look like this?”

To get started with this feature, let’s take the following ASP.NET MVC project that I’ve set up as an example for this post. (I’ve circled the areas of immediate interest.)

As you can see, I have actually organized my solution folders to mimic my application layering. We have a Repository/Data Access layer, we have a Services layer and we have a Presentation/UI layer. You’ll also notice that we have a Contracts project (or layer) which contains our interfaces.

So our dependencies go something like this:

  • Site (our ASP.NET MVC app) depends on an IUserService.
  • Our Services depend on an IUserRepository and IUser.
  • Our Repositories depend on the IUser contract since that is the contract they return from their operations.

There are no dependencies between layers. They only depend on interfaces in the Contracts project.
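As a rough sketch (the interface names come from the post; the members shown are assumptions, not necessarily what the sample solution defines), the Contracts project might contain something like:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the Contracts project; only the interface
// names come from the sample solution, the members are assumptions.
public interface IUser
{
    int Id { get; set; }
    string Name { get; set; }
}

public interface IUserRepository
{
    IEnumerable<IUser> GetUsers();
}

public interface IUserService
{
    IEnumerable<IUser> GetUsers();
}
```

Because every layer references only this project, the Contracts project is the single point of coupling in the entire solution.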

You may have also noticed that there are four different “repository” projects and three different “services” projects. This is where the plug-n-play concept I discussed above comes into play. ASP.NET MVC 2 comes with great support for Dependency Injection (which MVC 3 builds upon), allowing you to plug in your DI/IoC framework of choice for all your DI needs. In my case, I’m using Autofac. ASP.NET MVC provides an extensibility point that allows you to say, “anytime my application needs something to satisfy a contract/interface, here’s where to find it.” That “where to find it” part is where a DI/IoC framework plugs in to satisfy the dependencies of your application without having to declare explicit dependencies between your application layers and/or components.

Frameworks such as Autofac, Ninject, Castle Windsor, StructureMap, and Unity all have some concept of a “registry”, which is basically an interface-to-implementation mapping or dictionary. With our MyService example above, we would register our implementation MyService as the service that satisfies all dependencies on IService. Then, any time the application needs an implementation of IService, it will ask the DI container to provide one from its registry.
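With Autofac (the framework used in this post), that registration might be sketched roughly like this, assuming MyService implements IService:

```csharp
using Autofac;

// Sketch only: register MyService as the implementation of IService.
var builder = new ContainerBuilder();
builder.RegisterType<MyService>().As<IService>();
IContainer container = builder.Build();

// Anywhere the application needs an IService, the container
// resolves one from its registry.
IService service = container.Resolve<IService>();
```

The other frameworks differ in syntax, but the idea is the same: one mapping, declared in one place, consulted everywhere.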

I won’t go into the details of how these dependencies get registered with or resolved by the DI/IoC frameworks. I’ll save that for another post or for you to read about in your own research or digging through the attached sample code.

Instead, we’ll jump right into the benefits that using Dependency Injection provides.

So, in the diagram, we happen to have four different implementations of a DataAccessLayer.

  1. DatabaseRepository – persists data to a SQL database
  2. FakeRepository – fakes persistence to an underlying data store (ideal for unit testing)
  3. MongoDbRepository – persists data to a MongoDb database
  4. XmlRepository – persists data to an Xml file

All four Repository projects implement the IUserRepository contract/interface that our Services layer depends on. This allows us to reliably swap out one for another without affecting any code in the services layer at all. So, it would be trivial to add another repository that persisted data to Oracle, SQL Azure, Amazon SimpleDB, Microsoft Access, Excel, a flat-file, what have you – just so long as the repository implements the IUserRepository interface, we’re good.
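For illustration, a fake repository might be as small as this (a hypothetical sketch; the GetUsers member is an assumption, not necessarily what the sample code defines):

```csharp
using System.Collections.Generic;

// Hypothetical in-memory fake satisfying IUserRepository for unit tests.
public class FakeRepository : IUserRepository
{
    private readonly List<IUser> _users = new List<IUser>();

    public IEnumerable<IUser> GetUsers()
    {
        return _users; // no database, no I/O: ideal for unit testing
    }
}
```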

Likewise, we have three different implementations of our Services layer.

  1. CachingService – retrieves data from an IUserRepository and caches the results
  2. FakeUserService – a service that fakes retrieving data from an IUserRepository (again, ideal for unit testing)
  3. UserService – same as the caching service (just without caching)

These services all implement the IUserService contract/interface that our presentation layer (UI) depends on.

Now, on to creating an application architecture and validating these stated dependencies.

From the Architecture menu in Visual Studio, you select New Diagram.

[Screenshot: Architecture menu → New Diagram]

In the dialog box that opens, select Layer Diagram and give it an appropriate name.

[Screenshot: New Diagram dialog with Layer Diagram selected]

You should end up with a blank architecture diagram that looks like this:

[Screenshot: blank layer diagram]

Now, using either the Toolbox or the right-click menu, we can begin adding layers to our diagram. After adding our application layers, our diagram should look like the following:

[Screenshot: layer diagram with application layers added]

Now, to validate our architecture (and dependencies), we first need to tell Visual Studio, what code/projects are in what layers. We start by dragging our projects (or solution folders) into the associated application layer. For example, in my sample solution, I would drag the entire DataAccessLayer solution folder into the Data Access layer on the diagram:

[Screenshot: dragging the DataAccessLayer solution folder onto the Data Access layer]

Here’s what we should end up with when we’re done:

[Screenshot: layer diagram with project counts shown per layer]

It looks very similar to when we started, but now we have a small indicator in the upper-right-hand corner which tells us how many projects are associated with each layer.

Now for the magic!

Simply right-click anywhere in the white space on the application layer diagram and click Generate Dependencies.

If what I told you above about how my application is architected is true, then you should get an updated diagram that looks like the following:

[Diagram: all application layers depending only on the Contracts layer]

Ah, now isn’t that diagram a breath of fresh air? In this diagram, Visual Studio is telling us that there are absolutely NO dependencies between the layers of our application! Rather, all application layers depend only on the Contracts project, which is simply a collection of interfaces. This is the epitome of encapsulation and tells us that our application layers are decoupled from one another and can be swapped out for other implementations without risk to the rest of our application.

This is ideal for TDD scenarios and allows for simultaneous development on different application layers if we have already ironed out our contracts/interfaces. This means more parallel development can occur which can potentially reduce project timelines. And of course, with TDD on our side, we can reliably test individual layers and sign-off on them knowing that they are not dependent or affected by other layers/components whatsoever. End-result: higher-quality software developed in a shorter amount of time.

Now, if your application doesn’t look like this and looks more like the second picture with dependencies between every layer, don’t worry! Visual Studio can also help you find those dependencies so that you can factor out the concrete references into interfaces/contracts.

First, start by removing the dependencies from your diagram that you don’t want to have in your app – do this by right-clicking on the dependency arrow and selecting Delete. Next, after you have removed all the unwanted dependencies, simply right-click anywhere in the whitespace of the diagram and select Validate Architecture. Visual Studio will proceed to build your projects and determine if your application actually validates against your stated (desired) architecture. If it does not, the violations will show up in the Error List window and you can start going through these dependencies and replacing the concrete implementations with contracts/interfaces. Additionally, application architects can use this functionality in conjunction with TFS to prevent code check-ins that violate an application architecture diagram.

With Visual Studio 2010, ASP.NET MVC 2 & 3, and their rich support for Dependency Injection, you can begin extracting interfaces from your concrete classes to remove the hard dependencies in your apps and increase their maintainability. These features aren’t limited to MVC, either. Many DI/IoC frameworks also work with ASP.NET WebForms as well as Windows Forms, WPF, and Silverlight. With these tools in your toolbox, you too can begin enjoying the bliss that is a truly decoupled application: easy to maintain, easy to test, easy to change, and easy to replace when the next technology comes along!

Download the sample code: MvcDI.zip

For more reading on ASP.NET MVC and Dependency Injection I suggest you check out the following blogs:

http://weblogs.asp.net/scottgu/
http://hanselman.com
http://haacked.com
http://bradwilson.typepad.com/

Happy Injecting!

WCF message headers with OperationContext and with MessageInspector and Custom Service Behavior

April 14, 2014


In this post we will look at some possible options for adding message headers to the messages sent from the client to the service.

Our scenario:

We have a WCF service provided as Software as a Service (SaaS). People who have an active subscription with our company are able to invoke methods on our service and retrieve information. To successfully invoke methods and retrieve information, the client invoking the method must add a message header named “SubscriptionID” which contains the subscriptionID of that customer. If the subscriptionID does not match a valid and active subscription, access to the operations is denied.

1. Setup and configuration of the Service

Our solution setup looks like this:
Solution overview

The “WCF.MessageHeaders.Client” project is a Console Application used to mimic a client.
The “WCF.MessageHeaders.Service” project is a WCF Service Application holding our SaasService.

 

Our service is called SaasService and represents our SaaS service, which provides certain operations to external clients.
Our service interface looks like this:

WCF Service interface

And our service implementation like this (click to view enlarged):

WCF Service implementation

We have one public demo method on our SaasService, namely InvokeMethod(), which returns a string (or any kind of information). When invoking the method, we check whether we can find a header called “SubscriptionID” with an empty namespace (for demo purposes we used an empty namespace – less hassle). If the message header is not found in the incoming message headers, access is denied. If the SubscriptionID message header is found, it is validated against the subscription store, and if it is a valid subscriptionID, the required information is returned to the client.

At the service side, message headers can be found at OperationContext.Current.IncomingMessageHeaders. The FindHeader method allows you to check whether the header exists, and the GetHeader method allows you to retrieve the header information.
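Putting that together, the service-side check might look roughly like this (a sketch reconstructing the screenshots above; the fault message and return value are illustrative):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface ISaasService
{
    [OperationContract]
    string InvokeMethod();
}

public class SaasService : ISaasService
{
    public string InvokeMethod()
    {
        MessageHeaders headers = OperationContext.Current.IncomingMessageHeaders;

        // Look for the "SubscriptionID" header in the empty namespace.
        int index = headers.FindHeader("SubscriptionID", "");
        if (index < 0)
            throw new FaultException("Access denied: no SubscriptionID header found.");

        string subscriptionId = headers.GetHeader<string>(index);
        // Validate subscriptionId against the subscription store here...
        return "Information for subscription " + subscriptionId;
    }
}
```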

Our web.config looks like this:

WCF Service configuration

Nothing special: we just use a basicHttpBinding and leave the address unspecified, as we will host this service in our local IIS. I’m no fan of hosting services in managed applications, unless it’s really necessary.
To host the service easily in IIS, go to the service project properties, open the “Web” tab, choose the local IIS Web Server instead of the Visual Studio Development Server, and press “Create Virtual Directory”.

WCF Service hosted at IIS

Your service should be available at:
http://localhost/WCF.MessageHeaders.Service/SaasService.svc

2. Setting up and configuring the client

We simply add a service reference to the service url mentioned above.

This is the code we put at our client console application:

WCF Service client proxy

If we run our client console application:

WCF Client console application

This behaves as intended: since we didn’t add any subscription key to the outgoing messages at our client, there is no SubscriptionID message header found in the message at the service, so access is denied. If we are a valid customer, we have a subscriptionID which we can pass on to the service, which will grant us access.

The code should look like this:

WCF add message header to message

Basically, we start by creating the WCF client proxy, which will invoke the operation. Then we create a new OperationContextScope based on the proxy’s InnerChannel. The new OperationContextScope at the client has to be created before you can access OperationContext.Current.OutgoingMessageHeaders, which we need in order to add a message header to the outgoing messages. If you do not use the OperationContextScope, the OperationContext will be null and an error will occur when you try to access OperationContext.Current.OutgoingMessageHeaders.

Next we create a MessageHeader of type string with the subscriptionID value. This is the subscriptionID which should grant us access to the SaasService. After that we can invoke messageHeader.GetUntypedHeader(string name, string ns), which gives us an untyped message header with a defined message header name and namespace. In our case the name is “SubscriptionID”, which is the name of the message header the service looks for. Our namespace in this demo is just empty.

After having created the message header, we can simply add it to the OutgoingMessageHeaders and invoke the method on the proxy.
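In code, the client side might look roughly like this (a sketch; SaasServiceClient is an assumed name for the generated proxy):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

var proxy = new SaasServiceClient(); // hypothetical generated proxy name

// The OperationContextScope must exist before touching
// OperationContext.Current.OutgoingMessageHeaders.
using (new OperationContextScope(proxy.InnerChannel))
{
    MessageHeader<string> typedHeader = new MessageHeader<string>("123-456789-098");
    OperationContext.Current.OutgoingMessageHeaders.Add(
        typedHeader.GetUntypedHeader("SubscriptionID", ""));

    Console.WriteLine(proxy.InvokeMethod());
}
```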

The result, which we expect:

WCF Message headers added to outgoing or incoming messages
All in all, adding message headers to incoming or outgoing messages and validating them is not that difficult. However, when we have a service with a few tens or hundreds of operations, we do not want to validate the message header in every operation. The solution for that is using a message inspector and a custom service behavior.

3. Use a MessageInspector and add a custom behavior to our service to validate all incoming messages

We have a SaasService which will expose a few hundred operations, for example.
For our first method we used quite some code to find the header, get the header, and validate whether the header content is a valid subscriptionID.

However, if I had to write this exact code 99 more times for the other 99 methods I need to implement, that would be silly. We could put the message header validation code in one method and call that validation method from every operation, having it return, for example, a boolean. That would save us from duplicating our validation code, but it still requires 100 methods which all call the same validation method, with each operation executing only based on the result of that validation method.

Another possibility is to add a message inspector, which inspects the incoming message before it is handed off to the service operation. In the message inspector we inspect the message headers to check for the subscriptionID and do some validation if it is provided. If it is not provided, or an incorrect subscriptionID is provided, we return an error that the service consumer does not have access to the service. It saves us from writing any validation code at all in our service. We can simply write operations and assume that if the operation is invoked, the client invoking it has valid access. Quite a nice solution if you ask me.

How do we implement this:

1. Create an IDispatchMessageInspector at the service side

We start by creating a class “SaasServiceMessageInspector”, which will inspect each message sent to our SaasService, checking whether the subscriptionID message header is present and validating it.
To create a message inspector at the service side, we need to implement IDispatchMessageInspector, which is an interface provided to create message inspectors at the service side. The method we need to implement, in our case, is AfterReceiveRequest, which gives us access to the message after it has been received.

The implementation (click to enlarge):

WCF IDispatchMessageInspector check message headers

The message is passed into the AfterReceiveRequest method, so we can just use request.Headers to find out whether our SubscriptionID header is present and whether a valid subscriptionID is provided. If no subscriptionID, or no valid subscriptionID, is provided, we return a FaultException to the client notifying them that access to the service is denied.

Instead of validating the subscriptionIDs against a database or any store, we just check whether the subscriptionID matches “123-456789-098”. If it does not match this subscriptionID, the client should be denied access to the service, even before any method gets invoked. An ideal scenario to easily test our code in a demo.
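The inspector might be sketched as follows (reconstructing the screenshot above; the fault text is illustrative):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class SaasServiceMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // FindHeader returns -1 when the header is not present.
        int index = request.Headers.FindHeader("SubscriptionID", "");
        if (index < 0 ||
            request.Headers.GetHeader<string>(index) != "123-456789-098")
        {
            throw new FaultException("Access denied: invalid or missing SubscriptionID.");
        }
        return null; // no correlation state needed
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```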

Now that we have implemented the IDispatchMessageInspector, we need to write some code so that this IDispatchMessageInspector is linked to our SaasService and the message inspection happens at the service level. Any operation of our service that might get invoked needs to have the message inspected first by the message inspector.

2. Create an IServiceBehavior and attach the custom behavior by attribute

To attach our new SaasServiceMessageInspector to our SaasService, we need to create a new behavior (like those we define in our web.config, for example).
There are multiple behaviors we can extend:
There are multiple behaviors we can extend:

  • IServiceBehavior: Applies to the entire service
  • IEndpointBehavior: Applies to a specific endpoint
  • IContractBehavior: Applies to a specific contract
  • IOperationBehavior: Applies to a specific operation

Since we want to attach our MessageInspector to every operation of our service, we will extend IServiceBehavior. The default method to extend is ApplyDispatchBehavior, which allows us to add components to the WCF runtime: (click to enlarge image)

Attach wcf message inspector by IServiceBehavior

For each ChannelDispatcher we get each EndpointDispatcher and add a SaasServiceMessageInspector to the DispatchRuntime’s message inspectors.
Notice that we also inherit from Attribute, which allows our SaasServiceMessageInspectorBehavior to be used as an attribute.
Also make sure you have the System.ServiceModel.Dispatcher namespace imported for the code above.
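Reconstructing the screenshot, the behavior might look roughly like this:

```csharp
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Inheriting from Attribute allows [SaasServiceMessageInspectorBehavior]
// to be placed directly on the service class.
public class SaasServiceMessageInspectorBehavior : Attribute, IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Attach our inspector to every endpoint of every channel dispatcher.
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
                endpointDispatcher.DispatchRuntime.MessageInspectors
                    .Add(new SaasServiceMessageInspector());
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase) { }
}
```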

We can attach our new behavior SaasServiceMessageInspectorBehavior to our service as follows:

Extend WCF servicebehavior

We simply attach the custom behavior as an attribute on our service, and the custom behavior is automatically attached to our SaasService, giving us a message inspector attached to each endpoint’s DispatchRuntime.
Also notice that our InvokeMethod no longer contains any validation code at all. Only the method’s functionality remains.

Our service solution looks like this:

WCF Service with custom behavior and message inspector

Our client console application remains the same. The subscriptionID used at our client console currently is:

This is the valid subscriptionID we check for in our message-inspector, so we should be able to invoke the method:
Executing the client console application:

WCF Message Inspector Custom behavior

Works as intended. If we change the subscriptionid at our client to an incorrect subscription and try to invoke a service method:

WCF Messageheader

WCF Message header message inspector

Again, working as intended. So we created a custom message inspector and a custom service behavior, which performs the message header validation for the entire service in one single, isolated place.

4. What about using a message inspector and custom behavior at the client to add message headers to the outgoing messages

Now we did this for the service, which is a great solution. But what about the client? If the client has to invoke hundreds or thousands of methods on the SaasService, it has to create an OperationContextScope, create a message header, and attach it to the message headers before invoking each operation. (Yes, you could create only one proxy, attach the message header there, and always return that proxy for invoking operations.) Well, it is possible to create the same behavior on the client side: create a message inspector that adds a message header to the request, instead of checking for it, and create a custom behavior to attach to your client proxy.

I will not walk through all the code in detail again, as it closely resembles the previous section.

We start by creating our message inspector at the client side (click to view at full size):

IClientMessageInspector WCF message inspector

Notice that we now inherit from IClientMessageInspector, not from IDispatchMessageInspector! The IDispatch interfaces are for the service side, while the IClient interfaces are for the client side.
We implement the BeforeSendRequest method, which is invoked before the message is handed over to the channel that routes it to the service. In this method we create the message header and attach it to our request.
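A sketch of the client-side inspector (the class name MessageHeaderInspector matches the extension name used later in the configuration; the header value is the demo subscriptionID):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class MessageHeaderInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Add the SubscriptionID header to every outgoing message.
        MessageHeader<string> typedHeader = new MessageHeader<string>("123-456789-098");
        request.Headers.Add(typedHeader.GetUntypedHeader("SubscriptionID", ""));
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}
```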

Next we create a custom behavior, just as in the previous section for the service side. This time we inherit from IEndpointBehavior, not from IServiceBehavior:

IEndpointBehavior custom wcf behavior

The IServiceBehavior is for the service side, so it is not applicable to the client. The IEndpointBehavior is ideal, as its scope is a specific endpoint, which in our case is the endpoint to the SaasService. We use ApplyClientBehavior. Notice there is also an ApplyDispatchBehavior, in case you use this behavior on the service side. So the IEndpointBehavior can be used for both the client and the service.
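The endpoint behavior might be sketched like this (reconstructing the screenshot; SaasEndpointBehavior is the behavior name used in the configuration later):

```csharp
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class SaasEndpointBehavior : IEndpointBehavior
{
    // Client side: attach the message inspector to the client runtime.
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new MessageHeaderInspector());
    }

    // Service-side counterpart: unused in this client scenario.
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher) { }

    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceEndpoint endpoint) { }
}
```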

In the previous section we attached the behavior to the service by attribute. However, for an endpoint behavior this is not possible; it can only be attached through the configuration file. To attach custom behaviors by configuration file, we need to create an extension derived from BehaviorExtensionElement:

BehaviorExtensionElement WCF custom behavior
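A minimal extension might look like this (a sketch; the class name MessageHeaderInspectorExtension is an assumption):

```csharp
using System;
using System.ServiceModel.Configuration;

public class MessageHeaderInspectorExtension : BehaviorExtensionElement
{
    // Tell WCF which behavior type this configuration element produces.
    public override Type BehaviorType
    {
        get { return typeof(SaasEndpointBehavior); }
    }

    // Create the behavior instance when the config is loaded.
    protected override object CreateBehavior()
    {
        return new SaasEndpointBehavior();
    }
}
```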

You will also need to add a reference to System.Configuration. Now that we have created the behavior extension, we can add it in our app.config:

Behavior extension WCF

To add our behavior extension to our app.config, we add an <extensions> node under <system.serviceModel> and, inside it, a <behaviorExtensions> node containing our custom behavior extension.
We now create an endpoint behavior that uses our MessageHeaderInspector extension:

WCF EndpointBehavior extension

Notice that the behavior uses the “MessageHeaderInspector” element, which is defined in our behaviorExtensions under that name.
We now attach this endpoint behavior to our client endpoint:

WCF Client endpoint behavior

This way our endpoint is linked to the endpoint behavior “SaasEndpointBehavior”, which is our endpoint behavior using the MessageHeaderInspector, which is our custom code to add a subscriptionid message header to each outgoing message for the endpoint we defined.
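Put together, the relevant app.config fragment might look roughly like this (a sketch; the assembly-qualified type name and the contract name are assumptions based on the project names):

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- Assumed assembly-qualified name for the BehaviorExtensionElement -->
      <add name="MessageHeaderInspector"
           type="WCF.MessageHeaders.Client.MessageHeaderInspectorExtension, WCF.MessageHeaders.Client" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="SaasEndpointBehavior">
        <MessageHeaderInspector />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <client>
    <!-- Contract name is an assumption for the generated service reference -->
    <endpoint address="http://localhost/WCF.MessageHeaders.Service/SaasService.svc"
              binding="basicHttpBinding"
              contract="SaasServiceReference.ISaasService"
              behaviorConfiguration="SaasEndpointBehavior" />
  </client>
</system.serviceModel>
```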
Our client console code now looks as follows:

WCF client proxy with message header

We no longer add the message header in our own code; our message inspector and custom behavior take care of this for every operation we invoke with our proxy on the defined endpoint.
Our solution now looks like this:

WCF Message inspector and custom behavior client and service

When executing our client console application:

Client with message inspector and custom behavior
Any suggestions, remarks or improvements are always welcome.
If you found this information useful, make sure to support me by leaving a comment.

Proof-of-Concept Design

April 10, 2014 § Leave a comment


Odysseas Pentakalos, Ph.D.

Summary: This article shows how the development of a proof-of-concept can bridge the gap very effectively between how the software product is envisioned during requirements definition and how it is ultimately delivered to the customer. (6 printed pages)


Contents


Introduction
Building Expectations
Controlling Expectations
Conceptual Versus Deliverable
Conclusion
Critical-Thinking Questions
Sources
Glossary


Introduction


A few years ago, I worked on a multiyear project in which we were tasked with the staged development of a custom-made enterprise application for a large organization. Each stage of the project lasted 8 to 12 months and involved the development of a distinct component of the overall solution. Development of the deliverables of each stage was treated as a separate software-development project in which we went through the entire software-development life cycle (SDLC) in producing the deliverables. Over the years, the customer had settled on its preferred software-development process internally, which resembled the waterfall model [Wikipedia] and which the client wanted us to use. Despite our initial resistance to it, we were ultimately forced to follow it.


Building Expectations


The goal of the first stage of development was fairly limited in scope, compared with the goals of later stages of the project. The single deliverable consisted of the development of a simple prototype that would help the customer team illustrate to their end users the goals of the project. Due in part to everyone’s excitement at being involved in a new project and in part to the limited scope of the deliverable, we were able to complete the fully functional deliverable early. The customer was very pleased with the results and developed considerable respect for our abilities, which contributed to our team developing a high level of confidence.


The next stage involved considerably more functionality. The requirements were not as clear, and the time that was allotted was not much longer than what we had available during the first stage of development (despite the much greater scope of the deliverables in this stage). Of course, with our recently acquired confidence, none of those issues was much cause for concern at the time. After a lot of work, late nights, and stress, we managed to complete development.


Next, we prepared for the meeting with the customer. Given our experience with the first stage of the project, we expected that the customer would be awed with our results and that, after offering considerable amounts of praise, they would return to their office completely satisfied with yet another successful delivery. We expected also that they, in turn, would expect us to deliver a fully functional system that did exactly what we all had originally envisioned. From the beginning of the meeting, however, it quickly became clear that what they had envisioned and what we actually had delivered were considerably different concepts. The situation became increasingly tense, as we came to realize that feature after feature that we had provided totally failed (in their opinion) to meet their expectations.


Controlling Expectations


After that meeting, having barely survived a cancellation of the contract, and before moving on to begin the requirements-definition cycle for the next stage of the project, we jointly decided that for the next stage of development we would provide the customer with a proof-of-concept (POC) system at two checkpoints before the overall project would be due. This would allow both our customer and us to confirm that the solution that we were developing was in line with their expectations; and, if not, to allow us to get back on track before it was too late.


The decision to incorporate the development of POC systems for the rest of the stages of that project was a considerable factor towards successful completion of the overall project. Through that experience, we learned a number of lessons with regard to the value of a POC system.


In [DeGrace, et al. 1990], the authors list four reasons for the failure of the waterfall approach for software development [Sutherland 2004]:


· Requirements are not fully understood before the project begins.


· Users know what they want only after they see an initial version of the software.


· Requirements change often during the software-construction process.


· New tools and technologies make implementation strategies unpredictable.


In our experience with the project that was described earlier, the development of a POC system provided a cure for three of these issues. First of all, by developing a POC for the customer, we were forced to understand fully the requirements early on. The understanding was much deeper than what one normally receives at the early stages of a new project by simply reading through the requirements, or while incorporating them into the architecture of the system.


The failure that we experienced on that project after the second stage of development was in part due to the second issue that was listed previously. The users were disappointed with our delivery, not only because we misinterpreted the requirements, but also because they did not really know what they wanted until they had seen the deliverable—which, unfortunately, was not what they had in mind. After the customer got a chance to review the POC—which we provided them in the later stages of the project—they got a better idea of what they wanted the final deliverable to look and behave like. Having the POC system also gave us the opportunity to communicate to the user the look and feel of the final product much more vividly than through the use of design documents and design reviews. Seeing the POC allowed them, on the one hand, to adjust their requirements to match exactly what they wanted and, on the other hand, to better define their expectations for the final deliverable. As a result of these adjustments, the customer was much happier with the end result.


At times during the development, we chose to incorporate new technologies; at other times, we were forced to do so. Many of the details that were needed during the design and development of software become known only during the implementation stages [Parnas 1986]. It has been shown consistently that design mistakes that are found early in the software-development life cycle are cheaper to fix than when they are detected later down the road [Eeles 2006]. The POC system provided us with lots of feedback and information of which we were not aware, and allowed us to adjust our design decisions well before the cost of backtracking became too high. The development of a POC system gave us the opportunity to understand and evaluate how to best incorporate those technologies into our design, without having to worry about the complexities of developing the full scope of the system.


As soon as we had a good handle on the capabilities and idiosyncrasies of each new technology (by observing its behavior and operation during the POC development), we were in much better shape to incorporate it into the final product. The risks that we undertook when using the new technologies had been reduced simply due to the fact that, after testing the technology within the scope of the POC system, they were no longer new to our team.


Conceptual Versus Deliverable


It is important to keep in mind that a POC is just a prototype and does not represent the deliverable. POC systems are usually developed quickly and without a lot of testing, so that they do not make good candidates for early versions of the final deliverable. In cases in which the deliverable includes a user interface, the POC is a façade that illustrates the look and feel of the interface; but there is no functionality behind the façade, much like the houses that are used in Hollywood movie studios. In cases in which the product is an application programming interface (API), the POC illustrates the methods and functionality that the API will provide; but the implementations of the methods are simply stubs that will not perform real work.
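As a hypothetical illustration of such an API stub (the service and method names here are invented for this sketch, not taken from the article):

```csharp
using System;

// Hypothetical API contract agreed on during requirements definition.
public interface IOrderService
{
    decimal GetOrderTotal(int orderId);
}

// POC stub: shows the shape of the API, with no real work behind it.
public class OrderServiceStub : IOrderService
{
    public decimal GetOrderTotal(int orderId)
    {
        // No database, no business rules -- just canned data,
        // enough to drive a demo or exercise a calling client.
        return 99.95m;
    }
}
```

A client written against IOrderService can be demonstrated end to end, while the stub behind it is thrown away once the real implementation is built.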


When the POC has served its purpose, it is best to throw it away and start with the development of the deliverable. Developers must resist the urge to start development of the final deliverable by enhancing the existing POC. At the same time, the development team must communicate to the customer that the POC is a prototype that looks like the desired system, but, in reality, is just smoke and mirrors. Otherwise, one runs the risk of raising expectations to the point at which the customer will expect the delivery of the rest of the system at the same pace that the prototype was developed.


The features that are implemented as part of the POC should have the key features of the project—especially, the parts of the system that have many unknowns or represent increased risk. At the same time, components of the system that are repetitive implementations of a given concept should be excluded. Implementation of a single instance of a given concept in the POC will provide all of the information that is needed to match a customer’s expectations successfully.


The decision with regard to whether to incorporate one or more POC systems into the schedule also depends on the software-development process that is being used. Despite much criticism—even including some by its own founder [Parnas 1986]—the waterfall model is still quite popular. Because the waterfall model and some of its close relatives do not incorporate iterations that allow for the revision of the requirements and design decisions as development progresses, it is especially important to include the development of a POC system. Other development processes, such as the Spiral [Boehm 1985] or Scrum [Sutherland 2004], prescribe the development of a prototype or early versions of the deliverable that are to be refined over time. These processes can preclude the need for—or the value derived from—the explicit development of a POC system.


Conclusion


As it became very clear to us, when engaging in a new project, it is imperative that the development of one or more POC systems be considered before one settles on the architecture and design of the final deliverable. Development of a POC system provided us with many benefits including, among others, the:


· Very clear understanding of requirements.


· Understanding of the capabilities and limitations of new technologies.


· Ability to assess design decisions early in the process.


· Ability for the customer to visualize early on the look-and-feel of the solution.


· Reduction in the overall risk of project failure.


We had to be careful of the features that we incorporated into our POC system. We had to be very clear to the customer, as well as to the development team, that the result was a POC system and not an early version of the final deliverable. Although the use of a POC provided benefits, we had to be careful about how we incorporated it into the existing software-development process.


Critical-Thinking Questions


· What software-development process are you using? How does the possibility of a POC system fit in with it?


· What is the nature of the POC system for the project at hand? Are you developing an application with a user interface, an API that will be used by third-party developers, or a product that is defined by your marketing team?


· What key features must go into the POC? What features can safely be left out? What aspects of the design must be evaluated and tested, to ensure that the correct decisions have been made?


· What new technologies are being incorporated into your project? How can they be used in the POC system?


Sources


· [Boehm, 1985] Boehm, Barry W. “A Spiral Model of Software Development and Enhancement.” Proceedings of an International Workshop on Software Process and Software Environments, Coto de Caza, Trabuco Canyon, CA. March 27-29, 1985.


· [DeGrace, et al. 1990] DeGrace, Peter, and Leslie Hulet Stahl. Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms. Englewood Cliffs, NJ: Yourdon Press, 1990.


· [Eeles 2006] Eeles, Peter. “The Process of Software Architecting.” developerWorks. April 15, 2006.


· [Parnas 1986] Parnas, David L., and Paul C. Clements. “A Rational Design Process: How and Why to Fake It.”IEEE Transactions on Software Engineering. February 1986.


· [Sutherland 2004] Sutherland, Jeff. “Agile Development: Lessons Learned from the First Scrum.” Cutter Agile Project Management Advisory Service: Executive Update, 2004, 5(20): pp. 1-4.


· [Wikipedia] Various. “Waterfall model.” Wikipedia, the Free Encyclopedia. December 26, 2007.


Glossary


Proof-of-concept—A short and/or incomplete realization of a certain method or idea to demonstrate its feasibility, or a demonstration in principle whose purpose is to verify that some concept or theory is probably capable of exploitation in a useful manner [Wikipedia].


Software-development process—A structure that is imposed on the development of a software product [Wikipedia].

 

Old Monk – here are some things you may not know about it (we know you love it anyway)

March 27, 2014 § Leave a comment


 

Old Monk lovers drink nothing but Old Monk, and it is India’s favourite drink. Old Monk lovers do not care about expensive single malts or crazy wines – they just want the Monk. Almost every Indian man has a story to tell about his Old Monk experiences – and we’ll save why we are in love with it for another day. Meanwhile, here are a few things that you might not know about this heavenly rum:

There is a beer by the name of Old Monk 10000 Super Beer.

(If you know where to buy this in India, please let us know at @tadtop)


Old Monk was first produced by Mohan Meakin Ltd at Kasauli in the Himalayan Mountains.


It is now produced in Ghaziabad, Uttar Pradesh.


 

Old Monk has never advertised.


It is sold with different alcohol content in India (42.8%) and the USA (40%). The Army-issue alcohol content is 50%.


The first time it was tasted officially was 19 December 1854 – we declare it OLD MONK DAY – can we celebrate it as a national wet day?


Old Monk lost its rank as the largest selling dark rum in the world to McDowell’s No.1 Celebration Rum. Huh, as if we’re in that Quality Vs Quantity game.


Old Monk is also the third-largest-selling rum in the world. It comes in different shapes and sizes.


 

 ALWAYS REMEMBER THIS:

(pssst: Hardcore is Old Monk + water; neat Old Monk is level X)


And this: there are no Old Monk fans, just lovers.

Date A Guy Who Smokes

March 26, 2014 § Leave a comment


By GRAGORY NYAUCHI

Date a guy who smokes. You’ll find him standing under the trees in a garden, or stepping out of a club into the fresh air to have a puff; you’ll see him taking a walk as smoke trails from his nose, or lost in thought, appreciating the simple beauty that nature offers every day. Talk to him: smokers are the friendliest people you will ever meet. Ask him for a cigarette and watch him reach into his packet or pocket and hand you one, even if it’s the last one. If he can’t find one, bask in the selflessness of him offering you the half that he is holding in his hand.

Date a guy who smokes because he can make friends in a minute, he will stop and talk to people from all walks of life. Paupers, popes, poets, and posers will stand around with him for at least the length of a cigarette. Here is a man who knows how to share the world with everyone that the earth has thrown out of its belly, here is a man who has come to understand that there is something of worth in a shared word.

Date a guy who smokes because nobody can live in the moment more than him. Watch him take a puff as he stands still and lets the world pass him by. Watch him not need the chatter of conversation in order to feel comfortable in the world; watch as he looks at the world with wonder and amazement, as happy to be in the Hilton as he is to be in a dingy, leaky bar, because as long as he has his cigarette everything is OK with the world.

Date a guy who smokes because he realises the world isn’t ending soon. He is a man who has met with triumph and disaster and managed to treat those two imposters just the same, welcoming them into his life with a cigarette in his mouth and a fire in his hands. Date him because he understands fire. Because he knows that passion held too long can scorch you but that passion not properly nurtured will leave you cold and needing to try again. A guy who smokes knows that a gust of wind is all the world needs to throw at you in order to put off your dreams for a season, so he holds his dreams close; he cups them in his hands because he wants to protect them from the world.

Date a guy who smokes because, even though he may not have a solution to every crisis, he knows that time changes everything. Look at him when something horrible happens: maybe his face will crumple a little, maybe his shoulders will sag; there may be fear and panic in his eyes. This too will pass, in no time at all. Watch as he stands up straight as a rod, fumbles in his pocket for a light, and lights the cigarette that you didn’t even see him put in his mouth. Watch as every puff he takes restores some of his balance. Look closely enough and you will see his mind churning or his soul making peace with the world.

Watch him smoke at night with the lights off. See the flame from the cigarette light him up for just a moment before this illumination turns to smoke. Realise that he has had many moments like these. He understands the value of illusion better than most, it gives a warm glow but in no time at all it turns to smoke and ash. Maybe this is why he doesn’t lie so much, he knows that all lies flit away in the night air disappearing and leaving nothing but a bad smell in the room.

Date him because he stands his ground every day. Date him because he, more than anyone else, hears about the hazards of smoking. He has heard the arguments against it and he has read the reports about it, but he did the thing that men do: he stood his ground. Even if the whole world were arrayed against him, he would stand there between it and what he cares about.

It’s easy to date a guy who reads, for he will be faithful to you as he is to the countless storybook heroes and heroines, villains and villainesses that passed his beady little eyes as they strolled down the pages of the books he read.

Date him because he finds pleasure every day in one small way. Date him because he can find joy in every waking moment: the joy of anticipation as he waits for his first cigarette, the child-like wonder and awe that is writ on his face when he finds a cigarette after giving up on ever doing so. Let these emotions into your life; there are shabbier companions than joy and wonder.

Most of all, date him because he understands love. He understands that sometimes love is wrong for you, but that love is worth being wrong. He knows that love hurts, and he knows intimately that love kills, but that doesn’t stop him from loving. He is a man who has found something that he loves and is willing to let it kill him. He is a man with a capacity for great love, for depthless, selfless love. He shows it every time he smokes.

So find a man who smokes and share a cigarette with him.

The Repository Pattern Example in C#

March 4, 2014 § Leave a comment


The Repository Pattern is a common construct for avoiding duplication of data-access logic throughout an application. This covers direct access to a database, an ORM, WCF data services, XML files and so on. The sole purpose of the repository is to hide the nitty-gritty details of accessing the data. We can easily query the repository for data objects without having to know how to provide things like a connection string. The repository behaves like a freely available in-memory data collection to which we can add, delete and update objects.

The Repository pattern adds a separation layer between the data and domain layers of an application. It also makes the data access parts of an application better testable.
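To illustrate the testability point with a hypothetical sketch (this in-memory fake is not part of the original example; its members mirror the IRepository<T> interface defined below), a fake repository backed by a plain list can stand in for the real data source in unit tests:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical in-memory fake repository for unit testing (a sketch).
public class InMemoryRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    public void Insert(T entity)
    {
        _items.Add(entity);
    }

    public void Delete(T entity)
    {
        _items.Remove(entity);
    }

    public IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate)
    {
        // LINQ to Objects evaluates the same predicate expression that the
        // real repository would hand to the database provider.
        return _items.AsQueryable().Where(predicate);
    }

    public IQueryable<T> GetAll()
    {
        return _items.AsQueryable();
    }
}
```

A unit test can then insert a few known entities and assert on the results of SearchFor() or GetAll(), with no database or connection string involved.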

You can download or view the solution sources on GitHub:
LINQ to SQL version (the code from this example)
Entity Framework code first version (added at the end of this post)

The example below shows the interface of a generic repository of type T, where T is a LINQ to SQL entity. It provides a basic interface with operations like Insert, Delete, GetById and GetAll. The SearchFor operation takes a lambda expression predicate to query for a specific entity.

using System;
using System.Linq;
using System.Linq.Expressions;

namespace Remondo.Database.Repositories
{
    public interface IRepository<T>
    {
        void Insert(T entity);
        void Delete(T entity);
        IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate);
        IQueryable<T> GetAll();
        T GetById(int id);
    }
}

The implementation of the IRepository interface is pretty straightforward. In the constructor we retrieve the entity table by calling the data context's GetTable<T>() method. The resulting Table<T> is the entity table we work with in the rest of the class methods; e.g. SearchFor() simply calls the Where operator on the table with the predicate provided.

using System;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

namespace Remondo.Database.Repositories
{
    public class Repository<T> : IRepository<T> where T : class, IEntity
    {
        protected Table<T> DataTable;

        public Repository(DataContext dataContext)
        {
            DataTable = dataContext.GetTable<T>();
        }

        #region IRepository<T> Members

        public void Insert(T entity)
        {
            DataTable.InsertOnSubmit(entity);
        }

        public void Delete(T entity)
        {
            DataTable.DeleteOnSubmit(entity);
        }

        public IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate)
        {
            return DataTable.Where(predicate);
        }

        public IQueryable<T> GetAll()
        {
            return DataTable;
        }

        public T GetById(int id)
        {
            // Sidenote: the == operator throws a NotSupportedException
            // ('The Mapping of Interface Member is not supported').
            // Use .Equals() instead.
            return DataTable.Single(e => e.ID.Equals(id));
        }

        #endregion
    }
}

The generic GetById() method requires all our entities to implement the IEntity interface, because we need them to provide an ID property to make our generic search for a specific id possible.

namespace Remondo.Database
{
    public interface IEntity
    {
        int ID { get; }
    }
}

Since we already have LINQ to SQL entities with an ID property, declaring the IEntity interface on them is sufficient. Because these are partial classes, they will not be overwritten by the LINQ to SQL code-generation tools.

namespace Remondo.Database
{
    partial class City : IEntity
    {
    }

    partial class Hotel : IEntity
    {
    }
}

We are now ready to use the generic repository in an application.

using System;
using System.Collections.Generic;
using System.Linq;
using Remondo.Database;
using Remondo.Database.Repositories;

namespace LinqToSqlRepositoryConsole
{
    internal class Program
    {
        private static void Main()
        {
            using (var dataContext = new HotelsDataContext())
            {
                var hotelRepository = new Repository<Hotel>(dataContext);
                var cityRepository = new Repository<City>(dataContext);

                City city = cityRepository
                    .SearchFor(c => c.Name.StartsWith("Ams"))
                    .Single();

                IEnumerable<Hotel> orderedHotels = hotelRepository
                    .GetAll()
                    .Where(c => c.City.Equals(city))
                    .OrderBy(h => h.Name);

                Console.WriteLine("* Hotels in {0} *", city.Name);

                foreach (Hotel orderedHotel in orderedHotels)
                {
                    Console.WriteLine(orderedHotel.Name);
                }

                Console.ReadKey();
            }
        }
    }
}

Repository Pattern Hotels Console

Once we get off the generic path into more entity-specific operations, we can create an implementation for that entity based on the generic version. In the example below we construct a HotelRepository with an entity-specific FindHotelsByCity() method. You get the idea. 😉

using System.Data.Linq;
using System.Linq;

namespace Remondo.Database.Repositories
{
    public class HotelRepository : Repository<Hotel>, IHotelRepository
    {
        public HotelRepository(DataContext dataContext) 
            : base(dataContext)
        {
        }

        public IQueryable<Hotel> FindHotelsByCity(City city)
        {
            return DataTable.Where(h => h.City.Equals(city));
        }
    }
}

[Update July 2012] Entity Framework version

The code below shows a nice and clean implementation of the generic repository pattern for the Entity Framework. There’s no need for the IEntity interface here, since we can use the convenient Find method of the DbSet class. Thanks to my co-worker Frank van der Geld for helping me out.

using System;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

namespace Remondo.Database.Repositories
{
    public class Repository<T> : IRepository<T> where T : class
    {
        protected DbSet<T> DbSet;

        public Repository(DbContext dataContext)
        {
            DbSet = dataContext.Set<T>();
        }

        #region IRepository<T> Members

        public void Insert(T entity)
        {
            DbSet.Add(entity);
        }

        public void Delete(T entity)
        {
            DbSet.Remove(entity);
        }

        public IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate)
        {
            return DbSet.Where(predicate);
        }

        public IQueryable<T> GetAll()
        {
            return DbSet;
        }

        public T GetById(int id)
        {
            return DbSet.Find(id);
        }

        #endregion
    }
}