ASP.NET MVC, Dependency Injection and The Bliss of Truly Decoupled Applications

December 1, 2014

In traditional software and web applications, application layering is one of the tenets of good application design and architecture. It stems from the separation of concerns principle in computer science, which is the process of separating applications and programs into distinct features that overlap in functionality as little as possible. Hence the single responsibility principle and DRY concepts in software engineering and object-oriented programming.

Typically, in layered applications, a given application layer should only communicate with and depend on the layer directly below it.

Take the following diagram for example, taken from the Microsoft Application Architecture Guide v2 (I added the red arrow indicators that typically represent where dependencies are):


In this example, the Presentation Layer would talk to and depend on the Business Layer. The Business Layer would talk to and depend on the Data Layer. The Data Layer would talk to and depend on the database.

This is generally considered good application design despite there being dependencies between layers. However, traditionally, layer n-1 dependency (one-layer dependency) has been the normal and accepted practice. But experience tells us that all too often we find dependencies on more than just one layer. In fact, we may find dependencies between multiple layers, or worse, between all layers, ending up with something like this:


The application may work fine. In fact, it may work perfectly. But the net effect, in the case of either diagram, is that, with each dependency, change becomes more difficult. Our apps become more difficult to maintain, more difficult to test, and more difficult to change, which can result in longer development cycles, higher costs, and increased risk of failure on our projects.

Usually, these dependencies come in the form of concrete class names. For example:

MyService service = new MyService();

As Scott Hanselman would say, “anytime you find yourself writing code like this, quit your job and go do something else.” No seriously, you should at least stop and acknowledge what this really is – a dependency on an implementation. Sure, factory classes and singletons come in handy to remove the need to new things up, but we still end up with a dependency – only now we’re dependent on the Singleton or the Factory.

Ideally, we want to declare a dependency on “some thing” without saying what that “some thing” is.

A simplified analogy

If you are hungry, you could ask your friend, “do you have a hamburger?” in which case you are expressing a need for an explicit implementation of something that satisfies hunger (a hamburger). If your friend has a hamburger, great! He would give it to you, you would eat it, and you wouldn’t be hungry anymore. Problem solved! But if your friend does not have a hamburger, you’d still be hungry – mainly because you were too specific in your request. Now, instead of asking for the hamburger, you could ask your friend, “do you have anything to eat?” In this case, you are expressing the “need” (food or something that satisfies hunger) vs. a concrete implementation. This way, if your friend has anything to eat he can give it to you (regardless of whether it is a hamburger or not) and you won’t go hungry. Any food would suffice.

What’s the point?

The point is that by being less specific in our request for food, we are more adaptable. The same is true with software. If we simply declare dependencies on contracts (interfaces), rather than implementations, our software becomes more adaptable and easier to change. Dependency Injection exists to help you do just that.

The goal of Dependency Injection (DI for short) is to separate behavior (or implementation) from dependency resolution, which is really just encapsulation – one of the main principles of computer science and object-oriented programming. I like to think of Dependency Injection as “intra-app SOA”; the end result being a highly decoupled application composed of “services” with explicit service boundaries and contracts (or service interfaces) where any given application layer has no knowledge of any other layers. It cares about neither the number of layers nor the implementation within each layer. Each layer simply depends on a contract and can be reasonably sure that at runtime there will be at least one implementation available to satisfy that contract. With Dependency Injection on our side, the above diagram might change to look something like this:


At first glance, this doesn’t look much different from the first diagram. We still have “dependencies.” However, now we are dependent on a contract, not an actual implementation. This provides enormous benefits to us as application developers because our application layers are now plug-n-play. They are hot-swappable like hard drives in a RAID configuration. We can change the implementation of a layer and as long as we implement the agreed upon interface, we can rest assured we won’t break something in another layer.

Of course, we still have to unit test our new layer to make sure we don’t have any internal bugs, but as long as other layers only depend on the interface (not the implementation) we know we can reliably swap out an implementation without affecting other parts of an application or system. Ideally, each implementation of an application layer becomes a “black box” to the other layers with which it interacts.

In the case of our MyService above, instead of writing our code like:

MyService service = new MyService();

using Dependency Injection, we would instead declare a property on our class:

public IService Service { get; set; }

or we would use constructor injection and have something like:

public class HomeController : Controller
{
    private readonly IService _service;
    public HomeController(IService service) { _service = service; }
}


As you can see, we are now expressing a dependency on a contract (an interface) rather than an implementation and we are now “wired” for a Dependency Injection/IoC framework to resolve these dependencies for us without explicitly identifying them in our code.

You might say, “that’s all fine and good, but how do I make sure that my application is only dependent on contracts/interfaces?” More importantly, for existing applications that might not have been written this way, how do I find all the application dependencies and extract them into interfaces in order to move to a DI-friendly application architecture?

This is where being a .NET developer in this day and age makes your life much easier. Thanks to some new features in Visual Studio 2010, you can now answer those questions fairly easily. If you have one of the higher-level VS2010 SKUs (Premium or Ultimate), you have the ability to create application architecture layer diagrams. While you may have known that, you may not be aware that you can also validate an application against a layer diagram and have Visual Studio generate the dependencies between your layers.

Using this feature, not only can you say, “my application should look like this” by creating an application layer diagram, but with the validation feature, you can ask the question, “does my application look like this?”

To get started with this feature, let’s take the following ASP.NET MVC project that I’ve set up as an example for this post. (I’ve circled the areas of immediate interest.)

As you can see, I have actually organized my solution folders to mimic my application layering. We have a Repository/Data Access layer, we have a Services layer and we have a Presentation/UI layer. You’ll also notice that we have a Contracts project (or layer) which contains our interfaces.

So our dependencies go something like this:

  • Site (our ASP.NET MVC app) depends on an IUserService.
  • Our Services depend on an IUserRepository and IUser.
  • Our Repositories depend on the IUser contract since that is the contract they return from their operations.

There are no dependencies between layers. They only depend on interfaces in the Contracts project.
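To make that concrete, the Contracts project might contain interfaces along these lines. This is only a sketch: the interface names (IUser, IUserRepository, IUserService) come from the sample solution, but the member signatures shown here are assumptions for illustration.

```csharp
using System.Collections.Generic;

// Hypothetical contents of the Contracts project. Every other layer
// depends only on these interfaces, never on a concrete class.
public interface IUser
{
    int Id { get; set; }
    string Name { get; set; }
}

public interface IUserRepository
{
    IUser GetById(int id);
    IEnumerable<IUser> GetAll();
}

public interface IUserService
{
    IUser GetUser(int id);
}
```

Because the Contracts project references nothing else in the solution, it can be referenced from every layer without creating a dependency between those layers.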

You may have also noticed that there are four different “repository” projects and three different “services” projects. This is where the plug-n-play concept I discussed above comes into play. ASP.NET MVC 2 comes with great support for Dependency Injection (which MVC 3 builds upon) which allows you to plug in your DI/IoC framework of choice for all your DI needs. In my case, I’m using Autofac. ASP.NET MVC provides an extensibility point that allows you to say, “anytime my application needs something to satisfy a contract/interface, here’s where to find it.” That “where to find it” part is where a DI/IoC framework plugs in to satisfy the dependencies of your application without having to declare explicit dependencies between your application layers and/or components.

Frameworks such as Autofac, Ninject, Castle Windsor, StructureMap, and Unity all have some concept of a “registry” which is basically an Interface-to-Implementation mapping or dictionary. With our MyService example above, we would be able to register our implementation MyService as the service that satisfies all dependencies on IService. Then, any time the application needs an implementation of IService, it will ask the DI container to provide one from its registry.
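As a sketch of what that registration looks like in practice, here is the MyService-to-IService mapping expressed with Autofac. The IService/MyService types here are the hypothetical ones from the example above, and the code assumes the Autofac package is referenced.

```csharp
using Autofac;

public interface IService
{
    string Greet();
}

public class MyService : IService
{
    public string Greet() { return "Hello from MyService"; }
}

public static class Bootstrapper
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();
        // Register the mapping: any dependency on IService
        // is satisfied by the MyService implementation.
        builder.RegisterType<MyService>().As<IService>();
        return builder.Build();
    }
}
```

With this in place, `container.Resolve<IService>()` returns a MyService, and a DI-aware controller factory (or, in MVC 3, the DependencyResolver) performs that resolution for you whenever a controller declares a constructor parameter of type IService.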

I won’t go into the details of how these dependencies get registered with or resolved by the DI/IoC frameworks. I’ll save that for another post or for you to read about in your own research or digging through the attached sample code.

Instead, we’ll jump right into the benefits that using Dependency Injection provides.

So, in the diagram, we happen to have four different implementations of a DataAccessLayer.

  1. DatabaseRepository – persists data to a SQL database
  2. FakeRepository – fakes the persistence to an underlying data store (ideal for unit testing)
  3. MongoDbRepository – persists data to a MongoDb database
  4. XmlRepository – persists data to an Xml file

All four Repository projects implement the IUserRepository contract/interface that our Services layer depends on. This allows us to reliably swap out one for another without affecting any code in the services layer at all. So, it would be trivial to add another repository that persisted data to Oracle, SQL Azure, Amazon SimpleDB, Microsoft Access, Excel, a flat-file, what have you – just so long as the repository implements the IUserRepository interface, we’re good.
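As an illustration, the fake repository might look something like the following. This is a sketch, not the sample code itself: the IUserRepository members and the seeded data are assumptions, since the real contract lives in the Contracts project of the attached solution.

```csharp
using System.Collections.Generic;

public interface IUser
{
    int Id { get; set; }
    string Name { get; set; }
}

public class FakeUser : IUser
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IUserRepository
{
    IUser GetById(int id);
    IEnumerable<IUser> GetAll();
}

public class FakeRepository : IUserRepository
{
    // Fakes persistence with an in-memory dictionary instead of a real data store.
    private readonly Dictionary<int, IUser> _store = new Dictionary<int, IUser>();

    public FakeRepository()
    {
        // Seed canned data so unit tests have something deterministic to work with.
        _store[1] = new FakeUser { Id = 1, Name = "Alice" };
        _store[2] = new FakeUser { Id = 2, Name = "Bob" };
    }

    public IUser GetById(int id)
    {
        IUser user;
        return _store.TryGetValue(id, out user) ? user : null;
    }

    public IEnumerable<IUser> GetAll() { return _store.Values; }
}
```

Because the services layer only ever sees IUserRepository, registering FakeRepository in the container instead of DatabaseRepository is a one-line change.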

Likewise, we have three different implementations of our Services layer.

  1. CachingService – retrieves data from an IUserRepository and caches the results
  2. FakeUserService – a service that fakes retrieving data from an IUserRepository (again, ideal for unit testing)
  3. UserService – same as the caching service (just without caching)

These services all implement the IUserService contract/interface that our presentation layer (UI) depends on.
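A caching service in this style might be sketched as follows. The member signatures are hypothetical; the point is that the service depends only on the IUserRepository contract, injected through its constructor, and never news up a concrete repository.

```csharp
using System.Collections.Generic;

public interface IUser
{
    int Id { get; set; }
    string Name { get; set; }
}

public interface IUserRepository
{
    IUser GetById(int id);
}

public interface IUserService
{
    IUser GetUser(int id);
}

public class CachingUserService : IUserService
{
    private readonly IUserRepository _repository; // a contract, not an implementation
    private readonly Dictionary<int, IUser> _cache = new Dictionary<int, IUser>();

    // The repository is injected; this class has no idea whether it is
    // talking to SQL, MongoDb, Xml, or a fake.
    public CachingUserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public IUser GetUser(int id)
    {
        IUser user;
        if (!_cache.TryGetValue(id, out user))
        {
            user = _repository.GetById(id); // hit the repository once
            _cache[id] = user;              // serve subsequent calls from cache
        }
        return user;
    }
}
```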

Now, on to creating an application architecture and validating these stated dependencies.

From the Architecture menu in Visual Studio, you select New Diagram.


In the dialog box that opens, select Layer Diagram and give it an appropriate name.


You should end up with a blank architecture diagram that looks like this:


Now, using either the Toolbox or the right-click menu, we can begin adding layers to our diagram. After adding our application layers, our diagram should look like the following:


Now, to validate our architecture (and dependencies), we first need to tell Visual Studio, what code/projects are in what layers. We start by dragging our projects (or solution folders) into the associated application layer. For example, in my sample solution, I would drag the entire DataAccessLayer solution folder into the Data Access layer on the diagram:


Here’s what we should end up with when we’re done:


It looks very similar to when we started, but now we have a small indicator in the upper-right-hand corner which tells us how many projects are associated with each layer.

Now for the magic!

Simply right-click anywhere in the white space on the application layer diagram and click Generate Dependencies.

If what I told you above about how my application is architected is true, then you should get an updated diagram that looks like the following:


Ah, now isn’t that diagram a breath of fresh air? In this diagram, Visual Studio is telling us that there are absolutely NO dependencies between the layers of our application! Rather, all application layers depend only on the Contracts project which is simply a collection of interfaces. This is the epitome of encapsulation and tells us that our application layers are decoupled from one another and can be swapped out for other implementations without risk to the rest of our application.

This is ideal for TDD scenarios and allows for simultaneous development on different application layers if we have already ironed out our contracts/interfaces. This means more parallel development can occur which can potentially reduce project timelines. And of course, with TDD on our side, we can reliably test individual layers and sign-off on them knowing that they are not dependent or affected by other layers/components whatsoever. End-result: higher-quality software developed in a shorter amount of time.

Now, if your application doesn’t look like this and looks more like the second picture with dependencies between every layer, don’t worry! Visual Studio can also help you find those dependencies so that you can factor out the concrete references into interfaces/contracts.

First, start by removing the dependencies from your diagram that you don’t want to have in your app – do this by right-clicking on the dependency arrow and selecting Delete. Next, after you have removed all the unwanted dependencies, simply right-click anywhere in the whitespace of the diagram and select Validate Architecture. Visual Studio will proceed to build your projects and determine if your application actually validates against your stated (desired) architecture. If it does not, the violations will show up in the Error List window and you can start going through these dependencies and replacing the concrete implementations with contracts/interfaces. Additionally, application architects can use this functionality in conjunction with TFS to prevent code check-ins that violate an application architecture diagram.

With Visual Studio 2010, ASP.NET MVC 2 & 3, and the rich support for Dependency Injection, you can begin extracting interfaces from your concrete classes to remove the hard dependencies in your apps and increase their maintainability. These features aren’t limited to MVC either. Many DI/IoC frameworks also work with ASP.NET WebForms as well as Windows Forms, WPF, and Silverlight. With these tools in your toolbox you too can begin enjoying the bliss that is a truly decoupled application that is easy to maintain, easy to test, easy to change and easy to replace when the next technology comes along!

Download the sample code:

For more reading on ASP.NET MVC and Dependency Injection I suggest you check out the following blogs:

Happy Injecting!

Proof-of-Concept Design

April 10, 2014


Odysseas Pentakalos, Ph.D.

Summary: This article shows how the development of a proof-of-concept can bridge the gap very effectively between how the software product is envisioned during requirements definition and how it is ultimately delivered to the customer.


Building Expectations
Controlling Expectations
Conceptual Versus Deliverable
Critical-Thinking Questions


A few years ago, I worked on a multiyear project in which we were tasked with the staged development of a custom-made enterprise application for a large organization. Each stage of the project lasted 8 to 12 months and involved the development of a distinct component of the overall solution. Development of the deliverables of each stage was treated as a separate software-development project in which we went through the entire software-development life cycle (SDLC) in producing the deliverables. Over the years, the customer had settled on its preferred software-development process internally, which resembled the waterfall model [Wikipedia] and which the client wanted us to use. Despite our initial resistance to it, we were ultimately forced to follow it.

Building Expectations

The goal of the first stage of development was fairly limited in scope, compared with the goals of later stages of the project. The single deliverable consisted of the development of a simple prototype that would help the customer team illustrate to their end users the goals of the project. Due in part to everyone’s excitement at being involved in a new project and in part to the limited scope of the deliverable, we were able to complete the fully functional deliverable early. The customer was very pleased with the results and developed considerable respect for our abilities, which contributed to our team developing a high level of confidence.

The next stage involved considerably more functionality. The requirements were not as clear, and the time that was allotted was not much longer than what we had available during the first stage of development (despite the much greater scope of the deliverables in this stage). Of course, with our recently acquired confidence, none of those issues was much cause for concern at the time. After a lot of work, late nights, and stress, we managed to complete development.

Next, we prepared for the meeting with the customer. Given our experience with the first stage of the project, we expected that the customer would be awed with our results and that, after offering considerable amounts of praise, they would return to their office completely satisfied with yet another successful delivery. We expected also that they, in turn, would expect us to deliver a fully functional system that did exactly what we all had originally envisioned. From the beginning of the meeting, however, it quickly became clear that what they had envisioned and what we actually had delivered were considerably different concepts. The situation became increasingly tense, as we came to realize that feature after feature that we had provided totally failed (in their opinion) to meet their expectations.

Controlling Expectations

After that meeting, having barely survived a cancellation of the contract, and before moving on to begin the requirements-definition cycle for the next stage of the project, we jointly decided that for the next stage of development we would provide the customer with a proof-of-concept (POC) system at two checkpoints before the overall project would be due. This would allow both our customer and us to confirm that the solution that we were developing was in-line with their expectations; and, if not, to allow us to get back on track before it was too late.

The decision to incorporate the development of POC systems for the rest of the stages of that project was a considerable factor towards successful completion of the overall project. Through that experience, we learned a number of lessons with regard to the value of a POC system.

In [DeGrace, et al. 1990], the authors list four reasons for the failure of the waterfall approach for software development [Sutherland 2004]:

· Requirements are not fully understood before the project begins.

· Users know what they want only after they see an initial version of the software.

· Requirements change often during the software-construction process.

· New tools and technologies make implementation strategies unpredictable.

In our experience with the project that was described earlier, the development of a POC system provided a cure for three of these issues. First of all, by developing a POC for the customer, we were forced to understand fully the requirements early on. The understanding was much deeper than what one normally receives at the early stages of a new project by simply reading through the requirements, or while incorporating them into the architecture of the system.

The failure that we experienced on that project after the second stage of development was in part due to the second issue that was listed previously. The users were disappointed with our delivery, not only because we misinterpreted the requirements, but also because they did not really know what they wanted until they had seen the deliverable—which, unfortunately, was not what they had in mind. After the customer got a chance to review the POC—which we provided them in the later stages of the project—they got a better idea of what they wanted the final deliverable to look and behave like. Having the POC system also gave us the opportunity to communicate to the user the look and feel of the final product much more vividly than through the use of design documents and design reviews. Seeing the POC allowed them, on the one hand, to adjust their requirements to match exactly what they wanted and, on the other hand, to better define their expectations for the final deliverable. As a result of these adjustments, the customer was much happier with the end result.

At times during the development, we chose to incorporate new technologies; at other times, we were forced to do so. Many of the details that were needed during the design and development of software become known only during the implementation stages [Parnas 1986]. It has been shown consistently that design mistakes that are found early in the software-development life cycle are cheaper to fix than when they are detected later down the road [Eeles 2006]. The POC system provided us with lots of feedback and information of which we were not aware, and allowed us to adjust our design decisions well before the cost of backtracking became too high. The development of a POC system gave us the opportunity to understand and evaluate how to best incorporate those technologies into our design, without having to worry about the complexities of developing the full scope of the system.

As soon as we had a good handle on the capabilities and idiosyncrasies of each new technology (by observing its behavior and operation during the POC development), we were in much better shape to incorporate it into the final product. The risks that we undertook when using the new technologies had been reduced simply due to the fact that, after testing the technology within the scope of the POC system, they were no longer new to our team.

Conceptual Versus Deliverable

It is important to keep in mind that a POC is just a prototype and does not represent the deliverable. POC systems are usually developed quickly and without a lot of testing, so that they do not make good candidates for early versions of the final deliverable. In cases in which the deliverable includes a user interface, the POC is a façade that illustrates the look and feel of the interface; but there is no functionality behind the façade, much like the houses that are used in Hollywood movie studios. In cases in which the product is an application programming interface (API), the POC illustrates the methods and functionality that the API will provide; but the implementations of the methods are simply stubs that will not perform real work.
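The "stubs behind a façade" idea for an API-style POC can be sketched as follows. All of the names here are hypothetical; the point is that the POC exposes the agreed-upon methods while each implementation returns canned data rather than doing real work.

```csharp
// The contract the final deliverable will implement for real.
public interface IOrderApi
{
    decimal GetOrderTotal(int orderId);
    bool SubmitOrder(int orderId);
}

// POC implementation: the façade looks complete to a caller,
// but there is no real functionality behind it.
public class OrderApiPoc : IOrderApi
{
    // Returns a fixed value so callers can exercise the contract;
    // no pricing logic exists behind this stub.
    public decimal GetOrderTotal(int orderId) { return 42.00m; }

    // Pretends the order was submitted; nothing is persisted.
    public bool SubmitOrder(int orderId) { return true; }
}
```

Because the POC and the eventual deliverable share the same interface, the customer can react to the API's shape early, and the throwaway stubs can later be replaced by real implementations without changing the contract.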

When the POC has served its purpose, it is best to throw it away and start with the development of the deliverable. Developers must resist the urge to start development of the final deliverable by enhancing the existing POC. At the same time, the development team must communicate to the customer that the POC is a prototype that looks like the desired system, but, in reality, is just smoke and mirrors. Otherwise, one runs the risk of raising expectations to the point at which the customer will expect the delivery of the rest of the system at the same pace that the prototype was developed.

The features that are implemented as part of the POC should have the key features of the project—especially, the parts of the system that have many unknowns or represent increased risk. At the same time, components of the system that are repetitive implementations of a given concept should be excluded. Implementation of a single instance of a given concept in the POC will provide all of the information that is needed to match a customer’s expectations successfully.

The decision with regard to whether to incorporate one or more POC systems into the schedule also depends on the software-development process that is being used. Despite much criticism—even including some by its own founder [Parnas 1986]—the waterfall model is still quite popular. Because the waterfall model and some of its close relatives do not incorporate iterations that allow for the revision of the requirements and design decisions as development progresses, it is especially important to include the development of a POC system. Other development processes, such as the Spiral [Boehm 1985] or Scrum [Sutherland 2004], prescribe the development of a prototype or early versions of the deliverable that are to be refined over time. Such processes can preclude the need for—or the value derived from—the explicit development of a POC system.


As it became very clear to us, when engaging in a new project, it is imperative that the development of one or more POC systems be considered before one settles on the architecture and design of the final deliverable. Development of a POC system provided us with many benefits including, among others, the:

· Very clear understanding of requirements.

· Understanding of the capabilities and limitations of new technologies.

· Ability to assess design decisions early in the process.

· Ability for the customer to visualize early on the look-and-feel of the solution.

· Reduction in the overall risk of project failure.

We had to be careful of the features that we incorporated into our POC system. We had to be very clear to the customer, as well as to the development team, that the result was a POC system and not an early version of the final deliverable. Although the use of a POC provided benefits, we had to be careful about how we incorporated it into the existing software-development process.

Critical-Thinking Questions

· What software-development process are you using? How does the possibility of a POC system fit in with it?

· What is the nature of the POC system for the project at hand? Are you developing an application with a user interface, an API that will be used by third-party developers, or a product that is defined by your marketing team?

· What key features must go into the POC? What features can safely be left out? What aspects of the design must be evaluated and tested, to ensure that the correct decisions have been made?

· What new technologies are being incorporated into your project? How can they be used in the POC system?


· [Boehm, 1985] Boehm, Barry W. “A Spiral Model of Software Development and Enhancement.” Proceedings of an International Workshop on Software Process and Software Environments, Coto de Caza, Trabuco Canyon, CA. March 27-29, 1985.

· [DeGrace, et al. 1990] DeGrace, Peter, and Leslie Hulet Stahl. Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms. Englewood Cliffs, NJ: Yourdon Press, 1990.

· [Eeles 2006] Eeles, Peter. “The Process of Software Architecting.” developerWorks. April 15, 2006.

· [Parnas 1986] Parnas, David L., and Paul C. Clements. “A Rational Design Process: How and Why to Fake It.” IEEE Transactions on Software Engineering. February 1986.

· [Sutherland 2004] Sutherland, Jeff. “Agile Development: Lessons Learned from the First Scrum.” Cutter Agile Project Management Advisory Service: Executive Update, 2004, 5(20): pp. 1-4.

· [Wikipedia] Various. “Waterfall model.” Wikipedia, the Free Encyclopedia. December 26, 2007.


Proof-of-concept—A short and/or incomplete realization of a certain method or idea to demonstrate its feasibility, or a demonstration in principle whose purpose is to verify that some concept or theory is probably capable of exploitation in a useful manner [Wikipedia].

Software-development process—A structure that is imposed on the development of a software product [Wikipedia].

