ASP.NET MVC, Dependency Injection and The Bliss of Truly Decoupled Applications

December 1, 2014


In traditional software and web applications, application layering is one of the tenets of good application design and architecture. It stems from the separation of concerns principle in computer science, which is the process of separating applications and programs into distinct features that overlap in functionality as little as possible. Hence the single responsibility principle and DRY concepts in software engineering and object-oriented programming.

Typically, in layered applications, a given application layer should only communicate with and depend on the layer directly below it.

Take the following diagram, for example, taken from the Microsoft Application Architecture Guide v2 (I added the red arrow indicators that represent where dependencies typically are):

[Image: layered architecture diagram from the Microsoft Application Architecture Guide v2, with red arrows showing dependencies flowing downward]

In this example, the Presentation Layer would talk to and depend on the Business Layer. The Business Layer would talk to and depend on the Data Layer. The Data Layer would talk to and depend on the database.

This is generally considered good application design despite there being dependencies between layers. Traditionally, a layer n-1 dependency (a dependency only on the one layer below) has been the normal and accepted practice. But experience tells us that all too often we find dependencies on more than just one layer. In fact, we may find dependencies between multiple layers… or worse… between all layers, ending up with something like this:

[Image: the same layers with dependencies between every layer]

The application may work fine. In fact, it may work perfectly. But the net effect, in the case of either diagram, is that, with each dependency, change becomes more difficult. Our apps become more difficult to maintain, more difficult to test, and more difficult to change which can result in longer development cycles, higher costs and increased risk of failure on our projects.

Usually, these dependencies come in the form of concrete class names. For example:

MyService service = new MyService();

As Scott Hanselman would say, “anytime you find yourself writing code like this, quit your job and go do something else.” No, seriously, you should at least stop and acknowledge what this really is – a dependency on an implementation. Sure, factory classes and singletons come in handy to remove the need to new things up, but we still end up with a dependency – only now we’re dependent on the Singleton or the Factory.
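To make that concrete, here’s a quick sketch (MyServiceFactory and the Instance property are invented for illustration):

// Still hard dependencies - we've just moved them to the factory/singleton:
MyService fromFactory = MyServiceFactory.Create();
MyService fromSingleton = MyService.Instance;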

Ideally, we want to declare a dependency on “some thing” without saying what that “some thing” is.

A simplified analogy

If you are hungry, you could ask your friend, “do you have a hamburger?” in which case you are expressing a need for an explicit implementation of something that satisfies hunger (a hamburger). If your friend has a hamburger, great! He would give it to you, you would eat it, and you wouldn’t be hungry anymore. Problem solved! But if your friend does not have a hamburger, you’d still be hungry – mainly because you were too specific in your request. Now, instead of asking for the hamburger, you could ask your friend, “do you have anything to eat?” In this case, you are expressing the “need” (food or something that satisfies hunger) vs. a concrete implementation. This way, if your friend has anything to eat he can give it to you (regardless of whether it is a hamburger or not) and you won’t go hungry. Any food would suffice.

What’s the point?

The point is that by being less specific in our request for food, we are more adaptable. The same is true with software. If we simply declare dependencies on contracts (interfaces), rather than implementations, our software becomes more adaptable and easier to change. Dependency Injection exists to help you do just that.

The goal of Dependency Injection (DI for short) is to separate behavior (or implementation) from dependency resolution, which is really just encapsulation – one of the main principles of computer science and object-oriented programming. I like to think of Dependency Injection as “intra-app SOA”; the end result being a highly decoupled application composed of “services” with explicit service boundaries and contracts (or service interfaces) where any given application layer has no knowledge of any other layers. It cares about neither the number of layers nor the implementation within each layer. Each layer simply depends on a contract and can be reasonably sure that at runtime there will be at least one implementation available to satisfy that contract. With Dependency Injection on our side, the above diagram might change to look something like this:

[Image: application layers that depend only on contracts]

At first glance, this doesn’t look much different from the first diagram. We still have “dependencies.” However, now we are dependent on a contract, not an actual implementation. This provides enormous benefits to us as application developers because our application layers are now plug-n-play. They are hot-swappable like hard drives in a RAID configuration. We can change the implementation of a layer and as long as we implement the agreed upon interface, we can rest assured we won’t break something in another layer.

Of course, we still have to unit test our new layer to make sure we don’t have any internal bugs, but as long as other layers only depend on the interface (not the implementation) we know we can reliably swap out an implementation without affecting other parts of an application or system. Ideally, each implementation of an application layer becomes a “black box” to the other layers with which it interacts.

In the case of our MyService example above, instead of writing our code like this:

MyService service = new MyService();

using Dependency Injection we would declare IService service { get; set; } as a property on our class, or we would use constructor injection and have something like:

public class HomeController : Controller
{
    private readonly IService _service;

    public HomeController(IService service) { _service = service; }
}

As you can see, we are now expressing a dependency on a contract (an interface) rather than an implementation and we are now “wired” for a Dependency Injection/IoC framework to resolve these dependencies for us without explicitly identifying them in our code.

You might say, “that’s all fine and good, but how do I make sure that my application is only dependent on contracts/interfaces?” More importantly, for existing applications that might not have been written this way, how do I find all the application dependencies and extract them into interfaces in order to move to a DI-friendly application architecture?

This is where being a .NET developer in this day and age makes your life much easier. Thanks to some new features in Visual Studio 2010, you can now answer those questions fairly easily. If you have one of the higher-level VS2010 SKUs (Premium or Ultimate), you have the ability to create application architecture layer diagrams. While you may have known that, you may not be aware that you can also validate an application against a layer diagram and have Visual Studio generate the dependencies between your layers so that you can see exactly where they are.

Using this feature, not only can you say, “my application should look like this” by creating an application layer diagram, but with the validation feature, you can ask the question, “does my application look like this?”

To get started with this feature, let’s take the following ASP.NET MVC project that I’ve set up as an example for this post (I’ve circled the areas of immediate interest).

As you can see, I have actually organized my solution folders to mimic my application layering. We have a Repository/Data Access layer, we have a Services layer and we have a Presentation/UI layer. You’ll also notice that we have a Contracts project (or layer) which contains our interfaces.

So our dependencies go something like this:

  • Site (our ASP.NET MVC app) depends on an IUserService.
  • Our Services depend on an IUserRepository and IUser.
  • Our Repositories depend on the IUser contract since that is the contract they return from their operations.

There are no dependencies between layers. They only depend on interfaces in the Contracts project.

You may have also noticed that there are four different “repository” projects and three different “services” projects. This is where the plug-n-play concept I discussed above comes into play. ASP.NET MVC 2 comes with great support for Dependency Injection (which MVC 3 builds upon) which allows you to plug in your DI/IoC framework of choice for all your DI needs. In my case, I’m using Autofac. ASP.NET MVC provides an extensibility point that allows you to say, “anytime my application needs something to satisfy a contract/interface, here’s where to find it.” That “where to find it” part is where a DI/IoC framework plugs in to satisfy the dependencies of your application without having to declare explicit dependencies between your application layers and/or components.

Frameworks such as Autofac, Ninject, Castle Windsor, StructureMap and Unity all have some concept of a “registry” which is basically an Interface-to-Implementation mapping or dictionary. With our MyService example above, we would be able to register our implementation MyService as the service that satisfies all dependencies on IService. Then, any time the application needs an implementation of IService, it will ask the DI container to provide one from its registry.
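As a surface-level sketch of how that registration might look with Autofac and ASP.NET MVC 3 (AutofacDependencyResolver and RegisterControllers come from the Autofac.Integration.Mvc package; MvcApplication is the usual Global.asax class – adjust the names to your project):

using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        var builder = new ContainerBuilder();

        // The "registry": map the IService contract to the MyService implementation.
        builder.RegisterType<MyService>().As<IService>();

        // Register the MVC controllers so their constructor dependencies are injected.
        builder.RegisterControllers(typeof(MvcApplication).Assembly);

        // Tell ASP.NET MVC to resolve dependencies through Autofac from now on.
        DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
    }
}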

I won’t go into the details of how these dependencies get registered with or resolved by the DI/IoC frameworks. I’ll save that for another post or for you to read about in your own research or digging through the attached sample code.

Instead, we’ll jump right into the benefits that using Dependency Injection provides.

So, in the diagram, we happen to have four different implementations of a DataAccessLayer.

  1. DatabaseRepository – persists data to a SQL database
  2. FakeRepository – fakes the persistence to an underlying data store (ideal for unit testing)
  3. MongoDbRepository – persists data to a MongoDb database
  4. XmlRepository – persists data to an XML file

All four Repository projects implement the IUserRepository contract/interface that our Services layer depends on. This allows us to reliably swap out one for another without affecting any code in the services layer at all. So, it would be trivial to add another repository that persisted data to Oracle, SQL Azure, Amazon SimpleDB, Microsoft Access, Excel, a flat-file, what have you – just so long as the repository implements the IUserRepository interface, we’re good.
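For illustration, a flat-file repository could look something like the sketch below. The member shapes of IUser and IUserRepository here are assumptions made to keep the example self-contained; the real contracts live in the sample solution’s Contracts project.

using System.Collections.Generic;
using System.IO;
using System.Linq;

// Assumed contract shapes (the real ones are defined in the Contracts project).
public interface IUser { string Name { get; } }
public interface IUserRepository { IEnumerable<IUser> GetAllUsers(); }

public class FileUser : IUser { public string Name { get; set; } }

// Persists users as lines in a text file. Because it honors IUserRepository,
// it can replace any other repository without touching the services layer.
public class FlatFileUserRepository : IUserRepository
{
    private readonly string _path;

    public FlatFileUserRepository(string path) { _path = path; }

    public IEnumerable<IUser> GetAllUsers()
    {
        return File.ReadAllLines(_path)
                   .Select(line => new FileUser { Name = line });
    }
}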

Likewise, we have three different implementations of our Services layer.

  1. CachingService – retrieves data from an IUserRepository and caches the results
  2. FakeUserService – a service that fakes retrieving data from an IUserRepository (again, ideal for unit testing)
  3. UserService – same as the caching service (just without caching)

These services all implement the IUserService contract/interface that our presentation layer (UI) depends on.
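As a rough sketch of the caching idea, reusing the assumed contract shapes from the repository sketch above (IUserService here is likewise an invented shape, not the sample’s actual interface), the caching service satisfies the service contract while memoizing what it reads from the repository:

using System.Collections.Generic;
using System.Linq;

public interface IUserService { IList<IUser> GetUsers(); }

// Satisfies IUserService by reading from an injected IUserRepository
// and caching the result of the first call.
public class CachingService : IUserService
{
    private readonly IUserRepository _repository;
    private IList<IUser> _cache;

    public CachingService(IUserRepository repository) { _repository = repository; }

    public IList<IUser> GetUsers()
    {
        return _cache ?? (_cache = _repository.GetAllUsers().ToList());
    }
}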

Now, on to creating an application architecture and validating these stated dependencies.

From the Architecture menu in Visual Studio, you select New Diagram.

[Screenshot: the Architecture menu in Visual Studio]

In the dialog box that opens, select Layer Diagram and give it an appropriate name.

[Screenshot: the New Diagram dialog with Layer Diagram selected]

You should end up with a blank architecture diagram that looks like this:

[Screenshot: a blank layer diagram]

Now, using either the Toolbox or the right-click menu, we can begin adding layers to the diagram. After adding our application layers, our diagram should look like the following:

[Screenshot: the layer diagram with the application layers added]

Now, to validate our architecture (and dependencies), we first need to tell Visual Studio what code/projects are in which layers. We start by dragging our projects (or solution folders) into the associated application layer. For example, in my sample solution, I would drag the entire DataAccessLayer solution folder into the Data Access layer on the diagram:

[Screenshot: dragging the DataAccessLayer solution folder onto the Data Access layer]

Here’s what we should end up with when we’re done:

[Screenshot: the layer diagram with project counts shown on each layer]

It looks very similar to when we started, but now we have a small indicator in the upper-right-hand corner which tells us how many projects are associated with each layer.

Now for the magic!

Simply right-click anywhere in the white space on the application layer diagram and click Generate Dependencies.

If what I told you above about how my application is architected is true, then you should get an updated diagram that looks like the following:

[Screenshot: the generated layer diagram – every layer depends only on Contracts]

Ah, now isn’t that diagram a breath of fresh air? In this diagram, Visual Studio is telling us that there are absolutely NO dependencies between the layers of our application! Rather, all application layers depend only on the Contracts project, which is simply a collection of interfaces. This is the epitome of encapsulation and tells us that our application layers are decoupled from one another and can be swapped out for other implementations without risk to the rest of our application.

This is ideal for TDD scenarios and allows for simultaneous development on different application layers if we have already ironed out our contracts/interfaces. This means more parallel development can occur, which can potentially reduce project timelines. And of course, with TDD on our side, we can reliably test individual layers and sign off on them knowing that they are not dependent on or affected by other layers/components whatsoever. End result: higher-quality software developed in a shorter amount of time.

Now, if your application doesn’t look like this and looks more like the second picture with dependencies between every layer, don’t worry! Visual Studio can also help you find those dependencies so that you can factor out the concrete references into interfaces/contracts.

First, start by removing the dependencies from your diagram that you don’t want to have in your app – do this by right-clicking on the dependency arrow and selecting Delete. Next, after you have removed all the unwanted dependencies, simply right-click anywhere in the whitespace of the diagram and select Validate Architecture. Visual Studio will proceed to build your projects and determine if your application actually validates against your stated (desired) architecture. If it does not, the violations will show up in the Error List window and you can start going through these dependencies and replacing the concrete implementations with contracts/interfaces. Additionally, application architects can use this functionality in conjunction with TFS to prevent code check-ins that violate an application architecture diagram.

With Visual Studio 2010, ASP.NET MVC 2 & 3 and their rich support for Dependency Injection, you can begin extracting interfaces from your concrete classes to remove the hard dependencies in your apps and increase their maintainability. These features aren’t limited to MVC either. Many DI/IoC frameworks also work with ASP.NET WebForms as well as Windows Forms, WPF, and Silverlight. With these tools in your toolbox you too can begin enjoying the bliss that is a truly decoupled application that is easy to maintain, easy to test, easy to change and easy to replace when the next technology comes along!

Download the sample code: MvcDI.zip

For more reading on ASP.NET MVC and Dependency Injection I suggest you check out the following blogs:

http://weblogs.asp.net/scottgu/
http://hanselman.com
http://haacked.com
http://bradwilson.typepad.com/

Happy Injecting!

The Repository Pattern Example in C#

March 4, 2014


The Repository Pattern is a common construct to avoid duplication of data access logic throughout our application. This includes direct access to a database, an ORM, WCF Data Services, XML files and so on. The sole purpose of the repository is to hide the nitty-gritty details of accessing the data. We can easily query the repository for data objects, without having to know how to provide things like a connection string. The repository behaves like a freely available in-memory data collection to which we can add, delete and update objects.

The Repository pattern adds a separation layer between the data and domain layers of an application. It also makes the data access parts of an application better testable.

You can download or view the solution sources on GitHub:
LINQ to SQL version (the code from this example)
Entity Framework code first version (added at the end of this post)

The example below shows an interface of a generic repository of type T, which is a LINQ to SQL entity. It provides a basic interface with operations like Insert, Delete, GetById and GetAll. The SearchFor operation takes a lambda expression predicate to query for a specific entity.

using System;
using System.Linq;
using System.Linq.Expressions;

namespace Remondo.Database.Repositories
{
    public interface IRepository<T>
    {
        void Insert(T entity);
        void Delete(T entity);
        IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate);
        IQueryable<T> GetAll();
        T GetById(int id);
    }
}

The implementation of the IRepository interface is pretty straightforward. In the constructor we retrieve the entity table by calling the DataContext’s GetTable<T>() method. The resulting Table<T> is the entity table we work with in the rest of the class methods, e.g. SearchFor() simply calls the Where operator on the table with the predicate provided.

using System;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

namespace Remondo.Database.Repositories
{
    public class Repository<T> : IRepository<T> where T : class, IEntity
    {
        protected Table<T> DataTable;

        public Repository(DataContext dataContext)
        {
            DataTable = dataContext.GetTable<T>();
        }

        #region IRepository<T> Members

        public void Insert(T entity)
        {
            DataTable.InsertOnSubmit(entity);
        }

        public void Delete(T entity)
        {
            DataTable.DeleteOnSubmit(entity);
        }

        public IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate)
        {
            return DataTable.Where(predicate);
        }

        public IQueryable<T> GetAll()
        {
            return DataTable;
        }

        public T GetById(int id)
        {
            // Side note: the == operator throws a NotSupportedException
            // ('The Mapping of Interface Member is not supported').
            // Use .Equals() instead.
            return DataTable.Single(e => e.ID.Equals(id));
        }

        #endregion
    }
}

The generic GetById() method explicitly needs all our entities to implement the IEntity interface. This is because we need them to provide us with an Id property to make our generic search for a specific Id possible.

namespace Remondo.Database
{
    public interface IEntity
    {
        int ID { get; }
    }
}

Since we already have LINQ to SQL entities with an Id property, declaring the IEntity interface on them is sufficient. Because these are partial classes, they will not be overwritten by the LINQ to SQL code generation tools.

namespace Remondo.Database
{
    partial class City : IEntity
    {
    }

    partial class Hotel : IEntity
    {
    }
}

We are now ready to use the generic repository in an application.

using System;
using System.Collections.Generic;
using System.Linq;
using Remondo.Database;
using Remondo.Database.Repositories;

namespace LinqToSqlRepositoryConsole
{
    internal class Program
    {
        private static void Main()
        {
            using (var dataContext = new HotelsDataContext())
            {
                var hotelRepository = new Repository<Hotel>(dataContext);
                var cityRepository = new Repository<City>(dataContext);

                City city = cityRepository
                    .SearchFor(c => c.Name.StartsWith("Ams"))
                    .Single();

                IEnumerable<Hotel> orderedHotels = hotelRepository
                    .GetAll()
                    .Where(c => c.City.Equals(city))
                    .OrderBy(h => h.Name);

                Console.WriteLine("* Hotels in {0} *", city.Name);

                foreach (Hotel orderedHotel in orderedHotels)
                {
                    Console.WriteLine(orderedHotel.Name);
                }

                Console.ReadKey();
            }
        }
    }
}

[Screenshot: console output listing the hotels in Amsterdam]

Once we get off the generic path into more entity-specific operations, we can create an implementation for that entity based on the generic version. In the example below we construct a HotelRepository with an entity-specific FindHotelsByCity() method. You get the idea. 😉

using System.Data.Linq;
using System.Linq;

namespace Remondo.Database.Repositories
{
    public class HotelRepository : Repository<Hotel>, IHotelRepository
    {
        public HotelRepository(DataContext dataContext) 
            : base(dataContext)
        {
        }

        public IQueryable<Hotel> FindHotelsByCity(City city)
        {
            return DataTable.Where(h => h.City.Equals(city));
        }
    }
}

[Update July 2012] Entity Framework version

The code below shows a nice and clean implementation of the generic repository pattern for the Entity Framework. There’s no need for the IEntity interface here since we use the convenient Find method of the DbSet class. Thanks to my co-worker Frank van der Geld for helping me out.

using System;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

namespace Remondo.Database.Repositories
{
    public class Repository<T> : IRepository<T> where T : class
    {
        protected DbSet<T> DbSet;

        public Repository(DbContext dataContext)
        {
            DbSet = dataContext.Set<T>();
        }

        #region IRepository<T> Members

        public void Insert(T entity)
        {
            DbSet.Add(entity);
        }

        public void Delete(T entity)
        {
            DbSet.Remove(entity);
        }

        public IQueryable<T> SearchFor(Expression<Func<T, bool>> predicate)
        {
            return DbSet.Where(predicate);
        }

        public IQueryable<T> GetAll()
        {
            return DbSet;
        }

        public T GetById(int id)
        {
            return DbSet.Find(id);
        }

        #endregion
    }
}
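To round things off, usage mirrors the LINQ to SQL console program. The Hotel entity and HotelsDbContext below are hypothetical stand-ins for whatever your model defines; note that with Entity Framework, Insert and Delete only stage changes until SaveChanges is called on the context.

using System;
using System.Data.Entity;
using System.Linq;
using Remondo.Database.Repositories;

// Hypothetical entity and context, for illustration only.
public class Hotel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CityName { get; set; }
}

public class HotelsDbContext : DbContext
{
    public DbSet<Hotel> Hotels { get; set; }
}

internal class Program
{
    private static void Main()
    {
        using (var dataContext = new HotelsDbContext())
        {
            var hotelRepository = new Repository<Hotel>(dataContext);

            hotelRepository.Insert(new Hotel { Name = "Grand Hotel", CityName = "Amsterdam" });
            dataContext.SaveChanges(); // persists the staged insert

            Hotel hotel = hotelRepository.GetById(1); // delegates to DbSet.Find

            var amsterdamHotels = hotelRepository
                .SearchFor(h => h.CityName == "Amsterdam")
                .OrderBy(h => h.Name)
                .ToList();

            Console.WriteLine("{0} hotel(s) found; first id: {1}", amsterdamHotels.Count, hotel.Id);
        }
    }
}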

Agile Software Architecture Sketches and NoUML

June 26, 2013

An interesting article I read on InfoQ.com about the role of architecture in Agile development, so I thought I’d share it on my blog.

 

Agile Software Architecture Sketches and NoUML

Posted by Simon Brown

If you’re working in an agile software development team at the moment, take a look around at your environment. Whether it’s physical or virtual, there’s likely to be a story wall or Kanban board visualising the work yet to be started, in progress and done. Visualising your software development process is a fantastic way to introduce transparency because anybody can see, at a glance, a high-level snapshot of the current progress. As an industry, we’ve become pretty adept at visualising our software development process over the past few years although it seems we’ve forgotten how to visualise the actual software that we’re building. I’m not just referring to post-project documentation; this also includes communication during the software development process. Agility is about moving fast and this requires good communication, but it’s surprising that many teams struggle to effectively communicate the design of their software.

Prescribed methods, process frameworks and formal notations

If you look back a few years, structured processes and formal notations provided a reference point for both the software design process and how to communicate the resulting designs. Examples include the Rational Unified Process (RUP), Structured Systems Analysis And Design Method (SSADM), the Unified Modelling Language (UML) and so on. Although the software development industry has moved on in many ways, we seem to have forgotten some of the good things that these older approaches gave us. In today’s world of agile delivery and lean startups, some software teams have lost the ability to communicate what it is they are building and it’s no surprise that these teams often seem to lack technical leadership, direction and consistency. If you want to ensure that everybody is contributing to the same end-goal, you need to be able to effectively communicate the vision of what it is you’re building. And if you want agility and the ability to move fast, you need to be able to communicate that vision efficiently too.

Abandoning UML

As an industry, we do have the Unified Modelling Language (UML), which is a formal standardised notation for communicating the design of software systems. I do use UML myself, but I only tend to use it sparingly for sketching out any important low-level design aspects of a software system. I don’t find that UML works well for describing the high-level software architecture of a software system and while it’s possible to debate this, it’s often irrelevant because many teams have already thrown out UML or simply don’t know it. Such teams typically favour informal boxes and lines style sketches instead but often these diagrams don’t make much sense unless they are accompanied by a detailed narrative, which ultimately slows the team down. Next time somebody presents a software design to you focussed around one or more informal sketches, ask yourself whether they are presenting what’s on the sketches or whether they are presenting what’s still in their head.

[Image: example NoUML software architecture sketches]

Abandoning UML is all very well but, in the race for agility, many software development teams have lost the ability to communicate visually too. The example NoUML software architecture sketches (above) illustrate a number of typical approaches to communicating software architecture and they suffer from the following types of problems:

  • Colour-coding is usually not explained or is often inconsistent.
  • The purpose of diagram elements (i.e. different styles of boxes and lines) is often not explained.
  • Key relationships between diagram elements are sometimes missing or ambiguous.
  • Generic terms such as “business logic” are often used.
  • Technology choices (or options) are usually omitted.
  • Levels of abstraction are often mixed.
  • Diagrams often try to show too much detail.
  • Diagrams often lack context or a logical starting point.

Some simple abstractions

Informal boxes and lines sketches can work very well, but there are many pitfalls associated with communicating software designs in this way. My approach is to use a small collection of simple diagrams that each show a different part of the same overall story. In order to do this though, you need to agree on a simple way to think about the software system that you’re building. Assuming an object oriented programming language, the way that I like to think about a software system is as follows … a software system is made up of a number of containers, which themselves are made up of a number of components, which in turn are implemented by one or more classes. It’s a simple hierarchy of logical building blocks that can be used to model most of the software systems that I’ve encountered.

  • Classes: in an OO world, classes are the smallest building blocks of our software systems.
  • Components: components (or services) are typically made up of a number of collaborating classes, all sitting behind a coarse-grained interface. Examples might include a “risk calculator”, “audit component”, “security service”, “e-mail service”, etc depending on what you are building.
  • Containers: a container represents something in which components are executed or where data resides. This could be anything from a web or application server through to a rich client application, database or file system. Containers are typically the things that need to be running/available for the software system to work as a whole. The key thing about understanding a software system from a containers perspective is that any inter-container communication is likely to require a remote interface such as a web service call, remote method invocation, messaging, etc.
  • System: a system is the highest level of abstraction and represents something that delivers value to, for example, end-users.

Summarising the static structure of your software with NoUML

By using this set of abstractions to think about a software system, we can now draw a number of simple boxes and lines sketches to summarise the static structure of that software system as follows (you can see some examples on Flickr):

  1. Context diagram: a very high-level diagram showing your system as a box in the centre, surrounded by other boxes representing the users and all of the other systems that the software system interfaces with. Detail isn’t important here as this is your zoomed out view showing a big picture of the system landscape. The focus should be on people (actors, roles, personas, etc) and software systems rather than technologies, protocols and other low-level details. It’s the sort of diagram that you could show to non-technical people.
  2. Containers diagram: a high-level diagram showing the various web servers, application servers, standalone applications, databases, file systems, etc that make up your software system, along with the relationships/interactions between them. This is the diagram that illustrates your high-level technology choices. Focus on showing the logical containers and leave other diagrams (e.g. infrastructure and deployment diagrams) to show the physical instances and deployment mappings.
  3. Components diagrams: a diagram (one per container) showing the major logical components/services and their relationships. Additional information such as known technology choices for component implementation (e.g. Spring, Hibernate, Windows Communication Foundation, F#, etc) can also be added to the diagram in order to ground the design in reality.
  4. Class diagrams: this is an optional level of detail and I will typically draw a small number of high-level UML class diagrams if I want to explain how a particular pattern or component will be (or has been) implemented. The factors that prompt me to draw class diagrams for parts of the software system include the complexity of the software plus the size and experience of the team. Any UML diagrams that I do draw tend to be sketches rather than comprehensive models.

[Image: example context, containers and components diagrams]

A single diagram can quickly become cluttered and confused, but a collection of simple diagrams allows you to easily present the software from a number of different levels of abstraction. And this is an important point because it’s not just software developers within the team that need information about the software. There are other stakeholders and consumers too; ranging from non-technical domain experts, testers and management through to technical staff in operations and support functions. For example, a diagram showing the containers is particularly useful for people like operations and support staff that want some technical information about your software system, but don’t necessarily need to know anything about the inner workings.

Organisational ideas, not a standard

This simple sketching approach works for me and many of the software teams that I work with, but it’s about providing some organisational ideas and guidelines rather than creating a prescriptive standard. The goal here is to help teams communicate their software designs in an effective and efficient way rather than creating another comprehensive modelling notation. It’s worth reiterating that informal boxes and lines sketches provide flexibility at the expense of diagram consistency because you’re creating your own notation rather than using a standard like UML. My advice here is to be conscious of colour-coding, line style, shapes, etc and let a set of consistent notations evolve naturally within your team. Including a simple key/legend on each diagram to explain the notation will help too.

There seems to be a common misconception that “architecture diagrams” must only present a high-level conceptual view of the world, so it’s not surprising that software developers often regard them as pointless. In the same way that software architecture should be about coding, coaching and collaboration rather than ivory towers, software architecture diagrams should be grounded in reality too. Including technology choices (or options) is usually a step in the right direction and will help prevent diagrams looking like an ivory tower architecture where a bunch of conceptual components magically collaborate to form an end-to-end software system.

“Just enough” up front design

As a final point, Grady Booch has a great explanation of the difference between architecture and design where he says that architecture represents the “significant decisions”, where significance is measured by cost of change. The context, containers and components diagrams show what I consider to be the significant structural elements of a software system. Therefore, in addition to helping teams with effective and efficient communication, adopting this approach to diagramming can also help software teams that struggle with either doing too much or too little up front design. Starting with a blank sheet of paper, many software systems can be designed and illustrated down to high-level components in a number of hours or days rather than weeks or months. Illustrating the design of your software can be a quick and easy task that, when done well, can really help to introduce technical leadership and instil a sense of a shared technical vision that the whole team can buy into. Sketching should be a skill in every software developer’s toolbox. It’s a great way to visualise a solution and communicate it quickly plus it paves the way for collaborative design and collective code ownership.

How Would You Build Up a City from Components?

April 3, 2013

I read one of the most interesting articles on InfoQ.com and loved the way it was put across. Here it goes:

 

How Would You Build Up a City from Components?

Posted by Aliaksei Papou

 

More and more enterprise application development is moving to component frameworks and solutions. Why? Does component architecture have any future? I believe yes, and soon all development frameworks will be component-based – it is imminent. Let me show you why.

How do you build up your house? You start with building blocks. It is possible to compare the construction of a web application with the construction of your small country house. You can quickly build up a very good looking application with all the required functionality. Every room in your house is created for specific needs, for instance the kitchen, living room, bedroom or bathroom. The layout of the house allows you to conveniently move between rooms using the corridors and stairs.

You are doing better now and can afford to build a bigger and better house – you would like to have a sauna, a pool, a movie theater and of course a giant aquarium filled with reptiles☺. But changing the design of your house can be quite difficult. If you are able to add the extra facilities the house ends up not looking so nice. It is also less convenient since your additions had to be put in inconvenient places, so that, for instance, to get into the billiard room you have to pass through the main bedroom.

In the end your nice and neat house turns into an awkward and uncomfortable house with a bunch of different functions within it. The same story can happen with application development.

The question is, is it possible to design an application such that it can grow and change according to your needs? Let’s try to figure it out.

Components are building blocks of the application

Components are the primary means of extending application functionality. The process of creating components is a bit different from the creation of applications based on them. The component should not only provide useful functionality but be designed for reuse from the outset.

 

 

Reuse of components

To easily reuse components, they should be designed with a loose coupling approach. To make this possible, different frameworks typically implement their event models based on the Observer pattern. This allows multiple recipients to subscribe to the same event.

The Observer pattern was originally implemented in Smalltalk, whose user interface framework was based on MVC, and it is now a key part of MVC frameworks. I would like to draw your attention to the fact that the Observer pattern has existed in Java since version 1.0. Let’s have a deeper look into it.

 

The following UML diagram describes the Observer pattern:

Here is a basic Java implementation:

import java.util.Observable;
import java.util.Observer;

public class ObservableX extends Observable {
  private double amount;

  public void setAmount(double amount) {
    this.amount = amount;
    super.setChanged();
    super.notifyObservers();
  }
}

public class ObserverA implements Observer {
  public void update(Observable o, Object arg) {
    // gets the updated amount
  }
}

public class ObserverB implements Observer {
  public void update(Observable o, Object arg) {
    // gets the updated amount
  }
}

// instantiate the concrete observableX
ObservableX observableX = new ObservableX();
// somewhere in code
observableX.addObserver(new ObserverA());
observableX.addObserver(new ObserverB());
// much later
observableX.setAmount(amount);

How it works:

First we create an instance of the ObservableX class, add the ObserverA and ObserverB instances as observers of the observableX object, and then somewhere in the code we set the amount using the setAmount method. The observable class then notifies all the registered observers about the new amount.

The Observable acts as a mediator that maintains the list of recipients. When some event occurs within a component it is sent to all recipients on that list.

Due to the mediator the component isn’t aware of its recipients. And the recipients can subscribe to the events of different components of a particular type.

A class can become a component when it uses events to notify observers about its changes. And this can be achieved by using the Observer pattern.

Using components is easier than creating them

Using components you can quickly create a variety of forms, panels, windows, and other composite elements of the interface. However, to be able to re-use the new composite parts they should be turned into components.

In order to achieve this you do need to determine the external events that the component will generate as well as the mechanism of sending messages. I.e. you need to at least create new event classes and define interfaces or callback methods for receiving those events.

This approach makes the implementation of reusable application components more complex. It is fine if the system consists of a small number of composite elements – up to around ten. But what if the system consists of hundreds of such elements?

Conversely, not following this approach will lead to tight coupling between elements and will reduce the chances of re-use to zero. This, in its turn will lead to code duplication that will make maintenance more complicated in future and will lead to a growing number of bugs in the system.

The problem is compounded by the fact that users of the components often do not know how to define and send new events to their parts. But they can easily use ready-made events provided by the component framework. They know how to receive events but don’t know how to send them.

To solve this problem, let’s consider how to simplify the use of the event model in applications.

Too many Event Listeners

In Java Swing, GWT, JSF and Vaadin the Observer pattern is used for implementation of an event model where multiple users can subscribe to one event. Lists to which Event Listeners are added serve as the implementation. When the relevant event occurs it is sent to all the recipients of that list.

Each component creates its own set of Event Listeners for one or more events. This leads to an increasing number of classes in the application. That, in its turn, makes support and development of the system more complicated.

With annotations Java gained a way to have individual methods subscribe to particular events. As an example consider the implementation of the event model in CDI (Contexts and Dependency Injection) from Java EE 6.

public class PaymentHandler {
  public void creditPayment(@Observes @Credit PaymentEvent event) {
    ...
  }
}

public class PaymentBean {

  @Inject
  @Credit
  Event<PaymentEvent> creditEvent;

  public String pay() {
    PaymentEvent creditPayload = new PaymentEvent();
    // populate payload ...
    creditEvent.fire(creditPayload);
    return "success"; // navigation outcome (illustrative)
  }
}

As you can see, the PaymentEvent is fired when the pay() method of the PaymentBean object is called. After that, the creditPayment() method of the PaymentHandler object receives the PaymentEvent.

Another example is in the implementation of the Event Bus in Guava Libraries:

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

// Class is typically registered by the container.
class EventBusChangeRecorder {
  @Subscribe public void recordCustomerChange(ChangeEvent e) {
    recordChange(e.getChange());
  }
}

// somewhere during initialization
EventBus eventBus = new EventBus();
eventBus.register(new EventBusChangeRecorder());

// much later
public void changeCustomer() {
  ChangeEvent event = getChangeEvent();
  eventBus.post(event);
}

An object of the EventBusChangeRecorder class is registered with the EventBus. Calling the changeCustomer() method then results in the EventBus receiving the ChangeEvent object and calling the recordCustomerChange() method of the EventBusChangeRecorder object.

Now you don’t need to implement a number of Event Listeners for your components, making the use of events in applications simpler.

The Event Bus usage is convenient when all the components are displayed at the same time on the screen and they use the Event Bus for message exchange, as shown in the picture below.

Here all these elements – the header, left menu, content, right panel – are components.

Subscribed to events – don’t forget to unsubscribe

By replacing Event Listeners with annotations we have made a big step forward in simplifying the use of the event model. But even so, every component in the system needs to be connected with the Event Bus, subscribe to its events and, at the right time, unsubscribe from them.

It is possible to hit a situation when the same recipient is subscribed several times to the same event, which can lead to a number of repeated notifications. A similar situation can arise when multiple system components subscribe to the same event, which can trigger a series of cascade events.

To be able to control the event model better, it makes sense to move the work with events out to configuration and make the application container responsible for events management. Since particular events are available only on particular conditions, it is reasonable to move the management of their state out to configuration as well.

A sample configuration is shown below:

<?xml version="1.0"?>
<application initial="A">

    <view id="A">
        <on event="next" to="B"/>
    </view>

    <view id="B">
        <on event="previous" to="A"/>
        <on event="next" to="C"/>
    </view>

    <view id="C">
        <on event="previous" to="B"/>
        <on event="next" to="D"/>
    </view>

    <view id="D">
        <on event="previous" to="C"/>
        <on event="finish" to="finish"/>
    </view>

    <final id="finish" /> 

</application>

The transition to the B view will be initiated by the “next” event from the A view. From the B view the user can go back to the A view by the “previous” event or to the C view by the “next” event. From the D view the finish event leads to the “final” state which instructs the application to finish up the workflow within the application.

Finite State Machines are specifically designed for such needs. A state machine is a mathematical model of computation. It is conceived as an abstract machine that can be in one of a finite number of states, but only in one state at a time, known as the current state. An event or condition can trigger a transition to another state. Using this approach you can easily define an active screen and have an event trigger a transition to another screen.
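To make the idea tangible, here is a minimal C# sketch of such a state machine mirroring the XML configuration above (the class shape is invented for illustration, not any particular framework’s API). Firing “next” from the initial A view, for instance, makes B the current view:

using System.Collections.Generic;

public class ViewStateMachine
{
    // The transition table from the configuration: (current view, event) -> next view.
    private readonly Dictionary<(string view, string evt), string> _transitions =
        new Dictionary<(string view, string evt), string>
        {
            { ("A", "next"), "B" },
            { ("B", "previous"), "A" }, { ("B", "next"), "C" },
            { ("C", "previous"), "B" }, { ("C", "next"), "D" },
            { ("D", "previous"), "C" }, { ("D", "finish"), "finish" },
        };

    public string Current { get; private set; } = "A";

    // An event either triggers a configured transition or is ignored, so only
    // the events that are valid for the current view have any effect.
    public void Fire(string evt)
    {
        if (_transitions.TryGetValue((Current, evt), out string next))
            Current = next;
    }
}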

The benefits of using Finite State Machines for configuring the application

In most cases application configuration is defined statically. When configuring the application with dependency injection, we define the application structure on startup. But we forget that while the user explores the application, its state can change. Changes of state are often hard-coded in the application code, which leads to complications in future adjustments and maintenance.

Moving the transitions between states into configuration gives more flexibility. And that’s why when creating composite application elements, such as forms, windows or panels, we don’t need to worry about what state the application should go to. You can do this later, setting the behavior in the configuration.

All the components can communicate using a standardized mechanism for sending events – through the Event Bus. At the same time, the state machine can control the subscription of component events to the Event Bus. This approach turns all components of the application (forms, windows, panels) into reusable components that can be easily managed from the external configuration.

If you are interested you can have a look at some examples of configuration in the Enterprise Sampler.

You can consider state configuration as a road map of a city, and events as cars delivering goods and people to the desired destinations.

I’m sure that using this approach it is easy to design and build up not just a small, ready-to-grow house but a whole city with skyscrapers, broadways and highways.

Design Pattern Automation

March 12, 2013

Design Pattern Automation

Posted by Gael Fraiteur and Yan Cui (source: InfoQ)

 

Introduction

Software development projects are becoming bigger and more complex every day. The more complex a project, the more likely it is that the cost of developing and maintaining the software will far outweigh the hardware cost.

There’s a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it and good engineers are hard to come by and expensive to keep around.

Despite the high total cost of ownership per line of code, a lot of boilerplate code is still written, much of which could be avoided with smarter compilers. Indeed, most boilerplate code stems from the repetitive implementation of design patterns. But some of these design patterns are so well understood that they could be implemented automatically if we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the base of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, Observable interface) and .NET (2001, INotifyPropertyChanged interface). Although the interfaces are a part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            this.firstName = value;
            this.OnPropertyChanged("FirstName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string LastName
    {
        get { return this.lastName; }
        set
        {
            this.lastName = value;
            this.OnPropertyChanged("LastName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Properties eventually depend on a set of fields, and we have to raise the PropertyChanged for a property whenever we change a field that affects it.

Shouldn’t it be possible for the compiler to do this work automatically for us? The long answer is detecting dependencies between fields and properties is a daunting task if we consider all corner cases that can happen: properties can have dependencies on fields of other objects, they can call other methods, or even worse, they can call virtual methods or delegates unknown to the compiler. So, there is no general solution to this problem, at least if we expect compilation times in seconds or minutes and not hours or days. However, in real life, a large share of properties is simple enough to be fully understood by a compiler. So the short answer is, yes, a compiler could generate notification code for more than 90% of all properties in a typical application.

In practice, the same class could be implemented as follows:

[NotifyPropertyChanged]
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.FirstName, this.LastName); }
    }
}

This code tells the compiler what to do (implement INotifyPropertyChanged) and not how to do it.

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that usually causes a lot of boilerplate code in large applications. But a typical source base is full of patterns generating a lot of boilerplate. Even if they are not always recognized as “official” design patterns, they are patterns because they repeat massively across a code base. The most common causes of code repetition are:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling

These features are difficult to encapsulate using normal OO techniques, which is why they’re often implemented using boilerplate code. Is that such a bad thing?

Yes.

Addressing cross-cutting concerns using boilerplate code leads to violation of fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors code repetition coming out of manual implementation of design patterns.
  • The Loose Coupling principle is infringed when a pattern is implemented manually because it is difficult to alter the implementation of this pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Trading a library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications of source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason about when trying to understand what it’s doing to address the functional requirement. This added layer of complexity has a huge bearing on the cost of maintenance considering software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a boilerplate (fixing a bug perhaps) requires changing all the places where the boilerplate code has been applied. How do you even accurately identify where the boilerplate is used throughout your codebase, which potentially spans many solutions and/or repositories? Find-and-replace…?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method until eventually you end up with a large codebase almost entirely covered by boilerplate code. In one of my previous teams, a simple data access layer class had over a thousand lines of code where 90% was boilerplate code to handle different types of SQL exceptions and retries.

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In so many cases the struggle with making common boilerplate code reusable stems from the lack of native meta-programming support in mainstream statically typed languages such as C# and Java.

The compiler is in possession of an awful lot of information about our code normally outside our reach. Wouldn’t it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, you need look no further than the using and lock keywords. At first sight, they are purely redundant in the language. But the designers of the language have recognized their importance and have created a specific keyword for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects consuming any external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles, pulled together in the sketch after this list:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (called destructor in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).
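A Resource Object following these rules looks roughly like the sketch below (the standard .NET dispose pattern; the class and field names are invented):

using System;

public class DatabaseConnectionHolder : IDisposable
{
    private readonly IDisposable connection; // a child Resource Object (rule 5)
    private bool disposed;                   // makes Dispose idempotent (rule 2)

    public DatabaseConnectionHolder(IDisposable connection)
    {
        this.connection = connection;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // rule 4: finalization is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return; // rule 2: safe to call several times
        if (disposing && connection != null)
        {
            connection.Dispose(); // rule 5: the parent disposes its children
        }
        disposed = true;
    }

    ~DatabaseConnectionHolder() // rule 3: a finalizer as a safety net
    {
        Dispose(false);
    }
}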

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block), as sketched below.
  • FxCop can enforce a rule that says any disposable class also implements a finalizer, and the Dispose method calls GC.SuppressFinalize.
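That try/finally generation is easy to picture; for a reference type, the compiler lowers a using block into roughly the following (Work, connection and the holder type are placeholders):

// What you write (rule 6):
using (var holder = new DatabaseConnectionHolder(connection))
{
    Work(holder);
}

// Roughly what the compiler generates:
{
    var holder = new DatabaseConnectionHolder(connection);
    try
    {
        Work(holder);
    }
    finally
    {
        if (holder != null) holder.Dispose();
    }
}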

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (typically, every class) should be designed as a “cell”;
  • every cell is responsible for its own health; therefore,
  • every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.

Microsoft Code Contracts (http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx) is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of pre-conditions, post-conditions, and object invariants. However, this API is not just a class library. It translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

// Requires: using System.Diagnostics.Contracts;
public Book GetBookById(Guid id)
{
    // Precondition: callers must supply a non-empty id.
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Author>(id);
}
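
Preconditions are only one part of the API. As a hedged sketch, a postcondition and an object invariant using the same System.Diagnostics.Contracts namespace would look like this (the books field is a hypothetical piece of class state, added for illustration):

public Book GetBookById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    // Postcondition: this method never returns null.
    Contract.Ensures(Contract.Result<Book>() != null);
    return Dal.Get<Book>(id);
}

// The object invariant is checked after every public method of the class.
[ContractInvariantMethod]
private void ObjectInvariant()
{
    Contract.Invariant(this.books != null); // 'books' is a hypothetical field
}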

Its binary rewriter can (based on your configuration) rewrite your built assembly and inject additional code to validate the various conditions that you have specified. If you inspect the transformed code generated by the binary rewriter, you will see something along the lines of:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
    {
        try
        {
            ++__ContractsRuntime.insideContractEvaluation;
            __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
        }
        finally
        {
            --__ContractsRuntime.insideContractEvaluation;
        }
    }
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article here (http://www.infoq.com/articles/code-contracts-csharp).

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, it’s impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions because they expose the very details of the abstract syntax tree. However, these technologies operate at too low a level of abstraction, making all but the simplest transformations very complex to implement.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.

2. Provide a way to express where primitives should be applied: it’s good to be able to tell the compiler extension that you want to intercept some methods, but it’s even better to be able to say which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When multiple transformations are applied to the same location, some of them might need to occur in a specific order relative to others. Therefore, the ordering of transformations needs to follow a well-defined convention, while still allowing us to override the default ordering where appropriate.

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may define that instance fields can only be accessed from instance methods of the same object. The framework must offer a mechanism to query the methods that access a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advices), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, depending on the AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a Domain-Specific Language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines advice with the original source code at the specified locations (not less barbarically called joinpoints). It has access to meta-data about the original source code so, for compiled languages such as C# or Java, there is opportunity for the static weaver to perform static analysis to ensure the validity of the advice in relation to the pointcuts where they are applied.

Although aspect-oriented programming and design patterns were independently conceptualized, AOP is an excellent solution for those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above, so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

// Requires: using System.Threading.Tasks; and the PostSharp.Aspects namespace.
[Serializable]
public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Schedule the intercepted method to run on a thread-pool thread.
        Task.Run(args.Proceed);
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
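
For instance, here is a hedged sketch of a caching aspect built on the same MethodInterceptionAspect base class. The naive string cache key is for illustration only, and it assumes PostSharp’s Arguments collection can be converted to an array:

// Requires: using System.Collections.Concurrent;
[Serializable]
public sealed class CacheAttribute : MethodInterceptionAspect
{
    // Shared cache; static so it is not subject to build-time aspect serialization.
    private static readonly ConcurrentDictionary<string, object> Cache =
        new ConcurrentDictionary<string, object>();

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Naive key: method name plus its argument values.
        string key = args.Method.Name + ":" + string.Join(",", args.Arguments.ToArray());

        args.ReturnValue = Cache.GetOrAdd(key, _ =>
        {
            args.Proceed();           // run the original method once
            return args.ReturnValue;  // cache what it returned
        });
    }
}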

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void' and has no out/ref arguments.
public override bool CompileTimeValidate(MethodBase method)
{
    MethodInfo methodInfo = (MethodInfo) method;

    if (methodInfo.ReturnType != typeof(void) ||
        methodInfo.GetParameters().Any(p => p.ParameterType.IsByRef))
    {
        ThreadingMessageSource.Instance.Write(method, SeverityType.Error, "THR006",
            method.DeclaringType.Name, method.Name);

        return false;
    }

    return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.
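
As a hedged sketch, the WPF variant could look like this (assuming the application’s Dispatcher is reachable through Application.Current):

[Serializable]
public sealed class ForegroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Marshal the intercepted method onto the WPF UI thread.
        Application.Current.Dispatcher.Invoke(new Action(args.Proceed));
    }
}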

These aspects can be applied just like any other attribute, for example:

[BackgroundThread]
private static void ReadFile(string fileName)
{
    DisplayText(File.ReadAllText(fileName));
}

[ForegroundThread]
private void DisplayText(string content)
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and is a good example of the C# team identifying a recurring problem that they decided to address with a design pattern implemented directly in the compiler and in core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?
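
To make the INotifyPropertyChanged case concrete, this is the boilerplate that must otherwise be hand-written for every bindable property (a standard sketch using System.ComponentModel; the Customer class is illustrative):

public class Customer : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;   // avoid redundant notifications
            name = value;
            OnPropertyChanged("Name");   // repeated for every property
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}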

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it is rarely touched on at master’s level. This lack of education has contributed to a lack of general awareness of AOP amongst the developer community.

Despite being 20 years old, AOP is misperceived as a ‘new’ paradigm which often proves to be the stumbling block for adoption for all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is recent. We cited some meaningful precedents in this article involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not yet grown into a broader call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible from the source code. Note that what is surprising is the intended effect of AOP (the compiler doing more than usual), not some unintended side effect.

There can also be some surprise of an unintended effect, when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be subtle, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At the time of writing, only two AOP frameworks provide proper IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer – aspects, and the fact that aspects have been applied properly, must be unit tested like any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has been recently gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager, and may injure the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns that can be automated easily by anyone (exception handling, transaction handling, and security) with an AOP framework.

After many years of market experience, and after realizing that most customers were implementing the same aspects over and over again, the PostSharp team began to provide highly sophisticated and optimized ready-made implementations of the most common design patterns.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.

Summary

So-called high-level languages such as Java and C# still force developers to write code at too low a level of abstraction. Because of the limitations of mainstream compilers, developers are forced to write large amounts of boilerplate code, adding to the cost of developing and maintaining applications. Boilerplate stems from the massive implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers to allow us to automate implementation of the most common patterns. Hopefully, future language designers will understand design patterns are first-class citizens of modern application development, and should have appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.

Eighteen years after the Gang of Four’s seminal book, isn’t it time for design patterns to become adults?

Enterprise Architecture Anti Patterns: Proved No Concept

September 11, 2012 § Leave a comment

In my last post I mentioned the advantages of the PoC in Enterprise Architecture, and at the end I touched on its negative side. I thought I would cover that part here. We all know there are always pros and cons; what suits one team may not suit another. That is why common sense matters: choose what is right for you. Here is an article on Enterprise Architecture Anti Patterns: Proved No Concept.

 

When Concepts are as clear as The Elephant on Acid


Anti Pattern Name: [Proved No Concept]

Type: [Management, Technical]

Problem: [Proof of Concept efforts are usually started in a hurry, without a clear definition of purpose and an agreed specification of the actual ‘concept to prove’. These end in acrimony when no concept is actually validated because the fundamental objective was not clear from the outset. Quite often they become tenuous ‘proofs of technology’, or really more orientation projects in which technologies are being trialled.]

Context: [Poor specification of requirements for the Proof of Concept is the main culprit. Over exuberance and lack of planning, ill-defined concepts, or ‘make it up as we go along’ behaviours all act as amplifiers.]

Forces: [lack of governance, poor scope definition, no real understanding of the concept to prove at outset, the Proof of Concept is often really about finding and defining the concept to prove.]

Resulting Context: [Inconclusive outcomes, project overrun, false starts, confusion, weak hypotheses, badly designed research vehicles.]

Solution(s): [Resist pressure to commence a Proof of Concept without a well-articulated and signed off specification of the concept, its scope and how success (or otherwise) will be determined. If the concept is very complex or elusive, split the Proof of Concept into multiple phases with definition and agreement / candidate selection being the first stage(s). A Proof of Concept (PoC) that proves OR disproves the validity of the concept is a successful PoC. One that fails to reach any meaningful conclusion due to confusion over the concept being proved or disproved is a failure.]

Source : http://stevenimmons.org/2011/12/enterprise-architecture-anti-patterns-proved-no-concept/

The Value of the PoC in Enterprise Architecture

September 10, 2012 § Leave a comment

The Value of the PoC in Enterprise Architecture

With appropriate planning, management, and presentation a Proof-of-Concept can become a key part of a successful Enterprise Architecture

by Scott Nelson

Oftentimes, Enterprise Architecture is very similar to the old story of the blind men and the elephant. The tale varies greatly in the telling, the consistent part being that the men all examine the elephant at the same time, yet each examines only part of the whole animal. When they discuss what they have examined, they all have completely different perspectives.

Even if only implied, all Enterprise Architecture (EA) frameworks include the notion of viewpoints. That is, we all agree that an Enterprise Architecture consists of things, and that those things can have different meanings, degrees of importance, immediacy of value, and even levels of aesthetic appeal to different people. Enter the Proof-of-Concept (PoC). The goal of a PoC is to serve as the remedy for the confusion in that old tale of the blind men and the elephant. Before the PoC, each stakeholder has a different view of the Enterprise Solution. A successful PoC does not need to change anyone’s point of view; it only needs to demonstrate, to everyone’s satisfaction, that the solution will fit the picture as each of them sees it.

Why Do a Proof-of-Concept in Enterprise Architecture?

The value of a PoC is its ability to reduce risk. At the level of detail generally applicable to Enterprise Architecture, everything can work. It is easy to say that a portal will provide appropriately credentialed users with access to all internal applications, enforcing user permissions seamlessly through the use of an enterprise identity and access management package. It is almost as easy for the solution architects to take the logical architecture and create a physical architecture showing exactly how specific vendor packages and enterprise libraries will wire together to realize the vision of this enterprise portal.

However, the perception of enterprise architecture can be badly damaged when the actual implementation of this architecture fails to meet cost, time, or usability expectations. Building a small version of the planned solution before making a large resource commitment and business dependency on the outcome can demonstrate the value of Enterprise Architecture and greatly reduce the risk of wasted resources and lost opportunities.

Good Reasons to Do a PoC for EA

The portal scenario described previously was purposely both common and medium-complex. A PoC can be valuable for something very simple, such as testing vendors’ “ease of integration” claims that two products can work together — something that quite often is true only given a long list of limitations that are not always as easily discovered as the initial claim.

A proof-of-concept effort around a very complex solution is not only a good idea; some frameworks consider it mandatory. A popular notion in Enterprise Architecture discussions of late is that EA is about managing complexity. While EA is about much more than that, successful EA should result in managed complexity, whether or not it is a stated outcome. Conducting a PoC of complex systems is a good first step in managing complexity.

A good rule of thumb is, if you expect higher than trivial consequences when an architecture solution building block stops working, that solution deserves some level of PoC.

Bad Reasons to Do a PoC for EA

Just as a PoC to verify that two products work together as claimed is a good idea, testing whether two products from the same vendor using a standard interface will work together when you have a good support agreement in place is a waste of resources. Not because the vendor’s claims will always be 100% valid, but because the pieces are already in place to correct any issues. The project plan should simply include the normal amount of slack time to cover the inevitable unknowns that will occur during an implementation.

It is also a bad idea to conduct a PoC of something that has to be done anyway. An example is an upgrade or migration dictated by compliance requirements. In this case, because the delivery team knows they are going to have to “do it for real” after the PoC, they will generally use a throw-away approach, making the effort nothing but wasted overhead and delay.

The value of a Proof-of-Concept is the mitigation of risk (a core value of Enterprise Architecture according to many frameworks, and it’s just plain common sense). If the risk is minimal, the investment in mitigation should be proportional.

In an EA PoC, Aim First, Fire After

So, if a PoC should be conducted to mitigate risk, there needs to be a clear understanding of the following:

  • The risk that needs to be mitigated
  • The consequences of failing to mitigate it
  • What defines a successful mitigation of the risk

If any of those three are unknown, do not start work on the PoC until you understand the proof, the concept, and the reasons for both.

Even when there is pressure to complete the PoC quickly, buy as much time as possible to prove the concept thoroughly. Many “PoCs” are in production today, and the reason maintenance costs keep going up despite improved processes is that, while the processes are followed to the letter, the spirit of the underlying concept(s) is completely forgotten (or was never known).

Continuous Involvement, Consistent Messaging

So, how do you make a PoC as thorough as possible when there is inevitably pressure to succeed (even though one important reason to do a PoC is to discover whether it can fail)? And when any level of success can be misconstrued as total success? First, show progress early. To business stakeholders, a PoC is doing something that is not making or saving money; it is only spending money. These stakeholders want, need, and deserve to know that their IT investment is being managed wisely. The difference between a successful and celebrated EA group and a struggling and mistrusted one is how stakeholders perceive their value. Structure the PoC effort to create demonstrable progress as early as possible.

For example, making progress early in a portal PoC would mean having the basic UI elements in place as quickly as possible, even if there is no real data behind them. For an infrastructure PoC, have cells marked “complete” in a project plan. No matter what the evidence is, make sure that it is easily recognizable to key stakeholders as early as possible.

The danger of showing progress early, however, is that the degree of progress can easily be misinterpreted by those who aren’t technically deep (i.e., those who are paying for the PoC). This is an important point in the process at which to mitigate the risk of presenting a premature proof without a validated concept. Always follow the validation of progress with an immediate reminder of the goal being pursued and the time/effort/dollar allotment that was agreed upon to reach that goal.

It also helps increase buy-in when presenting such reminders if you can show that you haven’t used all of the resources committed to reaching the goal. Try to get there early, under budget, or both when possible. Just don’t do it too often, or it may damage your credibility.

Conclusion

The tale of the blind men and the elephant has many different endings. In some they never come to consensus; in others they discover the elephant by combining their understandings. In Enterprise Architecture, the happy ending is when, after having concluded that all of their inputs fit together no matter how different, the blind men all climb on top of the elephant and ride comfortably in the same direction.

The thing about Enterprise Architecture is that it is based on common sense. Although Voltaire is credited with saying that it isn’t so common, those who possess common sense often seem to get the most from being reminded of things that are.

Finally, for a view from the negative side of this concept, there is an interesting article at stevenimmons.org.
