The Sun Is Setting on Rails-Style MVC Frameworks

February 29, 2012

Lately I’ve been thinking a lot about the impact of the move to a thick client architecture for web applications, and I’m becoming more and more certain that this means that Rails-style MVC frameworks on the server-side are going to end up being phased out in favour of leaner and meaner frameworks that better address the new needs of thick-client architecture.

There are a few major reasons for this:

The server is no place for view logic

The devices available for running a web app are vastly different from when these web frameworks first sprang up. The slide towards thicker, richer clients has been proceeding apace with increases in processing power since the Web 2.0 days. It's much simpler to handle views and view logic in only one place, and that place is slowly moving away from the server side. MVC has always been a strained pattern for a server-side, non-GUI application, and it has been a confusing and complicated trade-off to have the backend generating front-end logic. Front-end frameworks like backbone.js, as well as advances in web technologies like HTML5's history.pushState, are now making server-free views a realistic quality of cutting-edge front-ends. Rendering on the client side also gives us the opportunity to create responsive designs based on device capability, rather than having the server try to somehow figure out what the capabilities are without actually running on that device.

The kinks aren’t all the way out yet, but I do think the trend is clear.

Server-side Templating and .to_json are both underpowered and overpowered for the actual requirements of JSON APIs

There's no need for templating on the server side (or view helpers, or any view-related cruft) to generate simple JSON documents, but there are a ton of problems left unsolved when we fail to see that generating a JSON API is more than just a serialization problem.

How should dates look? (RFC 3339 / ISO 8601, of course!) What should the JSON error document look like when you return a 400 and want to tell the client why? How should links to other resources in the API look? How does a collection look? How does pagination look?

These aren’t just serialization concerns, and they have nothing to do with templating.

HATEOAS is not just an academic pursuit

A thick client does not want to maintain a vast list of static strings representing all the crazy URLs that it will have to call in a non-standard API. As an API designer, you don't want clients doing this anyway, because hard-coded URLs and URL structures make it a real pain for you to change your API.

The AtomPub protocol, if you ignore its XMLiness, gets this right: the thick client just knows the root URL (which serves up a simple service document) and is aware of a bunch of relationship types (the 'rel' attribute on 'link' and 'a' tags) that it can follow as necessary. If a game client needs to access a list of players, it just goes to the root URL and follows the link with the rel property called 'players' (and probably remembers that link until the server says it has moved via a 301 status code). JSON has no concept of links or rel, but this is still easy to imagine and implement, and while it's a teeny bit of extra work up front, the standardization buys you the ability to start writing smarter HTTP/REST clients for your frontend that take care of much more for you automatically, so you can spend time on real business logic and do something more productive than fiddle with the JavaScript version of a routes.rb file.
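To make this concrete, here is a rough C# sketch of such a client. The root URL and the shape of the "links" array are invented purely for illustration; the only point is that the client follows a rel link instead of hard-coding the players URL:

using System;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class HypermediaClientSketch
{
  static async Task Main()
  {
    var http = new HttpClient();

    // 1. The only URL the client hard-codes is the root service document.
    string rootJson = await http.GetStringAsync("https://api.example.com/");

    // 2. Find the link whose rel is "players".
    //    Assumed document shape: { "links": [ { "rel": "players", "href": "..." } ] }
    using (JsonDocument root = JsonDocument.Parse(rootJson))
    {
      string playersHref = root.RootElement.GetProperty("links")
        .EnumerateArray()
        .First(l => l.GetProperty("rel").GetString() == "players")
        .GetProperty("href").GetString();

      // 3. Follow the link; "/players" was never hard-coded anywhere.
      string players = await http.GetStringAsync(playersHref);
      Console.WriteLine(players);
    }
  }
}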

(A really great API framework might generate some or all of these links on its own and automatically put them in the JSON documents. It's pretty easy to imagine pagination links like next/back being generated automatically, or even links among resources in some cases, possibly based on some internally declared relationship.)

Rails-style MVC frameworks have a horrible routing solution for RESTful JSON APIs

In a resource-oriented API, the router need not be concerned with the HTTP methods that are or are not allowed for the resource. That's the concern of the resource and the resource alone. When the router tries to manage that, you get the unnecessary verbosity of a route for every method supported by the resource, and you get the app incorrectly throwing 404s instead of 405s when a method is not supported. This probably means that 'controllers' need to go away in favor of 'resources', and routes can be vastly simplified, if not completely derived/inferred by the framework, as the sketch below illustrates. Because we keep thinking in this conventional MVC style, though, we miss the possibility and potential of far simpler applications that actually do a lot more for us.
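Here is a rough sketch of what resource-oriented dispatch could look like (all names are invented; this illustrates the idea, not any particular framework). The router's only job is to locate the resource; the resource decides which methods it supports, so an unknown path yields 404 while an unsupported method on a known path yields 405:

using System.Collections.Generic;

class Response
{
  public int Status;
  public string Body;
  public Response(int status, string body) { Status = status; Body = body; }
}

abstract class Resource
{
  // Default: the resource exists, but the method isn't supported -> 405.
  public virtual Response Handle(string method)
  {
    return new Response(405, "Method Not Allowed");
  }
}

class PlayersResource : Resource
{
  public override Response Handle(string method)
  {
    if (method == "GET") return new Response(200, "[\"alice\",\"bob\"]");
    return base.Handle(method);   // e.g. DELETE on /players -> 405, not 404
  }
}

class Router
{
  readonly Dictionary<string, Resource> routes =
    new Dictionary<string, Resource> { { "/players", new PlayersResource() } };

  public Response Dispatch(string method, string path)
  {
    Resource resource;
    if (!routes.TryGetValue(path, out resource))
      return new Response(404, "Not Found");   // unknown path -> 404
    return resource.Handle(method);            // known path -> the resource decides
  }
}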

The Application Developer shouldn’t have to Deal with these Details

There’s no  reason for us to all separately think about these problems and solve them in a million different ways every time we’re confronted with them.  Aside from the years of wasted time this involves, we’ve also got a bunch of non-standard and sub-standard APIs to interact with, so all the client code needs to be custom as well and nothing is reusable.  This is why being RESTful is not just academic and this is why being concerned with the details is not pedantic.

As I said earlier, AtomPub gets a lot of this right. A lighter-weight JSON equivalent would be a huge improvement over what people are doing today, because the conventions would mean that the framework can take care of most of these API details that we reimplement ad nauseam, or worse, not at all. It also means that frontends can start to abstract away more of the HTTP details as well. This is already starting to happen in the new frontend MVC frameworks, but in almost every case the HTTP end of things still needs to be handled in a custom way for every API endpoint. There is still way too much work left to the application developer, and we're silly to continue to do it over and over without coming up with a better abstraction.

CouchApps and WebMachine are just starting to touch on this style of architecture.  Backend-as-a-service platforms like Parse certainly understand how far this simple architecture can go, but ultimately there’s a huge need for a framework that can create more complex RESTful APIs (in a language that’s more general purpose and “blue collar” than Erlang).

Rails-style MVC frameworks are both too much, and not enough at the same time.  It really is time for a new framework to support this new architecture.


(Source: http://caines.ca)

Ubuntu-Android Phone: Canonical’s Big Move?

February 28, 2012

For years now, the Canonical team has been attempting to set up Ubuntu as something more than just another Linux distribution. It’s definitely been a long road, filled with both ups and downs.

During this period of Ubuntu’s evolution, Canonical has managed to see success on both the desktop and server front. Where we’ve seen little to no activity however, is with Ubuntu on the tablet. Then again, remaining absent on the tablet may have been by design.

It turns out that Ubuntu may have been putting their efforts into something bigger than any tablet with the announcement of an Ubuntu-Android phone. While it’s not available for purchase yet, this Ubuntu-Android phone does present some compelling items of interest. Items that, if properly played, could be a major boost for Canonical and the Ubuntu project as a whole.

It doubles as a TV and PC

If you haven’t seen the video of the Ubuntu-Android phone in action, please take a moment to check it out. This Ubuntu powered mobile device provides three interesting functions under the guise of a single device.

The first level of functionality is that of a typical Android-powered smartphone. This translates into games, apps and other tools that bring us the usual Android experience that we all know and love.

The next level of functionality comes in the form of a dock for the phone. Attached to a TV, this dock transforms your Android phone into a fully functional Ubuntu set top box. Ubuntu TV is a strong video-on-demand experience that is comparable to Amazon VOD or Netflix, but with a better UX (user experience).

The final level of functionality – and perhaps the most impressive – is taking this same dock and attaching it to a PC monitor and keyboard. Instantly, the phone becomes a full-fledged Ubuntu-powered computer.

Best of all, unlike a tablet, this portable PC fits right into your pocket.

Traveling with the Ubuntu-Android phone

I’ll be first to admit that I wasn’t a big fan of the idea of Ubuntu TV, as I failed to see the value of it as a standalone product. However, I can definitely see value in a mobile phone that brings both Ubuntu TV and the Ubuntu desktop to any given location.

Imagine, no more notebook toting from office to home and back again. The space saving feature of having your desktop PC act as a mobile phone is actually pretty powerful when you stop to think about it.

On the flip side of this, however, you need to consider the following issues. First, you need to use a dock. This kills off the idea of extreme portability, since you’ll need to tote along the required dock to connect to TV sets and PC monitors.

The only way the dock wouldn’t be a hassle, at this point, would be to have two of them. That way you’re not needing to plug it into your peripherals each time you move from one location to another.

It’s no laptop

As fascinating as the Ubuntu-Android phone happens to be, the fact is it's no replacement for a laptop PC. This phone hybrid may present an interesting alternative to a lower-powered desktop computer; however, not offering a dock for laptop users is a mistake.

This type of dock would be an additional option that’s basically a netbook in function, with docking compatibility for the Ubuntu-Android phone. Suddenly, the Ubuntu-Android phone becomes even more valuable. Because now it serves as a phone, desktop PC, Ubuntu TV and a notebook.

Now, some of you might be wondering, what possible advantage would there be to adding yet another dock to the mix? After all, this Ubuntu-Android phone already has enough options with its existing dock, right?

Well, I believe that if there was a separate dock available for purchase, designed as a netbook alternative, this phone would be even better suited to demonstrate how powerful data unification can be.

Your data, everywhere – it’s a start

Imagine typing SMS messages on your keyboard, or viewing phone pictures on your PC monitor. Better yet, a calendar that syncs without the potential for network errors or USB-related syncing headaches. Not to mention the shared bookmarks between your Android phone and Ubuntu based desktops.

Undoubtedly, there’s certainly something to be said for data unification using this device! Having your contacts, calendar, photos and even calls right on your PC monitor does offer something pretty powerful to the end user.

Where it gets really interesting is the prospect of making and receiving phone calls while the phone is docked. Unlike a SIP or Skype setup, the Ubuntu-Android phone works with actual mobile phone calls!

But wait, here’s where things could get really wild: with how you connect to the Internet.

Normally, you would be connected to the local network only, either via Ethernet or wifi. But thanks to the mobile network connection from the phone, you can dock this device and connect to your existing 3G/4G! With no dongle or any additional headaches. As long as the mobile carrier allows for this functionality, you would have Internet access just about anywhere you can think of.

But it’s a phone

Now that we’ve looked at all the neat stuff that this portable computing device can offer, it’s time to examine some cold hard facts. Even if the CPU offers enough power to get Ubuntu going the way the end user would expect, running Ubuntu on 512MB of RAM is just painful. Even if this is a trimmed down version of Ubuntu, I am very skeptical how well Unity can perform on such limited computer specifications.

Not to mention what you're risking if you were to lose this device. You would be losing more than just an Android phone; you could lose access to your entire desktop!

Worse, if you have issues with your dock, how will you get your phone connected while you’re waiting for a dock replacement? Now I’m not trying to discount just how amazing this phone is. This easily is one of the best ideas I’ve seen in a long time. I’d love to see it succeed! Unfortunately though, I am unsure how the concerns listed above could be resolved.

What’s missing from the Ubuntu-Android phone

Overlooking the minor shortcomings I’ve listed previously with the Ubuntu-Android phone, there are actually some things that would instantly make this device a must-have for me. Actually, I’d go so far as to say that if they don’t account for this, Canonical would be doing all who consider buying this phone a disservice. One area that has always mystified me is why Ubuntu One isn’t used more effectively. Allow me to explain further.

Do you remember Zonbu? While the mistakes made by the company were many, including lackluster hardware, one thing Zonbu did right was enabling users to have all their app settings saved regardless of which Zonbu you used. So for example, if I lost one Zonbu device, I could login to a new one and everything I had would just sync up automatically.

My thinking with the Ubuntu-Android phone is that this same kind of settings functionality should be setup with Ubuntu One out of the box. This provides Canonical with a great excuse to charge a little bit of a subscription fee for added revenue, plus it also means if I lose the phone, the data stored isn’t gone forever.

The idea of limiting this kind of functionality is beyond foolish. It’s a two-fold opportunity that Canonical could use to make a name for Ubuntu-Android.

If Canonical heeds this advice, offering it as an opt-in option, I firmly believe that they would see this phone become an overnight success. Even better, Canonical would find they're in a stronger bargaining position with phone vendors as well.

Imagine, phone and PC data that is always safely backed up off-site. Now that is the kind of user experience I’d like to try out, even using the limited resources of the Ubuntu-Android phone!

 

Windows Desktop UI Concept

February 27, 2012


Posted by Sputnik8


This is a desktop concept that I’ve recently put together for fun. I thought I’d post a few screens to see what people here think. The screens include variations of explorer, ie (with a quick redesign of windows.com and bing), media center/player, and skype. Note that I didn’t aim for the design to be completely consistent with what MS calls ‘metro’ (for instance, I specifically didn’t want loops around icons, among other things). Anyway, click on the images to see the full versions.

 

 

Explorer


Internet Explorer


Media Center (window mode)


Skype


The Development Pendulum (source: SimpleProgrammer.com)

February 15, 2012

Recently I read this article and felt like sharing it. Here is the article:

The Development Pendulum

I've noticed a rather interesting thing about best practices and trends in software development: they tend to oscillate from one extreme to another over time.

So many of the things that are currently trendy or considered “good” are things that a few years back were considered “bad” and even further back were “good.”

This cycle and rule seems to repeat over and over again and is prevalent in almost all areas of software development.

It has three dimensions

Don't misunderstand my point though; we are advancing. We really have to look at this from a 3-dimensional perspective.

Have you ever seen one of those toys where you rock side to side in order to go forward?

snakeboard

Software development is doing this same thing in many areas.  We keep going back and forth, yet we are going forward.

Let’s look at some examples and then I’ll tell you why this is important.

JavaScript!

Is JavaScript good or bad?

Depends on who you ask, but it is definitely popular right now.

If we go back about 5 years or so, you’ll get a totally different answer.  Most people would suggest to avoid JavaScript.

Now, JavaScript itself hasn’t changed very much in this timespan, but what has changed is how we use it.

We learned some tricks and the world changed around us.  We figured out how to solve the biggest problem of all for JavaScript…

Working with the DOM!

JQuery made it extremely easy to manipulate the DOM, the pain was removed.

Yet, new pains emerge, hence backbone.js is born.

Thick client or the web?

Take a look at how this has changed back and forth so many times.  First the web was a toy and real apps were installed on your machine.

Then it became very uncool to develop a desktop app, everyone was developing web apps.

But soon we ran into a little problem – those darn page refreshes.  Gosh!

So what did we do?  We sort of made the browser a thick client with AJAX.

That created so much of a mess that we really needed clean separation of views from our models and our logic (at least on the .NET side), so we went back to rendering the whole view on the server and sending it down to the client with MVC.  (Yes, you could argue this point, but just pretend like you agree and bear with me.)

Then we decided that we needed to start moving this stuff back to the client so we could do much more cool things with our pages. We started pumping JavaScript into the pages and ended up creating thick clients running in browsers running on JavaScript and HTML5.

And now we are seeing traditional thick clients again with iOS and Android devices and even those will probably eventually migrate to the web.

Simple data vs descriptive data

Check out this sine wave!

SineWave

First we had fixed-length records where we specified the length of each column and exactly what data went there.

Then we moved over to CSV, where we had loose data separated by commas.

Then we thought XML was all the rage and beat people up who didn’t define XSDs, because data without definition is just noise you know!

Now we are sending around very loosely structured JSON objects and throw up whenever we see angle brackets.

So many other examples

Take a look at this list:

  • Static vs dynamic languages
  • Web services ease of use vs unambiguity (SOAP and REST)
  • Design upfront vs Agile (remember when we just wrote code and deployed it, it was kind of like Agile, but different)
  • Source control, constant collaboration vs branching
  • Testing and TDD
  • Databases, stored procs vs inline SQL
  • <% %> vs Controls

It goes on forever

So why is this important?

It is not just important; as a developer, it is CRITICAL for you to understand.

Why?

Because whatever happens to be “cool” right now, whatever happens to be the “right” way to do things right now, will change.

Not only will it change, but it will go the complete opposite direction.

It won’t look exactly the same as it did before – we will learn from our previous mistakes – but it will be the same concepts.

Advancement follows this sine wave pattern.  Don’t try and fight it so hard.

You have to be balanced.  You have to be able to understand the merits, strengths and weaknesses of both sides of a technology or best practice choice in development.

You have to understand why TDD improved our code until it led us into overuse of IoC and pushed C# and Java developers to the freedom of dynamic languages like Ruby.

You have to understand that eventually the course will correct itself yet again and head back to the direction of the new static language or even an old static language that will be resurrected.

This is how you will grow

It is also very important to realize that this is exactly how you will grow.

Just as the technological world around you is in a constant forward progressing pendulum swing, so are you, at a different pace, to a different beat.

I know that through my personal development journey, I have switched sides on a topic countless times.

You might call me a “waffler,” but I call it progress.

Life is a game of overshooting and adjusting.

C# Language Features, From C# 2.0 to 4.0

February 14, 2012

Introduction

This article discusses the language features introduced in C# 2.0, 3.0, and 4.0. The purpose of writing this article is to have a single repository of all the new language features introduced over the last seven years and to illustrate (where applicable) the advantages of the new features. It is not intended to be a comprehensive discussion of each feature; for that, I have included links for further reading. The impetus for this article is mainly that I could not find a single repository that does what this article does. In fact, I couldn't even find a Microsoft webpage that describes them. Instead, I had to rely on the universal authority for everything, Wikipedia, which has a couple of nice tables on the matter.

C# 2.0 Features

Generics

First off, generics are not like C++ templates. They primarily provide for strongly typed collections.

Without Generics

public void WithoutGenerics()
{
  ArrayList list = new ArrayList();

  // ArrayList is of type object, therefore essentially untyped.
  // Results in boxing and unboxing of value types
  // Results in ability to mix types which is bad practice.
  list.Add(1);
  list.Add("foo");
}

Without generics, we incur a “boxing” penalty because lists are of type “object”, and furthermore, we can quite easily add incompatible types to a list.

With Generics

public void WithGenerics()
{
  // Generics provide for strongly typed collections.
  List<int> list = new List<int>();
  list.Add(1); // allowed
  // list.Add("foo"); // not allowed
}

With generics we are prevented from using a typed collection with an incompatible type.

Constraints and Method Parameters and Return Types

Generics can also be used in non-collection scenarios, such as enforcing the type of a parameter or return value. For example, here we create a generic method (the reason we don't create a generic MyVector will be discussed in a minute):

public class MyVector
{
  public int X { get; set; }
  public int Y { get; set; }
}

class Program
{
  public static T AddVector<T>(T a, T b)
    where T : MyVector, new()
  {
    T newVector = new T();
    newVector.X = a.X + b.X;
    newVector.Y = a.Y + b.Y;

    return newVector;
  }

  static void Main(string[] args)
  {
    MyVector a = new MyVector();
    a.X = 1;
    a.Y = 2;
    MyVector b = new MyVector();
    b.X = 10;
    b.Y = 11;
    MyVector c = AddVector(a, b);
    Console.WriteLine(c.X + ", " + c.Y);
  }
}

Notice the constraint. Read more about constraints here. The constraint tells the compiler that the generic parameter must be of type MyVector (or derived from it), and the "new()" constraint requires that the type have a public parameterless constructor so that we can instantiate it. The above code is not very helpful because it would require writing an "AddVector" method for vectors of different types (int, double, float, etc.).

What we can’t do with generics (but could with C++ templates) is perform operator functions on generic types. For example, we can’t do this:

public class MyVector<T>
{
  public T X { get; set; }
  public T Y { get; set; }

  // Doesn't work:
  public void AddVector(MyVector<T> v)
  {
    X = X + v.X;
    Y = Y + v.Y;
  }
}

This results in an "operator '+' cannot be applied to operands of type 'T' and 'T'" error! More on workarounds for this later.

Factories

You might see generics used in factories. For example:

public static T Create<T>() where T : new()
{
  return new T();
}

The above is a very silly thing to do, but if you are writing an Inversion of Control layer, you might be doing some complicated things (like loading assemblies) based on the type the factory needs to create.
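For instance, here is a hedged sketch of how such a factory might sit behind a tiny IoC-style registry (all names, including TinyContainer, ILogger, and ConsoleLogger, are invented): callers ask for an interface, and the container decides which concrete type to construct.

using System;
using System.Collections.Generic;

public static class TinyContainer
{
  private static readonly Dictionary<Type, Type> map = new Dictionary<Type, Type>();

  public static void Register<TInterface, TImpl>()
    where TImpl : TInterface, new()
  {
    map[typeof(TInterface)] = typeof(TImpl);
  }

  public static TInterface Resolve<TInterface>()
  {
    // Activator performs the "new T()" for whatever concrete type was registered.
    return (TInterface)Activator.CreateInstance(map[typeof(TInterface)]);
  }
}

// Usage (hypothetical types):
// TinyContainer.Register<ILogger, ConsoleLogger>();
// ILogger logger = TinyContainer.Resolve<ILogger>();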

Partial Types

Partial types can be used on classes, structs, and interfaces. In my opinion, partial types were created to separate out tool-generated code from manually written code. For example, the Visual Studio form designer generates the code-behind for the UI layout, and to keep this code stable and independent from your manually written code, such as the event handlers, Visual Studio creates two separate files and marks the class as partial in both. For example, let's say we have two separate files:

File 1:

public partial class MyPartial
{
  public int Foo { get; set; }
}

File 2:

public partial class MyPartial
{
  public int Bar { get; set; }
}

We can use the class, which has been defined in two separate files:

public class PartialExample
{
  public MyPartial foobar = new MyPartial();

  public PartialExample()
  {
    foobar.Foo = 1;
    foobar.Bar = 2;
  }
}

Do not use partial classes to implement a model-view-controller pattern! Just because you can separate the code into different files, one for the model, one for the view, and one for the controller, does not mean you are implementing the MVC pattern correctly!

The old way of handling tool generated code was typically to put comments in the code like:

// Begin Tool Generated Code: DO NOT TOUCH
   ... code ...
// End Tool Generated Code

And the tool would place its code between the comments.

Anonymous Methods

Anonymous methods let us define the functionality of a delegate (such as an event) inline rather than as a separate method.

The Old Way

Before anonymous delegates, we would have to write a separate method for the delegate implementation:

public class Holloween
{
  public event EventHandler ScareMe;

  public void OldBoo()
  {
    ScareMe+=new EventHandler(DoIt);
  }

  public void Boo()
  {
    ScareMe(this, EventArgs.Empty);
  }

  public void DoIt(object sender, EventArgs args)
  {
    Console.WriteLine("Boo!");
  }
}

The New Way

With anonymous methods, we can implement the behavior inline:

public void NewBoo()
{
  ScareMe += delegate(object sender, EventArgs args) { Console.WriteLine("Boo!"); };
}

Async Tasks

We can do the same thing with the Thread class:

public void AsyncBoo()
{
  new Thread(delegate() { Console.WriteLine("Boo!"); }).Start();
}

Note that we write the method as "delegate()" (note the '()') because the Thread constructor accepts two delegate forms, and the empty parameter list tells the compiler to use the parameterless form (ThreadStart rather than ParameterizedThreadStart).

Updating The UI

My favorite example is calling the main application thread from a worker thread to update a UI component:

/// <summary>
/// Called from some async process:
/// </summary>
public void ApplicationThreadBoo()
{
  myForm.Invoke((MethodInvoker)delegate { textBox.Text = "Boo"; });
}

Iterators

Iterators are a huge improvement to working with collections.

The Old Way

In the days before iterators, we had to access collections with an indexer:

public class DaysOfWeek
{
  protected string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday" };

  public int Count { get { return days.Length; } }
  public string this[int idx] { get { return days[idx]; } }
}

public void TheOldWay()
{
  DaysOfWeek days=new DaysOfWeek();

  for (int i = 0; i < days.Count; i++)
  {
    Console.WriteLine(days[i]);
  }
}

The New Way

In the new approach, we can hide the indexing implementation, return each item with the “yield” keyword, and use the “foreach” keyword to iterate through the collection:

public class IterableDaysOfWeek : IEnumerable<string>
{
  protected string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday" };

  public IEnumerator<string> GetEnumerator()
  {
    for (int i = 0; i < days.Length; i++)
    {
      yield return days[i];
    }
  }

  // IEnumerable<T> also requires the non-generic IEnumerable implementation,
  // so we simply forward to the generic enumerator.
  System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
  {
    return GetEnumerator();
  }
}

public void TheNewWay()
{
  IterableDaysOfWeek days = new IterableDaysOfWeek();

  foreach (string day in days)
  {
    Console.WriteLine(day);
  }
}

This is much more readable and also ensures that we don’t access elements in the collection beyond the number of items in the collection. And yes, we could have used “foreach” in the GetEnumerator call as well – I left this as an indexed operation for illustrative purposes.

Nullable Types

Nullable types allow a value type to take on an additional "value": null. I've found this primarily useful when working with data tables. For example:

public class Record
{
  public int ID { get; set; }
  public string Name { get; set; }
  public int? ParentID { get; set; } 
}

public class NullableTypes
{
  protected DataTable people;

  public NullableTypes()
  {
    people = new DataTable();

    // Note that I am mixing a C# 3.0 feature here, Object Initializers,
    // with regard to how AllowDBNull is initialized.  I'm doing this because I think
    // the example is more readable, even though not C# 2.0 compilable.

    people.Columns.Add(new DataColumn("ID", typeof(int)) {AllowDBNull=false});
    people.Columns.Add(new DataColumn("Name", typeof(string)) { AllowDBNull = false });
    people.Columns.Add(new DataColumn("ParentID", typeof(int)) { AllowDBNull = true });

    DataRow row = people.NewRow();
    row["ID"] = 1;
    row["Name"] = "Marc";
    row["ParentID"] = DBNull.Value; // Marc does not have a parent!
    people.Rows.Add(row);
  }

  public Record GetRecord(int idx)
  {
    return new Record()
    {
      ID = people.Rows[idx].Field<int>("ID"),
      Name = people.Rows[idx].Field<string>("Name"),
      ParentID = people.Rows[idx].Field<int?>("ParentID"),
    };
  }
}

In the above example, the Field extension method (I’ll discuss extension methods later) converts DBNull.Value automatically to a “null”, which in this schema is a valid foreign key value.
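Outside of data tables, the basic syntax is worth a quick illustration (a trivial sketch); the null-coalescing operator "??", also part of C# 2.0, pairs naturally with nullable types:

int? parentID = null;            // a value type that can also hold "no value"

if (!parentID.HasValue)
{
  Console.WriteLine("No parent assigned.");
}

int id = parentID ?? 0;          // use 0 when parentID is null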

You will also see nullable types used in various third party frameworks to represent “no value.” For example, in the DevExpress framework, a checkbox can be set to false, true, or no value. The reason for this is again to support mapping a control directly to a structure that backs a table with nullable fields. That said, I think you would most likely see nullable types in ORM implementations.

Private Setters (properties)

A private setter exposes a property as read-only, which is different from designating a field as readonly. A field designated readonly can only be initialized during construction or in the variable initializer. With a private setter, the property can be exposed as read-only to the outside world, while the class implementing the property can still write to it:

public class PrivateSetter
{
  private int readable;
  public readonly int readable2;

  public int Readable
  {
    get { return readable; }
    // Accessible only by this class.
    private set { readable = value; }
  }

  public int Readable2
  {
    get { return readable2; }
    // what would the setter do here?
  }

  public PrivateSetter()
  {
    // readonly fields can be initialized in the constructor.
    readable2 = 20;
  }

  public void Update()
  {
    // Allowed:
    Readable = 10;
    // Not allowed:
    // readable2 = 30;
  }
}

Contrast the above implementation with C# 3.0’s auto-implemented properties, which I discuss below.

Method Group Conversions (delegates)

I must admit to a “what the heck is this?” experience for this feature. First (for my education) a “method group” is a set of methods of the same name. In other words, a method with multiple overloads. This post was very helpful. I stumbled across this post that explained method group conversion with delegates. This also appears to have to do with covariance and contravariance, features of C# 4.0. Read more here. But let’s try the basic concept, which is to assign a method to a delegate without having to use “new” (even though behind the scenes, that’s apparently what the IL is emitting).

The Old Way

public class MethodGroupConversion
{
  public delegate string ChangeString(string str);
  public ChangeString StringOperation;

  public MethodGroupConversion()
  {
    StringOperation = new ChangeString(AddSpaces);
  }

  public string Go(string str)
  {
    return StringOperation(str);
  }

  protected string AddSpaces(string str)
  {
    return str + " ";
  }
}

The New Way

We replace the constructor with a more straightforward assignment:

public MethodGroupConversion()
{
  StringOperation = AddSpaces;
}

OK, that seems simple enough.

C# 3.0 Features

Implicitly Typed Local Variables

The "var" keyword is a new feature of C# 3.0. Using the "var" keyword, you are relying on the compiler to infer the variable type rather than explicitly defining it. So, for example:

public void Example1()
{
  // old:
  Dictionary<string, int> explicitDict = new Dictionary<string, int>();

  // new:
  var implicitDict = new Dictionary<string, int>();
}

While it seems like syntactic sugar, the real strength of implicit typing lies in its use in conjunction with anonymous types (see below).

Restrictions

Note the phrase "local variables" in the heading for this section. Implicit typing applies only to local variables: you cannot declare method parameters, return types, or fields as "var".
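A quick illustration of those restrictions (the commented-out lines will not compile):

public void Allowed()
{
  var list = new List<int>();        // fine: implicitly typed local variable
}

// private var counter = 0;          // not allowed: fields cannot be 'var'
// public var GetList() { ... }      // not allowed: return types cannot be 'var'
// public void Process(var item) { } // not allowed: parameters cannot be 'var'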

Object and Collection Initializers

The Old Way

Previously, to initialize property values from outside of the class, we would have to either use a constructor:

public Record(int id, string name, int? parentID)
{
  ID = id;
  Name = name;
  ParentID = parentID;
}
...
new Record(1, "Marc", null);

or initialize the properties separately:

Record rec=new Record();
rec.ID = 1;
rec.Name = "Marc";
rec.ParentID = null;

The New Way

In its explicit form, this simply allows us to initialize properties and collections when we create the object. We've already seen examples in the code above:

Record r = new Record() {ID = 1, Name = "Marc", ParentID = 3};

More interesting is how this feature is used to initialize anonymous types (see below), especially with LINQ.

Initializing Collections

Similarly, a collection can be initialized inline:

List<Record> records = new List<Record>()
{
  new Record(1, "Marc", null),
  new Record(2, "Ian", 1),
};

Auto-Implemented Properties

In the C# 2.0 section, I described the private setter for properties. Let’s look at the same implementation using auto-implemented properties:

public class AutoImplement
{
  public int Readable { get; private set; }
  public int Readable2 { get { return 20; } }

  public void Update()
  {
    // Allowed:
    Readable = 10;
    // Not allowed:
    // Readable2 = 30;
  }
}

The code is a lot cleaner, but the disadvantage is that, for properties that need to fire events or have some other business logic or validation associated with them, you have to go back to the old way of implementing the backing field manually (see the sketch below). One proposed solution to firing property change events for auto-implemented properties is to use AOP techniques, as written up on Tamir Khason's Code Project technical blog.
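For example, here is a minimal sketch of the kind of property that still needs a manual backing field, because its setter raises a change notification via INotifyPropertyChanged:

using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
  private string name;

  public event PropertyChangedEventHandler PropertyChanged;

  public string Name
  {
    get { return name; }
    set
    {
      if (name == value) return;
      name = value;

      // The manual backing field gives us a place to raise the event.
      PropertyChangedEventHandler handler = PropertyChanged;
      if (handler != null)
      {
        handler(this, new PropertyChangedEventArgs("Name"));
      }
    }
  }
}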

Anonymous Types

Anonymous types let us create "structures" without defining a backing class or struct, relying on implicit typing (var) and object initializers. For example, if we have a collection of "Record" objects, we can return a subset of the properties in this LINQ statement:

public void Example()
{
  List<Record> records = new List<Record>()
  {
    new Record(1, "Marc", null),
    new Record(2, "Ian", 1),
  };

  var idAndName = from r in records select new { r.ID, r.Name };
}

Here we see how several features come into play at once:

  • LINQ
  • Implicit types
  • Object initialization
  • Anonymous types

If we run the debugger and inspect “idAndName”, we’ll see that it has a value:

{System.Linq.Enumerable.WhereSelectListIterator<CSharpComparison.Record,<>f__AnonymousType0<int,string>>}

and (ready for it?) the type:

System.Collections.Generic.IEnumerable<<>f__AnonymousType0<int,string>> {System.Linq.Enumerable.WhereSelectListIterator<CSharpComparison.Record,<>f__AnonymousType0<int,string>>}

Imagine having to explicitly state that type name. We can see advantages of implicit types, especially in conjunction with anonymous types.

Extension Methods

Extension methods are a mechanism for extending the behavior of a class external to its implementation. For example, the String class is sealed, so we can't inherit from it, but there are a lot of useful functions that the String class doesn't provide. For example, working with Graphviz, I often need to put quotes around the object name.

Before Extension Methods

Before extension methods, I would probably end up writing something like this:

string graphVizObjectName = "\"" + name +"\"";

Not very readable, re-usable, or bug proof (what if name is null?)

With Extension Methods

With extension methods, I can write an extension:

public static class StringHelpersExtensions
{
  public static string Quote(this String src)
  {
    return "\"" + src + "\"";
  }
}

(ok, that part looks pretty much the same) – but I would use it like this:

string graphVizObjectName = name.Quote();

Not only is this more readable, but it’s also more reusable, as the behavior is now exposed everywhere.

Query Expressions

Query expressions seem to be a synonymous phrase for LINQ (Language-Integrated Query). Humorously, the Microsoft website I just referenced has the header "LINQ Query Expressions." Redundant!

Query expressions are written in a declarative syntax and provide the ability to query an enumerable or “queriable” object using complex filters, ordering, grouping, and joins, very similar in fact to how you would work with SQL and relational data.

As I wrote about above with regards to anonymous types, here’s a LINQ statement:

var idAndName = from r in records select new { r.ID, r.Name };

LINQ expressions can get really complex, and working with .NET classes and LINQ relies heavily on extension methods. LINQ is far too large a topic (there are whole books on the subject) and is definitely outside the purview of this article!

Left and Right Joins

Joins by default in LINQ are inner joins. I was recently looking into how to do left and right joins and came across this useful post; a rough sketch is shown below.
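For reference, a left join is typically written with a group join plus DefaultIfEmpty. Here is a rough sketch using the Record class from earlier (every record appears in the result, even those with no matching parent):

var withParents =
  from r in records
  join p in records on r.ParentID equals (int?)p.ID into parents
  from p in parents.DefaultIfEmpty()
  select new
  {
    Child = r.Name,
    Parent = p == null ? "(none)" : p.Name
  };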

Lambda Expressions

Lambda expressions are a fundamental part of working with LINQ. You usually will not find LINQ without lambda expressions. A lambda expression is an anonymous method (ah ha!) that “can contain expressions and statements, and can be used to create delegates or expression tree types…The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block.” (taken from the website referenced above.)

In LINQ, I could write:

var idAndName = from r in records 
  where r.Name=="Marc"
  select new { r.ID, r.Name };

and I’d get the names of people with the name “Marc”. With a lambda expression and the extension methods provided for a generic List, I can write:

var idAndName2 = records.Where(r => r.Name == "Marc").Select(r => new { r.ID, r.Name });

LINQ and lambda expressions can be combined. For example, here’s some code from an article I recently wrote:

var unassoc = from et in dataSet.Tables["EntityType"].AsEnumerable()
  where !(dataSet.Tables["RelationshipType"].AsEnumerable().Any(
     rt => 
       (rt.Field<int>("EntityATypeID") == assocToAllEntity.ID) && 
       (rt.Field<int>("EntityBTypeID") == et.Field<int>("ID"))))
  select new { Name = et.Field<string>("Name"), ID = et.Field<int>("ID") };

LINQ, lambda expressions, anonymous types, implicit types, collection initializers and object initializers all work together to more concisely express the intent of the code. Previously, we would have to do this with nested for loops and lots of “if” statements.

Expression Trees

Let's revisit the MyVector example. With expression trees, we can compile type-specific code at runtime, which allows us to work with generic numeric types in a performance-efficient manner (compare with "dynamic" in C# 4.0, discussed below).

public class MyVector<T>
{
  private static readonly Func<T, T, T> Add;

  // Create and cache adder delegate in the static constructor.
  // Will throw a TypeInitializationException if you can't add Ts or if T + T != T 
  static MyVector()
  {
    var firstOperand = Expression.Parameter(typeof(T), "x");
    var secondOperand = Expression.Parameter(typeof(T), "y");
    var body = Expression.Add(firstOperand, secondOperand);
    Add = Expression.Lambda<Func<T, T, T>>(body, firstOperand, secondOperand).Compile();
  }

  public T X { get; set; }
  public T Y { get; set; }

  public MyVector(T x, T y)
  {
    X = x;
    Y = y;
  }

  public MyVector<T> AddVector(MyVector<T> v)
  {
    return new MyVector<T>(Add(X, v.X), Add(Y, v.Y));
  }
}

The above example comes from a post on stackoverflow.
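A quick usage sketch of the class above:

MyVector<int> a = new MyVector<int>(1, 2);
MyVector<int> b = new MyVector<int>(10, 11);
MyVector<int> c = a.AddVector(b);

Console.WriteLine(c.X + ", " + c.Y);   // 11, 13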

C# 4.0 Features

Dynamic Binding

Let's revisit the MyVector implementation once more. With the dynamic keyword, we can defer the addition to runtime, when the actual type is known.

public class MyVector<T>
{
  public T X { get; set; }
  public T Y { get; set; }

  public MyVector() {}

  public MyVector<T> AddVector(MyVector<T> v)
  {
    return new MyVector<T>()
    {
      X = (dynamic)X + v.X,
      Y = (dynamic)Y + v.Y,
    };
  }
}

Because this resolves the operation at runtime via dynamic binding, it is much less efficient than the expression tree approach. According to the MSDN page referenced in the link above: The dynamic type simplifies access to COM APIs such as the Office Automation APIs, and also to dynamic APIs such as IronPython libraries, and to the HTML Document Object Model (DOM).

Named and Optional Arguments

As with the dynamic keyword, the primary purpose of this is to facilitate calls to COM. From the MSDN link referenced above:

Named arguments enable you to specify an argument for a particular parameter by associating the argument with the parameter’s name rather than with the parameter’s position in the parameter list. Optional arguments enable you to omit arguments for some parameters. Both techniques can be used with methods, indexers, constructors, and delegates.

When you use named and optional arguments, the arguments are evaluated in the order in which they appear in the argument list, not the parameter list.

Named and optional parameters, when used together, enable you to supply arguments for only a few parameters from a list of optional parameters. This capability greatly facilitates calls to COM interfaces such as the Microsoft Office Automation APIs.

I have never used named arguments and I rarely need to use optional arguments, though I remember when I moved from C++ to C#, kicking and screaming that optional arguments weren’t part of the C# language specification!

Example

We can use named and optional arguments to specify exactly which arguments we are supplying to a method:

public class NamedAndOptionalArgs
{
  public void Foo()
  {
    Bar(a: 1, c: 5);
  }

  public void Bar(int a, int b=1, int c=2)
  {
    // do something.
  }
}

As this example illustrates, we can specify the value for a, use the default value for b, and specify a non-default value for c. While I find named arguments to be of limited use in regular C# programming, optional arguments are definitely a nice thing to have.

Optional Arguments, The Old Way

Previously, we would have to write something like this:

public void OldWay()
{
  BarOld(1);
  BarOld(1, 2);
}

public void BarOld(int a)
{
  // 5 being the default value.
  BarOld(a, 5);
}

public void BarOld(int a, int b)
{
  // do something.
}

The syntax available in C# 4.0 is much cleaner.

Generic Covariance and Contravariance

What do these words even mean? From Wikipedia:

  • covariant: converting from wider to smaller (like double to float)
  • contravariant: converting from narrower to wider (like float to double)

First, let’s look at co-contravariance with delegates, which has been around since Visual Studio 2005.

Delegates

Not wanting to restate the excellent “read more” example referenced above, I will simply state that covariance allows us to assign a method returning a sub-class type to the delegate defined as returning a base class type. This is an example of going from something wider (the base class) to something smaller (the inherited class) in terms of derivation.

Contravariance, with regards to delegates, lets us create a method in which the argument is the base class and the caller is using a sub-class (going from narrower to wider). For example, I remember being annoyed that I could not consume an event having a MouseEventArgs argument with a generic event handler having an EventArgs argument. This example of contravariance has been around since VS2005, but it makes for a useful example of the concept.

Generics

Again, the MSDN page referenced is an excellent read (in my opinion) on co- and contravariance with generics. To briefly summarize: as with delegates, covariance allows a generic return type to be covariant, letting you declare a "wide" (more general) return type but use a "smaller" (more specialized) return type. So, for example, the generic interfaces for enumeration support covariance.

Conversely, contravariance lets us go from something narrow (more specialized, a derived class) to something wider (more general, a base class), and is used as parameters in generic interfaces such as IComparer.

But How Do I Define My Own?

To specify a covariant return parameter, we use the “out” keyword in the generic type. To specify a contravariant method parameter, we use the “in” keyword in the generic type. For example (read more here):

public delegate T2 MyFunc<in T1,out T2>(T1 t1);

T2 is the covariant return type and T1 is the contravariant method parameter.
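The built-in generic interfaces and delegates show both behaviors in action; a brief sketch:

// IEnumerable<out T> is covariant: a sequence of strings can be used
// where a sequence of objects is expected.
IEnumerable<string> names = new List<string> { "Marc", "Ian" };
IEnumerable<object> objects = names;

// Action<in T> is contravariant: a handler written for the base type can
// stand in for a handler of the derived type.
Action<object> printAnything = o => Console.WriteLine(o);
Action<string> printString = printAnything;
printString("covariance and contravariance");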

A further example is here.

Conclusion

In writing this, I was surprised how much I learned that deepened my understanding of C# as well as getting a broader picture of the arc of the language’s evolution. This was a really useful exercise!

 

(- Marc Clifton)

Multithreading and WPF 4.5

February 13, 2012

WPF 4.5 has improved its support for multi-threaded data binding, but the technique is still risky. This report attempts to explain how it works and what’s involved in using it safely.

WPF data binding has always had haphazard support for multi-threading. When an object raises a property changed event on a non-UI thread the data binding infrastructure is kicked into gear. And generally this works, though it isn’t really safe because of potential race conditions. From a computer science perspective it would be more correct to simply disallow cross-thread access, which is actually the case for the collection changed event.

Unfortunately, developers don't always care about correctness; they just want to get something done. So we end up with various attempts at a "thread-safe" or "dispatcher-safe" observable collection. In all these attempts the fundamental design is to marshal the collection-changed event to the correct thread before invoking it. In this case the correct thread is whichever one the dispatcher is running on. Unfortunately this doesn't eliminate the possibility of a race condition.

With WPF 4.5, Microsoft is offering developers a much safer alternative. By calling BindingOperations.EnableCollectionSynchronization, the WPF data binding engine participates in locking. The default behavior is to acquire a lock on the object specified in the aforementioned call, but you also have the option to use more complex locking schemes. Unfortunately this is an error prone technique; it is easy to forget to acquire the collection’s lock while on the background thread. You can also forget to disable the collection synchronization when the collection is no longer needed, which could create a memory leak.
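A minimal sketch of the pattern (the type and member names here are invented for illustration): the view model registers the collection and a lock object with the binding engine, and the background thread must take that same lock for every update.

using System.Collections.ObjectModel;
using System.Windows.Data;

public class LogViewModel
{
  private readonly object _itemsLock = new object();

  public ObservableCollection<string> Items { get; private set; }

  public LogViewModel()
  {
    Items = new ObservableCollection<string>();

    // Tell the binding engine which lock it should acquire when it
    // reads the collection for the UI thread.
    BindingOperations.EnableCollectionSynchronization(Items, _itemsLock);
  }

  // Called from a worker thread.
  public void Add(string message)
  {
    lock (_itemsLock)   // easy to forget -- and then you have a race again
    {
      Items.Add(message);
    }
  }
}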

Another problem with this technique is that it doesn’t protect individual objects. So while the collection is being read under a lock, properties on each item in the collection are not necessarily being safely read. This is mostly a problem for complex getters and properties that cannot be set atomically (e.g. large value types).

We highly recommend that anyone using a background thread to update a collection only use immutable objects in that collection. Or if the objects cannot be made immutable, extreme care should be taken to at least make their property getters thread-safe. And when push comes to shove, you are probably better off forgetting that this feature exists and just marshal your collection updates to the UI thread.

Working with Object Context in the ADO.NET Entity Framework

February 10, 2012

The ADO.NET Entity Framework is an extended object relational mapping (ORM) tool from Microsoft that has become increasingly popular over the past few years and is widely used these days. Microsoft designed the ADO.NET Entity Framework to objectify an application’s data and simplify the development of data-aware applications. Version 4.0 of the ADO.NET Entity Framework ships with Microsoft Visual Studio 2010 and offers a lot of new and enhanced features.

This article explains the basics of ADO.NET Entity Framework and shows how you can write programs that leverage the generic Object Context.

The Basics of ADO.NET Entity Framework

The first question that comes to one’s mind is “What is the ADO.NET Entity Framework?” What is it all about? Well, the ADO.NET Entity Framework (or Entity Framework, as it is popularly called) is an extended ORM development tool from Microsoft that helps you abstract the object model of an application from its relational or logical model. According to MSDN, “The ADO.NET Entity Framework enables developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications.”

The Entity Framework is comprised of three layers:

  • The Conceptual Layer: Represented using CSDL (Conceptual Data Language)
  • The Storage Layer: Represented using SSDL (Store-specific Data Language)
  • The Mapping Layer: Represented using MSL (Mapping Schema Language)

You can use any one of the following to query data exposed by the Entity Data Model, based on its entity relationship model.

  • LINQ to entities
  • The entity client
  • Object services

The primary goal of the Entity Framework was to raise the level of abstraction and simplify development of data-aware applications with reduced effort and reduced KLOC.

Working with the Object Context

The Object Context in Entity Framework (much like the DataContext in LINQ to SQL) is the gateway for executing your queries against the Entity Data Model. Any object that is returned as a result of a query execution is attached to the Object Context. The Object Context can in turn track changes to the object and also persist the object to the data store.

The Object Context interacts with the database and abstracts the way the connection string, the connection to the underlying database, the queries, and the stored procedures are executed. It also manages reads and writes to and from the database.

The Object Context uses the ObjectStateManager to manage the state changes of objects and then applies those changes to the underlying data store appropriately. The Object Context in Entity Framework is represented by the class called ObjectContext.
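As a rough sketch (reusing the PayrollEntities and Employee names from the CRUD examples below), you can ask the ObjectStateManager directly what it knows about an attached entity:

using (PayrollEntities dataContext = new PayrollEntities())
{
  Employee emp = dataContext.Employees.First(e => e.EmployeeID == 1);

  ObjectStateEntry entry = dataContext.ObjectStateManager.GetObjectStateEntry(emp);
  Console.WriteLine(entry.State);   // Unchanged

  emp.Address = "Hyderabad";
  Console.WriteLine(entry.State);   // Modified, once the context has tracked the change
}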

CRUD Operations Using the ObjectContext

You can use the Object Context to perform CRUD operations on your data exposed by the Entity Data Model. Here’s how:

  1. Create the record. Let's create an employee record, as an example:

     PayrollEntities dataContext = new PayrollEntities();
     try
     {
       Employee emp = new Employee
       {
         EmployeeID = 1,
         FirstName = "Joydip",
         LastName = "Kanjilal",
         Address = "Kolkata"
       };
       dataContext.Employees.AddObject(emp);
       dataContext.SaveChanges();
     }
     catch
     {
       //Write your code here to handle errors.
     }

  2. Update the record. To modify the employee record, you can use the following code:

     PayrollEntities dataContext = new PayrollEntities();
     try
     {
       Employee emp = dataContext.Employees.First(e => e.EmployeeID == 1);
       emp.Address = "Hyderabad";
       dataContext.SaveChanges();
     }
     catch
     {
       //Write your code here to handle errors.
     }

  3. Delete the record. To delete the employee record, you can use the following code:

     PayrollEntities dataContext = new PayrollEntities();
     try
     {
       Employee emp = dataContext.Employees.First(e => e.EmployeeID == 1);
       dataContext.DeleteObject(emp);
       dataContext.SaveChanges();
     }
     catch
     {
       //Write your code here to handle errors.
     }

In the next section, you will explore how to query data exposed by the Entity Data Model using Object Context.

Querying the Entity Data Model Using Object Context

The following code snippet illustrates how you can use your Object Context instance to query data exposed by the Entity Data Model.

using (var dataContext = new NorthwindEntities())
{
  var Customers = from c in dataContext.Customer
                  select c;

  foreach (var Customer in Customers)
  {
    Console.WriteLine(String.Format("{0} {1}", Customer.Name, Customer.Address));
  }
}

Attaching and Detaching Objects from the Object Context

You can attach or detach objects to and from the Object Context using methods such as, Attach() or Detach(). To attach a previously detached object to the Object Context, you can use the following code:

using (NorthwindEntities dataContext = new NorthwindEntities())
{
  dataContext.Attach(employeeObj);
}

To detach objects from the ObjectContext you can use the System.Data.Objects.ObjectSet.Detach() method or the System.Data.Objects.ObjectContext.Detach(System.Object) method.

You can also specify the entity to be detached as follows:

dataContext.Entry(entity).State = EntityState.Detached;

To check if an entity is already detached, you can use this code.

if (entity == null || entity.EntityState == EntityState.Detached)
{
  Console.WriteLine("Entity is null or already detached.");
}

Creating a Shared Object Context

The following code snippet illustrates how you can create a shared Object Context so that it can be globally shared amongst all classes in the application. This is not suitable for ASP.NET applications though.

private static AdventureWorksObjectContext _objectContext;

public AdventureWorksObjectContext ObjectContext
{
  get
  {
    if (_objectContext == null)
      _objectContext = new AdventureWorksObjectContext();

    return _objectContext;
  }
}

Summary

ADO.NET Entity Framework 4.0 is a more mature ORM tool than its predecessors. The new and enhanced features in Entity Framework 4 include POCO support, Code First development, self-tracking entities, better testability, improved LINQ operator support, improved lazy loading support, and many others. In this article you explored the Object Context in Entity Framework.

Where Am I?

You are currently viewing the archives for February, 2012 at Naik Vinay.