Why Santa’s Marketing Works Better Than Yours!

January 31, 2011


(source: psychotactics.com)

Santa Claus Inc. is alive and profitable, right through recessions, depressions and just about any economic scenario. The reason his marketing strategies work better than yours is that he uses solid, dyed-in-the-wool psychology. He knows he doesn’t have to use newfangled techniques when his simple marketing has stood the test of time.

If you don’t believe in Santa, you’d better change your mind, because the fat man from the North Pole rocks on, and you can do the same if you stick to the basics. Find out if your product or service measures up by reading the article below.

Jingle Bells, Jingle Bells, Jingle All the Way…

If you go to the heart of Santa’s marketing, the one word you come away with is ‘consistency’. Generation after generation has been exposed to one brand, one message, and the same powerful imagery.

Just like Mercedes owns the term ‘luxury’ and Volvo owns the term ‘safety’, Santa owns the word ‘hope’. Every kid worth his Nintendo hopes he’s got enough points on the goodness scale to justify a mountain of gifts.

Yet most companies get tired of their own brand. They chop and change, pouring thousands (if not millions) of dollars into a bottomless pit of mindless change. Take a look at McDonald’s advertising, for instance. McDonald’s owns the term ‘family outing’, yet its ads have been straying down the teenager path.

Does It Make Sense To Consistently Occupy One Niche?

You bet it does! Families go out with their kids to McDonald’s. These kids sprout into budget-conscious teenagers who hang out at McDonald’s. They have kids and grandkids, and guess where they all end up. At the big yellow ‘M’, that’s where!

Santa doesn’t waver. His customers are kids. Like many marketers, he might have been sorely tempted to chase the whole gift market. With bad advice, he would have tried to reach teenagers, adults and everyone else. Can you see the magic still working? Even the tiniest of niches is huge, and niches have a way of expanding by themselves.

At the end of the day, it’s the consistency that takes the jingle all the way to the bank. Too many companies lose focus and give you seven reasons why you should buy from them. Santa sticks to one: Be a ‘good’ kid or you can keep hoping!

You Can Spot Him in the Middle of a Crowded Sky

Do you know anyone who comes to visit on a sleigh in the middle of the night? With reindeer and gifts? The reason why Santa stands out so vividly in our memories is because he’s different. The postman does the same thing, but leaves without the flourish.

It’s Really Important To Work Out How Your Marketing Message Differs

Santa’s marketing is built not solely on consistent branding but also on very hard-nosed differentiation. Too much communication out there plays it safe. Customers have just one slot in their mind for you. You have to enter that slot at such an angle that they remember you for life.

Rose Richards runs Office Doctor. What sets her apart from the rest of the administration crowd is the term ‘small business pain relief’. Can you imagine your reaction when you hear something like that?

The human mind is intensely curious, and a marketing statement like that is pure bait. You want to know what pain relief she brings and how she goes about it, especially if you’re the one in pain. And that’s only half the story. The construction of the message elevates her from simple number crunching to brain surgery and makes her unique.

If you want differentiation, you need look no further than the guiding light of Santa’s sleigh: Rudolph, with his shiny nose. Can you even remember the names of the other eight reindeer?

One very important point, however, is that the marketing message isn’t just different, but also customer-oriented. Rose takes the clutter out of administration and Rudolph provides a beacon for clearer navigation.

If you don’t have a benefit for the customer, just being different is going to get you nowhere.

Give and You Shall Receive

How many of you are out there networking like crazy? Trying desperately to fill your steadily depleting bank reserves? You want, want, want! Take a look at Santa’s style.

He’s into giving first. If you probe deep into your mind, you’ll find the people you like best are those who have given you their time, their money or their knowledge. You trust them, and it’s very hard to say no when they ask you for a favour in return.

The deepest core of human emotion is fear. Every single product or service, without exception, is sold on the basis of a problem. The only known antidote to fear is TRUST. When trust struts upwards, fear banishes itself to penguin land. The more you pile up the trust, the more business you can do.

Wouldn’t Santa be able to sell you just about anything? Wouldn’t he be able to cross-sell and up-sell products? Santa could knock on your door next summer and you’d be more than happy to have him join your barbecue.

It’s up to you to build up the trust one Lego block at a time. Identify your clients and see what you can give them. It could be information, time or even a scrumptious chocolate-covered cookie. It’s the old ‘What’s in it for me?’ principle. If you can’t find something calorie-laden for their minds or bodies, they won’t want to see you.

Play Santa. It works.

He Knows if You’ve Been Bad or Good…

Heck, Santa knows his customers. He even knows when you are sleeping or awake.

Then there’s you. Look at your biggest customer. What’s her name? When is her birthday? Does she like Indian curries or sushi? If curries, can she handle hot or medium? What does she think about you? What doesn’t she like?

You’re guessing, for sure. You can’t be dead certain, because you’ve been so busy looking at dollar signs that you’ve missed the plot completely. Santa’s marketing works because he intimately knows your individual needs. If you want a drum kit, you get one. If you want a Barbie, you don’t end up sulking with a xylophone.

Santa knows because he’s interested in giving. To give, you have to know exactly what the receiver wants or your gift is not worth the packaging it’s wrapped in.

Some people worry about invading personal privacy. Hogwash! When was the last time you got upset because a supplier turned up with a big chocolate cake (your favourite) for your birthday? Or with rare stamps for your son (because he loves collecting stamps)?

Santa invades our privacy gently and uses it to give, not to take. That’s why we don’t mind. The tax department, on the other hand, uses our information to take, and therein lies the principal difference.

Once a Customer, Always a Customer

Santa Doesn’t Lose Customers. Period.

One of the primary reasons he’s able to achieve this amazing feat is that he thinks of his customer’s customer. His customer is the kid, who in a few years gets a little wiser about Santa. His customer’s customer is the parent, who has the amazing power to get children to be nice, not naughty, if only for a short while.

Since the concept works in their favour, the parents do all the advertising. Without TV, radio or the internet, Santa’s message gets a grip on millions of kids around the planet. These kids grow up, and the marvel of Santa is handed down through the generations.

While It’s OK For Santa, How Would This Work In The Real World? Say, If You Sold Jeans.

Jeans West, a jeans retailer, has several of the answers. I needed one pair, but Stephanie (the sales girl) sold me two, not by hassling me, but by gently reminding me I would get $20 off the second pair.

Then, with my purchase, she gave me a $10 gift voucher, for my own use or to pass on. They also signed me up for a loyalty program offering a 10% discount if I purchased over $250 worth of product in the next six months.

This Is Effectively What Jeans West Did to Make Me a Permanent Customer.

Step 1: The sales person asked the right questions to find out my need.

Step 2: She up-sold the product giving me good value for money.

Step 3: A gift voucher with an expiry date ensured an additional purchase. Or, even better, the chance for me to pass it on to another person, thus ‘creating a customer’ for Jeans West.

Step 4: Tying my fickle consumer head into a loyalty scheme. They wanted me to stay with them forever.

Santa’s steps may vary, but in essence he ties you into a solid loyalty program that is near impossible to get out of. It’s ‘customers get customers’ rather than ‘advertising gets customers’. It’s cheaper and it works!

In conclusion, here are the main reasons why Santa’s customers keep coming back. These concepts may sound old, even trite, but they have been proven time after time to work well. Test them against your company and brand to see where you can learn from the man from the North Pole.

1) Solid branding: We’re not talking logos here. Consistency is the key. This applies everywhere, from networking meetings and advertising to any sort of communication that goes out. Keep hammering home the same unique message, and put it up front. The weather changes all the time, which is why we can’t trust it. If you must change, it should be because your old message isn’t doing a complete job. I changed our first tagline from ‘Recession proof business principles’ to ‘Reactivating dormant business clients.’ The proposition was the same, but the second line got 10 times the response.

2) Differentiation: Santa knows he can be a courier with a difference. You, too, can create your own legend. Nike used ‘Just Do It’. Coke threw in the concept of Rum and Coke, indelibly burning the word ‘classic’ into our consciousness. Sameness is in your mind. No matter how many brands exist on the market, your product has a fingerprint of its own. You just have to dig deep to find it.

3) Build trust by giving first: Life is all about sowing, then reaping, but sowing comes first. If you don’t give first, you will only get limited results. The more you stop thinking of yourself and focus on what the customer needs instead, the more you are trusted. Business is all about trust. If you don’t have it, you’re yesterday’s soup.

4) Know your customer… Like you know the hair on your head. Data collection and its optimum usage will get you right into their minds and keep you permanently rooted there. Every time they see you, they should think Santa is coming to town.

5) Reactivate dormant clients: They are all volcanoes, sitting there with the power to erupt mightily. Figure out who they are and how you can work in tandem with them. Forget your product or service. That’s a given; it has to be good. Find out the ‘everything else’ factor and you will keep them for life.

Like Santa does…

How to Handle Defects in Agile Projects

January 28, 2011


Using a preplanned method for handling bugs helps teams with a smooth delivery and keeps them out of reactionary mode.

 

For all software development teams, defects pose a challenge: when and how should they be handled? In agile, with its focus on quality code and user stories, teams can sometimes be confused about the actual mechanics of addressing defects. It is not a hard process, but the team needs to be clear on how it handles defects. Let’s start by defining what we mean by defects.

 

What is a Defect?

For the purposes of this article, we will use the term defects instead of bugs. In many teams there is confusion about when and where to identify defects. If the team agrees on a standard process, the mechanics become easy. Below are examples of good ways to handle this. These are not cookie-cutter solutions, but starting points that allow your team to try and refine these techniques.

Scrum Teams

In a standard Scrum team, the team will commit to completing, say, 25 points, which translates to, say, 6 stories. While working on these stories, someone will test each story before it can get to done. Sometimes that is a QA person; in a more generalized team, it is hopefully a different developer who tests the story.

If that tester finds a problem with the story, this should not be considered a bug or defect, although teams will sometimes label it as such. Labeling these issues as defects before the sprint for that story is done is confusing to the product manager. Because the story is not done, this cannot be a defect: the team has not yet said that it is ready for someone to use. The team acknowledges that it might not be finished; that is why there is testing!

However, if a bug is found in that story any time after the sprint is done, it should be considered a defect. So how does the team handle it?

Kanban Teams

If you are using a Kanban board and a defect is found after release (assuming release is your definition of done), that is the time to describe it as such. Identify the defect, create a task or story for it and place this in your backlog.

Keep in mind that since many Kanban teams have Service Level Agreements (SLAs) with their business owners, defects will need to be considered part of those. For such SLAs it is helpful to classify each defect as at least critical or non-critical. Look to other material, such as that by David J. Anderson [1], for help with those kinds of risk-class service levels. Now the team needs to handle this.

Give your defects visibility to Product Managers

First, teams must have business representation to help them make priority decisions. The most common way to do this is to designate someone as a Product Manager. This could be a team member who communicates regularly with business stakeholders. Ideally, it is a business person who works directly with the team.

Once a defect has been spotted (a defect according to our criteria above), the team needs to make it visible. For a Scrum team, this means logging it in the product backlog, as you would a story. A team using Kanban would get the defect into its parking lot. This also means the team gives business the opportunity to prioritize the defect work.

The exception is critical production defects, which are covered in a section below. For all other defects, once they are in some type of backlog, the team needs to plan when to handle them.

One question that might come up is whether we should estimate the effort for these defects. Opinions differ [2], but to give business an idea of the effort involved, it makes sense to size them in some way. If your team is used to story points, then use that method.

For teams using Scrum, tasking could be used. In other words, during the planning session the team creates the task time estimates for a defect. They may estimate 10 hours for the dev tasks and 4 hours for testing, and now the team has an estimate that can be checked against the team’s total time budget to help decide whether the sprint commitment is reasonable.
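As a rough illustration of that arithmetic (the class name and the capacity figures below are hypothetical, not from any particular tool), checking a commitment against the time budget is just a sum and a comparison:

using System;
using System.Collections.Generic;
using System.Linq;

class SprintBudgetCheck
{
    static void Main()
    {
        // Hypothetical team capacity: 4 people x 6 focused hours x 10 days.
        const double budgetHours = 240;

        // Task estimates from planning; defect tasks sized alongside story tasks.
        var taskHours = new Dictionary<string, double>
        {
            { "Story A - dev", 30 },  { "Story A - test", 12 },
            { "Story B - dev", 24 },  { "Story B - test", 8 },
            { "Defect 42 - dev", 10 }, { "Defect 42 - test", 4 },
        };

        double committed = taskHours.Values.Sum();
        Console.WriteLine("Committed {0}h of a {1}h budget: {2}",
            committed, budgetHours,
            committed <= budgetHours ? "commitment looks reasonable" : "over-committed");
    }
}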

 

If the team is using Kanban, point estimating makes sense. That way the product owner can weigh the risk of deferring the defect against getting a new feature out. Many defects found really are more annoying than critical. So a savvy product owner can identify which issues will harm the product if left unfixed, as opposed to where a new feature is needed more.

For Kanban, once the priority of the defect has been identified, the token or story is placed in the appropriate place on the board. In both the Scrum and Kanban scenarios it is important to differentiate defects from stories. The reason is to track defects, to make sure your quality process is on track.

For instance, see Example 1:

Example 1 (Sample burndown chart tracking defects)

Looking at this chart, our burndown shows that the team is working well through its commitments and burning down nicely. However, the defect count shown at the bottom has a disturbing tendency to rise as we get further into the project. Maybe the team expects this, but it would be a great topic for a retrospective. Tracking the team’s defect rate helps drive improvement.

Production-Stopping Defects

The exception to the above is any defect that is affecting production. In this case the team needs to have rules on how to handle it before it happens. Make sure your team is prepared to handle this inevitability.

Certainly, if a critical production defect is found, the entire team can swarm on it to get the fix done and pushed out to production. The advantages include a quick turnaround time, and the developers who originally worked on that piece of code will be involved (although if the team uses Pair Programming, this matters less). Disadvantages include the possibility of a Scrum team blowing its commitment to the sprint. For a Kanban team, it might mean blowing the SLA to the customer.

These are the risks and advantages of a swarming strategy. Another consideration is how many critical defects, on average, the team produces within a given time period. One would hope this is a rare exception, but sometimes that is not the case. Consider this aspect before deciding on a swarming strategy, and back it up with hard data.

Another way to handle these issues is to designate a team member, for a short length of time, as your on-call person. It has been called the ‘batman’ position: the position no one wants. The idea is that this member of the team is not counted on to contribute significantly to new project work for that period. They triage the production issues to determine whether each is critical and needs to be fixed now, or can be put in the backlog.

It also makes sense for this person, during slack time, either to work on prioritized work (maybe helping with testing) or to treat it as learning time. Allow them to investigate a new technology the team wants to use. Besides helping the team, this makes the position a little more attractive to team members, since it is usually not a task anyone looks forward to.

This method helps keep your new production work more predictable, and allows the team to focus on completing the work committed to.

Handling Defects in an Agile way

These strategies for attacking defects are not necessarily agile in nature, but they work well in agile teams. The real trick is to make sure that the team decides which method to use; this should not be decreed by management. Management can certainly demand that the team document how it handles defects, though. That seems reasonable.

Using some preplanned method for handling defects helps teams deliver smoothly and keeps them out of reactionary mode. Make sure that your team has a plan!

 

Fast and Less Fast Loops in C#

January 25, 2011


By Simeon Sheye

 

How fast can a loop reading from memory be made to run, and how do loop constructs, data types, interfaces, unrolling and hoisting affect performance?

 

 

Introduction

After I wrote the first article on QS, I decided to use the tool for a few experiments investigating how the CPU cache affects performance. During these experiments I gained a few insights into the performance of various C# constructs, which I will share with you here.

Background

The memory system of your PC most likely consists of a large but slow main memory and smaller but faster CPU caches for instructions, data and virtual memory management. The experiment I originally set out to do was about the data cache and specifically about read performance, so here is a short and simplified description of how a memory read works:

When a memory address is accessed, the CPU sends its request to the closest cache (L1), and if the cache holds the value for that address it simply responds with the value. In fact, the cache will not respond with just the value but will have the entire line containing the address ready (on my system that is 64 bytes). The trick is that finding the line and preparing it is slow (3 clock cycles on my machine) while fetching the data is fast (1 word per cycle), so if the CPU requests other data in the line over the next couple of cycles, that data will be returned much faster than the first piece of data. The CPU will do its best to predict the next memory access so that the cache can prepare the next line while the CPU works its way through the first line.

So what happens if the L1 cache does not contain the data we are looking for? The L1 cache will look in the L2 cache and if it’s there the exact same thing happens as above, but this time with a higher latency (17 clock cycles on my machine). And what if it is not in L2? Then the main memory (or L3 if such a cache is present) is accessed, again with an increased latency, this time a whopping 290 cycles on my system.

If you want to learn more about how caches work, see the Wikipedia article http://en.wikipedia.org/wiki/CPU_cache or check out the document “What Every Programmer Should Know About Memory” at http://www.unilim.fr/sci/wiki/_media/cali/cpumemory.pdf

If you are curious about your own system, you can use a benchmark tool to find its characteristics. I used SiSoft Sandra from http://www.sisoftware.net/ for this article.

Measuring Cache Effects

How can the effects of a cache be measured? It is a matter of allocating buffers of various sizes and reading from these buffers while timing how long it takes; if the buffer fits into L1 we should expect fast access times, and slower times if the data is in L2 or main memory. In reality it’s more complicated, and different access patterns will yield different results due to line sizes, associativity, pre-fetching and pipelining within the CPU, but the first step is really to find out if C# / .NET is fast enough to reveal any cache effects at all. This question is the subject of the remainder of the article.
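As a sketch of what such a measurement harness can look like (a simplified stand-in, not the QS-based code actually used for this article; the buffer sizes and the 64MB total are arbitrary choices):

using System;
using System.Diagnostics;

class CacheProbe
{
  static void Main()
  {
    // Sum buffers of increasing size; larger buffers spill from L1 to L2
    // to main memory, which should show up as a lower effective read rate.
    foreach (int kb in new[] { 4, 32, 256, 2048, 16384 })
    {
      int[] buffer = new int[kb * 1024 / sizeof(int)];
      long totalReads = 64L * 1024 * 1024 / sizeof(int); // read 64MB in total
      long passes = totalReads / buffer.Length;

      long unused = 0;
      Stopwatch sw = Stopwatch.StartNew();
      for (long p = 0; p < passes; ++p)
      {
        for (int j = 0; j != buffer.Length; ++j)
        {
          unused += buffer[j]; // the sum keeps the loop body from being eliminated
        }
      }
      sw.Stop();

      double gbPerSec = totalReads * sizeof(int) / (sw.Elapsed.TotalSeconds * 1e9);
      Console.WriteLine("{0,6} KB: {1:F2} GB/s (checksum {2})", kb, gbPerSec, unused);
    }
  }
}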

Before we start, here are some facts about my system:

CPU: Intel T9600 dual core CPU @ 2.8GHz
L1i & L1d caches each 16KB
L2 cache 6MB shared between cores
Main memory: 4 GB dual channel PC6400 DDR2 RAM
Max theoretical transfer rate (MTTR) = 12.8 GB/s.

Since I have a 32 bit OS, I’ll assume that the CPU with a single thread can read at most one 32 bit word per cycle from the memory system (i.e. from the L1d cache). At 2.8 GHz this yields a maximum theoretical transfer rate from L1d into core of 11.2 GB/s.

Measurements

The first experiment I set out to do was to see how close I could get to the L1d->Core transfer rate of 11.2 GB/s. That involved creating a number of methods with different loop constructs and using different data types. All experiments have an inner loop and an outer loop, and in many of the experiments the loop has been unrolled. The inner loop sequentially reads and sums the contents of a 4KB buffer into a local variable (the sum is to prevent the compiler from eliminating the loop body), while the outer loop is repeated until a total of 64MB has been read. The 4KB buffer fits into the L1d cache, ensuring the fastest memory access possible, so any difference in performance has to do with the loop construct.

Regarding measurements, all measurements have been repeated at least 100 times and the best execution times have been picked.

The table below shows the results, and as you can see, there is almost a factor of 10 between the fastest and the slowest loop. The number of cycles is computed based on the assumption that the max transfer rate is 11.2 GB/s.

 

Method | Description | Time (ms) | Transfer rate (GB/s) | % Max | Cycles / read | Reads / iteration | Cycles / iteration

Validation that 64-bit reads (long) do not increase transfer rate:
For4_Unroll4_Long | | 17,36 | 3,87 | 35% | 2,90 | 4 | 11,59

Different ways of indexing the buffer and different amounts of unrolling:
For4_Unroll4 | For-loop, 16 x unroll, prefix++ | 6,23 | 10,76 | 96% | 1,04 | 16 | 16,65
For4_Unroll3 | For-loop, 4 x unroll, prefix++ | 7,52 | 8,93 | 80% | 1,25 | 4 | 5,02
For4_Unroll2 | For-loop, 4 x unroll, postfix++ | 14,76 | 4,55 | 41% | 2,46 | 4 | 9,85
For4_Unroll1 | For-loop, 4 x unroll, index + offset | 8,91 | 7,53 | 67% | 1,49 | 4 | 5,95
For4_Foreach | Foreach loop | 8,92 | 7,52 | 67% | 1,49 | 1 | 1,49

Different ways of setting up the loop:
For1_For1 | Loop variable declared in loop, custom property read in loop | 17,71 | 3,79 | 34% | 2,96 | 1 | 2,96
For1_For2 | Loop variable declared in loop, array.Length property read in loop | 11,84 | 5,67 | 51% | 1,98 | 1 | 1,98
For3_For3 | Loop variable declared in loop, array.Length property read before loop | 11,87 | 5,65 | 50% | 1,98 | 1 | 1,98
For4_For4 | Loop variable declared outside loop, array.Length property read before loop | 11,91 | 5,64 | 50% | 1,99 | 1 | 1,99
For4_For5 | Loop variable declared outside loop, array.Length property read before loop, index incremented in loop body | 11,82 | 5,68 | 51% | 1,97 | 1 | 1,97
For4_While | Inner loop using while | 11,82 | 5,68 | 51% | 1,97 | 1 | 1,97
For4_DoWhile | Inner loop using do-while | 11,80 | 5,69 | 51% | 1,97 | 1 | 1,97

Using generic List<int> instead of array int[]:
For4_Unroll4_List | 16 x unroll, reading through the list indexer | 23,82 | 2,82 | 25% | 3,98 | 16 | 63,60
For4_Foreach_List | Foreach loop | 59,04 | 1,14 | 10% | 9,85 | 1 | 9,85

 

Establishing a Baseline

The fastest loop, For4_Unroll4, is very close to the maximum theoretical speed and is implemented like this:

int unused = 0, i, j;
int imax = b.spec.CountIterations;
int jmax = b.spec.CountItems;
TestContext.Current.Timer.Start();
for (i = 0; i != imax; ++i)
{
  for (j = 0; j != jmax; )
  {
    unused += b.intArray[j]; ++j;
    // repeated 16 times
  }
}
TestContext.Current.Timer.Stop();

For4_Unroll3 is identical but only unrolled four times, while For4_For5 has the same structure with no unrolling. From For4_For5 to For4_Unroll3 the unrolling shaves away 75% of the iterations in the inner loop, from 16M down to 4M iterations, and from For4_Unroll3 to For4_Unroll4 another 75% is shaved away, down to 1M iterations. Since all methods do the same work of summing up 16M integers and the only difference is the number of times the inner loop is repeated, we can compare the number of cycles per iteration, which reveals that the loop itself (i.e. compare and branch) costs around 1 cycle, while reading the integer, adding it to a local variable and incrementing the index costs another cycle.

Postfix Increment Considered Harmful

Postfix operators (e.g. i++) are generally considered bad style because they are error-prone to use, but they can nevertheless save a lazy programmer a couple of keystrokes. Consider the inner loop of the method For4_Unroll2:

for (j = 0; j != jmax; )
{
  unused += b.intArray[j++];
  unused += b.intArray[j++];
  unused += b.intArray[j++];
  unused += b.intArray[j++];
}

It’s certainly shorter and more concise than the code from For4_Unroll4, but interestingly you not only get punished by the code police, you also take a severe hit on performance. This code takes twice as long to execute compared to the unrolled version that uses the prefix increment!

Postfix increment incurs overhead because a temporary copy must be made of the initial value before the increment, and we apparently cannot rely on the compilers to realise that this copy is unnecessary.
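To spell out the semantic difference the compiler has to honour, here is a minimal fragment:

int j = 5;
int a = j++;   // postfix: a gets the old value (5); a temporary copy of j
               // is made before j becomes 6
int b = ++j;   // prefix: j is incremented first (to 7) and b gets 7;
               // no temporary is needed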

Foreach on Arrays is Pretty Good

The last unrolled loop, For4_Unroll1, looks like this:

for (j = 0; j != jmax; j += 4)
{
  unused += b.intArray[j];
  unused += b.intArray[j + 1];
  unused += b.intArray[j + 2];
  unused += b.intArray[j + 3];
}

It runs in 8.9 ms, about 20% slower than the loop using prefix increments (For4_Unroll3 at 7.5 ms), with the extra time corresponding to one extra cycle per iteration of the inner loop. The interesting bit is to compare this result to the loop For4_Foreach, where the inner loop has been replaced by a foreach loop:

foreach (int x in b.intArray)
{
  unused += x;
}

This loop also runs in 8.9 ms, just like For4_Unroll1, and 25% faster than any of the for, while and do-while variants that have not been unrolled.

A Loop is a Loop but Properties Cost

Does it matter where variables are declared? Does it matter where the index is incremented? Does it matter if the length of an array is referenced in the loop condition? To answer that, I tried some variations of the for-loop:

For1_For2

for (int i = 0; i != b.spec.CountIterations; ++i)
{
  for (int j = 0; j != b.intArray.Length; ++j)
  {
    unused += b.intArray[j];
  }
}

For3_For3

for (int i = 0; i != imax; ++i)
{
  for (int j = 0; j != jmax; ++j)
  {
    unused += b.intArray[j];
  }
}

For4_For4

for (i = 0; i != imax; ++i)
{
  for (j = 0; j != jmax; ++j)
  {
    unused += b.intArray[j];
  }
}

For4_For5

for (i = 0; i != imax; ++i)
{
  for (j = 0; j != jmax; )
  {
    unused += b.intArray[j]; ++j;
  }
}

For4_While

for (i = 0; i != imax; ++i)
{
  j = 0;
  while (j != jmax)
  {
    unused += b.intArray[j];
    ++j;
  }
}

For4_DoWhile

for (i = 0; i != imax; ++i)
{
  j = 0;
  do
  {
    unused += b.intArray[j];
    ++j;
  } while (j != jmax);
}

As you can see from the results, they all perform exactly the same (11.8 ms or 2 cycles per iteration), and so do the variants using while and do-while. The one loop that sticks out as a poor performer, with 17.7 ms or 3 cycles/iteration, is For1_For1:

for (int i = 0; i != b.spec.CountIterations; ++i)
{
  for (int j = 0; j != b.spec.CountItems; ++j)
  {
    unused += b.intArray[j];
  }
}

public int CountItems { get { return countItems; } }
private int countItems;

I take this to mean that reading a property in a loop condition costs an extra cycle per iteration. Note, however, that the semantics are not the same if you read the property in the condition or make a local copy: in a multithreaded program, a property may change between calls, but a local copy will not. Compare this to For1_For2, where the loop condition depends on the Length property of an array, and For3_For3, where the length is read before the loop. These two loops perform the same, which is expected since the array length cannot change (arrays cannot be resized).
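In other words, if a stale value is acceptable, make the hoist explicit. A sketch of the two variants, using the same b.spec.CountItems property as above:

// Property read every iteration: picks up concurrent changes to CountItems,
// but costs an extra cycle per iteration.
for (int j = 0; j != b.spec.CountItems; ++j)
{
  unused += b.intArray[j];
}

// Manual hoist: one property read; the loop bound is deliberately frozen
// for the duration of the loop.
int jmax = b.spec.CountItems;
for (int j = 0; j != jmax; ++j)
{
  unused += b.intArray[j];
}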

Performance of List<int>

Being pleasantly surprised by the performance of foreach on arrays, I had my hopes high that List<int> would also perform well, but as you can see it doesn’t: with a loop unrolled 16 times (reading through the list’s indexer) it takes 4 cycles to do a read, and using foreach it takes 10 cycles per read!

Access Through IList<int>

The next step in the quest to test looping with C# is to come up with a way to simulate different access patterns. My approach is to fill a list of integers where each element holds the index of the next position in the list to read; then I can simply populate the list with the access pattern I want to test and use the same code for all patterns I come up with. The previous experiments showed that loop unrolling gave a significant performance boost, so I’ll go for a single loop unrolled 16 times. The code looks like this:

int[] list = b.intArray;
int j = 0;                            // j chases the indices stored in the list
int max = b.spec.CountIterations;
for (int i = 0; i != max; ++i)
{
  j = list[j];
  // repeated 16 times
}
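Populating the buffer is what defines the access pattern. The article’s setup code is not shown here, but as a sketch (continuing with the list variable from the snippet above), a sequential pattern and a shuffled cyclic one could be built like this:

// Sequential pattern: element k points at k + 1, wrapping at the end,
// so chasing indices walks the buffer front to back.
for (int k = 0; k != list.Length; ++k)
{
  list[k] = (k + 1) % list.Length;
}

// Random pattern: shuffle the positions (Fisher-Yates), then link them into
// a single cycle so every element is still visited exactly once per pass.
Random rnd = new Random(42);
int[] order = new int[list.Length];
for (int k = 0; k != order.Length; ++k)
{
  order[k] = k;
}
for (int k = order.Length - 1; k > 0; --k)
{
  int r = rnd.Next(k + 1);
  int tmp = order[k]; order[k] = order[r]; order[r] = tmp;
}
for (int k = 0; k != order.Length; ++k)
{
  list[order[k]] = order[(k + 1) % order.Length];
}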

At this point it is clear that I should use int[] as the data structure, but I am still interested in the general performance of C#, so I decided to do another experiment where I use int[] and List<int> and see what happens if I access them directly or through the common IList<int> interface. Results are in the table below:

 

Method | Time (ms) | Transfer rate (GB/s) | % Max | Cycles / read | Reads / iteration | Cycles / iteration
List<int> | 23,80 | 2,82 | 25,2% | 3,97 | 16 | 63,5
int[] | 17,56 | 3,82 | 34,1% | 2,93 | 16 | 46,9
IList<int> on List<int> | 92,33 | 0,73 | 6,5% | 15,41 | 16 | 246,5
IList<int> on int[] | 86,90 | 0,77 | 6,9% | 14,50 | 16 | 232,0

 

I still get 4 cycles per read when accessing List<int> directly, but the work on the int array has increased from 1 to 3 cycles per read. That is OK, since I’m done with sequential reads (those were primarily for the L1 cache and read-ahead, and once I get cache misses, L1 access times are 3 or more cycles anyway).

Another point is that sequentially summing a list of integers is not a typical task. Neither is the loop to test access patterns, but its increased complexity is closer to what real programs do, and it is interesting that in this scenario the execution time increases by 33% rather than the 300% seen in the sequential summing example. This makes the tradeoff between the convenience of generic lists and the slower execution much more acceptable.

Access through the IList<int> interface is 4 times slower compared to using List<int> directly and 5 times slower compared to int[]. I guess the reason is that the interface prevents the compilers from doing optimizations that rely on the concrete implementation. For example, there is no way for the compiler to tell that the length of the integer array is constant or that the memory layout is sequential, forcing both the C# compiler and the JIT compiler to emit code that constantly re-evaluates all aspects of the loop condition and the memory access.
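The loop body is the same either way; only the static type the compiler sees changes. A fragment to make the comparison concrete (assuming the b.intArray field from earlier and a using System.Collections.Generic; directive):

int[] array = b.intArray;
IList<int> viaInterface = b.intArray;   // the same object, typed as IList<int>

int sum = 0;
for (int j = 0; j != array.Length; ++j)
{
  sum += array[j];             // direct: the JIT knows the memory layout and
                               // that Length is constant
}
for (int j = 0; j != viaInterface.Count; ++j)
{
  sum += viaInterface[j];      // through the interface: Count and the indexer
                               // are interface calls, re-evaluated every iteration
}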

Using the code

The attached archive contains a VS 2008 project with the loops and control structures used to get the results above. To run the tests you need to download the tool Quality Gate One Studio. The archive contains a project for this tool with a couple of test sets set up for the experiments mentioned in the article. Simply run the test sets and generate reports to get results for your system.

Conclusion

This article covers a precursor to an experiment to measure practical cache performance, with the purpose of identifying whether it is possible to write C# code that executes fast enough to reveal cache effects. The overall conclusion is that, with some care and a bit of loop unrolling, it is. In fact, for simple constructs on simple data (arrays of int) and with some loop unrolling, C# performs really well and is capable of summing integers from memory at an average rate of one integer per clock cycle.

Some things perform better than others and the following caveats have been discovered:

  • Postfix increment (e.g. i++) is expensive, adding on average 1.25 cycles and reducing throughput by 65%.
  • Reading properties, e.g. in loop conditions, can hurt performance because some optimizations cannot be done safely in a multithreaded environment. Truly constant properties like the length of an array can be optimized, but if in doubt, the safest bet is to read the value of the property before entering the loop.
  • Generic lists are significantly slower (a factor of 4) than arrays in this specialized case, but the evidence suggests that the performance cost is much smaller in more typical cases. This article showed an example where the execution time increased by only 33%.
  • Use of interfaces instead of concrete types comes with an additional performance hit; in this case a factor of 4 to 5 was observed. An educated guess is that interfaces prevent the compilers from doing optimizations that are available when using concrete types.

On the positive side, the evidence shows that:

  • Choice of loop construct (for, while or do-while) does not affect performance.
  • The standard foreach construct can be faster (1.5 cycles per step) than a simple for-loop (2 cycles per step), unless the loop has been unrolled (1.0 cycles per step). So for everyday code, performance is not a reason to prefer the more complex for, while or do-while constructs.

Not all code is performance critical, but when things must execute fast you may have to sacrifice good programming practices for performance:

  • Prefer concrete types over interfaces
  • Prefer simple arrays over List<>
  • Manually hoist constants out of loops if they involve properties or variables/members, unless you are absolutely sure the compilers will recognize them as constants, also in a multithreaded environment.
  • Manually unroll loops

In general, don’t use postfix operators: they are both error-prone and perform poorly.

Regarding compiler / platform versions, I have compiled for both .NET 2.0 (VS 2008) and .NET 4.0 (VS 2010) but have not identified any significant differences between the two.

Finally, this article is about fast iteration in C#, but the background is an attempt to measure cache effects. I think this article is long enough as it is, but I made a few observations along the way. First, when reading sequentially using For4_Unroll4, the rate of one step per cycle is sustained regardless of buffer size, which basically means that pipelining and prefetching are doing a hell of a job getting the data into the CPU as fast as possible, all the way from main memory and up. When the access pattern is changed to random access or strides above 4 bytes, cache effects become visible because the hardware cannot predict and prefetch fast enough.

If you want to play around on your own system, the attached code is prepared for the above experiments, but if you just want the hard facts about your system you can get them from a benchmark tool such as SiSoft Sandra.

 

Model View Presenter (MVP)

January 22, 2011


(source: CodeProject.com)

As we progress as developers, we strive to seek out the “best” way to perform our craft. The methods chosen to attain this lofty goal always bring with them a number of development trade-offs. Some techniques simplify the code but lessen fine-grained control, while others enable greater power while introducing complexity. Model-View-Presenter with ASP.NET is a wonderful example of the latter. MVP aims to facilitate test-driven development of ASP.NET applications with the unfortunate introduction of added complexity. So while developers will be more confident in the quality of their product, MVP hinders their ability to easily maintain code. Yes, the tests will (hopefully) inform the developer that a bug has been introduced, but the inherent complexity of MVP makes it difficult for later team members to become comfortable with the code base and maintain it as project development continues.

Fortunately, as time progresses quickly in our field, resources and tools become available which enhance our ability to write powerful applications while simplifying the coding process itself. The introduction of NHibernate, for example, eliminated vast amounts of data-access code while still providing powerful facilities for managing transactions and dynamically querying data. Castle MonoRail (and Microsoft’s upcoming spin-off of this framework) now does for writing testable and maintainable .NET web applications what NHibernate (and the upcoming LINQ to Entities) did for ADO.NET. This is not to say that the previous techniques were necessarily wrong, only that they were applicable given the developer’s toolset available at the time of selection.

In adapting to the evolution of our field, it is important for developers to note when an accepted technique is no longer valuable in light of current alternatives. Specifically, MVP was a very powerful technique for writing ground-up, test-driven ASP.NET applications but is no longer a strong candidate for consideration when compared to the time saving benefits and simplicity of Castle MonoRail and Microsoft’s upcoming MVC framework. Oddly, it is sometimes difficult to “give up” on something that worked perfectly fine before, but that’s the nature of our business … one tenet that’s not likely to change anytime soon.

As for this article, I believe it remains of value to those maintaining legacy applications built upon MVP and to those interested in learning a solid domain-driven architecture, which is discussed further below and in detail in another post.

In summary, although I still believe that MVP is the best technique for developing ground-up ASP.NET solutions, I believe that there are off-the-shelf frameworks which make the entire job a heck of a lot simpler.

Introduction

After years of maintaining thousands of lines of ASP spaghetti code, Microsoft has finally given us a first-class web development platform: ASP.NET. ASP.NET instantly brought a basic separation of concerns between presentation and business logic by introducing the code-behind page. Although introduced with good intentions and perfect for basic applications, the code-behind still falls short in a number of ways when developing enterprise web applications:

  • The code-behind invites melding the layers of presentation, business logic and data-access code. This occurs because the code-behind page often serves the role of an event handler, a workflow controller, a mediator between presentation and business rules, and a mediator between presentation and data-access code. Giving the code-behind this many responsibilities often leads to unmanageable code. In an enterprise application, a principle of good design is to maintain proper separation of concerns among the tiers and to keep the code-behind as clean as possible. With Model-View-Presenter, we’ll see that the code-behind is greatly simplified and kept strictly to managing presentation details.
  • Another drawback to the code-behind model is that it is difficult to reuse presentation logic between code-behind pages without enlisting helper/utility classes that consolidate the duplicated code. Obviously, there are times that this provides an adequate solution. However, it often leads to incohesive classes that act more like ASP includes than first class objects. With proper design, every class should be cohesive and have a clear purpose. A class named ContainsDuplicatePresentationCodeBetweenThisAndThat.cs usually doesn’t qualify.
  • Finally, it becomes nearly prohibitive to properly unit test code-behind pages as they are inseparably bound to the presentation. Options such as NUnitAsp may be used, but they are time-consuming to implement and difficult to maintain. They also slow down unit-test performance considerably, and unit tests should always be blazingly fast.

Various techniques may be employed to promote a better separation of concerns than the code-behind page offers. For example, the Castle MonoRail project attempts to emulate some of the benefits of Ruby on Rails but abandons the ASP.NET event model in the process. Maverick.NET is a framework that optionally supports the ASP.NET event model but leaves the code-behind as the controller. Ideally, a solution should leverage the ASP.NET event model while still allowing the code-behind to be as simple as possible. The Model-View-Presenter pattern does just that, without relying on a third-party framework.

Model-View-Presenter

Model-View-Presenter (MVP) is a variation of the Model-View-Controller (MVC) pattern, specifically geared towards a page event model such as ASP.NET’s. For a bit of history, MVP was originally used as the framework of choice behind Dolphin Smalltalk. The basic ideas of MVC remain the same: the model stores the data, the view shows a representation of the model, and the presenter coordinates communication between the layers. The primary differentiator of MVP is the Observer-style role of the Presenter: the Presenter interprets events and performs the logic necessary to map those events to the proper commands for manipulating the model. For more reading on MVC vs. MVP, take a look at Darron Schall’s concise entry on the subject. What follows is a detailed examination of MVP in the form of three example projects.

Author’s note: Martin Fowler has suggested that MVP be split into two “new” patterns called Supervising Controller and Passive View. Go here for a very short synopsis of the split. The content described herein is more consistent with Supervising Controller, as the View is aware of the Model.

A most trivial example

In this example project, the client wants a page that shows the current time. Thank goodness they started us off with something easy! The ASPX page that will show the time is the “View.” The “Presenter” is responsible for determining the current time — i.e. the “Model” — and giving the Model to the View. As always, we start with a unit test:

[TestFixture]
public class CurrentTimePresenterTests 
{
    [Test]
    public void TestInitView() 
    {
        MockCurrentTimeView view = new MockCurrentTimeView();
        CurrentTimePresenter presenter = new CurrentTimePresenter(view);
        presenter.InitView();

        Assert.IsTrue(view.CurrentTime > DateTime.MinValue);
    }

    private class MockCurrentTimeView : ICurrentTimeView 
    {
        public DateTime CurrentTime 
        {
            set { currentTime = value; }

            // This getter won't be required by ICurrentTimeView,
            // but it allows us to unit test its value.
            get { return currentTime; }
        }

        private DateTime currentTime = DateTime.MinValue;
    }
}

(Diagram: the MVP relationship between the Presenter, the View interface and the View)

The above unit test, along with the diagram, describes the elements of the MVP relationship. The very first line creates an instance of MockCurrentTimeView. As seen in this unit test, all of the Presenter logic can be unit tested without having an ASPX page, i.e. the View. All that is needed is an object that implements the View interface; accordingly, a mock view is created to stand in place of the “real” view.

The next line creates an instance of the Presenter, passing an object that implements ICurrentTimeView via its constructor. In this way, the Presenter can now manipulate the View. As seen in the diagram, the Presenter only talks to a View interface. It does not work with a concrete implementation directly. This allows multiple Views, implementing the same View interface, to be used by the same Presenter.

Finally, the Presenter is asked to InitView(). This method will get the current time and pass it to the View via a public property exposed by ICurrentTimeView. A unit-test assertion is then made that the CurrentTime on the view should now be greater than its initial value. A more detailed assertion could certainly be made if needed.

All that needs to be done now is to get the unit test to compile and pass!

ICurrentTimeView.cs: the View interface

As a first step towards getting the unit test to compile, ICurrentTimeView.cs should be created. This View interface will provide the conduit of communication between the Presenter and the View. In the situation at hand, the View interface needs to expose a public property that the Presenter can use to pass the current time, the Model, to the View.

public interface ICurrentTimeView 
{
    DateTime CurrentTime { set; }
}

The View only needs a setter for the current time, since it just needs to show the Model, but providing a getter allows CurrentTime to be checked within the unit test. So instead of adding a getter to the interface, it can be added to MockCurrentTimeView and need not be defined in the interface at all. In this way, the exposed properties of the View can be unit tested without forcing extraneous setters/getters to be defined in the View interface. The unit test described above shows this technique.

CurrentTimePresenter.cs: the Presenter

The presenter will handle the logic of communicating with the Model and passing Model values to the View. The Presenter, needed to make the unit test compile and pass, is as follows.

public class CurrentTimePresenter 
{
    public CurrentTimePresenter(ICurrentTimeView view) 
    {
        if (view == null)
            throw new ArgumentNullException("view may not be null");

        this.view = view;
    }

    public void InitView() 
    {
        view.CurrentTime = DateTime.Now;
    }

    private ICurrentTimeView view;
}

Once the above items have been developed (the unit test, the mock view, the view interface and the presenter), the unit test will compile and pass. The next step is creating an ASPX page to act as the real View. As a quick aside, take note of the ArgumentNullException check. This is a technique known as “Design by Contract.” Putting checks like this throughout your code will greatly cut down on time spent tracking down bugs. For more information about Design by Contract, see this article and this article.

ShowMeTheTime.aspx: the View

The actual View needs to do the following:

  1. The ASPX page needs to provide a means for displaying the current time. As shown below, a simple label will be used for display.
  2. The code-behind must implement ICurrentTimeView.
  3. The code-behind needs to create the Presenter, passing itself to the Presenter’s constructor.
  4. After creating the Presenter, InitView() needs to be called to complete the MVP cycle.

The ASPX page

...
<asp:Label id="lblCurrentTime" runat="server" />
...

The ASPX code-behind page

public partial class ShowMeTheTime : Page, ICurrentTimeView
{
    protected void Page_Load(object sender, EventArgs e) 
    {
        CurrentTimePresenter presenter = new CurrentTimePresenter(this);
        presenter.InitView();
    }

    public DateTime CurrentTime 
    {
        set { lblCurrentTime.Text = value.ToString(); }
    }
}

Is that it?

In a word, yes. But there is much more to the story! A drawback of the above example is that MVP seems like a lot of work for such little gain. We’ve gone from having one ASPX page to having a Presenter class, a View interface and a unit-testing class. The gain has been the ability to unit test the Presenter, i.e. the ability to conveniently unit test code that would normally be found in the code-behind page. As is the case with trivial examples, the advantages of MVP shine when developing and maintaining enterprise web applications, not when writing “hello world”-like samples. The following topics elaborate on the usage of MVP within an enterprise ASP.NET application.

MVP within enterprise ASP.NET applications

I. Encapsulating Views with user controls

In the previous, simple example, the ASPX page itself acted as the View. Treating the ASPX page in this way was sufficient in that the page had only one simple purpose: to show the current time. But in more representative projects, it is often the case that a single page will have one or more sections of functionality, whether they be WebParts, user controls, etc. In these more typical enterprise applications, it is important to keep functionality logically separated and to make it easy to move or replicate functionality from one area to another. With MVP, user controls can be used to encapsulate Views while the ASPX pages act as “View initializers” and page redirectors. Extending the previous example, we need only modify the ASPX page to implement the change. This is another benefit of MVP: many changes can be made to the View layer without having to modify the Presenter and Model layers.

ShowMeTheTime.aspx redux: the View initializer

With this new approach, using user controls as the view, ShowMeTheTime.aspx is now responsible for the following:

  1. The ASPX page needs to declare the user control which will implement ICurrentTimeView.
  2. The ASPX code-behind needs to create the Presenter, passing the user control to the Presenter’s constructor.
  3. After giving the View to the Presenter, the ASPX needs to call InitView() to complete the MVP cycle.

The ASPX page

...
<%@ Register TagPrefix="mvpProject" 
    TagName="CurrentTimeView" Src="./Views/CurrentTimeView.ascx" %>

<mvpProject:CurrentTimeView id="currentTimeView" runat="server" />
...

The ASPX code-behind page

public partial class ShowMeTheTime : Page
    // No longer implements ICurrentTimeView
{
    protected void Page_Load(object sender, EventArgs e) 
    {
        InitCurrentTimeView();
    }

    private void InitCurrentTimeView() 
    {
        CurrentTimePresenter presenter = 
            new CurrentTimePresenter(currentTimeView);
        presenter.InitView();
    }
}

CurrentTimeView.ascx: the user control-as-view

The user control now represents the bare-bones View. It is as “dumb” as it can be, which is exactly how we want a View to be.

The ASCX page

...
<asp:Label id="lblCurrentTime" runat="server" />
...

The ASCX code-behind page

public partial class Views_CurrentTimeView : UserControl, ICurrentTimeView
{
    public DateTime CurrentTime 
    {
        set { lblCurrentTime.Text = value.ToString(); }
    }
}

Pros and cons of the user control-as-View approach

Obviously, the primary drawback of the user control-as-View approach to MVP is that it adds yet another piece to the equation. The entire MVP relationship is now made up of: unit test, Presenter, View interface, View implementation (the user control) and View initializer (the ASPX page). This additional layer of indirection adds to the overall complexity of the design. The benefits of the user control-as-View approach include:

  • The View can be easily moved from one ASPX page to another. This happens regularly in a mid to large sized web application.
  • The View can be easily reused by different ASPX pages without duplicating much code at all.
  • The View can be initialized differently by different ASPX pages. For example, a user control could be written that displays a listing of projects. From the reporting section of the site, the user may view and filter all the projects available. From another section of the site, the user may only view a subset of the projects and not have the ability to run filters. In implementation, the same View can be passed to the same Presenter, but each ASPX page, in its respective section of the site, would call a different method on the Presenter to initialize the View in a unique way (see the sketch after this list).
  • Additional Views can be added to the ASPX page without adding much additional, coding overhead. Simply include the new user control-as-view into the ASPX page and link it to its Presenter in the code-behind. Placing multiple sections of functionality within the same ASPX page, without using user controls, quickly creates a maintenance headache.
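To make the third point above concrete, here is a sketch of one View served two ways by its Presenter. The names (IProjectListView, ProjectRepository and the Init methods) are hypothetical, invented for the illustration:

public class ProjectListPresenter
{
    public ProjectListPresenter(IProjectListView view)
    {
        if (view == null)
            throw new ArgumentNullException("view may not be null");

        this.view = view;
    }

    // Called by the reporting page: every project, with filtering enabled.
    public void InitAllProjects()
    {
        view.Projects = ProjectRepository.GetAll();   // hypothetical data source
        view.FiltersEnabled = true;
    }

    // Called by other pages: only the user's projects, read-only.
    public void InitProjectsForUser(int userId)
    {
        view.Projects = ProjectRepository.GetByUser(userId);
        view.FiltersEnabled = false;
    }

    private IProjectListView view;
}

Each ASPX page picks the initialization that suits its section of the site; the user control itself never changes.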

II. Event handling with MVP

The previous example described, essentially, a one-way round of communication between a Presenter and its View. The Presenter communicated with the Model and delivered it to the View. In most situations, though, events occur which need to be handed off to the Presenter for action. Furthermore, some events depend on whether or not a form was valid and whether or not IsPostBack has occurred. For example, there are some actions, such as data-binding, that must not be performed when IsPostBack.

Disclaimer: Page.IsPostBack and Page.IsValid are web-specific keywords, so the presenter layer as described here is slightly invalid in non-web environments. However, with minor modifications it will work fine for WebForms, WinForms or mobile applications. In any case, the theory is the same, but I welcome suggestions for making the presenter layer transferable to any .NET environment.

A simple event handling sequence

Continuing with the earlier example, assume that requirements now dictate that the user may enter a number of days to be added to the current time. The time shown in the View should then be updated to show the current time plus the number of days supplied by the user, assuming the user provided valid inputs. When not IsPostBack, the current time should be displayed. When IsPostBack, the Presenter should respond to the event accordingly. The sequence diagram below shows what occurs upon the user’s initial request (top half of diagram) and what happens when the user clicks the “Add Days” button (bottom half of diagram). A more thorough review of the sequence follows the diagram.

(Sequence diagram: basic event handling between the View and the Presenter)

 

A) User Control-as-View created

This step simply represents the inline user control declaration found in the ASPX page. During page initialization, the user control gets created. It’s included on the diagram to emphasize the fact that the user control implements ICurrentTimeView. During Page_Load, the ASPX code-behind then creates an instance of the Presenter, passing the User Control-as-View via its constructor. So far, everything looks identical to what was described in the section “Encapsulating Views with User Controls.”

B) Presenter attached to View

In order for an event to be passed from the user control (the View) to the Presenter, the View must have a reference to an instance of CurrentTimePresenter. To arrange this, the View initializer, ShowMeTheTime.aspx, passes the Presenter to the View for later use. Contrary to initial reaction, this does not cause a bi-directional dependency between the Presenter and the View. Instead, the Presenter depends on the View interface, and the View implementation depends on the Presenter to pass events off to. To see how it all works, let’s take a step back and look at how all the pieces are now implemented.

ICurrentTimeView.cs: the View interface

public interface ICurrentTimeView 
{
    DateTime CurrentTime { set; }
    string Message { set; }
    void AttachPresenter(CurrentTimePresenter presenter);
}

CurrentTimePresenter.cs: the Presenter

public class CurrentTimePresenter 
{
    public CurrentTimePresenter(ICurrentTimeView view) 
    {
        if (view == null)
            throw new ArgumentNullException("view may not be null");

        this.view = view;
    }

    public void InitView(bool isPostBack) 
    {
        if (! isPostBack) 
        {
            view.CurrentTime = DateTime.Now;
        }
    }

    public void AddDays(string daysUnparsed, bool isPageValid) 
    {
        if (isPageValid) 
        {
            view.CurrentTime = 
                  DateTime.Now.AddDays(double.Parse(daysUnparsed));
        }
        else 
        {
            view.Message = "Bad inputs...no updated date for you!";
        }
    }

    private ICurrentTimeView view;
}

CurrentTimeView.ascx: the View

The ASCX page

...
<asp:Label id="lblMessage" runat="server" /><br />
<asp:Label id="lblCurrentTime" runat="server" /><br />
<br />
<asp:TextBox id="txtNumberOfDays" runat="server" />
<asp:RequiredFieldValidator ControlToValidate="txtNumberOfDays" runat="server"
    ErrorMessage="Number of days is required" ValidationGroup="AddDays" />
        <asp:CompareValidator 
            ControlToValidate="txtNumberOfDays" runat="server"
            Operator="DataTypeCheck" Type="Double" ValidationGroup="AddDays"
            ErrorMessage="Number of days must be numeric" /><br />
<br />
<asp:Button id="btnAddDays" Text="Add Days" runat="server" 
    OnClick="btnAddDays_OnClick" ValidationGroup="AddDays" />
...

The ASCX code-behind page

public partial class Views_CurrentTimeView : UserControl, ICurrentTimeView 
{
    public void AttachPresenter(CurrentTimePresenter presenter) 
    {
        if (presenter == null)
            throw new ArgumentNullException("presenter may not be null");

        this.presenter = presenter;
    }

    public string Message 
    {
        set { lblMessage.Text = value; }
    }

    public DateTime CurrentTime 
    {
        set { lblCurrentTime.Text = value.ToString(); }
    }

    protected void btnAddDays_OnClick(object sender, EventArgs e) 
    {
        if (presenter == null)
            throw new FieldAccessException("presenter has" + 
                               " not yet been initialized");

        presenter.AddDays(txtNumberOfDays.Text, Page.IsValid);
    }

    private CurrentTimePresenter presenter;
}

ShowMeTheTime.aspx: the View initializer

The ASPX page

...
<%@ Register TagPrefix="mvpProject" 
    TagName="CurrentTimeView" Src="./Views/CurrentTimeView.ascx" %>

<mvpProject:CurrentTimeView id="currentTimeView" runat="server" />
...

The ASPX code-behind page

// Note that the page no longer implements ICurrentTimeView
public partial class ShowMeTheTime : Page 
{
    protected void Page_Load(object sender, EventArgs e) 
    {
        InitCurrentTimeView();
    }

    private void InitCurrentTimeView() 
    {
        CurrentTimePresenter presenter = 
            new CurrentTimePresenter(currentTimeView);
        currentTimeView.AttachPresenter(presenter);
        presenter.InitView(Page.IsPostBack);
    }
}

C) Presenter InitView

As defined in the requirements, the Presenter should only show the current time if not IsPostBack. The important action to note is that the Presenter should decide what to do according to IsPostBack. It should not be the job of the ASPX code-behind to make this decision. As seen in the code above, the ASPX code-behind does no check for IsPostBack. It simply passes the value to the Presenter to determine what action to take.

This may lead to the question, “But what happens if another user control-as-view caused the post-back to occur?” In the scenario at hand, the current time would remain in the view state of the label and be displayed again after post back. This may be OK depending on the needs of the client. In general, it’s a good question to ask of any Presenter: what impact will a post-back from another user control have on the View? In fact, it’s a good question to ask even if you’re not using MVP. There may be actions that should always occur, regardless of IsPostBack, while other initialization steps may be bypassed. View state settings obviously have a large impact on this decision, as well.

When not IsPostBack, as shown in the diagram, the Presenter then sets the CurrentTime of the view via its interface. Sequence diagram purists may raise the point that the diagram implies two messages are being sent — one from CurrentTimePresenter to ICurrentTimeView and then one from ICurrentTimeView to CurrentTimeView.ascx — when in fact only one is being sent from CurrentTimePresenter to CurrentTimeView.ascx, polymorphically. The interface “middleman” is included to emphasize that the Presenter does not depend on the concrete View directly.

D) Presenter InitView after IsPostBack

In the preceding steps, the user made the HTTP request, the Presenter set the current time on the View, and the HTTP response was delivered to the user. Now, the user clicks the “Add Days” button, which causes a post-back. Everything occurs as before until InitView is called on the Presenter. At this point, the Presenter tests for IsPostBack and does not set the CurrentTime on the View.

E) Button click handled by user control

After the Page_Load of the ASPX page has occurred, the OnClick event is then raised to the user control. The View should not handle the event itself; it should immediately pass the event on to the Presenter for action. By looking at the code-behind of the user control, you can see that it makes sure it has been given a valid presenter — more “Design by Contract” — and then hands the command off to the Presenter. The Presenter then verifies that the page was valid and sets the time or error message accordingly.

The above has been an exhaustive analysis of a complete MVP cycle with event handling. Once you get the hang of MVP, it takes very little time to get all the pieces in place. Remember to always begin with a unit test and let the unit tests drive the development. The unit tests not only help ensure that the MVP pieces are working correctly, they also serve as the point for defining the communications protocol among the pieces. A Visual Studio code snippet for an MVP unit test can be found in Appendix B.
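Appendix B is not reproduced here, but to give a feel for such a test, the sketch below stubs the View by hand and asserts against the Presenter; the stub class and the test are my own illustration (NUnit attributes assumed), not the Appendix B snippet:

// Illustrative stub: records what the Presenter pushes into the View.
public class CurrentTimeViewStub : ICurrentTimeView 
{
    public DateTime CurrentTime { set { currentTimeSet = value; } }
    public string Message { set { messageSet = value; } }
    public void AttachPresenter(CurrentTimePresenter presenter) { }

    public DateTime currentTimeSet;
    public string messageSet;
}

[Test]
public void AddDaysSetsMessageWhenPageIsInvalid() 
{
    CurrentTimeViewStub view = new CurrentTimeViewStub();
    CurrentTimePresenter presenter = new CurrentTimePresenter(view);

    presenter.AddDays("not a number", false);   // isPageValid == false

    Assert.IsNotNull(view.messageSet);   // the error message reached the View
}

We'll now take a look at handling page redirection.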

III. Page redirects with MVP & PageMethods

In developing enterprise applications, application flow is always a concern. Who's going to take care of page redirects? Should action redirects be stored in a configurable XML file? Should a third-party tool such as Maverick.NET or Spring.NET handle page flow? Personally, I like to keep page redirects as close to the action as possible. In other words, I feel that storing action/redirect pairs in an external XML file adds indirection that can be tedious to understand and maintain. As if we don't have enough to worry about already! On the other hand, hard-coded redirects in the ASPX code-behind are fragile, tedious to parse and not strongly typed. To solve this problem, the free PageMethods download gives you strongly typed redirects. So instead of writing Response.Redirect("../Project/ShowProjectSummary?projectId=" + projectId.ToString() + "&userId=" + userId.ToString()), PageMethods provides a strongly typed redirect that looks more like Response.Redirect(MyPageMethods.ShowProjectSummary.ShowSummaryFor(projectId, userId)). The redirect is strongly typed and, therefore, checked at compile time.

An MVP related question concerning page redirects remains: who should be responsible for making a redirect and how should the redirect be initiated? I believe there are a number of valid answers to this question but will propose a solution that I’ve found to be rather successful. Add one event to the Presenter for each outcome that is possible. For example, assume a website is made up of two pages. The first page lists a number of projects; the second page, reached by clicking “Edit” next to one of the project names, allows the user to update the project’s name. After updating the project name, the user should be redirected to the project listing page again. To implement this, the Presenter should raise an event showing that the project name was successfully changed and then the View Initializer, the ASPX page, should execute the appropriate redirect. Note that the following is illustrative and not associated with the “current time” example discussed thus far.

Presenter

...
public event EventHandler ProjectUpdated;

public void UpdateProjectNameWith(string newName) 
{
    ...

    if (everythingWentSuccessfully) 
    {
        // Raise the event defensively: the delegate is null when no one
        // has subscribed, and EventArgs.Empty beats passing null
        if (ProjectUpdated != null)
            ProjectUpdated(this, EventArgs.Empty);
    }
    else 
    {
        view.Message = "That name already exists.  Please provide a new one!";
    }
}
...

ASPX code-behind

...
protected void Page_Load(object sender, EventArgs e) 
{
    EditProjectPresenter presenter = 
        new EditProjectPresenter(editProjectView);
    presenter.ProjectUpdated += new EventHandler(HandleProjectUpdated);
    presenter.InitView();
}

private void HandleProjectUpdated(object sender, EventArgs e) 
{
    Response.Redirect(
        MyPageMethods.ShowProjectSummary.Show(projectId, userId));
}
...

Taking this approach keeps page redirection out of the Presenter and out of the View. As a rule of thumb, the Presenter should never require a reference to System.Web. Furthermore, dissociating redirects from the View, i.e. the user control, allows the View to be reused by other View Initializers, i.e. other ASPX pages, while leaving application flow up to each individual View Initializer. This is the greatest benefit of using an event-based model of redirection with User Control-as-View MVP.

IV. Presentation security with MVP

Oftentimes, a column, button, table or other element should be shown or hidden based on the permissions of the user viewing the website. Likewise, an item may be hidden when a View is included in one View Initializer but shown when it is included in another. The security decision should be made by the Presenter, but the View should determine how that decision is carried out. Picking up again with the "current time" example, assume that the client only wants the "Add Days" section to be available to users on even days, e.g. the 2nd, 4th and 6th. The client likes to keep the users guessing! The View could encapsulate this area within a panel, as follows:

...
<asp:Panel id="pnlAddDays" runat="server" visible="false">
    <asp:TextBox id="txtNumberOfDays" runat="server" />
    <asp:RequiredFieldValidator 
        ControlToValidate="txtNumberOfDays" runat="server"
        ErrorMessage="Number of days is required" ValidationGroup="AddDays" />
    <asp:CompareValidator ControlToValidate="txtNumberOfDays" runat="server"
        Operator="DataTypeCheck" Type="Double" ValidationGroup="AddDays"
        ErrorMessage="Number of days must be numeric" /><br />
    <br />
    <asp:Button id="btnAddDays" Text="Add Days" runat="server" 
        OnClick="btnAddDays_OnClick" ValidationGroup="AddDays" />
</asp:Panel>
...

Note that the panel’s visibility is pessimistically set to false. Although it would not make much difference in this case, it is better to be pessimistic about showing secure elements than the other way around. The code-behind of the View would then expose a setter to show/hide the panel:

...
public bool EnableAddDaysCapabilities 
{
    set { pnlAddDays.Visible = value; }
}
...

Note that the View does not expose the panel directly. This is intentionally done for two reasons: 1) exposing the panel directly would require that the Presenter have a reference to System.Web, something we want to avoid, and 2) exposing the panel ties the Presenter to an “implementation detail” of the View. The more a Presenter is tied to how a View is implemented, the less likely it will be reusable with other Views. As with other OOP scenarios, the pros and cons of exposing implementation details of the View need to be weighed against looser coupling to the Presenter.

Finally, during InitView, the Presenter checks if the user should be allowed to use the add-days functionality and sets the permission on the View accordingly:

...
public void InitView() 
{
    view.EnableAddDaysCapabilities = (DateTime.Now.Day % 2 == 0);
}
...

This simple example can be extended to a variety of scenarios, including security checks. Note that this is not a replacement for built-in .NET security; rather, it augments it for finer control.
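For instance, as a hedged sketch, InitView could accept the current IPrincipal and drive the View's permission from a role check; the changed signature and the role name below are illustrative only, not part of the sample code:

// Sketch: role-based presentation security decided by the Presenter.
// Uses System.Security.Principal; "TimeAdmins" is a made-up role name.
public void InitView(IPrincipal user) 
{
    view.EnableAddDaysCapabilities = user.IsInRole("TimeAdmins");
}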

V. Application architecture with MVP

Finally! How does all of this fit together in a data-driven, enterprise application? "Enterprise application," in this instance, means an application with logically separated tiers, including presentation, domain and data-access layers. The following graph shows an overview of a fully architected solution; discussion follows.

Each raised box represents a distinct specialization of the application. Each gray box then represents a separate physical assembly, e.g. MyProject.Web.dll, MyProject.Presenters.dll, MyProject.Core.dll. The arrows represent dependencies; for example, the .Web assembly depends on the .Presenters and .Core assemblies. The assemblies avoid bi-directional dependencies using the techniques of Dependency Inversion and Dependency Injection. My preferred means of Dependency Injection ("DI" in the graph) into the View Initializers is the Castle Windsor project. The data layer then uses the ORM framework NHibernate to communicate with the database.
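To make the dependency direction concrete, here is a small sketch (the type names are illustrative, not taken from the sample solution): the Presenter depends only on an interface declared in .Core, while the data layer supplies the NHibernate-backed implementation, which a container such as Windsor injects at runtime.

// In MyProject.Core: no reference to data access or System.Web.
public interface IProjectRepository 
{
    Project GetById(int projectId);   // Project is a Core domain class
}

// In MyProject.Presenters: constructor injection keeps the Presenter
// ignorant of which IProjectRepository implementation it receives.
public class ProjectSummaryPresenter 
{
    public ProjectSummaryPresenter(IProjectSummaryView view, 
                                   IProjectRepository repository) 
    {
        this.view = view;
        this.repository = repository;
    }

    private readonly IProjectSummaryView view;
    private readonly IProjectRepository repository;
}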

For a primer on Dependency Injection, read the CodeProject article entitled “Dependency Injection for Loose Coupling.” Additionally, for a complete overview of this architecture, sans the .Presenters layer and Castle Windsor integration, read the CodeProject article entitled “NHibernate Best Practices with ASP.NET.” This article also describes how to set up and run the sample application. Yes, these are both shameless plugs for other articles I have written, but both are required reading to fully appreciate the sample solution. Please feel free to raise any questions concerning the architecture.

In summary

At first glance, implementing MVP looks like a lot of extra work, and it will indeed slow development a bit during the initial stages. However, after using it in all stages of enterprise application development, I find the long-term benefits far outweigh the initial feelings of discomfort with the pattern. MVP will greatly extend your ability to unit test and keep code more maintainable throughout the lifetime of the project, especially during the maintenance phase. When it comes right down to it, I'm not suggesting that you use MVP on all your enterprise ASP.NET projects, just the projects that you want to work! 😉 In all seriousness, MVP is not appropriate in all situations. An application's architecture should fit the task at hand, and complexity should not be added unless warranted. Obviously, MVP and User-Control-as-View MVP are just two architectural options among many. Used appropriately, though, MVP lets you be confident in your presentation logic by making most of the code that would have lived in a code-behind testable and maintainable.

 

Shave Time Off your Development with ASP.NET MVC Razor

January 21, 2011 § Leave a comment

Introduction

Microsoft has introduced a new view engine called Razor that replaces ASPX as the default view engine in ASP.NET MVC3. Razor represents a major improvement for developers who want a cleaner and more efficient view engine with fewer keystrokes. Razor also gives the developer full control of every character sent to the browser, including whether or not the ASP.NET JavaScript libraries are used at all.

Pre-requisites

First, install ASP.NET MVC3 Release Candidate 2 from ASP.NET. This release candidate has go-live licensing and runs on version 4.0 of the .NET Framework, so you can build applications in MVC3 and deploy them to an existing production server.

The configuration changes you will need to make to deploy your MVC3 application are the same as for previous versions of MVC. This article assumes familiarity with ASP.NET MVC.

Basic Syntax

Razor does away with ASP's <% %> block types and instead intelligently infers what is intended as server-side code versus client-side markup. The at (@) symbol denotes a server-side statement, and a double @@ is how you include a literal @ symbol on your page.
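For example, a trivial sketch of the escape in action:

@* The doubled @@ emits a single literal @ in the rendered page. *@
<p>Contact us at support@@example.com</p>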

The example below shows an if statement that runs server side; the text inside the if block is written to the web page, including the value of the session variable CartCount. Notice how the code flows smoothly between server-side and client-side markup. For someone used to working in ASP.NET's explicit code blocks, Razor can take some adjustment, but you will be rewarded with a page syntax that needs far fewer keystrokes and has better IntelliSense support.

 

@if(Session["CartCount"] != null)
{
Take you have @Session["CartCount"] items in your basket.
}

Razor comments are stripped server side to minimize the size of the pages sent to the client. The @* *@ comment format works in a multi-line fashion, similar to HTML's <!-- --> or C#'s /* */.

@* This is a comment. It won't get sent to the user's browser. 
It is a multi-line comment type.
*@

Razor uses the same syntax for both single and multi-line code blocks. The only difference is that in multi-line blocks, each statement must end with a semicolon.

@{
    String s = "MyString";
    s = s.ToUpper();
}

 

Layout Pages

Many of the core concepts of ASPX translate well to Razor, including layout pages, Razor's equivalent of master pages. When you create a blank MVC3 application, the default layout template is /Views/Shared/_Layout.cshtml. You can create additional layout pages and reference them at the top of your individual Razor .cshtml views, as shown below.

 

@{
    Page.Title="Your Page Title";
    Layout = "~/Views/Shared/MyCustomLayout.cshtml";
}

 

Inside the layout page itself, @RenderBody() defines where the page that uses your layout will render its content. Optionally, you can also reference @Page.Title if you want your layout page to emit a proper HTML title.
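Putting those two pieces together, a minimal layout page might look like the following sketch:

<!DOCTYPE html>
<html>
<head>
    <title>@Page.Title</title>
</head>
<body>
    @* Each view that uses this layout renders its content here. *@
    @RenderBody()
</body>
</html>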

It is possible to define as many custom sections as needed in your layout page using the @RenderSection function. If a section is marked as required, the page will throw an exception if the view using the layout does not define it.

 

@RenderSection("RazorIntroSection", required:true)

 

To define the custom section in your corresponding view page, use the @section function along with your section name. All of the HTML and functions inside the braces become part of that section.

 

@section RazorIntroSection
{
   <p>This is the razor intro section defined in my view.</p>
}
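If the section is instead declared with required:false, the layout can guard the call with IsSectionDefined so views that omit the section still render; for example:

@* In the layout page: render the section only when the view defines it. *@
@if (IsSectionDefined("RazorIntroSection"))
{
    @RenderSection("RazorIntroSection", required: false)
}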

 

Links and HTML

Throughout your Razor views, you will need to reference controller URLs as well as content files such as images. For this purpose, Razor includes the @Url.Content and @Url.Action functions. As in earlier versions of ASP.NET, you pass these functions the desired path preceded by a tilde (~), which refers to the virtual root of your site.

 

  <a href="@Url.Content("~/Content/Images/Vacation.png")">Look at my vacation!</a>

 

The @Url.Action function allows you to reference MVC controller actions directly or to supply more advanced route values.
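For example, a link built from an action name, a controller name and a route value might look like this sketch (the names are illustrative):

<a href="@Url.Action("Details", "Product", new { id = 42 })">View product 42</a>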

By default, strings containing HTML markup, such as rich-text comments stored in your database, are automatically escaped by Razor. This means <p>This is a paragraph</p> would be translated by Razor into &lt;p&gt;This is a paragraph&lt;/p&gt; when the page runs. This is done mostly for security reasons, to prevent cross-site scripting attacks. If you want Razor to output the HTML exactly as it is stored in your string, without escaping it, use the MvcHtmlString.Create function shown below.

 

@MvcHtmlString.Create("<p>This is a paragraph.</p>")
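MVC3 also ships the @Html.Raw helper, which wraps a string in an IHtmlString so Razor skips the encoding; either form achieves the same result:

@* Equivalent output to the MvcHtmlString.Create call above. *@
@Html.Raw("<p>This is a paragraph.</p>")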

 

The Model

At the top of your Razor page, you can optionally define an @model type. Doing so isn't required, but if you define it, IntelliSense will work correctly across your page and compile-time validation will be performed accurately on your view. In the example below, if your action passes a List<string> to the view, you can iterate the list as shown with full IntelliSense support.

 

@model List<string>
@{
    Layout = "~/Views/Shared/_Layout.cshtml";
    Page.Title = "Model Bound Page";
}
<p>This is my list:</p>
<table><tbody>
@foreach(string label in Model)
{
    <tr><td>@label.ToUpper()</td></tr>
}
</tbody></table>

 

To pass your model from your controller to your view, simply return it as part of the View function as shown in the listing below. You can pass any kind of object from your controller to your view including non-serializable objects.

 

namespace IntroducingRazor.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            List<string> MyList = new List<string>();
            MyList.Add("Hello");
            MyList.Add("How");
            MyList.Add("Are");
            MyList.Add("You?");
            return View(MyList);
        }
    }
}

 

In a perfect world, your views align well with your model and you keep as much logic out of the view as possible. In some situations, though, you will need to provide multiple unrelated objects to your Razor view to meet your project requirements. This can be done quickly and easily by leveraging the .NET Framework 4.0 Tuple, as the view below shows (the matching controller action is sketched after it).

 

@model Tuple<List<Product>,List<RSSFeed>>
<ul>
@foreach(Product product in Model.Item1)
{
    <li>Product: @product.ProductName</li>
}
</ul>
<ul>
@foreach(RSSFeed feed in Model.Item2)
{
    <li>Feed: @feed.FeedName</li>
}
</ul>
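The controller side of that sketch builds and passes the tuple; GetProducts and GetFeeds below are hypothetical stand-ins for whatever data access you use:

public ActionResult Index()
{
    // Hypothetical data sources; in practice these would query a repository.
    List<Product> products = GetProducts();
    List<RSSFeed> feeds = GetFeeds();

    // Tuple.Create infers Tuple<List<Product>, List<RSSFeed>>, matching
    // the @model declaration in the view.
    return View(Tuple.Create(products, feeds));
}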

 

Putting it together

The included code example was created by choosing New > Project, then ASP.NET MVC3 Web Application. In the dialog that follows, I selected "Empty" and accepted the default values, including "Razor" as the view engine. The code example was kept intentionally simple to provide a good starting point for understanding how Razor sites are put together.

Conclusion

Razor provides the most productive view syntax yet for the .NET Framework: a powerful, clean and concise way to define web views on the .NET platform.

 

Microsoft IT Developing Applications with Microsoft Azure

January 21, 2011 § Leave a comment

Microsoft IT Developing Applications with Microsoft Azure

Published: January 2011

This content discusses the real-world experiences from Microsoft IT and the social and video experience platform hosted on Windows Azure that powers the Microsoft.com Showcase and cloud sites.

Please view the video at:

http://technet.microsoft.com/en-us/edge/microsoft-it-developing-applications-with-microsoft-azure.aspx

 

 

Intended Audience

Developers

Products

  • SQL Azure

For More Information

For more information about Microsoft products or services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Order Centre at (800) 933-4750. Outside the 50 United States and Canada, please contact your local Microsoft subsidiary. To access information via the World Wide Web, go to:

http://www.microsoft.com

http://www.microsoft.com/technet/itshowcase

© 2011 Microsoft Corporation. All rights reserved.

This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY. Microsoft, Windows, SQL Azure, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

 

Microsoft to embrace and extend HTML 5? ‘WPF and Silverlight at risk in faction war’

January 20, 2011 § Leave a comment

Microsoft watchers are poring over a series of Twitter posts from former Silverlight Product Manager Scott Barnes, a web design and user experience specialist.

According to Barnes, just back from a week of briefings at Microsoft, there is intense internal debate about the future of HTML 5, newly implemented in the forthcoming Internet Explorer 9, and the Silverlight plug-in. He tweeted:


“Right now there’s a faction war inside Microsoft over HTML5 vs Silverlight. oh and WPF is dead.. i mean..it kind of was..but now.. funeral.”

WPF is Windows Presentation Foundation, the rich user interface framework that was originally intended to become the primary GUI API for Windows Vista, but was sidelined when Vista development was “reset” in 2004, and does not feature strongly in Windows 7. “There’s no-one working on it beyond minor touch-ups,” says Barnes.

That said, Visual Studio 2010, released earlier this year, makes heavy use of WPF, lending credence to the idea that Microsoft’s Windows team and its Developer division have divergent strategies.

The big debate now is over Silverlight versus HTML5. Barnes claims that the Windows and IE teams see the revved-up Internet Explorer as the replacement for WPF. Since it has hardware-accelerated video, a fast JavaScript engine and support for the Canvas element for custom graphics, that is plausible. But what about access to the Windows API? No problem, says Barnes:

“HTML5 is the replacement for WPF.. IE team want to fork the HTML5 spec by bolting on custom windows APi’s via JS/HTML5”

This would be a classic “embrace and extend” strategy, encouraging developers to create Windows-specific HTML 5 applications, though Microsoft risks losing the goodwill IE9 is generating for its support of web standards among people like Opera’s Molly Holzschlag, who said in March that Microsoft’s new browser “will kick butt”.

If Microsoft does move in this direction it will be a significant shift from the current strategy, which places WPF as the framework for Windows desktop applications, and Silverlight as a subset of WPF suitable for browser-hosted or out-of-browser applications that run cross-platform. That’s on Macs as well as Windows at least, though Apple’s exclusion of runtimes like Flash and Silverlight from its device platform is damaging its value. WPF and Silverlight use the same XML-based layout language, called XAML, and support programming in .NET languages.

Silverlight is also the applications platform for Windows Phone 7, Microsoft’s attempt to get back in the mobile race, which launches later this year.

Earlier this month, Brad Becker, of Microsoft’s Developer Platforms team, defended the role of Silverlight in a blog post, saying that it remains better for “premium media experiences and apps”. ®

 
