Reverse Engineering Legacy Applications

September 25, 2012


This article is in the Book Review chapter. Reviews are intended to provide you with information on books – both paid and free – that others consider useful and of value to developers. Read a good programming book? Write a review!


This review was first published by the author in the Sept 2012 issue of Software Developer’s Journal.

Object Oriented Reengineering Patterns, written by Serge Demeyer, Stéphane Ducasse, and Oscar Nierstrasz, is now out of print but is available for free download online. The book addresses the common case that development does not start from a clean slate, which, as Martin Fowler points out in the Foreword, is “…not the most common situation that people write code in.” Rather: “Most people have to make changes to an existing code base, even if it’s their own.”

This article is a review of the book, which I found to contain gems of wisdom. I have also taken the liberty of embellishing the ideas presented in the book with my own personal experiences, and of adding what I feel is not given sufficient coverage.

The book is divided into four sections:

  • Introduction
  • Reverse Engineering
  • Reengineering
  • Appendices

Each section has several subsections in which the reader will discover a topic called “Forces.” This is a very useful narrative of the factors that need to be considered within that section of engineering patterns. For example, forces might involve different stakeholder agendas, risk analysis, prioritization, cost, etc.

The “patterns” presented in the book are not what is typically encountered as coding patterns, such as “Abstract Factory”, “Singleton”, etc. Rather, the patterns identified are common problems. The majority of the book consists of:

  • A problem statement
  • A solution
  • Tradeoffs
  • Rationale
  • Known Uses
  • Related Patterns / What’s Next

The consistency of this approach allows the reader to easily scan the book for his or her particular set of problems and focus on the suggested solutions, as well as the tradeoffs (pros and cons), the rationale (history) of the pattern, where the pattern has been used before (known uses), and related patterns for further investigation.

Reverse Engineering, Reengineering, and Forward Engineering

In the Introduction, a key distinction is made between three different activities. I found that this distinction is very useful as it identifies activities of analysis, corrective rework, and new work. By separating tasks into these three categories, a more complete picture of the legacy system can be developed, one which then provides critical information to be applied to decisions such as budgets and schedules and required expertise.

Reverse Engineering

Figure 1: Setting Direction (pg. 20)

The authors provide a concise statement of what reverse engineering is: “reverse engineering is essentially concerned with trying to understand a system and how it ticks.” This section of the book has several high-level topics:

  • Setting Direction
  • First Contact
  • Initial Understanding
  • Detailed Model Capture

each of which provides a group of patterns to aid in solving problems in that section.

I found the discussion of Forces in each section to be valuable – reverse engineering and reengineering require a level of constant vigilance that is brought to consciousness by reading through the Forces sections. I would actually recommend that the Forces sections be re-read weekly by all team members and management at the beginning of any reverse / reengineering effort. There are also some omissions in the discussion, which I will address next.

What About the QA Folks?

One of the most valuable sources of information that I have found when working with legacy applications is talking with the Quality Assurance folks. These are the people who know the nuances of the application – things that even the developers don’t know. While this might be inferred from the sub-section “Chat with the Maintainers”, the focus there seems to be on the people maintaining the code – for example: “What was the easiest bug you had to fix during the last month?” This is, in my opinion, a significant omission of the book.

Regulations and Compliance Certification

One of the stumbling blocks I once encountered in a reengineering project was that the existing code had been certified to meet certain compliances. Re-certifying new code would be a costly and time-consuming process. This raises an issue that should not be ignored but which unfortunately the book completely omits – are there third-party certifications that the software must pass before it can be reengineered or forward engineered? What are those certifications and how were they achieved in the past?

Reverse Engineering is About More Than Code

I found the book to be a bit too code-centric in this section. For example, under the section “Detailed Model Capture”, the subsections:

  • Tie Code and Questions
  • Refactor to Understand
  • Step Through the Execution
  • Look for the Contracts

are all very code-centric. Several discussions seem to be lacking:

  • Tools are available to aid in the reverse engineering process
  • Reverse engineering the database
  • Documenting user interface features

These points are critical to detailed reverse engineering. Tools that generate class and schema diagrams and reverse engineer code into UML diagrams can be invaluable in a detailed capture of the application. Over time, the user interface has probably accumulated all sorts of shortcuts and interesting behaviors patched in as users made requests, and missing these behaviors will alienate users from any new application.

I found that tools to support the documentation process could have been discussed. For example, I have set up in-house wikis for companies to provide a general repository for documenting legacy applications and providing a forum for discussion on many of the excellent points the book raises regarding communication and problem analysis.

Another tool I have often found lacking in companies maintaining legacy applications is source control (I kid you not). Most legacy systems are still being updated, often with bug fixes, during the reverse / re-engineering effort. A source control system is critical because it lets developers ensure that any new implementation mirrors the changes in the legacy system. It also provides a source of documentation – when a change is made to the legacy system, the developer can add comments that aid in the reverse engineering: why the change was made, what was discovered in making it, how it was made, and so forth.


Reengineering

Figure 2: Migration Strategies (pg. 182)

The authors define reengineering as being “concerned with restructuring a system, generally to fix some real or perceived problems, but more specifically in preparation for further development and extension.” This section of the book has five sub-sections:

  • Tests: Your Life Insurance!
  • Migration Strategies
  • Detecting Duplicate Code
  • Redistribute Responsibilities
  • Transform Conditionals to Polymorphism

again, each of which provides a group of patterns to aid in solving problems in that section.
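The last of these patterns, “Transform Conditionals to Polymorphism”, lends itself to a minimal sketch (the class names here are my own illustration, not the book’s example): a type-switch on a tag field is replaced by subclasses that each own their behavior.

```python
# Before: a type-switch that must be edited every time a new case appears.
def area_switch(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "rect":
        return shape["w"] * shape["h"]
    raise ValueError("unknown shape")

# After: each subclass owns its behavior; adding a new shape no longer
# requires touching existing code.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Rect(Shape):
    def __init__(self, w, h):
        self.w = w
        self.h = h
    def area(self):
        return self.w * self.h
```

The refactored form is what lets the reengineered system absorb new requirements without reopening old conditionals.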

While reengineering definitely involves testing and data migration, again I found this section to be overly code-centric. With legacy applications, I often encounter the following database issues:

  • non-normalized databases
  • obsolete fields
  • missing constraints (foreign keys, nullability)
  • missing cascade operations, resulting in dead data
  • fields with multiple meanings
  • fields that are no longer used for the data that the field label describes
  • repetitive fields, like “Alias”, “Alias1”, “Alias2”, etc., that were introduced because it was too expensive to create additional tables and support a many-to-one relationship
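That last smell – repetitive “Alias1”, “Alias2” columns – is worth a concrete sketch. The schema below is hypothetical, but it shows how such columns migrate into a proper child table (Python with the standard sqlite3 module):

```python
import sqlite3

# Hypothetical legacy schema: repetitive alias columns on the person table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, "
            "alias TEXT, alias1 TEXT, alias2 TEXT)")
con.execute("INSERT INTO person VALUES (1, 'Smith', 'Smitty', 'SM', NULL)")

# Normalized target: one row per alias, a true many-to-one relationship,
# so adding a fourth alias no longer requires a schema change.
con.execute("CREATE TABLE alias (person_id INTEGER REFERENCES person(id), "
            "alias TEXT NOT NULL)")
for pid, a0, a1, a2 in con.execute(
        "SELECT id, alias, alias1, alias2 FROM person").fetchall():
    for a in (a0, a1, a2):
        if a is not None:                 # skip unused slots
            con.execute("INSERT INTO alias VALUES (?, ?)", (pid, a))

aliases = [row[0] for row in con.execute(
    "SELECT alias FROM alias WHERE person_id = 1 ORDER BY alias")]
```

In a real migration the column names and the handling of duplicates would come out of the reverse engineering effort, but the shape of the fix is always the same.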

Reengineering a database will break the legacy application, but it is absolutely required to move forward to supporting new features and requirements. Thus the pattern “Most Valuable First” (pg. 29), in which it is stated:

“By concentrating first on a part of the system that is valuable to the client, you also maximize the commitment that you, your team members and your customers will have in the project. You furthermore increase your chances of having early positive results that demonstrate that the reengineering effort is worthwhile and necessary.”

can be very misleading. The part of the system that is valuable to the client often involves reengineering a badly designed / maintained database, and reengineering the database will take time – you will simply have to bite the bullet and accept that early positive results are not achievable.

Forward Engineering

Lastly, the authors make a distinction between reengineering and new engineering work: “Forward Engineering is the traditional process of moving from high-level abstractions and logical, implementation-independent designs to the physical implementation of a system.” Forward engineering is the further development and extension of the application, once one has been adequately prepared by the reverse engineering and reengineering process.

The reader will notice that, subsequent to the Introduction, there are two sections describing reverse engineering patterns and reengineering patterns, but there is no section describing forward engineering patterns. There is a very brief coverage of common coding patterns in the Appendix. Certainly there are enough books on forward engineering best practices, but in my experience there is a significant step, which I call “Supportive Engineering”, that often sits between reverse engineering and the reengineering / forward engineering process.

Supportive Engineering

Figure 3: Bridging Legacy and Forward Engineered Products (author)

Reengineering of legacy applications often requires maintaining both old and new applications concurrently for a period of time. What I have termed “Supportive Engineering” is the code necessary to bridge data and processes during this concurrent phase of product support. Depending on the scope of the legacy system, this phase may take several years! But it should be realized that essentially all of the “bridge” code written will eventually be thrown away as the legacy application is replaced.

Commercial and In-House Tools

Supportive engineering also includes the use of commercial tools and the in-house development of tools that support the reverse and reengineering efforts. For example, a variety of unit test frameworks are readily available, but the developers must still write the specific unit tests (see the section “Tests: Your Life Insurance!” in the book). A legacy application may utilize a database, and that database will probably not be normalized, requiring tools to migrate data from the legacy database to a properly normalized one.
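As a sketch of what “Tests: Your Life Insurance!” looks like in practice, here is a characterization test using Python’s unittest. The legacy routine and its undocumented discount rule are illustrative inventions, but the point carries over: pin down what the code actually does, so the reengineered version can be checked against the same behavior.

```python
import unittest

# A stand-in for a legacy routine whose exact behavior must be discovered
# by reading the code (both the function and its rules are illustrative).
def legacy_price(qty):
    if qty > 100:          # undocumented bulk discount found in the code
        return qty * 9
    return qty * 10

class CharacterizationTests(unittest.TestCase):
    """Record what the legacy code *does*, not what a spec says it should do."""
    def test_normal_order(self):
        self.assertEqual(legacy_price(10), 100)
    def test_undocumented_bulk_discount(self):
        self.assertEqual(legacy_price(101), 909)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CharacterizationTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once such tests exist, they become the safety net for every subsequent reengineering step.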

Concurrent Database Support

Furthermore, during the concurrent phase, it may be necessary to maintain both the re-engineered database and the legacy database. This doesn’t just involve migrating data (discussed in the book under the section “Make a Bridge to the New Town”). It may involve active synchronization between the legacy (often non-normalized) and reengineered (hopefully normalized) databases. Achieving this can itself be a significant development effort, especially as the legacy application probably lacks the architecture to create notifications of data changes, most likely requiring some kludge (PL/SQL triggers, timed sync, etc.) to keep the normalized database in sync.
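A minimal illustration of the trigger-style kludge just mentioned, using SQLite in place of PL/SQL (the tables are hypothetical): a trigger records each change in a log table that a periodic sync job can later replay into the normalized database.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE legacy_person (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE change_log (table_name TEXT, row_id INTEGER, op TEXT)")

# The legacy schema has no change-notification hooks, so a trigger
# captures every update into the change log for later synchronization.
con.execute("""
    CREATE TRIGGER person_update AFTER UPDATE ON legacy_person
    BEGIN
        INSERT INTO change_log VALUES ('legacy_person', NEW.id, 'U');
    END""")

con.execute("INSERT INTO legacy_person VALUES (1, 'Smith')")
con.execute("UPDATE legacy_person SET name = 'Smythe' WHERE id = 1")

# One pending change for the sync job to replay.
pending = con.execute("SELECT COUNT(*) FROM change_log").fetchone()[0]
```

The real effort lies in the replay job and in conflict handling, but even this skeleton makes clear why the bridge code is a project in its own right.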

Data Compatibility

Another issue that comes up when having to maintain both legacy and reengineered databases is one of data compatibility. Perhaps the purpose of the reengineering, besides normalization, is to provide the user with more complicated relationships between data. Or perhaps what was a single record entry form now supports multiple records – for example, the legacy application might allow the user to enter a single alias for a person, while the new record management database allows the user to enter multiple aliases. During the concurrent phase, it becomes a significant issue to determine how to handle “data loss” when synchronizing the legacy system with data changes in the new system, simply from the fact that the legacy system does not support the same flexibility as the new, normalized database.

It Will All Be Thrown Away

Remember that the tools, tests, and software designed to bridge reengineered and legacy applications during the concurrent support phase will become obsolete once the legacy application is completely phased out. These costs (time and money) should be clearly understood and communicated to all stakeholders of the reengineering effort – management, developers, users, etc.


The book Object Oriented Reengineering Patterns offers some excellent food for thought. One of the most positive things about this book is that it will give you pause to think and hopefully to put together a realistic plan for reengineering and subsequently forward engineering a legacy application. For example, the advice in the section “Keep it Simple”:

“Flexibility is a double-edged sword. An important reengineering goal is to accommodate future change. But too much flexibility will make the new system so complex that you may actually impede future change.”

is useful as a constant reminder to developers, managers, and marketing folks. However, I think the book is too focused on the reengineering of code, leading to some gaps with regard to databases, documentation tools, and certification issues, to name a few. I also don’t think the concept of “reengineering patterns” was adequately presented – the book is more of a “leading thoughts” guide, and from that perspective it has some very good value.


What is Siri

September 24, 2012

Siri is an intelligent personal assistant and knowledge navigator that works as an application for Apple’s iOS. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Web services. Apple claims that the software adapts to the user’s individual preferences over time, personalizes results, and performs tasks such as finding recommendations for nearby restaurants or getting directions.
Siri was originally introduced as an iOS application available in the App Store by Siri, Inc., which was acquired by Apple on April 28, 2010. Siri, Inc. had announced that its software would be available for BlackBerry and for Android-powered phones, but all development efforts for non-Apple platforms were cancelled after the acquisition by Apple.
Siri has been an integral part of iOS since iOS 5, first supported on the iPhone 4S. With the September 2012 release of iOS 6, support was extended to the iPad (3rd generation), the new iPhone 5, and the iPod Touch (5th generation). Siri is not supported on the iPhone 4 and earlier models, even under iOS 6.

NASA’s Android-powered mini-satellites

September 19, 2012

What could you do with $3500 and an old smartphone? You could toss the phone, take the cash, and buy yourself the next best smartphone. Or, with a bit of a DIY attitude, you could push the limits of this hefty piece of technology and use it to power something else.

That’s the initial idea of the PhoneSat project, a small satellite project run by a team of engineers at NASA’s Ames Research Center. The PhoneSat team has built and developed two small satellite models powered by Android smartphones, with the rest of the body constructed mostly of things you could buy at your nearest hardware store.

At a 2009 meeting of the International Space University’s Space Studies Program, a group of young engineering students had a big idea: Why not try to develop a cheap satellite-control system using mostly off-the-shelf consumer technology items? They wanted to see if they could create something that could survive in space using existing technology, rather than spending resources to invest in building items from the ground up.

The idea is relatively simple. “Here’s what you have in your pocket, and it can fly in space,” says Oriol Tintore, a software and mechanical engineer for NASA’s PhoneSat project.

After we reported on these microsatellites earlier this summer, the PhoneSat team invited us to visit their lab at NASA’s Ames Research center in Moffett Field, California, in the heart of Silicon Valley. They showed off the two current PhoneSat models, PhoneSat 1.0 and PhoneSat 2.0, and further explained their upcoming mission, now scheduled for November 11, 2012.

PhoneSat: Meet the team, and the units

From left to right: Jasper Wolfe, Jim Cockrell, Oriol Tintore, Alberto Guillen Salas, and Watson Attai. Photo by Robert Cardin.

That big idea – a cheap satellite-control system built mostly from off-the-shelf consumer technology, first proposed at the 2009 meeting of the International Space University’s Space Studies Program – put the PhoneSat project into motion.

Today, team PhoneSat is a small group of about ten engineers, led by Small Spacecraft Technology program manager Bruce Yost and PhoneSat project manager Jim Cockrell.

“[PhoneSat] gives young engineers a chance to work on something that’s actually going to fly in space early on in their careers,” said Cockrell.

Although the project idea emerged in 2009, the building, designing, and testing of the parts that would make up the PhoneSat 1.0 model began in 2010. To power the satellite units, the PhoneSat team turned to small phones with large processors.

“This phone has a very powerful processor of 2.4MHz, which is more powerful than most processors out there in space,” says Jasper Wolfe, who handles altitude control for the project. “It has just about everything we need, so why not use it? It’s a few hundred dollars compared to tens of thousands of dollars.”

A PhoneSat 1.0 model. Photo by Robert Cardin.

The units themselves are small—compact enough to fit in your hand. At 10cm by 10cm by 10cm, each device is barely larger than a coffee cup. And the cost of each of these units is relatively cheap: PhoneSat 1.0 costs roughly $3500, while PhoneSat 2.0 is about $7800 due to its more advanced hardware.

PhoneSat 1.0 houses a Google Nexus One smartphone, which runs a single Android application that the team developed themselves. All of the Nexus’s phone capabilities have been disabled—Wolfe jokes that they have to set the phones to “airplane mode” before launch—and instead the device relies on the PhoneSat app for communication and data recording.

The first PhoneSat 1.0 model crashed during a speed test when its parachute deployed early, ruining the Nexus One inside. Photo by Robert Cardin.

The PhoneSat team was initially attracted to the Nexus One because, at the time, it was one of the best smartphones available. Plus, they liked the open-source nature of developing for the Android platform.

“We talked about whether we should use an Android phone versus something else, like an iPhone, and the consensus was that the iPhone was a great phone, but an Android phone was a great satellite,” says Tintore, the team’s mechanical and software engineer.

Besides the Nexus One, the main pieces of the satellite include external batteries and an external radio beacon. A watchdog circuit will monitor the system and reboot the Nexus if necessary.

PhoneSat 1.0 (left) and PhoneSat 2.0 (right). Photo by Robert Cardin.

The team plans to evolve the satellites as technology evolves, which is why PhoneSat 2.0 uses a Samsung Nexus S instead of a Nexus One. In fact, the Nexus S has an added gyroscope already built in, which has been “extraordinarily helpful” in building a next-gen satellite, according to Wolfe. “It’s just a tiny little chip, but very useful,” he says. The gyroscope helps the phone measure and maintain orientation, so it assists with navigation as well as with the motion and rotation of the phone itself.

PhoneSat 2.0’s design includes a two-way S-band radio, solar arrays for unlimited battery regeneration (“Well, for as long as there is sun,” says Yost), and a GPS receiver. The radio will command the satellite from the ground, while the solar panels will enable the unit to embark on a mission with a long duration. Also built into the PhoneSat 2.0 design are magnetorquer coils (electromagnets that interact with Earth’s magnetic field) and reaction wheels to control the unit’s orientation in space.

Each model has been tested in environments that closely resemble what they could encounter while in space. Two Nexus One phones were launched on smaller rockets in 2010 as a preliminary test of how the phones would handle high speeds and high altitude. One rocket crashed and destroyed the smartphone; the other landed with the Nexus One perfectly intact. Both PhoneSat 1.0 and 2.0 models have also been tested in a thermal-vacuum chamber, on vibration and shock tables, and on high-altitude balloons, all with great success.

PhoneSat’s first mission: survival

Three of the PhoneSat units—two PhoneSat 1.0 models and one PhoneSat 2.0—are gearing up for their first mission. They’ll be hitching a ride on the first test flight of the orbital Antares rocket, which is scheduled to launch on November 11. Because the satellites are “hitchhikers,” according to the team, their status is entirely dependent upon when the Antares is ready to fly. The Antares’s first test flight was originally supposed to occur in August of this year, but that was delayed due to complications with the launch facility.

A scene from the PhoneSat lab. Most of these parts are common items anyone can purchase. Photo by Robert Cardin.

The delay of the launch was somewhat beneficial to the PhoneSat team, because it gave them more time to improve PhoneSat 1.0 and test PhoneSat 2.0. Originally, only PhoneSat 1.0 models were scheduled to fly on the Antares, but the delay allowed for a PhoneSat 2.0 model to go for a ride as well.

Each PhoneSat unit has its own role in the mission, which is more of a tech demonstration to show off what the team has accomplished with these microsatellites. The team jokes that PhoneSat 1.0 has a simple Sputnik-like goal of broadcasting status data back to Earth and taking photos. PhoneSat 2.0 will test the subsystems on the satellites themselves—features such as desaturation, location control, and the power-regeneration system.

The satellites will be in orbit for about 10 to 14 days before they reenter our atmosphere. According to Cockrell, mission duration isn’t limited by battery life and power, but rather by atmospheric drag: The higher the satellites go, the longer they will stay in orbit. Although the PhoneSat 1.0’s batteries are expected to run out after ten days, PhoneSat 2.0 is solar powered, so it could, in theory, live longer. But it will still come back at around the same time as the two PhoneSat 1.0 models, because they all have roughly the same mass.

The future of PhoneSat

Photo by Robert Cardin.

The team has plenty of ideas on things they’d like to try in the next PhoneSat rebuilds. Because their projects aren’t specific missions and are more about tech demonstrations, the team can always develop further by adding new subsystems. They definitely envision building a PhoneSat 3.0, a 4.0, and even more, because the team views the project as never being fully complete.

“We don’t have a defined level of when it’s completed,” Wolfe explains. “[It’s more about] however much you can make in a bit of time, how much can you get out of three months, then six months, and so on.”

According to Alberto Guillen Salas, the team’s hardware engineer, using materials that are mostly already built and ready to go gives the team the ability to develop satellites in a very short time. That way, they can test the units often to keep improving them.

As for smartphone models to try next, Wolfe suggested the Samsung Nexus Galaxy “because of the name” and to stick with using phones from the Google family. The team would also like to use a smartphone with a high-resolution camera to capture good quality photos from space.

PhoneSat’s next expected launch will be in the summer of 2013, when the team will be advancing the development of the PhoneSat 2.0 unit. The primary focus of this next tech demo is to push the 2.0 system and see what the group can do with it. Plus, the team can use information gathered from the first PhoneSat 2.0 launch this year to make improvements on the model. Radiation testing is extremely difficult to perform, so Cockrell anticipates that the team may have to update PhoneSat 2.0’s design to protect it from radiation for future launches. Only one unit will participate in the 2013 launch.

The part of the PhoneSat project that really excites the team is the possibility of involving the public in future PhoneSat developments, mainly through Android application development. The group would like to open the project up to allow people to write apps for the PhoneSats, and then send the units into space. The PhoneSat project has gathered interest through its public appearance at Maker Faire, through Random Hacks of Kindness, and through the International Space Apps Challenge.

“We’re getting a whole new crowd of people involved in space, people that didn’t have the money to get involved before,” says Wolfe, “though the leveraging of the open-source community around Android is also opening up a whole new market of people who want to get involved in space.”




Xbox, Not Windows, Is The Future Of Microsoft – Says Steve Ballmer

September 19, 2012

The Xbox, the Zune, Windows Phone: in ten years, these products will be the model for Microsoft, as it evolves away from a software company into a hardware-and-services provider. At least, that’s the corporate vision outlined by Microsoft CEO Steve Ballmer this weekend.

Ballmer formalized the shift in strategy in an interview published by the Seattle Times over the weekend. Among other topics, Ballmer talked about the blurring lines between tablets and PCs – and dropped hints about the price of the company’s upcoming Surface tablets.

An Epic Year For Microsoft?

Microsoft is updating all of its major product lines this year, with the new Windows 8, Office, Windows Server and Internet Explorer releases, plus ongoing updates to the software used by its Xbox gaming platform. But while Ballmer called the year “epic,” he appears ready to, if not jettison those products, then at least subsume them in a new wave of devices around which Microsoft can develop platforms.

“I think when you look forward, our core capability will be software, [but] you’ll probably think of us more as a devices-and-services company,” Ballmer told the Times. “Which is a little different. Software powers devices and software powers these cloud services, but it’s a different form of delivery….

“Doesn’t mean we have to make every device,” Ballmer added. “I don’t want you to leap to that conclusion. We’ll have partners who make devices with our software in it and our services built in… We’re going to be a leader at that.”

Surface Pricing Secrets?

The latest example of Microsoft’s newfound love of devices is the Surface tablet, which Microsoft has designed in two configurations, Surface RT and Surface Pro; the basic model will use the Windows RT operating system and an ARM chip, while the more powerful model will use a traditional x86 Intel chip and the new Windows 8 OS.

One of the most important unanswered Surface questions has been what Microsoft plans to charge for the devices; analysts have said they believe Microsoft would be justified in charging a premium, perhaps up to $1,000 for the higher-end Surface configuration.

But Ballmer’s interview might have given away a little more of his thinking on the subject. Ballmer was asked whether or not the Surface would compete with the Apple iPad on price or on features. Ballmer didn’t reply directly, but framed his answer by noting that cheaper devices are often expected to be less full-featured.

“If you say to somebody, would you use one of the 7-inch tablets, would somebody ever use a Kindle (Kindle Fire, $199) to do their homework?” Ballmer answered. “The answer is no; you never would. It’s just not a good enough product. It doesn’t mean you might not read a book on it…

“If you look at the bulk of the PC market, it would run between, say, probably $300 to about $700 or $800,” Ballmer said. “That’s the sweet spot.”

So will a Surface RT be priced at about $300, with the higher-end models running between $700 and $800? Sure sounds like it.

Microsoft Moving “Away” From Software?

It’s a real stretch to imagine Microsoft ditching its cash cows: Windows, Office and its enterprise products like Windows Server, SharePoint and related services. After all, those products make up the bulk of the company’s revenues.

But a transition to a cloud-based services model does make sense. Today, Microsoft delivers more than 200 online services to more than 1 billion customers and 20 million businesses in more than 76 markets worldwide, the company recently claimed.

Of course, the majority of these are delivered by the original computing “device,” the PC. But as rivals like Google capitalize on their own services and advertising-driven models, Microsoft has to take additional steps in this direction. While the new Microsoft Office Web Apps make a conscious effort to avoid giving away Microsoft Office’s value-added features for free, they clearly pave the way toward a services-driven future.

Xbox And Beyond

In devices, Microsoft manufactures the Xbox, its own platform for online movies and music sales. And though it has done so quietly, Microsoft also sells its “own” PCs: the “Signature” line of notebooks that it buys from its OEM partners, optimizes by removing adware, and sells directly to consumers.

Ballmer took pains to avoid stating that Microsoft would, in fact, make every device that uses its software. But his statement offers ammunition to those who think that Microsoft may end up buying Nokia, its premier Windows Phone partner, which just happens to have an ex-Microsoft exec, Stephen Elop, as CEO. Microsoft’s “Signature” strategy could eventually morph into a Google Nexus-like approach of building a “flagship” notebook that hardware partners could use as a reference model.

Windows 8 Is All About Mobile

Ballmer also offered one more telling tidbit: “[Windows 8] also brings us into this world of much more mobile computing and more mobile form factors,” Ballmer said. “I think it’s going to be hard to tell what’s a tablet and what is a PC.”

Microsoft’s decision to enter the tablet hardware market has already proved problematic for at least one notebook manufacturer, Acer, which has publicly complained about Microsoft’s decision to manufacture the Surface tablet. But Ballmer’s statement can also be read another way: that making traditional notebooks is a backward-looking strategy, and companies that cling stubbornly to their old business models may be passed by.

Announcing: Great Improvements to Windows Azure Web Sites (from ScottGu’s Blog)

September 17, 2012


I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.

Today’s improvements include: a new low-cost shared mode scaling option; support for custom domains with shared and reserved mode web-sites using both CNAME and A-records (the latter enabling naked domains); continuous deployment support using both CodePlex and GitHub; and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately.

New “Shared” Scaling Tier

Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update.

Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed:


Below are some more details on the new “shared” option, as well as the existing “reserved” option:

Shared Mode

With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.

A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (i.e. the bare domain with no “www” prefix).  We will also in the future enable SNI-based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers).

You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).
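The quoted monthly average follows directly from the hourly rate. As a quick sketch (in Python, assuming a 30-day/720-hour billing month, which matches the $9.36 figure above):

```python
# Shared-mode pricing sketch, using the preview rate quoted in the post.
# Assumes a 30-day (720-hour) billing month.
SHARED_RATE_PER_HOUR = 0.013  # $0.013/hr = 1.3 cents/hr

hours_per_month = 24 * 30
monthly_cost = SHARED_RATE_PER_HOUR * hours_per_month
print(f"${monthly_cost:.2f}/month")  # → $9.36/month
```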

Reserved Instance Mode

In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory.

You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds:


Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.

Elastic Scale-up/down

Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application.

If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective.
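The “popular weekend” scenario above can be made concrete with a little arithmetic. This is a sketch (in Python) using the rates quoted in the post and assuming a 30-day month with one 48-hour burst in a small reserved VM:

```python
# Mixed-month "pay as you go" sketch: mostly shared mode, plus a
# 48-hour weekend burst in a small reserved VM (rates from the post).
SHARED_RATE = 0.013    # $/hr, shared mode
RESERVED_RATE = 0.08   # $/hr, small reserved VM

total_hours = 24 * 30
reserved_hours = 48                      # one busy weekend
shared_hours = total_hours - reserved_hours

mixed_cost = shared_hours * SHARED_RATE + reserved_hours * RESERVED_RATE
all_shared = total_hours * SHARED_RATE

print(f"mixed month: ${mixed_cost:.2f}")     # → mixed month: $12.58
print(f"all shared:  ${all_shared:.2f}")     # → all shared:  $9.36
```

So the weekend of dedicated capacity adds only a few dollars to the bill, rather than requiring a reserved VM for the whole month.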

Improved Custom Domain Support

Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them.  You can associate multiple custom domains to each Windows Azure Web Site.

With today’s release we are introducing support for A-Records (a big ask by many users). With A-record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning the bare domain, with no “www” sub-name prefix.  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).

We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them:


As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.

Continuous Deployment Support with Git and CodePlex or GitHub

One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page:


The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then, if they are successful, automatically publish to Azure.

With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode).

Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site:


You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walk through a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a check-in occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.

The two videos below walk through how easy it is to enable this workflow, deploy an initial app, and then make a change to it:

This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git:


Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks.

Support for multiple branches

Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.

For example, you can now have two web-sites – a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live.


This one-minute video demonstrates how to configure which branch to use with a web-site.


The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it.

We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available.

Hope this helps,


P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at:



(Source: ScottGu’s Blog)

.NET Deadlock Detection with PostSharp

September 17, 2012 § 2 Comments

I will keep writing about this topic every time I read about it or come across it. My past experience is evidence of how many sleepless nights deadlocks have cost me 🙂 – but it was worth it.

SharpCrafters, makers of the AOP framework PostSharp, have developed a drop-in deadlock detection toolkit. This toolkit works with most standard locking primitives such as Mutex, Monitor, and ReaderWriterLock with only a single line of code added to the project.

When a thread waits for a lock for more than 200ms, the toolkit will run a deadlock detection routine. If it detects a deadlock, it will throw DeadlockException in all threads that are part of the deadlock. The exception gives a detailed report of all threads and all locks involved in the deadlock, so you can analyze the issue and fix it.

Deadlock detection isn’t exactly hard, but it requires a significant amount of boilerplate code to be meticulously applied across the application.  The PostSharp Threading Toolkit automatically injects this code around lock statements using IL rewriting techniques.
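The core idea behind such a detection routine is a “wait-for” graph: an edge from thread T1 to T2 means T1 is blocked on a lock currently held by T2, and a deadlock is a cycle in that graph. Here is a minimal, language-agnostic sketch in Python (this is not PostSharp’s actual code, just an illustration of the technique; with one lock per wait, each thread waits on at most one other thread):

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph.

    wait_for maps each thread name to the thread it is waiting on
    (or None if it is not blocked). Returns the set of threads in a
    deadlock cycle, or None if there is no deadlock.
    """
    for start in wait_for:
        seen = []                      # chain of threads visited so far
        t = start
        while t is not None and t not in seen:
            seen.append(t)
            t = wait_for.get(t)        # follow the "waiting on" edge
        if t is not None:              # revisited a thread: cycle found
            return set(seen[seen.index(t):])
    return None

# T1 waits on T2 and T2 waits on T1 -> the classic two-thread deadlock;
# T3 is idle and is correctly left out of the reported cycle.
cycle = find_deadlock({"T1": "T2", "T2": "T1", "T3": None})
assert cycle == {"T1", "T2"}
```

A real toolkit additionally has to build this graph from live lock state and, as described above, decide which threads receive the exception.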

It should be noted that the toolkit uses very conservative logic to prevent false positives; the developers consider erroneously throwing a DeadlockException to be worse than an undetected deadlock. It also won’t work at all on asymmetric locks such as ManualResetEvent, AutoResetEvent, Semaphore and Barrier, because it is “not clear which thread is responsible for ‘signaling’ or ‘releasing’ the synchronization resource”.

Locks it can handle include:

  • Mutex: WaitOne, WaitAll, Release
  • Monitor: Enter, Exit, TryEnter (including the C# lock keyword; Pulse and Wait methods are not supported)
  • ReaderWriterLock: AcquireReaderLock, AcquireWriterLock, ReleaseReaderLock, ReleaseWriterLock, UpgradeToWriterLock, DowngradeToReaderLock (ReleaseLock, RestoreLock not supported)
  • ReaderWriterLockSlim: EnterReadLock, TryEnterReadLock, EnterUpgradeableReadLock, TryEnterUpgradeableReadLock, EnterWriteLock, TryEnterWriteLock, ExitReadLock, ExitUpgradeableReadLock, ExitWriteLock
  • Thread: Join

The PostSharp Threading Toolkit is released under the BSD 2-Clause License and is available on GitHub.

Entity Framework 5.0: Spatial Data Types, Performance Enhancements, Database Improvements

September 13, 2012 § Leave a comment

Entity Framework 5.0 adds support for spatial data types through the DbGeography and DbGeometry classes. It also introduces automatic compilation of LINQ to Entities queries: translated queries are now cached, so developers no longer need to use the CompiledQuery.Compile method as in previous versions.
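The caching idea is simple to sketch: key a cache on the query’s shape so the expensive translation step runs only once per distinct query. This Python sketch is purely illustrative (the function names are hypothetical; EF 5’s actual query cache is internal and keyed on expression trees):

```python
import functools

# Hypothetical illustration of automatic query-plan caching: the
# expensive "compile" step runs once per distinct query shape, and
# repeated calls are served from the cache.
@functools.lru_cache(maxsize=None)
def compile_query(query_text):
    # stand-in for translating a LINQ expression tree into SQL
    return f"SELECT ... /* compiled from: {query_text} */"

plan1 = compile_query("customers.Where(c => c.City == @p0)")
plan2 = compile_query("customers.Where(c => c.City == @p0)")
assert plan1 is plan2  # second call hit the cache; nothing recompiled
```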

According to the Entity Framework team at Microsoft, automatic query compilation can improve the performance of LINQ to Entities queries by nearly 600% compared with Entity Framework 4.0.

When creating new databases, Entity Framework 5.0 automatically detects which database engine to use based on the development environment, and it adds the ability to use enum properties in entity classes. The framework also adds tables to an existing database if the target database doesn’t contain any tables from the model.

The Entity Framework designer included with Visual Studio 2012 offers new features such as DbContext code generation, multiple diagrams per model, and table-valued functions, in addition to batch import of stored procedures, which allows multiple stored procedures to be added during model creation. New models created using the designer generate a derived DbContext and POCO classes by default.
