Agile Software Architecture Sketches and NoUML

June 26, 2013

Interesting article I read on InfoQ.com about the role of architecture in Agile development. So I thought I'd share it on my blog.

 

Agile Software Architecture Sketches and NoUML

Posted by Simon Brown

If you’re working in an agile software development team at the moment, take a look around at your environment. Whether it’s physical or virtual, there’s likely to be a story wall or Kanban board visualising the work yet to be started, in progress and done. Visualising your software development process is a fantastic way to introduce transparency because anybody can see, at a glance, a high-level snapshot of the current progress. As an industry, we’ve become pretty adept at visualising our software development process over the past few years, although it seems we’ve forgotten how to visualise the actual software that we’re building. I’m not just referring to post-project documentation; this also includes communication during the software development process. Agility is about moving fast and this requires good communication, but it’s surprising that many teams struggle to effectively communicate the design of their software.

Prescribed methods, process frameworks and formal notations

If you look back a few years, structured processes and formal notations provided a reference point for both the software design process and how to communicate the resulting designs. Examples include the Rational Unified Process (RUP), Structured Systems Analysis And Design Method (SSADM), the Unified Modelling Language (UML) and so on. Although the software development industry has moved on in many ways, we seem to have forgotten some of the good things that these older approaches gave us. In today’s world of agile delivery and lean startups, some software teams have lost the ability to communicate what it is they are building and it’s no surprise that these teams often seem to lack technical leadership, direction and consistency. If you want to ensure that everybody is contributing to the same end-goal, you need to be able to effectively communicate the vision of what it is you’re building. And if you want agility and the ability to move fast, you need to be able to communicate that vision efficiently too.

Abandoning UML

As an industry, we do have the Unified Modelling Language (UML), which is a formal standardised notation for communicating the design of software systems. I do use UML myself, but I only tend to use it sparingly for sketching out any important low-level design aspects of a software system. I don’t find that UML works well for describing the high-level software architecture of a software system and while it’s possible to debate this, it’s often irrelevant because many teams have already thrown out UML or simply don’t know it. Such teams typically favour informal boxes and lines style sketches instead but often these diagrams don’t make much sense unless they are accompanied by a detailed narrative, which ultimately slows the team down. Next time somebody presents a software design to you focussed around one or more informal sketches, ask yourself whether they are presenting what’s on the sketches or whether they are presenting what’s still in their head.

[Image: example NoUML software architecture sketches]

Abandoning UML is all very well but, in the race for agility, many software development teams have lost the ability to communicate visually too. The example NoUML software architecture sketches (above) illustrate a number of typical approaches to communicating software architecture and they suffer from the following types of problems:

  • Colour-coding is usually not explained or is often inconsistent.
  • The purpose of diagram elements (i.e. different styles of boxes and lines) is often not explained.
  • Key relationships between diagram elements are sometimes missing or ambiguous.
  • Generic terms such as “business logic” are often used.
  • Technology choices (or options) are usually omitted.
  • Levels of abstraction are often mixed.
  • Diagrams often try to show too much detail.
  • Diagrams often lack context or a logical starting point.

Some simple abstractions

Informal boxes and lines sketches can work very well, but there are many pitfalls associated with communicating software designs in this way. My approach is to use a small collection of simple diagrams that each show a different part of the same overall story. In order to do this though, you need to agree on a simple way to think about the software system that you’re building. Assuming an object oriented programming language, the way that I like to think about a software system is as follows … a software system is made up of a number of containers, which themselves are made up of a number of components, which in turn are implemented by one or more classes. It’s a simple hierarchy of logical building blocks that can be used to model most of the software systems that I’ve encountered.

  • Classes: in an OO world, classes are the smallest building blocks of our software systems.
  • Components: components (or services) are typically made up of a number of collaborating classes, all sitting behind a coarse-grained interface. Examples might include a “risk calculator”, “audit component”, “security service”, “e-mail service”, etc depending on what you are building.
  • Containers: a container represents something in which components are executed or where data resides. This could be anything from a web or application server through to a rich client application, database or file system. Containers are typically the things that need to be running/available for the software system to work as a whole. The key thing about understanding a software system from a containers perspective is that any inter-container communication is likely to require a remote interface such as a web service call, remote method invocation, messaging, etc.
  • System: a system is the highest level of abstraction and represents something that delivers value to, for example, end-users.
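To make this hierarchy concrete, here is a minimal sketch of the building blocks as C# types. The record names, properties and the banking example are invented purely for illustration; they are not part of any standard notation.

```csharp
using System;
using System.Collections.Generic;

// Invented types modelling the system -> container -> component hierarchy.
public record Component(string Name, string Responsibility);
public record Container(string Name, string Technology, List<Component> Components);
public record SoftwareSystem(string Name, List<Container> Containers);

public static class HierarchyExample
{
    public static void Main()
    {
        var system = new SoftwareSystem("Internet Banking", new List<Container>
        {
            new Container("Web Application", "ASP.NET on IIS", new List<Component>
            {
                new Component("Security Component", "Authenticates and authorises users"),
                new Component("Risk Calculator", "Scores transactions for risk")
            }),
            new Container("Database", "SQL Server", new List<Component>())
        });

        // Walking the hierarchy top-down mirrors zooming in on the diagrams:
        // the system, then its containers, then the components inside each.
        foreach (var container in system.Containers)
        {
            Console.WriteLine($"{system.Name} / {container.Name} ({container.Technology})");
            foreach (var component in container.Components)
                Console.WriteLine($"  - {component.Name}: {component.Responsibility}");
        }
    }
}
```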

Summarising the static structure of your software with NoUML

By using this set of abstractions to think about a software system, we can now draw a number of simple boxes and lines sketches to summarise the static structure of that software system as follows (you can see some examples on Flickr):

  1. Context diagram: a very high-level diagram showing your system as a box in the centre, surrounded by other boxes representing the users and all of the other systems that the software system interfaces with. Detail isn’t important here as this is your zoomed out view showing a big picture of the system landscape. The focus should be on people (actors, roles, personas, etc) and software systems rather than technologies, protocols and other low-level details. It’s the sort of diagram that you could show to non-technical people.
  2. Containers diagram: a high-level diagram showing the various web servers, application servers, standalone applications, databases, file systems, etc that make up your software system, along with the relationships/interactions between them. This is the diagram that illustrates your high-level technology choices. Focus on showing the logical containers and leave other diagrams (e.g. infrastructure and deployment diagrams) to show the physical instances and deployment mappings.
  3. Components diagrams: a diagram (one per container) showing the major logical components/services and their relationships. Additional information such as known technology choices for component implementation (e.g. Spring, Hibernate, Windows Communication Foundation, F#, etc) can also be added to the diagram in order to ground the design in reality.
  4. Class diagrams: this is an optional level of detail and I will typically draw a small number of high-level UML class diagrams if I want to explain how a particular pattern or component will be (or has been) implemented. The factors that prompt me to draw class diagrams for parts of the software system include the complexity of the software plus the size and experience of the team. Any UML diagrams that I do draw tend to be sketches rather than comprehensive models.
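As a rough plain-text illustration of the starting point (the system and actor names are invented), a context diagram can be as simple as:

```
   [Personal Customer]         [Bank Staff]
             \                    /
              v                  v
         +---------------------------+
         |  Internet Banking System  |
         +---------------------------+
             |                    |
             v                    v
   [E-mail System]    [Mainframe Banking System]
```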


A single diagram can quickly become cluttered and confused, but a collection of simple diagrams allows you to easily present the software from a number of different levels of abstraction. And this is an important point because it’s not just software developers within the team that need information about the software. There are other stakeholders and consumers too; ranging from non-technical domain experts, testers and management through to technical staff in operations and support functions. For example, a diagram showing the containers is particularly useful for people like operations and support staff that want some technical information about your software system, but don’t necessarily need to know anything about the inner workings.

Organisational ideas, not a standard

This simple sketching approach works for me and many of the software teams that I work with, but it’s about providing some organisational ideas and guidelines rather than creating a prescriptive standard. The goal here is to help teams communicate their software designs in an effective and efficient way rather than creating another comprehensive modelling notation. It’s worth reiterating that informal boxes and lines sketches provide flexibility at the expense of diagram consistency because you’re creating your own notation rather than using a standard like UML. My advice here is to be conscious of colour-coding, line style, shapes, etc and let a set of consistent notations evolve naturally within your team. Including a simple key/legend on each diagram to explain the notation will help too.

There seems to be a common misconception that “architecture diagrams” must only present a high-level conceptual view of the world, so it’s not surprising that software developers often regard them as pointless. In the same way that software architecture should be about coding, coaching and collaboration rather than ivory towers, software architecture diagrams should be grounded in reality too. Including technology choices (or options) is usually a step in the right direction and will help prevent diagrams looking like an ivory tower architecture where a bunch of conceptual components magically collaborate to form an end-to-end software system.

“Just enough” up front design

As a final point, Grady Booch has a great explanation of the difference between architecture and design in which he says that architecture represents the “significant decisions”, where significance is measured by cost of change. The context, containers and components diagrams show what I consider to be the significant structural elements of a software system. Therefore, in addition to helping teams with effective and efficient communication, adopting this approach to diagramming can also help software teams that struggle with either doing too much or too little up front design. Starting with a blank sheet of paper, many software systems can be designed and illustrated down to high-level components in a number of hours or days rather than weeks or months. Illustrating the design of your software can be a quick and easy task that, when done well, can really help to introduce technical leadership and instil a sense of a shared technical vision that the whole team can buy into. Sketching should be a skill in every software developer’s toolbox. It’s a great way to visualise a solution and communicate it quickly, plus it paves the way for collaborative design and collective code ownership.

Why Testing Matters in Agile Projects

November 18, 2012

posted by Sharon Robson

Just like the passing of a monarch (the King is dead…long live the Queen), we are now hearing a similar thing in software development: “Testing is dead, we don’t need testers anymore!”…then…whoa!! The customer is unhappy…then…“Long live Testing”. But an even better, rounder, more effective testing. And like many resurgent monarchs through history (my favourite is Queen Elizabeth I), Testing will powerfully help redefine the way things are done and how they work.

I bet you are thinking that’s a big boast, right? Well, here’s how it’s going to happen….

Let’s discuss the concept of testing – what is it? Testing is the process of considering what is “right”, defining methods to determine if the item under test is “right”, identifying the metrics that allow us to know how “right” it is, understanding what the level of “rightness” means to the rest of the team in terms of tasks and activities, and helping the team make good decisions based on good information to hit the level of “rightness” required.

Testing is way beyond random thumping of the keyboard hoping to find defects; testing is about true understanding of the required solution, participating in the planning of the approach taken to deliver it, understanding the risks in the delivery methods and how to identify them as soon as possible to allow the appropriate corrective action to be taken. Testing is about setting projects up for success and helping everyone to understand the appropriate level of success required.

So why do we still care about testing, isn’t everyone in the agile team doing it? Well, actually NO!!

It all begins with the concept of quality. “That’s easy” you say to yourself, and if you do, I dare you to take it to the next step….define it! Ask your development team, ask the customer, ask the Product Owner, ask the Project Manager, ask the CIO and CEO of the organisation to define quality, define good, define good enough. Do they agree? If not, there is your first problem. The role of testing is to help teams define and understand the impact of quality.

“Impact of quality?? What is that?” is your next question. Here is a fact – Quality costs! But even worse – true quality costs more! To build it in we first have to define it and then find it. There is no way to have a quality solution without building quality into the process and the techniques, and building thorough testing, at all levels, into the work that we do.

“Gotcha!” say the devs, “We define done to tell us about quality in agile”. “Rubbish!” is my reply. In all my time in IT the most exciting concept I ever heard of was that of defining “done” – all the components, all the knowledge gathered, all the information passed on….the complexity of the solution defined up front, all the team (development and customer teams) being aware of the work to be completed to generate “done”. Defining done reminds me of what testing is all about. But the bad news is that we don’t do it! No! We don’t! Just like we don’t define quality…we just pretend we do. Ouch! Did that hurt?

Why did I say that? Firstly, the definition of done, like quality, is very difficult. Quality is like beauty – it is in the eye of the beholder. Testing is all about being trained to focus on the definition of, and then the detection of, quality (or the lack of it), and also communicating what the quality levels mean across the project in terms of progress, risk and the work remaining. Defining done is the same really…done is in the eye of the “doer” (not the beholder) and this allows us to understand the many levels of done…done (my bit), done (our bits), done (the story), done (the iteration), done (the feature), done (the release), done (the product), done (the project).

“Well that’s ok, we can define when it’s finished” is your witty response to this problem. Now here is the challenge! Defining “done” is very different to defining “done well”. The “well” bit of “done well” is not only about finishing the work required for the thing under production, but also about defining how we will know that it is finished to the standard required. Each level of done has a different standard of completion and a very different standard of quality for the “well”. There is one group of people inside a team who are ideally suited not only to assisting in defining “done well” but also to defining the process and techniques that can be used to find the degree of “well doneness”.

Step one, define finished…well that seems easy – make sure that all the components needed to deliver the level of done have been completed by the doer. Ok, sounds good so far. But here’s the rub…nothing is “done” until the customer is happy with the product. That is one of the underpinning attributes of the Agile Manifesto. I quote “working software over comprehensive documentation”. For some unknown reason the definition of “working” got confused with the definition of “done”, and the concept of “comprehensive documentation” got confused with the definition of well tested. And then this is trumped with the principle “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. So what makes software valuable? Is it that the product is there? No! It is that it does its job well!! So is this done (my bit), done (our bits), done….?

So how can we do it? For a start we need to recognise that testing considers more than just the functionality, which is where the developers and the users focus. What it “does” is the easy bit (tongue in cheek – I promise). It is easy to define, easy to build, easy to assess. Functionality tends to be binary….like done! There are two levels of done…”Done” and “Not Done”….there is nothing like “almost done”. Functionality is like doneness….it functions or it does not….binary! But then we get into the realm of “done” and then “done well”, and then even further “done well for whom?”

Testing focuses on understanding what makes a solution or an approach valuable to the people using it. Value is context dependent and has to be defined in the context of the project and the customer. Using standards such as ISO9126, with its 6 quality characteristics (functionality, reliability, usability, efficiency, maintainability and portability) and their sub-characteristics, allows the testers to provoke great discussions around what is good and well and valuable. But even better, true testing is needed to find these attributes. This type of testing also takes time and planning to do well, and even longer to do very well.

All the non-functional attributes of a solution are design level attributes and usually cannot be evolved iteratively. They need to be discussed up front, as soon as possible in the definition of the solution and yes….as soon as possible in the definition of the design of the solution. If these attributes are not built in right from the beginning they will never be able to be found through testing at the end. Can unit testing do that? No!

“Ahhhh – that’s why we do Acceptance Test Driven Development!” you say. I agree, but we don’t do ATDD properly; we only focus on what the customers know about and ask about, not on the things that need to be thought about and captured early.

“Let’s just focus on the functionality” is a phrase I often hear that causes me to cringe….it means that it is too hard to think about anything else so let’s just get going and hope that it is right. Have you EVER heard of anything LESS agile? Agile is about building it right, the first time!

Testing contributes to this building it right, the first time, via static testing. Static testing is “testing the solution without executing the code”. The beauty of static testing is that it can be done anywhere and at any time. Static testing should happen when someone comes up with the first idea for a solution. Ideally a tester is there saying things like “that’s interesting functionality…how will you know it is valuable?”. Testing the concept to see if it will actually deliver the required solution through questions, diagrams and the planning of the solution is a vital part of the life of a product.

We can also test the planning of the delivery, focusing on risks and the timing and dependencies of the components and then how the various levels of “done” can be used to prove that we are all heading towards the right solution. Defining done to the extent of defining “well done” requires the engagement of the right people at the beginning of the work, not after the code cutting has started. Testing the plan is vital – have the right environments, teams, resources, approaches been defined to deliver value? This is often a question that is NOT answered prior to code cutting beginning. The wonderful new regime of Testing being alive and well sees it being asked….AND answered prior to anyone moving to the next step.

This is best achieved by the up-front definition of acceptance criteria – using test design techniques. “WHAT??? Testing already???” you yell. Yes, of course! What is the value of all the training and certifications that testers achieve if they don’t apply it up front? Most of the test execution activities that you see testers do are a result of their test design activities based on risk and specification based test design techniques. Concepts, features, epics and stories are just specifications masquerading under another name. Even better, in the ideal agile world, testers are involved in their definitions so that they are able to be statically tested and then have dynamic techniques applied to them prior to anyone attempting to cut a line of code.

So then we start the real work (chuckle….anyone who thinks that what happens before code cutting isn’t work does not understand the concept of work). We start to cut the code, do unit testing, promote the code, do integration testing, promote the code to the test environment, all in the hunt for (drum roll)……Emergent Behaviour!

Emergent Behaviour – this is the true value of the testers in an agile team: focusing on how the modules and code and stories hang together to deliver the required functionality. But we all know this is where the good bugs live! Bugs that only show their heads when we start moving through the solution in various ways. The skill of the tester is to design those paths through the system following both the customer’s needs and also the risk of the path, using test techniques to identify key areas of focus. This is where the skill in using Decision Tables and Finite State Models (such as N-1 switch coverage) really comes to the fore. These are the bugs that won’t be found at unit or integration test time, but will cripple the acceptance testing immediately.
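As a hedged sketch of the state-transition idea, consider the tiny order workflow below. The states, events and transition table are invented for illustration; the point is that covering pairs of transitions (1-switch coverage) rather than single steps is what flushes out this kind of emergent bug.

```csharp
using System;
using System.Collections.Generic;

// An invented order workflow used to illustrate state-transition testing.
public enum OrderState { Created, Paid, Shipped, Cancelled }

public static class OrderRules
{
    // The valid transitions: (current state, event) -> next state.
    private static readonly Dictionary<(OrderState, string), OrderState> Transitions =
        new()
        {
            { (OrderState.Created, "pay"),    OrderState.Paid },
            { (OrderState.Created, "cancel"), OrderState.Cancelled },
            { (OrderState.Paid,    "ship"),   OrderState.Shipped },
            { (OrderState.Paid,    "cancel"), OrderState.Cancelled },
        };

    public static OrderState Apply(OrderState current, string @event) =>
        Transitions.TryGetValue((current, @event), out var next)
            ? next
            : throw new InvalidOperationException($"'{@event}' is not allowed in state {current}");
}

public static class StateTransitionTests
{
    public static void Main()
    {
        // 0-switch coverage exercises every single transition once;
        // 1-switch coverage exercises every valid *pair* of transitions,
        // e.g. pay -> ship, which is where sequencing bugs hide.
        var state = OrderRules.Apply(OrderState.Created, "pay");
        state = OrderRules.Apply(state, "ship");
        Console.WriteLine(state == OrderState.Shipped ? "pay -> ship: ok" : "pay -> ship: FAIL");

        // An invalid sequence must be rejected, not silently accepted.
        try
        {
            OrderRules.Apply(OrderState.Shipped, "cancel");
            Console.WriteLine("cancel after ship: FAIL (was accepted)");
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("cancel after ship: correctly rejected");
        }
    }
}
```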

The process of system testing – designing tests to provoke risky emergent behaviour, and designing tests to provide the empirical evidence required to assess such things as coverage, residual risk, defect density, progress rates, and other quality attributes – is also the skill of the tester. I would not suggest that developers or BAs cannot do this. I would suggest, however, that they are way too busy doing their jobs! I would also suggest that the testing mindset and skill set are what make testers effective and efficient in planning, executing and reporting these things.

This brings us to the discussion of Validation and Verification. What’s the difference? Verification is making sure it is built correctly – adhering to standards, following patterns, doing the right thing at the right time. Validation on the other hand is defining what is the right thing! Both of them need to be done, and testing gives us the skills and techniques to do both, as well as do them to the various attributes of the system that need to be covered (such as the quality).

The next question is “do we still need testers?” IMHO – yes!!! Why? The testing practitioners think differently to everyone else on the team. Testers are “professional pessimists” (ISTQB Foundation Syllabus). Good testers spend their time focusing on the potential problems, not the potential solutions. Right from the beginning they consider the bad news – what could go terribly wrong, and how quickly can we find it, or even better, what can we do to stop it? This fits perfectly with the agile concepts of “failing fast” and understanding the risks as soon as possible. We need this mindset engaged as soon as possible in the project and solution design to identify as many of the potential hurdles as we can, as early as we can.

Not many people know enough about testing to be able to accurately plan the testing effort, and in an agile team the focus on where and when to test stuff is huge! There needs to be a clear line between story level testing, iteration level testing and feature level testing; remember the levels of “done” from before? Who will run each test, and where and when it will be run, needs to be clearly defined to ensure that all the environments, tools, techniques, data and people are available to execute it. Testing (like most things) does not happen by accident in good teams…good testing takes great planning. Great testing takes excellent planning. Testers and testing need to be intimately involved in this planning to make sure all the appropriate setups are established and put in place.

“How do testers do that?” you ask? Most people only think of testing as test execution, but in the real world the bit of testing that you see is the bit of testing that is the easiest. Executing test cases takes about 25% of the total test effort. Most testing is done in the mind or in documentation. “OMG”….you are shocked…..”Agile says “working software over comprehensive documentation”! Yes it does! But testing can happen on any and all documentation (stories, whiteboard designs, acceptance criteria etc). The first and biggest hurdle of all is people or teams who do not want to define “well, value, done” or don’t want to get into specifics because it is too hard.

Blended teams allow us to have the best of every world; deliberately excluding a skill set or knowledge set is naïve and truly immature behaviour and does not promote longevity of solutions or longevity of the approach. An integrated team that covers all the skills required to deliver the best possible solution in the best possible time for the best possible price is just plain smart and good business. Recognising the skills of the other people in the team and leveraging them to their maximum is also just plain smart.

Do testers need to be a special group of people? No….anyone can be a tester on an agile project; in fact, everyone is a test executor on an agile project. The main thing is that all the team members have the discipline to ensure that they put their “testing heads” on during the day to day work to complete all the testing activities (not just execution) that are required. If team members don’t take the time or make the effort to plan, design and then apply testing to their work products, their approach and their solution, then the team will have no idea of the progress they are making and the issues that they are facing.

So what advice can I leave you with?

  • Make sure your whole team has a clear and shared understanding of the definition of done at every level – my task, the story, the iteration, the release, the project and the product
  • Make sure your whole team has a clear and shared understanding of what quality means on this product – what constitutes “working software”
  • Testing is not bashing a keyboard hoping to find defects, nor is it just running unit tests
  • Testing is a whole team responsibility that should start with the very first concept discussions and pervade every aspect of an agile project
  • Test early, and test often – waiting until the end of any piece of work is the wrong time to start thinking about testing
  • Static testing (examining every piece of work to ensure it contributes to the quality needed) is more valuable than executing test cases
  • Designing good tests is a specialist activity; all members of an agile team can do it, but it needs the right mindset

Is testing dead in agile? Yes….traditional, old fashioned, end of the lifecycle testing is dead. Long live the new testing: integrated, up front, actively involved, challenging mindsets, challenging the status quo and enabling the team to deliver…deliver Value, deliver “working software” and deliver solutions that customers actually want!

Enterprise Architecture Anti Patterns: Proved No Concept

September 11, 2012

In my last blog post I mentioned the advantages of a PoC in Enterprise Architecture, and at the end I touched on its negative side. I thought I'd cover that part here. We all know there are always pros and cons; what suits one team won't suit another. That's why common sense matters: choose what's right for you. Here is an article on Enterprise Architecture Anti Patterns: Proved No Concept.

 

When Concepts are as clear as The Elephant on Acid


Anti Pattern Name: [Proved No Concept]

Type: [Management, Technical]

Problem: [A Proof of Concept is usually started in a hurry without a clear definition of purpose and an agreed specification of the actual ‘concept to prove’. These efforts end in acrimony when no concept is actually validated because the fundamental objective was not clear from the outset. Quite often they become tenuous ‘proof of technologies’ or really more orientation projects with technologies being trialled.]

Context: [Poor specification of requirements for the Proof of Concept is the main culprit. Over exuberance and lack of planning, ill-defined concepts, or ‘make it up as we go along’ behaviours all act as amplifiers.]

Forces: [lack of governance, poor scope definition, no real understanding of the concept to prove at the outset; the Proof of Concept is often really about finding and defining the concept to prove.]

Resulting Context: [Inconclusive outcomes, project overrun, false starts, confusion, weak hypotheses, badly designed research vehicles.]

Solution(s): [Resist pressure to commence a Proof of Concept without a well-articulated and signed off specification of the concept, its scope and how success (or otherwise) will be determined. If the concept is very complex or elusive, split the Proof of Concept into multiple phases with definition and agreement / candidate selection being the first stage(s). A Proof of Concept (PoC) that proves OR disproves the validity of the concept is a successful PoC. One that fails to reach any meaningful conclusion due to confusion over the concept being proved or disproved is a failure.]

Source: http://stevenimmons.org/2011/12/enterprise-architecture-anti-patterns-proved-no-concept/

How I Would Design a Programming Degree

May 28, 2012

Yesterday, I attended ECPI Columbia’s Spring Advisory Board meeting. I was involved in the panel discussion regarding their IT degrees, and I viewed it as an opportunity to explain what I consider lacking in developer education. ECPI is regularly involved in community activities by providing facilities to user groups and code camps, so it was my pleasure to contribute to making their curriculum more valuable to both current and future programmers seeking degrees.

The Developer Degree

There was only one programming degree up for discussion: Bachelor’s in Database Programming. Of course, that meant it had many database courses with minimal impact for the modern software developer, unless you’re coding against a database without any form of data access abstraction. Due to the nature of this degree, I skipped over the database courses and focused my feedback on implementing agile practices where appropriate. For example, in the coding project classes, I think it’s better for the students as a whole to work in an Agile/Scrum manner. I’m not sure how exactly they implement the courses at this time, but if they’re doing waterfall, or they’re avoiding teams, then it’s not as valuable. I know I will have a reader prepared to criticize me for suggesting a practice where the whole team sinks or swims, but that’s the way it is in the real world: either the project is released or it isn’t.

I also recommended eliminating the four object-oriented courses in favor of courses focused on progressive programming paradigms. The criticism in turn was that students should learn different programming languages they can put on their resumes. I suppose that is useful to some degree, but without work experience in the language, it will not help much. The issue I have is that covering Java, Visual Basic, and C# is like learning different dialects of English: their structure is extremely similar, so it’s a stretch to say you really know another language. I don’t program in Java, but I can begin writing class libraries with little difficulty. The primary differences are in the frameworks used, and that’s not even the case with VB and C#.

If I’m hiring a developer fresh out of school for a project utilizing an object-oriented language, I want to know that the candidate understands object-oriented concepts and design. There are many other things I would like in a junior developer, but the language isn’t much of a barrier if the concepts are understood. Of course, my criterion for language experience is different when confronted with a more seasoned developer.

How I Would Do It

 

My experience on this advisory board made me start considering how I would design a degree program that would properly prepare a student to enter the business world as a developer. So, I’m going to list the courses I would require for a software developer degree (not database programmer) in a serialized course setting. If you’re reading this blog and don’t know what serialized means, I’ll assume you’re having one of those ill-caffeinated moments. In the context of a degree program, serialized is where only one class is taken at a time, typically in a very short cycle.

Note: ECPI’s program is required to carry certain courses by their accreditation program, and this is in no way like their course. This is simply me rattling off ideas, and actually implementing my approach would require much refinement.

  • Professional Conduct: You may not be a decent human being, but you should at least learn to act like one.
  • Professional Communication: You don’t need to have the prose of Faulkner. You don’t even need to know who this Faulkner fellow is. In fact, I suggest you avoid his writing style. On the other hand, your emails and reports should not read like a teenager’s text message.
  • Critical Thinking: You’re never going to make it as a software developer if you can’t analyze concepts with appropriate reasoning. You should also be able to identify when you’re wrong, because guess what: you’re not always right.
  • Logic: Some developers manage to lack a basic understanding of logic. This is the source of some of the stupidest bugs known to mankind.
  • Imperative Programming: With a firm basis in logic, imperative languages will come easily. Don’t be scared by the name, it’s basically conditions and statements, kind of like, “if I study all night, then I will be sleepy.”
  • Unit Testing: Testing your code by running the program will either become tedious or neglected. Automate those tests, and write them up front in the future.
  • Object-oriented Programming: Imperative programming is good and all, but what is this ‘I’ thing and how can it get the adjective ‘sleepy’ by the verb ‘study’?
  • Declarative Programming: No one wants to hear you yap about how you missed class from staying up all night studying. We certainly don’t want to hear about what you were studying. Just tell us you missed class.
  • Refactoring: Your code is ugly, clean it up.
  • Optimizing: You’ve learned how to write your code so any decent developer can maintain it. Now it’s time to make it ugly again. Don’t worry, you already know how to hide your well-optimized, hideous implementations.
  • Object-oriented Design Patterns: So, you came up with an intriguing solution to a problem. Someone else came up with it before you; stop wasting your time “inventing” new things.
  • Relational Data: One day, your boss will ask you for a report. The next day, you will be asked for another, completely different view of the same data. Learning how to store and retrieve relational data will prevent the need of becoming an Excel expert.
  • Object-relational Mapping: Enough data queries and your code will once again become an unreadable mess. Use the techniques of ORM to store and retrieve state.
  • Human-readable Data: Store, transmit, and read data in formats computers, humans, and possibly other biological life-forms can read. Bonus: user interfaces can be specified in a similar manner.
  • User Interface Design: You may not be an artist, but that’s no excuse for creating unusable user interfaces. You should be severely punished for even considering that Nyan Cat’s rainbow elimination can be used as a menu.
  • Tools of the Trade: A dog swallowing your USB drive containing your classwork is sad, but it’s no excuse for not turning it in. Developers have many tools at their disposal to ensure their work is never lost, their software always builds, and their tasks are always known. Sounds horrible, I know, but they also have tools to prevent rote memorization and typing.
  • Requirements and Specifications: If you thought coding was tough, try translating what some people consider English to a logical construct.
  • Organizing: Do you want to be a basement dweller? I thought not. Team work is essential in creating complex, functional software. Knowing the techniques that work for creating continuously deliverable software goes much further than hacking it alone.
  • Develop Software: Now that everyone understands how to organize, how to write specifications, and how to code, it’s time to put those skills to the test. The entire class will form a development team and create software for the project owner (your instructor).

That’s nearly enough credit hours to fill an associate’s degree. There are several options to pad it out, but I think I would go with a targeted math class focusing on operators, their properties, and functions as it would be a nice and useful extension to the class on logic. Discrete math would be valuable as well, particularly since it refers to certain data structures a developer may need to implement, and it contains concepts that are extremely valuable for more advanced programming topics (e.g. combinatorics).

On the Glaring Omission

Notice I didn’t include classes in algorithms and data structures. Generally, if it’s an algorithm deemed worthy enough to be taught from a textbook, it’s either an extremely rare, highly-optimized solution or it has been implemented directly in the default frameworks of common production languages. Useful data structures are almost always implemented in default frameworks (with the exception of the tuple, probably due to its impracticality without language support). Practical knowledge on algorithms is obtained from Imperative Programming, and the most useful data structures are covered by Object-oriented Programming. Let me explain: what better way to introduce linked lists than with the concept of the containment form of composition? Then, you can show the already implemented linked list in .NET: LinkedList<T> (what a surprise!). Afterwards, be sure to explain how almost everyone uses the array-backed List<T> instead. You can take this further in the Declarative Programming class to show that many modern C# developers have taken up immutable manipulation of sequences using LINQ.
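As a small sketch of that progression in C# – from the framework-provided LinkedList<T>, to the array-backed List<T> that almost everyone actually uses, to immutable-style manipulation with LINQ:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CollectionsDemo
{
    public static void Main()
    {
        // The framework already implements the linked list: cheap inserts
        // at either end, but no indexer.
        var linked = new LinkedList<int>(new[] { 2, 3, 4 });
        linked.AddFirst(1);

        // The array-backed List<T> is what almost everyone uses instead.
        var list = new List<int> { 1, 2, 3, 4 };
        list.Add(5);

        // LINQ describes the result without mutating the source sequence.
        var evensDoubled = list.Where(n => n % 2 == 0)
                               .Select(n => n * 2)
                               .ToList();

        Console.WriteLine(string.Join(", ", evensDoubled)); // 4, 8
    }
}
```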

I’m not saying that a class on algorithms and data structures is useless; just a waste of time if your goal is to be effective immediately out of school. If I see someone writing their own bubble sort on a project in C#, I would want to know why they’re wasting their time. When I took a position as a junior developer on a Delphi project, the senior developer reviewing my code asked me why I was reading data into an array and sorting it in just that manner. He then introduced me to TStringList, and I never looked back. I recall that I defended myself by stating my manipulations were faster. They probably were, but I wasted far more time and made my code much less readable. The real reason: I transitioned from Pascal to Delphi and was unaware of classes that already implemented the functionality I was used to creating myself. If you’re going to write production code, it’s better to learn the higher-level constructs and implementations available and then learn the lower-level way of doing things for clarification, understanding, and future flexibility.

What is infinitely more useful for the modern application developers is knowledge of design patterns. Consider this: design patterns are “recurring solutions to common problems in software design.” Replace “design patterns” with “algorithms and data structures,” and you see that they fit the exact same definition. The difference is that most named design patterns are still informal, having yet to be implemented in abstract form (formalized) or made part of the language (invisible). Note that patterns that have become formalized or invisible are no longer considered design patterns. Since developers still need to implement design patterns themselves, time is better spent on learning them instead of reinventing them.

I didn’t address this particular subject at the board meeting, but I probably should have; perhaps another year.

Other Classes of Note

Critical Thinking and Logic should really be taught in elementary school through high school. The number of people who can’t properly evaluate propositions or do not analyze their own viewpoints is absolutely staggering. Depending on “common sense” over rational thought is what leads people to make reasoning errors like the Gambler’s Fallacy. Since the subject material in these classes is essential for rational thought, you better believe they’re essential for software development.

I left out all math classes. There are parts of math that are essential for software developers; algebra is one of them. However, it’s really pieces of standard math and algebra that are important, and it would be nice to have a class focused on those specific parts. When I was in high school, I was told I would need to learn calculus to make it as a programmer. I did learn calculus, and I find it fascinating (including how it was invented by both Newton and Leibniz). However, for the vast majority of development jobs, it is completely unnecessary. Boolean logic carries the day, but it’s also necessary to understand standard operators, the various properties of operators, and functions (a calculation template).

I had no better name for Tools of the Trade, but it is essential to understand common tools used in software development. Of course, the concepts are important as well, so it would be a good idea to include continuous integration and such.

I feel that half a dozen classes on SQL is unnecessary, but you should learn about Relational Data. In the process, you learn relational algebra and tuple relational calculus (not differential or integral calculus). By learning how to manipulate relational data, you learn set logic. I added the Object-relational Mapping course because it’s much more practical than writing large queries. Let your DBAs specialize in that; you specialize in software development. Besides, many startups are using non-relational document databases such as RavenDB.

Human-readable Data is a necessity if you’re writing integrated systems. Guess what, your dynamic website is rendered on a client that calls a server which returns data. You will also most likely integrate both external and internal services in applications you develop, so you need to know JSON and XML. I don’t think these are particularly hard to grasp, so I clumped other markups in here as well for describing user interfaces (HTML, XAML). Markup for UI is human-readable data after all, but you could design entire courses around those technologies. It’s also important to teach style sheets and resources, so I would definitely split it up. The UI portion doesn’t fall under User Interface Design as that is specific to usability. Considering how many developers create awful interfaces, I wish every programming degree program included this sort of class.
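For instance, here is a minimal sketch of the same object rendered as both JSON and XML using standard .NET serializers (the Person type is invented for illustration):

```csharp
using System;
using System.IO;
using System.Text.Json;
using System.Xml.Serialization;

// An invented type to show one record in two human-readable formats.
public class Person
{
    public string First { get; set; } = "John";
    public string Last { get; set; } = "Smith";
}

public static class FormatsDemo
{
    public static void Main()
    {
        var person = new Person();

        // JSON: {"First":"John","Last":"Smith"}
        Console.WriteLine(JsonSerializer.Serialize(person));

        // XML: <Person><First>John</First><Last>Smith</Last></Person>
        // (plus the XML declaration and namespace attributes).
        var serializer = new XmlSerializer(typeof(Person));
        using var writer = new StringWriter();
        serializer.Serialize(writer, person);
        Console.WriteLine(writer);
    }
}
```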

Requirements and Specifications should cover things like how to gather requirements and write Gherkin style specs so tests can be designed and the code written without the ambiguity that plagues most shops.
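As a hedged sketch of what that can look like, here is an invented Gherkin scenario (shown as comments) with a plain C# check beneath it; the feature, steps and numbers are made up for illustration:

```csharp
using System;

// Feature: Account withdrawal
//   Scenario: Withdrawal within balance
//     Given an account with a balance of 100
//     When the customer withdraws 40
//     Then the remaining balance is 60
public static class WithdrawalSpec
{
    public static void Main()
    {
        var balance = 100m;               // Given
        balance -= 40m;                   // When
        Console.WriteLine(balance == 60m  // Then
            ? "PASS: remaining balance is 60"
            : $"FAIL: expected 60, got {balance}");
    }
}
```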

I see Organizing as an Agile/Scrum course.

I clumped functional programming under Declarative Programming. This is accurate, but it doesn’t do functional programming justice. I didn’t put generic programming anywhere despite its heavy usage nowadays, but I think it would fit in Declarative Programming. I know it’s not technically part of the declarative programming paradigm, but it does seem to have a relationship with a sub-paradigm to declarative programming: constraint programming. It would probably be best to add a separate course for generic programming, aspect-oriented programming, and other useful paradigms.

Suggestions?

The courses I laid out are of my own imagining, and the list is really meant to demonstrate the kind of skills I would like developers to have when they leave the university behind to find a job. Unless they program outside of class for sheer passion, fresh graduates seem to grasp little more than an imperative style of coding. They eventually seem to either go the way of the business analyst or gain the practical knowledge necessary to become awesome developers. This tells me that 1) more people should be pursuing business analysis, which could be encouraged by offering 2- and 4-year business analyst degrees, and 2) developer/software engineering (not CompSci) degrees should focus on topics more relevant to modern developers.

How would you craft a developer degree?

(Source: http://www.kodefuguru.com)

The Development Pendulum (source : SimpleProgrammer.com)

February 15, 2012

Recently I read this article and felt like sharing it… here is the article.

The Development Pendulum

I’ve noticed a rather interesting thing about best practices and trends in software development: they tend to oscillate from one extreme to another over time.

So many of the things that are currently trendy or considered “good” are things that a few years back were considered “bad” and even further back were “good.”

This cycle seems to repeat over and over again and is prevalent in almost all areas of software development.

It has three dimensions

Don’t misunderstand my point though, we are advancing. We really have to look at this from a 3-dimensional perspective.

Have you ever seen one of those toys where you rock side to side in order to go forward?

[Image: a snakeboard]

Software development is doing this same thing in many areas.  We keep going back and forth, yet we are going forward.

Let’s look at some examples and then I’ll tell you why this is important.

JavaScript!

Is JavaScript good or bad?

Depends on who you ask, but it is definitely popular right now.

If you go back about 5 years or so, you’ll get a totally different answer. Most people would have suggested avoiding JavaScript.

Now, JavaScript itself hasn’t changed very much in this timespan, but what has changed is how we use it.

We learned some tricks and the world changed around us.  We figured out how to solve the biggest problem of all for JavaScript…

Working with the DOM!

jQuery made it extremely easy to manipulate the DOM; the pain was removed.

Yet new pains emerged, and hence Backbone.js was born.

Thick client or the web?

Take a look at how this has changed back and forth so many times.  First the web was a toy and real apps were installed on your machine.

Then it became very uncool to develop a desktop app, everyone was developing web apps.

But soon we ran into a little problem – those darn page refreshes.  Gosh!

So what did we do?  We sort of made the browser a thick client with AJAX.

That created so much of a mess that we really needed clean separation of views from our models and our logic (at least on the .NET side), so we went back to rendering the whole view on the server and sending it down to the client with MVC.  (Yes, you could argue this point, but just pretend like you agree and bear with me.)

Then we decided that we needed to start moving this stuff back to the client so we could do much more cool things with our pages. We started pumping JavaScript into the pages and ended up creating thick clients running in browsers running on JavaScript and HTML5.

And now we are seeing traditional thick clients again with iOS and Android devices and even those will probably eventually migrate to the web.

Simple data vs descriptive data

Check out this sine wave!

[Image: sine wave]

First we had fixed-length records where we specified the length of each column and exactly what data went there.

Then we moved over to CSV, where we had loose data separated by commas.

Then we thought XML was all the rage and beat people up who didn’t define XSDs, because data without definition is just noise you know!

Now we are sending around very loosely structured JSON objects and throw up whenever we see angle brackets.
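To make the swing concrete, here is the same invented record in each of the four formats:

```
Fixed-length:  JOHN      SMITH     19820415
CSV:           John,Smith,1982-04-15
XML:           <person><first>John</first><last>Smith</last><dob>1982-04-15</dob></person>
JSON:          { "first": "John", "last": "Smith", "dob": "1982-04-15" }
```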

So many other examples

Take a look at this list:

  • Static vs dynamic languages
  • Web services ease of use vs unambiguity (SOAP and REST)
  • Design upfront vs Agile (remember when we just wrote code and deployed it, it was kind of like Agile, but different)
  • Source control, constant collaboration vs branching
  • Testing and TDD
  • Databases, stored procs vs inline SQL
  • <% %> vs Controls

It goes on forever

So why is this important?

It is not just important; as a developer, it is CRITICAL for you to understand.

Why?

Because whatever happens to be “cool” right now, whatever happens to be the “right” way to do things right now, will change.

Not only will it change, but it will go the complete opposite direction.

It won’t look exactly the same as it did before – we will learn from our previous mistakes – but it will be the same concepts.

Advancement follows this sine wave pattern.  Don’t try and fight it so hard.

You have to be balanced.  You have to be able to understand the merits, strengths and weaknesses of both sides of a technology or best practice choice in development.

You have to understand why TDD improved our code until it led us into overuse of IoC and pushed C# and Java developers to the freedom of dynamic languages like Ruby.

You have to understand that eventually the course will correct itself yet again and head back to the direction of the new static language or even an old static language that will be resurrected.

This is how you will grow

It is also very important to realize that this is exactly how you will grow.

Just as the technological world around you is in a constant forward progressing pendulum swing, so are you, at a different pace, to a different beat.

I know that through my personal development journey, I have switched sides on a topic countless times.

You might call me a “waffler,” but I call it progress.

Life is a game of overshooting and adjusting.

What Happened to Software Engineering?

July 27, 2011

What Happened to Software Engineering?

(- By Phil Japikse)

Over the past few years there has been an evolutionary shift in the world of software development.  Not very long ago, the dominant Software Development Life Cycle (SDLC) methodology was the Waterfall Method with very specific phases that separated the construction phase from phases like design and test. The software development industry, still very new, was striving to find a repeatable, predictable process for developing software.

The best model for this seemed to be the physical sciences, like civil engineering and architecture. Artifacts like detailed requirements, design documents, and technical specifications were written and signed off on long before a single line of code was developed, similar to the process used in construction of physical structures like bridges, buildings, roads, and dams.

To further align with the physical sciences, job titles like “Software Engineer” and “Solutions Architect” were adopted.

This style of project management has been very successful in construction.  Yet a significant number of software projects were failing outright, and many more went significantly over budget and/or missed deadlines.  This was due to several factors, but probably the most significant have been both the speed of change in software and hardware and the speed of change in business needs.  These changes in the software industry would be similar to that of having brand new vehicles requiring a complete redesign of the roads they drive on about every 18 months.

When civil engineers are asked to build a bridge across a river to join two roads together, the engineers building the roads have fairly exact coordinates defining where the roads will come together at the river, and vehicles haven’t changed significantly over the years.  The bridge engineers merely have to join the two roads together using tried and true construction techniques that have been employed thousands of times before.

In software systems, it’s not unusual for technology or changing business needs to significantly change the requirements during construction (after all of the requirements and design documents have been completed).  To put it in the bridge building analogy, it is like having one of the roads moved six miles downstream after the bridge foundation was already in place.

To counter these issues Software Engineers developed many new techniques and practices designed to refine the construction phase through improvements in software quality, code reuse, and productivity.  Some of these new practices include defining (and enforcing) code standards and naming conventions, encouraging the use of proven software design patterns, using tools like unit test frameworks and techniques like Test Driven Development (TDD), followed by Behavior Driven Development (BDD), continuous integration, and  pair programming. These techniques were very effective in reducing defects and improving the construction phase, becoming commonly referred to as Software Engineering Best Practices.

While practices for refining the construction phase were evolving, there was also a lot of study into refining the additional phases of software development such as requirements definition, systems design, quality assurance, and testing.  These included Scrum, Extreme Programming and Kanban (the adaptation of Lean Manufacturing) to name just a few.

This reflection resulted in the development of what are referred to now as the Agile Methodologies.  In fact, a significant number of the practices to improve software construction like TDD and Continuous Integration were developed alongside the process improvements that would later come to be called Agile.

Today, agile methodologies are rapidly moving from the fringes to the mainstream, penetrating even the largest enterprise software development teams.  The agile revolution has brought about a lot of change, and exposed many of the problems that developers faced in using the old waterfall approach.

Conferences, Open Spaces, and classes are filled with discussions on how to be better at agile, focusing on topics like how to best manage the backlogs, run retrospectives, plan sprints, and other process oriented topics — all extremely important items that need to be well understood and implemented correctly as teams move to agile adoption.

What tends to be left behind in all of the agile excitement is the Engineering Practices that were developed to deliver higher quality software.  Most of the agile methodologies focus mainly on the process management topics and don’t discuss construction techniques in their teachings (the main exception is Kent Beck and Extreme Programming).  I believe this is due to the agile methodologies assuming you are already doing the technical practices!

Unfortunately, in my career as a consultant and an agile coach, I have seen all too often that those engineering practices are being left behind. This could be due to one or more of the common mistaken assumptions surrounding agile, or the lack of emphasis from some of the major certification programs in agile.  Or it might just be that the term Software Engineering has developed a stigma from the waterfall days.

We grow as an industry when we learn from those that came before us.  Even process reinvention takes into account what is already known, incorporating what works and eliminating what doesn’t.  By removing all of those Software Engineering Practices like Test/Behavior Driven Development, Continuous Integration, and Pair Programming we are forgetting what counts at the end of the day: developing high quality software.

All of the other processes that come with the agile methodologies are also important in achieving our goal, but don't throw the baby out with the bathwater.  I go by many titles these days, including Speaker, Agile Coach, and Writer.  At the heart of it all, I am a Software Engineer.  And proud of it.

How to Handle Defects in Agile Projects

January 28, 2011 § 2 Comments

Using a preplanned method for handling bugs helps teams with a smooth delivery and keeps them out of reactionary mode.


For all software development teams, defects pose a challenge: when and how should they be handled?  In agile, with its focus on quality code and user stories, teams can sometimes be confused about the actual mechanics of addressing defects.  It is not a hard process, but the team needs to be clear about how it handles defects.  Let's start by defining what we mean by a defect.


What is a Defect?

For the purposes of this article, we will use the term defects instead of bugs.  On many teams, there is confusion about when and where to identify defects.  If the team agrees on a standard process, the mechanics become easy.  Below are examples of good ways to handle defects.  These are not cookie-cutter solutions, but starting points your team can try and refine.

Scrum Teams

On a standard Scrum team, the team might commit to completing, say, 25 points, which translates to perhaps 6 stories.  While working on these stories, someone will test each story before it can move to done.  Sometimes that is a QA person; on a more generalist team, it should at least be a different developer than the one who wrote the code.

If that tester finds a problem with the story, it should not be considered a bug or defect, though teams sometimes label it as such.  Labeling these issues as defects before the sprint containing the story is done is confusing to the product manager.  Because the story is not done, the problem cannot be a defect; the team has not yet said the work is ready for anyone to use.  The team acknowledges that the story might not be finished; that is why there is testing!

However, if a problem is found in that story any time after the sprint is done, it should be considered a defect.  So how does the team handle it?

Kanban Teams

If you are using a Kanban board and a defect is found after release (assuming release is your definition of done), that is the time to label it as such.  Identify the defect, create a task or story for it, and place that in your backlog.

Keep in mind that many Kanban teams have Service Level Agreements (SLAs) with their business owners, and defects will need to be considered part of those.  For those SLAs, it is helpful to classify each defect as at least critical or non-critical.  Look to other material, such as the work of David J. Anderson [1], for help with these kinds of risk-based classes of service.  Now the team needs to handle the defect.
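As a rough sketch of what that classification might look like in practice, the Python below models a defect with a critical/non-critical severity and an SLA target per class.  The class names, durations, and helper are illustrative assumptions, not anything prescribed by Anderson's material.

```python
# A minimal sketch (illustrative, not from the article) of recording
# defects with a severity class tied to an assumed SLA target.
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"          # production-stopping; handled immediately
    NON_CRITICAL = "non-critical"  # annoying but shippable; goes to the backlog

# Hypothetical SLA targets per severity, as agreed with business owners.
SLA_TARGET = {
    Severity.CRITICAL: timedelta(days=2),
    Severity.NON_CRITICAL: timedelta(days=30),
}

@dataclass
class Defect:
    title: str
    severity: Severity
    found_on: date = field(default_factory=date.today)

    def sla_due(self) -> date:
        """Date by which the SLA says this defect should be resolved."""
        return self.found_on + SLA_TARGET[self.severity]

# Usage: classify a defect found after release and see its SLA deadline.
leak = Defect("Checkout page rejects valid coupons", Severity.NON_CRITICAL)
print(leak.severity.value, "- due by", leak.sla_due())
```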

Give your defects visibility to Product Managers

First, teams must have business representation to help them make priority decisions.  The most common way to do this is to designate someone as a Product Manager.  This could be a team member who communicates regularly with business stakeholders; ideally, it is a business person who works directly with the team.

Once a defect has been spotted (a defect according to our criteria above), the team needs to make it visible.  For a Scrum team, this means logging it in the product backlog, as you would a story.  A team using Kanban would get the defect into its parking lot.  Making the defect visible also gives the business the opportunity to prioritize the defect work.

The exception is critical production defects, which are covered in a section below.  For all other defects, once they are in some type of backlog, the team needs to plan when to handle them.

One question that might come up is whether the team should estimate the effort for these defects.  Opinions differ [2], but to give the business an idea of the effort involved, it makes sense to size them in some way.  If your team is used to story points, use that method.

For teams using Scrum, tasking can be used.  In other words, during the planning session the team creates task time estimates for a defect: they might estimate 10 hours for the dev tasks and 4 hours for testing.  The team now has an estimate that can be weighed against the team's total time budget to help judge whether the sprint commitment is reasonable.
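A minimal sketch of that arithmetic follows; the team size, time budget, and task estimates are all invented numbers used only to show the capacity check.

```python
# A minimal sketch (illustrative numbers, not from the article) of the
# capacity check described above: task-hour estimates for stories and
# defects are summed and compared against the sprint's time budget.
team_members = 5
hours_per_member_per_sprint = 60  # assumption: 2-week sprint, ~6 focus hours/day

sprint_budget = team_members * hours_per_member_per_sprint  # 300 hours

# Tasked estimates for the sprint, including the defect from planning:
# 10 hours of dev plus 4 hours of testing.
estimates = {
    "story: export report": 40,
    "story: new login flow": 70,
    "story: audit trail": 55,
    "defect: totals rounding (dev)": 10,
    "defect: totals rounding (test)": 4,
}

committed = sum(estimates.values())
print(f"Committed {committed}h of a {sprint_budget}h budget")
if committed > sprint_budget:
    print("Commitment looks unreasonable; move something back to the backlog.")
else:
    print(f"{sprint_budget - committed}h of slack remains.")
```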


If the team is using Kanban, point estimating makes sense.  That way the product owner can weigh the risk of deferring the defect against the value of getting a new feature out.  Many defects are really more annoying than critical, so a savvy product owner can identify which issues will harm the product if left unfixed and which can wait while a new feature ships.

For Kanban, once the priority of the defect has been identified, the token or story is placed in the appropriate spot on the board.  In both the Scrum and Kanban scenarios, it is important to differentiate defects from stories.  The reason is to track defects and make sure your quality process is on track.

For instance, see Example 1:

Example 1 (Sample burndown chart tracking defects)

Looking at this chart, the burndown shows that the team is working steadily through its commitments and burning down well.  However, the defect counts along the bottom show a disturbing tendency to climb as the project progresses.  Maybe the team expected this, but it would be a great topic for a retrospective.  Tracking how the team is doing with its defect rate helps drive improvement.
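The original Example 1 is an image, but as a rough sketch of how such a chart could be produced (assuming matplotlib is available, and using made-up sprint data):

```python
# A rough sketch of a burndown chart that also tracks open defects,
# similar in spirit to Example 1. All numbers are made up for illustration.
import matplotlib.pyplot as plt

days = list(range(1, 11))                             # 10-day sprint
remaining = [100, 92, 85, 75, 68, 55, 44, 30, 18, 5]  # points left per day
defects = [0, 0, 1, 1, 2, 2, 3, 4, 5, 6]              # open defects per day

fig, ax = plt.subplots()
ax.plot(days, remaining, marker="o", label="Points remaining")
ax.bar(days, defects, alpha=0.4, label="Open defects")
ax.set_xlabel("Sprint day")
ax.set_ylabel("Count")
ax.set_title("Burndown with defect tracking")
ax.legend()
plt.show()
```

Plotting the defects on the same axes as the burndown makes the trend at the bottom of the chart hard to miss, which is exactly what you want going into a retrospective.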

Production-Stopping Defects

The exception to the above is any defect that affects production.  In this case, the team needs rules for handling the situation before it happens.  Make sure your team is prepared for this inevitability.

Certainly, if a critical production defect is found, the entire team can swarm on it to get the fix done and pushed out to production.  The advantages include a quick turnaround and the involvement of the developers who originally worked on that piece of code (although if the team uses pair programming, knowledge of the code is already spread around).  The disadvantages include the possibility of a Scrum team blowing its sprint commitment, or a Kanban team blowing its SLA with the customer.

These are the risks and advantages of a swarm strategy.  Another consideration is how many critical defects, on average, the team produces in a given time period.  One would hope this is a rare exception, but sometimes it is not.  Consider this before deciding on a swarming strategy, and back the decision up with hard data.

Another way to handle these issues is to designate a team member as the on-call person for a short period.  It has been called the "batman" position, the position no one wants, and more.  The idea is that this team member is not counted on to contribute significantly to new project work during that period.  Instead, they triage production issues to determine whether each one is critical and needs to be fixed immediately or can be put in the backlog.

It also makes sense for this person, during slack time, either to pick up prioritized work (perhaps helping with testing) or to treat the slack as learning time, investigating a new technology the team wants to use.  Besides helping the team, this makes the position a little more attractive; it is usually not a duty team members look forward to.

This method helps keep your new development work more predictable and allows the team to focus on completing the work it committed to.

Handling Defects in an Agile Way

These strategies for attacking defects are not necessarily agile in nature, but they work well on agile teams.  The real trick is to make sure the team decides which method to use; it should not be decreed by management.  Management can certainly demand that the team document how it handles defects, though.  That seems reasonable.

Using a preplanned method for handling defects helps teams deliver smoothly and keeps them out of reactionary mode.  Make sure your team has a plan!
