Writing Quality Software

Hesam Seyed Mousavi, January 30, 2016


Source: technetpro

blog.mousavi.fr

Nobody likes debt, but, as harsh as it might sound, some level of debt in life is unavoidable. Technical debt in software is no exception. Abstractly speaking, technical debt is a sort of mortgage taken out on your code. You take out a mortgage every time you do something quick and dirty in your code. Like mortgages in real life, technical debt might buy you an immediate result (for example, meeting a deadline), but that comes at the cost of interest to be paid back (for example, higher costs of further maintenance and refactoring).

Technical debt sometimes also grows out of limited skills, lack of collaboration, and inefficient scheduling of tasks. In addition, a lack of awareness of any of these factors delays taking proper action on your code and conducting fix-up procedures, thus letting technical debt grow even bigger.

Back in the 1970s, Professor Manny Lehman formulated a few laws to explain the life cycle of software programs. At the foundation of Lehman’s theory, there’s the observation that any software system in development deteriorates over time as its complexity grows. Forty years later, plenty of software architects and developers have found out on their own that software deterioration is a concrete force, and that it must be addressed with equally concrete actions if a software project is to survive.

Lehman’s laws and concepts like the Big Ball of Mud and Technical Debt all refer to the same mechanics that can lead software projects down the slippery slope of failure. Lehman formulated his laws a long time ago and mostly from a mathematical perspective. The Big Ball of Mud and Technical Debt emerged later, as lessons learned in the trenches of real projects.

In the end, we came to realize that all software needs to be maintained for as long as it is used. Consequently, a good piece of software is one that lends itself to being refactored effectively. Finally, the effectiveness of refactoring depends on three elements—testability, extensibility, and readability—plus a good number of relevant tests to catch any possible form of regression.

For years, .NET developers relied only on the debugging tools in Microsoft Visual Studio to ensure the quality of their code. The undeniable effectiveness of these tools made it worthwhile to combine two logically distinct activities: testing code manually, and stepping through it to first reproduce and then fix bugs.

For years, this approach worked quite nicely. In the past decade, however, a big change occurred before our .NET eyes: development teams became a lot more attentive to automated testing. Somehow developers figured out that automated tests were a quicker and more reliable way to find out what could be wrong and, more importantly, whether certain features still worked after changes were made. Debuggers are still functional and in use, but these days they serve mostly to step through specific sections of code and investigate what’s wrong.

Automated testing adds a brand new dimension to software development. Overall, we think it was just a natural change driven by a human instinct to adapt. The recognized necessity of testing software in an automated way—we could call it the necessity of applying the RAD paradigm to tests—raised another key point: writing software that is easy to test. This is where testability fits in.

What is testability, anyway?
In the context of software architecture, a broadly accepted definition for testability describes it as the ease of performing tests on code. And testing code is just the process of checking software to ensure that it behaves as expected, contains no errors, and satisfies requirements. A popular slogan to address the importance of testing software comes from Bruce Eckel and reads like this:

If it ain’t tested, it’s broken.

On the surface, that statement is a bit provocative, but it beautifully serves the purpose of calling people’s attention to the ability to determine explicitly, automatically, and repeatedly whether or not some code works as expected.

All in all, is there any difference between testable code that works and untestable code that works? Believe it or not, there is, because there’s a strict relationship between well-written code and testable code: they’re nearly the same thing. A fundamental quality of good code, in fact, is that it must be testable. And the attribute of testability is good to have regardless of whether you actually write tests or not.

At the end of the day, testable code is loosely coupled code that applies the SOLID principles (Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion) extensively and avoids the common pitfalls of object orientation. In particular, the Dependency Inversion principle, in one of its two flavors—Dependency Injection and Service Locator—is the trademark of testable code.
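
As a minimal sketch of Dependency Injection at work (all class and interface names here are hypothetical, invented just for illustration), compare a class that hides its dependency with one that receives it through the constructor:

    // Hypothetical supporting types for the sketch.
    public class Order { public string CustomerEmail { get; set; } }
    public class SmtpMailer { public void Send(string to, string subject) { /* sends real mail */ } }
    public interface IMailer { void Send(string to, string subject); }

    // Hard to test: the dependency is created internally and cannot be replaced.
    public class HardwiredOrderProcessor
    {
        private readonly SmtpMailer _mailer = new SmtpMailer();

        public void Process(Order order) =>
            _mailer.Send(order.CustomerEmail, "Order confirmed");
    }

    // Testable: the dependency is abstracted behind an interface and injected,
    // so a test can pass in a fake implementation and verify the interaction.
    public class OrderProcessor
    {
        private readonly IMailer _mailer;

        public OrderProcessor(IMailer mailer) // Dependency Injection via constructor
        {
            _mailer = mailer;
        }

        public void Process(Order order) =>
            _mailer.Send(order.CustomerEmail, "Order confirmed");
    }

With the Service Locator flavor, the class would instead ask a central registry for an IMailer at run time; either way, the concrete dependency stays out of the class and under the caller's control.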

Principles of testability
Testing software is conceptually simple: just force the program to work on correct, incorrect, missing, or incomplete data, and verify whether the results are in line with any set expectations. How would you force the program to work on your input data? How would you measure the correctness of the results? And in cases of failure, how would you track the specific module that failed? These questions are the foundation of a paradigm known as Design for Testability (DfT).

Any software built in full respect of DfT principles is inherently testable and, as a pleasant side effect, it is also easy to read, understand, and subsequently maintain. DfT was developed as a general concept a few decades ago in a field that was not software. In fact, the goal of DfT was to improve the process of building low-level circuits within boards and chips. DfT defines three attributes that any unit of software must have to be easily testable:

  • Control The attribute of control refers to the degree to which it is possible for testers to apply fixed input data to the software under test. Any piece of software should be written in a way that makes it clear what parameters are required and what return values are generated. In addition, any piece of software should abstract its dependencies—both parameters and low-level modules—and provide a way for external callers to inject them at will.
  • Visibility The attribute of visibility is defined as the ability to observe the current state of the software under test and any output it can produce. Visibility is all about postconditions—the conditions that can be verified after a method has executed.
  • Simplicity Simple and extremely cohesive components are preferable for testing because the less you have to test, the more reliably and quickly you can do that.

In general, simplicity is always a positive attribute for any system and in every context. Testing is clearly no exception.
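
To make control and visibility concrete, here's a minimal sketch (the Turnstile class is hypothetical, invented just for illustration) in which all input arrives through parameters and the resulting state is observable, so postconditions can be asserted:

    // Control: the caller supplies every input; nothing is read from hidden sources.
    // Visibility: the state produced by each operation is exposed read-only,
    // so a test can verify postconditions after every call.
    public class Turnstile
    {
        public int EntriesAllowed { get; private set; }
        public int EntriesDenied { get; private set; }

        public bool TryPass(bool hasValidTicket)
        {
            if (hasValidTicket)
            {
                EntriesAllowed++; // observable postcondition
                return true;
            }
            EntriesDenied++;      // observable postcondition
            return false;
        }
    }

A test can call TryPass(false) and then assert that EntriesDenied increased by exactly one; that is the visibility attribute at work.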

Why is testability desirable?
As we see things, testability is much more important than the actual step of testing. Testability is an attribute of software that represents a (great) statement about its quality. Testing is a process aimed at verifying whether the code meets expectations.

Applying testability (that is, making your code easily testable) is like learning to fish; writing a unit test is like being given a fish to eat. Being given a fish resolves an immediate problem; learning to fish is a different thing, because it adds new skills, makes you stronger, and provides a way to resolve the same problem whenever it occurs.

When DfT is successfully applied, your code is generally of good quality, lends itself well to maintenance and refactoring, and can be more easily understood by any developers who happen to encounter it. In such conditions, writing unit tests is highly effective and easier overall.

The ROI of testability
The return on investment (ROI) of testability is all in the improved quality of the code you get. Writing classes with the goal of making them testable leads you to favor simplicity and to proceed one small step at a time. You can quickly catch when a given class is becoming bloated, spot where you need to inject dependencies, and identify the actual dependencies you need to take into account.

You can certainly produce testable code without actually writing all that many tests for each class and component. But writing tests and classes together helps you to comprehend the ROI. The final goal, however, is having good code, not good tests. If you need to prove to yourself, or your manager, the ROI of testability, we suggest you experiment with writing classes and tests together. It turns out that the resulting tests are a regression tool and provide evidence that in all tested conditions (including common and edge cases) your code works. The tests also improve the overall design of the classes, because to write tests, you end up making the public interface of the classes easier to use.

Tests are just more code to write and maintain, and this is an extra cost. This said, it turns out that testability is a sort of personal epiphany that each developer, and team of developers, will eventually experience—but probably not until the time is right for them.

Note:
The term code smell is often used to indicate an unpleasant aspect of some code that might indicate a more serious problem. A code smell is neither a bug nor a problem per se; it refers to code that works perfectly. However, it points to a bad programming practice or a less-than-ideal implementation that might have a deeper impact on the rest of the code. Code smells make code weaker. Finding and removing code smells is the primary objective of refactoring. To some extent, code smells are subjective and vary by language and paradigm.
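
For instance, here's a minimal sketch of a classic smell, magic numbers, next to a cleaner version (the class, method, and constant names are hypothetical, invented just for illustration):

    // Smell: magic numbers. The code works, but the intent behind 1000, 500,
    // 0.10 and 0.05 is hidden, and changing a threshold means hunting for literals.
    public class BillingWithMagicNumbers
    {
        public decimal Discount(decimal total)
        {
            if (total > 1000m) return total * 0.10m;
            if (total > 500m) return total * 0.05m;
            return 0m;
        }
    }

    // After refactoring: the thresholds and rates are named, so the business
    // rule is explicit and has a single place to change.
    public class Billing
    {
        private const decimal LargeOrderThreshold = 1000m;
        private const decimal MediumOrderThreshold = 500m;
        private const decimal LargeOrderRate = 0.10m;
        private const decimal MediumOrderRate = 0.05m;

        public decimal Discount(decimal total)
        {
            if (total > LargeOrderThreshold) return total * LargeOrderRate;
            if (total > MediumOrderThreshold) return total * MediumOrderRate;
            return 0m;
        }
    }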

Testing your software
Software testing happens at various levels. You have unit tests to determine whether individual components of the software meet functional requirements. You have integration tests to determine whether the software fits in the environment and infrastructure and whether two or more components work well together. Finally, you have acceptance tests to determine whether the completed system meets customer requirements.

Unit tests
Unit testing consists of writing and running a small program (referred to as a test harness) that instantiates your test classes and invokes their methods in an automatic way. The body of each test method instantiates the classes to test, performs some action on them, and then checks the results.
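
As a minimal sketch using the xUnit.net framework (the Calculator class is hypothetical, invented just for illustration), a typical unit test follows the arrange-act-assert pattern:

    using Xunit;

    // Hypothetical class under test.
    public class Calculator
    {
        public int Add(int x, int y) => x + y;
    }

    public class CalculatorTests
    {
        [Fact]
        public void Add_ReturnsSumOfOperands()
        {
            // Arrange: instantiate the class under test.
            var calculator = new Calculator();

            // Act: perform an action on it.
            var result = calculator.Add(2, 3);

            // Assert: check the result against the expectation.
            Assert.Equal(5, result);
        }
    }

The test runner (the harness) discovers every method marked [Fact] and runs it automatically, reporting any failed assertion.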