Testing approaches depend on where you are in the project and on your “budget” in terms of time, money, staffing, and need. Ideally, unit testing is budgeted into the development process, but realistically, we often encounter existing or legacy programs that have little or no code coverage but must be upgraded or maintained.
The worst scenario is a product under active development that exhibits an increasing number of failures, again with little or no code coverage. As a product manager, whether at the beginning of a development effort or after being handed an existing application, it is important to develop a reasonable unit testing strategy.
Remember that unit tests should provide measurable benefits to your project to offset the liability of their development, maintenance, and their own testing. Furthermore, the strategy that you adopt for your unit testing can affect the architecture of your application. While this is almost always a good thing, it may introduce unnecessary overhead for your needs.
Starting From Requirements
If you are starting a sufficiently complex application from a clean slate, and all that is in your hands is a set of requirements, consider the following guidance.
Prioritizing Computational Requirements
Prioritize the application’s computational requirements to determine where the complexity lies. Complexity can come from the number of states that a particular computation must accommodate, from a large set of input data required to perform the computation, or simply from algorithmic complexity, such as performing failure case analysis on a satellite’s redundancy ring. Also consider where code is likely to change in the future as the result of unknown changing requirements. While that sounds like it requires clairvoyance, a skilled software architect can categorize code into general purpose (solving a common problem) and domain specific (solving a specific requirement problem); the latter is a candidate for future change.
Writing unit tests for trivial functions is easy, fast, and gratifying in the number of test cases that the program churns through, but these are the least cost-effective tests: they take time to write, the trivial code they cover will most likely be written correctly to begin with and will most likely not change over time, and so they become the least useful as the application’s code base grows. Instead, focus your unit testing strategy on the code that is domain specific and complex.
Select an Architecture
One of the benefits of starting a project from a set of requirements is that you get to create the architecture (or select a third-party architecture) as part of the development process. Third-party frameworks that allow you to leverage architectures such as inversion of control (and the related concept of dependency injection), as well as formal architectures such as Model-View-Controller (MVC) and Model-View-ViewModel (MVVM), facilitate unit testing for the simple reason that a modular architecture is typically easier to unit test. These architectures separate out:
- The presentation (view).
- The model (responsible for persistence and data representation).
- The controller (where the computations should be occurring).
While some aspects of the model might be candidates for unit testing, most of the unit tests will likely be written against methods in the controller or view model, which is where the computations on the model or view are implemented.
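To illustrate why this separation helps, here is a minimal sketch of an MVVM-style view model; the class, property, and value names are hypothetical, but the point is that the computation lives in the view model and can be tested without constructing any UI:

public class OrderViewModel
{
    public decimal Subtotal { get; set; }
    public decimal DiscountRate { get; set; }  // e.g., 0.10 for a 10% discount

    // The computation lives here, not in a button handler,
    // so a test can exercise it directly.
    public decimal Total => Subtotal - (Subtotal * DiscountRate);
}

[TestMethod]
public void TotalAppliesDiscount()
{
    var vm = new OrderViewModel { Subtotal = 100m, DiscountRate = 0.10m };
    Assert.AreEqual(90m, vm.Total);
}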
Maintenance Phase
Unit testing can be of benefit even if you are involved in the maintenance of an application, whether that means adding new features to an existing application or simply fixing bugs in a legacy one. There are several approaches one can take to an existing application, and the questions underlying those approaches can determine the cost-effectiveness of unit testing:
- Do you write unit tests only for new features and bug fixes? Is the feature or bug fix something that will benefit from regression testing, or is it a one-time, isolated issue that is more easily tested during integration testing?
- Do you start writing unit tests against existing features? If so, how do you prioritize which features to test first?
- Does the existing code base work well with unit testing or does the code first need refactoring to isolate code units?
- What setups or teardowns are needed for the feature or bug testing? (A short sketch follows this list.)
- What dependencies can be discovered about the code changes that may result in side effects in other code, and should the unit tests be broadened to test the behavior of dependent code?
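As an illustration of the setup and teardown question, the MSTest framework used in the examples later in this section provides per-test initialization and cleanup hooks. The repository and data store classes in this sketch are hypothetical:

[TestClass]
public class CustomerRepositoryTests
{
    private CustomerRepository repository;  // hypothetical class under test

    [TestInitialize]
    public void Setup()
    {
        // Runs before each test: put the unit into a known state.
        repository = new CustomerRepository(new InMemoryDataStore());
    }

    [TestCleanup]
    public void Teardown()
    {
        // Runs after each test: release anything the test acquired.
        repository.Dispose();
    }

    [TestMethod]
    public void NewRepositoryIsEmpty()
    {
        Assert.AreEqual(0, repository.CustomerCount);
    }
}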
Walking into the maintenance phase of a legacy application that lacks unit testing is not trivial—the planning, consideration, and investigation into the code may often require more resources than simply fixing the bug. However, the judicious use of unit testing can be cost-effective, and while this is not always easy to determine, it is worth the exercise, if for no other reason than to get a deeper understanding of the code base.
Determine Your Process
There are three strategies one can take with regard to the unit test process: “Test-Driven Development,” “Code First, Test Second,” and, though it may seem antithetical to the theme of this book, the “No Unit Test” process.
Test-Driven Development
One camp is “Test-Driven Development,” summarized by the following workflow (a short code sketch follows the list):
- Given a computational requirement (see the earlier section), write a stub for the method.
- If dependencies on other objects that are not yet implemented are required (objects passed in as parameters to the method or returned by the method), implement those as empty interfaces.
- If properties are missing, implement stubs for properties that are needed to verify the results.
- Write any setup or teardown test requirements.
- Write the tests. The reasons for writing the stubs before writing the tests are: first, to take advantage of IntelliSense when writing the test; second, to establish that the code still compiles; and third, to ensure that the method being tested, its parameters, interfaces, and properties are all consistently named.
- Run the tests, verifying that they fail.
- Code the implementation.
- Run the tests, verifying that they succeed.
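As a minimal sketch of this workflow, assuming a hypothetical InterestCalculator class, the stub and its first (failing) test might look like this; the expected value is simply 1000 * 1.05^10 for annual compounding:

// Stub the method (and any missing dependencies as empty interfaces)
// so that the test below compiles.
public interface IRateProvider { }  // hypothetical empty interface stub

public class InterestCalculator
{
    public double FutureValue(double principal, double annualRatePercent, int periods)
    {
        throw new NotImplementedException();  // stub only; no implementation yet
    }
}

// Write the test and run it; it fails against the stub (red).
[TestMethod]
public void FutureValueTest()
{
    var calculator = new InterestCalculator();
    double fv = calculator.FutureValue(1000, 5.0, 10);
    Assert.AreEqual(1628.89, fv, 0.01);  // 1000 * 1.05^10
}

// Then code the implementation and re-run the test, verifying it passes (green).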
In practice, this is harder than it looks. It’s easy to fall prey to writing tests that are not cost-effective, and often, one discovers that the method being tested is not a sufficiently fine-grained unit to actually be a good candidate for a test. Perhaps the method is doing too much, requiring too much setup or teardown, or has dependencies on too many other objects that all must be initialized to a known state. These are all things that are more easily discovered when writing the code, not the test.
One advantage to a test-driven approach is that the process instills the discipline of unit testing and of writing the unit tests first, and it is easy to determine whether the developer is following the process. With practice, one can also become adept at making the process cost-effective.
Another advantage to a test-driven approach is that, by its nature, it enforces a kind of architecture. It would be absurd, though doable, to write a unit test that initializes a form, puts values into its controls, and then calls a method that is expected to perform some computation on those values, which is what testing code like the following (taken from an actual application) would require:
private void btnCalculate_Click(object sender, System.EventArgs e)
{
    double Principal, AnnualRate, InterestEarned;
    double FutureValue, RatePerPeriod;
    int NumberOfPeriods, CompoundType;

    Principal = Double.Parse(txtPrincipal.Text);
    AnnualRate = Double.Parse(txtInterest.Text) / 100;

    if (rdoMonthly.Checked)
        CompoundType = 12;
    else if (rdoQuarterly.Checked)
        CompoundType = 4;
    else if (rdoSemiannually.Checked)
        CompoundType = 2;
    else
        CompoundType = 1;

    NumberOfPeriods = Int32.Parse(txtPeriods.Text);
    double i = AnnualRate / CompoundType;
    int n = CompoundType * NumberOfPeriods;
    RatePerPeriod = AnnualRate / NumberOfPeriods;
    FutureValue = Principal * Math.Pow(1 + i, n);
    InterestEarned = FutureValue - Principal;
    txtInterestEarned.Text = InterestEarned.ToString("C");
    txtAmountEarned.Text = FutureValue.ToString("C");
}
The preceding code is untestable because the computation is entangled with the event handler and the user interface. Instead, one could write the compound interest calculation as its own method:
public enum CompoundType
{
    Annually = 1,
    SemiAnnually = 2,
    Quarterly = 4,
    Monthly = 12
}

private double CompoundInterestCalculation(
    double principal,
    double annualRate,
    CompoundType compoundType,
    int periods)
{
    double annualRateDecimal = annualRate / 100.0;
    double i = annualRateDecimal / (int)compoundType;
    int n = (int)compoundType * periods;
    double futureValue = principal * Math.Pow(1 + i, n);
    double interestEarned = futureValue - principal;
    return interestEarned;
}
which would then allow for a simple test to be written:
[TestMethod]
public void CompoundInterestTest()
{
    double interest = CompoundInterestCalculation(2500, 7.55, CompoundType.Monthly, 4);
    Assert.AreEqual(878.21, interest, 0.01);
}
Furthermore, by using parameterized testing, it would be straightforward to test each compound type, a range of years, and different interest and principal amounts.
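A sketch of such a parameterized test, using MSTest’s DataTestMethod and DataRow attributes, might look like this; the expected interest values were computed from the formula above and rounded to the cent:

[DataTestMethod]
[DataRow(2500.0, 7.55, CompoundType.Monthly, 4, 878.21)]
[DataRow(2500.0, 7.55, CompoundType.Quarterly, 4, 871.90)]
[DataRow(2500.0, 7.55, CompoundType.SemiAnnually, 4, 862.65)]
[DataRow(2500.0, 7.55, CompoundType.Annually, 4, 844.89)]
public void CompoundInterestParameterizedTest(
    double principal, double annualRate, CompoundType compoundType,
    int periods, double expectedInterest)
{
    double interest = CompoundInterestCalculation(principal, annualRate, compoundType, periods);
    Assert.AreEqual(expectedInterest, interest, 0.01);
}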
The test-driven approach thus facilitates a more formalized development process by encouraging the discovery of genuinely testable units and isolating them from boundary-crossing dependencies.
Code First, Test Second
Coding first is more natural, if only because that is the usual way applications are developed. The requirement and its implementation may also seem easy enough at first glance that writing several unit tests feels like a poor use of time. Other factors, such as deadlines, can force a project into a “just get the code written so we can ship” development process.
The problem with the code-first approach is that it is easy to write code that requires the kind of test we saw earlier. Code first requires an active discipline to test the code that has been written. This discipline is incredibly difficult to achieve, especially as there is always the next new feature to implement.
It also requires intelligence, if you will, to avoid writing entangled, boundary-crossing code, and the discipline to do so. Who hasn’t clicked on a button in the Visual Studio designer and coded the event’s computation right there in the stub that Visual Studio creates for you? It’s easy, and because the tool directs you in that direction, a naive programmer will think this is the right way of coding.
This approach requires careful consideration of the skills and discipline of your team, as well as closer monitoring of the team, especially during high-stress periods when disciplined approaches tend to break down. Granted, a test-driven discipline may also be thrown out as deadlines loom, but that tends to be a conscious decision to make an exception, whereas in a code-first approach it can easily become the rule.
No Unit Tests
Just because you don’t have unit tests doesn’t mean you are throwing out testing. It may simply be that the testing emphasizes acceptance test procedures or integration testing.
Balancing Testing Strategies
A cost-effective unit testing process requires a balance among the “Test-Driven Development,” “Code First, Test Second,” and “Test Some Other Way” strategies. The cost-effectiveness of unit testing should always be considered, along with factors such as the experience of the developers on the team. As a manager, you may not want to hear it, but a test-driven approach is a good idea if your team is fairly green and you need the process to instill discipline and structure.