
Microsoft .NET : Design Principles and Patterns - Applying Requirements by Design (part 1) - Testability


1. Testability

A broadly accepted definition of testability in the context of software architecture describes it as the ease with which software can be tested. Testing, in turn, is the process of checking software to ensure that it behaves as expected, contains no errors, and satisfies its requirements.

A popular slogan that captures the importance of software testing comes from Bruce Eckel and reads like this:

If it ain’t tested, it’s broken.

The key thing to keep in mind is that you can state that your code works only if you can provide evidence that it does. A piece of software attains the status of working not when someone states that it works (whether that someone is an end user, the project manager, the customer, or the chief architect), but only when its correctness is proven beyond any reasonable doubt.

Software Testing

Testing happens at various levels. You have unit tests to determine whether individual components of the software meet functional requirements. You have integration tests to determine whether the software fits in the environment and infrastructure and whether two or more components work well together. Finally, you have acceptance tests to determine whether the completed system meets customer requirements.

Unit tests and integration tests pertain to the development team and serve the purpose of making the team confident about the quality of the software. Test results tell the team whether it is doing well and is on the right track. Typically, these tests don’t cover the entire code base. In general, there’s no clear correlation between the percentage of code coverage and the quality of the code. Likewise, there’s no agreement on what a good target percentage of code coverage would be. Some figure that 80 percent is a good number; some do not even instruct the testing tool to calculate it.

The customer is typically not interested in the results of unit and integration tests. Acceptance tests, on the other hand, are all the customer cares about. Acceptance tests address the completed system and are part of the contract between the customer and the development team. Acceptance tests can be written by the customers themselves or by the team in strict collaboration with the customer. In an acceptance test, you might find a checklist such as the following:

1) Insert a customer with the following data ...;

2) Modify the customer using an existing ID;

3) Observe the reaction of the system and verify specific expected results;

Another example is the following:

1) While a batch is running, shut down one node of the application server;

2) Observe the reaction of the system and the results of the transaction;

Run prior to delivery, acceptance tests, if successful, signal the completion of the project and the approval of the product. (As a consultant, you can issue your final invoice at this point.)

Tests are a serious matter.

Testing the system by having end users poke around the software for a few days is neither a reliable nor an exhaustive test practice.

Note

Admittedly, in the early 1990s Dino delivered a photographic Windows application using the test-by-poking-around approach. We were a very small company with five developers, plus the boss. Our (patented?) approach to testing is described in the following paragraph.

The boss brings a copy of the program home. The boss spends the night playing with the program. Around 8 a.m. the next day, the team gets a call from the boss, who is going to get a few hours of very well-deserved sleep. The boss recites a long list of serious bugs to be fixed instantly and makes obscure references to alleged features of the program, which are unknown to the entire team. Early in the afternoon, the boss shows up at work and discusses improvements in a much more relaxed state of mind. The list of serious bugs to be fixed instantly morphs into a short list of new features to add.

Even so, we delivered the application, and we could say we delivered a reliable and fully functioning piece of software. It was 1994, though. The old days.

Software Contracts

A software test verifies that a component returns the correct output in response to given input and a given internal state. Having control over the input and the state and being able to observe the output is therefore essential.

Your testing efforts greatly benefit from detailed knowledge of the software contract supported by a method. When you design a class, you should always be sure you can answer the following three questions about the class and its methods in particular:

  • Under which conditions can the method be invoked?

  • Which conditions are verified after the method terminates?

  • Which conditions do not change before and after the method execution?

The answers to these three questions are known, respectively, as preconditions, postconditions, and invariants.

Preconditions mainly refer to the input data you pass: specifically, data of given types with values falling within a given range. Preconditions also refer to the state of the object required for execution; for example, a method might need to throw an exception if an internal member is null or if certain conditions are not met.

When you design a method with testability in mind, you pay attention to and validate input carefully and throw exceptions if any preconditions are not met. This gets you clean code, and more importantly, code that is easier to test.
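For instance, a repository method that by contract requires a non-null customer with a non-empty ID might validate its preconditions as in the following sketch. (The CustomerRepository class and the ID rule here are assumptions made up for illustration.)

using System;

public class CustomerRepository
{
  // Precondition: customer is not null and carries a non-empty ID
  public void Save(Customer customer)
  {
    if (customer == null)
      throw new ArgumentNullException("customer");
    if (String.IsNullOrEmpty(customer.ID))
      throw new ArgumentException("The customer must have an ID.", "customer");

    // Persist the customer here; the postcondition is that the
    // customer can be read back by its ID
  }
}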

Postconditions refer to the output generated by the method and the changes produced in the state of the object. Postconditions are not directly related to any exceptions that might be thrown along the way, and such exceptions are largely irrelevant from a testing perspective. When you do testing, in fact, you execute the method only if its preconditions are met (and if no exceptions are raised because of failed preconditions). The method might produce wrong results, but it should not fail unless a truly exceptional situation is encountered. If your code needs to read a file, the existence of the file is a precondition, and you should throw a FileNotFoundException before attempting to read. An IOException, say, is acceptable only if you lose access to the file during the test.

There might be a case where the method delegates some work to an internal component, which might also throw exceptions. However, for the purpose of testing, that component will be replaced with a fake one that is guaranteed to return valid data by contract. (You are testing the outermost method now; you have tested the internal component already or you’ll test it later.) So, in the end, when you design for testability the exceptions you should care about most are those in the preconditions.

Invariants refer to property values, or expressions involving members of the object’s state, that do not change during the method’s execution. In a design-for-testability scenario, you know these invariants clearly and you assert them in tests. As an example of an invariant, consider the State property of DbConnection: it must be ConnectionState.Open before you invoke BeginTransaction, and it must remain Open afterward.
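In a test, you can assert such an invariant explicitly before and after the call. Here’s a minimal sketch in the MSTest syntax introduced later in this article, assuming the System.Data and System.Data.SqlClient namespaces are imported and that connectionString points to a reachable test database:

[TestMethod]
public void BeginTransaction_LeavesConnectionOpen()
{
  using (SqlConnection connection = new SqlConnection(connectionString))
  {
    connection.Open();

    // Precondition: the connection is open
    Assert.AreEqual(ConnectionState.Open, connection.State);

    using (SqlTransaction tx = connection.BeginTransaction())
    {
      // Invariant: the connection is still open after BeginTransaction
      Assert.AreEqual(ConnectionState.Open, connection.State);
    }
  }
}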

Software contracts play a key role in the design of classes for testability. Having a contract clearly defined for each class you write makes your code inherently more testable.

Unit Testing

Unit testing verifies that individual units of code are working properly according to their software contract. A unit is the smallest part of an application that is testable—typically, a method.

Unit testing consists of writing and running a small program (referred to as a test harness) that instantiates classes and invokes methods in an automatic way. In the end, running a battery of tests is much like compiling: you click a button, the test harness runs, and when it finishes you know what went wrong, if anything.

In its simplest form, a test harness is a manually written program that reads test-case input values and the corresponding expected results from some external files. Then the test harness calls methods using input values and compares results with expected values. Needless to say, writing such a test harness entirely from scratch is, at the very minimum, time consuming and error prone. But, more importantly, it is restrictive in terms of the testing capabilities you can take advantage of.
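For the sake of illustration, here’s what such a hand-rolled harness might look like. Everything in it is made up for the example: a hypothetical Calculator class and a semicolon-separated file of test cases.

using System;
using System.IO;

class Calculator
{
  public int Sum(int x, int y) { return x + y; }
}

class TestHarness
{
  static void Main()
  {
    // Each line of the file holds a test case: x;y;expected
    foreach (string line in File.ReadAllLines("sum-testcases.txt"))
    {
      string[] parts = line.Split(';');
      int x = int.Parse(parts[0]);
      int y = int.Parse(parts[1]);
      int expected = int.Parse(parts[2]);

      // Call the method under test and compare actual vs. expected
      int actual = new Calculator().Sum(x, y);
      Console.WriteLine("Sum({0}, {1}) = {2} [{3}]",
          x, y, actual, actual == expected ? "PASS" : "FAIL");
    }
  }
}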

At the end of the day, the most effective way to conduct unit testing passes through the use of an automated test framework. An automated test framework is a developer tool that normally includes a runtime engine and a framework of classes for simplifying the creation of test programs. Table 1 lists some of the most popular ones.

Table 1. Popular Testing Tools

MSTest
  The testing tool incorporated into Visual Studio 2008 Professional, Team Tester, and Team Developer. It is also included in Visual Studio 2005 Team Tester and Team Developer.

MBUnit
  An open-source product with a fuller bag of features than MSTest. However, the tight integration that MSTest has with Visual Studio and Team Foundation Server largely makes up for the smaller feature set.

NUnit
  One of the most widely used testing tools for the .NET Framework. It is an open-source product.

xUnit.NET
  Currently under development as a CodePlex project, this tool builds on the experience of James Newkirk, the original author of NUnit. It is definitely an interesting tool to look at, with some interesting and innovative features.


Unit Testing in Action

Let’s have a look at some tests written using the MSTest tool that comes with Visual Studio 2008. You start by grouping related tests in a test fixture. Test fixtures are just test-specific classes whose methods typically represent the tests to run. In a test fixture, you might also have code that executes before and after each test. Here’s the skeleton of a test fixture with MSTest:

using Microsoft.VisualStudio.TestTools.UnitTesting;
// ...

[TestClass]
public class CustomerTestCase
{
  private Customer customer;

  [TestInitialize]
  public void SetUp()
  {
    customer = new Customer();
  }

  [TestCleanup]
  public void TearDown()
  {
    customer = null;
  }

  // Your tests go here
  [TestMethod]
  public void Assign_ID()
  {
    // ...
  }

  // ...
}

It is recommended that you keep your tests in a separate assembly and, more importantly, that you have a test assembly for each class library you want to test. A good practice is to have an XxxTestCase class for each Xxx class in a given assembly.

As you can see, you transform a plain .NET class into a test fixture by simply adding the TestClass attribute. You turn a method of this class into a test method by using the TestMethod attribute. Attributes such as TestInitialize and TestCleanup have a special meaning: they indicate code to execute, respectively, before and after each test method runs. Let’s examine an initial test:

[TestMethod]
public void Assign_ID()
{
  // Define the input data for the test
  string id = "MANDS";

  // Execute the action to test (assign a given value)
  customer.ID = id;

  // Test the postconditions:
  // Ensure that the new value of property ID matches the assigned value.
  Assert.AreEqual(id, customer.ID);
}

The test simply verifies that a value is correctly assigned to the ID property of the Customer class. You use methods of the Assert object to assert conditions that must be true when checked.

The body of a test method contains plain code that works on properties and methods of a class. Here’s another example that invokes a method on the Customer class:

[TestMethod]
public void TestEmptyCustomersHaveNoOrders()
{
  Customer c = new Customer();
  Assert.AreEqual<decimal>(0, c.GetTotalAmountOfOrders());
}

In this case, the purpose of the test is to ensure that a newly created Customer instance has no associated orders and that the total amount of its orders adds up to zero.
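For reference, the two tests above assume a Customer class along these lines. This is only a minimal sketch; the real class would carry more members and validation, and the Order type shown here is an assumption.

using System.Collections.Generic;

public class Order
{
  public decimal Total { get; set; }
}

public class Customer
{
  private readonly List<Order> _orders = new List<Order>();

  public string ID { get; set; }
  public IList<Order> Orders
  {
    get { return _orders; }
  }

  public decimal GetTotalAmountOfOrders()
  {
    decimal total = 0;
    foreach (Order order in _orders)
      total += order.Total;
    return total;
  }
}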

Dealing with Dependencies

When you test a method, you want to focus only on the code within that method. All that you want to know is whether that code provides the expected results in the tested scenarios. To get this, you need to get rid of all dependencies the method might have. If the method, say, invokes another class, you assume that the invoked class will always return correct results. In this way, you eliminate at the root the risk that the method fails under test because a failure occurred down the call stack. If you test method A and it fails, the reason has to be found exclusively in the source code of method A—given preconditions, invariants, and behavior—and not in any of its dependencies.

Generally, the class being tested must be isolated from its dependencies.

In an object-oriented scenario, class A depends on class B when any of the following conditions holds:

  • Class A derives from class B.

  • Class A includes a member of class B.

  • One of the methods of class A invokes a method of class B.

  • One of the methods of class A receives or returns a parameter of class B.

  • Class A depends on a class that, in turn, depends on class B.

How can you neutralize dependencies when testing a method? This is exactly where manually written test harnesses no longer live up to your expectations, and you see the full power of automated testing frameworks.

Dependency injection really comes in handy here and is a pattern that has a huge impact on testability. A class that depends on interfaces (the first principle of OOD), and uses dependency injection to receive from the outside world any objects it needs to do its own work, is inherently more testable. Let’s consider the following code snippet:

public class Task
{
  // Class Task depends upon type ILogger
  ILogger _logger;

  public Task(ILogger logger)
  {
    this._logger = logger;
  }

  public int Sum(int x, int y)
  {
    return x + y;
  }

  public void Execute()
  {
    // Invoke an external "service"; not relevant when unit-testing this method
    this._logger.Log("Begin method ...");

    // Method-specific code; RELEVANT when unit-testing this method
    // ...

    // Invoke an external "service"; not relevant when unit-testing this method
    this._logger.Log("End method ...");
  }
}

We want to test the code in method Execute, but we don’t care about the logger. Because the class Task is designed with DI in mind, testing the method Execute in total isolation is much easier.

Again, how can you neutralize dependencies when testing a method?

The simplest option is using fake objects. A fake object is a relatively simple clone of an object that offers the same interface as the original object but returns hard-coded or programmatically determined values. Here’s a sample fake object for the ILogger type:

public class FakeLogger : ILogger
{
    public void Log(string message)
    {
        return;
    }
}

As you can see, the behavior of a fake object is hard-coded; the fake object has no state and no significant behavior. From the fake object’s perspective, it makes no difference how many times you invoke a fake method and when in the flow the call occurs. Let’s see how to inject a fake logger in the Task class:

[TestMethod]
public void TestIfExecuteWorks()
{
  // Inject a fake logger to isolate the method from dependencies
  FakeLogger fake = new FakeLogger();
  Task task = new Task(fake);

  // Set preconditions
  int x = 3;
  int y = 4;
  int expected = 7;

  // Run the method
  int actual = task.Sum(x, y);

  // Report about the code's behavior using Assert statements
  Assert.AreEqual<int>(expected, actual);
  // ...
}

In a test, you set the preconditions for the method, run the method, and then observe the resulting postconditions. The concept of assertion is central to the unit test. An assertion is a condition that might or might not hold; if it holds, the assertion passes, otherwise the test fails. In MSTest, the Assert class provides many static methods for making assertions, such as AreEqual, IsInstanceOfType, and IsNull.

In the preceding example, after executing the method Sum you are expected to place one or more assertions aimed at verifying the changes made to the state of the object or comparing the results produced against expected values.
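Testing Execute itself follows the same pattern. Because the logger is faked, the test exercises only the method-specific code. The following is a minimal sketch; which assertions apply depends on what the body of Execute actually does, which is elided in the listing above.

[TestMethod]
public void TestIfExecuteRunsInIsolation()
{
  // The fake logger satisfies the ILogger dependency with no side effects
  Task task = new Task(new FakeLogger());

  // Run the method; the logging calls hit the fake and go nowhere
  task.Execute();

  // Assert here the postconditions on the state of the task;
  // which ones apply depends on the method-specific code elided above
}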

Note

In some papers, terms such as stub and shunt are used to indicate slight variations of what we reference here as a fake. A broadly accepted differentiation is based on the fact that a stub (or a shunt) merely provides the implementation of an interface. Methods can just throw or, at most, return canned values.

A fake, on the other hand, is a slightly more sophisticated object that, in addition to implementing an interface, usually contains more logic in its methods. Methods on a fake object can return canned values as well as values determined programmatically. Both fakes and stubs can provide a meaningful implementation for some methods and just throw exceptions for other methods that are not considered relevant for the purpose of the test.

A bigger and juicier differentiation, however, is the one that exists between fakes (or stubs) and mock objects, which is discussed next.

From Fakes to Mocks

A mock object is a more evolved and recent version of a fake. A mock does all that a fake or a stub does, plus something more. In a way, a mock is an object with its own personality that mimics the behavior and interface of another object. What more does a mock provide to testers?

Essentially, a mock allows for verification of the context of the method call. With a mock, you can verify that a method call happens with the right preconditions and in the correct order with respect to other methods in the class.

Writing a fake manually is not usually a big issue—all the logic you need is for the most part simple and doesn’t need to change frequently. When you use fakes, you’re mostly interested in verifying that some expected output derives from a given input. You are interested in the state that a fake object might represent; you are not interested in interacting with it.

You use a mock instead of a fake only when you need to interact with dependent objects during tests. For example, you might want to know whether the mock has been invoked or not, and you might decide within the test what the mock object has to return for a given method.

Writing mocks manually is certainly a possibility, but it is rarely an option you really want to consider. For the level of flexibility you expect from a mock, you would have to update its source code every now and then, or maintain a different mock for each test case in which the object is involved. Alternatively, you might come up with a very generic mock class that works in the guise of any object you specify. Such a generic mock class also exposes a general-purpose interface through which you set your expectations for the mocked object. This is exactly what mocking frameworks do for you. In the end, you never write mock objects manually; you generate them on the fly using a mocking framework.

Table 2 lists and briefly describes the commonly used mocking frameworks.

Table 2. Some Popular Mocking Frameworks

NMock2
  An open-source library providing a dynamic mocking framework for .NET interfaces. The mock object takes input in the form of strings and uses reflection to set expectations.

TypeMock
  A commercial product with unique capabilities that basically don’t require you to (re)design your code for testability. TypeMock enables testing code that was previously considered untestable, such as static methods, nonvirtual methods, and sealed classes.

Rhino Mocks
  An open-source product. Through a wizard, it generates a static mock class for type-safe testing. You set mock expectations by accessing the mocked object directly, rather than going through one more level of indirection.

Let’s go through a mocking example that uses NMock2 in MSTest.

Imagine you have an AccountService class that depends on the ICurrencyService type. The AccountService class represents a bank account with its own currency. When you transfer funds between accounts, you might need to deal with conversion rates, and you use the ICurrencyService type for that:

public interface ICurrencyService
{
  // Returns the current conversion rate: how many units of toCurrency
  // one unit of fromCurrency is worth
  decimal GetConversionRate(string fromCurrency, string toCurrency);
}
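The AccountService class is not shown in full here; based on the test below, its TransferFunds method might look like the following sketch. The Account members Currency, Withdraw, and Deposit are assumptions made for illustration.

public class AccountService : IAccountService
{
  private readonly ICurrencyService _currencyService;

  public AccountService(ICurrencyService currencyService)
  {
    this._currencyService = currencyService;
  }

  public void TransferFunds(Account from, Account to, decimal amount)
  {
    // Ask the injected service for the rate between the two currencies
    decimal rate = _currencyService.GetConversionRate(from.Currency, to.Currency);

    // Debit the source account and credit the converted amount
    from.Withdraw(amount);
    to.Deposit(amount * rate);
  }
}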

Let’s see what testing the TransferFunds method looks like:

[TestClass]
public class CurrencyServiceTestCase
{
  private Mockery mocks;
  private ICurrencyService mockCurrencyService;
  private IAccountService accountService;

  [TestInitialize]
  public void SetUp()
  {
    // Initialize the mocking framework
    mocks = new Mockery();

    // Generate a mock for the ICurrencyService type
    mockCurrencyService = mocks.NewMock<ICurrencyService>();

    // Create the object to test and inject the mocked service
    accountService = new AccountService(mockCurrencyService);
  }

  [TestMethod]
  public void TestCrossCurrencyFundsTransfer()
  {
    // Create two test accounts
    Account eurAccount = new Account("12345", "EUR");
    Account usdAccount = new Account("54321", "USD");
    usdAccount.Deposit(1000);

    // Set expectations for the mocked object:
    //   When method GetConversionRate is invoked with (USD,EUR) input
    //   the mock returns 0.64
    Expect.Once.On(mockCurrencyService)
                .Method("GetConversionRate")
                .With("USD", "EUR")
                .Will(Return.Value(0.64m));

    // Invoke the method to test (and transfer $500 to an EUR account)
    accountService.TransferFunds(usdAccount, eurAccount, 500);

    // Verify postconditions through assertions
    Assert.AreEqual<decimal>(500, usdAccount.Balance);
    Assert.AreEqual<decimal>(320, eurAccount.Balance);
    mocks.VerifyAllExpectationsHaveBeenMet();
  }
}

You first create a mock object for each dependent type. Next, you programmatically set expectations on the mock using the static class Expect from the NMock2 framework. In particular, in this case you establish that when the method GetConversionRate on the mocked type is invoked with a pair of arguments such as "USD" and "EUR", it has to return 0.64. This is exactly the value that the method TransferFunds receives when it invokes the currency service internally.

No code for the mock object exists anywhere in the project, and there’s no need for developers to look into the implementation of mocks. Reading a test, therefore, couldn’t be easier: the expectations are clearly declared and set directly on the methods under test.

Note

A mock is generated on the fly using .NET reflection to inspect the type to mimic and the CodeDOM API to generate and compile code dynamically.
