Software Testing

Testing your Software with Style and Structure

After setting up a simple build server in last month's article, DevOps Ep. 4: Build Servers, our next step in the DevOps Lifecycle is Testing.

Let’s have a look at structured unit, integration and system testing, inspired by the V-Model used in automotive development.

Note that we are only looking at software verification, i.e. "the software works". There is also software validation, which checks that the software works for a given user or scenario.

What types of tests are there?

When thinking about types of tests, I like to refer to the V-Model. Simplified, it identifies the three most important development phases and their verification via tests:

Simplified Image of the V-Model. Requirements, Architecture and Implementation are verified by System Tests, Integration Tests and Unit Tests, respectively.
Every Design step is verified within the V-Model (simplified depiction)

In summary, software is developed in three stages:

  • First we gather Requirements.
  • Next we plan an Architecture (hopefully).
  • Finally we implement the system.

The V-Model then defines a verification step for each stage of development, which we will describe in detail in the following sections.

Unit Testing

Each Unit Test covers one small unit of code.
In object-oriented code, these are your classes' methods.
Most literature therefore refers to the unit being tested as the method under test.

Probably the most important part of good unit tests is naming and structuring them so that colleagues can easily understand what a failing unit test means.

For this purpose, I encourage fellow developers to use a naming schema inspired by Behaviour-Driven Development.

  • ACTION_WhenCONDITION
    e.g. AuthenticateUser_WhenCredentialsAreCorrect
  • CONDITION_ThenACTION
    e.g. CredentialsAreCorrect_ThenAuthenticateUser

While the above naming schema looks good, it is missing a vital piece of information: which unit is being tested? The solution can be found in one of my favourite articles to date: Structuring Unit Tests on You've Been Haacked.

In summary, the article recommends creating a test class for every class in your code, as usual, but also introduces an additional nested class for every method under test. In C#, partial classes can help if you have a class with many methods.

[TestFixture]
public class AuthenticationServiceTest {

  [TestFixture]
  public class LoginMethod {

    [Test]
    public void CredentialsAreCorrect_ThenAuthenticateUser() {
      // Test Code
    }

    [Test]
    public void UsernameIsMissing_ThenReject() {
      // Test Code
    }

    [Test]
    public void PasswordIsMissing_ThenReject() {
      // Test Code
    }
  }

  [TestFixture]
  public class RegisterMethod {

    [Test]
    public void CredentialsAreValid_ThenRegisterUser() {
      // Test Code
    }

    [Test]
    public void UsernameIsMissing_ThenReject() {
      // Test Code
    }

    [Test]
    public void PasswordIsMissing_ThenReject() {
      // Test Code
    }
  }
}

When developing Unit Tests, you will sometimes encounter situations where your class under test depends on other software units in your code.
This is a rather advanced issue, and the solution is Mocking. It enables us to create "fake" dependencies for our class under test, but it is worth its own future article.
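To give a rough idea already, here is a minimal sketch using the popular Moq library. The IUserRepository interface, the User type, and the AuthenticationService constructor are hypothetical names, not part of any real API:

```csharp
// Sketch of a mocked dependency with NUnit and the Moq NuGet package.
// IUserRepository, User and AuthenticationService are hypothetical names.
using Moq;
using NUnit.Framework;

[TestFixture]
public class AuthenticationServiceTest {

  [TestFixture]
  public class LoginMethod {

    [Test]
    public void CredentialsAreCorrect_ThenAuthenticateUser() {
      // Arrange: create a fake repository that returns a known user,
      // so the test does not need a real database.
      var repository = new Mock<IUserRepository>();
      repository.Setup(r => r.FindByUsername("alice"))
                .Returns(new User("alice", "correct horse battery staple"));
      var service = new AuthenticationService(repository.Object);

      // Act
      var result = service.Login("alice", "correct horse battery staple");

      // Assert
      Assert.That(result.IsAuthenticated, Is.True);
    }
  }
}
```

The key point is that the class under test receives its dependency through the constructor, so the test can substitute a controlled fake for it.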

Integration Testing

Unlike Unit Tests, Integration Tests cover multiple units of code working together. They are often realised using frameworks such as Selenium to automate usage of small parts of your software.

The important difference between Integration Tests and Unit Tests is scope.
Writing integration tests is often more work than writing unit tests, yet they are less precise: if an integration test fails, you do not immediately know which software unit is at fault.

As setting up Integration Tests depends heavily on your chosen technologies, I cannot provide a generic example here.
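To still give a rough impression of one common stack, here is a minimal sketch of a Selenium-based integration test for a hypothetical web login form. The URL and element IDs are made up for illustration:

```csharp
// Sketch of an integration test driving a web UI with Selenium WebDriver
// (Selenium.WebDriver NuGet package). URL and element IDs are hypothetical.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginPageIntegrationTest {

  [Test]
  public void CredentialsAreCorrect_ThenShowDashboard() {
    using (IWebDriver driver = new ChromeDriver()) {
      // This exercises several units together: UI, controller and backend.
      driver.Navigate().GoToUrl("https://staging.example.com/login");
      driver.FindElement(By.Id("username")).SendKeys("alice");
      driver.FindElement(By.Id("password")).SendKeys("correct horse battery staple");
      driver.FindElement(By.Id("login-button")).Click();

      Assert.That(driver.Title, Does.Contain("Dashboard"));
    }
  }
}
```

Note how the failure of this single test could be caused by the UI, the controller, or the backend, which illustrates the precision trade-off described above.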

System Testing

Finally, System Tests are the last stage of quality assurance. They test the entire system in a shippable state.

While automating System Tests is theoretically possible, it is a lot of work. So much, in fact, that I have never seen fully automated System Tests in the wild.
It is therefore completely normal and okay to have manual system tests.

The only important part about manual system tests is to have a structure. For each system test, we want to:

  • define system tests with IDs for easier communication.
  • document a step-by-step guide with numbered steps, so that anyone can execute the system test.
  • document each execution of the system tests with
    • software version or build number
    • time & date
    • person performing the test
    • test result (Pass/Fail)
    • comments, if necessary
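As an illustration, a system test definition and its execution log might look like this in a spreadsheet. The ID, steps, and values are made up:

```
ST-03: User Login
  1. Start the app.
  2. Tap "Login".
  3. Enter username "alice" and the test account password.
  4. Tap "Submit".
  Expected: The dashboard screen is shown.

Executions of ST-03:
| Build | Date       | Tester | Result | Comments               |
| 1.4.2 | 2020-03-02 | Anna   | Pass   |                        |
| 1.4.3 | 2020-03-09 | Ben    | Fail   | Dashboard loads > 10 s |
```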

Although dedicated solutions for system tests exist, such as Azure Test Plans, most of them are expensive. However, you can get by with spreadsheet software. We like using Google Sheets, as it enables all of us to edit the documents and log our test runs and comments.

While, strictly speaking, System Tests only cover the system developed by you, I highly recommend performing these tests on a production-like setup, e.g. executing your Android app's System Tests on a real device.

This blurs the line between System Tests and so-called "End-to-End" Tests, but that is a good thing. If you are developing a cross-platform application, repeat your system tests on each and every target platform.

Conclusion

We have only covered the tip of the iceberg on this subject, but we can follow up with examples and CI integration in future articles.

For now, it should give you and your team a good overview and allow you to start with simple automated unit testing as well as manual system testing documented in spreadsheets.

You can reach out to me on Twitter or the BrutalHack Discord Server for questions.