In the previous article, we emphasized the necessity of structuring our application. We addressed two major hurdles that could potentially bring a Flutter project to a halt and took an architectural decision by adopting Domain-Driven Design principles. Now we want to incorporate unit tests, adhere to Test Driven Development guidelines while implementing use-cases, and learn how this supplements our approach towards building a scalable application.
Test Driven Development
Test Driven Development is more than just using an automated test suite to validate our implemented logic. It is a methodology that, in a way, drives the design aspect of writing code by following a certain convention in cycles. Essentially, this allows our test cases to serve as technical documentation for the behavior of our application.
The convention is described below.
Upon receiving a feature/change request, an iteration of TDD starts.
1. Add a test case in support of the feature/change request.
2. Write the minimum code to make the test case pass.
3. Refactor to fit the change in with the rest of the production code.
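The cycle above can be sketched with a toy example. The series itself uses Dart, but a small Python sketch keeps it compact; all names here are hypothetical.

```python
# Step 1 (red): add a failing test for the requested behavior.
def test_greeting_includes_name():
    assert greeting("Ada") == "Hello, Ada!"

# Step 2 (green): the minimum code that makes the test pass.
def greeting(name):
    return "Hello, " + name + "!"

# Step 3 (refactor): tidy up without changing behavior,
# e.g. switch to an f-string; the test still passes.
def greeting_refactored(name):
    return f"Hello, {name}!"

test_greeting_includes_name()  # now passes
```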
This convention can be applied at different levels, as we have already decomposed our application into various layers/components, as shown in the figure below.
Here we’ll be focusing on the unit test, which sits at the bottom of the testing pyramid.
From an object-oriented perspective, a unit test validates the state/behavior of a class in isolation. We have already seen in the previous article that structuring our application by decomposing it into layers/components results in writing more code and ends up with a lot of classes. So it’s natural that we want to automate the testing of these classes and test them in isolation in a time-effective manner. That leads us to two approaches to unit testing, which are as follows.
Test Last Approach
This is the default testing approach that we think of intuitively. Here we test after writing the implementation detail of our code, which means our tests work for that specific implementation only. Any change in the implementation detail will break the contract of the test case and result in a failure, which might not be correct all the time.
For example, suppose you want to go from place ‘A’ to place ‘B’ and you went by train. If you write a test case to validate this logic using the test-last approach, you’ll end up including the mode of transportation as part of the behavior you want to validate. However, that is just an implementation detail you happened to use: different people might use different modes of transportation to reach the same destination from the same starting point.
So the test-last approach couples behavior with implementation details, which results in rigid code that is hard to maintain. Therefore we avoid this approach while unit testing, though it is useful in other testing scenarios.
Test First Approach
This testing approach aligns with the convention we mentioned earlier while explaining TDD, so we’ll use that convention and explore further with some examples. In general, this is an iterative approach where we start with a failing case, then make the minimum change in our code to make it pass, and finally refactor to improve the quality without affecting the behavior.
Let’s start with an introduction to some basic TDD terminology.
SUT : System Under Test, the component that we want to test. In our application, the SUTs will be our use-cases in the application layer and our entities in the domain layer, along with other utility functions expressed as instance methods, static methods or top-level functions.
Test Class : The class that comprises the corresponding test cases for a SUT.
Test Doubles : Substitutes that we use in place of production objects while testing. Although the core intent is the same, test doubles are generally categorized into the following types: Dummy, Fake, Stub, Spy and Mock.
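To make the distinction concrete, here is a minimal Python sketch of three of these categories (the series uses Dart; the class names are purely illustrative):

```python
class DummyLogger:
    """Dummy: passed only to satisfy a parameter list, never actually used."""

class StubRateSource:
    """Stub: returns a canned answer regardless of input."""
    def fetch_rate(self):
        return 0.15

class MockRateSource:
    """Mock: records interactions so the test can verify them afterwards."""
    def __init__(self):
        self.calls = 0

    def fetch_rate(self):
        self.calls += 1
        return 0.15

# A stub supports state testing; a mock additionally supports
# interaction testing ("was fetch_rate called exactly once?").
mock = MockRateSource()
mock.fetch_rate()
```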
Test doubles play a significant role in unit testing, as they help to imitate a certain portion of the code that our SUT depends upon. This imitation allows us to mimic the actual external interaction and focus on the behavior of that particular SUT.
For example, let us consider a SUT whose behavior we want to validate. Just like any other application, it performs IO operations to retrieve and send data. IO operations are slow in nature, and since the behavior we want to validate is IO-dependent, we have two possible choices: either perform the IO operation or imitate it. Performing the IO operation would significantly increase the time required to complete the unit test, which goes against its core principle. So we resort to a test double that imitates the IO operation and provides a dummy IO interaction to validate our behavior.
Along with that, the usage of test doubles also enforces good design patterns such as dependency injection.
Considering the above example, we concluded that a test double should imitate the IO operations. Had the dependent object responsible for IO been created within the SUT, we could never substitute it with a test double. So this can only be realized if we pass the instance of the test double externally and avoid its creation inside our SUT, which ultimately enforces the dependency injection pattern.
The above statements are illustrated with the help of the diagrams below.
Here, the test cases help to perform assertions on the behavior and state of the SUT. The dependency object of the given SUT, i.e. “networkClient”, needs to be polymorphic in nature to support substitution. So an “IService” type is introduced and implemented by the HttpClient class in production, as well as by the MockedService class while performing assertions.
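The arrangement in the diagram can be sketched as follows. This is a Python sketch of the Dart idea, reusing the diagram’s names (IService, HttpClient, MockedService); the RateFetcher SUT and its method are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class IService(ABC):
    """Polymorphic dependency type, so a substitute can stand in."""
    @abstractmethod
    def get(self, path: str) -> dict: ...

class HttpClient(IService):
    """Production implementation; the real network call is elided here."""
    def get(self, path: str) -> dict:
        raise NotImplementedError("would perform real IO")

class MockedService(IService):
    """Test double returning canned data instead of performing IO."""
    def get(self, path: str) -> dict:
        return {"rate": 0.15}

class RateFetcher:
    """The SUT: its networkClient dependency is injected, never
    constructed internally, so tests can substitute a double."""
    def __init__(self, network_client: IService):
        self.network_client = network_client

    def current_rate(self) -> float:
        return self.network_client.get("/rate")["rate"]

# In a test we substitute the double for the production client:
sut = RateFetcher(network_client=MockedService())
```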
In the first step, we assert “Case First” from the available test cases against the SUT. Since our SUT is not yet aware of the new test case, it is intended to fail. The failure is denoted by the red color of the SUT.
In the second step, we make the minimum change in the behavior of our SUT to pass the assertion made by “Case First”. The green color reflects the passing state of the SUT. Then, as the third step, we refactor the SUT to fit in with the rest of the production code without failing the assertion.
This completes an iteration; we then proceed to add another test case, i.e. “Case Second”, and repeat the process.
So this supports the fact that our tests are actually driving the design aspects of our code to some extent; hence Test Driven Development justifies its name.
And you might still be wondering,
“Why so much hard work when we could have just written the proper implementation in the first place?”
The reason behind all this tedious work is that, as the project evolves, each test case consolidates the behavior and allows addition/modification with minimum effort and without imposing any risk on other related behavior of the application.
For example, a newly recruited team member may be unaware of the currently implemented domain logic. In this scenario, the test cases serve as a technical document and explain the domain in a step-by-step manner. They also allow the newcomer to modify the code with confidence, as the existing test cases prevent any potential unwanted behavior change in the application.
We can further draw an analogy between playing Jenga and software development.
“Placing a block on top of the stack by picking an existing block from underneath the structure is roughly equivalent to adding/changing a feature during software development.”
If the process of adding a new block adheres to TDD, then the test cases serve as a guideline that never allows a mispositioned block to be placed on top of the stack and put the entire structure at risk. So even if our structure looks crooked, the risk of it falling is drastically reduced.
Now let’s proceed to unit test a use-case and an entity using the test-first approach for demonstration.
The sole purpose of a “use-case” class is to orchestrate and delegate responsibility to other objects. So we can refer to use-case classes as “Managers” and the classes they depend upon, such as the entities in the domain layer, as “Workers”.
So the test cases written for them are a mix of state testing and interaction testing.
EXAMPLE : As a feature, we need to allow customers to calculate EMI for the requested loan.
The requirement is described below.
- The customer specifies the amount and total time period for the loan.
- On the basis of the amount and time that the customer specifies, the corresponding rate configured at the server is returned as a response.
- Then the EMI needs to be calculated from the acquired values and shown to the customer.
We can now form a mental construct to implement this feature. So we have a domain entity model that expresses the behavior to calculate the EMI, and a use-case to orchestrate this flow. The UML for this use-case is illustrated below.
Let’s start with the “EMICalculator” domain entity, with its behavior expressed as
EMI = amount × rate × (1 + rate)^time / ((1 + rate)^time − 1).
The first step is to prepare the SUT and understand its context. Our SUT is responsible for calculating the EMI only, and its behavior directly depends upon the passed arguments. So the test cases for this SUT comprise only state testing of the passed arguments and behavior testing of the public method that calculates the EMI.
We’ll start with the bare minimum by defining EMICalculator with its related attributes only.
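The original Dart listing is shown as an image; a rough Python equivalent of this bare-minimum starting point might look like:

```python
class EMICalculator:
    """Bare minimum: attributes only, no behavior and no constraints yet."""
    def __init__(self, amount=None, rate=None, time=None):
        self.amount = amount
        self.rate = rate
        self.time = time

# Step 1 (red): the "mandatory fields" test fails, because nothing
# stops us from creating an instance without supplying any values:
calculator = EMICalculator()
# assert calculator.amount is not None  # would raise AssertionError
```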
Now we postulate the test cases for EMICalculator.
1) The amount, rate and time are mandatory fields.
It is intended to fail, as we have neither set any value nor provided a default value. This is the first step in the TDD cycle.
Now we want to fix this issue, guided by our test case.
Even if the change seems minuscule, we have agreed to acquire the values through the constructor and annotated them as final non-null fields. This prevents the creation of a new instance through a default constructor without any values, so the caller is forced to supply values while creating an instance.
In fact, this changes the design of our SUT, i.e. “EMICalculator”, and we need to update it accordingly as below.
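A sketch of the updated design, again in Python standing in for the Dart listing: Dart’s final, non-nullable fields are mirrored here as required keyword-only constructor arguments.

```python
class EMICalculator:
    """amount, rate and time are now mandatory constructor arguments,
    mirroring Dart's final, non-nullable fields."""
    def __init__(self, *, amount: float, rate: float, time: int):
        self.amount = amount
        self.rate = rate
        self.time = time

# EMICalculator() with no arguments now raises a TypeError,
# so the caller is forced to supply every value:
calculator = EMICalculator(amount=10_000, rate=0.15, time=24)
```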
Now, running the test, we get a passing result. This completes step 2 of the TDD cycle.
We don’t need any refactoring with the rest of the code due to the simplicity of this test case, so we skip step 3 of TDD here.
We’ll follow a similar iteration for the remaining state-related test cases for this SUT, and skip directly to the test case that validates the EMI calculation.
2) The EMI needs to be calculated using the formula EMI = amount × rate × (1 + rate)^time / ((1 + rate)^time − 1).
The first step in TDD is to fail the test case intentionally. So we just define the behavior of our SUT as “calculateEMI”, returning -1. As you might have guessed, our test case will assert that the result is greater than or equal to zero.
As we know, this is bound to fail. So we now add our formula as the implementation and change it until the test case passes, as follows.
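A Python sketch of the passing implementation (the series uses Dart). It assumes rate is the per-period rate as a decimal and time is the number of periods; the sample inputs are chosen only to make the arithmetic easy to check by hand.

```python
import math

class EMICalculator:
    def __init__(self, *, amount: float, rate: float, time: int):
        self.amount = amount
        self.rate = rate  # assumed: per-period rate, as a decimal
        self.time = time  # assumed: number of periods

    def calculate_emi(self) -> float:
        # EMI = amount * rate * (1 + rate)^time / ((1 + rate)^time - 1)
        growth = math.pow(1 + self.rate, self.time)
        return self.amount * self.rate * growth / (growth - 1)

emi = EMICalculator(amount=300, rate=1.0, time=2).calculate_emi()
# 300 * 1 * 2^2 / (2^2 - 1) = 400.0, which is >= 0, so the test passes
```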
We’ll omit step 3 of the TDD cycle due to its simplicity.
All the related test cases and the final design of EMICalculator are illustrated below.
Now, moving on to unit testing the corresponding use-case class, “FetchLoanDetail”. This is fairly simple, as we have defined the use-case to orchestrate different components only, and it doesn’t have any exclusive behavior attached to it.
We’ll start by mocking its dependency so that we can test it in isolation.
Here our SUT is the “FetchLoanDetail” use-case class. We’ll mock its dependencies to fake interactions, adjusting the mocked dependencies’ behavior and state to support our test cases. Since the test case will guide us in writing the implementation of the buildUseCase method, we initially return a UseCaseNotImplementedException as an error on the stream.
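A rough Python sketch of this starting skeleton (the Dart version emits the error on a stream; here it is simply raised, and the repository name is illustrative):

```python
class UseCaseNotImplementedException(Exception):
    """Signals that buildUseCase has no real implementation yet."""

class RateFetchRepository:
    """Abstract dependency of the use-case; mocked in the tests."""
    def fetch_rate(self, amount, time):
        raise NotImplementedError

class FetchLoanDetail:
    """The SUT: the repository is injected so tests can substitute it."""
    def __init__(self, repository: RateFetchRepository):
        self.repository = repository

    def build_use_case(self, amount, time):
        # Step 1 (red): no implementation yet; every test case observes
        # this error, analogous to emitting it on the stream in Dart.
        raise UseCaseNotImplementedException()
```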
The mocked objects can be created at runtime with the help of mocking frameworks such as mockito. However, we’ll stick to the manual process for the sake of the example.
Let’s now conceptualize a test scenario for this use-case along with different variants of the mocked dependency as an illustration.
“If the rate can be fetched, then the EMI needs to be calculated and shown to the user; otherwise conclude with an error.”
So we’ll create a “MockedSucessRateFetchRepository” class to fake the success interaction and a “MockedFailedRateFetchRepository” class to fake the failure interaction, supporting the above-mentioned test case.
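These two doubles, sketched in Python (class names taken from the article; the method signature and error type are assumptions):

```python
class MockedSucessRateFetchRepository:
    """Fakes the success interaction: always returns a configured rate."""
    def fetch_rate(self, amount, time):
        return 0.15

class MockedFailedRateFetchRepository:
    """Fakes the failure interaction: always raises an error."""
    def fetch_rate(self, amount, time):
        raise ConnectionError("rate could not be fetched")
```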
We’ll start validating the flow of the use-case by using the mocked dependencies with predefined behavior.
- “MockedSucessRateFetchRepository” is verified for the success interaction.
The reason is obvious: we initially returned a “UseCaseNotImplementedException” on the stream, as we didn’t have any implementation at first.
This allowed us to conclude that the use-case had a flaw even when we used a mocked object with valid behavior.
Let’s make a minimum change to the use-case so that it passes the test.
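One way this minimum change could look, sketched in Python (the LoanDetailResponse shape and error handling are illustrative assumptions; the Dart version would emit the response or error on the stream instead):

```python
class LoanDetailResponse:
    """Carries either the fetched rate or the error (illustrative shape)."""
    def __init__(self, rate=None, error=None):
        self.rate = rate
        self.error = error

class MockedSucessRateFetchRepository:
    def fetch_rate(self, amount, time):
        return 0.15

class FetchLoanDetail:
    def __init__(self, repository):
        self.repository = repository

    def build_use_case(self, amount, time):
        # Minimum change: delegate to the repository and wrap the outcome
        # instead of raising UseCaseNotImplementedException.
        try:
            return LoanDetailResponse(rate=self.repository.fetch_rate(amount, time))
        except Exception as error:
            return LoanDetailResponse(error=error)

response = FetchLoanDetail(MockedSucessRateFetchRepository()).build_use_case(10_000, 24)
# response.rate == 0.15 and response.error is None
```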
Due to the simplicity of this use-case, we don’t need further refactoring, so we skip directly to the next test case.
- Similarly, “MockedFailedRateFetchRepository” is verified for the failed interaction.
The fixes applied while validating the success interaction sufficed and covered this test case as well, so we don’t need further changes for this case.
So we have verified the external interactions for this use-case through unit tests.
Now we postulate a test case that represents the whole flow of this use-case and perform an assertion against real data.
“EMI must be 150391.64490861617 for amount = 10,000, rate = 15%, time = 24 months”
The first step is intended to fail, as we have not yet changed the implementation detail to accommodate this test case.
Let’s proceed to integrate the domain entity “EMICalculator” to add the required behavior and make the test case pass.
Here we modified the “LoanDetailResponse” class to integrate the “EMICalculator” domain entity into our use-case.
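A sketch of that integration in Python: the response now delegates the EMI calculation to the domain entity. The sample inputs are illustrative (a 100% per-period rate over 2 periods, chosen only so the expected value is easy to verify by hand), not the values from the article’s test case.

```python
import math

class EMICalculator:
    def __init__(self, *, amount, rate, time):
        self.amount, self.rate, self.time = amount, rate, time

    def calculate_emi(self):
        growth = math.pow(1 + self.rate, self.time)
        return self.amount * self.rate * growth / (growth - 1)

class LoanDetailResponse:
    """Now delegates the EMI calculation to the domain entity."""
    def __init__(self, amount, rate, time):
        self.rate = rate
        self.emi = EMICalculator(amount=amount, rate=rate, time=time).calculate_emi()

response = LoanDetailResponse(300, 1.0, 2)
# response.emi == 300 * 1 * 4 / 3 == 400.0
```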
Running the test, we get a passing result as below.
All the test cases related to FetchLoanDetail are listed below.
For the sake of demonstration, I’ve kept the example as simple as I could and highlighted the basics of TDD with a focus on unit testing. The overall intent was to develop intuition rather than focus on a particular testing framework. In practice, however, we want to rely more on testing frameworks and libraries to speed up this process, which is left as an exercise. And this concludes part 2 of the “Flutter for Enterprise” series.