
Exploring Valuable Test Cases in the Android Official MVP Project: A Comprehensive Guide to Unit Testing


This article serves as an appendix to "Interpreting the Unit Testing of the Android Official MVP Project" (hereinafter referred to as "Interpreting"). Its purpose is twofold. First, the unit tests in this project are comprehensive, with high coverage and high learning value, so I describe every test case in order to force myself to examine them thoroughly. Second, this kind of material is inevitably somewhat tedious; I tried hard to make it more readable and found that quite challenging, so this appendix has been slightly reworked around the theme of "writing valuable test cases". Regardless, this MVP project and its unit test cases offer many insights for our daily work, so it is worth a quick read.

What are valuable test cases?

Taking this project as an example, I believe that test case design cannot be separated from two perspectives: architecture and business.

1. Architectural Level

Different architectures, such as MVC (Model-View-Controller) and MVP (Model-View-Presenter), call for different ways of writing test cases. For example, in the todo-mvp project, as mentioned in "Interpreting", testing a feature requires the collaboration of all three layers of the MVP architecture, each with its own responsibilities yet interconnected.

Presenter layer: This layer is quite straightforward. We design test cases for each interface method and for every logical path inside each method. It is worth noting that at this layer we do not make assertions on input and output values; instead we verify that the logic of the View and Model layers is invoked correctly.
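To make this concrete, here is a minimal sketch (JUnit 4 + Mockito) of what a Presenter-layer test looks like in this style: the Model and View are mocked, and the test verifies interactions only, never data values. The names (TasksPresenter, TasksRepository, TasksContract.View, loadTasks, getTasks, setLoadingIndicator) follow the blueprint project, but treat the exact signatures as assumptions rather than the project's literal code.

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

// Presenter-layer sketch: verify collaboration with the mocked View and Model.
@RunWith(MockitoJUnitRunner.class)
public class PresenterVerificationSketchTest {

    @Mock private TasksRepository mTasksRepository; // Model layer, mocked
    @Mock private TasksContract.View mTasksView;    // View layer, mocked

    @Test
    public void loadTasks_coversViewAndModelLogic() {
        TasksPresenter presenter = new TasksPresenter(mTasksRepository, mTasksView);

        presenter.loadTasks(true);

        // No assertions on data; only verify that the Presenter drove
        // the View and Model layers through the expected calls.
        verify(mTasksView).setLoadingIndicator(true);
        verify(mTasksRepository).getTasks(any(TasksDataSource.LoadTasksCallback.class));
    }
}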

Model layer: As with the Presenter layer, we design test cases for each method in the Model layer. Unlike the Presenter layer, this layer does require assertions on the correctness of input and output data.

View layer: We'll discuss this layer at the business level.

2. Business Level

Conducting unit tests with a focus on testing business logic is crucial. The View layer bears this responsibility. When designing test cases for this layer, avoid overthinking and approach it from the perspective of normal usage of the application, translating interaction behaviors into Espresso code. As the View layer serves as the entry point, when an interaction behavior occurs, the Presenter starts coordinating the View and Model layers to execute their respective logic. Therefore, from this perspective, testing the View layer covers the logic of all three layers in the MVP architecture.
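As an illustration, here is a minimal Espresso sketch of a View-layer test in this spirit: perform the interaction the way a user would, then assert on what the screen shows. It assumes the AndroidX test libraries, and the resource ids (fab_add_task, add_task_title, fab_edit_task_done) follow the blueprint layouts but should be read as assumptions.

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.espresso.matcher.ViewMatchers.withText;

import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.LargeTest;
import androidx.test.rule.ActivityTestRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

// View-layer sketch: drive the UI like a user and assert on what is shown.
@RunWith(AndroidJUnit4.class)
@LargeTest
public class AddTaskSketchTest {

    @Rule
    public ActivityTestRule<TasksActivity> mActivityRule =
            new ActivityTestRule<>(TasksActivity.class);

    @Test
    public void addTask_showsTaskInList() {
        // Interaction: tap the FAB, type a title, save.
        onView(withId(R.id.fab_add_task)).perform(click());
        onView(withId(R.id.add_task_title))
                .perform(typeText("Buy milk"), closeSoftKeyboard());
        onView(withId(R.id.fab_edit_task_done)).perform(click());

        // Assertion: the new task appears in the list.
        onView(withText("Buy milk")).check(matches(isDisplayed()));
    }
}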

Having discussed valuable test cases, let's look at what worthless test cases look like, for example the following types:

● Testing mature utility classes

● Testing simple methods (such as getter and setter methods)

● Redundant testing across MVP layers, such as making assertions about input and output correctness in the Presenter layer

Next, I will comprehensively showcase all the unit test cases in this MVP project, divided into three categories: test cases under androidTest, androidTestMock, and test. If you find it tedious to read through all the test cases, you can directly read the overview section at the beginning of each test class.

Testing under the androidTest folder

View Layer: AppNavigationTest

Overview: This test case conducts navigation testing, specifically focusing on the functionality of DrawerLayout, including opening, closing, and launching the corresponding Activity after clicking an Item.

Significance: This provides insights on how to design valuable test cases for DrawerLayout.

clickOnStatisticsNavigationItem_ShowsStatisticsScreen

Open the left drawer -> click the Statistics item -> assert that StatisticsActivity has opened

clickOnListNavigationItem_ShowsListScreen

Open the left drawer -> click the Statistics item -> open the left drawer again -> click the To-Do List item -> assert that TasksActivity has opened

clickOnAndroidHomeIcon_OpensNavigation

Verify opening and closing the left drawer through the ActionBar home icon
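For reference, here is a minimal sketch of how such a DrawerLayout navigation test can be written with Espresso's espresso-contrib helpers (DrawerActions, DrawerMatchers, NavigationViewActions). The resource ids (drawer_layout, nav_view, statistics_navigation_menu_item, statistics) follow the blueprint but are assumptions here, not the project's literal code.

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import androidx.test.espresso.contrib.DrawerActions;
import androidx.test.espresso.contrib.DrawerMatchers;
import androidx.test.espresso.contrib.NavigationViewActions;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.rule.ActivityTestRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

// Navigation sketch: open the drawer, click a menu item, assert the new screen.
@RunWith(AndroidJUnit4.class)
public class DrawerNavigationSketchTest {

    @Rule
    public ActivityTestRule<TasksActivity> mActivityRule =
            new ActivityTestRule<>(TasksActivity.class);

    @Test
    public void clickStatisticsItem_opensStatisticsScreen() {
        // Open the navigation drawer and check that it is really open.
        onView(withId(R.id.drawer_layout))
                .perform(DrawerActions.open())
                .check(matches(DrawerMatchers.isOpen()));

        // Click the Statistics entry in the NavigationView.
        onView(withId(R.id.nav_view))
                .perform(NavigationViewActions.navigateTo(R.id.statistics_navigation_menu_item));

        // Assert that the statistics screen is displayed by checking a view
        // that only exists on that screen.
        onView(withId(R.id.statistics)).check(matches(isDisplayed()));
    }
}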

View Layer: TasksScreenTest

Overview: This test class covers UI functionality testing for the task list and task detail pages, including all interactions on those pages: adding, deleting, editing, and searching tasks, changing task status, filtering the task list, and so on. It also verifies the effect of screen orientation changes on the UI data state.

Significance: This tells us how to design valuable functional interface test cases.

clickAddTaskButton_opensAddTaskUi

Click the add button -> assert that the corresponding Activity has opened

addTaskToTasksList

Add a to-do task with Title 1 and return to the list page -> assert that Title 1 is displayed

editTask

Add a to-do task with Title 1 and return to the list page -> click the item to open the detail page -> click the edit button -> change the title to Title 2 -> click save -> assert that Title 1 no longer exists and Title 2 does

markTaskAsComplete

Add a task and click its checkbox to mark it as completed -> switch to the All/Active/Completed views through the filter and assert whether the task is shown in each

markTaskAsActive

Test marking a task as active, using the same approach as the previous case

showAllTasks

Add 2 tasks -> switch to the All view -> assert that both tasks are shown on screen

showActiveTasks

Add 2 tasks -> switch to the Active view -> assert that both tasks are shown on screen

showCompletedTasks

Add 2 tasks -> mark both as completed -> switch to the Completed view -> assert that both tasks are shown on screen

clearCompletedTasks

Add 2 tasks -> mark both as completed -> click the Clear completed menu item -> assert that neither task is shown

createOneTask_deleteTask

Add 1 task -> click the task to open the detail page -> click delete -> assert that the task no longer exists

createTwoTasks_deleteOneTask

Create 2 tasks -> delete the 2nd one -> assert that the 1st one still exists and the 2nd one does not

markTaskAsCompleteOnDetailScreen_taskIsCompleteInList

Create 1 task -> click the task to open the detail page -> mark it as completed -> return to the list page -> assert that the task's checkbox is checked

markTaskAsActiveOnDetailScreen_taskIsActiveInList

Create 1 task -> check it as completed on the list page -> uncheck it on the detail page -> return to the list page -> assert that the task's checkbox is unchecked

markTaskAsCompleteAndActiveOnDetailScreen_taskIsActiveInList

Create 1 task -> click the checkbox twice on the detail page -> return to the list page -> assert that the task's checkbox is unchecked

markTaskAsActiveAndCompleteOnDetailScreen_taskIsCompleteInList

Create 1 task -> mark it as completed -> click the checkbox twice on the detail page -> return to the list page -> assert that the task's checkbox is checked

orientationChange_FilterActivePersists

Create 1 task -> mark it as completed -> switch to the Active view -> verify that the task is not shown -> rotate between landscape and portrait -> assert that the task state is the same as before

orientationChange_FilterCompletedPersists

Create 1 task -> mark it as completed -> switch to the Completed view -> verify that the task is shown -> rotate between landscape and portrait -> assert that the task state is the same as before

Model Layer: TasksLocalDataSourceTest

Overview: This test case focuses on testing the addition, deletion, modification, and querying of tasks in the database, as well as changing the task status.

Significance: When testing the Create, Read, Update, and Delete (CRUD) operations in the database, it is important to perform these tests together and assert the results accordingly. This test case serves as a good example of how to do this.

saveTask_retrievesTask

Test purpose: Verify the logic of saving a Task to the database

Test case: Instantiate a Task object -> save it -> retrieve the task by ID -> assert in the callback that it matches the saved Task

completeTask_retrievedTaskIsComplete

Test purpose: Verify the logic of setting a task to the completed state

Test case: Save a Task object -> trigger the complete-task logic -> retrieve the task by ID -> assert in the callback that the task is completed

activateTask_retrievedTaskIsActive

Test purpose: Verify the logic of setting a task to the active state

Test case: Mock a callback object -> save a Task object -> trigger the complete-task logic -> trigger the activate-task logic -> retrieve the task by ID -> verify that the callback executed the onTaskLoaded logic

clearCompletedTask_taskNotRetrievable

Test purpose: Verify the logic of clearing all completed tasks

Test case: Mock three callback objects, callback1 to callback3 -> save Task 1, Task 2, and Task 3, where Task 1 and Task 2 are completed and Task 3 is active -> clear all completed tasks -> retrieve the three tasks by their IDs -> verify that callback1 and callback2 executed the onDataNotAvailable logic -> verify that callback3 executed the onTaskLoaded logic

deleteAllTasks_emptyListOfRetrievedTask

Test purpose: Verify the logic of deleting all tasks from the database

Test case: Save a task -> mock a callback object -> delete all tasks -> get the task list -> verify that the callback executed the onDataNotAvailable logic

getTasks_retrieveSavedTasks

Test purpose: Verify the logic of retrieving all tasks from the database

Test case: Save 2 tasks -> get the task list -> assert in the callback that both tasks exist
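As a reference, here is a minimal sketch of such a local data source test: it runs as an instrumented test against a real database and, unlike the Presenter tests, asserts on actual input/output data inside the callback. The getInstance() factory and the callback interface follow the blueprint, but treat the exact signatures as assumptions.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.platform.app.InstrumentationRegistry;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

// Model-layer sketch: assert on real data written to and read from the database.
@RunWith(AndroidJUnit4.class)
public class LocalDataSourceSketchTest {

    private TasksLocalDataSource mLocalDataSource;

    @Before
    public void setup() {
        mLocalDataSource = TasksLocalDataSource.getInstance(
                InstrumentationRegistry.getInstrumentation().getTargetContext());
        mLocalDataSource.deleteAllTasks(); // start from a clean database
    }

    @Test
    public void saveTask_retrievesTask() {
        final Task newTask = new Task("title", "description");
        mLocalDataSource.saveTask(newTask);

        // The local data source invokes its callback synchronously here,
        // so asserting inside the callback is safe in this sketch.
        mLocalDataSource.getTask(newTask.getId(), new TasksDataSource.GetTaskCallback() {
            @Override
            public void onTaskLoaded(Task task) {
                assertEquals(newTask.getId(), task.getId());
                assertEquals("title", task.getTitle());
            }

            @Override
            public void onDataNotAvailable() {
                fail("Callback error: the saved task should be available");
            }
        });
    }
}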

Testing under the androidTestMock folder

In "Interpreting", I mentioned that the primary purpose of this folder is to fake network requests: instead of sending real network requests, it returns pre-defined data.

View Layer: AddEditTaskScreenTest

errorShownOnEmptyTask

Test purpose: Verify that when saving or editing a task with an empty title, a Snackbar pops up saying that the title cannot be empty

Test case: Open the add/edit task page -> enter an empty title and description -> click Save -> verify that the Snackbar is displayed by matching its message text
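For illustration, a minimal sketch of the Snackbar check: Espresso has no dedicated Snackbar matcher, so matching the message text is enough to prove the Snackbar is on screen. The activity, view id, and string resource names follow the blueprint but are assumptions here.

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.espresso.matcher.ViewMatchers.withText;

import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.rule.ActivityTestRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

// Snackbar sketch: save with an empty title and match the error message text.
@RunWith(AndroidJUnit4.class)
public class EmptyTaskSnackbarSketchTest {

    @Rule
    public ActivityTestRule<AddEditTaskActivity> mActivityRule =
            new ActivityTestRule<>(AddEditTaskActivity.class);

    @Test
    public void emptyTask_showsSnackbarError() {
        // Try to save without entering a title or description.
        onView(withId(R.id.fab_edit_task_done)).perform(click());

        // Matching the Snackbar by its message text.
        onView(withText(R.string.empty_task_message)).check(matches(isDisplayed()));
    }
}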

View Layer: StatisticsScreenTest

tasks_ShowsNonEmptyMessage

Open the statistics screen with two tasks faked in advance, one Completed and one Active -> assert that both statistics rows are displayed

View Layer: TaskDetailScreenTest

Overview: Fakes tasks in different states and asserts on their titles, descriptions, and states on the detail page.

Significance: Shows us how to fake network request data.

activeTaskDetails_DisplayedInUi

Fake a task whose status is Active -> open the detail page -> assert on the title, description, and task status

completedTaskDetails_DisplayedInUi

Fake a task whose status is Completed -> open the detail page -> assert on the title, description, and task status

orientationChange_MenuAndTaskPersist

The orientation-change testing technique is the same as in TasksScreenTest and is not repeated here.

Testing under the test folder

Presenter Layer: AddEditTaskPresenterTest

Overview: Once we move to the Presenter layer, we no longer assert on input and output; instead we verify that the logic of the View and Model layers is covered correctly. AddEditTaskPresenter has three methods, createTask, updateTask, and populateTask, corresponding to adding, modifying, and displaying a task. Adding a task has both success and failure scenarios, so there are four test cases in total.

Significance: These Presenter-layer tests teach us how to mock, how to verify, how to test asynchronous callbacks, and how to fully cover all logical paths of the Presenter layer.

saveNewTaskToRepository_showsSuccessMessageUi

Create a Presenter and execute the create-task logic -> verify that the Model layer executes the save logic -> verify that the View layer executes the show-task-list logic

saveTask_emptyTaskShowsErrorUi

Create a Presenter and execute the create-task logic with an empty title -> verify that the View layer executes the show-error logic

saveExistingTaskToRepository_showsSuccessMessageUi

This case verifies the update-task logic; the testing method is the same as the first case.

populateTask_callsRepoAndUpdatesView

Test purpose: Verify that the task information displayed on the detail page is correct

Test case: The Presenter executes populateTask() -> verify that getTask() is called with the correct parameters -> verify that the callback executes the correct logic -> verify that the View layer displays the correct Task data
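To make the asynchronous-callback technique concrete, here is a minimal sketch using Mockito's ArgumentCaptor: the callback handed to the mocked repository is captured and invoked by hand, and then the View interactions are verified. The class names and constructor/method signatures follow the blueprint but should be treated as assumptions.

import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

// Asynchronous-callback sketch: capture the repository callback and answer it manually.
public class PopulateTaskSketchTest {

    @Mock private TasksRepository mTasksRepository;
    @Mock private AddEditTaskContract.View mAddEditTaskView;

    @Captor private ArgumentCaptor<TasksDataSource.GetTaskCallback> mGetTaskCallbackCaptor;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        // The Presenter only updates the View when it reports itself as active.
        when(mAddEditTaskView.isActive()).thenReturn(true);
    }

    @Test
    public void populateTask_callsRepoAndUpdatesView() {
        Task testTask = new Task("TITLE", "DESCRIPTION");
        AddEditTaskPresenter presenter = new AddEditTaskPresenter(
                testTask.getId(), mTasksRepository, mAddEditTaskView);

        presenter.populateTask();

        // 1. The Presenter asked the Model layer for the right task.
        verify(mTasksRepository).getTask(eq(testTask.getId()), mGetTaskCallbackCaptor.capture());

        // 2. Simulate the asynchronous answer from the repository.
        mGetTaskCallbackCaptor.getValue().onTaskLoaded(testTask);

        // 3. The Presenter pushed the data into the View layer.
        verify(mAddEditTaskView).setTitle(testTask.getTitle());
        verify(mAddEditTaskView).setDescription(testTask.getDescription());
    }
}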

Presenter Layer: StatisticsPresenterTest

Overview: The Presenter interface of this class is relatively simple, with only one entry method, start, which executes the logic of loading statistics. The execution involves three paths: loading an empty task list, loading a non-empty task list, and data being unavailable, corresponding to test cases 1, 2, and 3, respectively.

loadEmptyTasksFromRepository_CallViewToDisplay

Verify loading an empty task list

loadNonEmptyTasksFromRepository_CallViewToDisplay

Verify loading a non-empty task list

loadStatisticsWhenTasksAreUnavailable_CallErrorToDisplay

Verify the data-unavailable path

Presenter Layer: TaskDetailPresenterTest

Overview: This Presenter has 5 methods, which are:

● start: Displays task details, involving three paths: displaying an Active task, displaying a Completed task, and displaying a task with an invalid ID, corresponding to test cases 1, 2, and 3

● deleteTask: Deletes the task, corresponding to the 4th test case

● completeTask: Completes the task, corresponding to the 5th test case

● activateTask: Activates the task, corresponding to the 6th test case

● editTask: Edits the task, corresponding to the 7th test case; editing a task with an invalid ID corresponds to the 8th test case

getActiveTaskFromRepositoryAndLoadIntoView

getCompletedTaskFromRepositoryAndLoadIntoView

getUnknownTaskFromRepositoryAndLoadIntoView

deleteTask

completeTask

activateTask

activeTaskIsShownWhenEditing

invalidTaskIsNotShownWhenEditing

Presenter Layer: TasksPresenterTest

Overview: The testing of TasksPresenter is similar to the previous class. Starting from the interface methods (this class has 10 of them), 8 test cases were designed, covering displaying the All/Active/Completed task lists, clicking to open the task detail page, changing task status, and so on.

loadAllTasksFromRepositoryAndLoadIntoView

loadActiveTasksFromRepositoryAndLoadIntoView

loadCompletedTasksFromRepositoryAndLoadIntoView

clickOnFab_ShowsAddTaskUi

clickOnTask_ShowsDetailUi

completeTask_ShowsTaskMarkedComplete

activateTask_ShowsTaskMarkedActive

unavailableTasks_ShowsError

Model Layer: TasksRepositoryTest

Overview: This test class is very comprehensive and has high learning value for testing techniques such as how to design test cases around expired (dirty) cached data and how data is fetched from the local or remote source; see the sketch after the list below.

getTasks_repositoryCachesAfterFirstApiCall

getTasks_requestsAllTasksFromLocalDataSource

saveTask_savesTaskToServiceAPI

completeTask_completesTaskToServiceAPIUpdatesCache

completeTaskId_completesTaskToServiceAPIUpdatesCache

activateTask_activatesTaskToServiceAPIUpdatesCache

activateTaskId_activatesTaskToServiceAPIUpdatesCache

getTask_requestsSingleTaskFromLocalDataSource

deleteCompletedTasks_deleteCompletedTasksToServiceAPIUpdatesCache

deleteAllTasks_deleteTasksToServiceAPIUpdatesCache

deleteTask_deleteTaskToServiceAPIRemovedFromCache

getTasksWithDirtyCache_tasksAreRetrievedFromRemote

getTasksWithLocalDataSourceUnavailable_tasksAreRetrievedFromRemote

getTasksWithBothDataSourcesUnavailable_firesOnDataUnavailable

getTaskWithBothDataSourcesUnavailable_firesOnDataUnavailable

getTasks_refreshesLocalDataSource
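As an illustration of the caching technique, here is a minimal sketch along the lines of getTasks_repositoryCachesAfterFirstApiCall: both data sources are mocked, the first call is answered through captured callbacks, and the second call must be served from the in-memory cache so that neither data source is queried again. The names (TasksRepository.getInstance, destroyInstance, TasksDataSource.LoadTasksCallback) follow the blueprint but are assumptions here.

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.Collections;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

// Repository sketch: the second getTasks() call must be served from the cache.
public class RepositoryCacheSketchTest {

    @Mock private TasksDataSource mTasksRemoteDataSource;
    @Mock private TasksDataSource mTasksLocalDataSource;
    @Mock private TasksDataSource.LoadTasksCallback mLoadTasksCallback;

    @Captor private ArgumentCaptor<TasksDataSource.LoadTasksCallback> mCallbackCaptor;

    private TasksRepository mTasksRepository;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        mTasksRepository = TasksRepository.getInstance(
                mTasksRemoteDataSource, mTasksLocalDataSource);
    }

    @After
    public void tearDown() {
        TasksRepository.destroyInstance(); // reset the singleton between tests
    }

    @Test
    public void getTasks_repositoryCachesAfterFirstApiCall() {
        List<Task> tasks = Collections.singletonList(new Task("Title1", "Description1"));

        // First call: the local source misses, the remote source answers with data.
        mTasksRepository.getTasks(mLoadTasksCallback);
        verify(mTasksLocalDataSource).getTasks(mCallbackCaptor.capture());
        mCallbackCaptor.getValue().onDataNotAvailable();
        verify(mTasksRemoteDataSource).getTasks(mCallbackCaptor.capture());
        mCallbackCaptor.getValue().onTasksLoaded(tasks);

        // Second call: served from the in-memory cache.
        mTasksRepository.getTasks(mLoadTasksCallback);

        // Each data source was queried exactly once in total.
        verify(mTasksRemoteDataSource, times(1)).getTasks(any(TasksDataSource.LoadTasksCallback.class));
        verify(mTasksLocalDataSource, times(1)).getTasks(any(TasksDataSource.LoadTasksCallback.class));
    }
}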

 

[Disclaimer: This article is authorized and reprinted from JianShu, written by geniusmart. Unauthorized reproduction is prohibited.]

Original article link: http://www.jianshu.com/p/0429498d302b
