Can we conduct a "logic test" on the concepts defined in requirements before the product manager organizes the requirements review meeting?
We can communicate with the product manager in advance about the requirements specification document, propose revisions, or put forward targeted acceptance test points for their confirmation. This early collaboration between testers and product managers on feature requirements lays a solid foundation for subsequent development and testing.
During the requirements review meeting, we can review the requirements specification alongside the corresponding acceptance test points, and even related defects that have actually occurred in the past, before developers start thinking about implementation. A combined table of requirements and acceptance test points helps the team check quality risks and acceptance test points synchronously during the review, significantly improving review efficiency.
This approach ensures that developers keep the key test scenarios in mind from the start, so the delivered feature has almost no low-level quality issues (from the tester's perspective).
It’s worth noting that the test points sorted out by testers in advance can be confirmed by the product team and become User Acceptance Test (UAT) cases, or refined into regular acceptance test (AT) cases for developers to self-test. These test cases serve as a shared reference for the entire team, eliminating misunderstandings of requirements across different roles.
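To make this concrete, here is a minimal sketch of an acceptance test point turned into an executable AT case for developer self-testing. The feature (a percentage coupon) and the `apply_coupon` function are illustrative assumptions, not taken from the source text.

```python
# Hypothetical feature under test: applying a percentage coupon to a cart.
# apply_coupon is a toy stand-in for the real implementation.

def apply_coupon(cart_total: float, coupon_percent: int) -> float:
    """Return the payable amount after applying a percentage coupon."""
    if not 0 <= coupon_percent <= 100:
        raise ValueError("coupon_percent must be between 0 and 100")
    return round(cart_total * (100 - coupon_percent) / 100, 2)

def test_coupon_applies_expected_discount():
    # Given a cart totaling 200.00 and a 15% coupon,
    # when the coupon is applied,
    # then the payable amount is 170.00.
    assert apply_coupon(200.00, 15) == 170.00

def test_invalid_coupon_is_rejected():
    # Acceptance point confirmed with the product side (assumed here):
    # out-of-range coupons must be rejected, not silently clamped.
    try:
        apply_coupon(200.00, 150)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Because the same case can be read by the product manager (as a UAT scenario) and run by developers (as an AT case), it serves as the shared reference the text describes.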
In agile R&D teams, we can set an entry threshold for completing requirements review, the Definition of Ready (DoR), a discipline the entire team abides by. The DoR typically requires that the acceptance test cases for user stories be confirmed and completed by the team. Once acceptance test cases become the common delivery language across the whole team, they greatly drive all roles to prioritize quality from the start, clarify how their work aligns with quality requirements, and avoid large discrepancies in how different roles understand requirement quality.
Acceptance test cases run through the entire iteration process: from refining acceptance criteria and breaking down user stories, to development coding, testing, card verification, and finally launch, demonstration and acceptance.
First and foremost, the design quality of requirements is not the sole responsibility of the product manager; every member of the feature team is accountable for high-quality requirement design. It is nearly impossible for one person to produce a high-quality product design plan that comprehensively considers market, user, technical and other factors. The team needs to work together to help the product owner quickly improve the quality of requirement design and to define document quality standards that meet the required level.
Agile teams do not pursue exhaustive documentation, but rather a clear definition of boundaries and risks. The specific "mandatory content" can be customized by frontline teams based on their business characteristics and scale. For example, for newly launched products, requirements review focuses more on getting the core features right than on catching every missing one.
In addition to including "completion of acceptance test cases" in DoR, we can also incorporate other effective disciplines for improving requirement quality into DoR, making the whole team jointly responsible for the refinement of requirements. Only when all DoR items are completed does the requirements review phase end, and the team can enter the development and design phase.
| No. | DoR Items |
| --- | --- |
| 1 | PRD and prototypes are delivered |
| 2 | UI design and interaction drafts are complete |
| 3 | User stories, business logic and acceptance criteria are clear |
| 4 | Priorities are defined |
| 5 | Development time estimation is available (frontend and backend for DEV) |
| 6 | Testing time estimation is available (TEST) |
| 7 | Quantifiable indicators for requirement value and benefits are set |
| 8 | Dependent parties are identified, communicated with, and clear interface persons are assigned |
| 9 | Expected business launch time is determined |
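The gating rule, that the team enters development only when every DoR item is done, can be sketched as an executable checklist. This is an assumed illustration, not a tool from the source; the item wording follows the table above.

```python
# Treat the DoR table as an executable checklist: the requirements review
# phase ends only when every item is marked done.

DOR_ITEMS = [
    "PRD and prototypes are delivered",
    "UI design and interaction drafts are complete",
    "User stories, business logic and acceptance criteria are clear",
    "Priorities are defined",
    "Development time estimation is available",
    "Testing time estimation is available",
    "Quantifiable indicators for requirement value and benefits are set",
    "Dependent parties identified and interface persons assigned",
    "Expected business launch time is determined",
]

def ready_for_development(done_items: set) -> bool:
    """Return True only when every DoR item is completed."""
    return all(item in done_items for item in DOR_ITEMS)

def missing_items(done_items: set) -> list:
    """List the DoR items still blocking the review from closing."""
    return [item for item in DOR_ITEMS if item not in done_items]
```

In practice this lives in a tracker or a review checklist rather than code; the point is that readiness is all-or-nothing, never partial.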
To drive the team to deliver results in requirements review and acceptance testing, we can use two core metrics:
As the saying goes, "sharpening the axe won't delay the woodcutting": more thinking and discussion during the requirement clarification phase drastically reduces the cost of building quality into the product later.
In industry practice, UAT is usually the final checkpoint before product release. It allows end users to confirm whether the product meets requirements through acceptance testing, while enabling the R&D and product team to obtain valuable feedback and clarify future improvement directions. End users should be typical real users of the product and, if applicable, the paying party for contract delivery.
If real users cannot be involved in UAT, we need to simulate user behavior to complete the test (note that simulating users is not easy). We must identify the different user types and roles and cover them in UAT. This is the role-playing exploratory testing method: think about what each kind of user values most and how they will act.
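Role-playing coverage can be sketched as enumerating personas and crossing them with the UAT scenarios, so that no role's perspective is skipped. The personas and their concerns below are illustrative assumptions, not examples from the source.

```python
# Sketch of role-playing exploratory testing: define user roles and what
# each values most, then run every UAT scenario from every role's view.
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    role: str
    cares_about: str  # what this user values most

# Hypothetical personas for illustration only.
PERSONAS = [
    Persona("new user", "a guided, low-friction first experience"),
    Persona("power user", "shortcuts and bulk operations"),
    Persona("administrator", "audit logs and permission controls"),
]

def uat_coverage(scenarios: list) -> list:
    """Cross every scenario with every persona so no role is skipped."""
    return [(p.role, s) for p in PERSONAS for s in scenarios]
```

Each (role, scenario) pair then becomes an exploratory charter: act as that role, focus on what it cares about, and note anything that would surprise such a user.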
UAT often incurs high costs, as it requires involving business representatives or real users; if the expected benefits are not achieved, it leads to complaints and delays the product's market launch. For the overseas business of domestic R&D and product departments, UAT is even more important, yet time-consuming and labor-intensive. Overseas UAT participants can indeed identify many problems unique to overseas markets, but time zone differences, language barriers, network environments, cross-border communication and unfamiliarity with the product all become obstacles.
From the perspective of the R&D and product team, there are multiple ways to collect user feedback on products, each with its own advantages and disadvantages:
Compared with the above methods, UAT's advantage is that it collects detailed feedback from a group of real users in a concentrated manner and confirms there are no obvious omissions before launch. Its disadvantages are a prolonged full-release timeline, high organizational costs, and feedback that is often of limited value.
An efficient UAT mechanism is essential to satisfy both the R&D/product side and the business side.
A sustainable and efficient UAT mechanism focuses on establishing several key disciplines:
| UAT Required | UAT Not Required |
| --- | --- |
| Core product versions from 0 to 1 | Routine iterative requirements for mature products |
| Selling points with drastic changes in the interactive interface | Requirements that are difficult to observe from the user's perspective |
| Requirements involving huge changes in customer operation processes | Products with comprehensive health monitoring systems |
| Marketing campaign requirements involving large sums of money | - |
| Other high-risk requirements related to security/operations | - |
To avoid delaying the release schedule, we must define red lines for UAT failure (requiring immediate fixes), such as:
For internet products, the principle of "small steps, fast iterations, trial and error" applies. For ordinary defects or experience issues that do not cross a red line, the product owner has the final say on whether to fix them before release.
Business representatives (user representatives) should have access to the requirements document during the requirement discussion phase, confirm the scope of features going live, and ideally participate in the demonstration of iterative versions. This enables early feedback and prevents new requirements from being raised at the UAT stage.
To improve UAT execution efficiency:
Given the high cost of UAT execution, the technical team should prepare the following before UAT starts to ensure efficiency: