Requirements Review and Acceptance Testing: The Key to Shifting Quality Left

Learn how to use acceptance testing and requirements review to shift quality left. Discover DoR examples, UAT best practices, and metrics to improve team efficiency and product quality.
 
Source: TesterHome Community
 

 

Define Acceptance Test Points Before Requirements Review

Can we conduct a "logic test" on the concepts defined in requirements before the product manager organizes the requirements review meeting?

We can communicate with the product manager in advance about the requirements specification document, propose revisions, or put forward targeted acceptance test points for their confirmation. This early collaboration between testers and product managers on feature requirements lays a solid foundation for subsequent development and testing.

During the requirements review meeting, we can review the requirements specification alongside the corresponding acceptance test points—even including historical related defects that have actually occurred—before developers even start thinking about implementation. A combined table of requirements and acceptance testing can be used to help the team check quality risks and acceptance test points synchronously during the review, significantly improving review efficiency.

This approach ensures that developers keep the key test scenarios in mind from the start, so the delivered build contains almost none of the low-level quality issues that testers would otherwise catch.

It’s worth noting that the test points sorted out by testers in advance can be confirmed by the product team and become User Acceptance Test (UAT) cases, or refined into regular acceptance test (AT) cases for developers to self-test. These test cases serve as a shared reference for the entire team, eliminating misunderstandings of requirements across different roles.
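As an illustration of that refinement, the sketch below turns a few confirmed acceptance test points into a developer self-test with pytest. The discount rule, function name, and thresholds are invented for the example and are not from the original requirements.

```python
# Hypothetical illustration only: a discount rule stands in for a real user
# story, and apply_discount() stands in for the real implementation.
import pytest


def apply_discount(order_total: float) -> float:
    """Toy pricing logic: orders of 100 or more get a 10% discount."""
    return order_total * 0.9 if order_total >= 100 else order_total


# Each tuple is one acceptance test point confirmed with the product manager.
@pytest.mark.parametrize(
    "order_total, expected",
    [
        (100.00, 90.00),   # boundary: discount starts at exactly 100
        (150.00, 135.00),  # typical discounted order
        (99.99, 99.99),    # just below the threshold: no discount
    ],
)
def test_discount_acceptance_points(order_total, expected):
    assert apply_discount(order_total) == pytest.approx(expected)
```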

In agile R&D teams, we can set an entry threshold for completing requirements review, the Definition of Ready (DoR), as a discipline the entire team abides by. DoR typically requires the acceptance test cases for user stories to be confirmed and completed by the team. Once acceptance test cases become the shared delivery language across all roles, everyone is pushed to prioritize quality from the start, understands how their work maps to the quality requirements, and huge discrepancies in how different roles understand requirement quality are avoided.

Acceptance test cases run through the entire iteration process: from refining acceptance criteria and breaking down user stories, to development coding, testing, card verification, and finally launch, demonstration and acceptance.

Quality Gatekeeping for Requirements Review

First and foremost, the design quality of requirements is not the sole responsibility of the product manager: every member of the feature team is accountable for high-quality requirement design. No single person can produce a high-quality product design that comprehensively covers market, user, technical, and other considerations. The team needs to work together to help the product owner quickly improve the quality of requirement design and to define document quality standards the team agrees to meet.

Agile teams do not pursue exhaustive documentation, but rather a clear definition of boundaries and risks. The specific "mandatory content" can be customized by frontline teams based on their business characteristics and scale. For example, for a newly launched product, requirements review focuses more on getting the core features right than on whether any features are missing.

Recommended Quality Checkpoints for Requirements Documents

  1. Are the launch background, business objectives, requirement sources, and delivery dates clearly defined?
  2. How will the business value brought by this requirement be evaluated? Which product health indicators should be watched after launch, and what are their expected ranges?
  3. Market and competitor analysis for the requirement (optional). A business logic diagram is required if the requirement specification is complex.
  4. On which system or entity will this requirement be implemented, and who are its upstream and downstream stakeholders? What impact does this requirement have on those upstream and downstream entities, and what is the interaction logic? Which services does the successful launch of this requirement depend on, and what exactly are those dependencies?
  5. Does this requirement involve performance changes, stability risks or fund security risks? If yes, provide a detailed analysis (which can be completed by developers).
  6. Are the priorities of all requirements clear and unique?
  7. What is the launch strategy for this requirement?

In addition to including "completion of acceptance test cases" in DoR, we can also incorporate other effective disciplines for improving requirement quality into DoR, making the whole team jointly responsible for the refinement of requirements. Only when all DoR items are completed does the requirements review phase end, and the team can enter the development and design phase.

A Complete DoR Example for Teams

  1. PRD and prototypes are delivered
  2. UI design and interaction drafts are complete
  3. User stories, business logic and acceptance criteria are clear
  4. Priorities are defined
  5. Development time estimation is available (frontend and backend, DEV)
  6. Testing time estimation is available (TEST)
  7. Quantifiable indicators for requirement value and benefits are set
  8. Dependent parties are identified, communicated with, and clear interface persons are assigned
  9. Expected business launch time is determined
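As a minimal sketch (not part of the original checklist), the same items can also be tracked as a simple gate: a story enters development only when every item has been checked off.

```python
# Minimal sketch: treat the DoR items above as a gate into development.
DOR_ITEMS = [
    "PRD and prototypes are delivered",
    "UI design and interaction drafts are complete",
    "User stories, business logic and acceptance criteria are clear",
    "Priorities are defined",
    "Development time estimation is available",
    "Testing time estimation is available",
    "Quantifiable indicators for requirement value and benefits are set",
    "Dependent parties identified and interface persons assigned",
    "Expected business launch time is determined",
]


def ready_for_development(completed: set[str]) -> bool:
    """Return True only when every DoR item has been checked off by the team."""
    missing = [item for item in DOR_ITEMS if item not in completed]
    for item in missing:
        print(f"DoR not met: {item}")
    return not missing
```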

 

Metrics to Measure Review and Testing Effectiveness

To drive the team to deliver results in requirements review and acceptance testing, we can use two core metrics:

  • Acceptance test coverage: The proportion of requirements with clear acceptance test cases among all reviewed requirements.
  • Requirement-stage defect count (suggestions): The number of design defects or suggestions raised during the requirements review that are accepted, i.e., that lead to a revision of the requirements document.
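Both metrics are straightforward to compute from the backlog; the sketch below assumes hypothetical field names on each requirement and review comment.

```python
# Sketch with invented field names: compute the two review metrics.
def acceptance_test_coverage(requirements: list[dict]) -> float:
    """Proportion of reviewed requirements with confirmed acceptance test cases."""
    if not requirements:
        return 0.0
    covered = sum(1 for r in requirements if r.get("acceptance_cases_confirmed"))
    return covered / len(requirements)


def requirement_stage_defect_count(review_comments: list[dict]) -> int:
    """Design defects or suggestions raised in review that were accepted,
    i.e. led to a revision of the requirements document."""
    return sum(1 for c in review_comments if c.get("accepted_and_doc_revised"))
```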

As the saying goes, "Sharpening the axe won’t delay the woodcutting." More thinking and discussion in the requirement clarification phase will drastically reduce the cost of building product quality in the future.

Key to Ensuring the Effectiveness of User Acceptance Testing (UAT)

In industry practice, UAT is usually the final checkpoint before product release. It allows end users to confirm whether the product meets requirements through acceptance testing, while enabling the R&D and product team to obtain valuable feedback and clarify future improvement directions. End users should be typical real users of the product and, if applicable, the paying party for contract delivery.

If real users cannot be involved in UAT, we need to simulate user behavior to complete the test (note: simulating users is not an easy task). We must identify different user types and roles, and cover these roles in UAT—this is the role-playing exploratory testing method, which requires thinking about what such users value most and how they will act.

James Bach’s Definition of UAT (from the Heuristic Test Strategy Model, HTSM)

User Testing

  1. Identify categories and roles of users.
  2. Determine what each category of user will do (use cases), how they will do it, and what they value.
  3. Get real user data, or bring real users in to test.
  4. Otherwise, systematically simulate a user (be careful—it's easy to think you're like a user even when you're not).
  5. Powerful user testing involves a variety of users and user roles, not just one.
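One lightweight way to keep that role coverage visible (a sketch with invented personas, not something prescribed by HTSM) is to parametrize the acceptance checks over the identified roles, so a missing role is immediately obvious in the test report.

```python
# Sketch only: the personas and the checked behaviour are invented.
import pytest

PERSONAS = [
    {"role": "new_user", "locale": "en", "cares_about": "easy onboarding"},
    {"role": "power_user", "locale": "en", "cares_about": "bulk operations"},
    {"role": "overseas_user", "locale": "id", "cares_about": "local payment methods"},
]


@pytest.mark.parametrize("persona", PERSONAS, ids=lambda p: p["role"])
def test_core_flow_for_each_role(persona):
    # Replace this placeholder with the real end-to-end check for the persona;
    # the parametrization guarantees every identified role gets its own result.
    assert persona["cares_about"], f"no value statement for {persona['role']}"
```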

UAT often incurs high costs for companies, as it requires involving business representatives or real users. If the expected benefits are not achieved, it leads to complaints and delays the product’s market launch. For R&D and product teams serving overseas markets, UAT is even more important, yet also more time-consuming and labor-intensive. Overseas UAT participants can indeed identify many problems unique to those markets, but time zones, language barriers, network environments, cross-border communication, and product familiarity all become obstacles.

From the perspective of the R&D and product team, there are multiple ways to collect user feedback on products, each with its own advantages and disadvantages:

  1. In-person user research interviews: Advantages are direct observation of users’ expressions and real usage behavior, and capturing genuine user voices. Disadvantages are high costs: recruiting users is expensive, and the expert effort required to design and analyze the interviews is substantial, so the method cannot be applied to large-scale requirements or user research.
  2. Product A/B testing: Divert a portion of user traffic and observe user choices on two different interfaces to see which converts better (a minimal calculation sketch follows after this list). Advantages are low implementation cost (provided an A/B experimentation platform is already in place) and quantifiable, reliable conclusions. Disadvantages are that only limited scenarios suit A/B designs, designers need keen insight, and it captures no descriptive user voices.
  3. Gray (canary) release + automatic feedback collection: Quickly collect a large volume of trial opinions and data through a gray release combined with strong feedback channels (event-tracking systems and in-app feedback boxes). Advantages are low cost, mature technology, and the ability to gather large-scale user behavior data while containing the spread of risk incidents. Disadvantages are residual online risk and a large amount of low-value feedback, which makes processing inefficient.
  4. Product showcase: Give a complete demonstration of the to-be-released features to users or business representatives. Advantages are that it takes little time and yields direct feedback. Disadvantages are relatively high organizational cost and one-way feedback: because users do not actually try the product, problems are easily concealed.
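The "quantifiable conclusions" in method 2 usually reduce to comparing two conversion rates. A minimal calculation sketch, with made-up traffic and conversion numbers, is shown below.

```python
# Minimal sketch: compare conversion rates of two interface variants with a
# two-proportion z-test. The traffic and conversion numbers are made up.
from math import sqrt
from statistics import NormalDist


def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for 'variant B converts at a different rate than A'."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))


# Example: 480 conversions out of 10,000 visits vs 540 out of 10,000.
print(f"p-value: {two_proportion_p_value(480, 10_000, 540, 10_000):.3f}")
```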

Compared with the above methods, UAT’s advantage is its ability to collect detailed feedback from a group of real users in a concentrated session and to confirm there are no obvious omissions before the requirement launches. Its disadvantages are that it prolongs the time to full release, carries high organizational costs, and the problems it surfaces are often of limited value.

An efficient UAT mechanism is essential to satisfy both the R&D/product side and the business side.

Organizational Disciplines for Efficient UAT

A sustainable and efficient UAT mechanism focuses on establishing several key disciplines:

1. Define Which Releases Require UAT (and Which Don’t)

UAT Required:

  • Core product versions from 0 to 1
  • Selling points with drastic changes in the interactive interface
  • Requirements involving huge changes in customer operation processes
  • Marketing campaign requirements involving large sums of money
  • Other high-risk requirements related to security/operations

UAT Not Required:

  • Routine iterative requirements for mature products
  • Requirements that are difficult to observe from the user’s perspective
  • Products with comprehensive health monitoring systems

2. Clarify UAT Pass Standards

To avoid delaying the release schedule, we must define red lines for UAT failure (requiring immediate fixes), such as:

  • Failed basic business logic
  • Blocking defects
  • Security/compliance defects involving public opinion risks

For internet products, the principle of releasing in small steps, iterating fast, and learning through trial and error applies. For ordinary defects or experience issues that do not cross a red line, the product owner has the final say on whether to fix them before release.
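To make the red lines operational, UAT findings can be triaged mechanically. The categories below mirror the red lines listed above, while the helper itself is only an illustrative sketch.

```python
# Sketch: split UAT findings into release-blocking red-line issues and
# issues left to the product owner's judgment. Category names are illustrative.
RED_LINE_CATEGORIES = {
    "failed_basic_business_logic",
    "blocking_defect",
    "security_or_compliance_risk",
}


def must_fix_before_release(finding: dict) -> bool:
    """Red-line findings block the release; everything else is a product-owner call."""
    return finding.get("category") in RED_LINE_CATEGORIES
```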

Business representatives (user representatives) should have access to the requirements document during the requirement discussion phase, confirm the scope of online features, and even have the opportunity to participate in the demonstration of iterative versions. This allows early feedback and avoids new requirements being proposed at the UAT stage.

3. Set Requirements for UAT Participants

To improve UAT execution efficiency:

  • If the UAT team is not composed of end users, it should be as stable as possible, with some members proficient in product usage (and preferably having read the requirements document/operation manual in advance).
  • The UAT team must provide feedback within the specified time.
  • From the experience of organizing exploratory testing, on-site, competitive UAT with rewards for the team is the most efficient. Pre-planned acceptance test cases are only the baseline—UAT participants are encouraged to explore product problems boldly.
  • An on-site UAT organizer is required to introduce the process, maintain discipline, control time, and align the R&D/product and business sides to reach consensus on differences quickly, shortening the feedback processing cycle.

4. Technical Support for Efficiency

Given the high cost of UAT execution, the technical team should prepare the following before UAT starts to ensure efficiency:

  • A well-configured test environment
  • Test accounts/data
  • A technical Q&A guide
  • A high-quality UAT version (with sufficient internal testing and end-to-end quality assurance, as this phase is close to release)
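For instance, test accounts and data can be seeded before the session so participants spend no UAT time on setup; the base URL, endpoint, and payload below are hypothetical, not a real API.

```python
# Hypothetical sketch: pre-create UAT test accounts via an invented admin API
# so participants can start testing immediately.
import requests

UAT_BASE_URL = "https://uat.example.com/api"  # invented environment URL

TEST_ACCOUNTS = [
    {"username": f"uat_user_{i:02d}", "role": role}
    for i, role in enumerate(["buyer", "seller", "admin"], start=1)
]


def seed_accounts() -> None:
    for account in TEST_ACCOUNTS:
        resp = requests.post(f"{UAT_BASE_URL}/test-accounts", json=account, timeout=10)
        resp.raise_for_status()
        print(f"created {account['username']} ({account['role']})")


if __name__ == "__main__":
    seed_accounts()
```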

 
