In 2026, AI agents and cloud-native technologies are reshaping the entire software development process, and the software testing industry is undergoing a pivotal shift from "manual follow-up" to "intelligent shift-left". Industry data show that the average monthly failure rate of traditional test scripts remains as high as 25%, with maintenance costs accounting for over 60% of the total testing workload. In contrast, AI-driven testing solutions have delivered several-fold efficiency improvements and have become the core quality assurance choice in industries such as finance and automotive manufacturing.
Facing the new paradigm of "human-machine collaboration", both beginners entering the field and practitioners upgrading their skills need a knowledge system that balances fundamental principles with cutting-edge trends. To this end, the TesterHome Community has launched the "Advanced Testing Quality" series of articles. Starting from the core concepts of testing, the series gradually delves into process specifications, tool operations, and specialized practices, ultimately connecting to cutting-edge fields such as AI testing and cloud-native testing. Through systematic content and analysis of practical cases, the series aims to help readers build testing capabilities that adapt to industry change. (This series will be continuously updated; stay tuned!)
The Software Development Life Cycle (SDLC) encompasses the entire journey of software from requirement proposal to retirement. Different development models (e.g., Waterfall, Agile, DevOps) correspond to distinct process logic, iteration rhythms, and collaboration methods. As a core quality assurance link in the SDLC, testing cannot exist independently of the development model. A testing strategy aligned with the development model achieves a "balance between quality and efficiency"; conversely, a rigid testing process becomes a bottleneck for development efficiency.
Currently, the industry has gradually shifted from the traditional Waterfall model to Agile and DevOps, and testing work has likewise evolved from "post-event control" to "full-cycle collaboration" and "continuous verification". This article systematically analyzes the core logic of the three mainstream development models, compares the positioning, processes, role responsibilities, and practical key points of testing under each model, and explains testing adaptation strategies through cutting-edge cases. The goal is to help readers master testing methods across development models and enhance cross-model collaboration capabilities.
The core differences between development models are reflected in three dimensions: process structure, iteration rhythm, and collaboration method, which directly determine the intervention timing, implementation approach, and resource investment of testing work. We first establish a basic understanding of the three models by comparing their core features.
The Waterfall model is the earliest software development model, featuring rigorous processes but poor flexibility. In this model, testing work is a downstream link carried out "after development is completed", with the core positioning of "quality control" to ensure the final delivered product meets requirements.
- Core Positioning: "Acceptance testing", verifying whether the developed product complies with the requirement documents and identifying defects missed during the development phase.
- Intervention Timing: Testing can only be officially launched after the "development phase" is fully completed (except for unit testing, which is conducted by developers during the development phase).
1. Test Preparation Phase: Requirement analysis (defining the test scope based on requirement documents), test plan formulation (clarifying test resources, progress, and risks), test case design (covering all functional requirements).
2. Test Execution Phase: Building the test environment, conducting integration testing (verification of module interfaces) and system testing (verification of overall functions and non-functional requirements).
3. Defect Management Phase: Recording defects, pushing developers to fix them, and completing defect regression testing.
4. Acceptance Testing Phase: Assisting users in conducting acceptance testing to verify if the product meets actual needs; issuing test reports and confirming whether the product meets launch conditions.
The testing team is independent of the development team, with the core responsibility of "objective quality verification":
- Testing leader: Formulates test plans and allocates resources.
- Test case designers: Responsible for requirement analysis and test case design.
- Test executors: Responsible for test case execution and defect recording.
- Test report writers: Summarize test results and output test reports.
1. Advantages
- Rigorous processes, thorough test preparation, and comprehensive test case coverage effectively identify defects carried over from the development phase.
- Clear testing roles and well-defined responsibilities.
2. Limitations
- Late testing intervention leads to high defect resolution costs (the cost of fixing defects in the later stages of development is more than 10 times that of the early stages).
- Poor adaptability to requirement changes; adjustments to test plans and cases are required if requirements change midway, affecting project progress.
- Minimal collaboration between development and testing teams, which easily leads to "misunderstandings of requirements".
- Project Background: Clear requirements (upgrading the existing ERP system and adding a new financial accounting module) with a 6-month cycle.
- Testing Implementation:
- The testing team starts preparation work in the 4th month of the development phase, designing test cases based on requirement documents (covering core modules such as financial accounting, inventory management, and human resources).
- After development is completed (in the 6th month), integration testing and system testing are conducted, identifying defects such as "abnormal data synchronization between the financial accounting module and inventory module" and "incorrect report export format".
- After promoting defect fixes by developers, the team assists users in acceptance testing, and the project is successfully launched.
- Problem Encountered: Due to late testing intervention, fixing the "financial accounting logic error" takes 2 weeks, delaying the project schedule.
The Agile model takes "quick response to changes" as its core, splitting the entire process into short iteration cycles. Testing is no longer a downstream link but a core role of "full-process collaboration", running through each iteration cycle to achieve "rapid verification of incremental functions".
- Core Positioning: "Quality Enabler", preventing quality risks in advance through full-process collaboration and ensuring the incremental functions delivered in each iteration meet requirements.
- Intervention Timing: Intervenes during the requirement phase, participating in requirement reviews and user story decomposition to achieve "test shift-left".
Scrum is the most widely used Agile framework, with "Sprint" as the iteration unit (usually 2-4 weeks). Testing work is integrated into the entire Sprint process:
1. Sprint Planning Phase: Testers, product managers, and developers jointly review user stories (the smallest unit of user requirements), clarify the Acceptance Criteria and the team's Definition of Done (DoD), and decompose test points.
2. Sprint Execution Phase: After developers complete a single user story, testers immediately verify it at the unit and interface level; daily stand-up meetings are used to synchronize test progress and issues (e.g., "unclear acceptance criteria for a user story").
3. Sprint Review Phase: Testers assist product managers and users in verifying the incremental functions delivered in the current Sprint and collect feedback.
4. Sprint Retrospective Phase: Summarize issues in the current Sprint's testing work (e.g., "delayed test case design affects testing efficiency") and formulate optimization plans.
1. User Story-Driven Testing: Design test cases based on user stories and acceptance criteria, focusing on users' actual needs rather than simple functional coverage.
2. Continuous Regression Testing: Conduct regression testing after each iteration to ensure new functions do not affect existing ones; improve regression efficiency using automated testing tools (e.g., Selenium, pytest).
3. Acceptance Test-Driven Development (ATDD): Testers write acceptance test cases in advance, and developers carry out development with the goal of "passing acceptance tests", ensuring the development process stays requirement-centric (an executable sketch of such acceptance tests follows this list).
4. Incremental Non-Functional Testing: Non-functional testing (performance, security) no longer waits until the entire system is completed; instead, targeted non-functional testing (e.g., interface performance testing) is conducted for core incremental functions in each iteration.
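To make practices 1 and 3 concrete, here is a minimal sketch of acceptance criteria expressed as executable pytest cases written before development; the redeem_points function and the "100 points for a 10-yuan coupon" rule are illustrative placeholders rather than any real system's logic.

```python
# A minimal ATDD sketch: acceptance criteria captured as pytest cases
# before development starts. `redeem_points` is a hypothetical function
# the development team would implement to make these tests pass.
import pytest

COUPON_VALUE_PER_100_POINTS = 10  # assumed rule: 100 points -> 10-yuan coupon

def redeem_points(balance: int, points_to_redeem: int) -> tuple[int, int]:
    """Placeholder implementation; in ATDD, developers write this after
    the acceptance tests below have been agreed upon."""
    if points_to_redeem > balance or points_to_redeem % 100 != 0:
        raise ValueError("invalid redemption request")
    coupon = points_to_redeem // 100 * COUPON_VALUE_PER_100_POINTS
    return balance - points_to_redeem, coupon

def test_100_points_yield_10_yuan_coupon():
    remaining, coupon = redeem_points(balance=300, points_to_redeem=100)
    assert coupon == 10      # acceptance criterion: coupon value
    assert remaining == 200  # acceptance criterion: points deducted in real time

@pytest.mark.parametrize("balance,points", [(50, 100), (300, 150)])
def test_invalid_redemptions_are_rejected(balance, points):
    # Insufficient balance or non-multiples of 100 must be rejected.
    with pytest.raises(ValueError):
        redeem_points(balance=balance, points_to_redeem=points)
```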
1. Advantages
- Early testing intervention reduces defect discovery costs.
- Capable of quickly responding to requirement changes and delivering usable functions in each iteration.
- Close collaboration between development and testing teams enables efficient problem-solving.
2. Limitations
- Short iteration cycles result in tight test preparation time, easily leading to incomplete test case coverage.
- High requirements for testers' comprehensive capabilities (needing skills in requirement analysis, rapid test case design, and automated tool usage).
- Non-functional testing is easily overlooked (e.g., excessive focus on functional delivery neglects performance optimization).
- Project Background: Adopts a 2-week Sprint cycle, with core requirements of rapidly iterating new functions (e.g., "member points redemption", "next-day delivery").
- Testing Implementation:
- During the Sprint planning phase, testers participate in user story reviews and clarify acceptance criteria for "member points redemption" (e.g., "100 points can be exchanged for a 10-yuan coupon", "points are deducted in real-time after redemption").
- During the Sprint execution phase, after developers complete the core logic of "points redemption", testers immediately conduct interface and functional testing, discovering the defect of "coupons not arriving in real-time after points deduction" and promoting a same-day fix by developers.
- After each Sprint, regression testing is performed using automated scripts to ensure new functions do not affect core existing functions (e.g., "product ordering", "payment").
- During the Sprint review phase, testers assist product managers in verifying incremental functions, collect user feedback, and provide a basis for the next round of Sprint planning.
- Project Outcome: The project delivers 2 iterations per month, quickly responding to market demand and increasing user retention rate by 15%.
The DevOps model is an extension of the Agile model, centered on deep collaboration between development and operations. It realizes "Continuous Integration (CI) - Continuous Testing (CT) - Continuous Deployment (CD)" through automated toolchains, pursuing "high-frequency delivery + a stable production environment". The core of testing in this model is "continuous verification": automated testing is integrated into the entire process so that the quality of every code submission can be verified quickly.
- Core Positioning: "Continuous Quality Guardian", using automated testing so that every code submission is verified, shifting from "passive testing" to "proactive quality assurance".
- Intervention Timing: Full-process intervention, covering all stages from requirement analysis, code development, integration deployment to production operation (test shift-left + test shift-right).
The core of DevOps is the CI/CD pipeline. As a core link in the pipeline, testing work achieves "automated and high-frequency" verification:
1. Continuous Integration (CI) Phase: After developers submit code to the code repository (e.g., GitLab), the pipeline automatically triggers unit testing, static code analysis (e.g., SonarQube), and automated interface testing to quickly identify code-level defects; only code that passes the tests can be merged into the main branch.
2. Continuous Testing (CT) Phase: After code merging, the pipeline automatically deploys to the test environment and triggers automated system testing (function, performance, security); testers focus on exploratory testing (for complex scenarios not covered by automated testing).
3. Continuous Deployment (CD) Phase: After passing the tests, the pipeline automatically deploys to the pre-production environment, conducting acceptance testing and production environment simulation testing; after passing acceptance, it is automatically deployed to the production environment.
4. Production Operation Phase (Test Shift-Right): Collect production environment data through monitoring tools (e.g., Prometheus, ELK), conduct production environment performance testing and user behavior analysis, and identify production-specific defects (e.g., performance bottlenecks in high-concurrency scenarios).
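As a small illustration of the test shift-right step, the sketch below queries the Prometheus HTTP API for a production latency indicator and flags a regression; the Prometheus address, the PromQL expression, and the 2-second threshold are assumptions made purely for illustration.

```python
# A minimal test-shift-right sketch: query Prometheus for a production
# latency indicator and flag a regression. The URL, metric expression,
# and threshold below are illustrative assumptions.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"
# Hypothetical PromQL: 95th-percentile request latency over the last 5 minutes.
QUERY = ('histogram_quantile(0.95, '
         'sum(rate(http_request_duration_seconds_bucket[5m])) by (le))')
THRESHOLD_SECONDS = 2.0

def p95_latency_seconds() -> float:
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # An empty result means no samples matched the query window.
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    latency = p95_latency_seconds()
    status = "ALERT" if latency > THRESHOLD_SECONDS else "OK"
    print(f"{status}: production p95 latency is {latency:.2f}s "
          f"(threshold {THRESHOLD_SECONDS}s)")
```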
1. Full-Coverage Test Automation: Achieve full-process automation of unit testing, interface testing, system testing, and performance testing; the automation rate must reach over 80% to support high-frequency iterative verification.
2. Test Environment Automation: Quickly build and destroy test environments through containerization technology (Docker) and Infrastructure as Code (IaC, e.g., Terraform) to ensure environmental consistency.
3. Test Data Automation: Automatically generate test data using data generation tools (e.g., Mockaroo) to avoid reliance on real production data while ensuring test data diversity.
4. Chaos Testing: Simulate production environment abnormalities (e.g., service downtime, network delays, database failures) in the pre-production environment to test the system's fault tolerance and stability.
5. Quality Gate Mechanism: Set up quality gates in the CI/CD pipeline (e.g., unit test pass rate ≥ 90%, no high-risk security vulnerabilities, performance indicators meeting standards); code that fails to pass the quality gates cannot proceed to the next stage.
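The quality gate mechanism can also be expressed as a small script that the pipeline runs after its test stages, failing the build when any threshold is missed. In the sketch below the metrics are hard-coded for illustration; a real pipeline would parse them from the test and scan reports.

```python
# A minimal quality-gate sketch: the pipeline runs this after the test
# stages, and a non-zero exit code blocks promotion to the next stage.
# The metric values are hard-coded purely for illustration.
import sys

metrics = {
    "unit_test_pass_rate": 0.96,       # fraction of unit tests that passed
    "high_risk_vulnerabilities": 0,    # count from the security scan
    "p95_response_time_seconds": 1.4,  # from the interface performance test
}

gates = [
    ("unit test pass rate >= 90%", metrics["unit_test_pass_rate"] >= 0.90),
    ("no high-risk vulnerabilities", metrics["high_risk_vulnerabilities"] == 0),
    ("p95 response time <= 2s", metrics["p95_response_time_seconds"] <= 2.0),
]

failed = [name for name, passed in gates if not passed]
for name, passed in gates:
    print(f"{'PASS' if passed else 'FAIL'}: {name}")

# Any failed gate stops the pipeline from promoting this build.
sys.exit(1 if failed else 0)
```

In GitLab CI or Jenkins, such a script would simply be another pipeline job whose non-zero exit status blocks the merge or deployment stage.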
- CI/CD Tools: GitLab CI, Jenkins, GitHub Actions
- Code Management: GitLab, GitHub
- Test Automation: Selenium, Cypress, JMeter, RestAssured
- Static Code Analysis: SonarQube
- Monitoring Tools: Prometheus, Grafana, ELK
- Containerization: Docker, Kubernetes
- Infrastructure as Code: Terraform, Ansible
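To show what test environment automation (practice 2 above) can look like with the containerization tools just listed, here is a minimal sketch; it assumes the testcontainers-python package, SQLAlchemy, a PostgreSQL driver, and a local Docker daemon are available, and the table and data are illustrative.

```python
# A minimal sketch of ephemeral test-environment automation with
# testcontainers-python: a disposable PostgreSQL instance is created
# before the test and destroyed afterwards, so every run starts from
# a clean, consistent environment.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_order_table_roundtrip():
    # Spin up a throwaway PostgreSQL container inside Docker.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id SERIAL PRIMARY KEY, amount NUMERIC)"))
            conn.execute(sqlalchemy.text(
                "INSERT INTO orders (amount) VALUES (99.50)"))
            total = conn.execute(
                sqlalchemy.text("SELECT SUM(amount) FROM orders")).scalar()
        assert float(total) == 99.50
    # Leaving the `with` block stops and removes the container automatically.
```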
1. Advantages
- High degree of test automation, fast verification efficiency, supporting high-frequency delivery.
- Full-process quality control, enabling timely defect discovery and repair.
- Close collaboration among development, testing, and operation teams for efficient problem-solving.
- Test shift-right ensures production environment quality and improves system stability.
2. Limitations
- High toolchain construction costs and high technical requirements for the team (needing mastery of automated testing, containerization, CI/CD, etc.).
- Large initial investment (tool selection, script development, personnel training).
- Exploratory testing is easily overlooked; quality assurance for complex scenarios relies on testers' experience.
- Project Background: Needs to iterate promotional functions frequently (e.g., "full discounts", "limited-time flash sales") ahead of the "618" shopping festival while ensuring production environment stability.
- Testing Implementation:
- The team builds a CI/CD pipeline with GitLab CI + Jenkins: after developers submit code, the pipeline automatically triggers pytest unit testing, SonarQube static code analysis, and RestAssured automated interface testing.
- After passing the tests, the code is automatically deployed to the Docker test environment, triggering Cypress front-end function automated testing and JMeter interface performance testing.
- Quality gates are set (unit test pass rate ≥ 95%, no high-risk vulnerabilities, interface response time ≤ 2 seconds); code that fails to meet the standards cannot be merged.
- After passing the tests, the code is automatically deployed to the pre-production environment for chaos testing (simulating high service concurrency in flash sale scenarios).
- After final acceptance, it is automatically deployed to the production environment.
- The production environment monitors performance indicators via Prometheus, and ELK analyzes user behavior logs, identifying production defects such as "order submission failures for some users in flash sale scenarios", which are quickly rolled back and fixed.
- Project Outcome: The project averages 2 iterations per day, with no major production failures; system stability reaches 99.99% during promotional activities.
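The interface performance and flash-sale load checks in this case are described with JMeter; a functionally similar check could be sketched in Python with locust (assumed to be installed), where the endpoint path, payload, and load parameters are illustrative assumptions rather than the project's real values.

```python
# A minimal load-test sketch with locust, standing in for the JMeter
# interface performance tests mentioned above. The endpoint and payload
# are illustrative assumptions.
from locust import HttpUser, task, between

class FlashSaleUser(HttpUser):
    wait_time = between(0.5, 2)  # seconds between simulated user actions

    @task
    def submit_order(self):
        # Each simulated user repeatedly submits a flash-sale order.
        self.client.post(
            "/api/flash-sale/orders",
            json={"sku_id": "SKU-1001", "quantity": 1},
            name="submit flash-sale order",
        )

# Example invocation (values are illustrative):
#   locust -f locustfile.py --host https://staging.example.com \
#          --users 500 --spawn-rate 50 --run-time 10m --headless
```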
In actual work, testers may face various development models. The core principle is to "adjust testing strategies based on needs" rather than sticking to a single method. Based on industry practices, the following adaptation suggestions are provided:
- Focus on improving the adequacy of test preparation; intervene in requirement analysis in advance to avoid misunderstandings of requirements.
- Develop detailed test plans and cases covering all functional and non-functional requirements.
- Establish a defect classification mechanism (high/medium/low risk) and prioritize defect fixes for core modules.
- Adjust test plans and cases promptly if requirements change midway to avoid rework.
- Focus on improving rapid response and collaboration capabilities; master user story decomposition and acceptance criteria definition methods.
- Enhance rapid test case design capabilities, focusing on core scenarios and avoiding over-design.
- Build a lightweight automated regression system to support high-frequency iterative regression testing (a UI smoke-test sketch follows this list).
- Strengthen daily collaboration with development and product teams to quickly resolve requirement ambiguities and testing issues.
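As referenced above, a lightweight automated regression system can start as small as a Selenium smoke test over the most critical pages; in the sketch below the staging URL and the page element are hypothetical placeholders.

```python
# A minimal Selenium smoke-test sketch for lightweight regression:
# open a critical page and check that its key element still renders.
# The URL and element id are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # run without a visible browser
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_order_page_renders(driver):
    driver.get("https://staging.example.com/orders")
    # The page is considered healthy if the order list container is visible.
    assert driver.find_element(By.ID, "order-list").is_displayed()
```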
- Focus on improving automation and toolchain capabilities; master full-process automated testing technologies (unit, interface, system, performance).
- Familiarize yourself with CI/CD pipeline construction and integration methods.
- Learn containerization and IaC technologies to automate test environments.
- Pay attention to production environment monitoring and test shift-right practices to improve production quality assurance capabilities.
For projects that mix models, adopt flexible, combined testing strategies. Some projects use an "Agile + Waterfall" hybrid model (e.g., core functions follow Waterfall, while non-core functions follow Agile iterations). Testing strategies should be adjusted based on module characteristics: strengthen test preparation and full coverage for core functions, and adopt incremental verification and rapid iterative testing for non-core functions.
Project Background
A fintech company needs to upgrade its existing core trading system. The core requirements are divided into two parts:
- Reconstruction of the "fund clearing module": Clear requirements, extremely high stability requirements, and no frequent changes allowed.
- Addition of the "user-side transaction visualization function": Requirements need to quickly respond to market feedback, requiring frequent iterations and optimizations.
Therefore, the project adopts a "Waterfall + Agile" hybrid development model, and testing work adapts to the characteristics of the two models to build a "layered testing + collaborative verification" strategy.
Test Adaptation Practice
1. Waterfall Mode Adaptation (Fund Clearing Module)
- Test Intervention Timing: During the requirement phase, jointly conduct requirement reviews with product and development teams, clarifying core quality indicators such as "fund clearing accuracy" and "data consistency" (e.g., "clearing error rate ≤ 0.001%") to avoid requirement ambiguity.
- Test Preparation: Complete full test case design in parallel with the development phase, covering all scenarios including normal clearing, abnormal clearing (e.g., inter-bank transfer failures, insufficient account balances), and boundary scenarios (e.g., large-amount fund clearing, cross-midnight clearing); the test case review pass rate must reach 100% (a parametrized sketch of this scenario coverage follows this list).
- Test Execution: After the fund clearing module development is fully completed, conduct full integration testing and system testing, focusing on verifying interface compatibility between the module and the original core system, as well as clearing logic accuracy.
- Defect Management: Establish a "zero tolerance" mechanism for high-risk defects; immediately suspend testing and promote developer fixes when high-risk defects such as "clearing logic errors" and "data synchronization anomalies" are discovered, followed by full regression testing.
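As referenced in the test preparation step, the sketch below shows one way the normal, abnormal, and boundary clearing scenarios could be enumerated as parametrized pytest cases; clear_funds and all amounts are hypothetical placeholders used only to illustrate the coverage idea, not the project's actual clearing logic.

```python
# A minimal sketch of scenario coverage for a clearing module expressed
# as parametrized pytest cases. `clear_funds` and all amounts are
# hypothetical placeholders.
import pytest

def clear_funds(amount: float, balance: float) -> float:
    """Placeholder clearing logic: deduct the cleared amount from the balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient account balance")
    return round(balance - amount, 2)

@pytest.mark.parametrize("amount,balance,expected", [
    (100.00, 500.00, 400.00),             # normal clearing
    (500.00, 500.00, 0.00),               # boundary: clears the full balance
    (9_999_999.99, 10_000_000.00, 0.01),  # boundary: large-amount clearing
])
def test_normal_and_boundary_clearing(amount, balance, expected):
    assert clear_funds(amount, balance) == expected

@pytest.mark.parametrize("amount,balance", [
    (600.00, 500.00),  # abnormal: insufficient account balance
    (0.00, 500.00),    # abnormal: zero amount
])
def test_abnormal_clearing_is_rejected(amount, balance):
    with pytest.raises(ValueError):
        clear_funds(amount, balance)
```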
2. Agile Mode Adaptation (User-Side Transaction Visualization Function)
- Iterative Planning: Adopt a 2-week Sprint cycle, with each iteration focusing on 1-2 core visualization functions (e.g., "transaction details chart display", "income trend forecasting"); testers participate in Sprint planning and clarify acceptance criteria for each user story (e.g., "chart loading time ≤ 2 seconds", "support time-dimension data filtering").
- Test Execution: In each iteration, conduct interface and functional testing immediately after a single function is developed; synchronize test progress daily (e.g., "data loading delay in the chart filtering function"). Conduct lightweight regression testing at the end of each iteration to ensure new functions do not affect the use of original trading functions.
- User Feedback Integration: After each iteration, invite 100 seed users for alpha testing to collect feedback on visualization function usability (e.g., "unclear chart style", "cumbersome operation steps") and implement rapid iterative optimization.
- Automation Support: Build lightweight automation scripts covering regression testing for core visualization functions (e.g., chart data accuracy verification, page element display integrity) to improve iterative testing efficiency.
Project Outcome
- The fund clearing module passes full Waterfall model testing, with no clearing-related defects after launch, complying with financial regulatory requirements.
- The user-side transaction visualization function quickly responds to market demand through 6 Agile iterations, achieving a user satisfaction rate of 92%.
- The overall project cycle is shortened by 30% compared to the pure Waterfall model, and core module stability is improved compared to the pure Agile model.
The evolution of the three development models (Waterfall, Agile, DevOps) essentially reflects a transformation from "rigid processes to flexible collaboration". Corresponding testing work has also evolved from "post-event control" to "full-process collaboration" and "continuous verification".
The core value of testing is not to execute fixed processes, but to "adapt to the development model and collaborate with the team to ensure quality": in Waterfall mode, quality is controlled through "full preparation"; in Agile mode, iteration efficiency is ensured through "quick response"; in DevOps mode, high-frequency delivery is supported through "automated collaboration".
With the in-depth application of AI and cloud-native technologies, testing work will further evolve toward "intelligent automation" and "full-link quality control". In the next article, we will enter the "Core Process" section, explaining in detail the core methods of "Test Planning and Requirement Analysis" to help readers master the specific execution processes and practical key points of testing work.
(From: TesterHome)