
Enhancing Business Value with Automation: Practical Team Practices

Learn how the QJIAYI Tech Quality Team turns automation into a business-driven capability through practical team practices: 10,000+ test cases in continuous regression and 80+ bugs detected per month.

Introduction

Automation has become a core part of modern quality assurance and R&D efficiency. However, many teams struggle with low defect detection, high maintenance costs, misaligned technologies, and difficulty proving real business value.

After years of practice and iteration, the QJIAYI Quality Team has built a mature automation system that now runs 10,000+ UI and API test cases in continuous regression and detects more than 80 bugs per month. This article shares how we transformed automation from a costly experiment into a stable, business-driven capability.

Common Challenges in Test Automation

Many teams face similar pain points when building automation systems:

  • High volumes of automated test cases, but few real defects found.

  • Frequent business changes lead to heavy script maintenance.

  • New frameworks and tools often fail to adapt to real business scenarios.

  • Stakeholders doubt the value of automation and are reluctant to invest.

  • Long-term investment fails to deliver the expected return.

Teams often jump between new tools and frameworks without solving fundamental problems, creating a cycle of inefficiency. At QJIAYI, we broke this cycle by aligning automation closely with business goals and team workflows.

Align Automation Goals with Business Stages

Rather than chasing universal metrics such as code coverage, pass rate, or CI compliance, we believe automation goals must match the business lifecycle.

Lessons from Real-World Practice

  • In early-stage products, over-emphasizing high test coverage can slow iteration and hurt customer experience.

  • Neglecting automation during rapid growth leads to online failures, regressions, and damaged brand reputation.

We structured our automation roadmap into three progressive stages:

1. Tool-Driven Stage

We built foundational frameworks including:

  • Apollo API Automation Framework

  • Hades UI Automation Framework

  • Data Regression Platform

The goal was to replace repetitive manual tests with stable script execution and improve case writing efficiency.
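The article does not describe Apollo's internals, but the core idea of this stage can be sketched: a check that a tester would otherwise repeat by eye becomes a scripted assertion that runs on every build. The endpoint fields and status values below are hypothetical, purely for illustration:

```python
# Minimal sketch of a scripted API regression check (field names and
# status values are illustrative, not Apollo's actual schema).

def validate_order_response(resp: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the check passed."""
    errors = []
    for field in ("order_id", "status", "amount"):
        if field not in resp:
            errors.append(f"missing field: {field}")
    if resp.get("status") not in {"created", "paid", "shipped"}:
        errors.append(f"unexpected status: {resp.get('status')}")
    return errors

# A manual tester repeats this inspection every release; a script runs it
# automatically against each deployment.
sample = {"order_id": "A1001", "status": "paid", "amount": 42.0}
print(validate_order_response(sample))  # → []
```

A framework's job is then to wrap many such checks with case management, scheduled execution, and reporting.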

2. Platform-Driven Stage

We developed a unified automation platform for:

  • Centralized test case management

  • Standardized execution scheduling

  • Automated reporting and metric analysis

This platform allowed more team members to participate and improved overall efficiency.
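As a rough illustration of the reporting side of such a platform (the suite names and result format are assumptions, not the platform's real data model), per-suite pass rates can be aggregated from raw run results like this:

```python
# Sketch: aggregating per-suite run results into the pass-rate metrics a
# unified platform would report (suite names and schema are illustrative).
from collections import Counter

def summarize(results: list[dict]) -> dict:
    """results: [{"suite": str, "passed": bool}, ...] -> per-suite pass rates."""
    totals, passed = Counter(), Counter()
    for r in results:
        totals[r["suite"]] += 1
        passed[r["suite"]] += r["passed"]  # bool counts as 0 or 1
    return {s: round(passed[s] / totals[s], 2) for s in totals}

runs = [
    {"suite": "ui", "passed": True},
    {"suite": "ui", "passed": False},
    {"suite": "api", "passed": True},
]
print(summarize(runs))  # → {'ui': 0.5, 'api': 1.0}
```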

3. Engineer-Driven Stage

Test engineers took ownership to optimize systems, expand scenarios, and innovate solutions:

  • Scenario-based and data-driven testing

  • Image comparison, JSON validation, and data consistency checks

  • Platform-based testing for internal packages

This stage turned automation from a tool into a team-driven capability.
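To make the JSON validation and data consistency idea concrete, here is a minimal sketch of a diff that ignores known-volatile fields such as IDs and timestamps (the field names are assumptions for illustration, not the team's actual schema):

```python
# Sketch: a JSON consistency check that skips volatile fields such as IDs
# and timestamps, so comparisons flag real differences only.

VOLATILE = {"id", "timestamp", "trace_id"}  # illustrative volatile keys

def diff_json(expected: dict, actual: dict, path: str = "") -> list[str]:
    """Recursively diff two dicts, skipping known-volatile keys."""
    diffs = []
    for key in expected.keys() | actual.keys():
        if key in VOLATILE:
            continue
        here = f"{path}.{key}" if path else key
        if key not in actual:
            diffs.append(f"{here}: missing")
        elif key not in expected:
            diffs.append(f"{here}: unexpected")
        elif isinstance(expected[key], dict) and isinstance(actual[key], dict):
            diffs.extend(diff_json(expected[key], actual[key], here))
        elif expected[key] != actual[key]:
            diffs.append(f"{here}: {expected[key]!r} != {actual[key]!r}")
    return diffs

base = {"id": 1, "user": {"name": "a", "timestamp": 100}, "total": 5}
new = {"id": 2, "user": {"name": "a", "timestamp": 200}, "total": 5}
print(diff_json(base, new))  # → [] (only volatile fields differ)
```

Filtering volatile fields up front is one way to keep comparison noise down, a problem the long-chain experience below runs into directly.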

Strategic Automation for Complex Business

One-size-fits-all automation rarely works in complex systems. At QJIAYI, our business spans design tools, merchant backends, open platforms, mini-programs, and international services. We use different automation strategies for different technical architectures, including backend services, open APIs, front-end components, and plugins.

Lessons from Long-Chain Automation Attempts

We once built an end-to-end automation system covering design, generation, and data production. While it detected real issues, we faced:

  • Unstable data and inconsistent IDs

  • High comparison noise and maintenance costs

  • Difficulty generalizing front-end interactions

  • High cross-team collaboration costs

This experience taught us to:

  • Strengthen front-end data validation

  • Conduct in-depth feasibility research before implementation

  • Consider both technical and organizational challenges

How We Implemented Automation Effectively

In the early stages, our metrics looked good: high coverage and a high pass rate. Yet stakeholders still questioned the value of automation. We took targeted actions:

1. Work Backwards from Online Failures

We reviewed every bug that automation had missed to identify gaps in validation logic and scenario design, and to eliminate false positives. This made automation more targeted.

2. Conduct Regular Reviews of Automation Code

Regularly reviewing test scripts improved stability, reduced redundancy, and standardized development practices.

3. Integrate Automation into Stability Projects

For high-risk businesses, automation became part of project goals from the start. We worked with developers to simplify data construction and improve testability.

4. Optimize CI Stability

We tracked frequent failures, fixed environmental issues, and reduced non-bug CI blockages. This made automation trusted and efficient.
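One way to track frequent failures, sketched below under assumed inputs (the case names and threshold are illustrative), is to separate flaky cases, which both pass and fail across recent runs, from consistent failures that likely indicate real bugs:

```python
# Sketch: flagging flaky cases from recent CI history so environmental
# issues can be fixed rather than blocking the pipeline (threshold is
# illustrative).
from collections import Counter

def flaky_cases(history: list[tuple[str, bool]], min_failures: int = 2) -> list[str]:
    """history: (case_name, passed) per run; flag cases that both pass and fail."""
    fails, passes = Counter(), Counter()
    for name, passed in history:
        (passes if passed else fails)[name] += 1
    return sorted(n for n in fails if passes[n] > 0 and fails[n] >= min_failures)

history = [
    ("login_test", True), ("login_test", False), ("login_test", False),
    ("checkout_test", False), ("checkout_test", False),  # never passes: likely a real bug
]
print(flaky_cases(history))  # → ['login_test']
```

Cases that never pass are deliberately excluded here: a consistent failure should block CI, while a flapping one points at an environment or test-design problem.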

Deepening the Business Value of Automation

Once automation was stable, we amplified its impact:

  • Recognized and rewarded teams for automation innovation

  • Used major tech projects to test and improve automation capabilities

  • Encouraged internal sharing of practical methods

  • Connected automation failures directly to bug tickets with logs and quick re-run functions

These steps turned automation into a trustworthy, indispensable part of the development pipeline.
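The failure-to-ticket link in the list above can be sketched as bundling each failed case with a log excerpt and a one-click re-run command. The ticket fields and the `pytest` re-run command here are assumptions for illustration, not the team's actual integration:

```python
# Sketch: turning an automation failure into a bug-ticket payload with logs
# and a re-run command (ticket fields and command are hypothetical).

def failure_to_ticket(case: str, log_lines: list[str], tail: int = 3) -> dict:
    """Bundle a failed case into the fields a ticket system would need."""
    return {
        "title": f"[automation] {case} failed",
        "log_excerpt": "\n".join(log_lines[-tail:]),  # last lines usually hold the error
        "rerun_cmd": f"pytest {case} --maxfail=1",
    }

ticket = failure_to_ticket("tests/test_checkout.py::test_pay",
                           ["setup ok", "request sent", "AssertionError: status 500"])
print(ticket["title"])  # → [automation] tests/test_checkout.py::test_pay failed
```

Attaching the evidence and the re-run path at failure time is what removes the manual triage step and makes each red result directly actionable.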

Conclusion

Building automation is easy. Building a sustainable, business-aligned automation system is difficult. Our key takeaways:

  • Match automation goals to business stages, not just KPIs.

  • Collaborate closely with product and R&D teams.

  • Build your automation platform like a real product—with tools, training, and operation mechanisms.

  • Learn from failures and iterate continuously.

When you focus on making automation stable, effective, and business-oriented, your team and partners will follow.
