
Common Questions about PerfDogService

Get all your PerfDogService questions answered with our comprehensive FAQ guide.

How does PerfDogService select child threads?

  1. Start the app.
  2. Call the getAppRunningProcess interface to list the app's running processes and their threads.
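The two steps above can be sketched as follows. This is an illustrative sketch only: the real request/response message types are defined in perfdog.proto, and the field names (`pid`, `name`, `child_threads`) and the response shape below are assumptions, not the actual API.

```python
# Sketch of selecting the app's main process from a getAppRunningProcess-style
# response. All message shapes here are assumptions; consult perfdog.proto.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProcessInfo:
    pid: int
    name: str
    child_threads: List[str] = field(default_factory=list)

def pick_app_process(processes: List[ProcessInfo], package: str) -> Optional[ProcessInfo]:
    """Pick the main process whose name matches the package name exactly
    (sub-processes usually carry a ':suffix', e.g. 'com.example.game:push')."""
    for p in processes:
        if p.name == package:
            return p
    return None

# Simulated response, as if returned after the app was started:
resp = [
    ProcessInfo(1234, "com.example.game", ["RenderThread", "UnityMain"]),
    ProcessInfo(1300, "com.example.game:push"),
]
main = pick_app_process(resp, "com.example.game")
print(main.pid, main.child_threads)  # → 1234 ['RenderThread', 'UnityMain']
```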

Does PerfDogService support screenshots?

Yes, screenshots are supported and can be called through the interface.

Is PerfDogService free?

PerfDogService is a paid module, separate from PerfDog. It is better suited to scenarios such as automated testing, cloud testing, or other customized requirements.

How to get Android app process PID?

Determine the process based on the package name.

Do I have to save the data before stopping the test?

If you do not save first, the data will be cleared when you stop and restart the test, so whether to save depends on your needs.

How to get caseID?

The caseID is returned in the response after the test data is uploaded.

How to prevent the PerfDog app from popping up and affecting UI automation?

Turn off the floating window display. The StartTestAppReq interface has a parameter that disables the floating window.

If uploading to the server fails, do I have to rerun the test?

No. The save-data upload can simply be called again.
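Since a failed upload can be retried, a small retry helper is often enough. This is a generic sketch: `upload` below stands in for your actual save-data call and is not part of the PerfDogService API.

```python
# Minimal retry helper: call fn up to `attempts` times, re-raising the last
# error if every attempt fails.
import time

def retry(fn, attempts=3, delay=0.0):
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Demo: an "upload" that fails twice, then succeeds and returns a case ID.
calls = {"n": 0}
def upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upload failed")
    return "case-001"

print(retry(upload))  # → case-001
```

In a real script you would replace `upload` with the save-data call and use a non-zero `delay` (or exponential backoff) between attempts.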

Does one token of PerfDogService support deployment on multiple machines?

The same PerfDogService token can be deployed on multiple machines. To increase the number of deployment machines, consult your sales contact during the purchase process.

Where can I find the definition file of the PerfDogService interface?

The PerfDogService installation directory contains the file "Perfdog.proto", where the interface definitions can be viewed.
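Since the file is a standard protobuf definition, client stubs can be generated from it with the usual gRPC tooling. A sketch for Python clients (assumes the proto file is in the current directory; output file names depend on the package declared inside the proto):

```shell
# Install the gRPC toolchain, then generate Python message and stub modules
# from the proto definition shipped with PerfDogService.
pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. perfdog.proto
```

The same proto file can be fed to `protoc` plugins for other languages (Java, Go, etc.) in the same way.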

How does PerfDogService connect to multiple mobile phones at the same time?

1. Start a script for each device, specifying the device ID and application name as parameters [Recommended and easy to implement].

2. Connect N phones, call startDeviceMonitor to listen for devices, and spawn a test thread whenever a device event is received.
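Approach 1 above (one worker per device) can be sketched with a thread pool. Here `run_test` is a hypothetical stand-in for launching your per-device test script; the device serials and app name are illustrative.

```python
# One worker thread per connected device; each worker runs the test for its
# device ID and application name.
from concurrent.futures import ThreadPoolExecutor

def run_test(device_id: str, app: str) -> str:
    # In a real setup this would start the test against the device, e.g. by
    # spawning your test script with device_id and app as arguments.
    return f"{device_id}:{app}:done"

devices = ["serial-A", "serial-B", "serial-C"]
with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    results = list(pool.map(lambda d: run_test(d, "com.example.game"), devices))

print(results)
```

`ThreadPoolExecutor.map` preserves input order, so `results[i]` corresponds to `devices[i]`.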

How can I add the performance metrics I want in PerfDogService?

Refer to the "Indicator Parameter Mapping Table" to add: https://perfdog.wetest.net/article_detail?id=176&issue_id=0&plat_id=2.

When PerfDogService saves data, the error message "No data" appears

Possible reasons:

      - FPS cannot be collected during the test.

      - The target application was not running after the test started. This could be because the application failed to launch, the package name was wrong, or two versions of the same app are installed on the device.
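The last two causes above (wrong package name, or multiple installed variants of the app) can be checked by parsing the output of `adb shell pm list packages`. This is a generic sketch; the sample output is hard-coded for illustration.

```python
# Check which installed packages match a given name fragment, to catch a
# mistyped package name (no hits) or ambiguous variants (multiple hits).
# In practice the input would come from: adb shell pm list packages
def find_packages(pm_output: str, keyword: str):
    pkgs = [line.split(":", 1)[1] for line in pm_output.splitlines()
            if line.startswith("package:")]
    return [p for p in pkgs if keyword in p]

sample = """package:com.example.game
package:com.android.settings
package:com.example.game.debug"""

hits = find_packages(sample, "com.example.game")
print(hits)  # two hits → an ambiguous install worth resolving before testing
```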

Can PerfDogService specify a port to start the service?

Currently, specifying a port is not supported.

How does PerfDogService specify the test window?

      - After starting the application, call getAppWindowsMap to get the window list and find the window named SurfaceView.

      - Fill in the window name into the subWindow field of StartTestAppReq, then call StartTestApp.
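The two steps above can be sketched as follows. The response shape (a mapping of window ID to window name) is an assumption for illustration; the real getAppWindowsMap and StartTestAppReq message types are defined in perfdog.proto.

```python
# Pick the SurfaceView window from a getAppWindowsMap-style response; the
# selected name would then go into the subWindow field of StartTestAppReq.
from typing import Dict, Optional

def find_surfaceview_window(windows: Dict[int, str]) -> Optional[str]:
    for _win_id, name in windows.items():
        if "SurfaceView" in name:
            return name
    return None

# Simulated window list (assumed shape):
windows = {
    1: "com.example.game/com.unity3d.player.UnityPlayerActivity",
    2: "SurfaceView - com.example.game",
}
sub_window = find_surfaceview_window(windows)
print(sub_window)  # → SurfaceView - com.example.game
```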

Where are the specific details of each method of PerfDogService?

In addition to the manual, refer to perfdog.proto for the definition file of the gRPC interface and protobuf structure.
