Source: TesterHome Community
I recently came across an article about an API test automation platform on TesterHome. After giving it a try, I started thinking about test automation in general, and about why so many manual testers find these platforms hard to use. Please take this as my personal take, not absolute truth.
In my opinion, before doing anything, you should clarify your motivation and goal. That alone can save you from many unnecessary detours.
At its core, API automation revolves around three steps: build the request, send it, and assert on the response. There’s nothing overly fancy about it. The real value still comes from the tester’s domain knowledge in designing effective test points.
We plan test points and their implementation around these three steps and the goals above.
I’ll use pytest as an example here, but the ideas apply to other frameworks as well.
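As a minimal sketch of those three steps in pytest, assuming a hypothetical endpoint and payload invented purely for illustration:

```python
import requests


def test_create_user():
    # Step 1: build the request (endpoint and payload are made up for illustration)
    payload = {"name": "alice", "email": "alice@example.com"}

    # Step 2: send it
    resp = requests.post("https://api.example.com/users", json=payload, timeout=5)

    # Step 3: assert on the response
    assert resp.status_code == 201
    assert resp.json()["name"] == "alice"
```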
A few principles for choosing the tooling:

| Principle | Explanation |
| --- | --- |
| Fit for purpose | Choose tools that match your project’s actual needs, not the hottest tech |
| Low learning curve | The team should be able to pick it up quickly |
| Easy to maintain | Scripts/cases should be readable and modifiable by anyone on the team |
| Good debugging experience | When a test fails, you should quickly know why |
| Integrates with existing stack | Plays nicely with your CI/CD, reporting, and version control |
Typical test points break down like this:

| Test Category | What to Validate | Example |
| --- | --- | --- |
| Basic connectivity | Can we reach the API? | HTTP status code (200, 401, 404, 500, etc.) |
| Request/response structure | Are required fields present? Data types correct? | JSON schema validation, field presence/absence |
| Business logic | Does the API do what it’s supposed to? | Creating an order updates inventory, sends a notification, etc. |
| Edge cases | How does it handle boundary values, missing params, invalid types? | Empty string, negative number, SQL injection attempt, malformed JSON |
| Data persistence | Is data correctly written to the database? | After POST /user, check the users table for the new record |
| State management | Does the API correctly handle state changes? | After DELETE /order, GET /order returns 404 |
| Idempotency | Do multiple identical requests produce the same result? | PUT /resource/1 with the same payload twice → same response |
| Error messages | Are error responses informative and consistent? | Validation failures return 400 with a clear “field X is required” |
| Performance baseline | Response time under normal load | 95th percentile < 200 ms for a typical GET request |
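A few of these categories translated into pytest. The /orders endpoints and field names below are hypothetical, just to make the checks concrete:

```python
import requests

BASE = "https://api.example.com"  # hypothetical system under test


def test_basic_connectivity():
    # Basic connectivity: the endpoint is reachable and returns 200
    resp = requests.get(f"{BASE}/orders", timeout=5)
    assert resp.status_code == 200


def test_response_structure():
    # Structure: required fields are present with the right types
    order = requests.get(f"{BASE}/orders/1", timeout=5).json()
    assert isinstance(order["id"], int)
    assert isinstance(order["items"], list)


def test_state_management():
    # State: once deleted, the resource is gone
    assert requests.delete(f"{BASE}/orders/1", timeout=5).status_code in (200, 204)
    assert requests.get(f"{BASE}/orders/1", timeout=5).status_code == 404


def test_idempotency():
    # Idempotency: the same PUT twice yields the same result
    payload = {"status": "shipped"}
    first = requests.put(f"{BASE}/orders/2", json=payload, timeout=5)
    second = requests.put(f"{BASE}/orders/2", json=payload, timeout=5)
    assert first.status_code == second.status_code
    assert first.json() == second.json()
```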
Implementation then brings its own challenges. The typical solutions, and when a shortcut is acceptable (see the sketch after this table):

| Challenge | Typical Solution (in pytest or similar) | When a Shortcut Is Acceptable |
| --- | --- | --- |
| Test data dependencies | Fixtures with scope="session" or scope="module" to set up once and reuse | Use real test data if setup cost is low |
| Chained API calls | Pass data between tests using a shared context dict or class attributes | If the chain is very long (10+ steps), consider testing links individually |
| Authentication / tokens | Fixture that obtains a token and injects it into all requests via a session object | Manual token refresh in each test if it’s a one‑off |
| Asynchronous operations | Use time.sleep() sparingly; better: poll with a timeout + retry logic | If the async job completes within 1‑2 seconds, a simple sleep is fine |
| Database validation | Use a dedicated test database + rollback transactions per test | Skip DB validation if the API’s response is sufficient |
| External mock services | Use pytest-mock or unittest.mock to patch external calls | Call the real external service if it’s a stable sandbox |
| Parameterized testing | @pytest.mark.parametrize for different inputs/expected outputs | Write separate test functions if there are only 1‑2 cases |
| Test ordering | Design tests to be independent; use pytest-order only as a last resort | Accept order dependency for a small, stable suite |
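Sketches of the most common of these solutions, conftest.py-style. The login endpoint, token field, database credentials, and table layout are all assumptions for illustration, not a fixed recipe:

```python
import time

import mysql.connector
import pytest
import requests

BASE = "https://api.example.com"  # hypothetical system under test


@pytest.fixture(scope="session")
def auth_session():
    # Authentication: fetch a token once per session, inject it into every request
    token = requests.post(f"{BASE}/login",
                          json={"user": "tester", "password": "secret"},
                          timeout=5).json()["token"]
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    return session


def wait_until(predicate, timeout=10, interval=0.5):
    # Async operations: poll with a deadline instead of a blind sleep
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


@pytest.mark.parametrize("quantity, expected_status", [
    (1, 201),    # normal case
    (0, 400),    # boundary: zero quantity rejected
    (-5, 400),   # invalid: negative quantity rejected
])
def test_create_order_quantities(auth_session, quantity, expected_status):
    # Parameterized testing: one function, several input/expectation pairs
    resp = auth_session.post(f"{BASE}/orders",
                             json={"sku": "A100", "quantity": quantity},
                             timeout=5)
    assert resp.status_code == expected_status


def test_user_persisted(auth_session):
    # Database validation: the POST actually wrote a row (credentials assumed)
    user_id = auth_session.post(f"{BASE}/users",
                                json={"name": "bob"}, timeout=5).json()["id"]
    conn = mysql.connector.connect(host="test-db", user="qa",
                                   password="secret", database="app")
    try:
        cur = conn.cursor()
        cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
        assert cur.fetchone() == ("bob",)
    finally:
        conn.close()
```

The wait_until helper can then replace any fixed sleep around an asynchronous job, e.g. polling a job-status endpoint until its state reports done.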
Current discussions focus heavily on technology, but the real pain points of automation are human ones.
Recommended Python Stack (for HTTP)
If you’re using Python, here’s a typical stack (for HTTP; for other protocols, find corresponding libraries):
| Component | Tool | Purpose |
| --- | --- | --- |
| Test framework | pytest | Write and execute tests; supports parameterization, fixtures, etc. |
| HTTP requests | requests | Send GET, POST, PUT, DELETE, etc. |
| Data driver | YAML + PyYAML | Store test data such as request parameters and expected responses |
| Database connection | mysql-connector-python | Connect to MySQL and validate data consistency |
| Logging | logbook | Record execution logs for debugging |
| Reporting | allure-pytest | Generate detailed, visual test reports with history |
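A minimal sketch of how these pieces fit together: YAML holds the data, pytest parameterizes over it, requests sends, and allure labels the report. The file path, case fields, and login endpoint are my own conventions, not a standard:

```python
import pathlib

import allure
import pytest
import requests
import yaml

# test_data/login_cases.yaml (hypothetical) might contain:
#   - name: valid login
#     payload: {user: tester, password: secret}
#     expected_status: 200
#   - name: wrong password
#     payload: {user: tester, password: nope}
#     expected_status: 401
CASES = yaml.safe_load(pathlib.Path("test_data/login_cases.yaml").read_text())


@allure.feature("login")
@pytest.mark.parametrize("case", CASES, ids=lambda c: c["name"])
def test_login(case):
    # One data-driven test covers every row of the YAML file
    resp = requests.post("https://api.example.com/login",
                         json=case["payload"], timeout=5)
    assert resp.status_code == case["expected_status"]
```

Adding a scenario is then a YAML edit, not a code change, which keeps the data driver's promise from the table above.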
The core problem is losing touch with real users. Many platforms claim “unique test writing” and “low barrier, easy to use,” but they’re often just crude web versions of JMeter or Postman, or bizarre low‑code drag‑and‑drop interfaces. They make test creation inflexible and debugging time‑consuming, and their over‑componentization creates a steep learning curve without even the transferable skills of a tool like JMeter.
| Flaw | What It Looks Like | Why It’s a Problem |
| --- | --- | --- |
| “No code” obsession | Drag‑and‑drop, visual workflow builders | Actually slower than writing a few lines of code. Hard to version control. |
| Over‑generalization | Endless configuration options to handle “every possible scenario” | Analysis paralysis. Most options are never used. |
| Component explosion | Pre‑request scripts, post‑request scripts, conditionals, loops, variables, etc. as separate visual blocks | Learning the platform becomes harder than learning the underlying framework. |
| Vendor lock‑in mindset | Proprietary ways of defining tests that don’t map to standard tools | Skills don’t transfer. You’re stuck if the platform dies or gets abandoned. |
| Poor debugging UX | “Test failed” with no stack trace, no request/response diff, no logs | Wastes hours figuring out whether the bug is in your test or the system under test. |
| Performance theater | Built on a fancy React/Vue frontend + microservices backend for a simple task | Slow to load, slow to run. The original pytest finishes in 2 seconds; the platform takes 30 seconds just to start. |
Newcomers often feel that not using design patterns makes their automation less professional. Yes, design patterns can improve maintainability and scalability, but not every project needs them. For simple API tests, introducing patterns only adds complexity and maintenance overhead. Let patterns evolve as the need arises; don’t apply them for their own sake. Focus on writing simple, readable, maintainable scripts.
Even a straightforward structure like this is perfectly fine if it meets your goals:
```
project_root/
|-- api_tests/
|   |-- project1/
|   |   |-- __init__.py
|   |   |-- test_api_project1.py
|   |   |-- utils/
|   |       |-- __init__.py
|   |       |-- api_client_project1.py
|   |-- project2/
|       |-- __init__.py
|       |-- test_api_project2.py
|       |-- utils/
|           |-- __init__.py
|           |-- api_client_project2.py
|-- conftest.py
|-- pytest.ini
|-- requirements.txt
```
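The matching api_client_project1.py can stay just as plain: a thin wrapper around requests, no patterns required. The endpoints here are placeholders:

```python
# api_tests/project1/utils/api_client_project1.py
import requests


class Project1Client:
    """Thin convenience wrapper around the project's HTTP API."""

    def __init__(self, base_url, token=None):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()  # reuse one connection and one header set
        if token:
            self.session.headers["Authorization"] = f"Bearer {token}"

    def get_user(self, user_id):
        return self.session.get(f"{self.base_url}/users/{user_id}", timeout=5)

    def create_user(self, payload):
        return self.session.post(f"{self.base_url}/users", json=payload, timeout=5)
```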
Whether it’s Java TestNG, Python pytest, JMeter, Postman, or a platform built on top of them, they all do the same thing: send requests and assert responses. No matter how complex the implementation, the test goal remains identical. The real challenge is always test case maintenance and management, even on platforms. The quality of test case design and execution directly impacts effectiveness; poorly designed cases miss important issues.
Automation is an abstraction of business logic. Compared to manual testing, it’s indeed more efficient, but only under one essential condition: a stable project.
Don’t try to cover all interfaces and scenarios from day one. That adds unnecessary pressure and maintenance cost. Start incrementally: begin with a few core interfaces and key scenarios, then expand gradually.
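One lightweight way to keep that discipline in pytest is a marker for the core paths, so the small suite runs on its own while everything else stays manual. The marker name and endpoint are my choice:

```python
import pytest
import requests


@pytest.mark.core  # register in pytest.ini: markers = core: critical-path smoke tests
def test_login_happy_path():
    # One of the handful of core scenarios automated first
    resp = requests.post("https://api.example.com/login",
                         json={"user": "tester", "password": "secret"},
                         timeout=5)
    assert resp.status_code == 200
```

Run just that subset with `pytest -m core`, and widen the marker’s coverage as the suite matures.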
After all the criticism above, let me summarize what actually works for API automation:
| What Doesn’t Work | What Actually Works |
| --- | --- |
| Bloated “universal” test platforms | Scripting + a mature framework (like pytest) |
| Over‑componentization and endless config options | Simple, readable, copy‑paste friendly scripts |
| No‑code drag‑and‑drop that’s slower than coding | Writing actual code with GPT/CodeGeeX assistance |
| Trying to automate everything from day one | Starting small with core paths, then expanding |
| Chasing advanced tech to show off skills | Focusing on practicality, speed, and real test value |
| Proprietary platform logic that doesn’t transfer | Standard tools with transferable skills (pytest, requests, etc.) |
The bottom line: The key to good API automation isn’t the tool or platform you use. It’s whether your test cases point in the right direction, based on solid business understanding. Keep it simple. Keep it maintainable. And remember — automation is only a supporting measure, not a silver bullet.