Choosing the Right API Testing Tools: A Comprehensive Guide

API testing can be performed either by recording scripts with GUI tools such as soapUI, or by writing code with open source frameworks.

The e2e test suite on our project team has been running for years. After enduring all kinds of inexplicable environment problems and slow runs, the team finally decided to introduce API functional tests. At the same time, provided test coverage is preserved as much as possible, redundant e2e test scripts can be cleaned up to improve continuous integration efficiency (see the test pyramid for the strategy).

So the question is: how do we select tools for API functional testing? API functional tests can be performed either by recording scripts with GUI tools such as soapUI or Postman, or by writing code with open source frameworks. Given the actual situation of our project, we chose the latter, which makes customization and continuous integration easier.

 

Tool Selection

There are many popular open source frameworks for API testing on the market. The first that comes to mind is REST Assured, a REST API testing framework implemented in Java. It is a lightweight REST API client that lets you write code directly to send HTTP requests to a server and verify the responses. The official introduction reads:

Testing and validation of REST services in Java is harder than in dynamic languages such as Ruby and Groovy. REST Assured brings the simplicity of using these languages into the Java domain.

Opening the GitHub commit history, we find that people have been committing code to this framework recently, which indicates it is well maintained. It goes on the candidate list.

In addition, we learned through various channels that there is a very popular NodeJS testing framework, supertest, developed by the well-known developer tj and others. It is an API testing framework derived from the famous superagent. The official description reads:

Super-agent driven library for testing node.js HTTP servers using a fluent API.

HTTP assertions made easy via superagent.

Let's compare the two tools and weigh the trade-offs from several angles:

Our project codebase is based on Java but also includes NodeJS code, so from an environment perspective neither tool needs extra configuration. A tie.

In terms of learning cost, plenty of learning material for both tools is easy to find online, and the official documentation is quite complete. A draw again.

In terms of maintenance cost, supertest is based on a dynamic language and wastes no time on compilation: if the code is wrong, change it and run it again immediately. Moreover, the official claim that "SuperTest works with any test framework" suggests strong extensibility.

From the perspective of portability, supertest runs on NodeJS. In theory, if the framework is well built, the same set of scripts can run anywhere Node is installed.

Finally, ease of use. For installation, REST Assured is usually set up with tools such as Maven and Gradle, and configuring the runtime environment is a hassle; supertest is ready to use after a single npm install command. Given that I'm lazy, supertest wins outright, so that's what we'll use.

Learning

Start learning supertest.

First, open its GitHub page to pick up some key facts about supertest:

It inherits all the APIs and usage patterns of superagent.

Before using it, install Node, then run npm install supertest --save-dev (or cnpm install supertest --save-dev) to install supertest.

Like superagent, a request is executed by calling .end().

Assertions are made by calling .expect(). If you pass a number, by default it checks the status code returned by the HTTP request.

Next, let's analyze the official sample code, then imitate it to build our own.

var request = require('supertest');
var express = require('express');
var app = express();

 

At a glance, the app here only serves as a mock server; the supertest-specific part of the test is just the following:

request(app)
  .get('/user')
  .expect('Content-Type', /json/)
  .expect('Content-Length', '15')
  .expect(200)
  .end(function(err, res) {
    if (err) throw err;
  });

 

Analyzing this test code: request(app) wraps the server under test, then .expect() verifies the Content-Type and Content-Length response headers, and finally that the HTTP status is 200. This is the basic way to write supertest.
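For reference, .expect() also accepts a body string or regular expression, and even a custom assertion function; a small sketch (the /user endpoint and response shape here are hypothetical):

request(app)
  .get('/user')
  .expect(200)
  .expect(/john/)                  // assert the response body matches a regex
  .expect(function(res) {          // custom assertion: throw to fail the test
    if (!res.body.name) throw new Error('missing name field');
  })
  .end(function(err, res) {
    if (err) throw err;
  });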

Practice

Let's experiment against GitHub (jokingly called the world's largest same-sex dating platform) and design a test case that checks whether the home page loads successfully.

Preparation: open Chrome, open the Network tab of the developer tools, look at what requests the GitHub home page makes, find the request for the home page itself, and note its URL, method, and other key details.

Implementation: open vim (or any text editor, even Notepad) and try a short piece of code:

var request = require('supertest')('https://github.com/');

request
  .get('/')
  .expect(2010)
  .end(function(err, res) {
    if (err) throw err;
  });

Save it as test.js or similar, then run it:

node test.js

 

You will then see an error result: the assertion fails because the actual status is 200, not 2010.

This shows that our assertion works! Change .expect(2010) to the actually returned .expect(200) and try again. If no error is reported, the test passes!

Optimization: although the test passes, the readability of the test output leaves something to be desired; in particular, there is no feedback at all when a test succeeds.

So we consider using the test framework Mocha, mentioned in the official examples, to improve this test.

Mocha is an excellent JavaScript testing framework, similar in style to Jasmine. The official introduction reads:

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases. Hosted on GitHub.

The framework provides test reports in a variety of styles. Combined with supertest, our API test reports can reach a new level of visualization.

While we're at it, let's also add a common POST request test:

var request = require('supertest')('https://github.com');

describe('Github home page', function() {
  this.timeout(10000);

  before('must be on home page', function(done) {
    request.get('/')
      .expect(200, done);
  });

  it('could be navigated to register page', function(done) {
    request.get('/join')
      .expect(200, done);
  });

  it('will refuse the request if username has been taken', function(done) {
    request.post('/signup_check/username')
      .type('form')
      .send('value=lala')
      .expect(404)
      .end(function(err, res) {
        if (err) return done(err);
        done();
      });
  });
});

 

This test is much more readable than the previous version. With the help of the Mocha framework, each test carries a description, so you can see at a glance what the code is testing.

Here, before() is a hook provided by Mocha, equivalent to beforeAll: it runs once before all tests. Other hooks include after(), which runs once after all tests finish; beforeEach(), which runs before each test; and afterEach(), which runs after each test. Hooks are very convenient for cleaning up test data.
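A minimal sketch of the execution order of these hooks (the setup and cleanup bodies are placeholders):

describe('hook order', function() {
  before(function() { /* runs once, before all tests: e.g. prepare test data */ });
  beforeEach(function() { /* runs before every it() */ });
  afterEach(function() { /* runs after every it() */ });
  after(function() { /* runs once, after all tests: e.g. clean up test data */ });

  it('first test', function() { /* ... */ });
  it('second test', function() { /* ... */ });
});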

Then describe() states what is being tested:

describe('describe the test object', function() {
  // test cases
});

The 'it()' in 'describe()' describes specific test cases:

it('describe the test case', function(done) {
  // test case implementation
  done();
});

 

done() is a callback provided by Mocha. Without done(), Mocha would keep waiting for the callback until it times out. Incidentally, Mocha's default timeout is 2 seconds, so we add this.timeout(10000) inside describe() to raise the timeout to 10 seconds.

Note that when using Mocha, you can skip supertest's .end() and pass done directly to .expect(), as in .expect(200, done). If you do use the .end() style, however, you still need to call done() inside the .end() callback.
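The two styles side by side (the /status endpoint here is illustrative):

// style 1: pass done straight to expect()
it('responds with 200', function(done) {
  request.get('/status').expect(200, done);
});

// style 2: use .end() and call done() yourself
it('responds with 200 (explicit end)', function(done) {
  request.get('/status')
    .expect(200)
    .end(function(err, res) {
      if (err) return done(err);
      done();
    });
});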

The .send('value=lala') in the last case is the POST request body, and its type is specified through .type(); .type() defaults to JSON (see the superagent source for details), while this example uses the form type. Of course, you could also append the parameter directly to the POST URL, as in request.post('/signup_check/username?value=lala'), instead of using .send(). But if you want to parameterize your tests, .send() is recommended.
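The two ways of passing the parameter, side by side (whether the server treats them identically depends on the endpoint):

// parameter in a form-encoded request body
request.post('/signup_check/username')
  .type('form')
  .send('value=lala');

// the same parameter in the query string
request.post('/signup_check/username?value=lala');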

Mocha also provides a watch mode: run mocha -w test.js and Mocha will monitor the test script and automatically rerun it whenever the script changes.

The test results are as follows:

After updating .expect(404) in the last case to .expect(403), the test passes.

Both the test code and the test report are now far more readable than before. You can also use the --reporter option to render the test report in various styles.
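For example, with two of Mocha's built-in reporters:

mocha --reporter spec test.js
mocha --reporter nyan test.js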

Filling in the gaps: the problems of code readability and test reporting are now solved. Looking back over the whole demo, I suddenly realized that after a long day of research I had overlooked one thing: in many business scenarios, calling an API requires verifying that the user is logged in. In other words, cookies need to be preserved across different HTTP requests.

Fortunately, supertest covers this: its agent feature solves the problem.

var request = require('supertest');

describe('test cookie', function() {
  var agent = request.agent('server under test');

  it('should save cookies', function(done) {
    agent
      .get('/')
      .expect('set-cookie', 'cookie=hey; Path=/', done);
  });

  it('should send cookies', function(done) {
    agent
      .get('/return')
      .expect('hey', done);
  });
});

 

As you can see, the first case tests that the cookie cookie=hey is set; in the second case, because the instance under test has changed from a plain request to request.agent(), the agent carries the cookie "hey" into the second case, so when accessing /return there is no need to set the cookie again.

Alternatively, we can achieve the same effect by setting the cookie before each request:

.set('Cookie', 'a cookie string')
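In context, that might look like this (the server, path, and cookie string are all illustrative):

it('can send a preset cookie', function(done) {
  request('https://server-under-test')
    .get('/profile')
    .set('Cookie', 'session=abc123')  // hypothetical cookie string
    .expect(200, done);
});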

Finally, if you need to test authorized resources, superagent also provides the .auth() method for authentication.

request.get('http://local')
  .auth('tobi', 'learnboost')
  .end(callback);

By now the research seems sufficient to cover most test scenarios. Next, we just need to design the test code structure, abstract out common components, parameterize, and separate test data. But think about it: if you have to write a lot of tests, do you really want to run them one by one with mocha xxx commands?

Fortunately, our project team already uses the Grunt build tool. A quick Google search turned up a Grunt plugin, grunt-mocha-test, that looks pretty good. According to its instructions, you just add a section to the Grunt configuration file.
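A minimal sketch of that section, following the plugin's documented configuration format (the src path is illustrative):

module.exports = function(grunt) {
  grunt.initConfig({
    mochaTest: {
      test: {
        options: {
          reporter: 'spec'       // report format
        },
        src: ['test/**/*.js']    // run every .js test file under test/
      }
    }
  });
  grunt.loadNpmTasks('grunt-mocha-test');
};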

Here, reporter is the report format and src is the path of the scripts to execute; *.js means all files ending in .js will be run.

Finally, register a Grunt task, such as:

grunt.registerTask('apitest', 'mochaTest');

Then all the test files can be executed on the command line simply with:

grunt apitest

It is also convenient to configure a new test task in Jenkins for continuous integration.

With that, tool selection is complete: supertest at the core, Mocha as the wrapper, and Grunt as the runner.

Summary

To sum up, it is recommended to consider these aspects when selecting tools:

Fit with the project's technology stack

Learning cost, maintenance cost, and extensibility of the new tool

Whether simple code can cover all business scenarios, such as non-REST-style APIs or other special cases

Readability of the test code and visualization of the test report

Simplicity of script execution

Published in Testerhome

Author: Quandan Zhang

Original link: https://testerhome.com/topics/5372

 
