In the ecosystem of Large Language Models, a Token is the atomic unit of processing. Unlike human readers, who process characters, LLMs interpret language through a "Tokenizer".
Tokenization Mechanism: The system breaks sentences into words or sub-words based on a predefined vocabulary list, where each entry has a unique ID.
Composition: A token can be a word, a punctuation mark, or even a single letter within a word (e.g., "R", "A", and "P" in "RAP").
Efficiency: Much as the brain retrieves familiar chunks rather than individual letters, sub-word tokenization keeps the vocabulary compact while keeping sequences short, reducing memory and computational costs.
Billing and Limits: Commercial LLMs often bill by token (e.g., 0.01 yuan per 1K tokens). A spec such as "72B 32K/16K" indicates the model's scale (72 billion parameters) and its maximum input/output token windows (32K in, 16K out).
Truncation and Errors: Exceeding token limits leads to system errors or automatic content truncation.
Performance Correlation: Latency grows with token count; a 100K-token request takes significantly longer to process than a 1K-token one.
Tooling: Testers should use libraries like transformers to pre-calculate the token lengths of test datasets, as sketched below.
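A minimal sketch using the Hugging Face transformers library; the model ID here is only an example, and the tokenizer must match the model actually under test:

```python
from transformers import AutoTokenizer

# Load the tokenizer matching the model under test
# ("Qwen/Qwen2.5-72B-Instruct" is an illustrative model ID).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-72B-Instruct")

prompts = [
    "What is the capital of France?",
    "Summarize the attached contract in three bullet points.",
]

for prompt in prompts:
    n_tokens = len(tokenizer.encode(prompt))
    print(f"{n_tokens:5d} tokens | {prompt[:50]}")
```

Running this over a whole dataset makes it straightforward to bucket prompts by token length before a test run.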
Traditional performance testing focuses on QPS and Average Response Time. However, LLMs require specific metrics due to their streaming output nature.
Time to First Token (TTFT): The latency between the user's request and the arrival of the first token. This is the most critical metric for user experience.
Token Generation Rate (Token Throughput): Often called the "articulation rate," measuring tokens returned per second (tokens/s). High-performance models typically need to sustain $\ge$ 20 tokens/s under concurrency.
QPM (Queries Per Minute): Because LLM responses take many seconds, throughput is counted per minute rather than per second.
Input/Output Token Magnitude: Performance results must be reported against data scale (e.g., grouping test data into 16K-32K and 32K-48K token buckets).
Stress testing LLMs involves handling streaming interfaces such as SSE (Server-Sent Events) and WebSockets, or streaming clients such as the OpenAI SDK.
Packet Analysis: Testers must distinguish between different types of packets, including "thinking" packets (for reasoning models), answer packets, statistical packets, and heartbeats.
Thinking Process: Reasoning models output their internal logic before providing an answer.
Metrics Adjustment: For these models, TTFT should be measured from the arrival of the first "thinking" packet, as in the sketch below.
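A minimal sketch of measuring TTFT against an OpenAI-compatible streaming endpoint. The base URL, model name, and the reasoning_content field (used by some reasoning-model APIs for "thinking" packets) are assumptions that vary by vendor:

```python
import time
from openai import OpenAI

# Assumes a local OpenAI-compatible server; URL and key are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

start = time.perf_counter()
ttft = None
n_chunks = 0

stream = client.chat.completions.create(
    model="test-model",  # placeholder model name
    messages=[{"role": "user", "content": "Briefly explain the KV cache."}],
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue  # statistical/heartbeat chunks may carry no choices
    delta = chunk.choices[0].delta
    # Count the first "thinking" packet as the first token, per the
    # adjustment above; reasoning_content is a vendor-specific field.
    text = getattr(delta, "reasoning_content", None) or delta.content
    if text:
        if ttft is None:
            ttft = time.perf_counter() - start
        n_chunks += 1  # rough proxy: one streamed chunk is about one token

elapsed = time.perf_counter() - start
print(f"TTFT: {ttft:.3f}s, ~{n_chunks / (elapsed - ttft):.1f} tokens/s")
```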
Locust: Commonly used for stress testing, but reporting TTFT and token rates requires firing custom request events (see the Locust sketch below).
Boomer: A Go-based worker for Locust, capable of sustaining high concurrency (e.g., 100,000 QPM) that Python-based Locust workers may struggle to reach.
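A minimal Locust sketch that reports TTFT as its own stats entry; the endpoint path and payload mirror the streaming example above and are illustrative:

```python
import time

from locust import HttpUser, between, task


class LLMUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def chat(self):
        payload = {
            "model": "test-model",  # placeholder
            "messages": [{"role": "user", "content": "Hello"}],
            "stream": True,
        }
        start = time.perf_counter()
        ttft = None
        n_chunks = 0
        with self.client.post("/v1/chat/completions", json=payload,
                              stream=True, catch_response=True) as resp:
            for line in resp.iter_lines():
                # SSE data lines look like b"data: {...}"; skip everything else.
                if not line.startswith(b"data:") or line.endswith(b"[DONE]"):
                    continue
                if ttft is None:
                    ttft = time.perf_counter() - start
                n_chunks += 1
            resp.success()
        # Fire a custom event so TTFT appears as a separate stats row.
        self.environment.events.request.fire(
            request_type="TTFT",
            name="chat_ttft",
            response_time=(ttft or 0) * 1000,  # Locust uses milliseconds
            response_length=n_chunks,
            exception=None,
        )
```

The token rate can be reported the same way, by firing a second event with tokens per second in place of the response time.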
Prefill Stage: Computes the key/value (K/V) matrices for the full prompt and generates the first token. Performance here determines TTFT.
Decode Stage: Reads from the KV Cache to generate subsequent tokens one at a time. Performance here determines the Token Generation Rate.
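The two stages combine into a common back-of-envelope latency model (an approximation, not an exact formula):

$$T_{\text{total}} \approx \text{TTFT} + \frac{N_{\text{output}} - 1}{\text{token rate}}$$

For example, with a TTFT of 0.5 s and a rate of 20 tokens/s, a 1,000-token answer takes roughly $0.5 + 999/20 \approx 50.5$ s, which is why the decode stage dominates for long outputs.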
KV Cache Optimization: Caching computed K/V data in GPU or system memory so that requests with identical or shared prefixes can skip recomputation during prefill.
P/D Separation (Prefill/Decode Separation): Decoupling the prefill and decode stages into different instances to optimize them independently.
Model Quantization: Reducing storage precision (e.g., FP32 to FP16, INT8, or INT4) to shrink the model and speed up inference, though this may impact accuracy; the arithmetic below shows the scale of the savings.
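Weight memory scales linearly with bytes per parameter, so the savings are easy to estimate: a 72B-parameter model needs roughly $72 \times 10^9 \times \text{bytes per parameter}$ of storage for weights alone, i.e., about 288 GB at FP32 (4 bytes), 144 GB at FP16 (2 bytes), 72 GB at INT8 (1 byte), and 36 GB at INT4 (0.5 bytes), excluding activations and the KV cache.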
MTP (Multi-Token Prediction): Using small auxiliary prediction modules to propose several tokens per step, which the main model then verifies, significantly increasing the token generation rate.
TP (Tensor Parallelism): Splitting individual weight tensors across multiple GPUs, each computing a partial result in parallel (illustrated in the sketch after this list).
DP (Data Parallelism): Running multiple model replicas so that different requests or batches are processed simultaneously.
PP (Pipeline Parallelism): Cutting the model into sequential stages, each hosted on its own GPU(s), with data flowing through the pipeline.
EP (Expert Parallelism): Used in MoE (Mixture of Experts) architectures, where different "experts" (sub-networks) are assigned to different GPUs to handle specific domains.
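To make TP concrete, here is a toy sketch of column-wise tensor parallelism on a single linear layer, with two NumPy arrays standing in for two GPUs; real frameworks shard across devices and all-gather the partial outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear layer: y = x @ W, with W split column-wise across two "GPUs".
x = rng.standard_normal((1, 512))      # input activation
W = rng.standard_normal((512, 1024))   # full weight matrix

W0, W1 = np.split(W, 2, axis=1)        # each shard lives on one GPU

# Each "GPU" computes its slice of the output in parallel...
y0 = x @ W0
y1 = x @ W1

# ...and an all-gather concatenates the partial results.
y_tp = np.concatenate([y0, y1], axis=1)

assert np.allclose(y_tp, x @ W)        # matches the single-device result
```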
To simulate real-world traffic and avoid artificial TTFT spikes caused by all virtual users queuing requests at once, use Locust's gradual ramp-up (e.g., adding 1-2 concurrent users per second), as sketched below.
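A minimal sketch using Locust's LoadTestShape to ramp up gradually (the user counts and duration are arbitrary; the CLI flags -u 100 -r 2 achieve a similar linear ramp):

```python
from locust import LoadTestShape


class GradualRamp(LoadTestShape):
    """Add 2 users per second up to 100, avoiding a thundering-herd start."""

    max_users = 100
    spawn_rate = 2  # users added per second

    def tick(self):
        run_time = self.get_run_time()
        if run_time > 600:  # stop the test after 10 minutes
            return None
        users = min(self.max_users, max(1, int(run_time * self.spawn_rate)))
        return (users, self.spawn_rate)
```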
Accuracy Testing: Every performance optimization must be validated against accuracy benchmarks (e.g., Math500) to ensure optimizations haven't degraded model quality.
Success Rate: Monitor for empty answer packets or truncated thinking processes.
Bypass Testing: Mirroring real production traffic into the test environment to verify performance under authentic user behavior.