Unfortunately, we are a nation that doesn't appreciate precision.
In Ray Huang's famous history book "1587, a Year of No Significance: The Ming Dynasty in Decline," it is argued that China has never been a country managed by precise numbers. Historically, we have not emphasized numbers and accuracy, and this is an important reason why the modern industrial and technological revolution did not originate in China.
In Weigang Wan's book "万万没想到: 用理工科思维理解世界" (roughly, "Who'd Have Thought: Understanding the World with a Science and Engineering Mindset"), he observes that Europeans and Americans grow up using numbers to explain problems, unlike us, who argue our points with vague concepts.
Although I am not good at cooking, I have noticed that Chinese recipes often use terms like "a little salt" or "an appropriate amount of sugar," leaving the exact quantities to the chef's experience.
This lack of precision causes many problems. In game development and optimization, for example, if results cannot be quantified, quality is difficult to test and improve, which leaves little room for further enhancement.
Let's take a look at a specific example:
Due to my job, I often interview technical candidates, and a common topic we discuss is optimization. The subject is a relatively easy way to gauge the depth of a candidate's knowledge, since typical beginner-to-expert programming books rarely cover optimization, making it difficult to prepare for in advance. The ability to optimize also reveals a candidate's dedication, problem-solving approach, and creativity, since each optimization challenge is often a unique case that can be solved in different ways. For an interviewer, it is a versatile topic that allows me to dig deeper or switch gears depending on the candidate's response.
A crucial criterion for performance optimization is the magnitude of the improvement. Regrettably, many candidates have never established a performance measurement system in their projects, so they don't know how much their optimizations actually improved performance. Others can only offer a rough estimate, such as an increase in frame rate after optimization. In my opinion, both represent a preliminary stage of optimization, because many small improvements are not directly reflected in the frame rate. If we focus only on the "low-hanging fruit" (the easier performance issues) and neglect these minor optimizations, the overall optimization will never go very deep.
Data alone, used as a single dimension, lets us make a preliminary distinction between the abilities of technical professionals. Without data to back it up, the quality of your optimization cannot be assessed and your claims can be neither proven nor disproven; it is no different from superstition.
Reasonable optimization should involve a comprehensive testing system that measures as many details as possible. Each optimization attempt should be supported by thorough statistical data to determine whether there has been an improvement or regression, rather than relying on intuition or guesswork.
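As a minimal sketch of what "supported by thorough statistical data" can mean in practice, the Python snippet below times a baseline and an optimized code path over repeated runs and reports the mean, the spread, and the relative improvement. The two workload functions are placeholders standing in for whatever you are actually comparing:

```python
# Minimal before/after benchmark harness (illustrative sketch).
# `baseline` and `optimized` are placeholder workloads standing in
# for the code paths under comparison.
import time
import statistics

def measure(fn, runs=30):
    """Time `fn` over several runs; return (mean, stdev) in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples), statistics.stdev(samples)

def baseline():
    sum(i * i for i in range(100_000))   # stand-in for the old code path

def optimized():
    total = 0
    for i in range(0, 100_000, 2):       # stand-in for the new code path
        total += i * i

if __name__ == "__main__":
    base_mean, base_sd = measure(baseline)
    opt_mean, opt_sd = measure(optimized)
    print(f"baseline : {base_mean:.3f} ms (stdev {base_sd:.3f})")
    print(f"optimized: {opt_mean:.3f} ms (stdev {opt_sd:.3f})")
    print(f"improvement: {(base_mean - opt_mean) / base_mean:.1%}")
```

Reporting the spread alongside the mean matters: if the improvement is smaller than the run-to-run noise, you have not yet proven anything.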
Once you have established such a comprehensive system, you will gain confidence in pursuing more detailed optimizations and be motivated to investigate even the smallest performance improvements. These minor optimizations may not have any immediate impact on the frame rate, but you know they are there, working together over time to create a positive effect on the game's performance.
Another problem with the inability to quantify is that knowledge cannot be passed on explicitly or communicated efficiently; everything falls back on vague qualitative conclusions that convince no one. For example, if you were negotiating your salary with HR during an interview, would you say you want "a little salary"? Or would it be more appropriate to give a specific number or percentage?
Let's look at another professional example:
When an optimization brings side effects, how do you measure its impact? An optimized rendering algorithm may yield a performance improvement, which is relatively easy to anticipate and measure. But such optimizations often cost some image quality, an aspect that many people overlook. Simply stating that there is "no significant impact" is not convincing. A rigorous approach is to define image quality and compare the output with the original algorithm pixel by pixel under identical conditions. A less rigorous approach, though still far better than doing nothing, is to consult art colleagues, carefully examine comparison images, and assess the impact on image quality together.
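For the rigorous route, even a small script goes a long way. The sketch below (assuming Pillow and NumPy are installed; the file names are hypothetical) compares two screenshots captured under identical conditions pixel by pixel, reporting the mean per-channel error and the share of pixels that changed beyond a visibility threshold:

```python
# Pixel-by-pixel comparison of two screenshots captured under identical
# conditions (same scene, camera, resolution). A sketch assuming Pillow
# and NumPy; the file names are hypothetical placeholders.
import numpy as np
from PIL import Image

def image_diff(path_a, path_b, threshold=2):
    # int16 avoids overflow when subtracting 8-bit channel values
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    assert a.shape == b.shape, "images must have identical dimensions"
    diff = np.abs(a - b)
    mean_abs = diff.mean()                           # average per-channel error
    changed = (diff.max(axis=2) > threshold).mean()  # share of visibly changed pixels
    return mean_abs, changed

if __name__ == "__main__":
    mean_abs, changed = image_diff("original.png", "optimized.png")
    print(f"mean absolute error per channel: {mean_abs:.2f}")
    print(f"pixels changed beyond threshold: {changed:.2%}")
```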
With such comparisons, you can confidently inform the rest of the team that there is no discernible loss in visual effects, and the difference from the original image is less than 5%, with the lead artist stating that there is no noticeable difference to the naked eye. This kind of argument is much more persuasive than personal subjective judgments and can be used to reassure producers and tease teammates.
Not only can technical aspects be measured; planning and art can be quantified as well. Instead of blindly trusting intuition or catering to personal preferences, we should genuinely take the user's perspective: conduct research, read reports, analyze data, and find the appropriate planning and art direction. As a programmer, this is outside my area of expertise, so I'll leave it to the planning and art colleagues to elaborate.
In everyday life, self-quantification and letting data speak is a good habit.
I once switched from the full Pinyin input method to double Pinyin. Double Pinyin has a learning curve, so I needed data to prove the switch was worthwhile. Naturally, as a responsible geek, I had to quantify my progress.
First, I researched the numbers online: full Pinyin averages around 3.x keystrokes per character, while double Pinyin needs exactly 2, so in theory it should be faster. Then I tested my full Pinyin input speed on a sample text and recorded the result. I pushed through the initial pain of switching to double Pinyin and stuck with it.
During the first few weeks after the switch, I tested my input speed weekly on the same sample text. This served two purposes: seeing the improvement motivated me to keep using the new input method, and it let me verify whether I could reach the theoretical speed limit. After more than a month, my double Pinyin speed caught up with and eventually surpassed full Pinyin, finally stabilizing at a plateau where it no longer improved.
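If you want to run the same kind of experiment, a few lines of Python are enough. This is an illustrative sketch (the sample text and log file name are made up): it times one typing test, computes characters per minute, and appends the result to a CSV log that accumulates your progress week by week:

```python
# A tiny self-test for tracking typing speed over time (illustrative
# sketch; the sample text and log file name are made up).
import csv
import time
from datetime import date

SAMPLE = "the quick brown fox jumps over the lazy dog"
LOG_FILE = "typing_log.csv"

def run_test():
    print("Type the following text, then press Enter:")
    print(SAMPLE)
    start = time.perf_counter()
    typed = input("> ")
    elapsed = time.perf_counter() - start
    cpm = len(typed) / elapsed * 60  # characters per minute
    # (a real test would also check `typed` against SAMPLE for accuracy)
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), f"{cpm:.1f}"])
    print(f"{cpm:.1f} characters per minute (logged to {LOG_FILE})")

if __name__ == "__main__":
    run_test()
```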
These are some of my statistical findings. With quantification, I can see my speed improving, compare my typing efficiency on my phone and on my Mac, determine whether a mechanical keyboard helps my input speed, and even cite my data in articles to take pride in my seemingly insignificant progress.
So how can we let numbers speak for themselves in our work? Let's take optimization in game development as an example.
During development, it's essential to establish a testing framework for every aspect that needs comparison: memory, video memory, CPU overhead, GPU overhead, and, where possible, integration with various third-party testing and profiling tools. Continuously collect data as you work, and preserve the data's context as completely as possible: raw log files from testing tools, intermediate analysis results, screenshots, and videos. If you can retain the corresponding game version (usually not feasible given the scale of modern games), or at least the changelist number, even better. Context lets you quickly review your work later, fill in gaps, observe and summarize optimizations from different perspectives, and assemble a comprehensive report. If the context is not preserved and the environment cannot be restored, you will be out of luck when you later discover that some data is missing.
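As one possible shape for this kind of bookkeeping, the sketch below archives each test run as a JSON record that stores the metric values together with pointers to the raw artifacts. The field names, metrics, and paths are hypothetical placeholders, not a prescription:

```python
# A sketch of preserving measurement context alongside the numbers:
# each test run is archived as a JSON record pointing at its raw
# artifacts. Field names and paths are hypothetical placeholders.
import json
import time
from pathlib import Path

def archive_run(changelist, metrics, artifacts, out_dir="perf_runs"):
    """Store one test run: metric values plus references to raw context."""
    record = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "changelist": changelist,   # version-control changelist number
        "metrics": metrics,         # e.g. memory, CPU/GPU frame time
        "artifacts": artifacts,     # raw logs, screenshots, videos
    }
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    path = out / f"run_{changelist}_{int(time.time())}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

if __name__ == "__main__":
    archive_run(
        changelist=12345,
        metrics={"memory_mb": 812.4, "cpu_frame_ms": 9.7, "gpu_frame_ms": 11.2},
        artifacts={"profiler_log": "logs/profiler_12345.raw",
                   "screenshot": "shots/scene_12345.png"},
    )
```

A flat directory of small JSON records is deliberately low-tech: it survives tool changes, diffs cleanly in version control, and is trivial to aggregate into a report later.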
Afterward, summarize your findings and share them through emails and presentations. A detailed report benefits both your own technical growth and your communication with team leaders. Of course, knowing what data to collect and how to analyze it requires a broad perspective and deep thinking. Observe how others collect data in their projects, what data other game engines expose, and what statistics other testing tools provide, then consider how to apply these ideas to your own game. In the information age there are no isolated islands; breakthroughs rarely come from relying on your own thoughts alone.
If you can't find a way to quantify your work, keep searching for a solution—there are no shortcuts to success.