
Exploring the Fundamentals of Server-Side Systems in Internet Applications: Challenges and Solutions


We often hear about how impressive the server-side systems of certain internet applications are, such as QQ, WeChat, and Taobao. So, what exactly makes the server-side system of an internet application so remarkable? Why does massive user traffic make a server-side system more complex? This article aims to explore the fundamental concepts of server-side system technology from the very basics.

Load capacity is the reason for the existence of distributed systems.

When an internet service becomes popular, the most prominent technical issue encountered is that the server becomes extremely busy. When 10 million users visit your website daily, handling the load with just one machine is impossible, regardless of the server hardware used. Therefore, when internet programmers solve server-side problems, they must consider how to use multiple servers to provide services for the same internet application. This is the origin of the so-called "distributed systems."

However, the problems caused by a large number of users accessing the same internet service are not simple. On the surface, to serve many users' requests from the internet, the most basic demand is performance: users complain that web pages open slowly, or that actions in online games stutter, and so on. These requirements for "service speed" actually comprise the following parts: high throughput, high concurrency, low latency, and load balancing.

High throughput means that your system can carry a large number of users at the same time; what matters here is the number of users the whole system can serve simultaneously. This throughput certainly cannot be achieved with a single server, so multiple servers must cooperate to reach it. In that cooperation, how to use the servers effectively, so that none of them becomes a bottleneck limiting the processing capacity of the whole system, is something a distributed architecture must carefully weigh.

High concurrency is an extension of high throughput. When we carry many users, we naturally want each server to work at its full potential, without unnecessary overhead or idle waiting. However, software does not simply handle multiple tasks at once and process "as much as possible" by itself. Often, our programs incur extra overhead just deciding which task to handle next. This, too, is a problem distributed systems must solve.

Low latency is not a problem for services with few users, but returning results quickly becomes much harder under a large volume of visits. Requests may be queued, and if the queue grows too long, resource problems such as memory exhaustion or saturated bandwidth follow; if queued requests fail and are retried, overall latency climbs even higher. Distributed systems therefore adopt many strategies for sorting and distributing requests, so that more servers can respond to users' requests as soon as possible. However, since a distributed system must pass a user's request through many hops, the distribution and forwarding operations can themselves add latency. So in addition to distributing requests, a distributed system should try to minimize the number of distribution layers, so that requests are processed as soon as possible.

Since internet service users come from all over the world, they may be connected through various networks and lines with different latencies in the physical space, and they may also be located in different time zones. To effectively deal with this complexity, it is necessary to deploy multiple servers in different locations to provide services. At the same time, we also need to enable concurrently occurring requests to be effectively handled by multiple different servers. Load balancing is an inherent task that distributed systems need to accomplish.

Since distributed systems are almost the most basic method to solve the load capacity problem of internet services, mastering distributed system technology becomes extremely important for server-side programmers. However, the issues with distributed systems are not easily solved by simply learning a few frameworks and using a few libraries. This is because when a program that runs on a single computer becomes a system that runs simultaneously and collaboratively on countless computers, it brings significant differences in both development and operation.

Basic methods for distributed systems to improve load capacity

Layered model (routing, proxy)

The simplest idea for using multiple servers to collaboratively complete computing tasks is to allow each server to handle all requests and then randomly send requests to any server for processing. In the early days of internet applications, DNS round-robin was used in this way: when a user entered a domain name to visit a website, the domain name would be resolved into one of multiple IP addresses. Then, the website access request would be sent to the server corresponding to that IP address. In this way, multiple servers (multiple IP addresses) can work together to handle a large number of user requests.
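You can observe DNS round-robin from any client: resolving a multi-homed domain returns several addresses. A minimal Java sketch, where the domain name is a placeholder:

import java.net.InetAddress;

public class DnsRoundRobinDemo {
    public static void main(String[] args) throws Exception {
        // A site using DNS round-robin resolves to several A records;
        // "example.com" is a placeholder -- substitute a real multi-homed domain.
        InetAddress[] addresses = InetAddress.getAllByName("example.com");
        for (InetAddress addr : addresses) {
            System.out.println(addr.getHostAddress());
        }
    }
}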

However, simply forwarding requests at random cannot solve all problems. For example, many internet services require users to log in. After logging in on one server, a user will initiate multiple requests; if we randomly forward these requests to different servers, the user's login state is lost and some requests fail. Relying on a single layer of forwarding is therefore not enough, so we add another batch of servers that, based on the user's cookie or login credentials, forward each request again to the specific server handling the business logic.

In addition to the login requirement, we also find that much data needs to be handled by databases, and this data often can only be centralized in one database; otherwise, queries on one server will not find the data stored on the others. So we often separate databases out onto a batch of dedicated servers.

At this point, a typical three-layer structure appears: access, logic, and storage. However, this three-layer structure is not a panacea. For example, when users need to interact online (online games are a typical case), the session data split across different logic servers cannot see each other. We then need a dedicated system, something like an interaction server: when a user logs in, it records that this user is logged in on a certain server, and all interactive operations go through it first, so that messages can be correctly forwarded to the target user's server.

Another example is an online forum (BBS) system: the articles users post cannot all be written into a single database, as too many read requests would overload it. We often write to different databases per forum section, or write to multiple databases simultaneously. Storing article data on different servers lets us handle a large volume of requests, but when a user reads an article, a dedicated program must find out which server holds it. So we set up a dedicated proxy layer that receives all article requests first and then, following our preset storage plan, locates the right database and retrieves the data.

As the examples above show, distributed systems have a typical three-layer structure, but in practice they often have more layers, designed around business requirements. To forward requests to the correct process, we design many processes and servers whose specific job is forwarding; they are usually named Proxy or Router, and a multi-layer structure tends to contain various such proxy processes. These proxies usually connect the front and back ends via TCP. However, although TCP is simple to use, recovering connections after a failure is hard, and TCP network programming is itself somewhat complex. Therefore, people designed a better inter-process communication mechanism: the message queue.

Although powerful distributed systems can be built purely out of proxy or router processes, managing them is very complex. So, building on the layered model, people devised more methods to make such layered systems simpler and more efficient.

Concurrency model (multithreading, asynchronous)

When writing server-side programs, we clearly know that most programs will handle multiple simultaneous requests. Therefore, we cannot simply calculate the output from a single input like in a HelloWorld program. Since we receive multiple inputs at the same time and need to return multiple outputs, we often encounter situations where we need to "wait" or "block," such as waiting for database processing results or waiting for results from another process. If we process requests one after another, the idle waiting time will be wasted, resulting in increased response delays for users and a significant decrease in the overall system throughput.

There are two typical solutions in the industry for handling multiple requests simultaneously: multithreading and asynchronous. In early systems, multithreading or multiprocessing was the most commonly used technology. Writing code for this technology is relatively simple, as the code in each thread is executed in sequence. However, since multiple threads run simultaneously, you cannot guarantee the order of the code between multiple threads. This is a serious problem for logic that needs to process the same data. The simplest example is displaying the number of views for a news article. If two ++ operations run simultaneously, the result may only increase by 1 instead of 2. Therefore, under multithreading, we often need to add locks to data, and these locks may in turn cause deadlocks.
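The lost-update problem and its conventional fix can be shown in a few lines of Java; this is a minimal sketch with illustrative names:

import java.util.concurrent.atomic.AtomicInteger;

public class ViewCounter {
    private int unsafeCount = 0;                       // plain field: lost updates possible
    private final AtomicInteger safeCount = new AtomicInteger();

    // Two threads calling this at once can both read the same value,
    // so two increments may raise the count by only 1.
    public void incrementUnsafe() { unsafeCount++; }

    // An atomic (or lock-protected) increment never loses an update,
    // at the cost of synchronization overhead.
    public void incrementSafe() { safeCount.incrementAndGet(); }
}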

As a result, the asynchronous callback model became more popular than multithreading. In addition to solving the deadlock problem of multithreading, asynchronous can also solve the unnecessary overhead caused by frequent thread switching in multithreading: each thread requires an independent stack space, and when multiple threads run in parallel, the data in these stacks may need to be copied back and forth, consuming extra CPU resources. Also, since each thread occupies stack space, the memory consumption is huge when a large number of threads exist. The asynchronous callback model can solve these problems well, but it is more like a "manual" parallel processing method that requires developers to implement how to handle parallelism.

Asynchronous callbacks are based on non-blocking I/O operations (network and file), so we don't get "stuck" on a function call when calling read/write functions but immediately return the result of "whether there is data or not." Linux's epoll technology uses the underlying kernel mechanism to quickly "find" connections/files with data available for reading and writing. Since each operation is non-blocking, our program can handle a large number of concurrent requests with just one process. Because there is only one process, the order of all data processing is fixed, and there is no possibility of the statements of two functions being executed alternately in multithreading, so there is no need for various "locks." From this perspective, asynchronous non-blocking technology greatly simplifies the development process. Since there is only one thread, there is no need for overhead like thread switching, making asynchronous non-blocking the preferred choice for systems with high throughput and concurrency requirements.

int epoll_create(int size);
// Creates an epoll instance and returns a file descriptor for it. Since
// Linux 2.6.8 the size argument is only a historical hint and is ignored
// (it must still be greater than zero).

int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
// Adds, modifies, or removes (op = EPOLL_CTL_ADD/MOD/DEL) the watch on
// descriptor fd, with event describing which readiness to report.

int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);
// Blocks for at most timeout milliseconds and fills events[] with the
// descriptors that are ready for I/O.
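For comparison, Java exposes the same readiness-notification model through NIO selectors (implemented on top of epoll on Linux). A minimal sketch of a single-threaded event loop; the port and handling logic are illustrative:

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();               // like epoll_create
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                   // non-blocking, as epoll requires
        server.register(selector, SelectionKey.OP_ACCEPT); // like epoll_ctl(ADD)

        while (true) {
            selector.select();                             // like epoll_wait
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // read without blocking; one thread serves many connections
                }
            }
            selector.selectedKeys().clear();
        }
    }
}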

Caching technology

In internet services, most user interactions must return results promptly, so there are definite latency requirements. For services like online games, the requirement is even stricter, often needing to stay within tens of milliseconds. To reduce latency, caching is one of the most common techniques in internet services.

In early web systems, if every HTTP request involved reading and writing the database (MySQL), the database would quickly become unresponsive as its connections maxed out: databases generally support only a few hundred connections, while a web application can easily face thousands of concurrent requests. This is the most direct reason poorly designed websites freeze as user numbers grow. To minimize database connections and accesses, many caching systems were designed to store the results of database queries in faster storage, and to read from there directly as long as no related modification has occurred.

The most typical web application cache is Memcache. Because of PHP's per-request execution model, it is stateless; early PHP versions did not even have a way to keep data in "heap" memory across requests, so any persistent state had to live in another process. Memcache is a simple, reliable open-source piece of software for storing such temporary state. Many PHP applications read data from the database and then write it into Memcache; when the next request arrives, they first try to read from Memcache, which can greatly reduce database accesses.
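The read-through pattern just described looks roughly like this; MemcacheClient and Database here are hypothetical stand-ins rather than any specific library's API:

// Sketch of the cache-aside pattern; MemcacheClient and Database are
// hypothetical stand-ins, not a particular client library.
public class ArticleService {
    private final MemcacheClient cache;   // hypothetical cache client
    private final Database db;            // hypothetical database access object

    public ArticleService(MemcacheClient cache, Database db) {
        this.cache = cache;
        this.db = db;
    }

    public String loadArticle(String id) {
        String article = cache.get("article:" + id);   // try the cache first
        if (article == null) {
            article = db.queryArticle(id);             // fall back to the database
            cache.set("article:" + id, article, 300);  // cache for 300 seconds
        }
        return article;
    }

    interface MemcacheClient {
        String get(String key);
        void set(String key, String value, int ttlSeconds);
    }

    interface Database {
        String queryArticle(String id);
    }
}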

However, Memcache itself is an independent server process with no clustering features of its own; Memcache processes cannot be joined directly into a unified cluster. If one Memcache instance is not enough, we must manually decide in code which data goes to which Memcache process, which makes managing such a cache for a genuinely large distributed website a tedious job.

People therefore began designing more efficient caching systems. Performance-wise, every Memcache request must cross the network before in-memory data can be pulled, which is somewhat wasteful, because the requester's own memory can also hold data. This observation has produced many caching algorithms and techniques that use the requester's memory directly; the simplest is an LRU cache, keeping the data in a hash-table structure in heap memory.
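Such an in-process LRU cache can be sketched in a few lines of Java on top of LinkedHashMap, whose access-order mode does the bookkeeping:

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal in-heap LRU cache: accessOrder=true moves entries to the tail
// on each get(), and removeEldestEntry evicts the least recently used one.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);    // true = order by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;  // evict once we exceed capacity
    }
}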

Memcache's lack of clustering is also a pain point for users, so many people began designing ways to distribute the cached data across different machines. The simplest idea is so-called read/write separation: every cache write goes to multiple cache processes, while a read can be served by any one of them at random. This works very well when the business data shows a clear read/write imbalance.

However, not every business can solve its problems with read/write separation alone. Interactive online services such as communities and games read and write data at similar frequencies and still demand low latency. So people tried combining local memory with the memory of remote cache processes, giving the data a two-level cache, while distributing the data across multiple processes according to some rule rather than replicating it into every cache process. The most popular algorithm for this distribution rule is so-called "consistent hashing". Its advantage is that when a process fails, the cached data of the whole cluster does not need to be relocated. Imagine instead distributing cached data by taking the data ID modulo the number of processes: once the process count changes, the home of almost every piece of data changes, which is terrible for fault tolerance.
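A minimal sketch of a consistent-hash ring in Java (one position per server for brevity; production rings add many virtual nodes per server to even out the distribution):

import java.util.SortedMap;
import java.util.TreeMap;

// Keys and servers are hashed onto the same ring; a key is served by the
// first server clockwise from its hash. Removing one server only remaps
// the keys that pointed at it, not the whole cluster.
public class ConsistentHashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addServer(String server)    { ring.put(hash(server), server); }
    public void removeServer(String server) { ring.remove(hash(server)); }

    // Assumes at least one server has been added.
    public String serverFor(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private int hash(String s) {
        return s.hashCode() & 0x7fffffff; // non-negative; a real ring uses a stronger hash
    }
}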

Oracle has a product called Coherence, a well-designed commercial cache system. It supports cooperation between local memory caches and remote process caches, the cluster is fully self-managing, and it even supports user-defined computation (processor functions) running in the process where the cached data lives, which makes it not just a cache but a distributed computing system.

Storage technology (NoSQL)

The CAP theorem is presumably familiar to everyone by now. But in the early days of internet development, when everyone was still using MySQL, many teams struggled to make their databases store more data and carry more connections. For quite a few businesses, plain files were even the primary storage, with the database relegated to an auxiliary role.

When NoSQL emerged, however, we suddenly found that the data formats of many internet businesses are so simple that much of the time they do not need the complex tables of a relational database at all: lookups often go through the primary key only, and more complex needs such as full-text search cannot be handled inside the database anyway. NoSQL has therefore become the preferred storage for a large class of highly concurrent internet services. The earliest NoSQL databases include MongoDB; nowadays the most popular seems to be Redis. Some teams even treat Redis as part of their caching system, which is effectively an endorsement of Redis's performance.

Beyond being faster and carrying more load, NoSQL matters even more because its data can only be retrieved and written through a single index. This constraint is a blessing for distribution: we can decide which process (server) stores a piece of data based on that primary index, so the database's data can be conveniently spread across different servers. In the inevitable move to distributed systems, the data storage layer finally found its way to distribute.

Issues with manageability caused by distributed systems

A distributed system is not simply a matter of running a bunch of servers together. Compared to a single machine or a small cluster, it raises some specific problems of its own.

Hardware failure rate

A distributed system definitely involves more than one server. Suppose a server's average failure rate is 1%: with 100 servers, roughly one will be in a failed state at any given time. Although the analogy is not exact, as the amount of hardware in your system grows, hardware failure changes from an occasional event into an inevitable one. When writing ordinary functional code we do not consider what to do when the hardware fails, but when developing a distributed system this issue must be addressed; otherwise the failure of a single server could bring a whole cluster of hundreds to a halt.
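To make that intuition concrete, assume each server is independently unavailable with probability p = 0.01 at any given moment. For n = 100 servers:

E[failed servers] = n * p = 100 * 0.01 = 1

P(at least one failed) = 1 - (1 - p)^n = 1 - 0.99^100 ≈ 0.63

So on average one machine is down at any moment, and some failure is present about two thirds of the time.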

In addition to failures of the server's own memory and hard drives, network line failures between servers are even more common. Moreover, these failures may be sporadic or self-recovering. In the face of such issues, it is not enough to simply remove the "failed" machines. The network may recover after a while, and your cluster may lose more than half of its processing capacity due to this temporary failure.

How to make distributed systems maintain and sustain external services as much as possible under various potential failure situations becomes a consideration when writing programs. Since we need to consider these failure scenarios, we should consciously preset some redundancy and self-maintenance features when designing the architecture. These are not product-related business requirements but purely technical functional requirements. Identifying the right requirements in this area and implementing them correctly is one of the most important responsibilities of a server-side programmer.

Resource utilization optimization

In a distributed system cluster, there are many servers. When the hardware capacity of such a cluster reaches its limit, the most natural idea is to add more hardware. However, it is not that easy for a software system to improve its load capacity by simply "adding" hardware. This is because the software needs to perform complex and detailed coordination work across multiple servers. When expanding a cluster, we often have to stop the entire cluster's service, modify various configurations, and finally restart a cluster with the new servers added.

Since there may be some data used by users in the memory of each server, if you try to modify the configuration of services provided in the cluster while running, you may lose memory data and make errors. Therefore, it is relatively easy to expand the capacity of stateless services at runtime, such as adding some Web servers. But for stateful services, such as online games, it is almost impossible to conduct simple runtime expansion.

In addition to expansion, distributed clusters also need to shrink. When user numbers drop and server resources sit idle, we often want those idle resources reclaimed and put into new service clusters. Shrinking has some similarities with disaster recovery after a failure; the difference is that the timing and goal of shrinking are predictable.

The expansion and shrinking of distributed clusters, ideally performed online, raise very complex technical problems: how to correctly and efficiently modify the interrelated configurations in the cluster, how to operate on stateful processes, and how to keep the nodes communicating normally while the cluster changes shape. As a server-side programmer, you will spend a great deal of effort handling the series of problems caused by changes in the status of many cooperating processes.

Software service content updates

Nowadays, the term "iteration" from agile development methods is popular for describing the continuous updating of a service to meet new requirements and fix bugs. If we only manage one server, updating the program on that server is very simple: just copy the software package and modify the configuration. However, if you need to perform the same operation on hundreds or thousands of servers, it is impossible to log in to each server and handle it.

Batch installation and deployment tools for server-side programs are essential for every distributed system developer. However, our installation work involves not only copying binary files and configuration files but also many other operations, such as opening firewalls, creating shared memory files, modifying database table structures, rewriting some data files, and even installing new software on the server.

If we consider software updates and version upgrades when developing server-side programs, we will plan the use of configuration files, command-line parameters, and system variables in advance, making the installation and deployment tools run faster and more reliably.

In addition to the installation and deployment process, another important issue is data compatibility between versions. When upgrading, the persistent data produced by the old program is generally in the old format; if the new version changes the format, such as a table structure, the old data must be converted and rewritten into the new format. This demands care when designing data structures: choose the most straightforward representation so that future changes are easier, or anticipate the scope of change early, presetting spare fields or storing data in more flexible forms.

In addition to persistent data, if there are client programs (such as mobile apps), their upgrades often cannot be synchronized with the server. If the upgrade content includes modifications to the communication protocol, this creates the need to deploy different server-side systems for different versions. To avoid maintaining multiple sets of servers simultaneously, we often lean towards the so-called "version-compatible" protocol definition method when developing software. How to design a protocol with good compatibility is another issue that server-side programmers need to carefully consider.

Data statistics and decision-making

Generally, the log data of distributed systems is centralized and then unified for statistics. However, when the scale of the cluster reaches a certain level, the volume of these logs becomes enormous. Often, it takes more than a day for a computer to process one day's worth of logs. Therefore, log statistics have become a highly specialized activity.

The classic distributed statistics model is Google's MapReduce model. This model is flexible and can utilize a large number of servers for statistical work. However, the downside is that its usability is often not good enough, as the data statistics have significant differences from the common SQL data table statistics. As a result, we often end up throwing the data into MySQL for more detailed statistics.
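The map/reduce idea itself is small; here is a toy, in-memory rendition in Java that counts log lines per HTTP status code (the log format and field position are assumptions):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LogCount {
    public static void main(String[] args) {
        // Toy input; real MapReduce streams this from a distributed file system.
        List<String> logs = List.of(
                "GET /index 200", "GET /buy 500", "GET /index 200");

        // "Map" emits (statusCode, 1) pairs; "reduce" sums them per key.
        // Assumes the status code is the third whitespace-separated field.
        Map<String, Long> counts = logs.stream()
                .map(line -> line.split(" ")[2])
                .collect(Collectors.groupingBy(code -> code, Collectors.counting()));

        System.out.println(counts); // {200=2, 500=1}
    }
}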

Given the sheer volume and growing complexity of distributed system logs, mastering MapReduce-like technology has become necessary for doing real data statistics on distributed systems, and we also need ways to keep improving the efficiency of the statistics work.

Basic methods for solving manageability in distributed systems

Directory service (ZooKeeper)

A distributed system is a whole made of many processes, and each member of that whole has state: which module it is responsible for, what its load is, which data it holds, and so on. This process-related data becomes crucial during fault recovery, scaling up, and scaling down.

A simple distributed system can record these data through static configuration files, such as the connection correspondence between processes, their IP addresses, and ports. However, a highly automated distributed system necessarily requires these state data to be dynamically saved. This allows the program to perform disaster recovery and load balancing tasks on its own.

Some programmers will specifically write a DIR service (directory service) to record the running status of processes in the cluster. Processes in the cluster will automatically associate with this DIR service, so during disaster recovery, scaling, and load balancing, they can automatically adjust the request's destination based on the data in these DIR services, thereby bypassing faulty machines or connecting to new servers.

However, if only one process performs this task, that process becomes the cluster's "single point": if it fails, the whole cluster may stop working. So the directory service holding the cluster state must itself be distributed. Fortunately, we have ZooKeeper, an excellent open-source implementation of a distributed directory service.

ZooKeeper lets you simply start an odd number of processes to form a small directory service cluster, which offers every other process the ability to read and write its large "configuration tree". The data is not held in a single ZooKeeper process but replicated across several of them by a very safe quorum algorithm, which makes ZooKeeper an excellent distributed data storage system.

Since ZooKeeper's data is organized as a tree, like a file directory, we often bind each process to one of the "branches" and then inspect those branches to forward server requests, which neatly solves the request-routing problem (who should do the work). The load status of each process can also be marked on its branch, making load balancing easy to implement.
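With the official ZooKeeper Java client, binding a process to a "branch" takes only a few calls. A hedged sketch; the ensemble address, the /services/workers path, and the payload are assumptions, and the parent path is assumed to already exist:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class WorkerRegistration {
    public static void main(String[] args) throws Exception {
        // Connect to the ZooKeeper ensemble (addresses are assumptions).
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 5000, event -> {});

        // An EPHEMERAL_SEQUENTIAL node vanishes automatically when this
        // process dies, so the directory always reflects live workers.
        String path = zk.create("/services/workers/worker-",
                "10.0.0.5:9000".getBytes(),        // this worker's address (illustrative)
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);

        System.out.println("Registered as " + path);
        // Peers call zk.getChildren("/services/workers", true) to discover
        // live workers and receive a watch event when membership changes.
    }
}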

The directory service is one of the most critical components in a distributed system. ZooKeeper is a great open-source software specifically designed for this task.

Message queue service (ActiveMQ, ZeroMQ, Jgroups)

If two processes need to communicate across machines, we almost always use protocols like TCP/UDP. However, writing inter-process communication directly using network APIs is very complicated. In addition to writing a large amount of low-level socket code, we also need to deal with issues such as how to find the process to interact with, how to ensure the integrity of data packets without loss, and what to do if the communication process fails or needs to be restarted. These issues include a series of requirements such as disaster recovery and load balancing.

To solve inter-process communication in distributed systems, people distilled an effective model: the "message queue". It abstracts the interaction between processes into the handling of individual messages, which are temporarily held in "queues" or pipes; each process can attach to one or more queues to read messages (consume) or write them (produce). With a buffering pipe in between, processes can change state safely, and a process can resume consuming messages automatically when it restarts. The routing of a message is determined by the queue it is placed in, which turns a complex routing problem into the management of static queues.
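Inside a single process, Java's BlockingQueue captures exactly this model; a real message queue service adds networking, persistence, and naming on top of the same idea. A minimal producer/consumer sketch, which runs until interrupted:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSketch {
    public static void main(String[] args) {
        // A bounded queue decouples producer and consumer in time:
        // either side can be restarted while messages wait in the pipe.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("message-" + i);    // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    System.out.println("consumed " + queue.take()); // blocks if empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}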

Message queue services generally offer simple "deliver" and "collect" interfaces, but managing the queues themselves is more complex, and there are broadly two approaches. Some services favor point-to-point queue management: a separate queue exists between each pair of communicating nodes. The advantage is that messages from different sources do not affect each other, and a flood of messages in one queue cannot squeeze out the buffer space of others; moreover, the consumer can set its own priorities, servicing some queues more eagerly than others.

However, point-to-point queues multiply as the cluster grows, which complicates memory usage and operations management. More advanced queue services therefore let different queues share memory space and automate the creation, deletion, and addressing of queues. This automation often relies on the "directory service" mentioned earlier to register the physical IP and port behind each queue ID: many developers use ZooKeeper as the message queue service's central node, while software such as Jgroups maintains cluster membership itself.

Another type of message queue is similar to a public mailbox. A message queue service is a process, and any user can deliver or receive messages in this process. This makes the use of message queues more convenient, and operation management is relatively easy. However, under this usage, any message from sending to processing goes through at least two inter-process communications, resulting in relatively high latency. Moreover, since there are no predetermined delivery and pickup constraints, it is more prone to bugs.

Whichever kind of message queue service is used, inter-process communication is a problem every distributed server-side system must solve. That is why, when writing distributed system code, the most common code a server-side programmer writes is message-queue-driven code, and why the EJB specification came to include "message-driven beans".

Transaction system

In distributed systems, transactions are one of the hardest technical problems to solve. Since the processing of a transaction may be spread across different processes, any of them may fail, and a failure requires a rollback that usually involves multiple other processes. This is a diffusive, multi-process communication problem. Solving transactions in a distributed system requires two core tools: a stable state storage system and a convenient, reliable broadcast system.

In a transaction, the status of any step must be visible throughout the entire cluster and have disaster recovery capabilities. This requirement is generally undertaken by the cluster's "directory service." If our directory service is robust enough, we can synchronize the processing status of each transaction step to the directory service. ZooKeeper plays an essential role in this aspect once again.

If a transaction is interrupted and needs to be rolled back, this process will involve multiple steps that have already been executed. Perhaps the rollback only needs to be done at the entry point (if the data required for rollback is saved there), or it may need to be rolled back on each processing node. If it is the latter, the nodes with exceptions in the cluster need to broadcast a message like "Rollback! Transaction ID is XXXX" to all other relevant nodes. The underlying layer of this broadcast is generally supported by the message queue service, while software like Jgroups directly provides broadcast services.

Although we are discussing transaction systems, the "distributed lock" feature that distributed systems frequently need can be provided by the same machinery. A "distributed lock" is a condition that every node can check before executing. If we have an efficient directory service supporting atomic operations, the lock state is really just the state record of a "single-step transaction", with the rollback defaulting to "pause the operation and retry later". This "lock" approach is simpler than full transaction processing and therefore more reliable, so more and more developers choose a "lock" service over implementing a "transaction system".
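A hedged sketch of the simplest ZooKeeper lock recipe: whoever manages to create the ephemeral node holds the lock, everyone else pauses and retries later, and a crashed holder releases the lock automatically. The lock path is illustrative:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SimpleZkLock {
    private final ZooKeeper zk;
    private final String lockPath; // e.g. "/locks/inventory" (illustrative)

    public SimpleZkLock(ZooKeeper zk, String lockPath) {
        this.zk = zk;
        this.lockPath = lockPath;
    }

    // Returns true if we acquired the lock. The EPHEMERAL node is deleted
    // automatically if this process crashes, so the lock cannot leak.
    public boolean tryLock() throws InterruptedException, KeeperException {
        try {
            zk.create(lockPath, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;
        } catch (KeeperException.NodeExistsException held) {
            return false; // someone else holds it: pause, then try again later
        }
    }

    public void unlock() throws InterruptedException, KeeperException {
        zk.delete(lockPath, -1); // -1 skips the version check
    }
}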

Automatic deployment tool (Docker)

The biggest operational demand on a distributed system is changing service capacity at runtime (sometimes even at the cost of interrupting service): scaling up and scaling down. When nodes fail, new nodes are also needed to take over their work. The old-fashioned way of managing servers, filling out forms, filing requests, entering the machine room, racking servers, deploying software, is simply not efficient enough.

In a distributed system environment, we generally use a "pool" method to manage services. We will pre-apply for a batch of machines, run service software on some machines, and use others as backups. Obviously, this batch of servers cannot be dedicated to just one business service but will provide multiple different business loads. Those backup servers will become a common backup "pool" for multiple businesses. As business requirements change, some servers may "exit" Service A and "join" Service B.

Such frequent service changes rely on highly automated software deployment tools. Our operations staff should master deployment tools handed over by the developers, not thick manuals, to carry out these operations. Experienced teams unify the underlying frameworks of all their services, expecting most deployment and configuration work to be handled by one general-purpose system. In the open-source world there have been similar attempts, the best known being the RPM package format; but RPM packaging is still too complicated for the deployment needs of server-side programs, so programmable general-purpose deployment systems, represented by Chef, appeared later.


To manage a large number of distributed server-side processes, we really do need to invest heavily in their deployment and management, and unifying the runtime standard of server-side processes is the basic precondition for automating it. We can take the "operating system" as the standard, using Docker; take the "web application" as the standard, using some PaaS platform technology; or define a more specific standard of our own and build a complete distributed computing platform.

Log service (log4j)

Server-side logs have always been an important yet easily overlooked issue. Many teams initially regard logs only as auxiliary tools for development debugging and bug elimination. However, they soon realize that logs are almost the only effective means to understand the program situation during runtime in server-side systems.

Although we have various profiling tools, most of them are unsuitable for live services because they significantly reduce performance. So logs are what we analyze most of the time. Logs may be nothing more than lines of text, but precisely because they are so flexible, both developers and operations staff value them.

Conceptually, a log is a vague thing: you could open any file and write some information into it. But modern server systems usually standardize their logging. Logs must be written line by line, so that they can be statistically analyzed later. Every line should carry a unified header, with the date and time as the minimum. Output should be leveled, for instance fatal/error/warning/info/debug/trace, and the program should be able to change the output level at runtime to save the cost of printing. The header usually also carries fields such as the user ID or IP address for quick searching and for filtering a batch of log records down to the relevant scope; this is sometimes called log "coloring". Finally, log files need a "rollback" function, keeping several files of fixed size so that long-running services do not fill the disk.

Due to the various requirements mentioned above, the open-source community provides many excellent log component libraries, such as the well-known log4j and the log4X family of libraries with numerous members, which are widely used and well-received tools.
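Typical log4j usage follows exactly the conventions above: the timestamped header, level filtering, and file rollback come from configuration, while the code only picks a level per message. A minimal sketch with illustrative names:

import org.apache.log4j.Logger;

public class OrderHandler {
    // One logger per class is the usual convention.
    private static final Logger log = Logger.getLogger(OrderHandler.class);

    public void handle(String userId, String orderId) {
        // Putting the user ID in the message supports the "coloring"/filtering
        // described above; the timestamped header comes from the layout config.
        log.info("user=" + userId + " order=" + orderId + " accepted");
        try {
            // ... business logic ...
        } catch (Exception e) {
            log.error("user=" + userId + " order=" + orderId + " failed", e);
        }
    }
}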

However, compared to log printing functions, log collection and statistics functions are often more easily overlooked. As a distributed system programmer, you definitely want to collect and analyze the entire cluster's log situation from a central node. Moreover, some log statistics results are even expected to be obtained repeatedly in a short time to monitor the overall health of the cluster. To achieve this, a distributed file system is required to store the continuously arriving logs (these logs are often sent via the UDP protocol). On this file system, a MapReduce-like architecture statistics system is needed to quickly analyze and alert the massive log information. Some developers directly use the Hadoop system, while others use Kafka as the log storage system and build their statistical programs on top of it.

Log service is the dashboard and periscope for distributed operation and maintenance. Without a reliable log service, the entire system's operation may be out of control. Therefore, whether your distributed system has many or few nodes, you must devote significant effort and dedicated development time to establishing an automated log statistical analysis system.

Issues and solutions caused by distributed systems in development efficiency

As mentioned earlier, in addition to the functional requirements of business needs, distributed systems need to add many non-functional requirements. These non-functional requirements are often designed and implemented to ensure the stable and reliable operation of a multi-process system. These "extra" tasks usually make your code more complex, and without good tools, your development efficiency will be significantly reduced.

Microservice framework: EJB, WebService

When discussing the distribution of server-side software, communication between service processes is inevitable. However, communication between service processes is not as simple as sending and receiving messages. It also involves message routing, encoding and decoding, reading and writing service status, and so on. If the entire process is developed by yourself, it would be too exhausting.

So the industry long ago introduced various distributed server-side development frameworks, the most famous being EJB: Enterprise JavaBeans. Any technology bearing the "enterprise" label tends to be needed in distributed environments, and EJB is indeed a technology for distributed object invocation. If multiple processes are to cooperate on a task, we decompose the task into multiple "classes", whose objects live in different process containers and provide services cooperatively. The process is very "object-oriented": each object is a "microservice" offering some distributed capability.

Other systems took their cue from the basic model of the internet itself: HTTP. Hence the various Web Service frameworks, from open source to commercial software, each with its own Web Service implementation. This model reduces complex routing and encoding/decoding operations to the common HTTP operation, a very effective abstraction. Developers build and deploy multiple Web Services onto web servers, and a distributed system is assembled.

Whether we learn from EJB or from Web Services, what we actually need is to reduce the complexity of distributed calls. That complexity comes from having to weave disaster tolerance, scaling, load balancing, and similar features into cross-process invocation. Using one set of common code to provide all cross-process communications (calls) with unified disaster tolerance, scaling, load balancing, overload protection, state cache hits, and other non-functional requirements can greatly simplify the whole distributed system.

In general, a microservice framework observes the status of all nodes in the cluster during the routing phase: which addresses run which service processes, how loaded those processes are, and whether they are available. For stateful services, it then applies an algorithm such as consistent hashing to raise the cache hit ratio as much as possible. When node status in the cluster changes, every node in the microservice framework learns of the change as soon as possible and replans future routing according to the current state, automatically routing around overloaded or failed nodes.

Some microservice frameworks also provide tools that turn an IDL into "skeleton" and "stub" code, so that writing a remote-calling program requires no complex network-related code at all: the transport-layer and encoding-layer code is generated automatically. EJB, Facebook's Thrift, and Google's gRPC all have this capability. With code generation, we can write a functional module (a function or a class, say) that works in a distributed environment just as if we were writing a local function, which is an enormous efficiency gain in a distributed system.
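For instance, with gRPC's Java bindings a remote call reads like a local call. GreeterGrpc, HelloRequest, and HelloReply below are classes generated from an IDL (.proto) file, following gRPC's canonical hello-world example; the address is an assumption:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class GreeterCall {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 50051)  // server address is an assumption
                .usePlaintext()
                .build();

        // Generated stub: the networking, encoding, and routing code is all
        // produced from the IDL, so the call site looks local.
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
        HelloReply reply = stub.sayHello(
                HelloRequest.newBuilder().setName("world").build());
        System.out.println(reply.getMessage());

        channel.shutdown();
    }
}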

Asynchronous programming tools: Coroutine, Future, Lambda

When programming in distributed systems, you will inevitably encounter a large number of "callback" type APIs. This is because distributed systems involve a lot of network communication. Any business command may be decomposed into multiple processes and combined through multiple network communications. Due to the popularity of the asynchronous non-blocking programming model, our code often encounters "callback functions." However, the callback asynchronous programming model is a programming method that is not conducive to code reading. You cannot read the code from beginning to end to understand how a business task is gradually completed. The code belonging to a business task is divided into many callback functions due to multiple non-blocking callbacks and connected in various parts of the code.

What's more, sometimes we choose to use the "observer pattern," where we register a large number of "event-response functions" in one place and then send out an event at all places where callbacks are needed. Such code is even more challenging to understand than simply registering callback functions. This is because the response functions corresponding to events are usually not found at the event-sending location. These functions will always be placed in some other files, and sometimes these functions will change during runtime. Moreover, the event names themselves are often baffling and difficult to understand because when your program requires thousands of events, it is almost impossible to come up with an easy-to-understand name that accurately represents the event.

To solve the destructive effect of callback functions on code readability, many different improvement methods have been invented. The most famous of these is the "coroutine." We were often accustomed to using multithreading to solve problems, so we are very familiar with writing code in a synchronous manner. Coroutine continues this habit, but unlike multithreading, coroutines do not run "simultaneously." They only switch to other coroutines at places where blocking is needed using Yield() and then continue to execute downward with Resume() after the blocking ends. This is equivalent to connecting the content of the callback function to the back of the Yield() call. This way of writing code is very similar to the synchronous method, making the code very easy to read. The only drawback is that the Resume() code still needs to run in the so-called "main thread." Users must call Resume() themselves when recovering from blocking. Another drawback of coroutines is the need for stack saving. After switching to another coroutine, temporary variables on the stack also need to occupy extra space, which limits the way coroutine code is written and prevents developers from using large temporary variables.

Another way to improve callback-style code is often called the Future/Promise model. Its basic idea is to write all the callbacks together, in one place, at one time. It is a very practical programming model: it does not eliminate callbacks, but it gathers them from scattered locations into one spot, and in the same piece of code you can clearly see how the asynchronous steps execute, in series or in parallel.
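Java's CompletableFuture is one mainstream embodiment of this model: the whole asynchronous pipeline is visible in one place. A minimal sketch; fetchUser and fetchOrders are hypothetical stand-ins for asynchronous RPCs:

import java.util.concurrent.CompletableFuture;

public class FutureSketch {
    public static void main(String[] args) {
        // The whole asynchronous pipeline reads top to bottom in one place.
        CompletableFuture<String> pipeline =
                CompletableFuture.supplyAsync(FutureSketch::fetchUser)        // async step 1
                        .thenApply(FutureSketch::fetchOrders)                 // then step 2
                        .thenApply(orders -> "rendered page for " + orders);  // then step 3

        System.out.println(pipeline.join()); // wait for the final result

        // Independent steps can also be composed to run in parallel, e.g.
        // CompletableFuture.allOf(a, b).thenRun(...).
    }

    private static String fetchUser() { return "user-42"; }  // stand-in for an async RPC
    private static String fetchOrders(String user) { return user + ":3 orders"; }
}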

Lastly, the lambda model, popularized by the spread of JavaScript. In other languages, defining a callback used to be troublesome: in Java you had to design an interface and then implement it, a five-star level of hassle; C/C++ function pointers are simpler but can make code hard to follow; scripting languages are better, but you still have to define a function somewhere. Writing the body of the callback right at the point of the call is the most convenient to develop and also easier to read. More importantly, lambdas generally imply closures: the call stack of the callback is captured and kept, so the many asynchronous operations that would otherwise need something like a "session pool" to park state variables get that behavior for free. Coroutines achieve a similar ingenuity.

Whichever asynchronous programming method is used, its code is inevitably more complex than synchronous calling code, so when writing distributed server code we must plan the code structure carefully and avoid bolting on functionality wherever convenient, which destroys readability. Unreadable code is unmaintainable code, and with a large volume of asynchronous-callback server code that outcome is all too likely.

Cloud service model: IaaS/PaaS/SaaS

In developing and using complex distributed systems, how to operate and maintain a large number of servers and processes has always been a pervasive issue. Whether you use a microservice framework, unified deployment tools, or log monitoring services, managing a large fleet of servers centrally is not easy. The root cause is that the vast hardware and network partition the logical computing power into many small fragments.

With the improvement of computer computing power, virtualization technology has emerged, which can intelligently unify the divided computing units. The most common example is IaaS technology: when we can run multiple virtual server operating systems on a single server hardware, the number of hardware we need to maintain will decrease significantly.

The popularity of PaaS technology allows us to deploy and maintain a unified system runtime environment for a specific programming model without having to install operating systems, configure runtime containers, and upload runtime code and data on each server. Before the unification of PaaS, installing a large number of MySQL databases used to consume a lot of time and effort.

When our business model matures enough to be abstracted into fixed software, our distributed system becomes more user-friendly still. Computing power is no longer code and libraries but a cloud, SaaS, delivering services over the network. Users need not maintain or deploy anything; they apply for an interface, fill in the expected capacity limit, and use it directly. This not only saves the time of developing those functions but also hands a large share of operations work to the SaaS maintainers, who are more specialized at it.

In this evolution of the operations model, from IaaS to PaaS to SaaS, the scope of application narrows while the convenience of use grows exponentially. It also confirms that software work, like other labor, gains efficiency by dividing into more specialized, refined roles.
