
Abstract: To reduce the delay of secure data transmission in multi-heterogeneous networks, a machine-learning-based secure transmission technique for multi-heterogeneous network data is designed. The multi-heterogeneous network data are preprocessed by selecting important data sources and data attributes, a multi-path parallel transmission architecture is established, and a bandwidth scheduling scheme and a channel security protocol are formulated to complete the secure, machine-learning-based transmission of multi-heterogeneous network data. Experimental results show that the proposed technique effectively reduces transmission delay, transmission interruptions, and packet loss rate, meeting the design requirements of data transmission technology.

Keywords: machine learning; multi-heterogeneous network; secure data transmission; network data preprocessing; parallel transmission architecture


1 Introduction

With the rapid development of communication technology, the characteristics of the various access networks have become distinct, and after years of innovation the transmission rate of wireless access technology is gradually approaching its limit. In this context, meeting diverse service requirements calls for the cooperative use of multiple networks. Traditional transmission mechanisms, however, cannot use the transmission resources of multiple networks simultaneously and efficiently: they cannot guarantee efficient service delivery, and they increase energy consumption and cause interference during transmission. Many scholars have therefore studied multi-network data transmission methods. In [1], Shi Lingling and Li Jingzhao studied a secure data transmission mechanism for heterogeneous networks that combines an optimized AES-GCM authenticated encryption algorithm with an SHA-based digital signature algorithm. In [2], Zhou Jing and Chen Chen studied a data security model for heterogeneous networks that encrypts data in advance and then establishes a secure channel for transmission. Both methods achieve a certain effect but still have deficiencies. To address these shortcomings, this paper applies machine learning to the secure transmission of multi-heterogeneous network data. Experimental results show that the proposed secure transmission technique effectively resolves the existing problems and has practical application value.

2 Multi-heterogeneous Network Data Preprocessing

In the secure transmission of multi-heterogeneous network data, much of the data is useless. Relevant data sources must therefore be selected from the multi-dimensional network data before transmission, improving the accuracy and efficiency of transmission. In data source selection, importance is used to measure the relationship between data attributes [3-4], and strongly correlated data are captured. The calculation is given by formula (1), in which T denotes the number of summary tables over all data sources and (i, j) denotes the correlation between instance source classes. By judging the importance of the data sources, the set of most strongly correlated data tables can be selected, reducing the number of irrelevant tables.
After the important data sources are selected, the data attributes are analyzed. Since a data source consists of a set of data attributes, the basic information of the data to be transmitted is reflected by these attributes. They are measured mainly by the correlation of data tuples: the number of occurrences of tuple data is analyzed, i.e. the attributes are defined by tuple data density. The tuple density diagram is shown in Figure 1, where ε denotes the radius of the specified neighborhood. Following this idea, a weight is assigned to each tuple in the dataset [5-7], as given by formula (2), in which w(C) denotes the attribute weight, w(tk) the number of core tuples, δ the outliers, and w(tb) the number of edge tuples.
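Since formula (2) itself is not reproduced above, the density-based classification it builds on can be sketched as follows. The neighbor threshold `min_pts` and the per-class weights are illustrative assumptions, not values from the paper; only the core/edge/outlier idea and the radius ε come from the text.

```python
from typing import List, Tuple

def classify_tuples(points: List[Tuple[float, float]], eps: float, min_pts: int):
    """Classify each data tuple as 'core', 'edge', or 'outlier' by counting
    how many other tuples fall inside its eps-neighborhood (the Figure 1 idea)."""
    labels = []
    for i, p in enumerate(points):
        # neighbors within the specified radius eps (excluding the point itself)
        n = sum(1 for j, q in enumerate(points)
                if i != j and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2)
        if n >= min_pts:
            labels.append("core")
        elif n >= 1:
            labels.append("edge")
        else:
            labels.append("outlier")
    return labels

def attribute_weight(labels, w_core=1.0, w_edge=0.5, w_outlier=0.0):
    """Aggregate one attribute weight from the tuple classes -- a hypothetical
    stand-in for formula (2), whose exact form the text does not give."""
    score = {"core": w_core, "edge": w_edge, "outlier": w_outlier}
    return sum(score[l] for l in labels) / len(labels)
```

Tuples in dense regions (core) then contribute most to the attribute weight, while isolated outliers contribute nothing, matching the paper's intent of filtering weakly correlated data before transmission.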

3 Multipath Parallel Transmission Architecture

After preprocessing, a multi-path parallel transmission architecture is established. Its main content is as follows. The traffic is divided in advance: traffic segmentation is used by the sender to divide large data blocks into data units of different or equal sizes [8], the unit size being determined by the segmentation granularity. The granularities fall into three categories. First, packet-level segmentation: the packet is the smallest unit of the data stream, so this method has the finest granularity, and the packets are distributed to the paths independently of one another. Second, flow-level segmentation [9]: a specific destination address is encapsulated in the packet header, packets with the same destination address are aggregated into data flows, and these flows are mutually independent and distinguished by unique flow identifiers. The effect of data distortion on multi-path transmission can be effectively mitigated by flow-level segmentation [10]. Third, sub-flow-level segmentation: a data flow with the same destination header is divided into multiple sub-flows, all of whose packets share the same destination address, which to a certain extent solves the load-imbalance problem of flow-level segmentation. The multi-path parallel transmission architecture is shown in Figure 2. In addition, in a bandwidth-aggregation architecture the scheduling algorithm is the core that determines the service transmission mode and the scheduling order of the service sub-flows [11], ensuring that they arrive at the receiver in order. Data scheduling is discussed next.
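The flow-level segmentation described above can be sketched in Python. The hash choice and field names are illustrative assumptions, since the paper does not specify how flow identifiers are derived; the sketch only shows the principle that all packets of one flow map to one identifier and hence one path.

```python
import hashlib

def flow_id(dst_addr: str, dst_port: int) -> int:
    """Derive a stable flow identifier from the destination header fields,
    so every packet of the same flow maps to the same identifier."""
    digest = hashlib.sha256(f"{dst_addr}:{dst_port}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def assign_path(dst_addr: str, dst_port: int, n_paths: int) -> int:
    """Flow-level traffic segmentation: hash the flow id onto one of the
    parallel paths, keeping each flow on a single path (avoids reordering)."""
    return flow_id(dst_addr, dst_port) % n_paths
```

Because the mapping is deterministic, a flow never migrates between paths mid-transfer, which is exactly why flow-level segmentation mitigates the reordering problem that packet-level segmentation suffers from.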

4 Bandwidth Scheduling Scheme

For data transmission in multi-heterogeneous networks, once the bandwidth of a path reaches a certain value, further increases in network bandwidth leave the transmission performance relatively stable; allocating excessive bandwidth to improve throughput only lowers spectrum utilization and wastes spectrum resources. With spectrum resources increasingly scarce, scheduling and managing the bandwidth of each channel in multi-path parallel transmission both preserves transmission performance and uses resources effectively. The main steps are as follows. First, effective bandwidth is estimated with machine learning methods: reasonably estimating the wireless bandwidth that each sub-flow can fully utilize, and achieving high throughput with fewer bandwidth resources, is the key to the bandwidth scheduling algorithm. To this end, a coupled congestion control algorithm jointly controls the sub-flows, as expressed in formula (3), where MSS is the maximum segment size constant set by the protocol, and RTTi and PLRi denote the round-trip delay and packet loss rate of sub-flow i's path, respectively. Second, parameter filtering: because of the diversity and time-varying nature of wireless channels, the link parameters and the effective path bandwidth change dynamically and contain errors. To remove these errors, a Kalman filter is applied to the network parameters to obtain accurate estimates. The Kalman filter is a discrete-time recursive estimation algorithm: from the measurement of the current state, the state at the previous time, and the prediction error, it recursively computes a more accurate current state as output.
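Formula (3) is not reproduced in the text. A common estimate of this kind, built from exactly the quantities the text names (MSS, RTTi, PLRi), is the Mathis throughput bound, used here as an assumed stand-in rather than the paper's actual formula, together with a simple proportional allocation across sub-flows.

```python
import math

def estimated_bandwidth(mss_bytes: int, rtt_s: float, plr: float) -> float:
    """Mathis-style throughput bound for one sub-flow path, in bytes/second:
    throughput ~ MSS / (RTT * sqrt(PLR)).
    A stand-in for formula (3), which the text does not reproduce."""
    if plr <= 0:
        raise ValueError("packet loss rate must be positive for this bound")
    return mss_bytes / (rtt_s * math.sqrt(plr))

def schedule_shares(subflows):
    """Split a data block across sub-flows in proportion to each path's
    estimated bandwidth (a simple coupled-allocation sketch)."""
    bw = [estimated_bandwidth(mss, rtt, plr) for (mss, rtt, plr) in subflows]
    total = sum(bw)
    return [b / total for b in bw]
```

A path with twice the RTT at the same loss rate receives half the share, which reflects the coupling idea: sub-flows are not scheduled in isolation but relative to one another's path quality.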
For a discrete control system, the linear stochastic difference equation of formula (4) is x_k = A_k·x_(k-1) + B_k·u_k + w_k, where x_k and x_(k-1) are the state parameters at times k and k-1, A_k and B_k are the system matrices (the state transition matrix and the input matrix, respectively), u_k is the control input, and w_k is the noise in the calculation. Third, bandwidth scheduling: assuming a multi-path connection whose sub-flows are mutually independent, each occupying one path for data transmission, the scheduling process is shown in Figure 3. The bandwidth is scheduled according to this process, and finally a channel security protocol is established to guarantee the secure transmission of the multi-heterogeneous data. The security protocol consists of the SSL protocol, the rule-establishment protocol, and the tunnel information protocol. The SSL protocol comprises two parts, an authentication algorithm and an encryption algorithm; all packets on the server side are encrypted under SSL to secure message communication. The rule-establishment protocol covers connection information and message identification: when the record table matches successfully, a socket is generated and the data are forwarded and published, ensuring the forwarding and use of data information over the VPN channel. The tunnel information protocol is implemented mainly by programming with OpenVPN: the client sends a request command message to establish a connection with the server, and once the connection is accepted, the server writes the encrypted and verified data into the tunnel information data area according to the SSL protocol, realizing data exchange and transmission with the client. The structure of the channel security protocol is shown in Figure 4.
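The recursive estimation described above can be sketched as a one-dimensional Kalman filter smoothing a noisy bandwidth estimate. The noise variances Q and R are illustrative assumptions; the state model follows formula (4) specialized to a scalar state.

```python
class ScalarKalman:
    """Minimal one-dimensional Kalman filter for smoothing a noisy link
    parameter (e.g. the effective-bandwidth estimate). Follows the linear
    model x_k = A*x_{k-1} + B*u_k + w_k with measurement z_k = x_k + v_k."""

    def __init__(self, A=1.0, B=0.0, Q=1e-3, R=1e-1, x0=0.0, P0=1.0):
        self.A, self.B = A, B      # state transition / input coefficients
        self.Q, self.R = Q, R      # process / measurement noise variances
        self.x, self.P = x0, P0    # state estimate and its error variance

    def update(self, z, u=0.0):
        # predict the next state from the model
        x_pred = self.A * self.x + self.B * u
        P_pred = self.A * self.P * self.A + self.Q
        # correct the prediction with the measurement z
        K = P_pred / (P_pred + self.R)          # Kalman gain
        self.x = x_pred + K * (z - x_pred)
        self.P = (1.0 - K) * P_pred
        return self.x
```

Fed a stream of noisy bandwidth measurements, the filter converges toward the underlying value, which is the "accurate estimate" the scheduling step then consumes.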
Data transmission then proceeds according to the above channel security protocol, completing the machine-learning-based secure transmission of multi-heterogeneous network data.

5 Experimental comparison

To verify the effectiveness of the designed machine-learning-based secure transmission technique for multi-heterogeneous network data, an experimental analysis was carried out, comparing it with the secure data transmission mechanism for heterogeneous networks of [1] and the heterogeneous-network-based data security model of [2]. The experimental data set is shown in Table 1. As the collected data show, increasingly large amounts of data were selected in the experiments so as to better verify the effectiveness of the three methods. The comparison covers transmission delay, data transmission interruption, and link packet loss rate, as follows.

5.1 Comparison of Transmission Delay

The transmission delays of the three methods are compared, with the results shown in Figure 5. Figure 5 shows that in transmitting Google's public data set the transmission delay of all three methods is small, and it grows as the amount of transmitted data increases. The machine-learning-based technique of this study has the smallest transmission delay, lower than that of the two traditional methods.

5.2 Comparison of data transmission interruption

After applying the three transmission techniques, their data transmission interruptions are compared; the results are shown in Figure 6. Figure 6 shows that the technique of this study suffers the fewest interruptions, fewer than the two traditional techniques across the experiments.

5.3 Link Packet Loss Rate Comparison

Data are transmitted using the machine-learning-based technique of this study and the two traditional techniques, and the packet loss rates of the three methods are compared in Figure 7. Figure 7 shows that the secure data transmission mechanism for heterogeneous networks [1] has the highest link packet loss rate, above both the heterogeneous-network-based data security model [2] and the technique of this study. In summary, the machine-learning-based technique of this study attains lower transmission delay and a lower packet loss rate than the two traditional techniques, because it preprocesses the multi-heterogeneous network data in advance, formulates a bandwidth scheduling scheme, and establishes a secure transmission protocol, thereby improving the secure transmission of multi-heterogeneous network data.

6 Conclusion

This paper designs a machine-learning-based secure transmission technique for multi-heterogeneous network data and verifies its effectiveness experimentally. The technique improves the efficiency of data transmission, reduces its packet loss rate, and has strong practical value. Owing to limited research time, however, the technique still has certain shortcomings, which should be further optimized in follow-up research.

Abstract: This paper expounds how virtualization technology ensures the stability and fluency of information use, how cloud storage technology ensures the rational distribution of data bodies, and how information security technology ensures the safe use and browsing of big data.

Keywords: computer system; big data; cloud storage; virtualization

0 Preface

Computer software technology can process large amounts of data in a relatively short time, editing and analyzing them with a given logic, extracting the relevant data that users require, reprocessing it, and determining the data content that the users' analysis needs.

1 Virtualization technology

Virtualization technology is an innovation in computer software technology. It can create a new virtual machine for a user in a short time, making truly rational use of information resources and configuring software resources effectively. Besides mobilization, the rational allocation and use of computer software resources also prevents the computer from freezing or slowing down because of uneven distribution of software resources while software runs. Flexible transformation is a salient feature of virtualization: computation can run on virtual computing elements, realizing cross-domain sharing and cooperation among computers, processing and switching the resources users require, and forming a new resource chain. Virtualization technology mainly comprises server virtualization and Docker container technology, the latter being the focus. Server virtualization is based on the multi-dimensional virtualization of a computer: a physical computer is virtualized into multiple logically associated virtual computers, and a virtualization layer is established to interconnect the computer's hardware with the logically associated systems, achieving specific functions by decoupling their associations. The so-called virtualization layer means that multiple virtualized operating systems can run on one physical computer and be switched between, and the virtual computers at these layers can share one or several unique software and hardware resources, such as a conventional computer's memory, motherboard, and graphics card; with the corresponding operating system support, users can freely download programs and software on them for their own use (Figure 1).
