High Performance Computing
-
A time-aware dominant resource fair scheduling algorithm for edge function computing
- LI Bao, ZHU Shu, WANG Xiao-chuan, REN Yi, TAN Yu-song
2024, 46(10): 1711-1719.
Abstract:
To address the issues of unfair resource allocation and low utilization caused by resource preemption among different workloads in Function-as-a-Service (FaaS) edge computing, a time-aware dominant resource fair scheduling algorithm is proposed. Firstly, the limitations of existing dominant resource fair scheduling algorithms when applied to edge function computing services are analyzed. Then, the algorithm incorporates the runtime weight of function instances and utilizes a time-aware queue in conjunction with a dominant resource fair queue to achieve fair allocation of resources required for function execution and maximize cluster resource utilization. Finally, the algorithm is implemented based on the scheduler of a mainstream open-source function computing service platform. Test results using public workload datasets show that the proposed algorithm improves CPU utilization by up to 18.1% and memory utilization by up to 21.8%, while reducing execution time by up to 26.1%. This effectively enhances the fairness of resource allocation and improves resource utilization in edge function computing services.
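The abstract does not spell out the scheduling rule, but the underlying idea of dominant resource fairness weighted by runtime can be illustrated. The following Python sketch is a minimal rendering under assumptions of our own (the Tenant class, the cluster capacities, and the way the runtime weight scales the dominant share are illustrative, not the paper's implementation): the scheduler repeatedly grants resources to the workload with the smallest time-weighted dominant share.

```python
from dataclasses import dataclass, field

# Cluster capacity: CPU cores and memory in GB (illustrative values).
CAPACITY = {"cpu": 64.0, "mem": 256.0}

@dataclass
class Tenant:
    name: str
    used: dict = field(default_factory=lambda: {"cpu": 0.0, "mem": 0.0})
    runtime_weight: float = 1.0  # grows with accumulated execution time

    def dominant_share(self) -> float:
        # Classic DRF: the largest fraction of any single resource in use.
        share = max(self.used[r] / CAPACITY[r] for r in CAPACITY)
        # Time-aware variant (assumption): scale the share by a runtime
        # weight so that long-running instances do not crowd out short ones.
        return share * self.runtime_weight

def schedule(tenants, demands, steps):
    """Greedy DRF loop: at each step, grant one function instance to the
    tenant with the smallest time-weighted dominant share, if it fits."""
    used_total = {"cpu": 0.0, "mem": 0.0}
    for _ in range(steps):
        tenants.sort(key=lambda t: t.dominant_share())
        for t in tenants:
            d = demands[t.name]
            if all(used_total[r] + d[r] <= CAPACITY[r] for r in CAPACITY):
                for r in CAPACITY:
                    t.used[r] += d[r]
                    used_total[r] += d[r]
                t.runtime_weight += 0.01 * d["runtime"]
                break
        else:
            return tenants  # no tenant fits: the cluster is saturated
    return tenants

if __name__ == "__main__":
    tenants = [Tenant("A"), Tenant("B")]
    demands = {"A": {"cpu": 1.0, "mem": 4.0, "runtime": 2.0},
               "B": {"cpu": 3.0, "mem": 1.0, "runtime": 0.5}}
    for t in schedule(tenants, demands, steps=40):
        print(t.name, t.used, round(t.dominant_share(), 3))
```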
-
Quantitative analysis of Linux kernel compatibility based on relationship diagram
- QIN Ying, YANG Ya-jing, MA Jun, WAN Jia-qi
2024, 46(10): 1720-1734.
Abstract:
The migration of device driver modules and application system libraries forced by kernel upgrades lacks effective theoretical guidance, which brings many inconveniences to the development and deployment of operating systems. To address this situation, this paper proposes a quantitative analysis method for kernel compatibility based on kernel module difference detection and dependency analysis. Working with the open-source Linux kernel, it constructs a kernel module dependency graph; computes graph features that affect kernel compatibility, such as the in-degree, out-degree, dependency depth, and centrality of kernel modules; and analyzes changes in system calls and exported functions, two classes of functions strongly related to compatibility, together with their impact on kernel compatibility. It also provides a basic method for measuring the compatibility rate and influence domain of kernel modules. Experimental verification is conducted on the Linux 5.x kernel series and on typical versions of the Kylin operating system.
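The graph features named in the abstract (in-degree, out-degree, dependency depth, centrality, influence domain) are standard graph measures; the sketch below shows how they could be computed with networkx over a module dependency graph. The edge list is made up for illustration; in practice it would be extracted from modules.dep or modinfo output, and this is not the paper's tooling.

```python
import networkx as nx

# Illustrative module dependency edges (A depends on B => edge A -> B).
deps = [
    ("ext4", "jbd2"), ("ext4", "mbcache"),
    ("btrfs", "raid6_pq"), ("btrfs", "xor"),
    ("xor", "raid6_pq"),
]

g = nx.DiGraph(deps)

for m in g.nodes:
    # In-degree: how many modules depend on m; out-degree: how many modules
    # m itself depends on; depth: longest dependency chain starting at m.
    depth = max(
        (len(p) - 1 for t in nx.descendants(g, m)
         for p in nx.all_simple_paths(g, m, t)), default=0)
    print(f"{m:10s} in={g.in_degree(m)} out={g.out_degree(m)} depth={depth}")

# Centrality as a rough proxy for how compatibility-critical a module is.
print(nx.betweenness_centrality(g))

# Influence domain of a module: every module that transitively depends on it.
print("influence(raid6_pq):", nx.ancestors(g, "raid6_pq"))
```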
-
Construction of SFC+X orchestration evaluation environment in data center: A POC approach
- LIU Zhen-yu, LI Hua, WANG Lu
2024, 46(10): 1735-1747.
Abstract:
According to different scenario requirements and corresponding business needs, service function chaining (SFC) can be deployed from the edge cloud to the data center, between data centers, and within the data center. However, due to the inherent heavy asset nature of data centers, the assessment of flexibly combined SFC+X orchestration on this basis is difficult to carry out without an economically efficient assessment environment. Using a proof of concept (POC) method for assessment allows for the realization of a model or its most critical components in actual situations, enabling the validation of concerns at a relatively low implementation cost. In the process of assessing SFC+X orchestration in data centers, this paper analyzes whether the POC assessment environment can effectively evaluate the correctness of SFC+X orchestration functions and effectively measure the performance indicators of SFC+X orchestration, explores whether the POC assessment method can solve the problem of assessing SFC+X orchestration in data centers, and demonstrates the principle of correct transmission of SFC functions and the principle of maintaining the degradation relationship of performance indicators. Furthermore, the feasibility theorem of the POC assessment method is proposed. Based on these principles and theorems, combined with the characteristics of SFC+X orchestration in data centers, the hardware resources required for assessment are simplified to design and implement the POC assessment environment. Through theoretical derivation and POC validation methods, it is proved that the POC assessment method can not only effectively evaluate the correctness of SFC+X orchestration functions but also measure the performance indicators of SFC+X orchestration. Finally, an assessment method based on the POC method is proposed to determine whether the SFC+X orchestration in the data center meets the Service Level Agreement (SLA), providing an economically efficient solution to the problem of assessing SFC+X orchestration in data centers.
-
Edge server assignment for distributed interactive applications in edge environments
- GU Ying-cheng, WEI Liu, JIANG Ning, CHENG Huan-yu, LIU Kai, SONG Yu, LIU Mei-zhao, TANG Lei, CHEN Yu, ZHANG Sheng
2024, 46(10): 1748-1756.
Abstract:
Mobile edge computing, as a highly forward-looking distributed computing paradigm, brings the computing power of cloud computing to the edge of the network to efficiently process data. In recent years, with the surge in demand for distributed interactive applications and the explosive growth in the number of mobile smart devices, edge servers, as a crucial component of mobile edge computing, enable interactive applications to execute close to users, thereby addressing issues of excessive communication and network overheads as well as delays in real-time data processing. A key challenge lies in finding a suitable edge server allocation strategy to effectively reduce interactive latency and balance server workloads. To this end, we propose an edge server allocation algorithm based on a deep Q-network (ESADQN), which models the problem as a Markov decision process and utilizes reinforcement learning to effectively select edge server deployment locations and allocate users to corresponding servers. Compared to the k-means algorithm, ESADQN achieves an average reduction of 31% in total interactive latency with a similar workload standard deviation. Compared to the Top-K algorithm, ESADQN reduces the workload standard deviation by an average of 49% with comparable total interactive latency. Experimental results demonstrate that the server allocation scheme selected by ESADQN can effectively reduce both interactive latency and workload standard deviation.
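To make the Markov-decision-process framing concrete, here is a heavily simplified PyTorch sketch of a DQN agent that picks an edge server for each arriving user. The state layout, the reward that trades off latency against load imbalance, and the absence of a replay buffer are all simplifying assumptions rather than details of ESADQN.

```python
import random
import torch
import torch.nn as nn

N_SERVERS = 4               # candidate edge server sites (illustrative)
STATE_DIM = 2 * N_SERVERS   # per-server load + per-server distance to the user

class QNet(nn.Module):
    """Small MLP mapping a state to one Q-value per candidate server."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_SERVERS))
    def forward(self, x):
        return self.net(x)

def reward(latency, loads):
    # Assumed trade-off: lower interaction latency and a smaller load
    # standard deviation both increase the reward.
    return -latency - float(torch.std(loads))

q, target = QNet(), QNet()
target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)

state = torch.rand(STATE_DIM)
for step in range(200):
    # epsilon-greedy action: pick an edge server for the current user.
    a = random.randrange(N_SERVERS) if random.random() < 0.1 \
        else int(q(state).argmax())
    loads = state[:N_SERVERS].clone()
    loads[a] += 0.1                        # assigning the user adds load
    latency = float(state[N_SERVERS + a])  # distance term as latency proxy
    r = reward(latency, loads)
    next_state = torch.cat([loads, torch.rand(N_SERVERS)])
    with torch.no_grad():                  # one-step TD target
        y = r + 0.9 * target(next_state).max()
    loss = (q(state)[a] - y) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        target.load_state_dict(q.state_dict())
    state = next_state
print("Q-values for final state:", q(state).detach().numpy().round(2))
```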
Computer Network and Information Security
-
Resource allocation algorithm for distinguished services in vehicular networks based on multi-agent deep reinforcement learning
- CAI Yu, GUAN Zheng, WANG Zeng-wen, WANG Xue, YANG Zhi-jun
2024, 46(10): 1757-1764.
Abstract:
The Internet of Vehicles (IoV) generates a massive amount of network connections and diversified data. To address the challenge that a single agent struggles to collect channel state information and perform service-differentiated resource allocation and link scheduling in dynamic scenarios, a multi-agent deep reinforcement learning-based service-differentiated resource allocation method for IoV is proposed. This method aims to maximize the successful delivery rate of V2V link data packets and the total capacity of V2I links, under the constraint of minimizing interference to emergency service links. It employs deep reinforcement learning algorithms to optimize spectrum allocation and power selection strategies in a single-antenna vehicular network where multiple cellular users and device-to-device users coexist. Each agent is trained using a deep Q-network (DQN), and the agents interact with the communication environment collectively, achieving coordination through a global reward function. Simulation results show that, in high-load scenarios, compared to traditional random allocation schemes, this scheme increases the total throughput of V2I links by 3.76 Mbps, improves the packet delivery rate of V2V links by 17.1%, and reduces the interference to emergency service links by 1.42 dB relative to ordinary links. This provides priority guarantees for emergency service links and effectively enhances the overall transmission capacity of V2I and V2V links.
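One distinguishing point of the abstract is that every agent picks a joint spectrum/power action but is trained against a single global reward. The snippet below sketches such a shared reward under assumed weights and penalty terms; the action space, rates, and the form of the interference penalty are illustrative, not the paper's exact formulation.

```python
import itertools
import numpy as np

# Illustrative action space shared by every V2V agent: choose one spectrum
# sub-band and one transmit power level.
SUB_BANDS = 4
POWER_LEVELS_DBM = [5, 15, 23]
ACTIONS = list(itertools.product(range(SUB_BANDS), POWER_LEVELS_DBM))

def global_reward(v2i_rates, v2v_delivered, emergency_interference_db,
                  w1=0.5, w2=0.5, penalty=1.0):
    """Cooperative reward shared by all agents (assumed form): reward V2I
    sum capacity and V2V delivery, penalize interference that the chosen
    actions cause to emergency-service links."""
    return (w1 * np.sum(v2i_rates)
            + w2 * np.mean(v2v_delivered)
            - penalty * max(0.0, emergency_interference_db))

# Each agent picks an action index from its own DQN; here random stand-ins.
joint_action = [ACTIONS[np.random.randint(len(ACTIONS))] for _ in range(3)]
r = global_reward(v2i_rates=np.array([4.2, 3.8, 5.1]),      # Mbps, made up
                  v2v_delivered=np.array([1.0, 0.0, 1.0]),  # packet success
                  emergency_interference_db=0.7)
print("joint action:", joint_action, "shared reward:", round(r, 3))
```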
-
Toxic comments detection based on bidirectional capsule network
- LI Gong-jin, SHAO Yu-bin, DU Qing-zhi, LONG Hua, MA Di-nan
2024, 46(10): 1765-1774.
Abstract:
To address the issue that existing detection models struggle to accurately identify malicious comments with varied linguistic styles and implicit semantics, a malicious comment detection model based on a bidirectional capsule network is proposed. Firstly, the BERT model is utilized to perform word embedding on comment texts, creating an input matrix. This input matrix is then passed to a bidirectional feature extraction layer, which comprises stacked LSTM, bidirectional capsule networks, and attention networks. This layer captures the deep semantic information of the text simultaneously from both forward and backward directions. The generated forward and backward matrices are concatenated and input into an attention mechanism, which focuses on words related to malicious comments and generates an output vector. Secondly, the output vector is concatenated with a context-assisted feature vector to enrich the feature representation. Finally, the concatenated vector is input into a fully connected layer, and the comment text is classified through the Sigmoid activation function. Experiments conducted on the Wikipedia malicious comment dataset demonstrate that compared to existing research, the malicious comment detection model based on the bidirectional capsule network achieves significant performance improvements. It is capable of capturing richer semantic information in comment texts and effectively detecting malicious comments.
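The processing pipeline described above (contextual embeddings, a bidirectional recurrent/capsule feature extractor, token-level attention, concatenation with auxiliary features, and a sigmoid head) can be sketched in PyTorch as follows. To stay short, the BERT encoder is replaced by a plain embedding layer and the bidirectional capsule layers by a BiLSTM, so this is a structural illustration only, not the proposed model.

```python
import torch
import torch.nn as nn

class ToxicCommentClassifier(nn.Module):
    """Simplified forward pass mirroring the abstract's structure:
    embeddings -> bidirectional feature extraction -> attention over tokens
    -> concatenation with context-assisted features -> sigmoid output."""
    def __init__(self, vocab=30000, emb=128, hidden=64, aux_dim=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden + aux_dim, 1)

    def forward(self, token_ids, aux_features):
        h, _ = self.bilstm(self.embed(token_ids))      # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)   # attend over tokens
        sentence = (weights * h).sum(dim=1)            # (B, 2H)
        combined = torch.cat([sentence, aux_features], dim=-1)
        return torch.sigmoid(self.head(combined)).squeeze(-1)

model = ToxicCommentClassifier()
tokens = torch.randint(0, 30000, (2, 16))   # two comments, 16 tokens each
aux = torch.rand(2, 4)                       # context-assisted features
print(model(tokens, aux))                    # toxicity probabilities
```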
-
A survey of source code vulnerability detection research based on graph neural networks
- CHEN Zi-xiong, CHEN Xu, JING Yong-jun, SONG Ji-fei
2024, 46(10): 1775-1792.
Abstract:
With the widespread application of open-source software across various domains, source code vulnerabilities have led to a series of serious security issues. Given the potential threats these vulnerabilities pose to computer systems, detecting source code vulnerabilities in software to prevent network attacks is a crucial research area. To achieve automated detection and reduce human labor costs, researchers have proposed numerous traditional deep learning-based methods. However, these methods mostly treat source code as natural language sequences and do not adequately consider the structural information of the code, limiting their detection effectiveness. In recent years, methods for detecting source code vulnerabilities based on code graph representation and graph neural networks have emerged. This paper provides a comprehensive review of the application of graph neural networks in source code vulnerability detection and proposes a general framework for source code vulnerability detection based on graph neural networks. Starting from three levels of vulnerability detection granularity: file-level, function-level, and slice-level, the existing methods and relevant datasets are systematically summarized and elucidated. Finally, the challenges faced by this field are discussed, and potential research directions for the future are outlined.
-
Dynamic agile software project scheduling using dual-index group learning particle swarm optimization
- SHEN Xiao-ning, XU Ji-yong, MAO Ming-jian, CHEN Wen-yan, SONG Li-yan,
2024, 46(10): 1793-1806.
Abstract:
To address the two tightly coupled sub-problems of user story selection and task allocation in agile software development, while considering the uncertainties of new user stories and developers' working hours, a dynamic periodic scheduling model for agile software projects is constructed. A particle swarm optimization algorithm based on grouped learning, which uses both objective values and potential values as grouping indicators, is proposed. By selecting different learning exemplars according to the characteristics of different groups, the diversity of the search is enhanced. Initialization and local search strategies are designed based on return on investment and time utilization, allowing the algorithm to adapt to environmental changes and improve its exploration capability. Compared with seven existing algorithms, the proposed algorithm devises scheduling plans with greater output value and higher time utilization.
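A minimal PSO sketch of the grouping idea is given below: particles are ranked both by objective value and by recent improvement (a stand-in for the "potential value"), and each group copies a different learning exemplar. The grouping thresholds, coefficients, and the sphere test function are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def sphere(x):                      # stand-in objective (minimize)
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
N, D, ITERS = 30, 10, 200
pos = rng.uniform(-5, 5, (N, D))
vel = np.zeros((N, D))
pbest, pbest_f = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(ITERS):
    f = np.array([sphere(p) for p in pos])
    improve = pbest_f - f                      # "potential": recent progress
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

    # Dual-index grouping (illustrative): rank particles by objective value
    # and by potential, and give each group a different learning exemplar.
    elite = np.argsort(f)[: N // 3]            # good objective: exploit gbest
    promising = np.argsort(-improve)[: N // 3] # fast improvers: learn from elites
    for i in range(N):
        if i in elite:
            exemplar = gbest
        elif i in promising:
            exemplar = pbest[rng.choice(elite)]
        else:                                  # remaining group: random peer
            exemplar = pbest[rng.integers(N)]
        r1, r2 = rng.random(D), rng.random(D)
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) \
                              + 1.5 * r2 * (exemplar - pos[i])
        pos[i] = np.clip(pos[i] + vel[i], -5, 5)

print("best objective value:", round(pbest_f.min(), 6))
```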
-
A study of formalizing programming languages with Barendregt’s variable convention
2024, 46(10): 1807-1814.
Abstract:
Implementing the name bindings that occur in programming languages, type systems, and logical systems is not easy. On paper, human reasoning can detect and avoid potential variable capture, but in an implementation, detecting variable capture requires clumsy auxiliary operations that complicate formalization and proofs. Several name binding techniques have been proposed to offer readable representations, capture-free substitutions, and intuitive proofs. However, their formalizations differ considerably from the theory: terms and proofs do not look like those on paper. This paper proposes a name binding technique in which substitutions and inference rules incorporate a term refreshing function and thereby comply with Barendregt's variable convention, making the formalization of formal systems almost identical to their paper presentation. The untyped λ-calculus and the simply typed λ-calculus are formalized to demonstrate the merits of this technique.
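The core trick, refreshing every binder to a globally fresh name before substitution so that Barendregt's variable convention holds by construction, can be shown on a toy untyped λ-calculus. The Python sketch below uses nested tuples for terms and a naive refresh-then-substitute strategy; it illustrates the idea only and is unrelated to the paper's proof-assistant formalization.

```python
import itertools

# Untyped lambda-terms as nested tuples:
#   ("var", x) | ("lam", x, body) | ("app", f, a)
fresh_names = (f"x{i}" for i in itertools.count())

def refresh(t):
    """Rename every binder to a globally fresh name, so bound names never
    clash with free names (Barendregt's variable convention)."""
    if t[0] == "var":
        return t
    if t[0] == "lam":
        x, body = t[1], refresh(t[2])
        y = next(fresh_names)
        return ("lam", y, subst(body, x, ("var", y)))
    return ("app", refresh(t[1]), refresh(t[2]))

def subst(t, x, s):
    """Substitute s for x in t; t is assumed refreshed, so no capture check
    is needed beyond skipping shadowed binders."""
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "lam":
        if t[1] == x:
            return t
        return ("lam", t[1], subst(t[2], x, s))
    return ("app", subst(t[1], x, s), subst(t[2], x, s))

def beta(t):
    """One beta step: (\\x. b) a  -->  b[x := a], refreshing the redex first."""
    if t[0] == "app" and t[1][0] == "lam":
        lam = refresh(t[1])
        return subst(lam[2], lam[1], t[2])
    return t

# (\x. \y. x) y  should reduce to  \y'. y  (no capture of the free y).
term = ("app", ("lam", "x", ("lam", "y", ("var", "x"))), ("var", "y"))
print(beta(term))
```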
-
Code plagiarism detection based on graph neural network
- CHEN Chang-feng, ZHAO Hong-zhou, ZHOU Kai-qing
2024, 46(10): 1815-1824.
Abstract:
As open-source data becomes increasingly accessible, the cost of code plagiarism has decreased, significantly impacting the healthy development of the software industry. Addressing the limitation of existing plagiarism detection methods, which struggle to deeply mine the semantic and structural information of source code, leading to suboptimal semantic plagiarism detection results, this paper introduces a graph neural network-based code plagiarism detection method. This method uses graph neural networks to effectively represent the characteristics of source code, including semantic and structural information, and employs graph attention networks to enhance these features. Furthermore, it utilizes neural tensor networks to obtain similarity vectors between different source codes. Finally, a fully connected network calculates the similarity between different source codes. Meanwhile, the dropout mechanism is incorporated to balance neuron weights, optimize model design, and prevent overfitting. To validate the effectiveness of the proposed method, experiments were conducted on an OJ system dataset, and the results were compared with those of current popular detection methods. The experimental results demonstrate that the proposed method achieves better performance.
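The neural tensor network step, which turns two graph-level code embeddings into a similarity vector and then a similarity score, is sketched below in PyTorch. The embedding size, number of tensor slices, and the sigmoid read-out are illustrative choices; the GNN and graph attention encoders that would produce the embeddings are stubbed out with random tensors.

```python
import torch
import torch.nn as nn

class NeuralTensorNetwork(nn.Module):
    """Scores the similarity of two code-graph embeddings h1, h2 (each of
    size d) with K bilinear slices plus a linear term, as commonly used in
    graph similarity models; the dimensions here are illustrative."""
    def __init__(self, d=32, k=8):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, d, d) * 0.1)  # bilinear slices
        self.V = nn.Linear(2 * d, k)                       # linear term
        self.out = nn.Linear(k, 1)                         # similarity score

    def forward(self, h1, h2):
        bilinear = torch.einsum("bd,kde,be->bk", h1, self.W, h2)
        lin = self.V(torch.cat([h1, h2], dim=-1))
        sim_vec = torch.tanh(bilinear + lin)      # "similarity vector"
        return torch.sigmoid(self.out(sim_vec))   # similarity in (0, 1)

# Graph-level embeddings would come from a GNN + attention pooling over each
# source file's graph representation; random tensors stand in for them here.
ntn = NeuralTensorNetwork()
g1, g2 = torch.randn(4, 32), torch.randn(4, 32)
print(ntn(g1, g2).squeeze(-1))   # plagiarism similarity per pair
```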
-
An improved fighting behavior recognition algorithm based on YOLOv8: EFD-YOLO
- CAO Yu-qi, XU Hui-ying, ZHU Xin-zhong, HUANG Xiao, CHEN Chen, ZHOU Si-yu, SHENG Ke
2024, 46(10): 1825-1834.
Abstract:
In today's society, fighting behavior detection technology is crucial for preventing violent incidents and conflicts. By integrating surveillance cameras with object detection, real-time monitoring of crowd activities becomes possible, effectively preempting potential threats. To this end, an improved fighting behavior recognition algorithm based on YOLOv8, EFD-YOLO, is proposed. EFD-YOLO replaces the backbone network with EfficientRep, enhancing the efficiency of feature extraction and enabling accurate real-time feature extraction within the surveillance area. The FocalNeXt focus module, which combines deep convolutions with skip connections, addresses occlusion and multi-scale feature requirements. Furthermore, Focal-DIoU is adopted as the bounding box regression loss function, reducing false detections in complex scenarios. Experimental results show that EFD-YOLO outperforms YOLOv8n by 4.2% on the mAP@0.5 metric and by 2.5% on the mAP@0.5:0.95 metric, making it suitable for real-time detection of fighting behaviors in critical locations.
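Focal-DIoU is not specified in the abstract beyond its name; for orientation, the snippet below implements the standard DIoU term (IoU penalized by normalized center distance) on which such losses build. Treat it as background reference code, not as the loss used in EFD-YOLO.

```python
import torch

def diou_loss(pred, target, eps=1e-7):
    """Distance-IoU loss for axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Intersection and union areas.
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared distance between box centers.
    cpx = (pred[:, 0] + pred[:, 2]) / 2; cpy = (pred[:, 1] + pred[:, 3]) / 2
    ctx = (target[:, 0] + target[:, 2]) / 2; cty = (target[:, 1] + target[:, 3]) / 2
    center_dist = (cpx - ctx) ** 2 + (cpy - cty) ** 2
    # Squared diagonal of the smallest enclosing box.
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    return 1 - iou + center_dist / diag

pred = torch.tensor([[10., 10., 50., 60.]])
gt = torch.tensor([[12., 15., 48., 58.]])
print(diou_loss(pred, gt))
```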
-
Hybrid U-shaped network and Transformer for image deblurring
- CHEN Qing-jiang, SHAO Fei, WANG Xuan-jun
2024, 46(10): 1843-1851.
Abstract:
To address the problem that existing deblurring methods cannot effectively restore fine details of images, an image deblurring method combining a U-shaped network and a Transformer is proposed. Firstly, a multi-scale feature extraction module is used to extract shallow feature information from the image. Then, a hierarchical nested U-shaped subnet with a stepwise feature enhancement module is employed to obtain deep feature information while preserving image detail information. Next, a local-global residual refinement module is constructed, which fully extracts global and local information through information interaction between convolutional neural networks and Swin Transformer, and further refines the feature information. Finally, a 1×1 convolutional layer is used for feature reconstruction. The proposed method achieves a peak signal-to-noise ratio (PSNR) of 32.92 dB and a structural similarity index measure (SSIM) of 0.964 on the GoPro dataset, both outperforming the comparative methods. Experimental results demonstrate that the proposed method can effectively remove blur and reconstruct a latent sharp image with rich details.
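For readers reproducing the reported numbers, PSNR and SSIM can be computed with scikit-image as below; the random arrays merely stand in for a restored frame and its sharp GoPro reference.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Illustrative evaluation of a deblurred output against its sharp reference.
rng = np.random.default_rng(0)
sharp = rng.random((256, 256, 3))
restored = np.clip(sharp + rng.normal(0, 0.05, sharp.shape), 0, 1)

psnr = peak_signal_noise_ratio(sharp, restored, data_range=1.0)
ssim = structural_similarity(sharp, restored, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```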
Artificial Intelligence and Data Mining
-
Multi-spatial scale traffic prediction model based on spatio-temporal Transformer
- ZHANG Yue, ZHANG Lei, LIU Bai-long, LIANG Zhi-zhen, ZHANG Xue-fei
2024, 46(10): 1852-1863.
Abstract:
Accurate traffic prediction is crucial for improving the efficiency of intelligent transportation systems. The spatial dependence of a transportation system is reflected not only in the connectivity of roads but, more importantly, in the hidden spatial dependence formed by factors such as road attributes and regional functions. In addition, the time dependence between traffic data has a strict relative positional relationship, and ignoring this makes accurate traffic prediction difficult. To address these issues, a multi-spatial scale traffic prediction model based on a spatio-temporal Transformer (MSS-STT) is proposed. MSS-STT uses multiple specific Transformer networks to model different spatial scales to capture hidden spatial dependencies, while using graph convolutional networks to learn static spatial features. Then, a gating mechanism is used to fuse the spatial dependencies and static spatial features at different spatial scales according to their respective importance for prediction. Finally, distinct temporal dependencies are extracted according to the contributions that different relative positions in the time series make to the prediction. Experimental results on the PeMS dataset indicate that MSS-STT outperforms state-of-the-art baselines.
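The gating step that merges the Transformer branch (hidden spatial dependencies) with the GCN branch (static spatial features) can be written compactly; the sketch below assumes equal hidden sizes and a sigmoid gate, which is a common construction but not necessarily the exact gate used in MSS-STT.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuses hidden spatial dependencies from a Transformer branch with
    static spatial features from a GCN branch through a learned gate; the
    branch encoders themselves are stubbed out here."""
    def __init__(self, d=64):
        super().__init__()
        self.gate = nn.Linear(2 * d, d)

    def forward(self, h_transformer, h_gcn):
        g = torch.sigmoid(self.gate(torch.cat([h_transformer, h_gcn], dim=-1)))
        return g * h_transformer + (1 - g) * h_gcn   # per-feature weighting

B, N, D = 8, 207, 64           # batch, sensors, hidden size (illustrative)
h_attn = torch.randn(B, N, D)  # spatio-temporal Transformer branch output
h_gcn = torch.randn(B, N, D)   # graph-convolution output over the road graph
fused = GatedFusion(D)(h_attn, h_gcn)
print(fused.shape)             # torch.Size([8, 207, 64])
```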
-
Text classification combining feature projection and negative supervision
- FENG Xing-jie, CAO Ruo-xuan
2024, 46(10): 1864-1874.
Abstract:
Text used for classification often suffers from semantic ambiguity and sparse features, and the meaning of certain words in the sentence may not be consistent with the semantics represented by the actual label of the text, which can lead to classification errors. To address the above issues, a multi-task text classification model combining feature projection and negative supervision is proposed. The main task uses feature projection networks to extract purified vectors with obvious class features and perform classification. The auxiliary task gives the model negative supervision to expand the differences between different categories of text vectors and eliminate the negative impact of individual words. In addition, RoBERTa and BiLSTM are used to simultaneously extract features from positive and negative samples to capture rich semantic information. The model was tested on the THUCNews title classification and micro-loan semantic similarity analysis dataset, and the results show that the model has better performance than existing models.
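Feature projection is usually realized by removing, from each sentence vector, its component along a learned common-feature direction, leaving a purified vector with clearer class information. The sketch below shows that orthogonal-projection step in isolation, with random tensors in place of the RoBERTa and BiLSTM outputs; it is a simplified rendering of the idea rather than the paper's network.

```python
import torch

def orthogonal_projection(features, common):
    """Remove the component of each feature vector that lies along the
    "common" (class-indistinct) direction, keeping only the purified part."""
    scale = (features * common).sum(dim=-1, keepdim=True) / \
            (common * common).sum(dim=-1, keepdim=True).clamp(min=1e-8)
    return features - scale * common

main_feat = torch.randn(4, 128)    # main-task sentence vectors (stand-ins)
common_feat = torch.randn(4, 128)  # common features from an auxiliary branch
purified = orthogonal_projection(main_feat, common_feat)
# The purified vectors are orthogonal to the common direction:
print(((purified * common_feat).sum(dim=-1)).abs().max())
```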
-
A multi-strategy improved hunter-prey optimization algorithm
- WANG Kun, LIU Jie, LI Wei, TAN Wei, QIN Tao, YANG Jing,
2024, 46(10): 1875-1887.
Abstract:
To address the slow convergence and the tendency to fall into local optima of the hunter-prey optimizer (HPO), a multi-strategy improved hunter-prey optimization algorithm (IHPO) is proposed. Firstly, a good point set is used to initialize the population and enhance its diversity. Secondly, a nonlinear control parameter strategy is introduced to optimize the exploration-exploitation balance parameter, adjust the weights of global and local search, and improve convergence speed. Then, the Levy flight strategy and a greedy strategy are introduced to update the hunter position, enabling the population to jump out of local optima, and the golden sine strategy is introduced to update the prey position and improve local exploitation ability. Benchmark functions are used for optimization comparisons, and the Wilcoxon rank-sum test is applied between IHPO and six other intelligent algorithms. The simulation results show that IHPO has better optimization ability and faster convergence. Finally, IHPO is applied to two practical engineering optimization problems, and the simulation results show that IHPO has good applicability and stability in solving engineering optimization problems.
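Of the strategies listed, the Lévy flight plus greedy acceptance for the hunter update is the most self-contained, and is sketched below using the Mantegna method for drawing Lévy steps. The step scaling, the sphere test function, and the update form are illustrative assumptions rather than IHPO's exact equations.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Draw a Levy-flight step via the Mantegna algorithm, the heavy-tailed
    jump commonly used to help a population escape local optima."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def greedy_levy_update(position, best, objective, rng=np.random.default_rng()):
    """Hunter-position update sketch: propose a Levy jump relative to the
    current best and keep it only if it improves the objective (greedy)."""
    candidate = position + 0.01 * levy_step(position.size, rng=rng) * (position - best)
    return candidate if objective(candidate) < objective(position) else position

sphere = lambda x: float(np.sum(x ** 2))
x = np.array([2.0, -1.5, 0.5])
best = np.zeros(3)
for _ in range(100):
    x = greedy_levy_update(x, best, sphere)
print("objective after greedy Levy updates:", round(sphere(x), 6))
```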
-
A low-rank cross-modal Transformer for multimodal sentiment analysis
- SUN Jie, CHE Wen-gang, GAO Sheng-xiang
2024, 46(10): 1888-1900.
Abstract:
-
Multimodal sentiment analysis, which extends text-based affective computing to multimodal contexts with visual and speech modalities, is an emerging research area. In the pretrain-finetune paradigm, fine-tuning large pre-trained language models is necessary for good performance on multimodal sentiment analysis. However, fine-tuning large-scale pretrained language models is still prohibitively expensive and insufficient cross-modal interaction also hinders performance. Therefore, a low-rank cross-modal Transformer (LRCMT) is proposed to address these limitations. Inspired by the low-rank parameter updates exhibited by large pretrained models adapting to natural language tasks, LRCMT injects trainable low-rank matrices into frozen layers, significantly reducing trainable parameters while allowing dynamic word representations. Moreover, a cross-modal modules is designed where visual and speech modalities interact before fusing with the text. Extensive experiments on benchmarks demonstrate LRCMT's efficiency and effectiveness, achieving comparable or better performance than full fine-tuning by only tuning ~0.76% parameters. Furthermore, it also obtains state-of-the-art or competitive results on multiple metrics. Ablations validate that low-rank fine-tuning and sufficient cross-modal interaction contribute to LRCMT's strong performance. This paper reduces the fine-tuning cost and provides insights into efficient and effective cross-modal fusion.