• A journal of the China Computer Federation (CCF)
  • China Science and Technology Core Journal
  • Chinese Core Journal

Most Downloaded Articles


    In last 2 years
    A survey of Chinese text classification based on deep learning
    GAO Shan, LI Shi-jie, CAI Zhi-ping
    Computer Engineering & Science    2024, 46 (04): 684-692.  
    Abstract views: 430 | PDF (1058KB) downloads: 762
    In the era of big data, with the continuing spread of social media, text data of all kinds keep growing on the network and in daily life, and analyzing and managing such data with text classification technology is of great significance. Text classification is a basic research area of natural language processing within artificial intelligence: given a set of categories, it assigns texts to classes according to their content. Its application scenarios are very extensive, including sentiment analysis, topic classification, and relation classification. Deep learning is a family of representation learning methods in machine learning that shows good classification performance on text data. Chinese and English text differ in form, sound, and meaning. Focusing on what is unique to Chinese text classification, this paper analyzes and expounds the deep learning methods used for Chinese text classification, and finally sorts out the datasets commonly used for the task.
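As a toy illustration of the classification task described above (not from the surveyed paper): Chinese has no explicit word boundaries, so a character-level model is a common baseline when no segmenter is available. The sketch below is a minimal character-level naive Bayes classifier in pure Python; the training sentences and labels are invented for illustration.

```python
from collections import Counter, defaultdict
import math

# Toy sentiment classification: character-level naive Bayes with
# Laplace smoothing. Training data is invented for illustration.
train = [
    ("这部电影非常好看", "pos"),
    ("演员演技很棒",     "pos"),
    ("剧情太差劲了",     "neg"),
    ("浪费时间剧情无聊", "neg"),
]

char_counts = defaultdict(Counter)   # label -> character frequencies
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    char_counts[label].update(text)

vocab = {c for text, _ in train for c in text}

def classify(text):
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / len(train))
        total = sum(char_counts[label].values())
        for c in text:
            # Laplace smoothing over the character vocabulary
            lp += math.log((char_counts[label][c] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

With this toy data, `classify("好看")` picks "pos" and `classify("差劲")` picks "neg"; real systems replace the character counts with learned representations.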

    Survey on fuzz testing technologies
    NIU Sheng-jie, LI Peng, ZHANG Yu-jie
    Computer Engineering & Science    2022, 44 (12): 2173-2186.  
    Abstract views: 608 | PDF (884KB) downloads: 731
    As software system security draws more and more attention, fuzz testing (fuzzing), a security testing technique for detecting vulnerabilities, has become increasingly widespread and important thanks to its high degree of automation and low false-alarm rate. After continuous improvement in recent years, fuzzing has achieved many results in both technical development and application innovation. Firstly, we briefly explain the related concepts and basic theories of fuzzing, summarize its applications in various fields, and analyze the corresponding fuzzing solutions according to the needs of vulnerability mining in each field. Then, we focus on the important advances in fuzzing in recent years, including improvements and innovations in testing tools, frameworks, systems, and methods, and we analyze and summarize the innovative methods and theories each adopts, as well as the advantages and disadvantages of each tool and system. Finally, from the perspectives of protocol reverse engineering, cloud platform construction, emerging technology integration, anti-fuzzing countermeasure research, and fuzzing tool integration, we point out directions for further research on fuzzing.

    A survey of precipitation nowcasting based on deep learning
    MA Zhi-feng, ZHANG Hao, LIU Jie
    Computer Engineering & Science    2023, 45 (10): 1731-1753.  
    Abstract views: 735 | PDF (1495KB) downloads: 731
    Precipitation nowcasting refers to high-resolution prediction of precipitation in the short term, an important but difficult task. In the context of deep learning, it is viewed as a radar echo map-based spatiotemporal sequence prediction problem. Precipitation prediction is a complex self-supervised task. Since the motion changes significantly in both spatial and temporal dimensions, ordinary models struggle with the complex nonlinear spatiotemporal transformations, resulting in blurred predictions. How to further improve model prediction performance and reduce this ambiguity is therefore a key focus of research in the field. Research on precipitation nowcasting is still at an early stage, and the existing work lacks systematic classification and discussion, so a comprehensive survey of the field is needed. This paper summarizes and analyzes the relevant knowledge of precipitation nowcasting from different dimensions and gives future research directions. The specific contents are as follows: (1) The significance of precipitation nowcasting and the advantages and disadvantages of traditional forecasting models are clarified. (2) The mathematical definition of the nowcasting problem is given. (3) Common predictive models are comprehensively summarized and analyzed. (4) Several open-source radar datasets from different countries and regions are introduced, with download links. (5) The metrics used for prediction quality assessment are briefly introduced. (6) The loss functions used in different models are discussed. (7) Future research directions for precipitation nowcasting are pointed out.
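The quality metrics mentioned in point (5) are typically the categorical scores CSI, POD, and FAR, computed from a rain/no-rain contingency table after thresholding predictions and observations. A minimal sketch (the grids below are toy values, not from any radar dataset):

```python
# Categorical verification scores commonly used in nowcasting papers.
# Predictions and observations are binarized at a rain/no-rain threshold.
def contingency(pred, obs, thr=0.5):
    hits = misses = false_alarms = 0
    for p, o in zip(pred, obs):
        wet_p, wet_o = p >= thr, o >= thr
        if wet_p and wet_o:
            hits += 1
        elif wet_o:
            misses += 1
        elif wet_p:
            false_alarms += 1
    return hits, misses, false_alarms

def csi(h, m, fa):   # Critical Success Index: hits / (hits + misses + false alarms)
    return h / (h + m + fa)

def pod(h, m, fa):   # Probability of Detection: hits / (hits + misses)
    return h / (h + m)

def far(h, m, fa):   # False Alarm Ratio: false alarms / (hits + false alarms)
    return fa / (h + fa)

pred = [0.8, 0.6, 0.1, 0.7, 0.0, 0.9]   # toy predicted intensities
obs  = [0.9, 0.2, 0.0, 0.8, 0.6, 0.7]   # toy observed intensities
h, m, fa = contingency(pred, obs)        # -> 3 hits, 1 miss, 1 false alarm
```

Here CSI = 3/5 = 0.6, POD = 3/4, FAR = 1/4; real evaluations compute these per radar frame and threshold.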

    State of the art analysis of China HPC 2023
    ZHANG Yun-quan, DENG Li, YUAN Liang, YUAN Guo-xing
    Computer Engineering & Science    2023, 45 (12): 2091-2098.  
    Abstract views: 512 | PDF (979KB) downloads: 676
    In this paper, according to the latest China HPC TOP100 list released by CCF TCHPC in late November 2023, the total performance trends of the China HPC TOP100 and TOP10 of 2023 are presented. Following this, the performance, manufacturer, and application-area characteristics are analyzed separately in detail.

    Survey on graph convolutional neural network
    LIU Jun-qi, TU Wen-xuan, ZHU En
    Computer Engineering & Science    2023, 45 (08): 1472-1481.  
    Abstract views: 332 | PDF (787KB) downloads: 584
    With the widespread existence of graph data, graph convolutional neural networks (GCNNs) are developing ever faster. According to how the convolution operator is defined, GCNNs can be roughly divided into two categories: those based on spectral methods and those based on spatial methods. Firstly, representative models of the two categories and the connections between them are discussed in detail, and the graph pooling operations are comprehensively summarized. Furthermore, the extensive applications of GCNNs in various fields are introduced, and several possible development directions of GCNNs are proposed. Finally, conclusions are drawn.
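For readers unfamiliar with the spectral branch, the propagation rule popularized by Kipf and Welling, H' = D^{-1/2}(A+I)D^{-1/2} H W (activation omitted), can be sketched in a few lines; the tiny 3-node graph, features, and weights below are illustrative only.

```python
import math

# One spectral graph-convolution layer on a 3-node path graph.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

def gcn_layer(A, H, W):
    n = len(A)
    # A_hat = A + I (add self-loops)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_hat]
    # Symmetric normalization: S = D^{-1/2} A_hat D^{-1/2}
    S = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]
    return matmul(matmul(S, H), W)

H = [[1.0], [2.0], [3.0]]   # one scalar feature per node
W = [[1.0]]                 # identity weight for clarity
H1 = gcn_layer(A, H, W)     # each node's feature becomes a degree-weighted
                            # average over itself and its neighbors
```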

    Research and application of whale optimization algorithm
    WANG Ying-chao
    Computer Engineering & Science    2024, 46 (05): 881-896.  
    Abstract views: 191 | PDF (901KB) downloads: 556
    The whale optimization algorithm (WOA) is a novel swarm intelligence optimization algorithm that converges in probability. Its principles are simple and easy to implement, its few parameters are easy to adjust, and it balances global and local search. This paper systematically analyzes the basic principles of WOA and the factors influencing its performance, focusing on the advantages and limitations of existing improvement strategies and hybrid strategies. Additionally, it elaborates on the applications and development of WOA in support vector machines, artificial neural networks, combinatorial optimization, complex function optimization, and other areas. Finally, considering the characteristics of WOA and its application achievements, the paper offers an outlook on future research and development directions for WOA.
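The standard WOA update rules (shrinking encircling, bubble-net spiral, and random search, governed by the linearly decreasing parameter a) can be sketched as follows. This is a minimal illustration of the textbook algorithm on a sphere function, not any of the improved variants the paper surveys; the population size and iteration count are arbitrary.

```python
import math, random

def woa(f, dim=5, n_whales=20, iters=200, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=f)[:]
    for t in range(iters):
        a = 2 - 2 * t / iters                       # decreases linearly 2 -> 0
        for i, x in enumerate(X):
            r1, r2 = rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if abs(A) < 1:                      # encircle the current best (prey)
                    x[:] = [best[d] - A * abs(C * best[d] - x[d]) for d in range(dim)]
                else:                               # explore around a random whale
                    rand = X[rng.randrange(n_whales)]
                    x[:] = [rand[d] - A * abs(C * rand[d] - x[d]) for d in range(dim)]
            else:                                   # bubble-net spiral update
                l = rng.uniform(-1, 1)
                x[:] = [abs(best[d] - x[d]) * math.exp(l) * math.cos(2 * math.pi * l) + best[d]
                        for d in range(dim)]
            x[:] = [min(hi, max(lo, v)) for v in x]  # clamp to bounds
            if f(x) < f(best):
                best = x[:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = woa(sphere)   # converges near the origin on this unimodal test function
```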


    Review of personalized recommendation research based on meta-learning
    WU Guo-dong, LIU Xu-xu, BI Hai-jiao, FAN Wei-cheng, TU Li-jing
    Computer Engineering & Science    2024, 46 (02): 338-352.  
    Abstract views: 231 | PDF (1157KB) downloads: 527
    As a tool to alleviate “information overload”, recommendation systems filter redundant information and provide personalized recommendation services for users, and have been widely used in recent years. However, actual recommendation scenarios often suffer from issues such as cold start and the difficulty of adaptively selecting among recommendation algorithms to fit the environment. Meta-learning, which has the advantage of quickly learning new knowledge and skills from a small number of training samples, is increasingly being applied in recommendation system research. This paper discusses the main research on using meta-learning to alleviate the cold start and adaptive recommendation problems in recommendation systems. Firstly, it analyzes the research progress of meta-learning-based recommendation in these two areas. Then, it points out the challenges faced by existing work, such as difficulty adapting to complex task distributions, high computational cost, and a tendency to fall into local optima. Finally, it surveys some of the latest research directions in meta-learning for recommendation systems.

    GNNSched: A GNN inference task scheduling framework on GPU
    SUN Qing-xiao, LIU Yi, YANG Hai-long, WANG Yi-qing, JIA Jie, LUAN Zhong-zhi, QIAN De-pei
    Computer Engineering & Science    2024, 46 (01): 1-11.  
    Abstract views: 310 | PDF (1464KB) downloads: 491
    Due to frequent memory access, graph neural network (GNN) often has low resource utilization when running on GPU. Existing inference frameworks, which do not consider the irregularity of GNN input, may exceed GPU memory capacity when directly applied to GNN inference tasks. For GNN inference tasks, it is necessary to pre-analyze the memory occupation of concurrent tasks based on their input characteristics to ensure successful co-location of concurrent tasks on GPU. In addition, inference tasks submitted in multi-tenant scenarios urgently need flexible scheduling strategies to meet the quality of service requirements for concurrent inference tasks. To solve these problems, this paper proposes GNNSched, which efficiently manages the co-location of GNN inference tasks on GPU. Specifically, GNNSched organizes concurrent inference tasks into a queue and estimates the memory occupation of each task based on a cost function at the operator level. GNNSched implements multiple scheduling strategies to generate task groups, which are iteratively submitted to GPU for concurrent execution. Experimental results show that GNNSched can meet the quality of service requirements for concurrent GNN inference tasks and reduce the response time of inference tasks.
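GNNSched's actual cost function is not reproduced here; the sketch below only illustrates the general idea of operator-level memory estimation, summing the tensors live during each GCN layer and taking the peak. The tensor inventory (features, weights, edge-wise message buffer) and float32 sizing are hypothetical simplifications.

```python
# Hypothetical operator-level memory estimator for a GCN, in the spirit of
# input-aware pre-analysis: per layer, count the bytes of the tensors that
# are live during that operator, then take the maximum over layers.
BYTES = 4  # float32

def gcn_layer_mem(n_nodes, n_edges, f_in, f_out):
    feat_in  = n_nodes * f_in  * BYTES   # input node features
    weights  = f_in    * f_out * BYTES   # layer weight matrix
    messages = n_edges * f_out * BYTES   # edge-wise aggregation buffer
    feat_out = n_nodes * f_out * BYTES   # output node features
    return feat_in + weights + messages + feat_out

def model_peak_mem(n_nodes, n_edges, layer_dims):
    # layer_dims: e.g. [100, 64, 16] describes a 2-layer GCN
    peaks = [gcn_layer_mem(n_nodes, n_edges, fi, fo)
             for fi, fo in zip(layer_dims, layer_dims[1:])]
    return max(peaks)

peak = model_peak_mem(1000, 5000, [100, 64, 16])
```

A scheduler could sum such estimates across queued tasks and admit a group only while the total stays under GPU memory capacity.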

    An improved dense pedestrian detection algorithm based on YOLOv8: MER-YOLO
    WANG Ze-yu, XU Hui-ying, ZHU Xin-zhong, LI Chen, LIU Zi-yang, WANG Zi-yi
    Computer Engineering & Science    2024, 46 (06): 1050-1062.  
    Abstract views: 375 | PDF (3288KB) downloads: 478
    In large, crowded venues, abnormal crowd gathering occurs from time to time, which poses challenges for the dense pedestrian detection needed in application scenarios such as autonomous driving and crowd monitoring systems in large public places. The new generation of dense pedestrian detection technology requires higher accuracy, smaller computing overhead, faster detection, and more convenient deployment. In view of these requirements, a lightweight dense pedestrian detection algorithm based on YOLOv8, MER-YOLO, is proposed. It first uses MobileViT as the backbone network to improve the model's overall feature extraction in pedestrian gathering areas. The EMA attention mechanism module is introduced to encode global information, further aggregate pixel-level features through dimensional interaction, and strengthen small-target detection by adding a 160×160-scale detection head. Using Repulsion Loss as the bounding-box loss function reduces missed and false detections of small pedestrians in dense crowds. The experimental results show that, compared with YOLOv8n, MER-YOLO improves mAP@0.5 by 4.5% on the CrowdHuman dataset and 2.1% on the WiderPerson dataset, while requiring only 3.1×10⁶ parameters and 9.8 GFLOPs, meeting the deployment requirements of low computing power and high precision.
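Repulsion Loss comes from prior work (Wang et al.); its RepGT term penalizes a predicted box's overlap, measured as intersection over ground-truth area (IoG), with ground-truth boxes other than its match, pushing predictions away from neighboring pedestrians. A toy sketch of that term (boxes are (x1, y1, x2, y2) tuples; the smooth-ln penalty follows the original formulation):

```python
import math

def area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iog(pred, gt):
    """Intersection over ground-truth area (not IoU)."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    return inter / area(gt)

def smooth_ln(x, sigma=0.5):
    # Smoothed -ln(1 - x): linear beyond sigma to avoid exploding gradients.
    return -math.log(1 - x) if x <= sigma else (x - sigma) / (1 - sigma) - math.log(1 - sigma)

def rep_gt(pred, matched_gt, all_gts):
    """Penalty for overlapping ground truths OTHER than the matched one."""
    others = [g for g in all_gts if g is not matched_gt]
    if not others:
        return 0.0
    return smooth_ln(max(iog(pred, g) for g in others))

gt1 = (0.0, 0.0, 2.0, 2.0)          # matched ground truth
gt2 = (1.0, 0.0, 3.0, 2.0)          # a neighboring pedestrian
pred = (0.0, 0.0, 2.0, 2.0)         # prediction sitting on gt1
loss = rep_gt(pred, gt1, [gt1, gt2])  # nonzero: pred half-covers gt2
```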

    A hybrid multi-strategy improved sparrow search algorithm
    LI Jiang-hua, WANG Peng-hui, LI Wei
    Computer Engineering & Science    2024, 46 (02): 303-315.  
    Abstract views: 183 | PDF (1768KB) downloads: 474
    Aiming at the problems that the sparrow search algorithm (SSA) suffers from premature convergence when seeking the optimum of an objective function, easily falls into local optima on multimodal problems, and lacks accuracy on high-dimensional problems, a hybrid multi-strategy improved sparrow search algorithm (MISSA) is proposed. Since the quality of the initial solutions greatly affects the convergence speed and accuracy of the whole algorithm, an elite opposition-based learning strategy is introduced to expand the search area and improve the quality and diversity of the initial population, and the step size is controlled in stages to improve solution accuracy. Adding a Circle-map parameter and a cosine factor to the follower position update improves the ergodicity and search ability of the algorithm. An adaptive selection mechanism is used to update individual sparrow positions, and Lévy flight is added to strengthen optimization and the ability to jump out of local optima. The improved algorithm is compared with SSA and other algorithms on 13 test functions, and the Friedman test is carried out. The experimental results show that MISSA effectively improves optimization accuracy and convergence speed, can be used on high-dimensional problems, and has high stability.
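Two of the strategies named above, the Circle chaotic map and elite opposition-based learning, have standard closed forms and can be sketched directly; the coefficient k in the opposition formula is the usual textbook formulation, and the exact way MISSA wires these into the position updates is not reproduced here.

```python
import math

def circle_map(x0, n):
    """Circle chaotic map, a common diversification choice:
    x_{k+1} = (x_k + 0.2 - (0.5 / (2*pi)) * sin(2*pi*x_k)) mod 1."""
    xs = [x0]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append((x + 0.2 - (0.5 / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0)
    return xs

def elite_opposition(x, lo, hi, k=1.0):
    """Elite opposition-based learning: reflect a solution inside the
    current per-dimension bounds, x'_d = k * (lo_d + hi_d) - x_d."""
    return [k * (l + h) - v for v, l, h in zip(x, lo, hi)]

seq = circle_map(0.3, 100)                          # chaotic sequence in [0, 1)
opp = elite_opposition([1.0, 2.0], [0.0, 0.0], [3.0, 3.0])  # -> [2.0, 1.0]
```

In practice the opposition points double the candidate pool at initialization, and the better half is kept as the starting population.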


    A survey of target tracking algorithms based on Siamese network
    MA Yu-min, QIAN Yu-rong, ZHOU Wei-hang, GONG Wei-jun, Palladium Turson
    Computer Engineering & Science    2023, 45 (09): 1578-1592.  
    Abstract views: 356 | PDF (3012KB) downloads: 452
    A Siamese network is a coupled framework built from two or more artificial neural networks; it turns the regression problem into a similarity-matching problem and has attracted much attention from researchers in computer vision. With the rapid development of deep learning theory, target tracking technology has been widely used in daily life. Siamese network-based target tracking algorithms have gradually replaced traditional ones thanks to their superior accuracy and real-time performance, becoming the mainstream approach to target tracking. Firstly, the challenges faced by target tracking tasks and the traditional methods are introduced. Then, the basic structure and development of the Siamese network are presented, and the design principles of Siamese network-based tracking algorithms in recent years are summarized. In addition, the performance of these algorithms is compared on multiple mainstream target tracking datasets. Finally, the open problems and prospects of Siamese network-based target tracking are discussed.

    Review on security issues of blockchains
    SHEN Chuan-nian
    Computer Engineering & Science    2024, 46 (01): 46-62.  
    Abstract views: 294 | PDF (959KB) downloads: 449
    Blockchain, with its disruptive innovative technology, is continuously changing the operational rules and application scenarios of various industries such as digital finance, digital government, Internet of Things, and intelligent manufacturing. It is an indispensable key technology for building a new trust and value system in the future society. However, due to the defects of its own technology and the complexity and diversity of application scenarios, the security issues of blockchain are becoming increasingly serious. Security has become a major bottleneck restricting the future development of blockchain, and the road to blockchain regulation is arduous. This paper introduces the background knowledge, basic concepts, and architecture of blockchain. Starting from the architecture of blockchain, it analyzes the security issues and prevention strategies of blockchain from seven aspects: data layer, network layer, consensus layer, incentive layer, contract layer, application layer, and cross-chain. Based on this, it discusses the safety supervision of blockchain from the current situation and difficulties of policy supervision, the establishment of technical supervision standards, innovative methods, and development trends.

    A vehicle object detection algorithm in UAV video stream based on improved Deformable DETR
    JIANG Zhi-peng, WANG Zi-quan, ZHANG Yong-sheng, YU Ying, CHENG Bin-bin, ZHAO Long-hai, ZHANG Meng-wei
    Computer Engineering & Science    2024, 46 (01): 91-101.  
    Abstract views: 278 | PDF (1626KB) downloads: 444
    Aiming at the problems of numerous small targets in UAV video stream detection, insufficient contextual semantic information due to low image transmission quality, slow feature-fusion inference in traditional algorithms, and poor training caused by unbalanced category samples in the dataset, this paper proposes a vehicle object detection algorithm for UAV video streams based on improved Deformable DETR. In terms of model structure, the method designs a cross-scale feature fusion module to enlarge the receptive field and improve small-object detection, and applies a squeeze-excitation module to the object_query to raise the response of key objects and reduce missed or false detections of important objects. In terms of data processing, online hard example mining is used to mitigate the uneven distribution of class samples in the dataset. The experimental results show that, compared with the baseline, the improved algorithm raises average detection accuracy by 1.5% and small-target detection accuracy by 1.2% without degrading detection speed.
    Convolutional neural network inference and training vectorization method for multicore vector accelerators
    CHEN Jie, LI Cheng, LIU Zhong
    Computer Engineering & Science    2024, 46 (04): 580-589.  
    Abstract views: 143 | PDF (982KB) downloads: 435
    With the widespread application of deep learning, represented by convolutional neural networks (CNNs), the computational requirements of neural network models have increased rapidly, driving the development of deep learning accelerators. The research focus has shifted to how to accelerate and optimize the performance of neural network models based on the architectural characteristics of accelerators. For the VGG network model inference and training algorithms on the independently designed multi-core vector accelerator FT-M7004, vectorized mapping methods for core operators such as convolution, pooling, and fully connected layers are proposed. Optimization strategies, including SIMD vectorization, DMA double-buffered transfer, and weight sharing, are employed to fully exploit the architectural advantages of the vector accelerator, achieving high computational efficiency. Experimental results indicate that on the FT-M7004 platform, the average computational efficiency for convolution layer inference and training is 86.62% and 69.63%, respectively; for fully connected layer inference and training, the average computational efficiency reaches 93.17% and 81.98%, respectively. The inference computational efficiency of the VGG network model on FT-M7004 exceeds that on the GPU platform by over 20%.
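A common way to vectorize convolution for matrix-multiply-style hardware, im2col followed by a GEMM, can be illustrated in miniature. This is the generic technique, not the FT-M7004-specific mapping; a single-channel, stride-1 case is shown.

```python
# im2col: unfold every kxk input patch into a row, so convolution becomes
# one matrix multiply against the flattened kernel, a shape SIMD/vector
# units handle efficiently.
def im2col(img, k):
    h, w = len(img), len(img[0])
    cols = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            cols.append([img[i + di][j + dj] for di in range(k) for dj in range(k)])
    return cols  # (out_h * out_w) rows of k*k values

def conv2d_gemm(img, kernel):
    k = len(kernel)
    flat_k = [kernel[i][j] for i in range(k) for j in range(k)]
    # Each output pixel is a dot product of one unfolded patch with the kernel.
    return [sum(a * b for a, b in zip(col, flat_k)) for col in im2col(img, k)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]          # sums top-left and bottom-right of each 2x2 patch
out = conv2d_gemm(img, kernel)   # -> [6, 8, 12, 14]
```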

    A survey of error correction codes in holographic storage
    YU Qin, WU Fei, ZHANG Meng, XIE Chang-sheng
    Computer Engineering & Science    2024, 46 (04): 571-579.  
    Abstract views: 169 | PDF (1981KB) downloads: 434
    In the era of big data, the demand for high-density, large-capacity storage technology grows by the day. Unlike traditional storage technologies that record data bit by bit, holographic storage reads and writes two-dimensional data pages as its unit, adopting a three-dimensional volumetric storage mode. With advantages such as high storage density, fast data transfer rate, energy efficiency, safety, and ultra-long-term preservation, holographic storage is expected to become a strong competitor for mass cold-data storage. This paper focuses on phase-modulated collinear holographic storage and analyzes the current research status of error correction codes for holographic storage. A detailed introduction is given to a reference-beam-assisted low-density parity-check (LDPC) code scheme.
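The defining property of any LDPC code, in holographic storage or elsewhere, is that a valid codeword has zero syndrome under a sparse parity-check matrix H. A tiny illustrative H over GF(2) is shown below; it is not a real holographic-storage code, where H would be large and very sparse.

```python
# Toy parity-check matrix: each row is one parity constraint over a
# 6-bit word; a received word c is valid iff H @ c = 0 (mod 2).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(H, c):
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def is_codeword(H, c):
    return all(s == 0 for s in syndrome(H, c))

good = [1, 1, 0, 0, 1, 1]   # satisfies all three parity checks
bad  = [1, 1, 0, 0, 1, 0]   # one bit flipped -> nonzero syndrome
```

A decoder uses the nonzero syndrome pattern to locate and correct the flipped bits; iterative belief propagation does this on the sparse check graph.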

    Hardware design and FPGA implementation of a variable pipeline stage SM4 encryption and decryption algorithm
    ZHU Qi-jin, CHEN Xiao-wen, LU Jian-zhuang
    Computer Engineering & Science    2024, 46 (04): 606-614.  
    Abstract views: 160 | PDF (1475KB) downloads: 433
    As the first commercial cryptographic algorithm released in China, the SM4 algorithm is widely used in encrypted data storage, encrypted communication, and other fields thanks to its simple, easily implemented structure, fast encryption and decryption, and high security. Taking the hardware design and FPGA implementation of a variable-pipeline-stage SM4 encryption and decryption algorithm as its topic, this study focuses on the performance differences between designs with different numbers of pipeline stages. An SM4 encryption and decryption circuit with a controllable number of pipeline stages is designed and encapsulated as an IP core with AXI and APB interfaces. Based on XILINX ZYNQ devices, a small SoC is constructed on the XILINX ZYNQ-7020 development board, and the designed SM4 IP core is mounted on the AXI bus to simulate real-world scenarios and conduct performance tests. Functional correctness is verified by comparing software encryption and decryption results with simulation data, and testing designs of different pipeline depths identifies the most suitable number of stages.


    Research progress on information extraction methods of Chinese electronic medical records
    JI Xu-rui, WEI De-jian, ZHANG Jun-zhong, ZHANG Shuai, CAO Hui
    Computer Engineering & Science    2024, 46 (02): 325-337.  
    Abstract views: 262 | PDF (887KB) downloads: 433
    The large amount of medical information carried in electronic medical records (EMRs) can help doctors better understand patients' conditions and assist clinical diagnosis. As the two core tasks of Chinese EMR information extraction, named entity recognition and entity relation extraction have become the main research directions; their goal is to identify the medical entities in EMR text and extract the medical relations between them. This paper systematically reviews the research status of Chinese EMRs, points out the important role of named entity recognition and relation extraction in Chinese EMR information extraction, introduces the latest named entity recognition and relation extraction algorithms for this task, and analyzes the advantages and disadvantages of the models at each stage. In addition, the current problems of Chinese EMR research are discussed and future research trends are anticipated.


    A texture image classification method based on adaptive texture feature fusion
    LV Fu, HAN Xiao-tian, FENG Yong-an, XIANG Liang
    Computer Engineering & Science    2024, 46 (03): 488-498.  
    Abstract views: 150 | PDF (1155KB) downloads: 423
    Existing deep learning-based image classification methods generally lack specificity to texture features and have low classification accuracy, making them difficult to apply to both simple and complex textures. A deep learning model based on adaptive texture feature fusion is proposed, which can make classification decisions based on the texture features that differ between classes. Firstly, a texture feature image is constructed according to the largest inter-class differences in texture features. Secondly, an improved bilinear model is trained in parallel on the original image and the distinctive texture feature image to obtain dual-channel features. Finally, an adaptive classification module is constructed based on decision fusion: channel weights are extracted by average-pooling the concatenated feature maps of the original image and the texture map, and the classification vectors of the two parallel neural networks are fused according to these weights to obtain the optimal classification result. The classification performance was evaluated on four common texture datasets, KTH-TIPS, KTH-TIPS-2b, UIUC, and DTD, achieving accuracies of 99.98%, 99.95%, 99.99%, and 67.09%, respectively, indicating that the proposed method has generally efficient recognition performance.

    A survey of pedestrian trajectory prediction based on graph neural network
    CAO Jian, CHEN Yi-mei, LI Hai-sheng, CAI Qiang
    Computer Engineering & Science    2023, 45 (06): 1040-1053.  
    Abstract views: 276 | PDF (883KB) downloads: 417
    With the rapid development of computer vision and autonomous driving technology, the ability to sense, understand, and predict human behavior is becoming more and more important. The popularity of various sensors has generated a large amount of position data on moving objects in society, and predicting pedestrians' movement trajectories from these data has great value in social prediction and other fields. To gain insight into developments in this area, a literature review of graph neural network-based pedestrian trajectory prediction methods is conducted. The graph neural network algorithms for pedestrian trajectory prediction are compared, analyzed, and summarized from multiple perspectives, and the research and development of different algorithms in this field are discussed. Comparisons and analyses are carried out on the current public datasets, an overview of the corresponding performance indicators is provided, and performance comparison results of different algorithms are given. Finally, the remaining research problems are raised and possible future research directions are anticipated.
