• A journal of the China Computer Federation (CCF)
  • China Science and Technology Core Journal
  • Chinese Core Journal

Most Downloaded Articles


    In last 3 years
    A survey of Chinese text classification based on deep learning
    GAO Shan, LI Shi-jie, CAI Zhi-ping
    Computer Engineering & Science    2024, 46 (04): 684-692.  
    Abstract views: 431 | PDF (1058 KB) downloads: 769
    In the era of big data, with the continuing spread of social media, text data of all kinds are growing rapidly both online and in daily life, so analyzing and managing text data with text classification technology is of great significance. Text classification, a basic research area of natural language processing in artificial intelligence, assigns texts to categories according to their content under given criteria. Its application scenarios are very broad, including sentiment analysis, topic classification, and relation classification. Deep learning is a data-based representation learning approach in machine learning and shows good classification performance on text data. Chinese and English text differ in form, sound, and image. Focusing on what is unique to Chinese text classification, this paper analyzes and expounds the deep learning methods used for Chinese text classification, and finally sorts out the datasets commonly used for the task.
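    As a minimal illustration of the kind of deep learning classifier surveyed above (a sketch of our own, not a model from the paper; the toy corpus, labels, and hyperparameters are hypothetical), a character-level embedding plus mean-pooling classifier in PyTorch might look like this:

```python
# Minimal character-level Chinese text classifier: embedding -> mean pooling -> linear.
# Illustrative sketch only; vocabulary, labels, and hyperparameters are made up.
import torch
import torch.nn as nn

texts = ["这部电影很好看", "服务太差了"]          # toy corpus
labels = torch.tensor([1, 0])                      # 1 = positive, 0 = negative
vocab = {ch: i + 1 for i, ch in enumerate(sorted(set("".join(texts))))}  # 0 = padding

def encode(text, max_len=10):
    ids = [vocab.get(ch, 0) for ch in text][:max_len]
    return ids + [0] * (max_len - len(ids))

x = torch.tensor([encode(t) for t in texts])

class TextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, ids):
        emb = self.embed(ids)        # (batch, seq, dim)
        pooled = emb.mean(dim=1)     # average over characters
        return self.fc(pooled)

model = TextClassifier(len(vocab) + 1)
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()                      # one toy training step
print(loss.item())
```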

    Survey on fuzzy testing technologies
    NIU Sheng-jie, LI Peng, ZHANG Yu-jie,
    Computer Engineering & Science    2022, 44 (12): 2173-2186.  
    Abstract views: 609 | PDF (884 KB) downloads: 735
    As software system security receives more and more attention, fuzz testing (fuzzing), a security testing technique for detecting security vulnerabilities, has become increasingly widely used and increasingly important thanks to its high degree of automation and low false alarm rate. After continuous improvement in recent years, fuzzing has produced many achievements in both technical development and application innovation. Firstly, we briefly explain the related concepts and basic theory of fuzzing, summarize its applications in various fields, and analyze the corresponding fuzzing solutions according to the vulnerability-mining needs of each field. Then, we focus on the important advances in fuzzing in recent years, including improvements and innovations in testing tools, frameworks, systems, and methods, and we analyze and summarize the innovative methods and theories adopted by each of these developments as well as the advantages and disadvantages of each tool and system. Finally, from the perspectives of protocol reverse engineering, cloud platform construction, integration of emerging technologies, research on fuzzing countermeasure techniques, and fuzzing tool integration, we point out directions for further research on fuzzing.
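    To make the basic workflow concrete, here is a minimal mutation-based fuzzing loop in Python (an illustrative sketch, not a tool from the survey; the toy target function, seed, and crash condition are invented):

```python
# Minimal mutation-based fuzzing loop: mutate a seed, run the target, keep crashing inputs.
import random

def target(data: bytes) -> None:
    """Toy parser standing in for the system under test."""
    if b"\x00" in data:                         # toy bug: a NUL byte crashes the parser
        raise ValueError("unexpected NUL byte")

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    buf = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)        # random byte replacement
    return bytes(buf)

seed = b"benign-input-12345"
crashes = []
for _ in range(10_000):
    case = mutate(seed)
    try:
        target(case)
    except Exception as exc:                    # any uncaught exception counts as a "crash"
        crashes.append((case, repr(exc)))

print(f"{len(crashes)} crashing inputs found")
```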

    A survey of precipitation nowcasting based on deep learning
    MA Zhi-feng, ZHANG Hao, LIU Jie
    Computer Engineering & Science    2023, 45 (10): 1731-1753.  
    Abstract views: 736 | PDF (1495 KB) downloads: 733
    Precipitation nowcasting refers to high-resolution prediction of precipitation in the short term, which is an important but difficult task. In the context of deep learning, it is viewed as a radar-echo-map-based spatiotemporal sequence prediction problem. Precipitation prediction is a complex self-supervised task. Since the motion always changes significantly in both the spatial and temporal dimensions, it is difficult for ordinary models to cope with the complex nonlinear spatiotemporal transformations, resulting in blurred predictions. Therefore, how to further improve prediction performance and reduce this blurring is a key focus of research in the field. Currently, research on precipitation nowcasting is still at an early stage, and the existing work lacks systematic classification and discussion, so a comprehensive investigation of the field is necessary. This paper comprehensively summarizes and analyzes the relevant knowledge of precipitation nowcasting from different dimensions and gives future research directions. Specifically: (1) the significance of precipitation nowcasting and the advantages and disadvantages of traditional forecasting models are clarified; (2) the mathematical definition of the nowcasting problem is given; (3) common predictive models are comprehensively summarized and analyzed; (4) several open-source radar datasets from different countries and regions are introduced, with download links; (5) the metrics used to assess prediction quality are briefly introduced; (6) the loss functions used in different models are discussed; (7) future research directions of precipitation nowcasting are pointed out.
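    For reference, one common formalization of nowcasting as spatiotemporal sequence prediction (a standard formulation in this literature; the survey's own notation may differ): given the last $J$ observed radar echo maps, predict the most likely next $K$ maps,

    $$\hat{\mathcal{X}}_{t+1},\ldots,\hat{\mathcal{X}}_{t+K} \;=\; \operatorname*{arg\,max}_{\mathcal{X}_{t+1},\ldots,\mathcal{X}_{t+K}} \; p\!\left(\mathcal{X}_{t+1},\ldots,\mathcal{X}_{t+K} \,\middle|\, \mathcal{X}_{t-J+1},\ldots,\mathcal{X}_{t}\right),$$

    where each $\mathcal{X}_i$ is an observed frame over an $M \times N$ spatial grid.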

    Research progress in resistive switching mechanism and materials of memristor
    DENG Ya-feng, WEI Zi-jian, WANG Dong
    Computer Engineering & Science    2022, 44 (01): 36-47.  
    Abstract views: 619 | PDF (1245 KB) downloads: 727
    The memristor can integrate information storage and logic operation into one electronic device, which would break the traditional von Neumann computer architecture, and its application prospects are immense. Firstly, the development history and basic concept of the memristor are introduced. Secondly, the resistive switching mechanisms of the memristor and the choice of its materials are summarized. The known resistive switching mechanisms can be divided into three main categories: anion-dominated, cation-dominated, and purely electron-dominated resistive switching. At the same time, the characteristics of different types of materials in memristor applications are described in detail. Then the application of the memristor in Boolean computation and neuromorphic systems is discussed. Finally, the future development directions of the memristor and the problems that may arise in its practical application are considered.
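    As background for the basic concept mentioned above (the textbook definition due to Chua, quoted here for context rather than taken from the paper), the memristor is the two-terminal element relating charge $q$ and flux $\varphi$, giving a state-dependent Ohm's law:

    $$M(q) = \frac{d\varphi}{dq}, \qquad v(t) = M\!\big(q(t)\big)\, i(t), \qquad q(t) = \int_{-\infty}^{t} i(\tau)\, d\tau.$$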

    State of the art analysis of China HPC 2023
    ZHANG Yun-quan, DENG Li, YUAN Liang, YUAN Guo-xing
    Computer Engineering & Science    2023, 45 (12): 2091-2098.  
    Abstract views: 512 | PDF (979 KB) downloads: 682
    Based on the latest China HPC TOP100 list released by CCF TCHPC in late November, this paper presents the total performance trends of the China HPC TOP100 and TOP10 of 2023. Following this, the characteristics of performance, manufacturers, and application areas are analyzed separately in detail.

    A survey on deep learning based video anomaly detection
    HE Ping, LI Gang, LI Hui-bin,
    Computer Engineering & Science    2022, 44 (09): 1620-1629.  
    Abstract views: 500 | PDF (612 KB) downloads: 587
    In recent years, with the widespread use of video surveillance technology, video anomaly detection, which intelligently analyzes massive amounts of video and quickly discovers abnormal events, has received wide attention. This paper gives a comprehensive survey of deep learning based video anomaly detection methods. Firstly, a brief introduction to video anomaly detection is given, covering its basic concepts, basic tasks, modeling process, learning paradigms, and evaluation perspectives. Secondly, existing methods are classified into four categories: reconstruction-based, prediction-based, classification-based, and regression-based, and their basic modeling ideas, typical algorithms, advantages, and disadvantages are discussed in detail. On this basis, the commonly used single-scene public datasets and evaluation metrics are introduced, and the performance of representative anomaly detection algorithms is compared and analyzed. Finally, a summary is given, and future development directions concerning the datasets, algorithms, and evaluation criteria of video anomaly detection are proposed.
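    To illustrate the reconstruction-based category mentioned above (a generic sketch, not an algorithm from the survey; the frames here are random tensors standing in for real video), a small convolutional autoencoder is trained on "normal" frames and new frames are scored by reconstruction error:

```python
# Reconstruction-based anomaly scoring sketch: high reconstruction error => likely anomaly.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = FrameAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_frames = torch.rand(64, 1, 64, 64)            # stand-in for normal training frames

for _ in range(5):                                    # a few toy training epochs
    recon = model(normal_frames)
    loss = nn.functional.mse_loss(recon, normal_frames)
    opt.zero_grad(); loss.backward(); opt.step()

test_frames = torch.rand(8, 1, 64, 64)
with torch.no_grad():
    errors = ((model(test_frames) - test_frames) ** 2).mean(dim=(1, 2, 3))
print(errors)                                         # per-frame anomaly scores
```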

    Survey on graph convolutional neural network
    LIU Jun-qi, TU Wen-xuan, ZHU En
    Computer Engineering & Science    2023, 45 (08): 1472-1481.  
    Abstract views: 334 | PDF (787 KB) downloads: 585
    With the widespread presence of graph data, graph convolutional neural networks (GCNNs) are developing faster and faster. According to how the convolution operator is defined, GCNNs can be roughly divided into two categories: those based on spectral methods and those based on spatial methods. Firstly, representative models of the two categories and the connections between them are discussed in detail, and graph pooling operations are comprehensively summarized. Furthermore, the extensive applications of GCNNs in various fields are introduced, and several possible development directions of GCNNs are proposed. Finally, conclusions are drawn.
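    For concreteness, the layer-wise propagation rule of the widely used spectral-based GCN of Kipf and Welling, a typical instance of the spectral category discussed above (given as background, not as notation taken from this survey):

    $$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\,\tilde{A}\,\tilde{D}^{-\frac{1}{2}}\,H^{(l)}\,W^{(l)}\right), \qquad \tilde{A} = A + I_N,\quad \tilde{D}_{ii} = \sum_j \tilde{A}_{ij},$$

    where $A$ is the adjacency matrix, $H^{(l)}$ the node features at layer $l$, $W^{(l)}$ a trainable weight matrix, and $\sigma$ a nonlinearity.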

    Research and application of whale optimization algorithm
    WANG Ying-chao
    Computer Engineering & Science    2024, 46 (05): 881-896.  
    Abstract views: 195 | PDF (901 KB) downloads: 570
    The Whale Optimization Algorithm (WOA) is a novel swarm intelligence optimization algorithm that converges in probability. It features simple, easily implemented principles, a small number of easily tuned parameters, and a controllable balance between global and local search. This paper systematically analyzes the basic principles of WOA and the factors influencing its performance, focusing on the advantages and limitations of existing improvement strategies and hybrid strategies. Additionally, it elaborates on the applications and development of WOA in support vector machines, artificial neural networks, combinatorial optimization, complex function optimization, and other areas. Finally, considering the characteristics of WOA and its research achievements in applications, the paper offers an outlook on the research and development directions of WOA.
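    As a reminder of the mechanics that the surveyed improvements build on (the canonical WOA update rules, sketched from the original algorithm rather than from this paper; the test function and parameters are illustrative), the loop below minimizes a toy sphere function with the encircling, spiral, and random-search updates:

```python
# Canonical whale optimization algorithm (WOA) sketch on a toy sphere function.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop, iters = 10, 30, 200
lb, ub = -10.0, 10.0
X = rng.uniform(lb, ub, (pop, dim))                 # whale positions
best = min(X, key=sphere).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                           # linearly decreases from 2 to 0
    for i in range(pop):
        r1, r2 = rng.random(), rng.random()
        A, C = 2 * a * r1 - a, 2 * r2
        if rng.random() < 0.5:
            if abs(A) < 1:                          # exploit: encircle the current best
                X[i] = best - A * np.abs(C * best - X[i])
            else:                                   # explore: move relative to a random whale
                rand = X[rng.integers(pop)]
                X[i] = rand - A * np.abs(C * rand - X[i])
        else:                                       # bubble-net spiral update
            l = rng.uniform(-1, 1)
            X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        X[i] = np.clip(X[i], lb, ub)
        if sphere(X[i]) < sphere(best):
            best = X[i].copy()

print("best fitness:", sphere(best))
```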


    A survey of fusion methods for multi-source image
    ZHANG Li-xia, ZENG Guang-ping, XUAN Zhao-cheng
    Computer Engineering & Science    2022, 44 (02): 321-334.  
    Abstract views: 542 | PDF (1038 KB) downloads: 554
    Because their imaging mechanisms differ, multi-source images are essentially different, which makes their fusion processes differ as well. Based on a large body of Chinese and foreign literature, this paper classifies the fusion methods, focuses on the fusion processes and typical algorithms of each class of methods, and elaborates their key techniques in detail. At the same time, an in-depth review of current evaluation indicators and their classification is carried out. Finally, combining the influencing factors of the key techniques with the state of technology development, the future development of the image fusion field is discussed from five aspects: data characteristics, time efficiency, information extraction, evaluation angles, and the universal applicability of methods.
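    As an example of the kind of objective evaluation indicator reviewed above (information entropy of the fused image is one commonly cited no-reference metric; the choice is illustrative and not drawn from the paper's own list):

```python
# Information entropy of a fused image: higher entropy suggests more retained information.
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 256) -> float:
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

fused = np.random.randint(0, 256, (256, 256))     # stand-in for a fused grayscale image
print(f"entropy = {image_entropy(fused):.3f} bits")
```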

    Review of personalized recommendation research based on meta-learning
    WU Guo-dong, LIU Xu-xu, BI Hai-jiao, FAN Wei-cheng, TU Li-jing
    Computer Engineering & Science    2024, 46 (02): 338-352.  
    Abstract views: 231 | PDF (1157 KB) downloads: 529
    As a tool for alleviating “information overload”, recommendation systems filter redundant information and provide personalized recommendation services for users, and they have been widely used in recent years. However, actual recommendation scenarios often suffer from issues such as cold start and the difficulty of adaptively selecting different recommendation algorithms according to the actual environment. Meta-learning, which has the advantage of quickly learning new knowledge and skills from a small number of training samples, is increasingly being applied in research related to recommendation systems. This paper discusses the main research on using meta-learning techniques to alleviate the cold start and adaptive recommendation problems of recommendation systems. Firstly, it analyzes the research progress of meta-learning-based recommendation in these two areas. Then, it points out the challenges faced by existing meta-learning recommendation research, such as difficulty in adapting to complex task distributions, high computational cost, and a tendency to fall into local optima. Finally, it provides an outlook on some of the latest research directions of meta-learning for recommendation systems.
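    As background on the "learn quickly from few samples" idea (a generic MAML-style bi-level update often used in meta-learning recommenders; it is not claimed to be the formulation of any specific work surveyed here): for each task $\mathcal{T}_i$, e.g. a cold-start user, the parameters $\theta$ are adapted with a few gradient steps on its support set and then meta-updated across tasks on the query sets,

    $$\theta_i' = \theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}^{\text{support}}(\theta), \qquad \theta \leftarrow \theta - \beta \nabla_\theta \sum_i \mathcal{L}_{\mathcal{T}_i}^{\text{query}}(\theta_i').$$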

    GNNSched: A GNN inference task scheduling framework on GPU
    SUN Qing-xiao, LIU Yi, YANG Hai-long, WANG Yi-qing, JIA Jie, LUAN Zhong-zhi, QIAN De-pei
    Computer Engineering & Science    2024, 46 (01): 1-11.  
    Abstract views: 311 | PDF (1464 KB) downloads: 494
    Due to frequent memory access, graph neural network (GNN) often has low resource utilization when running on GPU. Existing inference frameworks, which do not consider the irregularity of GNN input, may exceed GPU memory capacity when directly applied to GNN inference tasks. For GNN inference tasks, it is necessary to pre-analyze the memory occupation of concurrent tasks based on their input characteristics to ensure successful co-location of concurrent tasks on GPU. In addition, inference tasks submitted in multi-tenant scenarios urgently need flexible scheduling strategies to meet the quality of service requirements for concurrent inference tasks. To solve these problems, this paper proposes GNNSched, which efficiently manages the co-location of GNN inference tasks on GPU. Specifically, GNNSched organizes concurrent inference tasks into a queue and estimates the memory occupation of each task based on a cost function at the operator level. GNNSched implements multiple scheduling strategies to generate task groups, which are iteratively submitted to GPU for concurrent execution. Experimental results show that GNNSched can meet the quality of service requirements for concurrent GNN inference tasks and reduce the response time of inference tasks.
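    The grouping idea can be sketched as follows (a hypothetical illustration of memory-aware batching of queued inference tasks under a GPU memory cap; the cost model, task fields, and budget are invented and are not GNNSched's actual implementation):

```python
# Hypothetical sketch: greedily group queued GNN inference tasks so each group fits in GPU memory.
from dataclasses import dataclass

@dataclass
class InferenceTask:
    name: str
    num_nodes: int
    num_edges: int
    feat_dim: int

def estimate_memory_mb(t: InferenceTask) -> float:
    """Toy operator-level cost model: node features + edge messages (illustrative only)."""
    feats = t.num_nodes * t.feat_dim * 4          # float32 node feature tensor, in bytes
    msgs = t.num_edges * t.feat_dim * 4           # edge messages during aggregation
    return (2 * feats + msgs) / 2**20             # rough peak estimate in MB

def group_tasks(queue, gpu_budget_mb=4000.0):
    """Pack tasks, in queue order, into groups whose estimated total memory fits the budget."""
    groups, current, used = [], [], 0.0
    for task in queue:
        need = estimate_memory_mb(task)
        if current and used + need > gpu_budget_mb:
            groups.append(current)
            current, used = [], 0.0
        current.append(task)
        used += need
    if current:
        groups.append(current)
    return groups                                  # each group is launched on the GPU together

queue = [InferenceTask(f"req{i}", 100_000 * (i + 1), 500_000 * (i + 1), 128) for i in range(6)]
for group in group_tasks(queue):
    print([t.name for t in group])
```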

    An improved dense pedestrian detection algorithm based on YOLOv8: MER-YOLO
    WANG Ze-yu, XU Hui-ying, ZHU Xin-zhong, LI Chen, LIU Zi-yang, WANG Zi-yi
    Computer Engineering & Science    2024, 46 (06): 1050-1062.  
    Abstract views: 380 | PDF (3288 KB) downloads: 490
    In large-scale crowded places, abnormal crowd gathering occurs from time to time, which poses challenges to the dense pedestrian detection used in application scenarios such as autonomous driving and crowd monitoring systems for large public places. The new generation of dense pedestrian detection technology requires higher accuracy, smaller computing overhead, faster detection speed, and easier deployment. In view of these requirements, a lightweight dense pedestrian detection algorithm based on YOLOv8, MER-YOLO, is proposed. It first uses MobileViT as the backbone network to improve the overall feature extraction ability of the model in pedestrian gathering areas. The EMA attention mechanism module is introduced to encode global information and further aggregate pixel-level features through dimensional interaction, and small-target detection is strengthened by adding a detection head at the 160×160 scale. Using Repulsion Loss as the bounding box loss function reduces missed and false detections of small-target pedestrians in dense crowds. The experimental results show that, compared with YOLOv8n, the mAP@0.5 of the MER-YOLO pedestrian detection algorithm is improved by 4.5% on the CrowdHuman dataset and 2.1% on the WiderPerson dataset, with only 3.1×10^6 parameters and 9.8 GFLOPs, meeting deployment requirements of low computing power and high precision.
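    For reference, the overall form of Repulsion Loss as defined in its original paper (quoted as background; the weights used in MER-YOLO may differ): an attraction term pulls each prediction toward its assigned ground-truth box, while two repulsion terms push it away from surrounding ground-truth boxes and from other predicted boxes,

    $$L = L_{\text{Attr}} + \alpha\, L_{\text{RepGT}} + \beta\, L_{\text{RepBox}}.$$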

    Webshell detection based on deep learning
    CHE Sheng-bing, ZHANG Guang-lin
    Computer Engineering & Science    2022, 44 (06): 994-1002.  
    Abstract views: 428 | PDF (1454 KB) downloads: 487
    Starting from Webshell detection in AWD attack-defense competitions, fuzzy C-means clustering is used to analyze Webshells in a high-dimensional space, and the attack vectors are found to be globally sparse but locally closely correlated. Two deep learning models are proposed for Webshell detection. Since most of the Webshells collected from GitHub are gathered randomly and are not well targeted, the length of the training data is limited and only a limited number of relevant samples are retained. Because one attack is closely related to the adjacent two to four operations, the attack vector has obvious correlation in the vertical direction and is relatively stable in the horizontal direction; considering that the scale of the feature vector shrinks as it is passed through the network, the zero padding of the convolutional layers is increased. To address the sawtooth oscillation of the training curve, a fast calculation formula for the Adam optimization algorithm is proved and the learning parameters are corrected, which eliminates the sawtooth in the training loss curve, makes the curve descend smoothly and exponentially, and yields trained models quickly. Experiments compare the two deep learning models with existing detection models of the same kind, and the results show that the proposed models detect Webshell attacks in AWD better.
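    For context, the standard Adam update that the paper's corrected fast-calculation formula builds on (only the standard form is shown here; the paper's modified version is not reproduced):

    $$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2, \quad \hat{m}_t = \frac{m_t}{1-\beta_1^t}, \quad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \quad \theta_t = \theta_{t-1} - \frac{\eta\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.$$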

    A survey of fast convolution algorithms
    LI Chuang, LIU Zong-lin, LIU Sheng, LI Yong, XU Xue-gang, XIA Yi-min
    Computer Engineering & Science    2021, 43 (10): 1711-1719.  
    Abstract views: 411 | PDF (605 KB) downloads: 479
    Convolutional neural networks are among the most widely applied deep learning models. They are used not only in science and technology but also in medicine, the military, and other fields, where they have played a huge role. Convolution is the core of a convolutional neural network, and its computation accounts for more than 70% of the runtime of the whole network, so studying how to accelerate convolution operations is very important. Firstly, the convolution algorithms of recent years are introduced and their complexity is analyzed, and their advantages and disadvantages are summarized. Finally, possible breakthroughs in theoretical research and application are discussed and prospected.
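    As a quick quantitative anchor for the complexity comparisons such a survey makes (standard results, not figures from the paper): a direct convolution producing a $C_{out}\times H_{out}\times W_{out}$ output from $C_{in}$ input channels with a $k\times k$ kernel costs

    $$\#\text{multiplications} = C_{out}\, H_{out}\, W_{out}\, C_{in}\, k^{2},$$

    while, for example, the Winograd transform $F(2\times 2,\,3\times 3)$ computes each $2\times 2$ output tile of a $3\times 3$ convolution with 16 multiplications instead of 36, a $2.25\times$ reduction in multiplications at the cost of extra additions and transforms.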

    A hybrid multi-strategy improved sparrow search algorithm
    LI Jiang-hua, WANG Peng-hui, LI Wei
    Computer Engineering & Science    2024, 46 (02): 303-315.  
    Abstract views: 183 | PDF (1768 KB) downloads: 475
    Aiming at the problems that the Sparrow Search Algorithm (SSA) still converges prematurely when seeking the optimum of an objective function, easily falls into local optima on multimodal problems, and lacks solution accuracy in high dimensions, a hybrid multi-strategy improved Sparrow Search Algorithm (MISSA) is proposed. Since the quality of the initial solutions greatly affects the convergence speed and accuracy of the whole algorithm, an elite opposition-based (reverse) learning strategy is introduced to expand the search area and improve the quality and diversity of the initial population, and the step size is controlled in stages to improve solution accuracy. Adding a Circle-map parameter and a cosine factor to the follower position update improves the ergodicity and search ability of the algorithm. An adaptive selection mechanism is used to update the individual positions of the sparrows, with Lévy flight added to strengthen optimization and the ability to jump out of local optima. The improved algorithm is compared with the Sparrow Search Algorithm and other algorithms on 13 test functions, and the Friedman test is carried out. The comparison results show that the improved sparrow search algorithm effectively improves optimization accuracy and convergence speed, can be applied to high-dimensional problems, and has high stability.
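    The elite opposition-based (reverse) learning idea used for initialization can be sketched as follows (a generic form with hypothetical bounds, population size, and test function; the details of MISSA's variant may differ): for each candidate $x$, an opposite point $\tilde{x} = k(lb+ub) - x$ is generated and the better individuals are kept.

```python
# Generic elite opposition-based learning initialization (illustrative, not MISSA's exact variant).
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(1)
pop, dim = 30, 10
lb, ub = -10.0, 10.0

X = rng.uniform(lb, ub, (pop, dim))        # random initial population
k = rng.random((pop, 1))                   # random coefficient in (0, 1)
X_opp = np.clip(k * (lb + ub) - X, lb, ub) # opposite population

merged = np.vstack([X, X_opp])             # evaluate both and keep the best half
order = np.argsort(sphere(merged))
X_init = merged[order[:pop]]
print("best initial fitness:", sphere(X_init[:1])[0])
```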


    A survey of target tracking algorithms based on Siamese network
    MA Yu-min, QIAN Yu-rong, ZHOU Wei-hang, GONG Wei-jun, Palladium Turson
    Computer Engineering & Science    2023, 45 (09): 1578-1592.  
    Abstract views: 356 | PDF (3012 KB) downloads: 453
    A Siamese network is a coupled framework built from two or more artificial neural networks; it turns the regression problem into a similarity matching problem and has attracted much attention from researchers in the computer vision field. With the rapid development of deep learning theory, target tracking technology has been widely used in daily life. Siamese-network-based target tracking algorithms have gradually replaced traditional target tracking algorithms thanks to their relatively superior accuracy and real-time performance, becoming the mainstream algorithms for target tracking. Firstly, the challenges faced by target tracking tasks and the traditional methods are introduced. Then, the basic structure and development of the Siamese network are described, and the design principles of Siamese-network-based target tracking algorithms in recent years are summarized. In addition, the performance of Siamese-network-based target tracking algorithms is compared on multiple mainstream target tracking test datasets. Finally, the open problems and prospects of Siamese-network-based target tracking algorithms are discussed.
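    The similarity matching view can be made concrete with the fully-convolutional Siamese formulation of SiamFC, a representative member of the family surveyed here (given as background, not as this survey's own notation): the template patch $z$ and search region $x$ are embedded by a shared backbone $\varphi$ and compared by cross-correlation,

    $$f(z, x) = \varphi(z) \star \varphi(x) + b\,\mathbb{1},$$

    where $\star$ denotes cross-correlation and the peak of the response map $f$ locates the target.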

    Review on security issues of blockchains
    SHEN Chuan-nian
    Computer Engineering & Science    2024, 46 (01): 46-62.  
    Abstract views: 294 | PDF (959 KB) downloads: 450
    Blockchain, with its disruptive innovative technology, is continuously changing the operational rules and application scenarios of various industries such as digital finance, digital government, Internet of Things, and intelligent manufacturing. It is an indispensable key technology for building a new trust and value system in the future society. However, due to the defects of its own technology and the complexity and diversity of application scenarios, the security issues of blockchain are becoming increasingly serious. Security has become a major bottleneck restricting the future development of blockchain, and the road to blockchain regulation is arduous. This paper introduces the background knowledge, basic concepts, and architecture of blockchain. Starting from the architecture of blockchain, it analyzes the security issues and prevention strategies of blockchain from seven aspects: data layer, network layer, consensus layer, incentive layer, contract layer, application layer, and cross-chain. Based on this, it discusses the safety supervision of blockchain from the current situation and difficulties of policy supervision, the establishment of technical supervision standards, innovative methods, and development trends.

    A vehicle object detection algorithm in UAV video stream based on improved Deformable DETR
    JIANG Zhi-peng, WANG Zi-quan, ZHANG Yong-sheng, YU Ying, CHENG Bin-bin, ZHAO Long-hai, ZHANG Meng-wei
    Computer Engineering & Science    2024, 46 (01): 91-101.  
    Abstract views: 278 | PDF (1626 KB) downloads: 445
    Aiming at the problems in UAV video stream detection of a large number of small targets, insufficient contextual semantic information caused by low image transmission quality, slow feature-fusion inference in traditional algorithms, and poor training results caused by unbalanced category samples in the dataset, this paper proposes a vehicle object detection algorithm for UAV video streams based on an improved Deformable DETR. In terms of model structure, the method designs a cross-scale feature fusion module to enlarge the receptive field and improve small-object detection, and applies a squeeze-excitation module to object_query to raise the response of key objects and reduce missed and false detections of important objects. In terms of data processing, online hard example mining is used to alleviate the uneven distribution of class samples in the dataset. The experimental results show that, compared with the baseline algorithm, the improved algorithm raises the average detection accuracy by 1.5% and the small-target detection accuracy by 1.2% without degrading detection speed.
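    As an illustration of the squeeze-excitation idea applied to query embeddings (a generic SE block adapted to a set of object_query vectors; the paper's exact design is not specified here, so the shapes and reduction ratio are assumptions):

```python
# Generic squeeze-and-excitation over object query embeddings (illustrative sketch only).
import torch
import torch.nn as nn

class QuerySE(nn.Module):
    def __init__(self, embed_dim: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(embed_dim, embed_dim // reduction), nn.ReLU(),
            nn.Linear(embed_dim // reduction, embed_dim), nn.Sigmoid(),
        )

    def forward(self, queries: torch.Tensor) -> torch.Tensor:
        # queries: (batch, num_queries, embed_dim)
        squeezed = queries.mean(dim=1)            # "squeeze": pool over the query set
        weights = self.fc(squeezed).unsqueeze(1)  # "excite": per-channel gating weights
        return queries * weights                  # reweight each embedding channel

queries = torch.randn(2, 300, 256)                # e.g. 300 object queries of dimension 256
print(QuerySE(256)(queries).shape)                # torch.Size([2, 300, 256])
```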
    Convolutional neural network inference and training vectorization method for multicore vector accelerators
    CHEN Jie, LI Cheng, LIU Zhong
    Computer Engineering & Science    2024, 46 (04): 580-589.  
    Abstract views: 143 | PDF (982 KB) downloads: 437
    With the widespread application of deep learning, represented by convolutional neural networks (CNNs), the computational requirements of neural network models have increased rapidly, driving the development of deep learning accelerators, and the research focus has shifted to how to accelerate and optimize neural network models based on the architectural characteristics of the accelerators. For the VGG network inference and training algorithms on the independently designed multicore vector accelerator FT-M7004, vectorized mapping methods for core operators such as convolution, pooling, and fully connected layers are proposed. Optimization strategies, including SIMD vectorization, double-buffered DMA transfers, and weight sharing, are employed to fully exploit the architectural advantages of the vector accelerator and achieve high computational efficiency. Experimental results indicate that, on the FT-M7004 platform, the average computational efficiency of convolution layer inference and training is 86.62% and 69.63%, respectively, and that of fully connected layer inference and training reaches 93.17% and 81.98%, respectively. The inference computational efficiency of the VGG network model on FT-M7004 exceeds that on the GPU platform by over 20%.
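    The double-buffering idea can be illustrated generically as follows (a host-side Python sketch that overlaps "loading" the next tile with computing on the current one via a worker thread; FT-M7004's actual DMA engine and programming interface are not modeled here):

```python
# Generic double-buffered (ping-pong) pipeline: prefetch tile i+1 while computing on tile i.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def load_tile(i):                      # stands in for a DMA transfer from off-chip memory
    return np.full((256, 256), float(i))

def compute(tile):                     # stands in for the vectorized kernel on the accelerator
    return float(tile.sum())

num_tiles, results = 8, []
with ThreadPoolExecutor(max_workers=1) as dma:
    pending = dma.submit(load_tile, 0)               # prime the first buffer
    for i in range(num_tiles):
        tile = pending.result()                      # wait for the current tile
        if i + 1 < num_tiles:
            pending = dma.submit(load_tile, i + 1)   # start the next transfer early
        results.append(compute(tile))                # compute overlaps the next "DMA"
print(results)
```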

    Research progress on information extraction methods of Chinese electronic medical records
    JI Xu-rui, WEI De-jian, ZHANG Jun-zhong, ZHANG Shuai, CAO Hui
    Computer Engineering & Science    2024, 46 (02): 325-337.  
    Abstract views: 262 | PDF (887 KB) downloads: 436
    The large amount of medical information carried in electronic medical records can help doctors better understand patients' conditions and assist clinical diagnosis. As the two core tasks of Chinese electronic medical record (EMR) information extraction, named entity recognition and entity relation extraction have become the main research directions; their goal is to identify the medical entities in EMR text and extract the medical relations between those entities. This paper systematically reviews the research status of Chinese EMRs, points out the important roles of named entity recognition and entity relation extraction in Chinese EMR information extraction, introduces the latest research results on named entity recognition and relation extraction algorithms for this task, and analyzes the advantages and disadvantages of the models at each stage. In addition, the current problems of Chinese EMR information extraction are analyzed, and future research trends are discussed.
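    To make the two core tasks concrete, here is a toy annotation of a made-up Chinese EMR sentence (the sentence, entity types, and relation label are hypothetical and only illustrate what the models are asked to produce):

```python
# Toy illustration of named entity recognition and relation extraction on a made-up EMR sentence.
sentence = "患者因急性阑尾炎行阑尾切除术"

# Named entity recognition: locate medical entities and their types (character-offset spans).
entities = [
    {"text": "急性阑尾炎", "type": "疾病", "span": (3, 8)},    # disease
    {"text": "阑尾切除术", "type": "手术", "span": (9, 14)},   # operation
]
assert all(sentence[s:e] == ent["text"] for ent in entities for s, e in [ent["span"]])

# Entity relation extraction: link entity pairs with a medical relation.
relations = [("急性阑尾炎", "手术治疗", "阑尾切除术")]          # disease --treated-by-surgery--> operation
print(entities, relations)
```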

