• A journal of the China Computer Federation (CCF)
  • China Science and Technology Core Journal
  • Chinese Core Journal

Most Downloaded Articles


    Published in last 1 year
    A survey of Chinese text classification based on deep learning
    GAO Shan, LI Shi-jie, CAI Zhi-ping
    Computer Engineering & Science    2024, 46 (04): 684-692.  
    Abstract views: 399 | PDF (1058KB) downloads: 710
    In the era of big data, with the continuing spread of social media, text data of all kinds are growing both online and in daily life, and analyzing and managing them with text classification technology is of great significance. Text classification is a fundamental research task in natural language processing, a subfield of artificial intelligence: given a set of criteria, it assigns texts to categories according to their content. Its application scenarios are very broad, including sentiment analysis, topic classification, and relation classification. Deep learning, a data-driven representation learning approach within machine learning, shows good classification performance on text data. Chinese and English texts differ in form, pronunciation, and visual structure. Focusing on what is unique to Chinese text classification, this paper analyzes and expounds the deep learning methods used for Chinese text classification, and finally surveys commonly used Chinese text classification datasets.

    Reference | Related Articles | Metrics
    A survey of precipitation nowcasting based on deep learning
    MA Zhi-feng, ZHANG Hao, LIU Jie
    Computer Engineering & Science    2023, 45 (10): 1731-1753.  
    Abstract views: 717 | PDF (1495KB) downloads: 703
    Precipitation nowcasting refers to high-resolution short-term precipitation prediction, an important but difficult task. In the context of deep learning, it is viewed as a radar-echo-map-based spatiotemporal sequence prediction problem. Precipitation prediction is a complex self-supervised task: since the motion always changes significantly in both the spatial and temporal dimensions, ordinary models struggle with the complex nonlinear spatiotemporal transformations involved, resulting in blurred predictions. How to further improve model prediction performance and reduce this ambiguity is therefore a key focus of research in the field. Research on precipitation nowcasting is still at an early stage, and the existing work lacks systematic classification and discussion, so a comprehensive survey of the field is necessary. This paper comprehensively summarizes and analyzes the field of precipitation nowcasting from different dimensions and points out future research directions. Specifically: (1) The significance of precipitation nowcasting and the advantages and disadvantages of traditional forecasting models are clarified. (2) The mathematical definition of the nowcasting problem is given. (3) Common predictive models are comprehensively summarized and analyzed. (4) Several open-source radar datasets from different countries and regions are introduced, with download links. (5) The metrics used to assess prediction quality are briefly introduced. (6) The loss functions used by different models are discussed. (7) Future research directions for precipitation nowcasting are pointed out.
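The mathematical definition mentioned in item (2) is conventionally stated as a spatiotemporal sequence prediction problem: given the J most recent radar echo maps, find the most probable length-K future sequence (a standard formulation in this literature, sketched here under that convention rather than quoted from the survey):

```latex
% Nowcasting as spatiotemporal sequence prediction: given the J most recent
% radar echo maps X_{t-J+1..t}, find the most probable K-step future sequence.
\hat{\mathcal{X}}_{t+1}, \ldots, \hat{\mathcal{X}}_{t+K}
  = \mathop{\arg\max}_{\mathcal{X}_{t+1}, \ldots, \mathcal{X}_{t+K}}
    p\left( \mathcal{X}_{t+1}, \ldots, \mathcal{X}_{t+K}
            \,\middle|\, \mathcal{X}_{t-J+1}, \ldots, \mathcal{X}_{t} \right)
```

Each \(\mathcal{X}_t\) is a radar echo map, i.e. a 2D grid of reflectivity values at time t.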

    State of the art analysis of China HPC 2023
    ZHANG Yun-quan, DENG Li, YUAN Liang, YUAN Guo-xing
    Computer Engineering & Science    2023, 45 (12): 2091-2098.  
    Abstract views: 503 | PDF (979KB) downloads: 642
    Based on the latest China HPC TOP100 list released by CCF TCHPC in late November 2023, this paper presents the total performance trends of the 2023 China HPC TOP100 and TOP10. It then analyzes in detail the characteristics of the list in terms of performance, manufacturers, and application areas.

    Research and application of whale optimization algorithm
    WANG Ying-chao
    Computer Engineering & Science    2024, 46 (05): 881-896.  
    Abstract views: 185 | PDF (901KB) downloads: 504
    The Whale Optimization Algorithm (WOA) is a novel swarm intelligence optimization algorithm that converges based on probability. It features simple and easily implementable algorithm principles, a small number of easily adjustable parameters, and a balance between global and local search control. This paper systematically analyzes the basic principles of WOA and factors influencing algorithm performance. It focuses on discussing the advantages and limitations of existing algorithm improvement strategies and hybrid strategies. Additionally, the paper elaborates on the applications and developments of WOA in support vector machines, artificial neural networks, combinatorial optimization, complex function optimization, and other areas. Finally, considering the characteristics of WOA and its research achievements in applications, the paper provides a prospective outlook on the research and development directions of WOA.
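For context, the core WOA position-update rules referred to above (as given in Mirjalili and Lewis's original formulation, reproduced here for reference rather than taken from this paper) are:

```latex
% Shrinking-encirclement update toward the current best solution X*:
\vec{D} = \left| \vec{C} \cdot \vec{X}^{*}(t) - \vec{X}(t) \right|, \qquad
\vec{X}(t+1) = \vec{X}^{*}(t) - \vec{A} \cdot \vec{D}
% with coefficient vectors (a decreases linearly from 2 to 0; r is random in [0,1]):
\vec{A} = 2\vec{a} \cdot \vec{r} - \vec{a}, \qquad \vec{C} = 2\vec{r}
% Spiral (bubble-net) update, chosen with probability 0.5; l is random in [-1,1]:
\vec{X}(t+1) = \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^{*}(t), \qquad
\vec{D}' = \left| \vec{X}^{*}(t) - \vec{X}(t) \right|
```

When \(|\vec{A}| \ge 1\), \(\vec{X}^{*}\) is replaced by a randomly chosen whale, which is what gives the algorithm its balance between global exploration and local exploitation.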


    Review of personalized recommendation research based on meta-learning
    WU Guo-dong, LIU Xu-xu, BI Hai-jiao, FAN Wei-cheng, TU Li-jing
    Computer Engineering & Science    2024, 46 (02): 338-352.  
    Abstract views: 221 | PDF (1157KB) downloads: 487
    As a tool to alleviate “information overload”, recommendation systems provide users with personalized recommendation services that filter out redundant information, and they have been widely used in recent years. However, in actual recommendation scenarios, issues such as cold start and the difficulty of adaptively selecting different recommendation algorithms for the actual environment often arise. Meta-learning, which has the advantage of quickly learning new knowledge and skills from a small number of training samples, is increasingly being applied in recommendation system research. This paper discusses the main research on using meta-learning techniques to alleviate the cold start problem and adaptive recommendation issues in recommendation systems. Firstly, it analyzes the research progress of meta-learning-based recommendation in these two areas. Then, it points out the challenges faced by existing meta-learning recommendation research, such as difficulty adapting to complex task distributions, high computational cost, and a tendency to fall into local optima. Finally, it surveys some of the latest research directions in meta-learning for recommendation systems.

    GNNSched: A GNN inference task scheduling framework on GPU
    SUN Qing-xiao, LIU Yi, YANG Hai-long, WANG Yi-qing, JIA Jie, LUAN Zhong-zhi, QIAN De-pei
    Computer Engineering & Science    2024, 46 (01): 1-11.  
    Abstract views: 299 | PDF (1464KB) downloads: 469
    Due to frequent memory access, graph neural networks (GNNs) often have low resource utilization when running on GPUs. Existing inference frameworks, which do not consider the irregularity of GNN input, may exceed GPU memory capacity when directly applied to GNN inference tasks. For GNN inference tasks, it is necessary to pre-analyze the memory occupation of concurrent tasks based on their input characteristics, to ensure successful co-location of concurrent tasks on the GPU. In addition, inference tasks submitted in multi-tenant scenarios urgently need flexible scheduling strategies to meet the quality-of-service requirements of concurrent inference tasks. To solve these problems, this paper proposes GNNSched, which efficiently manages the co-location of GNN inference tasks on GPUs. Specifically, GNNSched organizes concurrent inference tasks into a queue and estimates the memory occupation of each task at the operator level based on a cost function. GNNSched implements multiple scheduling strategies to generate task groups, which are iteratively submitted to the GPU for concurrent execution. Experimental results show that GNNSched can meet the quality-of-service requirements of concurrent GNN inference tasks and reduce their response time.
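The operator-level memory estimate and grouped submission described above can be sketched as follows (the function names and the linear cost model are my own illustration, not GNNSched's actual code):

```python
# Hypothetical sketch of the GNNSched idea: estimate each task's GPU memory
# from an operator-level cost function, then greedily group queued tasks so
# that each group fits within GPU memory before concurrent submission.

def task_memory_bytes(num_nodes, num_edges, feat_dims, bytes_per_elem=4):
    """Rough per-task estimate: each layer materializes a node-feature matrix
    plus aggregated edge messages of the layer's output width."""
    total = 0
    for d in feat_dims:
        total += num_nodes * d * bytes_per_elem   # node feature matrix
        total += num_edges * d * bytes_per_elem   # edge messages to aggregate
    return total

def schedule_groups(tasks, gpu_capacity):
    """Greedy FIFO grouping: keep adding queued (name, mem) tasks to the
    current group while the combined estimate stays under capacity."""
    groups, current, used = [], [], 0
    for name, mem in tasks:
        if current and used + mem > gpu_capacity:
            groups.append(current)       # submit this group, start a new one
            current, used = [], 0
        current.append(name)
        used += mem
    if current:
        groups.append(current)
    return groups
```

A queue of three tasks with estimates 4, 3, and 5 units against a capacity of 8 yields the groups `[["a", "b"], ["c"]]` — the first two run concurrently, the third waits for the next round.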

    A hybrid multi-strategy improved sparrow search algorithm
    LI Jiang-hua, WANG Peng-hui, LI Wei
    Computer Engineering & Science    2024, 46 (02): 303-315.  
    Abstract views: 178 | PDF (1768KB) downloads: 433
    Aiming at the problems that the sparrow search algorithm (SSA) suffers from premature convergence when solving for the optimum of an objective function, easily falls into local optima on multimodal problems, and lacks solution accuracy in high dimensions, a hybrid multi-strategy improved sparrow search algorithm (MISSA) is proposed. Since the quality of the initial solutions strongly affects the convergence speed and accuracy of the whole algorithm, an elite opposition-based learning strategy is introduced to expand the search area and improve the quality and diversity of the initial population. The step size is controlled in stages to improve solution accuracy. Adding a Circle-map parameter and a cosine factor to the follower position update improves the ergodicity and search ability of the algorithm. An adaptive selection mechanism is used to update individual sparrow positions, and Lévy flight is added to enhance optimization ability and the ability to jump out of local optima. The improved algorithm is compared with SSA and other algorithms on 13 test functions, and the Friedman test is carried out. The experimental results show that MISSA effectively improves optimization accuracy and convergence speed, is applicable to high-dimensional problems, and has high stability.
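The Lévy-flight component mentioned above is commonly implemented with Mantegna's algorithm; a minimal sketch (illustrative of the standard technique, not the paper's exact formula) is:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step via Mantegna's algorithm, the heavy-tailed step
    generator commonly added to swarm optimizers to escape local optima."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)          # scale of the numerator Gaussian
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)            # occasionally very large steps

def levy_update(position, best, alpha=0.01):
    """Move a candidate toward the best-so-far with a Lévy-distributed step."""
    return [x + alpha * levy_step() * (b - x) for x, b in zip(position, best)]
```

Most steps are small (local refinement), but the heavy tail occasionally produces a long jump, which is exactly the behavior used to pull trapped individuals out of local optima.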


    A vehicle object detection algorithm in UAV video stream based on improved Deformable DETR
    JIANG Zhi-peng, WANG Zi-quan, ZHANG Yong-sheng, YU Ying, CHENG Bin-bin, ZHAO Long-hai, ZHANG Meng-wei
    Computer Engineering & Science    2024, 46 (01): 91-101.  
    Abstract views: 269 | PDF (1626KB) downloads: 427
    Aiming at the problems of numerous small targets in UAV video stream detection, insufficient contextual semantic information due to low image-transmission quality, slow feature-fusion inference in traditional algorithms, and poor training results caused by imbalanced category samples in the dataset, this paper proposes a vehicle object detection algorithm for UAV video streams based on an improved Deformable DETR. In terms of model structure, the method designs a cross-scale feature fusion module to enlarge the receptive field and improve small-object detection, and applies a squeeze-excitation module to the object queries (object_query) to raise the response of key objects and reduce missed and false detections of important objects. In terms of data processing, online hard example mining is used to alleviate the uneven distribution of class samples in the dataset. Experimental results show that, compared with the baseline, the improved algorithm raises average detection precision by 1.5% and small-target detection precision by 1.2%, without degrading detection speed.
    An improved dense pedestrian detection algorithm based on YOLOv8: MER-YOLO
    WANG Ze-yu, XU Hui-ying, ZHU Xin-zhong, LI Chen, LIU Zi-yang, WANG Zi-yi
    Computer Engineering & Science    2024, 46 (06): 1050-1062.  
    Abstract views: 342 | PDF (3288KB) downloads: 424
    In large-scale crowded venues, abnormal crowd gatherings occur from time to time, posing challenges for the dense pedestrian detection involved in applications such as autonomous driving and crowd-monitoring systems for large public places. A new generation of dense pedestrian detection technology requires higher accuracy, lower computational overhead, faster detection, and more convenient deployment. To meet these requirements, a lightweight dense pedestrian detection algorithm based on YOLOv8, MER-YOLO, is proposed. It first uses MobileViT as the backbone network to improve the model's overall feature extraction in pedestrian gathering areas. The EMA attention module is introduced to encode global information, further aggregating pixel-level features through dimensional interaction, and a 160×160-scale detection head strengthens small-target detection. Using Repulsion Loss as the bounding-box loss function reduces missed and false detections of small pedestrian targets in dense crowds. Experimental results show that, compared with YOLOv8n, MER-YOLO improves mAP@0.5 by 4.5% on the CrowdHuman dataset and 2.1% on the WiderPerson dataset, while requiring only 3.1×10⁶ parameters and 9.8 GFLOPs, meeting deployment requirements for low computing power and high precision.

    Review on security issues of blockchains
    SHEN Chuan-nian
    Computer Engineering & Science    2024, 46 (01): 46-62.  
    Abstract views: 281 | PDF (959KB) downloads: 419
    Blockchain, with its disruptive innovative technology, is continuously changing the operational rules and application scenarios of industries such as digital finance, digital government, the Internet of Things, and intelligent manufacturing. It is an indispensable key technology for building a new trust and value system in the future society. However, due to defects in the technology itself and the complexity and diversity of its application scenarios, the security issues of blockchain are becoming increasingly serious. Security has become a major bottleneck restricting the future development of blockchain, and the road to blockchain regulation is arduous. This paper introduces the background knowledge, basic concepts, and architecture of blockchain. Starting from that architecture, it analyzes the security issues and prevention strategies of blockchain from seven aspects: the data layer, network layer, consensus layer, incentive layer, contract layer, application layer, and cross-chain. On this basis, it discusses blockchain safety supervision in terms of the current situation and difficulties of policy supervision, the establishment of technical supervision standards, innovative methods, and development trends.

    Convolutional neural network inference and training vectorization method for multicore vector accelerators
    CHEN Jie, LI Cheng, LIU Zhong
    Computer Engineering & Science    2024, 46 (04): 580-589.  
    Abstract views: 142 | PDF (982KB) downloads: 415
    With the widespread application of deep learning, represented by convolutional neural networks (CNNs), the computational requirements of neural network models have grown rapidly, driving the development of deep learning accelerators. The research focus has shifted to how to accelerate and optimize neural network performance based on the architectural characteristics of these accelerators. For VGG network inference and training algorithms on the independently developed multi-core vector accelerator FT-M7004, vectorized mapping methods for core operators such as convolution, pooling, and fully connected layers are proposed. Optimization strategies, including SIMD vectorization, double-buffered DMA transfers, and weight sharing, are employed to fully exploit the architectural advantages of the vector accelerator and achieve high computational efficiency. Experimental results indicate that on the FT-M7004 platform, the average computational efficiency of convolution layer inference and training is 86.62% and 69.63%, respectively; for fully connected layers, inference and training reach average computational efficiencies of 93.17% and 81.98%, respectively. The inference computational efficiency of the VGG network model on the FT-M7004 exceeds that on the GPU platform by over 20%.
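Vectorized mapping of convolution is conventionally done by unrolling the input (im2col) so the operator becomes a matrix product that SIMD/vector units handle well; a generic pure-Python sketch of that idea (an illustration of the standard technique, not FT-M7004 kernel code) is:

```python
def im2col(image, kh, kw):
    """Unroll each kh-by-kw sliding window of a 2D image (list of rows) into
    one row of a matrix, so convolution becomes a single matrix product --
    the standard trick for mapping convolution onto vector hardware."""
    h, w = len(image), len(image[0])
    rows = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            rows.append([image[i + di][j + dj]
                         for di in range(kh) for dj in range(kw)])
    return rows

def conv2d_via_gemm(image, kernel):
    """Valid (no-padding) 2D convolution expressed as im2col + dot products."""
    kh, kw = len(kernel), len(kernel[0])
    flat_k = [kernel[i][j] for i in range(kh) for j in range(kw)]
    cols = im2col(image, kh, kw)
    flat = [sum(a * b for a, b in zip(row, flat_k)) for row in cols]
    out_w = len(image[0]) - kw + 1
    return [flat[r * out_w:(r + 1) * out_w]
            for r in range(len(flat) // out_w)]
```

On a vector accelerator, each dot product over an unrolled row is a contiguous SIMD multiply-accumulate, which is why this layout achieves high computational efficiency.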

    A survey of error correction codes in holographic storage
    YU Qin, WU Fei, ZHANG Meng, XIE Chang-sheng
    Computer Engineering & Science    2024, 46 (04): 571-579.  
    Abstract views: 161 | PDF (1981KB) downloads: 412
    In the era of big data, the demand for high-density, large-capacity storage technology is increasing day by day. Unlike traditional storage technologies that record data bit by bit, holographic storage reads and writes two-dimensional data pages as its unit, adopting a three-dimensional volume storage mode. With advantages such as high storage density, fast data conversion rate, energy efficiency, safety, and ultra-long-term preservation, holographic storage is expected to become a strong competitor for mass cold-data storage. This paper focuses on phase-modulated collinear holographic storage and analyzes the current research status of error correction codes for holographic storage. A detailed introduction is given to a reference-beam-assisted low-density parity-check (LDPC) code scheme.

    Hardware design and FPGA implementation of a variable pipeline stage SM4 encryption and decryption algorithm
    ZHU Qi-jin, CHEN Xiao-wen, LU Jian-zhuang
    Computer Engineering & Science    2024, 46 (04): 606-614.  
    Abstract views: 152 | PDF (1475KB) downloads: 408
    As the first commercial cryptographic algorithm published in China, the SM4 algorithm is widely used in encrypted data storage, encrypted communication, and other fields, owing to its simple and easily implemented structure, fast encryption and decryption, and high security. Taking the hardware design and FPGA implementation of a variable-pipeline-stage SM4 encryption/decryption algorithm as its topic, this study focuses on the performance differences between designs with different numbers of pipeline stages. An SM4 encryption/decryption circuit with a controllable number of pipeline stages is designed and encapsulated as an IP core with AXI and APB interfaces. Based on Xilinx ZYNQ devices, a small SoC is constructed on the ZYNQ-7020 development board, and the designed SM4 IP core is mounted on the AXI bus to simulate real-world scenarios and conduct performance tests. Functional correctness is verified by comparing software encryption/decryption results with simulation data. Testing the performance of different pipeline depths identifies the most suitable number of stages.
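The pipeline-depth trade-off being tested can be illustrated with a toy throughput/latency model (my own illustrative assumptions — the 32 SM4 rounds split evenly across stages, one stage per clock — not measured ZYNQ-7020 figures):

```python
def pipeline_metrics(rounds=32, stages=4, blocks=1000):
    """Toy model of a round-pipelined block cipher: each of `stages` stages
    computes rounds/stages rounds per cycle, so one block takes `stages`
    cycles of latency, and once the pipe is full one block finishes per
    cycle. Returns (latency_cycles, total_cycles, blocks_per_cycle)."""
    assert rounds % stages == 0, "rounds must divide evenly across stages"
    latency_cycles = stages                  # fill time for a single block
    total_cycles = stages + (blocks - 1)     # pipelined stream of blocks
    throughput = blocks / total_cycles       # approaches 1 block/cycle
    return latency_cycles, total_cycles, throughput
```

Deeper pipelines raise single-block latency and register cost but let the clock run faster and keep per-cycle throughput at one block, which is why the best stage count depends on the workload being streamed.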


    A texture image classification method based on adaptive texture feature fusion
    LV Fu, HAN Xiao-tian, FENG Yong-an, XIANG Liang
    Computer Engineering & Science    2024, 46 (03): 488-498.  
    Abstract views: 139 | PDF (1155KB) downloads: 400
    Existing deep learning image classification methods generally lack targeted handling of texture features and achieve low classification accuracy, making them difficult to apply to both simple and complex textures. A deep learning model based on adaptive texture feature fusion is proposed, which makes classification decisions based on the texture features that differ most between classes. Firstly, a texture feature image is constructed from the most discriminative inter-class texture differences. Secondly, an improved bilinear model is trained in parallel on the original image and the distinctive texture feature image to obtain dual-channel features. Finally, an adaptive classification module based on decision fusion is constructed: a channel weight is extracted from the average-pooled feature maps of the concatenated original image and texture map, and the classification vectors of the two parallel neural networks are fused according to this weight to obtain the optimal fused classification result. Classification performance was evaluated on four common texture datasets, KTH-TIPS, KTH-TIPS-2b, UIUC, and DTD, achieving accuracies of 99.98%, 99.95%, 99.99%, and 67.09%, respectively, indicating generally efficient recognition performance.
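The decision-fusion step can be sketched as follows (a minimal stand-in: in the paper the channel weight is produced by a learned module from pooled features of both inputs, whereas here it is simply a parameter):

```python
def fuse_predictions(logits_orig, logits_texture, w):
    """Weighted decision fusion of two classifier branches: combine the
    original-image and texture-image class scores with a weight w in [0, 1]."""
    return [w * a + (1 - w) * b for a, b in zip(logits_orig, logits_texture)]

def predict(logits_orig, logits_texture, w=0.6):
    """Return the index of the winning class after fusion."""
    fused = fuse_predictions(logits_orig, logits_texture, w)
    return max(range(len(fused)), key=fused.__getitem__)
```

Making `w` input-dependent (as the adaptive module does) lets the model lean on the texture branch for texture-dominated images and on the original branch otherwise.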

    Research progress on information extraction methods of Chinese electronic medical records
    JI Xu-rui, WEI De-jian, ZHANG Jun-zhong, ZHANG Shuai, CAO Hui
    Computer Engineering & Science    2024, 46 (02): 325-337.  
    Abstract views: 254 | PDF (887KB) downloads: 399
    The large amount of medical information carried in electronic medical records (EMRs) can help doctors better understand patients' conditions and assist clinical diagnosis. As the two core tasks of Chinese EMR information extraction, named entity recognition and entity relation extraction have become major research directions. Their main goal is to identify the medical entities in EMR text and extract the medical relationships between them. This paper systematically expounds the research status of Chinese EMRs, points out the important role of named entity recognition and entity relation extraction in Chinese EMR information extraction, then introduces the latest research results on named entity recognition and relation extraction algorithms for Chinese EMR information extraction, analyzing the advantages and disadvantages of the models at each stage. In addition, current problems with Chinese EMRs are discussed, and future research trends are projected.


    Multi-domain sentiment analysis of Chinese text based on prompt tuning
    ZHAO Wen-hui, WU Xiao-ling, LING Jie, HOON Heo
    Computer Engineering & Science    2024, 46 (01): 179-190.  
    Abstract views: 198 | PDF (1348KB) downloads: 392
    Sentiment is expressed differently in texts from different domains, so a separate sentiment analysis model usually has to be trained for each domain. To solve the problem that a single model cannot serve multi-domain sentiment analysis, this paper proposes a multi-domain text sentiment analysis method based on prompt tuning, called MSAPT. With the help of hard prompts indicating the domain of the sentiment text and the candidate sentiment labels, the model is prompted to draw on its knowledge of sentiment analysis across domains, and a unified “generalized model” is pretrained for sentiment analysis. In downstream learning on texts from various domains, the model is kept frozen and prompt tuning is used to learn the characteristics of sentiment text in each downstream domain. MSAPT only requires saving one model plus some prompts, whose parameters are far fewer than the model's, for multi-domain sentiment analysis. Experiments on multiple sentiment text datasets from different domains show that MSAPT outperforms model fine-tuning when only prompt tuning is applied. Finally, the prompt length, domain-specific hard prompts, soft prompts, and the size of the intermediate training dataset are each ablated to demonstrate their impact on sentiment analysis performance.
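A hard prompt of the kind described might look like the following (the template wording and label set are my own English illustration; the paper's Chinese templates will differ):

```python
def build_prompt(domain, text, labels=("positive", "negative")):
    """Construct a domain-aware hard prompt in the spirit of MSAPT: the
    prompt names the domain and the candidate sentiment labels, so that a
    single frozen model can be steered to any domain at inference time."""
    label_str = "/".join(labels)
    return (f"Domain: {domain}. Candidate labels: {label_str}. "
            f"Review: {text} Sentiment: [MASK]")
```

The masked-position token predicted by the frozen language model is then mapped back to one of the candidate labels; only the (much smaller) soft-prompt parameters are updated per domain.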

    A survey of satisfiability modulo theories
    TANG Ao, WANG Xiao-feng, HE Fei
    Computer Engineering & Science    2024, 46 (03): 400-415.  
    Abstract views: 211 | PDF (1183KB) downloads: 363
    Satisfiability modulo theories (SMT) refers to the decision problem for first-order logic formulas under specific background theories. Being based on first-order logic, SMT has stronger expressive capability than SAT and higher abstraction ability for handling more complex problems. SMT solvers find applications in various domains and have become essential engines for formal verification. Currently, SMT is widely used in fields such as artificial intelligence, hardware RTL verification, automated reasoning, and software engineering. Based on recent developments in SMT, this paper first expounds the fundamentals of SMT and lists common background theories. It then analyzes and summarizes the implementation of the Eager, Lazy, and DPLL(T) approaches, providing further insight into the implementations of the mainstream solvers Z3, CVC5, and MathSAT5. Subsequently, the paper introduces extensions of SMT such as #SMT, the SMTlayer approach applied to deep neural networks (DNNs), and quantum SMT solvers. Finally, it offers a perspective on the development of SMT and discusses the challenges it faces.

    Self-supervised few-shot medical image segmentation with multi-attention mechanism
    YAO Yuan-yuan, LIU Yu-hang, CHENG Yu-jing, PENG Meng-xiao, ZHENG Wen
    Computer Engineering & Science    2024, 46 (03): 479-487.  
    Abstract views: 241 | PDF (1132KB) downloads: 360
    Mainstream fully supervised deep learning segmentation models achieve good results when trained on abundant labeled data, but medical image segmentation faces high annotation costs and diverse segmentation targets, and often lacks sufficient labeled data. The model proposed in this paper incorporates the self-supervised idea of extracting labels from the data itself, using superpixels to represent image characteristics so that segmentation can be performed with few annotated samples. Multiple attention mechanisms allow the model to focus more on the spatial features of the image: the position attention module and channel attention module fuse multi-scale features within a single image, while the external attention module highlights connections between different samples. Experiments were conducted on the CHAOS healthy abdominal organ dataset. In the extreme 1-shot case, the DSC reached 0.76, about 3% higher than the baseline. In addition, this paper explores the significance of few-shot learning by adjusting the number of N-way K-shot tasks; under the 7-shot setting, the DSC improves significantly, coming within an acceptable margin of fully supervised deep learning segmentation.
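The DSC figures quoted above are Dice similarity coefficients; for reference, on binary masks the metric is:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC): 2|A ∩ B| / (|A| + |B|) over two
    binary segmentation masks given as flat 0/1 lists. Returns 1.0 for two
    empty masks by convention."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0
```

A DSC of 0.76 therefore means the predicted organ mask and the ground-truth mask overlap on 76% of their combined (averaged) area.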

    A microblog rumor detection model based on user authority and multi-feature fusion
    XU Li-fen, CAO Zhan-mao, ZHENG Ming-jie, XIAO Bo-jian
    Computer Engineering & Science    2024, 46 (04): 752-760.  
    Abstract views: 171 | PDF (718KB) downloads: 342
    The widespread dissemination of online rumors and their negative social impact urgently require efficient rumor detection. Because the dataset's texts lack rich semantic information and strict syntactic structure, it is meaningful to combine user characteristics and contextual features to enrich the semantics. To this end, MRUAMF, a microblog rumor detection model based on user authority and multi-feature fusion, is proposed. Firstly, four indicators, user information completeness, user activity, user communication span, and user platform-authentication index, are extracted to build a quantitative model of user authority. User authority and its constituent indicators are cascaded, and a two-layer fully connected network fuses the features, effectively quantifying user characteristics. Secondly, considering the effectiveness of context in understanding rumors, relevant contextual features are extracted. Finally, the BERT pretrained model extracts text features, which are combined with the Multimodal Adaptation Gate (MAG) to fuse user, contextual, and text features. Experiments on a microblog dataset show that, compared with baseline models, MRUAMF achieves better detection performance, with an accuracy of 0.941.
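A minimal sketch of quantifying the four authority indicators named above (the paper fuses them with a cascaded two-layer fully connected network; a fixed weighted sum is shown here as a stand-in, with all inputs assumed normalized to [0, 1]):

```python
def user_authority(completeness, activity, span, verified,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative user-authority score from the four indicators:
    profile completeness, activity level, communication span, and platform
    verification. The equal weights are placeholders, not learned values."""
    indicators = (completeness, activity, span, float(verified))
    return sum(w * x for w, x in zip(weights, indicators))
```

In the actual model this scalar and its constituent indicators would be cascaded into a feature vector and passed through the learned fusion network rather than summed with fixed weights.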
