  • A journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Performance evaluation of Sugon exascale prototype with GTC-P
      WANG Yi-chao1,HU Hang1,William Tang2,WANG Bei2,LIN Xin-hua1
      2020, 42(01): 1-7. doi:
      As one of the three exascale prototypes of the 13th Five-Year Plan, the Sugon exascale prototype uses a heterogeneous computing architecture. In the prototype, the CPUs are domestically produced Hygon x86 processors licensed from AMD, and the accelerators are Hygon DCUs (Deep Computing Units). In addition to testing the chips with benchmarks, we port a Particle-In-Cell application, the Gyrokinetic Toroidal Code at Princeton (GTC-P), in order to study the performance of real applications on this prototype. We compare the performance and scalability of GTC-P on the Hygon CPU and DCU with those on the Intel 6148 CPU and the NVIDIA V100 GPU. Our evaluation shows the performance of a real HPC application on the Sugon exascale prototype.
       
      Workload characterization and task scheduling
      optimization of co-located Internet data centers
      2020, 42(01): 8-17. doi:
      Modern Internet Data Centers (IDCs) face challenges in energy consumption, reliability, manageability, and scalability as their sizes grow. IDCs now carry a variety of services, including online web services and offline batch processing jobs. Online jobs require low latency, while offline jobs require high throughput. In order to improve server utilization and reduce energy consumption, IDCs often deploy online and offline jobs in the same computing cluster. In this co-located scenario, the key challenge is how to meet the different requirements of online and offline jobs at the same time. This paper analyzes the Alibaba co-located cluster trace (cluster-trace-v2018), which includes traces from 4,034 machines over 8 days. Based on the static configuration, the dynamic co-located run-time status, and the DAG (Directed Acyclic Graph) dependency structure of offline batch jobs, the co-located workloads are characterized, including the relationship between task skew and container distribution. Based on task dependencies and critical paths, a corresponding task scheduling optimization strategy is proposed.
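      The critical-path idea behind the scheduling strategy can be illustrated with a minimal sketch (hypothetical task names and durations, not the trace data or the paper's scheduler): compute the longest path through a DAG of batch tasks via a topological order.

      from collections import defaultdict, deque

      def critical_path(durations, edges):
          """Longest (critical) path length and tasks on it, for a DAG of batch tasks.
          durations: {task: run_time}, edges: list of (upstream, downstream) dependencies."""
          succ, indeg = defaultdict(list), defaultdict(int)
          for u, v in edges:
              succ[u].append(v)
              indeg[v] += 1
          # earliest finish time of each task, and the predecessor that determines it
          finish = {t: durations[t] for t in durations}
          pred = {}
          queue = deque(t for t in durations if indeg[t] == 0)
          while queue:
              u = queue.popleft()
              for v in succ[u]:
                  if finish[u] + durations[v] > finish[v]:
                      finish[v] = finish[u] + durations[v]
                      pred[v] = u
                  indeg[v] -= 1
                  if indeg[v] == 0:
                      queue.append(v)
          # backtrack from the task with the largest finish time
          end = max(finish, key=finish.get)
          path = [end]
          while path[-1] in pred:
              path.append(pred[path[-1]])
          return finish[end], path[::-1]

      # hypothetical offline job: M1 -> {M2, M3} -> R1
      print(critical_path({"M1": 3, "M2": 5, "M3": 2, "R1": 4},
                          [("M1", "M2"), ("M1", "M3"), ("M2", "R1"), ("M3", "R1")]))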
       
      A rule processing architecture
      based on distributed platform
      CHEN Meng-dong,YUAN Hao,XIE Xiang-hui,WU Dong
      2020, 42(01): 18-24. doi:
      Transforming a dictionary with string transformation rules is an effective method for secure string recovery. However, rule processing is complicated, and existing methods are implemented in software. Aiming at practical requirements on processing performance and power consumption, a rule processing architecture based on a distributed platform is proposed. For the first time, rule processing is accelerated by FPGA hardware, and it is further accelerated by splitting complex rule combinations across parallel nodes. Experimental results on the ant cluster system show that a rule processing system adopting this architecture meets practical needs, and its performance and energy efficiency are significantly improved in comparison with CPU and GPU implementations, indicating the effectiveness of the distributed rule processing architecture.
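      A minimal software sketch of the rule idea (the rule syntax here is hypothetical, not the paper's rule set or its FPGA pipeline): each rule rewrites a dictionary word, and a compound rule is the composition of single rules, which is what gets split across parallel nodes.

      # Hypothetical single rules: 'c' capitalize, 'r' reverse,
      # '$x' append character x, 'sab' substitute every 'a' with 'b'.
      def apply_rule(word, rule):
          if rule == "c":
              return word.capitalize()
          if rule == "r":
              return word[::-1]
          if rule.startswith("$") and len(rule) == 2:
              return word + rule[1]
          if rule.startswith("s") and len(rule) == 3:
              return word.replace(rule[1], rule[2])
          raise ValueError(f"unknown rule: {rule}")

      def apply_rules(word, rules):
          # A compound rule is applied left to right; distributing this loop
          # over nodes corresponds to splitting complex rule combinations.
          for r in rules:
              word = apply_rule(word, r)
          return word

      print(apply_rules("password", ["sa@", "c", "$1"]))  # -> "P@ssword1"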
       
      Detecting outliers in data stream based on grid coupling 
      YANG Jie,ZHANG Dong-yue,ZHOU Li-hua,HUANG Hao,DING Hai-yan
      2020, 42(01): 25-35. doi:
      Grid-based data analysis methods process data in units of grid cells, avoiding point-to-point computation over data objects and greatly improving the efficiency of data analysis. However, traditional grid-based methods process each grid cell independently, ignoring the coupling relationship between cells and resulting in unsatisfactory analysis accuracy. In this paper, when grids are used to detect outliers in data streams, the cells are no longer processed independently and the coupling relationship between them is taken into account. A grid-coupling-based outlier detection algorithm for data streams (GCStream-OD) is proposed. The algorithm expresses the correlation between data stream objects exactly through grid coupling, and improves efficiency through a pruning strategy. Experimental results on five real data streams show that GCStream-OD achieves higher outlier detection quality and efficiency.
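      A minimal sketch of grid-based outlier flagging that also looks at neighboring cells (a simplified stand-in for the coupling relationship; GCStream-OD's exact coupling measure and pruning strategy are not reproduced):

      import numpy as np
      from collections import Counter

      def grid_outliers(points, cell_size, min_support=3):
          """Flag points whose own grid cell plus its 8 neighbors hold fewer
          than `min_support` points (2-D case for illustration)."""
          cells = [tuple((p // cell_size).astype(int)) for p in points]
          counts = Counter(cells)
          outliers = []
          for idx, c in enumerate(cells):
              support = sum(counts.get((c[0] + dx, c[1] + dy), 0)
                            for dx in (-1, 0, 1) for dy in (-1, 0, 1))
              if support < min_support:
                  outliers.append(idx)
          return outliers

      rng = np.random.default_rng(0)
      stream = np.vstack([rng.normal(0, 1, (200, 2)), [[8.0, 8.0]]])  # one isolated point
      print(grid_outliers(stream, cell_size=1.0))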
       
      Reviewing big data recommendation
      methods of commodity collocation
      CHEN Xin1,WANG Bin1,ZENG Fan-qing2
      2020, 42(01): 36-45. doi:
      With the continuous development of e-commerce, recommender systems face problems such as diverse data sources, complex data structures, poor recommendation diversity, and cold start. Big data recommendation methods for commodity collocation can not only solve these problems effectively, but are also of great significance for advising consumers and helping businesses promote sales. Firstly, by reviewing the relevant domestic and foreign literature, the paper explains the basic concepts and forms of collocation recommendation methods, and analyzes their differences from and advantages over traditional recommendation methods. Then, the classification of collocation recommendation methods is discussed, including collocation recommendation based on commodity content, collocation recommendation based on collaborative filtering, and hybrid collocation recommendation. Finally, based on the aforementioned research and analysis, it is pointed out that future research hotspots will focus on collocation recommendation for multiple commodities, collocation recommendation based on multi-source heterogeneous data fusion, and collocation recommendation based on knowledge graphs. In particular, applying knowledge graphs to collocation recommendation will be a very promising line of research.
       
      A sensor network spatial range aggregation query
       processing algorithm against data eavesdropping attacks
      HU Zhen-hai,WANG Li-song
      2020, 42(01): 46-54. doi:

      Currently, privacy-preserving aggregation query processing methods in sensor networks rely on encryption and decryption to protect sensed data, and require all nodes in the network to participate in query processing. Excessive encryption and decryption operations consume a lot of node energy, and users may only be interested in the aggregate result of a partial region. To deal with these problems, a sensor network spatial range aggregation query processing algorithm against eavesdropping attacks (PCPDA: Part of the area based on cluster Privacy-preserving Data Aggregation) is proposed. The algorithm aggregates data along the route established during querying, so it does not depend on a pre-configured topology, is suitable for sensor networks whose topology changes dynamically, and saves the overhead of maintaining the topology. The algorithm guarantees the privacy of node-sensed data without any encryption measures. Theoretical analysis and simulation results show that PCPDA is superior to existing algorithms in terms of energy consumption and privacy protection.

      An edge importance measurement method based
      on information dissemination characteristics
      XU Man1,2,3,LU Fu-rong1,2,3,MA Guo-shuai1,2,3,QIAN Yu-hua1,2,3
      2020, 42(01): 55-63. doi:
      Edge importance measurement is a very important issue in information dissemination. Edges are the carriers of information dissemination, and edges at different locations have different information loads and propagation capabilities. Removing edges that have a significant impact on dissemination is of great importance for curbing the spread of rumors and for maximizing the dissemination of public information. Information dissemination is affected by factors such as the communicators, the communication channels, and the communication environment. Based on these observations, and by comprehensively considering the factors affecting information dissemination, this paper proposes an edge importance measurement method based on information dissemination characteristics, ISM (Information Spreading Model). On nine real network datasets, ISM is compared with four classical edge importance measures: the Jaccard coefficient, the Bridgeness index, Betweenness centrality, and the Reachability index. The experimental results show that the proposed method is superior to these commonly used methods in terms of network connectivity and diffusion dynamics.
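      One of the baseline measures named in the abstract, the edge Jaccard coefficient, can be stated directly: for an edge (u, v) it is |N(u) ∩ N(v)| / |N(u) ∪ N(v)|. A minimal sketch follows (one common variant that excludes the endpoints themselves; ISM is not reproduced since its definition is not given in the abstract):

      import networkx as nx

      def edge_jaccard(G, u, v):
          """Jaccard coefficient of edge (u, v): overlap of the endpoints' neighborhoods."""
          nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
          union = nu | nv
          return len(nu & nv) / len(union) if union else 0.0

      G = nx.karate_club_graph()
      # A low Jaccard score suggests a bridge-like edge that matters for spreading.
      ranking = sorted(G.edges(), key=lambda e: edge_jaccard(G, *e))
      print(ranking[:5])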
       
      An attention-based hybrid neural network
       relation classification method
      ZHUANG Chuan-zhi1,2,JIN Xiao-long1,2,LI Zhong1,2,SUN Zhi1,2
      2020, 42(01): 64-70. doi:
      Relation classification is an important semantic processing task in natural language processing. Traditional relation classification methods judge the relationship between two entities within a sentence by manually designing various features and kernel functions. In recent years, work on relation classification has focused on obtaining semantic feature representations of sentences through various neural networks, so as to reduce the manual construction of features. Within a sentence, different words contribute differently to the relation classification task, and the most important words may appear at any position. To this end, we propose an attention-based hybrid neural network relation classification method to capture the semantic information that is important for relation classification. The method is end-to-end. Experimental results show its effectiveness.
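      A minimal sketch of the attention idea in isolation: attention-weighted pooling over token features produced by some encoder. The encoder, dimensions and weights below are placeholders, not the paper's hybrid architecture.

      import numpy as np

      def attention_pool(H, w):
          """H: (seq_len, hidden) token features from some encoder (e.g. a BiLSTM/CNN);
          w: (hidden,) learned attention query vector.
          Returns a sentence vector in which informative tokens get larger weights."""
          scores = H @ w                                  # (seq_len,)
          alpha = np.exp(scores - scores.max())
          alpha = alpha / alpha.sum()                     # softmax attention weights
          return alpha @ H, alpha                         # (hidden,), (seq_len,)

      rng = np.random.default_rng(0)
      H = rng.normal(size=(12, 64))    # 12 tokens, 64-dim features (toy values)
      w = rng.normal(size=64)
      sent_vec, alpha = attention_pool(H, w)
      print(sent_vec.shape, alpha.round(3))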
       
      Query consistency constraints of differential privacy
      JIA Jun-jie,CHEN Hui,MA Hui-fang,MU Yu-xiang
      2020, 42(01): 71-79. doi:
      Aiming at the inconsistency of range queries in differential privacy histogram publishing, we study the locally optimal linear unbiased estimation algorithm LBLUE, which requires iterative adjustment, and propose a CA algorithm that needs no iteration and satisfies the consistency constraint on range queries. Consistency adjustment is performed on a full k-ary range tree with Laplace noise: the TDICE algorithm is first used for top-down inconsistency estimation, and then the BUCE algorithm is used for bottom-up consistency estimation, so as to obtain a differentially private full k-ary range tree that satisfies the consistency constraint. The histogram data satisfying the consistency constraint is published after traversal. Proofs and experimental analysis show that range queries after consistency adjustment satisfy the consistency constraint, that the algorithm has higher accuracy than the Boost-2 and LBLUE algorithms, and that it has higher time efficiency than the LBLUE algorithm.
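      A minimal sketch of the underlying structure, a full k-ary range tree over histogram bins with per-node Laplace noise; the budget split shown is one common choice, and the TDICE/BUCE consistency adjustment itself is not reproduced here.

      import numpy as np

      def noisy_kary_tree(hist, k, epsilon):
          """Build a full k-ary tree over histogram bins; every node stores a count
          with Laplace noise of scale h/epsilon, where h is the number of levels
          (a common sequential budget split, not necessarily the paper's)."""
          levels = [np.asarray(hist, dtype=float)]
          while len(levels[-1]) > 1:
              cur = levels[-1]
              pad = (-len(cur)) % k
              cur = np.concatenate([cur, np.zeros(pad)])
              levels.append(cur.reshape(-1, k).sum(axis=1))
          h = len(levels)
          rng = np.random.default_rng(0)
          return [lvl + rng.laplace(scale=h / epsilon, size=lvl.shape) for lvl in levels]

      hist = [5, 3, 8, 1, 4, 6, 2, 9, 7]
      tree = noisy_kary_tree(hist, k=3, epsilon=1.0)
      for depth, lvl in enumerate(tree):
          print("level", depth, lvl.round(2))
      # A range query is answered by summing the few noisy node counts that exactly
      # cover the range; consistency adjustment then makes each noisy parent equal
      # the sum of its noisy children, so overlapping range queries agree.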
       
      A color image encryption algorithm based on
      new chaos and matrix convolution operation
      WEI Lian-suo,HU Xian-cheng,CHEN Qi-qi,HAN Jian
      2020, 42(01): 80-88. doi:
      In order to address the strong correlation and high redundancy encountered in color image encryption, a color image encryption algorithm based on a cloud-model Fibonacci chaotic system and a matrix convolution operation is proposed. Firstly, the algorithm permutes the pixel coordinates of the image obtained by stitching the R, G, and B components of the color image. Secondly, chaotic sequence values are used as the input of the convolution kernel, and matrix convolution is performed alternately on the chaotic sequence values and the pixel values to achieve the pixel transformation. Thirdly, two XOR operations in opposite directions, using the cloud-model Fibonacci chaotic sequence and the previously processed adjacent pixel values, are performed to generate the encrypted image. Finally, simulation experiments show that the histogram of the encrypted image is smoother, the pixel distribution is uniform, and the correlation between adjacent pixels is low; the average horizontal, vertical, and diagonal correlation coefficients of the RGB components of the encrypted image are -0.0010, 0.0016, and 0.0031, respectively. The encrypted image can resist differential attacks, plaintext attacks, noise attacks, and shear (cropping) attacks. The proposed encryption algorithm offers high security, strong anti-interference capability, and strong robustness.
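      A minimal sketch of the diffusion stage alone: a chained XOR with a keystream and the previously diffused pixel, run once in each direction to mimic the "two opposite XOR operations". The cloud-model Fibonacci chaotic map, the permutation and the convolution stages are not reproduced; the keystream below is a placeholder.

      import numpy as np

      def xor_diffuse(pixels, keystream, reverse=False):
          """Chain each pixel with the keystream and the previously diffused pixel."""
          p = pixels[::-1].copy() if reverse else pixels.copy()
          out = np.empty_like(p)
          prev = np.uint8(0)
          for i in range(len(p)):
              out[i] = p[i] ^ keystream[i] ^ prev
              prev = out[i]
          return out[::-1] if reverse else out

      rng = np.random.default_rng(0)
      flat = rng.integers(0, 256, 16, dtype=np.uint8)   # flattened R|G|B pixels (toy)
      ks = rng.integers(0, 256, 16, dtype=np.uint8)     # chaotic keystream placeholder
      cipher = xor_diffuse(xor_diffuse(flat, ks), ks, reverse=True)
      print(cipher)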
       
      Summary of graph data compression technologies
      LI Feng-ying,YANG En-yi,DONG Rong-sheng
      2020, 42(01): 89-97. doi:
      Using appropriate compression techniques to represent and store graph data with hundreds of millions of nodes and edges compactly and accurately is a prerequisite for analyzing and operating on large-scale graph data. Compact graph representations not only reduce storage space but also support efficient operations on the graph. This paper summarizes the research progress of graph data compression technologies from the perspective of graph data storage, focusing on three classes of techniques: compression based on the adjacency matrix, compression based on adjacency lists, and compression based on formal methods. Their representative algorithms, application scopes, advantages, and disadvantages are discussed. Finally, the current state and open problems of graph data compression are summarized, and future development trends are outlined.
       
      A 3D vehicle detection algorithm
      based on attention mechanism
      WAN Si-yu
      2020, 42(01): 98-102. doi:
      3D vehicle detection is a key problem in autonomous driving scenarios, involving both 3D object detection and 3D object classification. Current 3D detection and classification networks treat all points in the input point cloud equally. However, in the actual detection process, different points may not be equally important for detection. To obtain better detection results, an attention mechanism is introduced to learn weights for the features of different points, so that the features of some points receive more attention during regression. Experiments show that the model achieves higher accuracy than existing methods while maintaining real-time efficiency.
       
      A training set optimization and detection
      method based on YOLOv3 algorithm
      GAO Xing1,LIU Jian-fei1,HAO Lu-guo2,DONG Qi-qi1
      2020, 42(01): 103-109. doi:
      YOLOv3 is a one-stage target detection algorithm that extracts target information without generating region proposals via a Region Proposal Network (RPN). Compared with two-stage target detection algorithms, it has a faster detection speed. However, existing algorithms suffer from low accuracy and missed detections on small targets. Therefore, a detection method based on training set optimization and layered processing is proposed on top of the YOLOv3 algorithm. Firstly, the K-means algorithm is used to cluster the standard VOC2007+2012 dataset and a self-built behavior dataset, so as to obtain anchor sizes that fit the training data. Then, training is carried out by adjusting the training parameters and selecting a reasonable labeling method. Finally, the input image is processed in layers and the targets are detected. The experimental results show that the mean average precision (mAP) on the VOC2007 validation set is improved by 1.4% after cluster analysis, the small receptive field problem of the original algorithm in the higher convolutional layers is effectively solved during detection, and the accuracy of the YOLOv3 algorithm on small targets is improved while the missed detection rate is reduced.
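      A minimal sketch of anchor clustering with K-means under a 1 − IoU distance, the common practice with YOLO; the box widths and heights below are placeholders rather than the VOC or self-built datasets.

      import numpy as np

      def iou_wh(boxes, anchors):
          """IoU between boxes and anchors, both treated as centered at the origin.
          boxes: (N, 2) widths/heights, anchors: (K, 2)."""
          inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
                  np.minimum(boxes[:, None, 1], anchors[None, :, 1])
          union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
          return inter / union

      def kmeans_anchors(boxes, k=9, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          anchors = boxes[rng.choice(len(boxes), k, replace=False)]
          for _ in range(iters):
              assign = np.argmax(iou_wh(boxes, anchors), axis=1)   # distance = 1 - IoU
              for j in range(k):
                  if np.any(assign == j):
                      anchors[j] = boxes[assign == j].mean(axis=0)
          return anchors[np.argsort(anchors.prod(axis=1))]

      rng = np.random.default_rng(1)
      boxes = rng.uniform(10, 300, size=(500, 2))   # placeholder (w, h) pairs in pixels
      print(kmeans_anchors(boxes, k=9).round(1))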
       
      An insect image segmentation and counting
      method based on convolutional neural network
      WANG Wei-min1,FU Shou-fu1,GU Rong-rong1,WANG Dong-sheng1,HE Lin-rong2,GUAN Wen-bin3
      2020, 42(01): 110-116. doi:
      In order to improve the accuracy of insect image segmentation and counting, an insect image segmentation and counting method based on a convolutional neural network is proposed. Based on the U-Net model, the method constructs an insect image segmentation model named Insect-Net. The complete insect image and the split insect image are input into the model, and the features of the two images are extracted and merged. The merged features are fed into a 1×1 convolutional layer to obtain the final segmentation result. After the result is binarized, a contour detection algorithm is used to extract the contours of the insects and count them. The experimental results show that the method achieves a segmentation accuracy of 89.2% and a counting accuracy of 94.4%. The use of deep learning and convolutional neural networks effectively improves the counting accuracy on insect images and provides a large number of background-free datasets for insect identification and classification.
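      A minimal sketch of the counting stage: binarize a predicted mask and count external contours with OpenCV (OpenCV 4 return signature). Insect-Net itself is not reproduced; the mask below is a placeholder.

      import cv2
      import numpy as np

      def count_insects(prob_mask, thresh=0.5, min_area=20):
          """prob_mask: (H, W) float segmentation output in [0, 1].
          Binarize, find external contours, and count those above a minimum area."""
          binary = (prob_mask > thresh).astype(np.uint8) * 255
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          return sum(1 for c in contours if cv2.contourArea(c) >= min_area)

      # Placeholder mask with two filled blobs standing in for segmented insects.
      mask = np.zeros((128, 128), dtype=np.float32)
      cv2.circle(mask, (40, 40), 10, 1.0, -1)
      cv2.circle(mask, (90, 90), 12, 1.0, -1)
      print(count_insects(mask))   # -> 2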
       
      Monocular visual odometry based on
      deep learning feature point method
      XIONG Wei1,2,JIN Jing-yi1,WANG Juan1,LIU Min1,ZENG Chun-yan1
      2020, 42(01): 117-124. doi:
      Aiming at the adverse effect of illumination and viewpoint changes on the stability of feature point extraction in feature-based Visual Odometry (VO), a monocular VO method based on deep learning feature points is proposed. A deep learning SuperPoint (DSP) feature point detector is obtained by training a self-supervised deep learning network. Firstly, the brightness of the training images is adjusted by a nonlinear point-wise brightness adjustment method. Secondly, redundant DSP feature points are eliminated by non-maximum suppression, and a two-way nearest neighbor algorithm, improved from the nearest neighbor algorithm, is used to solve the feature point matching problem. Finally, the reprojection error minimization problem is formulated to solve for the optimal pose and spatial point parameters. The experimental results on the Hpatches and Visual Odometry datasets show that the DSP feature point detector enhances the robustness of feature matching to illumination and viewpoint changes. Without back-end optimization, the method clearly reduces the root mean square error in comparison with the ORB-based method, while the real-time performance of the system is guaranteed, providing a new solution for feature-based VO.
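      A minimal sketch of two-way (mutual) nearest-neighbor matching between two descriptor sets; the SuperPoint detector, the authors' specific refinements, and the pose solver are not reproduced.

      import numpy as np

      def mutual_nn_matches(desc1, desc2):
          """desc1: (N, D), desc2: (M, D) L2-normalized descriptors.
          Keep a match (i, j) only if j is i's nearest neighbor AND i is j's."""
          dist = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)  # (N, M)
          nn12 = dist.argmin(axis=1)          # best j for every i
          nn21 = dist.argmin(axis=0)          # best i for every j
          return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]

      rng = np.random.default_rng(0)
      d1 = rng.normal(size=(100, 256)); d1 /= np.linalg.norm(d1, axis=1, keepdims=True)
      d2 = rng.normal(size=(120, 256)); d2 /= np.linalg.norm(d2, axis=1, keepdims=True)
      print(len(mutual_nn_matches(d1, d2)), "mutual matches")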
       
      ELM-based urban road extraction
      from remote sensing images
      CAI Heng1,2,3,CHU Heng1,2,4,SHAN De-ming1,2,3
      2020, 42(01): 125-130. doi:
      Aiming at the unsatisfactory road extraction results in complex scenes of high-resolution remote sensing images, and exploiting the fast learning ability of the Extreme Learning Machine (ELM), an ELM-based urban road extraction method is proposed. Firstly, an improved Cuckoo Search (CS) algorithm is used to adaptively select the number of hidden-layer nodes of the ELM, in order to improve the stability of the model. Secondly, the discriminant information in the data samples is introduced to compensate for insufficient ELM learning, thereby improving the ELM classification performance. Finally, mathematical morphology processing is used to refine the extracted roads so as to obtain the final road extraction result. Road extraction tests on remote sensing images show that the proposed method not only enhances the stability of the network, but also improves the accuracy of road extraction and can extract road information better.
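      A minimal sketch of a basic ELM (random hidden layer plus a closed-form least-squares solve for the output weights); the Cuckoo Search selection of hidden nodes and the discriminant-information term are not reproduced, and the data below are placeholders.

      import numpy as np

      class ELM:
          """Basic Extreme Learning Machine: random input weights, closed-form output weights."""
          def __init__(self, n_hidden, seed=0):
              self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

          def _hidden(self, X):
              return np.tanh(X @ self.W + self.b)

          def fit(self, X, Y):
              self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
              self.b = self.rng.normal(size=self.n_hidden)
              H = self._hidden(X)
              self.beta = np.linalg.pinv(H) @ Y      # least-squares output weights
              return self

          def predict(self, X):
              return self._hidden(X) @ self.beta

      # Toy binary pixel-feature classification: road vs. non-road (placeholder data).
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 8))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)
      elm = ELM(n_hidden=50).fit(X, y)
      acc = ((elm.predict(X) > 0.5) == (y > 0.5)).mean()
      print(f"training accuracy: {acc:.2f}")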
       
      Keyword extraction of Uyghur-Kazakh
      texts based on stem units
      SARDAR Parhat,MIJIT Ablimit,ASKAR Hamdulla
      2020, 42(01): 131-137. doi:
      A keyword extraction method for Uyghur and Kazakh (Uyghur-Kazakh) texts based on stem units is proposed. Uyghur and Kazakh are low-resource derivational languages. Morpheme structure analysis and stem extraction can effectively reduce the number of lexical units and improve coverage for such derivational languages. In this paper, Uyghur-Kazakh texts downloaded from the Internet are segmented into morpheme sequences, and word2vec is used to train stem vectors that represent text content in a distributed way. Then, the TF-IDF (Term Frequency-Inverse Document Frequency) algorithm is used to weight the stem vectors, and keywords are extracted using the similarity between the keyword vectors of the training set and the stem vectors of the test set. The experimental results show that morpheme segmentation and stem vector representation are important steps of the proposed method, and that the method performs well in extracting keywords from derivational languages such as Uyghur and Kazakh.
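      A minimal sketch of TF-IDF-weighted averaging of stem vectors and cosine ranking; the stems, vectors, and IDF values below are placeholders standing in for word2vec embeddings trained on morpheme-segmented Uyghur-Kazakh text.

      import numpy as np
      from collections import Counter

      def doc_vector(stems, stem_vecs, idf):
          """TF-IDF weighted average of stem embeddings for one morpheme-segmented text."""
          tf = Counter(stems)
          vec, weight_sum = np.zeros(next(iter(stem_vecs.values())).shape), 0.0
          for s, f in tf.items():
              if s in stem_vecs:
                  w = f / len(stems) * idf.get(s, 1.0)
                  vec += w * stem_vecs[s]
                  weight_sum += w
          return vec / weight_sum if weight_sum else vec

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      # Placeholder 4-dimensional "stem vectors" and IDF values.
      rng = np.random.default_rng(0)
      stem_vecs = {s: rng.normal(size=4) for s in ["kitab", "oqu", "mektep", "bala"]}
      idf = {"kitab": 2.0, "oqu": 1.5, "mektep": 1.8, "bala": 1.2}
      doc = doc_vector(["kitab", "oqu", "oqu", "bala"], stem_vecs, idf)
      # Rank candidate stems by similarity to the document vector.
      print(sorted(stem_vecs, key=lambda s: -cosine(doc, stem_vecs[s])))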
       
       
      A control and compensation strategy of the
      electric power recirculating ball steering system
      WEI Juan,LI Zi-zhuo,TIAN Hai-bo
      2020, 42(01): 138-143. doi:
      Considering the influence of multiple factors in the recirculating ball steering system, a control and compensation strategy for the electric power recirculating ball steering system is designed. A model of the electric power recirculating ball steering system is established, a current assist curve is designed, and a fuzzy PID control method is adopted to realize real-time control of the motor. In order to obtain better assist torque and compensate for losses in the system, a friction state observer is established based on the LuGre friction model using the observed system parameters, so as to obtain the superimposed friction compensation current. A joint simulation of the control system in Matlab/Simulink and CarSim, comparing the results before and after adding the friction compensation strategy, shows that the designed electric power steering current control system can comprehensively consider the friction, vehicle speed, and steering wheel angle while the vehicle is driving, and use the motor to generate appropriate assist power, so as to realize the driver's intention more accurately and make the return-to-center process more stable.
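      For reference, the standard LuGre friction model takes the following form (the paper's exact parameterization and observer design may differ):

          \frac{dz}{dt} = v - \frac{\sigma_0 \lvert v \rvert}{g(v)}\, z, \qquad
          g(v) = F_c + (F_s - F_c)\, e^{-(v/v_s)^2}, \qquad
          F = \sigma_0 z + \sigma_1 \frac{dz}{dt} + \sigma_2 v

      where z is the internal bristle deflection, v the relative sliding velocity, F_c and F_s the Coulomb and static friction levels, v_s the Stribeck velocity, and \sigma_0, \sigma_1, \sigma_2 the bristle stiffness, bristle damping, and viscous friction coefficients.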
       
      Research on the application of
topic model in short text
      HAN Xiao-yun,HOU Zai-en,SUN Mian
      2020, 42(01): 144-152. doi:
      This paper addresses the problem that traditional LDA-based topic models on short texts are susceptible to sparsity, noise, and redundancy. Firstly, the evolution of text feature representations and the development of topic models for short texts are reviewed. The generative processes of the Latent Dirichlet Allocation (LDA) model and the Dirichlet Multinomial Mixture (DMM) model, together with the corresponding Gibbs sampling parameter derivations, are systematically summarized. Regarding the optimal number of topics, a detailed comparison of four common optimization metrics is given. Finally, extensions of topic models over the past two years and a simple application to online public opinion analysis are discussed, and directions and focuses for future topic model research are pointed out.
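      For reference, the collapsed Gibbs sampling update that such reviews typically summarize for LDA samples the topic assignment of token i as

          p(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto \left(n_{d,k}^{-i} + \alpha\right) \cdot \frac{n_{k,w_i}^{-i} + \beta}{n_{k,\cdot}^{-i} + V\beta}

      where n_{d,k}^{-i} counts the tokens in document d assigned to topic k, n_{k,w_i}^{-i} the occurrences of word w_i assigned to topic k, and n_{k,\cdot}^{-i} the total tokens assigned to topic k (all counts excluding token i); V is the vocabulary size and \alpha, \beta are the Dirichlet hyperparameters. DMM uses an analogous update with a single topic per document.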
       
      A news keyword extraction method
      combining LSTM and LDA differences
      NING Shan, YAN Xin, ZHOU Feng, WANG Hong-bin, ZHANG Jin-peng
      2020, 42(01): 153-160. doi:
      Aiming at the influence of semantic information on TextRank, and considering both the high information density of news headlines and the coverage and difference characteristics of keywords, a news keyword extraction method combining LSTM and LDA differences is proposed. Firstly, the news text is preprocessed to obtain candidate keywords. Secondly, the topic-difference influence degree of the candidate keywords is obtained through the LDA topic model. Then, the LSTM model and the word2vec model are combined to calculate the semantic relevance between the candidate keywords and the headline. Finally, according to the topic-difference influence degree and the semantic-relevance influence degree, non-uniform transition probabilities are assigned among the candidate keyword nodes to obtain the final ranking of candidate keywords and extract the keywords. The proposed method combines keyword attributes such as semantic importance, coverage, and difference. Experimental results on the Sogou news corpus show that, compared with the traditional method, the proposed method significantly improves precision and recall.
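      A minimal sketch of the non-uniform transition idea: a weighted, PageRank-style power iteration over candidate keywords, in which the transition weights would combine topic difference and headline relevance. The weights below are placeholders, not the paper's formula.

      import numpy as np

      def biased_textrank(W, d=0.85, iters=100, tol=1e-8):
          """W[i, j] >= 0: non-uniform transition weight from candidate i to candidate j
          (e.g. co-occurrence strength scaled by topic difference and headline relevance).
          Returns a ranking score for every candidate keyword."""
          n = W.shape[0]
          P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # row-normalize
          r = np.full(n, 1.0 / n)
          for _ in range(iters):
              r_new = (1 - d) / n + d * (P.T @ r)
              if np.abs(r_new - r).sum() < tol:
                  break
              r = r_new
          return r

      words = ["economy", "policy", "market", "reform", "weather"]
      rng = np.random.default_rng(0)
      W = rng.uniform(size=(5, 5)); np.fill_diagonal(W, 0.0)   # placeholder weights
      scores = biased_textrank(W)
      print(sorted(zip(words, scores.round(3)), key=lambda x: -x[1]))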