  • Journal of the China Computer Federation (CCF)
  • China Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Performance analysis with Likwid
      on the FT-1500A processor
      PENG Lin1,FANG Jianbin1,DU Qi1,TANG Tao1,HUANG Chun1,YANG Canqun1,2
      2018, 40(07): 1147-1154. doi:
      Abstract ( 159 )   PDF (576KB) ( 296 )     

      We deploy Likwid, a performance analysis tool designed to be easy to use and low-overhead, on the FT-1500A processor. We mainly study the acquisition of hardware topology information, access to the performance monitoring unit (PMU), the use of performance analysis tools, and data analysis. hwloc is used to obtain the hardware information of the FT-1500A CPU, providing programmers with the topology and related summary information of this multi-core CPU. A kernel driver module is written to enable the PMU; event types are specified, and the corresponding hardware counters count the number of events during the execution of the target program. Based on simple codes and template micro-benchmarks, the performance analysis tools are used to collect data during program execution and to perform the performance analysis.

      A fair and balanced load scheduling based on
      Gini coefficient for service networks
      CHEN Huapeng1,2,LIN Jie1
      2018, 40(07): 1155-1164. doi:
      Abstract ( 118 )   PDF (1143KB) ( 203 )     

      We propose a supervised fair load-balancing scheduling algorithm for task distribution in an intra-area service network with a stable structure. The algorithm is inspired by the Gini coefficient of income distribution in economics: by monitoring the Gini coefficient of the network load distribution, fair scheduling of service tasks in the area is achieved. We give the system structure and algorithm steps needed to achieve fair task scheduling. Simulation results show that the proposed method can effectively complete task-balanced scheduling of service networks and has good global fairness.
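      The Gini coefficient the scheduler monitors can be computed directly from a load distribution. A minimal sketch (the function name and any thresholding policy built on top of it are illustrative, not from the paper):

```python
def gini(loads):
    """Gini coefficient of a load distribution: 0 = perfectly even,
    approaching 1 = load concentrated on a few nodes."""
    xs = sorted(loads)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard sorted-sum form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n,
    # with ranks i = 1..n over the ascending-sorted loads.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n
```

A scheduler in the spirit of the paper would recompute this after each assignment and prefer placements that keep the coefficient low.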

      Energy and performance optimization based on
      Kalman filtering in the cloud data center
      HE Li,TANG Li
      2018, 40(07): 1165-1172. doi:
      Abstract ( 124 )   PDF (682KB) ( 202 )
      For dynamic virtual machine consolidation in cloud data centers, it is necessary to track the running state of the servers, which is affected by load changes in the data center. Most existing methods consider only the CPU utilization change of the current server. We propose a CPU utilization prediction model based on Kalman filtering, formulate a load variation model of the cloud data center based on the variation coefficient of the CPU utilization of all servers, and describe the Kalman-filtering-based prediction procedure in detail. Moreover, the energy consumption and performance evaluation of the cloud data center are discussed. Finally, we conduct experiments on CloudSim with five workloads from PlanetLab. Experimental results show that Kalman filtering better reflects the change tendency of CPU utilization and maintains good computational performance with lower energy consumption.
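      A scalar Kalman filter with a random-walk state model is one simple way to realize the kind of CPU-utilization predictor described above; the noise variances below are placeholder values, not the paper's:

```python
class Kalman1D:
    """Minimal scalar Kalman filter for a slowly varying quantity
    (random-walk state model x_k = x_{k-1} + w, measurement z_k = x_k + v)."""

    def __init__(self, q=1e-3, r=1e-2, x0=0.0, p0=1.0):
        self.q, self.r = q, r    # process / measurement noise variances (assumed)
        self.x, self.p = x0, p0  # state estimate and its variance

    def step(self, z):
        # Predict: random walk, so the mean carries over and variance grows by q.
        p_pred = self.p + self.q
        # Update with the new utilization sample z.
        k = p_pred / (p_pred + self.r)   # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1 - k) * p_pred
        return self.x
```

Feeding the filter periodic utilization samples yields a smoothed estimate whose one-step prediction is simply the current state under the random-walk model.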
       
      A cloud service QoS prediction method
      based on time-aware ranking
      JIANG Bingting1,HU Zhigang1,MA Hua2,YAO Jing1
      2018, 40(07): 1173-1179. doi:
      Abstract ( 124 )   PDF (695KB) ( 169 )
      With the maturity of cloud computing theories and technologies, more and more cloud services are gaining momentum, so how to establish high-quality cloud services has become a critical problem in the field of cloud computing. QoS rankings provide users with valuable information for making an optimal cloud service selection from a set of functionally equivalent candidates. Obtaining the QoS values of a cloud service normally requires invoking the real-world candidates. To avoid this huge time consumption and resource waste, we propose a QoS prediction approach based on time-aware ranking. Unlike traditional QoS value prediction, the proposed method, based on QoS ranking similarity, examines the order of services for a particular user. The ranking similarity is calculated with time weights and combined with time preference into a synthetic similarity, from which the top-k neighbours are selected to provide QoS information support for the evaluation. Experiments on WS-DREAM, a real-world dataset, show that the QoS prediction method based on time-aware ranking has better prediction accuracy.
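      A time-weighted ranking similarity of the kind described can be sketched as a Kendall-style pairwise agreement in which each service pair is weighted by an exponential time decay. The decay form, function names, and data layout here are illustrative assumptions, not the paper's formulas:

```python
import math

def time_weight(age, lam=0.1):
    # Exponential decay: recent QoS observations count more (lam is assumed).
    return math.exp(-lam * age)

def ranking_similarity(qos_u, qos_v, ages, lam=0.1):
    """Time-weighted Kendall-style agreement between two users' QoS values
    over commonly invoked services. qos_u/qos_v: {service: value},
    ages: {service: age of the observation}."""
    common = [s for s in qos_u if s in qos_v]
    num = den = 0.0
    for i in range(len(common)):
        for j in range(i + 1, len(common)):
            a, b = common[i], common[j]
            w = time_weight(max(ages[a], ages[b]), lam)
            agree = (qos_u[a] - qos_u[b]) * (qos_v[a] - qos_v[b])
            # Concordant pairs add +w, discordant pairs add -w, ties add 0.
            num += w if agree > 0 else (-w if agree < 0 else 0.0)
            den += w
    return num / den if den else 0.0
```

Users with the highest similarity to the active user would then serve as the top-k neighbours supplying QoS evidence.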

       
      A new outlier detection method based on large data
      YANG Xiansheng1,JIANG Lei1,PENG Xiong2,ZHOU Qian1,LIU Jujun1
      2018, 40(07): 1180-1186. doi:
      Abstract ( 221 )   PDF (776KB) ( 356 )
      Outlier detection, which aims to find abnormal data in massive data, has two uses. First, as a data preprocessing step, it can reduce the impact of noise on a model. Second, in specific scenarios, it can find outliers accurately and help analyze the abnormal phenomena. At present, mainstream methods at home and abroad, such as KNN and ORCA, do not take global outliers, local outliers, and outlier clusters into account simultaneously, and they have difficulty handling large-scale data sets. Based on the Spark platform, we propose a new outlier detection model. To maximize the overall detection results, iForest, LOF, and DBSCAN are used for their respective high sensitivities. First, the three base classifiers are selected and their objective functions are changed. Then, the error-rate calculation method of the framework is modified, improved, and merged to form a new outlier detection model called ILDBOOST. The results show that the model fully takes into account the detection of global outliers, local outliers, and outlier clusters, improving precision and recall as a whole, and its effect is clearly better than that of current mainstream outlier detection methods.
       
      Hardware Trojan detection based on
      PCA and logistic regression
       
      ZHANG Jinling1,L Lei2
      2018, 40(07): 1187-1191. doi:
      Abstract ( 158 )   PDF (482KB) ( 207 )
      We propose a hardware Trojan detection model based on PCA and logistic regression to improve the detection of ICs implanted with hardware Trojans. PCA is employed to analyze the collected side-channel power signals, select the main features, remove the effect of noise, and simplify the computation. The logistic regression algorithm is adopted to train the classifier. We detect a hardware Trojan by calculating the logarithmic ratio between the probability that the IC contains a Trojan and the probability that it does not. An FPGA experiment platform is designed and established to validate the proposed model, and two indicators (precision and recall) are used to evaluate its performance. Experimental results show that the model can detect hardware Trojans effectively.
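      The decision statistic described above follows from the logistic model itself: the linear score w·x + b equals log(P(Trojan)/P(clean)). A minimal sketch with hypothetical weights (the real weights would come from training on the side-channel features):

```python
import math

def log_odds(features, weights, bias):
    """Logistic-regression score: w . x + b, which equals
    log(P(trojan) / P(clean)) -- the log-ratio decision statistic."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def detect(features, weights, bias):
    # Positive log-odds means P(trojan) > P(clean).
    return log_odds(features, weights, bias) > 0
```

The sigmoid of the score recovers P(Trojan), so thresholding the log-odds at zero is the same as thresholding the probability at 0.5.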
       
      A multi-domain access control scheme based on
      multi-authority attribute encryption for cloud storage
       
      YANG Xiaodong,YANG Miaomiao,LIU Tingting,WANG Caifen
      2018, 40(07): 1192-1198. doi:
      Abstract ( 143 )   PDF (663KB) ( 183 )     
      To solve the problems of collusion attacks and multi-domain data sharing in multi-authority attribute-based encryption schemes, we present a multi-domain access control scheme based on multi-authority attribute encryption for cloud storage. In the proposed scheme, the central authority does not participate in the generation of users' keys, which effectively prevents collusion between users and attribute authorities. To achieve both single-domain and multi-domain data sharing, the cloud server utilizes the linear secret sharing scheme and the proxy re-encryption technique to re-encrypt the data files. Our analysis shows that the proposed scheme has satisfactory performance in key generation and in file encryption and decryption. Furthermore, the scheme is adaptively secure under the q-parallel BDHE assumption.
       
      Related-key impossible differential cryptanalysis on ESF
      XIE Min,YANG Pan
      2018, 40(07): 1199-1205. doi:
      Abstract ( 140 )   PDF (963KB) ( 185 )     
      ESF is a lightweight block cipher based on a modified 32-round Feistel structure. To study ESF's ability to resist impossible differential attacks, we use related-key impossible differential cryptanalysis to analyze the security of ESF for the first time. Two 10-round related-key impossible differential paths are constructed based on the characteristics of the key expansion algorithm and the structure of the round function. A related-key impossible differential attack on 13-round ESF is then proposed by adding 1 round at the top and 2 rounds at the bottom of one 10-round path; the attack has a complexity of 2^23 13-round encryptions and requires about 2^60 chosen plaintexts, recovering 18 key bits. A related-key impossible differential attack on 14-round ESF is also proposed by adding 2 rounds at both the top and the bottom of the other 10-round path; it has a complexity of 2^43.95 14-round encryptions and requires about 2^62 chosen plaintexts, recovering 37 key bits.
       
      A dynamic update algorithm on (p,k) anonymity
      JIA Junjie,YAN Guolei,XING Licheng,CHEN Fei
      2018, 40(07): 1206-1212. doi:
      Abstract ( 104 )   PDF (611KB) ( 200 )     
      With the arrival of the big data era, the amount of data increases exponentially, and one-time release of all data can no longer meet the need for real-time data, so an incremental update algorithm on (p, k) anonymity is proposed to dynamically update anonymous publication data tables. To avoid privacy leakage when data is dynamically updated, the algorithm uses encryption technology to protect sensitive attributes. We create a temporary table and an interim table to aid the timely insertion of updated data. The incremental update algorithm on (p, k) anonymity addresses the problem that traditional algorithms cannot update data in real time, ensures the real-time property of the data, and uses encryption to enhance data privacy protection. Experimental results show that the algorithm achieves real-time data update with less information loss and a faster update rate.

       
      Video smoke detection based on
       dense optical flow and edge features
      LIN Chengzhong1,ZHANG Wei1,WANG Xin2,LIU Yanyan3
      2018, 40(07): 1213-1220. doi:
      Abstract ( 127 )   PDF (917KB) ( 457 )     

      To overcome the deficiencies of traditional fire smoke detection techniques and improve the detection rate of smoke detection algorithms, we propose a new smoke detection algorithm based on dense optical flow and edge features according to the characteristics of smoke movements. Firstly, the algorithm extracts the moving regions by combining the Gaussian mixture model (GMM) for background modeling with the frame difference method. Then by dividing the motion area into three parts, including the upper, middle and lower parts, the algorithm extracts optical flow vector features and edge orientation histograms from each part. Considering the continuous relevance of smoke movement in the time domain, the algorithm extracts the feature vectors of smoke from every three adjacent frames to enhance the robustness. Finally, the training and detection of smoke are implemented by using support vector machines. A high detection rate above 94% is obtained on the video test set. Experimental results show that the proposed algorithm can better adapt to complex environmental conditions in practical applications than other existing algorithms.

      A fast two-dimensional Otsu image segmentation
      algorithm based on wolf pack algorithm optimization
      CAO Shuang,AN Jiancheng
      2018, 40(07): 1221-1226. doi:
      Abstract ( 118 )   PDF (788KB) ( 194 )     
      Threshold selection in the traditional two-dimensional Otsu algorithm generally depends on exhaustive search. However, its long segmentation time and poor real-time performance make it unsuitable for real-time systems and reduce the efficiency of image segmentation. To reduce the running time of the two-dimensional Otsu algorithm, we use the wolf pack algorithm to find the best threshold vector. Each artificial wolf represents a feasible two-dimensional threshold vector, and the pack converges to the best threshold through repeated iterations of intelligent behaviors, including scouting, summoning, and beleaguering, as well as the information exchanged among wolves. Simulation results show that, compared with the two-dimensional Otsu algorithm optimized by standard PSO and the traditional two-dimensional Otsu algorithm, the proposed algorithm reduces segmentation time and improves the accuracy of image segmentation.
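      For reference, the exhaustive search that the wolf pack algorithm replaces looks as follows in the one-dimensional case (a sketch of the classic criterion, simplified from the paper's two-dimensional version):

```python
def otsu_threshold(hist):
    """Exhaustive-search Otsu on a grey-level histogram: pick the threshold
    that maximises between-class variance w0 * w1 * (mu0 - mu1)^2."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(len(hist)):
        w0 += hist[t]          # mass of the background class (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0        # mass of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

In the two-dimensional case the search space becomes a grid of threshold pairs, which is exactly why a swarm search such as the wolf pack algorithm pays off.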
       
      Two-stage non-local means denoising based on
      hybrid robust weight and improved method noise
      LU Haiqing1,2,GE Hongwei1,2
      2018, 40(07): 1227-1236. doi:
      Abstract ( 123 )   PDF (977KB) ( 179 )     
      Traditional nonlocal means denoising algorithm calculates similarity weight between image patches using exponential functions, which cannot accurately reflect the similarity between image patches. Method noise obtained by existing twostage nonlocal means methods is unsatisfactory, and the information contained  is insufficiently used. Aiming at the problems above, we propose a novel algorithm called twostage nonlocal means denoising based on hybrid robust weight and improved method noise. Firstly, we propose to use an enhanced hybrid robust weight function to calculate the similarity between image patches. Secondly, we use the predenoised image to construct improved method noise, which is then combined with a twostage framework. Finally, the new hybrid robust weight function as well as the improved method noise is applied to the twostage nonlocal means scheme. Experimental results show that the proposed algorithm can calculate the similarity between image patches more precisely and make the best of method noise, and it has better performance in denoising and preserving structure details than traditional ones.
       
      A small sample face recognition algorithm based on
      improved fractional order singular value decomposition
      and collaborative representation classification
       
      ZHANG Jianming1,2,LIAO Tingting1,2,WU Honglin1,2,LIU Yukai1,2
      2018, 40(07): 1237-1243. doi:
      Abstract ( 98 )   PDF (706KB) ( 211 )     
      As the number of training samples shrinks, the performance of traditional face recognition methods drops sharply. We propose an improved fractional-order singular value decomposition (IFSVDR) method combined with the patch-based CRC (PCRC) framework. Since performance suffers when training samples contain noise, we improve the SVD algorithm by using fractional orders to increase the weight of the main orthogonal bases and decrease the weight of the relatively small ones, reducing the influence of noise on classification results. We then use the PCRC to classify the patches reconstructed by the IFSVDR. Compared with classical sparse representation, the ensemble-learning idea enables the PCRC to deal with the small-sample-size problem, and the CRC has lower computational complexity than the SRC. Experiments on the extended Yale B and AR face databases show that the proposed IFSVDR combined with the PCRC achieves a high recognition rate, even with small sample sizes.
       
      Fast speaker recognition based on hierarchical recognition
      MAO Zhengchong,TU Wenhui
      2018, 40(07): 1244-1249. doi:
      Abstract ( 114 )   PDF (538KB) ( 150 )     
      As the number of speaker models increases, the recognition speed of a speaker recognition system decreases and can no longer meet real-time requirements. To solve this problem, we propose a fast speaker recognition method based on a hierarchical recognition model. The approximate KL divergence obtained by the variational method is used as the similarity measure between speaker models, and a speaker model clustering method is designed on this basis. Experimental results show that the proposed method ensures the validity of the clustering results and greatly improves the recognition speed of the system while incurring only a small loss in recognition rate.
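      The variational approximation builds on the closed-form KL divergence between individual Gaussian components. A univariate sketch of that building block (the full GMM-level variational approximation is not reproduced here):

```python
import math

def kl_gaussian(mu0, var0, mu1, var1):
    """Closed-form KL divergence KL(N(mu0, var0) || N(mu1, var1)) between
    two univariate Gaussians -- the per-component term that the variational
    method combines into a GMM-to-GMM similarity."""
    return 0.5 * (var0 / var1
                  + (mu1 - mu0) ** 2 / var1
                  - 1.0
                  + math.log(var1 / var0))
```

Because KL divergence is asymmetric, a clustering step would typically symmetrise it, e.g. by averaging KL(p||q) and KL(q||p).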
       
      A remote sensing image fusion algorithm based on
       guided filtering and shearlet sparse base
      WANG Wei1,2,ZHANG Jiae1,2
      2018, 40(07): 1250-1255. doi:
      Abstract ( 78 )   PDF (687KB) ( 168 )     
      To address the situation in which the spatial and spectral resolutions of remote sensing images cannot both be preserved, we propose a remote sensing image fusion algorithm based on a shearlet sparse base and guided filtering, combining multi-scale transforms with sparse representation. Based on the IHS fusion model, guided filtering is adopted for the fitting process. The brightness image and the panchromatic image are then decomposed by the shearlet transform to obtain the high- and low-frequency sub-band coefficients of the image. The low-frequency sub-images are sparsely coded to obtain the optimal sparse coefficients, and fusion is performed according to the criterion of maximal image-block activity. The corresponding high-frequency sub-images are fused based on regional energy and regional variance, and the fusion result is obtained via the inverse shearlet transform. Experimental results show that the proposed algorithm improves image sharpness and spectral retention and outperforms other algorithms in image integrity and detail.


       
      Scene text detection based on
      perpendicular regional regression networks
      YANG Guoliang,WANG Zhiyuan,ZHANG Yu,KANG Lele,HU Zhengwei
      2018, 40(07): 1256-1263. doi:
      Abstract ( 153 )   PDF (811KB) ( 182 )     
      As text detection in natural scenes differs from traditional object detection, directly using the region proposal network (RPN) method of Faster R-CNN for text detection has some restrictions. On the one hand, because of the variable length, background complexity, and diversity of text areas, a larger receptive field is required. On the other hand, in the RPN training phase there are a large number of false positives and missed detections in the selection of positive samples. We propose a method based on perpendicular regional regression networks. Firstly, the Hough method is used to adjust the slope of partially tilted scene images. Secondly, in the training phase, given the ground-truth box and a candidate anchor box, the samples whose IoU (intersection-over-union) in the vertical direction is greater than a threshold are selected as positive samples. Thirdly, classification and regression are performed on these vertical-direction positive samples. Finally, multiple adjacent anchors are combined to form a text area. Experiments on the ICDAR2011 and ICDAR2013 data sets show good detection results.
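      The vertical-direction IoU used for positive-sample selection reduces to interval overlap along the y axis. A minimal sketch (the box representation and the threshold value are assumptions for illustration):

```python
def vertical_iou(box_a, box_b):
    """IoU restricted to the vertical (y) direction: each box is a
    (y_min, y_max) interval; overlap / union of the two intervals."""
    inter = max(0.0, min(box_a[1], box_b[1]) - max(box_a[0], box_b[0]))
    union = (box_a[1] - box_a[0]) + (box_b[1] - box_b[0]) - inter
    return inter / union if union > 0 else 0.0

def is_positive(anchor, gt, threshold=0.7):
    # An anchor counts as a positive sample when its vertical IoU with the
    # ground-truth interval exceeds the threshold (threshold is assumed).
    return vertical_iou(anchor, gt) > threshold
```

Restricting the overlap test to the vertical direction tolerates the horizontal uncertainty of long text lines, which full-box IoU penalizes.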

      A cognitive computational model of
      generalized topic structure in Chinese text
      LU Dawei1,SONG Rou2,SHANG Ying3
      2018, 40(07): 1264-1274. doi:
      Abstract ( 108 )   PDF (1219KB) ( 187 )     
      Generalized topic structure (GTS) is the fundamental objective structure of Chinese text. We design a computational model that recognizes this structure based on the idea of a finite-state machine (FSM), preliminarily prove its validity on a large-scale corpus, and analyze its space and time complexity. The characteristics of this model are iterative control, synchronized output and input of punctuation clauses (P-clauses), no long-distance backtracking, limited backfilling, limited storage, and unchanged lexical order. These are also principles that humans obey when processing topic-comment information in text, so the model can be regarded as a mechanical model of the human cognitive process.
       
      Short-term passenger flow prediction in Shanghai
      subway system based on stacked autoencoder
      XU Yizhi1,2,PENG Ling1,LIN Hui1,2,LI Xiang1,2
      2018, 40(07): 1275-1280. doi:
      Abstract ( 178 )   PDF (760KB) ( 222 )     
      Urban public traffic networks carry huge passenger flows all the time, and the growth of passenger flow puts great pressure on public traffic networks and intelligent traffic dispatch. Short-term passenger flow forecasting at subway stations provides important technical support for decision-making in an intelligent subway dispatch system. Using historical metro card data, we propose a short-term passenger flow prediction method based on deep learning, which can extract inherent, deep features from the data. A deep network model is built on a stacked autoencoder (SAE) and pre-trained in a bottom-up fashion; after pre-training, the BP algorithm is used to fine-tune and update the whole network's parameters in a top-down fashion. Results on one month of metro card data from the Shanghai subway show that the proposed method outperforms the wavelet neural network (wavelet-NN) and the autoregressive integrated moving average (ARIMA) model in prediction performance.
       
      Short text similarity measure based on
      co-occurrence distance and discrimination
      LIU Wen1,MA Huifang1,2,TUO Ting1,CHEN Haibo1
      2018, 40(07): 1281-1286. doi:
      Abstract ( 122 )   PDF (749KB) ( 209 )     
      Aiming at the typical characteristics of short texts, namely severe sparseness and high dimensionality, we propose a short text similarity measure based on co-occurrence distance and discrimination. On the one hand, the method leverages the co-occurrence distance between terms in each document to determine co-occurrence distance correlation; on the other hand, we calculate the co-occurrence discrimination to improve the accuracy of the co-occurrence distance correlation, and then compute the relevance weight of the terms in the text. The similarity between two short texts is calculated from the term weights and the co-occurrence distances between terms. Experimental results show that the proposed method outperforms the baseline algorithm in both performance and efficiency of similarity calculation.
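      Once relevance weights have been assigned to terms, a text-to-text similarity can be formed as a weighted cosine over shared terms. The weighting scheme itself (co-occurrence distance and discrimination) is only stubbed here as precomputed dicts, so this is a generic sketch rather than the paper's exact measure:

```python
import math

def weighted_cosine(weights_a, weights_b):
    """Cosine similarity between two short texts represented as
    {term: relevance_weight} dicts; the weights are assumed to come from
    an upstream co-occurrence-based weighting step."""
    dot = sum(w * weights_b[t] for t, w in weights_a.items() if t in weights_b)
    na = math.sqrt(sum(w * w for w in weights_a.values()))
    nb = math.sqrt(sum(w * w for w in weights_b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

With informative term weights, texts sharing only their discriminative terms score higher than texts sharing only common filler terms.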


       
      A short text similarity calculation method based
      on semantics and syntax structure
      ZHAO Qian1,JING Qi1,LI Aiping1,2,DUAN Liguo1
      2018, 40(07): 1287-1294. doi:
      Abstract ( 107 )   PDF (588KB) ( 266 )     
      To improve the accuracy of short text semantic similarity calculation, we propose a new calculation method. Firstly, the short text is segmented into sentence units and syntactic dependency analysis is conducted. Similarity calculation between sentences is based on the similarity between words. We propose to take the emotional characteristics of words into consideration when calculating semantic similarity, and put forward a comprehensive method for word sense disambiguation. Based on the parts of speech of the words and the context, we leverage the HowNet semantic dictionary to calculate word semantic similarity. The semantic similarity of sentences is obtained by a weighted average of the semantic similarities between the words in a sentence according to sentence structure. Finally, we calculate the semantic similarity of short texts through a new method called the binary set method. Experimental results show that the accuracy of word similarity and short text similarity reaches 87.63% and 93.77% respectively, demonstrating the improvement in semantic similarity accuracy.