  • Journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Papers
      A sparse local scaling parallel
      spectral clustering algorithm based on MPI    
      LI Ruilin1,2 ,ZHAO Yonghua1,HUANG Xiaolei2,3
      2016, 38(05): 839-847. doi:

      The spectral clustering algorithm is widely used in many fields because it can identify nonconvex data distributions, effectively avoids local optima, and imposes no dimensionality limit on the data points. However, as the amount and dimensionality of data grow, it becomes necessary to reduce the algorithm's computation time while preserving clustering accuracy. Moreover, besides the data set itself, the factors affecting the clustering quality of the spectral clustering algorithm include the method of computing the distance matrix, the scaling parameter of the similarity matrix, and the form of the Laplacian matrix. To address these issues, we apply the message passing interface (MPI) parallel programming model to the spectral clustering algorithm. The t-nearest-neighbor method is used in the sparse approximate transformation of the Laplacian matrix, and the local scaling parameter is selected as the self-tuning scaling parameter. On this basis, we propose a parallel implementation called sparse local scaling parallel spectral clustering (SLSPSC), conduct experiments on four different data sets, and compare the results with those of the current parallel spectral clustering (PSC) in terms of running time and clustering quality. Experimental results show that the total computation time of SLSPSC drops greatly when calculating the Laplacian matrix, and clustering quality improves on some data sets.
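      As an illustration of the self-tuning idea, a local scaling similarity matrix (in the style of Zelnik-Manor and Perona's self-tuning spectral clustering, on which this line of work builds) can be sketched as follows; the function name, the Euclidean distance, and the treatment of the diagonal are our own assumptions, not the paper's code:

```python
import numpy as np

def local_scaling_affinity(X, t=3):
    """Similarity matrix with per-point local scaling parameters.

    sigma_i is the distance from point i to its t-th nearest neighbor;
    A[i, j] = exp(-d(i, j)^2 / (sigma_i * sigma_j)), so each point's
    scale adapts to its local neighborhood density.
    """
    # Pairwise Euclidean distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # sigma_i: distance to the t-th nearest neighbor (index 0 is the point itself).
    sigma = np.sort(d, axis=1)[:, t]
    A = np.exp(-d**2 / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(A, 0.0)  # no self-similarity
    return A
```

      In the full algorithm this matrix would additionally be sparsified by the t-nearest-neighbor rule before building the Laplacian.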

      Application fairness and energy aware
      scheduling on heterogeneous multi-core processors   
      YANG Yaqi,LUAN Zhongzhi,YANG Hailong,YANG Shu,QIAN Depei
      2016, 38(05): 848-856. doi:

      Heterogeneous multicore processors mostly consist of highperformance big cores and energyefficient small cores. Proper scheduling on such kind of processors can improve resource utilization and reduce energy consumption effectively. Fairnessaware scheduling proposed previously does not take DVFS into account. However, processors supporting DVFS become more and more common. It is necessary to extend fairnessaware scheduling to processors with multiple frequency/voltage states on each core. In this paper, we propose a scheduling on heterogeneous multicore processors which can reduce the energy consumption of the workload by keeping both the fairness of all threads and the total power of the processor satisfying the preset threshold. We propose a calculation method for the fairness among cores with multiple frequency/voltage states, and improve the existing performance prediction models which can estimate the performance of a thread for all configurations using hardware counters collected on the current configuration. Then the best scheduling policy is chosen, which can not only satisfy the fairness and the power threshold, but also maximize the energy efficiency at the same time. Some experiments are done with the Odroid XU3 development board and more experiments with Sniper simulator. Experimental results show that compared with the static, DVFSonly, swaponly scheduling, the proposal can reduce energy consumption obviously while the total running time is almost not affected.

      A remote sensing data division strategy in
      a high-performance atmospheric correction algorithm      
      BAI Lianhong1,2,XU Shu1,2,SI Yidan1,LI Shenshen1
      2016, 38(05): 857-862. doi:

      Highresolution satellite data is of  powerful potentiality in applications such as surface object identification. However, the related quantitative remote sensing usually requires an accurate atmospheric correction, which is quite timeconsuming. The serial and parallel processing of atmospheric correction in the PCbased cluster system is studied. The atmospheric corrections of HJsatellite CCD data over North China Plain in July 2012 are shown in this paper, and the execution time of each step in the serial and parallel processing is also analyzed, which proves the high feasibility of atmospheric correction parallelization. To avoid unbalanced loads and frequent communications, we design a data division strategy based on pixel features and compare the performance of three different division strategies, which indicates that the proposed strategy has higher reliability and can achieve a higher speedup ratio.

      A NoC floorplanning scheme based on a hybrid of
      particle swarm optimization and simulated annealing       
      SONG Gouzhi,TU Yao,ZHANG Dakun,WEN Yuebo
      2016, 38(05): 863-870. doi:

      Using intelligent optimization algorithms to solve the floorplanning problem in VLSI design has been a trend for many years. To overcome the slow convergence and low optimization efficiency of the simulated annealing (SA) algorithm, and to improve its search strategy and probabilistic acceptance of inferior solutions, we combine the features of the floorplanning problem for heterogeneous 3D NoCs and adopt the B*-tree to indirectly represent the structure of a floorplanning solution. We then introduce the improved SA algorithm into the particle swarm optimization (PSO) algorithm; the resulting hybrid algorithm possesses both the parallel-computation advantage of PSO and the global-optimization advantage of SA. Simulation results validate the superiority of the new hybrid algorithm over traditional SA.
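      The probabilistic acceptance of inferior solutions that the hybrid keeps from SA is the Metropolis rule; a minimal sketch for a minimization objective (function name is our own):

```python
import math
import random

def sa_accept(delta, temperature, rng=random):
    """Metropolis rule: always accept an improving move (delta <= 0);
    accept a worsening move with probability exp(-delta / T), which
    shrinks as the temperature T is annealed toward zero."""
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)
```

      In a PSO/SA hybrid, such a rule can decide whether a particle adopts a worse position, preserving SA's ability to escape local optima while PSO supplies population-parallel search.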

      Text classification based on fast autoencoder RELM  
      ZHOU Hangxia,YE Jiajun,REN Huan
      2016, 38(05): 871-876. doi:

      The regularized extreme learning machine (RELM) is a kind of single-hidden-layer feedforward neural network (SLFN). Unlike traditional neural network algorithms, the RELM randomly chooses the input weights and biases of the hidden nodes and analytically determines the output weights. Besides, the introduction of a regularization factor improves the generalization ability of the model. Aiming at the high dimensionality and multiple classes of text data, we propose a novel RELM based on a fast autoencoder (FARELM). The fast autoencoder neural network, improved by the RELM, is used for unsupervised feature learning, and the RELM then classifies the data after feature extraction. Experimental results show that the FARELM achieves faster learning and better classification accuracy.
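      The closed-form RELM training step can be sketched as follows, with the output weights solving the ridge-regularized least squares beta = (H^T H + I/C)^(-1) H^T T; the hidden size, tanh activation and regularization constant C are illustrative assumptions:

```python
import numpy as np

def relm_train(X, T, n_hidden=50, C=1.0, rng=None):
    """Regularized ELM: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Ridge solution: (H^T H + I/C)^(-1) H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```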

      A Z-ordering storage mapping algorithm and
      cache optimization for multidimensional data  
      HOU Fang,LU Jiyuan,HUANG Chenghui
      2016, 38(05): 877-884. doi:

      Multidimensional data are processed and accessed in linear form. Adjacent nodes in two-dimensional or higher space are mapped to nonadjacent addresses in one-dimensional storage hardware, and these varying address distances lead to varying access latencies. We propose a storage mapping algorithm based on the Z-ordering function, present and calculate several indicators of this algorithm, and compare them with those of the ordinary row-major mapping method. The proposed algorithm exhibits stronger locality, aggregating adjacent nodes within a small linear scope. An enlarged prefetching window within a fixed cache space maximizes this strength; therefore the cache hit rate increases and system performance improves.
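      The core of a Z-ordering storage mapping is bit interleaving (the Morton code); a minimal two-dimensional sketch:

```python
def z_order(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Morton code.

    Points adjacent in 2-D space map to nearby 1-D addresses, which is
    the locality property the mapping algorithm exploits.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return code
```

      Consecutive codes trace the recursive "Z" pattern, so a linear prefetch of one code range covers a compact 2-D block rather than a single long row.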

      A splice site prediction algorithm
      based on SNP and neural network  
      ZHAO Jing1,WEI Bin2,CHEN Mingshu1,ZHANG Xiaojuan1
      2016, 38(05): 885-890. doi:

      With the progress of the Human Genome Project, the genome structure must be determined from the information obtained, and recognition of splice sites is an essential part of this task. However, it is also a complicated problem, and satisfactory results have not yet been achieved. We design a splice site prediction algorithm based on a BP network and SNP data to discover the influence of SNPs on splicing. In addition, a new encoding method is used to convert DNA character sequences into decimal strings. Experimental results show the effectiveness of the SNP data and the new encoding method.

      An energy aware buffer management
      strategy in opportunistic networks 
      ZHANG Feng,WANG Xiaoming
      2016, 38(05): 891-897. doi:

      We propose an energy-aware buffer management strategy, which can reduce the energy consumption of nodes and balance energy allocation among different nodes in opportunistic networks. Under the condition of a limited buffer size, the proposed strategy dynamically adjusts the buffer size of nodes where messages are stored and forwarded according to the energy level of their neighbors, thus decreasing the energy consumed during storing and forwarding. A new node state, the inactive state, is introduced to avoid the message delivery failures caused by the sleep scheme in traditional energy-saving routing strategies. Simulation results show that the proposed strategy can save more than 50% of energy in opportunistic networks, and the standard deviation of residual energy among different nodes decreases by more than 80% in comparison with a traditional routing protocol with a sleep scheme.

      A structural hole identification algorithm in social networks
      based on overlapping communities and structural hole degree   
      FENG Jian,DING Yuanyuan
      2016, 38(05): 898-904. doi:

      Structural holes occupy key positions in social networks and play an intermediary role in information diffusion. To efficiently and accurately identify the nodes that occupy structural holes in social networks with community structure, we propose a structural hole identification algorithm based on overlapping communities and structural hole degree, which seeks the set of nodes with the greatest information and control advantages. The basic idea is to locate the overlapping nodes between communities first, and then calculate the structural hole degree of the overlapping nodes by measuring their nonredundancy through an integration of adjacency differences and community connection differences. A set of structural holes is finally identified according to the ascending order of the nodes' structural hole degree. Experiments on real data sets show that the proposed algorithm achieves the best identification accuracy and the lowest time complexity in comparison with the network constraint index algorithm, the betweenness centrality algorithm and the MaxD algorithm.
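      The network constraint index used as a comparison baseline (Burt's measure, where low constraint marks a structural-hole spanner) can be sketched for an unweighted, undirected graph as follows; the adjacency-dict interface is our own assumption:

```python
def burt_constraint(adj):
    """Burt's network constraint for an unweighted, undirected graph.

    adj: dict mapping node -> set of neighbors. A node with LOW total
    constraint spans structural holes: its contacts are not redundantly
    connected to each other.
    """
    def p(i, j):
        # Proportional investment of node i in neighbor j.
        return 1.0 / len(adj[i]) if j in adj[i] else 0.0

    constraint = {}
    for i in adj:
        total = 0.0
        for j in adj[i]:
            # Direct tie plus indirect ties through shared contacts q.
            indirect = sum(p(i, q) * p(q, j) for q in adj[i] if q != j)
            total += (p(i, j) + indirect) ** 2
        constraint[i] = total
    return constraint
```

      In a star graph, the hub has low constraint (it brokers all paths) while each leaf, wholly dependent on the hub, has constraint 1.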

      Review on the signal acquisition of BeiDou
      navigation satellite system      
      LI Dengao,LI Shuai,ZHAO Jumin,NIU Wenhui,LIU Jinqiang
      2016, 38(05): 905-913. doi:

      There have been many studies on the signal acquisition techniques of the BeiDou Navigation Satellite System (BDS) in recent years, but few focus on their classification, summary and performance comparison. Given this situation, we survey acquisition algorithms according to three categories: single-cycle, multi-cycle and aided acquisition. Building on an analysis of relevant methods in China and abroad, we examine the hardware implementation status of acquisition algorithms and compare their performance. Simulation results demonstrate their respective advantages and disadvantages. We finally point out the challenges and future development trends of BeiDou signal acquisition techniques, hoping to provide theoretical support for this field.

      A spectrum allocation algorithm based
      on auction theory and Gaussian process regression 
      LIU Juefu,YANG Jiang,WANG Jianxu,HU Jing
      2016, 38(05): 914-920. doi:

      We propose a spectrum allocation algorithm based on auction theory and Gaussian process regression for cognitive wireless networks. Based on the Vickrey-Clarke-Groves (VCG) auction model, the algorithm takes the communication quality requirements of cognitive users into consideration and formulates a more effective utility function. During the auction, cognitive users predict the bids of other cognitive users from historical auction data in order to optimize their bidding strategies, and the auctioneer assigns spectrum according to all the bids. Theoretical analysis and simulation results prove the effectiveness of the proposed algorithm, and verify that it can enhance both spectrum utilization and the earnings of cognitive users.

      Subset of field testing for industrial
      software and generation of test data       
      ZHAO Yiding1,FAN Yinting1,ZHENG Qiusheng1,CHU Jizheng2,LUO Jing1
      2016, 38(05): 921-931. doi:

      Field testing of industrial software may be insufficient due to risk and other constraints. Aiming at these practical problems, we propose an innovative approach to field testing. The concept of a testing subset and its related attributes are defined. Testing subsets are designed according to a comprehensive variety of factors concerning customers' production, which improves the initiative of field testing. Each testing subset can be driven by a corresponding auxiliary program for field testing. Preparation databases for different testing subsets are established from historical data, and the input data for active tests are designed, through multiple steps, from search results over these preparation databases combined with on-site factors. The integrated implementation process in practice is illustrated. A practical case of field testing on the optimal control system of a petrochemical production process proves that the proposed method can significantly improve the sufficiency and efficiency of field testing while keeping risks under control.

      A software defect prediction method based on topic model       
      ZHANG Zetao1,2,YE Lijun1,2,CHENG Wei1,2,GU Jun1,2
      2016, 38(05): 932-937. doi:

      Traditional defect prediction models consider the textual features of source code, comments, etc., while ignoring hidden topics such as technical aspects and business logic. To solve this problem, we present a new topic-based defect prediction model. The software corpus is assumed to be composed of a collection of different topics and technical aspects that lead to different defect tendencies, and a set of topic-based metrics is proposed. The LDA topic model is then adopted to generate the topics and corresponding parameters, and the prediction model is trained on both topic metrics and some traditional metrics. Experimental results show that the proposed method outperforms traditional defect prediction methods and remains stable through software evolution, which means it can be used efficiently for defect prediction tasks in software engineering.

      A DNAAFSA for  location allocation
      of distribution centers 
      FEI Teng1,2,ZHANG Liyi1,2
      2016, 38(05): 938-945. doi:

      Since the artificial fish swarm algorithm (AFSA) easily falls into local optima in later stages, its optimization accuracy and convergence rate are reduced. Aiming at this problem, we propose an improved algorithm, called DNA-AFSA, which applies the crossover and mutation operations of the DNA algorithm to the basic AFSA. The proposed algorithm enriches the diversity of the fish swarm, thus helping the artificial fish escape from local optima. The DNA-AFSA is applied to the location allocation problem of distribution centers, and simulation results show that it has better optimization capability.
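      The DNA-style operators grafted onto the AFSA are ordinary crossover and mutation over a binary encoding; a minimal sketch (the encoding, crossover point and mutation rate are illustrative assumptions):

```python
import random

def crossover(a, b, rng=random):
    """Single-point crossover of two binary-encoded individuals."""
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(a, rate=0.05, rng=random):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if rng.random() < rate else bit for bit in a]
```

      Applied to the encoded positions of artificial fish, these operators inject diversity that helps the swarm leave local optima.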

      A hybrid algorithm of particle swarm and ant
      colony for partner selection in supply chain        
      LU Zhigang,SHEN Kang
      2016, 38(05): 946-953. doi:

      To improve the accuracy and efficiency of partner selection in the supply chain, we propose a hybrid algorithm of particle swarm and ant colony optimization (PSACO), establish a directed graph path model based on supply chain nodes and directed arcs, and construct a multi-objective optimization model. A discrete particle swarm optimization (DPSO) algorithm is modified to obtain initial solutions to the partner selection problem. These initial solutions are then used to form the pheromone initialization matrix for ant colony optimization (ACO), whose search strategy is redefined to find the optimal solution. Experimental results demonstrate that the proposed PSACO algorithm achieves more accurate solutions with greater efficiency and better overall performance.

      A fruit fly coupled uniform design algorithm
      for optimizing SVM parameters      
      GAO Leifu,ZHAO Shijie,YU Dongmei,TU Jun
      2016, 38(05): 954-959. doi:

      Parameter optimization of support vector machines (SVMs) is an important research direction; however, there is a lack of systematic theoretical guidance for SVM parameter selection. The fruit fly optimization algorithm (FOA) can quickly find a good approximate optimal solution, but its subsequent iteration within the solution neighborhood prolongs the search time. We therefore propose a fruit fly coupled uniform design algorithm (FFUD), which couples the FOA with the uniform design (UD) method to solve the SVM parameter optimization problem. The proposed algorithm quickly obtains an approximate optimal solution via the parallel optimization capability of the FOA, then leaves the FOA and continues local optimization via the UD method to obtain a much better approximate optimal solution. Experimental results show that the proposed algorithm has better search efficiency and higher classification accuracy, and it is effective and feasible for SVM parameter optimization.
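      A minimal one-dimensional sketch of the FOA's smell-based search (the 1/distance "smell concentration" device); the swarm size, step length and function names are our own assumptions, and the UD refinement stage is omitted:

```python
import math
import random

def foa_minimize(f, iters=200, step=1.0, rng=random):
    """Minimal fruit fly optimization sketch for one decision variable.

    Each fly takes a random step from the swarm location; its candidate
    solution is the smell concentration S = 1/D, where D is the fly's
    distance to the origin. The swarm relocates to the best fly found.
    """
    x = y = rng.uniform(0, 1)  # initial swarm location
    best_s, best_val = None, float("inf")
    for _ in range(iters):
        for _ in range(20):  # 20 flies per generation
            xi = x + rng.uniform(-step, step)
            yi = y + rng.uniform(-step, step)
            d = math.hypot(xi, yi) or 1e-12
            s = 1.0 / d  # smell concentration = candidate solution
            val = f(s)
            if val < best_val:
                best_val, best_s = val, s
                x, y = xi, yi  # swarm follows the best fly
    return best_s, best_val
```

      In the FFUD, a run like this would supply the rough optimum around which the uniform design method lays out its refining search grid.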

      An improved HLBP texture feature
      method for pedestrian detection 
      ZHOU Shuren1,2,WANG Gang1,2,XU Yuefeng1,2
      2016, 38(05): 960-967. doi:

      Compared with the local binary pattern (LBP), the Haar-like LBP (HLBP) can effectively reduce noise in pedestrian detection by using local statistics. However, when calculating the HLBP texture feature value, the center point is not involved, so its information is lost. To solve this problem, we propose an improved HLBP (IHLBP) texture feature method, which includes the central point in the feature value calculation with the maximum weight. Before IHLBP feature extraction, we perform a two-level decomposition via the two-dimensional discrete Haar wavelet transform, obtaining three images at different scales. The IHLBP features of these three images are extracted, normalized and concatenated in series to form the final feature vectors. Experiments on the INRIA Person data set using a support vector machine (SVM) show that the proposed method effectively improves the recognition rate of pedestrian detection.
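      For reference, the standard 8-neighbor LBP code, plus an illustrative 9-bit variant showing the idea of giving the center pixel the maximum-weight bit; the paper's exact HLBP/IHLBP weighting over Haar-like local statistics may differ:

```python
def lbp_code(patch):
    """Standard 8-neighbor LBP code of a 3x3 patch (list of 3 rows)."""
    c = patch[1][1]
    # Neighbors in clockwise order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(neighbors) if p >= c)

def ihlbp_code(patch):
    """Illustrative 9-bit variant: the center pixel contributes the
    highest-weight bit by comparison against the neighborhood mean,
    so its information is not discarded (the idea behind IHLBP)."""
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = lbp_code(patch)
    if patch[1][1] >= sum(neighbors) / 8.0:
        code |= 1 << 8  # center bit carries the maximum weight
    return code
```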

      Hyperspectral image compression using
      mixed PCA/ICA in conjunction with JPEG2000   
      YE Zhen1,BAI Lin1,LIU Yu2,HE Mingyi3,NIAN Yongjian2
      2016, 38(05): 968-974. doi:

      The principal component analysis (PCA) method combined with JPEG2000 is widely used in hyperspectral image compression. However, the covariance matrix used by PCA only captures second-order statistics, and in many hyperspectral image analysis applications preserving second-order information alone is not sufficient: for anomalous pixels, for example, more subtle information needs to be captured using higher-order statistics. To solve this problem, we propose a hyperspectral image compression algorithm using mixed PCA/ICA in conjunction with the JPEG2000 standard. Firstly, PCA is performed on the original hyperspectral image to find the eigenvector matrix W_PCA corresponding to the first m largest eigenvalues. Then, ICA is applied to the remaining eigenvectors to obtain an n-column eigenvector matrix W_ICA. Finally, the mixed projection matrix, the original hyperspectral image and its mean vector are embedded into the JPEG2000 bitstream for compression. At different bit rates, the performance of mixed PCA/ICA+JPEG2000 is evaluated by the spatial correlation coefficient (ρ), signal-to-noise ratio (SNR) and spectral angle mapper (SAM). Experimental results reveal that the proposed algorithm is not only better at reducing spectral correlation, but also improves spectral fidelity and protects anomalous pixel information.
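      The PCA stage of such a pipeline can be sketched as below (the ICA stage on the residual eigenvectors is omitted); returning the projection matrix and the band means mirrors the need to embed both in the JPEG2000 bitstream so the decoder can reconstruct:

```python
import numpy as np

def pca_spectral(cube, m):
    """Spectral PCA for a hyperspectral cube of shape (pixels, bands).

    Returns the first m principal-component images, the projection
    matrix W (top-m eigenvectors of the band covariance) and the band
    means needed for reconstruction at the decoder.
    """
    mean = cube.mean(axis=0)
    centered = cube - mean
    cov = centered.T @ centered / (cube.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :m]          # top-m eigenvectors, largest first
    return centered @ W, W, mean
```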

      A nonlocal segmentation model based
      on local signed difference energy
      YAN Mo1,WANG Yu2
      2016, 38(05): 975-982. doi:

      We propose a nonlocal image segmentation model based on local signed difference (LSD) energy for images with intensity inhomogeneity. The model consists of a data-driven term based on LSD energy and a nonlocal total variation regularization term, and possesses both local separability and global consistency. Due to the convexity of the model, the split-Bregman iteration algorithm can be used in the numerical implementation, making it fast. Compared with classical local region-based active contour models (ACMs), the proposed model has several advantages: 1) it is less sensitive to initialization; 2) it is more efficient thanks to the split-Bregman iteration algorithm; 3) it can correctly segment images with fine textures and weak object boundaries. Experimental results show that the model segments images with intensity inhomogeneity correctly and is more robust than other models.

      An improved electrocardiogram QRS
      complexes detection algorithm         
      WANG Xiaohua,XU Xuejun,HE Qiuya
      2016, 38(05): 983-987. doi:

      When the wavelet transform is used to detect QRS complexes, the most critical part is matching the modulus maxima. We present a regional extreme value matching algorithm to detect R waves. The wavelet transform is realized by means of the quadratic spline wavelet basis function and the à trous algorithm, and the modulus maxima of the ECG signals are then calculated. The search region is defined by each positive maximum: taking the positive maximum as the starting point and the specified region as the search space, we search leftwards for the negative maximum. The zero crossing between the positive and negative maxima is the location of the R wave. We then detect the Q and S waves on the basis of the R wave detection, and the onset and offset points of the QRS complexes are detected by the maximum distance method. Related medical theories are also adopted to optimize the detection results, removing false detections and compensating for missed ones. Experiments on the MIT-BIH arrhythmia database verify that the proposed algorithm can detect QRS complexes accurately, with an average detection rate of 99.97%.
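      The pairing-and-zero-crossing step can be sketched on a wavelet detail signal w as follows; the single fixed threshold is our own simplification of the paper's region definition:

```python
def find_r_waves(w, threshold):
    """Locate R-wave positions as zero crossings between paired
    modulus maxima of the wavelet detail signal `w`.

    For each positive local maximum above `threshold`, search leftwards
    for the nearest negative local maximum below -`threshold`; the zero
    crossing between the pair marks the R wave.
    """
    r_peaks = []
    for i in range(1, len(w) - 1):
        # Positive local maximum above threshold: right end of the pair.
        if w[i] > threshold and w[i] >= w[i - 1] and w[i] >= w[i + 1]:
            j = i - 1
            while j > 0 and not (w[j] < -threshold
                                 and w[j] <= w[j - 1] and w[j] <= w[j + 1]):
                j -= 1
            if j <= 0:
                continue  # no matching negative maximum found
            # Zero crossing between the negative and positive maxima.
            for k in range(j, i):
                if w[k] <= 0 <= w[k + 1]:
                    r_peaks.append(k if abs(w[k]) <= abs(w[k + 1]) else k + 1)
                    break
    return r_peaks
```

      In the full algorithm this is followed by Q/S detection around each R position and by medically motivated rules that remove false detections and recover missed ones.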