  • Journal of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Papers
      Study on the improved OSRM routing algorithm
      with back-track feature for fat-tree networks             
      CAO Jijun,ZHENG Yi,WANG Kefei,XIAO Liquan
      2014, 36(06): 997-1004. doi:

      Fat-tree is one of the most important topologies for interconnection networks, and several routing algorithms have been proposed for it. Among them, the OSRM routing algorithm has been proved to be optimal. However, all of these algorithms ignore the ease of diagnosing link faults. Therefore, a novel routing algorithm based on OSRM, named BT-OSRM, is proposed. In BT-OSRM, the less-equal-great relationship between any pair of processing nodes is defined, and the routing path is chosen from the OSRM routing path and its back-track routing path according to this relationship. Further, the BT-OSRM2 and BT-OSRM3 algorithms are described in detail for the commonly used two-stage and three-stage fat-tree topologies, respectively. Theoretical analysis shows that BT-OSRM is as deadlock-free, load-balanced and optimal as OSRM, and in addition guarantees the back-track feature for routing between any two end nodes, thus facilitating the diagnosis of link faults in interconnection networks.

      Design of a detection system for 1553B
      bus devices based on ARM and FPGA         
      WANG Zhanling,ZHANG Dengfu,LI Yunjie
      2014, 36(06): 1005-1010. doi:

      The 1553B bus is extensively used in the military field, but existing detection equipment for 1553B bus devices cannot fully meet actual demands. In order to facilitate data communication between computers and 1553B bus devices and to improve detection efficiency, a detection system for 1553B bus devices is designed based on ARM and FPGA. The overall design idea of the system is given first, and the hardware and software design methods are introduced in detail. Using a modular method, the ARM module, the FPGA module and the interface are designed independently. The ARM module implements the USB and Ethernet interfaces, and the 1553B protocol IP core is implemented on the FPGA using a top-down method. In addition, the peripheral circuits and the power module are optimized. Simulation and verification show that the implemented receive and send functions conform to the design specification. Finally, practical testing on the hardware platform achieves effective results, proving that the design can meet actual demands.

      Optimizing the auto-vectorization in GCC
      based on data-alignment directives          
      LI Chunjiang,HUANG Juanjuan,XU Ying,DONG Yushan
      2014, 36(06): 1011-1017. doi:

      General purpose processors support multi-core parallelism on a chip and SIMD parallelism in each core. Although the GCC compiler performs auto-vectorization to exploit SIMD parallelism, the effect of auto-vectorization on OpenMP programs falls far short of expectations. Based on the implementation of OpenMP compilation in GCC, we extend the compiler directives with a data-alignment attribute directive for OpenMP programs. Our work enables GCC to make a more accurate estimate of the alignment of data accesses and thus optimizes auto-vectorization in GCC.
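
      As a minimal illustration of the kind of hint involved (not the paper's extended directive, whose exact syntax is not given in the abstract), the C sketch below uses GCC's __builtin_assume_aligned and the standard OpenMP simd aligned clause to promise the vectorizer that the buffers are 32-byte aligned, so it can emit aligned SIMD accesses instead of peeling or versioning the loop; compile with gcc -O3 -fopenmp.

```c
#include <stdlib.h>

#define N 1024

/* A sketch, not the paper's extended directive: telling GCC that the buffers
 * are 32-byte aligned lets the auto-vectorizer emit aligned SIMD loads and
 * stores instead of peeling or versioning the loop. */
void axpy(float *restrict x, float *restrict y, float a)
{
    float *xa = __builtin_assume_aligned(x, 32);  /* GCC alignment hint */
    float *ya = __builtin_assume_aligned(y, 32);

    /* The OpenMP 4.0 aligned() clause conveys the same promise to the SIMD
     * vectorizer. */
    #pragma omp simd aligned(xa, ya : 32)
    for (int i = 0; i < N; i++)
        ya[i] = a * xa[i] + ya[i];
}

int main(void)
{
    float *x, *y;
    /* Allocate 32-byte-aligned buffers so the promises above actually hold. */
    if (posix_memalign((void **)&x, 32, N * sizeof(float)) != 0 ||
        posix_memalign((void **)&y, 32, N * sizeof(float)) != 0)
        return 1;
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }
    axpy(x, y, 2.0f);
    free(x);
    free(y);
    return 0;
}
```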

      Research on parallelizing the
      TFIDF algorithm based on Hadoop           
      WANG Jingyu1,2,ZHAO Weiyan2
      2014, 36(06): 1018-1022. doi:

      To improve the efficiency of text classification on large data sets during training and testing, a TFIDF text classification algorithm based on the Hadoop distributed platform is proposed, and its implementation process is given. Using the MapReduce programming model, a parallelized TFIDF text classification algorithm that takes word locations into consideration is implemented. Comparative experiments are conducted between the improved TFIDF algorithm and the traditional serial algorithm in both standalone and cluster modes. The experimental results show that the improved TFIDF text classification algorithm achieves high-speed classification of massive data and optimizes performance.
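
      For reference, the standard TFIDF weight that such a parallelized classifier computes for each term-document pair is shown below (the paper's location-aware variant is not detailed in the abstract):

      \[ w_{t,d} = \mathrm{tf}_{t,d} \times \log \frac{N}{n_t} \]

      where tf_{t,d} is the frequency of term t in document d, N is the number of training documents and n_t is the number of documents containing t; in a MapReduce setting one job typically accumulates tf_{t,d} per document while another aggregates the document frequencies n_t.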

      Exploration for parallel computing of
      CRC16 checksum on FPGA           
      NING Ping
      2014, 36(06): 1023-1027. doi:

      To address the low efficiency of the conventional serial algorithm for the CRC16-CCITT checksum, the cause of its poor calculation efficiency is studied and a general-purpose parallel algorithm is introduced. The parallel algorithm is implemented in Verilog HDL and simulated in Quartus II. A Nios II custom instruction is used to demonstrate the performance improvement of the parallel algorithm over the serial algorithm. Finally, the basic parallel circuit is improved by means of multi-level pipelining and simulated in Quartus II again. The results reveal the difficulty of using pipelining to raise the Fmax of logic circuits that contain feedback structures, and a new method is proposed to solve this problem. The simulation results show that the improved multi-stage pipelined circuit significantly increases the circuit Fmax, and the computing efficiency of the CRC16-CCITT checksum is improved accordingly.
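
      For context, the C sketch below is the conventional bit-serial CRC16-CCITT reference (polynomial 0x1021; the common 0xFFFF initial value is assumed), whose one-shift-per-bit feedback loop is exactly the serial bottleneck that a parallel circuit removes by unrolling several of these steps into one combinational stage; the paper's Verilog implementation is not reproduced here.

```c
#include <stdint.h>
#include <stdio.h>

/* Bit-serial CRC16-CCITT reference (polynomial 0x1021; the widely used 0xFFFF
 * initial value is assumed here). Each input bit costs one shift/XOR step of
 * the feedback register, which is the serial bottleneck a parallel circuit
 * removes by collapsing 8 (or more) of these steps into one combinational
 * stage. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;          /* bring next byte into the register */
        for (int bit = 0; bit < 8; bit++)       /* one feedback step per bit */
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    const uint8_t msg[] = "123456789";
    printf("CRC16-CCITT: 0x%04X\n", crc16_ccitt(msg, 9));
    return 0;
}
```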

      Curve fitting compression method for
      massive multi-dimensional floating data storage        
      HOU Fang,LU Jiyuan,HUANG Chenghui
      2014, 36(06): 1028-1033. doi:

      Multidimensional data such as threedimensional position information is one of the major data objects of current high performance computer systems. Its date compression is an important technique to tackle the problem that lack of data storage space and I/O bandwidth cannot meet the demands of rapidly increasing massive multidimensional data. Existing algorithms are insufficient for multidimensional floating data compression. A curve fitting method for massive multidimensional data compression is proposed. Multidimensional floatingpoint data is projected onto a twodimensional coordinates. By using polynomial curve fitting, the original data is compressed by storing the polynomial coefficients. Sorting is introduced in the design of the algorithm as the data preprocessing means; thereby a smaller compression error is obtained. The theoretical analysis and experimental results show that the compression ratio of our proposed algorithm outperforms the existing algorithms with the same error rate.

      SuperStar: A scalable high-radix topology in
      high performance interconnection network          
      LEI Fei,DONG Dezun,LIAO Xiangke
      2014, 36(06): 1034-1041. doi:

      Advances in high-speed signal transmission and VLSI technology motivate the use of high-radix routers to address the new challenges of interconnection networks. Reducing the latency and cost of the interconnection network is crucial when designing a high-radix topology. Several typical high-radix topologies, such as fat tree, flattened butterfly and dragonfly, are studied. The performance and cost of these high-radix networks are analyzed theoretically, and a variation of them, called SuperStar, is presented, which offers favorable diameter and cost. Meanwhile, SuperStar takes advantage of high-radix routers to support larger systems. A high-radix interconnection network simulator named xNetSim, built on the OMNeT++ platform, is developed to evaluate these topologies. In the simulation, the network load is varied and the throughput and latency of the different topologies are measured. The performance of SuperStar is briefly analyzed in comparison with other high-radix interconnection network topologies.

      Research on fast intra prediction algorithm
      for high efficiency video coding
      DUAN MU Chunjiang,DONG Duo
      2014, 36(06): 1042-1049. doi:

      HEVC is currently the most advanced video coding standard, and its compression performance is better than that of other current video compression standards. Many major companies around the world are developing products based on it. However, the high computational complexity of the standard has become the bottleneck of its implementation; in particular, the complexity of its intra-frame prediction component is more than twice that of other video compression standards. Consequently, many fast intra-prediction algorithms have been proposed in the literature. This paper reviews, analyzes and simulates these algorithms, compares their performance, and points out their pros and cons. On this basis, we also describe how to choose a suitable algorithm for different implementation scenarios. Finally, future development directions for these fast algorithms are discussed.

      Research on index system of quantitative
      evaluation for network systems viability           
      WANG Pengfei,ZHAO Wentao,ZHANG Fan,ZOU Rongnian
      2014, 36(06): 1050-1056. doi:

      On the basis of four key attributes of network system viability, the analytic hierarchy process is used to divide the network system from top to bottom into three factor layers: software, hardware and running environment, and an extractable, observable and operable index system is proposed. An appropriate algorithm is applied to perform the quantitative calculation, and the results are analyzed through a graded evaluation, which clearly reflects the key attributes of network system viability. Experiments verify that the proposed index system for the quantitative evaluation of network system viability is feasible and that its evaluation results provide better guidance for engineering practice.

      A greedy algorithm for minimum
      seed set based on counting sort           
      ZHAO Xuefeng1,CHEN Xiangen2
      2014, 36(06): 1057-1063. doi:

      The Minimum Seed Set (MSS) problem is related to network influence maximization. The MSS problem seeks the smallest possible set of nodes such that, if initially activated, their influence is guaranteed to spread to the entire network under a given threshold model. For computing an MSS in a network with node thresholds, a new greedy algorithm based on counting sort is proposed: nodes are first sorted by the key value of node degree minus node threshold, and the nodes with the minimum key value are then processed. The time complexity of the new algorithm is analyzed, and it improves on the method based on a minimum heap. An upper bound on the average size of the computed seed sets in classic scale-free BA networks is derived under the simple majority threshold. Experimental results on synthetic and real-world datasets show that the proposed approach is efficient and, in particular, finds smaller seed sets in scale-free networks than the related counterpart algorithm.
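
      The C sketch below illustrates only the sorting step described above, under the assumption that the key (degree minus threshold) is non-negative and bounded by the maximum degree: nodes are bucketed by key in O(N + MAX_KEY) time, so the greedy loop can repeatedly take a node with the currently smallest key without maintaining a heap; the node data is a made-up toy example.

```c
#include <stdio.h>

#define N 6        /* number of nodes (toy example) */
#define MAX_KEY 16 /* assumed upper bound on the key, i.e. on node degree */

int main(void)
{
    /* Illustrative degrees and thresholds, not taken from the paper. */
    int degree[N]    = {3, 1, 4, 2, 5, 2};
    int threshold[N] = {2, 1, 1, 2, 3, 1};

    /* Counting-sort style bucketing: head[k] is the first node whose key is k,
     * next[v] chains the remaining nodes of the same bucket. */
    int head[MAX_KEY + 1], next[N];
    for (int k = 0; k <= MAX_KEY; k++) head[k] = -1;

    for (int v = 0; v < N; v++) {
        int key = degree[v] - threshold[v];   /* assumed to lie in [0, MAX_KEY] */
        next[v] = head[key];                  /* push v onto bucket 'key' */
        head[key] = v;
    }

    /* Visit nodes in non-decreasing key order, as the greedy step requires. */
    for (int k = 0; k <= MAX_KEY; k++)
        for (int v = head[k]; v != -1; v = next[v])
            printf("node %d  key %d\n", v, k);
    return 0;
}
```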

      Multi-path  routing in multi-factor
      wireless sensor monitoring network      
      ZHANG Zhiyuan,LIU Yuanjian,WANG Xiaodong
      2014, 36(06): 1064-1071. doi:

      A multi-factor wireless monitoring network involves the collection and transmission of graphics, images, video and other elements, and traditional sensor network routing performs poorly under such high loads. In order to improve energy efficiency, multi-path routing for such networks is designed and optimized: gradient-oriented heuristic information is used to guide the establishment of the routing path, which improves the energy efficiency of path establishment. Meanwhile, for both sparse and dense monitoring node distributions, a greedy forwarding path creation algorithm based on gradient information (GBGF) and a limited broadcast path creation algorithm based on gradient information (GBRB) are proposed. The simulation results show that gradient-based multi-path routing can greatly improve energy efficiency.

      Research of the network covert channel
      technique based on TCP protocol header          
      ZHANG Lingtong1,2,LUO Senlin2
      2014, 36(06): 1072-1076. doi:

      By studying the mechanism through which network covert channels are established, a network covert channel implementation method using the TCP protocol header is proposed. The firewall and intrusion detection system are penetrated by the following procedure: first, AES-encrypted secret information is embedded into the sequence number/acknowledgment number fields of the TCP header; second, TCP data packets are constructed with a web behavior simulation technique; finally, information transfer and remote control are implemented through this covert channel. A prototype system is also implemented. The experimental results show that the system has advantages such as high concealment, fast transmission speed and good extensibility, and that the transmission of private information can be achieved. The work also provides a theoretical basis and technical support for solving network information security problems.

      Study of security analysis and attack on a secure protocol          
      ZHONG Jun,WU Xueyang,JIANG Yimin,DUAN Guangming
      2014, 36(06): 1077-1082. doi:

      The handshake protocol of the web security protocol SSL is described in detail and its security is analyzed. Three logical flaws in the protocol are elaborated theoretically. Finally, by setting up a test environment, it is shown that a defect in the *** mode exists in the SSL protocol.

      Dynamic attributebased encryption scheme    
      DENG Yuqiao
      2014, 36(06): 1083-1087. doi:

      Access control over encrypted documents can be realized well by using attribute-based encryption schemes. However, the attributes in all existing attribute-based encryption schemes are static, which causes certain problems in practical applications. Based on conditions-based encryption, a dynamic attribute-based encryption scheme is developed. The scheme allows a user to compute his own attribute key once he satisfies a certain attribute and the authenticating party's digital signature is given. Finally, the security of the scheme is discussed.

      An artificial bee colony algorithm for the vehicle routing problem               
      WANG Zhigang,XIA Huiming
      2014, 36(06): 1088-1094. doi:

      An artificial bee colony algorithm is proposed to solve the vehicle routing problem. The algorithm uses a natural number coding of the food source and adopts neighborhood inversion to produce candidate food sources. It is applied to multiple instances of the vehicle routing problem and compared with other heuristic algorithms on a set of benchmark instances; the results show the effectiveness of the proposed artificial bee colony algorithm, which also offers a new approach to other combinatorial optimization problems.
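
      A minimal sketch of the neighborhood-inversion move on a natural-number-coded route is shown below (the customer IDs and random positions are illustrative, and the rest of the ABC loop, such as fitness evaluation and the onlooker and scout phases, is omitted): a candidate food source is produced by reversing the segment of the route between two randomly chosen positions.

```c
#include <stdio.h>
#include <stdlib.h>

#define LEN 8

/* Reverse route[i..j] in place: the neighborhood-inversion move used to
 * generate a candidate solution around the current food source. */
static void invert_segment(int *route, int i, int j)
{
    while (i < j) {
        int tmp = route[i];
        route[i] = route[j];
        route[j] = tmp;
        i++;
        j--;
    }
}

int main(void)
{
    /* A natural-number-coded route over 8 illustrative customers. */
    int route[LEN] = {1, 2, 3, 4, 5, 6, 7, 8};

    srand(42);
    int i = rand() % LEN, j = rand() % LEN;
    if (i > j) { int t = i; i = j; j = t; }

    invert_segment(route, i, j);

    printf("candidate after inverting [%d, %d]:", i, j);
    for (int k = 0; k < LEN; k++) printf(" %d", route[k]);
    printf("\n");
    return 0;
}
```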

      Optimizing parameters of fuzzy Petri net
      using differential evolution algorithm            
      ZHANG Chi1,YUE Xiaobo1,ZHOU Kaiqing2,MO Liping3
      2014, 36(06): 1095-1100. doi:

      Building a Fuzzy Petri Net (FPN) that overcomes the shortcoming of poor self-learning ability remains a significant open problem. To address it, the differential evolution algorithm is introduced into the procedure of learning FPN parameters. According to the characteristics of FPN, an improved differential evolution algorithm is proposed. The algorithm uses a chaotic strategy to generate the initial population and integrates self-adaptive factors with premature-convergence penalty strategies, enhancing population diversity while ensuring strong global convergence. Simulation experiments show that the parameters trained by the proposed algorithm are five times as accurate as those obtained by traditional algorithms.
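
      For reference, the classical DE/rand/1 mutation and the logistic map commonly used for chaotic initialization are shown below; whether the paper uses exactly these variants is not stated in the abstract:

      \[ v_i = x_{r_1} + F\,(x_{r_2} - x_{r_3}), \qquad z_{k+1} = \mu\, z_k (1 - z_k), \ \mu = 4 \]

      where r_1, r_2, r_3 are distinct random indices, F is the scaling factor, and the chaotic sequence z_k, rescaled into each parameter's range, seeds a well-spread initial population.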

      An efficient algorithm for top-k
      queries over continuous uncertain XML
      ZHANG Xiaolin,ZHENG Chunhong,LIU Lixin,L Qing
      2014, 36(06): 1101-1107. doi:

      Current top-k query algorithms for uncertain XML data cannot handle continuous uncertain data. The SPCProTJFast algorithm is proposed: it improves the traditional merging algorithm and combines it with filtering methods for continuous uncertain data to implement top-k queries over continuous uncertain XML data. To avoid the impact of an overly small probability threshold on the filtering effect, the HPCProTJFast algorithm is further proposed, which delays the handling of continuous-type nodes and visits them only after the entire twig that meets the probability condition has been acquired. Experimental results show that, in terms of execution time and filtering efficiency, both algorithms are more efficient than the ProTJFast algorithm that processes continuous uncertain data directly, and that HPCProTJFast is the most efficient.

      Uncertainty schema matching approach
      based on evidence theory         
      LI Guanfeng,CHEN Dongmei
      2014, 36(06): 1108-1113. doi:

      Due to the autonomy and heterogeneity of data sources, uncertainty is an inherent characteristic of schema matching. To improve the performance of schema matching, an uncertainty matching approach based on evidence theory is proposed. First, the schema space is divided into several schema subspaces according to attribute types. Second, different matchers are viewed as different sources of evidence, and mass distributions are defined on the basis of their match results. Third, an improved evidence theory is used to combine multiple matchers automatically, which reduces human involvement and handles situations where the results from different matchers conflict strongly. Finally, the mapping is generated by an improved Kuhn-Munkres algorithm. Experiments show that the proposed method is highly accurate and effective.
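
      The classical Dempster rule of combination that such evidence-theory matchers build on is, for two matchers with mass functions m_1 and m_2 (the paper's improved rule for high-conflict cases is not reproduced here):

      \[ m(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C) \]

      K measures the conflict between the two matchers; as K approaches 1 the classical rule becomes unstable, which is precisely the situation the improved combination is meant to handle.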

      Measurement of human body’s feature dimensions
      based on the national standards          
      WEN Peizhi1,MA Chao1,HU Junrong2,LI Lifang1
      2014, 36(06): 1114-1119. doi:

      To address the difficult problem of automatically extracting human feature points and dimensions from a human body point-cloud model, a key-dimension extraction technique based on national standard data is proposed. First, the data provided by the national standard GB 10000-1988 "Human dimensions of Chinese adults" is used to divide the feature search domains, and within each search domain, point coordinates and point-to-line projection distances are combined to extract body feature points. Second, girth dimensions are measured by computing the perimeter of the planar contour with an improved convex hull method. Finally, z-coordinate differences and distances between point pairs are used to calculate length dimensions. Experiments show that the method is little influenced by body type and has good robustness, fast calculation speed and high precision.

      Theoretical analysis of the geometric relations between
      human stereo visional space and virtual 3D space        
      LIAO Ming1,2,ZHOU Liangchen1,L Guonian1,SHENG Yehua1,PAN Yi2,CAO Yan2
      2014, 36(06): 1120-1126. doi:

      The virtual 3D space is a digital counterpart of the real world, while the human stereoscopic visual space is a 3D reconstruction of the real world or of a virtual scene formed by the human visual system. Traditionally, humans view their surroundings directly, and the geometric relationship between the stereoscopic visual space and the real world undoubtedly exists; but what is the relationship between the stereoscopic visual space and the virtual 3D space? According to the principle of binocular disparity, a geometric model of stereo vision is established for the virtual 3D scene and the corresponding visual 3D model is constructed. The screen disparity and retinal disparity are analyzed for an arbitrary spatial point. As a result, the geometric mapping between a point in the virtual space and its counterpart in the visual space is established with matrix algebra, indicating that the 3D visual model can be mapped to a virtual 3D model unambiguously and that the metric capability of the human visual system is intrinsic. The main contribution of this paper is a complete visual 3D model, which lays a foundation for quantitative analysis of traditional stereo vision. The results are useful theoretical explorations for stereoscopic presence, virtual interaction and metric applications.
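
      As one concrete instance of the binocular-disparity principle used above (the paper's full matrix mapping is not reproduced here), for a viewer with eye separation e sitting at distance D from the screen, a fused point perceived at depth Z from the eyes corresponds to an on-screen parallax

      \[ p = e\,\frac{Z - D}{Z} \]

      so p = 0 on the screen plane, p > 0 (uncrossed disparity) for points behind it, and p < 0 (crossed disparity) for points in front of it.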