  • A journal of the China Computer Federation (CCF)
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Papers
      A high performance network congestion
      avoidance protocol based on SDN architecture 
      CHAI Yantao,DONG Dezun,ZHANG Heying,ZHU Chengyang,LIAO Xiangke
      2016, 38(01): 1-10.

      In the field of high performance networks, congestion management is an important research problem and has a great influence on global performance. Traditional congestion management protocols usually use distributed congestion control mechanisms to alleviate network congestion, but since this approach relies on local information only, it cannot make full use of network resources. Recently, the software defined network (SDN) architecture has been proposed; it adopts a centralized controller and a multi-layer network technique, making it possible to obtain global information about the network. In this paper, we therefore put forward a high performance network congestion avoidance protocol based on the SDN architecture, called the overall situation congestion protocol (OSCP). The protocol improves on existing congestion avoidance protocols in congestion information acquisition and the transmission of control information, and it integrates adaptive transmission to perform network routing. Experimental results show that the proposed protocol can effectively prevent and resolve network congestion, reduce network latency, and increase throughput.
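      As a rough illustration of the centralized idea, the following Python sketch (with invented names; the paper's actual OSCP messages and routing rules are not reproduced here) shows how a controller holding global link utilization could select the least-congested path for a flow:

      ```python
      # Minimal sketch: pick the path whose most-loaded link is least utilized,
      # using the global view an SDN controller has. All names are illustrative.
      import heapq

      def least_congested_path(links, src, dst):
          """links: {(u, v): utilization in [0, 1]} for a directed graph."""
          adj = {}
          for (u, v), util in links.items():
              adj.setdefault(u, []).append((v, util))
          best = {src: 0.0}
          heap = [(0.0, src, [src])]            # (path bottleneck, node, path)
          while heap:
              bottleneck, node, path = heapq.heappop(heap)
              if node == dst:
                  return path, bottleneck
              for nxt, util in adj.get(node, []):
                  cand = max(bottleneck, util)  # bottleneck of the extended path
                  if cand < best.get(nxt, float("inf")):
                      best[nxt] = cand
                      heapq.heappush(heap, (cand, nxt, path + [nxt]))
          return None, float("inf")

      links = {("a", "b"): 0.9, ("a", "c"): 0.2, ("c", "b"): 0.3}
      print(least_congested_path(links, "a", "b"))  # (['a', 'c', 'b'], 0.3)
      ```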

      Parameter optimization for Spark jobs based
      on runtime data analysis        
      CHEN Qiaoan1,LI Feng1,CAO Yue1,LONG Mingsheng1,2
      2016, 38(01): 11-19.

      Fast-growing runtime data is one of the most complicated and valuable data resources in big data systems. Based on runtime data, developers can analyze software quality and discover important information about the software development model. As a distributed system, Spark generates a large amount of runtime data while running user applications, including log data, monitoring data, and graph representations of jobs. Developers can optimize system parameters with the help of runtime data. However, Spark has many different types of parameters, and it is difficult to identify their effects, which makes them hard to tune. In this paper we propose the concept of a runtime-data history database and a parameter optimization model based on searching this database. Experimental results validate that the proposed optimization model achieves good performance in recommending system parameters.
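      The core of such a search over historical runtime data can be pictured as a nearest-neighbor lookup. The toy sketch below invents a feature set and history records purely for illustration; the paper's actual model and features are not reproduced:

      ```python
      # Toy sketch: recommend Spark parameters by finding the most similar past
      # job (by feature distance) and reusing its best-performing configuration.
      import math

      history = [
          # (job features, parameters used, observed runtime in seconds)
          ({"input_gb": 10, "stages": 4}, {"spark.executor.memory": "4g"}, 120.0),
          ({"input_gb": 200, "stages": 12}, {"spark.executor.memory": "16g"}, 900.0),
      ]

      def recommend(features):
          def dist(a, b):
              return math.hypot(a["input_gb"] - b["input_gb"], a["stages"] - b["stages"])
          # Nearest historical job wins; ties broken by lower runtime.
          nearest = min(history, key=lambda rec: (dist(rec[0], features), rec[2]))
          return nearest[1]

      print(recommend({"input_gb": 12, "stages": 5}))  # {'spark.executor.memory': '4g'}
      ```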

      A transfer strategy for small files
      in cloud environment 
      ZHOU Langfeng1,ZHAO Pengfei1,PENG Junjie2
      2016, 38(01): 20-27.

      File transfer strategies on the Internet directly affect the efficiency of resource utilization and file transfer, and this is even more pronounced in cloud computing environments, where resources are highly diverse, especially when transferring a large number of small files. To address this problem, we present a transfer strategy for small files based on packing, as well as an optimization mechanism for file packets. Small files are packed before they are transferred and stored; packed files are unpacked and then stored at the server end. The number of I/O operations during transfer is therefore greatly reduced and the efficiency of file transfer is enhanced. Experimental results verify the correctness of the proposed strategy, which greatly improves both the efficiency of small file transfer in cloud environments and resource utilization.
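      A minimal sketch of the pack-then-transfer idea, using Python's standard tarfile module (paths and the archive name are illustrative, not the paper's implementation):

      ```python
      # Bundle many small files into one archive so the transfer performs a
      # single large I/O instead of many small ones; the receiver unpacks.
      import tarfile

      def pack(paths, archive="batch.tar.gz"):
          with tarfile.open(archive, "w:gz") as tar:
              for p in paths:
                  tar.add(p)
          return archive

      def unpack(archive, dest="server_store"):
          with tarfile.open(archive, "r:gz") as tar:
              tar.extractall(dest)

      # pack(["a.txt", "b.txt", "c.txt"]) is sent as one object; then
      # unpack("batch.tar.gz") restores the individual files at the server end.
      ```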

      A low-noise eight-phase phase-locked loop design
      SONG Yiliang,YUAN Hengzhou,LIU Yao,LIANG Bin,GUO Yang
      2016, 38(01): 28-32.

      To meet the wide frequency range needs of digital systems, we design a wide output range, low phase jitter eight-phase phase-locked loop in a 0.13 μm process. We first optimize the loop bandwidth through mathematical modeling to reduce the loop noise at the system level. A feed-forward transfer tube unit is introduced to increase the oscillation frequency and to reduce the oscillator's phase noise. Finally, we leverage a D flip-flop with a pseudo-static structure to reduce the power consumption of the phase detectors and dividers and to maximize noise immunity. Simulation results show that the phase noise is -95 dBc/Hz @ 1 MHz and the power FOM is 4.5 pJ @ 2 GHz when the VCO output frequency is 1.2 GHz.

      Data switching cost between NoSQL databases 
      GUO Kun,SONG Jie,WANG Jieping,ZHU Zhiliang
      2016, 38(01): 33-40.

      NoSQL databases are more and more widely used in the IT industry due to their excellent performance on business processes in big data environments. However, data organization differs between NoSQL databases because each uses its own data model, which incurs impedance when switching data between them. Business data encapsulated in a source database's data model may not be directly parsed by the target database, so an additional model adaptation operation is needed. In this paper we define a data description model, which captures the features of NoSQL databases' data models and describes the way NoSQL databases organize data, and we then define a corresponding distance evaluation algorithm. Based on the data description model and the distance evaluation algorithm, a unified data model can be designed and developed that can be transformed into the data models of all NoSQL databases during the data switching process; the related business code then only needs to align with the unified data model rather than with each concrete data model.
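      One possible reading of the data description model, sketched for illustration only: encode each database's data-organization features as a small record and measure model distance as the number of mismatched features. The feature names and values below are assumptions, not the paper's actual model:

      ```python
      # Feature-vector view of NoSQL data models with a mismatch-count distance.
      FEATURES = ("structure", "nesting", "typed_values", "secondary_index")

      models = {
          "key-value": {"structure": "kv", "nesting": False, "typed_values": False, "secondary_index": False},
          "document":  {"structure": "doc", "nesting": True, "typed_values": True, "secondary_index": True},
          "column":    {"structure": "col", "nesting": False, "typed_values": True, "secondary_index": True},
      }

      def model_distance(a, b):
          # Count how many data-organization features differ between models.
          return sum(models[a][f] != models[b][f] for f in FEATURES)

      print(model_distance("key-value", "document"))  # 4
      print(model_distance("document", "column"))     # 2
      ```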

      A ps-level optical pulse generator in an optical transceiver
      XU Chaolong,LI Jinwen,LUO Zhang,WANG Kefei,PANG Zhengbin
      2016, 38(01): 41-45.

      As high-speed optical switches become more and more popular in optical interconnection, the speed of the optical switch directly determines the bit rate of the optical links, so the performance and integration requirements on the driver circuit, which drives the optical switch to produce a narrow optical pulse within a relatively long cycle, become more stringent. Building on breakthroughs in single-chip opto-electrical integration and advances in optical pulse train generator techniques, we propose an opto-electrically integrated ps-level optical pulse generator for optical transceivers, based on an optical serializer/deserializer (SerDes). The generator, built from CMOS integrated circuits, produces narrow optical pulses with a precisely adjustable width and is used in transceivers with optical SerDes. In the SMIC 0.13 μm CMOS library, the generator outputs pulses as narrow as 25 ps, the supply voltage ranges from 1.4 V to 2.5 V, and the clock frequency ranges from several kHz up to 4 GHz. In addition, the circuit is easy to port to different CMOS process platforms.

      A correlator implementation method
      based on Hadoop+CUDA       
      SU Li,SUN Yanmeng,ZHANG Bowei,YANG Xianbo,ZHU Ying
      2016, 38(01): 46-51.

      Based on the characteristics of the 21CMA correlator algorithm, we propose a novel, highly efficient method to implement this specific algorithm on the Hadoop+CUDA platform, and it outperforms the MPI-only and MPI+CUDA solutions. The proposed method improves the parallel model of the correlator. Compared to the earlier MPI solution, it greatly enhances running performance by utilizing the advantages of GPUs for FFT processing, vector multiplication, and vector addition. The method employs the Hadoop software architecture, a big data platform, using the Hadoop Streaming tool to run non-Java programs in parallel on distributed systems, so linear speedups on clusters are easily obtained. In addition, the result data and procedure logs can be flexibly managed in the Hadoop HDFS parallel file system, which provides a good foundation for future big data analysis.
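      For flavor, a minimal Hadoop Streaming mapper in this spirit might look as follows. numpy's CPU FFT stands in for the CUDA FFT the paper offloads to GPUs, and the tab-separated input format is an assumption:

      ```python
      # Streaming mapper: read a baseline pair per line, emit its FFT-based
      # cross-correlation. Input: key \t x1,x2,... \t y1,y2,...
      import sys
      import numpy as np

      for line in sys.stdin:
          key, xs, ys = line.rstrip("\n").split("\t")
          x = np.array([float(v) for v in xs.split(",")])
          y = np.array([float(v) for v in ys.split(",")])
          # Circular cross-correlation via FFT (GPU kernel stand-in).
          corr = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y))).real
          print(key + "\t" + ",".join("%g" % v for v in corr))
      ```

      Such a mapper is what Hadoop Streaming runs in parallel across the cluster, which is how non-Java code participates in the MapReduce job.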

      An efficient intercore synchronization
      transmission method for DMA 
      TIAN Yuheng,MA Sheng,LU Jianzhuang,YANG Liu
      2016, 38(01): 52-56.

      The highspeed processing for HPL benchmark programs urgently needs to create a DMA transmission way to improve the processing speed of the algorithm as much as possible. Meanwhile, the processing speed of the GEMM algorithm is also determined by the DDR's memory access efficiency. However, the GEMM algorithm operation accounts for 90% in that of the HPL benchmark programs. Thus in order to improve the processing speed of this algorithm, combining the characteristics of the DMA transmission mode, we elaborate a pointtopoint transmission mode design scheme based on intercore synchronization. The actual measurements show that, compared with general transmission modes, the transmission speed and the memory access efficiency of the  DMA is improved by 256.74%, which greatly reduces the time overhead and meets the processing need of the HPL algorithm.

      A dynamically coordinated allocation mechanism of
      cloud computing resources based on benefit game  
      LI Weiping1,2,WU Haiyan2,YANG Jie1
      2016, 38(01): 57-61.

      In order to effectively improve the utilization efficiency of computing resources and reduce the cost of running tasks, we propose a dynamically coordinated allocation mechanism for cloud computing resources based on a benefit game. We utilize a time matrix and a cost matrix as the task effectiveness measures, design a benefit game model, and employ its effectiveness calculation equation to solve the model, thus obtaining the best resource allocation policy. Our mechanism can guarantee a reasonable on-demand allocation of computing resources that meets all the resource requirements for normal task execution, while simultaneously maximizing the benefits of task execution. Comparative simulations and experimental results show that the proposed mechanism achieves better results in aspects such as task completion time, average cost of task execution, and the success rate of task completion.
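      As a toy illustration of combining the time and cost matrices into a benefit measure (weights and matrices invented; the game-theoretic equilibrium computation is omitted):

      ```python
      # Combine T and C into a per-(task, resource) utility; each task takes
      # the resource with the highest utility in this simplified stand-in.
      T = [[4.0, 2.0],   # T[i][j]: time of task i on resource j
           [3.0, 5.0]]
      C = [[1.0, 3.0],   # C[i][j]: cost of task i on resource j
           [2.0, 1.0]]
      w_time, w_cost = 0.6, 0.4

      def utility(i, j):
          return -(w_time * T[i][j] + w_cost * C[i][j])

      allocation = [max(range(2), key=lambda j: utility(i, j)) for i in range(2)]
      print(allocation)  # [1, 0]: task 0 -> resource 1, task 1 -> resource 0
      ```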

      A method of using historical calculation data
      efficiently in evolutionary algorithms 
      YAN Pan,TAN Ying,ZHANG Jianhua
      2016, 38(01): 62-66.

      Evolutionary algorithms have been widely used in practical problems for their remarkable system modeling and spatial searching capabilities. However, evolutionary algorithms suffer from repetitive computation of individual fitness; especially when solving complex engineering problems, fitness calculations can consume a large amount of time. A hash table features high-speed access, and can be used to store historical fitness data and eliminate the repetitive computation of individual fitness during the optimization process without affecting the results. Simulation results demonstrate the efficiency of the proposal.
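      The idea translates almost directly into code; a minimal sketch with a placeholder fitness function:

      ```python
      # Cache fitness values in a hash table keyed by the genome, so
      # re-evaluations of previously seen individuals become lookups.
      fitness_cache = {}

      def cached_fitness(genome, expensive_fitness):
          key = tuple(genome)            # genomes must be hashable to be keys
          if key not in fitness_cache:
              fitness_cache[key] = expensive_fitness(genome)
          return fitness_cache[key]

      f = lambda g: sum(x * x for x in g)   # stand-in for a costly simulation
      print(cached_fitness([1, 2, 3], f))   # computed: 14
      print(cached_fitness([1, 2, 3], f))   # cached:   14
      ```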

      End-to-end delay in software defined networks            
      HUANG Xiaopeng1,HUANG Chuanhe1,NONG Huangwu1,Yang Danfeng1,YANG Jinling2
      2016, 38(01): 67-72.

      With the deployment of large scale software defined networks (SDNs), metrics used to monitor and measure network performance become increasingly important. End-to-end delay is an important such indicator, and a number of methods for end-to-end delay calculation have been proposed. These methods can be divided into two groups, active detection and passive detection, each with advantages and disadvantages. We propose a method combining active and passive detection. We first calculate the first packet's delay, and then calculate the difference in end-to-end delay between adjacent packets, thus obtaining all the end-to-end packet delays. Experimental results show that our approach achieves high calculation performance without clock synchronization.
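      The delay derivation can be sketched directly. The clock offset between the two measurement points cancels in the differences, which is why no synchronization is needed; the timestamps below are invented:

      ```python
      # Actively measure the first packet's delay d1, then derive later delays
      # from inter-packet timestamp differences taken on each side's own clock.
      def end_to_end_delays(d1, t_in, t_out):
          """t_in / t_out: per-packet timestamps at ingress / egress, each from
          its own (unsynchronized) clock."""
          return [d1 + (t_out[k] - t_out[0]) - (t_in[k] - t_in[0])
                  for k in range(len(t_in))]

      t_in = [0.000, 0.010, 0.020]     # sender clock
      t_out = [5.003, 5.014, 5.025]    # receiver clock, offset by ~5 s
      print(end_to_end_delays(0.003, t_in, t_out))  # ~[0.003, 0.004, 0.005]
      ```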

      Security assessment of web applications
      based on software attack surface 
      ZHANG Yufeng,LOU Fang,ZHANG Li
      2016, 38(01): 73-77.

      Web applications have become the principal model of Internet and enterprise information management. With their popularity, attackers launch malicious attacks via their vulnerabilities, making the security assessment of web applications a major information security concern. In this paper, we first derive a formal description of the software attack surface from the business logic of web applications, and then construct an attack graph model. On this basis, we realize the security assessment of web applications. Building on the current general vulnerability detection model, the proposed security assessment model introduces correlation analysis of business logic security. Our proposal overcomes the defects of current business logic assessment testing models, and achieves fast and comprehensive security assessment of web applications.
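      One small illustration of the attack graph view (the graph itself is invented): entry points from the attack surface are sources, business assets are targets, and any path between them is a candidate attack chain to assess:

      ```python
      # Enumerate attack chains as simple paths in a directed attack graph.
      attack_graph = {
          "login_form": ["session"],      # e.g. weak authentication
          "search_api": ["database"],     # e.g. injection
          "session": ["user_data"],
          "database": ["user_data"],
      }

      def attack_paths(g, src, target, path=None):
          path = (path or []) + [src]
          if src == target:
              yield path
          for nxt in g.get(src, []):
              if nxt not in path:
                  yield from attack_paths(g, nxt, target, path)

      print(list(attack_paths(attack_graph, "search_api", "user_data")))
      # [['search_api', 'database', 'user_data']]
      ```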

      A network selection algorithm in heterogeneous multi-network
      coexistence of electric power wireless communications      
      ZHUO Ling,NIE Jing,XIAO Jingwei,YUAN Yang,HU Xin,CHEN Ke
      2016, 38(01): 78-83.

      Combining the needs of the electric power business with the performance of dedicated wireless networks, we propose a network selection algorithm for heterogeneous multi-network coexistence, covering neighbor networks, LTE dedicated wireless networks, WiMAX dedicated wireless networks, and dedicated wireless networks in the 230 MHz band. The algorithm is based on the AHP-TOPSIS method. We first construct the decision matrix that captures the relationship between candidate networks and decision attributes, and obtain the normalized decision matrix through a normalization process. The AHP algorithm is employed to define the individual weights according to the relationship between decision attributes and network performance under different business types, and we then apply these weights to the normalized decision matrix to obtain the weighted normalized decision matrix. We also improve the TOPSIS algorithm: we determine the positive and negative ideal values from the weighted normalized decision matrix, and use the minimum distance between the positive ideal value and each candidate network, together with the maximum distance between the negative ideal value and each candidate network, to determine the ideal network. Finally, we calculate the effective distance between each candidate network and the ideal network, rank the candidate networks, and select the best one, which avoids the rank anomalies that may occur in the original TOPSIS algorithm. Simulation results show that the algorithm ranks candidate networks correctly and selects the best network; moreover, it not only meets service QoS requirements but also improves resource utilization efficiency.
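      A compact sketch of the TOPSIS steps described above (decision matrix, weights, and attribute directions invented; the paper's AHP weighting and its anomaly-ranking fix are omitted):

      ```python
      # TOPSIS: normalize, weight, form positive/negative ideal solutions,
      # rank candidates by relative closeness to the ideal.
      import numpy as np

      D = np.array([[0.8, 20.0, 5.0],    # rows: candidate networks
                    [0.6, 50.0, 2.0],    # cols: e.g. bandwidth, delay, cost
                    [0.9, 30.0, 8.0]])
      benefit = np.array([True, False, False])  # larger-is-better per column
      w = np.array([0.5, 0.3, 0.2])             # e.g. AHP-derived weights

      R = D / np.linalg.norm(D, axis=0)         # vector normalization
      V = R * w                                 # weighted normalized matrix
      ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
      anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
      d_pos = np.linalg.norm(V - ideal, axis=1)
      d_neg = np.linalg.norm(V - anti, axis=1)
      closeness = d_neg / (d_pos + d_neg)
      print(np.argsort(-closeness))             # e.g. [0 2 1], best first
      ```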

      kterminal network reliability analysis with length constraint  
      SONG Feng,MO Yuchang,PAN Zhusheng,ZHONG Farong
      2016, 38(01): 84-88.

      Kterminal network reliability analysis with length constraint has many applications in online video and realtime communications. Basically, we calculate kterminal network reliability under the condition that the length between any terminalpair of kterminals is within a given time delay constraint called D. We study k terminal network reliability with length constraint, and propose a truncationbased path constraint method on the basis of the traditional terminalpair and kterminal network reliability algorithms. We also build a binary decision diagram (BDD) model to analyze kterminal network reliability with constraints. The proposed algorithm has strong practical significance for k terminal pointtopoint information flow to accomplish transmission under a certain time delay. Experimental results validate the feasibility and effectiveness of this method.

      A niche estimation of distribution quantum genetic
      algorithm and its simulation analysis 
      LIU Zhen,PENG Jun,LIU Yong
      2016, 38(01): 89-94.

      The traditional quantum genetic algorithm converges slowly and easily becomes trapped in local optima. To overcome these problems and enhance the convergence performance of the quantum genetic algorithm, we propose a novel niche estimation of distribution quantum genetic algorithm integrated with the fitness sharing function method. The quantum chromosome is rotated in two steps in every subpopulation: the first step uses the multi-granularity mechanism and the second step uses the marginal product model (MPM) rotation. The quantum chromosome crossover based on the MPM enhances the diversity of the population and avoids the loss of good models. The convergence properties are also analyzed, and entropy-based convergence criteria are proposed. Function simulation results show that the proposed algorithm outperforms other traditional algorithms.
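      For background, the quantum-chromosome machinery such algorithms build on can be sketched as follows: a generic qubit measurement and rotation-gate update. The paper's niche, multi-granularity, and MPM layers are omitted, and the rotation angle is a placeholder:

      ```python
      # Qubits are (alpha, beta) amplitude pairs; measurement samples a bit
      # string; a rotation gate nudges amplitudes toward the best individual.
      import math, random

      def measure(chrom):
          return [1 if random.random() < b * b else 0 for _, b in chrom]

      def rotate(chrom, bits, best_bits, dtheta=0.05 * math.pi):
          out = []
          for (a, b), x, xb in zip(chrom, bits, best_bits):
              theta = dtheta if x != xb else 0.0   # rotate only mismatched bits
              t = theta if xb == 1 else -theta     # toward the best bit's value
              out.append((a * math.cos(t) - b * math.sin(t),
                          a * math.sin(t) + b * math.cos(t)))
          return out

      chrom = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * 8  # uniform superposition
      bits = measure(chrom)
      chrom = rotate(chrom, bits, best_bits=[1] * 8)
      ```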

      An improved spanning tree covering algorithm
      for multirobot online map coverage 
      CHEN Zetao,DAI Xuefeng
      2016, 38(01): 95-101.

      We discusses multirobot online map coverage. We introduce the market auction algorithm to improve the spanning tree covering algorithm of a single robot, make robot team diffuse a spanning tree and cover the map completely along their own spanning trees. Two simulation experiments are conducted and the results of two different environment map coverage show that the improved spanning tree covering algorithm can make the robot team accomplish online map coverage with less time and less repeated coverage.

      Modeling and analysis of anti-aircraft C3I system
      based on hierarchical fuzzy colored Petri nets 
      CHEN Xingyou1,2,YUE Xiaobo1,CAO Wei1,ZHOU Kaiqing3
      2016, 38(01): 102-107.

      Through analyzing the composition and operating mode of a sample anti-aircraft C3I system, we propose a hierarchical fuzzy colored Petri net (HFCPN) tailored to the characteristics of this system, namely its complex structure, multiple resources, and multiple fuzzy attributes, and we describe the reasoning algorithm in detail. We build a model of the anti-aircraft C3I system based on the HFCPN, and thus propose a new method for setting up the analysis of anti-aircraft C3I systems. Finally, an application case validates that the method is capable of describing the system's distribution, concurrency, asynchrony, etc., as well as analyzing various fuzzy problems in the system resources.
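      As background, the basic fuzzy Petri net firing rule underlying the HFCPN can be sketched as follows; hierarchy and color are omitted, and the place names are invented for the anti-aircraft setting:

      ```python
      # A transition fires when all input places hold tokens; the output
      # token's truth degree is the minimum input degree scaled by the
      # rule's certainty factor.
      def fire(transition, marking):
          """transition: (input_places, output_place, certainty_factor)"""
          inputs, out, cf = transition
          if all(p in marking for p in inputs):
              marking[out] = min(marking[p] for p in inputs) * cf
          return marking

      marking = {"target_detected": 0.9, "track_stable": 0.8}
      t = (("target_detected", "track_stable"), "fire_order", 0.95)
      print(fire(t, marking))  # fire_order: min(0.9, 0.8) * 0.95 = 0.76
      ```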

      Multi-Nash equilibrium computing
      based on geometric-transform MAGA
      GU Jiaojiao1,LIU Weihua1,ZHAO Jianjun1,LIU Jiwei2
      2016, 38(01): 108-113.

      Concerning the premature convergence problem of particle swarm optimization (PSO) and its limitation of finding only one minimum, we propose a geometric-transform based memetic algorithm (MA) for detecting multiple minima, which is applied to Nash equilibrium (NE) computing in game theory. The basic MA consists of PSO and tabu search (TS), and improves PSO in two ways: loosening the constraints on particle movements and incorporating the genetic algorithm (GA) to maintain particle diversity. TS iterates through the neighborhood to reach a local optimum. Furthermore, a deflection-repulsion geometric transformation is incorporated to steer the search toward multiple minima. The performance is evaluated on a series of NE detection examples. The results show that the proposed MA has a notable ability to detect multiple minima with high accuracy and performance.
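      The deflection part of such geometric transformations follows a known recipe: divide the objective by a tanh term around each found minimum so its basin is no longer attractive. A one-dimensional sketch (lambda_ is a tunable strength, an assumption here):

      ```python
      # Reshape f so an already-found minimum stops attracting the search.
      import math

      def deflected(f, found, lambda_=1.0):
          def F(x):
              denom = 1.0
              for xi in found:
                  denom *= math.tanh(lambda_ * math.dist(x, xi))
              return f(x) / denom if denom != 0 else float("inf")
          return F

      f = lambda x: (x[0] ** 2 - 1) ** 2          # minima at x = -1 and x = +1
      F = deflected(f, found=[[1.0]])
      print(f([1.0]), F([0.999]) > f([0.999]))    # 0.0 True: found basin penalized
      ```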

      Network traffic prediction based on BP neural
      networks optimized by quantum genetic algorithm 
      ZHANG Lifang,ZHANG Xiping
      2016, 38(01): 114-119.

      In order to improve the prediction precision of network traffic, we propose a network traffic prediction model based on BP neural networks optimized by an improved multi-population quantum genetic algorithm. After the neural network structure is fixed, the multi-population quantum genetic algorithm is used to optimize the initial weights and thresholds of the BP neural network. The model divides a population into several subpopulations using the K-means clustering algorithm and maintains population diversity through the separate evolution of the subpopulations. Information interaction among subpopulations through an immigration operation decreases the possibility of falling into local optima, and an adaptive quantum rotation gate adjustment strategy is adopted to accelerate the convergence rate. Simulation results show that, compared with conventional models, the proposed model achieves a faster convergence rate and higher prediction precision in network traffic prediction.
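      The immigration operation can be sketched as a ring exchange between subpopulations (the ring topology and replacement rule are assumptions for illustration):

      ```python
      # The best individual of each neighboring subpopulation replaces the
      # worst individual of the current one, spreading information between
      # otherwise isolated subpopulations.
      def immigrate(subpops, fitness):
          for i, pop in enumerate(subpops):
              donor = max(subpops[(i + 1) % len(subpops)], key=fitness)
              worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
              pop[worst] = list(donor)   # copy to avoid aliasing

      subpops = [[[1, 2], [3, 4]], [[5, 6], [0, 1]]]
      immigrate(subpops, fitness=lambda ind: sum(ind))
      print(subpops)
      ```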

      A proximal point algorithm of
      basis pursuit and its applications 
      ZHANG Xiaoya,ZHANG Hui,WANG Hongxia
      2016, 38(01): 120-124.

      Basis pursuit has promising applications and has become a hot research topic in recent years. The proximal point algorithm is an effective way to solve basis pursuit problems. We propose a new algorithm that combines the proximal point algorithm with the idea of the linearized Bregman algorithm, based on Lagrange dual analysis. Compared with the original linearized Bregman algorithm, the proposed algorithm avoids the dependence of parameter selection on the model, so its applications extend beyond compressed sensing problems. To accelerate convergence, each subproblem is sped up by the Nesterov acceleration scheme. In simulations on sparse recovery problems, both compressed sensing and non-compressed sensing, we test the influence of parameter selection on the algorithm's convergence, and the results demonstrate the advantage of the new algorithm.
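      For reference, the original linearized Bregman iteration that the proposed method builds on solves min ||u||_1 s.t. Au = b via soft-thresholded dual updates. The sketch below uses an orthonormal-row A so the unit step size is safe, and omits the paper's proximal-point reformulation and Nesterov acceleration; mu and the iteration count are illustrative:

      ```python
      # Linearized Bregman: v accumulates dual (residual) information,
      # u is its soft-thresholded, scaled image.
      import numpy as np

      def shrink(v, mu):
          return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

      def linearized_bregman(A, b, mu=1.0, delta=1.0, iters=3000):
          v = np.zeros(A.shape[1])
          u = np.zeros(A.shape[1])
          for _ in range(iters):
              v = v + A.T @ (b - A @ u)     # dual (Bregman) update
              u = delta * shrink(v, mu)     # soft-thresholding step
          return u

      rng = np.random.default_rng(0)
      A = np.linalg.qr(rng.standard_normal((50, 20)))[0].T  # orthonormal rows
      x = np.zeros(50); x[[3, 17, 41]] = [1.0, -2.0, 1.5]   # planted sparse signal
      u = linearized_bregman(A, A @ x)
      print(np.linalg.norm(A @ u - A @ x))  # residual shrinks toward 0
      ```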