  • A journal of the China Computer Federation (CCF)
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • 论文
      Methods to enhance reliability and serviceability of parallel
      computing software on large scale clusters  
      LIN Yanyu,CHEN Hu,MIAO Jun,HAN Jialongmei,LAI Lushuang
      2015, 37(01): 1-6. doi:
      Abstract ( 196 )   PDF (826KB) ( 398 )     

      Parallel computing software on large-scale clusters requires not only fault tolerance against local node or network failures, but also manageability, maintainability, portability and scalability. Based on the star model, we design a parallel computing framework that achieves system-wide fault tolerance, usability, portability and scalability, using methods such as a variable-granularity decomposer and an associated queue on the scheduling nodes. Our system runs continuously for over 150 hours at 300 TFlops of computational capability, and it is also scalable.
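
The decomposer and scheduling queue are not specified beyond the abstract; the sketch below is a hypothetical toy illustrating the general idea only: a central (star-model) scheduler splits work into chunks and re-enqueues any chunk whose worker fails, so a transient node failure loses no work. All names and the squaring "computation" are invented for illustration.

```python
from collections import deque
from itertools import cycle

def run_star_scheduler(tasks, workers, chunk_size, fail_once=()):
    """Dispatch fixed-size chunks to workers round-robin; re-enqueue a
    chunk when its worker suffers a (simulated, one-time) failure."""
    queue = deque(tasks[i:i + chunk_size]
                  for i in range(0, len(tasks), chunk_size))
    failed = set(fail_once)      # workers that fail on their next chunk
    pick = cycle(workers)
    results = []
    while queue:
        chunk = queue.popleft()
        worker = next(pick)
        if worker in failed:
            failed.discard(worker)   # transient failure: node recovers
            queue.append(chunk)      # associated queue: put the chunk back
            continue
        results.extend(x * x for x in chunk)  # stand-in computation
    return sorted(results)
```

Even with worker "n1" failing once, every chunk is eventually processed: `run_star_scheduler(list(range(6)), ["n0", "n1"], 2, fail_once={"n1"})` still yields all six squares.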

       

      High performance computing in computational
      biology (Ⅱ)—sequence analysis 
      WANG Tao
      2015, 37(01): 7-13. doi:
      Abstract ( 150 )   PDF (470KB) ( 354 )     

      Sequence analysis is an important domain of high performance computing applications. With the development of high-throughput sequencing technology, genome data is growing explosively, and the demand for high performance computing is becoming more urgent. The paper introduces applications of high performance computing in sequence analysis and parallel implementations of sequence analysis algorithms, including sequence alignment, database search, resequencing, and genome assembly.
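
Sequence alignment, the first algorithm family mentioned, is typically built on dynamic programming; a minimal Smith-Waterman local-alignment scorer (scoring parameters chosen arbitrarily here, and without the traceback a full aligner would add) looks like:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores never drop below zero
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best
```

The inner cells are independent along anti-diagonals, which is exactly what the parallel implementations surveyed in the paper exploit.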

      Parallel optimization of the seismic wave PKTM algorithm
      on CPU+MIC heterogeneous platform  
      XIONG Min,WANG Yongxian
      2015, 37(01): 14-22. doi:
      Abstract ( 161 )   PDF (1027KB) ( 306 )     

      The seismic wave PKTM algorithm is an efficient technique now being applied to imaging complicated rock strata. As seismic exploration enters the era of massive data generation, it is essential to optimize this algorithm through parallel computation. In recent years, high performance parallel computing has been characterized by heterogeneous many-core systems. A typical processor of this kind, featuring low cost and high performance, is the Xeon Phi, known as MIC. Starting from the classic PKTM algorithm, we parallelize and optimize it in the offload programming model on a CPU+MIC heterogeneous platform. For applications with a scale of 64 000 000 (8 000×8 000) points, the total parallel simulation time is reduced from 357.52 seconds to 1.66 seconds, a 214.37x performance improvement.

      Delay optimization for long wire in YHFT-XX chip  
      ZHAN Wu,LIU Xiangyuan,GUO Yang,DING Yanping
      2015, 37(01): 23-27. doi:
      Abstract ( 148 )   PDF (639KB) ( 254 )     

      Since there are many long paths in the YHFT-XX chip, the optimization of long wires in physical design is studied. The effects of three kinds of repeater insertion are studied, and the optimal sizes of repeaters and the delays of different long wires after repeater insertion are obtained. Combined with concrete engineering practice, the obtained results are used to optimize the delay of long paths. Regular repeater insertion is used to optimize the repeaters and the gaps between them so as to reduce path delay. The feedthrough technique is used to optimize repeater insertion across modules, effectively reducing delay and improving the timing performance of the chip.

      Cache structure design for big data oriented many-core processor  
      WAN Hu1,XU Yuanchao1,2,SUN Fengyun1,YAN Junfeng1
      2015, 37(01): 28-35. doi:
      Abstract ( 162 )   PDF (2009KB) ( 351 )     

      Big data applications such as data sorting, search engines, and streaming media run inefficiently on traditional latency-oriented multi-/many-core processors: the hit rate of the L1 cache is high while that of the L2/L3 cache is relatively low, and IPC is not sensitive to LLC capacity. To address this low utilization of cache resources, we analyze the memory access patterns of big data applications and then propose an optimized cache structure for many-core processors. Both proposed structures have only an L1 cache: one is a fully shared cache structure, and the other is a partly shared, partitioned cache structure. Evaluation results show that the two schemes can significantly save chip area at the cost of a slight increase in memory accesses. When cache capacity is small, the partitioned structure is superior to the shared structure; as cache capacity increases, the shared structure gradually becomes superior. Since the cache capacity available to each core of a many-core processor is limited, the partitioned structure has certain advantages.

      Research on the cipher block random link
      encryption mode based on GPU  
      WU Weimin,LI Jianrui,LIN Zhiyi
      2015, 37(01): 36-41. doi:
      Abstract ( 119 )   PDF (1060KB) ( 252 )     

      Most traditional block cipher modes cannot be applied effectively on GPUs. We study the traditional encryption modes and put forward a block encryption mode, called the cipher block random link encryption mode (RCBC), which is efficient, safe and suited to parallel computing. The mode achieves high efficiency and at the same time increases the difficulty of unauthorized decryption. Experimental results show that the proposed encryption algorithm has efficient processing ability on a CPU+GPU platform.

      Weak ties’ influence on information spreading for
      different types of  online social networks  
      ZHANG Shengbing
      2015, 37(01): 42-47. doi:
      Abstract ( 141 )   PDF (699KB) ( 288 )     

      Ties between two nodes in online social networks can be strong or weak, and tie strength can be measured by the relative overlap of the two nodes' neighborhoods. Our experimental results show that the influence of weak ties on information spreading varies across different online social networks. Removing weak ties has little influence on information diffusion in information-exchange-oriented online social networks, such as a mobile phone communication network and the Wiki voting network. By contrast, the coverage of information drops sharply when weak ties are removed in relationship-oriented online social networks, such as YouTube, Facebook and the CDBLP cooperation network.
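
The "relative overlap of two neighboring nodes" can be computed as the overlap of their neighborhoods, excluding the tie's own endpoints. This is one common definition, assumed here rather than taken from the paper:

```python
def tie_strength(graph, i, j):
    """Relative neighborhood overlap of the tie (i, j).

    graph: dict mapping each node to the set of its neighbors.
    Returns |common neighbors| / |union of neighbors|, with i and j
    themselves excluded; 0.0 when the two nodes share no other contacts.
    """
    ni = graph[i] - {j}
    nj = graph[j] - {i}
    common = ni & nj
    union = ni | nj
    return len(common) / len(union) if union else 0.0
```

A tie with overlap near 0 connects otherwise separate neighborhoods (a weak tie in the Granovetter sense); overlap near 1 indicates a strong, embedded tie.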

      A routing protocol for in-network aggregation
      based on spatio-temporal correlation in WSN  
      CHEN Xuehan,CHEN Zhigang,ZENG Feng,WU Jia
      2015, 37(01): 48-55. doi:
      Abstract ( 149 )   PDF (1537KB) ( 288 )     

      More and more wireless sensor networks are applied to a variety of accurate monitoring applications. Due to the high node deployment density and the data periodically generated by nodes, there is much redundant data in the network, whose transmission consumes large amounts of energy. In order to reduce the amount of data transmitted in the network and the communication overhead, a routing protocol for in-network aggregation based on spatio-temporal correlation in WSNs, named TS-INDAR, is proposed. The amount of data transmission is reduced by employing in-network aggregation and spatio-temporal correlation. TS-INDAR establishes maximum-overlap routing trees to improve the chance of data aggregation, and data with spatio-temporal correlation is controlled through regional and temporal suppression. The size of the correlation region is adjusted according to the distance between the center of the event area and the sink node. Compared with existing routing protocols, TS-INDAR decreases the amount of data transmitted in the network and reduces the communication overhead. The simulation results show that TS-INDAR ensures the accuracy of monitoring data, and the energy consumption of the network is 25% lower than that of the DRINA algorithm and 11.6% lower than that of the EAST algorithm.

      Network identity model based on social authentication  
      SHAO Chengcheng,JIANG Xinwen,CHEN Kan,ZHU Peidong
      2015, 37(01): 56-62. doi:
      Abstract ( 121 )   PDF (1884KB) ( 268 )     

      Network real-name verification, proposed to solve the problems caused by network anonymity, is criticized for information disclosure. The root cause of the disclosure is that real-name authentication relies on personal real-name information. We propose a social-authentication-based network identity model that relies on social relations instead, so that it can build an identity for monitoring the network while avoiding information leaks. Our model selects root nodes according to a certain strategy, implements social authentication by vouching, and finally builds a unique network identity, called SANI, for each node without relying on real-name information. The SANI identity records how a node is socially authenticated, and supports authenticating and tracing the node's real-world identity.

      Enterprise service selection method based on equalization   
      XUE Xiao1,WANG Junfeng1,ZENG Zhifeng2
      2015, 37(01): 63-69. doi:
      Abstract ( 110 )   PDF (1167KB) ( 241 )     

      In CSC, enterprises respond to changing market demands through dynamic matching and selection of all kinds of enterprise service resources. However, non-equalization is common in enterprise service matching, which not only wastes service resources but also reduces the degree to which customer needs are satisfied. How to achieve equalization-based matching between enterprise services and customer demands has therefore become a serious challenge in the field. To solve this problem, we first design an equalization-based Quality of Service (QoS) model according to the operational characteristics of enterprise services. We then construct a complete equalization-oriented service selection process, which eliminates the non-equalization phenomenon in service selection by adjusting the equalization-based QoS model. Simulation results validate the effectiveness of the equalization-based service selection method, which improves the service utilization rate effectively without reducing the QoS of the selected services or increasing the time cost too much.

      A consistent view construction mechanism
      in trustworthy and controllable network  
      CAO Shenglin,LIU Liyan
      2015, 37(01): 70-77. doi:
      Abstract ( 105 )   PDF (809KB) ( 221 )     

      In trustworthy and controllable networks, multiple control nodes cooperatively control an AS, which easily leads to inconsistent AS views among different control nodes. To solve this problem, a consistent view construction mechanism based on an election algorithm is proposed on top of the trustworthy and controllable network model. Firstly, an election algorithm is used to generate a primary control node. Secondly, according to the loads of the control nodes in an AS, the primary control node assigns view construction tasks to the control node with the lowest load. Thirdly, the version of the generated view is stamped with the time of the primary control node. The mechanism avoids the inconsistent views caused by different control nodes constructing views individually. Simulation results show that the proposal performs well.

      Study on the malicious nodes detection algorithm
      based on feature nodes analysis  
      XIE Jinyang1,LI Ping1,XIE Guifang2
      2015, 37(01): 78-83. doi:
      Abstract ( 149 )   PDF (923KB) ( 247 )     

      Wireless sensor networks (WSNs) are usually deployed in complex environments, where an attacker can easily inject false data by capturing nodes, causing serious consequences. A malicious node detection algorithm based on feature node analysis (DAFNA) is proposed. Firstly, the energy values of the event sources are estimated, and node health characteristics are maintained in this process. Secondly, we establish coordinates according to the feature nodes and conduct a variance analysis of the calculated and perceived distances between the nodes to be detected and the event source, thereby identifying the malicious nodes. Simulation results verify the effectiveness of the proposed algorithm; compared with the Hur algorithm, it tolerates more malicious nodes while requiring less prior knowledge.
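
DAFNA's exact formulas are not given in the abstract. As a purely hypothetical illustration of the core check, the sketch below compares each node's geometric distance to the event source with the distance implied by its reported energy under an assumed inverse-square decay, and flags nodes whose discrepancy exceeds a threshold; all names and the decay model are assumptions:

```python
import math

def flag_suspect_nodes(event, source_energy, reports, threshold=1.0):
    """reports: {node_id: ((x, y), reported_energy)}.
    Under E = E0 / d^2, a reported energy E implies d = sqrt(E0 / E).
    A node is flagged when the implied distance disagrees with its
    geometric distance to the event source by more than threshold."""
    flagged = []
    for node, ((x, y), energy) in reports.items():
        geometric = math.dist((x, y), event)
        implied = math.sqrt(source_energy / energy)
        if abs(geometric - implied) > threshold:
            flagged.append(node)
    return flagged
```

A node at distance 2 honestly reporting E0/4 passes; a node at distance 5 claiming near-source energy is flagged.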

      A QoS control approach based on
      the constraint of over design and under design    
      DUAN Yucong1,GAO Honghao2,TANG Chaosheng1,DU Wencai1,WAN Shixiang1,LU Junx
      2015, 37(01): 84-92. doi:
      Abstract ( 152 )   PDF (1071KB) ( 276 )     

      With the development of Web service technology, service-based software is widely used in fields such as health care, education and public transportation. However, Quality of Service (QoS) control remains one of the difficulties in service technology. Considering design-time and run-time service modes in the process of service composition, two service design modes, Over Design (OD) and Under Design (UD), are proposed. The variability space that constrains service composition is constructed to achieve the goal of QoS control. We show OD and UD cases in E-commerce scenarios and use value-based analysis to justify the significance of reducing OD and UD.

      Software defect prediction using
      rough sets and support vector machine  
      MENG Qian1,2,MA Xiaoping1
      2015, 37(01): 93-98. doi:
      Abstract ( 151 )   PDF (547KB) ( 293 )     

      The prediction of software defects has been an important research topic in software engineering. This paper focuses on the defect prediction problem and constructs a classification model that integrates rough sets and support vector machines (RS-SVM). Rough sets act as a preprocessor, removing redundant information and reducing data dimensionality before the sample data are processed by the support vector machine. To address the difficulty of choosing parameters, the particle swarm optimization algorithm is used to select the parameters of the support vector machine. The experimental data come from the open-source NASA datasets, whose dimensionality is reduced from 21 attributes to 5 by rough sets. Experimental results indicate that the prediction performances of the Bayes classifier, CART tree, RBF neural network and RS-SVM all improve after this reduction, and that RS-SVM achieves higher prediction performance than the other three models.
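
The rough-set preprocessing step can be illustrated by a greedy attribute reduct: drop an attribute whenever doing so leaves the label dependency (the fraction of samples in the positive region) unchanged. This is a generic sketch of the idea, not the paper's exact reduction procedure:

```python
def dependency(rows, labels, attrs):
    """Fraction of rows whose values on `attrs` determine the label
    unambiguously (the size of the rough-set positive region)."""
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(tuple(row[a] for a in attrs), set()).add(y)
    consistent = sum(1 for row in rows
                     if len(groups[tuple(row[a] for a in attrs)]) == 1)
    return consistent / len(rows)

def reduct(rows, labels):
    """Greedily drop attributes whose removal keeps dependency intact."""
    attrs = list(range(len(rows[0])))
    base = dependency(rows, labels, attrs)
    for a in list(attrs):
        trial = [x for x in attrs if x != a]
        if trial and dependency(rows, labels, trial) >= base:
            attrs = trial
    return attrs
```

The surviving attribute subset is then what would be fed to the SVM classifier in place of the full 21-dimensional feature vector.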

      Weak t-norm of regular FBR0-algebras and its application  
      NIU Chaohui,WU Hongbo
      2015, 37(01): 99-103. doi:
      Abstract ( 135 )   PDF (362KB) ( 212 )     

      FBR0-algebras (fuzzy BR0-algebras), built by weakening the conditions of WBR0-algebras, have the same algebraic structure as FI-algebras (fuzzy implication algebras), the important and basic logic algebras proposed by Professor Wu Wangming. FBR0-algebras are studied in detail. Firstly, it is proved that regular FBR0-algebras have the same algebraic structure as RBR0-algebras. Secondly, the basic properties of the weak t-norm of regular FBR0-algebras are discussed. Finally, a representation of regular FBR0-algebras is given in the form of the weak t-norm.

      Application of the grey mutation particle swarm algorithm
      in urban public transport passenger volume prediction  
      MI Gensuo,LIANG Li,YANG Runxia
      2015, 37(01): 104-110. doi:
      Abstract ( 107 )   PDF (849KB) ( 246 )     

      Since urban public transit volume is the fundamental basis for the development and planning of a bus system, improving its prediction accuracy benefits the development of urban public transport. Exploiting the ability of the particle swarm algorithm to optimize parameters and the advantage of the grey prediction method in handling uncertain factors affecting the system, a grey mutation particle swarm combinational prediction model is proposed to predict urban public transit volume with improved accuracy. The prediction accuracy and effectiveness of the combinational model are analyzed and verified. The results show that the combinational prediction model outperforms the single grey prediction model and several commonly used prediction algorithms, predicts urban public transit volume well, and provides reliable scientific data for the decision-making and planning of the public transport system.
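
The grey component of such combinational models is usually the classic GM(1,1); a minimal fit-and-forecast sketch of the standard formulation (not the paper's mutated particle-swarm variant) is:

```python
import math

def gm11(x):
    """Fit GM(1,1) to the positive series x; return a one-step forecast."""
    n = len(x)
    agp = [sum(x[:k + 1]) for k in range(n)]                  # accumulated series
    z = [0.5 * (agp[k] + agp[k - 1]) for k in range(1, n)]    # background values
    # Least squares for the grey equation x[k] = -a*z[k] + b
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(x[1:])
    szy = sum(v * y for v, y in zip(z, x[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Predicted accumulated values, differenced back to the raw series
    pred = (x[0] - b / a) * math.exp(-a * n) + b / a
    prev = (x[0] - b / a) * math.exp(-a * (n - 1)) + b / a
    return pred - prev
```

On a geometric series (growth ratio 1.1), the forecast lands close to the true next value, which is why GM(1,1) suits the smooth growth trends typical of transit volume.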

      Inherited boosting learning for face detection  
      WEN Jiabao1,2,XIONG Yueshan1
      2015, 37(01): 111-118. doi:
      Abstract ( 131 )   PDF (2138KB) ( 257 )     

      A framework of inherited boosting learning methods is proposed based on the “heredity plus variation” inheriting pattern, which can train four sorts of cascade classifiers. Besides the two traditional ones, namely basic cascade classifiers based on the “no heredity” pattern and chained cascade classifiers based on the “full heredity” pattern, there are two new ones: feature-inherited cascade classifiers and weak-classifier-inherited cascade classifiers, both based on the “partial heredity plus partial variation” pattern. Although the new classifiers incur some extra cost, they fit better, balance convergence speed against generalization ability, and thus outperform the traditional ones. Experimental results on upright frontal face detection based on Real AdaBoost, Gentle AdaBoost and LUT weak classifiers confirm the effectiveness of the new inherited boosting learning methods.

      Neurofilaments tracking by detection
      in fluorescence microscopy images  
      YUAN Liang1,ZHU Junda2
      2015, 37(01): 119-124. doi:
      Abstract ( 113 )   PDF (662KB) ( 255 )     

      Neurofilaments are functional protein polymers transported along the axonal processes of nerve cells. Studying the movement of neurofilaments is important in applications such as diagnosing neurodegenerative diseases. Traditional methods largely rely on manual labeling of neurofilaments in fluorescence microscopy images. An automated method for analyzing the motion of neurofilaments based on tracking-by-detection is proposed. The axon along which the neurofilaments move is extracted and represented by a parametric curve. Firstly, the axon is decomposed into blocks. Secondly, the blocks containing moving neurofilaments are determined by graph labeling using a Markov Random Field. Finally, the leading and trailing locations of a neurofilament are refined to subpixel accuracy. Experiments on real time-lapse fluorescence image sequences of neurofilament movement demonstrate the efficacy and efficiency of the proposed method.

      A novel image segmentation algorithm
      based on mixed quantum particle swarm 
      GAO Yinghui1,QU Zhiguo1,LU Kai2
      2015, 37(01): 125-132. doi:
      Abstract ( 124 )   PDF (778KB) ( 270 )     

      Image segmentation based on swarm intelligence has attracted increasing attention due to its consistency with the human visual mechanism. However, many existing swarm models are sensitive to parameter values and tend to converge to local minima, which restricts the application of swarm intelligence to complex image segmentation. In this paper, we first define an abstract model of image segmentation based on swarm intelligence, and then propose a novel image segmentation algorithm based on a mixed quantum particle swarm (ISMQPS) by introducing the generalized quantum particle model (GQPM) into image segmentation. ISMQPS contains three key parts: defining a quantum particle by a pixel's grey value and position, defining the swarm action rule by entangled quantum states, and realizing image segmentation through the self-organizing clustering of the mixed quantum particle swarm. Experiments show that ISMQPS is insensitive to noise, achieves good segmentation, and can be used for complex image segmentation.

      Study and implementation of multi-contour 3D reconstruction   
      LIU Kunliang1,HUANG Jinming2
      2015, 37(01): 133-138. doi:
      Abstract ( 162 )   PDF (583KB) ( 454 )     

      In practical applications, 3D reconstruction often starts from a series of 2D contour lines rather than the volume data we usually process, so studying contour-based 3D reconstruction has important practical value. The key technologies of 3D reconstruction from multiple contours include contour correspondence, contour splicing, branched contour handling, and terminal contour closing. Concrete solutions to each step of contour-based 3D reconstruction are given. For the winding direction of contours, we propose a method that gauges the sum of turning angles, avoiding misjudgment of the winding direction. For the branching case in which one contour corresponds to multiple contours, we give a way of splitting contours based on the circumference ratio of the corresponding contours. We also present a maximum-angle method to reduce the time spent triangulating terminal contours. The proposed solutions achieve correct splicing of arbitrary contours, guarantee that each step of the solution is correct and effective, and are more universal than other solutions.
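
The sum-of-angles winding test can be sketched as follows: sum the signed turning angles around the closed polyline; for a simple contour the total is +2π when counter-clockwise and -2π when clockwise, so the sign alone gives the orientation. This is a generic formulation of the idea, not necessarily the paper's exact procedure:

```python
import math

def winding_ccw(poly):
    """True if the closed contour (list of (x, y) vertices) winds
    counter-clockwise, judged by the sum of signed turning angles."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        ux, uy = bx - ax, by - ay      # incoming edge direction
        vx, vy = cx - bx, cy - by      # outgoing edge direction
        # Signed turn at vertex b: atan2(cross, dot) lies in (-pi, pi]
        total += math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)
    return total > 0
```

Unlike a single cross-product test at one vertex, the angle sum remains correct at concave vertices, which is the misjudgment the abstract refers to avoiding.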