  • A journal of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Papers
      A depth optimal matching algorithm in opportunistic network
      forwarding mechanism based on mobile medical big data platform  
      LUO Xuyuan1,2,CHEN Zhigang1,2,WANG Yunhua3,WU Jia1,2,GUAN Peiyuan1,2,LI Le1,
      2015, 37(10): 1799-1805.

      We analyze the ways in which nodes pass information in opportunistic networks on a mobile medical big data platform. By covering all nearby nodes and comparing the data of each pair of neighboring nodes, the optimal matching method selects the adjacent node with the best matching result as the next-hop node, thus finding an efficient data-forwarding path. Based on this process, we propose a depth optimal matching algorithm for the opportunistic-network forwarding mechanism on the mobile medical big data platform, called the Depth Optimal Matching (DOM) algorithm. It matches the data between nodes in order to find a path that can forward data efficiently. Experimental results show that, compared with classical algorithms in opportunistic networks, the DOM algorithm reduces the redundant data produced during transmission and significantly improves the successful transmission rate.
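The core matching step, scoring each nearby node and taking the best one as the next hop, can be sketched as follows. The score function and the node representation are illustrative assumptions, not the DOM algorithm's actual metric:

```python
def match_score(carried, neighbor_cache):
    """Illustrative matching metric: fraction of carried messages the
    neighbor has not yet seen (higher = better match)."""
    if not carried:
        return 0.0
    missing = [m for m in carried if m not in neighbor_cache]
    return len(missing) / len(carried)

def select_next_hop(carried, neighbors):
    """Compare the data of every nearby node and pick the neighbor with
    the optimal matching result as the next-hop node."""
    best, best_score = None, -1.0
    for node, cache in neighbors.items():
        score = match_score(carried, cache)
        if score > best_score:
            best, best_score = node, score
    return best

# Hypothetical node state: message ids held by the carrier and neighbors.
carried = {"m1", "m2", "m3"}
neighbors = {"A": {"m1", "m2"}, "B": {"m1"}, "C": {"m1", "m2", "m3"}}
```

Here node B would be chosen, since it is missing the most of the carried messages; forwarding to it spreads the least redundant data per contact.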

      Secure outsourcing of extreme learning
      machine in cloud computing  
      LIN Jiarun1,YIN Jianping1,CAI Zhiping1,ZHU Ming1,CHENG Yong2
      2015, 37(10): 1806-1810.

      Due to the growing volume and increasingly complex structure of data involved in applications, running the extreme learning machine (ELM) over large-scale data has become a challenging task. To reduce the training time while assuring the confidentiality of the ELM's input and output, we present a secure and practical outsourcing mechanism for the ELM in cloud computing. In this mechanism, we explicitly divide the ELM into a public part and a private part. The private part is executed locally to generate random parameters and perform some simple matrix computations, while the public part is outsourced to the cloud, which is mainly responsible for calculating the Moore-Penrose generalized inverse, the heaviest computational operation. The inverse also serves as the correctness and soundness proof in result verification. We analyze the confidentiality theoretically, and the experimental results demonstrate that the proposed mechanism can effectively relieve customers of heavy computation.
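The public/private split can be sketched numerically: the client keeps the random hidden-layer parameters and delegates only the Moore-Penrose inverse. The masking and result-verification steps of the actual mechanism are omitted here, and `cloud_pinv` merely stands in for the cloud service:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train_outsourced(X, T, n_hidden, cloud_pinv):
    """Private part: generate random input weights W and biases b, and
    build the hidden-layer output matrix H locally."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    # Public part: the Moore-Penrose generalized inverse, the heaviest
    # operation, is delegated (masking/verification omitted in this sketch).
    H_pinv = cloud_pinv(H)
    beta = H_pinv @ T               # output weights
    return W, b, beta

def cloud_pinv(H):
    """Stand-in for the cloud side of the computation."""
    return np.linalg.pinv(H)

X = rng.standard_normal((20, 3))   # toy inputs
T = rng.standard_normal((20, 2))   # toy targets
W, b, beta = elm_train_outsourced(X, T, n_hidden=30, cloud_pinv=cloud_pinv)
```

With more hidden nodes than samples, the least-squares solution interpolates the training targets, which makes the split easy to sanity-check locally.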

      Reachability and termination verification of some
      deterministic quantum programs 
      LEI Hongxuan 1,2 ,FU Li3
      2015, 37(10): 1811-1816.

      In the single-qubit system, we investigate the reachable sets and the termination and divergence conditions of deterministic quantum programs, a special kind of nondeterministic quantum programs. The programs are described by the bit flip, phase flip, depolarizing, amplitude damping and phase damping channels, starting in computational basis states. The investigation shows that the termination and divergence conditions of some quantum programs described by quantum channels starting in computational basis states are closely related to the parameters of the quantum channels, while others are not.
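As a small numerical illustration (not the paper's proof technique), iterating the bit flip channel on the basis state |0⟩⟨0| shows how the channel parameter governs the long-run behavior: for 0 < p < 1 the state converges to the maximally mixed state:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X (bit flip)

def bit_flip(rho, p):
    """Bit flip channel: rho -> (1 - p) * rho + p * X rho X."""
    return (1 - p) * rho + p * (X @ rho @ X)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
for _ in range(200):
    rho = bit_flip(rho, 0.25)
# The diagonal gap shrinks by a factor |1 - 2p| per step, so for
# 0 < p < 1 (p != 1/2 trivially included) rho tends to I/2.
```

At the boundary values p = 0 and p = 1 the iteration instead stays on the basis states, which is exactly the kind of parameter dependence the termination analysis captures.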

      FlatVC: flat version control for virtual
      machine clusters in cloud environment  
      HU Minghao,ZHANG Zhaoning,LI Ziyang,PENG Yuxing
      2015, 37(10): 1817-1824.

      The development of IaaS enables cloud services to rapidly deploy large-scale virtual machine (VM) clusters. However, implementing version control in VM clusters is inefficient: current solutions suffer from heavy network transmission overhead and slow operation. We present a novel version control approach for VM clusters, called FlatVC. FlatVC creates VM versions incrementally on compute nodes to avoid transmitting version data to persistent storage, and downloads data blocks on demand during VM restoration, thus reducing network transmission overhead and speeding up version control. By using a cache tree structure to share the data chunks transmitted over the network, FlatVC reduces the transmission pressure on root nodes. Besides, we optimize I/O to avoid the performance degradation caused by the version chain that incremental VM versions construct. Experimental results show that FlatVC efficiently enforces version control for VM clusters, and speeds up both version creation and restoration.
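The incremental-versioning idea, recording only the chunks that differ from the parent version, can be sketched as follows. The chunk size, hashing, and dictionary layout are illustrative assumptions, not FlatVC's actual format:

```python
import hashlib

def snapshot(disk, parent=None, chunk=4):
    """Create a VM version incrementally: hash each fixed-size chunk and
    keep a new record only where it differs from the parent version;
    unchanged chunks reference the parent's entry (shared data)."""
    version = {}
    for off in range(0, len(disk), chunk):
        digest = hashlib.sha256(disk[off:off + chunk]).hexdigest()
        if parent is not None and parent.get(off) == digest:
            version[off] = parent[off]   # shared with the parent version
        else:
            version[off] = digest        # changed chunk: store it anew
    return version
```

Restoring would then fetch only the chunks a version actually references, on demand, which is what keeps network traffic proportional to the changes.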

      A measuring method of the digital
      channel time propagation delay 
      GU Mengxia
      2015, 37(10): 1825-1830.

      In the field of integrated circuit (IC) testing, the Time Propagation Delay (tPD) is a very important parameter, which not only reflects the response speed of the integrated circuit but also influences the measurement accuracy of alternating current (AC) parameters in the IC test system. We analyze the causes of tPD in the IC test system in detail, and study its influence on the measured AC parameters of the device under test. We propose a method for measuring the digital channel tPD in the IC test system based on time-domain reflectometry, and conduct an experiment on Teradyne's J750EX system. Analysis of the experimental data shows that this method can effectively measure the digital channel's tPD, and hence improve the measurement accuracy of the AC parameters in the IC test system.
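The core of a reflectometry-based measurement is that a driven edge's reflection returns after twice the channel delay, so tPD is half the measured round trip. The helper names and the edge-correction step below are illustrative only, not the J750EX API:

```python
def one_way_delay(round_trip_ns):
    """Time-domain reflectometry observes the echo of a driven edge:
    the reflection arrives after twice the channel's propagation delay,
    so the one-way tPD is half the measured round trip."""
    return round_trip_ns / 2.0

def corrected_edge(programmed_edge_ns, round_trip_ns):
    """Shift a tester edge placement by the measured channel tPD so the
    AC timing refers to the device pin rather than the tester driver
    (hypothetical correction step for illustration)."""
    return programmed_edge_ns + one_way_delay(round_trip_ns)
```

For example, a 5 ns round trip implies a 2.5 ns channel tPD, which would be added to every edge placed on that channel.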

      Peripheral access control based on user identity  
      CHEN Songzheng,WEI Lifeng
      2015, 37(10): 1831-1835.

      With respect to the coarse granularity and simplicity of existing device control methods, we propose a peripheral access control method based on user identity, which achieves flexible and fine-grained access control for peripherals by using peripheral role access control lists, user group peripheral access control lists and user peripheral access control lists. On the Linux operating system, we design and implement a framework in which a peripheral feature database identifies a variety of devices, a policy database defines access permissions in connection with roles, user groups and users, and an arbitration procedure checks accesses to peripherals. Finally, the functions are tested to verify the validity of the method, and its features are analyzed.
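The arbitration procedure can be sketched as follows. The precedence order (user entry over group entry over role entry) and the default-deny fallback are assumptions made for illustration, not necessarily the paper's rules:

```python
def check_access(user, device, policy, groups, roles):
    """Arbitrate a peripheral access: the most specific ACL entry wins.
    policy maps "user"/"group"/"role" to {(subject, device): allowed}."""
    entry = policy.get("user", {}).get((user, device))
    if entry is not None:
        return entry
    for g in groups.get(user, []):
        entry = policy.get("group", {}).get((g, device))
        if entry is not None:
            return entry
    for r in roles.get(user, []):
        entry = policy.get("role", {}).get((r, device))
        if entry is not None:
            return entry
    return False  # default deny

# Hypothetical policy: alice is individually denied usb0 even though
# her group would allow it; the operator role may use the printer lp0.
policy = {
    "user":  {("alice", "usb0"): False},
    "group": {("staff", "usb0"): True},
    "role":  {("operator", "lp0"): True},
}
groups = {"alice": ["staff"], "bob": ["staff"]}
roles = {"alice": ["operator"], "bob": []}
```

User-level entries overriding group and role entries is what makes the control fine-grained: the same device can be open to a group yet closed to one of its members.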

      An improved ASD model of the ALM
      with delay impact factor 
      CUI Jianqun1,WANG Bolun1,XIONG Tao1,WU Libing2
      2015, 37(10): 1836-1842.

      To improve the stability and efficiency of Application Layer Multicast (ALM), we propose a new delay-impact-factor-based ASD model, named ASDDIF, which extends the earlier timestamp-based adaptive ALM overlay model with overlay structure detection. After a new node probes the whole overlay, the model chooses how to attach the node so that it incurs the least time cost, and an efficient, distributed multicast tree is constructed. Simulation experiments show that the ASDDIF model builds a multicast overlay with highly efficient transmission paths and effectively reduces the delay of the multicast tree.

      A minimum transmission delay algorithm based on
      the historical transmission efficiency   
      TAN Ziyi1,CHEN Zhigang1,2 ,WU Jia1,2 ,ZHANG Weibin1
      2015, 37(10): 1843-1849.

      Data exchange does not require full end-to-end paths in opportunistic networks: data transfer relies on opportunistic node movement and the probability that two nodes interact, thus serving the goal of green energy saving. But hop-by-hop data transmission among nodes in an opportunistic network incurs transmission delays and a large number of data copies, resulting in excessive energy consumption. To obtain a smaller transmission delay and fewer data copies, we propose a minimum transmission delay algorithm based on historical transmission efficiency, called the MDBHE (Minimum Delay Based on Historical Efficiency) algorithm, which builds a locally efficient, short transmission path according to historical transmission efficiency. Simulation results show that, compared with traditional opportunistic routing algorithms, the MDBHE algorithm reduces the transmission delay and enhances the transmission success rate in opportunistic networks.
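Relay selection by historical transmission efficiency can be sketched as follows. The efficiency definition (deliveries over forwarding attempts) is an illustrative assumption, and the paper's delay estimation is omitted:

```python
def historical_efficiency(stats):
    """stats: node -> (successful_deliveries, forwarding_attempts).
    Nodes with no history default to efficiency 0."""
    return {n: (s / a if a else 0.0) for n, (s, a) in stats.items()}

def pick_relay(neighbors, stats):
    """Choose, among the currently reachable neighbors, the node with the
    best historical transmission efficiency as the next relay."""
    eff = historical_efficiency(stats)
    return max(neighbors, key=lambda n: eff.get(n, 0.0))
```

Favoring historically efficient relays is what keeps the constructed path short and limits the number of redundant copies in flight.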

      Security authentication of the modified NeedhamSchroeder
      protocol based on  logic of event   
      LIU Xinqian,XIAO Meihua,CHENG Daolei,MEI Yingtian,LI Wei
      2015, 37(10): 1850-1855.

      Security protocols are the foundation of modern secure networked systems, and proving their security properties is a challenge. Logic of events is a formal method for describing the state transitions of a distributed system; it formally describes security protocols and is the basis of theorem proving. Using the language of event orderings, event classes, and a type of atoms representing random numbers, keys, signatures and ciphertexts, we present a theory in which authentication protocols can be formally defined and strong authentication properties proven. The improved Needham-Schroeder protocol with timestamps is proven secure in our theory. The theory can also be applied to the formal analysis and verification of similar security protocols.

      A WSN identity-based keys updating method
      based on trusted computing platform
      WEN Song,WU Zhao,ZHENG Yi
      2015, 37(10): 1856-1861.

      We propose an efficient identity-based key updating scheme for wireless sensor networks (WSNs), in which a trusted computing platform serves as the key generation center and a one-way function is used to construct a random pool. The scheme enables sensor nodes to verify key updating messages without causing excessive network traffic. To ensure the security of the keys, the scheme uses the trusted computing platform as the key generation center, securing the key source. When keys are updated, the features of the trusted computing platform can be used to validate the platform configurations and determine the authenticity and integrity of the messages and keys in question. The one-way function generates a random pool, so that sensor nodes can verify the authenticity of the messages and resist replay attacks.
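A hash-chain construction illustrates how a one-way function lets nodes verify key updates and reject replays or forgeries; this is a generic sketch, not the paper's exact random-pool scheme:

```python
import hashlib

def H(data: bytes) -> bytes:
    """The one-way function (SHA-256 here, as an illustrative choice)."""
    return hashlib.sha256(data).digest()

def build_chain(seed: bytes, n: int):
    """Key generation center: derive a chain seed -> H(seed) -> ... ;
    the final value is the public commitment preloaded on sensor nodes."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def verify_update(new_key: bytes, trusted_value: bytes) -> bool:
    """A node accepts an update key iff hashing it yields the value it
    already trusts; forged or stale keys fail this one-way check."""
    return H(new_key) == trusted_value
```

After accepting a key, a node would replace its trusted value with that key, so each chain element can authenticate the next update without any further traffic.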

      Research on real-time message transmission
      scheduling over master-slave-based multi-switch Ethernet
      TAN Ming
      2015, 37(10): 1862-1868.

      To schedule message transmission over master-slave-based multi-switch Ethernet and meet real-time requirements, we propose a method for calculating when the messages scheduled in each elementary cycle arrive at the switch outputs. In addition, we present a novel feasibility analysis algorithm called ECSchedTest for periodic real-time messages scheduled in one elementary cycle, and prove its correctness. Moreover, an EDF-based real-time scheduling algorithm and an admission control algorithm for periodic messages are presented to take full advantage of the multiple transmission paths of the switched network. In every elementary cycle, by calculating each message's arrival time at the output ports of the switches it traverses, and considering the FCFS message scheduling policy used in the switches, the scheduler with ECSchedTest can handle message transmission precisely and efficiently. Simulation results show the advantages of the proposed real-time scheduling algorithm in terms of network bandwidth utilization, thus enhancing real-time communication over master-slave-based multi-switch Ethernet.
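A minimal admission test in the spirit of an elementary-cycle feasibility check (but deliberately simpler than the paper's ECSchedTest) processes messages in EDF order and verifies that each finishes within its deadline, measured in elementary cycles. The single-link model and the (tx_time, deadline) tuples are simplifying assumptions:

```python
def ec_admission(messages, ec_length):
    """messages: list of (tx_time, deadline_in_elementary_cycles).
    Serve messages in EDF order on one link and check that each one
    completes before its deadline expires."""
    t = 0.0
    for tx, deadline in sorted(messages, key=lambda m: m[1]):
        t += tx
        if t > deadline * ec_length:
            return False  # this message would miss its deadline
    return True
```

A real multi-switch test would additionally track each message's arrival time at every output port it traverses under FCFS queueing, which is what the paper's analysis does.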

      Research on energy saving of WSNs with
      cluster-based routing  
      ZHANG Huanan1,2,LI Shijun2,JIN Hong2
      2015, 37(10): 1869-1876.

      In wireless sensor networks (WSNs) for large-scale sensing and environmental monitoring, energy saving prolongs the life of sensor nodes and has therefore become one of the most important research topics. To achieve reasonable energy consumption and improve the life cycle of the sensor network system, an effective energy-saving scheme and energy-saving routing system is desirable. We design a clustering algorithm to reduce the energy consumption of WSNs, and create a cluster-tree-based WSN routing structure using sensor node clustering. The main goals of our scheme are to obtain an ideal cluster distribution, reduce the data transmission distance between sensor nodes, decrease the energy consumption of sensor nodes, and prolong the life cycle of the WSN. Experimental results show that the scheme can reduce network energy consumption and prolong the life cycle of WSNs.
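Cluster-head election can be sketched as follows, here simply electing the node with the most residual energy in each cluster so that the costly long-range transmission toward the sink rotates to fresh nodes. The authors' actual clustering criteria may differ:

```python
def elect_cluster_heads(nodes, clusters):
    """nodes: node id -> (x, y, residual_energy);
    clusters: node id -> cluster label.
    Elect, per cluster, the member with the most residual energy."""
    heads = {}
    for nid, (_x, _y, energy) in nodes.items():
        c = clusters[nid]
        if c not in heads or nodes[heads[c]][2] < energy:
            heads[c] = nid
    return heads
```

Re-running the election periodically balances the drain: no single node pays the aggregation-and-relay cost for long, which is the basic mechanism behind cluster-based energy saving.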

      A software adaptation method based on
      Tropos+ requirement model   
      LEI Yiwei,BEN Kerong,HE Zhiyong
      2015, 37(10): 1877-1883.

      In the selfadaptive control process of model-driven software adaption, the actions of monitoring,analyzing,planning and executing are based on the shared knowledge model.For the convenience of knowledge models’maintaining and reusing, highly abstract requirement models are usually used to represent the knowledge.In order to model the adaptive requirements of software and to solve the problem that traditional Tropos and its extended methods cannot model the software adaptive requirements to exceptional events,we propose the Tropos+ which can monitor and deal with context changes and exception events.Base on Tropos+, we present a requirement model-driven software selfadaption method.Finally, the process of software self-adaption based on the proposed method is illustrated by an example.

      A threevalued logic model checking approach based
      on extensional partial Kripke structure 
      LIU Jiao,LEI Lihui
      2015, 37(10): 1884-1889.

      Multivalued model checking is an important method to solve the state explosion problem in formal verification, and its basis is the threevalued logic model checking. The challenge is how to obtain the value of uncertain states. We first propose a method to extend the partial Kripke structure (PKS), then present an approach for for obtaining the values of uncertain states based on the extended PKS, and finally design a threevalued logic model checking algorithm. Compared with the existing threevalued model checking algorithms, our algorithm reduces the complexity. Moreover, the proposed algorithm can improve the processing of uncertain or inconsistent information, and enhance the practicality of the three-valued logic model checking.

      Optimization of cold chain logistics warehousing
      process based on polychromatic sets 
      YANG Wei,GAO Heyun,LI Dan
      2015, 37(10): 1890-1898.

      We first analyze the actual operation of a cold chain logistics enterprise in Shaanxi province, and build the operation process model of a cold chain logistics warehouse using the unified modeling language (UML), which has a user-friendly interface and can describe the system effectively, together with polychromatic sets (PS) theory, which has a rigorous mathematical foundation. We then analyze the model's structure and time accessibility, which ensures the rationality of the process model. We also propose suggestions for optimizing the workflow, and build a new model for the input/output process of the cold-chain logistics warehouse. Comparing the expected operation times of the model before and after optimization shows the effectiveness of the optimized model.

      A construction method of object flow patterns in spatial regions 
      LIU Junling1,2,WANG Wei2,YU Ge1,SUN Huanliang2,XU Hongfei1
      2015, 37(10): 1899-1908.

      With the popularization of spatio-temporal data acquisition equipment, a large volume of object positional data is created, which is typical big data. The positional data of departures and arrivals reflect the flow regularity of moving objects, which can be expressed as a regional flow model; such models can be used to improve urban planning, intelligent transportation systems, and so on. We analyze methods for constructing object flow patterns in spatial regions. Due to the randomness of object movement, finding patterns with high prediction precision is a big challenge. We therefore propose a model for constructing object flow patterns, including data discretization and serialization, pattern training and evaluation, which quantitatively represents the regional flow regularity as time sequences. We also present a new hierarchical clustering tree with skewness. Based on the skewness, we design a method for removing abnormal sequences and selecting patterns automatically, which improves the prediction precision of the patterns. Experimental results on real datasets show that the proposed flow patterns can express regional flows, and the proposed pattern training method has higher prediction accuracy than existing ones.

      Research and implementation of a distributed
      heterogeneous database integration system   
      XU Aiping,SONG Xianming,XU Wuping
      2015, 37(10): 1909-1916.

      Because of historical reasons and the development of database technology, many enterprises and institutions have accumulated, and will continue to accumulate, a large variety of heterogeneous data, whose heterogeneity is mainly manifested in database types and data structures. Taking the distributed heterogeneous hydrological database of the Three Gorges Reservoir water environment as an example, we build a hydrologic and water environment data exchange architecture and data sharing platform based on an analysis of water environment and hydrological data requirements. The problem of data exchange between different databases is solved through heterogeneous multi-source database engine middleware. For the voluminous historical data, we present a partially imported data exchange mode. A data catalogue registration technique makes the management and use of the integration platform more convenient and general. The heterogeneous multi-source database engine not only facilitates connections to the current mainstream databases, but also solves the problem of connecting web data interfaces using web services techniques. Experimental results show that the proposal can meet the demand for heterogeneous data integration in different application environments.

      A mixed denoising algorithm based on sparse
      representation and noise distribution prior knowledge 
      ZHANG Jianming,LI Pei,WU Honglin,HUANG Qianqian
      2015, 37(10): 1917-1923.

      We propose a mixed denoising algorithm based on sparse representation and prior knowledge of the noise distribution. The algorithm uses the Adaptive Median Filter (AMF) to initialize and analyze the prior knowledge of the noise distribution, and adaptively weights the atom vectors of the sparse representation at the sparse coding stage. Then the selection threshold is adaptively adjusted by the extreme value of the current set of atoms so as to eliminate atoms selectively. By avoiding the traditional two-phase mixed denoising strategy, the proposed algorithm gains a much better PSNR and faster speed.

      An adaptive region based covariance tracking algorithm          
      HE Ruhan,HU Xinrong,LI Dengfeng,CHEN Dirong
      2015, 37(10): 1924-1932.

      Covariance tracking has achieved impressive successes in recent years due to its competent region-covariance-based feature descriptor. However, its brute-force search strategy is still inefficient. We propose a generalized, adaptive covariance tracking algorithm that uses a novel integral region computation and occlusion detection. The integral region is much faster and adapts to the tracking target and tracking conditions; the adaptive search window is adjusted by simple occlusion detection. The integral image and global covariance tracking can be seen as special cases of the integral region and of the proposed algorithm, respectively. The proposed algorithm unifies the local and global search strategies in an elegant way and smoothly switches between them according to the tracking condition judged by the occlusion detector. It achieves better efficiency, robustness to distraction, and stable trajectories through local search in the normal steady state, and gains more ability to handle occlusion and re-identification through an enlarged search window (up to global search) in abnormal situations. Experiments on many video sequences show that the proposed algorithm has excellent target representation ability, faster speed and more robustness.
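The region covariance descriptor underlying such trackers can be sketched as follows; the feature set used here (position, intensity, gradient magnitudes) is one common choice, not necessarily the authors' exact features, and no integral-region acceleration is shown:

```python
import numpy as np

def region_covariance(img, x0, y0, w, h):
    """Region covariance descriptor: the covariance matrix of per-pixel
    feature vectors (x, y, intensity, |Ix|, |Iy|) over a rectangle."""
    gy, gx = np.gradient(img.astype(float))       # image gradients
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]       # pixel coordinates
    feats = np.stack([
        xs.ravel().astype(float),
        ys.ravel().astype(float),
        img[y0:y0 + h, x0:x0 + w].ravel().astype(float),
        np.abs(gx[y0:y0 + h, x0:x0 + w]).ravel(),
        np.abs(gy[y0:y0 + h, x0:x0 + w]).ravel(),
    ], axis=1)
    return np.cov(feats, rowvar=False)            # 5x5 symmetric matrix
```

Tracking then compares the candidate region's 5x5 matrix against the model's; the integral-region technique in the paper makes computing such matrices over many candidate windows cheap.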

      A color image encryption algorithm based on DNA sequence 
      TU Zhengwu,JIN Cong
      2015, 37(10): 1933-1939.

      Combining DNA cryptography with a chaotic system, we present a color image encryption algorithm based on DNA sequences. The algorithm applies addition, subtraction and XOR operations to DNA sequences. We first decompose a color image into bit planes and encode them as DNA sequences. We then perform scrambling, addition and XOR operations on every DNA plane, decode the DNA planes, and compose the bit planes into the encrypted image. Experimental results show that the encrypted image is noise-like and highly sensitive to the keys, and has a flat histogram, good randomness and low correlation between adjacent pixels.
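DNA coding maps each pair of bits to a base, and operations such as XOR act through the underlying bits. The coding rule below is one common choice, and the chaotic key stream is replaced by a fixed key; the paper's actual rule and key generation may differ:

```python
# One common DNA coding rule: 00->A, 01->C, 10->G, 11->T (an assumption).
ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def dna_encode(byte):
    """Encode one byte as a 4-base DNA sequence, two bits per base."""
    bits = f"{byte:08b}"
    return "".join(ENC[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_decode(seq):
    """Decode a 4-base DNA sequence back to the byte value."""
    return int("".join(DEC[c] for c in seq), 2)

def dna_xor(a, b):
    """XOR two equal-length DNA sequences via their underlying bits."""
    return "".join(
        ENC[f"{int(DEC[x], 2) ^ int(DEC[y], 2):02b}"] for x, y in zip(a, b)
    )
```

Because XOR is its own inverse, applying `dna_xor` with the same key stream twice recovers the plaintext plane, which is what makes decryption the mirror of encryption.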