  • A journal of the China Computer Federation (CCF)
  • China Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • Software fault-tolerance based on monitoring CPU
      utilization ratio in real-time operating systems
      WANG Yuwei1,CAO Dong2,SHI Shucheng1
      2018, 40(08): 1337-1343. doi:
      In hardware realtime operating systems, the CPU utilization ratio is an important indicator of system performance. If a task occupies the entire CPU, others will not continue, which can induce a disastrous consequence of system performance. By analyzing the characteristics of software running in realtime operating systems, a certain software faulttolerance strategy must be used to enhance system reliability and faulttolerance. In the μC/ OSⅡ realtime operating system, tasks in the flight control software are realtime monitored. Firstly, the calculation method of CPU utilization ratio is given and the CPU monitoring period is proposed reasonably. Secondly, a fault detection algorithm for abnormal CPU utilization ratio is presented. By dealing with the faults, the system's fault tolerance ability can be improved. Finally, the flight control software is designed in the embedded MPC5674 flight control computer to validate the ability and effectiveness of the four methods’ handling abnormal CPU utilization ratio. Simulation results show that software faulttolerance based on monitoring CPU utilization ratio can enhance the system reliability and faulttolerance ability effectively.
       
       
      High-throughput implementation of
      SHA-512 on mimic computers
      XI Shengxin1,ZHANG Wenning2,ZHOU Qinglei1,SI Xueming3,LI Bin3
      2018, 40(08): 1344-1350. doi:
      The hash function SHA-512 is a widely used cryptographic hash algorithm and plays an important role in modern cryptography. Considering the high performance and high energy efficiency of the mimic computer and analyzing SHA-512 in depth, a full-pipeline implementation scheme on the mimic computer is proposed. To improve the computing speed, the structure of the adders on the critical paths is optimized. With the full-pipeline structure, the number of clock cycles needed to process a data packet is reduced and the data throughput is improved. Actual operation on the mimic computer shows that the chip can work at a clock frequency of 130 MHz and achieve a throughput of 133 120 Mbit/s, so the performance is significantly improved, and its energy efficiency is higher than that of a general-purpose server.
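      For readers checking the throughput figure: a fully pipelined SHA-512 core accepts one 1024-bit message block per clock cycle once the pipeline is filled, so the peak throughput follows directly from the reported clock frequency:

      \mathrm{Throughput} = f_{\mathrm{clk}} \times 1024\ \mathrm{bit} = 130\ \mathrm{MHz} \times 1024\ \mathrm{bit} = 133\,120\ \mathrm{Mbit/s}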
       
      A rapid data processing method
      for space science satellites
      SUN Xiaojuan1,2,3,SHI Tao2,3,LI Bing2,3,YANG Xiaoyan2,3,LEI Bin1,2,3,HU Yuxin1,2,3
      2018, 40(08): 1351-1357. doi:
      Rapid data processing for largescale data acquired by satellites has always been the key to build spatial information processing systems. Faced with space science satellite problems such as allweather observation, multiple types of detection loads, and various processing algorithms, the existing data analysis methods based on CCSDS
       standard format are difficult to satisfy the current requirements of data processing systems for the onorbit space science satellites in correctness and timeliness. According to the observing data characteristics of the space science satellites, the paper proposes a fast data processing method for space science, designs the twolayer joint index structure that transforms the processing problem from the space science large data into the index tables and the source packet data units, and improves the efficiency of data processing. The paper designs a processing framework using scientific workflow technique, which supports the cooperation between the datadriven processing and the businessdriven processing of space science satellite data, and also supports various data processing workflows and the parallel scheduling of numerous tasks for different satellite payload types. The experimental results show that this method has good scalability and less memory usage, which has been well applied to the spatial information processing system of space science satellites.
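      The abstract does not give the index layout, but the idea of a two-layer joint index, mapping a query first to an index table and then to source packet data units, can be illustrated with a minimal sketch. Names such as apid, timestamp, and offset are illustrative assumptions, not the paper's structures:

      from bisect import bisect_left, bisect_right
      from collections import defaultdict

      class TwoLayerIndex:
          """Minimal sketch of a two-layer joint index: layer 1 keys source
          packets by payload/APID, layer 2 orders them by time. Not optimized."""
          def __init__(self):
              # apid -> sorted list of (timestamp, offset into the raw packet store)
              self._layer = defaultdict(list)

          def add_packet(self, apid, timestamp, offset):
              self._layer[apid].append((timestamp, offset))
              self._layer[apid].sort()          # keep per-APID entries time-ordered

          def query(self, apid, t_start, t_end):
              """Offsets of source packet data units for one payload within a
              time window, without scanning the raw telemetry stream."""
              entries = self._layer[apid]
              lo = bisect_left(entries, (t_start, -1))
              hi = bisect_right(entries, (t_end, float("inf")))
              return [offset for _, offset in entries[lo:hi]]

      # usage sketch: index.add_packet(apid=0x3F, timestamp=12.5, offset=4096)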
       
      Web service composition strategy and implementation
      of composition decision-making platform
      GUO Xue1,LI Zheng2,ZHANG He3,RONG Guoping3,WEN Junhao1
      2018, 40(08): 1358-1365. doi:
      Because of different focuses and diverse objective circumstances, the existing Web Service Composition (WSC) methods are complicated and diverse. One of the key challenges in WSC is how to identify the most feasible and efficient composition strategy from a set of candidates. Therefore, we focus on a decision-making mechanism that helps people choose suitable WSC approaches. Considering that the Analytic Hierarchy Process (AHP) can solve multi-criteria problems, this paper analyzes and summarizes previous studies and proposes a decision-making mechanism for choosing the best WSC method among several alternatives. Finally, to help WSC practitioners understand and use this decision-making mechanism, we also design and implement a decision-making platform, easyWSC, which is now open for use, discussion, and improvement by relevant researchers and practitioners.
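      To make the AHP step concrete, the priority vector over candidate WSC methods can be derived from a pairwise comparison matrix via its principal eigenvector. The 3×3 matrix below is an illustrative assumption, not data from the paper:

      import numpy as np

      # Illustrative pairwise comparison matrix for three candidate WSC methods
      # under one criterion (values assumed, on Saaty's 1-9 scale).
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      # Principal eigenvector = AHP priority vector.
      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()

      # Consistency check (random index RI = 0.58 for n = 3).
      n = A.shape[0]
      CI = (eigvals.real[k] - n) / (n - 1)
      CR = CI / 0.58
      print("priority vector:", w, "consistency ratio:", CR)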
       
      GSAT algorithm based on task allocation and
      scheduling for solving the 3-SAT problem
      FU Huimin1,2,XU Yang2,HE Xingxing2,NING Xinran1,2
      2018, 40(08): 1366-1374. doi:
      Drawing on the allocation strategies used in cloud computing task allocation and scheduling, a new algorithm, a GSAT algorithm based on task allocation and scheduling for solving the 3-SAT problem, is proposed. The algorithm forms a task for each variable in the 3-SAT problem. Based on the GSAT algorithm, task allocation and scheduling is introduced to guide the greedy search. Meanwhile, under the premise of keeping the original greedy search, two new strategies (an allocation strategy and a scheduling strategy) are designed according to the idea of task allocation and scheduling and the characteristics of the 3-SAT problem, and they together complete the whole greedy search process. In the experiments, 3 700 standard Uniform Random 3-SAT problems from the SATLIB library, with the number of variables ranging from 20 to 250, were used to evaluate the performance of the new algorithm. Moreover, the new algorithm was compared with improved GSAT algorithms of both high and average performance. Experimental results show that the new algorithm achieves a higher success rate with fewer changes.
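      For reference, the baseline GSAT procedure that the paper extends flips, in each step, the variable that maximizes the number of satisfied clauses. The sketch below shows only that baseline, not the paper's allocation and scheduling strategies:

      import random

      def gsat(clauses, n_vars, max_tries=10, max_flips=1000):
          """Plain GSAT for CNF clauses given as tuples of signed literals,
          e.g. (1, -2, 3) means x1 or not x2 or x3."""
          def num_satisfied(assign):
              return sum(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

          for _ in range(max_tries):
              assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
              for _ in range(max_flips):
                  if num_satisfied(assign) == len(clauses):
                      return assign                      # satisfying assignment found
                  # greedily flip the variable giving the largest gain
                  best_var, best_score = None, -1
                  for v in range(1, n_vars + 1):
                      assign[v] = not assign[v]
                      score = num_satisfied(assign)
                      assign[v] = not assign[v]
                      if score > best_score:
                          best_var, best_score = v, score
                  assign[best_var] = not assign[best_var]
          return None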
       
      Fluid modeling and simulation using
      CUDA-based weakly compressible SPH
      DUAN Xingfeng1,2,REN Hongxiang1,SHEN Helong1
      2018, 40(08): 1375-1382. doi:
      In order to realize the realtime and realistic simulation of smallscale fluid scenes, the Weakly Compressible SPH (WCSPH) method is used to model fluids and a hybrid CPUGPU framework is proposed for fluid calculation. Aiming at the problem that the neighbor particle search algorithm affects the computational efficiency of the fluid, we propose a 3D spatial grid partition algorithm and parallel counting sort algorithm to increase speed for searching neighbour particles. Lastly, we propose a parallel Marching Cubes algorithm based on CUDA to triangulate the isosurface of fluids, and use cube map technology to represent the effect of reflection and refraction, thus achieving fluid surface shading. The experiments show that the proposed fluid modeling and simulation method can realize the realtime calculation and rendering of smallscale fluid scenes and draw the dynamic effects of water flow, overturning and the wood swaying in the water. When the number of particles reaches 1 048 576, the parallel method based on GPU achieves the speedup of 60.7 in comparison to the sequential method based on CPU.
       
      An investment-defined transaction processing optimization
      approach with collaborative storage and computation adaptation
      DUAN Yucong1,SHAO Lixu1,CAO Buqing2,SUN Xiaobing3,QI Lianyong4
      2018, 40(08): 1383-1389. doi:
      Transaction processing technology is a key technology for ensuring information consistency and reliability, and it determines whether Web services can be applied to e-commerce. Typed resources such as data, information, and knowledge are complicated and redundant, resulting in low storage and processing efficiency. Long transactions often last a long time, so the strategy of locking resources cannot always be applied. We propose an investment-defined transaction processing approach for temporal and spatial optimization with collaborative storage and computation adaptation. In terms of resource modeling, resource processing, processing optimization, and resource management, we propose a three-layer solution architecture that can be automatically abstracted and adjusted, built by extending the existing concepts of the knowledge graph. The architecture comprises three layers: the data graph, the information graph, and the knowledge graph. The key lies in calculating the type-transfer cost and the storage cost of the target resource objects in the resource storage space, and in adjusting the search mechanism and storage scheme of the target resource objects according to users' investment, thus reducing the temporal complexity of resource searching and the spatial complexity of resource storage and optimizing the overall temporal and spatial efficiency.
       
      Web service reliability prediction via
      collaborative filtering and Slope One algorithm
      WANG Lei,QU Jiaming
      2018, 40(08): 1390-1397. doi:
      Web service reliability prediction has become a research hotspot in the field of service computing. To enhance the performance of existing Web service reliability prediction methods, two prediction methods are proposed. Firstly, for the Web service reliability prediction method based on collaborative filtering, we improve the calculation of user similarity, service similarity, and prediction values. Secondly, we integrate the k-means clustering algorithm and the Slope One algorithm to perform Web service reliability prediction. Experimental results show that the proposed methods achieve higher prediction accuracy than existing methods.
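      As a reference point for the second method, the core of the weighted Slope One predictor is the average deviation between item pairs. A minimal sketch, assuming reliability records are stored as a user-to-service dictionary (a structure assumed for illustration):

      def slope_one_predict(ratings, user, target):
          """Weighted Slope One prediction of ratings[user][target].
          ratings maps user -> {service: observed reliability value}."""
          num, den = 0.0, 0
          for other in ratings[user]:
              if other == target:
                  continue
              # average deviation dev(target, other) over users who rated both
              diffs = [r[target] - r[other] for r in ratings.values()
                       if target in r and other in r]
              if diffs:
                  dev = sum(diffs) / len(diffs)
                  num += (ratings[user][other] + dev) * len(diffs)
                  den += len(diffs)
          return num / den if den else None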
       
      An intrusion detection method based on
      extreme learning machine and modified K-means
      WANG Linlin1,LIU Jinghao1,FU Xiaomei2
      2018, 40(08): 1398-1404. doi:
      Intrusion detection systems are essential to protecting network security. However, it is hard for a traditional single algorithm to attain satisfactory detection results for different attack classes. To solve this problem, this paper proposes an intrusion detection method based on Extreme Learning Machine (ELM) and modified K-means. The ELM algorithm is optimized with the Parametric Rectified Linear Unit (PReLU) activation function. The modified K-means algorithm can automatically select the initial cluster centroids and the number of clusters by setting a distance threshold. A hybrid intrusion detection method is then designed by cascading the improved ELM and the modified K-means. The experimental results on the NSL-KDD dataset show that, compared with traditional algorithms such as the BP neural network, Support Vector Machine (SVM), and ELM, the proposed method improves the detection results and reduces the false alarm rate.
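      The two modifications mentioned above can be sketched compactly; the exact threshold-based initialization and PReLU parameters used in the paper are not given, so the code below only illustrates the general idea:

      import numpy as np

      def threshold_kmeans_init(X, dist_threshold):
          """Pick initial centroids so that every new centroid is farther than
          dist_threshold from all previously chosen ones; the number of clusters
          then falls out automatically (a sketch of the idea in the abstract)."""
          centroids = [X[0]]
          for x in X[1:]:
              if all(np.linalg.norm(x - c) > dist_threshold for c in centroids):
                  centroids.append(x)
          return np.array(centroids)

      def prelu(z, a=0.25):
          """Parametric ReLU activation used to optimize the ELM hidden layer."""
          return np.where(z > 0, z, a * z)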
       
      An improved vertex cover algorithm
       based on kernelization
      LUO Weizhong,CAI Zhaoquan
      2018, 40(08): 1405-1411. doi:
      Vertex cover is a well-known NP-hard problem with important applications in many fields such as communication networks and bioinformatics. Current research mainly studies it through heuristic or approximation algorithms, which cannot achieve global optimality. Kernelization is a new approach to dealing with hard problems. We propose an algorithm framework combining heuristic operations with kernelization operations and apply the kernelization technique to optimizing heuristic vertex cover algorithms. The kernelization operation identifies a set of vertices that belong to some optimal solution, while the heuristic operation changes the network topology, which guarantees that the kernelization operation can be executed successfully in the next loop. The optimization is achieved by interleaving the kernelization and heuristic operations. Simulation results show that the proposed algorithm achieves various degrees of optimization on different networks. Moreover, it obtains the optimal solution on almost all sparse network instances.
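      To illustrate what a kernelization operation looks like, the sketch below implements one classic vertex-cover reduction rule (the degree-one rule); it is a textbook rule, not necessarily the paper's full rule set:

      def degree_one_rule(adj):
          """If vertex v has a single neighbor u, some optimal cover contains u,
          so put u into the cover and delete it from the graph.
          adj maps vertex -> set of neighbors and is modified in place."""
          cover = set()
          changed = True
          while changed:
              changed = False
              for v in list(adj):
                  if len(adj[v]) == 1:
                      (u,) = adj[v]
                      cover.add(u)
                      for w in list(adj[u]):       # remove u and its incident edges
                          adj[w].discard(u)
                      del adj[u]
                      if v in adj and not adj[v]:  # v may now be isolated
                          del adj[v]
                      changed = True
                      break
          return cover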
       
      A device-free indoor localization algorithm
      based on line-of-sight path identification
      YAN Shuping,DUAN Guihua,ZHANG Shigeng
      2018, 40(08): 1412-1419. doi:
      The widespread application and deployment of Wi-Fi technology has spawned many Wi-Fi-based indoor positioning technologies. In recent years, Wi-Fi-based device-free positioning algorithms have attracted the attention of researchers. A device-free positioning algorithm does not require the target to carry a wireless transmission device; instead, it estimates the target's position by measuring the impact of the target on wireless signal transmission. Because it does not need the target to carry any equipment, it can be widely used in scenarios such as elderly health care. Existing device-free positioning technologies usually require training data to be acquired in advance, so they are easily affected by complex and changeable indoor environments, which degrades localization accuracy. This paper proposes a device-free localization algorithm based on line-of-sight path identification. Using Channel State Information (CSI), we can determine whether the path between a pair of wireless transceivers is a Line-of-Sight (LoS) path. On this basis, we propose a new device-free positioning algorithm. The algorithm deploys a set of Wi-Fi transceivers in the monitoring area. For any pair of wireless devices, we determine whether the target object is within the Fresnel zone of the pair by identifying whether a line-of-sight path exists between them. We then propose a polling-based approach to obtain the most probable location of the target. Experimental results on actual equipment show that the proposed method achieves an accuracy of about 0.5 meters, requires no prior training, and has higher real-time performance.
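      The Fresnel-zone test above relies on textbook Fresnel-zone geometry rather than anything specific to the paper: for a transmitter-receiver pair, the radius of the n-th Fresnel zone at a point dividing the direct path into segments d_1 and d_2 is

      r_n = \sqrt{\frac{n \lambda d_1 d_2}{d_1 + d_2}}

      where \lambda is the carrier wavelength (about 12.5 cm for 2.4 GHz Wi-Fi). A target inside the first zone measurably perturbs the received CSI, which is what the LoS identification exploits.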
       
      Detection of Android phishing sites
      based on revised naive Bayes
      MA Gang,LIU Feng,ZHU Erzhou
      2018, 40(08): 1420-1428. doi:
      With the rapid development of the mobile Internet, phishing attacks are becoming more common on mobile phones. This paper proposes an improved naive Bayes algorithm to detect phishing sites. Firstly, to ensure data integrity during data collection, we fill in missing attribute values with the K-means algorithm to obtain a complete data set. Secondly, to eliminate the biased estimation of the Bayes algorithm, we appropriately scale up the probabilities so as to resolve the underflow problem. Thirdly, to avoid neglecting the relationships between attributes, we weight different attribute values so as to improve detection accuracy. Lastly, because phishing sites occur with low probability in practice, we adjust the probability ratio of phishing sites to trusted sites to further improve detection accuracy. Experiments are conducted on an Android 5.0 mobile phone. The experimental results show that the improved naive Bayes algorithm can effectively detect phishing attacks on mobile phones with relatively low time overhead.
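      The underflow issue mentioned above arises from multiplying many small likelihoods. The paper resolves it by scaling probabilities; a common alternative, shown here purely as an illustration, is to score classes in log space, with attribute weights and a re-balanced class prior. All data structures are assumptions:

      import math

      def weighted_nb_score(x, cls, likelihood, prior, weights):
          """Log-space score of class cls for feature vector x.
          likelihood[cls][i][v] is P(attribute i = v | cls), prior[cls] is the
          (possibly re-balanced) class prior, weights[i] is the attribute weight."""
          score = math.log(prior[cls])
          for i, v in enumerate(x):
              p = likelihood[cls][i].get(v, 1e-6)   # small floor instead of zero
              score += weights[i] * math.log(p)
          return score

      def classify(x, classes, likelihood, prior, weights):
          return max(classes,
                     key=lambda c: weighted_nb_score(x, c, likelihood, prior, weights))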
       
      Multi-feature combined depth
      image segmentation algorithm
      TAN Zhiguo1,2,OU Jianping1,ZHANG Jun1,SHEN Xiangeng2
      2018, 40(08): 1429-1434. doi:
      Depth images directly reflect the three-dimensional geometric information of scene surfaces and are not affected by factors such as light and shadow. Processing, recognizing, and understanding depth images is currently one of the hot topics in the field of three-dimensional computer vision. Aiming at the problems that depth images carry limited information and contain strong noise, a threshold segmentation algorithm based on combined features is proposed to realize effective segmentation of depth image data. The algorithm first performs Otsu threshold segmentation on the image using gradient features. On this basis, Otsu multi-threshold segmentation is performed using depth features in the different segmented regions to obtain candidate targets. Then, in the spatial domain, the depth feature is used to segment, merge, and denoise the candidate targets, finally yielding the segmentation results. Experimental results show that this method can effectively overcome the influence of noise in depth images, the obtained boundaries of the segmented regions are accurate, and the segmentation quality is high, which lays a good foundation for subsequent indoor object recognition and scene understanding.
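      Otsu's threshold, which the method applies first to the gradient feature and then region by region to the depth feature, maximizes the between-class variance of a histogram. A compact single-threshold sketch (the paper's multi-threshold variant extends the same criterion):

      import numpy as np

      def otsu_threshold(values, bins=256):
          """Single Otsu threshold maximizing between-class variance."""
          hist, edges = np.histogram(values, bins=bins)
          p = hist.astype(float) / hist.sum()
          centers = (edges[:-1] + edges[1:]) / 2
          best_t, best_var = centers[0], -1.0
          for k in range(1, bins):
              w0, w1 = p[:k].sum(), p[k:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              m0 = (p[:k] * centers[:k]).sum() / w0
              m1 = (p[k:] * centers[k:]).sum() / w1
              var_between = w0 * w1 * (m0 - m1) ** 2
              if var_between > best_var:
                  best_var, best_t = var_between, centers[k]
          return best_t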
       
      A salient object detection method based on
       background suppression improvement
      CUI Liqun,ZHAO Yue,WU Xiaodong,WEI Kefei,LIU Chen
      2018, 40(08): 1435-1443. doi:
      Aiming at the low accuracy of salient object detection under complex backgrounds, an improved Conditional Random Field (CRF) salient object detection method is proposed by using the Hypercomplex Fourier Transform (HFT). Firstly, the method builds undirected graphs and extracts node features from the image. Then, the HFT is used to reconstruct a smoothed amplitude spectrum and phase spectrum, from which the background suppression weights of the graph nodes are obtained, and multi-scale Gaussian-kernel background suppression maps are preliminarily determined. Finally, the maps are input into the trained Conditional Random Field, and the final salient target region is obtained by enhancing the target representation. Experimental results show that the proposed method has clearly higher accuracy than existing methods and can suppress complex backgrounds while accurately locking onto the specified target region. Experiments verify that the proposed salient object detection method has good accuracy and robustness under complex backgrounds.
       
      An adaptive video streaming transmission
       algorithm based on fuzzy control
      HOU Yonghong,XING Jiaming,WANG Liwei
      2018, 40(08): 1444-1452. doi:
      The Dynamic Adaptive Streaming over HTTP (DASH) transmission protocol enables users to select a suitable video quality according to their terminal display capabilities and channel conditions, and it represents the development direction of network video service technology. How to adaptively select the video bit rate to obtain the best Quality of Experience (QoE) as network throughput changes has not been well solved in existing DASH systems. In this paper, an adaptive transmission algorithm based on fuzzy control is proposed, which takes the buffered video time and the mismatch between the video bitrate and the network throughput as inputs and the expected buffer change as output. Fuzzy logic is used to achieve the following control objectives: (1) the buffered video time stays within a safe range; (2) the average quality of the transmitted video is maximized; and (3) playback interruptions caused by bandwidth fluctuations are avoided. Finally, performance tests were conducted in two virtual network environments and two actual network environments. The experimental results show that the proposed algorithm is superior to traditional algorithms in terms of QoE.
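      The controller structure described above (two fuzzy inputs, one fuzzy output) can be sketched as a tiny Mamdani-style controller. The membership functions and rules below are assumptions chosen only to illustrate the mechanism, not the paper's tuned design:

      def tri(x, a, b, c):
          """Triangular membership function on [a, c] with peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def fuzzy_buffer_change(buffer_s, mismatch):
          """Inputs: buffered video time (s) and normalized bitrate/throughput
          mismatch (positive means bitrate exceeds throughput).
          Output: desired buffer change (s); positive pushes toward a lower bitrate."""
          low, ok, high = tri(buffer_s, 0, 5, 15), tri(buffer_s, 5, 15, 25), tri(buffer_s, 15, 30, 60)
          over, matched, under = tri(mismatch, 0, 0.5, 1), tri(mismatch, -0.5, 0, 0.5), tri(mismatch, -1, -0.5, 0)
          # rule strength -> output singleton (seconds of buffer to gain or spend)
          rules = [(min(low, over),    +8),   # buffer low, bitrate too high: rebuffer hard
                   (min(low, matched), +4),
                   (min(ok, matched),   0),
                   (min(high, matched), -2),
                   (min(high, under),  -4)]   # buffer high, bitrate too low: spend buffer
          num = sum(w * out for w, out in rules)
          den = sum(w for w, _ in rules)
          return num / den if den else 0.0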
       
      A remote sensing image fusion algorithm based on
       guided filtering and shearlet sparse basis
      WANG Wei1,2,ZHANG Jiae1,2
      2018, 40(08): 1453-1458. doi:

      Aiming at the problem that the spatial resolution and spectral resolution of remote sensing images cannot both be preserved, we propose a remote sensing image fusion algorithm based on a shearlet sparse basis and guided filtering, combining multi-scale transforms with sparse representation. Based on the IHS fusion model, we adopt guided filtering for the fitting process. Then the brightness image and the panchromatic image are decomposed by the shearlet transform to obtain the high- and low-frequency sub-band coefficients of the images. The low-frequency sub-images are sparsely coded to obtain the optimal sparse coefficients, and fusion is performed according to the rule that image blocks with larger activity degrees are preferred. The corresponding high-frequency sub-images are fused based on regional energy and regional variance, and the fusion result is obtained via the inverse shearlet transform. Experimental results show that the proposed algorithm improves image sharpness and spectral retention, and it outperforms other algorithms in image integrity and detail.

      An adaptive multiphase image
      segmentation algorithm based on SLIC
      GUO Wei,LI Hongda,XING Yuzhe
      2018, 40(08): 1459-1467. doi:
      We propose an adaptive multiphase image segmentation algorithm based on SLIC to solve the problem of interactive segmentation of multiphase images. Firstly, the new algorithm uses the SLIC method to segment the image into superpixels of similar size and regular shape. The five-dimensional feature values of the superpixel center points are taken as the original data points and clustered by the DBSCAN algorithm with adaptive parameters, which yields the segmentation boundaries and the number of phases. The algorithm can thus determine the number of segments adaptively without requiring user interaction. To verify the effectiveness of the proposed algorithm, we compare it with the manual annotations of the segmented images on the Berkeley segmentation benchmark data set (BSDS500). Thanks to the SLIC algorithm, the proposed method can segment a 481×321-pixel image in only 1.5 seconds. Experimental results show that it effectively solves the problem of manual interaction in multiphase image segmentation, and its PRI and VOI indexes are superior to those of traditional algorithms. The algorithm adaptively determines the number of segments and improves segmentation efficiency while maintaining good segmentation quality.
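      The overall pipeline (SLIC superpixels, then DBSCAN over each superpixel's 5-D feature L, a, b, x, y) can be sketched with scikit-image and scikit-learn. The eps and min_samples values here are illustrative; the paper chooses the DBSCAN parameters adaptively:

      import numpy as np
      from skimage import io, color
      from skimage.segmentation import slic
      from sklearn.cluster import DBSCAN

      def multiphase_segment(path, n_segments=400, eps=8.0, min_samples=2):
          """Sketch: SLIC superpixels, then DBSCAN on per-superpixel (L, a, b, x, y)."""
          img = io.imread(path)                       # assumes an RGB image
          labels = slic(img, n_segments=n_segments, compactness=10)
          lab = color.rgb2lab(img)
          ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]

          ids = np.unique(labels)
          feats = []
          for sp in ids:
              m = labels == sp
              feats.append([lab[..., 0][m].mean(), lab[..., 1][m].mean(),
                            lab[..., 2][m].mean(), xs[m].mean(), ys[m].mean()])
          phases = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.array(feats))

          # map each superpixel to its phase to get the multiphase segmentation
          phase_map = np.zeros_like(labels)
          for sp, ph in zip(ids, phases):
              phase_map[labels == sp] = ph
          return phase_map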
       
      Selecting attributes and granularity
      for data with test cost constraint
       
      LIAO Shujiao1,2,ZHU Qingxin1,LIANG Rui 1
      2018, 40(08): 1468-1474. doi:
      Test cost and misclassification cost are commonly considered in cost-sensitive learning. In real applications, the test cost of a feature is often related to the granularity of attribute values, and the misclassification cost of an object with multiple attributes is usually influenced by the total test cost of those attributes. Based on this consideration, this paper studies the selection of attributes and data granularity when the total test cost is constrained. Aiming at minimizing the average total cost of data processing, a method is proposed to choose the optimal attribute subset and the optimal data granularity simultaneously. We first construct the theoretical model of the proposed method and then design an efficient algorithm. Experimental results show that the proposed algorithm can effectively select the attributes and the data granularity under different test cost constraints.
       
      Pedestrian head detection based on deep neural networks
      TAO Zhu,LIU Zhengxi,XIONG Yunyu,LI Zheng
      2018, 40(08): 1475-1481. doi:
      Pedestrian detection has become the core technology that security, intelligent video surveillance, and pedestrian flow statistics in scenic areas depend on. The latest object detection methods, such as the Fast Region-based Convolutional Neural Network (Fast R-CNN), Faster R-CNN, the Single Shot MultiBox Detector (SSD), and Deformable Part Models (DPM), are currently the classic algorithms for object detection. However, these algorithms focus on detecting whole pedestrians. In large scenes, pedestrians appear in different postures and are frequently partially occluded, so accurate positioning can only be achieved by modeling part of the pedestrian's body and grasping local features. We adopt the Faster R-CNN deep network as the prototype, build a detection model for pedestrian heads, extract head features in different orientations, and add a spatial pyramid pooling layer to ensure the detection rate. This approach effectively alleviates the partial occlusion problem of pedestrians in large scenes, clearly shows the general flow direction of pedestrians, and is more conducive to flow statistics than ordinary head estimation.
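      For orientation, adapting a generic Faster R-CNN detector to a single "head" class looks roughly like the sketch below using torchvision. This is the off-the-shelf detector only; the paper's network additionally extracts multi-orientation head features and adds a spatial pyramid pooling layer, which are not shown here:

      import torchvision
      from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

      def build_head_detector(num_classes=2):        # background + head
          """Replace the box predictor of a pretrained Faster R-CNN so it
          outputs a single foreground class (pedestrian head)."""
          model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
          in_features = model.roi_heads.box_predictor.cls_score.in_features
          model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
          return model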
       
       
      Traffic state prediction for mobile crowdsensing
      networks based on CSA-SSVR
      XIA Zhuoqun1,2,3,LUO Junpeng1,2,HU Zhenzhen1,2
      2018, 40(08): 1482-1487. doi:
      Compared with traditional sensor networks, mobile crowdsensing networks have a great advantage in deployment and maintenance costs, and are thus becoming more and more popular in intelligent transportation systems. Prediction of the traffic state is significant for traffic management systems. We exploit the travel speed data collected in a mobile crowdsensing network to predict the future speed of vehicles. Combining a seasonal adjustment algorithm with the Support Vector Regression (SVR) model, and using the Cuckoo Search Algorithm (CSA) to determine the main parameters of the seasonal SVR (SSVR), we propose a prediction model called CSA-SSVR to estimate future road traffic states. Experimental results show that this model achieves good accuracy for traffic state prediction in the mobile crowdsensing network.
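      The seasonal-SVR idea can be sketched as follows: remove a periodic speed profile, fit an SVR on lagged residuals, forecast one step, and add the seasonal term back. In the paper the SVR hyperparameters would be tuned by the cuckoo search algorithm; fixed values are used in this sketch:

      import numpy as np
      from sklearn.svm import SVR

      def seasonal_svr_forecast(speeds, period, lags=6, C=10.0, epsilon=0.1, gamma="scale"):
          """One-step-ahead speed forecast with a seasonal profile plus SVR residual model."""
          speeds = np.asarray(speeds, dtype=float)
          season = np.array([speeds[i::period].mean() for i in range(period)])
          resid = speeds - season[np.arange(len(speeds)) % period]

          # lagged residuals as features, next residual as target
          X = np.array([resid[i - lags:i] for i in range(lags, len(resid))])
          y = resid[lags:]
          model = SVR(kernel="rbf", C=C, epsilon=epsilon, gamma=gamma).fit(X, y)

          next_resid = model.predict(resid[-lags:].reshape(1, -1))[0]
          return next_resid + season[len(speeds) % period]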