  • A journal of the China Computer Federation (CCF)
  • China Science and Technology Core Journal
  • Chinese Core Journal

Current Issue

    • A service running data anomaly detection method based on
      weighted LOF and context judgment in cloud environment
      QIU Kai1,2,JIANG Ying1,2
      2020, 42(06): 951-958. doi:
      Abstract ( 137 )   PDF (601KB) ( 204 )
      Service running data reflects the service running state in the cloud environment. Anomalies in service running data affect the operation of related software and its users. Traditional software anomaly detection methods usually neglect both the amount of information provided by the different dimension attributes of running data and the context in which the software runs, so their detection is inaccurate. Therefore, a service running data anomaly detection method based on weighted LOF and context judgment in the cloud environment is proposed. Firstly, the dimension attributes of running data are weighted by the information entropy method, and the weighted LOF algorithm performs the first anomaly detection on the running data. Secondly, the context information of the service at runtime is considered comprehensively, and the corresponding results are obtained after the second anomaly detection. Experiments show that this method can effectively detect service running data anomalies in the cloud environment.
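      The entropy-weighting step can be sketched as follows. This is a minimal illustration with hypothetical data: the standard entropy weight method scores each dimension, and a simplified k-nearest-neighbour mean-distance score stands in for the full LOF computation.

```python
import math

def entropy_weights(data):
    """Standard entropy weight method: dimensions whose values are more
    dispersed (lower normalized entropy) are more informative and
    receive larger weights."""
    n, n_dims = len(data), len(data[0])
    ent = []
    for d in range(n_dims):
        col = [row[d] for row in data]
        total = sum(col)
        probs = [v / total for v in col if v > 0]
        # Normalized Shannon entropy of the column.
        ent.append(-sum(p * math.log(p) for p in probs) / math.log(n))
    rest = [1 - e for e in ent]
    return [r / sum(rest) for r in rest]

def weighted_outlier_scores(data, k=3):
    """Simplified stand-in for weighted LOF: score each point by its
    mean weighted distance to its k nearest neighbours."""
    w = entropy_weights(data)
    def dist(a, b):
        return math.sqrt(sum(wi * (x - y) ** 2 for wi, x, y in zip(w, a, b)))
    scores = []
    for i, p in enumerate(data):
        d = sorted(dist(p, q) for j, q in enumerate(data) if j != i)
        scores.append(sum(d[:k]) / k)
    return scores

# Four normal running-data samples and one obvious anomaly (made-up values).
metrics = [[0.2, 0.5], [0.25, 0.48], [0.22, 0.52], [0.21, 0.5], [5.0, 4.0]]
scores = weighted_outlier_scores(metrics, k=3)
```

      The anomalous last sample receives the largest score; the paper's second, context-based judgment stage is not modelled here.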
       
      A new task offloading and resource allocation
      strategy based on mobile edge computing
      XUE Jian-bin,AN Ya-ning
      2020, 42(06): 959-965. doi:
      Abstract ( 180 )   PDF (675KB) ( 237 )
      Aiming at the problems of large system overhead and obvious delay jitter when offloading intensive tasks in Mobile Edge Computing (MEC), a new resource allocation strategy is proposed. Firstly, the strategy analyzes the system's task execution overhead and the terminal equipment's resource allocation mechanism under system delay constraints. Secondly, a joint convex optimization problem of computation offloading and task assignment is established. Finally, the Lagrangian multiplier method is used to obtain the optimal solution by iterative updating. Simulation results show that the proposed task offloading and resource allocation strategy ensures the user's quality of service while reducing the task execution overhead, and effectively improves the performance of the MEC system.
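      The Lagrangian-multiplier iteration can be illustrated on a deliberately simplified allocation problem (the model and numbers are hypothetical, not the paper's): minimize the total delay sum(w_i / f_i) of offloaded tasks subject to a shared resource budget sum(f_i) <= F, solved by dual ascent.

```python
import math

def allocate(workloads, F, steps=5000, lr=0.01):
    """Dual ascent for: min sum(w_i/f_i)  s.t.  sum(f_i) <= F.
    For a fixed multiplier lam, stationarity of the Lagrangian
    w/f + lam*f gives f_i = sqrt(w_i / lam); lam is then updated
    along the constraint violation (a subgradient step)."""
    lam = 0.1  # initial multiplier
    for _ in range(steps):
        f = [math.sqrt(w / lam) for w in workloads]
        lam = max(1e-9, lam + lr * (sum(f) - F))
    return f

f = allocate([1.0, 4.0, 9.0], F=6.0)
# At the optimum the budget is fully used and f_i grows with sqrt(w_i).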
       
      An interpolation based outlier detection
      method of sparse high-dimensional data
      CHEN Wang-hu,TIAN Zhen,ZHANG Li-zhi,LIANG Xiao-yan,GAO Ya-qiong
      2020, 42(06): 966-972. doi:
      Abstract ( 114 )   PDF (577KB) ( 131 )
      The data in an outlier detection problem can be considered a mixture of normal and abnormal points in a space. Under the premise of reducing the loss of normal points, outliers are usually contained in the sample sets farthest from all clustering centroids. Inspired by this idea, this paper proposes an interpolation-based outlier detection method for sparse high-dimensional data. The method interpolates the original data by applying a genetic algorithm on the basis of k-means clustering, solving the problem that sparse data in k-means clustering is prone to being merged. Experimental results show that, compared with traditional outlier detection methods based on k-means clustering and several typical detection methods based on improved k-means clustering, the proposed method not only loses fewer normal points but also improves the accuracy and precision of detection.
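      The underlying premise — outliers lie among the samples farthest from all clustering centroids — can be sketched with a minimal k-means. This is an illustration on made-up 2-D points; the paper's genetic-algorithm interpolation step is omitted.

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means; centroids start at the first k points
    so the run is deterministic."""
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def detect_outliers(points, k=2, top=1):
    """Rank points by their distance to the nearest centroid and
    return the `top` farthest ones as outlier candidates."""
    cs = kmeans(points, k)
    return sorted(points, key=lambda p: -min(dist2(p, c) for c in cs))[:top]

pts = [(0, 0), (0, 1), (1, 0), (1, 1),
       (10, 10), (10, 11), (11, 10), (11, 11), (5, 6)]
outliers = detect_outliers(pts, k=2, top=1)
```

      The isolated point (5, 6) between the two dense groups is flagged, since it is the sample farthest from both cluster centres.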
       
      WSN energy optimization on
      BIRCH based dual head clustering
      LUO Qing-yi1,ZHANG Jiang2,ZHANG Jing1,3,4,WANG Jian-min5
      2020, 42(06): 973-983. doi:
      Abstract ( 90 )   PDF (947KB) ( 103 )     


      An overlapping multiway spectral community
      detection method for attributed network
      LI Qing-qing1,MA Hui-fang1,2,WU Yu-ze3,LIU Hai-jiao1
      2020, 42(06): 984-992. doi:
      Abstract ( 115 )   PDF (887KB) ( 145 )
      Spectral community detection algorithms generally divide the network by structure alone, are often limited in the number of divisions, and make it difficult to control the degree of overlap. This paper designs an overlapping multiway spectral community detection algorithm for attributed networks, which can divide an attributed network into any number of overlapping communities and effectively discover outliers. Firstly, a partition mapping from modularity maximization to node vectorization is designed based on weighted modularity. Secondly, an initial selection and merging strategy for cluster center vectors in the attributed network is given. Thirdly, a node allocation strategy is designed that calculates the inner product of each node vector and each clustering center vector and assigns the node to the community with the highest inner product. Finally, tightly structured overlapping communities that contain outliers are effectively detected. In addition, applying the algorithm to multiple real-world networks verifies its effectiveness and efficiency.
       
      A feature frequency differential enhancement algorithm
      for static detection of Android malicious applications
       
      LI Xiang-jun1,2,KONG Ke2,WEI Zhi-xiang1,WANG Ke-xuan1,XIAO Ju-xin1
      2020, 42(06): 993-1002. doi:
      Abstract ( 146 )   PDF (1875KB) ( 137 )
      With the rapid growth of the number of Android applications, security detection of Android applications has become one of the hot issues in the field of network security. Aiming at feature selection in the static detection of malicious applications, this paper defines the concepts of benign feature, malicious feature, benign typical feature, malicious typical feature and atypical feature, and designs the feature Frequency Differential Enhancement (FDE) algorithm. The FDE algorithm eliminates the atypical features among the static features by calculating the frequency of each feature in benign and malicious applications. In order to verify the effectiveness and performance of the FDE algorithm, experiments on balanced and imbalanced data are designed, and a weighted loss function is introduced for the imbalanced-data experiments. Experimental results show that the FDE algorithm can effectively remove atypical features from static features and screen out valid features, and that the weighted loss function can effectively improve the recognition rate of malicious data in imbalanced data.
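      The frequency-differential idea can be sketched directly: keep features whose occurrence frequencies in benign and malicious apps differ enough, and drop the rest as atypical. The threshold and the toy permission sets below are assumptions for illustration, not the paper's values.

```python
def feature_frequencies(apps, feature_space):
    """Fraction of apps in which each static feature (e.g. a permission
    or API call) appears."""
    n = len(apps)
    return {f: sum(f in a for a in apps) / n for f in feature_space}

def fde_select(benign_apps, malicious_apps, threshold=0.3):
    """Sketch of the Frequency Differential Enhancement idea: a feature
    whose benign and malicious frequencies are similar carries little
    discriminative information and is treated as atypical."""
    space = set().union(*benign_apps, *malicious_apps)
    fb = feature_frequencies(benign_apps, space)
    fm = feature_frequencies(malicious_apps, space)
    return {f for f in space if abs(fb[f] - fm[f]) > threshold}

benign = [{"INTERNET"}, {"INTERNET", "CAMERA"}]
malicious = [{"INTERNET", "SEND_SMS"}, {"INTERNET", "SEND_SMS", "READ_SMS"}]
kept = fde_select(benign, malicious)
# SEND_SMS (malicious-typical) survives; INTERNET, equally frequent on
# both sides and hence atypical, is removed.
```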
       
      Decentralized attribute-based undeniable signature
      WEI Liang1,2,HUANG Zhen-jie1,CHEN Qun-shan2
      2020, 42(06): 1003-1011. doi:
      Abstract ( 151 )   PDF (483KB) ( 157 )
      Combining the concepts of attribute-based signatures, undeniability and decentralization, a new notion of decentralized attribute-based undeniable signature, together with a formal security model, is proposed, and a concrete pairing-free scheme based on the discrete logarithm problem is constructed. The scheme uses the witness-hiding zero-knowledge proof protocol proposed by Cramer et al. and the Schnorr protocol as the basic Σ-protocols, and uses the Shamir threshold scheme as its secret sharing scheme. The Fiat-Shamir transformation is then applied to obtain a (t,n) threshold signature (BTS) scheme, to which undeniability, collusion resistance and decentralization are added. Finally, a decentralized attribute-based undeniable signature scheme is obtained, and its security is proved in the random oracle model.
       
      Utility and verification of failure data set in SRGM
      ZHANG Ce1,YI Wen-min2,BAI Rui1,SHENG Sheng1,XU Zao-hui1,GAO Tian-yi1,WANG Kan-yu1,SU Jia-yao1
      2020, 42(06): 1012-1020. doi:
      Abstract ( 128 )   PDF (3173KB) ( 131 )
      Aiming at the dependence of parameter fitting and performance evaluation on the FDS (Failure Data Set) in SRGM (Software Reliability Growth Model) research, the function and impact of FDSs on SRGMs are studied, and their drawbacks and release advice are discussed. Firstly, an SRGM performance evaluation flowchart is sketched, and the collected FDSs are described, classified and analyzed. Secondly, 7 typical imperfect-debugging SRGMs are evaluated on 9 FDSs, and the impact of the FDS on SRGM fitting and prediction is analyzed. From the viewpoint of publishers and researchers, the deficiencies in current FDSs are pointed out and suggestions on releasing FDSs are given. The results show that researchers need to fully mine the testing information in FDSs to establish more accurate SRGMs. Finally, the lack of FDSs that describe new types of software structure and contain a large amount of failure information has become a bottleneck restricting the development of SRGMs.

       
      A trustworthiness-based development cost
      allocation algorithm of modular software
      MA Yan-fang1,WANG Meng-yue1,ZHOU Wei1,CHEN Liang2
      2020, 42(06): 1021-1029. doi:
      Abstract ( 102 )   PDF (669KB) ( 119 )
      Modularization is an important method of software development. Developing modules requires a certain cost, and meeting the system trustworthiness goal requires additional cost. Therefore, how to allocate cost to each module so as to optimize software trustworthiness within the development cost given by users is an important research issue. Firstly, according to the relationship between module trustworthiness and cost, a cost estimation model of module trustworthiness is established. Secondly, based on the different ways modules are combined, software trustworthiness and cost allocation models under different software system structures are established, and the corresponding allocation algorithms are designed using dynamic programming. Within the development cost given by users, the proposed algorithms allocate the development cost to each module so as to optimize the trustworthiness of the software system. Finally, a case study of an automatic ticketing system shows the feasibility of the proposed allocation algorithms.
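      A dynamic-programming allocation of this kind can be sketched for the simplest structure, a series system whose trustworthiness is the product of its modules' trustworthiness. The cost tables below are hypothetical stand-ins for the paper's cost estimation model.

```python
def allocate_budget(trust_tables, budget):
    """DP over modules: trust_tables[j][c] is module j's trustworthiness
    when given c cost units; system trustworthiness is the product over
    modules (series structure).  Returns the best (trust, allocation)."""
    best = {0: (1.0, [])}  # spent budget -> (trustworthiness, allocation)
    for table in trust_tables:
        new_best = {}
        for spent, (t, alloc) in best.items():
            for c, tc in enumerate(table):
                b = spent + c
                if b <= budget:
                    cand = (t * tc, alloc + [c])
                    if b not in new_best or cand[0] > new_best[b][0]:
                        new_best[b] = cand
        best = new_best
    return max(best.values(), key=lambda v: v[0])

# Two hypothetical modules: trustworthiness at 0, 1 or 2 cost units each.
tables = [[0.5, 0.8, 0.9], [0.6, 0.9, 0.95]]
trust, alloc = allocate_budget(tables, budget=2)
```

      With a budget of 2 units, splitting the cost evenly (one unit per module) beats concentrating it, because the product rewards balanced improvements.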
       
      A multi-feature stereo matching algorithm
       based on improved Census transform
      OU Yong-dong,XIE Xiao-peng
      2020, 42(06): 1030-1036. doi:
      Abstract ( 186 )   PDF (654KB) ( 153 )     
      Aiming at the problem that the low matching accuracy of current stereo matching algorithms cannot meet practical high-precision requirements, this paper proposes a multi-feature stereo matching algorithm combining an improved Census transform, color information and a gradient measure, so as to realize high-precision binocular stereo matching. Firstly, in the initial cost computation phase, the improved Census transform, color and gradient measures are summed to obtain a reliable initial matching cost. In the aggregation phase, efficient and fast minimum spanning tree aggregation is adopted to obtain the matching cost matrix. Finally, the initial disparity map is obtained according to the winner-takes-all strategy, and the disparity map is optimized by the left-right consistency check to obtain a high-precision disparity map. Experiments on the standard test images provided on the Middlebury website show that the average mismatch rate of the disparity maps on the 15 test data sets is 6.81%. The algorithm also has excellent real-time responsiveness.
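      The classic Census transform underlying the method can be sketched as follows: each pixel becomes a bit string recording which neighbours are darker than it, and matching cost is the Hamming distance between bit strings. The paper's improvement and the fused color/gradient terms are omitted here.

```python
def census(img, r=1):
    """Census transform over a (2r+1) x (2r+1) window: '1' where the
    neighbour is darker than the centre pixel, '0' otherwise.
    Border pixels (without a full window) are left as None."""
    h, w = len(img), len(img[0])
    out = [[None] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = ""
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits += "1" if img[y + dy][x + dx] < img[y][x] else "0"
            out[y][x] = bits
    return out

def hamming(a, b):
    """Matching cost between two Census descriptors."""
    return sum(x != y for x, y in zip(a, b))

left = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
desc = census(left)
# desc[1][1] records which of the 8 neighbours are darker than 50:
# the four above/left are, the four below/right are not -> "11110000".
```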

       
       
      Combining the optimal threshold method and the improved
      Freeman chain code for lung parenchyma segmentation
      WANG Niu-niu,AN Jian-cheng
      2020, 42(06): 1037-1042. doi:
      Abstract ( 139 )   PDF (564KB) ( 161 )
      Accurate segmentation of lung parenchyma in lung CT images is a key step in the detection and diagnosis of lung diseases. Since traditional image segmentation methods are not ideal for lung parenchyma segmentation in CT images, a lung parenchyma segmentation method based on the optimal threshold method and an improved Freeman chain code is proposed. Firstly, the optimal threshold method is used for the initial segmentation of the lung parenchyma, and the lung parenchyma template is further processed. Secondly, the defective template is repaired by the improved Freeman chain code method and Bezier curves. Finally, the lung parenchyma is extracted by multiplying the template with the lung CT images. The method improves image contrast clarity and the consistency of lung parenchyma features, and the average segmentation accuracy reaches 96.8%. The experimental results show that, for marginal nodules and different lung lesions, the method yields an ideal segmentation effect with good accuracy and robustness.
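      The Freeman chain code the method builds on can be sketched in a few lines: an ordered boundary is encoded as a sequence of 8-direction codes, which is the representation one would repair before fitting Bezier curves. The tiny square boundary below is illustrative.

```python
def chain_code(boundary):
    """Freeman 8-direction chain code of an ordered boundary.
    Code 0 is east; codes increase counter-clockwise (y pointing up)."""
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    return [dirs[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

# A unit square traversed counter-clockwise back to its start point.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
code = chain_code(square)
# east, north, west, south -> [0, 2, 4, 6]
```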
       
      An adaptive scale local intensity
      clustering image segmentation model
      ZHAO Ren-he,WANG Jun-feng
      2020, 42(06): 1043-1048. doi:
      Abstract ( 148 )   PDF (672KB) ( 129 )     
      Aiming at the problem that the traditional active contour model cannot accurately segment images with intensity inhomogeneity and is sensitive to scale parameters, an adaptive scale active contour model based on region information is proposed. An adaptive scale operator is constructed based on the local entropy of the image, and the energy function is constructed by using the local intensity clustering property of the image. A linear combination of a set of smooth basis functions is used to represent the bias field, which increases the stability of the model. By minimizing this energy, the proposed model is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction. Experimental results show that, compared with four other models, the proposed model achieves higher segmentation accuracy, and the segmentation result is robust to the initial level set function and to noise in the image.
       
      An image spam filtering method
      based on ensemble learning
      ZHAO Jun-sheng,HOU Sheng,WANG Xin-yu,YIN Yu-jie
      2020, 42(06): 1049-1059. doi:
      Abstract ( 106 )   PDF (805KB) ( 107 )     
      Currently, most image spam filtering technologies adopt a globally shared image spam data set as the training set. This data set lacks updates and exhibits characteristics different from Chinese domestic image spam, and such methods employ only one type of classifier, which worsens the filtering performance. To address this issue, on the basis of constructing a domestic image spam database, the color, texture and shape features of images are extracted first. Then, the K-NN classification algorithm is used to select the HSV color histogram features for training, testing and performance comparison of different classifiers. A serial iterative improvement method integrating rough-set-based K-NN, Naive Bayes and SVM classifiers is proposed to form a strong ensemble classifier, which can effectively filter domestic image spam. The accuracy and recall rate of image spam filtering are improved to 97.3% and 96.1% respectively, and the false positive rate is reduced to 2.7%.
       
      Dermatological skin lesion segmentation
      based on DenseNet-BC network
      QI Yong-feng,HOU Lu-lu,DUAN You-fang
      2020, 42(06): 1060-1067. doi:
      Abstract ( 142 )   PDF (766KB) ( 137 )

      In order to solve the problem of inaccurate boundary segmentation of skin lesion images, an improved skin lesion segmentation algorithm based on the densely connected convolutional network with bottleneck and compression layers (DenseNet-BC) is proposed. Firstly, the connection pattern between layers is changed: through dense connections, all layers can directly access the gradient from the original input signal to the loss function, so as to make maximal use of image feature information. Secondly, in order to reduce the number of parameters and the computational cost of the network, small convolution kernels are used in the bottleneck layer and the transition layer to halve the number of channels of the input feature maps. The performance of this method is compared with the VGG-16, Inception-v3 and ResNet-50 algorithms on the ISIC 2018 Task 1 skin lesion segmentation data set. The experimental results show that the DenseNet-BC algorithm achieves a segmentation accuracy of 0.975 and a threshold Jaccard of 0.835. The segmentation accuracy is significantly improved compared with the other algorithms, and it is an effective method for skin lesion segmentation.

      An Alzheimer’s disease classification
      algorithm based on 3D-ResNet
      YU Song,LIAO Wen-hao
      2020, 42(06): 1068-1075. doi:
      Abstract ( 391 )   PDF (1145KB) ( 237 )
      Alzheimer's disease (AD) is an irreversible neurodegenerative brain disease and the most common dementia in the elderly. Manual classification of Alzheimer's magnetic resonance images (MRI) is delayed and time-consuming. As population aging intensifies, classifying patients with AD accurately and quickly has important research significance. This paper combines convolutional neural network (CNN) technology with MRI and designs a 3D-ResNet model for AD classification, which achieves 98.39% accuracy, 96.74% sensitivity and 99.99% specificity on the validation set, and 97.43% accuracy, 94.92% sensitivity and 99.99% specificity on the test set. The classification time for each patient is 0.23 s. In addition, since the pathogenesis of AD is not yet clear, this paper uses Class Activation Mapping (CAM) technology to visualize the AD-related brain regions.
       
      A multi-strategy adaptive mutation differential
      evolution algorithm and its application
       
      HU Fu-nian,DONG Qian-nan
      2020, 42(06): 1076-1088. doi:
      Abstract ( 140 )   PDF (927KB) ( 103 )
      In order to overcome the premature convergence, low convergence precision and slow convergence speed of the traditional DE algorithm, a multi-strategy adaptive mutation differential evolution algorithm (MsA-DE) is proposed. The algorithm combines three mutation strategies with randomly assigned proportions, which increases the diversity of the population. By introducing an evolution threshold, the most appropriate mutation strategy is selected adaptively, balancing the global and local search abilities of the algorithm. Individuals crossing the boundary are repaired to ensure the diversity and validity of the population, and a perturbation mechanism improves the algorithm's ability to jump out of local optima while improving the accuracy of the obtained optimal solution. The algorithm is applied to the optimization of 14 test functions. The results show that, compared with three other algorithms, MsA-DE has higher convergence precision and a stronger ability to escape local optima. The algorithm is also applied to the capacity optimization problem of the Railway Power Conditioner (RPC). The results show that the algorithm can reduce the capacity of the RPC compensation device and improve its economy.
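      A bare-bones differential evolution loop in the spirit of the multi-strategy idea is sketched below: each individual randomly picks one of two mutation strategies (DE/rand/1 or DE/best/1). The adaptive strategy selection, evolution threshold and perturbation mechanism of MsA-DE are omitted; all parameter values are illustrative.

```python
import random

def multi_strategy_de(f, bounds, pop_size=20, gens=150, F=0.5, CR=0.9, seed=1):
    """Minimize f over the given box.  Each trial vector uses either a
    random base (DE/rand/1) or the current best (DE/best/1), chosen at
    random; out-of-bound components are clamped back into the box."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            base = a if rng.random() < 0.5 else best   # strategy choice
            mutant = [bj + F * (bk - ck) for bj, bk, ck in zip(base, b, c)]
            trial = [m if rng.random() < CR else x
                     for m, x in zip(mutant, pop[i])]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:          # greedy selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=lambda i: fit[i])
    return pop[k], fit[k]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = multi_strategy_de(sphere, [(-5, 5), (-5, 5)])
```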
       
      A hybrid recommendation algorithm based on
       multi-source information clustering and IRC-RBM
      HE Deng-ping1,2,3,ZHANG Wei-yi1,2,HUANG Hao1,2
      2020, 42(06): 1089-1095. doi:
      Abstract ( 105 )   PDF (710KB) ( 115 )
      To solve the data sparsity problem in collaborative filtering, this paper proposes a hybrid recommendation algorithm combining multi-source information clustering and IRC-RBM. Firstly, the algorithm takes user trust and item time weight as the clustering basis, uses a minimum-spanning-tree-based K-means clustering algorithm to cluster users into K similar user sets, and performs rating prediction on the basis of the clustering analysis. Then, the rating matrix after clustering and the rating matrix generated by the IRC-RBM model are fused by linear weighting, and Top-N recommendation is performed. Experimental results show that the proposed hybrid algorithm significantly improves accuracy compared with traditional recommendation algorithms.
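      The final fusion step is simple enough to sketch: the two prediction sources are combined by linear weighting and the top-N items are recommended. The fusion weight alpha, the item names and both score dictionaries are made up for illustration; neither score source is computed by a real clustering or RBM model here.

```python
def fuse_and_recommend(cluster_scores, rbm_scores, alpha=0.6, n=2):
    """Linear weighted fusion of two predicted-rating dictionaries,
    followed by Top-N selection.  Missing predictions count as 0."""
    items = set(cluster_scores) | set(rbm_scores)
    fused = {i: alpha * cluster_scores.get(i, 0.0)
                + (1 - alpha) * rbm_scores.get(i, 0.0)
             for i in items}
    return sorted(fused, key=fused.get, reverse=True)[:n]

cluster_pred = {"A": 4.5, "B": 3.0, "C": 2.0}   # clustering-based predictions
rbm_pred = {"A": 4.0, "B": 4.8, "D": 3.5}       # hypothetical IRC-RBM output
top = fuse_and_recommend(cluster_pred, rbm_pred, alpha=0.6, n=2)
```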
       

       
      A collaborative filtering recommendation algorithm
      with revised rating by cosine similarity
       
      ZHANG Rui-dian,QIAN Xiao-dong
      2020, 42(06): 1096-1105. doi:
      Abstract ( 142 )   PDF (739KB) ( 169 )

      When users score items, unreasonable factors sometimes lead to unreasonable scores, which may bias the recommendation process. In order to correct this deviation, multiple dimensions of the scoring matrix are compared by similarity to correct unreasonable scores, and the revised scores are then used for collaborative filtering recommendation. When comparing similarity with variable-dimension scoring matrices, the cosine similarities of two users' scores on multiple similar items are compared across dimensions. Firstly, the scores are arranged into several arrays of equal dimension, and the similarity of each pair of score arrays of the two users is computed. When one similarity differs greatly from the others, the corresponding pair of arrays is considered to contain at least one unreasonable score. Secondly, those two arrays are divided into smaller arrays in the same way, and the process is repeated until the unreasonable score is located. Finally, the mean similarity of all reasonable score arrays is used as the similarity of the array containing the unreasonable score, thereby correcting it. Experiments on the MovieLens and BookCrossing datasets show that the collaborative filtering algorithm with revised ratings has higher recommendation accuracy than the one without revision, and its MAE is significantly reduced. Compared with the baseline algorithms, the proposed algorithm obtains better recommendation performance in terms of MAE, Precision and Coverage.
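      The first step — segmenting two users' ratings and flagging the segment whose cosine similarity deviates most from the others — can be sketched as follows. The rating vectors and segment size are illustrative; the recursive subdivision and the final score correction are omitted.

```python
import math

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def suspect_segment(u, v, dim=3):
    """Split two users' ratings on the same items into equal-dimension
    segments, compute the cosine similarity of each segment pair, and
    return the index of the segment whose similarity deviates most from
    the mean of the remaining segments."""
    segs = [(u[i:i + dim], v[i:i + dim]) for i in range(0, len(u), dim)]
    sims = [cosine(a, b) for a, b in segs]
    deviation = [abs(s - (sum(sims) - s) / (len(sims) - 1)) for s in sims]
    return deviation.index(max(deviation))

u = [5, 4, 5, 4, 5, 4, 5, 1, 5]   # the 1 breaks the otherwise shared pattern
v = [5, 4, 5, 5, 4, 4, 5, 5, 4]
seg = suspect_segment(u, v, dim=3)
```

      The third segment, which holds the out-of-pattern rating, is flagged as the one containing the unreasonable score.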

      Total variational regularization for
      non-local mean seismic data denoising
       
      LI Xiao-lu1,ZHOU Ya-tong1,HE Jing-fei1,WENG Li-yuan1,LI Shu-hua2
      2020, 42(06): 1106-1110. doi:
      Abstract ( 145 )   PDF (561KB) ( 111 )
      Random noise often exists in seismic data acquisition and affects the accuracy of seismic data analysis. For the Gaussian noise in seismic data, traditional non-local means denoising algorithms cannot effectively preserve the edges of the in-phase axes after denoising. A total variation regularized non-local means algorithm is therefore applied to seismic data denoising. By computing a noise estimate and then updating the weights of the de-jitter non-local means algorithm, the de-jitter non-local means result is subjected to a total variation regularization constraint so as to obtain the best denoising result. Gaussian noise is effectively removed while the in-phase axis edges of the seismic data are retained. In denoising experiments on synthetic seismic data, offshore pre-stack seismic data and onshore post-stack seismic data, the peak signal-to-noise ratio, mean square error and average structural similarity of the algorithm are compared with those of the non-local means algorithm and a non-local means algorithm based on a neighbor selection strategy. The results show that the total variation regularized non-local means algorithm can effectively reduce noise while retaining the in-phase axis edge details of seismic data.

       
      Construction of topic word embeddings
      based on HDP: Khmer as an example
      LI Chao, YAN Xin, XIE Jun, XU Guang-yi, ZHOU Feng, MO Yuan-yuan
      2020, 42(06): 1111-1119. doi:
      Abstract ( 132 )   PDF (783KB) ( 103 )
      Aiming at the polysemy problem of single word embeddings, a topic word embedding construction method based on HDP (Hierarchical Dirichlet Process) is proposed, taking Khmer as an example. The method integrates topic information into single word embeddings: the topic tag of each word is obtained through HDP, treated as a pseudo word, and input together with the word into the Skip-Gram model, so that topic embeddings and word embeddings are trained jointly. Finally, the topic embedding carrying the text's topic information is concatenated with the trained word embedding to obtain the topic word embedding of each word in the text. Compared with word embedding models that do not integrate topic information, this method achieves better results on word similarity and text classification tasks. Therefore, the topic word embeddings obtained in this paper carry more semantic information.
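      The pseudo-word construction can be sketched independently of the embedding training: each (word, topic) pair becomes a distinct token, so a polysemous word receives one embedding per topic once Skip-Gram is trained on these tokens. The topic tags below are hard-coded stand-ins for HDP output, and the tag format is an assumption.

```python
def topic_pseudo_words(tagged_tokens):
    """Turn (word, topic) pairs — the topic tags would come from HDP in
    the paper — into pseudo-word tokens for Skip-Gram training."""
    return ["{}#T{}".format(w, t) for w, t in tagged_tokens]

def skipgram_pairs(tokens, window=1):
    """(centre, context) training pairs for a Skip-Gram model."""
    pairs = []
    for i, c in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((c, tokens[j]))
    return pairs

# 'bank' appears under two different topics, so it yields two distinct
# pseudo-words and, after training, two separate embeddings.
sent = [("bank", 0), ("river", 0), ("bank", 1), ("loan", 1)]
tokens = topic_pseudo_words(sent)
pairs = skipgram_pairs(tokens, window=1)
```

      The final topic word embedding of the paper would then be the concatenation of the trained pseudo-word embedding with the plain word embedding; that training step is not reproduced here.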