A journal of the China Computer Federation
China Science and Technology Core Journal
Chinese Core Journal
Current Issue
2021, No. 09  Published: 25 September 2021
Columns in this issue:
A dynamic remainder processing mapping model for convolutional neural network accelerator on FPGA
ZHAO Xiao-qiang, JIANG Jing-fei, XU Jin-wei, DOU Yong
2021, 43(09): 1521-1528. doi:
Abstract (203)  PDF (1283KB) (326)
Mapping convolutions to matrix multiplications is an efficient implementation on FPGAs. However, existing conversion methods cannot be adjusted dynamically according to different convolution parameters, which limits the parallelism of convolution computation. This paper proposes a novel dynamic remainder processing mapping model. The mapping model contains three sub-models: a feature mapping model, a weight mapping model, and an output mapping model. The feature mapping model converts features into a feature matrix, and the weight mapping model converts weights into a weight matrix. The feature matrix and the weight matrix are multiplied on a multiply-and-accumulate array to obtain the convolution results, which are stored in memory by the output mapping model. In convolution computation, the number of output channels is usually not an integer multiple of the number of rows of the multiply-and-accumulate array, so the three sub-models dynamically adjust the mapping according to the remainder to increase the utilization of the array. Experiments show that the dynamic remainder processing mapping model can increase parallelism by a factor of up to the size of the convolution kernel and achieves higher actual throughput and energy efficiency.
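For readers unfamiliar with the convolution-to-matrix-multiplication mapping the abstract builds on, the following is a minimal NumPy sketch of the generic im2col step; the shapes are arbitrary and it is not the authors' FPGA mapping model or its remainder handling.

```python
import numpy as np

def im2col(x, kh, kw, stride=1):
    """Flatten sliding windows of a (C, H, W) feature map into columns."""
    c, h, w = x.shape
    oh, ow = (h - kh) // stride + 1, (w - kw) // stride + 1
    cols = np.empty((c * kh * kw, oh * ow), dtype=x.dtype)
    idx = 0
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i*stride:i*stride+kh, j*stride:j*stride+kw]
            cols[:, idx] = patch.reshape(-1)
            idx += 1
    return cols, oh, ow

# Convolution as a single matrix multiplication: weights (K, C*kh*kw) x columns.
x = np.random.rand(3, 8, 8).astype(np.float32)       # C=3 input feature map
w = np.random.rand(16, 3, 3, 3).astype(np.float32)   # K=16 output channels, 3x3 kernels
cols, oh, ow = im2col(x, 3, 3)
out = (w.reshape(16, -1) @ cols).reshape(16, oh, ow)
print(out.shape)  # (16, 6, 6)
```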
A computational model for satisfiability problem based on DNA origami
WANG Xin-yi, YIN Zhi-xiang, TANG Zhen, YANG Jing, CUI Jian-zhong,
2021, 43(09): 1529-1537. doi:
Abstract (157)  PDF (1215KB) (203)
DNA origami is a new DNA self-assembly method that offers programmability and nano-addressability and is widely used in DNA computing. In this paper, a computational model for solving the satisfiability problem is designed on a DNA origami substrate, exploiting the ability of DNA origami to fold into special structures. The model adopts the principle of molecular beacons and eliminates non-solutions by observing luminescence, so as to find the solutions of the satisfiability problem. The feasibility of the model is demonstrated by examples and simulation.
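Purely to make "eliminating non-solutions" of a satisfiability instance concrete, here is a brute-force electronic sketch; the paper performs this elimination biochemically with molecular beacons on a DNA origami substrate, and the CNF instance below is made up.

```python
from itertools import product

# A CNF formula: each clause is a list of literals, positive = variable, negative = negation.
# Hypothetical instance: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
n_vars = 3

def satisfies(assignment, cnf):
    """assignment: dict var -> bool; True if every clause has a true literal."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in cnf)

# Enumerate all assignments and keep only the solutions (non-solutions are eliminated).
solutions = [
    dict(zip(range(1, n_vars + 1), bits))
    for bits in product([False, True], repeat=n_vars)
    if satisfies(dict(zip(range(1, n_vars + 1), bits)), cnf)
]
print(solutions)
```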
Network-on-Chip optimization for high bandwidth I/O in processors
SHI Wei, GONG Rui, LIU Wei, WANG Lei, FENG Quan-you, ZHANG Jian-feng
2021, 43(09): 1538-1545. doi:
Abstract (159)  PDF (936KB) (201)
In high-performance processors, the demand for I/O bandwidth keeps increasing: more and more lanes of high-speed interfaces are used, and the transmission rate of each interface is also raised gradually. The Network-on-Chip (NoC) of a high-performance processor must match the bandwidth requirements of the various high-speed I/O interfaces and must ensure that direct memory access (DMA) requests complete correctly. However, there are great differences in communication mechanisms between the various high-speed interface protocols and the interconnection network protocol, which may lead to deadlock and other problems. This paper first analyzes NoC and high-performance I/O, and then proposes a design method for high-bandwidth I/O interfaces together with a deadlock resolution scheme. A NoC with deadlock resolution makes the I/O system more robust and relaxes various constraints on NoC design. Finally, the proposed optimization method was implemented and evaluated on a server processor. For a 16-lane PCIe Gen4 interface, the read and write bandwidths each reach up to 30 GB/s. In some special scenarios, particular transaction sequences produce deadlock, and the NoC can automatically detect and release it.
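As a generic illustration of the deadlock condition mentioned in the abstract (not the authors' hardware mechanism), the sketch below detects a cycle in a hypothetical channel dependency graph, the classic necessary condition for NoC deadlock.

```python
# Minimal cycle detection over a hypothetical channel-dependency graph.
def has_deadlock(deps):
    """deps: dict channel -> set of channels it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in deps}
    def dfs(c):
        color[c] = GRAY
        for nxt in deps.get(c, ()):
            if color.get(nxt, WHITE) == GRAY:                 # back edge -> cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[c] = BLACK
        return False
    return any(color[c] == WHITE and dfs(c) for c in deps)

# Hypothetical example: channel A waits on B, B on C, C back on A.
print(has_deadlock({"A": {"B"}, "B": {"C"}, "C": {"A"}}))   # True
print(has_deadlock({"A": {"B"}, "B": {"C"}, "C": set()}))   # False
```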
Truth discovery based on neural network encoding
CAO Jian-jun, CHANG Chen, WENG Nian-feng, TAO Jia-qing, JIANG Chun
2021, 43(09): 1546-1557. doi:
Abstract (136)  PDF (1492KB) (140)
Due to the openness and diversity of the Internet, different platforms provide information of different quality, and descriptions of the same object can conflict with each other. Truth discovery is one of the important techniques for resolving such semantic conflicts and improving data quality. Traditional truth discovery methods usually assume that the relationship between source reliability and claim credibility can be represented by a simple function, and they design iterative rules or probability models to find trustworthy claims and sources. However, manually defined factors often fail to reflect the real underlying distribution of the data, resulting in unsatisfactory truth discovery results. To solve this problem, a truth discovery method based on neural network encoding is proposed. Firstly, the method constructs a double-loss deep neural network that covers both "source-source" and "source-claim" relationships. Secondly, it embeds the sources and claims into a low-dimensional space that reflects source reliability and claim credibility; after optimization, reliable sources and trustworthy claims are close in the embedding space, as are unreliable sources and untrustworthy claims. Finally, truth discovery is performed in the embedding space. Compared with traditional methods, the proposed method does not need manually defined iterative rules or an assumed data distribution; the neural network automatically learns the complex relationships among sources and claims and embeds them into a low-dimensional space. Experimental results on real datasets show that the proposed model improves precision by 2%~25% over iteration-based methods such as Accu, by 2%~4% over probabilistic graphical model based methods such as 3-Estimate, by 2%~5% over optimization-based methods such as CRH, and by 1%~2% over the neural network based method FFMN.
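A minimal PyTorch sketch of the general idea of embedding sources and claims under two loss terms follows; the loss form, pair data, and dimensions are assumptions for illustration and do not reproduce the paper's double-loss network.

```python
import torch
import torch.nn as nn

# Toy setup: 4 sources, 6 claims; "source-claim" pairs known to co-occur,
# plus "source-source" pairs assumed to behave similarly. All data are illustrative.
n_sources, n_claims, dim = 4, 6, 8
src_emb = nn.Embedding(n_sources, dim)
clm_emb = nn.Embedding(n_claims, dim)
opt = torch.optim.Adam(list(src_emb.parameters()) + list(clm_emb.parameters()), lr=0.01)

source_claim = torch.tensor([[0, 1], [1, 1], [2, 4], [3, 5]])   # (source, claim) pairs
source_source = torch.tensor([[0, 1], [2, 3]])                  # similar-source pairs

for _ in range(200):
    opt.zero_grad()
    s, c = src_emb(source_claim[:, 0]), clm_emb(source_claim[:, 1])
    s1, s2 = src_emb(source_source[:, 0]), src_emb(source_source[:, 1])
    loss_sc = ((s - c) ** 2).sum(dim=1).mean()     # pull sources toward their claims
    loss_ss = ((s1 - s2) ** 2).sum(dim=1).mean()   # pull similar sources together
    (loss_sc + loss_ss).backward()
    opt.step()

# After training, distances in the embedding space can be used to rank claim credibility.
```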
A grey wolf optimization algorithm based on drunkard strolling and reverse learning
LIU Lian, FU Shao-chang, HUANG Hui-xian
2021, 43(09): 1558-1566. doi:
Abstract (155)  PDF (689KB) (166)
The grey wolf optimization algorithm easily falls into local optima in the later stage of optimization, and because of its higher complexity on high-dimensional functions, the probability of falling into a local optimum is even greater there. To address this problem, this paper proposes a hybrid grey wolf algorithm based on drunkard strolling (random walk) and reverse learning, termed DGWO. During iteration, part of the dominant wolves are retained by comparing the dominant and the worst wolves via reverse learning. Meanwhile, a drunkard stroll is performed on the leader wolf, and coefficient scalars are used for A and C instead of the coefficient vectors of the original algorithm. The effectiveness of the proposed method is investigated on 10 standard test functions (100D, 500D and 1000D) as well as the 10D CEC2013 test functions, and compared with the PSO, GWO-CS, and GWO algorithms. The simulation results demonstrate that the proposed DGWO algorithm performs better in terms of accuracy and convergence rate. In addition, the improved grey wolf algorithm is applied to the parameter design of a two-stage operational amplifier with the goal of maximizing the open-loop low-frequency gain, to verify the practicability of the scheme.
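The sketch below shows a plain grey wolf optimizer with two illustrative add-ons in the spirit of the abstract, opposition-based ("reverse") learning of the worst wolf and a random-walk perturbation of the leader; all parameter choices are arbitrary and it is not the DGWO algorithm itself.

```python
import numpy as np

def gwo_sketch(f, dim=10, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Plain GWO plus two illustrative add-ons: opposition-based learning of the
    worst wolf and a random-walk ("drunkard stroll") step around the leader."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]

        # Reverse learning: replace the worst wolf by its opposite point if better.
        worst = order[-1]
        opposite = lb + ub - X[worst]
        if f(opposite) < fit[worst]:
            X[worst] = opposite

        # Random walk around the current leader, shrinking with the iteration count.
        step = rng.normal(0.0, 0.1 * (ub - lb) * (1 - t / iters), dim)
        cand = np.clip(alpha + step, lb, ub)
        if f(cand) < fit[order[0]]:
            X[order[0]] = cand
            alpha = cand

        # Standard GWO position update with control parameter a decreasing from 2 to 0.
        a = 2.0 * (1 - t / iters)
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lb, ub)
    return min(X, key=f)

best = gwo_sketch(lambda x: float(np.sum(x ** 2)))   # sphere test function
print(round(float(np.sum(best ** 2)), 6))
```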
Optimization of cold and hot flows replacement in large-scale data flow statistics
QIAO Guan-jie, LÜ Gao-feng, TAN Jing, MO Lu-sha
2021, 43(09): 1567-1573. doi:
Abstract (119)  PDF (799KB) (115)
This paper studies large-scale data flow statistics and optimizes the replacement strategy of Elastic Sketch, a typical structure for large flow statistics. The optimization solves the problem of cold flows being misjudged as hot flows and inserted into the heavy part. To address the problem that the flow stored in the heavy part may not be the largest flow, a replacement strategy based on the maximum value and set association is proposed, which guarantees that the largest flow is stored in the heavy part, improves the accuracy of large flow statistics, and greatly reduces the probability of collisions between hot flows. Compared with the traditional measurement method, the measurement accuracy is improved while the memory usage is reduced.
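A toy sketch of the generic heavy/light idea follows: each heavy bucket keeps the flow with the largest count and spills smaller flows into a light counter. The hash, bucket count, and the paper's set-associative refinement and vote thresholds are not modeled.

```python
from collections import defaultdict

# Toy "heavy part + light part" counters; this is not the Elastic Sketch code.
N_BUCKETS = 4
heavy = [None] * N_BUCKETS          # each entry: [key, count] or None
light = defaultdict(int)

def bucket(key):
    return sum(map(ord, key)) % N_BUCKETS

def insert(key):
    b = bucket(key)
    if heavy[b] is None or heavy[b][0] == key:
        heavy[b] = [key, heavy[b][1] + 1 if heavy[b] else 1]
        return
    light[key] += 1
    # Replacement by maximum value: the larger flow always owns the heavy slot.
    if light[key] > heavy[b][1]:
        old_key, old_cnt = heavy[b]
        heavy[b] = [key, light.pop(key)]
        light[old_key] += old_cnt

for pkt in ["a"] * 3 + ["e"] * 50:    # "a" and "e" collide in the same bucket
    insert(pkt)
print(heavy, dict(light))             # the hot flow "e" ends up in the heavy part
```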
A dynamic trust evaluation model for edge devices
ZHAO Guo-sheng, WANG Tian-tian, WANG Jian
2021, 43(09): 1574-1583. doi:
Abstract (227)  PDF (792KB) (260)
Aiming at the accuracy and time overhead of device trust evaluation in edge computing environments, a dynamic trust evaluation model for edge devices is proposed. Firstly, a time-decay factor is used to express the degree to which direct trust becomes outdated, a satisfaction function is introduced to modify the Bayesian equation, and an incentive mechanism is used to evaluate the direct trust between edge devices. Secondly, an improved grey relational analysis method is used to determine the index weights, which solves the problem of weighting the recommending devices in indirect trust evaluation. Finally, direct trust and indirect trust are fused through information entropy to obtain the comprehensive trust value of the device, and a dynamic update factor is used to update the comprehensive trust value dynamically. Simulation analysis shows that, compared with the RFSN model, the proposed model reduces the false detection rate of malicious devices by 5.7% on average and increases the inter-device interaction success rate by 4.1%. It is also verified that the proposed model outperforms the FHTM and RFSN models in terms of time overhead, and can evaluate the trust of edge devices more accurately and efficiently.
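For illustration only, the sketch below combines an exponentially time-decayed direct-trust estimate with an entropy-based weighting of direct and indirect trust; the decay rate, the Beta-style estimate, and the weighting scheme are assumptions, not the paper's formulas.

```python
import math

def direct_trust(interactions, now, decay=0.1):
    """interactions: list of (timestamp, satisfied: bool); older evidence decays."""
    pos = sum(math.exp(-decay * (now - t)) for t, ok in interactions if ok)
    neg = sum(math.exp(-decay * (now - t)) for t, ok in interactions if not ok)
    return (pos + 1) / (pos + neg + 2)              # Beta(1,1)-style expectation

def entropy_weight(values):
    """Lower-entropy (more decisive) trust values get larger weights."""
    eps = 1e-12
    ents = [-(v * math.log(v + eps) + (1 - v) * math.log(1 - v + eps)) for v in values]
    inv = [max(ents) - e + eps for e in ents]
    s = sum(inv)
    return [w / s for w in inv]

dt = direct_trust([(1, True), (5, True), (9, False)], now=10)
it = 0.72                                            # indirect trust from recommenders (toy value)
w_dt, w_it = entropy_weight([dt, it])
print(round(dt, 3), round(w_dt * dt + w_it * it, 3))
```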
A conditional causality mining algorithm in network log data
LIU Yun, XIAO Tian
2021, 43(09): 1584-1590. doi:
Abstract (116)  PDF (895KB) (152)
A large amount of system log data is collected during network operation, and precisely locating system faults from it has become an important research direction. This paper proposes a conditional causality mining algorithm (CCMA), which generates a set of time series from log messages and uses Fourier analysis and linear regression to remove the large number of irrelevant periodic time series. A causal inference algorithm then outputs a directed acyclic graph, and the final result is obtained by examining the edge distribution of the graph and eliminating redundant relationships. Simulation results show that CCMA outperforms the dependency mining algorithm (DMA) and the network information correlation and exploration algorithm (NICE) on two key performance indicators, processing time and edge correlation rate, which proves that CCMA can effectively improve processing speed and mining accuracy in log event mining.
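A minimal sketch of the Fourier step the abstract mentions: a log-derived time series is flagged as periodic (and hence discarded) when one frequency dominates its spectrum. The dominance threshold is arbitrary and the subsequent causal inference stage is not shown.

```python
import numpy as np

def is_periodic(series, dominance=0.4):
    """Flag a series as periodic if a single non-DC frequency holds a large
    share of the spectral energy. The 0.4 threshold is an arbitrary choice."""
    spec = np.abs(np.fft.rfft(series - np.mean(series))) ** 2
    spec[0] = 0.0
    total = spec.sum()
    return total > 0 and spec.max() / total > dominance

t = np.arange(256)
periodic = np.sin(2 * np.pi * t / 16)                    # e.g. a heartbeat log counter
bursty = np.zeros(256); bursty[[40, 41, 200]] = 1.0      # e.g. sporadic error events
print(is_periodic(periodic), is_periodic(bursty))        # True False
```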
Cross-domain pedestrian re-identification based on capsule network
YANG Xiao-feng, ZHANG Lai-fu, WANG Zhi-peng, Saddam Naji Abdu Nasher, DENG Hong-xia, LI Hai-fang
2021, 43(09): 1591-1599. doi:
Abstract (131)  PDF (768KB) (157)
Pedestrian re-identification searches for specific pedestrians across different environments and has attracted widespread attention from domestic and foreign scholars in recent years. Current pedestrian re-identification algorithms mostly combine local features with global features, and training and testing on a single dataset have achieved very good results. However, the results of cross-domain tests are not satisfactory, and the generalization ability is low. This paper proposes a cross-domain pedestrian re-identification method based on a deep capsule network. Through a viewing-angle classification training task, the model learns effective pedestrian features from images, and these features can be transferred directly to the re-identification task, alleviating the insufficient generalization ability of pedestrian re-identification. Experimental results show that this model is superior to all current pedestrian re-identification methods based on unsupervised learning and has good generalization ability.
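Since the method is built on a capsule network, the standard capsule "squash" non-linearity is shown below as a minimal NumPy sketch; it is generic capsule machinery rather than the authors' architecture.

```python
import numpy as np

def squash(vectors, axis=-1, eps=1e-9):
    """Standard capsule-network squash: keeps each vector's orientation and maps
    its length into [0, 1) so the length can act as an existence probability."""
    sq_norm = np.sum(vectors ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * vectors

caps = np.random.randn(4, 8)           # 4 capsules, 8-dimensional each
out = squash(caps)
print(np.linalg.norm(out, axis=1))     # all lengths now strictly below 1
```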
A zero-visibility cops and robber game algorithm in the three-dimensional grid
WANG Jia-hui, ZHONG Fa-rong
2021, 43(09): 1600-1607. doi:
Abstract (140)  PDF (692KB) (112)
The cops and robber game is a graph searching problem whose goal is to determine the minimum number of cops needed to capture the robber. In the zero-visibility cops and robber model the robber is invisible: the cops have no information about the robber's location at any time. The problem can be abstracted into a vertex cleaning model. By studying the properties of the three-dimensional grid, its vertex set can be divided into two subsets, and a relationship between the smaller subset and the boundary is deduced. This partition result is then used to prove a lower bound on the minimum cop number, and a monotonic search strategy is constructed to establish the upper bound. Finally, a zero-visibility cops and robber game algorithm on the three-dimensional grid is proposed.
A PCB image denoising algorithm based on improved NLM
ZHANG Lu-wen, XUE Xiao-jun, LI Heng, WANG Hai-rui, ZHANG Guo-yin, ZHAO Lei
2021, 43(09): 1608-1615. doi:
Abstract (168)  PDF (828KB) (156)
In industrial production, PCB images suffer from missing edge information and heavy noise, while existing denoising algorithms perform poorly and have a large amount of computation and high complexity. To address this, a PCB image denoising algorithm based on an improved NLM is proposed to enhance the denoising quality of PCB images. Firstly, a morphology-based weight-adaptive algorithm is used to enhance the PCB image so that it retains good edge information. Secondly, a feature matching model is introduced to fuse the enhanced PCB image with the original PCB image. Finally, the PCB image is denoised with an improved weighting of the NLM algorithm to obtain the final denoised image. Experimental results show that, compared with existing algorithms, the proposed algorithm better preserves the edge information of PCB images, achieves a better denoising effect, significantly improves image quality and robustness, increases the computation speed, and reduces the complexity of the algorithm.
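For context, a compact sketch of plain (unimproved) non-local means for a single pixel follows, showing the patch-similarity weights that the proposed improvement reworks; the window sizes and the filtering parameter h are arbitrary.

```python
import numpy as np

def nlm_pixel(img, i, j, patch=3, search=7, h=10.0):
    """Plain non-local means estimate of pixel (i, j): a weighted average of
    pixels in a search window, weighted by patch similarity."""
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(np.float64), r + s, mode="reflect")
    ci, cj = i + r + s, j + r + s
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            ni, nj = ci + di, cj + dj
            cand = pad[ni - r:ni + r + 1, nj - r:nj + r + 1]
            w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
            num += w * pad[ni, nj]
            den += w
    return num / den

img = (np.random.rand(32, 32) * 255).astype(np.uint8)   # stand-in for a PCB image
print(round(nlm_pixel(img, 16, 16), 2))
```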
A novel image retrieval algorithm with adaptive fusion network and hash
ZHOU Yan, PAN Li-li, CHEN Rong-yu, SHAO Wei-zhi, LEI Qian-hui
2021, 43(09): 1616-1622. doi:
Abstract (131)  PDF (649KB) (178)
Because of its high accuracy in image recognition, the convolutional neural network is very popular in the field of image retrieval. However, on large datasets, the high dimensionality of the deep features extracted by a convolutional neural network easily causes the "curse of dimensionality". Aiming at this problem, a novel image retrieval algorithm based on adaptive fusion network feature extraction and hash feature reduction is proposed. Because directly hashing high-dimensional features is costly, an adaptive fusion module is added to the convolutional neural network to re-integrate the features, which enhances the feature representation capability and reduces the feature dimension. Then the deep feature dimension is reduced a second time by a feature sparsity optimization algorithm, and the reduced features are mapped to hash codes. Finally, with the Inception network as the basic model, a number of experiments were conducted on the CIFAR-10 and ImageNet datasets. Experimental results show that this algorithm can effectively improve the efficiency of image retrieval.
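To make the hash-reduction step concrete, the toy sketch below sign-binarizes already-reduced feature vectors into hash codes and retrieves by Hamming distance; the adaptive fusion module and sparsity optimization of the paper are not modeled.

```python
import numpy as np

def to_hash(features):
    """Binarize zero-centered feature vectors into {0,1} hash codes."""
    centered = features - features.mean(axis=0)
    return (centered > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Return database indices sorted by Hamming distance to the query."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(1000, 48))        # e.g. 48-D reduced deep features
codes = to_hash(db_feats)
query = codes[0]                              # use an existing item's code as the query
print(hamming_rank(query, codes)[:5])         # index 0 should rank first
```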
Image dehazing based on fusion luminance model and gradient domain filter
HUO Yuan-lian, ZHENG Hai-liang, LI Ming, ZHANG Jian,
2021, 43(09): 1623-1633. doi:
Abstract (136)  PDF (1537KB) (163)
To address the problems of the dark channel prior algorithm, such as color oversaturation, overall darkening of luminance, and halos in the sky, an image dehazing algorithm fusing a luminance model and gradient domain filtering is proposed. Firstly, the average of the brightest 0.1% of pixels is selected as the atmospheric light value. Secondly, an improved dark channel model with an adaptive minimum filter and a luminance model are used to estimate the transmission in the foreground region and the sky region respectively, and the rough transmission is obtained by weighted fusion; a gradient domain guided filter then refines the transmission. Finally, the haze-free image is restored by the atmospheric scattering model and gamma correction. Experimental results show that, for hazy images containing sky regions, the algorithm quickly and effectively removes the halo effect and distortion in the sky, and the restored images are clear and natural with more detail retained, outperforming the comparison algorithms in both subjective and objective evaluation.
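The baseline dark channel prior pipeline that the paper improves on can be sketched as follows (plain minimum filter, no luminance model or gradient-domain refinement); the patch size and constants are the conventional defaults, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dcp(img, patch=15, omega=0.95, t0=0.1):
    """Baseline dark channel prior dehazing on an RGB image scaled to [0, 1]."""
    dark = minimum_filter(img.min(axis=2), size=patch)           # dark channel
    n = max(1, int(dark.size * 0.001))                           # brightest 0.1% of pixels
    idx = np.unravel_index(np.argsort(dark.ravel())[-n:], dark.shape)
    A = img[idx].reshape(-1, 3).mean(axis=0)                     # atmospheric light
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)                  # invert the scattering model

hazy = np.random.rand(64, 64, 3)          # stand-in for a real hazy image
print(dehaze_dcp(hazy).shape)
```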
An irregular interference repair algorithm of text images based on partial convolution
DUAN Ying, LONG Hua, QU Yu-quan, SHAO Yu-bin, DU Qing-zhi,
2021, 43(09): 1634-1644. doi:
Abstract (153)  PDF (1372KB) (158)
Aiming at the recognition errors caused by irregular interference and text adhesion in text images, this paper proposes a text image restoration model based on partial convolution operations. The paper studies and analyzes the image characteristics of several common fonts, establishes a text image database, combines it with an interference mask database, evaluates the repair effect of the model, and conducts classification tests for different levels of damage. Experiments prove that the model predicts the missing parts from the existing parts of the text while ensuring that the original text information is not lost. The peak signal-to-noise ratio reaches up to 32.46 dB, the structural similarity reaches up to 0.954, and the best loss value is as low as 0.015. The text recognition rate after repair is 27.85% higher than before repair. After repairing defective images of four ancient scripts, namely official script, seal script, oracle bone inscriptions, and running script, the peak signal-to-noise ratio reaches 30.46 dB and the structural similarity reaches 0.964.
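A minimal PyTorch sketch of a single partial-convolution step in the sense of Liu et al. follows: convolve only valid pixels, renormalize by the number of valid inputs in each window, and update the mask. The layer sizes are arbitrary and this is not the paper's restoration network.

```python
import torch
import torch.nn.functional as F

def partial_conv(x, mask, weight, bias=None):
    """x: (N, C, H, W) image, mask: (N, 1, H, W) with 1 = valid pixel.
    Convolve the masked input, renormalize by how many valid pixels each
    window actually saw, then propagate the updated mask."""
    kh, kw = weight.shape[-2:]
    out = F.conv2d(x * mask, weight, bias=None, padding=kh // 2)
    win = F.conv2d(mask, torch.ones(1, 1, kh, kw), padding=kh // 2)
    out = out * (kh * kw) / win.clamp(min=1.0)            # renormalize valid windows
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    new_mask = (win > 0).float()                          # any valid input -> now valid
    return out * new_mask, new_mask

x = torch.rand(1, 1, 16, 16)
mask = torch.ones(1, 1, 16, 16); mask[:, :, 4:9, 4:9] = 0   # simulated missing region
w = torch.rand(8, 1, 3, 3)
y, m = partial_conv(x, mask, w)
print(y.shape, m.mean().item())
```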
Text feature selection based on improved CHI and PCA
2021, 43(09): 1645-1652. doi:
Abstract (160)  PDF (790KB) (133)
Aiming at the large amount of noise and redundant features in text data, and in order to obtain a more representative feature set, a feature selection algorithm (ICHIPCA) combining improved chi-square statistics (ICHI) and principal component analysis (PCA) is proposed. Firstly, to remedy the CHI algorithm's neglect of word frequency, document length, category distribution, and negative correlation, corresponding adjustment factors are introduced to improve the CHI calculation model. Secondly, the improved CHI model is used to score the features, and the top-ranked features are selected as the preliminary feature set. Finally, the PCA algorithm is used to extract the principal components while basically retaining the original information, achieving dimensionality reduction. Verification on a KNN classifier shows that, compared with traditional feature selection algorithms such as IG and CHI, the ICHIPCA algorithm improves classification performance across multiple feature dimensions and multiple categories.
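A minimal scikit-learn sketch of the plain chi-square-then-PCA pipeline (without the paper's adjustment factors) is given below; the synthetic term-frequency data and the values of k are arbitrary.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 300)).astype(float)    # toy term-frequency matrix
y = rng.integers(0, 3, size=200)                          # three text categories

# Step 1: preliminary selection with (plain) chi-square scores.
X_sel = SelectKBest(chi2, k=100).fit_transform(X, y)

# Step 2: PCA reduces the selected features to the final low-dimensional set.
X_red = PCA(n_components=30).fit_transform(X_sel)
print(X.shape, "->", X_sel.shape, "->", X_red.shape)
```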
A local latent space recommendation method fusing social information
WEI Yun-he, MA Hui-fang, JIANG Yan-bin, SU Yun
2021, 43(09): 1653-1659. doi:
Abstract (137)  PDF (647KB) (159)
With the development of social networks, more and more studies utilize social information to improve traditional recommendation algorithms. However, most existing recommendation algorithms ignore the diversity of user interests and do not consider the aspects users care about in different social dimensions, resulting in poor recommendation quality. To solve this problem, a recommendation method that considers both global latent factors and the specific latent factors of different subsets is proposed; the recommendation process accounts for both users' shared preferences and their specific preferences within different subsets. The method first divides users into different subsets according to their social relationships, based on the intuition that users who participate in different social dimensions are interested in different items. Secondly, truncated singular value decomposition is used to model users' ratings of items: the global factors capture preference levels shared by all users, while the specific latent factors of each user subset capture the levels that subset cares about. Finally, the global and local latent factors are combined to predict user ratings for unrated items. Experiments prove that the method is feasible and effective.
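To illustrate the truncated-SVD step, the toy sketch below reconstructs a rating matrix from its leading singular vectors; the social partition into subsets and the fusion of global and local factors are omitted.

```python
import numpy as np

def truncated_svd_predict(R, k=2):
    """Fill a (users x items) rating matrix with its rank-k reconstruction."""
    filled = np.where(R > 0, R, R[R > 0].mean())         # naive fill of missing entries
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy rating matrix, 0 = unrated. In the paper this is done globally and again
# per social subset, and the two kinds of factors are combined for prediction.
R = np.array([[5, 4, 0, 1],
              [4, 0, 5, 1],
              [1, 1, 0, 5],
              [0, 1, 4, 4]], dtype=float)
print(np.round(truncated_svd_predict(R, k=2), 2))
```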
A Chinese distant supervised personal relationship extraction method based on TongYiCi CiLin and rules
XIE Ming-hong, RAN Qiang, WANG Hong-bin,
2021, 43(09): 1661-1667. doi:
Abstract (147)  PDF (825KB) (120)
Distant supervision is a large-scale corpus labeling method based on automatic alignment with entities in a knowledge base, but its overly strong assumptions introduce a large amount of noise into the acquired corpus. Aiming at this problem, this paper proposes a Chinese distant supervised personal relationship extraction method based on TongYiCi CiLin and rules. The multi-instance learning idea is used to group personal relationship instances into bags. On this basis, TongYiCi CiLin is used for word frequency statistics on personal relationship trigger words, which determines the candidate relationships with the largest and second-largest word frequencies. Specific judgment rules are then combined to decide the personal relationship. After a relationship is judged within a bag, multi-relationship prediction is further performed to obtain the final result. Experimental results on IPRE, a large-scale Chinese distant supervised personal relationship public dataset, show that the method achieves a good F1 value and can identify personal relationships not marked in the distant supervision test set.
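A toy sketch of the trigger-word frequency step follows: count synonym-expanded trigger words inside a bag of sentences and keep the two most frequent candidate relations. The synonym groups and sentences are invented, and the paper's rule stage is omitted.

```python
from collections import Counter

# Hypothetical trigger-word groups per relation (stand-in for TongYiCi CiLin synonym sets).
triggers = {
    "spouse":  {"wife", "husband", "married"},
    "parent":  {"father", "mother", "son", "daughter"},
    "teacher": {"teacher", "mentor", "student"},
}

bag = [
    "A married B in 1990",
    "B is the wife of A",
    "A was the mentor of C",
]

counts = Counter()
for sentence in bag:
    words = set(sentence.lower().split())
    for rel, trig in triggers.items():
        counts[rel] += len(words & trig)

top_two = counts.most_common(2)    # candidates with largest and second-largest frequency
print(top_two)                     # judgment rules would then pick among these
```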
Analysis of Chinese text emotions combining BERT and BiSRU-AT
HUANG Ze-min, WU Xiao-ling, WU Ying-gang, LING Jie
2021, 43(09): 1668-1674. doi:
Abstract (332)  PDF (798KB) (179)
To address the problems that traditional language models cannot resolve word ambiguity in word vector representations and that existing emotion classification models cannot capture long-distance semantic information, a text emotion analysis model combining BERT (Bidirectional Encoder Representations from Transformers) and BiSRU-AT is proposed. Firstly, BERT is used to obtain word vector representations that incorporate sentence semantics, and a BiSRU (Bidirectional Simple Recurrent Unit) then extracts context information. Secondly, an attention mechanism assigns corresponding weights to the outputs of the BiSRU layer to highlight the key information. Finally, softmax regression yields the sentence-level emotion probability distribution. Experiments on a Twitter dataset and a hotel review dataset show that the model combining BERT and BiSRU-AT achieves higher accuracy, and that the BiSRU module and the attention mechanism effectively improve the overall performance of the model, which is of great practical value.
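A compact PyTorch sketch of attention-weighted pooling over a bidirectional recurrent layer's outputs follows; a GRU stands in for the SRU cell and random tensors stand in for BERT word vectors, so it shows only the shape of the idea, not the paper's model.

```python
import torch
import torch.nn as nn

class BiRNNAttention(nn.Module):
    """Bidirectional recurrent encoder + additive attention pooling + softmax head.
    A GRU is used here as a stand-in for the SRU cell."""
    def __init__(self, emb_dim=768, hidden=128, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)
        self.cls = nn.Linear(2 * hidden, n_classes)

    def forward(self, word_vecs):                    # (batch, seq_len, emb_dim)
        h, _ = self.rnn(word_vecs)                   # (batch, seq_len, 2*hidden)
        a = torch.softmax(self.att(h), dim=1)        # attention weight per position
        sentence = (a * h).sum(dim=1)                # weighted sum of hidden states
        return torch.log_softmax(self.cls(sentence), dim=-1)

x = torch.randn(4, 20, 768)                          # stand-in for BERT outputs
print(BiRNNAttention()(x).shape)                     # torch.Size([4, 2])
```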
A weighted erasable itemset mining algorithm based on list structure
WEN Kai, XU Meng-meng, ZHANG Xu-hong,
2021, 43(09): 1676-1683. doi:
Abstract (110)  PDF (832KB) (132)
Erasable itemset mining finds low-profit itemsets in large-scale product databases in order to help manufacturers cope with financial crises. Traditional erasable itemset mining methods handle only static product databases and ignore the weight of each item when extracting erasable itemsets. To address the single usage condition and the inefficiency of existing erasable itemset mining algorithms, an effective algorithm named WELI is proposed to mine erasable itemsets in an incremental database under a weight condition. The algorithm comprehensively considers data accumulation and the differing importance of items. A concise list structure is applied to reduce memory consumption, invalid itemsets are pruned with the weight conditions, and the gain calculation is simplified by combining the subsume index with difference sets, so incremental mining can be carried out efficiently. Experiments show that, in terms of running time and memory consumption, the algorithm performs well on both dense and sparse datasets, and it is also superior to previous algorithms in scalability.
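To make the notion of an erasable itemset concrete, the toy sketch below marks an itemset as erasable when the profit of all products using any of its items stays below a gain threshold; item weights, the list structure, and incremental updates of the paper are not modeled.

```python
from itertools import combinations

# Toy product database: each product has a profit and the set of items it uses.
products = [
    {"profit": 100, "items": {"a", "b"}},
    {"profit": 200, "items": {"b", "c"}},
    {"profit": 150, "items": {"c", "d"}},
    {"profit": 50,  "items": {"a", "d"}},
]
total = sum(p["profit"] for p in products)
threshold = 0.35 * total        # arbitrary gain threshold

def gain(itemset):
    """Profit lost if every product containing any item of the itemset is dropped."""
    return sum(p["profit"] for p in products if p["items"] & itemset)

items = sorted({i for p in products for i in p["items"]})
erasable = [set(c) for r in (1, 2) for c in combinations(items, r)
            if gain(set(c)) <= threshold]
print(erasable)                 # itemsets whose removal costs little profit
```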
Small sample bearing fault diagnosis based on combined prediction model
SUN Pang-bo, FU Qi, CHEN An-hua, JIANG Yun-xia
2021, 43(09): 1684-1691. doi:
Abstract (148)  PDF (900KB) (171)
Rolling bearings are an important component of rotating machinery in which faults frequently occur, and their fault conditions are complex and difficult to diagnose. Aiming at the problems that, in small-sample learning, the learned feature values deviate greatly from the target feature values and the generalization ability is weak, this paper proposes a small-sample learning model combining a semi-supervised variational autoencoder with a LightGBM classifier. A Bayesian optimization algorithm based on Gaussian processes is used to tune the LightGBM hyperparameters, which effectively alleviates defects such as unstable performance, weak feature extraction ability, and overfitting in small-sample learning. Comparative verification on the bearing dataset released by Case Western Reserve University in the United States shows that the method achieves better diagnostic accuracy on small-sample data.