[1] Gu T, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain[J]. arXiv:1708.06733, 2017.
[2] Tan Qing-yin, Zeng Ying-ming, Han Ye, et al. Survey on backdoor attacks targeted on neural network[J]. Chinese Journal of Network and Information Security, 2021, 7(3): 46-58. (in Chinese)
[3] Shafahi A, Huang W R, Najibi M, et al. Poison frogs! Targeted clean-label poisoning attacks on neural networks[J]. arXiv:1804.00792v1, 2018.
[4] Zhu C, Huang W R, Li H D, et al. Transferable clean-label poisoning attacks on deep neural nets[C]∥Proc of the 36th International Conference on Machine Learning, 2019: 7614-7623.
[5] Vicarte J R S, Wang G, Fletcher C W. Double-cross attacks: Subverting active learning systems[C]∥Proc of the 30th USENIX Security Symposium, 2021: 1593-1610.
[6] Severi G, Meyer J, Coull S, et al. Explanation-guided backdoor poisoning attacks against malware classifiers[C]∥Proc of the 30th USENIX Security Symposium, 2021: 1487-1504.
[7] Sundararajan M, Najmi A. The many Shapley values for model explanation[J]. arXiv:1908.08474v1, 2019.
[8] Carlini N. Poisoning the unlabeled dataset of semi-supervised learning[C]∥Proc of the 30th USENIX Security Symposium, 2021: 1577-1592.
[9] Liu Y Q, Ma S Q, Aafer Y, et al. Trojaning attack on neural networks[C]∥Proc of the 25th Annual Network and Distributed System Security Symposium, 2018: 214-229.
[10] Zou M H, Yang S, Wang C L, et al. PoTrojan: Powerful neural-level trojan designs in deep learning models[J]. arXiv:1802.03043, 2018.
[11] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]∥Proc of the 25th International Conference on Neural Information Processing Systems, 2012: 1097-1105.
[12] Yao Y S, Li H Y, Zheng H T, et al. Latent backdoor attack on deep neural networks[C]∥Proc of the 26th ACM Conference on Computer and Communications Security, 2019: 2041-2055.
[13] Bagdasaryan E, Shmatikov V. Blind backdoors in deep learning models[C]∥Proc of the 30th USENIX Security Symposium, 2021: 1505-1521.
[14] Désidéri J A. Multiple-gradient descent algorithm (MGDA) for multiobjective optimization[J]. Comptes Rendus Mathematique, 2012, 350(5/6): 313-318.
[15] Xi Z H, Pang R, Ji S L, et al. Graph backdoor[C]∥Proc of the 30th USENIX Security Symposium, 2021: 1523-1540.
[16] Tran B, Li J, Madry A. Spectral signatures in backdoor attacks[J]. arXiv:1811.00636, 2018.
[17] Chen B, Carvalho W, Baracaldo N, et al. Detecting backdoor attacks on deep neural networks by activation clustering[J]. arXiv:1811.03728, 2018.
[18] Gao Y S, Xu C, Wang D R, et al. STRIP: A defence against Trojan attacks on deep neural networks[C]∥Proc of the 35th Annual Computer Security Applications Conference, 2019: 113-125.
[19] Udeshi S, Peng S S, Woo G, et al. Model agnostic defence against backdoor attacks in machine learning[J]. arXiv:1908.02203, 2019.
[20] Chou E, Tramèr F, Pellegrino G. SentiNet: Detecting localized universal attacks against deep learning systems[C]∥Proc of 2020 IEEE Security and Privacy Workshops, 2020: 48-54.
[21] Selvaraju R R, Das A, Vedantam R, et al. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization[J]. arXiv:1610.02391, 2016.
[22] Chen H L, Fu C, Zhao J S, et al. DeepInspect: A black-box Trojan detection and mitigation framework for deep neural networks[C]∥Proc of the 28th International Joint Conference on Artificial Intelligence, 2019: 4658-4664.
[23] Kolouri S, Saha A, Pirsiavash H, et al. Universal litmus patterns: Revealing backdoor attacks in CNNs[C]∥Proc of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 298-307.
[24] Moosavi-Dezfooli S M, Fawzi A, Fawzi O, et al. Universal adversarial perturbations[C]∥Proc of 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 86-94.
[25] Xu X J, Wang Q, Li H C, et al. Detecting AI Trojans using meta neural analysis[C]∥Proc of the 42nd IEEE Symposium on Security and Privacy, 2021: 1-17.
[26] Liu K, Dolan-Gavitt B, Garg S. Fine-pruning: Defending against backdooring attacks on deep neural networks[C]∥Proc of the International Symposium on Research in Attacks, Intrusions, and Defenses, 2018: 273-294.
[27] Anwar S, Hwang K, Sung W. Structured pruning of deep convolutional neural networks[J]. ACM Journal on Emerging Technologies in Computing Systems, 2015, 13(3): 1-18.
[28] Wang B L, Yao Y S, Shan S, et al. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks[C]∥Proc of the 40th IEEE Symposium on Security and Privacy, 2019: 707-723.
[29] Guo W B, Wang L, Xing X Y, et al. TABOR: A highly accurate approach to inspecting and restoring Trojan backdoors in AI systems[J]. arXiv:1908.01763, 2019.
[30] Li Y G, Lyu X X, Koren N, et al. Neural attention distillation: Erasing backdoor triggers from deep neural networks[J]. arXiv:2101.05930, 2021.

Appendix (original Chinese reference):

[2] 谭清尹, 曾颖明, 韩叶, 等. 神经网络后门攻击研究[J]. 网络与信息安全学报, 2021, 7(3): 46-58.