• Published by the China Computer Federation (CCF)
  • Chinese science and technology core journal
  • Chinese core journal

Computer Engineering & Science ›› 2025, Vol. 47 ›› Issue (11): 1974-1983.

• Computer Network and Information Security •

• Funding: National Natural Science Foundation of China (U20A20179)

MinRS: A defense method for both model availability and model privacy

REN Zhiqiang,CHEN Xuebin,ZHANG Hongyang   

  1. (1.College of Science, North China University of Science and Technology, Tangshan 063210;
    2.Hebei Key Laboratory of Data Science and Application
    (North China University of Science and Technology), Tangshan 063210;
    3.Tangshan Key Laboratory of Data Science (North China University of Science and Technology), Tangshan 063210, China)
  • Received:2023-10-20 Revised:2024-09-15 Online:2025-11-25 Published:2025-12-08




Abstract: Federated learning is a technique that addresses the challenges of data sharing and privacy protection in machine learning. However, federated learning systems face two kinds of security risk: attacks targeting model availability and attacks targeting model privacy. Moreover, current defense methods against these two kinds of risk are not mutually compatible. To address these problems, a defense method named MinRS is proposed from the perspective of balancing model availability and model privacy. MinRS consists of a secure access scheme and a selection algorithm, and it defends against malicious model attacks without compromising model privacy, thereby achieving secure model aggregation. Experimental results show that, while protecting model privacy, MinRS successfully defends against malicious models generated by three different attack strategies, with almost no negative impact on model performance.
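The abstract does not specify how MinRS's selection algorithm scores or filters malicious model updates. As an illustrative sketch of the general idea behind selection-based robust aggregation — keeping only the client updates that look least anomalous before averaging — a Krum-style minimum-score selection might look like the following. All function names, parameters, and the selection rule itself are hypothetical stand-ins, not the paper's actual method.

```python
# Illustrative sketch only: the abstract does not detail MinRS's selection
# rule, so a Krum-style minimum-score selection is used as a stand-in.
import numpy as np

def select_updates(updates, n_malicious, n_select):
    """Score each client update by the sum of squared distances to its
    nearest peers, then keep the n_select lowest-scoring updates."""
    n = len(updates)
    # Pairwise squared L2 distances between flattened model updates.
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    # Sum distances to the n - n_malicious - 2 nearest peers (skip self at index 0).
    k = max(1, n - n_malicious - 2)
    scores = np.array([np.sort(row)[1:k + 1].sum() for row in dists])
    return np.argsort(scores)[:n_select]

def aggregate(updates, keep):
    """Average only the selected (presumed benign) updates."""
    return np.mean([updates[i] for i in keep], axis=0)

# Toy round: 8 benign updates near 1.0, 2 malicious updates near 100.
rng = np.random.default_rng(0)
benign = [np.ones(4) + 0.01 * rng.standard_normal(4) for _ in range(8)]
malicious = [100 * np.ones(4) for _ in range(2)]
updates = benign + malicious
keep = select_updates(updates, n_malicious=2, n_select=6)
agg = aggregate(updates, keep)
```

Benign updates cluster together, so their scores are small; the malicious outliers accumulate large distances to the benign majority and are excluded before averaging. Note that MinRS additionally pairs its selection with a secure access scheme so that selection does not expose individual models, which this plain-score sketch does not capture.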


Key words: federated learning, malicious model, model availability, model privacy, defense method