• A publication of the China Computer Federation
  • Chinese Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science

• Software Engineering •

• Supported by: National Key Research and Development Program of China (2018YFC0808306); Qinghai Provincial Key Laboratory of the Internet of Things (2017-ZJ-Y21); Hebei Province Engineering Technology Research Center for IoT Monitoring (3142018055); Hebei Provincial Key Research and Development Program (19270318D)

A variance toss algorithm for parameter reduction of soft set

LIN Lian-hai1,TIAN Li-qin1,2,CAI Ming-kai1,LI Sheng-hong1   

  (1. School of Computer Science, Qinghai Normal University, Xining 810008, China;
    2. School of Computer Science, North China Institute of Science and Technology, Langfang 065201, China)
  • Received: 2019-09-03  Revised: 2019-11-26  Online: 2020-02-25  Published: 2020-02-25


Abstract:

A soft set is a theoretical tool for handling uncertain data and is commonly used in decision theory. Parameter reduction of a soft set removes redundant parameters that have little effect on decision-making. Since the 0-1 linear programming algorithm was proposed, the parameter reduction problem for soft sets has been essentially solved; however, that algorithm is complex to implement and depends on an integer programming solver. Here, considering the practical application background of soft sets and combining them with probability theory, we design a soft set parameter reduction method for the big data setting, called the variance toss algorithm. Its time complexity is O(m²n), whereas 0-1 linear programming is generally regarded as NP-hard. The variance toss algorithm is simple to implement. When the object set (or universe) is small, less than twice the size of the attribute set, it performs poorly; but as the object set grows, its efficiency gradually improves and ultimately surpasses that of the 0-1 linear programming algorithm. It is even more efficient for soft sets with a high reduction density.
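The abstract defines parameter reduction as removing parameters that have little effect on decision-making. A minimal sketch of that idea in Python, using the standard choice-value formulation of soft-set decision making: a parameter subset is dispensable when every object scores the same total over it, so dropping it leaves all choice-value differences, and hence the decision ranking, unchanged. The variance toss algorithm itself is not detailed in this abstract, so only this textbook dispensability test is shown; all data and names are illustrative.

```python
# A soft set over objects U and parameters E, stored as a 0/1 table:
# rows = objects, columns = parameters e1..e4 (membership indicators).
soft_set = {
    "h1": [1, 0, 1, 1],
    "h2": [0, 1, 1, 0],
    "h3": [1, 0, 0, 1],
}

def choice_values(table, keep):
    """Choice value of each object, summed over the kept parameter indices."""
    return {obj: sum(row[j] for j in keep) for obj, row in table.items()}

def is_dispensable(table, drop):
    """A parameter subset is dispensable iff all objects have the same
    total over it, so removing it preserves relative choice values."""
    totals = {sum(row[j] for j in drop) for row in table.values()}
    return len(totals) == 1
```

For the table above, `is_dispensable(soft_set, {0, 1})` returns `True` (every object scores 1 over e1 and e2), while `is_dispensable(soft_set, {2})` returns `False`, so {e1, e2} can be reduced without disturbing the decision ranking.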
 

 

Key words: soft set, parameter reduction, probability theory, variance toss algorithm, big data