Computer Engineering & Science ›› 2024, Vol. 46 ›› Issue (11): 1924-1930.
• High Performance Computing •
A Method for Improving the Robustness of Mixed-precision Optimization Based on Floating-point Error Analysis
YU Heng-biao, YI Xin, LI Sheng-guo, LI Fa, JIANG Hao, HUANG Chun
Abstract: Floating-point arithmetic is the typical numerical computation model in high-performance computing. Mixed-precision optimization improves performance and reduces energy consumption by lowering the precision of floating-point variables in a program. However, existing automatic mixed-precision optimization techniques suffer from low robustness: the optimized program may fail to meet the result accuracy constraints for some inputs. To address this issue, a method for improving the robustness of mixed-precision optimization based on floating-point error analysis is proposed. First, inputs that trigger imprecise computations in the program are identified through floating-point error analysis. Then, candidate precision configurations are evaluated on these error-triggering inputs, guiding the search toward highly robust mixed-precision configurations. Experimental results on typical floating-point applications show that the method improves the robustness of mixed-precision optimization by 62% on average.
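The following minimal C sketch (with a hypothetical kernel, tolerance, and inputs; not the paper's implementation) illustrates the underlying idea: a precision configuration that looks acceptable on a benign input can violate the accuracy constraint on an error-triggering input, so evaluating candidate configurations on such inputs filters out non-robust choices.

/* Illustrative sketch only: check whether one mixed-precision
 * configuration of f(x) = sqrt(x*x + 1) - x still meets a relative
 * error bound on an input that triggers cancellation. */
#include <math.h>
#include <stdio.h>

/* Reference: every variable kept in double precision. */
static double f_ref(double x)
{
    return sqrt(x * x + 1.0) - x;
}

/* Candidate configuration: the square root is demoted to float. */
static double f_mixed(double x)
{
    float s = sqrtf((float)(x * x + 1.0));
    return (double)s - x;   /* cancellation amplifies the float rounding error */
}

int main(void)
{
    const double tol = 1e-6;                /* result accuracy constraint */
    const double inputs[] = { 1.0, 100.0 }; /* x = 100 triggers the error */

    for (int i = 0; i < 2; i++) {
        double ref = f_ref(inputs[i]);
        double mix = f_mixed(inputs[i]);
        double rel = fabs(mix - ref) / fabs(ref);
        printf("x = %-6g relative error = %.2e -> %s\n",
               inputs[i], rel,
               rel <= tol ? "configuration accepted"
                          : "configuration rejected");
    }
    return 0;
}

On the benign input x = 1 the demoted configuration stays within the bound, while on x = 100 the cancellation magnifies the float rounding error and the configuration is rejected; evaluating only on benign inputs would have accepted it.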
Key words: floating-point arithmetic, mixed precision, robustness, error analysis
YU Heng-biao, YI Xin, LI Sheng-guo, LI Fa, JIANG Hao, HUANG Chun. A method for improving the robustness of mixed-precision optimization based on floating-point error analysis[J]. Computer Engineering & Science, 2024, 46(11): 1924-1930.
URL: http://joces.nudt.edu.cn/EN/Y2024/V46/I11/1924