[1] Fu H H, Liao J F, Yang J Z, et al. The Sunway TaihuLight supercomputer: System and applications[J]. Science China Information Sciences, 2016, 59: 072001.
[2] Braam P. The Lustre storage architecture[J]. arXiv:1903.01955, 2019.
[3] Summit supercomputer[EB/OL]. [2021-11-05]. https://www.olcf.ornl.gov/summit/.
[4] Zimmer C. Summit burst buffer[EB/OL]. [2021-11-03]. https://www.olcf.ornl.gov/wp-content/uploads/2020/02/Burst_Buffer_Training_June2020.pdf.
[5] Gainaru A, Aupy G, Benoit A, et al. Scheduling the I/O of HPC applications under congestion[C]//Proc of 2015 IEEE International Parallel and Distributed Processing Symposium, 2015: 1013-1022.
[6] Jokanovic A, Sancho J C, Labarta J, et al. Quiet neighborhoods: Key to protect job performance predictability[C]//Proc of 2015 IEEE International Parallel and Distributed Processing Symposium, 2015: 449-459.
[7] Kuo C S, Shah A, Nomura A, et al. How file access patterns influence interference among cluster applications[C]//Proc of 2014 IEEE International Conference on Cluster Computing, 2014: 185-193.
[8] Neuwirth S, Wang F Y, Oral S, et al. Using balanced data placement to address I/O contention in production environments[C]//Proc of the 28th International Symposium on Computer Architecture and High Performance Computing, 2016: 9-17.
[9] Gunawi H S, Suminto R O, Sears R, et al. Fail-slow at scale: Evidence of hardware performance faults in large production systems[J]. ACM Transactions on Storage, 2018, 14(3): 23:1-23:26.
[10] Yang B, Ji X, Ma X S, et al. End-to-end I/O monitoring on a leading supercomputer[C]//Proc of the 16th USENIX Symposium on Networked Systems Design and Implementation, 2019: 379-394.
[11] Carns P, Latham R, Ross R, et al. 24/7 characterization of petascale I/O workloads[C]//Proc of 2009 IEEE International Conference on Cluster Computing and Workshops, 2009: 1-10.
[12] Vijayakumar K, Mueller F, Ma X S, et al. Scalable I/O tracing and analysis[C]//Proc of the 4th Annual Workshop on Petascale Data Storage, 2009: 26-31.
[13] Paul A K, Chard R, Chard K, et al. FSMonitor: Scalable file system monitoring for arbitrary storage systems[C]//Proc of 2019 IEEE International Conference on Cluster Computing, 2019: 1-11.
[14] LustrePerfMon[EB/OL]. [2021-11-05]. http://lustrefs.cn/monitor/.
[15] Snyder S, Carns P, Harms K, et al. Modular HPC I/O characterization with Darshan[C]//Proc of 2016 Workshop on Extreme-scale Programming Tools, 2016: 9-17.
[16] Al-Mamun A, Liu J, Koziol Q, et al. Reflector: A fine-grained I/O tracker for HPC systems[C]//Proc of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2020: 427-428.
[17] Wan L P, Wolf M, Wang F Y, et al. Analysis and modeling of the end-to-end I/O performance on OLCF's Titan supercomputer[C]//Proc of 2017 IEEE 19th International Conference on High Performance Computing and Communications, 2017: 1-9.
[18] Vazhkudai S S, Miller R, Tiwari D, et al. GUIDE: A scalable information directory service to collect, federate, and analyze logs for operational insights into a leadership HPC facility[C]//Proc of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2017: 1-12.
[19] Wang T, Yu W, Sato K, et al. BurstFS: A distributed burst buffer file system for scientific applications: LLNL-CONF-621480[R]. Livermore: Lawrence Livermore National Laboratory, 2016.
[20] Logstash[EB/OL]. [2021-11-05]. https://www.elastic.co/cn/logstash/.
[21] Redis[EB/OL]. [2021-11-05]. https://github.com/redis/redis-doc.
[22] MySQL[EB/OL]. [2021-11-05]. https://www.mysql.com/.
[23] Shahid J. InfluxDB documentation[EB/OL]. [2021-11-03]. https://buildmedia.readthedocs.org/media/pdf/influxdb-python/latest/influxdb-python.pdf.
[24] Thakur R, Lusk E, Gropp W. Users guide for ROMIO: A high-performance, portable MPI-IO implementation: ANL/MCS-TM-234[R]. Argonne: Argonne National Laboratory, 1997.
[25] Folk M, Heber G, Koziol Q, et al. An overview of the HDF5 technology suite and its applications[C]//Proc of the EDBT/ICDT 2011 Workshop on Array Databases, 2011: 36-47.
[26] nvme-cli[EB/OL]. [2021-11-05]. https://github.com/linux-nvme/nvme-cli.
[27] Liu Y, Gunasekaran R, Ma X S, et al. Server-side log data analytics for I/O workload characterization and coordination on large shared storage systems[C]//Proc of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2016: 819-829.
[28] Liu Y, Gunasekaran R, Ma X S, et al. Automatic identification of application I/O signatures from noisy server-side traces[C]//Proc of the 12th USENIX Conference on File and Storage Technologies, 2014: 213-228.
[29] Thrift[EB/OL]. [2021-11-05]. https://thrift.apache.org/.
[30] Matplotlib[EB/OL]. [2021-11-05]. https://matplotlib.org/.
[31] Vue[EB/OL]. [2021-11-05]. https://vuejs.org/index.html.
[32] Django[EB/OL]. [2021-11-05]. https://www.djangoproject.com/start/overview/.
[33] IOR[EB/OL]. [2021-11-05]. https://github.com/LLNL/ior.
[34] Roberts V A, Thompson E E, Pique M E, et al. DOT2: Macromolecular docking with improved biophysical models[J]. Journal of Computational Chemistry, 2013, 34(20): 1743-1758.
[35] Ji X, Yang B, Zhang T Y, et al. Automatic, application-aware I/O forwarding resource allocation[C]//Proc of the 17th USENIX Conference on File and Storage Technologies, 2019: 265-279.
[36] Tang X C, Wang H J, Ma X S, et al. Spread-n-share: Improving application performance and cluster throughput with resource-aware job placement[C]//Proc of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2019: 1-15.