OPTIMIZING NUMERICAL WEATHER PREDICTION MODEL PERFORMANCE USING MACHINE LEARNING TECHNIQUES
Abstract
Numerical weather prediction (NWP) models, which ingest observations such as temperature and humidity, are the primary tool in weather forecasting. The Korea Meteorological Administration (KMA) uses the UK Met Office's GloSea6 NWP model for operational forecasting. Supercomputers are needed to run these models not only for real-time forecasting but also for research; however, limited supercomputer resources have made it difficult for many researchers to run them. To address this, the KMA developed Low GloSea6, a low-resolution version of the model. Although Low GloSea6 can run on the small and medium-sized servers available at research institutions, it still consumes substantial computing resources, particularly under I/O load. For models with heavy data I/O, the I/O load can degrade performance, so model I/O optimization is essential; manual trial-and-error tuning by users, though, is inefficient. This work presents a machine learning-based method for optimizing the hardware and software configuration of the Low GloSea6 research environment. The proposed procedure has two stages. First, performance data were collected with profiling tools to extract hardware platform parameters and Low GloSea6 internal parameters under various configurations. Second, the collected data were used to train a machine learning model that identifies the optimal hardware platform parameters and Low GloSea6 internal parameters for new research environments. Compared against the actual best parameter combinations, the model predicted the optimal combinations with high accuracy across a range of research scenarios. Notably, the error between the predicted and actual model execution times for the chosen parameter combinations was only 16%.
Overall, this optimization approach may also improve the efficiency of other high-performance computing research applications.
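The two-stage procedure described above can be sketched in miniature. The paper's actual features (e.g., Darshan I/O counters and Low GloSea6 internal parameters) and its model are not specified in this abstract, so the parameter names, the profiling records, and the simple nearest-neighbor regressor below are illustrative assumptions only:

```python
# Hedged sketch of the two-stage tuning idea. The feature names (buffer_kb,
# stripe_count), the profiling data, and the 2-NN regressor are hypothetical
# stand-ins, not the paper's actual features or model.
import math

# Stage 1: profiling records collected under different configurations:
# (buffer_kb, stripe_count) -> measured runtime in seconds (made-up values).
profile = [
    ((64, 1), 910.0),
    ((64, 4), 700.0),
    ((256, 1), 820.0),
    ((256, 4), 540.0),
    ((1024, 4), 575.0),
    ((1024, 8), 610.0),
]

def predict_runtime(params, data=profile, k=2):
    """Distance-weighted k-NN estimate of runtime for a parameter combo.
    (A real pipeline would normalize features so no one parameter dominates.)"""
    nearest = sorted((math.dist(params, p), t) for p, t in data)[:k]
    if nearest[0][0] == 0:          # exact match in the profiling data
        return nearest[0][1]
    weights = [1.0 / d for d, _ in nearest]
    return sum(w * t for w, (_, t) in zip(weights, nearest)) / sum(weights)

# Stage 2: search candidate combinations for the lowest predicted runtime.
candidates = [(b, s) for b in (64, 256, 1024) for s in (1, 4, 8)]
best = min(candidates, key=predict_runtime)
print("best parameters:", best)
print("predicted runtime:", round(predict_runtime(best), 1))
```

In the paper's setting, the candidate grid would span the hardware platform and Low GloSea6 internal parameters, and the quality of the tuning would be judged, as in the abstract, by the gap between predicted and actual execution times.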
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
- The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.