OPTIMIZING NUMERICAL WEATHER PREDICTION MODEL PERFORMANCE USING MACHINE LEARNING TECHNIQUES

Dr K Venkata Naganjaneyulu
N. Nayana Latha
P. Jhansi Sriya
P. Vaishnavi

Abstract

Numerical weather prediction (NWP) models, which assimilate weather observations such as temperature and humidity, are the primary tool in weather forecasting. The Korea Meteorological Administration (KMA) uses GloSea6, an NWP model developed in the United Kingdom, for its weather forecasts. Running these models, whether for real-time forecasting or for research, requires supercomputers, and limited supercomputer resources have made the models difficult for many researchers to run. To address this problem, the KMA developed Low GloSea6, a low-resolution variant that can run on the small and medium-sized servers found in research institutions. Even so, Low GloSea6 remains computationally demanding, particularly in its I/O load. Because heavy data I/O can degrade model performance, I/O optimization is essential, yet manual trial-and-error tuning by users is inefficient. This work therefore proposes a machine learning-based method for tuning the hardware and software configuration of the Low GloSea6 research environment. The proposed procedure has two stages. First, profiling tools were used to collect performance data, extracting hardware platform parameters and Low GloSea6 internal parameters under a variety of settings. Second, the collected data were used to train a machine learning model that identifies the optimal hardware platform parameters and Low GloSea6 internal parameters for new research environments. Compared with the actual parameter combinations, the model predicted the optimal combinations with high accuracy across a range of research scenarios; notably, the error between the predicted execution time for a chosen parameter combination and the actual execution time was only 16%.
Overall, this optimization technique may also improve the efficiency of other high-performance computing applications.
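The two-stage procedure described in the abstract can be sketched in miniature as follows. This is a hypothetical illustration, not the authors' code: the parameter names (stripe_count, buffer_kb), the profiled runtimes, and the k-nearest-neighbour surrogate are all assumptions standing in for the unspecified profiling data and machine learning model in the paper.

```python
from itertools import product

# Stage 1 (assumed form): profiling data collected under varied settings --
# each (stripe_count, buffer_kb) combination maps to a measured runtime in seconds.
profiled = {
    (1, 64): 120.0, (1, 256): 110.0, (1, 1024): 105.0,
    (4, 64): 95.0,  (4, 256): 80.0,  (4, 1024): 85.0,
    (8, 64): 100.0, (8, 256): 90.0,  (8, 1024): 98.0,
}

def predict(stripe, buf, k=3):
    """Toy surrogate model: predict runtime as the average of the k
    profiled settings closest in normalised parameter space."""
    def dist(p):
        return abs(p[0] - stripe) / 8 + abs(p[1] - buf) / 1024
    nearest = sorted(profiled, key=dist)[:k]
    return sum(profiled[p] for p in nearest) / k

# Stage 2: search candidate settings and pick the predicted optimum,
# which a researcher would then validate with a real run.
candidates = list(product([1, 2, 4, 8], [64, 128, 256, 512, 1024]))
best = min(candidates, key=lambda c: predict(*c))
print(best)
```

A real application of the method would replace the toy surrogate with a trained model and the hand-written table with traces from profiling tools, but the shape of the workflow — profile, fit, then search for the best-predicted combination — is the same.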


How to Cite
Venkata Naganjaneyulu, K., Nayana Latha, N., Jhansi Sriya, P., & Vaishnavi, P. (2024). Optimizing numerical weather prediction model performance using machine learning techniques. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 15(3), 278–285. https://doi.org/10.61841/turcomat.v15i3.14801
