OPTIMIZATION TECHNIQUES IN OPERATIONS RESEARCH: A REVIEW

Badri Vishal Padamwar
Hemant Pandey

Abstract

This review paper provides an in-depth exploration of optimization techniques in Operations Research (OR), highlighting their significance, recent advancements, challenges, and future directions. Operations Research utilizes mathematical and analytical methods to optimize decision-making processes across various industries. The paper begins with an overview of OR, discussing its definition, scope, and historical development. It then delves into the fundamentals of optimization, covering different types of optimization problems and algorithms. Classical optimization techniques such as Linear Programming, Integer Programming, and Nonlinear Programming are discussed, followed by an exploration of heuristic and metaheuristic optimization techniques including Genetic Algorithms, Simulated Annealing, Tabu Search, Particle Swarm Optimization, and Ant Colony Optimization. Recent advances in optimization, including hybrid methods, multi-objective optimization, optimization for big data, and integration with machine learning and artificial intelligence, are analyzed. Furthermore, the paper examines the challenges faced by optimization research and outlines emerging trends and future prospects. By synthesizing current knowledge and identifying areas for future research, this paper aims to contribute to the ongoing development and application of optimization techniques in OR and related fields.
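
To make one of the surveyed metaheuristics concrete, the following is a minimal, self-contained Python sketch of simulated annealing applied to a one-dimensional test function. The objective, cooling schedule, and parameter values are illustrative assumptions for this sketch and are not taken from the paper.

# Minimal simulated annealing sketch (illustrative only; the objective,
# cooling schedule, and parameters are assumptions, not from the paper).
import math
import random

def objective(x):
    # Simple multimodal test function chosen for illustration.
    return x * x + 10 * math.sin(x)

def simulated_annealing(start, temp=10.0, cooling=0.95, steps=1000, step_size=0.5):
    current = best = start
    current_val = best_val = objective(start)
    for _ in range(steps):
        candidate = current + random.uniform(-step_size, step_size)
        candidate_val = objective(candidate)
        delta = candidate_val - current_val
        # Always accept improvements; accept worse moves with a
        # temperature-dependent probability (Metropolis criterion).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_val = candidate, candidate_val
            if current_val < best_val:
                best, best_val = current, current_val
        temp *= cooling  # geometric cooling schedule
    return best, best_val

if __name__ == "__main__":
    x, fx = simulated_annealing(start=5.0)
    print(f"approximate minimizer: x={x:.3f}, f(x)={fx:.3f}")

As the sketch shows, the method trades off exploration (accepting worse solutions at high temperature) against exploitation (converging as the temperature decays), which is the common thread among the metaheuristics reviewed in the paper.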

Article Details

How to Cite
Padamwar, B. V., & Pandey, H. (2019). OPTIMIZATION TECHNIQUES IN OPERATIONS RESEARCH: A REVIEW. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 10(1), 746–752. https://doi.org/10.61841/turcomat.v10i1.14604
Section
Research Articles
