OPTIMIZATION TECHNIQUES IN OPERATIONS RESEARCH: A REVIEW
Abstract
This review paper provides an in-depth exploration of optimization techniques in Operations Research (OR), highlighting their significance, recent advancements, challenges, and future directions. Operations Research utilizes mathematical and analytical methods to optimize decision-making processes across various industries. The paper begins with an overview of OR, discussing its definition, scope, and historical development. It then delves into the fundamentals of optimization, covering different types of optimization problems and algorithms. Classical optimization techniques such as Linear Programming, Integer Programming, and Nonlinear Programming are discussed, followed by an exploration of heuristic and metaheuristic optimization techniques including Genetic Algorithms, Simulated Annealing, Tabu Search, Particle Swarm Optimization, and Ant Colony Optimization. Recent advances in optimization, including hybrid methods, multi-objective optimization, optimization for big data, and integration with machine learning and artificial intelligence, are analyzed. Furthermore, the paper examines the challenges faced by optimization research and outlines emerging trends and future prospects. By synthesizing current knowledge and identifying areas for future research, this paper aims to contribute to the ongoing development and application of optimization techniques in OR and related fields.
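To make the two families of techniques named above concrete, the short sketch below first solves a small linear program with SciPy's linprog routine (a classical technique) and then minimizes a simple one-dimensional function with a bare-bones simulated annealing loop (a metaheuristic). This is a minimal illustrative sketch written for this review; the problem data, objective function, cooling schedule, and step size are assumptions chosen for the example, not taken from the paper.

# Classical technique: a small linear program.
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
from scipy.optimize import linprog

c = [-3, -2]                      # linprog minimizes, so negate the objective
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("LP optimum x:", res.x, "objective value:", -res.fun)

# Metaheuristic technique: bare-bones simulated annealing on an illustrative
# one-dimensional objective with several local minima.
import math
import random

def f(x):
    return x * x + 10.0 * math.sin(x)

def simulated_annealing(x0, temp=10.0, cooling=0.99, steps=5000):
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)   # random neighbour move
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if f(x) < f(best):
                best = x
        temp *= cooling                             # geometric cooling schedule
    return best

print("Simulated annealing minimum near:", simulated_annealing(x0=5.0))

Both snippets are deliberately small; practical applications of these techniques would add richer constraint handling, restarts, and tuning of parameters such as the cooling schedule and neighbourhood size.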
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
References
Bertsimas, D., & Freund, R. M. (2016). Data, models, and decisions: The fundamentals of management science. Dynamic Ideas.
Chvátal, V. (2013). Linear programming. Courier Corporation.
Clerc, M., & Kennedy, J. (2002). The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58-73.
Hillier, F. S., & Lieberman, G. J. (2012). Introduction to Operations Research. McGraw-Hill Education.
Nemhauser, G. L., & Wolsey, L. A. (2014). Integer and combinatorial optimization. John Wiley & Sons.
Taha, H. A. (2016). Operations research: An introduction. Pearson Education.
Bazaraa, M. S., Sherali, H. D., & Shetty, C. M. (2013). Nonlinear programming: Theory and algorithms. John Wiley & Sons.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks, 1942-1948.
Dorigo, M. (1996). Ant colony optimization. Proceedings of IEEE International Conference on Evolutionary Computation, 2, 137-142.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.
Kirkpatrick, S., Gelatt Jr, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671-680.
Glover, F. (1989). Tabu search—part I. ORSA Journal on Computing, 1(3), 190-206.
Van den Bergh, F., & Engelbrecht, A. P. (2001). A new locally convergent particle swarm optimizer. IEEE Transactions on Evolutionary Computation, 5(3), 292-295.
Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197.
Luersen, M. A., & Nowak, A. S. (2004). Benchmarking the multi-objective evolutionary algorithm MOEA/D on the DTLZ test suite. Congress on Evolutionary Computation, 1, 1-8.
Ma, Z., & Zhang, J. (2019). A survey of big data optimization: Opportunities, challenges, and methodologies. IEEE Access, 7, 135673-135689.
Bergstra, J., & Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb), 281-305.