GENERATIVE ARTIFICIAL INTELLIGENCE FOR METAHEURISTIC OPTIMIZATION: TAXONOMY, METHODOLOGICAL FRAMEWORKS, AND OPEN RESEARCH CHALLENGES
Abstract
The advent of Generative Artificial Intelligence (GenAI) has triggered a paradigm shift in optimization research by enabling data-driven modeling of search spaces. This paper presents a comprehensive review of GenAI-assisted metaheuristic optimization, examining the roles of generative models (generative adversarial networks, diffusion models, variational autoencoders, and large language models) within evolutionary and swarm-based frameworks. To organize existing methodologies, we propose a three-dimensional taxonomy spanning functional role, level of integration, and learning paradigm. The survey analyzes generator-enhanced evolution, diffusion-guided search, LLM-assisted metaheuristic design, and surrogate-assisted generative optimization, tracing their research lineages and points of convergence. We also review empirical practice, covering benchmarking protocols, performance evaluation criteria, and reproducibility procedures. Finally, we identify key open challenges, including scalability, uncertainty quantification, interpretability, and ethical concerns. The study charts methodological frontiers toward autonomous, adaptive generative optimization systems grounded in theoretical foundations and capable of addressing large-scale real-world optimization problems.
