Please use this identifier to cite or link to this item: https://scidar.kg.ac.rs/handle/123456789/22729

| Title: | A Survey of Reinforcement Learning Approaches for Tuning Particle Swarm Optimization |
| Authors: | Milicevic, Bogdan; Milovanović, Vladimir |
| Journal: | Book of Proceedings of the 3rd International Conference on Chemo and BioInformatics (Kragujevac, 2025) |
| Issue Date: | 2025 |
| Abstract: | Particle Swarm Optimization (PSO) remains a popular, simple, and strong baseline for numerical optimization, yet its performance depends critically on a small set of hyper-parameters (e.g., inertia weight w and the cognitive and social coefficients c1, c2) and on structural design choices (e.g., topology, velocity clamps). Over the last decade, reinforcement learning (RL) has emerged as a principled, data-driven way to adapt these design choices online, whether by directly controlling parameters, reshaping swarm interactions, selecting variation operators, or transferring control policies across runs. This survey systematizes RL-for-PSO tuning along four families: (1) direct parameter control, (2) topology/structure control, (3) operator/strategy selection, and (4) cross-run memory and transfer. We highlight representative methods, including tabular Q-learning, Deep Q-Networks (DQN), deterministic policy gradients (DDPG), and hybrid RL-PSO schemes; summarize empirical evidence; and distill practical design patterns (state, action, reward, and training protocols). We conclude with open challenges in stability, sample efficiency, safety-constrained control, and reproducible benchmarking. (Illustrative sketches of the PSO update and an RL-driven parameter controller appear below this record.) |
| URI: | https://scidar.kg.ac.rs/handle/123456789/22729 |
| Type: | conferenceObject |
| DOI: | 10.46793/ICCBIKG25.198M |
| Appears in Collections: | Institute for Information Technologies, Kragujevac |
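For reference, the hyper-parameters named in the abstract (w, c1, c2) enter the canonical PSO velocity and position update. The notation below (personal best p_i, global best g, uniform random factors r_1, r_2) follows common convention and is not quoted from the paper itself:

```latex
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1},
\qquad
r_1, r_2 \sim \mathcal{U}(0,1).
```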
Files in This Item:
| File | Size | Format |
|---|---|---|
| 199-202-Milicevic.pdf | 626.72 kB | Adobe PDF |
This item is licensed under a Creative Commons License
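To make the survey's first family (direct parameter control) concrete, here is a minimal sketch of a tabular Q-learning controller that nudges the inertia weight w online. Every specific choice is an assumption for illustration, not the method of any particular surveyed paper: the sphere objective, the three-bin state discretization in the hypothetical helper `state_from`, the discrete nudge actions, and the reward defined as relative improvement of the global best.

```python
# Minimal illustrative sketch: tabular Q-learning adapting PSO's inertia
# weight online. All design choices (objective, state bins, actions, reward)
# are assumptions for demonstration, not taken from a specific paper.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):  # toy objective to be minimized
    return float(np.sum(x * x))

DIM, SWARM, ITERS = 10, 20, 200
ACTIONS = np.array([-0.05, 0.0, +0.05])   # decrease / keep / increase w
N_STATES = 3                               # stagnating / slow / fast progress
alpha, gamma, eps = 0.1, 0.9, 0.1          # Q-learning hyper-parameters

Q = np.zeros((N_STATES, len(ACTIONS)))

def state_from(improvement):
    # Discretize the relative improvement of the global best into 3 bins.
    if improvement <= 1e-8:
        return 0
    if improvement <= 1e-3:
        return 1
    return 2

# Standard PSO bookkeeping: positions, velocities, personal and global bests.
x = rng.uniform(-5, 5, (SWARM, DIM))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([sphere(p) for p in x])
g = pbest[np.argmin(pbest_f)].copy()
g_f = pbest_f.min()

w, c1, c2 = 0.7, 1.5, 1.5
s = 0
for t in range(ITERS):
    # Epsilon-greedy action: nudge the inertia weight within safe bounds.
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    w = float(np.clip(w + ACTIONS[a], 0.2, 0.9))

    # Canonical PSO velocity and position update.
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v

    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    old_g_f = g_f
    if pbest_f.min() < g_f:
        g_f = pbest_f.min()
        g = pbest[np.argmin(pbest_f)].copy()

    # Reward: relative improvement of the global best this iteration.
    r = (old_g_f - g_f) / (abs(old_g_f) + 1e-12)
    s_next = state_from(r)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(f"best f = {g_f:.6f}, final w = {w:.2f}")
```

Methods surveyed in the paper vary exactly these ingredients: richer state features (e.g., swarm diversity, stagnation counters), finer or continuous action spaces (as with DDPG), and different reward shapings; the sketch fixes one simple choice for each.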

