Please use this identifier to cite or link to this item:
https://scidar.kg.ac.rs/handle/123456789/22729

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Milicevic, Bogdan | - |
| dc.contributor.author | Milovanović, Vladimir | - |
| dc.contributor.editor | Saveljic I. | - |
| dc.contributor.editor | Filipovic, Nenad | - |
| dc.date.accessioned | 2025-12-03T07:54:48Z | - |
| dc.date.available | 2025-12-03T07:54:48Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.isbn | 978-86-82172-05-5 | en_US |
| dc.identifier.uri | https://scidar.kg.ac.rs/handle/123456789/22729 | - |
| dc.description.abstract | Particle Swarm Optimization (PSO) remains a popular, simple, and strong baseline for numerical optimization, yet its performance depends critically on a small set of hyper-parameters (e.g., inertia weight w and cognitive and social coefficients c1, c2) and on structural design choices (e.g., topology, velocity clamps). Over the last decade, reinforcement learning (RL) has emerged as a principled, data-driven way to adapt these design choices online—either by directly controlling parameters, reshaping swarm interactions, selecting variation operators, or transferring control policies across runs. This survey systematizes RL-for-PSO tuning along four families: (1) direct parameter control, (2) topology/structure control, (3) operator/strategy selection, and (4) cross-run memory and transfer. We highlight representative methods—including tabular Q-learning, Deep Q-Networks (DQN), deep deterministic policy gradient (DDPG), and hybrid RL–PSO schemes—summarize empirical evidence, and distill practical design patterns (state, action, reward, and training protocols). We conclude with open challenges in stability, sample efficiency, safety-constrained control, and reproducible benchmarking. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Institute for Information Technologies, University of Kragujevac | en_US |
| dc.relation.ispartof | Book of Proceedings International Conference on Chemo and BioInformatics (3 ; 2025 ; Kragujevac) | en_US |
| dc.rights | CC0 1.0 Universal | * |
| dc.rights.uri | http://creativecommons.org/publicdomain/zero/1.0/ | * |
| dc.subject | particle swarm optimization | en_US |
| dc.subject | reinforcement learning | en_US |
| dc.subject | parameter tuning | en_US |
| dc.title | A Survey of Reinforcement Learning Approaches for Tuning Particle Swarm Optimization | en_US |
| dc.type | conferenceObject | en_US |
| dc.description.version | Published | en_US |
| dc.identifier.doi | 10.46793/ICCBIKG25.198M | en_US |
| dc.type.version | PublishedVersion | en_US |
| dc.source.conference | 3rd International Conference on Chemo and Bioinformatics ICCBIKG 2025 | en_US |
| Appears in Collections: | Institute for Information Technologies, Kragujevac | |
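
The abstract above describes "direct parameter control", where an RL agent adapts PSO hyper-parameters such as the inertia weight w online. The following is a minimal illustrative sketch of that idea, not the paper's method: a tabular Q-learning agent picks w each iteration of a plain global-best PSO. The state discretization, action set, reward (relative improvement of the global best), and the `sphere` test function are assumptions chosen for brevity.

```python
# Minimal sketch (assumed, not the surveyed method): tabular Q-learning
# controls the PSO inertia weight w online.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # hypothetical benchmark: f(x) = sum(x_i^2)
    return float(np.sum(x * x))

# --- Q-learning setup: 3 coarse states x 3 inertia-weight actions ----------
ACTIONS = [0.4, 0.7, 0.9]           # candidate inertia weights w
Q = np.zeros((3, len(ACTIONS)))     # Q-table: state x action
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def state_of(stagnation):           # state = how long gbest has stagnated
    return min(stagnation, 2)       # 0: improving, 1: brief stall, 2: stuck

# --- Plain gbest PSO with the RL-chosen inertia weight ---------------------
n, dim, c1, c2 = 20, 10, 1.5, 1.5
X = rng.uniform(-5, 5, (n, dim))    # particle positions
V = np.zeros((n, dim))              # particle velocities
P, Pf = X.copy(), np.array([sphere(x) for x in X])   # personal bests
g, gf = P[np.argmin(Pf)].copy(), Pf.min()            # global best

stagnation, s = 0, 0
for it in range(200):
    # epsilon-greedy action selection: pick an inertia weight
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    w = ACTIONS[a]

    # standard PSO velocity/position update
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = X + V

    # evaluate and update personal bests
    F = np.array([sphere(x) for x in X])
    better = F < Pf
    P[better], Pf[better] = X[better], F[better]
    new_gf = Pf.min()

    # reward: relative improvement of the global best this iteration
    reward = (gf - new_gf) / (abs(gf) + 1e-12)
    stagnation = 0 if new_gf < gf else stagnation + 1
    if new_gf < gf:
        gf, g = new_gf, P[np.argmin(Pf)].copy()

    # tabular Q-learning update
    s_next = state_of(stagnation)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(f"best f = {gf:.3e}")
```

The other three families surveyed (topology/structure control, operator/strategy selection, and cross-run transfer) follow the same state-action-reward pattern, with the action space replaced by neighborhood structures, variation operators, or reused policies, respectively.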
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 199-202-Milicevic.pdf | | 626.72 kB | Adobe PDF |
This item is licensed under a Creative Commons License (CC0 1.0 Universal).
