1. The Firefly Algorithm is a metaheuristic inspired by the...
Firefly algorithm and whale optimization algorithm
Easy
A.hunting strategy of lions
B.migration patterns of birds
C.foraging of ants
D.flashing behavior of fireflies
Correct Answer: flashing behavior of fireflies
Explanation:
The algorithm models how fireflies use their light flashes to attract mates or prey, where the brightness of the flash corresponds to the fitness of a solution.
2. In the Firefly Algorithm, the attractiveness of a firefly is directly related to its...
Firefly algorithm and whale optimization algorithm
Easy
A.age
B.distance from the origin
C.light intensity (brightness)
D.speed of movement
Correct Answer: light intensity (brightness)
Explanation:
A firefly's attractiveness is proportional to its brightness, which is determined by the objective function's value. Brighter fireflies attract less bright ones.
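The pairwise attraction behind these first two questions can be sketched in a few lines. This is a minimal illustration, not the full algorithm: the helper name `firefly_step`, the `sphere` test function, and all default parameter values are assumptions chosen for the example.

```python
import math
import random

def firefly_step(x_i, x_j, f, beta0=1.0, gamma=1.0, alpha=0.2):
    """One FA interaction: if firefly j is brighter (lower cost, for
    minimization), firefly i moves toward j with distance-decayed
    attractiveness plus a small random perturbation."""
    if f(x_j) < f(x_i):  # j is brighter than i
        r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))      # squared distance
        beta = beta0 * math.exp(-gamma * r2)                   # attractiveness
        return [a + beta * (b - a) + alpha * (random.random() - 0.5)
                for a, b in zip(x_i, x_j)]
    return list(x_i)  # no move toward a dimmer firefly

sphere = lambda x: sum(v * v for v in x)  # toy objective (minimize)
```

With `alpha = 0` the move is pure deterministic attraction; the randomization term is what keeps the swarm exploring.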
3. The Whale Optimization Algorithm (WOA) is primarily based on the hunting behavior of which specific animal?
Firefly algorithm and whale optimization algorithm
Easy
A.Killer whales (Orcas)
B.Humpback whales
C.Sperm whales
D.Blue whales
Correct Answer: Humpback whales
Explanation:
WOA specifically mimics the bubble-net feeding strategy, a unique hunting technique observed in humpback whales.
4. Which of the following is a key phase of the Whale Optimization Algorithm's hunting strategy?
Firefly algorithm and whale optimization algorithm
Easy
A.Building a nest
B.Hibernation
C.Encircling prey
D.Shedding skin
Correct Answer: Encircling prey
Explanation:
The three main phases of WOA are encircling prey, the bubble-net attacking method (exploitation), and searching for prey (exploration).
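The encircling-prey phase is usually written as D = |C·X* − X|, X(t+1) = X* − A·D, with A = 2a·r₁ − a and C = 2·r₂, where a decays from 2 to 0 over the run. A minimal per-dimension sketch of that update (the helper name `woa_encircle` is an assumption):

```python
import random

def woa_encircle(x, x_best, a):
    """WOA shrinking-encircling move, applied per dimension:
    A = 2*a*r1 - a, C = 2*r2, D = |C*x_best - x|, x_new = x_best - A*D.
    As a decays from 2 to 0 over the iterations, |A| shrinks and the
    whale closes in on the best solution found so far."""
    new = []
    for xi, bi in zip(x, x_best):
        r1, r2 = random.random(), random.random()
        A = 2 * a * r1 - a
        C = 2 * r2
        D = abs(C * bi - xi)
        new.append(bi - A * D)
    return new
```

At `a = 0` the coefficient A vanishes and the whale lands exactly on the current best position, which is the fully exploitative end of the schedule.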
5. In the Grey Wolf Optimizer (GWO), the three best solutions found so far are represented by which wolves in the social hierarchy?
Grey wolf optimization and grasshopper optimization algorithm
Easy
A.Alpha, Beta, and Delta
B.Omega, Beta, and Gamma
C.Alpha, Beta, and Omega
D.Epsilon, Gamma, and Delta
Correct Answer: Alpha, Beta, and Delta
Explanation:
The GWO algorithm is guided by the three leaders of the pack: Alpha (α), the best solution; Beta (β), the second-best solution; and Delta (δ), the third-best solution.
6. What is the primary role of the omega (ω) wolves in the Grey Wolf Optimizer?
Grey wolf optimization and grasshopper optimization algorithm
Easy
A.To challenge the Alpha wolf
B.To find new territory
C.To follow the Alpha, Beta, and Delta wolves
D.To lead the hunt
Correct Answer: To follow the Alpha, Beta, and Delta wolves
Explanation:
The omega wolves represent the rest of the candidate solutions. They update their positions based on the locations of the top three leaders (Alpha, Beta, and Delta).
7. The Grasshopper Optimization Algorithm (GOA) is inspired by the behavior of grasshoppers in...
Grey wolf optimization and grasshopper optimization algorithm
Easy
A.building nests
B.isolation
C.mating rituals
D.a swarm
Correct Answer: a swarm
Explanation:
GOA models the social interaction and movement of grasshoppers in a swarm, particularly their behavior in both larval (slow movement) and adult (long-range movement) stages.
8. In the Grasshopper Optimization Algorithm, the movement of an individual is mainly influenced by which two forces?
Grey wolf optimization and grasshopper optimization algorithm
Easy
A.Hunger and fear
B.Magnetic fields and light
C.Social interaction and gravity force towards the target
D.Wind current and temperature
Correct Answer: Social interaction and gravity force towards the target
Explanation:
A grasshopper's position is updated based on its interaction with other grasshoppers (social force) and a gravity force pulling it towards the best solution found so far (the target).
9. Algorithms inspired by physical processes, like the cooling of metal in annealing, are grouped into which category of metaheuristics?
Conceptual grouping of metaheuristics
Easy
A.Physics-based
B.Swarm-based
C.Human-based
D.Evolutionary-based
Correct Answer: Physics-based
Explanation:
Physics-based algorithms mimic physical laws. Simulated Annealing, for example, is inspired by the metallurgical process of annealing.
10. Ant Colony Optimization and Particle Swarm Optimization are examples of which class of algorithms?
Conceptual grouping of metaheuristics
Easy
A.Evolutionary algorithms
B.Swarm intelligence-based
C.Physics-based
D.Trajectory-based
Correct Answer: Swarm intelligence-based
Explanation:
These algorithms are inspired by the collective, decentralized intelligence of social swarms, such as ant colonies or bird flocks.
11. A key characteristic of population-based metaheuristics is that they...
Conceptual grouping of metaheuristics
Easy
A.work with a single solution that moves through the search space
B.are guaranteed to find the global optimum
C.are only inspired by biological evolution
D.maintain and improve multiple candidate solutions simultaneously
Correct Answer: maintain and improve multiple candidate solutions simultaneously
Explanation:
Population-based algorithms, like Genetic Algorithms or PSO, work with a set (population) of solutions at each iteration, sharing information to guide the search.
12. Genetic Algorithms and Differential Evolution belong to which group of metaheuristics?
Conceptual grouping of metaheuristics
Easy
A.Human-based algorithms
B.Swarm intelligence algorithms
C.Evolutionary algorithms
D.Physics-based algorithms
Correct Answer: Evolutionary algorithms
Explanation:
These algorithms are inspired by the principles of biological evolution, such as selection, crossover (recombination), and mutation.
13. When comparing optimization algorithms, what does 'convergence speed' refer to?
Comparison of metaheuristic algorithms
Easy
A.How quickly the algorithm finds a good enough solution
B.The programming language it is written in
C.The computational complexity of the algorithm
D.How many parameters the algorithm has
Correct Answer: How quickly the algorithm finds a good enough solution
Explanation:
Convergence speed measures how many iterations or how much time an algorithm takes to approach an optimal or near-optimal solution.
14. The "No Free Lunch" (NFL) theorem implies that...
Comparison of metaheuristic algorithms
Easy
A.free and open-source algorithms are always worse than commercial ones
B.no single optimization algorithm is best for all possible problems
C.all optimization algorithms perform equally well on every problem
D.an algorithm that is fast is always better
Correct Answer: no single optimization algorithm is best for all possible problems
Explanation:
The NFL theorem states that if an algorithm performs well on a certain class of problems, it will necessarily perform poorly on another class of problems. This means there is no universally superior algorithm.
15. In the context of metaheuristics, what does 'parameter tuning' involve?
Comparison of metaheuristic algorithms
Easy
A.Choosing the objective function for the problem
B.Writing the algorithm's code
C.Increasing the population size to infinity
D.Setting the algorithm's control parameters to achieve the best performance
Correct Answer: Setting the algorithm's control parameters to achieve the best performance
Explanation:
Most metaheuristics have parameters (e.g., population size, mutation rate) that need to be set. Parameter tuning is the process of finding the right values for these parameters for a specific problem.
16. A common way to ensure a fair comparison between two stochastic (randomized) optimization algorithms is to...
Comparison of metaheuristic algorithms
Easy
A.use different population sizes for each algorithm
B.run each algorithm multiple times and compare their average performance
C.use different objective functions for each algorithm
D.run each algorithm only once
Correct Answer: run each algorithm multiple times and compare their average performance
Explanation:
Since metaheuristics often have a random component, running them multiple times and analyzing statistical measures (like mean, standard deviation) provides a more reliable and fair comparison of their typical performance.
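The multi-run comparison described in the explanation can be sketched with the standard library. Everything here is an illustrative assumption: `compare_runs`, the seeding scheme, and the toy `random_search` "solver".

```python
import random
import statistics

def compare_runs(solver, runs=30, seed0=0):
    """Run a stochastic solver several times with different seeds and
    report the mean and standard deviation of the best fitness found.
    This is fairer than judging a single lucky (or unlucky) run."""
    results = []
    for k in range(runs):
        random.seed(seed0 + k)  # independent, reproducible runs
        results.append(solver())
    return statistics.mean(results), statistics.stdev(results)

def random_search():
    """Toy stochastic 'optimizer': best of 100 uniform samples of
    f(x) = x^2 on [-1, 1] (true minimum is 0)."""
    return min(random.uniform(-1, 1) ** 2 for _ in range(100))
```

In practice one would also apply a statistical test (e.g. a Wilcoxon rank-sum test) to the two result samples rather than comparing means alone.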
17. An optimization algorithm is said to have good 'scalability' if its performance...
Scalability and convergence issues in optimization
Easy
A.is always fast regardless of the problem
B.improves as the problem size increases
C.does not degrade significantly as the problem size increases
D.is consistent only on small-scale problems
Correct Answer: does not degrade significantly as the problem size increases
Explanation:
Scalability refers to an algorithm's ability to handle growing problem sizes (e.g., more dimensions or variables) efficiently without a drastic drop in solution quality or a massive increase in computation time.
18. 'Premature convergence' is an issue where an algorithm...
Scalability and convergence issues in optimization
Easy
A.finds the global optimum too quickly
B.gets stuck in a local optimum and stops exploring the search space
C.converges too slowly to the global optimum
D.fails to converge at all
Correct Answer: gets stuck in a local optimum and stops exploring the search space
Explanation:
This happens when the algorithm's population loses diversity too quickly, converging to a suboptimal solution (a local optimum) and failing to explore other promising areas of the search space to find the global optimum.
19. A standard convergence curve for a minimization problem plots the 'best fitness value' on the y-axis against what on the x-axis?
Scalability and convergence issues in optimization
Easy
A.Algorithm runtime in seconds
B.Number of iterations or function evaluations
C.Number of problem dimensions
D.Population size
Correct Answer: Number of iterations or function evaluations
Explanation:
A convergence curve visually shows how the best solution found by the algorithm improves over time, which is typically measured in iterations or the number of times the objective function has been evaluated.
20. The 'curse of dimensionality' refers to the problem where...
Scalability and convergence issues in optimization
Easy
A.the optimal solution is always at the origin in high dimensions
B.the algorithm requires less memory for high-dimensional problems
C.the algorithm becomes simpler with more dimensions
D.the search space grows exponentially as the number of dimensions increases
Correct Answer: the search space grows exponentially as the number of dimensions increases
Explanation:
As the number of variables (dimensions) in a problem increases, the volume of the search space grows exponentially, making it much harder for an algorithm to find the optimal solution efficiently.
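Two toy calculations make the exponential growth concrete (both helper names are illustrative assumptions, not from the quiz):

```python
def grid_points(points_per_dim, dims):
    """Samples needed to cover a d-dimensional box with a full grid
    at a fixed per-axis resolution: grows exponentially with d."""
    return points_per_dim ** dims

def inner_volume_fraction(inner_side, dims):
    """Fraction of a unit hypercube's volume inside a centred box of
    side inner_side. In high dimensions almost all of the volume
    lies near the boundary, so uniform samples become sparse."""
    return inner_side ** dims

# 10 points per axis: 2 dimensions need 100 samples,
# 10 dimensions already need 10 billion.
```

This is why a fixed-size population that covers a 10-dimensional space reasonably well becomes vanishingly sparse in 100 dimensions.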
21. In the Firefly Algorithm, if the light absorption coefficient, γ, is set to a very large value, what is the expected behavior of the algorithm?
Firefly algorithm and whale optimization algorithm
Medium
A.The algorithm performs a global search across the entire search space.
B.All fireflies will have the same brightness, regardless of their position.
C.The algorithm behaves like a random search because attractiveness becomes negligible except at very close distances.
D.The algorithm converges extremely fast to a single global optimum.
Correct Answer: The algorithm behaves like a random search because attractiveness becomes negligible except at very close distances.
Explanation:
The attractiveness is proportional to e^(−γr²). If γ is very large, the attractiveness term quickly drops to zero as the distance r increases. This means fireflies are only attracted to others in their immediate vicinity, effectively breaking down the swarm's communication and causing the search to become a set of independent random walks.
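The effect of γ on attractiveness is easy to verify numerically. The formula β(r) = β₀·e^(−γr²) is the standard FA attractiveness; the helper name `attractiveness` is an assumption for this sketch.

```python
import math

def attractiveness(r, beta0=1.0, gamma=1.0):
    """FA attractiveness: beta(r) = beta0 * exp(-gamma * r^2)."""
    return beta0 * math.exp(-gamma * r * r)

# Large gamma: attraction vanishes except at very short range,
# so fireflies behave as near-independent random walkers.
# gamma -> 0: beta -> beta0 for every r, so all brighter fireflies
# attract equally, regardless of distance.
```

The two limits bracket the algorithm's behavior: very large γ degenerates toward random search, while γ → 0 pulls the whole swarm toward the brightest individual.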
22. The spiral updating position mechanism in the Whale Optimization Algorithm (WOA) is designed to mimic the humpback whale's bubble-net feeding behavior. What is the primary purpose of this mechanism in the optimization process?
Firefly algorithm and whale optimization algorithm
Medium
A.To reduce the number of tunable parameters compared to other algorithms.
B.To enhance local search and exploitation around the best-found solution.
C.To ensure the algorithm always escapes local optima.
D.To increase the exploration of the search space by making large random jumps.
Correct Answer: To enhance local search and exploitation around the best-found solution.
Explanation:
The spiral equation in WOA creates a path that logarithmically spirals towards the best-known solution (the prey). This allows the search agent (whale) to finely explore the neighborhood of the current best solution, thus intensifying the search and promoting exploitation.
23. In the Whale Optimization Algorithm (WOA), the decision to either encircle the prey or perform a spiral update is controlled by a probability p. If a developer sets p = 1, how does this affect the algorithm's search behavior?
Firefly algorithm and whale optimization algorithm
Medium
A.The algorithm's convergence speed will be unaffected.
B.The algorithm will only perform exploration by searching for prey randomly.
C.The algorithm will only use the encircling prey mechanism.
D.The algorithm will only use the spiral bubble-net mechanism for exploitation.
Correct Answer: The algorithm will only use the spiral bubble-net mechanism for exploitation.
Explanation:
In WOA, if the random number generated is less than p, the whale updates its position using the spiral model. If the number is greater than or equal to p, it uses the shrinking encircling mechanism. Setting p = 1 ensures that the condition for the spiral model is always met, forcing the algorithm to exclusively use this exploitation-focused strategy.
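The spiral update itself is commonly written as X(t+1) = D′·e^(bl)·cos(2πl) + X*(t), with D′ = |X* − X| and l drawn uniformly from [−1, 1]. A minimal sketch under those assumptions (the helper name `woa_spiral` is illustrative):

```python
import math
import random

def woa_spiral(x, x_best, b=1.0):
    """WOA spiral (bubble-net) update, applied per dimension:
    x_new = D' * exp(b*l) * cos(2*pi*l) + x_best, with D' = |x_best - x|
    and l ~ U(-1, 1). The whale traces a logarithmic spiral around
    the best-so-far solution, refining its neighborhood."""
    l = random.uniform(-1, 1)
    return [abs(bi - xi) * math.exp(b * l) * math.cos(2 * math.pi * l) + bi
            for xi, bi in zip(x, x_best)]
```

Note that when the whale already sits on the best solution (D′ = 0), the spiral term vanishes and the position is unchanged, which is what makes this a purely exploitative move.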
24. How does the Firefly Algorithm's movement equation fundamentally differ from the velocity update in Particle Swarm Optimization (PSO)?
Firefly algorithm and whale optimization algorithm
Medium
A.FA's movement is based on attractiveness between pairs of fireflies, while PSO's is based on individual and global best positions.
B.FA does not have a random component in its movement, unlike PSO.
C.FA updates fireflies one by one, whereas PSO updates all particles simultaneously.
D.PSO's movement is deterministic, while FA's is purely stochastic.
Correct Answer: FA's movement is based on attractiveness between pairs of fireflies, while PSO's is based on individual and global best positions.
Explanation:
The core of the Firefly Algorithm is the pairwise comparison between fireflies, where a dimmer firefly moves towards a brighter one. In contrast, PSO's velocity update is influenced by two main factors: the particle's own best-found position (pbest) and the swarm's overall best-found position (gbest), not direct pairwise interactions.
25. In Grey Wolf Optimization (GWO), the search is primarily guided by the top three wolves: alpha (α), beta (β), and delta (δ). What is the rationale behind using three leaders instead of just one (the alpha)?
Grey wolf optimization and grasshopper optimization algorithm
Medium
A.It provides a better balance between exploration and exploitation by considering multiple promising regions.
B.It eliminates the need for any random parameters in the algorithm.
C.It triples the convergence speed of the algorithm.
D.It is a direct imitation of wolf pack sizes and has no specific optimization purpose.
Correct Answer: It provides a better balance between exploration and exploitation by considering multiple promising regions.
Explanation:
By averaging the positions of the three best solutions (alpha, beta, and delta), the other wolves are guided towards a promising region rather than a single point. This prevents the swarm from converging too quickly to the single best solution (alpha), which could be a local optimum. It promotes exploration in the early stages and exploitation later on.
26. Consider the position update equation for omega wolves in GWO: X(t+1) = (X₁ + X₂ + X₃) / 3, where X₁, X₂, and X₃ are position vectors influenced by the alpha, beta, and delta wolves. If the magnitude of the coefficient vector A is consistently greater than 1 for all three leaders, what phase is the algorithm likely in?
Grey wolf optimization and grasshopper optimization algorithm
Medium
A.Exploitation phase, where wolves converge to attack prey.
B.Initialization phase, where positions are being set randomly.
C.Stagnation phase, where wolves are not moving.
D.Exploration phase, where wolves diverge to search for prey.
Correct Answer: Exploration phase, where wolves diverge to search for prey.
Explanation:
The coefficient vector A in GWO controls the search behavior. When |A| > 1, the wolves are forced to diverge from the prey (the current best solutions). This emphasizes exploration, allowing the pack to search for a better prey (global optimum) across the search space. When |A| < 1, the wolves converge, indicating exploitation.
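The averaged three-leader update can be sketched as follows, using the standard coefficients A = 2a·r₁ − a and C = 2·r₂ (the helper name `gwo_update` and the argument names are assumptions):

```python
import random

def gwo_update(x, alpha_pos, beta_pos, delta_pos, a):
    """GWO omega-wolf update: average of three moves, each guided by
    one leader (alpha, beta, delta). |A| > 1 pushes the wolf away
    from a leader (exploration); |A| < 1 pulls it closer
    (exploitation). The parameter a decays from 2 to 0 over the run."""
    new = []
    for d in range(len(x)):
        guided = []
        for leader in (alpha_pos, beta_pos, delta_pos):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a
            C = 2 * r2
            D = abs(C * leader[d] - x[d])
            guided.append(leader[d] - A * D)
        new.append(sum(guided) / 3.0)  # X(t+1) = (X1 + X2 + X3) / 3
    return new
```

At `a = 0` every A is zero and the wolf lands on the centroid of the three leaders, which is exactly the "promising region rather than a single point" idea from question 25.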
27. The Grasshopper Optimization Algorithm (GOA) models both repulsion and attraction between grasshoppers. During which stage of the optimization process is repulsion between grasshoppers most dominant and why?
Grey wolf optimization and grasshopper optimization algorithm
Medium
A.Late stages, to refine the solution around the global optimum.
B.Early stages, to encourage exploration of the entire search space.
C.Repulsion is always weaker than attraction to ensure convergence.
D.When the swarm is very large, to manage computational complexity.
Correct Answer: Early stages, to encourage exploration of the entire search space.
Explanation:
In GOA, repulsion occurs at short distances, pushing grasshoppers away from each other. This is crucial in the early iterations to prevent the swarm from clumping together prematurely. This repulsive force promotes wide exploration of the search space, increasing the chances of discovering the region containing the global optimum.
28. The parameter c in the Grasshopper Optimization Algorithm (GOA) decreases over iterations. What is the primary consequence of this design on the algorithm's behavior?
Grey wolf optimization and grasshopper optimization algorithm
Medium
A.It increases the random behavior of the grasshoppers over time.
B.It gradually shifts the algorithm's focus from exploration to exploitation.
C.It keeps the balance between attraction and repulsion forces constant.
D.It guarantees that the algorithm will find the global optimum.
Correct Answer: It gradually shifts the algorithm's focus from exploration to exploitation.
Explanation:
The parameter c is a decreasing coefficient that reduces the search region around the target (best grasshopper so far). In the initial stages, a large c allows for significant movements (exploration). As c decreases with each iteration, the movements become smaller and more focused around the target, thus promoting exploitation and fine-tuning the solution.
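The decreasing schedule for c is typically linear, c = c_max − t·(c_max − c_min)/T. A one-function sketch (the helper name `goa_c` and the default bounds are assumptions, though c_max = 1 and c_min = 0.00001 are common choices):

```python
def goa_c(iteration, max_iter, c_max=1.0, c_min=0.00001):
    """GOA comfort-zone coefficient, decreased linearly over the run:
    c = c_max - t * (c_max - c_min) / T.
    A large c early in the run permits wide movements (exploration);
    a small c late in the run confines movement around the target
    (exploitation)."""
    return c_max - iteration * (c_max - c_min) / max_iter
```

This plays the same role as the linearly decaying a in GWO and WOA: a single scalar schedule that shifts the balance from exploration to exploitation.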
29. Simulated Annealing is a well-known metaheuristic. How would it be conceptually classified?
Conceptual grouping of metaheuristics
Medium
A.Population-based and evolutionary
B.Population-based and swarm intelligence
C.Trajectory-based and physics-based
D.Trajectory-based and bio-inspired
Correct Answer: Trajectory-based and physics-based
Explanation:
Simulated Annealing is 'trajectory-based' (or single-solution based) because it improves a single candidate solution over time. It is 'physics-based' because it mimics the process of annealing in metallurgy, where a material is heated and then slowly cooled to alter its physical properties.
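The physics analogy in Simulated Annealing boils down to the Metropolis acceptance rule: always accept an improvement, and accept a worsening move with probability e^(−Δ/T), which shrinks as the temperature T cools. A minimal sketch of that rule (the helper name `sa_accept` is an assumption):

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis acceptance rule for minimization: always accept an
    improvement (delta <= 0); accept a worsening move (delta > 0)
    with probability exp(-delta / T). High T -> almost anything is
    accepted (exploration); low T -> only improvements survive."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```

Because it perturbs and accepts a single current solution rather than a population, this rule is also what makes SA trajectory-based.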
30. What is a key conceptual difference between Swarm Intelligence (SI) algorithms like PSO and Evolutionary Algorithms (EA) like Genetic Algorithms (GA)?
Conceptual grouping of metaheuristics
Medium
A.SI algorithms cannot solve discrete optimization problems, while EAs can.
B.EAs use operators like crossover and mutation to create new solutions, while SI algorithms typically adjust trajectories based on shared information.
C.SI algorithms are always trajectory-based, while EAs are always population-based.
D.EAs do not maintain a population of solutions, unlike SI algorithms.
Correct Answer: EAs use operators like crossover and mutation to create new solutions, while SI algorithms typically adjust trajectories based on shared information.
Explanation:
The core mechanism for generating new solutions differs. Evolutionary Algorithms are inspired by biological evolution and use operators like crossover (recombination of parent solutions) and mutation (random changes) to create offspring. Swarm Intelligence algorithms are inspired by collective behavior and typically involve individuals in a population moving through the search space, influenced by their own experience and the swarm's collective knowledge, without explicit crossover or mutation operators.
31. Which of the following pairs correctly categorizes the given algorithms?
Conceptual grouping of metaheuristics
Medium
D.Particle Swarm Optimization: Evolutionary Algorithm; Grey Wolf Optimizer: Physics-based
Correct Answer: Ant Colony Optimization: Swarm Intelligence; Tabu Search: Trajectory-based
Explanation:
Ant Colony Optimization is a classic Swarm Intelligence algorithm mimicking the foraging behavior of ants. Tabu Search is a trajectory-based (single-solution) metaheuristic that explores the search space by moving from one solution to another, using a memory list (tabu list) to avoid cycles.
32. An algorithm that maintains a population of solutions and improves them over generations using mechanisms inspired by natural selection, but does not use crossover between solutions, would be best classified as what?
Conceptual grouping of metaheuristics
Medium
A.A trajectory-based algorithm
B.A classical Evolutionary Algorithm
C.A Swarm Intelligence algorithm
D.A deterministic optimization method
Correct Answer: A Swarm Intelligence algorithm
Explanation:
This description fits Swarm Intelligence algorithms like PSO, GWO, or WOA. They are population-based and inspired by nature, but they lack the defining operators of classical EAs, such as crossover and mutation. Instead, individuals move and learn based on social interaction and collective intelligence.
33. According to the No Free Lunch (NFL) theorem for optimization, what can be concluded when comparing the performance of the Firefly Algorithm (FA) and the Grey Wolf Optimizer (GWO)?
Comparison of metaheuristic algorithms
Medium
A.FA will always converge faster than GWO on continuous optimization problems.
B.The algorithm with fewer parameters (GWO) is fundamentally better than the one with more parameters (FA).
C.GWO will always outperform FA on high-dimensional problems.
D.Neither algorithm can be considered universally superior to the other across all possible optimization problems.
Correct Answer: Neither algorithm can be considered universally superior to the other across all possible optimization problems.
Explanation:
The NFL theorem states that if an algorithm performs well on a certain class of problems, it must pay for that with degraded performance on the set of all remaining problems. This implies that there is no single best optimization algorithm for every problem. Therefore, GWO might be better for some problems, while FA might be better for others.
34. In a high-dimensional optimization problem with many local optima, why might Grey Wolf Optimizer (GWO) have an advantage over Particle Swarm Optimization (PSO)?
Comparison of metaheuristic algorithms
Medium
A.GWO has fewer parameters to tune, making it inherently more robust.
B.PSO particles can only move in straight lines, while GWO wolves cannot.
C.GWO's strategy of following three leaders (alpha, beta, delta) can prevent premature convergence to a single local optimum better than PSO's gbest.
D.GWO's computational complexity per iteration is always lower than PSO's.
Correct Answer: GWO's strategy of following three leaders (alpha, beta, delta) can prevent premature convergence to a single local optimum better than PSO's gbest.
Explanation:
In PSO, the entire swarm is strongly pulled towards a single global best (gbest). If gbest is a local optimum, the whole swarm can get trapped. GWO updates positions based on an average of the three best solutions. This creates a more diversified pull, allowing the swarm to explore the neighborhood of multiple promising areas, which can be advantageous in escaping local optima in complex landscapes.
35. When comparing the parameter sensitivity of the Firefly Algorithm (FA) and the Whale Optimization Algorithm (WOA), which statement is most accurate?
Comparison of metaheuristic algorithms
Medium
A.FA is generally more sensitive due to the light absorption coefficient (γ) and randomization parameter (α), which significantly impact performance.
B.Both algorithms have the exact same number and type of parameters to tune.
C.WOA is more sensitive because its spiral shape parameter (b) must be precisely tuned for each problem.
D.Both algorithms are parameter-free and require no tuning.
Correct Answer: FA is generally more sensitive due to the light absorption coefficient (γ) and randomization parameter (α), which significantly impact performance.
Explanation:
The performance of the Firefly Algorithm is highly dependent on the correct tuning of its parameters, especially the light absorption coefficient γ, which controls the convergence speed and behavior of the swarm. WOA has fewer control parameters that require tuning (mainly the coefficient vector A), often making it easier to apply out-of-the-box compared to FA.
36. An optimization algorithm is applied to a 10-dimensional problem and converges well. When the same algorithm with the same population size is applied to a 100-dimensional version of the problem, it consistently gets stuck in poor-quality local optima. This is a classic example of:
Scalability and convergence issues in optimization
Medium
A.A poorly implemented fitness function.
B.Algorithmic divergence.
C.The No Free Lunch theorem.
D.The curse of dimensionality.
Correct Answer: The curse of dimensionality.
Explanation:
The curse of dimensionality refers to various phenomena that arise when analyzing data in high-dimensional spaces. In optimization, it means that the volume of the search space increases exponentially with the number of dimensions. A fixed-size population becomes increasingly sparse, making it much harder to adequately explore the space and find the global optimum, leading to premature convergence.
37. Premature convergence in a population-based metaheuristic is characterized by the swarm losing its diversity and stagnating at a suboptimal solution. Which of the following strategies is specifically designed to counteract this?
Scalability and convergence issues in optimization
Medium
A.Decreasing the population size to speed up computations.
B.Reducing the number of iterations to stop the algorithm earlier.
C.Introducing a mutation operator or increasing the randomization parameter to re-introduce diversity.
D.Always replacing the worst solutions with copies of the best solution.
Correct Answer: Introducing a mutation operator or increasing the randomization parameter to re-introduce diversity.
Explanation:
The root cause of premature convergence is the loss of diversity, where all solutions in the population become very similar. To counteract this, mechanisms that re-introduce diversity are needed. A mutation operator (as in GAs) or increasing the influence of random factors can push some solutions away from the converged area, allowing the algorithm to escape the local optimum and explore other regions of the search space.
38. You are observing the convergence curve (Best Fitness vs. Iteration) of a metaheuristic algorithm. The curve drops very sharply in the first few iterations and then becomes completely flat for the rest of the run, far from the known optimal value. What is the most likely issue?
Scalability and convergence issues in optimization
Medium
A.The learning rate or step size is too small.
B.The algorithm has prematurely converged to a local optimum.
C.The population size is too large for the problem.
D.The algorithm is performing an effective global search.
Correct Answer: The algorithm has prematurely converged to a local optimum.
Explanation:
This pattern is a classic sign of premature convergence. The sharp initial drop indicates that the algorithm quickly found a point of attraction (an optimum), but the subsequent flat line shows it lacks the exploratory power to escape this point and find a better solution. This indicates a loss of diversity and stagnation in a suboptimal region of the search space.
39. How does increasing the population size in a swarm-based algorithm typically affect the exploration-exploitation balance and scalability?
Scalability and convergence issues in optimization
Medium
A.It reduces the algorithm's ability to scale to high-dimensional problems.
B.It generally improves exploration and the ability to handle higher dimensions, but at the cost of increased computational time per iteration.
C.It forces the algorithm to focus purely on exploitation, leading to faster convergence.
D.It has no effect on the exploration-exploitation balance but decreases overall runtime.
Correct Answer: It generally improves exploration and the ability to handle higher dimensions, but at the cost of increased computational time per iteration.
Explanation:
A larger population means more search agents are exploring the search space simultaneously. This enhances diversity and the algorithm's ability to conduct a more thorough global search (exploration), which is beneficial for scalability to higher dimensions. However, the trade-off is that evaluating the fitness of more agents in each iteration increases the computational cost.
40. For a problem where the global optimum is located within a narrow, funnel-shaped valley, which algorithm's search strategy would likely be more effective: the Firefly Algorithm (FA) or the Whale Optimization Algorithm (WOA)?
Comparison of metaheuristic algorithms
Medium
A.Both would be equally effective as they are both swarm intelligence algorithms.
B.WOA, due to its spiral bubble-net mechanism which is well-suited for exploiting narrow regions around a target.
C.Neither, as this type of problem requires a gradient-based deterministic method.
D.FA, because its attractiveness function works best in landscapes with clear gradients.
Correct Answer: WOA, due to its spiral bubble-net mechanism which is well-suited for exploiting narrow regions around a target.
Explanation:
The logarithmic spiral path used in WOA's exploitation phase is specifically designed to home in on a target (the best solution). This makes it particularly effective at navigating and exploiting narrow valleys or basins of attraction once a promising solution is found. FA's movement is based on brightness, which might not provide a strong enough signal to effectively navigate such a specific landscape feature compared to WOA's targeted spiral approach.
41. In the standard Firefly Algorithm, the attractiveness is given by β(r) = β₀·e^(−γr²). What is the most likely behavior of the swarm if the light absorption coefficient γ is set to a value approaching zero (γ → 0), assuming the randomization parameter α is also small?
Firefly algorithm and whale optimization algorithm
Hard
A.The algorithm behaves like a parallel random search, with each firefly moving almost independently.
B.All fireflies immediately converge to the position of the initially brightest firefly and cease movement.
C.The algorithm's search behavior becomes chaotic and unpredictable, leading to divergence.
D.The algorithm devolves into a variant of Particle Swarm Optimization (PSO) where all fireflies are attracted to the single global best.
Correct Answer: The algorithm devolves into a variant of Particle Swarm Optimization (PSO) where all fireflies are attracted to the single global best.
Explanation:
If γ → 0, then e^(−γr²) → 1. The attractiveness becomes constant (β = β₀) regardless of the distance r. This means every firefly is equally attracted to every other brighter firefly. The movement equation sums these attractions, effectively pulling each firefly towards a weighted average of all brighter fireflies. Since the brightest firefly has no brighter individuals to move towards, it acts as a strong attractor, and the swarm's movement will be heavily biased towards this single best solution, mimicking the global best model in PSO.
Incorrect! Try again.
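The effect of γ on the standard attractiveness β(r) = β₀·e^(−γr²) can be seen in a minimal sketch (the function and parameter names here are illustrative, not from any particular FA library):

```python
import math

def attractiveness(beta0, gamma, r):
    """Standard FA attractiveness: beta(r) = beta0 * exp(-gamma * r^2)."""
    return beta0 * math.exp(-gamma * r * r)

# gamma -> 0: distance no longer matters, so every brighter firefly attracts
# with nearly the full strength beta0, mimicking PSO's global-best pull.
print(attractiveness(1.0, 1e-9, r=100.0))  # ~1.0
# A realistic gamma: distant fireflies are effectively invisible.
print(attractiveness(1.0, 1.0, r=100.0))   # ~0.0
```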
42In the Whale Optimization Algorithm (WOA), the spiral updating equation, X(t+1) = D'·e^(bl)·cos(2πl) + X*(t), primarily contributes to the algorithm's search process in a way that distinguishes it from the shrinking encircling mechanism. What is this primary contribution?
Firefly algorithm and whale optimization algorithm
Hard
A.It provides a fine-grained exploitation mechanism, allowing the whale to explore various points in the neighborhood between its current position and the prey's position along a spiral path.
B.It guarantees convergence to the global optimum by creating a logarithmic spiral trajectory.
C.It serves as a diversity-promoting mechanism, pushing the whale away from the best-so-far solution to escape local optima.
D.It exclusively enhances global exploration by allowing whales to search in a wider, circular area around the prey.
Correct Answer: It provides a fine-grained exploitation mechanism, allowing the whale to explore various points in the neighborhood between its current position and the prey's position along a spiral path.
Explanation:
The spiral equation models the whale's movement towards the prey (the best-so-far solution X*) along a logarithmic spiral. This allows the search agent to explore a continuous path between its current location and the best solution, effectively performing a very detailed local search (exploitation) in the immediate vicinity of the current best solution. This is a more refined local search than the simpler shrinking circle, which moves the agent more directly towards the target.
Incorrect! Try again.
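The spiral update can be sketched as follows, assuming the standard WOA formulation with shape constant b and l drawn uniformly from [−1, 1] (function names are illustrative):

```python
import math
import random

def spiral_update(x, x_best, b=1.0):
    """WOA spiral update: X(t+1) = D' * e^(b*l) * cos(2*pi*l) + X*,
    where D' = |X* - X| per dimension and l ~ U(-1, 1)."""
    l = random.uniform(-1.0, 1.0)
    return [abs(xb - xi) * math.exp(b * l) * math.cos(2 * math.pi * l) + xb
            for xi, xb in zip(x, x_best)]

# Repeated draws trace points along a logarithmic spiral between the whale
# and the prey; every candidate stays anchored to the best-so-far X*.
random.seed(0)
print(spiral_update([0.0, 0.0], [1.0, 1.0]))
```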
43A key vulnerability of the Grey Wolf Optimizer (GWO) is premature convergence when the alpha, beta, and delta wolves become trapped in the same local optimum. Which of the following modifications to the GWO position update equation would be the most direct and effective method to mitigate this specific failure mode?
Grey wolf optimization and grasshopper optimization algorithm
Hard
A.Modifying the final position update from an average of the three vectors (X(t+1) = (X₁ + X₂ + X₃)/3) to a weighted average where the alpha wolf's influence is reduced in later iterations.
B.Increasing the population size to have more omega wolves.
C.Introducing a "repulsion" force from the alpha wolf if the beta and delta wolves are within a certain small distance from it.
D.Changing the linear decay of the parameter 'a' from [2, 0] to a non-linear, convex function over the same range.
Correct Answer: Introducing a "repulsion" force from the alpha wolf if the beta and delta wolves are within a certain small distance from it.
Explanation:
When the top three wolves are stuck in a local optimum, they pull all other omega wolves into the same trap. Introducing a repulsion force when the leaders are too close directly addresses this clustering issue. This forces the beta and delta wolves to explore the neighborhood around the alpha wolf's position instead of converging onto it, maintaining diversity within the leadership pack and increasing the chances of escaping the local minimum.
Incorrect! Try again.
44In the Grasshopper Optimization Algorithm (GOA), the parameter 'c' acts as a decreasing coefficient that shrinks the "comfort zone." What is the critical consequence of 'c' multiplying both the social interaction term's bounds and the overall step size towards the target?
Grey wolf optimization and grasshopper optimization algorithm
Hard
A.It makes the algorithm highly sensitive to the initial population distribution, as 'c' amplifies initial distances.
B.It creates a dynamic balance where the influence of the swarm decreases, while the pull towards the best-so-far solution is simultaneously refined for fine-tuning.
C.It causes the algorithm to focus exclusively on the best grasshopper (the target) in the final iterations.
D.It forces the grasshoppers into a stable, fixed formation around the target, halting the search process.
Correct Answer: It creates a dynamic balance where the influence of the swarm decreases, while the pull towards the best-so-far solution is simultaneously refined for fine-tuning.
Explanation:
The parameter 'c' has a dual role. As 'c' decreases, it shrinks the bounds of social interaction, reducing the influence of other grasshoppers. Simultaneously, it multiplies the entire social interaction component, dampening large, exploratory steps. This means that as the search progresses, swarm-driven exploration is reduced and movement becomes more dominated by the pull towards the target, allowing for precise local search and fine-tuning (exploitation).
Incorrect! Try again.
45Consider a hypothetical hybrid algorithm that first uses a Genetic Algorithm (GA) to generate a diverse set of candidate solutions. It then uses the top 10% of these solutions to initialize the alpha, beta, and delta wolves in a Grey Wolf Optimizer (GWO) which then runs to find the final solution. How would this hybrid algorithm be most accurately classified?
Conceptual grouping of metaheuristics
Hard
A.As a pure Evolutionary Algorithm, because the primary diversification comes from GA.
B.As a trajectory-based metaheuristic, since GWO guides the final search path.
C.As a Memetic Algorithm, combining global evolutionary search with local swarm-based search.
D.As a pure Swarm Intelligence algorithm, because the final optimization is performed by GWO.
Correct Answer: As a Memetic Algorithm, combining global evolutionary search with local swarm-based search.
Explanation:
Memetic Algorithms are hybrid metaheuristics that combine a population-based global search algorithm (like GA) with a local search or individual learning procedure. In this scenario, the GA performs the global exploration role, and GWO, initialized with the best GA solutions, acts as a sophisticated, cooperative local search method to refine those solutions. This fits the definition of a Memetic Algorithm, where evolutionary concepts are enhanced by local improvement strategies.
Incorrect! Try again.
46For a high-dimensional optimization problem with a single, deep, and narrow global optimum funnel (a "needle in a haystack" problem), which algorithm's search mechanism is inherently most disadvantaged and why?
Comparison of metaheuristic algorithms
Hard
A.Grey Wolf Optimizer (GWO), because the averaging of the top three wolves' positions is likely to miss a narrow funnel if the wolves surround it but don't land in it.
B.Whale Optimization Algorithm (WOA), because the spiral search mechanism is too localized and inefficient for exploring a large search space for a narrow target.
C.Particle Swarm Optimization (PSO), because the cognitive and social components provide a strong pull towards known good areas, which are vast and non-optimal.
D.Firefly Algorithm (FA), because the distance-dependent attractiveness (β = β₀·e^(−γr²)) will drop to nearly zero for all but the closest fireflies, effectively isolating search agents.
Correct Answer: Firefly Algorithm (FA), because the distance-dependent attractiveness (β = β₀·e^(−γr²)) will drop to nearly zero for all but the closest fireflies, effectively isolating search agents.
Explanation:
In high-dimensional spaces, the Euclidean distance 'r' between any two random points tends to be large (curse of dimensionality). The Firefly Algorithm's attractiveness function β = β₀·e^(−γr²) is exponentially dependent on the square of this distance. For a large 'r', e^(−γr²) becomes negligible. This means unless a firefly is, by chance, initialized very close to a brighter one, it will perceive no attraction and will only perform a random walk. This "isolates" the fireflies, severely hindering the cooperative search needed to find a narrow optimum.
Incorrect! Try again.
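A quick numeric sketch of this isolation effect, sampling pairs of random points in the unit hypercube (β₀ = γ = 1 are illustrative choices):

```python
import math
import random

random.seed(1)

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Attractiveness beta0*exp(-gamma*r^2) between two random points in [0,1]^d:
# it collapses toward zero as dimensionality grows, because r grows ~ sqrt(d).
for d in (2, 50, 500):
    a = [random.random() for _ in range(d)]
    b = [random.random() for _ in range(d)]
    r = dist(a, b)
    print(d, round(r, 2), math.exp(-r * r))
```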
47How does the "curse of dimensionality" specifically impact the effectiveness of the social interaction component, S_i = Σ_{j≠i} s(d_ij)·d̂_ij, in the Grasshopper Optimization Algorithm (GOA)?
Scalability and convergence issues in optimization
Hard
A.It has no significant impact because the comfort zone parameter 'c' effectively rescales the search space.
B.It strengthens the social interaction, as the average inter-agent distance increases, leading to stronger repulsion forces.
C.It forces all grasshoppers into the attraction zone, causing rapid premature convergence to the population's centroid.
D.It severely weakens the social interaction, as in high dimensions, most grasshoppers will fall into the mid-range repulsion zone of the s-function, leading to chaotic and unproductive movements with little directed attraction.
Correct Answer: It severely weakens the social interaction, as in high dimensions, most grasshoppers will fall into the mid-range repulsion zone of the s-function, leading to chaotic and unproductive movements with little directed attraction.
Explanation:
The s-function, s(r) = f·e^(−r/l) − e^(−r), defines zones of repulsion, attraction, and weak force. In high-dimensional spaces, the distribution of pairwise distances between random points concentrates in a narrow band. This means most pairs of grasshoppers will have a distance that falls into the weak-force or moderate repulsion zone. Consequently, the social interaction vector becomes an aggregation of many small, almost random vectors, leading to a weak overall signal and hindering effective swarm cooperation.
Incorrect! Try again.
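The zones of the s-function can be demonstrated directly, using the commonly cited defaults f = 0.5 and l = 1.5 (assumed here for illustration):

```python
import math

def s(r, f=0.5, l=1.5):
    """GOA social force s(r) = f*exp(-r/l) - exp(-r): negative values mean
    repulsion, positive values attraction, near-zero values almost no force."""
    return f * math.exp(-r / l) - math.exp(-r)

print(s(0.5))   # < 0: close pairs repel
print(s(3.0))   # > 0: mid-range pairs attract
print(s(10.0))  # ~ 0: the typical large distances of high dimensions
```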
48In the Firefly Algorithm's movement equation, x_i ← x_i + β₀·e^(−γr²)·(x_j − x_i) + α·ε_i, what is the critical role of the randomization term α·ε_i, particularly for the globally brightest firefly?
Firefly algorithm and whale optimization algorithm
Hard
A.It scales the attractiveness based on the problem's dimensionality.
B.It helps less bright fireflies escape the pull of the brightest one, thus maintaining diversity.
C.It ensures that even the brightest firefly, which has no brighter fireflies to be attracted to, continues to explore its local neighborhood.
D.It primarily serves to break ties when two fireflies have identical brightness.
Correct Answer: It ensures that even the brightest firefly, which has no brighter fireflies to be attracted to, continues to explore its local neighborhood.
Explanation:
The brightest firefly has no other firefly to move towards, so the attraction term in its movement equation is zero. Without the randomization term, the brightest firefly would remain stationary. The α·ε_i term adds a random vector to its position, ensuring it performs a local random walk. This is crucial for exploring the immediate vicinity of the current best solution and allows the algorithm to fine-tune the best-found position.
Incorrect! Try again.
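A minimal sketch of one FA step makes the brightest firefly's situation concrete (function and parameter names are illustrative):

```python
import math
import random

def fa_move(x_i, x_j, j_is_brighter, beta0=1.0, gamma=1.0, alpha=0.1):
    """One FA step for firefly i relative to firefly j:
    x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*eps.
    If j is not brighter, the attraction term vanishes and only the
    alpha*eps random walk remains -- the globally brightest firefly's case."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    beta = beta0 * math.exp(-gamma * r2) if j_is_brighter else 0.0
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(x_i, x_j)]

# With alpha = 0 the brightest firefly would freeze in place:
print(fa_move([1.0], [2.0], j_is_brighter=False, alpha=0.0))  # [1.0]
```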
49In GWO, the final position of an omega wolf is the average of three positions calculated relative to the alpha, beta, and delta wolves: X(t+1) = (X₁ + X₂ + X₃)/3. What is the geometric interpretation of this update mechanism in the search space?
Grey wolf optimization and grasshopper optimization algorithm
Hard
A.The wolf moves to the circumcenter of the triangle formed by the alpha, beta, and delta wolves.
B.The wolf moves to the centroid of a triangle formed by its potential next positions relative to the three leaders.
C.The wolf is projected onto the plane defined by the three leaders, ensuring a 2D search in a higher-dimensional space.
D.The wolf performs a random walk within a hyper-sphere defined by the positions of the three leaders.
Correct Answer: The wolf moves to the centroid of a triangle formed by its potential next positions relative to the three leaders.
Explanation:
For each omega wolf, GWO calculates three potential next positions: X₁ based on the alpha wolf's position, X₂ based on beta's, and X₃ based on delta's. These three vectors define the vertices of a triangle in the search space. The update rule finds the arithmetic mean of these three vectors, which corresponds geometrically to the centroid of that triangle. This mechanism balances the influence of the three best solutions to guide the pack.
Incorrect! Try again.
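The centroid interpretation can be sketched with the standard GWO update (the encircling coefficients A and C follow the usual definitions; function names are illustrative):

```python
import random

def gwo_omega_step(x, alpha, beta, delta, a):
    """Update an omega wolf: compute candidate positions X1, X2, X3 by
    encircling each leader, then move to the centroid (X1 + X2 + X3) / 3."""
    def encircle(leader):
        pos = []
        for xi, li in zip(x, leader):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a          # |A| shrinks as 'a' decays from 2 to 0
            C = 2 * r2
            D = abs(C * li - xi)
            pos.append(li - A * D)
        return pos
    x1, x2, x3 = encircle(alpha), encircle(beta), encircle(delta)
    return [(p + q + s) / 3 for p, q, s in zip(x1, x2, x3)]  # centroid

# With a = 0 each candidate collapses onto its leader, so the omega wolf
# lands exactly on the centroid of the alpha/beta/delta triangle:
print(gwo_omega_step([0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [0.0, 0.0], a=0))  # [1.0, 1.0]
```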
50The "No Free Lunch" (NFL) theorem for optimization states that, averaged over all possible problems, any two optimization algorithms will have the same average performance. What is the most profound implication of this theorem for the field of metaheuristics?
Conceptual grouping of metaheuristics
Hard
A.It proves that developing new metaheuristic algorithms is a futile effort.
B.It implies that all metaheuristics are essentially variants of random search.
C.It necessitates the development of problem-specific or class-specific algorithms, as a universally superior algorithm cannot exist.
D.It suggests that hybridizing algorithms is the only way to achieve better performance.
Correct Answer: It necessitates the development of problem-specific or class-specific algorithms, as a universally superior algorithm cannot exist.
Explanation:
The NFL theorem's core message is that an algorithm's superior performance on one class of problems is necessarily paid for with inferior performance on another. This means there is no "master" algorithm that is best for all optimization problems. The practical consequence is that research must focus on designing algorithms tailored to specific problem classes (e.g., continuous vs. discrete, unimodal vs. multimodal) by exploiting their structural properties.
Incorrect! Try again.
51Compare the primary exploitation mechanism of the Whale Optimization Algorithm (the spiral update around X*) with that of the Grey Wolf Optimizer (omega wolves encircling the region defined by X₁, X₂, and X₃). Which statement provides the most accurate analysis of their differences in handling multimodal problems?
Comparison of metaheuristic algorithms
Hard
A.Both algorithms have identical exploitation capabilities, with differences only in their exploration phases.
B.WOA's spiral mechanism provides a more exhaustive local search path around the best solution, while GWO's averaging provides a more discrete jump towards a region of promise.
C.GWO's exploitation is more robust for multimodal problems as it considers three good solutions, whereas WOA's focus on a single X* makes it more prone to local optima.
D.GWO's exploitation is computationally cheaper as it avoids trigonometric functions, making it more efficient for fast convergence.
Correct Answer: GWO's exploitation is more robust for multimodal problems as it considers three good solutions, whereas WOA's focus on a single X* makes it more prone to local optima.
Explanation:
WOA's exploitation is driven by a single best-so-far solution, X*. If this solution is a local optimum, the swarm's exploitation efforts will be focused there. GWO bases its exploitation on the collective information of the three best solutions. If these three wolves have identified different peaks in a multimodal landscape, the omega wolves are guided towards a region informed by all three, making the exploitation less susceptible to being trapped by a single misleading local optimum.
Incorrect! Try again.
52Many metaheuristics can be proven to converge to a global optimum, but this proof often relies on assumptions that are not practical (e.g., the ability to reach any point in the search space from any other point). Which of the following algorithms, in its standard form, most clearly violates this assumption, thus making a formal proof of convergence challenging?
Scalability and convergence issues in optimization
Hard
A.A Genetic Algorithm that includes a mutation operator with a non-zero probability of changing any gene.
B.The Firefly Algorithm, where movement is strictly biased towards brighter fireflies and can be zero if no brighter firefly exists.
C.Particle Swarm Optimization, where the velocity update can theoretically propel a particle anywhere in the search space.
D.Simulated Annealing, where there is always a non-zero probability of accepting a worse move.
Correct Answer: The Firefly Algorithm, where movement is strictly biased towards brighter fireflies and can be zero if no brighter firefly exists.
Explanation:
A formal proof of convergence often relies on the condition that the algorithm can eventually generate any solution in the search space. In the standard Firefly Algorithm, movement is deterministically biased towards brighter fireflies. The globally brightest firefly only performs a small random step. If this step size is fixed or shrinks and the brightest firefly is in a local optimum, it may be impossible for it to make a large enough jump to escape and reach the global optimum's basin of attraction, thus violating the reachability assumption.
Incorrect! Try again.
53In WOA, the transition between exploration (|A| ≥ 1, search for prey) and exploitation (|A| < 1, attack prey) is governed by A = 2a·r − a, where 'a' decreases linearly from 2 to 0. What is the key implication of using this formulation?
Firefly algorithm and whale optimization algorithm
Hard
A.It creates a probabilistic transition, where exploration is more likely in early stages and exploitation is more likely in later stages, but neither is ever fully eliminated.
B.It makes the transition dependent on the problem's dimensionality, as the random vector r's magnitude changes.
C.It guarantees that exactly the first half of the iterations are dedicated to exploration and the second half to exploitation.
D.It forces the algorithm to switch deterministically from exploration to exploitation once the iteration count passes the halfway mark.
Correct Answer: It creates a probabilistic transition, where exploration is more likely in early stages and exploitation is more likely in later stages, but neither is ever fully eliminated.
Explanation:
The value of |A| depends on both the deterministic, decreasing 'a' and the random vector 'r'. When 'a' is large (e.g., > 1), |A| is very likely to be greater than 1, favoring exploration. As 'a' becomes small, |A| is increasingly likely to be less than 1, favoring exploitation. The randomness from 'r' means the mode selected in any given iteration is stochastic: exploitation can occur early, and exploration remains possible later in the search as long as a > 1. This creates a smooth, probabilistic shift rather than a hard switch at a fixed iteration.
Incorrect! Try again.
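A small Monte Carlo sketch of the mode switch (using |A| ≥ 1 as the exploration condition with A = 2a·r − a, r uniform per draw) shows the probabilistic shift:

```python
import random

def explores(a):
    """One draw of WOA's mode switch: A = 2*a*r - a with r ~ U(0,1).
    |A| >= 1 selects exploration (move toward a random whale);
    |A| < 1 selects exploitation (attack the prey)."""
    r = random.random()
    return abs(2 * a * r - a) >= 1

random.seed(0)
trials = 10000
# Exploration frequency drops as 'a' decays; note that once a < 1,
# |A| <= a < 1 on every draw, so this condition can no longer fire.
for a in (1.9, 1.5, 1.1):
    print(a, sum(explores(a) for _ in range(trials)) / trials)
```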
54The GOA position update equation is of the form X_i = c·S_i + T̂, where T̂ is the target position and S_i is the social interaction term. This is different from many swarm algorithms like PSO, where X(t+1) = X(t) + V(t+1). What is the fundamental difference in search behavior implied by GOA's formulation?
Grey wolf optimization and grasshopper optimization algorithm
Hard
A.GOA's approach is computationally more complex and therefore converges more slowly.
B.GOA's positions are recalculated in each iteration relative to the target, making it a memory-less algorithm regarding individual trajectory, unlike PSO which has momentum.
C.GOA's formulation ensures that grasshoppers can never move further away from the target, guaranteeing convergence.
D.There is no fundamental difference; it is just a different mathematical representation of the same process.
Correct Answer: GOA's positions are recalculated in each iteration relative to the target, making it a memory-less algorithm regarding individual trajectory, unlike PSO which has momentum.
Explanation:
The GOA equation calculates the new absolute position based on current swarm interactions and the target. It does not use its own previous position as a base. This contrasts with PSO, where the new position is the old position plus a velocity vector, which includes momentum from the previous velocity. This makes GOA's search process "memory-less" in terms of an individual's own velocity or momentum, as its location is completely redefined in each step based on the swarm's current state.
Incorrect! Try again.
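The contrast can be sketched side by side; the simplified update forms below are illustrative, with S_i (social force) and T̂ (target) assumed to be precomputed for the GOA step:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """PSO: the new position builds on the OLD position plus a velocity
    that carries momentum -- x(t+1) = x(t) + v(t+1)."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    return [xi + vi for xi, vi in zip(x, v_new)], v_new

def goa_step(social_force, target, c):
    """GOA: the new absolute position is rebuilt from scratch each
    iteration from the swarm's social force and the target --
    the agent's own previous position and velocity play no role."""
    return [c * si + ti for si, ti in zip(social_force, target)]
```

Note that `goa_step` takes no `x` argument at all: that absence is exactly the "memory-less" property the explanation describes.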
55Some algorithms blur the line between being population-based and trajectory-based. Which of the following algorithms' core mechanism makes it the most ambiguous to classify strictly as one or the other?
Conceptual grouping of metaheuristics
Hard
A.Random Search, which involves independent trials.
B.Genetic Algorithm, which operates on an entire population simultaneously.
C.Simulated Annealing, which modifies a single solution over time.
D.Ant Colony Optimization (ACO), where individual ants create solutions but the population collectively modifies a pheromone map that represents a shared search structure.
Correct Answer: Ant Colony Optimization (ACO), where individual ants create solutions but the population collectively modifies a pheromone map that represents a shared search structure.
Explanation:
ACO is population-based because a population of ants constructs solutions concurrently. However, the algorithm's core is the pheromone map, a single, shared data structure that is iteratively modified. One can view the evolution of this pheromone map as the trajectory of a single entity (a probability distribution) in a higher-dimensional space. The ants act as agents that sample and update this single, evolving structure. This dual nature makes its classification less clear-cut than a pure trajectory method like Simulated Annealing or a pure population method like a GA.
Incorrect! Try again.
56Metaheuristics balance exploration and exploitation using control parameters. Which algorithm pair offers the most fundamentally different approach to managing this balance?
Comparison of metaheuristic algorithms
Hard
A.Simulated Annealing (SA) and Genetic Algorithm (GA), where SA uses a temperature schedule to reduce randomness, while a standard GA uses static operator rates.
B.Firefly Algorithm (FA) and Particle Swarm Optimization (PSO), where both rely on attraction to better solutions within the population.
C.Grasshopper Optimization (GOA) and GWO, as both use a coefficient ('c' or 'a') that shrinks agents' step sizes over time.
D.GWO and WOA, as both use a linearly decreasing parameter 'a' to shift from exploration to exploitation.
Correct Answer: Simulated Annealing (SA) and Genetic Algorithm (GA), where SA uses a temperature schedule to reduce randomness, while a standard GA uses static operator rates.
Explanation:
GWO, WOA, and GOA all use an explicit, time-dependent parameter ('a' or 'c') to shift the search dynamics. The SA vs. GA comparison shows a more fundamental difference. SA uses an explicit temperature schedule to deterministically reduce the probability of exploratory (uphill) moves over time. In contrast, a standard GA does not have such a time-varying parameter. The exploration-exploitation balance in a GA emerges implicitly from the constant interplay between selection pressure (exploitation) and the disruptive effects of crossover and mutation (exploration).
Incorrect! Try again.
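SA's explicit, time-varying control of exploration can be shown with the Metropolis acceptance probability (a standard formulation; names are illustrative):

```python
import math

def accept_worse(delta, T):
    """SA's Metropolis criterion for an uphill move of size delta > 0:
    accept with probability exp(-delta / T)."""
    return math.exp(-delta / T)

# The temperature schedule explicitly drives exploration down over time,
# whereas a standard GA's crossover/mutation rates stay fixed throughout:
print(accept_worse(1.0, T=10.0))  # ~0.90: early on, worse moves often accepted
print(accept_worse(1.0, T=0.1))   # ~0.00: late, essentially pure hill-climbing
```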
57An algorithm exhibits rapid initial convergence, with all agents clustering in one region of the search space early on, after which the best solution improves very slowly. This indicates premature convergence. Which parameter tuning strategy is most likely to be ineffective or even counter-productive for solving this issue?
Scalability and convergence issues in optimization
Hard
A.In the Firefly Algorithm, significantly increasing the light absorption coefficient γ.
B.In PSO, increasing the cognitive parameter (c₁) and decreasing the social parameter (c₂).
C.In any swarm algorithm, significantly increasing the population size.
D.In GWO, using a non-linear concave function for the decay of parameter 'a', so it stays high for longer.
Correct Answer: In the Firefly Algorithm, significantly increasing the light absorption coefficient γ.
Explanation:
Premature convergence is caused by a loss of diversity. Increasing the light absorption coefficient γ in FA would worsen this problem. A larger γ causes attractiveness to fade much more quickly with distance, effectively isolating fireflies. This destroys global communication, forces each firefly to converge to its nearest brighter neighbor, and leads to even faster convergence into multiple separate local optima, further reducing overall swarm diversity. The other options are valid strategies to encourage more exploration.
Incorrect! Try again.
58WOA's main exploration mechanism involves updating a whale's position based on a randomly chosen whale X_rand instead of the best-so-far whale X*. How does this mechanism's effectiveness for global search compare to the mutation operator in a Genetic Algorithm (GA)?
Firefly algorithm and whale optimization algorithm
Hard
A.It is less effective for creating truly novel solutions because it only directs the search towards regions defined by the current population, whereas mutation can generate entirely new genetic material.
B.The two mechanisms are functionally equivalent, both serving to introduce random changes to escape local optima.
C.WOA's mechanism is only useful in early iterations, while mutation is effective throughout the entire evolutionary process.
D.It is more effective because it always guides the search toward a potentially good region (defined by X_rand), unlike mutation which is a completely random perturbation.
Correct Answer: It is less effective for creating truly novel solutions because it only directs the search towards regions defined by the current population, whereas mutation can generate entirely new genetic material.
Explanation:
WOA's exploration, by moving towards a random agent X_rand, biases the search towards the convex hull of the current population. It is good at exploring areas 'in between' existing solutions but struggles to generate a solution in a completely unexplored region of the search space. A mutation operator in a GA can change a variable to any of its possible values, allowing it to create a solution that may lie far outside the current population's convex hull. This ability to generate genuinely novel solutions gives mutation a stronger capability for global exploration.
Incorrect! Try again.
59In the Grasshopper Optimization Algorithm, the position update is X_i = c·S_i + T̂, where T̂ is the position of the best solution (target) found so far. What is the most significant potential drawback of having this strong, direct pull towards a single target in every iteration?
Grey wolf optimization and grasshopper optimization algorithm
Hard
A.It makes the algorithm unsuitable for discrete optimization problems where the concept of a "target position" is ill-defined.
B.It creates an overly strong exploitation pressure from the very beginning, potentially overriding the exploratory social interactions and leading to premature convergence if the initial target is a local optimum.
C.It makes the algorithm computationally expensive as the target must be identified in each iteration.
D.It requires an extra parameter to control the influence of the target, which complicates the algorithm.
Correct Answer: It creates an overly strong exploitation pressure from the very beginning, potentially overriding the exploratory social interactions and leading to premature convergence if the initial target is a local optimum.
Explanation:
The explicit addition of the target position in every update creates a persistent pull towards the single best-known point. If this point is discovered early and happens to be a deep but local optimum, this term can overwhelm the social (exploratory) forces and quickly draw the entire swarm into that region, causing severe premature convergence. Other algorithms, like GWO, balance the pull among three leaders, providing some hedge against this single-point failure mode.
Incorrect! Try again.
60Metaheuristic algorithms can be classified by their information sharing topology. GWO and gbest-PSO both have a star-like topology where leader(s) broadcast information to all others. What fundamentally distinguishes the Firefly Algorithm's topology from these?
Conceptual grouping of metaheuristics
Hard
A.FA has no information sharing topology; all agents are independent.
B.FA has a ring topology where each firefly only communicates with its immediate neighbors.
C.FA has a fully connected (all-to-all) topology where every firefly influences every other firefly equally.
D.FA has a variable, dynamic, and asymmetric topology where links only exist from dimmer to brighter fireflies and their strength depends on distance.
Correct Answer: FA has a variable, dynamic, and asymmetric topology where links only exist from dimmer to brighter fireflies and their strength depends on distance.
Explanation:
The information sharing in FA is not fixed like a star or ring. It is dynamic because the 'brightest' firefly can change. It is variable because the influence (attraction) is a continuous function of distance. Most importantly, it is asymmetric because a dimmer firefly is attracted to a brighter one, but not vice-versa. This creates a directed graph of influence that changes its structure and edge weights in every iteration, making its topology far more complex and fluid than the fixed star topology of gbest PSO or GWO.