Unit 4 - Practice Quiz

CSE275 60 Questions

1 The Firefly Algorithm is a metaheuristic inspired by the...

Firefly algorithm and whale optimization algorithm Easy
A. hunting strategy of lions
B. migration patterns of birds
C. foraging of ants
D. flashing behavior of fireflies

2 In the Firefly Algorithm, the attractiveness of a firefly is directly related to its...

Firefly algorithm and whale optimization algorithm Easy
A. age
B. distance from the origin
C. light intensity (brightness)
D. speed of movement

3 The Whale Optimization Algorithm (WOA) is primarily based on the hunting behavior of which specific animal?

Firefly algorithm and whale optimization algorithm Easy
A. Killer whales (Orcas)
B. Humpback whales
C. Sperm whales
D. Blue whales

4 Which of the following is a key phase of the Whale Optimization Algorithm's hunting strategy?

Firefly algorithm and whale optimization algorithm Easy
A. Building a nest
B. Hibernation
C. Encircling prey
D. Shedding skin

5 In the Grey Wolf Optimizer (GWO), the three best solutions found so far are represented by which wolves in the social hierarchy?

Grey wolf optimization and grasshopper optimization algorithm Easy
A. Alpha, Beta, and Delta
B. Omega, Beta, and Gamma
C. Alpha, Beta, and Omega
D. Epsilon, Gamma, and Delta

6 What is the primary role of the omega (ω) wolves in the Grey Wolf Optimizer?

Grey wolf optimization and grasshopper optimization algorithm Easy
A. To challenge the Alpha wolf
B. To find new territory
C. To follow the Alpha, Beta, and Delta wolves
D. To lead the hunt

7 The Grasshopper Optimization Algorithm (GOA) is inspired by the behavior of grasshoppers in...

Grey wolf optimization and grasshopper optimization algorithm Easy
A. building nests
B. isolation
C. mating rituals
D. a swarm

8 In the Grasshopper Optimization Algorithm, the movement of an individual is mainly influenced by which two forces?

Grey wolf optimization and grasshopper optimization algorithm Easy
A. Hunger and fear
B. Magnetic fields and light
C. Social interaction and gravity force towards the target
D. Wind current and temperature

9 Algorithms inspired by physical processes, like the cooling of metal in annealing, are grouped into which category of metaheuristics?

Conceptual grouping of metaheuristics Easy
A. Physics-based
B. Swarm-based
C. Human-based
D. Evolutionary-based

10 Ant Colony Optimization and Particle Swarm Optimization are examples of which class of algorithms?

Conceptual grouping of metaheuristics Easy
A. Evolutionary algorithms
B. Swarm intelligence-based
C. Physics-based
D. Trajectory-based

11 A key characteristic of population-based metaheuristics is that they...

Conceptual grouping of metaheuristics Easy
A. work with a single solution that moves through the search space
B. are guaranteed to find the global optimum
C. are only inspired by biological evolution
D. maintain and improve multiple candidate solutions simultaneously

12 Genetic Algorithms and Differential Evolution belong to which group of metaheuristics?

Conceptual grouping of metaheuristics Easy
A. Human-based algorithms
B. Swarm intelligence algorithms
C. Evolutionary algorithms
D. Physics-based algorithms

13 When comparing optimization algorithms, what does 'convergence speed' refer to?

Comparison of metaheuristic algorithms Easy
A. How quickly the algorithm finds a good enough solution
B. The programming language it is written in
C. The computational complexity of the algorithm
D. How many parameters the algorithm has

14 The "No Free Lunch" (NFL) theorem implies that...

Comparison of metaheuristic algorithms Easy
A. free and open-source algorithms are always worse than commercial ones
B. no single optimization algorithm is best for all possible problems
C. all optimization algorithms perform equally well on every problem
D. an algorithm that is fast is always better

15 In the context of metaheuristics, what does 'parameter tuning' involve?

Comparison of metaheuristic algorithms Easy
A. Choosing the objective function for the problem
B. Writing the algorithm's code
C. Increasing the population size to infinity
D. Setting the algorithm's control parameters to achieve the best performance

16 A common way to ensure a fair comparison between two stochastic (randomized) optimization algorithms is to...

Comparison of metaheuristic algorithms Easy
A. use different population sizes for each algorithm
B. run each algorithm multiple times and compare their average performance
C. use different objective functions for each algorithm
D. run each algorithm only once
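The multi-run protocol above can be sketched in a few lines of Python. This is a minimal illustration, not a full benchmark: `sphere`, `random_search`, and `mean_best` are hypothetical names, and plain random search stands in for any stochastic optimizer. The key idea is that each run gets an independent seed and only the averaged result is compared.

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum of x_i^2, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(dim, iters, rng):
    """Toy stand-in for a stochastic optimizer: best of `iters` uniform samples."""
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, sphere(x))
    return best

def mean_best(optimizer, runs=30, **kwargs):
    """Average final fitness over independently seeded runs -- the fair protocol."""
    results = [optimizer(rng=random.Random(seed), **kwargs) for seed in range(runs)]
    return sum(results) / len(results)

avg = mean_best(random_search, runs=30, dim=2, iters=200)
print(f"mean best fitness over 30 seeded runs: {avg:.4f}")
```

Thirty independent runs is a common convention in the metaheuristics literature; medians and statistical tests are often reported alongside the mean.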

17 An optimization algorithm is said to have good 'scalability' if its performance...

Scalability and convergence issues in optimization Easy
A. is always fast regardless of the problem
B. improves as the problem size increases
C. does not degrade significantly as the problem size increases
D. is consistent only on small-scale problems

18 'Premature convergence' is an issue where an algorithm...

Scalability and convergence issues in optimization Easy
A. finds the global optimum too quickly
B. gets stuck in a local optimum and stops exploring the search space
C. converges too slowly to the global optimum
D. fails to converge at all

19 A standard convergence curve for a minimization problem plots the 'best fitness value' on the y-axis against what on the x-axis?

Scalability and convergence issues in optimization Easy
A. Algorithm runtime in seconds
B. Number of iterations or function evaluations
C. Number of problem dimensions
D. Population size
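The bookkeeping behind such a curve can be shown with a short sketch (the random candidate here is a hypothetical stand-in for one function evaluation of any minimizer):

```python
import random

# Record the best-so-far fitness at each iteration; plotting `curve`
# against the iteration index gives the standard convergence curve.
rng = random.Random(0)
best_so_far = float("inf")
curve = []
for iteration in range(50):              # x-axis: iteration count
    candidate = rng.uniform(0.0, 100.0)  # stand-in for one function evaluation
    best_so_far = min(best_so_far, candidate)
    curve.append(best_so_far)            # y-axis: best fitness value
print(curve[0], curve[-1])
```

Because each entry is the best value seen so far, the curve is monotonically non-increasing for a minimization problem.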

20 The 'curse of dimensionality' refers to the problem where...

Scalability and convergence issues in optimization Easy
A. the optimal solution is always at the origin in high dimensions
B. the algorithm requires less memory for high-dimensional problems
C. the algorithm becomes simpler with more dimensions
D. the search space grows exponentially as the number of dimensions increases

21 In the Firefly Algorithm, if the light absorption coefficient, γ, is set to a very large value (e.g., γ → ∞), what is the expected behavior of the algorithm?

Firefly algorithm and whale optimization algorithm Medium
A. The algorithm performs a global search across the entire search space.
B. All fireflies will have the same brightness, regardless of their position.
C. The algorithm behaves like a random search because attractiveness becomes negligible except at very close distances.
D. The algorithm converges extremely fast to a single global optimum.
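The effect of a very large absorption coefficient can be seen numerically from the attractiveness formula β(r) = β0·e^(−γr²) (a minimal sketch; the function name is illustrative):

```python
import math

def attractiveness(r, beta0=1.0, gamma=1.0):
    """Firefly attractiveness: beta(r) = beta0 * exp(-gamma * r^2)."""
    return beta0 * math.exp(-gamma * r * r)

# Moderate gamma: attraction is still felt at distance r = 2.
print(attractiveness(2.0, gamma=1.0))
# Very large gamma: attraction vanishes except at r ~ 0, so movement is
# dominated by the random term (random-search-like behavior).
print(attractiveness(2.0, gamma=1000.0))
print(attractiveness(0.01, gamma=1000.0))  # still close to beta0
```

With γ = 1000, the attraction at r = 2 underflows to essentially zero while a firefly 0.01 units away still feels nearly the full pull.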

22 The spiral updating position mechanism in the Whale Optimization Algorithm (WOA) is designed to mimic the humpback whale's bubble-net feeding behavior. What is the primary purpose of this mechanism in the optimization process?

Firefly algorithm and whale optimization algorithm Medium
A. To reduce the number of tunable parameters compared to other algorithms.
B. To enhance local search and exploitation around the best-found solution.
C. To ensure the algorithm always escapes local optima.
D. To increase the exploration of the search space by making large random jumps.

23 In the Whale Optimization Algorithm (WOA), the decision to either encircle the prey or perform a spiral update is controlled by a probability p. If a developer sets p = 1, how does this affect the algorithm's search behavior?

Firefly algorithm and whale optimization algorithm Medium
A. The algorithm's convergence speed will be unaffected.
B. The algorithm will only perform exploration by searching for prey randomly.
C. The algorithm will only use the encircling prey mechanism.
D. The algorithm will only use the spiral bubble-net mechanism for exploitation.
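The two exploitation branches and the probability knob can be sketched in one dimension. This is an assumption-laden illustration, not the canonical WOA loop: `woa_update` and `p_spiral` are names of my choosing, and `p_spiral` plays the role of the fixed probability from the question.

```python
import math
import random

def woa_update(x, best, a, rng, b=1.0, p_spiral=0.5):
    """One 1-D WOA exploitation step. A draw below p_spiral takes the
    spiral bubble-net branch; otherwise the shrinking-encircling branch.
    Setting p_spiral = 1.0 means only the spiral branch ever runs."""
    if rng.random() < p_spiral:
        # spiral bubble-net: D' * e^(b*l) * cos(2*pi*l) + best
        l = rng.uniform(-1.0, 1.0)
        d = abs(best - x)
        return d * math.exp(b * l) * math.cos(2 * math.pi * l) + best
    # shrinking encircling: best - A * |C*best - x|, with A = 2*a*r - a
    A = 2 * a * rng.random() - a
    C = 2 * rng.random()
    return best - A * abs(C * best - x)

rng = random.Random(0)
# With p_spiral = 1.0 every update is a spiral step; a whale already at the
# best position stays exactly there, since the spiral distance D' is zero.
print(woa_update(3.0, 3.0, a=1.0, rng=rng, p_spiral=1.0))  # -> 3.0
```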

24 How does the Firefly Algorithm's movement equation fundamentally differ from the velocity update in Particle Swarm Optimization (PSO)?

Firefly algorithm and whale optimization algorithm Medium
A. FA's movement is based on attractiveness between pairs of fireflies, while PSO's is based on individual and global best positions.
B. FA does not have a random component in its movement, unlike PSO.
C. FA updates fireflies one by one, whereas PSO updates all particles simultaneously.
D. PSO's movement is deterministic, while FA's is purely stochastic.

25 In Grey Wolf Optimization (GWO), the search is primarily guided by the top three wolves: alpha (α), beta (β), and delta (δ). What is the rationale behind using three leaders instead of just one (the alpha)?

Grey wolf optimization and grasshopper optimization algorithm Medium
A. It provides a better balance between exploration and exploitation by considering multiple promising regions.
B. It eliminates the need for any random parameters in the algorithm.
C. It triples the convergence speed of the algorithm.
D. It is a direct imitation of wolf pack sizes and has no specific optimization purpose.

26 Consider the position update equation for omega wolves in GWO: X(t+1) = (X1 + X2 + X3)/3, where X1, X2, and X3 are position vectors influenced by the alpha, beta, and delta wolves. If the magnitude of the coefficient vector A (|A|) is consistently greater than 1 for all three leaders, what phase is the algorithm likely in?

Grey wolf optimization and grasshopper optimization algorithm Medium
A. Exploitation phase, where wolves converge to attack prey.
B. Initialization phase, where positions are being set randomly.
C. Stagnation phase, where wolves are not moving.
D. Exploration phase, where wolves diverge to search for prey.
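How the coefficient A = 2a·r1 − a separates exploration from exploitation can be checked empirically (a small sketch with the scalar form of A; the function name is mine):

```python
import random

def gwo_A(a, rng):
    """GWO coefficient (scalar form): A = 2*a*r1 - a, with r1 ~ U(0, 1)."""
    return 2 * a * rng.random() - a

rng = random.Random(42)
# Early in the run a is near 2, so |A| > 1 is common: wolves diverge (exploration).
early = [abs(gwo_A(2.0, rng)) for _ in range(1000)]
frac_explore = sum(v > 1 for v in early) / len(early)
# Late in the run a is near 0, so |A| < 1 always: wolves converge (exploitation).
late = [abs(gwo_A(0.1, rng)) for _ in range(1000)]
print(frac_explore)  # roughly half the draws exceed 1 when a = 2
print(max(late))     # |A| can never exceed a = 0.1
```

Since A is uniform on [−a, a], exploration (|A| > 1) is simply impossible once a drops below 1.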

27 The Grasshopper Optimization Algorithm (GOA) models both repulsion and attraction between grasshoppers. During which stage of the optimization process is repulsion between grasshoppers most dominant and why?

Grey wolf optimization and grasshopper optimization algorithm Medium
A. Late stages, to refine the solution around the global optimum.
B. Early stages, to encourage exploration of the entire search space.
C. Repulsion is always weaker than attraction to ensure convergence.
D. When the swarm is very large, to manage computational complexity.

28 The parameter c in the Grasshopper Optimization Algorithm (GOA) decreases over iterations. What is the primary consequence of this design on the algorithm's behavior?

Grey wolf optimization and grasshopper optimization algorithm Medium
A. It increases the random behavior of the grasshoppers over time.
B. It gradually shifts the algorithm's focus from exploration to exploitation.
C. It keeps the balance between attraction and repulsion forces constant.
D. It guarantees that the algorithm will find the global optimum.
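The commonly used linear schedule for c can be written directly (the function name and the c_max/c_min defaults are illustrative choices; the defaults follow values often seen in GOA papers):

```python
def goa_c(t, max_iter, c_max=1.0, c_min=0.00001):
    """GOA comfort-zone coefficient, decreasing linearly with iteration t:
    c = c_max - t * (c_max - c_min) / max_iter."""
    return c_max - t * (c_max - c_min) / max_iter

# Early: c near c_max -> wide comfort zone, large steps (exploration).
# Late:  c near c_min -> shrunken comfort zone, tiny steps (exploitation).
print(goa_c(1, 100))    # close to 1.0
print(goa_c(100, 100))  # exactly c_min
```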

29 Simulated Annealing is a well-known metaheuristic. How would it be conceptually classified?

Conceptual grouping of metaheuristics Medium
A. Population-based and evolutionary
B. Population-based and swarm intelligence
C. Trajectory-based and physics-based
D. Trajectory-based and bio-inspired

30 What is a key conceptual difference between Swarm Intelligence (SI) algorithms like PSO and Evolutionary Algorithms (EA) like Genetic Algorithms (GA)?

Conceptual grouping of metaheuristics Medium
A. SI algorithms cannot solve discrete optimization problems, while EAs can.
B. EAs use operators like crossover and mutation to create new solutions, while SI algorithms typically adjust trajectories based on shared information.
C. SI algorithms are always trajectory-based, while EAs are always population-based.
D. EAs do not maintain a population of solutions, unlike SI algorithms.

31 Which of the following pairs correctly categorizes the given algorithms?

Conceptual grouping of metaheuristics Medium
A. Whale Optimization: Trajectory-based; Firefly Algorithm: Evolutionary Algorithm
B. Genetic Algorithm: Swarm Intelligence; Simulated Annealing: Population-based
C. Ant Colony Optimization: Swarm Intelligence; Tabu Search: Trajectory-based
D. Particle Swarm Optimization: Evolutionary Algorithm; Grey Wolf Optimizer: Physics-based

32 An algorithm that maintains a population of solutions and improves them over generations using mechanisms inspired by natural selection, but does not use crossover between solutions, would be best classified as what?

Conceptual grouping of metaheuristics Medium
A. A trajectory-based algorithm
B. A classical Evolutionary Algorithm
C. A Swarm Intelligence algorithm
D. A deterministic optimization method

33 According to the No Free Lunch (NFL) theorem for optimization, what can be concluded when comparing the performance of the Firefly Algorithm (FA) and the Grey Wolf Optimizer (GWO)?

Comparison of metaheuristic algorithms Medium
A. FA will always converge faster than GWO on continuous optimization problems.
B. The algorithm with fewer parameters (GWO) is fundamentally better than the one with more parameters (FA).
C. GWO will always outperform FA on high-dimensional problems.
D. Neither algorithm can be considered universally superior to the other across all possible optimization problems.

34 In a high-dimensional optimization problem with many local optima, why might Grey Wolf Optimizer (GWO) have an advantage over Particle Swarm Optimization (PSO)?

Comparison of metaheuristic algorithms Medium
A. GWO has fewer parameters to tune, making it inherently more robust.
B. PSO particles can only move in straight lines, while GWO wolves cannot.
C. GWO's strategy of following three leaders (alpha, beta, delta) can prevent premature convergence to a single local optimum better than PSO's gbest.
D. GWO's computational complexity per iteration is always lower than PSO's.

35 When comparing the parameter sensitivity of the Firefly Algorithm (FA) and the Whale Optimization Algorithm (WOA), which statement is most accurate?

Comparison of metaheuristic algorithms Medium
A. FA is generally more sensitive due to the light absorption coefficient (γ) and randomization parameter (α), which significantly impact performance.
B. Both algorithms have the exact same number and type of parameters to tune.
C. WOA is more sensitive because its spiral shape parameter (b) must be precisely tuned for each problem.
D. Both algorithms are parameter-free and require no tuning.

36 An optimization algorithm is applied to a 10-dimensional problem and converges well. When the same algorithm with the same population size is applied to a 100-dimensional version of the problem, it consistently gets stuck in poor-quality local optima. This is a classic example of:

Scalability and convergence issues in optimization Medium
A. A poorly implemented fitness function.
B. Algorithmic divergence.
C. The No Free Lunch theorem.
D. The curse of dimensionality.

37 Premature convergence in a population-based metaheuristic is characterized by the swarm losing its diversity and stagnating at a suboptimal solution. Which of the following strategies is specifically designed to counteract this?

Scalability and convergence issues in optimization Medium
A. Decreasing the population size to speed up computations.
B. Reducing the number of iterations to stop the algorithm earlier.
C. Introducing a mutation operator or increasing the randomization parameter to re-introduce diversity.
D. Always replacing the worst solutions with copies of the best solution.

38 You are observing the convergence curve (Best Fitness vs. Iteration) of a metaheuristic algorithm. The curve drops very sharply in the first few iterations and then becomes completely flat for the rest of the run, far from the known optimal value. What is the most likely issue?

Scalability and convergence issues in optimization Medium
A. The learning rate or step size is too small.
B. The algorithm has prematurely converged to a local optimum.
C. The population size is too large for the problem.
D. The algorithm is performing an effective global search.

39 How does increasing the population size in a swarm-based algorithm typically affect the exploration-exploitation balance and scalability?

Scalability and convergence issues in optimization Medium
A. It reduces the algorithm's ability to scale to high-dimensional problems.
B. It generally improves exploration and the ability to handle higher dimensions, but at the cost of increased computational time per iteration.
C. It forces the algorithm to focus purely on exploitation, leading to faster convergence.
D. It has no effect on the exploration-exploitation balance but decreases overall runtime.

40 For a problem where the global optimum is located within a narrow, funnel-shaped valley, which algorithm's search strategy would likely be more effective: the Firefly Algorithm (FA) or the Whale Optimization Algorithm (WOA)?

Comparison of metaheuristic algorithms Medium
A. Both would be equally effective as they are both swarm intelligence algorithms.
B. WOA, due to its spiral bubble-net mechanism which is well-suited for exploiting narrow regions around a target.
C. Neither, as this type of problem requires a gradient-based deterministic method.
D. FA, because its attractiveness function works best in landscapes with clear gradients.

41 In the standard Firefly Algorithm, the attractiveness is given by β(r) = β0·e^(−γr²). What is the most likely behavior of the swarm if the light absorption coefficient γ is set to a value approaching zero (γ → 0), assuming the randomization parameter α is also small?

Firefly algorithm and whale optimization algorithm Hard
A. The algorithm behaves like a parallel random search, with each firefly moving almost independently.
B. All fireflies immediately converge to the position of the initially brightest firefly and cease movement.
C. The algorithm's search behavior becomes chaotic and unpredictable, leading to divergence.
D. The algorithm devolves into a variant of Particle Swarm Optimization (PSO) where all fireflies are attracted to the single global best.
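The two extremes of γ in the full movement equation can be contrasted in a 1-D sketch (function name and parameter values are illustrative, not the canonical implementation):

```python
import math
import random

def firefly_move(xi, xj, rng, beta0=1.0, gamma=1.0, alpha=0.1):
    """1-D firefly move toward a brighter firefly at xj:
    x_i <- x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*(rand - 0.5)."""
    r = abs(xj - xi)
    beta = beta0 * math.exp(-gamma * r * r)
    return xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)

rng = random.Random(0)
# gamma -> 0: beta ~ beta0 at any distance, so every firefly is pulled hard
# toward any brighter one -- a single strong attractor, PSO-like behavior.
near_best = firefly_move(0.0, 10.0, rng, gamma=1e-9, alpha=0.0)
# Large gamma: beta ~ 0 at this distance; only the random term remains.
random_only = firefly_move(0.0, 10.0, rng, gamma=100.0, alpha=0.1)
print(near_best, random_only)
```

With γ ≈ 0 the firefly lands almost exactly on the brighter one; with large γ it barely moves except for the random perturbation.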

42 In the Whale Optimization Algorithm (WOA), the spiral updating equation, X(t+1) = D'·e^(bl)·cos(2πl) + X*(t), primarily contributes to the algorithm's search process in a way that distinguishes it from the shrinking encircling mechanism. What is this primary contribution?

Firefly algorithm and whale optimization algorithm Hard
A. It provides a fine-grained exploitation mechanism, allowing the whale to explore various points in the neighborhood between its current position and the prey's position along a spiral path.
B. It guarantees convergence to the global optimum by creating a logarithmic spiral trajectory.
C. It serves as a diversity-promoting mechanism, pushing the whale away from the best-so-far solution to escape local optima.
D. It exclusively enhances global exploration by allowing whales to search in a wider, circular area around the prey.

43 A key vulnerability of the Grey Wolf Optimizer (GWO) is premature convergence when the alpha, beta, and delta wolves become trapped in the same local optimum. Which of the following modifications to the GWO position update equation would be the most direct and effective method to mitigate this specific failure mode?

Grey wolf optimization and grasshopper optimization algorithm Hard
A. Modifying the final position update from an average of the three vectors ((X1 + X2 + X3)/3) to a weighted average where the alpha wolf's influence is reduced in later iterations.
B. Increasing the population size to have more omega wolves.
C. Introducing a "repulsion" force from the alpha wolf if the beta and delta wolves are within a certain small distance from it.
D. Changing the linear decay of the parameter 'a' from [2, 0] to a non-linear, convex function over the same range.

44 In the Grasshopper Optimization Algorithm (GOA), the parameter 'c' acts as a decreasing coefficient that shrinks the "comfort zone." What is the critical consequence of 'c' multiplying both the social interaction term's bounds and the overall step size towards the target?

Grey wolf optimization and grasshopper optimization algorithm Hard
A. It makes the algorithm highly sensitive to the initial population distribution, as 'c' amplifies initial distances.
B. It creates a dynamic balance where the influence of the swarm decreases, while the pull towards the best-so-far solution is simultaneously refined for fine-tuning.
C. It causes the algorithm to focus exclusively on the best grasshopper (the target) in the final iterations.
D. It forces the grasshoppers into a stable, fixed formation around the target, halting the search process.

45 Consider a hypothetical hybrid algorithm that first uses a Genetic Algorithm (GA) to generate a diverse set of candidate solutions. It then uses the top 10% of these solutions to initialize the alpha, beta, and delta wolves in a Grey Wolf Optimizer (GWO) which then runs to find the final solution. How would this hybrid algorithm be most accurately classified?

Conceptual grouping of metaheuristics Hard
A. As a pure Evolutionary Algorithm, because the primary diversification comes from GA.
B. As a trajectory-based metaheuristic, since GWO guides the final search path.
C. As a Memetic Algorithm, combining global evolutionary search with local swarm-based search.
D. As a pure Swarm Intelligence algorithm, because the final optimization is performed by GWO.

46 For a high-dimensional optimization problem with a single, deep, and narrow global optimum funnel (a "needle in a haystack" problem), which algorithm's search mechanism is inherently most disadvantaged and why?

Comparison of metaheuristic algorithms Hard
A. Grey Wolf Optimizer (GWO), because the averaging of the top three wolves' positions is likely to miss a narrow funnel if the wolves surround it but don't land in it.
B. Whale Optimization Algorithm (WOA), because the spiral search mechanism is too localized and inefficient for exploring a large search space for a narrow target.
C. Particle Swarm Optimization (PSO), because the cognitive and social components provide a strong pull towards known good areas, which are vast and non-optimal.
D. Firefly Algorithm (FA), because the distance-dependent attractiveness (β0·e^(−γr²)) will drop to nearly zero for all but the closest fireflies, effectively isolating search agents.

47 How does the "curse of dimensionality" specifically impact the effectiveness of the social interaction component, s(d) = f·e^(−d/l) − e^(−d), in the Grasshopper Optimization Algorithm (GOA)?

Scalability and convergence issues in optimization Hard
A. It has no significant impact because the comfort zone parameter 'c' effectively rescales the search space.
B. It strengthens the social interaction, as the average inter-agent distance increases, leading to stronger repulsion forces.
C. It forces all grasshoppers into the attraction zone, causing rapid premature convergence to the population's centroid.
D. It severely weakens the social interaction, as in high dimensions, most grasshoppers will fall into the mid-range repulsion zone of the s-function, leading to chaotic and unproductive movements with little directed attraction.

48 In the Firefly Algorithm's movement equation, x_i = x_i + β0·e^(−γr²)·(x_j − x_i) + α·ε_i, what is the critical role of the randomization term α·ε_i, particularly for the globally brightest firefly?

Firefly algorithm and whale optimization algorithm Hard
A. It scales the attractiveness based on the problem's dimensionality.
B. It helps less bright fireflies escape the pull of the brightest one, thus maintaining diversity.
C. It ensures that even the brightest firefly, which has no brighter fireflies to be attracted to, continues to explore its local neighborhood.
D. It primarily serves to break ties when two fireflies have identical brightness.

49 In GWO, the final position of an omega wolf is the average of three positions calculated relative to the alpha, beta, and delta wolves: X(t+1) = (X1 + X2 + X3)/3. What is the geometric interpretation of this update mechanism in the search space?

Grey wolf optimization and grasshopper optimization algorithm Hard
A. The wolf moves to the circumcenter of the triangle formed by the alpha, beta, and delta wolves.
B. The wolf moves to the centroid of a triangle formed by its potential next positions relative to the three leaders.
C. The wolf is projected onto the plane defined by the three leaders, ensuring a 2D search in a higher-dimensional space.
D. The wolf performs a random walk within a hyper-sphere defined by the positions of the three leaders.

50 The "No Free Lunch" (NFL) theorem for optimization states that, averaged over all possible problems, any two optimization algorithms will have the same average performance. What is the most profound implication of this theorem for the field of metaheuristics?

Conceptual grouping of metaheuristics Hard
A. It proves that developing new metaheuristic algorithms is a futile effort.
B. It implies that all metaheuristics are essentially variants of random search.
C. It necessitates the development of problem-specific or class-specific algorithms, as a universally superior algorithm cannot exist.
D. It suggests that hybridizing algorithms is the only way to achieve better performance.

51 Compare the primary exploitation mechanism of the Whale Optimization Algorithm (the spiral update around the best-so-far solution X*) with that of the Grey Wolf Optimizer (omega wolves encircling the region defined by the alpha, beta, and delta wolves). Which statement provides the most accurate analysis of their differences in handling multimodal problems?

Comparison of metaheuristic algorithms Hard
A. Both algorithms have identical exploitation capabilities, with differences only in their exploration phases.
B. WOA's spiral mechanism provides a more exhaustive local search path around the best solution, while GWO's averaging provides a more discrete jump towards a region of promise.
C. GWO's exploitation is more robust for multimodal problems as it considers three good solutions, whereas WOA's focus on a single X* makes it more prone to local optima.
D. GWO's exploitation is computationally cheaper as it avoids trigonometric functions, making it more efficient for fast convergence.

52 Many metaheuristics can be proven to converge to a global optimum, but this proof often relies on assumptions that are not practical (e.g., the ability to reach any point in the search space from any other point). Which of the following algorithms, in its standard form, most clearly violates this assumption, thus making a formal proof of convergence challenging?

Scalability and convergence issues in optimization Hard
A. A Genetic Algorithm that includes a mutation operator with a non-zero probability of changing any gene.
B. The Firefly Algorithm, where movement is strictly biased towards brighter fireflies and can be zero if no brighter firefly exists.
C. Particle Swarm Optimization, where the velocity update can theoretically propel a particle anywhere in the search space.
D. Simulated Annealing, where there is always a non-zero probability of accepting a worse move.

53 In WOA, the transition between exploration (|A| ≥ 1, search for prey) and exploitation (|A| < 1, attack prey) is governed by A = 2a·r − a, where 'a' decreases linearly from 2 to 0. What is the key implication of using this formulation?

Firefly algorithm and whale optimization algorithm Hard
A. It creates a probabilistic transition, where exploration is more likely in early stages and exploitation is more likely in later stages, but neither is ever fully eliminated.
B. It makes the transition dependent on the problem's dimensionality, as the random vector r's magnitude changes.
C. It guarantees that exactly the first half of the iterations are dedicated to exploration and the second half to exploitation.
D. It forces the algorithm to switch deterministically from exploration to exploitation once the iteration count passes the halfway mark.
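How often the exploration branch (|A| ≥ 1) fires for a given 'a' can be estimated by simulation (a numerical illustration under the scalar form A = 2a·r − a; the function name is mine):

```python
import random

def exploration_fraction(a, samples=20000, seed=0):
    """Fraction of draws with |A| >= 1, where A = 2*a*r - a and r ~ U(0, 1);
    |A| >= 1 triggers WOA's search-for-prey (exploration) branch."""
    rng = random.Random(seed)
    hits = sum(abs(2 * a * rng.random() - a) >= 1 for _ in range(samples))
    return hits / samples

# As 'a' decays from 2 toward 0 the exploration branch fires less and less
# often: analytically P(|A| >= 1) = (a - 1)/a for a >= 1, and 0 once a < 1.
f_early = exploration_fraction(2.0)   # about 0.5
f_mid = exploration_fraction(1.25)    # about 0.2
f_late = exploration_fraction(0.5)    # exactly 0.0
print(f_early, f_mid, f_late)
```

So the transition is probabilistic rather than a hard switch: while a > 1, both branches remain possible on every iteration, with exploration gradually becoming rarer.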

54 The GOA position update equation is of the form X_i(t+1) = c·(sum of social interaction terms) + T_d, where T_d is the target position. This is different from many swarm algorithms like PSO, where X(t+1) = X(t) + V(t+1). What is the fundamental difference in search behavior implied by GOA's formulation?

Grey wolf optimization and grasshopper optimization algorithm Hard
A. GOA's approach is computationally more complex and therefore converges more slowly.
B. GOA's positions are recalculated in each iteration relative to the target, making it a memory-less algorithm regarding individual trajectory, unlike PSO which has momentum.
C. GOA's formulation ensures that grasshoppers can never move further away from the target, guaranteeing convergence.
D. There is no fundamental difference; it is just a different mathematical representation of the same process.

55 Some algorithms blur the line between being population-based and trajectory-based. Which of the following algorithms' core mechanism makes it the most ambiguous to classify strictly as one or the other?

Conceptual grouping of metaheuristics Hard
A. Random Search, which involves independent trials.
B. Genetic Algorithm, which operates on an entire population simultaneously.
C. Simulated Annealing, which modifies a single solution over time.
D. Ant Colony Optimization (ACO), where individual ants create solutions but the population collectively modifies a pheromone map that represents a shared search structure.

56 Metaheuristics balance exploration and exploitation using control parameters. Which algorithm pair offers the most fundamentally different approach to managing this balance?

Comparison of metaheuristic algorithms Hard
A. Simulated Annealing (SA) and Genetic Algorithm (GA), where SA uses a temperature schedule to reduce randomness, while a standard GA uses static operator rates.
B. Firefly Algorithm (FA) and Particle Swarm Optimization (PSO), where both rely on attraction to better solutions within the population.
C. Grasshopper Optimization (GOA) and GWO, as both use a coefficient ('c' or 'a') that shrinks agents' step sizes over time.
D. GWO and WOA, as both use a linearly decreasing parameter 'a' to shift from exploration to exploitation.

57 An algorithm exhibits rapid initial convergence, with all agents clustering in one region of the search space early on, after which the best solution improves very slowly. This indicates premature convergence. Which parameter tuning strategy is most likely to be ineffective or even counter-productive for solving this issue?

Scalability and convergence issues in optimization Hard
A. In the Firefly Algorithm, significantly increasing the light absorption coefficient γ.
B. In PSO, increasing the cognitive parameter (c1) and decreasing the social parameter (c2).
C. In any swarm algorithm, significantly increasing the population size.
D. In GWO, using a non-linear concave function for the decay of parameter 'a', so it stays high for longer.

58 WOA's main exploration mechanism involves updating a whale's position based on a randomly chosen whale X_rand instead of the best-so-far whale X*. How does this mechanism's effectiveness for global search compare to the mutation operator in a Genetic Algorithm (GA)?

Firefly algorithm and whale optimization algorithm Hard
A. It is less effective for creating truly novel solutions because it only directs the search towards regions defined by the current population, whereas mutation can generate entirely new genetic material.
B. The two mechanisms are functionally equivalent, both serving to introduce random changes to escape local optima.
C. WOA's mechanism is only useful in early iterations, while mutation is effective throughout the entire evolutionary process.
D. It is more effective because it always guides the search toward a potentially good region (defined by X_rand), unlike mutation which is a completely random perturbation.

59 In the Grasshopper Optimization Algorithm, the position update is X_i(t+1) = c·(sum of social interaction terms) + T_d, where T_d is the position of the best solution (target) found so far. What is the most significant potential drawback of having this strong, direct pull towards a single target in every iteration?

Grey wolf optimization and grasshopper optimization algorithm Hard
A. It makes the algorithm unsuitable for discrete optimization problems where the concept of a "target position" is ill-defined.
B. It creates an overly strong exploitation pressure from the very beginning, potentially overriding the exploratory social interactions and leading to premature convergence if the initial target is a local optimum.
C. It makes the algorithm computationally expensive as the target must be identified in each iteration.
D. It requires an extra parameter to control the influence of the target, which complicates the algorithm.

60 Metaheuristic algorithms can be classified by their information sharing topology. GWO and gbest-PSO both have a star-like topology where leader(s) broadcast information to all others. What fundamentally distinguishes the Firefly Algorithm's topology from these?

Conceptual grouping of metaheuristics Hard
A. FA has no information sharing topology; all agents are independent.
B. FA has a ring topology where each firefly only communicates with its immediate neighbors.
C. FA has a fully connected (all-to-all) topology where every firefly influences every other firefly equally.
D. FA has a variable, dynamic, and asymmetric topology where links only exist from dimmer to brighter fireflies and their strength depends on distance.