Unit 5 - Subjective Questions

INT255 • Practice Questions with Detailed Answers

1. Define the concept of a hyperplane and its fundamental role in Support Vector Machines (SVMs).

2. Explain the geometric interpretation of the classification margin in SVM. How is it related to the support vectors?

3. Derive the expression for the margin in terms of the weight vector and bias for a linearly separable dataset.
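As a hint toward the derivation this question expects (standard result, stated here for reference): the distance from a point to the hyperplane, combined with the canonical scaling of the closest points, gives the margin.

```latex
% Distance from a point x to the hyperplane w^T x + b = 0:
d(\mathbf{x}) = \frac{|\mathbf{w}^\top \mathbf{x} + b|}{\lVert \mathbf{w} \rVert}
% With canonical scaling, the closest points satisfy |w^T x + b| = 1,
% so each margin boundary lies at distance 1/||w||, and the full margin is
\text{margin} = \frac{2}{\lVert \mathbf{w} \rVert}
```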

4. Distinguish between Hard Margin SVM and Soft Margin SVM. When is each appropriate?

5. Explain the role of slack variables in Soft Margin SVM. How do they allow for misclassifications and handle non-linearly separable data?

6. Describe the objective function of a Hard Margin SVM, including its constraints. Explain why it's formulated this way.
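For reference, the standard Hard Margin primal formulation this question refers to:

```latex
\min_{\mathbf{w},\, b} \;\; \frac{1}{2}\lVert \mathbf{w} \rVert^2
\quad \text{subject to} \quad y_i(\mathbf{w}^\top \mathbf{x}_i + b) \ge 1, \;\; i = 1, \dots, n
```

Minimizing $\tfrac{1}{2}\lVert \mathbf{w} \rVert^2$ is equivalent to maximizing the margin $2/\lVert \mathbf{w} \rVert$ while the constraints force every point onto the correct side.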

7. Describe the objective function of a Soft Margin SVM, explaining the significance of the regularization parameter C.
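For reference, the standard Soft Margin primal formulation, with slack variables $\xi_i$ and the regularization parameter $C$ trading margin width against constraint violations:

```latex
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\; \frac{1}{2}\lVert \mathbf{w} \rVert^2 \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad y_i(\mathbf{w}^\top \mathbf{x}_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0
```

A large $C$ penalizes violations heavily (approaching hard margin behavior); a small $C$ tolerates misclassifications in exchange for a wider margin.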

8. Explain why the Primal Optimization Problem for SVM is typically formulated as a quadratic programming problem.

9. What is the significance of moving from the Primal to the Dual Optimization Problem in SVM training?

10. Describe the general structure of a Lagrangian function for a constrained optimization problem.

11. Formulate the Lagrangian for the Hard Margin SVM primal problem.
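For reference, the Lagrangian this question asks for, obtained by attaching a multiplier $\alpha_i \ge 0$ to each inequality constraint of the hard margin primal:

```latex
L(\mathbf{w}, b, \boldsymbol{\alpha})
= \frac{1}{2}\lVert \mathbf{w} \rVert^2
- \sum_{i=1}^{n} \alpha_i \left[ y_i(\mathbf{w}^\top \mathbf{x}_i + b) - 1 \right],
\qquad \alpha_i \ge 0
```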

12. Explain how the Karush-Kuhn-Tucker (KKT) conditions are applied to the SVM Lagrangian to derive the dual problem.
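As a hint, the key steps are the stationarity conditions, whose substitution back into the Lagrangian yields the dual:

```latex
% Stationarity:
\frac{\partial L}{\partial \mathbf{w}} = 0 \;\Rightarrow\; \mathbf{w} = \sum_{i} \alpha_i y_i \mathbf{x}_i,
\qquad
\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i} \alpha_i y_i = 0
% Substituting both into L gives the dual problem:
\max_{\boldsymbol{\alpha}} \;\; \sum_{i=1}^{n} \alpha_i
- \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, \mathbf{x}_i^\top \mathbf{x}_j
\quad \text{subject to} \quad \alpha_i \ge 0, \;\; \sum_{i} \alpha_i y_i = 0
```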

13. Discuss the implications of the KKT complementary slackness condition for support vectors in SVM.

14. Explain the "kernel trick" in the context of SVMs. Why is it necessary?

15. Describe at least three common types of kernel functions used in SVMs (e.g., Linear, Polynomial, RBF) and briefly explain when each might be preferred.
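A minimal sketch comparing these three kernels, assuming scikit-learn and a toy two-moons dataset (both are illustrative choices, not part of the question):

```python
# Compare linear, polynomial, and RBF kernels on a nonlinearly
# separable toy dataset. The RBF kernel typically fits the curved
# class boundary better than the linear one here.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, C=1.0, gamma="scale")  # poly uses degree=3 by default
    clf.fit(X, y)
    print(kernel, round(clf.score(X, y), 3))  # training accuracy per kernel
```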

16. How does a kernel function implicitly map data into a higher-dimensional feature space without explicit computation?
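A small worked example of the idea behind this question: the degree-2 polynomial kernel $k(\mathbf{x}, \mathbf{z}) = (\mathbf{x}^\top \mathbf{z})^2$ in $\mathbb{R}^2$ equals an inner product in an explicit 3-D feature space, yet the kernel never computes that space. The function names (`k`, `phi`) are illustrative.

```python
# k(x, z) = (x . z)^2 in R^2 corresponds to the explicit feature map
# phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2); the kernel reaches the same
# inner product without ever constructing phi.
import math

def k(x, z):
    """Kernel, computed entirely in the original 2-D space."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map into 3-D, shown only to verify the equivalence."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = (1.0, 2.0), (3.0, 4.0)
print(k(x, z), dot(phi(x), phi(z)))  # both equal (1*3 + 2*4)^2 = 121
```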

17. What is Mercer's theorem, and why is it important for valid kernel functions?
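As a hint, the practical form of the condition this question refers to: a symmetric function $K$ is a valid kernel when every Gram matrix it produces is positive semi-definite.

```latex
% For any finite set of points x_1, ..., x_n and any coefficients c_i:
\sum_{i=1}^{n} \sum_{j=1}^{n} c_i\, c_j\, K(\mathbf{x}_i, \mathbf{x}_j) \ge 0
% Equivalently, the Gram matrix G with G_{ij} = K(x_i, x_j) is
% symmetric positive semi-definite, guaranteeing K corresponds to an
% inner product in some feature space.
```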

18. Briefly outline the general steps involved in training an SVM.
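A minimal sketch of the workflow this question asks about, assuming scikit-learn and the Iris dataset (illustrative choices): split the data, scale features, fit the classifier, evaluate.

```python
# Typical SVM training workflow: split -> scale -> fit -> evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Feature scaling matters for SVMs because the margin is distance-based.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```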

19. Explain how the dual problem's solution relates to the weight vector and bias of the separating hyperplane.
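For reference, the standard recovery of the primal solution from the optimal dual multipliers $\alpha_i^\star$, where $S = \{i : \alpha_i^\star > 0\}$ indexes the support vectors:

```latex
\mathbf{w}^\star = \sum_{i \in S} \alpha_i^\star\, y_i\, \mathbf{x}_i,
\qquad
b^\star = y_k - \mathbf{w}^{\star\top} \mathbf{x}_k
\quad \text{for any support vector } \mathbf{x}_k,\; k \in S
```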

20. Discuss the computational advantages of solving the dual problem over the primal problem, especially when using the kernel trick.