Unit1 - Subjective Questions

INT255 • Practice Questions with Detailed Answers

1

Define a vector, a matrix, and a tensor in the context of machine learning. Provide a brief example of an application of each.

2

Explain how data is typically represented using vectors and matrices in machine learning. Provide a concrete example involving a tabular dataset.
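As a quick illustration of what this question targets, here is a minimal NumPy sketch (the feature names and values are made up) showing how a tabular dataset maps to a matrix whose rows are sample vectors and whose columns are features:

```python
import numpy as np

# Hypothetical tabular dataset: 3 samples (rows) x 2 features (columns),
# e.g. [height_cm, weight_kg] for each person.
X = np.array([
    [170.0, 65.0],
    [160.0, 55.0],
    [180.0, 80.0],
])

first_sample = X[0]   # one row = the feature vector of one sample, shape (2,)
heights = X[:, 0]     # one column = one feature across all samples, shape (3,)

print(X.shape)        # (3, 2): n_samples x n_features
```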

3

What is a vector space? List the ten axioms (properties) that a set must satisfy to be considered a vector space over a field of scalars.

4

Explain what a subspace is. Provide an example of a subspace of ℝ³ that is neither ℝ³ itself nor the zero subspace {0}.

5

Compare and contrast the L1 norm (Manhattan norm) and the L2 norm (Euclidean norm) of a vector. Include their mathematical definitions and typical applications in machine learning.
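A short NumPy sketch of the two norms this question compares, checked against NumPy's built-in implementation:

```python
import numpy as np

v = np.array([3.0, -4.0])

# L1 (Manhattan) norm: sum of absolute components.
l1 = np.sum(np.abs(v))        # |3| + |-4| = 7
# L2 (Euclidean) norm: square root of the sum of squared components.
l2 = np.sqrt(np.sum(v ** 2))  # sqrt(9 + 16) = 5

# NumPy's built-in norm agrees with the hand-rolled versions.
assert l1 == np.linalg.norm(v, ord=1)
assert l2 == np.linalg.norm(v, ord=2)
print(l1, l2)  # 7.0 5.0
```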

6

Define linear independence of a set of vectors. Why is linear independence a crucial concept in the context of vector spaces and basis formation?

7

Explain the concept of orthogonal projection of a vector onto another vector. Provide the formula and briefly describe its utility in machine learning.
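A minimal sketch of the projection formula proj_u(v) = ((v·u)/(u·u)) u, with a check that the leftover component is orthogonal to u (the helper name `project` is illustrative):

```python
import numpy as np

def project(v, u):
    """Orthogonal projection of v onto u: ((v.u) / (u.u)) * u."""
    return (v @ u) / (u @ u) * u

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

p = project(v, u)   # component of v along u
residual = v - p    # perpendicular component

# The residual is orthogonal to u: its dot product with u is (numerically) zero.
assert abs(residual @ u) < 1e-12
print(p)  # [3. 0.]
```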

8

Define a linear transformation (or linear operator). List the two key properties it must satisfy and provide a simple example of a linear transformation in ℝ².

9

Discuss the importance of eigenvalues and eigenvectors in understanding linear transformations and their applications in dimensionality reduction techniques like Principal Component Analysis (PCA).
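A small PCA-flavoured sketch (synthetic data, fixed seed) showing the eigendecomposition step: the eigenvectors of the covariance matrix give the principal directions, and the eigenvalues give the variance along each:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data: the second coordinate roughly tracks the first.
x = rng.normal(size=200)
X = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=200)])

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(cov)

# The largest eigenvalue captures almost all the variance: the data is
# nearly one-dimensional, so one principal component suffices.
assert eigvals[-1] / eigvals.sum() > 0.99
```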

10

Describe the Null Space (Kernel) and the Column Space (Image) of a matrix. Explain their significance in understanding the properties of a linear transformation.
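A NumPy sketch for this question: for a deliberately rank-deficient matrix, the rank gives the dimension of the column space, and the right singular vectors with zero singular value span the null space (rank-nullity ties the two together):

```python
import numpy as np

# A rank-deficient 3x3 matrix: the third column is the sum of the first two,
# so the columns are linearly dependent.
A = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 2.0],
])

rank = np.linalg.matrix_rank(A)  # dimension of the column space
# Null-space basis from the SVD: rows of Vt past the rank correspond to
# (numerically) zero singular values.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T         # shape (3, 3 - rank)

# Every null-space vector is mapped to zero by A.
assert np.allclose(A @ null_basis, 0.0)
# Rank-nullity theorem: rank + nullity = number of columns.
assert rank + null_basis.shape[1] == 3
print(rank)  # 2
```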

11

How can tensors be seen as a generalization of scalars, vectors, and matrices? Provide examples of where higher-order tensors are used in deep learning.
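A quick NumPy sketch of the scalar → vector → matrix → higher-order-tensor hierarchy, ending with a typical deep-learning example (an image mini-batch; the shape values are illustrative):

```python
import numpy as np

scalar = np.array(3.5)     # 0-D tensor (rank 0)
vector = np.zeros(4)       # 1-D tensor (rank 1)
matrix = np.zeros((4, 5))  # 2-D tensor (rank 2)

# A common higher-order tensor in deep learning: a mini-batch of RGB images,
# laid out as (batch, height, width, channels) -- a 4-D tensor.
image_batch = np.zeros((32, 224, 224, 3))

print(scalar.ndim, vector.ndim, matrix.ndim, image_batch.ndim)  # 0 1 2 4
```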

12

What is a basis for a vector space? How does it differ from a spanning set? Illustrate with an example in ℝ².

13

Explain the geometric interpretation of the L2 norm and its relation to Euclidean distance. How is it applied in machine learning for tasks like classification?

14

Discuss how L1 and L2 regularization (Lasso and Ridge regression) utilize these norms to prevent overfitting in machine learning models.
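For the L2 (Ridge) side of this question, a small sketch of the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy on synthetic data, showing that a larger penalty λ shrinks the weight vector (Lasso has no closed form, so it is omitted here):

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w_small = ridge_weights(X, y, lam=0.01)
w_large = ridge_weights(X, y, lam=100.0)

# The L2 penalty pulls the weights toward zero as lambda grows,
# which is what combats overfitting.
assert np.linalg.norm(w_large) < np.linalg.norm(w_small)
```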

15

Describe the effects of common linear transformations such as scaling, rotation, and reflection on vectors in ℝ². Provide the corresponding transformation matrices.
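A sketch of the standard 2×2 matrices this question asks for, verified on simple test vectors:

```python
import numpy as np

theta = np.pi / 2  # 90-degree counter-clockwise rotation

scale = np.array([[2.0, 0.0],
                  [0.0, 2.0]])                        # uniform scaling by 2
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])  # rotation by theta
reflect_x = np.array([[1.0,  0.0],
                      [0.0, -1.0]])                   # reflection across the x-axis

v = np.array([1.0, 0.0])
assert np.allclose(scale @ v, [2.0, 0.0])                        # stretched
assert np.allclose(rotate @ v, [0.0, 1.0])                       # turned 90 degrees
assert np.allclose(reflect_x @ np.array([1.0, 1.0]), [1.0, -1.0])  # flipped
```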

16

Define the general concept of a vector norm. Explain its purpose and list three properties that any valid vector norm must satisfy.

17

Explain the concept of an invertible linear transformation and its associated matrix. Under what conditions is a square matrix invertible?
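A minimal numerical check of the invertibility condition this question targets: a square matrix is invertible exactly when its determinant is nonzero (equivalently, when it has full rank):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # det = 2*1 - 1*1 = 1, so A is invertible

assert not np.isclose(np.linalg.det(A), 0.0)
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))  # A A^{-1} = I

# A singular matrix has determinant 0 and no inverse:
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is twice the first (rank 1)
assert np.isclose(np.linalg.det(B), 0.0)
```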

18

What are the common operations performed on vectors and matrices relevant to machine learning? Illustrate with simple examples for at least four operations.
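A compact NumPy sketch of five of the basic operations this question covers, with the expected results worked out by hand in the comments:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

v_sum = a + b   # vector addition: [4, 6]
scaled = 2 * a  # scalar multiplication: [2, 4]
dot = a @ b     # dot product: 1*3 + 2*4 = 11
Av = A @ a      # matrix-vector product: [1+4, 3+8] = [5, 11]
At = A.T        # transpose: rows become columns

print(v_sum, scaled, dot, Av)
```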

19

What is the rank of a matrix? How does it relate to the concepts of column space and null space?

20

What is the difference between an orthogonal set of vectors and an orthonormal set of vectors? Why are orthonormal bases particularly useful in linear algebra and machine learning?
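A sketch of one reason orthonormal bases are convenient: after orthonormalizing via QR factorization (Gram-Schmidt in effect), coordinates in the new basis are just dot products:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# QR factorization orthonormalizes the columns of A.
Q, R = np.linalg.qr(A)

# The columns of Q are orthonormal: Q^T Q is the identity.
assert np.allclose(Q.T @ Q, np.eye(2))

# For a vector in the column space, its coordinates in the orthonormal
# basis are simply Q^T v -- no linear system to solve.
v = A[:, 0]
coords = Q.T @ v
assert np.allclose(Q @ coords, v)
```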