Matrix multiplication properties
1. Not commutative: A × B ≠ B × A in general.
2. Associative: (A × B) × C = A × (B × C).
e.g. For A × B where A is an m × n matrix and B is an n × m matrix,
A × B is an m × m matrix,
B × A is an n × n matrix.
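A quick NumPy check of both properties; the matrices below are arbitrary examples of mine, not from the course.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

# Not commutative: swapping the factors changes the product.
print(np.array_equal(A @ B, B @ A))              # False
# Associative: the grouping of the factors does not matter.
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True
```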
Identity matrix
Denoted as I or I_{n × n}.
e.g. the 2 × 2 identity matrix is
[ 1 0
  0 1 ]
For any matrix A, A × I = I × A = A.
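A minimal NumPy sketch of the identity property, with a made-up matrix A; np.eye(n) builds the n × n identity.

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])  # a 2 x 3 matrix

# The identity has to match the neighbouring dimension of A.
print(np.array_equal(A @ np.eye(3), A))  # True
print(np.array_equal(np.eye(2) @ A, A))  # True
```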
Matrix-vector multiplication
e.g. a 3 × 2 matrix times a 2 × 1 vector gives a 3 × 1 vector;
in general, an m × n matrix times an n × 1 vector gives an m × 1 vector.
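The dimension rule can be checked directly from NumPy shapes; the arrays here are placeholders.

```python
import numpy as np

A = np.ones((3, 2))  # 3 x 2 matrix
x = np.ones((2, 1))  # 2 x 1 vector

print((A @ x).shape)  # (3, 1): an m x n matrix times an n x 1 vector is m x 1
```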
Addition: add the matrices element-wise; both must have the same dimensions.
Scalar multiplication: multiply every entry of the matrix by the scalar.
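Both operations work entry by entry, as a short NumPy sketch shows (the values are arbitrary):

```python
import numpy as np

A = np.array([[1, 0], [2, 5]])
B = np.array([[4, 1], [0, 3]])

print(A + B)  # element-wise sum; A and B must have the same dimensions
print(3 * A)  # scalar multiplication: every entry is multiplied by 3
```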
Matrix
Matrix: rectangular array of numbers
Dimension of matrix: number of rows × number of columns
A_ij: the entry in the i-th row and j-th column of A
e.g.
A = [ 1 2 3
      4 5 6 ]
dimension: 2 × 3, also written A ∈ ℝ^(2 × 3)
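One pitfall when moving from the notes to code: the course indexes entries from 1, while NumPy indexes from 0. A small sketch using the example matrix above:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)   # (2, 3): 2 rows, 3 columns
print(A[0, 1])   # 2, i.e. A_12 in the course's 1-based notation
```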
Vector
Vector: an n × 1 matrix
y_i: the i-th element of the vector y
e.g.
y = [ 1
      2
      3 ]
dimension: 3-dimensional vector, also written y ∈ ℝ^3
1-indexed vector: elements are y_1, y_2, y_3 (the convention used in the course)
0-indexed vector: elements are y_0, y_1, y_2
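The same indexing caveat applies to vectors; NumPy is always 0-indexed. A minimal sketch with the example vector above:

```python
import numpy as np

y = np.array([1, 2, 3])  # a 3-dimensional vector

print(y[0])  # first element: y_1 in 1-indexed notation, y_0 in 0-indexed
print(y[2])  # last element: y_3 in 1-indexed notation, y_2 in 0-indexed
```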
Gradient descent algorithm
repeat until convergence {
    θ_j := θ_j − α · ∂/∂θ_j J(θ_0, θ_1)    (for j = 0 and j = 1)
}
α: learning rate
a := b: assignment, i.e. assigning the value of b to a
Simultaneous update
temp0 := θ_0 − α · ∂/∂θ_0 J(θ_0, θ_1)
temp1 := θ_1 − α · ∂/∂θ_1 J(θ_0, θ_1)
θ_0 := temp0
θ_1 := temp1
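A sketch of one simultaneous-update step in Python; the gradient functions dJ_dtheta0 and dJ_dtheta1 are hypothetical stand-ins for the two partial derivatives of J.

```python
def gradient_step(theta0, theta1, alpha, dJ_dtheta0, dJ_dtheta1):
    # Evaluate both partial derivatives at the *current* parameters
    # before overwriting either one.
    temp0 = theta0 - alpha * dJ_dtheta0(theta0, theta1)
    temp1 = theta1 - alpha * dJ_dtheta1(theta0, theta1)
    # Updating theta0 first and then using it to compute theta1
    # would be a (subtly wrong) sequential update.
    return temp0, temp1
```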
Gradient descent for linear regression
repeat until convergence {
    θ_0 := θ_0 − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))
    θ_1 := θ_1 − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i)) · x^(i)
}
(θ_0 and θ_1 are updated simultaneously, as above)
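A runnable NumPy sketch of these two update rules; the learning rate, iteration count, and toy data are my own choices for illustration.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=1000):
    """Fit h(x) = theta0 + theta1 * x by batch gradient descent."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        error = (theta0 + theta1 * x) - y        # h(x^(i)) - y^(i)
        temp0 = theta0 - alpha * error.sum() / m
        temp1 = theta1 - alpha * (error * x).sum() / m
        theta0, theta1 = temp0, temp1            # simultaneous update
    return theta0, theta1

# Toy data lying on the line y = 1 + 2x; the fit should recover it.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(gradient_descent(x, y))  # approximately (1.0, 2.0)
```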
Linear regression: solve a minimisation problem
minimise J(θ_0, θ_1) over θ_0 and θ_1
cost function: J(θ_0, θ_1) = (1/2m) · Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))²
m: number of training examples
x: input variables / features
y: output variables / target variables
(x, y): single training example
(x^(i), y^(i)): i-th training example
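The cost function translates almost line for line into code; a minimal sketch with arbitrary data.

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """J(theta0, theta1) = (1/2m) * sum of squared prediction errors."""
    m = len(y)
    predictions = theta0 + theta1 * x  # h_theta(x^(i)) for every example
    return ((predictions - y) ** 2).sum() / (2 * m)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(cost(0.0, 1.0, x, y))  # 0.0: the hypothesis h(x) = x fits exactly
```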
Workflow: training set → learning algorithm → hypothesis h; h maps an input x (e.g. the size of a house) to an estimated price y.
Hypothesis: h_θ(x) = θ_0 + θ_1 · x
θ_0, θ_1: parameters
Linear regression in one variable = univariate linear regression
Unsupervised learning: the data comes with no labels, and the algorithm has to find structure in it on its own.
Clustering algorithms, with example applications: grouping stories on Google News, social network analysis, market segmentation.
Supervised learning
Regression: predict a continuous-valued output (e.g. price).
Example: housing price prediction.
Classification: predict a discrete-valued output (e.g. zero or one).
Example: predicting whether a tumour is benign or malignant.
Support vector machine: an algorithm that can work with an infinite number of features.
I have started these threads to share what I have learnt from the Stanford Machine Learning course on Coursera.
I highly recommend that you take the course to learn more about machine learning.