Linear Regression With Multiple Variables
Notation:
$n$: number of features
$x^{(i)}$: the input (features) of the $i^{th}$ training example
$x^{(i)}_j$: the value of feature $j$ in the $i^{th}$ training example
Hypothesis:
$h_\theta (x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \cdots + \theta_n x_n$
Defining $x_0 = 1$ for convenience of notation, the hypothesis can be written in vectorized form:

$\begin{align*}
h_\theta(x) =
\begin{bmatrix}
\theta_0 & \theta_1 & \cdots & \theta_n
\end{bmatrix}
\begin{bmatrix}
x_0 \newline
x_1 \newline
\vdots \newline
x_n
\end{bmatrix}
= \theta^T x
\end{align*}$
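
As a quick sketch of the vectorized hypothesis (assuming NumPy; the parameter and feature values below are made up for illustration), $h_\theta(x)$ is just a single dot product once $x_0 = 1$ is prepended to the feature vector:

```python
import numpy as np

# Hypothetical parameters theta_0, theta_1, theta_2
theta = np.array([1.0, 2.0, 3.0])

# One training example: x_0 = 1 prepended, followed by features x_1, x_2
x = np.array([1.0, 4.0, 5.0])

# h_theta(x) = theta^T x
h = theta @ x
print(h)  # 1*1 + 2*4 + 3*5 = 24.0
```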