Machine Learning

From Code Self Study Wiki

Machine Learning

To Research

Coursera Course

  • m = number of training examples
  • x = input variable
  • y = output/target variable

Notation

Colon-equals is for assignment:

a := b

An equals sign is a truth assertion:

a = b

The Greek lowercase letter \( \alpha \) (alpha) is the learning rate.

m is the number of rows of data.
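As a quick illustration (not from the course), the notes' \( := \) corresponds to plain assignment in Python, while the notes' \( = \) (truth assertion) corresponds to the equality comparison `==`:

```python
# ":=" in the notes means assignment; in Python that is "=".
a = 5       # a := 5
b = a       # b := a

# "=" in the notes asserts equality; in Python that is "==".
print(a == b)  # True: the assertion holds
print(a == 6)  # False: the assertion fails
```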

Hypothesis

Cost Function

Linear Regression

Hypothesis: \( h_{\theta}(x)=\theta_{0}+\theta_{1}x \)

Parameters: \( \theta_{0} \) and \(\theta_{1} \).

Cost function: \( J(\theta_{0},\theta_{1})=\frac{1}{2m}\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^{2} \)

Goal: \( \underset{\theta_0,\theta_1}{\text{minimize}}~J(\theta_{0},\theta_{1}) \)
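The hypothesis and cost function above can be sketched in Python with NumPy (a minimal illustration; the toy data set is made up):

```python
import numpy as np

def hypothesis(theta0, theta1, x):
    """h_theta(x) = theta0 + theta1 * x"""
    return theta0 + theta1 * x

def cost(theta0, theta1, x, y):
    """J(theta0, theta1) = (1 / 2m) * sum of squared errors."""
    m = len(y)
    errors = hypothesis(theta0, theta1, x) - y
    return np.sum(errors ** 2) / (2 * m)

# Toy data (made up for illustration): y = 2x exactly.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

print(cost(0.0, 2.0, x, y))  # perfect fit, so J = 0.0
print(cost(0.0, 0.0, x, y))  # (4 + 16 + 36) / (2 * 3) ≈ 9.33
```

Minimizing \( J \) means finding the \( \theta_{0},\theta_{1} \) that make this returned value as small as possible.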

Gradient Descent

Gradient descent algorithm (repeat until convergence, for \( j=0 \) and \( j=1 \)): \( \theta_{j}:=\theta_{j}-\alpha\frac{\partial}{\partial\theta_{j}}J(\theta_{0},\theta_{1}) \)

Simultaneous Update

Be sure to use simultaneous update. Example:

\( \begin{aligned} temp0 &:= \theta_{0}-\alpha\frac{\partial}{\partial\theta_{0}}J(\theta_{0},\theta_{1}) \\ temp1 &:= \theta_{1}-\alpha\frac{\partial}{\partial\theta_{1}}J(\theta_{0},\theta_{1}) \\ \theta_{0} &:= temp0 \\ \theta_{1} &:= temp1 \end{aligned} \)
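For linear regression the partial derivatives work out to \( \frac{\partial}{\partial\theta_{0}}J = \frac{1}{m}\sum(h_{\theta}(x^{(i)})-y^{(i)}) \) and \( \frac{\partial}{\partial\theta_{1}}J = \frac{1}{m}\sum(h_{\theta}(x^{(i)})-y^{(i)})x^{(i)} \), so one simultaneous-update step can be sketched in Python like this (toy data and the learning rate 0.1 are made up for illustration):

```python
import numpy as np

def gradient_descent_step(theta0, theta1, x, y, alpha):
    """One simultaneous update of theta0 and theta1 for
    univariate linear regression, h(x) = theta0 + theta1 * x."""
    m = len(y)
    errors = (theta0 + theta1 * x) - y
    # Compute both updates from the *old* parameter values...
    temp0 = theta0 - alpha * np.sum(errors) / m
    temp1 = theta1 - alpha * np.sum(errors * x) / m
    # ...then assign both at once (the simultaneous update).
    return temp0, temp1

# Toy data: y = 2x, so the minimum is near theta0 = 0, theta1 = 2.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
theta0, theta1 = 0.0, 0.0
for _ in range(1000):
    theta0, theta1 = gradient_descent_step(theta0, theta1, x, y, alpha=0.1)
print(theta0, theta1)  # approaches theta0 ≈ 0.0, theta1 ≈ 2.0
```

Computing both `temp` values before assigning either one is the point: updating \( \theta_{0} \) first and then using the new value inside the \( \theta_{1} \) update would be an incorrect (non-simultaneous) implementation.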

More notes from the course:

      1. Basics
  • Supervised Learning
    • Regression problems -- "trying to predict results within a *continuous* output" e.g., predicting price based on house size
    • Classification problems -- trying to map input variables into *discrete* categories, e.g., whether a house sells for more or less than the asking price
  • Unsupervised Learning
    • Clustering articles into groups based on similarity
    • Associative -- like a doctor associating possible illnesses based on what has been seen in previous patients
      2. Linear Regression, One Variable
  • Linear regression -> continuous expected result function
  • Univariate linear regression -> for single output from single input value
  • Hypothesis function: \( h_{\theta}(x)=\theta_{0}+\theta_{1}x \)
  • Cost function: \( J(\theta_{0},\theta_{1})=\frac{1}{2m}\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^{2} \) -- measures the accuracy of the hypothesis function
  • Goal: \( \underset{\theta_0,\theta_1}{\text{minimize}}~J(\theta_{0},\theta_{1}) \)
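The clustering idea from the unsupervised-learning notes above can be illustrated with a minimal k-means sketch; the toy data, the choice of k, and the naive initialization are assumptions for illustration, not from the course:

```python
import numpy as np

def kmeans_1d(points, k, iterations=20):
    """Minimal 1-D k-means: alternate between assigning each point
    to its nearest centroid and moving each centroid to the mean
    of its assigned points."""
    centroids = points[:k].astype(float).copy()  # naive init: first k points
    for _ in range(iterations):
        # Assign each point to the nearest centroid.
        labels = np.argmin(np.abs(points[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its cluster (if non-empty).
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean()
    return centroids, labels

# Two obvious groups, around 1 and around 10.
points = np.array([0.9, 1.0, 1.1, 9.9, 10.0, 10.1])
centroids, labels = kmeans_1d(points, k=2)
print(sorted(centroids))  # ≈ [1.0, 10.0]
```

No output labels \( y \) are given here: the grouping emerges from similarity alone, which is what distinguishes this from the supervised regression examples above.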