
(CS229) Lecture 1: Notes on Gradient Descent and the Normal Equation Derivation



1 Regression and classification

We call the learning problem a regression problem if the target variable we are trying to predict is continuous; when the target variable can take on only a small number of discrete values, we call it a classification problem.

 

2 gradient descent

$$J(\theta) = \frac{1}{2}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2,\qquad \theta_j := \theta_j - \alpha\,\frac{\partial}{\partial\theta_j}J(\theta)$$

alpha is the learning rate. This update is performed simultaneously for all values of j = 0, ..., n.

For a single training example, this gives the update rule (the LMS or least mean squares update rule, also known as the Widrow-Hoff learning rule):

$$\theta_j := \theta_j + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}$$

:= means "set to": the value on the right is assigned to the variable on the left, rather than asserting that the two sides are equal.

Note that for linear regression, J(theta) is a convex quadratic function (a bowl shape), so gradient descent always converges to the global optimum (there is a single global optimum and no other local optima), provided the learning rate is not too large.
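
A minimal NumPy sketch of this batch update for linear regression (the toy data, the learning rate, and the iteration count below are illustrative assumptions, not values from the lecture):

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.02, n_iters=5000):
    """Batch gradient descent with the LMS update for linear regression.

    X: (m, n+1) design matrix whose first column is all ones (intercept term)
    y: (m,) vector of targets
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        predictions = X @ theta              # h_theta(x^(i)) for every example
        gradient = X.T @ (predictions - y)   # gradient of J(theta) = 1/2 * sum of squared errors
        theta -= alpha * gradient            # simultaneous update of all theta_j
    return theta

# toy usage: data generated from y = 1 + 2*x
X = np.c_[np.ones(5), np.arange(5.0)]
y = 1.0 + 2.0 * np.arange(5.0)
print(batch_gradient_descent(X, y))          # approximately [1. 2.]
```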

 

3 Batch gd and stochastic gd (gd = gradient descent)

Batch means that every step looks at every example in the entire training set.

But sometimes you will encounter a really large training set. In that case you should use another algorithm, stochastic gradient descent (also called incremental gradient descent), which updates theta using one training example at a time. It may never "converge" to the minimum, and the parameters theta will keep oscillating around the minimum of J(theta).
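
A sketch of the stochastic variant, for contrast (the random shuffling on each pass and the fixed number of epochs are assumptions for illustration):

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, n_epochs=50, seed=0):
    """Stochastic (incremental) gradient descent: one training example per update."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_epochs):
        for i in rng.permutation(m):           # visit the examples in random order
            error = y[i] - X[i] @ theta        # y^(i) - h_theta(x^(i))
            theta += alpha * error * X[i]      # LMS update from a single example
    return theta
```

Letting alpha decrease over time is a common way to damp the oscillation around the minimum mentioned above.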

 

4 Matrix derivatives (definition and some useful facts)

For a function $f:\mathbb{R}^{m\times n}\to\mathbb{R}$, define the derivative of $f$ with respect to the matrix $A$ as

$$\nabla_A f(A) = \begin{bmatrix}\frac{\partial f}{\partial A_{11}} & \cdots & \frac{\partial f}{\partial A_{1n}}\\ \vdots & \ddots & \vdots\\ \frac{\partial f}{\partial A_{m1}} & \cdots & \frac{\partial f}{\partial A_{mn}}\end{bmatrix}$$

and the trace of a square matrix as $\operatorname{tr}A = \sum_i A_{ii}$, which satisfies $\operatorname{tr}AB = \operatorname{tr}BA$ and $\operatorname{tr}ABC = \operatorname{tr}CAB = \operatorname{tr}BCA$.

Some useful facts:

$$\nabla_A \operatorname{tr}AB = B^T,\qquad \nabla_{A^T} f(A) = (\nabla_A f(A))^T,\qquad \nabla_A \operatorname{tr}ABA^TC = CAB + C^TAB^T,\qquad \nabla_A|A| = |A|(A^{-1})^T$$
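
A quick numerical sanity check of two of these facts (the random matrices and the finite-difference gradient check are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# tr AB = tr BA
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))    # True

# gradient of f(A) = tr(AB) with respect to A is B^T, checked by finite differences
eps = 1e-6
grad = np.zeros_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        E = np.zeros_like(A)
        E[i, j] = eps
        grad[i, j] = (np.trace((A + E) @ B) - np.trace((A - E) @ B)) / (2 * eps)
print(np.allclose(grad, B.T))                           # True
```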

With the definitions and identities above, it follows that:

$$\nabla_\theta J(\theta) = \nabla_\theta\,\frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y}) = X^TX\theta - X^T\vec{y}$$

This is also why the factor of 1/2 disappears. So to minimize J, we set its derivative to zero and obtain the normal equations:

$$X^TX\theta = X^T\vec{y}$$

Thus, the value of theta that minimizes J(theta) is given in closed form by the equation 

$$\theta = (X^TX)^{-1}X^T\vec{y}$$
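
A NumPy sketch of this closed-form solution on the same toy data as above (np.linalg.solve is applied to the system X^T X theta = X^T y rather than forming the inverse explicitly, which is the numerically preferable way to evaluate this formula):

```python
import numpy as np

# same toy data as the gradient-descent sketch: y = 1 + 2*x, first column is the intercept
X = np.c_[np.ones(5), np.arange(5.0)]
y = 1.0 + 2.0 * np.arange(5.0)

# theta = (X^T X)^{-1} X^T y, computed by solving (X^T X) theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)   # approximately [1. 2.], matching the gradient-descent result
```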

 

 

 

 
