
Machine Learning Notes (Washington University) - Classification Specialization - Week 3

Date: 2017-05-13 09:49:32

Tags: values, use, read, one, study notes, return, str, gre, step

1. Quality metric

The quality metric for a decision tree is the classification error:

error = (number of incorrect predictions) / (number of examples)
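The error formula above is a one-liner in code; a minimal sketch (the function name is illustrative, not from the course):

```python
def classification_error(predictions, labels):
    """Fraction of examples whose prediction disagrees with the true label."""
    incorrect = sum(1 for p, y in zip(predictions, labels) if p != y)
    return incorrect / len(labels)

# 2 of the 4 predictions are wrong, so the error is 0.5.
print(classification_error([1, 1, -1, 1], [1, -1, -1, -1]))  # 0.5
```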

 

2. Greedy algorithm

Procedure

Step 1: Start with an empty tree

Step 2: Select a feature to split data

Explanation:

  Split the data on each feature

  Calculate the classification error of the resulting decision stump

  Choose the feature with the lowest error

For each split of the tree:

  Step 3: If all data points in the node have the same y value,

      or if all the features have already been used, stop.

  Step 4: Otherwise, go to step 2 and continue on this split
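The feature-selection step (Step 2 above) can be sketched as follows. This is a minimal illustration, assuming `data` is a list of `(feature_dict, label)` pairs with categorical features; the names are my own, not the course's:

```python
def best_feature(data, features):
    """Pick the feature whose decision stump has the lowest classification error.

    data: list of (feature_dict, label) pairs; features: list of feature names.
    """
    best, best_error = None, float("inf")
    n = len(data)
    for f in features:
        # Group the examples by the value they take on feature f.
        groups = {}
        for x, y in data:
            groups.setdefault(x[f], []).append(y)
        # Each leaf of the stump predicts its majority class; count the mistakes.
        mistakes = 0
        for labels in groups.values():
            majority = max(set(labels), key=labels.count)
            mistakes += sum(1 for y in labels if y != majority)
        error = mistakes / n
        if error < best_error:
            best, best_error = f, error
    return best, best_error

data = [({"credit": "good", "term": "3y"}, 1),
        ({"credit": "bad",  "term": "3y"}, -1),
        ({"credit": "good", "term": "5y"}, 1),
        ({"credit": "bad",  "term": "5y"}, 1)]
print(best_feature(data, ["credit", "term"]))  # ('credit', 0.25)
```

Ties are broken in favor of the feature seen first; a real implementation would also handle empty groups and missing values.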

Algorithm

predict(tree_node, input)

if current tree_node is a leaf:

  return majority class of data points in leaf

else:

  next_node = child node of tree_node whose feature value agrees with input

  return predict(next_node, input)
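A runnable version of this recursion, assuming a simple nested-dict tree representation (my own choice of encoding, not the course's):

```python
def predict(tree_node, x):
    """Walk the tree until a leaf, then return the leaf's majority class.

    Internal nodes: {"feature": name, "children": {value: subtree}}.
    Leaves: {"leaf": True, "prediction": majority_class}.
    """
    if tree_node.get("leaf"):
        return tree_node["prediction"]
    value = x[tree_node["feature"]]           # this input's value on the split feature
    next_node = tree_node["children"][value]  # child whose branch agrees with the input
    return predict(next_node, x)

# Example stump: split on "credit"; both branches are leaves.
tree = {
    "feature": "credit",
    "children": {
        "good": {"leaf": True, "prediction": 1},
        "bad":  {"leaf": True, "prediction": -1},
    },
}
print(predict(tree, {"credit": "bad"}))  # -1
```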

3. Threshold split

Threshold splitting is for continuous inputs:

we simply pick a threshold value for the continuous feature and split the data on it.

Procedure:

Step 1: Sort the values of a feature hj(x): {v1, v2, ..., vN}

Step 2: For i = 1 ... N-1 (all adjacent pairs of sorted values):

      consider the split ti = (vi + vi+1)/2

      compute the classification error of the split

    choose the ti with the lowest classification error
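The threshold search above can be sketched as follows, assuming each candidate split predicts the majority class on each side (an illustrative sketch; names are my own):

```python
def best_threshold(values, labels):
    """Find the midpoint threshold with the lowest classification error.

    values: continuous feature values; labels: class labels (e.g. +1/-1).
    """
    pairs = sorted(zip(values, labels))
    best_t, best_error = None, float("inf")
    n = len(pairs)
    for i in range(n - 1):
        t = (pairs[i][0] + pairs[i + 1][0]) / 2  # midpoint of adjacent sorted values
        left = [y for v, y in pairs if v <= t]
        right = [y for v, y in pairs if v > t]
        # Each side predicts its majority class; count the mistakes.
        mistakes = 0
        for side in (left, right):
            if side:
                majority = max(set(side), key=side.count)
                mistakes += sum(1 for y in side if y != majority)
        error = mistakes / n
        if error < best_error:
            best_t, best_error = t, error
    return best_t, best_error

# Two well-separated clusters: the best threshold lands between 3 and 10.
print(best_threshold([1, 2, 3, 10, 11, 12], [-1, -1, -1, 1, 1, 1]))  # (6.5, 0.0)
```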

 


Original post: http://www.cnblogs.com/climberclimb/p/6848037.html
