
DTU DeepLearning: exercise 6


Hi everyone, I'm a little confused about 6.1_exercise_4, specifically the relationship between network choice (FFNN, CNN, RNN) and input types (image, margin, shape, texture). As I understand it, a CNN is a type of FFNN, and image classification can be done with a CNN. What about the other three features? Does each feature have a special relationship with a particular network structure? I just don't get the point. Thanks in advance for your help.
 
The way I see it is that you use the CNN to preprocess the image and save the output from that as features. Same for the RNN but with other features. Finally, you collect all the features and feed them through one or more linear layers.
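To make that reply concrete, here is a minimal sketch in PyTorch of such a combined model: the image goes through a small CNN, one of the vector features (shape, picked here just as an example) is treated as a sequence and passed through a GRU, and the remaining vectors are concatenated directly before one or more linear layers. All names and sizes (1x32x32 images, 64-dimensional margin/shape/texture vectors, 99 classes) are placeholder assumptions, not the exercise's exact specification.

```python
# Sketch only: combine a CNN branch (image), an RNN branch (one vector treated
# as a sequence), and raw feature vectors, then fuse with linear layers.
# All dimensions below are assumed placeholders.
import torch
import torch.nn as nn

class CombinedNet(nn.Module):
    def __init__(self, n_classes=99, feat_dim=64, hidden=128):
        super().__init__()
        # CNN branch: image -> compact feature vector
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # -> (batch, 32, 1, 1)
            nn.Flatten(),                         # -> (batch, 32)
        )
        # RNN branch: treat the shape vector as a length-64 sequence of scalars
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        # Fusion head: concatenated features -> class scores
        self.head = nn.Sequential(
            nn.Linear(32 + hidden + 2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, image, margin, shape, texture):
        img_feat = self.cnn(image)                 # (batch, 32)
        _, h = self.rnn(shape.unsqueeze(-1))       # (batch, 64) -> (batch, 64, 1)
        seq_feat = h.squeeze(0)                    # (batch, hidden)
        fused = torch.cat([img_feat, seq_feat, margin, texture], dim=1)
        return self.head(fused)

# Quick shape check with random data
net = CombinedNet()
out = net(torch.randn(4, 1, 32, 32), torch.randn(4, 64),
          torch.randn(4, 64), torch.randn(4, 64))
print(out.shape)  # torch.Size([4, 99])
```

The point of the design is just what the reply says: each branch turns its input type into a feature vector, and the final linear layers see all of them at once.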

Does batch_size have any effect on result quality? How do we set the optimal batch size and number of iterations?
1. The batch size a: the number of training examples fed into the neural network at once.
2. The iterations b: how many times we feed a batch into the NN (per epoch).
a * b = the total number of training examples, as the small example below illustrates.
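A quick worked example of that arithmetic (the dataset size of 990 is just an assumed number for illustration):

```python
import math

n_examples = 990          # assumed dataset size, for illustration only
batch_size = 32           # a: examples fed to the network per step

# b: iterations needed to pass over the whole dataset once (one epoch)
iterations_per_epoch = math.ceil(n_examples / batch_size)
print(iterations_per_epoch)               # 31
print(batch_size * iterations_per_epoch)  # 992 >= n_examples (last batch is smaller)
```

On the quality question: the batch size does affect training dynamics. Larger batches give smoother gradient estimates but fewer parameter updates per epoch, while smaller batches are noisier and often generalize at least as well, so it is usually treated as a hyperparameter to tune rather than something with a single optimal value.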

Original source: https://www.cnblogs.com/dulun/p/11698471.html
