
[Keras] mnist with cnn


A typical convolutional neural network.

 

  • Keras loads the data with zero effort: it downloads, decompresses, and loads automatically.
  • # X_train:
array([[[[ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         ..., 
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.]]],

       ..., 

       [[[ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         ..., 
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.],
         [ 0.,  0.,  0., ...,  0.,  0.,  0.]]]], dtype=float32)
  • # y_train:
array([5, 0, 4, ..., 5, 6, 8], dtype=uint8)

The labels, however, need to be one-hot encoded to serve as the output: np_utils.to_categorical(y_train, nb_classes). A short sketch follows the examples below.

  • # Y_train:
Y_train[0]
Out[56]: array([ 0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.])

Y_train[1]
Out[57]: array([ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])

Y_train[2]
Out[58]: array([ 0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.])
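For reference, a minimal sketch of the two steps above, loading and one-hot encoding, in one place (the shape comments are illustrative):

from keras.datasets import mnist
from keras.utils import np_utils

# downloads (on first run), decompresses and loads in a single call
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)   # (60000, 28, 28)
print(y_train[:3])     # [5 0 4]

# one-hot encode the integer labels for a softmax output layer
Y_train = np_utils.to_categorical(y_train, 10)
print(Y_train[0])      # digit 5 --> 1.0 at index 5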

 

Code: reading raw images

#coding:utf-8

import os
from PIL import Image
import numpy as np

# Read the 42000 images in the folder ./mnist. The images are grayscale,
# hence a single channel. For color input, replace the 1 with 3 and change
# data[i,:,:,:] = arr to data[i,:,:,:] = [arr[:,:,0], arr[:,:,1], arr[:,:,2]].
def load_data():
    data = np.empty((42000,1,28,28), dtype="float32")
    label = np.empty((42000,), dtype="uint8")

    imgs = os.listdir("./mnist")
    num = len(imgs)
    for i in range(num):
        img = Image.open("./mnist/" + imgs[i])
        arr = np.asarray(img, dtype="float32")
        data[i,:,:,:] = arr
        label[i] = int(imgs[i].split('.')[0])
    return data, label
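A hedged usage sketch for load_data (the ./mnist folder layout, with the label as the filename prefix before the first dot, and the normalization step are assumptions based on the comments above):

# assumes ./mnist holds 42000 grayscale files named like "<label>.<id>.png"
data, label = load_data()
data /= 255.0                     # scale pixels to [0,1], as the training scripts below do
print(data.shape, label.shape)    # (42000, 1, 28, 28) (42000,)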

 


 

Code: a Multilayer Perceptron

import numpy as np
np.random.seed(1337) # for reproducibility
 
import os
from keras.datasets import mnist    # downloads automatically

# the usual imports
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
from keras.utils import np_utils

batch_size = 128  # number of images used in each optimization step
nb_classes = 10   # one class per digit
nb_epoch = 12     # number of times the whole data is used to learn

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Flatten the data; an MLP doesn't use the 2D structure of the data. 784 = 28*28
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)

# Make the values floats in [0;1] instead of ints in [0;255] --> normalization
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255

# Display the shapes to check that everything is OK
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices (i.e. one-hot vectors)
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

# Define the model architecture
model = Sequential()

########################################################################################
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))               # last layer with one output per class
model.add(Activation('softmax'))   # we want a score similar to a probability for each class
########################################################################################

# Use rmsprop to do the gradient descent, see
# http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
# and http://cs231n.github.io/neural-networks-3/#ada
rms = RMSprop()
# The function to optimize is the cross entropy between the true label and the output (softmax) of the model
model.compile(loss='categorical_crossentropy', optimizer=rms, metrics=["accuracy"])

# Make the model learn --> training
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=2, validation_data=(X_test, Y_test))

# Evaluate how the model does on the test set
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
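Once trained, getting a prediction is one call; a short sketch (the test index is arbitrary):

# class probabilities for the first test image, shape (1, 10)
probs = model.predict(X_test[:1])
print('predicted digit:', probs.argmax())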

 

Code: a Convolutional Neural Network

import numpy as np
np.random.seed(1337) # for reproducibility
 
import os
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
 
batch_size = 128
nb_classes = 10
nb_epoch = 12
 
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3
 
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
 
#Add the depth in the input. Only grayscale so depth is only one
#see http://cs231n.github.io/convolutional-networks/#overview
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test  = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
 
#Make the values floats in [0;1] instead of ints in [0;255]
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255

#Display the shapes to check that everything is OK
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
 
# convert class vectors to binary class matrices (ie one-hot vectors)
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
 
##############################################################################################
model = Sequential()

# For an explanation of conv layers see http://cs231n.github.io/convolutional-networks/#conv
# By default the stride/subsample is 1
# border_mode "valid" means no zero-padding.
# If you want zero-padding add a ZeroPadding layer or, if the stride is 1, use border_mode="same"
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))

# For an explanation of pooling layers see http://cs231n.github.io/convolutional-networks/#pool
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))

# Flatten the 3D output to a 1D tensor so a fully connected layer can accept the input
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))       # last layer with one output per class
model.add(Activation('softmax'))   # we want a score similar to a probability for each class
###############################################################################################

# The function to optimize is the cross entropy between the true label and the output (softmax) of the model
# We will use adadelta to do the gradient descent, see http://cs231n.github.io/neural-networks-3/#ada
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])

# Make the model learn
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))

# Evaluate how the model does on the test set
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
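A quick sanity check that the layer shapes line up: with two 'valid' 3x3 convs (28 -> 26 -> 24) and one 2x2 pooling (24 -> 12), Flatten should hand 32*12*12 = 4608 values to Dense(128). model.summary() prints this per layer:

model.summary()   # Flatten should report output shape (None, 4608)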

 

  

Another convolution example:

#coding:utf-8
 
'''
    GPU run command:
        THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python cnn.py
    CPU run command:
        python cnn.py
'''
# import the modules and components we need
from __future__ import absolute_import
from __future__ import print_function
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.advanced_activations import PReLU
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD, Adadelta, Adagrad
from keras.utils import np_utils, generic_utils
from six.moves import range
from data import load_data
import random
import numpy as np
 
np.random.seed(1024)  # for reproducibility
 
# load the data
data, label = load_data()
# shuffle the data
index = [i for i in range(len(data))]
random.shuffle(index)
data = data[index]
label = label[index]
print(data.shape[0], 'samples')

# The labels are the 10 classes 0~9; Keras wants binary class matrices,
# so convert them with the function Keras provides
label = np_utils.to_categorical(label, 10)

###############
# build the CNN model
###############

# create a model
model = Sequential()

# [First conv layer] 4 kernels, each 5*5. The 1 is the number of channels of the
# input image (grayscale has 1 channel).
# border_mode can be 'valid' or 'full', see http://blog.csdn.net/niuwei22007/article/details/49366745
# The activation is tanh.
# You can also apply dropout after model.add(Activation('tanh')): model.add(Dropout(0.5))
model.add(Convolution2D(4, 5, 5, border_mode='valid', input_shape=(1,28,28)))
model.add(Activation('tanh'))

# [Second conv layer] 8 kernels, each 3*3. The 4 input feature maps equal the
# number of kernels in the previous layer.
# tanh activation, then max pooling with pool size (2,2)
model.add(Convolution2D(8, 3, 3, border_mode='valid'))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# [Third conv layer] 16 kernels, each 3*3 (relu here), then max pooling with pool size (2,2)
model.add(Convolution2D(16, 3, 3, border_mode='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# [Fully connected layer] First flatten the 2D feature maps from the previous layer to 1D.
# Dense is the hidden layer. 16 is the number of feature maps output by the previous layer;
# 4 follows from the conv layers: (28-5+1) gives 24, (24-3+1)/2 gives 11, (11-3+1)/2 gives 4.
# The fully connected layer has 128 neurons, initialized with 'normal'.
model.add(Flatten())
model.add(Dense(128, init='normal'))
model.add(Activation('tanh'))

# [Softmax classifier] with 10 output classes
model.add(Dense(10, init='normal'))
model.add(Activation('softmax'))

##############
# train the model
##############

# Use SGD + momentum; the loss argument of model.compile is the loss (objective) function
sgd = SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=["accuracy"])

# Call fit to train: 10 epochs with batch_size 100.
# shuffle=True shuffles the data; verbose=1 controls training logging (0, 1 or 2 all work);
# accuracy is reported every epoch via metrics=["accuracy"];
# validation_split=0.2 holds out 20% of the data as a validation set.
model.fit(data, label, batch_size=100, nb_epoch=10, shuffle=True, verbose=1, validation_split=0.2)
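The size arithmetic in the comment above, checked step by step (this is a sketch, not part of the original script; integer division mirrors the 2*2 pooling):

s = 28
s = s - 5 + 1          # 24 after the 5*5 conv
s = (s - 3 + 1) // 2   # 11 after the 3*3 conv and 2*2 pooling
s = (s - 3 + 1) // 2   # 4 after the next 3*3 conv and 2*2 pooling
print(16 * s * s)      # 256 values reach Dense(128) after Flatten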

 


Original article: http://www.cnblogs.com/jesse123/p/6240079.html
