
MXNet: Deep Learning Networks


Sample output from the LeNet script below:

('X shape: ', (28, 28, 1), 'X dtype', <type 'numpy.uint8'>, 'y:', 2, 'Y dtype', dtype('int32'))

[[[[-0.41935483 -0.41935483 -0.41935483 ... -0.41935483 -0.41935483
    -0.41935483]
   [-0.41935483 -0.41935483 -0.41935483 ... -0.41935483 -0.41935483
    -0.41935483]
   ...
   [-0.41935483 -0.41935483 -0.41935483 ... -0.41935483 -0.41935483
    -0.41935483]]]

 ...

 [[[-0.41935483 -0.41935483 -0.41935483 ... -0.41935483 -0.41935483
    -0.41935483]
   ...
   [-0.41935483 -0.41935483 -0.41935483 ... -0.41935483 -0.41935483
    -0.41935483]]]]
<NDArray 200x1x28x28 @cpu(0)>
[8 3 9 ... 4 3 0]
<NDArray 200 @cpu(0)>
Sequential(
  (0): Conv2D(None -> 6, kernel_size=(5, 5), stride=(1, 1))
  (1): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
  (2): Conv2D(None -> 16, kernel_size=(3, 3), stride=(1, 1))
  (3): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
  (4): Flatten
  (5): Dense(None -> 120, Activation(relu))
  (6): Dense(None -> 84, Activation(relu))
  (7): Dense(None -> 10, linear)
)
.
.
.
.
.
.
.
.
.
.
Epoch 9: Loss: 0.007, Train acc 0.001, Test acc 0.004, Time 15.3 sec
Epoch 9: Loss: 0.007, Train acc 0.001, Test acc 0.008, Time 15.4 sec
...
Epoch 9: Loss: 0.007, Train acc 0.001, Test acc 0.213, Time 17.2 sec
Epoch 9: Loss: 0.007, Train acc 0.001, Test acc 0.217, Time 17.3 sec

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 10 16:13:29 2018
@author: myhaspl
"""

from mxnet import nd, gluon, init, autograd
from mxnet.gluon import nn
from mxnet.gluon.data.vision import datasets, transforms
import matplotlib.pyplot as plt
from time import time

mnist_train = datasets.FashionMNIST(train=True)
X, y = mnist_train[0]
print ('X shape: ', X.shape, 'X dtype', X.dtype, 'y:', y, 'Y dtype', y.dtype)
# X: (height, width, channel)
# y: numpy scalar, the label
text_labels = [
    't-shirt', 'trouser', 'pullover', 'dress', 'coat',
    'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot'
]
X, y = mnist_train[0:6]  # take six samples

_, figs = plt.subplots(1, X.shape[0], figsize=(15, 15))
for f, x, yi in zip(figs, X, y):
    # 3D->2D by removing the last channel dim
    f.imshow(x.reshape((28,28)).asnumpy())
    ax = f.axes
    ax.set_title(text_labels[int(yi)])
    ax.title.set_fontsize(20)
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
# transforms.ToTensor converts each image to (channel, height, width) layout
# with a floating-point dtype; transforms.Normalize then normalizes all pixel
# values with mean 0.13 and standard deviation 0.31.
transformer = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(0.13, 0.31)])
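# A quick sanity check of the normalization arithmetic: ToTensor maps a
# background pixel 0 (uint8) to 0.0, and Normalize(0.13, 0.31) maps that to
# (0.0 - 0.13) / 0.31 ≈ -0.4193548 -- exactly the value filling the batch
# dump shown above.
sample_img, _ = mnist_train[0]
print transformer(sample_img).min()  # ≈ -0.41935483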
# Transform only the first element (the image); the second element is the label.
mnist_train = mnist_train.transform_first(transformer)
# Load the data in batches.
batch_size = 200
train_data = gluon.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
# Read one batch.
i = 1
for data, label in train_data:
    print i
    print data, label
    break  # without this break, the loop keeps reading batches of 200 samples
mnist_valid = gluon.data.vision.FashionMNIST(train=False)
valid_data = gluon.data.DataLoader(mnist_valid.transform_first(transformer), batch_size=batch_size)
# Define the network: a LeNet-style CNN.
net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, activation="relu"),   # 28x28 -> 24x24
        nn.MaxPool2D(pool_size=2, strides=2),                      # 24x24 -> 12x12
        nn.Conv2D(channels=16, kernel_size=3, activation="relu"),  # 12x12 -> 10x10
        nn.MaxPool2D(pool_size=2, strides=2),                      # 10x10 -> 5x5
        nn.Flatten(),                                              # 16*5*5 = 400
        nn.Dense(120, activation="relu"),
        nn.Dense(84, activation="relu"),
        nn.Dense(10))
net.initialize(init=init.Xavier())
print net
# Softmax output and loss.
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
# Define the trainer.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

def acc(output, label):
    # output: (batch, num_output) float32 ndarray
    # label: (batch, ) int32 ndarray
    return (output.argmax(axis=1) == label.astype('float32')).mean().asscalar()

for epoch in range(10):
    train_loss, train_acc, valid_acc = 0., 0., 0.
    tic = time()
    for data, label in train_data:
        # forward and backward
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        # update the parameters on this batch, then move to the next one
        trainer.step(batch_size)
        # accumulate training loss and accuracy
        train_loss += loss.mean().asscalar()
        train_acc += acc(output, label)
    print "."

    # validation accuracy
    for data, label in valid_data:
        valid_acc += acc(net(data), label)
    print("Epoch %d: Loss: %.3f, Train acc %.3f, Test acc %.3f, Time %.1f sec" % (
        epoch, train_loss / len(train_data),
        train_acc / len(train_data),
        valid_acc / len(valid_data), time() - tic))
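
With training done, a quick spot check is to run the trained LeNet on a few validation images and compare predictions against the true labels. Below is a minimal sketch, assuming the script above has just run so that net, transformer, mnist_valid, and text_labels are still in scope:

# Spot-check sketch: predict labels for six validation images
images, labels = mnist_valid[0:6]
batch = nd.stack(*[transformer(img) for img in images])  # shape (6, 1, 28, 28)
preds = net(batch).argmax(axis=1)
print [text_labels[int(p.asscalar())] for p in preds]  # predicted labels
print [text_labels[int(l)] for l in labels]            # true labels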

#!/usr/bin/env python2
# -*- coding: utf-8 -*-

# A non-deep (shallow) network for comparison
from mxnet import gluon
from mxnet import ndarray as nd
import matplotlib.pyplot as plt
import mxnet as mx
from mxnet import autograd

def transform(data, label):
    return data.astype('float32')/255, label.astype('float32')

mnist_train = gluon.data.vision.FashionMNIST(train=True, transform=transform)
mnist_test = gluon.data.vision.FashionMNIST(train=False, transform=transform)

def show_images(images):
    n = images.shape[0]
    _, figs = plt.subplots(1, n, figsize=(15, 15))
    for i in range(n):
        figs[i].imshow(images[i].reshape((28, 28)).asnumpy())
        figs[i].axes.get_xaxis().set_visible(False)
        figs[i].axes.get_yaxis().set_visible(False)
    plt.show()

def get_text_labels(label):
    text_labels = [
        't-shirt', 'trouser', 'pullover', 'dress', 'coat',
        'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot'
    ]
    return [text_labels[int(i)] for i in label]

data, label = mnist_train[0:10]

print('example shape: ', data.shape, 'label:', label)
show_images(data)
print(get_text_labels(label))

batch_size = 256
train_data = gluon.data.DataLoader(mnist_train, batch_size, shuffle=True)
test_data = gluon.data.DataLoader(mnist_test, batch_size, shuffle=False)

# Define the model: a shallow MLP.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Flatten())
    net.add(gluon.nn.Dense(256, activation="relu"))
    net.add(gluon.nn.Dense(10))
net.initialize()

softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()

# Define the trainer.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.5})

def accuracy(output, label):
    return nd.mean(output.argmax(axis=1) == label).asscalar()

def _get_batch(batch):
    if isinstance(batch, mx.io.DataBatch):
        data = batch.data[0]
        label = batch.label[0]
    else:
        data, label = batch
    return data, label

def evaluate_accuracy(data_iterator, net):
    acc = 0.
    if isinstance(data_iterator, mx.io.MXDataIter):
        data_iterator.reset()
    for i, batch in enumerate(data_iterator):
        data, label = _get_batch(batch)
        output = net(data)
        acc += accuracy(output, label)
    return acc / (i+1)

for epoch in range(5):
    train_loss = 0.
    train_acc = 0.
    for data, label in train_data:
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(batch_size)  # let the trainer take one optimization step

        train_loss += nd.mean(loss).asscalar()
        train_acc += accuracy(output, label)

    test_acc = evaluate_accuracy(test_data, net)
    print("Epoch %d. Loss: %f, Train acc %f, Test acc %f" % (
        epoch, train_loss/len(train_data), train_acc/len(train_data), test_acc))

data, label = mnist_test[0:10]
show_images(data)
print('true labels')
print(get_text_labels(label))

predicted_labels = net(data).argmax(axis=1)
print('predicted labels')
print(get_text_labels(predicted_labels.asnumpy()))
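
To make the "non-deep" label concrete, one can count this MLP's parameters. A minimal sketch, where count_params is a small helper written here for illustration (it assumes the net has already seen a forward pass, so Gluon's deferred shapes are resolved):

# Hypothetical helper: total number of scalar parameters in a Gluon network
def count_params(net):
    return sum(p.data().size for p in net.collect_params().values())

print count_params(net)  # MLP: 784*256 + 256 + 256*10 + 10 = 203,530

Interestingly, this shallow MLP carries about 203k parameters versus roughly 60k for the LeNet-style CNN above, so "non-deep" refers to the number of layers, not the model size.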


Original article: http://blog.51cto.com/13959448/2316672
