Learning the Keras Deep Learning Library with the MNIST Dataset
Learning approach
The process is simple (haha): read the official documentation alongside the examples, and look up blog posts for anything unclear; some experts have written excellent summaries!
Building a simple model for training and prediction
Code first:

```python
#!/usr/bin/env python
# coding=utf-8
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.datasets import mnist
from keras.utils import np_utils
# Prepare the training and test data
(x_train, y_train), (x_test, y_test) = mnist.load_data()  # Keras's built-in MNIST loader (downloads the data on first use)
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1] * x_train.shape[2]).astype('float32')
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1] * x_test.shape[2]).astype('float32')
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
x_train = x_train / 255
x_test = x_test / 255
# Build the model with Keras's Sequential API
model = Sequential()
# The first layer must specify the input dimension (shape)
layers1 = Dense(784, input_dim=784, kernel_initializer='normal', activation='relu')
model.add(layers1)
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10, kernel_initializer='normal', activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])  # cross-entropy loss
model.fit(x_train, y_train, validation_data=(x_test, y_test), batch_size=200, epochs=10, shuffle=True, verbose=2)
print('test set...')
scores = model.evaluate(x_test, y_test, verbose=0)
print(scores)
print('=' * 30)
print("MLP4MNIST Error: %.2f%%" % (100 - scores[1] * 100))  # print the test error rate
# Save the model architecture and the trained weights
json_string = model.to_json()
open('baseline4MNIST_model.json', 'w').write(json_string)
model.save_weights('baseline4MNIST_weights.h5')
```
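As an aside, Keras can also bundle the architecture, weights, and optimizer state into a single HDF5 file via `model.save` and restore it with `keras.models.load_model`; a minimal sketch (the filename here is my own choice):

```python
from keras.models import load_model

model.save('baseline4MNIST.h5')          # architecture + weights + optimizer state (requires h5py)
model = load_model('baseline4MNIST.h5')  # restore everything in one call
```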
A few notes:
- The inputs must be normalized (`x_train = x_train / 255`). Without normalization the loss stays very large during training and accuracy only reaches about 40% (I don't yet understand the exact mechanism); a small sketch of the preprocessing follows this list.
- The model has two fully connected hidden layers, with 784 and 512 neurons respectively.
- The model architecture and the trained weights are saved in `json` and `h5` format, respectively.
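For illustration, here is what the two preprocessing steps do on a toy batch (a standalone sketch using only NumPy; the pixel values are made up):

```python
import numpy as np

# Two fake 3-pixel "images" standing in for MNIST rows (values 0-255)
x = np.array([[0, 128, 255], [64, 32, 16]], dtype='float32')
print(x / 255)  # normalization: every value now lies in [0, 1]

# One-hot encoding, equivalent to np_utils.to_categorical(y, 10)
y = np.array([3, 7])
print(np.eye(10)[y])  # row i has a 1 in column y[i]
```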
The training output is as follows:

```
Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 784) 615440
_________________________________________________________________
dropout_1 (Dropout) (None, 784) 0
_________________________________________________________________
dense_2 (Dense) (None, 512) 401920
_________________________________________________________________
activation_1 (Activation) (None, 512) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 1,022,490
Trainable params: 1,022,490
Non-trainable params: 0
_________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
27s - loss: 0.2584 - acc: 0.9225 - val_loss: 0.1032 - val_acc: 0.9675
Epoch 2/10
27s - loss: 0.0997 - acc: 0.9695 - val_loss: 0.0724 - val_acc: 0.9773
Epoch 3/10
29s - loss: 0.0646 - acc: 0.9796 - val_loss: 0.0714 - val_acc: 0.9771
Epoch 4/10
29s - loss: 0.0515 - acc: 0.9833 - val_loss: 0.0655 - val_acc: 0.9800
Epoch 5/10
29s - loss: 0.0379 - acc: 0.9876 - val_loss: 0.0679 - val_acc: 0.9787
Epoch 6/10
28s - loss: 0.0324 - acc: 0.9891 - val_loss: 0.0636 - val_acc: 0.9820
Epoch 7/10
27s - loss: 0.0264 - acc: 0.9912 - val_loss: 0.0783 - val_acc: 0.9782
Epoch 8/10
27s - loss: 0.0264 - acc: 0.9912 - val_loss: 0.0646 - val_acc: 0.9822
Epoch 9/10
27s - loss: 0.0225 - acc: 0.9923 - val_loss: 0.0692 - val_acc: 0.9819
Epoch 10/10
29s - loss: 0.0207 - acc: 0.9925 - val_loss: 0.0636 - val_acc: 0.9815
test set...
[0.063634108908620687, 0.98150000000000004]
==============================
MLP4MNIST Error: 1.85%
```
`loss` and `acc` are the loss value and accuracy on the training set; `val_loss` and `val_acc` are the same metrics on the test set.
The error rate on the test set is 1.85%; of course, handwritten digit recognition can now be done with accuracy approaching 100%.
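If you want these curves rather than just the console log, note that `fit` returns a `History` object whose `history` dict holds the same per-epoch values; a minimal sketch, continuing from the training script above:

```python
import matplotlib.pyplot as plt

# Capture the return value of the fit call shown earlier
# (re-running fit continues training from the current weights)
history = model.fit(x_train, y_train, validation_data=(x_test, y_test),
                    batch_size=200, epochs=10, verbose=2)
plt.plot(history.history['acc'], label='train acc')
plt.plot(history.history['val_acc'], label='val acc')
plt.xlabel('epoch')
plt.legend()
plt.show()
```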
Building a CNN model for training and prediction
The model structure follows the figure below (a LeNet-style architecture):
(image source: linked in the original post)
Again, code first:

```python
#!/usr/bin/env python
# coding=utf-8
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Reshape
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras.datasets import mnist
from keras.optimizers import SGD
# Prepare the training and test data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255
x_test = x_test / 255
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
print(x_train.shape)
print(y_train.shape)
# Build the CNN with Keras's Sequential API
model = Sequential()
model.add(Reshape((28, 28, 1), input_shape=(28, 28)))  # add a channel axis for TensorFlow's channels-last layout
model.add(Conv2D(6, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(16, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(120, activation='relu'))
model.add(Dense(84, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=50, epochs=5, verbose=2)
score = model.evaluate(x_test, y_test, batch_size=100, verbose=1)
print('=' * 30)
print("CNN Score: %.2f%%" % (score[1] * 100))  # print the test accuracy
# Save the model architecture and the trained weights
json_string = model.to_json()
open('CNN4MNIST_model.json', 'w').write(json_string)
model.save_weights('CNN4MNIST_weights.h5')
```
- The (28, 28) input is first reshaped to (28, 28, 1), following TensorFlow's channels-last tensor layout.
- The CNN consists of two convolutional layers, two max-pooling layers, two fully connected layers, and a softmax output layer; the `Output Shape` column in the summary below can be checked with simple arithmetic, as sketched after this list.
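With 'valid' padding, a 5x5 convolution shrinks each spatial side by 4 and a 2x2 max-pool halves it; a quick sketch of the shape arithmetic behind the summary:

```python
def conv_out(size, kernel):
    # 'valid' convolution, stride 1
    return size - kernel + 1

s = 28
s = conv_out(s, 5)  # conv2d_1:        28 -> 24
s //= 2             # max_pooling2d_1: 24 -> 12
s = conv_out(s, 5)  # conv2d_2:        12 -> 8
s //= 2             # max_pooling2d_2:  8 -> 4
print(s * s * 16)   # flatten_1: 4 * 4 * 16 = 256 features
```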
The training output is as follows:

```
Using TensorFlow backend.
(60000, 28, 28)
(60000, 10)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape_1 (Reshape) (None, 28, 28, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 24, 24, 6) 156
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 6) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 8, 8, 16) 2416
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 16) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 4, 4, 16) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 120) 30840
_________________________________________________________________
dense_2 (Dense) (None, 84) 10164
_________________________________________________________________
dense_3 (Dense) (None, 10) 850
=================================================================
Total params: 44,426
Trainable params: 44,426
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
41s - loss: 0.3166 - acc: 0.8970
Epoch 2/5
40s - loss: 0.0979 - acc: 0.9686
Epoch 3/5
40s - loss: 0.0731 - acc: 0.9769
Epoch 4/5
40s - loss: 0.0624 - acc: 0.9807
Epoch 5/5
41s - loss: 0.0536 - acc: 0.9825
9900/10000 [============================>.] - ETA: 0s==============================
CNN Score: 98.64%
```
Comparison: the CNN's test accuracy is slightly higher than the simple MLP's (98.64% vs 98.15%), but the experiments below show the CNN's real advantage: stronger generalization.
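The accuracy gap looks small, but the two `model.summary()` outputs above also show that the CNN gets there with far fewer parameters (a quick calculation):

```python
mlp_params = 1022490  # "Total params" from the MLP summary
cnn_params = 44426    # "Total params" from the CNN summary
print(mlp_params / cnn_params)  # the CNN uses roughly 23x fewer parameters
```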
Prediction tests on digit images
Reload the previously saved model and trained weights, and run predictions on input images.
Test code:

```python
#!/usr/bin/env python
# coding=utf-8
# from keras.models import load_model
from keras.models import model_from_json
import numpy as np
import sys
from scipy import misc
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
print('*' * 50)
print('Usage: python *.py modelName.json weightsName.h5 img.jpg')
print('*' * 50)
def rgb2gray(rgb):
    # ITU-R BT.601 luma weights for RGB -> grayscale conversion
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
# Load the saved model architecture and the trained weights
model = model_from_json(open(sys.argv[1]).read())
model.load_weights(sys.argv[2])
# model.summary()
# Preprocess the input image
img_path = sys.argv[3]
img = mpimg.imread(img_path)
print('original img shape:', img.shape)
imgGray = rgb2gray(img)
img_newsize = misc.imresize(imgGray, [28, 28])
plt.imshow(img_newsize, cmap=plt.get_cmap('PuBuGn_r'))
img_newsize = img_newsize.reshape(1, 28, 28).astype('float32')
x = img_newsize / 255
# Predict
preds2 = model.predict_classes(x)
print('*' * 50)
print('Predicted:', preds2[0])
plt.show()
```
- The code above tests the CNN model; when testing the MLP model, the input must be reshaped to (1, 784), as sketched below.
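A minimal sketch of the MLP variant of the preprocessing (standalone; `img_newsize` is stubbed with random pixels so the snippet runs on its own):

```python
import numpy as np

# Stand-in for the 28x28 grayscale array produced by the script above
img_newsize = np.random.randint(0, 256, (28, 28)).astype('float32')

# The MLP expects a flat 784-vector instead of a (1, 28, 28) image
x = img_newsize.reshape(1, 784) / 255
```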
Test images (shown in the original post)
Test results:
nums | MLP | CNN
---|---|---
0 | 4 | 7
1 | 0 | 5
2 | 0 | 5
4 | 4 | 5
7 | 1 | 5
9 | 9 | 5
33 | 8 | 5
55 | 5 | 5
77 | 1 | 5
99 | 1 | 5
555 | 5 | 5
- Although both the CNN and the MLP reach >98% accuracy on the MNIST training and test sets, these prediction results fall far short of that.
- The input data were probably not preprocessed properly: these images differ greatly from the training and test data. In the input images the blank regions are mostly 255, while in the training and test sets they are 0. To be verified in a later experiment... still learning...
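One quick way to check this hypothesis is to compare pixel statistics before and after inversion (a standalone sketch with a made-up white-background image):

```python
import numpy as np

# MNIST digits are white strokes (255) on a black background (0);
# a scanned or photographed digit is usually the opposite.
img = np.full((28, 28), 255, dtype='float32')  # stand-in: blank white page
img[10:18, 12:16] = 0                          # a dark "stroke"
print(img.mean())          # high: mostly bright background
print((255 - img).mean())  # low after inversion, matching MNIST statistics
```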
Inverting the input image (`img = 255 - img`) and re-testing gives the following results:
nums | MLP | CNN
---|---|---
0 | 0 | 0
1 | 1 | 1
2 | 2 | 2
4 | 4 | 4
7 | 8 | 7
9 | 8 | 9
33 | 3 | 3
55 | 5 | 5
77 | 6 | 1
99 | 1 | 7
555 | 5 | 5
These predictions are much better than before, but still seem to fall short of the >98% accuracy seen in training (admittedly the test set here is only a handful of images), so there is more work to do... Even from these few results, though, the CNN clearly generalizes better than the MLP.
Still learning...
Summary
Keras is easy to pick up: the documentation is clear, the library is well structured and powerful, and wrapping TensorFlow as a backend makes it flexible, though it feels somewhat slow. The built-in datasets (`CIFAR10`, `IMDB`, `MNIST`, `Boston housing prices`, and so on) are very convenient, and pretrained architectures are also available. I have only just started and will keep learning...
First set a goal; who knows, it might even come true.
Thursday, 04 May 2017, 09:46 AM