This post walks through designing a convolutional neural network (CNN) using Keras.
Most deep-learning algorithms can be written in five simple steps with Keras:
importing the requirements, loading the data, designing the model, compiling the
model, and finally training the model.
The basic building blocks of a CNN are the convolution, rectification, and pooling operations.
Pooling operation: max pooling slides a small window (here 2x2) over each feature map and keeps only the maximum value in every window, shrinking the spatial resolution and adding a degree of translation invariance.
Rectification: a rectified linear unit (ReLU) applies relu(x) = max(0, x) elementwise, clamping negative activations to zero and introducing non-linearity.
The following code is largely self-explanatory; a small illustration of these two operations is given first, and each step of the program is marked with a comment header.
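Here is a minimal NumPy sketch of rectification and 2x2 max pooling on a toy 4x4 feature map (purely illustrative; Keras performs both internally in its Conv2D and MaxPooling2D layers):

import numpy as np

x = np.array([[ 1., -2.,  3., -4.],
              [-1.,  5., -6.,  7.],
              [ 2., -3.,  4., -5.],
              [-2.,  6., -7.,  8.]])

# Rectification (ReLU): clamp negative values to zero
relu = np.maximum(x, 0)

# 2x2 max pooling with stride 2: keep the max of each 2x2 block
pooled = relu.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[5. 7.]
               #  [6. 8.]]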
######### Step 0: Importing requirements ############
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import matplotlib.pyplot as plt
###### Step 1: Loading data and setting up training parameters##
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
# scale pixel values from [0, 255] to [0, 1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
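As a quick sanity check of what to_categorical does, a label such as 3 becomes a one-hot row vector:

print(keras.utils.to_categorical([3], num_classes=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]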
####### Step 2: Designing the model############
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
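At this point it can be helpful to inspect the architecture; calling model.summary() (optional, not required for training) prints one row per layer with its output shape and parameter count:

model.summary()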
##### Step 3: Compiling the Model ###########
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
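Keras also accepts string identifiers for the loss and optimizer, so an equivalent and slightly shorter call is:

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])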
######### Step 4: Training the Model ###########
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
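Once trained, the model can also be used for inference. As a minimal sketch (not part of the original listing), the following predicts class probabilities for the first test image and takes the argmax:

import numpy as np
probs = model.predict(x_test[:1])  # shape (1, 10): one probability per digit class
print('Predicted digit:', np.argmax(probs[0]))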
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
Output:
('x_train shape:', (60000, 1, 28, 28))
(60000, 'train samples')
(10000, 'test samples')
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 [==============================] - 2s - loss: 0.3321 - acc: 0.8987 - val_loss: 0.0747 - val_acc: 0.9758
Epoch 2/12
60000/60000 [==============================] - 2s - loss: 0.1112 - acc: 0.9673 - val_loss: 0.0495 - val_acc: 0.9840
Epoch 3/12
60000/60000 [==============================] - 2s - loss: 0.0845 - acc: 0.9749 - val_loss: 0.0436 - val_acc: 0.9849
Epoch 4/12
60000/60000 [==============================] - 2s - loss: 0.0692 - acc: 0.9796 - val_loss: 0.0384 - val_acc: 0.9865
Epoch 5/12
60000/60000 [==============================] - 2s - loss: 0.0622 - acc: 0.9819 - val_loss: 0.0347 - val_acc: 0.9874
Epoch 6/12
60000/60000 [==============================] - 2s - loss: 0.0549 - acc: 0.9836 - val_loss: 0.0367 - val_acc: 0.9874
Epoch 7/12
60000/60000 [==============================] - 2s - loss: 0.0499 - acc: 0.9848 - val_loss: 0.0338 - val_acc: 0.9887
Epoch 8/12
60000/60000 [==============================] - 2s - loss: 0.0456 - acc: 0.9863 - val_loss: 0.0298 - val_acc: 0.9895
Epoch 9/12
60000/60000 [==============================] - 2s - loss: 0.0436 - acc: 0.9871 - val_loss: 0.0309 - val_acc: 0.9898
Epoch 10/12
60000/60000 [==============================] - 2s - loss: 0.0406 - acc: 0.9881 - val_loss: 0.0303 - val_acc: 0.9900
Epoch 11/12
60000/60000 [==============================] - 2s - loss: 0.0403 - acc: 0.9884 - val_loss: 0.0302 - val_acc: 0.9896
Epoch 12/12
60000/60000 [==============================] - 2s - loss: 0.0372 - acc: 0.9889 - val_loss: 0.0289 - val_acc: 0.9906
('Test loss:', 0.027463116704608184)
('Test accuracy:', 0.99209999999999998)
['acc', 'loss', 'val_acc', 'val_loss']
Training and testing accuracy vs. epoch are shown in the following figure.
Training and testing loss vs. epoch are shown in the following figure.
