
GitHub Repository: better-data-science/TensorFlow
Path: blob/main/009_CNN_002_Image_Classification_With_ANN.ipynb
Kernel: Python 3 (ipykernel)

CNN 2 - Training image classification models with ANNs

import os
import pathlib
import pickle
import warnings

import numpy as np
import pandas as pd
import tensorflow as tf
from PIL import Image, ImageOps
from IPython.display import display
from sklearn.utils import shuffle

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
  • Let's load in an arbitrary image:

src_img = Image.open('data/train/cat/1.jpg')
display(src_img)
Image in a Jupyter notebook
  • Here's the shape (height, width, color channels):

np.array(src_img).shape
(281, 300, 3)
  • If flattened, it would result in this many features:

281 * 300 * 3
252900
  • We can reduce the number of features by a factor of 3 by grayscaling the image

  • We still know it's a cat, no matter if we lose the color info:

gray_img = ImageOps.grayscale(src_img)
display(gray_img)
Image in a Jupyter notebook
np.array(gray_img).shape
(281, 300)
281 * 300
84300
  • It's still a lot, so let's resize the image to something smaller

  • Let's say 96x96:

gray_resized_img = gray_img.resize(size=(96, 96))
display(gray_resized_img)
Image in a Jupyter notebook
np.array(gray_resized_img).shape
(96, 96)
  • Far fewer features:

96 * 96
9216
  • This is how you can flatten the image and store it as an array:

np.ravel(gray_resized_img)
array([42, 42, 51, ..., 68, 37, 36], dtype=uint8)
  • The values span the 0-255 range, which isn't ideal

  • Neural networks train better on inputs scaled to the 0-1 range

  • Let's transform it:

img_final = np.ravel(gray_resized_img) / 255.0
img_final
array([0.16470588, 0.16470588, 0.2 , ..., 0.26666667, 0.14509804, 0.14117647])
  • Finally, let's implement all of this in a single function:

def process_image(img_path: str) -> np.ndarray:
    # Load, grayscale, resize to 96x96, flatten, and scale to the [0, 1] range
    img = Image.open(img_path)
    img = ImageOps.grayscale(img)
    img = img.resize(size=(96, 96))
    img = np.ravel(img) / 255.0
    return img
  • And let's test it:

tst_img = process_image(img_path='data/validation/dog/10012.jpg')
tst_img
array([0.24705882, 0.23921569, 0.29019608, ..., 0.19215686, 0.20784314, 0.23921569])
Image.fromarray(np.uint8(tst_img * 255).reshape((96, 96)))
Image in a Jupyter notebook
  • It works as expected, so let's apply the same logic to the entire dataset next.


Process the entire dataset

  • Let's declare a function that will process all images in a given folder

  • The function returns processed images as a Pandas DataFrame

  • We'll add an additional column just so we know the class:

def process_folder(folder: pathlib.Path) -> pd.DataFrame:
    # We'll store the processed images here
    processed = []

    # For every image in the directory
    for img in folder.iterdir():
        # Ensure it's a JPG
        if img.suffix == '.jpg':
            # A couple of images fail to process, so let's just skip them
            try:
                processed.append(process_image(img_path=str(img)))
            except Exception:
                continue

    # Convert to pd.DataFrame and add a class column - dog or cat
    processed = pd.DataFrame(processed)
    processed['class'] = folder.parts[-1]
    return processed
  • And now let's build ourselves training, validation, and test sets

  • We'll start with the training set

    • Process both cat and dog images

    • Concatenate the two datasets

    • Save them in pickle format, so you don't have to go through the entire process again

%%time

train_cat = process_folder(folder=pathlib.Path.cwd().joinpath('data/train/cat'))
train_dog = process_folder(folder=pathlib.Path.cwd().joinpath('data/train/dog'))
train_set = pd.concat([train_cat, train_dog], axis=0)

with open('train_set.pkl', 'wb') as f:
    pickle.dump(train_set, f)
Wall time: 49.1 s
train_set.head()
train_set.shape
(20030, 9217)
  • Now for the test set:

%%time

test_cat = process_folder(folder=pathlib.Path.cwd().joinpath('data/test/cat'))
test_dog = process_folder(folder=pathlib.Path.cwd().joinpath('data/test/dog'))
test_set = pd.concat([test_cat, test_dog], axis=0)

with open('test_set.pkl', 'wb') as f:
    pickle.dump(test_set, f)
Wall time: 5.46 s
test_set.shape
(2490, 9217)
  • And finally for the validation set:

%%time

valid_cat = process_folder(folder=pathlib.Path.cwd().joinpath('data/validation/cat'))
valid_dog = process_folder(folder=pathlib.Path.cwd().joinpath('data/validation/dog'))
valid_set = pd.concat([valid_cat, valid_dog], axis=0)

with open('valid_set.pkl', 'wb') as f:
    pickle.dump(valid_set, f)
Wall time: 5.53 s
valid_set.shape
(2478, 9217)
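
  • Since all three sets are now pickled, you can skip the processing step in a later session; here's a minimal sketch of reloading them, assuming the .pkl files are in the working directory:

with open('train_set.pkl', 'rb') as f:
    train_set = pickle.load(f)
with open('valid_set.pkl', 'rb') as f:
    valid_set = pickle.load(f)
with open('test_set.pkl', 'rb') as f:
    test_set = pickle.load(f)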

Additional processing

  • Datasets now contain images of cats first, followed by images of dogs

  • We'll shuffle the training and validation sets so the neural network sees the images in random order (the test set's order doesn't matter for evaluation):

train_set = shuffle(train_set).reset_index(drop=True)
valid_set = shuffle(valid_set).reset_index(drop=True)
train_set.head()
  • Separate the features from the target:

X_train = train_set.drop('class', axis=1)
y_train = train_set['class']
X_valid = valid_set.drop('class', axis=1)
y_valid = valid_set['class']
X_test = test_set.drop('class', axis=1)
y_test = test_set['class']
  • We need to factorize the target variable

  • For example, if our classes are ['cat', 'dog'], factorize() converts them to the integers [0, 1]

  • After one-hot encoding with to_categorical(), each instance is then represented as follows:

    • Cat: [1, 0]

    • Dog: [0, 1]

y_train.factorize()
(array([0, 0, 1, ..., 0, 1, 0], dtype=int64), Index(['cat', 'dog'], dtype='object'))
y_train = tf.keras.utils.to_categorical(y_train.factorize()[0], num_classes=2)
y_valid = tf.keras.utils.to_categorical(y_valid.factorize()[0], num_classes=2)
y_test = tf.keras.utils.to_categorical(y_test.factorize()[0], num_classes=2)
y_train[:5]
array([[1., 0.],
       [1., 0.],
       [0., 1.],
       [0., 1.],
       [1., 0.]], dtype=float32)
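
  • One caveat: factorize() assigns integers in order of first appearance, so if a shuffled split happened to start with a dog image, its encoding would flip relative to the other splits; a minimal sketch of a safer, explicit mapping (the mapping dict is illustrative, not from the original notebook):

# Pin the class-to-integer mapping explicitly so every split agrees
mapping = {'cat': 0, 'dog': 1}

y_train = tf.keras.utils.to_categorical(train_set['class'].map(mapping), num_classes=2)
y_valid = tf.keras.utils.to_categorical(valid_set['class'].map(mapping), num_classes=2)
y_test = tf.keras.utils.to_categorical(test_set['class'].map(mapping), num_classes=2)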

Training the model

  • The architecture and hyperparameters below are arbitrary

  • Feel free to change them to whatever you want

  • We have two nodes at the output layer

    • Represents two classes - cat and dog

  • We're using categorical crossentropy as the loss function because the targets are one-hot encoded into the two categories - cat and dog

  • The model is trained for 100 epochs with a batch size of 128:

tf.random.set_seed(42)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2048, activation='relu'),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')
])

model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    # BinaryAccuracy works here; CategoricalAccuracy is the more conventional
    # choice for one-hot targets with a softmax output
    metrics=[tf.keras.metrics.BinaryAccuracy(name='accuracy')]
)

history = model.fit(
    X_train,
    y_train,
    epochs=100,
    batch_size=128,
    validation_data=(X_valid, y_valid)
)
Epoch 1/100
157/157 [==============================] - 3s 9ms/step - loss: 0.9690 - accuracy: 0.5320 - val_loss: 0.6741 - val_accuracy: 0.5771
Epoch 2/100
157/157 [==============================] - 1s 7ms/step - loss: 0.6696 - accuracy: 0.5899 - val_loss: 0.6660 - val_accuracy: 0.5936
[... epochs 3-99 omitted: training loss falls steadily to ~0.35 while validation loss climbs past 1.0 and validation accuracy plateaus around 0.60 ...]
Epoch 100/100
157/157 [==============================] - 1s 7ms/step - loss: 0.3494 - accuracy: 0.8333 - val_loss: 1.2802 - val_accuracy: 0.6029

Inspecting performance

  • It doesn't look like a great model, as fully connected ANNs aren't the best tool for raw image data

  • Let's visualize training loss vs. validation loss and training accuracy vs. validation accuracy

import matplotlib.pyplot as plt
from matplotlib import rcParams

rcParams['figure.figsize'] = (18, 8)
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
plt.plot(np.arange(1, 101), history.history['loss'], label='Training Loss')
plt.plot(np.arange(1, 101), history.history['val_loss'], label='Validation Loss')
plt.title('Training vs. Validation Loss', size=20)
plt.xlabel('Epoch', size=14)
plt.legend();
Image in a Jupyter notebook
plt.plot(np.arange(1, 101), history.history['accuracy'], label='Training Accuracy')
plt.plot(np.arange(1, 101), history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training vs. Validation Accuracy', size=20)
plt.xlabel('Epoch', size=14)
plt.legend();
Image in a Jupyter notebook
  • The performance is poor, and the diverging loss curves show the model overfits badly

  • Roughly 60% validation accuracy on a binary classifier is barely better than random guessing

  • Convolutions can help, and you'll see how in the following notebook
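
  • Before moving on, it's worth checking the test set we prepared but never used; a minimal sketch, assuming the trained model and the X_test/y_test variables from above are still in memory:

# Evaluate on the held-out test set (returns the loss and the compiled accuracy metric)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f'Test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}')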