This example shows how to do image classification from scratch, starting from JPEG image files on disk, without leveraging pre-trained weights or a pre-made Keras Application model. We demonstrate the workflow on the Kaggle Cats vs Dogs binary classification dataset.
We use the image_dataset_from_directory utility to generate the datasets, and we use Keras image preprocessing layers for image standardization and data augmentation.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pydot
import os
from PIL import Image
import matplotlib.pyplot as plt
from tensorflow.keras.utils import plot_model
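The download and extraction of the Kaggle Cats vs Dogs archive are not shown in this notebook. Assuming the archive has already been downloaded (the file name below is just a placeholder), extracting it might look like this:

import zipfile

# Hypothetical archive name; adjust to wherever the Kaggle Cats vs Dogs zip was saved.
with zipfile.ZipFile("kagglecatsanddogs.zip", "r") as archive:
    archive.extractall(".")  # should create the PetImages/ folder with Cat/ and Dog/ subfolders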
Now we have a PetImages folder which contains two subfolders, Cat and Dog. Each subfolder contains the image files for its category.
!ls PetImages
Cat Dog
When working with lots of real-world image data, corrupted images are a common occurrence. Let's filter out badly-encoded images that do not feature the string "JFIF" in their header.
num_skipped = 0
for folder_name in ("Cat", "Dog"):
    folder_path = os.path.join("PetImages", folder_name)
    for fname in os.listdir(folder_path):
        # Only consider image files (names ending in "g": .jpg / .jpeg / .png)
        if fname[-1].lower() == "g":
            fpath = os.path.join(folder_path, fname)
            try:
                fobj = open(fpath, "rb")
                is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
            finally:
                fobj.close()
            if not is_jfif:
                num_skipped += 1
                # Delete corrupted image
                os.remove(fpath)
print("Deleted %d images" % num_skipped)
Deleted 0 images
Dataset
image_size = (180, 180)
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    color_mode='grayscale',
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    color_mode='grayscale',
    validation_split=0.2,
    subset="validation",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
Found 23410 files belonging to 2 classes. Using 18728 files for training.
Found 23410 files belonging to 2 classes. Using 4682 files for validation.
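As a quick sanity check, the datasets returned by image_dataset_from_directory expose the inferred class names (in alphabetical folder order), which should confirm the label-index mapping described below:

# Class names are inferred from the subfolder names, in alphabetical order,
# so index 0 corresponds to "Cat" and index 1 to "Dog".
print(train_ds.class_names)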
Here are the first 9 images in the training dataset. As you can see, label 1 is "dog" and label 0 is "cat".
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        img = images[i].numpy().astype("uint8").squeeze()
        plt.imshow(img, cmap='gray', vmin=0, vmax=255)
        plt.title(int(labels[i]))
        plt.axis("off")
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting.
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ]
)
Let's visualize what the augmented samples look like, by applying data_augmentation repeatedly to the first image in the dataset:
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        img = augmented_images[0].numpy().astype("uint8").squeeze()
        plt.imshow(img, cmap='gray', vmin=0, vmax=255)
        plt.axis("off")
Our images are already of a standard size (180x180), as they are being yielded as contiguous float32 batches by our dataset. However, their pixel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, values can be standardized to the [0, 1] range by using a Rescaling layer at the start of the model.
There are two ways you could be using the data_augmentation preprocessor:
Option 1: Make it part of the model, like this:
inputs = keras.Input(shape=input_shape)
x = data_augmentation(inputs)
x = layers.Rescaling(1./255)(x)
... # Rest of the model
With this option, your data augmentation will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration.
Option 2: apply it to the dataset, so as to obtain a dataset that yields batches of augmented images, like this:
augmented_train_ds = train_ds.map(
    lambda x, y: (data_augmentation(x, training=True), y))
We would go with the first option here; note, however, that the simplified model defined below does not actually include the augmentation (or rescaling) step.
Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking:
train_ds = train_ds.prefetch(buffer_size=32)
val_ds = val_ds.prefetch(buffer_size=32)
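Since the model below does not include a Rescaling layer, one alternative (not applied in this notebook, shown only as a sketch) would be to normalize the pixel values at the dataset level, analogous to option 2 above:

# Hypothetical alternative, not used here: scale pixels from [0, 255] to [0, 1]
# directly in the tf.data pipeline instead of inside the model.
rescaled_train_ds = train_ds.map(lambda x, y: (x / 255.0, y))
rescaled_val_ds = val_ds.map(lambda x, y: (x / 255.0, y))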
We'll build a simple fully-connected model rather than the small version of the Xception network used in the original Keras example. We haven't particularly tried to optimize the architecture; if you want to do a systematic search for the best model configuration, consider using KerasTuner.
Note that, unlike the original example, this model does not start with the data_augmentation preprocessor and a Rescaling layer, and it has no Dropout layer before the final classification layer: the flattened raw pixel values go straight into the Dense layers.
def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)
    x = layers.Flatten()(inputs)
    h1 = layers.Dense(1000)(x)  # linear layer (no activation specified)
    h2 = layers.Dense(500, activation="relu")(h1)
    if num_classes == 2:
        activation = "sigmoid"
        units = 1
    else:
        activation = "softmax"
        units = num_classes
    outputs = layers.Dense(units, activation=activation)(h2)
    return keras.Model(inputs, outputs)
model = make_model(input_shape=image_size + (1,), num_classes=2)  # grayscale images have a single channel
keras.utils.plot_model(model, show_shapes=True)
epochs = 30
#callbacks = [
#    keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"),
#]
model.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(
    train_ds, epochs=epochs, validation_data=val_ds,
)
Epoch 1/30
586/586 [==============================] - 122s 205ms/step - loss: 125.9583 - accuracy: 0.5236 - val_loss: 81.3079 - val_accuracy: 0.4953
Epoch 2/30
586/586 [==============================] - 124s 211ms/step - loss: 55.1137 - accuracy: 0.5359 - val_loss: 37.4214 - val_accuracy: 0.5314
Epoch 3/30
586/586 [==============================] - 126s 215ms/step - loss: 52.3048 - accuracy: 0.5382 - val_loss: 10.2799 - val_accuracy: 0.5485
Epoch 4/30
586/586 [==============================] - 124s 211ms/step - loss: 38.4124 - accuracy: 0.5408 - val_loss: 11.1606 - val_accuracy: 0.5316
Epoch 5/30
586/586 [==============================] - 125s 213ms/step - loss: 17.8660 - accuracy: 0.5387 - val_loss: 92.5307 - val_accuracy: 0.5199
Epoch 6/30
586/586 [==============================] - 123s 210ms/step - loss: 59.4586 - accuracy: 0.5382 - val_loss: 11.6279 - val_accuracy: 0.5295
Epoch 7/30
586/586 [==============================] - 124s 211ms/step - loss: 39.5454 - accuracy: 0.5354 - val_loss: 16.4853 - val_accuracy: 0.5382
Epoch 8/30
586/586 [==============================] - 4456s 8s/step - loss: 21.2480 - accuracy: 0.5467 - val_loss: 31.7739 - val_accuracy: 0.5282
Epoch 9/30
586/586 [==============================] - 128s 218ms/step - loss: 64.4682 - accuracy: 0.5271 - val_loss: 96.2987 - val_accuracy: 0.4955
Epoch 10/30
586/586 [==============================] - 133s 227ms/step - loss: 14.9657 - accuracy: 0.5475 - val_loss: 6.1392 - val_accuracy: 0.5733
Epoch 11/30
586/586 [==============================] - 133s 226ms/step - loss: 70.0990 - accuracy: 0.5242 - val_loss: 39.0431 - val_accuracy: 0.5188
Epoch 12/30
586/586 [==============================] - 127s 216ms/step - loss: 27.6119 - accuracy: 0.5441 - val_loss: 187.4568 - val_accuracy: 0.5053
Epoch 13/30
586/586 [==============================] - 128s 218ms/step - loss: 99.5137 - accuracy: 0.5332 - val_loss: 32.4473 - val_accuracy: 0.5342
Epoch 14/30
586/586 [==============================] - 129s 219ms/step - loss: 16.5628 - accuracy: 0.5577 - val_loss: 11.3521 - val_accuracy: 0.5051
Epoch 15/30
586/586 [==============================] - 128s 218ms/step - loss: 9.9761 - accuracy: 0.5683 - val_loss: 4.8752 - val_accuracy: 0.5681
Epoch 16/30
586/586 [==============================] - 130s 221ms/step - loss: 51.6497 - accuracy: 0.5421 - val_loss: 87.3677 - val_accuracy: 0.5113
Epoch 17/30
586/586 [==============================] - 134s 228ms/step - loss: 10.0727 - accuracy: 0.5703 - val_loss: 15.7859 - val_accuracy: 0.5167
Epoch 18/30
586/586 [==============================] - 131s 224ms/step - loss: 59.4432 - accuracy: 0.5459 - val_loss: 14.2706 - val_accuracy: 0.5233
Epoch 19/30
586/586 [==============================] - 126s 215ms/step - loss: 32.5514 - accuracy: 0.5567 - val_loss: 93.7239 - val_accuracy: 0.5094
Epoch 20/30
586/586 [==============================] - 130s 221ms/step - loss: 24.2537 - accuracy: 0.5608 - val_loss: 7.3942 - val_accuracy: 0.6181
Epoch 21/30
586/586 [==============================] - 127s 216ms/step - loss: 8.3464 - accuracy: 0.5715 - val_loss: 4.1981 - val_accuracy: 0.5632
Epoch 22/30
586/586 [==============================] - 128s 218ms/step - loss: 64.6679 - accuracy: 0.5520 - val_loss: 8.9043 - val_accuracy: 0.5760
Epoch 23/30
586/586 [==============================] - 127s 216ms/step - loss: 6.0070 - accuracy: 0.5886 - val_loss: 10.2113 - val_accuracy: 0.5470
Epoch 24/30
586/586 [==============================] - 127s 216ms/step - loss: 214.5466 - accuracy: 0.5295 - val_loss: 23.9238 - val_accuracy: 0.5581
Epoch 25/30
586/586 [==============================] - 124s 211ms/step - loss: 18.8111 - accuracy: 0.5720 - val_loss: 40.3518 - val_accuracy: 0.5246
Epoch 26/30
586/586 [==============================] - 124s 210ms/step - loss: 12.7129 - accuracy: 0.5858 - val_loss: 24.8830 - val_accuracy: 0.5429
Epoch 27/30
586/586 [==============================] - 122s 208ms/step - loss: 29.3323 - accuracy: 0.5678 - val_loss: 29.9948 - val_accuracy: 0.4972
Epoch 28/30
586/586 [==============================] - 122s 208ms/step - loss: 13.7258 - accuracy: 0.5886 - val_loss: 15.2969 - val_accuracy: 0.5329
Epoch 29/30
586/586 [==============================] - 121s 206ms/step - loss: 35.5036 - accuracy: 0.5616 - val_loss: 69.4794 - val_accuracy: 0.5002
Epoch 30/30
586/586 [==============================] - 124s 211ms/step - loss: 16.3129 - accuracy: 0.5774 - val_loss: 10.7901 - val_accuracy: 0.5534
<keras.callbacks.History at 0x7fc146734550>
After 30 epochs, we're only at about 58% accuracy on the training set, and validation accuracy hovers around 55%, barely better than chance.
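To read the final numbers directly rather than off the training log, one could run a quick evaluation pass over the validation set (a minimal sketch, not executed above):

# Evaluate the trained model on the validation split.
val_loss, val_acc = model.evaluate(val_ds)
print("Validation accuracy: %.1f%%" % (100 * val_acc))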
Note that data augmentation and Dropout layers, when present in a model, are inactive at inference time.
img = keras.preprocessing.image.load_img(
    "bathtub_bear.jpg", target_size=image_size, color_mode="grayscale"
)
img_array = keras.preprocessing.image.img_to_array(img)
print(img_array.shape)
img_array = tf.expand_dims(img_array, 0)  # Create batch axis

predictions = model.predict(img_array)
score = float(predictions[0])
print(
    "This image is %.2f percent cat and %.2f percent dog."
    % (100 * (1 - score), 100 * score)
)
#"PetImages/Cat/6779.jpg"
(180, 180, 1)
This image is 0.00 percent cat and 100.00 percent dog.