This example shows how to do image classification from scratch, starting from JPEG image files on disk, without leveraging pre-trained weights or a pre-made Keras Application model. We demonstrate the workflow on the Kaggle Cats vs Dogs binary classification dataset.
We use the image_dataset_from_directory utility to generate the datasets, and we use Keras image preprocessing layers for image standardization and data augmentation.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pydot  # required by keras.utils.plot_model
import os
from PIL import Image
import matplotlib.pyplot as plt
Now we have a PetImages folder which contains two subfolders, Cat and Dog. Each subfolder contains image files for each category.
!ls PetImages
Cat Dog
When working with lots of real-world image data, corrupted images are a common occurrence. Let's filter out badly-encoded images that do not feature the string "JFIF" in their header.
num_skipped = 0
for folder_name in ("Cat", "Dog"):
    folder_path = os.path.join("PetImages", folder_name)
    for fname in os.listdir(folder_path):
        # Only consider image files (.jpg, .jpeg, .png)
        if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        fpath = os.path.join(folder_path, fname)
        try:
            fobj = open(fpath, "rb")
            # Well-formed JPEG files carry the string "JFIF" in their header
            is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
        finally:
            fobj.close()
        if not is_jfif:
            num_skipped += 1
            # Delete corrupted image
            os.remove(fpath)

print("Deleted %d images" % num_skipped)
Deleted 0 images
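The JFIF check only catches files whose header is mangled. Since PIL is already imported, a stricter (but slower) alternative is to let PIL attempt to verify each file; the sketch below is an illustration, not part of the original filtering step:

# Hypothetical stricter filter: Image.verify() raises an exception on
# files PIL cannot parse, catching corruption beyond a missing JFIF header.
def is_valid_image(fpath):
    try:
        with Image.open(fpath) as img:
            img.verify()
        return True
    except Exception:
        return False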
Dataset

image_size = (180, 180)
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    color_mode="grayscale",
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    color_mode="grayscale",
    validation_split=0.2,
    subset="validation",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
Found 23410 files belonging to 2 classes.
Using 18728 files for training.
Found 23410 files belonging to 2 classes.
Using 4682 files for validation.
Here are the first 9 images in the training dataset. As you can see, label 1 is "dog" and label 0 is "cat".
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        img = images[i].numpy().astype("uint8").squeeze()
        plt.imshow(img, cmap="gray", vmin=0, vmax=255)
        plt.title(int(labels[i]))
        plt.axis("off")
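To confirm the label mapping programmatically, the datasets returned by image_dataset_from_directory carry a class_names attribute (check it before prefetch is applied further below, since prefetch returns a plain tf.data.Dataset):

# Subfolders are sorted alphabetically, so "Cat" maps to label 0 and "Dog" to 1
print(train_ds.class_names)  # ['Cat', 'Dog']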
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting.
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ]
)
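Keras ships several other augmentation layers that could be mixed in here; a hedged sketch (the extra layers and factors below are illustrative choices, not used in the rest of this example):

# Hypothetical, more aggressive pipeline; RandomZoom and RandomContrast are
# standard Keras preprocessing layers, the factors here are arbitrary.
stronger_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
        layers.RandomContrast(0.1),
    ]
)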
Let's visualize what the augmented samples look like, by applying data_augmentation repeatedly to the first image in the dataset:
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        img = augmented_images[0].numpy().astype("uint8").squeeze()
        plt.imshow(img, cmap="gray", vmin=0, vmax=255)
        plt.axis("off")
Our images are already in a standard size (180x180), as they are being yielded as contiguous float32 batches by our dataset. However, their pixel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the [0, 1] range by using a Rescaling layer at the start of our model.
There are two ways you could be using the data_augmentation preprocessor:
Option 1: Make it part of the model, like this:
inputs = keras.Input(shape=input_shape)
x = data_augmentation(inputs)
x = layers.Rescaling(1./255)(x)
... # Rest of the model
With this option, your data augmentation will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration.
Option 2: apply it to the dataset, so as to obtain a dataset that yields batches of augmented images, like this:
augmented_train_ds = train_ds.map(
    lambda x, y: (data_augmentation(x, training=True), y)
)
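With this option, the augmentation runs on the CPU inside the tf.data pipeline, asynchronously with training, and the augmented batches are buffered before reaching the model.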
We'll go with the first option.
Let's make sure to use buffered prefetching so we can yield data from disk without I/O becoming blocking:
train_ds = train_ds.prefetch(buffer_size=32)
val_ds = val_ds.prefetch(buffer_size=32)
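A fixed buffer of 32 batches works fine here; alternatively, tf.data can pick the buffer size for you:

# Alternative: let tf.data tune the prefetch buffer size dynamically
train_ds = train_ds.prefetch(buffer_size=tf.data.AUTOTUNE)
val_ds = val_ds.prefetch(buffer_size=tf.data.AUTOTUNE)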
We'll build a small convolutional network: two Conv2D + MaxPooling2D blocks followed by a dense classifier. We haven't particularly tried to optimize the architecture; if you want to do a systematic search for the best model configuration, consider using KerasTuner (a rough sketch follows).
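A minimal KerasTuner sketch, assuming the keras_tuner package is installed; the hyperparameter names, ranges, and trial counts below are illustrative choices, not part of this example:

import keras_tuner

def build_model(hp):
    # Hypothetical search space: tune the number of filters and the learning rate
    inputs = keras.Input(shape=image_size + (1,))
    x = layers.Rescaling(1.0 / 255)(inputs)
    x = layers.Conv2D(hp.Int("filters", 8, 32, step=8), (5, 5), activation="leaky_relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = keras_tuner.RandomSearch(build_model, objective="val_accuracy", max_trials=5)
tuner.search(train_ds, validation_data=val_ds, epochs=2)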
Note that:

- We start the model with the data_augmentation preprocessor, followed by a Rescaling layer.
- We include a Dropout layer before the final classification layer.

def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)
    # Apply data augmentation and rescale pixel values to [0, 1]
    x = data_augmentation(inputs)
    x = layers.Rescaling(1.0 / 255)(x)
    x = layers.Conv2D(8, (5, 5), activation="leaky_relu", strides=1)(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(16, (5, 5), activation="leaky_relu", strides=1)(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    if num_classes == 2:
        activation = "sigmoid"
        units = 1
    else:
        activation = "softmax"
        units = num_classes
    outputs = layers.Dense(units, activation=activation)(x)
    return keras.Model(inputs, outputs)
model = make_model(input_shape=image_size+(1,), num_classes=2)
keras.utils.plot_model(model, show_shapes=True)
epochs = 5

callbacks = [
    keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"),
]
model.compile(
    optimizer=keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(
    train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds,
)
Epoch 1/5
586/586 [==============================] - 325s 553ms/step - loss: 3.3252 - accuracy: 0.6046 - val_loss: 1.0432 - val_accuracy: 0.5854
Epoch 2/5
586/586 [==============================] - 327s 556ms/step - loss: 0.6131 - accuracy: 0.7117 - val_loss: 0.7265 - val_accuracy: 0.6585
Epoch 3/5
586/586 [==============================] - 342s 583ms/step - loss: 0.4725 - accuracy: 0.7790 - val_loss: 0.6950 - val_accuracy: 0.6997
Epoch 4/5
586/586 [==============================] - 359s 612ms/step - loss: 0.4157 - accuracy: 0.8080 - val_loss: 0.8182 - val_accuracy: 0.6894
Epoch 5/5
586/586 [==============================] - 337s 574ms/step - loss: 0.3706 - accuracy: 0.8344 - val_loss: 0.8886 - val_accuracy: 0.6796
<keras.callbacks.History at 0x7f8859fc3310>
Note that data augmentation and dropout are inactive at inference time.
img = keras.preprocessing.image.load_img(
    "PetImages/Cat/6767.jpg", target_size=image_size, color_mode="grayscale"
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # Create batch axis

predictions = model.predict(img_array)
score = predictions[0]
print(
    "This image is %.2f percent cat and %.2f percent dog."
    % (100 * (1 - score), 100 * score)
)
This image is 89.57 percent cat and 10.43 percent dog.
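Since the ModelCheckpoint callback saved a file per epoch during training, any of those checkpoints can be reloaded later for inference; a minimal sketch (pick whichever epoch's file you want; "save_at_5.h5" follows the "save_at_{epoch}.h5" pattern used above):

# Reload a checkpoint written by the ModelCheckpoint callback above
reloaded_model = keras.models.load_model("save_at_5.h5")
print(reloaded_model.predict(img_array)[0])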