CIFAR-10 83% Accuracy Without Data Augmentation

Posted on 30 May 2021 by Alexei Raiu

CIFAR-10 is an image classification dataset widely used for benchmarking image classification models. I have seen lots of articles like "Reaching 90% Accuracy on CIFAR-10", where the authors build complex convolutional neural networks, add data augmentation, and reach 90% to 95%. Interestingly, when these articles show how the same CNN performs without data augmentation, it usually ends up somewhere around 75%. But you can, in fact, squeeze more out of a CNN without data augmentation by tuning the hyper-parameters better. I got up to 83.4%.
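For reference, here is how the dataset can be loaded and normalized with the loader built into Keras. This is a minimal sketch; the variable names match the ones used in the training code below.

```python
from tensorflow.keras.datasets import cifar10

# CIFAR-10: 50,000 training and 10,000 test images, 32x32 RGB, 10 classes.
(X_train, y_train), (X_test, y_test) = cifar10.load_data()

# Scale pixel values from [0, 255] to [0, 1] for stable training.
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
```

The labels stay as integer class indices (shape `(n, 1)`), which is why the model below is compiled with `sparse_categorical_crossentropy` rather than the one-hot variant.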

Now, if you look at the training plot, you can see that the validation accuracy starts lagging behind the training accuracy and fluctuating. So, depending on further parameter tuning, you can end up with slightly different final results.

If you are wondering what a Drupal developer is doing playing with AI and Python, the short answer is: I play with many technologies. Back end, front end, servers, programming. Everything is connected. The more you know, the better developer you become, as long as you can put it to use. I am training my own natural neural network to develop better.

from tensorflow.keras import models, layers
from tensorflow.keras.callbacks import EarlyStopping

# Three conv blocks (64 -> 128 -> 256 filters), each with batch
# normalization, max pooling, and dropout, followed by a dense head.
cnn = models.Sequential([
    layers.Conv2D(filters=64,
          kernel_size=(3, 3),
          activation='relu',
          input_shape=(32, 32, 3)),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.3),

    layers.Conv2D(filters=128,
          kernel_size=(3, 3),
          activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.3),

    layers.Conv2D(filters=256,
          kernel_size=(3, 3),
          activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.3),

    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

# Stop training when the monitored metric has stopped improving.
early_stop = EarlyStopping(monitor='val_loss',
                           mode='min',
                           verbose=1,
                           patience=10)
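Another hyper-parameter knob worth trying in the same spirit (this is a sketch, not part of my final run) is lowering the learning rate when the validation loss plateaus:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate after 5 stagnant epochs; never go below 1e-5.
reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.5,
                              patience=5,
                              min_lr=1e-5,
                              verbose=1)
```

It would be passed alongside `early_stop` in the `callbacks` list of `fit()`; a smaller learning rate late in training can also damp the validation fluctuations mentioned above.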

cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])

history = cnn.fit(X_train,
                  y_train,
                  epochs=100,
                  batch_size=64,
                  callbacks=[early_stop],
                  validation_data=(X_test, y_test))
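After training, the final score comes from `cnn.evaluate(X_test, y_test)`, and the learning curves (the lagging, fluctuating validation accuracy described above) can be plotted from the `History` object that `fit()` returns. A small helper sketch:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, safe on servers
import matplotlib.pyplot as plt

def plot_curves(history, path='learning_curves.png'):
    """Plot train vs. validation accuracy from a Keras History object."""
    plt.figure()
    plt.plot(history.history['accuracy'], label='train')
    plt.plot(history.history['val_accuracy'], label='validation')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.savefig(path)
    return path
```

Called as `plot_curves(history)` after the `fit()` call above, it writes the accuracy curves to a PNG file.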