Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.

In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        # The with-block closes the archive automatically
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
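If you're curious what helper.display_stats reads under the hood, here is a minimal sketch of loading one batch file directly, assuming the standard CIFAR-10 python pickle format (a dict with 'data' and 'labels' keys). load_cifar10_batch is a hypothetical name, not part of the provided helper module.

import pickle
import numpy as np

def load_cifar10_batch(folder_path, batch_id):
    # Hypothetical loader; assumes the standard CIFAR-10 python pickle format
    with open('{}/data_batch_{}'.format(folder_path, batch_id), mode='rb') as file:
        # encoding='latin1' because the files were pickled under Python 2
        batch = pickle.load(file, encoding='latin1')
    # 'data' is (10000, 3072); reshape to (10000, 32, 32, 3) channel-last images
    features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
    labels = batch['labels']
    return features, labels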

In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile
In [3]:
# Explore the dataset
batch_id = 3
sample_id = 15
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Stats of batch 3:
Samples: 10000
Label Counts: {0: 994, 1: 1042, 2: 965, 3: 997, 4: 990, 5: 1029, 6: 978, 7: 1015, 8: 961, 9: 1029}
First 20 Labels: [8, 5, 0, 6, 9, 2, 8, 3, 6, 2, 7, 4, 6, 9, 0, 0, 7, 3, 7, 2]

Example of Image 15:
Image - Min Value: 59 Max Value: 210
Image - Shape: (32, 32, 3)
Label - Label Id: 0 Name: airplane

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.

In [4]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Range is [0, 255]. Divide by 255 to normalize to [0, 1].
    return x / 255.


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
Tests Passed
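Dividing by 255 assumes the input always spans the full 0-255 pixel range. A more general min-max rescaling, sketched below under the hypothetical name normalize_minmax, computes the range from the data itself; for CIFAR-10 the fixed 255 divisor is fine and keeps the scaling identical across batches.

def normalize_minmax(x):
    # Rescale to [0, 1] using the data's own min/max; note this makes the
    # scaling depend on the batch, unlike the fixed 255 divisor above
    return (x - x.min()) / (x.max() - x.min())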

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.

In [5]:
labels_map = np.eye(10)

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return np.array([labels_map[entry] for entry in x])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
Tests Passed
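The "Don't reinvent the wheel" hint presumably points at existing encoders. As a sketch, assuming scikit-learn is available, sklearn.preprocessing.LabelBinarizer gives the same result once it's fit on the full label range (one_hot_encode_lb is a hypothetical alternative name):

from sklearn import preprocessing

# Fit once, outside the function, so the encoding stays consistent between calls
lb = preprocessing.LabelBinarizer()
lb.fit(range(10))

def one_hot_encode_lb(x):
    # Alternative to the np.eye lookup above; returns an (n, 10) binary array
    return lb.transform(x)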

Randomize Data

As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
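For a dataset that did need shuffling, the key point is to permute images and labels with the same index order. A minimal sketch, assuming features and labels are Numpy arrays of equal length (shuffle_in_unison is a hypothetical helper):

import numpy as np

def shuffle_in_unison(features, labels):
    # One random permutation applied to both arrays keeps each pair aligned
    idx = np.random.permutation(len(features))
    return features[idx], labels[idx]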

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.

In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.

In [7]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolution and Max Pooling Layer" section. TF Layers is similar to the layer abstractions in Keras and TFLearn, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.
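As a preview of why the names matter, here's a sketch of the loading pattern used at the end of the project: the saved graph is restored and each placeholder is fetched by name (the ':0' suffix selects the op's first output). The './image_classification' path matches the save path used later in this notebook; treat the rest as illustrative.

import tensorflow as tf

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Restore the graph definition and the trained weights
    loader = tf.train.import_meta_graph('./image_classification.meta')
    loader.restore(sess, './image_classification')
    # Fetch tensors by the names assigned in the functions above
    loaded_x = loaded_graph.get_tensor_by_name('x:0')
    loaded_y = loaded_graph.get_tensor_by_name('y:0')
    loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')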

In [8]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(dtype=tf.float32, name="x", shape=(None, *image_shape))


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(dtype=tf.int32, name="y", shape=(None, n_classes))


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(dtype=tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolutional layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.

In [33]:
# standard deviation for truncated_normal
STD_DEV = 0.1
In [34]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
        
    depth = x_tensor.get_shape().as_list()[3]
    weight_shape = [*conv_ksize, depth, conv_num_outputs]
    bias_shape = conv_num_outputs
    strides = [1, *conv_strides, 1]
    
    weight = tf.Variable(tf.truncated_normal(shape=weight_shape, stddev=STD_DEV, dtype=tf.float32))
    bias = tf.Variable(tf.zeros(bias_shape), dtype=tf.float32)
    
    padding = "SAME"
    conv = tf.nn.conv2d(x_tensor, weight, strides=strides, padding=padding)
    conv = tf.nn.bias_add(conv, bias)
    conv = tf.nn.relu(conv)

    return tf.nn.max_pool(conv, ksize=[1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding=padding)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
Tests Passed

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

In [35]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    
    _, h, w, d = x_tensor.get_shape().as_list()
    return tf.reshape(x_tensor, [-1, h*w*d])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
Tests Passed
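For reference, the shortcut option mentioned above would be a one-liner via the contrib layers package; a sketch (flatten_shortcut is a hypothetical name):

import tensorflow as tf

def flatten_shortcut(x_tensor):
    # Shortcut variant using TensorFlow Layers (contrib); same output shape
    return tf.contrib.layers.flatten(x_tensor)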

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

In [43]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    _, n_input = x_tensor.get_shape().as_list()
    weights = tf.Variable(tf.truncated_normal(shape=[n_input, num_outputs],stddev=STD_DEV, dtype=tf.float32))
    bias = tf.Variable(tf.zeros(num_outputs), dtype=tf.float32)
    return tf.nn.sigmoid(tf.add(tf.matmul(x_tensor, weights), bias))


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
Tests Passed
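The equivalent shortcut here is tf.layers.dense; a sketch under the hypothetical name fully_conn_shortcut. The sigmoid mirrors the version above, though ReLU is the more common choice for hidden layers; the output layer in the next section would be the same call with activation=None.

import tensorflow as tf

def fully_conn_shortcut(x_tensor, num_outputs):
    # Shortcut variant using the TF Layers package
    return tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.sigmoid)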

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.

In [44]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    _, n_input = x_tensor.get_shape().as_list()
    weights = tf.Variable(tf.truncated_normal(shape=[n_input, num_outputs], stddev=STD_DEV, dtype=tf.float32))
    bias = tf.Variable(tf.zeros(num_outputs), dtype=tf.float32)
    return tf.add(tf.matmul(x_tensor, weights), bias)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
In [45]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    
    conv_num_outputs_1 = 30
    conv_ksize_1 = [7, 7]
    conv_strides_1 = [1, 1]
    pool_ksize_1 = [5, 5]
    pool_strides_1 = [1, 1]
    conv1 = conv2d_maxpool(x, conv_num_outputs_1, conv_ksize_1, conv_strides_1, pool_ksize_1, pool_strides_1)
    conv1 = tf.nn.dropout(conv1, keep_prob)
    

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flat1 = flatten(conv1)
    

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    # A single hidden layer; its output feeds the output layer below
    fc1 = fully_conn(flat1, 700)
    fc1 = tf.nn.dropout(fc1, keep_prob)
    
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    num_classes = 10
    output_layer = output(fc1, num_classes)
    
    return output_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
Neural Network Built!

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should run optimizer in session with a feed_dict containing the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.

In [46]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    feed_dict = {
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability
    }
    session.run(optimizer, feed_dict=feed_dict)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.

In [47]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    feed_dict_loss = {
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.0
    }
    
    feed_dict_accuracy = {
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.0
    }
    
    loss = session.run(cost, feed_dict=feed_dict_loss)
    validation_accuracy = session.run(accuracy, feed_dict=feed_dict_accuracy)
    print("Loss={}, Validation accuracy={}".format(loss, validation_accuracy))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Common sizes are powers of two:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout
In [48]:
# TODO: Tune Parameters
epochs = 100
batch_size = 1024
keep_probability = 0.6

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.

In [49]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss=3.158484697341919, Validation accuracy=0.11659999936819077
Epoch  2, CIFAR-10 Batch 1:  Loss=2.56754207611084, Validation accuracy=0.19019998610019684
Epoch  3, CIFAR-10 Batch 1:  Loss=2.1253716945648193, Validation accuracy=0.22379998862743378
Epoch  4, CIFAR-10 Batch 1:  Loss=2.014014482498169, Validation accuracy=0.26739996671676636
Epoch  5, CIFAR-10 Batch 1:  Loss=1.9201066493988037, Validation accuracy=0.311599999666214
Epoch  6, CIFAR-10 Batch 1:  Loss=1.8420087099075317, Validation accuracy=0.3579999506473541
Epoch  7, CIFAR-10 Batch 1:  Loss=1.7454478740692139, Validation accuracy=0.3765999972820282
Epoch  8, CIFAR-10 Batch 1:  Loss=1.6463346481323242, Validation accuracy=0.40759995579719543
Epoch  9, CIFAR-10 Batch 1:  Loss=1.5633338689804077, Validation accuracy=0.42799997329711914
Epoch 10, CIFAR-10 Batch 1:  Loss=1.5061180591583252, Validation accuracy=0.44759997725486755
Epoch 11, CIFAR-10 Batch 1:  Loss=1.4267821311950684, Validation accuracy=0.46939992904663086
Epoch 12, CIFAR-10 Batch 1:  Loss=1.341346263885498, Validation accuracy=0.4869999289512634
Epoch 13, CIFAR-10 Batch 1:  Loss=1.288378119468689, Validation accuracy=0.5011999607086182
Epoch 14, CIFAR-10 Batch 1:  Loss=1.2369287014007568, Validation accuracy=0.5157999396324158
Epoch 15, CIFAR-10 Batch 1:  Loss=1.1876987218856812, Validation accuracy=0.5151999592781067
Epoch 16, CIFAR-10 Batch 1:  Loss=1.1216654777526855, Validation accuracy=0.5363999605178833
Epoch 17, CIFAR-10 Batch 1:  Loss=1.0652371644973755, Validation accuracy=0.5483999252319336
Epoch 18, CIFAR-10 Batch 1:  Loss=1.0316054821014404, Validation accuracy=0.5467999577522278
Epoch 19, CIFAR-10 Batch 1:  Loss=0.9787388443946838, Validation accuracy=0.5559999346733093
Epoch 20, CIFAR-10 Batch 1:  Loss=0.9368684887886047, Validation accuracy=0.5611999034881592
Epoch 21, CIFAR-10 Batch 1:  Loss=0.89697265625, Validation accuracy=0.5711999535560608
Epoch 22, CIFAR-10 Batch 1:  Loss=0.8469679355621338, Validation accuracy=0.5743999481201172
Epoch 23, CIFAR-10 Batch 1:  Loss=0.7968685030937195, Validation accuracy=0.5811998844146729
Epoch 24, CIFAR-10 Batch 1:  Loss=0.7739435434341431, Validation accuracy=0.5839999318122864
Epoch 25, CIFAR-10 Batch 1:  Loss=0.7360510230064392, Validation accuracy=0.5859999656677246
Epoch 26, CIFAR-10 Batch 1:  Loss=0.7023627758026123, Validation accuracy=0.5853999257087708
Epoch 27, CIFAR-10 Batch 1:  Loss=0.6604152321815491, Validation accuracy=0.5931999683380127
Epoch 28, CIFAR-10 Batch 1:  Loss=0.6161536574363708, Validation accuracy=0.5945999026298523
Epoch 29, CIFAR-10 Batch 1:  Loss=0.595232367515564, Validation accuracy=0.5965999364852905
Epoch 30, CIFAR-10 Batch 1:  Loss=0.560025691986084, Validation accuracy=0.5989998579025269
Epoch 31, CIFAR-10 Batch 1:  Loss=0.5274173021316528, Validation accuracy=0.5997998714447021
Epoch 32, CIFAR-10 Batch 1:  Loss=0.5020452737808228, Validation accuracy=0.5943998694419861
Epoch 33, CIFAR-10 Batch 1:  Loss=0.4938148856163025, Validation accuracy=0.5985999703407288
Epoch 34, CIFAR-10 Batch 1:  Loss=0.44474345445632935, Validation accuracy=0.6095998883247375
Epoch 35, CIFAR-10 Batch 1:  Loss=0.4143369793891907, Validation accuracy=0.6099998950958252
Epoch 36, CIFAR-10 Batch 1:  Loss=0.41497477889060974, Validation accuracy=0.6067999601364136
Epoch 37, CIFAR-10 Batch 1:  Loss=0.3820435404777527, Validation accuracy=0.6037999391555786
Epoch 38, CIFAR-10 Batch 1:  Loss=0.3678383231163025, Validation accuracy=0.6005998849868774
Epoch 39, CIFAR-10 Batch 1:  Loss=0.3366408050060272, Validation accuracy=0.603399932384491
Epoch 40, CIFAR-10 Batch 1:  Loss=0.3408856689929962, Validation accuracy=0.5959999561309814
Epoch 41, CIFAR-10 Batch 1:  Loss=0.32289615273475647, Validation accuracy=0.6067999005317688
Epoch 42, CIFAR-10 Batch 1:  Loss=0.2724730372428894, Validation accuracy=0.6157999634742737
Epoch 43, CIFAR-10 Batch 1:  Loss=0.26052919030189514, Validation accuracy=0.6167998909950256
Epoch 44, CIFAR-10 Batch 1:  Loss=0.22754769027233124, Validation accuracy=0.6245999336242676
Epoch 45, CIFAR-10 Batch 1:  Loss=0.2207002192735672, Validation accuracy=0.619399905204773
Epoch 46, CIFAR-10 Batch 1:  Loss=0.20991188287734985, Validation accuracy=0.612799882888794
Epoch 47, CIFAR-10 Batch 1:  Loss=0.20227862894535065, Validation accuracy=0.619399905204773
Epoch 48, CIFAR-10 Batch 1:  Loss=0.17882971465587616, Validation accuracy=0.619399905204773
Epoch 49, CIFAR-10 Batch 1:  Loss=0.1716601550579071, Validation accuracy=0.6151999235153198
Epoch 50, CIFAR-10 Batch 1:  Loss=0.16426822543144226, Validation accuracy=0.6139999032020569
Epoch 51, CIFAR-10 Batch 1:  Loss=0.15179839730262756, Validation accuracy=0.621199905872345
Epoch 52, CIFAR-10 Batch 1:  Loss=0.13940730690956116, Validation accuracy=0.614799976348877
Epoch 53, CIFAR-10 Batch 1:  Loss=0.12336361408233643, Validation accuracy=0.6169998645782471
Epoch 54, CIFAR-10 Batch 1:  Loss=0.11900712549686432, Validation accuracy=0.6175999641418457
Epoch 55, CIFAR-10 Batch 1:  Loss=0.10992005467414856, Validation accuracy=0.6283998489379883
Epoch 56, CIFAR-10 Batch 1:  Loss=0.10971148312091827, Validation accuracy=0.6209998726844788
Epoch 57, CIFAR-10 Batch 1:  Loss=0.10846621543169022, Validation accuracy=0.6165999174118042
Epoch 58, CIFAR-10 Batch 1:  Loss=0.09993941336870193, Validation accuracy=0.6203999519348145
Epoch 59, CIFAR-10 Batch 1:  Loss=0.09679406881332397, Validation accuracy=0.616599977016449
Epoch 60, CIFAR-10 Batch 1:  Loss=0.09445144236087799, Validation accuracy=0.6157999038696289
Epoch 61, CIFAR-10 Batch 1:  Loss=0.08392078429460526, Validation accuracy=0.619399905204773
Epoch 62, CIFAR-10 Batch 1:  Loss=0.09465217590332031, Validation accuracy=0.6125999093055725
Epoch 63, CIFAR-10 Batch 1:  Loss=0.10422531515359879, Validation accuracy=0.6091998815536499
Epoch 64, CIFAR-10 Batch 1:  Loss=0.10747363418340683, Validation accuracy=0.5995998382568359
Epoch 65, CIFAR-10 Batch 1:  Loss=0.08343493193387985, Validation accuracy=0.606799840927124
Epoch 66, CIFAR-10 Batch 1:  Loss=0.06665997207164764, Validation accuracy=0.6245998740196228
Epoch 67, CIFAR-10 Batch 1:  Loss=0.0648818165063858, Validation accuracy=0.6183999180793762
Epoch 68, CIFAR-10 Batch 1:  Loss=0.06119130551815033, Validation accuracy=0.618399977684021
Epoch 69, CIFAR-10 Batch 1:  Loss=0.05197865515947342, Validation accuracy=0.6227998733520508
Epoch 70, CIFAR-10 Batch 1:  Loss=0.04795194789767265, Validation accuracy=0.6239998936653137
Epoch 71, CIFAR-10 Batch 1:  Loss=0.04506682977080345, Validation accuracy=0.6259998679161072
Epoch 72, CIFAR-10 Batch 1:  Loss=0.04014560580253601, Validation accuracy=0.6289998888969421
Epoch 73, CIFAR-10 Batch 1:  Loss=0.035602979362010956, Validation accuracy=0.6273999214172363
Epoch 74, CIFAR-10 Batch 1:  Loss=0.034170083701610565, Validation accuracy=0.6257998943328857
Epoch 75, CIFAR-10 Batch 1:  Loss=0.03295932337641716, Validation accuracy=0.6271998882293701
Epoch 76, CIFAR-10 Batch 1:  Loss=0.030673380941152573, Validation accuracy=0.6255998611450195
Epoch 77, CIFAR-10 Batch 1:  Loss=0.028260590508580208, Validation accuracy=0.6243999004364014
Epoch 78, CIFAR-10 Batch 1:  Loss=0.027941059321165085, Validation accuracy=0.6277998685836792
Epoch 79, CIFAR-10 Batch 1:  Loss=0.02728883922100067, Validation accuracy=0.6251999139785767
Epoch 80, CIFAR-10 Batch 1:  Loss=0.027809882536530495, Validation accuracy=0.6233999133110046
Epoch 81, CIFAR-10 Batch 1:  Loss=0.02794725075364113, Validation accuracy=0.6225998997688293
Epoch 82, CIFAR-10 Batch 1:  Loss=0.027251221239566803, Validation accuracy=0.6227999329566956
Epoch 83, CIFAR-10 Batch 1:  Loss=0.027941234409809113, Validation accuracy=0.6155999302864075
Epoch 84, CIFAR-10 Batch 1:  Loss=0.024130839854478836, Validation accuracy=0.6167998909950256
Epoch 85, CIFAR-10 Batch 1:  Loss=0.022384706884622574, Validation accuracy=0.6181999444961548
Epoch 86, CIFAR-10 Batch 1:  Loss=0.01999329775571823, Validation accuracy=0.6259998679161072
Epoch 87, CIFAR-10 Batch 1:  Loss=0.018870608881115913, Validation accuracy=0.6245999336242676
Epoch 88, CIFAR-10 Batch 1:  Loss=0.021031318232417107, Validation accuracy=0.6245999336242676
Epoch 89, CIFAR-10 Batch 1:  Loss=0.020830288529396057, Validation accuracy=0.6221998929977417
Epoch 90, CIFAR-10 Batch 1:  Loss=0.017515553161501884, Validation accuracy=0.6229998469352722
Epoch 91, CIFAR-10 Batch 1:  Loss=0.015247240662574768, Validation accuracy=0.6249998807907104
Epoch 92, CIFAR-10 Batch 1:  Loss=0.014519520103931427, Validation accuracy=0.6223998665809631
Epoch 93, CIFAR-10 Batch 1:  Loss=0.01539524644613266, Validation accuracy=0.6195998787879944
Epoch 94, CIFAR-10 Batch 1:  Loss=0.016634685918688774, Validation accuracy=0.6205999255180359
Epoch 95, CIFAR-10 Batch 1:  Loss=0.01698371395468712, Validation accuracy=0.6095999479293823
Epoch 96, CIFAR-10 Batch 1:  Loss=0.020870467647910118, Validation accuracy=0.6103999614715576
Epoch 97, CIFAR-10 Batch 1:  Loss=0.018253415822982788, Validation accuracy=0.6099998950958252
Epoch 98, CIFAR-10 Batch 1:  Loss=0.016773922368884087, Validation accuracy=0.6097999215126038
Epoch 99, CIFAR-10 Batch 1:  Loss=0.014641585759818554, Validation accuracy=0.6171998977661133
Epoch 100, CIFAR-10 Batch 1:  Loss=0.0179935060441494, Validation accuracy=0.6187999248504639

Fully Train the Model

Now that you've gotten good accuracy with a single CIFAR-10 batch, try it with all five batches.

In [50]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
Training...
Epoch  1, CIFAR-10 Batch 1:  Loss=3.596191644668579, Validation accuracy=0.11959999799728394
Epoch  1, CIFAR-10 Batch 2:  Loss=2.8301358222961426, Validation accuracy=0.19039997458457947
Epoch  1, CIFAR-10 Batch 3:  Loss=2.316746234893799, Validation accuracy=0.1947999745607376
Epoch  1, CIFAR-10 Batch 4:  Loss=2.066059112548828, Validation accuracy=0.25060001015663147
Epoch  1, CIFAR-10 Batch 5:  Loss=1.9852726459503174, Validation accuracy=0.28700000047683716
Epoch  2, CIFAR-10 Batch 1:  Loss=1.8860323429107666, Validation accuracy=0.3086000084877014
Epoch  2, CIFAR-10 Batch 2:  Loss=1.7972356081008911, Validation accuracy=0.35179999470710754
Epoch  2, CIFAR-10 Batch 3:  Loss=1.672827959060669, Validation accuracy=0.3831999897956848
Epoch  2, CIFAR-10 Batch 4:  Loss=1.6269925832748413, Validation accuracy=0.4071999788284302
Epoch  2, CIFAR-10 Batch 5:  Loss=1.601941704750061, Validation accuracy=0.4253999888896942
Epoch  3, CIFAR-10 Batch 1:  Loss=1.5257962942123413, Validation accuracy=0.4479999542236328
Epoch  3, CIFAR-10 Batch 2:  Loss=1.4630465507507324, Validation accuracy=0.47919994592666626
Epoch  3, CIFAR-10 Batch 3:  Loss=1.3777202367782593, Validation accuracy=0.4859999418258667
Epoch  3, CIFAR-10 Batch 4:  Loss=1.320054054260254, Validation accuracy=0.4925999641418457
Epoch  3, CIFAR-10 Batch 5:  Loss=1.337393879890442, Validation accuracy=0.5098000168800354
Epoch  4, CIFAR-10 Batch 1:  Loss=1.3030941486358643, Validation accuracy=0.5235999226570129
Epoch  4, CIFAR-10 Batch 2:  Loss=1.2722676992416382, Validation accuracy=0.5315999388694763
Epoch  4, CIFAR-10 Batch 3:  Loss=1.1926805973052979, Validation accuracy=0.5407999753952026
Epoch  4, CIFAR-10 Batch 4:  Loss=1.1677989959716797, Validation accuracy=0.5469999313354492
Epoch  4, CIFAR-10 Batch 5:  Loss=1.1606148481369019, Validation accuracy=0.559999942779541
Epoch  5, CIFAR-10 Batch 1:  Loss=1.1692306995391846, Validation accuracy=0.5679999589920044
Epoch  5, CIFAR-10 Batch 2:  Loss=1.1541403532028198, Validation accuracy=0.567599892616272
Epoch  5, CIFAR-10 Batch 3:  Loss=1.090762972831726, Validation accuracy=0.5745999813079834
Epoch  5, CIFAR-10 Batch 4:  Loss=1.0765011310577393, Validation accuracy=0.5801998972892761
Epoch  5, CIFAR-10 Batch 5:  Loss=1.058629035949707, Validation accuracy=0.5921999216079712
Epoch  6, CIFAR-10 Batch 1:  Loss=1.075524091720581, Validation accuracy=0.5907999277114868
Epoch  6, CIFAR-10 Batch 2:  Loss=1.085533857345581, Validation accuracy=0.5829999446868896
Epoch  6, CIFAR-10 Batch 3:  Loss=1.0264976024627686, Validation accuracy=0.5895999073982239
Epoch  6, CIFAR-10 Batch 4:  Loss=1.0214204788208008, Validation accuracy=0.5849999189376831
Epoch  6, CIFAR-10 Batch 5:  Loss=1.0089836120605469, Validation accuracy=0.5961998701095581
Epoch  7, CIFAR-10 Batch 1:  Loss=1.0121253728866577, Validation accuracy=0.6139998435974121
Epoch  7, CIFAR-10 Batch 2:  Loss=1.0154056549072266, Validation accuracy=0.6073999404907227
Epoch  7, CIFAR-10 Batch 3:  Loss=0.9338960647583008, Validation accuracy=0.6177999377250671
Epoch  7, CIFAR-10 Batch 4:  Loss=0.9399778246879578, Validation accuracy=0.6121999621391296
Epoch  7, CIFAR-10 Batch 5:  Loss=0.923679769039154, Validation accuracy=0.6155999302864075
Epoch  8, CIFAR-10 Batch 1:  Loss=0.9524246454238892, Validation accuracy=0.6209998726844788
Epoch  8, CIFAR-10 Batch 2:  Loss=0.9562322497367859, Validation accuracy=0.6229999661445618
Epoch  8, CIFAR-10 Batch 3:  Loss=0.883797287940979, Validation accuracy=0.6303998827934265
Epoch  8, CIFAR-10 Batch 4:  Loss=0.8960821032524109, Validation accuracy=0.6239998936653137
Epoch  8, CIFAR-10 Batch 5:  Loss=0.8720555901527405, Validation accuracy=0.6269999146461487
Epoch  9, CIFAR-10 Batch 1:  Loss=0.9022220373153687, Validation accuracy=0.6347999572753906
Epoch  9, CIFAR-10 Batch 2:  Loss=0.8946985006332397, Validation accuracy=0.6417998671531677
Epoch  9, CIFAR-10 Batch 3:  Loss=0.8571116924285889, Validation accuracy=0.6283999085426331
Epoch  9, CIFAR-10 Batch 4:  Loss=0.8470996022224426, Validation accuracy=0.6393998861312866
Epoch  9, CIFAR-10 Batch 5:  Loss=0.8102514147758484, Validation accuracy=0.6381999254226685
Epoch 10, CIFAR-10 Batch 1:  Loss=0.852898120880127, Validation accuracy=0.6465998888015747
Epoch 10, CIFAR-10 Batch 2:  Loss=0.8368192911148071, Validation accuracy=0.65559983253479
Epoch 10, CIFAR-10 Batch 3:  Loss=0.8021507263183594, Validation accuracy=0.6453999280929565
Epoch 10, CIFAR-10 Batch 4:  Loss=0.7987974882125854, Validation accuracy=0.6493998765945435
Epoch 10, CIFAR-10 Batch 5:  Loss=0.7645756006240845, Validation accuracy=0.6523998975753784
Epoch 11, CIFAR-10 Batch 1:  Loss=0.8052517771720886, Validation accuracy=0.6503998637199402
Epoch 11, CIFAR-10 Batch 2:  Loss=0.792033851146698, Validation accuracy=0.6587998867034912
Epoch 11, CIFAR-10 Batch 3:  Loss=0.7586128115653992, Validation accuracy=0.6539998054504395
Epoch 11, CIFAR-10 Batch 4:  Loss=0.7627214789390564, Validation accuracy=0.6605998873710632
Epoch 11, CIFAR-10 Batch 5:  Loss=0.7405994534492493, Validation accuracy=0.6599999070167542
Epoch 12, CIFAR-10 Batch 1:  Loss=0.7670783996582031, Validation accuracy=0.6545998454093933
Epoch 12, CIFAR-10 Batch 2:  Loss=0.7565175294876099, Validation accuracy=0.6675999164581299
Epoch 12, CIFAR-10 Batch 3:  Loss=0.7227672338485718, Validation accuracy=0.6601998805999756
Epoch 12, CIFAR-10 Batch 4:  Loss=0.7232500910758972, Validation accuracy=0.66159987449646
Epoch 12, CIFAR-10 Batch 5:  Loss=0.7107397317886353, Validation accuracy=0.6547998785972595
Epoch 13, CIFAR-10 Batch 1:  Loss=0.7324991226196289, Validation accuracy=0.6609998941421509
Epoch 13, CIFAR-10 Batch 2:  Loss=0.7236279845237732, Validation accuracy=0.6699999570846558
Epoch 13, CIFAR-10 Batch 3:  Loss=0.6874537467956543, Validation accuracy=0.6649998426437378
Epoch 13, CIFAR-10 Batch 4:  Loss=0.685638427734375, Validation accuracy=0.6663998961448669
Epoch 13, CIFAR-10 Batch 5:  Loss=0.6600831747055054, Validation accuracy=0.6709998846054077
Epoch 14, CIFAR-10 Batch 1:  Loss=0.6854305267333984, Validation accuracy=0.6721999049186707
Epoch 14, CIFAR-10 Batch 2:  Loss=0.6897182464599609, Validation accuracy=0.6779998540878296
Epoch 14, CIFAR-10 Batch 3:  Loss=0.6547743082046509, Validation accuracy=0.6667998433113098
Epoch 14, CIFAR-10 Batch 4:  Loss=0.6635459661483765, Validation accuracy=0.6775997877120972
Epoch 14, CIFAR-10 Batch 5:  Loss=0.6298936605453491, Validation accuracy=0.6775998473167419
Epoch 15, CIFAR-10 Batch 1:  Loss=0.6663693189620972, Validation accuracy=0.673599898815155
Epoch 15, CIFAR-10 Batch 2:  Loss=0.6604249477386475, Validation accuracy=0.681999921798706
Epoch 15, CIFAR-10 Batch 3:  Loss=0.6344577074050903, Validation accuracy=0.6703999042510986
Epoch 15, CIFAR-10 Batch 4:  Loss=0.62324059009552, Validation accuracy=0.6805999279022217
Epoch 15, CIFAR-10 Batch 5:  Loss=0.5893506407737732, Validation accuracy=0.683199942111969
Epoch 16, CIFAR-10 Batch 1:  Loss=0.636084794998169, Validation accuracy=0.6799998879432678
Epoch 16, CIFAR-10 Batch 2:  Loss=0.6378974914550781, Validation accuracy=0.6787999272346497
Epoch 16, CIFAR-10 Batch 3:  Loss=0.5970638990402222, Validation accuracy=0.6733998656272888
Epoch 16, CIFAR-10 Batch 4:  Loss=0.6014404296875, Validation accuracy=0.6777998805046082
Epoch 16, CIFAR-10 Batch 5:  Loss=0.5778285264968872, Validation accuracy=0.6881998777389526
Epoch 17, CIFAR-10 Batch 1:  Loss=0.6196664571762085, Validation accuracy=0.6785998940467834
Epoch 17, CIFAR-10 Batch 2:  Loss=0.6068964600563049, Validation accuracy=0.6859999299049377
Epoch 17, CIFAR-10 Batch 3:  Loss=0.5622032880783081, Validation accuracy=0.68479984998703
Epoch 17, CIFAR-10 Batch 4:  Loss=0.5763555765151978, Validation accuracy=0.6853998899459839
Epoch 17, CIFAR-10 Batch 5:  Loss=0.5500836968421936, Validation accuracy=0.6905999183654785
Epoch 18, CIFAR-10 Batch 1:  Loss=0.5619458556175232, Validation accuracy=0.691399872303009
Epoch 18, CIFAR-10 Batch 2:  Loss=0.565067708492279, Validation accuracy=0.6941998600959778
Epoch 18, CIFAR-10 Batch 3:  Loss=0.5334730744361877, Validation accuracy=0.6899999380111694
Epoch 18, CIFAR-10 Batch 4:  Loss=0.5396486520767212, Validation accuracy=0.692599892616272
Epoch 18, CIFAR-10 Batch 5:  Loss=0.5087827444076538, Validation accuracy=0.6901998519897461
Epoch 19, CIFAR-10 Batch 1:  Loss=0.538285493850708, Validation accuracy=0.6851998567581177
Epoch 19, CIFAR-10 Batch 2:  Loss=0.5483095645904541, Validation accuracy=0.693199872970581
Epoch 19, CIFAR-10 Batch 3:  Loss=0.5211815237998962, Validation accuracy=0.6911998391151428
Epoch 19, CIFAR-10 Batch 4:  Loss=0.5215278267860413, Validation accuracy=0.6897998452186584
Epoch 19, CIFAR-10 Batch 5:  Loss=0.5232316255569458, Validation accuracy=0.6863998174667358
Epoch 20, CIFAR-10 Batch 1:  Loss=0.5206626057624817, Validation accuracy=0.6935998201370239
Epoch 20, CIFAR-10 Batch 2:  Loss=0.5143779516220093, Validation accuracy=0.6999999284744263
Epoch 20, CIFAR-10 Batch 3:  Loss=0.4822012782096863, Validation accuracy=0.6991998553276062
Epoch 20, CIFAR-10 Batch 4:  Loss=0.492217481136322, Validation accuracy=0.6937998533248901
Epoch 20, CIFAR-10 Batch 5:  Loss=0.4794415235519409, Validation accuracy=0.6943998336791992
Epoch 21, CIFAR-10 Batch 1:  Loss=0.49447351694107056, Validation accuracy=0.6941998600959778
Epoch 21, CIFAR-10 Batch 2:  Loss=0.5097225904464722, Validation accuracy=0.7045998573303223
Epoch 21, CIFAR-10 Batch 3:  Loss=0.4686203896999359, Validation accuracy=0.6983998417854309
Epoch 21, CIFAR-10 Batch 4:  Loss=0.47381383180618286, Validation accuracy=0.6967998147010803
Epoch 21, CIFAR-10 Batch 5:  Loss=0.4399777948856354, Validation accuracy=0.7059998512268066
Epoch 22, CIFAR-10 Batch 1:  Loss=0.4753015637397766, Validation accuracy=0.6945998668670654
Epoch 22, CIFAR-10 Batch 2:  Loss=0.4773818850517273, Validation accuracy=0.7069998979568481
Epoch 22, CIFAR-10 Batch 3:  Loss=0.44857627153396606, Validation accuracy=0.694399893283844
Epoch 22, CIFAR-10 Batch 4:  Loss=0.45140865445137024, Validation accuracy=0.6985998749732971
Epoch 22, CIFAR-10 Batch 5:  Loss=0.4253271222114563, Validation accuracy=0.7029998302459717
Epoch 23, CIFAR-10 Batch 1:  Loss=0.439289391040802, Validation accuracy=0.6987999081611633
Epoch 23, CIFAR-10 Batch 2:  Loss=0.4434622526168823, Validation accuracy=0.7083998918533325
Epoch 23, CIFAR-10 Batch 3:  Loss=0.41633325815200806, Validation accuracy=0.7015998959541321
Epoch 23, CIFAR-10 Batch 4:  Loss=0.42463499307632446, Validation accuracy=0.6995998024940491
Epoch 23, CIFAR-10 Batch 5:  Loss=0.404775470495224, Validation accuracy=0.6969999074935913
Epoch 24, CIFAR-10 Batch 1:  Loss=0.412749320268631, Validation accuracy=0.7021999359130859
Epoch 24, CIFAR-10 Batch 2:  Loss=0.4176906645298004, Validation accuracy=0.7135998010635376
Epoch 24, CIFAR-10 Batch 3:  Loss=0.387885183095932, Validation accuracy=0.7017998695373535
Epoch 24, CIFAR-10 Batch 4:  Loss=0.4048633575439453, Validation accuracy=0.6935998201370239
Epoch 24, CIFAR-10 Batch 5:  Loss=0.3773036599159241, Validation accuracy=0.6991998553276062
Epoch 25, CIFAR-10 Batch 1:  Loss=0.38901716470718384, Validation accuracy=0.6999998688697815
Epoch 25, CIFAR-10 Batch 2:  Loss=0.40014317631721497, Validation accuracy=0.7147997617721558
Epoch 25, CIFAR-10 Batch 3:  Loss=0.3748718798160553, Validation accuracy=0.7089998126029968
Epoch 25, CIFAR-10 Batch 4:  Loss=0.38148707151412964, Validation accuracy=0.7049998641014099
Epoch 25, CIFAR-10 Batch 5:  Loss=0.3547472655773163, Validation accuracy=0.6987998485565186
Epoch 26, CIFAR-10 Batch 1:  Loss=0.37638694047927856, Validation accuracy=0.6999998092651367
Epoch 26, CIFAR-10 Batch 2:  Loss=0.37921297550201416, Validation accuracy=0.7083998918533325
Epoch 26, CIFAR-10 Batch 3:  Loss=0.35160404443740845, Validation accuracy=0.7069998979568481
Epoch 26, CIFAR-10 Batch 4:  Loss=0.36347612738609314, Validation accuracy=0.7039998173713684
Epoch 26, CIFAR-10 Batch 5:  Loss=0.3209289312362671, Validation accuracy=0.7069998979568481
Epoch 27, CIFAR-10 Batch 1:  Loss=0.36303871870040894, Validation accuracy=0.7023998498916626
Epoch 27, CIFAR-10 Batch 2:  Loss=0.35542675852775574, Validation accuracy=0.7129998803138733
Epoch 27, CIFAR-10 Batch 3:  Loss=0.3238350450992584, Validation accuracy=0.7065998315811157
Epoch 27, CIFAR-10 Batch 4:  Loss=0.3417074680328369, Validation accuracy=0.7065998315811157
Epoch 27, CIFAR-10 Batch 5:  Loss=0.31891733407974243, Validation accuracy=0.6999998688697815
Epoch 28, CIFAR-10 Batch 1:  Loss=0.3440130352973938, Validation accuracy=0.7021998763084412
Epoch 28, CIFAR-10 Batch 2:  Loss=0.33997729420661926, Validation accuracy=0.7137998342514038
Epoch 28, CIFAR-10 Batch 3:  Loss=0.3192880153656006, Validation accuracy=0.7125998735427856
Epoch 28, CIFAR-10 Batch 4:  Loss=0.3200072944164276, Validation accuracy=0.7113999128341675
Epoch 28, CIFAR-10 Batch 5:  Loss=0.3165261745452881, Validation accuracy=0.6975998878479004
Epoch 29, CIFAR-10 Batch 1:  Loss=0.3199344575405121, Validation accuracy=0.6971998810768127
Epoch 29, CIFAR-10 Batch 2:  Loss=0.33648133277893066, Validation accuracy=0.708599865436554
Epoch 29, CIFAR-10 Batch 3:  Loss=0.2920742630958557, Validation accuracy=0.7099999189376831
Epoch 29, CIFAR-10 Batch 4:  Loss=0.3184308409690857, Validation accuracy=0.6991998553276062
Epoch 29, CIFAR-10 Batch 5:  Loss=0.27337026596069336, Validation accuracy=0.7137998342514038
Epoch 30, CIFAR-10 Batch 1:  Loss=0.30209773778915405, Validation accuracy=0.7037999033927917
Epoch 30, CIFAR-10 Batch 2:  Loss=0.3426493704319, Validation accuracy=0.6913999319076538
Epoch 30, CIFAR-10 Batch 3:  Loss=0.3001363277435303, Validation accuracy=0.6967998743057251
Epoch 30, CIFAR-10 Batch 4:  Loss=0.3075724244117737, Validation accuracy=0.7027997970581055
Epoch 30, CIFAR-10 Batch 5:  Loss=0.27757951617240906, Validation accuracy=0.7083998918533325
Epoch 31, CIFAR-10 Batch 1:  Loss=0.279106467962265, Validation accuracy=0.7123998403549194
Epoch 31, CIFAR-10 Batch 2:  Loss=0.3043026030063629, Validation accuracy=0.7063997983932495
Epoch 31, CIFAR-10 Batch 3:  Loss=0.2681553363800049, Validation accuracy=0.6953998804092407
Epoch 31, CIFAR-10 Batch 4:  Loss=0.2694190442562103, Validation accuracy=0.7101998329162598
Epoch 31, CIFAR-10 Batch 5:  Loss=0.24221250414848328, Validation accuracy=0.7145998477935791
Epoch 32, CIFAR-10 Batch 1:  Loss=0.2823907434940338, Validation accuracy=0.699199914932251
Epoch 32, CIFAR-10 Batch 2:  Loss=0.28520292043685913, Validation accuracy=0.7045998573303223
Epoch 32, CIFAR-10 Batch 3:  Loss=0.2501200735569, Validation accuracy=0.6999998092651367
Epoch 32, CIFAR-10 Batch 4:  Loss=0.25333526730537415, Validation accuracy=0.7115998268127441
Epoch 32, CIFAR-10 Batch 5:  Loss=0.22897742688655853, Validation accuracy=0.7179999351501465
Epoch 33, CIFAR-10 Batch 1:  Loss=0.24984806776046753, Validation accuracy=0.7137998342514038
Epoch 33, CIFAR-10 Batch 2:  Loss=0.26026031374931335, Validation accuracy=0.7081998586654663
Epoch 33, CIFAR-10 Batch 3:  Loss=0.22220879793167114, Validation accuracy=0.7109998464584351
Epoch 33, CIFAR-10 Batch 4:  Loss=0.2415698915719986, Validation accuracy=0.7159999012947083
Epoch 33, CIFAR-10 Batch 5:  Loss=0.21168847382068634, Validation accuracy=0.7187998294830322
Epoch 34, CIFAR-10 Batch 1:  Loss=0.23865245282649994, Validation accuracy=0.707399845123291
Epoch 34, CIFAR-10 Batch 2:  Loss=0.25386691093444824, Validation accuracy=0.7063999176025391
Epoch 34, CIFAR-10 Batch 3:  Loss=0.2265116423368454, Validation accuracy=0.6999998688697815
Epoch 34, CIFAR-10 Batch 4:  Loss=0.2388649582862854, Validation accuracy=0.7071999311447144
Epoch 34, CIFAR-10 Batch 5:  Loss=0.21580390632152557, Validation accuracy=0.7141998410224915
Epoch 35, CIFAR-10 Batch 1:  Loss=0.21800455451011658, Validation accuracy=0.7101998925209045
Epoch 35, CIFAR-10 Batch 2:  Loss=0.23256705701351166, Validation accuracy=0.7177998423576355
Epoch 35, CIFAR-10 Batch 3:  Loss=0.19511345028877258, Validation accuracy=0.7189998626708984
Epoch 35, CIFAR-10 Batch 4:  Loss=0.21604418754577637, Validation accuracy=0.7075998187065125
Epoch 35, CIFAR-10 Batch 5:  Loss=0.20292723178863525, Validation accuracy=0.7127999067306519
Epoch 36, CIFAR-10 Batch 1:  Loss=0.20829053223133087, Validation accuracy=0.7139998078346252
Epoch 36, CIFAR-10 Batch 2:  Loss=0.23347780108451843, Validation accuracy=0.7141998410224915
Epoch 36, CIFAR-10 Batch 3:  Loss=0.18610091507434845, Validation accuracy=0.7153998613357544
Epoch 36, CIFAR-10 Batch 4:  Loss=0.20224900543689728, Validation accuracy=0.7121998071670532
Epoch 36, CIFAR-10 Batch 5:  Loss=0.1913793385028839, Validation accuracy=0.7145999073982239
Epoch 37, CIFAR-10 Batch 1:  Loss=0.19989782571792603, Validation accuracy=0.7117998600006104
Epoch 37, CIFAR-10 Batch 2:  Loss=0.22334618866443634, Validation accuracy=0.7119998931884766
Epoch 37, CIFAR-10 Batch 3:  Loss=0.1857011616230011, Validation accuracy=0.7123998403549194
Epoch 37, CIFAR-10 Batch 4:  Loss=0.19192980229854584, Validation accuracy=0.7051998376846313
Epoch 37, CIFAR-10 Batch 5:  Loss=0.19465085864067078, Validation accuracy=0.712199866771698
Epoch 38, CIFAR-10 Batch 1:  Loss=0.1810031533241272, Validation accuracy=0.7045998573303223
Epoch 38, CIFAR-10 Batch 2:  Loss=0.20639252662658691, Validation accuracy=0.7155998349189758
Epoch 38, CIFAR-10 Batch 3:  Loss=0.1679804027080536, Validation accuracy=0.7221997976303101
Epoch 38, CIFAR-10 Batch 4:  Loss=0.17614823579788208, Validation accuracy=0.7105998396873474
Epoch 38, CIFAR-10 Batch 5:  Loss=0.18444564938545227, Validation accuracy=0.710399866104126
Epoch 39, CIFAR-10 Batch 1:  Loss=0.16920557618141174, Validation accuracy=0.710399866104126
Epoch 39, CIFAR-10 Batch 2:  Loss=0.18458586931228638, Validation accuracy=0.7161998152732849
Epoch 39, CIFAR-10 Batch 3:  Loss=0.1565871387720108, Validation accuracy=0.7179999351501465
Epoch 39, CIFAR-10 Batch 4:  Loss=0.1597345769405365, Validation accuracy=0.7053998708724976
Epoch 39, CIFAR-10 Batch 5:  Loss=0.16446025669574738, Validation accuracy=0.7087998390197754
Epoch 40, CIFAR-10 Batch 1:  Loss=0.1755354404449463, Validation accuracy=0.7065999507904053
Epoch 40, CIFAR-10 Batch 2:  Loss=0.17808759212493896, Validation accuracy=0.7089998722076416
Epoch 40, CIFAR-10 Batch 3:  Loss=0.15181349217891693, Validation accuracy=0.7155998945236206
Epoch 40, CIFAR-10 Batch 4:  Loss=0.14883416891098022, Validation accuracy=0.7039998769760132
Epoch 40, CIFAR-10 Batch 5:  Loss=0.15894439816474915, Validation accuracy=0.7113999128341675
Epoch 41, CIFAR-10 Batch 1:  Loss=0.1620832085609436, Validation accuracy=0.7143998742103577
Epoch 41, CIFAR-10 Batch 2:  Loss=0.16582335531711578, Validation accuracy=0.7115998268127441
Epoch 41, CIFAR-10 Batch 3:  Loss=0.14033624529838562, Validation accuracy=0.710399866104126
Epoch 41, CIFAR-10 Batch 4:  Loss=0.13110825419425964, Validation accuracy=0.7109998464584351
Epoch 41, CIFAR-10 Batch 5:  Loss=0.14068938791751862, Validation accuracy=0.7159999012947083
Epoch 42, CIFAR-10 Batch 1:  Loss=0.15013964474201202, Validation accuracy=0.7135999202728271
Epoch 42, CIFAR-10 Batch 2:  Loss=0.14955472946166992, Validation accuracy=0.7125998139381409
Epoch 42, CIFAR-10 Batch 3:  Loss=0.11894629895687103, Validation accuracy=0.7159998416900635
Epoch 42, CIFAR-10 Batch 4:  Loss=0.1262790858745575, Validation accuracy=0.709199845790863
Epoch 42, CIFAR-10 Batch 5:  Loss=0.127560093998909, Validation accuracy=0.7195998430252075
Epoch 43, CIFAR-10 Batch 1:  Loss=0.12605130672454834, Validation accuracy=0.7163998484611511
Epoch 43, CIFAR-10 Batch 2:  Loss=0.13135750591754913, Validation accuracy=0.7115998268127441
Epoch 43, CIFAR-10 Batch 3:  Loss=0.11576075106859207, Validation accuracy=0.7197998762130737
Epoch 43, CIFAR-10 Batch 4:  Loss=0.10825807601213455, Validation accuracy=0.7159999012947083
Epoch 43, CIFAR-10 Batch 5:  Loss=0.1091403067111969, Validation accuracy=0.7191998362541199
Epoch 44, CIFAR-10 Batch 1:  Loss=0.1096663549542427, Validation accuracy=0.7141998410224915
Epoch 44, CIFAR-10 Batch 2:  Loss=0.12689995765686035, Validation accuracy=0.7141998410224915
Epoch 44, CIFAR-10 Batch 3:  Loss=0.10147009789943695, Validation accuracy=0.7233998775482178
Epoch 44, CIFAR-10 Batch 4:  Loss=0.11081117391586304, Validation accuracy=0.7163998484611511
Epoch 44, CIFAR-10 Batch 5:  Loss=0.11455469578504562, Validation accuracy=0.7131998538970947
Epoch 45, CIFAR-10 Batch 1:  Loss=0.10471072793006897, Validation accuracy=0.7125998735427856
Epoch 45, CIFAR-10 Batch 2:  Loss=0.11697555333375931, Validation accuracy=0.7181998491287231
Epoch 45, CIFAR-10 Batch 3:  Loss=0.09612768888473511, Validation accuracy=0.7177998423576355
Epoch 45, CIFAR-10 Batch 4:  Loss=0.10434608906507492, Validation accuracy=0.709199845790863
Epoch 45, CIFAR-10 Batch 5:  Loss=0.1002340093255043, Validation accuracy=0.7133998870849609
Epoch 46, CIFAR-10 Batch 1:  Loss=0.1043657511472702, Validation accuracy=0.7171998620033264
Epoch 46, CIFAR-10 Batch 2:  Loss=0.10978663712739944, Validation accuracy=0.7147998809814453
Epoch 46, CIFAR-10 Batch 3:  Loss=0.09060336649417877, Validation accuracy=0.7155999541282654
Epoch 46, CIFAR-10 Batch 4:  Loss=0.09989733248949051, Validation accuracy=0.7115998268127441
Epoch 46, CIFAR-10 Batch 5:  Loss=0.09453509747982025, Validation accuracy=0.7185998558998108
Epoch 47, CIFAR-10 Batch 1:  Loss=0.09540928155183792, Validation accuracy=0.7143998742103577
Epoch 47, CIFAR-10 Batch 2:  Loss=0.10392753779888153, Validation accuracy=0.7175998687744141
Epoch 47, CIFAR-10 Batch 3:  Loss=0.08333522081375122, Validation accuracy=0.716999888420105
Epoch 47, CIFAR-10 Batch 4:  Loss=0.09752949327230453, Validation accuracy=0.7107998728752136
Epoch 47, CIFAR-10 Batch 5:  Loss=0.0857909768819809, Validation accuracy=0.7175999283790588
Epoch 48, CIFAR-10 Batch 1:  Loss=0.09151194989681244, Validation accuracy=0.7131998538970947
Epoch 48, CIFAR-10 Batch 2:  Loss=0.09269651025533676, Validation accuracy=0.7169998288154602
Epoch 48, CIFAR-10 Batch 3:  Loss=0.07764540612697601, Validation accuracy=0.71399986743927
Epoch 48, CIFAR-10 Batch 4:  Loss=0.08980701118707657, Validation accuracy=0.7059999108314514
Epoch 48, CIFAR-10 Batch 5:  Loss=0.08528096228837967, Validation accuracy=0.7137998342514038
Epoch 49, CIFAR-10 Batch 1:  Loss=0.07910257577896118, Validation accuracy=0.7209998965263367
Epoch 49, CIFAR-10 Batch 2:  Loss=0.0834222286939621, Validation accuracy=0.7123998999595642
Epoch 49, CIFAR-10 Batch 3:  Loss=0.068524070084095, Validation accuracy=0.7197998762130737
Epoch 49, CIFAR-10 Batch 4:  Loss=0.08364802598953247, Validation accuracy=0.7099998593330383
Epoch 49, CIFAR-10 Batch 5:  Loss=0.07565202564001083, Validation accuracy=0.7143998742103577
Epoch 50, CIFAR-10 Batch 1:  Loss=0.07751552760601044, Validation accuracy=0.7183998227119446
Epoch 50, CIFAR-10 Batch 2:  Loss=0.08336591720581055, Validation accuracy=0.7155998945236206
Epoch 50, CIFAR-10 Batch 3:  Loss=0.06813569366931915, Validation accuracy=0.7191998362541199
Epoch 50, CIFAR-10 Batch 4:  Loss=0.07268206775188446, Validation accuracy=0.7127998471260071
Epoch 50, CIFAR-10 Batch 5:  Loss=0.07546213269233704, Validation accuracy=0.7079998254776001
Epoch 51, CIFAR-10 Batch 1:  Loss=0.07832249999046326, Validation accuracy=0.7135998010635376
Epoch 51, CIFAR-10 Batch 2:  Loss=0.07929863035678864, Validation accuracy=0.71399986743927
Epoch 51, CIFAR-10 Batch 3:  Loss=0.07352077960968018, Validation accuracy=0.7161998152732849
Epoch 51, CIFAR-10 Batch 4:  Loss=0.06913211941719055, Validation accuracy=0.7043998837471008
Epoch 51, CIFAR-10 Batch 5:  Loss=0.0683852806687355, Validation accuracy=0.7113998532295227
Epoch 52, CIFAR-10 Batch 1:  Loss=0.07138774544000626, Validation accuracy=0.7129998803138733
Epoch 52, CIFAR-10 Batch 2:  Loss=0.0702119991183281, Validation accuracy=0.7137998342514038
Epoch 52, CIFAR-10 Batch 3:  Loss=0.06403297185897827, Validation accuracy=0.7151998281478882
Epoch 52, CIFAR-10 Batch 4:  Loss=0.060955166816711426, Validation accuracy=0.71399986743927
Epoch 52, CIFAR-10 Batch 5:  Loss=0.06391982734203339, Validation accuracy=0.7087998390197754
Epoch 53, CIFAR-10 Batch 1:  Loss=0.06950539350509644, Validation accuracy=0.7141998410224915
Epoch 53, CIFAR-10 Batch 2:  Loss=0.08050397038459778, Validation accuracy=0.7133998870849609
Epoch 53, CIFAR-10 Batch 3:  Loss=0.06322576105594635, Validation accuracy=0.71399986743927
Epoch 53, CIFAR-10 Batch 4:  Loss=0.06316187232732773, Validation accuracy=0.7127997875213623
Epoch 53, CIFAR-10 Batch 5:  Loss=0.06346860527992249, Validation accuracy=0.7045997977256775
Epoch 54, CIFAR-10 Batch 1:  Loss=0.06776342540979385, Validation accuracy=0.7177998423576355
Epoch 54, CIFAR-10 Batch 2:  Loss=0.07332353293895721, Validation accuracy=0.7139999270439148
Epoch 54, CIFAR-10 Batch 3:  Loss=0.06088455766439438, Validation accuracy=0.7113999128341675
Epoch 54, CIFAR-10 Batch 4:  Loss=0.0633789673447609, Validation accuracy=0.7105998396873474
Epoch 54, CIFAR-10 Batch 5:  Loss=0.0631679967045784, Validation accuracy=0.7071998119354248
Epoch 55, CIFAR-10 Batch 1:  Loss=0.07798465341329575, Validation accuracy=0.7021998763084412
Epoch 55, CIFAR-10 Batch 2:  Loss=0.07285696268081665, Validation accuracy=0.7087998390197754
Epoch 55, CIFAR-10 Batch 3:  Loss=0.06569588929414749, Validation accuracy=0.7177999019622803
Epoch 55, CIFAR-10 Batch 4:  Loss=0.05527675151824951, Validation accuracy=0.7145998477935791
Epoch 55, CIFAR-10 Batch 5:  Loss=0.06025084853172302, Validation accuracy=0.7097998857498169
Epoch 56, CIFAR-10 Batch 1:  Loss=0.06436019390821457, Validation accuracy=0.7115998864173889
Epoch 56, CIFAR-10 Batch 2:  Loss=0.06333371996879578, Validation accuracy=0.720599889755249
Epoch 56, CIFAR-10 Batch 3:  Loss=0.06803334504365921, Validation accuracy=0.7067998647689819
Epoch 56, CIFAR-10 Batch 4:  Loss=0.053898848593235016, Validation accuracy=0.7043998837471008
Epoch 56, CIFAR-10 Batch 5:  Loss=0.05352358520030975, Validation accuracy=0.7081998586654663
Epoch 57, CIFAR-10 Batch 1:  Loss=0.05871087312698364, Validation accuracy=0.7171998023986816
Epoch 57, CIFAR-10 Batch 2:  Loss=0.0613841749727726, Validation accuracy=0.7167998552322388
Epoch 57, CIFAR-10 Batch 3:  Loss=0.059014081954956055, Validation accuracy=0.7053998112678528
Epoch 57, CIFAR-10 Batch 4:  Loss=0.04395398497581482, Validation accuracy=0.7125998735427856
Epoch 57, CIFAR-10 Batch 5:  Loss=0.04950186610221863, Validation accuracy=0.7119998931884766
Epoch 58, CIFAR-10 Batch 1:  Loss=0.05126107856631279, Validation accuracy=0.7139999270439148
Epoch 58, CIFAR-10 Batch 2:  Loss=0.052152834832668304, Validation accuracy=0.7185998558998108
Epoch 58, CIFAR-10 Batch 3:  Loss=0.05288868397474289, Validation accuracy=0.7161998152732849
Epoch 58, CIFAR-10 Batch 4:  Loss=0.04829757660627365, Validation accuracy=0.703799843788147
Epoch 58, CIFAR-10 Batch 5:  Loss=0.043841782957315445, Validation accuracy=0.7095998525619507
Epoch 59, CIFAR-10 Batch 1:  Loss=0.05129558593034744, Validation accuracy=0.7141998410224915
Epoch 59, CIFAR-10 Batch 2:  Loss=0.04435642063617706, Validation accuracy=0.720599889755249
Epoch 59, CIFAR-10 Batch 3:  Loss=0.04610337316989899, Validation accuracy=0.7155998945236206
Epoch 59, CIFAR-10 Batch 4:  Loss=0.041270554065704346, Validation accuracy=0.7003998756408691
Epoch 59, CIFAR-10 Batch 5:  Loss=0.048930950462818146, Validation accuracy=0.7107998728752136
Epoch 60, CIFAR-10 Batch 1:  Loss=0.04339106008410454, Validation accuracy=0.7189998626708984
Epoch 60, CIFAR-10 Batch 2:  Loss=0.04492587223649025, Validation accuracy=0.7147998809814453
Epoch 60, CIFAR-10 Batch 3:  Loss=0.03894413262605667, Validation accuracy=0.7189998626708984
Epoch 60, CIFAR-10 Batch 4:  Loss=0.035408519208431244, Validation accuracy=0.7121998071670532
Epoch 60, CIFAR-10 Batch 5:  Loss=0.04148343205451965, Validation accuracy=0.7191998362541199
Epoch 61, CIFAR-10 Batch 1:  Loss=0.0428810752928257, Validation accuracy=0.7115998864173889
Epoch 61, CIFAR-10 Batch 2:  Loss=0.04012392833828926, Validation accuracy=0.7203998565673828
Epoch 61, CIFAR-10 Batch 3:  Loss=0.03866593539714813, Validation accuracy=0.7153998613357544
Epoch 61, CIFAR-10 Batch 4:  Loss=0.03297264873981476, Validation accuracy=0.7075998783111572
Epoch 61, CIFAR-10 Batch 5:  Loss=0.03660612180829048, Validation accuracy=0.7143998742103577
Epoch 62, CIFAR-10 Batch 1:  Loss=0.03450064733624458, Validation accuracy=0.718599796295166
Epoch 62, CIFAR-10 Batch 2:  Loss=0.03814726322889328, Validation accuracy=0.7219998240470886
Epoch 62, CIFAR-10 Batch 3:  Loss=0.04334612935781479, Validation accuracy=0.7099998593330383
Epoch 62, CIFAR-10 Batch 4:  Loss=0.030083827674388885, Validation accuracy=0.7055999040603638
Epoch 62, CIFAR-10 Batch 5:  Loss=0.037273108959198, Validation accuracy=0.7113998532295227
Epoch 63, CIFAR-10 Batch 1:  Loss=0.03457082435488701, Validation accuracy=0.715799868106842
Epoch 63, CIFAR-10 Batch 2:  Loss=0.03428924083709717, Validation accuracy=0.7205998301506042
Epoch 63, CIFAR-10 Batch 3:  Loss=0.03167338669300079, Validation accuracy=0.7137998938560486
Epoch 63, CIFAR-10 Batch 4:  Loss=0.0313510000705719, Validation accuracy=0.7117998600006104
Epoch 63, CIFAR-10 Batch 5:  Loss=0.030943531543016434, Validation accuracy=0.720599889755249
Epoch 64, CIFAR-10 Batch 1:  Loss=0.030534878373146057, Validation accuracy=0.7159998416900635
Epoch 64, CIFAR-10 Batch 2:  Loss=0.03482475504279137, Validation accuracy=0.7149998545646667
Epoch 64, CIFAR-10 Batch 3:  Loss=0.032408129423856735, Validation accuracy=0.7125998139381409
Epoch 64, CIFAR-10 Batch 4:  Loss=0.02691776119172573, Validation accuracy=0.7095998525619507
Epoch 64, CIFAR-10 Batch 5:  Loss=0.02774365246295929, Validation accuracy=0.7201999425888062
Epoch 65, CIFAR-10 Batch 1:  Loss=0.03210210055112839, Validation accuracy=0.7195998430252075
Epoch 65, CIFAR-10 Batch 2:  Loss=0.029581373557448387, Validation accuracy=0.7191998958587646
Epoch 65, CIFAR-10 Batch 3:  Loss=0.028042789548635483, Validation accuracy=0.7185999155044556
Epoch 65, CIFAR-10 Batch 4:  Loss=0.024993013590574265, Validation accuracy=0.7149998545646667
Epoch 65, CIFAR-10 Batch 5:  Loss=0.029368264600634575, Validation accuracy=0.7195998430252075
Epoch 66, CIFAR-10 Batch 1:  Loss=0.028628472238779068, Validation accuracy=0.7187998294830322
Epoch 66, CIFAR-10 Batch 2:  Loss=0.026661790907382965, Validation accuracy=0.7193998694419861
Epoch 66, CIFAR-10 Batch 3:  Loss=0.028538662940263748, Validation accuracy=0.7155998349189758
Epoch 66, CIFAR-10 Batch 4:  Loss=0.026902437210083008, Validation accuracy=0.7159998416900635
Epoch 66, CIFAR-10 Batch 5:  Loss=0.02972368337213993, Validation accuracy=0.7125998735427856
Epoch 67, CIFAR-10 Batch 1:  Loss=0.02788686938583851, Validation accuracy=0.7137998938560486
Epoch 67, CIFAR-10 Batch 2:  Loss=0.026235809549689293, Validation accuracy=0.7203998565673828
Epoch 67, CIFAR-10 Batch 3:  Loss=0.03032740391790867, Validation accuracy=0.716999888420105
Epoch 67, CIFAR-10 Batch 4:  Loss=0.024790966883301735, Validation accuracy=0.7185998558998108
Epoch 67, CIFAR-10 Batch 5:  Loss=0.030179612338542938, Validation accuracy=0.7215998768806458
Epoch 68, CIFAR-10 Batch 1:  Loss=0.025543803349137306, Validation accuracy=0.7149999141693115
Epoch 68, CIFAR-10 Batch 2:  Loss=0.02487531304359436, Validation accuracy=0.7205998301506042
Epoch 68, CIFAR-10 Batch 3:  Loss=0.025951918214559555, Validation accuracy=0.7175998687744141
Epoch 68, CIFAR-10 Batch 4:  Loss=0.02427561767399311, Validation accuracy=0.7117998600006104
Epoch 68, CIFAR-10 Batch 5:  Loss=0.027859177440404892, Validation accuracy=0.7167998552322388
Epoch 69, CIFAR-10 Batch 1:  Loss=0.02514108270406723, Validation accuracy=0.714999794960022
Epoch 69, CIFAR-10 Batch 2:  Loss=0.024948429316282272, Validation accuracy=0.7203998565673828
Epoch 69, CIFAR-10 Batch 3:  Loss=0.029663793742656708, Validation accuracy=0.7157999277114868
Epoch 69, CIFAR-10 Batch 4:  Loss=0.02321239560842514, Validation accuracy=0.7093998789787292
Epoch 69, CIFAR-10 Batch 5:  Loss=0.031236032024025917, Validation accuracy=0.7141998410224915
Epoch 70, CIFAR-10 Batch 1:  Loss=0.027181200683116913, Validation accuracy=0.7149999141693115
Epoch 70, CIFAR-10 Batch 2:  Loss=0.02461812272667885, Validation accuracy=0.7213999032974243
Epoch 70, CIFAR-10 Batch 3:  Loss=0.021658804267644882, Validation accuracy=0.7181998491287231
Epoch 70, CIFAR-10 Batch 4:  Loss=0.01769775152206421, Validation accuracy=0.7165998816490173
Epoch 70, CIFAR-10 Batch 5:  Loss=0.025581613183021545, Validation accuracy=0.7171998620033264
Epoch 71, CIFAR-10 Batch 1:  Loss=0.02628903090953827, Validation accuracy=0.7069998979568481
Epoch 71, CIFAR-10 Batch 2:  Loss=0.023124391213059425, Validation accuracy=0.7195998430252075
Epoch 71, CIFAR-10 Batch 3:  Loss=0.024259794503450394, Validation accuracy=0.7171998023986816
Epoch 71, CIFAR-10 Batch 4:  Loss=0.018677968531847, Validation accuracy=0.7135999202728271
Epoch 71, CIFAR-10 Batch 5:  Loss=0.02495468035340309, Validation accuracy=0.7211997509002686
Epoch 72, CIFAR-10 Batch 1:  Loss=0.02289871871471405, Validation accuracy=0.7067998051643372
Epoch 72, CIFAR-10 Batch 2:  Loss=0.024390649050474167, Validation accuracy=0.7191997766494751
Epoch 72, CIFAR-10 Batch 3:  Loss=0.02197733335196972, Validation accuracy=0.7147998809814453
Epoch 72, CIFAR-10 Batch 4:  Loss=0.01779761351644993, Validation accuracy=0.7123997807502747
Epoch 72, CIFAR-10 Batch 5:  Loss=0.021592261269688606, Validation accuracy=0.7205998301506042
Epoch 73, CIFAR-10 Batch 1:  Loss=0.02566126175224781, Validation accuracy=0.7081998586654663
Epoch 73, CIFAR-10 Batch 2:  Loss=0.025029005482792854, Validation accuracy=0.7179998755455017
Epoch 73, CIFAR-10 Batch 3:  Loss=0.020810885354876518, Validation accuracy=0.7159998416900635
Epoch 73, CIFAR-10 Batch 4:  Loss=0.01682908460497856, Validation accuracy=0.7195998430252075
Epoch 73, CIFAR-10 Batch 5:  Loss=0.024554725736379623, Validation accuracy=0.7205998301506042
Epoch 74, CIFAR-10 Batch 1:  Loss=0.02142605371773243, Validation accuracy=0.711199939250946
Epoch 74, CIFAR-10 Batch 2:  Loss=0.02243703044950962, Validation accuracy=0.7203998565673828
Epoch 74, CIFAR-10 Batch 3:  Loss=0.01813054084777832, Validation accuracy=0.7239998579025269
Epoch 74, CIFAR-10 Batch 4:  Loss=0.022283945232629776, Validation accuracy=0.7135998606681824
Epoch 74, CIFAR-10 Batch 5:  Loss=0.02060537412762642, Validation accuracy=0.7153998613357544
Epoch 75, CIFAR-10 Batch 1:  Loss=0.021399807184934616, Validation accuracy=0.707399845123291
Epoch 75, CIFAR-10 Batch 2:  Loss=0.027560919523239136, Validation accuracy=0.7099999189376831
Epoch 75, CIFAR-10 Batch 3:  Loss=0.02120932750403881, Validation accuracy=0.7115998864173889
Epoch 75, CIFAR-10 Batch 4:  Loss=0.01907244510948658, Validation accuracy=0.7103998064994812
Epoch 75, CIFAR-10 Batch 5:  Loss=0.019225025549530983, Validation accuracy=0.7187998294830322
Epoch 76, CIFAR-10 Batch 1:  Loss=0.02522781677544117, Validation accuracy=0.7057998776435852
Epoch 76, CIFAR-10 Batch 2:  Loss=0.024243395775556564, Validation accuracy=0.7113997936248779
Epoch 76, CIFAR-10 Batch 3:  Loss=0.020653950050473213, Validation accuracy=0.7147998809814453
Epoch 76, CIFAR-10 Batch 4:  Loss=0.020075540989637375, Validation accuracy=0.7153998613357544
Epoch 76, CIFAR-10 Batch 5:  Loss=0.02005782164633274, Validation accuracy=0.7195998430252075
Epoch 77, CIFAR-10 Batch 1:  Loss=0.019759226590394974, Validation accuracy=0.7147997617721558
Epoch 77, CIFAR-10 Batch 2:  Loss=0.024899952113628387, Validation accuracy=0.7019999027252197
Epoch 77, CIFAR-10 Batch 3:  Loss=0.019529957324266434, Validation accuracy=0.7087998390197754
Epoch 77, CIFAR-10 Batch 4:  Loss=0.01663251779973507, Validation accuracy=0.7119998335838318
Epoch 77, CIFAR-10 Batch 5:  Loss=0.022007647901773453, Validation accuracy=0.7111998200416565
Epoch 78, CIFAR-10 Batch 1:  Loss=0.019773246720433235, Validation accuracy=0.7131998538970947
Epoch 78, CIFAR-10 Batch 2:  Loss=0.02372964844107628, Validation accuracy=0.7117998003959656
Epoch 78, CIFAR-10 Batch 3:  Loss=0.016285154968500137, Validation accuracy=0.7187998294830322
Epoch 78, CIFAR-10 Batch 4:  Loss=0.01457342877984047, Validation accuracy=0.7151998281478882
Epoch 78, CIFAR-10 Batch 5:  Loss=0.018234150484204292, Validation accuracy=0.7169998288154602
Epoch 79, CIFAR-10 Batch 1:  Loss=0.018522515892982483, Validation accuracy=0.7175998091697693
Epoch 79, CIFAR-10 Batch 2:  Loss=0.02090192772448063, Validation accuracy=0.7099997997283936
Epoch 79, CIFAR-10 Batch 3:  Loss=0.01735720783472061, Validation accuracy=0.7213999032974243
Epoch 79, CIFAR-10 Batch 4:  Loss=0.01301324088126421, Validation accuracy=0.7203998565673828
Epoch 79, CIFAR-10 Batch 5:  Loss=0.015696043148636818, Validation accuracy=0.7197998762130737
Epoch 80, CIFAR-10 Batch 1:  Loss=0.01894948072731495, Validation accuracy=0.7123998403549194
Epoch 80, CIFAR-10 Batch 2:  Loss=0.016260279342532158, Validation accuracy=0.7237998247146606
Epoch 80, CIFAR-10 Batch 3:  Loss=0.014886665157973766, Validation accuracy=0.7229999303817749
Epoch 80, CIFAR-10 Batch 4:  Loss=0.01252256240695715, Validation accuracy=0.7157999277114868
Epoch 80, CIFAR-10 Batch 5:  Loss=0.014558658935129642, Validation accuracy=0.7187998294830322
Epoch 81, CIFAR-10 Batch 1:  Loss=0.014987483620643616, Validation accuracy=0.7231998443603516
Epoch 81, CIFAR-10 Batch 2:  Loss=0.014510495588183403, Validation accuracy=0.7189998626708984
Epoch 81, CIFAR-10 Batch 3:  Loss=0.013355275616049767, Validation accuracy=0.7245997786521912
Epoch 81, CIFAR-10 Batch 4:  Loss=0.011639559641480446, Validation accuracy=0.7153998613357544
Epoch 81, CIFAR-10 Batch 5:  Loss=0.013392811641097069, Validation accuracy=0.7213998436927795
Epoch 82, CIFAR-10 Batch 1:  Loss=0.013182097114622593, Validation accuracy=0.7219998836517334
Epoch 82, CIFAR-10 Batch 2:  Loss=0.01655798964202404, Validation accuracy=0.7151999473571777
Epoch 82, CIFAR-10 Batch 3:  Loss=0.013956381939351559, Validation accuracy=0.718799889087677
Epoch 82, CIFAR-10 Batch 4:  Loss=0.009731466881930828, Validation accuracy=0.720399796962738
Epoch 82, CIFAR-10 Batch 5:  Loss=0.014565729536116123, Validation accuracy=0.721599817276001
Epoch 83, CIFAR-10 Batch 1:  Loss=0.013305666856467724, Validation accuracy=0.7167998552322388
Epoch 83, CIFAR-10 Batch 2:  Loss=0.014533968642354012, Validation accuracy=0.716999888420105
Epoch 83, CIFAR-10 Batch 3:  Loss=0.012171758338809013, Validation accuracy=0.7221998572349548
Epoch 83, CIFAR-10 Batch 4:  Loss=0.00841508712619543, Validation accuracy=0.7171999216079712
Epoch 83, CIFAR-10 Batch 5:  Loss=0.012255646288394928, Validation accuracy=0.7191998958587646
Epoch 84, CIFAR-10 Batch 1:  Loss=0.012645691633224487, Validation accuracy=0.7155998349189758
Epoch 84, CIFAR-10 Batch 2:  Loss=0.012040723115205765, Validation accuracy=0.7221998572349548
Epoch 84, CIFAR-10 Batch 3:  Loss=0.013914854265749454, Validation accuracy=0.7195999026298523
Epoch 84, CIFAR-10 Batch 4:  Loss=0.009389322251081467, Validation accuracy=0.7141998410224915
Epoch 84, CIFAR-10 Batch 5:  Loss=0.010611040517687798, Validation accuracy=0.7261999249458313
Epoch 85, CIFAR-10 Batch 1:  Loss=0.012450816109776497, Validation accuracy=0.722399890422821
Epoch 85, CIFAR-10 Batch 2:  Loss=0.013041820377111435, Validation accuracy=0.7133998274803162
Epoch 85, CIFAR-10 Batch 3:  Loss=0.011655531823635101, Validation accuracy=0.7235997915267944
Epoch 85, CIFAR-10 Batch 4:  Loss=0.0093611478805542, Validation accuracy=0.7177998423576355
Epoch 85, CIFAR-10 Batch 5:  Loss=0.009688027203083038, Validation accuracy=0.7235998511314392
Epoch 86, CIFAR-10 Batch 1:  Loss=0.011956630274653435, Validation accuracy=0.7197999358177185
Epoch 86, CIFAR-10 Batch 2:  Loss=0.010448240675032139, Validation accuracy=0.7141998410224915
Epoch 86, CIFAR-10 Batch 3:  Loss=0.011287158355116844, Validation accuracy=0.718799889087677
Epoch 86, CIFAR-10 Batch 4:  Loss=0.009117203764617443, Validation accuracy=0.7209998369216919
Epoch 86, CIFAR-10 Batch 5:  Loss=0.01112643163651228, Validation accuracy=0.7207998633384705
Epoch 87, CIFAR-10 Batch 1:  Loss=0.011310797184705734, Validation accuracy=0.7147998809814453
Epoch 87, CIFAR-10 Batch 2:  Loss=0.009515434503555298, Validation accuracy=0.7223997712135315
Epoch 87, CIFAR-10 Batch 3:  Loss=0.010363776236772537, Validation accuracy=0.7255998849868774
Epoch 87, CIFAR-10 Batch 4:  Loss=0.007171031553298235, Validation accuracy=0.7155998349189758
Epoch 87, CIFAR-10 Batch 5:  Loss=0.009686494246125221, Validation accuracy=0.7177998423576355
Epoch 88, CIFAR-10 Batch 1:  Loss=0.01149222906678915, Validation accuracy=0.7155998945236206
Epoch 88, CIFAR-10 Batch 2:  Loss=0.010427094995975494, Validation accuracy=0.715799868106842
Epoch 88, CIFAR-10 Batch 3:  Loss=0.011057598516345024, Validation accuracy=0.7215998768806458
Epoch 88, CIFAR-10 Batch 4:  Loss=0.008159306831657887, Validation accuracy=0.7213999032974243
Epoch 88, CIFAR-10 Batch 5:  Loss=0.008546038530766964, Validation accuracy=0.7225998044013977
Epoch 89, CIFAR-10 Batch 1:  Loss=0.009895135648548603, Validation accuracy=0.7167998552322388
Epoch 89, CIFAR-10 Batch 2:  Loss=0.010163695551455021, Validation accuracy=0.7193998098373413
Epoch 89, CIFAR-10 Batch 3:  Loss=0.009824748151004314, Validation accuracy=0.7209998965263367
Epoch 89, CIFAR-10 Batch 4:  Loss=0.006611320190131664, Validation accuracy=0.7183997631072998
Epoch 89, CIFAR-10 Batch 5:  Loss=0.010154088959097862, Validation accuracy=0.7231999039649963
Epoch 90, CIFAR-10 Batch 1:  Loss=0.01014951802790165, Validation accuracy=0.7221998572349548
Epoch 90, CIFAR-10 Batch 2:  Loss=0.0095282644033432, Validation accuracy=0.7245997786521912
Epoch 90, CIFAR-10 Batch 3:  Loss=0.007176887709647417, Validation accuracy=0.725199818611145
Epoch 90, CIFAR-10 Batch 4:  Loss=0.006764047313481569, Validation accuracy=0.7195998430252075
Epoch 90, CIFAR-10 Batch 5:  Loss=0.007977679371833801, Validation accuracy=0.7215998768806458
Epoch 91, CIFAR-10 Batch 1:  Loss=0.009497256949543953, Validation accuracy=0.7063997983932495
Epoch 91, CIFAR-10 Batch 2:  Loss=0.013514293357729912, Validation accuracy=0.7191997766494751
Epoch 91, CIFAR-10 Batch 3:  Loss=0.009037511423230171, Validation accuracy=0.7197998762130737
Epoch 91, CIFAR-10 Batch 4:  Loss=0.008567504584789276, Validation accuracy=0.7183998823165894
Epoch 91, CIFAR-10 Batch 5:  Loss=0.009020522236824036, Validation accuracy=0.7163999080657959
Epoch 92, CIFAR-10 Batch 1:  Loss=0.007796343881636858, Validation accuracy=0.7113998532295227
Epoch 92, CIFAR-10 Batch 2:  Loss=0.011643513105809689, Validation accuracy=0.7173998951911926
Epoch 92, CIFAR-10 Batch 3:  Loss=0.006604225840419531, Validation accuracy=0.7207998037338257
Epoch 92, CIFAR-10 Batch 4:  Loss=0.007020060904324055, Validation accuracy=0.7217998504638672
Epoch 92, CIFAR-10 Batch 5:  Loss=0.007260557264089584, Validation accuracy=0.7177999019622803
Epoch 93, CIFAR-10 Batch 1:  Loss=0.007058230694383383, Validation accuracy=0.7123998403549194
Epoch 93, CIFAR-10 Batch 2:  Loss=0.01074883621186018, Validation accuracy=0.7183998823165894
Epoch 93, CIFAR-10 Batch 3:  Loss=0.007204541936516762, Validation accuracy=0.7179999351501465
Epoch 93, CIFAR-10 Batch 4:  Loss=0.006434853188693523, Validation accuracy=0.7211998701095581
Epoch 93, CIFAR-10 Batch 5:  Loss=0.007312337402254343, Validation accuracy=0.7231999039649963
Epoch 94, CIFAR-10 Batch 1:  Loss=0.009099135175347328, Validation accuracy=0.7113999128341675
Epoch 94, CIFAR-10 Batch 2:  Loss=0.008132671006023884, Validation accuracy=0.7227998375892639
Epoch 94, CIFAR-10 Batch 3:  Loss=0.006944425404071808, Validation accuracy=0.7231998443603516
Epoch 94, CIFAR-10 Batch 4:  Loss=0.007608396466821432, Validation accuracy=0.7179998159408569
Epoch 94, CIFAR-10 Batch 5:  Loss=0.008369640447199345, Validation accuracy=0.7123998403549194
Epoch 95, CIFAR-10 Batch 1:  Loss=0.006686484906822443, Validation accuracy=0.7189998626708984
Epoch 95, CIFAR-10 Batch 2:  Loss=0.009955007582902908, Validation accuracy=0.7223998308181763
Epoch 95, CIFAR-10 Batch 3:  Loss=0.00806867890059948, Validation accuracy=0.7151998281478882
Epoch 95, CIFAR-10 Batch 4:  Loss=0.006423587910830975, Validation accuracy=0.7179998755455017
Epoch 95, CIFAR-10 Batch 5:  Loss=0.007088650017976761, Validation accuracy=0.7125998735427856
Epoch 96, CIFAR-10 Batch 1:  Loss=0.007588584907352924, Validation accuracy=0.7125998735427856
Epoch 96, CIFAR-10 Batch 2:  Loss=0.00792523194104433, Validation accuracy=0.7171999216079712
Epoch 96, CIFAR-10 Batch 3:  Loss=0.00879823137074709, Validation accuracy=0.7141999006271362
Epoch 96, CIFAR-10 Batch 4:  Loss=0.0073485239408910275, Validation accuracy=0.725199818611145
Epoch 96, CIFAR-10 Batch 5:  Loss=0.007139712572097778, Validation accuracy=0.7235998511314392
Epoch 97, CIFAR-10 Batch 1:  Loss=0.0068595753982663155, Validation accuracy=0.7159997820854187
Epoch 97, CIFAR-10 Batch 2:  Loss=0.007698113098740578, Validation accuracy=0.7239998579025269
Epoch 97, CIFAR-10 Batch 3:  Loss=0.00835911463946104, Validation accuracy=0.7149998545646667
Epoch 97, CIFAR-10 Batch 4:  Loss=0.007553663104772568, Validation accuracy=0.7193998694419861
Epoch 97, CIFAR-10 Batch 5:  Loss=0.007314232178032398, Validation accuracy=0.7163999080657959
Epoch 98, CIFAR-10 Batch 1:  Loss=0.006154011934995651, Validation accuracy=0.7153999209403992
Epoch 98, CIFAR-10 Batch 2:  Loss=0.00707691116258502, Validation accuracy=0.7205998301506042
Epoch 98, CIFAR-10 Batch 3:  Loss=0.006568027660250664, Validation accuracy=0.7207998037338257
Epoch 98, CIFAR-10 Batch 4:  Loss=0.005954512860625982, Validation accuracy=0.7155998945236206
Epoch 98, CIFAR-10 Batch 5:  Loss=0.006839243695139885, Validation accuracy=0.7227999567985535
Epoch 99, CIFAR-10 Batch 1:  Loss=0.0065099261701107025, Validation accuracy=0.71399986743927
Epoch 99, CIFAR-10 Batch 2:  Loss=0.0071231527253985405, Validation accuracy=0.7145998477935791
Epoch 99, CIFAR-10 Batch 3:  Loss=0.006846255622804165, Validation accuracy=0.718799889087677
Epoch 99, CIFAR-10 Batch 4:  Loss=0.005248973611742258, Validation accuracy=0.7187998294830322
Epoch 99, CIFAR-10 Batch 5:  Loss=0.007479959633201361, Validation accuracy=0.7219998836517334
Epoch 100, CIFAR-10 Batch 1:  Loss=0.0072721559554338455, Validation accuracy=0.7123998403549194
Epoch 100, CIFAR-10 Batch 2:  Loss=0.007958916015923023, Validation accuracy=0.7137998342514038
Epoch 100, CIFAR-10 Batch 3:  Loss=0.006088641472160816, Validation accuracy=0.7105998396873474
Epoch 100, CIFAR-10 Batch 4:  Loss=0.004985048435628414, Validation accuracy=0.7193998098373413
Epoch 100, CIFAR-10 Batch 5:  Loss=0.005210121627897024, Validation accuracy=0.7245998382568359

Checkpoint

The model has been saved to disk.
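
For reference, a checkpoint like this is typically written at the end of the training loop with tf.train.Saver. The cell below is a minimal, hypothetical sketch, not the project's actual save cell: the stand-in weights variable replaces the real trained model, and only save_model_path matches the project.

In [ ]:
import tensorflow as tf

# Hypothetical sketch: how a TF 1.x checkpoint at './image_classification'
# would typically be written once training finishes.
save_model_path = './image_classification'

graph = tf.Graph()
with graph.as_default():
    # Stand-in for the trained variables of the real model.
    weights = tf.Variable(tf.truncated_normal([32, 32]), name='weights')
    saver = tf.train.Saver()

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    # Writes the '.meta', '.index', and '.data-*' files that the test
    # cell below restores with import_meta_graph().
    saver.save(sess, save_model_path)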

Test Model

Test your model against the test dataset. This is your final accuracy, and it should be greater than 50%. If it isn't, keep tweaking the model architecture and parameters.

In [53]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()
INFO:tensorflow:Restoring parameters from ./image_classification
Testing Accuracy: 0.717169564962387
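
The accuracy loop above relies on helper.batch_features_labels to slice the test set into memory-friendly chunks. The project's actual helper.py isn't shown here, but a generator along these lines would behave the same way (a hypothetical sketch, not the real implementation):

In [ ]:
def batch_features_labels(features, labels, batch_size):
    # Yield (features, labels) slices of at most batch_size samples,
    # so the whole test set never has to sit in memory at once.
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]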

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first: 50% isn't bad for a simple CNN, since pure guessing across the ten classes would only get you 10%. However, you might notice people getting scores well above 80%. That's because we haven't taught you everything there is to know about neural networks yet; a few more techniques are still to come.
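
One such technique is data augmentation: randomly flipping, shifting, and adjusting the training images so the network sees more variation than the 50,000 originals. Below is an illustrative sketch using TF 1.x image ops; it is not part of this project's code, and the specific ops and parameters are just one reasonable choice among many.

In [ ]:
import tensorflow as tf

def augment(image):
    # Randomly perturb a single 32x32x3 image tensor with values in [0, 1].
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    # Pad to 36x36, then crop back to 32x32 for small random shifts.
    image = tf.image.resize_image_with_crop_or_pad(image, 36, 36)
    image = tf.random_crop(image, [32, 32, 3])
    return tf.clip_by_value(image, 0.0, 1.0)

# Applied per image before each training batch, e.g.:
# augmented_batch = tf.map_fn(augment, image_batch)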

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
