Language Translation

In this project, you're going to take a peek into the realm of neural network machine translation. You'll train a sequence-to-sequence model on a dataset of English and French sentence pairs so that it can translate new sentences from English to French.

Get the Data

Since training a model to translate the whole of English to French would take a very long time, we have provided you with a small portion of the English corpus.

In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)

Explore the Data

Play around with view_sentence_range to view different parts of the data.

In [2]:
view_sentence_range = (10, 16)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 10 to 16:
the lime is her least liked fruit , but the banana is my least liked .
he saw a old yellow truck .
india is rainy during june , and it is sometimes warm in november .
that cat was my most loved animal .
he dislikes grapefruit , limes , and lemons .
her least liked fruit is the lemon , but his least liked is the grapefruit .

French sentences 10 to 16:
la chaux est son moins aimé des fruits , mais la banane est mon moins aimé.
il a vu un vieux camion jaune .
inde est pluvieux en juin , et il est parfois chaud en novembre .
ce chat était mon animal le plus aimé .
il n'aime pamplemousse , citrons verts et les citrons .
son fruit est moins aimé le citron , mais son moins aimé est le pamplemousse .

Implement Preprocessing Function

Text to Word Ids

As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll convert source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence in target_text. This will help the neural network predict when the sentence should end.

You can get the <EOS> word id by doing:

target_vocab_to_int['<EOS>']

You can get other word ids using source_vocab_to_int and target_vocab_to_int.
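For example, here is a rough sketch of the expected behavior on a toy sentence pair; the vocabularies and ids below are invented for illustration and are not the ones helper.py actually builds:

# Hypothetical vocabularies for illustration only
source_vocab_to_int = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3, 'he': 4, 'is': 5, 'new': 6}
target_vocab_to_int = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3, 'il': 4, 'est': 5, 'nouveau': 6}

source_ids, target_ids = text_to_ids('he is new', 'il est nouveau',
                                     source_vocab_to_int, target_vocab_to_int)
# source_ids -> [[4, 5, 6]]        (no <EOS> on the source side)
# target_ids -> [[4, 5, 6, 1]]     (<EOS> id appended to each target sentence)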

In [3]:
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """

    source_id_text = []
    target_id_text = []
    
    for entry in source_text.split("\n"):
        entry_ids = []
        for word in entry.split(" "):
            if word == "": continue
            entry_ids.append(source_vocab_to_int[word])
        source_id_text.append(entry_ids)

    for entry in target_text.split("\n"):
        entry_ids = []
        for word in entry.split(" "):
            if word == "": continue
            entry_ids.append(target_vocab_to_int[word])
        entry_ids.append(target_vocab_to_int['<EOS>'])
        target_id_text.append(entry_ids)
    
    return (source_id_text, target_id_text)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
Tests Passed

Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.

In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Build the Neural Network

You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:

  • model_inputs
  • process_decoder_input
  • encoding_layer
  • decoding_layer_train
  • decoding_layer_infer
  • decoding_layer
  • seq2seq_model

Input

Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
  • Targets placeholder with rank 2.
  • Learning rate placeholder with rank 0.
  • Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
  • Target sequence length placeholder named "target_sequence_length" with rank 1.
  • Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
  • Source sequence length placeholder named "source_sequence_length" with rank 1.

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)

In [7]:
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
    input_ = tf.placeholder(tf.int32, [None, None], name="input")
    targets_ = tf.placeholder(tf.int32, [None, None], name="targets")
    learning_rate_ = tf.placeholder(tf.float32, name="learning_rate")
    keep_prob_ = tf.placeholder(tf.float32, name="keep_prob")
    target_seq_len_ = tf.placeholder(tf.int32, [None], name="target_sequence_length")
    max_target_seq_len_ = tf.reduce_max(target_seq_len_, name="max_target_len")
    source_seq_len_ = tf.placeholder(tf.int32, [None], name="source_sequence_length")
    
    return input_, targets_, learning_rate_, keep_prob_, target_seq_len_, max_target_seq_len_, source_seq_len_


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Process Decoder Input

Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
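As a concrete illustration of the transformation (the ids below are invented for this example):

# Hypothetical illustration only: assume target_vocab_to_int['<GO>'] == 3
# target_data (batch_size = 2)          decoder input after processing
# [[10, 11, 12,  1],                    [[ 3, 10, 11, 12],
#  [20, 21,  1,  0]]           --->      [ 3, 20, 21,  1]]
# The last column of target_data is dropped and a column of <GO> ids is
# prepended, so the decoder sees <GO> followed by the target shifted right by one step.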

In [8]:
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    target = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), target], 1)
    return decoder_input

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
Tests Passed

Encoding

Implement encoding_layer() to create an Encoder RNN layer:

In [9]:
from imp import reload
reload(tests)

def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, 
                   source_sequence_length, source_vocab_size, 
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """

    # This proved useful to refer to:
    # https://github.com/udacity/deep-learning/blob/master/seq2seq/sequence_to_sequence_implementation.ipynb
    
    embed = tf.contrib.layers.embed_sequence(
        rnn_inputs, 
        source_vocab_size, 
        encoding_embedding_size)
        
    make_lstm = lambda size: tf.contrib.rnn.LSTMCell(
        size,
        initializer=tf.random_uniform_initializer(-0.1, 0.1))
    
    layer = tf.contrib.rnn.MultiRNNCell(
        [make_lstm(rnn_size) for _ in range(num_layers)])

    layer = tf.contrib.rnn.DropoutWrapper(
        layer, 
        output_keep_prob=keep_prob)
    
    encoder_output, encoder_state = tf.nn.dynamic_rnn(
        layer, 
        embed, 
        sequence_length=source_sequence_length, 
        dtype=tf.float32)
    
    return encoder_output, encoder_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
Tests Passed

Decoding - Training

Create a training decoding layer:

In [10]:
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, 
                         target_sequence_length, max_summary_length, 
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    training_helper = tf.contrib.seq2seq.TrainingHelper(
        inputs=dec_embed_input,
        sequence_length=target_sequence_length)
    basic_decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, training_helper, encoder_state, output_layer)
    output, _ = tf.contrib.seq2seq.dynamic_decode(
        basic_decoder, 
        impute_finished=True, 
        maximum_iterations=max_summary_length)
    return output



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
Tests Passed

Decoding - Inference

Create inference decoder:

In [11]:
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    
    start_tokens = tf.tile(
        tf.constant([start_of_sequence_id], dtype=tf.int32), 
        [batch_size])
    
    greedy_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
        dec_embeddings, start_tokens, end_of_sequence_id)
    
    basic_decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, greedy_helper, encoder_state, output_layer)
    
    output, _ = tf.contrib.seq2seq.dynamic_decode(
        basic_decoder,
        impute_finished=True,
        maximum_iterations=max_target_sequence_length)
    return output



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
Tests Passed

Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer.

  • Embed the target sequences
  • Construct the decoder LSTM cell (just like you constructed the encoder cell above)
  • Create an output layer to map the outputs of the decoder to the elements of our vocabulary
  • Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
  • Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
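If the sharing mechanics are unfamiliar, here is a minimal, self-contained sketch of the pattern using tf.get_variable directly (a toy example, not part of the project code):

import tensorflow as tf

with tf.variable_scope("decode"):
    w_train = tf.get_variable("w", shape=[2, 2])   # variable "decode/w" is created here
with tf.variable_scope("decode", reuse=True):
    w_infer = tf.get_variable("w", shape=[2, 2])   # the existing "decode/w" is returned

print(w_train is w_infer)  # True: training and inference share the same weights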

In [12]:
def decoding_layer(dec_input, encoder_state,
                   target_sequence_length, max_target_sequence_length,
                   rnn_size,
                   num_layers, target_vocab_to_int, target_vocab_size,
                   batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    
    start_of_sequence_id = target_vocab_to_int['<GO>']
    end_of_sequence_id = target_vocab_to_int['<EOS>']
    
    dec_embeddings = tf.Variable(
        tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    
    make_lstm = lambda size: tf.contrib.rnn.LSTMCell(
        size,
        initializer=tf.random_uniform_initializer(-0.1, 0.1))
    layer = tf.contrib.rnn.MultiRNNCell(
        [make_lstm(rnn_size) for _ in range(num_layers)])
    
    output_layer = Dense(
        target_vocab_size,
        kernel_initializer=tf.truncated_normal_initializer(
            mean=0.0,
            stddev=0.1))
    
    with tf.variable_scope("decode"):
        training = decoding_layer_train(
            encoder_state,
            layer,
            dec_embed_input,
            target_sequence_length,
            max_target_sequence_length,
            output_layer,
            keep_prob)
    
    with tf.variable_scope("decode", reuse=True):
        inference = decoding_layer_infer(
            encoder_state,
            layer,
            dec_embeddings,
            start_of_sequence_id,
            end_of_sequence_id,
            max_target_sequence_length,
            target_vocab_size,
            output_layer,
            batch_size,
            keep_prob)

    
    return training, inference



"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
  • Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
  • Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
In [13]:
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param max_target_sentence_length: Maximum length of target sequences
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    _, enc_state = encoding_layer(
        input_data,
        rnn_size,
        num_layers,
        keep_prob,
        source_sequence_length,
        source_vocab_size,
        enc_embedding_size)
    dec_input = process_decoder_input(
        target_data,
        target_vocab_to_int,
        batch_size)
    training_dec_output, inference_dec_output = decoding_layer(
        dec_input,
        enc_state,
        target_sequence_length,
        max_target_sentence_length,
        rnn_size,
        num_layers,
        target_vocab_to_int,
        target_vocab_size,
        batch_size,
        keep_prob,
        dec_embedding_size)
    
    return training_dec_output, inference_dec_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
Tests Passed

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set num_layers to the number of layers.
  • Set encoding_embedding_size to the size of the embedding for the encoder.
  • Set decoding_embedding_size to the size of the embedding for the decoder.
  • Set learning_rate to the learning rate.
  • Set keep_probability to the Dropout keep probability.
  • Set display_step to the number of batches between each debug output statement.
In [14]:
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 300
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 5
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.60
display_step = 25

Build the Graph

Build the graph using the neural network you implemented.

In [15]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])

train_graph = tf.Graph()
with train_graph.as_default():
    input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()

    #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)

    train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
                                                   targets,
                                                   keep_prob,
                                                   batch_size,
                                                   source_sequence_length,
                                                   target_sequence_length,
                                                   max_target_sequence_length,
                                                   len(source_vocab_to_int),
                                                   len(target_vocab_to_int),
                                                   encoding_embedding_size,
                                                   decoding_embedding_size,
                                                   rnn_size,
                                                   num_layers,
                                                   target_vocab_to_int)


    training_logits = tf.identity(train_logits.rnn_output, name='logits')
    inference_logits = tf.identity(inference_logits.sample_id, name='predictions')

    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            training_logits,
            targets,
            masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

Batch and pad the source and target sequences

In [16]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]


def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
    """Batch targets, sources, and the lengths of their sentences together"""
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size

        # Slice the right amount for the batch
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]

        # Pad
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))

        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))

        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))

        yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths

Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.

In [17]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
    """
    Calculate accuracy
    """
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(
            target,
            [(0,0),(0,max_seq - target.shape[1])],
            'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(
            logits,
            [(0,0),(0,max_seq - logits.shape[1])],
            'constant')

    return np.mean(np.equal(target, logits))

# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
                                                                                                             valid_target,
                                                                                                             batch_size,
                                                                                                             source_vocab_to_int['<PAD>'],
                                                                                                             target_vocab_to_int['<PAD>']))                                                                                                  
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(epochs):
        for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
                get_batches(train_source, train_target, batch_size,
                            source_vocab_to_int['<PAD>'],
                            target_vocab_to_int['<PAD>'])):

            _, loss = sess.run(
                [train_op, cost],
                {input_data: source_batch,
                 targets: target_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})


            if batch_i % display_step == 0 and batch_i > 0:


                batch_train_logits = sess.run(
                    inference_logits,
                    {input_data: source_batch,
                     source_sequence_length: sources_lengths,
                     target_sequence_length: targets_lengths,
                     keep_prob: 1.0})


                batch_valid_logits = sess.run(
                    inference_logits,
                    {input_data: valid_sources_batch,
                     source_sequence_length: valid_sources_lengths,
                     target_sequence_length: valid_targets_lengths,
                     keep_prob: 1.0})

                train_acc = get_accuracy(target_batch, batch_train_logits)

                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')
Epoch   0 Batch   25/459 - Train Accuracy: 0.4379, Validation Accuracy: 0.4695, Loss: 2.5191
Epoch   0 Batch   50/459 - Train Accuracy: 0.4407, Validation Accuracy: 0.4912, Loss: 2.2619
Epoch   0 Batch   75/459 - Train Accuracy: 0.4917, Validation Accuracy: 0.5300, Loss: 1.9297
Epoch   0 Batch  100/459 - Train Accuracy: 0.4986, Validation Accuracy: 0.5361, Loss: 1.7546
Epoch   0 Batch  125/459 - Train Accuracy: 0.5144, Validation Accuracy: 0.5179, Loss: 1.7833
Epoch   0 Batch  150/459 - Train Accuracy: 0.5145, Validation Accuracy: 0.5421, Loss: 1.6966
Epoch   0 Batch  175/459 - Train Accuracy: 0.5215, Validation Accuracy: 0.5365, Loss: 1.5625
Epoch   0 Batch  200/459 - Train Accuracy: 0.5212, Validation Accuracy: 0.5532, Loss: 1.3552
Epoch   0 Batch  225/459 - Train Accuracy: 0.5272, Validation Accuracy: 0.5541, Loss: 1.3052
Epoch   0 Batch  250/459 - Train Accuracy: 0.5413, Validation Accuracy: 0.5447, Loss: 1.1185
Epoch   0 Batch  275/459 - Train Accuracy: 0.5195, Validation Accuracy: 0.5545, Loss: 1.1327
Epoch   0 Batch  300/459 - Train Accuracy: 0.5148, Validation Accuracy: 0.5344, Loss: 1.0709
Epoch   0 Batch  325/459 - Train Accuracy: 0.5300, Validation Accuracy: 0.5626, Loss: 0.9415
Epoch   0 Batch  350/459 - Train Accuracy: 0.5620, Validation Accuracy: 0.5768, Loss: 0.9706
Epoch   0 Batch  375/459 - Train Accuracy: 0.5600, Validation Accuracy: 0.5964, Loss: 0.9914
Epoch   0 Batch  400/459 - Train Accuracy: 0.5588, Validation Accuracy: 0.5670, Loss: 0.9560
Epoch   0 Batch  425/459 - Train Accuracy: 0.5793, Validation Accuracy: 0.5998, Loss: 0.9035
Epoch   0 Batch  450/459 - Train Accuracy: 0.5910, Validation Accuracy: 0.6009, Loss: 0.8716
Epoch   1 Batch   25/459 - Train Accuracy: 0.6095, Validation Accuracy: 0.6112, Loss: 0.7855
Epoch   1 Batch   50/459 - Train Accuracy: 0.6222, Validation Accuracy: 0.6283, Loss: 0.7246
Epoch   1 Batch   75/459 - Train Accuracy: 0.6312, Validation Accuracy: 0.6158, Loss: 0.7238
Epoch   1 Batch  100/459 - Train Accuracy: 0.6537, Validation Accuracy: 0.6280, Loss: 0.6558
Epoch   1 Batch  125/459 - Train Accuracy: 0.6855, Validation Accuracy: 0.6602, Loss: 0.6041
Epoch   1 Batch  150/459 - Train Accuracy: 0.6362, Validation Accuracy: 0.6386, Loss: 0.6428
Epoch   1 Batch  175/459 - Train Accuracy: 0.6648, Validation Accuracy: 0.6556, Loss: 0.5596
Epoch   1 Batch  200/459 - Train Accuracy: 0.6692, Validation Accuracy: 0.6991, Loss: 0.5308
Epoch   1 Batch  225/459 - Train Accuracy: 0.6430, Validation Accuracy: 0.6855, Loss: 0.5178
Epoch   1 Batch  250/459 - Train Accuracy: 0.7090, Validation Accuracy: 0.6632, Loss: 0.4822
Epoch   1 Batch  275/459 - Train Accuracy: 0.6898, Validation Accuracy: 0.6736, Loss: 0.4656
Epoch   1 Batch  300/459 - Train Accuracy: 0.6941, Validation Accuracy: 0.6636, Loss: 0.4548
Epoch   1 Batch  325/459 - Train Accuracy: 0.7227, Validation Accuracy: 0.6892, Loss: 0.3929
Epoch   1 Batch  350/459 - Train Accuracy: 0.6963, Validation Accuracy: 0.6961, Loss: 0.4127
Epoch   1 Batch  375/459 - Train Accuracy: 0.7018, Validation Accuracy: 0.6797, Loss: 0.4011
Epoch   1 Batch  400/459 - Train Accuracy: 0.3802, Validation Accuracy: 0.4094, Loss: 4.9914
Epoch   1 Batch  425/459 - Train Accuracy: 0.5837, Validation Accuracy: 0.6348, Loss: 0.7862
Epoch   1 Batch  450/459 - Train Accuracy: 0.6777, Validation Accuracy: 0.6620, Loss: 0.5709
Epoch   2 Batch   25/459 - Train Accuracy: 0.6792, Validation Accuracy: 0.6908, Loss: 0.4492
Epoch   2 Batch   50/459 - Train Accuracy: 0.7395, Validation Accuracy: 0.7067, Loss: 0.4247
Epoch   2 Batch   75/459 - Train Accuracy: 0.7478, Validation Accuracy: 0.7138, Loss: 0.4003
Epoch   2 Batch  100/459 - Train Accuracy: 0.7470, Validation Accuracy: 0.7203, Loss: 0.3435
Epoch   2 Batch  125/459 - Train Accuracy: 0.7600, Validation Accuracy: 0.6992, Loss: 0.3165
Epoch   2 Batch  150/459 - Train Accuracy: 0.7455, Validation Accuracy: 0.7455, Loss: 0.3357
Epoch   2 Batch  175/459 - Train Accuracy: 0.8022, Validation Accuracy: 0.7758, Loss: 0.2732
Epoch   2 Batch  200/459 - Train Accuracy: 0.7805, Validation Accuracy: 0.7912, Loss: 0.2596
Epoch   2 Batch  225/459 - Train Accuracy: 0.7712, Validation Accuracy: 0.7650, Loss: 0.2600
Epoch   2 Batch  250/459 - Train Accuracy: 0.8244, Validation Accuracy: 0.7912, Loss: 0.2312
Epoch   2 Batch  275/459 - Train Accuracy: 0.8183, Validation Accuracy: 0.8227, Loss: 0.2251
Epoch   2 Batch  300/459 - Train Accuracy: 0.8227, Validation Accuracy: 0.8211, Loss: 0.2343
Epoch   2 Batch  325/459 - Train Accuracy: 0.8621, Validation Accuracy: 0.8415, Loss: 0.1752
Epoch   2 Batch  350/459 - Train Accuracy: 0.8072, Validation Accuracy: 0.7920, Loss: 0.2318
Epoch   2 Batch  375/459 - Train Accuracy: 0.8408, Validation Accuracy: 0.8405, Loss: 0.2072
Epoch   2 Batch  400/459 - Train Accuracy: 0.8613, Validation Accuracy: 0.8491, Loss: 0.1680
Epoch   2 Batch  425/459 - Train Accuracy: 0.8618, Validation Accuracy: 0.8626, Loss: 0.1422
Epoch   2 Batch  450/459 - Train Accuracy: 0.8713, Validation Accuracy: 0.8583, Loss: 0.1439
Epoch   3 Batch   25/459 - Train Accuracy: 0.8556, Validation Accuracy: 0.8671, Loss: 0.1333
Epoch   3 Batch   50/459 - Train Accuracy: 0.8777, Validation Accuracy: 0.8662, Loss: 0.1263
Epoch   3 Batch   75/459 - Train Accuracy: 0.9043, Validation Accuracy: 0.8742, Loss: 0.1169
Epoch   3 Batch  100/459 - Train Accuracy: 0.8871, Validation Accuracy: 0.8682, Loss: 0.0993
Epoch   3 Batch  125/459 - Train Accuracy: 0.9100, Validation Accuracy: 0.8665, Loss: 0.1008
Epoch   3 Batch  150/459 - Train Accuracy: 0.8740, Validation Accuracy: 0.8647, Loss: 0.1107
Epoch   3 Batch  175/459 - Train Accuracy: 0.9065, Validation Accuracy: 0.8758, Loss: 0.0871
Epoch   3 Batch  200/459 - Train Accuracy: 0.9000, Validation Accuracy: 0.8814, Loss: 0.0871
Epoch   3 Batch  225/459 - Train Accuracy: 0.8942, Validation Accuracy: 0.8862, Loss: 0.0886
Epoch   3 Batch  250/459 - Train Accuracy: 0.9303, Validation Accuracy: 0.8898, Loss: 0.0744
Epoch   3 Batch  275/459 - Train Accuracy: 0.9141, Validation Accuracy: 0.8865, Loss: 0.0839
Epoch   3 Batch  300/459 - Train Accuracy: 0.8886, Validation Accuracy: 0.9011, Loss: 0.0955
Epoch   3 Batch  325/459 - Train Accuracy: 0.9149, Validation Accuracy: 0.8945, Loss: 0.0680
Epoch   3 Batch  350/459 - Train Accuracy: 0.9197, Validation Accuracy: 0.8898, Loss: 0.0826
Epoch   3 Batch  375/459 - Train Accuracy: 0.9065, Validation Accuracy: 0.8873, Loss: 0.0895
Epoch   3 Batch  400/459 - Train Accuracy: 0.9090, Validation Accuracy: 0.8862, Loss: 0.0986
Epoch   3 Batch  425/459 - Train Accuracy: 0.8017, Validation Accuracy: 0.8012, Loss: 0.2647
Epoch   3 Batch  450/459 - Train Accuracy: 0.8797, Validation Accuracy: 0.8617, Loss: 0.1088
Epoch   4 Batch   25/459 - Train Accuracy: 0.9035, Validation Accuracy: 0.8991, Loss: 0.0719
Epoch   4 Batch   50/459 - Train Accuracy: 0.9067, Validation Accuracy: 0.9177, Loss: 0.0699
Epoch   4 Batch   75/459 - Train Accuracy: 0.9425, Validation Accuracy: 0.8998, Loss: 0.0676
Epoch   4 Batch  100/459 - Train Accuracy: 0.9083, Validation Accuracy: 0.9177, Loss: 0.0587
Epoch   4 Batch  125/459 - Train Accuracy: 0.9130, Validation Accuracy: 0.9065, Loss: 0.0603
Epoch   4 Batch  150/459 - Train Accuracy: 0.9153, Validation Accuracy: 0.9026, Loss: 0.0636
Epoch   4 Batch  175/459 - Train Accuracy: 0.9300, Validation Accuracy: 0.9191, Loss: 0.0539
Epoch   4 Batch  200/459 - Train Accuracy: 0.9405, Validation Accuracy: 0.9095, Loss: 0.0499
Epoch   4 Batch  225/459 - Train Accuracy: 0.9133, Validation Accuracy: 0.9267, Loss: 0.0591
Epoch   4 Batch  250/459 - Train Accuracy: 0.9317, Validation Accuracy: 0.9229, Loss: 0.0499
Epoch   4 Batch  275/459 - Train Accuracy: 0.9300, Validation Accuracy: 0.9136, Loss: 0.0569
Epoch   4 Batch  300/459 - Train Accuracy: 0.9281, Validation Accuracy: 0.9365, Loss: 0.0698
Epoch   4 Batch  325/459 - Train Accuracy: 0.9433, Validation Accuracy: 0.9250, Loss: 0.0461
Epoch   4 Batch  350/459 - Train Accuracy: 0.9322, Validation Accuracy: 0.9308, Loss: 0.0557
Epoch   4 Batch  375/459 - Train Accuracy: 0.9275, Validation Accuracy: 0.9274, Loss: 0.0589
Epoch   4 Batch  400/459 - Train Accuracy: 0.9408, Validation Accuracy: 0.9200, Loss: 0.0815
Epoch   4 Batch  425/459 - Train Accuracy: 0.9255, Validation Accuracy: 0.9241, Loss: 0.0553
Epoch   4 Batch  450/459 - Train Accuracy: 0.9210, Validation Accuracy: 0.9355, Loss: 0.0529
Model Trained and Saved

Save Parameters

Save the batch_size and save_path parameters for inference.

In [18]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)

Checkpoint

In [19]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

  • Convert the sentence to lowercase
  • Convert words into ids using vocab_to_int
    • Convert words not in the vocabulary, to the <UNK> word id.
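A quick sketch of the expected behavior (the vocabulary and ids below are made up for illustration):

# Hypothetical vocabulary for illustration only
vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, '.': 13}
sentence_to_seq('He saw a Zeppelin .', vocab_to_int)
# -> [10, 11, 12, 2, 13]  ('zeppelin' is not in the vocabulary, so it maps to <UNK>)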
In [20]:
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    result = []
    for word in sentence.lower().split(" "):
        if word not in vocab_to_int:
            result.append(vocab_to_int["<UNK>"])
        else:
            result.append(vocab_to_int[word])
    return result


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
Tests Passed

Translate

This will translate translate_sentence from English to French.

In [28]:
#translate_sentence = 'he saw a old yellow truck .' # -> il est une au . <EOS>
#translate_sentence = 'he likes mangos and oranges .' # -> la pamplemousse le fruit , mais l'orange est le aimé. <EOS>
translate_sentence = 'she eats and apple with joy .' # -> les chaud pluvieux au printemps , et il est printemps . <EOS>
#translate_sentence = 'spring is the best time to eat apples .' # -> l'orange est leur est leur moins automne . <EOS>



"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]

print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))

print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in translate_logits]))
print('  French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
INFO:tensorflow:Restoring parameters from checkpoints/dev
Input
  Word Ids:      [191, 2, 148, 25, 2, 2, 214]
  English Words: ['she', '<UNK>', 'and', 'apple', '<UNK>', '<UNK>', '.']

Prediction
  Word Ids:      [131, 216, 306, 227, 143, 72, 146, 111, 299, 143, 273, 1]
  French Words: les chaud pluvieux au printemps , et il est printemps . <EOS>

Imperfect Translation

You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words, out of the thousands used in everyday English, you're only going to see good results for sentences built from those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.

You can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and covers a richer range of topics. However, it will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you only play with the WMT10 corpus after you've submitted this project.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.

In [ ]: