Transformer Network Application: Question Answering
Welcome to Week 4's third and final lab of the course! Congratulations on making it this far. In this notebook you'll explore another application of the transformer architecture that you built.
After this assignment you'll be able to:
Perform extractive Question Answering
Fine-tune a pre-trained transformer model to a custom dataset
Implement a QA model in TensorFlow and PyTorch
1 - Extractive Question Answering
Question answering (QA) is a task of natural language processing that aims to automatically answer questions. The goal of extractive QA is to identify the portion of the text that contains the answer to a question. For example, when tasked with answering the question 'When will Jane go to Africa?' given the text data 'Jane visits Africa in September', the question answering model will highlight 'September'.
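To get a quick feel for the task, here is a minimal sketch using the 🤗 pipeline API. The checkpoint name is an illustrative, publicly available default, not the model you will fine-tune later in this lab:

```python
from transformers import pipeline

# Minimal extractive-QA sketch; the checkpoint is an assumed default,
# not the model fine-tuned below.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(question="When will Jane go to Africa?",
            context="Jane visits Africa in September")
print(result["answer"])  # 'September'; result also holds start/end char indices
```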
You will use a variation of the Transformer model you built in the last assignment to answer questions about stories.
You will implement an extractive QA model in both TensorFlow and PyTorch.
Recommendation:
If you are interested, check out Course 4, Natural Language Processing with Attention Models, of our Natural Language Processing Specialization, where you can learn how to build Transformers and perform QA using the Trax library.
1.1 - Data preprocessing
Run the following cell to load the QA bAbI dataset, which is one of the bAbI datasets generated by Facebook AI Research to advance natural language processing.
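If you want to reproduce the loading step outside this notebook, a hedged sketch with 🤗 datasets might look like the following. The `type` and `task_no` values are assumptions about the Hub copy of bAbI; the lab's own cell may use a different task or a local copy of the data:

```python
from datasets import load_dataset

# Assumed config: the 'babi_qa' Hub dataset takes `type` and `task_no`.
babi = load_dataset("babi_qa", type="en-10k", task_no="qa1")
print(babi["train"][0])
```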
Take a look at the format of the data. For a given story, there are two sentences which serve as the context, and one question. Each of these phrases has an ID. There is also a supporting fact ID which refers to a sentence in the story that helps answer the question. For example, for the question 'What is east of the hallway?', the supporting fact 'The bedroom is east of the hallway' has the ID '2'. There is also the answer, 'bedroom' for the question.
Check and see if the entire dataset of stories has this format.
To make the data easier to work with, you will flatten the dataset to transform it from a dictionary structure to a table structure.
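A minimal sketch of that flattening step, assuming the dataset object from the loading cell above:

```python
# Flatten nested dict columns (e.g. 'story') into flat, table-like columns
# such as 'story.text' and 'story.answer'.
flattened_babi = babi.flatten()
print(flattened_babi["train"].column_names)
```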
Now it is much easier to access the information you need! You can now easily extract the answer, question, and facts from the story, and also join the facts into a single entry under 'sentences'.
The goal of extractive QA is to find the part of the text that contains the answer to the question. You will identify the position of the answer using the indexes of the string. For example, if the answer to some question was 'September', you would need to find the start and end string indices of the word 'September' in the context sentence 'Jane visits Africa in September.'
Use this next function to get the start and end indices of the answer in each of the stories in your dataset.
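A sketch of what such a helper could look like; the function and field names here are hypothetical, chosen to match the flattened columns described above:

```python
def get_start_end_idx(story):
    # Hypothetical helper: find the answer's character span inside the
    # joined context; find() returns -1 if the answer is missing.
    str_idx = story["sentences"].find(story["answer"])
    end_idx = str_idx + len(story["answer"])
    return {"str_idx": str_idx, "end_idx": end_idx}

# Usage sketch: map it over the flattened dataset.
# processed = flattened_babi.map(get_start_end_idx)
```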
1.2 - Tokenize and Align with 🤗 Library
Now you have all the data you need to train a Transformer model to perform Question Answering! You are ready for a task you may have already encountered in the Named-Entity Recognition lab - tokenizing and aligning your input. To feed text data to a Transformer model, you will need to tokenize your input using a 🤗 Transformer tokenizer. It is crucial that the tokenizer you use matches the Transformer model type you are using! In this exercise, you will use the 🤗 DistilBERT fast tokenizer, which standardizes the length of your sequence to 512 and pads with zeros.
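As a sketch, loading the fast tokenizer and encoding a (context, question) pair might look like this; the checkpoint name is assumed:

```python
from transformers import DistilBertTokenizerFast

# Assumed checkpoint; any DistilBERT checkpoint with a fast tokenizer works.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

encoding = tokenizer(
    "The bedroom is east of the hallway",  # context
    "What is east of the hallway?",        # question
    padding="max_length",                  # pad out to 512 with zeros
    truncation=True,
)
print(len(encoding["input_ids"]))  # 512
```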
Transformer models are often trained by tokenizers that split words into subwords. For instance, the word 'Africa' might get split into multiple subtokens. This can create some misalignment between the character indices in the dataset and the tokens generated by the tokenizer, since the tokenizer can split one word into several, or add special tokens. Before processing, it is important that you align the start and end indices with the tokens associated with the target answer word using a tokenize_and_align() function. In this case, since you are interested in the start and end indices of the answer, you will want to map each character index in the sentence to the index of the corresponding token.
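A hedged sketch of such an alignment function, using the fast tokenizer's char_to_token() to map character indices to token indices; the field names follow the hypothetical helper above:

```python
def tokenize_and_align(example):
    encoding = tokenizer(example["sentences"], example["question"],
                         padding="max_length", truncation=True)
    # Map the answer's character span to token positions; char_to_token()
    # returns None if the character fell on a special or truncated token.
    start_pos = encoding.char_to_token(example["str_idx"])
    end_pos = encoding.char_to_token(example["end_idx"] - 1)
    if start_pos is None:
        start_pos = tokenizer.model_max_length
    if end_pos is None:
        end_pos = tokenizer.model_max_length
    encoding["start_positions"] = start_pos
    encoding["end_positions"] = end_pos
    return encoding
```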
What you should remember:
The goal of extractive QA is to identify the portion of the text that contains the answer to a question.
Transformer models are often trained by tokenizers that split words into subwords.
Before processing, it is important that you align the start and end indices with the tokens associated with the target answer word.
2.1 - TensorFlow implementation
Train and test datasets
Note:
In the TensorFlow implementation, you will have to set the data format type to tensors, which may create ragged tensors (tensors of different lengths).
You will have to convert the ragged tensors to normal tensors using the to_tensor() method, which pads the tensors and sets the dimensions to [None, tokenizer.model_max_length], so you can feed different-size tensors into your model based on the batch size.
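A minimal sketch of that conversion; the column access is an assumption about how the tokenized dataset is stored:

```python
import tensorflow as tf

# Assume `train_ds["input_ids"]` is a list of variable-length id lists.
ragged = tf.ragged.constant(train_ds["input_ids"])
dense = ragged.to_tensor(shape=[None, tokenizer.model_max_length])
print(dense.shape)  # (num_examples, 512)
```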
Training
It is finally time to start training your model!
Create a custom training function using tf.GradientTape()
Target two loss functions, one for the start index and one for the end index.
tf.GradientTape() records the operations performed during forward prop for automatic differentiation during backprop.
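A hedged sketch of such a training step is shown below; the checkpoint, optimizer, and learning rate are illustrative choices, not the lab's exact settings:

```python
import tensorflow as tf
from transformers import TFDistilBertForQuestionAnswering

model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

@tf.function
def train_step(input_ids, attention_mask, start_positions, end_positions):
    with tf.GradientTape() as tape:
        outputs = model(input_ids, attention_mask=attention_mask)
        # One loss per index: where the answer starts and where it ends.
        start_loss = loss_fn(start_positions, outputs.start_logits)
        end_loss = loss_fn(end_positions, outputs.end_logits)
        loss = (start_loss + end_loss) / 2.0
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```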
Take a look at your losses and try playing around with some of the hyperparameters for better results!
You have successfully trained your model to help automatically answer questions! Try asking it a question about a story.
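For instance, a hedged inference sketch; the story and question are made up for illustration:

```python
story = "The kitchen is west of the hallway. The bedroom is east of the hallway."
question = "What is east of the hallway?"

inputs = tokenizer(story, question, return_tensors="tf")
outputs = model(inputs["input_ids"], attention_mask=inputs["attention_mask"])
start = int(tf.argmax(outputs.start_logits, axis=1)[0])
end = int(tf.argmax(outputs.end_logits, axis=1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```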
Congratulations! You just implemented your first QA model in TensorFlow.
2.2 - PyTorch implementation
PyTorch is an open-source machine learning framework developed by Facebook's AI Research lab that can be used for computer vision and natural language processing. As you can imagine, it pairs naturally with the bAbI dataset, which comes from the same lab.
Train and test datasets
Go ahead and try creating a train and test dataset by importing PyTorch.
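One common pattern is a thin torch.utils.data.Dataset wrapper around the tokenized encodings; the class name and field layout here are assumptions:

```python
import torch
from torch.utils.data import Dataset

class QADataset(Dataset):
    def __init__(self, encodings):
        # encodings: dict of lists, e.g. input_ids, attention_mask,
        # start_positions, end_positions (assumed layout).
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings["input_ids"])
```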
For the evaluation metrics in the PyTorch implementation, you will change things up a bit and use the F1 score for the start and end indices over the entire test dataset.
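A sketch of a compute_metrics function the Trainer could call; the prediction/label layout assumes the model returns separate start and end logits, as DistilBertForQuestionAnswering does:

```python
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # Assumed layout: predictions = (start_logits, end_logits),
    # label_ids = (start_positions, end_positions).
    start_logits, end_logits = eval_pred.predictions
    start_labels, end_labels = eval_pred.label_ids
    f1_start = f1_score(start_labels, start_logits.argmax(-1), average="macro")
    f1_end = f1_score(end_labels, end_logits.argmax(-1), average="macro")
    return {"f1_start": f1_start, "f1_end": f1_end}
```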
Training
Now it is time to load a pre-trained model.
Note: You will be using DistilBERT instead of TFDistilBERT for the PyTorch implementation.
Instead of a custom training loop, you will use the 🤗 Trainer, which provides a basic training loop and is fairly easy to use with PyTorch.
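Putting it together, a hedged sketch of the Trainer setup; the hyperparameter values are illustrative, and train_dataset/test_dataset are assumed to be the objects built above:

```python
from transformers import DistilBertForQuestionAnswering, Trainer, TrainingArguments

pytorch_model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")

training_args = TrainingArguments(
    output_dir="results",            # where checkpoints are written
    num_train_epochs=3,              # illustrative values, not the lab's
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
)

trainer = Trainer(
    model=pytorch_model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
```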
Now it is time to ask your PyTorch model a question!
Before testing your model with a question, you can tell PyTorch to send your model and inputs to the GPU if your machine has one, or the CPU if it does not.
You can then proceed to tokenize your input and create PyTorch tensors and send them to your device.
The rest of the pipeline is relatively similar to the one you implemented for TensorFlow.
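A hedged end-to-end inference sketch under those assumptions:

```python
# Send the model to the GPU if one is available, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pytorch_model.to(device)

story = "The kitchen is west of the hallway. The bedroom is east of the hallway."
question = "What is east of the hallway?"
inputs = tokenizer(story, question, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = pytorch_model(**inputs)
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```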
Congratulations!
You've completed this notebook, and can now implement Transformer models for QA tasks!
You are now able to:
Perform extractive Question Answering
Fine-tune a pre-trained transformer model to a custom dataset
Implement a QA model in TensorFlow and PyTorch
What you should remember:
Transformer models are often trained by tokenizers that split words into subwords.
Before processing, it is important that you align the start and end indices with the tokens associated with the target answer word.
PyTorch is a relatively lightweight, easy-to-use framework that can make rapid prototyping easier, while TensorFlow has advantages in scaling and is more widely used in production.
tf.GradientTape allows you to build custom training loops in TensorFlow.
The Trainer API in 🤗 Transformers gives you a basic training loop that is compatible with 🤗 models and datasets.