Real-time collaboration for Jupyter Notebooks, Linux Terminals, LaTeX, VS Code, R IDE, and more,
all in one place. Commercial Alternative to JupyterHub.
Path: blob/main/deep-learning-specialization/course-1-neural-networks-and-deep-learning/Deep Neural Network - Application.ipynb
Deep Neural Network for Image Classification: Application
By the time you complete this notebook, you will have finished the last programming assignment of Week 4, and also the last programming assignment of Course 1! Go you!
To build your cat/not-a-cat classifier, you'll use the functions from the previous assignment to build a deep network. Hopefully, you'll see an improvement in accuracy over your previous logistic regression implementation.
After this assignment you will be able to:
Build and train a deep L-layer neural network, and apply it to supervised learning
Let's get started!
Important Note on Submission to the AutoGrader
Before submitting your assignment to the AutoGrader, please make sure of the following:
You have not added any extra print statement(s) in the assignment.
You have not added any extra code cell(s) in the assignment.
You have not changed any of the function parameters.
You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please use local variables instead.
You are not changing the assignment code where it is not required, such as creating extra variables.
If you do any of the above, you will get something like a Grader not found (or similarly unexpected) error upon submitting your assignment. Before asking for help or debugging the errors in your assignment, check for these first. If this is the case and you don't remember the changes you have made, you can get a fresh copy of the assignment by following these instructions.
1 - Packages

Begin by importing all the packages you'll need during this assignment.
numpy is the fundamental package for scientific computing with Python.
matplotlib is a library to plot graphs in Python.
h5py is a common package to interact with a dataset that is stored on an H5 file.
PIL and scipy are used here to test your model with your own picture at the end.
dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
np.random.seed(1) is used to keep all the random function calls consistent. It helps grade your work - so please don't change it!
2 - Load and Process the Dataset
You'll be using the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you built back then had 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform even better!
Problem Statement: You are given a dataset ("data.h5") containing: - a training set of m_train
images labelled as cat (1) or non-cat (0) - a test set of m_test
images labelled as cat and non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to check out other images.
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
Note: $12{,}288$ equals $64 \times 64 \times 3$, which is the size of one reshaped image vector.
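The reshaping step can be sketched with a few lines of NumPy. Here a small random array stands in for the real train_x_orig that the assignment loads from data.h5 with its load_data() helper:

```python
import numpy as np

# Hypothetical stand-in for the real data: 10 images of shape (64, 64, 3).
# In the assignment, train_x_orig comes from the load_data() helper instead.
train_x_orig = np.random.randint(0, 256, size=(10, 64, 64, 3))

# Flatten each (64, 64, 3) image into a column of length 64*64*3 = 12288.
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T

# Standardize pixel values from [0, 255] to [0, 1].
train_x = train_x_flatten / 255.

print(train_x.shape)  # (12288, 10): one column per image
```

The transpose matters: after it, each column of train_x is one image, which is the layout the forward-propagation functions expect.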
3 - Architecture of Your Model

3.1 - 2-layer Neural Network
Now that you're familiar with the dataset, it's time to build a deep neural network to distinguish cat images from non-cat images!
You're going to build two different models:
A 2-layer neural network
An L-layer deep neural network
Then, you'll compare the performance of these models, and try out some different values for $L$.
Let's look at the two architectures:
The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT.
Detailed Architecture of Figure 2:
The input is a (64,64,3) image which is flattened to a vector of size $(12288, 1)$.
The corresponding vector $[x_0, x_1, \ldots, x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
Then, add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]}, \ldots, a_{n^{[1]}-1}^{[1]}]^T$.
Repeat the same process.
Multiply the resulting vector by $W^{[2]}$ and add the intercept (bias) $b^{[2]}$.
Finally, take the sigmoid of the result. If it's greater than 0.5, classify it as a cat.
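The forward pass just described can be sketched in a few lines of NumPy. This is a toy illustration with random inputs and hypothetical small random parameters, not the graded two_layer_model (which builds on the helpers in dnn_app_utils):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

np.random.seed(1)
n_x, n_h, n_y = 12288, 7, 1   # layer sizes used later in the assignment
m = 4                         # a handful of example images

# Hypothetical small random parameters; the assignment initializes these
# with the helpers from the "Step by Step" notebook instead.
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))

X = np.random.rand(n_x, m)            # stand-in for flattened, standardized images

A1 = relu(W1 @ X + b1)                # LINEAR -> RELU
A2 = sigmoid(W2 @ A1 + b2)           # LINEAR -> SIGMOID
predictions = (A2 > 0.5).astype(int)  # 1 = cat, 0 = non-cat
```

Each column of A2 is the model's probability that the corresponding input column is a cat.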
3.2 - L-layer Deep Neural Network
It's pretty difficult to represent an L-layer deep neural network using the above representation. However, here is a simplified network representation:
The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID
Detailed Architecture of Figure 3:
The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
The corresponding vector $[x_0, x_1, \ldots, x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$, and then you add the intercept $b^{[1]}$. The result is called the linear unit.
Next, take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
Finally, take the sigmoid of the final linear unit. If it is greater than 0.5, classify it as a cat.
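A minimal sketch of the [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID forward pass, assuming the 4-layer sizes used later in the assignment (layers_dims = [12288, 20, 7, 5, 1]) and toy random parameters in place of the real initialize_parameters_deep / L_model_forward helpers:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

np.random.seed(1)
layers_dims = [12288, 20, 7, 5, 1]   # the 4-layer model used in the assignment
m = 3                                # a few example images

# Toy initialization for illustration; the assignment uses its own helper.
params = {}
for l in range(1, len(layers_dims)):
    params["W" + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 0.01
    params["b" + str(l)] = np.zeros((layers_dims[l], 1))

A = np.random.rand(layers_dims[0], m)  # stand-in for flattened images
L = len(layers_dims) - 1

for l in range(1, L):                  # [LINEAR -> RELU] repeated (L-1) times
    A = relu(params["W" + str(l)] @ A + params["b" + str(l)])

AL = sigmoid(params["W" + str(L)] @ A + params["b" + str(L)])  # LINEAR -> SIGMOID
```

The loop makes the pattern explicit: every layer but the last applies relu, and only the output layer applies sigmoid.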
3.3 - General Methodology
As usual, you'll follow the Deep Learning methodology to build the model:
Initialize parameters / Define hyperparameters
Loop for num_iterations:
    a. Forward propagation
    b. Compute cost function
    c. Backward propagation
    d. Update parameters (using parameters and grads from backprop)
Use trained parameters to predict labels
Now go ahead and implement those two models!
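The four methodology steps can be sketched end-to-end on a tiny synthetic problem (a hypothetical 2-D stand-in for the cat dataset), assuming plain gradient descent and the same LINEAR -> RELU -> LINEAR -> SIGMOID architecture:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

np.random.seed(1)
# Toy linearly separable data: a hypothetical stand-in for the cat images.
X = np.random.randn(2, 200)
Y = (X[0] + X[1] > 0).astype(float).reshape(1, -1)
m = X.shape[1]

# 1. Initialize parameters / define hyperparameters.
n_x, n_h, n_y = 2, 4, 1
learning_rate, num_iterations = 0.5, 1000
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))

# 2. Loop for num_iterations.
for i in range(num_iterations):
    # a. Forward propagation.
    Z1 = W1 @ X + b1
    A1 = relu(Z1)
    A2 = sigmoid(W2 @ A1 + b2)
    # b. Compute cost function (cross-entropy, clipped for numerical safety).
    A2c = np.clip(A2, 1e-8, 1 - 1e-8)
    cost = -np.mean(Y * np.log(A2c) + (1 - Y) * np.log(1 - A2c))
    # c. Backward propagation.
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (Z1 > 0)
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m
    # d. Update parameters.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

# 3. Use trained parameters to predict labels.
preds = (sigmoid(W2 @ relu(W1 @ X + b1) + b2) > 0.5).astype(float)
accuracy = float(np.mean(preds == Y))
```

The graded models follow exactly this skeleton; they just swap in the assignment's helper functions for each lettered step and the real image data for X and Y.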
Cost after iteration 1: 0.6926114346158595
Cost after first iteration: 0.693049735659989
Cost after iteration 1: 0.6915746967050506
Cost after iteration 1: 0.6915746967050506
Cost after iteration 1: 0.6915746967050506
Cost after iteration 2: 0.6524135179683452
All tests passed.
Expected Output:
| Iteration | Cost |
| --- | --- |
| 0 | 0.6930497356599888 |
| 100 | 0.6464320953428849 |
| ... | ... |
| 2499 | 0.04421498215868956 |
Nice! You successfully trained the model. Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
Expected Output:
Accuracy: 0.9999999999999998
Expected Output:
Accuracy: 0.72
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and you'll hear more about it in the next course. Early stopping is a way to prevent overfitting.
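As a preview, early stopping can be as simple as tracking a held-out cost and keeping the best checkpoint. The numbers below are made up for illustration; the mechanics carry over directly to real validation costs:

```python
# Hypothetical validation costs recorded every 100 iterations: they fall
# while the model learns, then rise again as it starts to overfit.
val_costs = [0.70, 0.55, 0.44, 0.38, 0.35, 0.34, 0.36, 0.40, 0.45]

patience = 2                    # stop after this many checks with no improvement
best_cost = float("inf")
best_step = 0
waited = 0
for step, cost in enumerate(val_costs):
    if cost < best_cost:
        best_cost, best_step, waited = cost, step, 0
    else:
        waited += 1
        if waited >= patience:  # validation cost stopped improving: stop early
            break

print(best_step, best_cost)  # 5 0.34 -> keep the parameters from that checkpoint
```

Training would stop shortly after the cost curve turns upward, and you would keep the parameters saved at the best checkpoint rather than the final ones.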
Cost after iteration 0: 0.7717493284237686
Cost after first iteration: 0.7717493284237686
Cost after iteration 1: 0.7070709008912569
Cost after iteration 1: 0.7070709008912569
Cost after iteration 1: 0.7070709008912569
Cost after iteration 2: 0.7063462654190897
All tests passed.
Expected Output:
| Iteration | Cost |
| --- | --- |
| 0 | 0.771749 |
| 100 | 0.672053 |
| ... | ... |
| 2499 | 0.088439 |
Expected Output:
Train Accuracy: 0.985645933014
Expected Output:
Test Accuracy: 0.8
Congrats! It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is pretty good performance for this task. Nice job!
In the next course on "Improving deep neural networks," you'll be able to obtain even higher accuracy by systematically searching for better hyperparameters: learning_rate, layers_dims, or num_iterations, for example.
6 - Results Analysis

A few types of images the model tends to do poorly on include:
Cat body in an unusual position
Cat appears against a background of a similar color
Unusual cat color and species
Camera angle
Brightness of the picture
Scale variation (cat is very large or small in image)
Congratulations on finishing this assignment!
You just built and trained a deep L-layer neural network, and applied it in order to distinguish cats from non-cats, a very serious and important task in deep learning. 😉
By now, you've also completed all the assignments for Course 1 in the Deep Learning Specialization. Amazing work! If you'd like to test out how closely you resemble a cat yourself, there's an optional ungraded exercise below, where you can test your own image.
Great work and hope to see you in the next course!
7 - Test with your own image (optional/ungraded exercise)
From this point, if you so choose, you can use your own image to test the output of your model. To do that, follow these steps:
Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
Add your image to this Jupyter Notebook's directory, in the "images" folder
Change your image's name in the following code
Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!