Face Recognition
Welcome! In this assignment, you're going to build a face recognition system. Many of the ideas presented here are from FaceNet. In the lecture, you also encountered DeepFace.
Face recognition problems commonly fall into one of two categories:
Face Verification "Is this the claimed person?" For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
Face Recognition "Who is this person?" For example, the video lecture showed a face recognition video of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
By the end of this assignment, you'll be able to:
Differentiate between face recognition and face verification
Implement one-shot learning to solve a face recognition problem
Apply the triplet loss function to learn a network's parameters in the context of face recognition
Explain how to pose face recognition as a binary classification problem
Map face images into 128-dimensional encodings using a pretrained model
Perform face verification and face recognition with these encodings
Channels-last notation
For this assignment, you'll be using a pre-trained model which represents ConvNet activations using a "channels last" convention, as used during the lecture and in previous programming assignments.
In other words, a batch of images will be of shape $(m, n_H, n_W, n_C)$.
Important Note on Submission to the AutoGrader
Before submitting your assignment to the AutoGrader, please make sure you are not doing the following:
You have not added any extra print statement(s) in the assignment.
You have not added any extra code cell(s) in the assignment.
You have not changed any of the function parameters.
You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please refrain from using them and use local variables instead.
You are not changing the assignment code where it is not required, like creating extra variables.
If you do any of the above, you will get something like a Grader not found (or similarly unexpected) error upon submitting your assignment. Before asking for help or debugging the errors in your assignment, check for these first. If this is the case, and you don't remember the changes you have made, you can get a fresh copy of the assignment by following these instructions.
2 - Naive Face Verification
In Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is below a chosen threshold, it may be the same person!
Of course, this algorithm performs poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, minor changes in head position, and so on.
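To make the baseline concrete, here is a minimal sketch of such a pixel-level check; the helper name, the use of a mean absolute pixel difference, and the threshold value are illustrative assumptions, not part of the assignment:

```python
import numpy as np
from PIL import Image

def naive_verify(path_a, path_b, threshold=50.0):
    """Naive pixel-by-pixel comparison of two same-sized face images."""
    # Load both images as float arrays of raw pixel values (0-255)
    img_a = np.asarray(Image.open(path_a), dtype=np.float64)
    img_b = np.asarray(Image.open(path_b), dtype=np.float64)
    # Mean absolute difference between raw pixels; small means "looks the same"
    dist = np.abs(img_a - img_b).mean()
    # Below the (arbitrary) threshold, guess that it is the same person
    return dist < threshold
```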
You'll see that rather than using the raw image, you can learn an encoding, $f(\text{img})$.
By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person.
3 - Encoding Face Images into a 128-Dimensional Vector
3.1 - Using a ConvNet to Compute Encodings
The FaceNet model takes a lot of data and a long time to train. So, following common practice in applied deep learning, you'll load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al. An Inception network implementation has been provided for you in the file inception_blocks_v2.py; open it to get a closer look at how it is implemented.
Hot tip: Go to "File->Open..." at the top of this notebook. This opens the file directory that contains the .py file.
The key things to be aware of are:
This network uses 160x160 dimensional RGB images as its input. Specifically, a face image (or batch of $m$ face images) is a tensor of shape $(m, n_H, n_W, n_C) = (m, 160, 160, 3)$.
The input images are originally of shape 96x96; thus, you need to scale them to 160x160. This is done in the img_to_encoding() function.
The output is a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector.
Run the cell below to create the model for face images!
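As a sketch, the model-creation cell loads a pretrained Keras FaceNet model from a JSON architecture file plus an H5 weights file; the keras-facenet-h5 paths and the FRmodel alias below are assumptions about how the notebook ships its files:

```python
from tensorflow.keras.models import model_from_json

# Load the architecture, then the pretrained weights (file paths are assumptions)
with open('keras-facenet-h5/model.json', 'r') as json_file:
    loaded_model_json = json_file.read()
model = model_from_json(loaded_model_json)
model.load_weights('keras-facenet-h5/model.h5')

FRmodel = model  # alias used for the face verification/recognition calls below
```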
Now summarize the input and output shapes:
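A quick way to check, assuming the model object loaded above:

```python
print(model.inputs)   # expect a tensor of shape (None, 160, 160, 3)
print(model.outputs)  # expect a tensor of shape (None, 128)
```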
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same person
So, an encoding is a good one if:
The encodings of two images of the same person are quite similar to each other.
The encodings of two images of different persons are very different.
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
In the next section, you'll call the pictures from left to right: Anchor (A), Positive (P), Negative (N).
3.2 - The Triplet Loss
Important Note: Since you're using a pretrained model, you won't actually need to implement the triplet loss function in this assignment. However, the triplet loss is the main ingredient of the face recognition algorithm, and you'll need to know how to use it for training your own FaceNet model, as well as other types of image similarity problems. Therefore, you'll implement it below, for fun and edification. 😃
For an image $x$, its encoding is denoted as $f(x)$, where $f$ is the function computed by the neural network.
Training will use triplets of images $(A, P, N)$:
A is an "Anchor" image--a picture of a person.
P is a "Positive" image--a picture of the same person as the Anchor image.
N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from the training dataset. $(A^{(i)}, P^{(i)}, N^{(i)})$ is used here to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:
$$\|f(A^{(i)}) - f(P^{(i)})\|_2^2 + \alpha < \|f(A^{(i)}) - f(N^{(i)})\|_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum_{i=1}^{m} \left[ \underbrace{\|f(A^{(i)}) - f(P^{(i)})\|_2^2}_{(1)} - \underbrace{\|f(A^{(i)}) - f(N^{(i)})\|_2^2}_{(2)} + \alpha \right]_+ \tag{3}$$
Here, the notation "$[z]_+$" is used to denote $\max(z, 0)$.
Notes:
The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.
$\alpha$ is called the margin. It's a hyperparameter that you pick manually. You'll use $\alpha = 0.2$.
Most implementations also rescale the encoding vectors to have L2 norm equal to one (i.e., $\|f(\text{img})\|_2 = 1$); you won't have to worry about that in this assignment.
Exercise 1 - triplet_loss
Implement the triplet loss as defined by formula (3). These are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\|f(A^{(i)}) - f(P^{(i)})\|_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\|f(A^{(i)}) - f(N^{(i)})\|_2^2$
3. Compute the formula per training example: $\|f(A^{(i)}) - f(P^{(i)})\|_2^2 - \|f(A^{(i)}) - f(N^{(i)})\|_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples: $\mathcal{J} = \sum_{i=1}^{m} \max\left(\|f(A^{(i)}) - f(P^{(i)})\|_2^2 - \|f(A^{(i)}) - f(N^{(i)})\|_2^2 + \alpha,\ 0\right)$
Hints:
Useful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum().
For steps 1 and 2, sum over the entries of $\|f(A^{(i)}) - f(P^{(i)})\|_2^2$ and $\|f(A^{(i)}) - f(N^{(i)})\|_2^2$.
For step 4, you will sum over the training examples.
Additional Hints:
Recall that the square of the L2 norm is the sum of the squared differences: $\|x - y\|_2^2 = \sum_{i=1}^{N} (x_i - y_i)^2$
Note that the anchor, positive and negative encodings are of shape (m, 128), where m is the number of training examples and 128 is the number of elements used to encode a single example.
For steps 1 and 2, maintain the number of m training examples and sum along the 128 values of each encoding. tf.reduce_sum has an axis parameter, which chooses along which axis the sums are applied. Note that one way to choose the last axis in a tensor is to use negative indexing (axis=-1).
In step 4, when summing over training examples, the result will be a single scalar value. For tf.reduce_sum to sum across all axes, keep the default value axis=None.
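A sketch of triplet_loss that follows the four steps and the hints above (y_pred is assumed to be a list of the three encoding tensors; the default alpha mirrors the margin of 0.2 named earlier):

```python
import tensorflow as tf

def triplet_loss(y_true, y_pred, alpha=0.2):
    """Triplet loss as defined by formula (3); y_pred = [anchor, positive, negative]."""
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    # Step 1: squared distance between anchor and positive, summed over the 128 encoding values
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Step 2: squared distance between anchor and negative
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # Step 3: per-example loss, with the margin alpha added
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: take the max with zero, then sum over the training examples (axis=None)
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    return loss
```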
Expected Output:
loss: tf.Tensor(527.2598, shape=(), dtype=float32)
4 - Loading the Pre-trained Model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, you won't train it from scratch here. Instead, you'll load a previously trained model in the following cell, which might take a couple of minutes to run.
Here are some examples of distances between the encodings of three individuals:
Example of distance outputs between three individuals' encodings
Now use this model to perform face verification and face recognition!
5 - Applying the Model
You're building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.
You'd like to build a face verification system that gives access to a list of people. To be admitted, each person has to swipe an identification card at the entrance. The face recognition system then verifies that they are who they claim to be.
5.1 - Face Verification
Now you'll build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding, you'll use img_to_encoding(image_path, model), which runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a Python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
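A sketch of that cell, relying on the img_to_encoding helper and the FRmodel alias from earlier; the exact set of names and image files is an assumption modeled on the assignment's images/ folder:

```python
# Map each authorized person's name to the 128-dimensional encoding of their face
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
```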
Load the images of Danielle and Kian:
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Exercise 2 - verify
Implement the verify() function, which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:
Compute the encoding of the image from image_path.
Compute the distance between this encoding and the encoding of the identity image stored in the database.
Open the door if the distance is less than 0.7, else do not open it.
As presented above, you should use the L2 distance: np.linalg.norm.
Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.
Hints:
identity is a string that is also a key in the database dictionary.
img_to_encoding has two parameters: the image_path and the model.
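A sketch of verify() following the steps and hints above; the printed messages mirror the expected outputs shown below:

```python
import numpy as np

def verify(image_path, identity, database, model):
    """Verify whether the person in image_path is the claimed identity."""
    # Step 1: compute the encoding of the camera image
    encoding = img_to_encoding(image_path, model)
    # Step 2: L2 distance between this encoding and the one stored for `identity`
    dist = np.linalg.norm(encoding - database[identity])
    # Step 3: open the door only if the distance is below the 0.7 threshold
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome in!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False
    return dist, door_open
```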
Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
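A call like the following, with FRmodel being the pretrained model loaded earlier, should reproduce the expected output:

```python
verify("images/camera_0.jpg", "younes", database, FRmodel)
```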
Expected Output:
It's younes, welcome in!
(0.5992949, True)
Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. Naughty Benoit! The camera took a picture of Benoit ("images/camera_2.jpg").
Run the verification algorithm to check if Benoit can enter.
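Again as a sketch, using the same helpers:

```python
verify("images/camera_2.jpg", "kian", database, FRmodel)
```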
Expected Output:
It's not kian, please go away
(1.0259346, False)
5.2 - Face Recognition
Your face verification system is mostly working. But since Kian got his ID card stolen, when he came back to the office the next day he couldn't get in!
To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, you will no longer get a person's name as one of the inputs.
Exercise 3 - who_is_it
Implement who_is_it() with the following steps:
Compute the target encoding of the image from image_path.
Find the encoding from the database that has the smallest distance with the target encoding:
Initialize the min_dist variable to a large enough number (100). This helps you keep track of the closest encoding to the input's encoding.
Loop over the database dictionary's names and encodings. To loop, use for (name, db_enc) in database.items().
Compute the L2 distance between the target "encoding" and the current "encoding" from the database. If this distance is less than min_dist, then set min_dist to dist, and identity to name.
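A sketch following these steps; the final 0.7 rejection threshold is carried over from the verification step and is an assumption here:

```python
def who_is_it(image_path, database, model):
    """Identify the person in image_path among the people in database."""
    # Compute the target encoding of the camera image
    encoding = img_to_encoding(image_path, model)
    # Track the closest database encoding seen so far
    min_dist = 100
    identity = None
    for (name, db_enc) in database.items():
        # L2 distance between the target encoding and this database encoding
        dist = np.linalg.norm(encoding - db_enc)
        if dist < min_dist:
            min_dist = dist
            identity = name
    # Reject if even the closest match is too far away (threshold assumed)
    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("it's " + str(identity) + ", the distance is " + str(min_dist))
    return min_dist, identity
```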
Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
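For example, calls like the following would explain the three lines of expected output below; running on the exact image stored in the database gives a distance of 0.0 (the specific sequence of test calls is an assumption):

```python
who_is_it("images/camera_0.jpg", database, FRmodel)  # camera shot of Younes
who_is_it("images/camera_0.jpg", database, FRmodel)  # same call again
who_is_it("images/younes.jpg", database, FRmodel)    # the stored database image itself
```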
Expected Output:
it's younes, the distance is 0.5992949
it's younes, the distance is 0.5992949
it's younes, the distance is 0.0
You can change "camera_0.jpg" (picture of Younes) to "camera_1.jpg" (picture of Bertrand) and see the result.
Congratulations! You've completed this assignment, and your face recognition system is working well! Not only does it let in authorized persons, but people no longer need to carry an ID card around!
You've now seen how a state-of-the-art face recognition system works, and can describe the difference between face recognition and face verification. Here's a quick recap of what you've accomplished:
Posed face recognition as a binary classification problem
Implemented one-shot learning for a face recognition problem
Applied the triplet loss function to learn a network's parameters in the context of face recognition
Mapped face images into 128-dimensional encodings using a pretrained model
Performed face verification and face recognition with these encodings
Great work!
What you should remember:
Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
Triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
Ways to improve your facial recognition model:
Although you won't implement these here, here are some ways to further improve the algorithm:
Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then, given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.
Crop the images to contain just the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
6 - References
Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). DeepFace: Closing the gap to human-level performance in face verification
This implementation also took a lot of inspiration from the official FaceNet GitHub repository: https://github.com/davidsandberg/facenet
Further inspiration was found here: https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/
And here: https://github.com/nyoki-mtl/keras-facenet/blob/master/notebook/tf_to_keras.ipynb