Image-text-to-text
Image-text-to-text models, also known as vision language models (VLMs), are language models that take an image input. These models can tackle various tasks, from visual question answering to image segmentation. This task shares many similarities with image-to-text, with some overlapping use cases such as image captioning. Image-to-text models only take image inputs and often accomplish a specific task, whereas VLMs take open-ended text and image inputs and are more generalist models.
In this guide, we provide a brief overview of VLMs and show how to use them with Transformers for inference.
To begin with, there are multiple types of VLMs:
base models used for fine-tuning
chat fine-tuned models for conversation
instruction fine-tuned models
This guide focuses on inference with an instruction-tuned model.
Let's begin by installing the dependencies.
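For example (a minimal notebook cell; the exact package list is an assumption for this guide, and accelerate is only needed for automatic device placement):

```python
# Run once in a notebook cell; the package list is an assumption for this guide
!pip install -q transformers accelerate
```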
Let's initialize the model and the processor.
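As a sketch, assuming the IDEFICS-8B model mentioned later in this guide refers to the HuggingFaceM4/idefics2-8b checkpoint; any chat-capable VLM checkpoint with a processor can be swapped in:

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "HuggingFaceM4/idefics2-8b"  # assumption: the IDEFICS-8B checkpoint used below

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory compared to float32
    device_map="auto",           # let accelerate place the weights
)
```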
This model has a chat template that helps the user parse chat outputs. Moreover, the model can accept multiple images as input in a single conversation or message. We will now prepare the inputs.
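A minimal sketch of a two-image conversation; the image URLs and the question are placeholders rather than the ones from the original notebook:

```python
from transformers.image_utils import load_image

# Placeholder images; substitute any two images you like
img_urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
]
images = [load_image(url) for url in img_urls]

# A single user turn that refers to both images
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What do these two images have in common?"},
        ],
    }
]
```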
The image inputs look like the following.


The model's chat template lets us feed in previous conversation turns and the new message as input, with the latest message appended at the end of the template.
We will now call the processor's apply_chat_template() method to format the conversation, and then preprocess the resulting prompt along with the image inputs.
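Roughly, with the messages and images defined above (the exact processor call can vary slightly between checkpoints):

```python
# Render the conversation into a prompt string with the chat template,
# then tokenize it together with the images
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
```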
We can now pass the preprocessed inputs to the model.
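For example:

```python
# Generate, then decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=100)
new_tokens = generated_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```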
Pipeline
The fastest way to get started is to use the Pipeline API. Specify the "image-text-to-text"
task and the model you want to use.
The example below uses chat templates to format the text inputs.
Pass the chat-template-formatted text and image to Pipeline and set return_full_text=False to remove the input from the generated output.
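A sketch, reusing the same checkpoint assumption as above; the image URL and prompt are placeholders:

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="HuggingFaceM4/idefics2-8b")  # assumed checkpoint

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=50, return_full_text=False)
print(outputs[0]["generated_text"])
```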
If you prefer, you can also load the images separately and pass them to the pipeline like so:
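Something along these lines, where the pipeline pairs the provided images with the image placeholders in the messages (the exact pairing behavior may depend on your transformers version):

```python
from transformers.image_utils import load_image

image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")  # placeholder image

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = pipe(images=[image], text=messages, max_new_tokens=50, return_full_text=False)
print(outputs[0]["generated_text"])
```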
The images will still be included in the "input_text" field of the output.
We can use text streaming for a better generation experience. Transformers supports streaming with the TextStreamer or TextIteratorStreamer classes. We will use the TextIteratorStreamer with IDEFICS-8B.
Assume we have an application that keeps chat history and takes in the new user input. We will preprocess the inputs as usual and initialize TextIteratorStreamer to handle the generation in a separate thread. This allows you to stream the generated text tokens in real-time. Any generation arguments can be passed to model.generate() along with the streamer.
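A rough sketch of such a model_inference helper, assuming the model and processor initialized above; the chat-history handling is simplified to text-only turns for brevity:

```python
from threading import Thread

from transformers import TextIteratorStreamer


def model_inference(user_prompt, chat_history, max_new_tokens=100):
    # Append the new (text-only, for brevity) user turn to the running conversation
    chat_history.append(
        {"role": "user", "content": [{"type": "text", "text": user_prompt}]}
    )
    prompt = processor.apply_chat_template(chat_history, add_generation_prompt=True)
    inputs = processor(text=prompt, return_tensors="pt").to(model.device)

    # skip_prompt=True streams only the newly generated text
    streamer = TextIteratorStreamer(
        processor.tokenizer, skip_prompt=True, skip_special_tokens=True
    )
    generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=max_new_tokens)

    # Run generation in a background thread and yield text chunks as they arrive
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()
    for new_text in streamer:
        yield new_text
    thread.join()
```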
Now let's call the model_inference
function we created and stream the values.
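For instance, with the sketch above:

```python
chat_history = []
for new_text in model_inference("Briefly explain what a vision language model is.", chat_history):
    print(new_text, end="", flush=True)
```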
Fit models on smaller hardware
VLMs are often large and need to be optimized to fit on smaller hardware. Transformers supports many model quantization libraries; here we will only show int8 quantization with Quanto. int8 quantization reduces memory usage by up to 75 percent (if all weights are quantized). It is no free lunch, however: since 8-bit is not a CUDA-native precision, the weights are quantized and dequantized on the fly, which adds latency.
First, install dependencies.
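For example (package names are an assumption; older transformers releases use the quanto package, newer ones use optimum-quanto):

```python
# Run once in a notebook cell
!pip install -q optimum-quanto accelerate
```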
To quantize a model while loading it, we first need to create a QuantoConfig. Then load the model as usual, but pass quantization_config during model initialization.
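A sketch, again assuming the HuggingFaceM4/idefics2-8b checkpoint:

```python
from transformers import AutoModelForVision2Seq, AutoProcessor, QuantoConfig

model_id = "HuggingFaceM4/idefics2-8b"  # assumed checkpoint, as above

# int8 weight quantization with the Quanto backend
quantization_config = QuantoConfig(weights="int8")

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    device_map="cuda",
    quantization_config=quantization_config,
)
```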
And that's it: we can use the model the same way, with no further changes.
Further Reading
Here are some more resources for the image-text-to-text task.
Image-text-to-text task page covers model types, use cases, datasets, and more.
Vision Language Models Explained is a blog post that covers everything about vision language models and supervised fine-tuning using TRL.