Audio-text-to-text
Audio-text-to-text models accept both audio and text as inputs and generate text as output. They combine audio understanding with language generation, enabling tasks like audio question answering (e.g., "What is being said in this clip?"), audio reasoning (e.g., "What emotion does the speaker convey?"), and spoken dialogue understanding. Unlike traditional automatic speech recognition (ASR) models that only transcribe speech into text, audio-text-to-text models can reason about the audio content, follow complex instructions, and produce contextual responses based on what they hear.
The example below shows how to load a model and processor, pass an audio file with a text prompt, and generate a response. In this case, we ask the model to transcribe a speech recording.
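A minimal sketch of that flow is shown below. The checkpoint id `nvidia/audio-flamingo-3`, the `AudioFlamingo3ForConditionalGeneration` class, and the local audio file are assumptions; swap in the checkpoint and classes exported by your installed Transformers version.

```python
import torch
from transformers import AutoProcessor, AudioFlamingo3ForConditionalGeneration

model_id = "nvidia/audio-flamingo-3"  # assumed checkpoint id

processor = AutoProcessor.from_pretrained(model_id)
model = AudioFlamingo3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Chat-style input that mixes an audio file with a text instruction.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "speech_sample.wav"},  # hypothetical local file
            {"type": "text", "text": "Transcribe this speech recording."},
        ],
    }
]

inputs = processor.apply_chat_template(
    conversation, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True)[0])
```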
This guide will show you how to:
Fine-tune Audio Flamingo 3 on the AudioCaps dataset for audio captioning using LoRA.
Use your fine-tuned model for inference.
[!TIP] To see all architectures and checkpoints compatible with this task, we recommend checking the task page.
Before you begin, make sure you have all the necessary libraries installed:
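The exact package list is an assumption based on what this guide uses (Transformers, Datasets with audio support, PEFT, and Accelerate):

```bash
pip install -q -U transformers "datasets[audio]" peft accelerate librosa soundfile
```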
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
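In a notebook you can authenticate with:

```python
from huggingface_hub import notebook_login

notebook_login()
```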
Load AudioCaps dataset
Start by loading the AudioCaps dataset from the 🤗 Datasets library in streaming mode. This dataset contains audio clips with descriptive captions, perfect for audio captioning tasks.
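A sketch, assuming the `agkphysics/AudioCaps` repository on the Hub; point `load_dataset` at whichever AudioCaps copy you use:

```python
from datasets import load_dataset

dataset = load_dataset("agkphysics/AudioCaps", split="train", streaming=True)
```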
Cast the audio column to 16kHz, which is required by Audio Flamingo's Whisper feature extractor:
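For example:

```python
from datasets import Audio

dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```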
Split the dataset into train and test sets using .take() and .skip() for streaming datasets:
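The split sizes below are illustrative; pick whatever fits your compute budget:

```python
train_dataset = dataset.take(5_000)
test_dataset = dataset.skip(5_000).take(500)
```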
Take a look at an example:
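Streaming datasets are iterated rather than indexed:

```python
example = next(iter(train_dataset))
print(example["caption"])
print(example["audio"])
```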
The dataset contains:
audio: the audio waveform
caption: the descriptive text caption for the audio
Preprocess
Load the Audio Flamingo processor to handle both audio and text inputs:
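Assuming the same `nvidia/audio-flamingo-3` checkpoint id as above:

```python
from transformers import AutoProcessor

model_id = "nvidia/audio-flamingo-3"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
```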
Create a data collator that processes audio-text pairs into the format expected by Audio Flamingo. The collator uses the chat template format with direct audio arrays:
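A sketch of such a collator. The content keys (`"audio"` with a raw array) and the label masking follow the common multimodal chat-template pattern and may need small adjustments for your Transformers and Datasets versions:

```python
import torch

class AudioCaptionCollator:
    """Turn (audio, caption) pairs into chat-formatted tensors for training."""

    def __init__(self, processor, prompt="Describe this audio in detail."):
        self.processor = processor
        self.prompt = prompt

    def __call__(self, examples):
        conversations = []
        for example in examples:
            # Depending on your datasets version, the decoded audio is a dict
            # with an "array" key or an AudioDecoder object; adjust accordingly.
            audio_array = example["audio"]["array"]
            conversations.append([
                {
                    "role": "user",
                    "content": [
                        {"type": "audio", "audio": audio_array},
                        {"type": "text", "text": self.prompt},
                    ],
                },
                {
                    "role": "assistant",
                    "content": [{"type": "text", "text": example["caption"]}],
                },
            ])

        batch = self.processor.apply_chat_template(
            conversations,
            tokenize=True,
            return_dict=True,
            return_tensors="pt",
            padding=True,
        )

        # Predict the text tokens and ignore padding in the loss.
        labels = batch["input_ids"].clone()
        labels[labels == self.processor.tokenizer.pad_token_id] = -100
        batch["labels"] = labels
        return batch
```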
Instantiate the data collator:
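Using the class sketched above:

```python
data_collator = AudioCaptionCollator(processor)
```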
Train
Configure LoRA
LoRA (Low-Rank Adaptation) enables efficient fine-tuning by only training a small number of additional parameters. Configure LoRA to target the language model's attention and feed-forward layers:
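A sketch of the configuration, assuming a Llama/Qwen-style decoder inside the model and the `AudioFlamingo3ForConditionalGeneration` class; check `model.named_modules()` for the module names your checkpoint actually uses:

```python
import torch
from transformers import AudioFlamingo3ForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = AudioFlamingo3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```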
[!TIP] LoRA significantly reduces memory usage and training time by only updating a small number of adapter parameters instead of the full model. This configuration targets the language model's attention and feed-forward layers while keeping the audio encoder frozen, making it possible to fine-tune on a single GPU.
Set up training
Define training hyperparameters in TrainingArguments. Note that we use max_steps instead of epochs since we're using a streaming dataset:
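The values below are illustrative starting points rather than tuned hyperparameters:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="audio-flamingo-3-audiocaps-lora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=1e-4,
    max_steps=1_000,              # streaming datasets have no epoch length, so cap training by steps
    warmup_steps=50,
    logging_steps=10,
    save_steps=250,
    bf16=True,
    remove_unused_columns=False,  # keep the raw audio and caption columns for the collator
)
```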
Pass the training arguments to Trainer along with the model, datasets, and data collator:
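For example:

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    data_collator=data_collator,
)

trainer.train()
```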
Save the LoRA adapter and processor:
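Calling `save_pretrained` on the PEFT-wrapped model stores only the adapter weights:

```python
model.save_pretrained("audio-flamingo-3-audiocaps-lora")
processor.save_pretrained("audio-flamingo-3-audiocaps-lora")
```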
Once training is complete, share your model on the Hub:
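The repository name is taken from `output_dir` (or `hub_model_id` if you set one in `TrainingArguments`):

```python
trainer.push_to_hub()
```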
Inference
Now that you've fine-tuned the model, you can use it for audio captioning.
Load the fine-tuned model and processor:
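A sketch that loads the assumed base checkpoint and attaches the adapter; replace the adapter id with your own repository or local path:

```python
import torch
from transformers import AutoProcessor, AudioFlamingo3ForConditionalGeneration
from peft import PeftModel

base_model_id = "nvidia/audio-flamingo-3"                     # assumed base checkpoint
adapter_id = "your-username/audio-flamingo-3-audiocaps-lora"  # hypothetical adapter repo

processor = AutoProcessor.from_pretrained(base_model_id)
model = AudioFlamingo3ForConditionalGeneration.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```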
Load an audio sample for inference:
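Any clip works; here a hypothetical local file is resampled to 16kHz with librosa:

```python
import librosa

audio_array, sampling_rate = librosa.load("sample_clip.wav", sr=16_000)
```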
Prepare the input with a conversation format:
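The prompt mirrors the one used during training:

```python
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": audio_array},
            {"type": "text", "text": "Describe this audio in detail."},
        ],
    }
]

inputs = processor.apply_chat_template(
    conversation, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
```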
Generate a response:
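Decode only the newly generated tokens:

```python
output_ids = model.generate(**inputs, max_new_tokens=64)
caption = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True
)[0]
print(caption)
```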
Pipeline
You can also use the Pipeline API for quick inference. First, merge the LoRA adapter with the base model, then create a pipeline:
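A sketch of that flow. The `"audio-text-to-text"` task string is an assumption; use it only if your installed Transformers version exposes that pipeline task:

```python
from transformers import pipeline

# Fold the adapter weights into the base model and save a plain checkpoint.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("audio-flamingo-3-audiocaps-merged")
processor.save_pretrained("audio-flamingo-3-audiocaps-merged")

pipe = pipeline(
    "audio-text-to-text",                    # assumed task string
    model="audio-flamingo-3-audiocaps-merged",
)
```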
[!TIP] For more advanced use cases like multi-turn conversations with audio, you can structure your messages with alternating user and assistant roles, similar to image-text-to-text models.
Further Reading
Audio-text-to-text task page covers model types, use cases, and datasets.
PEFT documentation for more LoRA configuration options and other adapter methods.
Audio Flamingo 3 model card for model-specific details and capabilities.