GitHub Repository: huggingface/notebooks
Path: blob/main/course/th/chapter2/section5_tf.ipynb
Handling multiple sequences (TensorFlow)

Install the Transformers, Datasets, and Evaluate libraries to run this notebook.

!pip install datasets evaluate transformers[sentencepiece]
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = tf.constant(ids)

# This line will fail.
model(input_ids)
InvalidArgumentError: Input to reshape is a tensor with 14 values, but the requested shape has 196 [Op:Reshape]
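The call fails because the model expects a batch of inputs, i.e. a tensor of shape (batch_size, sequence_length), while input_ids here is a flat vector of shape (14,). One possible fix (a small sketch, not the approach used in the next cells) is to add the missing batch dimension by hand:

input_ids = tf.expand_dims(tf.constant(ids), axis=0)  # shape goes from (14,) to (1, 14)
output = model(input_ids)  # now runs without the reshape error

The cells below show what the tokenizer does by default, which takes care of this for you.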
tokenized_inputs = tokenizer(sequence, return_tensors="tf")
print(tokenized_inputs["input_ids"])
<tf.Tensor: shape=(1, 16), dtype=int32, numpy= array([[ 101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]], dtype=int32)>
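Notice that the tokenizer not only added a batch dimension (the shape is (1, 16), not a flat vector) but also inserted the special tokens 101 ([CLS]) and 102 ([SEP]) around the sentence, which is why there are 16 IDs instead of 14. As a quick sanity check (a small sketch, assuming the same tokenizer as above), you can decode the IDs back to text:

print(tokenizer.decode(tokenized_inputs["input_ids"][0]))
# roughly: "[CLS] i've been waiting for a huggingface course my whole life. [SEP]"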
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)

input_ids = tf.constant([ids])
print("Input IDs:", input_ids)

output = model(input_ids)
print("Logits:", output.logits)
Input IDs: tf.Tensor(
[[ 1045  1005  2310  2042  3403  2005  1037 17662 12172  2607  2026  2878
   2166  1012]], shape=(1, 14), dtype=int32)
Logits: tf.Tensor([[-2.7276208  2.8789377]], shape=(1, 2), dtype=float32)
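The logits are raw scores, not probabilities. If you want class probabilities (a small sketch, not part of the original cell), run them through a softmax and read off the model's id2label mapping:

predictions = tf.math.softmax(output.logits, axis=-1)
print(predictions)            # roughly [[0.004, 0.996]]
print(model.config.id2label)  # {0: 'NEGATIVE', 1: 'POSITIVE'}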
batched_ids = [
    [200, 200, 200],
    [200, 200]
]
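The two inner lists have different lengths, so they cannot be converted directly into a rectangular tensor. A quick illustration of the problem (assuming the batched_ids defined just above):

tf.constant(batched_ids)
# raises ValueError: Can't convert non-rectangular Python sequence to Tensor
# (exact message may vary by TensorFlow version)

The usual workaround is padding: extend the shorter sequence with a special padding token so that every row has the same length, as in the next cell.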
padding_id = 100

batched_ids = [
    [200, 200, 200],
    [200, 200, padding_id],
]
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence1_ids = [[200, 200, 200]]
sequence2_ids = [[200, 200]]
batched_ids = [
    [200, 200, 200],
    [200, 200, tokenizer.pad_token_id],
]

print(model(tf.constant(sequence1_ids)).logits)
print(model(tf.constant(sequence2_ids)).logits)
print(model(tf.constant(batched_ids)).logits)
tf.Tensor([[ 1.5693678 -1.3894581]], shape=(1, 2), dtype=float32)
tf.Tensor([[ 0.5803005 -0.41252428]], shape=(1, 2), dtype=float32)
tf.Tensor(
[[ 1.5693681 -1.3894582]
 [ 1.3373486 -1.2163193]], shape=(2, 2), dtype=float32)
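Notice that the second row of the batched logits does not match the logits of sequence2 on its own. That is because the attention layers attend to every token in a row, including the padding token, which changes the representation of the shorter sequence. To get matching results we have to tell the model to ignore the padding, which is what the attention mask in the next cell does.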
batched_ids = [
    [200, 200, 200],
    [200, 200, tokenizer.pad_token_id],
]
attention_mask = [
    [1, 1, 1],
    [1, 1, 0],
]

outputs = model(tf.constant(batched_ids), attention_mask=tf.constant(attention_mask))
print(outputs.logits)
tf.Tensor(
[[ 1.5693681  -1.3894582 ]
 [ 0.5803021  -0.41252586]], shape=(2, 2), dtype=float32)
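With the attention mask in place, the second row now matches the logits of sequence2 on its own. In practice you rarely build the padded batch and the mask by hand; the tokenizer can produce both. A minimal sketch (the sentences here are just examples):

sequences = [
    "I've been waiting for a HuggingFace course my whole life.",
    "So have I!",
]
batch = tokenizer(sequences, padding=True, return_tensors="tf")
outputs = model(**batch)  # batch contains both input_ids and attention_mask
print(outputs.logits)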
sequence = sequence[:max_sequence_length]  # truncate to the model's maximum input length
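Models can only handle sequences up to a maximum length (512 tokens for this DistilBERT checkpoint); anything longer must be truncated, as in the slice above, or handled by a model built for longer inputs. The tokenizer can also truncate for you (a small sketch, with max_length chosen as an example):

inputs = tokenizer(sequence, truncation=True, max_length=512, return_tensors="tf")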