Comprehensive Hugging Face Transformers Library Cheat Sheet

Mastering Natural Language Processing with State-of-the-Art Tools

Key Takeaways

  • Installation & Setup: Easily install the Transformers library and its dependencies to start leveraging powerful NLP models.
  • Core Components & Pipelines: Understand the essential components like Tokenizers, Models, and Pipelines to perform various NLP tasks efficiently.
  • Advanced Features & Customization: Utilize advanced functionalities such as fine-tuning, mixed precision training, and deployment for tailored applications.

1. Installation & Setup

Installing the Transformers Library

Ensure you have Python version 3.8 or higher. Install the Transformers library using pip:

pip install transformers

Installing with GPU Support

For GPU acceleration, install a GPU-enabled PyTorch build (the default pip wheels on Linux bundle CUDA; otherwise see pytorch.org for the command matching your CUDA version):

pip install torch torchvision

Installing All Dependencies

Install the library together with a deep-learning backend using pip extras:

pip install transformers[torch]  # Or: transformers[tensorflow], transformers[flax]

Verifying the Installation

Check the installed version of the Transformers library:

import transformers
print(transformers.__version__)

2. Key Components

Tokenizer

The Tokenizer converts raw text into model-compatible inputs such as token IDs and attention masks.

Model

The Model encapsulates the transformer architecture (e.g., BERT, GPT, T5) and handles inference and training.

Pipeline

The Pipeline provides high-level APIs for common NLP tasks like text classification, summarization, and more.

Configuration

The Configuration defines model hyperparameters and architectural details.
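
As a quick orientation, here is a minimal sketch of how the four components relate, using distilbert-base-uncased-finetuned-sst-2-english purely as an example checkpoint:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

config = AutoConfig.from_pretrained(model_id)        # hyperparameters and architecture details
tokenizer = AutoTokenizer.from_pretrained(model_id)  # raw text -> token IDs and attention masks
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # the transformer itself
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)  # high-level API

print(config.num_labels)                                    # 2
print(classifier("All four components working together."))  # e.g. [{'label': 'POSITIVE', ...}]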


3. Quick Start with Pipelines

Using the Pipeline API

The pipeline() API is the fastest way to perform NLP tasks without delving into the complexities of model architecture.

from transformers import pipeline

# Sentiment Analysis
sentiment = pipeline("sentiment-analysis")
print(sentiment("I love using Hugging Face Transformers!"))

# Text Generation
generator = pipeline("text-generation")
print(generator("Once upon a time, there was a robot"))

# Summarization
summarizer = pipeline("summarization")
print(summarizer("The Hugging Face Transformers library simplifies NLP workflows by providing pretrained models."))

# Translation
translator = pipeline("translation_en_to_fr")
print(translator("How are you today?"))

# Question Answering
qa = pipeline("question-answering")
print(qa(question="What is Transformers?", context="Transformers is a library provided by Hugging Face."))

# Zero-Shot Classification
classifier = pipeline("zero-shot-classification")
print(classifier("Hugging Face is amazing.", candidate_labels=["education", "AI", "entertainment"]))

4. Loading Pretrained Models

Using Auto Classes

Auto classes automatically select the appropriate model architecture based on the model name.

from transformers import AutoTokenizer, AutoModel

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

Task-Specific Model Loading

Load models tailored for specific tasks.

from transformers import AutoModelForSequenceClassification, AutoModelForCausalLM, AutoModelForQuestionAnswering

# Sequence Classification
model_seq_class = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Causal Language Modeling
model_causal_lm = AutoModelForCausalLM.from_pretrained("gpt2")

# Question Answering
model_qa = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

5. Tokenization

Basic Tokenization

Tokenize input text to prepare it for the model.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "Hugging Face Transformers are amazing!"
tokens = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
print(tokens)

Advanced Tokenization Options

Customize tokenization parameters.

tokens = tokenizer(
    text,
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt"  # Use "tf" for TensorFlow
)
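
Padding mainly matters when tokenizing a batch; a short sketch (sentences are made up) showing how shorter sequences are padded to the longest one in the batch:

batch = tokenizer(
    ["A short sentence.", "A noticeably longer sentence that needs more tokens."],
    padding=True,          # pad to the longest sequence in the batch
    truncation=True,
    return_tensors="pt"
)
print(batch["input_ids"].shape)  # (2, longest_sequence_length)
print(batch["attention_mask"])   # 0s mark the padded positions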

Decoding Tokens

Convert token IDs back to human-readable text.

decoded_text = tokenizer.decode(tokens["input_ids"][0])
print(decoded_text)

6. Model Inference

Basic Inference

Run the model to get predictions.

import torch

# Reuses the BERT tokenizer and AutoModel loaded in Section 4
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
with torch.no_grad():            # no gradients needed for inference
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape)  # (batch, sequence_length, hidden_size)

Text Generation

Generate text with a causal language model. Note that the BERT encoder loaded above cannot generate; load GPT-2 instead:

from transformers import AutoModelForCausalLM, AutoTokenizer

gen_tokenizer = AutoTokenizer.from_pretrained("gpt2")
gen_model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Once upon a time"
inputs = gen_tokenizer(prompt, return_tensors="pt")
outputs = gen_model.generate(
    inputs.input_ids,
    max_length=100,
    num_beams=4,
    no_repeat_ngram_size=2  # note: temperature only takes effect with do_sample=True
)
generated_text = gen_tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

7. Fine-Tuning

Using the Trainer API

Fine-tune a pretrained model on a custom dataset using the Trainer API.

from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_dataset

# Load dataset
dataset = load_dataset("imdb")

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

# Tokenize dataset
def preprocess(example):
    return tokenizer(example["text"], truncation=True, padding="max_length")

tokenized_dataset = dataset.map(preprocess, batched=True)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    save_steps=10000,
    save_total_limit=2,
    evaluation_strategy="epoch"  # renamed to eval_strategy in newer transformers releases
)

# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"]
)

# Train the model
trainer.train()
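
The Trainer can also report metrics during evaluation via a compute_metrics callback. A minimal sketch using the evaluate library (covered again in Section 9):

import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) tuple supplied by the Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Add compute_metrics=compute_metrics to the Trainer(...) call above.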

Mixed Precision Training

Enable mixed precision training for faster computation.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    fp16=True  # Enable mixed precision
)

8. Saving and Loading Models

Saving Models and Tokenizers

# Save model and tokenizer
model.save_pretrained("./my_model")
tokenizer.save_pretrained("./my_model")

Loading Saved Models and Tokenizers

from transformers import AutoModel, AutoTokenizer

# Load model and tokenizer
model = AutoModel.from_pretrained("./my_model")
tokenizer = AutoTokenizer.from_pretrained("./my_model")

Pushing to Hugging Face Hub

Share your models on the Hugging Face Hub.

# Push model and tokenizer to Hub
model.push_to_hub("username/model-name")
tokenizer.push_to_hub("username/model-name")
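
Pushing requires an authenticated session; log in first with a Hugging Face access token:

huggingface-cli login  # or, in Python: from huggingface_hub import login; login()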

Device Management

Move models and inputs to GPU for accelerated computations.

import torch

# Move model to GPU
model = model.to("cuda")

# Move inputs to GPU
inputs = {k: v.to("cuda") for k, v in inputs.items()}

# Check device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
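
For large models, the accelerate integration can place weights across available devices automatically; a short sketch assuming accelerate is installed (pip install accelerate):

from transformers import AutoModelForCausalLM

# device_map="auto" lets accelerate spread layers across available GPUs and CPU
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")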

9. Advanced Features

Customizing Pipelines

Create custom pipelines for specific tasks or models.

from transformers import pipeline

# Custom question-answering pipeline
qa_pipeline = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa_pipeline(question="What is Transformers?", context="Transformers is a library by Hugging Face.")
print(result)

Using Multimodal Models

Handle tasks involving multiple modalities like vision and text.

from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

# Load processor and model
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Process image
image = Image.open("photo.jpg")  # hypothetical path; any PIL-readable image works
processed = processor(image, return_tensors="pt")

# Generate caption
output = model.generate(**processed)
caption = processor.decode(output[0], skip_special_tokens=True)
print(caption)

Exporting to ONNX

Optimize models for inference by exporting to ONNX using the built-in exporter (newer releases recommend the optimum package instead):

python -m transformers.onnx --model=bert-base-uncased onnx/
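
A sketch of ONNX export and inference via the optimum package, assuming its onnxruntime integration is installed (pip install optimum[onnxruntime]); the export=True flag converts the checkpoint on load:

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(clf("ONNX inference feels fast!"))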

Logging and Metrics

Manage logging levels and compute evaluation metrics.

from transformers import logging

# Set logging verbosity
logging.set_verbosity_warning()

# Using metrics
import evaluate
accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[...], references=[...])
print(result)

10. Common Pipeline Tasks

Text Classification

from transformers import pipeline

classifier = pipeline("text-classification")
result = classifier("This is a great movie!")
print(result)

Named Entity Recognition (NER)

from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # replaces the deprecated grouped_entities=True
result = ner("Hugging Face is based in New York.")
print(result)

Question Answering

from transformers import pipeline

qa = pipeline("question-answering")
result = qa(question="What is Hugging Face?", context="Hugging Face is a company specializing in NLP.")
print(result)

Text Summarization

from transformers import pipeline

summarizer = pipeline("summarization")
result = summarizer("The Hugging Face Transformers library is amazing...", max_length=50)
print(result)

Translation

from transformers import pipeline

translator = pipeline("translation_en_to_fr")
result = translator("How are you today?")
print(result)

Zero-Shot Classification

from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier("Hugging Face is amazing.", candidate_labels=["education", "AI", "entertainment"])
print(result)

11. Common Models in the Hub

Modality     Example Model IDs
Text         bert-base-uncased, gpt2, t5-small
Vision       google/vit-base-patch16-224, facebook/detr-resnet-50
Audio        facebook/wav2vec2-base-960h
Multimodal   Salesforce/blip-image-captioning-base
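
Any of these Hub IDs can be passed directly to pipeline() or the Auto* classes, for example:

from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
print(asr("speech_sample.wav"))  # hypothetical path to a local audio file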

Conclusion

The Hugging Face Transformers library is a versatile and powerful tool for working with state-of-the-art Natural Language Processing (NLP) models. From installation and high-level pipelines to fine-tuning models for specific tasks, it offers a comprehensive feature set for both beginners and advanced users. By leveraging the community-driven Model Hub and extensive documentation, users can efficiently build, deploy, and innovate in NLP and beyond.


Last updated January 22, 2025