---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: text
      dtype: string
    - name: Language
      dtype: string
    - name: Corpus
      dtype: string
    - name: Script
      dtype: string
    - name: Century
      dtype: string
    - name: Image_name
      dtype: string
    - name: NER_ann
      dtype: string
  splits:
    - name: train
      num_bytes: 30374609181
      num_examples: 177744
    - name: validation
      num_bytes: 1689908739
      num_examples: 9829
    - name: test
      num_bytes: 1278986029
      num_examples: 9827
  download_size: 33333506316
  dataset_size: 33343503949
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
tags:
  - handwritten-text-recognition
  - image-to-text
  - image-text-to-text
pipeline_tag: image-text-to-text
license: mit
task_categories:
  - image-to-text
language:
  - fr
  - es
  - la
  - de
  - nl
pretty_name: Tridis
size_categories:
  - 100M<n<1B
---

This is the first version of the dataset derived from the corpora used for TRIDIS (Tria Digita Scribunt).

TRIDIS encompasses a series of Handwritten Text Recognition (HTR) models trained on semi-diplomatic transcriptions of medieval and early modern manuscripts.

The semi-diplomatic transcription approach involves resolving the abbreviations found in the original manuscripts and normalizing punctuation and allographs.
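
For illustration, this kind of normalization can be approximated by character-level substitutions. The sketch below is hypothetical: the substitution table is an assumption for illustration, not the exact rule set used to produce TRIDIS.

```python
# Hypothetical sketch of semi-diplomatic normalization; the substitution
# table is illustrative, not the exact rules applied to TRIDIS.
ALLOGRAPHS = {
    "ſ": "s",  # long s -> round s
    "ꝛ": "r",  # r rotunda -> r
    "·": ".",  # interpunct -> period
}

def normalize(text: str) -> str:
    for old, new in ALLOGRAPHS.items():
        text = text.replace(old, new)
    return text

print(normalize("ſcripſit·"))  # -> "scripsit."
```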

The dataset contains approximately 4,000 pages of manuscripts and is particularly suitable for working with documentary sources, that is, manuscripts originating from legal, administrative, and memorial practices. Examples include registers, feudal books, charters, proceedings, and accounting records, primarily dating from the Late Middle Ages (13th century onwards).

The dataset covers Western European regions (mainly Spain, France, and Germany) and spans the 12th to the 17th centuries.
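
To get a feel for the data before training, you can stream a single record and inspect its fields (`image`, `text`, `Language`, `Corpus`, `Script`, `Century`, `Image_name`, `NER_ann`). A minimal sketch; streaming avoids downloading the full ~33 GB archive up front:

```python
from datasets import load_dataset

# Stream the train split so nothing is downloaded up front
ds = load_dataset("magistermilitum/Tridis", split="train", streaming=True)
sample = next(iter(ds))

print(sample["text"])        # semi-diplomatic transcription
print(sample["Language"], sample["Corpus"], sample["Script"], sample["Century"])
print(sample["NER_ann"])     # named-entity annotation string
print(sample["image"].size)  # PIL image of the transcribed region
```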

## Corpora

The original ground-truth corpora are available under CC BY licenses in online repositories:

## Citation

A preprint presenting this corpus is available:

```bibtex
@article{aguilar2025tridis,
  title={TRIDIS: A Comprehensive Medieval and Early Modern Corpus for HTR and NER},
  author={Aguilar, Sergio Torres},
  journal={arXiv preprint arXiv:2503.22714},
  year={2025}
}
```

## How to Get Started with this Dataset

Use the following Python code to fine-tune a TrOCR model on the TRIDIS dataset:

```python
# Use transformers==4.43.0
# Note: data augmentation is omitted here but strongly recommended.

import torch
from PIL import Image

from torch.utils.data import Dataset
from datasets import load_dataset
from transformers import (
    TrOCRProcessor,
    VisionEncoderDecoderModel,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    default_data_collator
)
from evaluate import load

# --- Data loading and preprocessing ---

# Load the dataset from Hugging Face
dataset = load_dataset("magistermilitum/Tridis")
print("Dataset loaded.")

# Initialize the processor
# Use the specific processor associated with the TrOCR model
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")  # or the large version for better performance
print("Processor loaded.")

# --- Custom Dataset Modified for Deferred Loading (No Augmentation) ---
class CustomDataset(Dataset):
    def __init__(self, hf_dataset, processor, max_target_length=160):
        """
        Args:
            hf_dataset: The dataset loaded by Hugging Face (datasets.Dataset).
            processor: The TrOCR processor.
            max_target_length: Maximum length for the target labels.
        """
        self.hf_dataset = hf_dataset
        self.processor = processor
        self.max_target_length = max_target_length

        # --- EFFICIENT FILTERING ---
        # Filter here to know the actual length and avoid processing invalid samples in __getitem__
        # Use indices to maintain the efficiency of accessing the original dataset
        self.valid_indices = [
            i for i, text in enumerate(self.hf_dataset["text"])
            if isinstance(text, str) and 3 < len(text) < 257 # Filter based on text length
        ]
        print(f"Dataset filtered. Valid samples: {len(self.valid_indices)} / {len(self.hf_dataset)}")

    def __len__(self):
        # The length is the number of valid indices after filtering
        return len(self.valid_indices)

    def __getitem__(self, idx):
        # Get the original index in the Hugging Face dataset
        original_idx = self.valid_indices[idx]

        # Load the specific sample from the Hugging Face dataset
        item = self.hf_dataset[original_idx]
        image = item["image"]
        text = item["text"]

        # Ensure the image is PIL and RGB
        if not isinstance(image, Image.Image):
            # If not PIL (rare with load_dataset, but for safety)
            # Assume it can be loaded by PIL or is a numpy array
            try:
                image = Image.fromarray(image).convert("RGB")
            except Exception:
                # Fallback if conversion fails: reuse the first valid sample
                # as a placeholder (skipping via the collator would be cleaner).
                print(f"Error converting image at original index {original_idx}. Using placeholder.")
                item = self.hf_dataset[self.valid_indices[0]]
                image = item["image"].convert("RGB")
                text = item["text"]
        else:
            image = image.convert("RGB")

        # Process image using the TrOCR processor
        try:
            # The processor handles resizing and normalization
            pixel_values = self.processor(images=image, return_tensors="pt").pixel_values
        except Exception as e:
            print(f"Error processing image at original index {original_idx}: {e}. Using placeholder.")
            # Create a black placeholder tensor matching the model's expected input size
            img_size = self.processor.image_processor.size
            # The size attribute may be an int, a dict, or a tuple depending on the processor version
            if isinstance(img_size, int):
                h = w = img_size
            elif isinstance(img_size, dict) and "height" in img_size and "width" in img_size:
                h = img_size["height"]
                w = img_size["width"]
            elif isinstance(img_size, (tuple, list)) and len(img_size) == 2:
                h, w = img_size
            else:  # Default fallback size if uncertain
                h, w = 384, 384  # Common TrOCR input size; adjust if needed
            pixel_values = torch.zeros((3, h, w))


        # Tokenize the text
        labels = self.processor.tokenizer(
            text,
            padding="max_length",
            max_length=self.max_target_length,
            truncation=True # Important to add truncation just in case
        ).input_ids

        # Replace pad tokens with -100 to ignore in the loss function
        labels = [label if label != self.processor.tokenizer.pad_token_id else -100
                  for label in labels]

        encoding = {
            # .squeeze() removes dimensions of size 1, necessary as we process one image at a time
            "pixel_values": pixel_values.squeeze(),
            "labels": torch.tensor(labels)
        }
        return encoding

# --- Create Instances of the Modified Dataset ---
# Pass the Hugging Face dataset directly
train_dataset = CustomDataset(dataset["train"], processor)
eval_dataset = CustomDataset(dataset["validation"], processor)

print(f"\nNumber of training examples (valid and filtered): {len(train_dataset)}")
print(f"Number of validation examples (valid and filtered): {len(eval_dataset)}")

# --- End of data preparation ---


# Load pretrained model
print("\nLoading pre-trained model...")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
model.to(device)
print(f"Model loaded on: {device}")

# Configure the model for fine-tuning
print("Configuring model...")
model.config.decoder.is_decoder = True # Explicitly set decoder flag
model.config.decoder.add_cross_attention = True # Ensure decoder attends to encoder outputs
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id # Start generation with CLS token
model.config.pad_token_id = processor.tokenizer.pad_token_id # Set pad token ID
model.config.vocab_size = model.config.decoder.vocab_size # Set vocabulary size
model.config.eos_token_id = processor.tokenizer.sep_token_id # Set end-of-sequence token ID

# Generation configuration (influences evaluation and inference)
model.config.max_length = 160 # Max generated sequence length
model.config.early_stopping = True # Stop generation early if EOS is reached
model.config.no_repeat_ngram_size = 3 # Prevent repetitive n-grams
model.config.length_penalty = 2.0 # Encourage longer sequences slightly
model.config.num_beams = 3 # Use beam search for better quality generation

# Metrics
print("Loading metrics...")
cer_metric = load("cer")
wer_metric = load("wer")

def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions

    # Replace -100 with pad_token_id for correct decoding
    labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id

    # Decode predictions and labels
    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(labels_ids, skip_special_tokens=True)

    # Calculate CER and WER
    cer = cer_metric.compute(predictions=pred_str, references=label_str)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)

    print(f"\nEvaluation Step Metrics - CER: {cer:.4f}, WER: {wer:.4f}") # Print metrics

    return {"cer": cer, "wer": wer} # Return metrics required by Trainer


# Training configuration
batch_size_train = 32  # Adjust based on GPU memory (32 fits on ~48 GB of VRAM)
batch_size_eval = 32   # Adjust based on GPU memory
epochs = 10  # Number of training epochs (15 recommended)

print("\nConfiguring training arguments...")
training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,       # Use generate for evaluation (needed for CER/WER)
    per_device_train_batch_size=batch_size_train,
    per_device_eval_batch_size=batch_size_eval,
    fp16=(device.type == "cuda"),     # Enable mixed precision training on GPU
    output_dir="./trocr-model-tridis", # Directory to save model checkpoints
    logging_strategy="steps",
    logging_steps=10,                 # Log training loss every 10 steps
    evaluation_strategy='steps',      # Evaluate every N steps
    eval_steps=5000,                  # Adjust based on dataset size
    save_strategy='steps',            # Save checkpoint every N steps
    save_steps=5000,                  # Match eval_steps
    num_train_epochs=epochs,
    save_total_limit=3,               # Keep only the last 3 checkpoints
    learning_rate=7e-5,               # Learning rate for the optimizer
    weight_decay=0.01,                # Weight decay for regularization
    warmup_ratio=0.05,                # Percentage of training steps for learning rate warmup
    lr_scheduler_type="cosine",       # Learning rate scheduler type (better than linear)
    dataloader_num_workers=8,         # Use multiple workers for data loading (adjust based on CPU cores)
    # report_to="tensorboard",        # Uncomment to enable TensorBoard logging
)

# Initialize the Trainer
trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=processor.image_processor,  # Saved with checkpoints; padding is handled by the collator
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=default_data_collator, # Default collator handles padding inputs/labels
)

# Start Training
print("\n--- Starting Training ---")
try:
    trainer.train()
    print("\n--- Training Completed ---")
except Exception as e:
    error_message = f"Error during training: {e}"
    print(error_message)
    # Consider saving a checkpoint on error if needed
    # trainer.save_model("./trocr-model-tridis-interrupted")

# Save the final model and processor
print("Saving final model and processor...")
# Ensure the final directory name is consistent
final_save_path = "./trocr-model-tridis-final"
trainer.save_model(final_save_path)
processor.save_pretrained(final_save_path) # Save the processor alongside the model
print(f"Model and processor saved to {final_save_path}")

# Clean up CUDA cache if a GPU was used
if device.type == "cuda":
    torch.cuda.empty_cache()
    print("CUDA cache cleared.")
```