

Attention Rollout -- RoBERTa

In this demo, we use the RoBERTa language model (pre-trained for masked language modelling and fine-tuned for sentiment analysis). The model predicts, for a given sentence, whether it expresses a positive, negative, or neutral sentiment. But how does it arrive at its classification? This is, perhaps surprisingly, very difficult to determine.
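
As a rough sketch of the classification step (not the demo's exact code), the model can be queried as follows. We assume the widely used cardiffnlp/twitter-roberta-base-sentiment checkpoint here; the checkpoint actually used in this demo may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint -- the demo's actual fine-tuned model may be different.
model_name = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("What a wonderful day!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# Label order for this checkpoint: 0 = negative, 1 = neutral, 2 = positive.
for label, p in zip(["negative", "neutral", "positive"], probs.tolist()):
    print(f"{label}: {p:.3f}")
```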

Abnar & Zuidema (2020) proposed a method for Transformers called Attention Rollout, which Chefer et al. (2021) further refined into Gradient-weighted Attention Rollout. Here we compare both methods to another popular attribution technique, Integrated Gradients.
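
To give an idea of what plain Attention Rollout computes, here is a minimal sketch (not the demo's actual implementation): per layer, average the attention weights over heads, add the identity matrix to account for the residual connection, re-normalize the rows, and multiply the resulting matrices across layers. It assumes attention maps as returned by a Hugging Face model called with `output_attentions=True`.

```python
import torch

def attention_rollout(attentions):
    """Plain Attention Rollout (Abnar & Zuidema, 2020).

    attentions: list/tuple of per-layer tensors of shape
    (batch, heads, seq_len, seq_len).
    Returns a (batch, seq_len, seq_len) rollout matrix.
    """
    rollout = None
    for layer_attn in attentions:
        attn = layer_attn.mean(dim=1)                 # average over heads
        attn = attn + torch.eye(attn.size(-1))        # residual connection
        attn = attn / attn.sum(dim=-1, keepdim=True)  # row-normalize
        rollout = attn if rollout is None else attn @ rollout
    return rollout

# Usage with the model from above:
# outputs = model(**inputs, output_attentions=True)
# rollout = attention_rollout(outputs.attentions)
# The row for the classification token (<s> in RoBERTa) is typically read
# off as the per-token relevance scores.
```

The gradient-weighted variant of Chefer et al. (2021) follows the same recursion but weights each attention map by its gradient with respect to the predicted class (clamping negative contributions to zero) before averaging over heads.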