---
task_categories:
- feature-extraction
pretty_name: GloVe-V
---
# Dataset Card for Statistical Uncertainty in Word Embeddings: GloVe-V
This is the data repository for the paper "Statistical Uncertainty in Word Embeddings: GloVe-V".
Our preprint is available [here](https://arxiv.org/abs/2406.12165).
**We introduce a method to obtain approximate, easy-to-use, and scalable uncertainty estimates for the GloVe word embeddings and
demonstrate its usefulness in natural language tasks and computational social science analysis.**
## Dataset Details
This data repository contains pre-computed GloVe embeddings and GloVe-V variances for several corpora, including the following (a download sketch appears after this list):
- **Toy Corpus (300-dim)**: a subset of 11 words from the Corpus of Historical American English (1900-1999). Downloadable as `Toy-Embeddings`
- **Corpus of Historical American English (COHA) (1900-1999) (300-dim)**: Downloadable as `COHA_1900-1999_300d`
- More to come!
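
To fetch a single corpus without downloading the whole repository, something like the sketch below should work. The repo id (`reglab/glove-v`) and the assumption that each corpus lives in a folder named after its download name are guesses on our part; the tutorial linked below documents the supported workflow.

```python
# Minimal download sketch using huggingface_hub. The repo id and folder
# layout are assumptions; check the repository page and the tutorial.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="reglab/glove-v",           # assumed repo id
    repo_type="dataset",
    allow_patterns="Toy-Embeddings/*",  # fetch only the toy corpus files
)
print(local_dir)  # local path containing the Toy-Embeddings/ folder
```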
### Dataset Description
This dataset contains pre-computed GloVe embeddings and GloVe-V variances for the corpora listed above.
Given a vocabulary of size $V$, the GloVe-V variances require storing $V \times (D \times D)$ floating point numbers, where $D$ is the embedding dimensionality.
For this reason, we produce two versions of the variances:
1. **Approximation Variances**: These are approximations to the full GloVe-V variances, using either a diagonal approximation to the full variance or a low-rank Singular Value Decomposition (SVD) approximation. We optimize the approximation at the level of each word to guarantee at least 90% reconstruction of the original variance. These approximations require storing far fewer floating point numbers than the full variances.
2. **Complete Variances**: These are the full GloVe-V variances, which require storing $V \times (D \times D)$ floating point numbers. For example, in the case of the 300-dimensional embeddings for the COHA (1900-1999) corpus, this is approximately 6.4 billion floating point numbers! The sketch after this list works through the arithmetic.
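
To make the storage trade-off concrete, the sketch below works through the float counts for the complete, diagonal, and rank-$k$ SVD representations, and reconstructs a $D \times D$ variance from a rank-$k$ factorization. The vocabulary size is illustrative, and the factor shapes are assumptions based on the $k(2D + 1)$ count quoted in the **Dataset Structure** section below, not the repository's own file layout.

```python
# Storage arithmetic for the variance representations (illustrative numbers).
import numpy as np

D = 300     # embedding dimensionality
V = 71_000  # illustrative vocabulary size, roughly COHA-scale

print(f"complete: {V * D * D:,} floats total")  # ~6.4 billion at this scale
print(f"diagonal: {D:,} floats per word")       # one float per dimension

k = 10  # illustrative rank
print(f"rank-{k} SVD: {k * (2 * D + 1):,} floats per word")

# Reconstructing one word's variance from an assumed rank-k factorization
# (k singular values plus two D x k factors):
U = np.random.randn(D, k)
s = np.abs(np.random.randn(k))
Vt = np.random.randn(k, D)
Sigma_approx = U @ (s[:, None] * Vt)  # D x D approximation of the variance
```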
- **Created By:** Andrea Vallebueno, Cassandra Handan-Nader, Christopher D. Manning, and Daniel E. Ho
- **Languages:** English
- **License:** The license of these data products varies by corpus. The COHA data products are intended for academic use only.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [GloVe-V GitHub repository](https://github.com/reglab/glove-v)
- **Paper:** [Preprint](https://arxiv.org/abs/2406.12165)
- **Demo:** [Tutorial](https://github.com/reglab/glove-v/blob/main/glove_v/docs/tutorial.ipynb)
## Dataset Structure
The dataset for each corpus contains the following files (see the **Dataset Description** section above for more details on the differences between the complete and approximated variances; a loading sketch follows the list):
- `vocab.txt`: a list of the words in the corpus with associated frequencies
- `vectors.safetensors`: a safetensors file containing the embeddings for each word in the corpus
- `complete_chunk_{i}.safetensors`: a set of safetensors files containing the complete variances for each word in the corpus. These variances are of size $D \times D$, where $D$ is the embedding dimensionality, and are thus very storage-intensive.
- `approx_info.txt`: a text file containing information on the approximation used to approximate the full variance of each word (diagonal approximation, or SVD approximation)
- `ApproximationVariances.safetensors`: a safetensors file containing the approximation variances for each word in the corpus. These approximations require storing far fewer floating point numbers than the full variances. If a word has been approximated by a diagonal approximation, this file contains only $D$ floating point numbers for that word; if a word has instead been approximated by an SVD approximation of rank $k$, it contains $k(2D + 1)$ floating point numbers for that word.
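
As a rough illustration of reading these files, the sketch below loads the embeddings with the `safetensors` library. The relative paths and the assumption that tensor keys are vocabulary words are ours; consult the tutorial for the supported access pattern.

```python
# Loading sketch for a downloaded corpus folder. Paths and key names are
# assumptions; the tutorial documents the supported access pattern.
from safetensors.numpy import load_file

vectors = load_file("Toy-Embeddings/vectors.safetensors")
embedding = vectors["example"]  # hypothetical key; one word's embedding

# vocab.txt lists each word with its corpus frequency, one word per line.
with open("Toy-Embeddings/vocab.txt") as f:
    vocab = [line.split()[0] for line in f]
print(len(vocab), embedding.shape)
```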
## Use
Our tutorial notebook is available [here](https://github.com/reglab/glove-v/blob/main/glove_v/docs/tutorial.ipynb) and offers a detailed walkthrough of the process of downloading and interacting with the GloVe-V data products.
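
As a hedged illustration of what the variances enable (independent of the package's own API), one can propagate embedding uncertainty into a downstream statistic by sampling draws from a Gaussian centered at a word's point estimate:

```python
# Uncertainty propagation sketch: sample embeddings from N(mu, Sigma) and
# recompute a downstream statistic per draw. All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(300)           # a word's embedding (placeholder)
Sigma = 0.01 * np.eye(300)   # its D x D GloVe-V variance (placeholder)

draws = rng.multivariate_normal(mu, Sigma, size=100)

# Distribution of cosine similarity to some comparison vector:
other = rng.standard_normal(300)
cos = draws @ other / (np.linalg.norm(draws, axis=1) * np.linalg.norm(other))
print(cos.mean(), cos.std())  # point estimate with an uncertainty band
```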
## Citation
If you use this dataset, please cite it as below:
**BibTeX:**
```bibtex
@inproceedings{glovev2024,
    title     = "Statistical Uncertainty in Word Embeddings: {GloVe-V}",
    author    = "Vallebueno, Andrea and Handan-Nader, Cassandra and Manning, Christopher D. and Ho, Daniel E.",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    year      = "2024",
    publisher = "Association for Computational Linguistics",
    location  = "Miami, Florida"
}
```
## Contact
Daniel E. Ho ([email protected])