# CTRL |
|
|
|
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=ctrl">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/tiny-ctrl">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
|
|
|
## Overview |
|
|
|
The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and
Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus
of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia, etc.).
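
The control code is simply placed as the first token of the prompt. Below is a minimal generation sketch; the
`Salesforce/ctrl` checkpoint name is an assumption (check the model pages linked above for the exact identifier),
and the `Links` control code comes from the original repository:

```python
# Minimal sketch: conditioning generation on a control code.
# The "Salesforce/ctrl" checkpoint name is an assumption; see the model pages
# linked above for the exact identifier.
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The control code ("Links" here) must be the very first token of the prompt.
prompt = "Links An article about the history of the telescope"
inputs = tokenizer(prompt, return_tensors="pt")

# CTRL was trained with repetitive text filtered out, so a repetition penalty
# is commonly used at generation time.
generated = model.generate(**inputs, max_new_tokens=40, repetition_penalty=1.2)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```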
|
|
|
The abstract from the paper is the following: |
|
|
|
*Large-scale language models show promising text generation capabilities, but users cannot easily control particular
aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while
providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the
training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data
via model-based source attribution.*
|
|
|
Tips: |
|
|
|
- CTRL makes use of control codes to generate text: generation must be started with certain words, sentences,
  or links to produce coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for
  more information.
- CTRL is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than
  the left.
- CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
  token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text, as can be
  observed in the *run_generation.py* example script.
- The PyTorch models can take `past_key_values` as input, which is the previously computed key/value attention pairs.
  TensorFlow models accept `past` as input. Using the `past_key_values` value prevents the model from re-computing
  pre-computed values in the context of text generation, as illustrated in the sketch after this list. See the
  [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this argument.
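
To illustrate the last tip, the sketch below runs one forward pass over a prompt, caches `past_key_values`, and then
feeds only the newly sampled token back to the model. The `Salesforce/ctrl` checkpoint name is again an assumption:

```python
# Minimal sketch of incremental decoding with past_key_values (PyTorch).
# The "Salesforce/ctrl" checkpoint name is an assumption.
import torch
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")
model.eval()

inputs = tokenizer("Wikipedia The telescope was invented", return_tensors="pt")

with torch.no_grad():
    # The first pass encodes the whole prompt and returns the cached key/value pairs.
    outputs = model(**inputs, use_cache=True)
    past_key_values = outputs.past_key_values
    next_token = outputs.logits[:, -1:].argmax(dim=-1)

    # Later passes only feed the new token plus the cache, so the prompt
    # is not re-encoded at every generation step.
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
```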
|
|
|
This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found
[here](https://github.com/salesforce/ctrl).
|
|
|
## Documentation resources |
|
|
|
- [Text classification task guide](../tasks/sequence_classification)
- [Causal language modeling task guide](../tasks/language_modeling)
|
|
|
## CTRLConfig |
|
|
|
[[autodoc]] CTRLConfig |
|
|
|
## CTRLTokenizer |
|
|
|
[[autodoc]] CTRLTokenizer
    - save_vocabulary
|
|
|
## CTRLModel |
|
|
|
[[autodoc]] CTRLModel
    - forward
|
|
|
## CTRLLMHeadModel |
|
|
|
[[autodoc]] CTRLLMHeadModel
    - forward
|
|
|
## CTRLForSequenceClassification |
|
|
|
[[autodoc]] CTRLForSequenceClassification
    - forward
|
|
|
## TFCTRLModel |
|
|
|
[[autodoc]] TFCTRLModel
    - call
|
|
|
## TFCTRLLMHeadModel |
|
|
|
[[autodoc]] TFCTRLLMHeadModel
    - call
|
|
|
## TFCTRLForSequenceClassification |
|
|
|
[[autodoc]] TFCTRLForSequenceClassification
    - call
|
|