---
license: apache-2.0
---
# Mixture of Tokens

## Model description
Mixture of Tokens is a fully differentiable model that retains the benefits of Mixture of Experts (MoE) architectures while avoiding the training difficulties commonly associated with sparse expert routing. Rather than routing tokens to experts, this approach mixes tokens from different examples before feeding them to experts, enabling the model to learn from all token-expert combinations. Importantly, this mixing can be disabled during inference to avoid mixing tokens from different sequences. Crucially, the method is fully compatible with both masked and causal large language model training and inference.
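The sketch below illustrates the mixing idea for a single group and a single expert: tokens from different sequences are combined into one mixed token via learned importance weights, the expert processes the mixed token once, and its output is redistributed to the original tokens. This is a conceptual toy example with hypothetical shapes and randomly initialized weights, not the implementation shipped in this repository.

```python
import torch

group_size, d_model = 4, 8                  # hypothetical sizes for illustration
tokens = torch.randn(group_size, d_model)   # one token from each sequence in the group
mix_logits = torch.randn(group_size)        # stand-in for learned importance scores
weights = torch.softmax(mix_logits, dim=0)  # mixing weights sum to 1 over the group

# Mix the group into a single token and run the expert once per group.
mixed_token = (weights.unsqueeze(-1) * tokens).sum(dim=0)        # shape: (d_model,)
expert = torch.nn.Sequential(
    torch.nn.Linear(d_model, 4 * d_model),
    torch.nn.ReLU(),
    torch.nn.Linear(4 * d_model, d_model),
)
expert_out = expert(mixed_token)

# Redistribute the expert output to each original token in proportion to its weight.
per_token_update = weights.unsqueeze(-1) * expert_out            # shape: (group_size, d_model)
```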
## Tips
During inference, the model's computational efficiency comes from combining tokens across the batch into groups of a fixed size, denoted `group_size` in the model configuration. If the batch size is not evenly divisible by `group_size`, the model will internally pad the batch to make it divisible. For best throughput, run batched inference with a batch size that is a multiple of `group_size`, for example as sketched below.
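As a minimal sketch (assuming the configuration attribute is literally named `group_size`, as described above), you can read the group size from the configuration and choose the batch size accordingly:

```python
from transformers import AutoConfig

# The configuration exposes the group size used for token mixing.
config = AutoConfig.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
group_size = config.group_size

# Pick a batch size that is a multiple of group_size so the model does not
# have to pad each batch internally (the factor 2 is an arbitrary choice here).
batch_size = 2 * group_size
```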
## Usage example
The code example auto-generated by the model hub may be incorrect. To get started, try running:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# trust_remote_code is required because the architecture is defined in the model repository.
tokenizer = AutoTokenizer.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Is mixture of tokens better than a dense model?")
```
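For batched generation, you can pass a list of prompts together with the pipeline's standard `batch_size` argument; per the Tips section, the batch size should ideally be a multiple of `group_size` (the value `4` below is only a placeholder):

```python
prompts = [
    "Is mixture of tokens better than a dense model?",
    "What is a mixture of experts?",
]
# Assumes group_size divides 4; substitute a multiple of your model's group_size.
outputs = pipe(prompts, batch_size=4)
```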