Update README.md
README.md CHANGED
@@ -2659,3 +2659,15 @@ embeddings = F.normalize(embeddings, p=2, dim=1)
 print(embeddings)
 ```
 
+The model natively supports scaling of the sequence length past 2048 tokens. To do so, apply the following changes when loading the tokenizer and model:
+
+```python
+- tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
++ tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
+
+
+- model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1-unsupervised', trust_remote_code=True)
++ model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1-unsupervised', trust_remote_code=True, rotary_scaling_factor=2)
+```
+
+
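For reference, the change above composes into the following self-contained sketch of long-context usage. Only the `model_max_length=8192` and `rotary_scaling_factor=2` arguments come from this commit; the `mean_pooling` helper, the eval/no-grad wrapping, and the example input are assumptions added here to make the snippet runnable, mirroring the pooling and `F.normalize` step visible in the surrounding README context.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Standard mean pooling over valid tokens (assumed here; mirrors the pooling
# used with F.normalize in the README context shown in the hunk above).
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

# The two arguments introduced by this change: raise the tokenizer's length limit
# and scale the rotary position embeddings so sequences beyond 2048 tokens work.
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1-unsupervised',
                                  trust_remote_code=True, rotary_scaling_factor=2)
model.eval()

# Illustrative long input; anything up to 8192 tokens is kept without truncation.
sentences = ['a very long document ' * 1500]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
```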