# jinaai/jina-bert-flash-implementation
Tags: Transformers · bert · custom_code · 🇪🇺 Region: EU
6 contributors · History: 107 commits. Latest commit `c768124` by michael-guenther: "fix: glu for non-flash-attn" (over 1 year ago).
## Files

| File | Size | Last commit message | Updated |
|------|------|---------------------|---------|
| README.md | 1.89 kB | feat: added README | over 1 year ago |
| bert_padding.py | 9.78 kB | reference the flash attention GitHub | over 1 year ago |
| block.py | 17.4 kB | reference the flash attention GitHub | over 1 year ago |
| configuration_bert.py | 5.77 kB | Porting v2 models to flash attention (#15) | over 1 year ago |
| convert_v2_weights.py | 6.1 kB | feat: for converting v2, added lines to save model weights and print config | over 1 year ago |
| embedding.py | 2.26 kB | clean up embeddings.py (#7) | over 1 year ago |
| mha.py | 35.3 kB | reference the flash attention GitHub | over 1 year ago |
| mlp.py | 8.05 kB | fix: glu for non-flash-attn | over 1 year ago |
| modeling_bert.py | 33.4 kB | fix: glu for non-flash-attn | over 1 year ago |
| modeling_for_glue.py | 10.7 kB | feat: assert return_dict | over 1 year ago |
| modeling_lora.py | 12.3 kB | fix: use staticmethod istead of classmethod | over 1 year ago |
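Because the repository carries the `custom_code` tag, Transformers loads its classes (`modeling_bert.py`, `mha.py`, etc.) at runtime rather than using the built-in BERT implementation. Below is a minimal loading sketch, assuming a checkpoint whose `auto_map` points at this implementation; the model ID used here is an assumption for illustration, not taken from this page:

```python
from transformers import AutoModel, AutoTokenizer

# Assumed model ID: any checkpoint whose auto_map references this
# implementation (e.g. a jina-embeddings-v2 variant); swap in your own.
model_id = "jinaai/jina-embeddings-v2-base-en"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# trust_remote_code=True tells Transformers to download and execute the
# custom modeling files from this repository instead of its built-in classes.
# Per the commit history, a non-flash-attention code path exists, so this
# should also work on machines without the flash-attn package installed.
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Flash attention speeds up BERT.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```

Without `trust_remote_code=True`, `from_pretrained` refuses to run the remote modeling code and raises an error, so the flag is required for any checkpoint built on this repository.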