jinaai/jina-bert-flash-implementation (Jina AI)
Tags: Transformers · bert · custom_code · Inference Endpoints · 🇪🇺 Region: EU
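The Transformers, bert, and custom_code tags indicate that this repository ships its own modeling code rather than relying on the BERT classes built into Transformers, so any checkpoint that uses it must be loaded with trust_remote_code=True. Below is a minimal sketch of that loading path; the repo id is a placeholder (not taken from this page) for whichever weights repository actually points at this implementation.

```python
# Minimal sketch: loading a checkpoint that relies on custom modeling code,
# as the "custom_code" tag on this repository indicates.
# NOTE: "your-org/your-bert-checkpoint" is a placeholder; substitute a weights
# repository whose auto_map references jinaai/jina-bert-flash-implementation.
from transformers import AutoModel, AutoTokenizer

repo_id = "your-org/your-bert-checkpoint"  # hypothetical weights repo

# trust_remote_code=True lets Transformers download and execute the *.py files
# listed further down this page (configuration_bert.py, modeling_bert.py, ...)
# instead of its built-in BERT implementation.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Flash-attention BERT test sentence.", return_tensors="pt")
outputs = model(**inputs)
print(type(outputs))  # exact output structure depends on the custom modeling code
```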
6 contributors · History: 70 commits
Latest commit: support-multiple-task-ids (#5) · e151a8f (verified) · 12 months ago · Markus28, michael-guenther
File                    Size       Last commit message                                 Last updated
bert_padding.py         9.78 kB    reference the flash attention GitHub                12 months ago
block.py                17.4 kB    reference the flash attention GitHub                12 months ago
configuration_bert.py   5.76 kB    added classifier dropout                            12 months ago
embedding.py            6.43 kB    reference the flash attention GitHub                12 months ago
mha.py                  35.3 kB    reference the flash attention GitHub                12 months ago
mlp.py                  6.17 kB    reference the flash attention GitHub                12 months ago
modeling_bert.py        28.6 kB    fix: assert is None for other kwargs too            12 months ago
modeling_for_glue.py    10.7 kB    feat: assert return_dict                            12 months ago
modeling_lora.py        5.91 kB    fix: use proper initilization for embedding layer   12 months ago
tokenizer.py            3.65 kB    support-multiple-task-ids (#5)                      12 months ago
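Since the file view above is pinned to commit e151a8f, a reproducible way to fetch exactly these sources is to pin the same revision when downloading. The sketch below uses huggingface_hub.snapshot_download; the allow_patterns filter and the short revision hash are illustrative (the full 40-character commit hash works as well).

```python
# Minimal sketch: fetching the implementation files at the commit shown above.
# The revision is the commit hash from the file view; allow_patterns simply
# restricts the download to the Python sources listed in the table.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="jinaai/jina-bert-flash-implementation",
    revision="e151a8f",        # pin to the commit shown above (full hash also works)
    allow_patterns=["*.py"],   # bert_padding.py, block.py, mha.py, modeling_bert.py, ...
)
print(local_dir)  # local cache directory containing the downloaded sources
```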