Commit 80a3a6e by ybelkada (1 parent: 8ffe0cb)

Update README.md

Files changed (1): README.md (+9, -5)
README.md CHANGED
@@ -231,11 +231,15 @@ Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatr
 
 # Citation
 
-*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
+You can use the following bibtex citation:
 ```
-@article{falconmamba,
-title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
-author={Zuo, Jingwei and Velikanov, Maksim and Rhaiem, Dhia Eddine and Chahed, Ilyas and Belkada, Younes and Kunsch, Guillaume and Hacid, Hakim},
-year={2024}
+@misc{zuo2024falconmambacompetitiveattentionfree,
+title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
+author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
+year={2024},
+eprint={2410.05355},
+archivePrefix={arXiv},
+primaryClass={cs.CL},
+url={https://arxiv.org/abs/2410.05355},
 }
 ```