pszemraj committed: Update README.md
Commit 422db7f · 1 Parent(s): 55da9c7

Files changed (1): README.md (+8 −1)
README.md CHANGED

@@ -105,7 +105,14 @@ This is `BEE-spoke-data/smol_llama-220M-GQA` fine-tuned for code generation on:
 
 This model (and the base model) were both trained using ctx length 2048.
 
-Example script for inference testing: [here](https://gist.github.com/pszemraj/c7738f664a64b935a558974d23a7aa8c)
+## examples
+
+> Example script for inference testing: [here](https://gist.github.com/pszemraj/c7738f664a64b935a558974d23a7aa8c)
+
+It has its limitations at 220M, but seems decent for single-line or docstring generation, and/or being used for speculative decoding for such purposes.
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/bLrtpr7Vi_MPvtF7mozDN.png)
 
 ---