Alexandre Marques (alexmarques)
AI & ML interests: None yet
Recent Activity
- Updated a model 21 days ago: nm-testing/llama-3-fp8-2of4-dynamic-uncompressed
- Updated a model 21 days ago: neuralmagic/Sparse-Llama-3.1-8B-gsm8k-2of4-FP8-dynamic
alexmarques's activity
- KV Cache Quantization - what is the default precision (#2, opened 20 days ago by deepmage121)
- Usage of --apply_chat_template in lm_eval benchmarks (#1, opened 5 months ago by VlSav)
- How many resources were used for quantizing this model? (#4, opened 4 months ago by fengyang1995)
- 4k or 128k? (#1, opened 4 months ago by pavidu)
- Update README.md (#1, opened 4 months ago by nm-research)
- Llama-3.1 8B quantization? (#1, opened 5 months ago by ashbo)
- Language Support (#2, opened 5 months ago by ashbo)
- Possible problem in description (#3, opened 5 months ago by kuliev-vitaly)
- Weird variable name mistakes in code generation (#1, opened 5 months ago by krana)
- Update README.md (#1, opened 6 months ago by alexmarques)
- Update README.md (#1, opened 6 months ago by alexmarques)
- Update README.md (#1, opened 9 months ago by alexmarques)
- Update README.md (#1, opened 9 months ago by alexmarques)
- Update README.md (#1, opened 9 months ago by alexmarques)
- Update README.md (#1, opened 9 months ago by alexmarques)
- Update README.md (#1, opened 9 months ago by alexmarques)
- Update README.md (#1, opened 9 months ago by alexmarques)