Attention graph features extracted from LMs fine-tuned on linguistic acceptability corpora
Irina Proskurina (iproskurina)
AI & ML interests: LLMs (quantization, pre-training)
Collections: 3
Models: 38
iproskurina/Mistral-7B-v0.3-gptq-3bit • Text Generation • Updated • 18
iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 • Text Generation • Updated • 72
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 • Text Generation • Updated • 135
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 • Text Generation • Updated • 27
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64 • Text Generation • Updated • 24
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128 • Text Generation • Updated • 24
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128 • Text Generation • Updated • 22
iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128 • Text Generation • Updated • 23
iproskurina/opt-13b-GPTQ-4bit-g128 • Text Generation • Updated • 197
iproskurina/opt-2.7b-GPTQ-4bit-g128 • Text Generation • Updated • 55