matlok's Collections
Papers - Fine-tuning - LoRA
Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning (arXiv:2310.20587)
MedAlpaca -- An Open-Source Collection of Medical Conversational AI Models and Training Data (arXiv:2304.08247)
S-LoRA: Serving Thousands of Concurrent LoRA Adapters (arXiv:2311.03285)
WavLLM: Towards Robust and Adaptive Speech Large Language Model (arXiv:2404.00656)
OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data (arXiv:2404.12195)
OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework (arXiv:2404.14619)
Stylus: Automatic Adapter Selection for Diffusion Models (arXiv:2404.18928)
LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report (arXiv:2405.00732)
In-context Vectors: Making In-Context Learning More Effective and Controllable Through Latent Space Steering (arXiv:2311.06668)
A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA (arXiv:2312.03732)
CLEAR: Character Unlearning in Textual and Visual Modalities (arXiv:2410.18057)
LoRA vs Full Fine-tuning: An Illusion of Equivalence (arXiv:2410.21228)
Physics of Language Models: Part 2.2, How to Learn From Mistakes on Grade-School Math Problems (arXiv:2408.16293)
AnglE-optimized Text Embeddings (arXiv:2309.12871)
No More Adam: Learning Rate Scaling at Initialization is All You Need (arXiv:2412.11768)