---
license: cc-by-nc-4.0
pretty_name: C
extra_gated_fields:
  Full Name: text
  Affiliation (Organization/University): text
  Designation/Status in Your Organization: text
  Country: country
  I want to use this model for (please provide the reason(s)): text
  LoRMA model is free for research use but NOT for commercial use; do you agree if you are provided with the LoRMA model, you will NOT use for any commercial purposes: checkbox
---
# LoRMA: Low-Rank Multiplicative Adaptation for LLMs
[![GitHub](https://img.shields.io/badge/GitHub-LoRMA-green?logo=github&style=flat-square)](https://github.com/Exploration-Lab/LoRMA)
[![GitHub](https://img.shields.io/badge/Webpage-LoRMA-yellow?style=flat-square)](https://exploration-lab.github.io/LoRMA/)
[![Arxiv](https://img.shields.io/badge/Arxiv-Paper-red?logo=arxiv&style=flat-square)](http://arxiv.org/abs/2506.07621)

**Title**

LoRMA: Low-Rank Multiplicative Adaptation for LLMs

**Abstract**

Large Language Models have shown remarkable capabilities in the NLP domain. Their effectiveness can mainly be attributed to their ability to adapt to an array of downstream tasks. However, full fine-tuning is generally computationally expensive. To mitigate this, many techniques have been developed that prioritize efficiency, a prominent one being Low-Rank Adaptation (LoRA). However, LoRA and its variants employ re-parametrized additive updates. In this paper, we propose Low-Rank Multiplicative Adaptation (LoRMA), which shifts the paradigm of additive updates to a richer space of matrix multiplicative transformations. We tackle challenges such as the computational complexity and rank bottleneck of matrix multiplication by effectively re-ordering operations and introducing rank inflation strategies. We conduct extensive experiments to demonstrate the effectiveness of our approach in terms of various evaluation metrics.
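To illustrate the core idea, here is a minimal NumPy sketch contrasting an additive (LoRA-style) update with a multiplicative one, including the operation re-ordering trick mentioned in the abstract. All variable names and the identity-plus-low-rank parameterization are illustrative assumptions, not the paper's exact implementation; see the repository for the real code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                       # hidden size d, low rank r (r << d)

W = rng.standard_normal((d, d))    # frozen pretrained weight
B = rng.standard_normal((d, r)) * 0.01   # trainable low-rank factors
A = rng.standard_normal((r, d)) * 0.01

x = rng.standard_normal(d)

# LoRA-style additive update: (W + BA) x
h_additive = W @ x + B @ (A @ x)

# Multiplicative update with an identity anchor: ((I + BA) W) x.
# Re-ordering avoids materializing the d x d product (I + BA) W:
#   ((I + BA) W) x = W x + B (A (W x))
Wx = W @ x
h_multiplicative = Wx + B @ (A @ Wx)

# Naive materialization agrees with the re-ordered computation
M = np.eye(d) + B @ A
assert np.allclose(M @ W @ x, h_multiplicative)
```

The re-ordering keeps the per-token cost at O(dr) extra work instead of the O(d²r) needed to form the full product, which is the kind of efficiency concern the abstract refers to.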

**For more details:**
- [GitHub Repository](https://github.com/Exploration-Lab/LoRMA)
- [Webpage](https://exploration-lab.github.io/LoRMA/)
- [Paper](https://arxiv.org/abs/2506.07621)