---
model-index:
- name: CodeLlama-7b
  results:
  - task:
      type: code-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1 (BASELINE)
      value: 0.3048780487804878
    - type: pass@1 (BASIC)
      value: 0.3170731707317073
---
This is a d-Matrix functional reference of the CODELLAMA-7B model.
The reference provides the following functional *configurations*:
  Configuration | Explanation
  :-- | :-- 
  **`BASELINE`** | a reference functionally equivalent to the original model
  **`BASIC`** | all linear algebraic operands quantized to `BFP16-64`, and all other operations transformed to approximated kernel simulations
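
As an illustrative sketch only: a block floating-point format like `BFP16-64` shares one exponent across a block of 64 values while each value keeps a short mantissa. The toy quantizer below demonstrates that idea with an assumed 8-bit mantissa; it is not d-Matrix's actual kernel or numerics.

```python
import numpy as np

def quantize_block_fp(x: np.ndarray, block_size: int = 64, mantissa_bits: int = 8) -> np.ndarray:
    """Toy block floating-point quantizer: each block of `block_size`
    values shares one exponent; mantissas are rounded to `mantissa_bits`.
    Illustrative only -- not the d-Matrix implementation."""
    flat = x.flatten()
    pad = (-len(flat)) % block_size          # zero-pad so length divides evenly
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)
    # Shared exponent per block, taken from the largest magnitude in the block
    max_mag = np.abs(blocks).max(axis=1, keepdims=True)
    max_mag[max_mag == 0] = 1.0              # avoid log2(0) for all-zero blocks
    exp = np.floor(np.log2(max_mag))
    scale = 2.0 ** (exp - (mantissa_bits - 1))
    q = np.round(blocks / scale) * scale     # round mantissas to the shared grid
    return q.flatten()[: x.size].reshape(x.shape)
```

The per-element error is bounded by half the block's quantization step, so values in a block with large dynamic range lose the most precision, which is the characteristic trade-off of block floating point.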


### Usage

Install the d-Matrix [Dmx_Compressor](https://github.com/d-matrix-ai/dmx-compressor) library first:
```sh
pip install dmx_compressor
```

The following is an example of evaluating the model with the [bigcode-evaluation-harness](https://github.com/bigcode-project/bigcode-evaluation-harness). First, install the harness:

```sh
git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness
pip install .
```

```python
import torch
from transformers import pipeline

from dmx.compressor.modeling import DmxModel
from bigcode_eval.evaluator import Evaluator

pipe = pipeline(
    task="text-generation",
    model="d-matrix/CodeLlama-7b",
    trust_remote_code=True,
)

# Transform the model with DMX
model = DmxModel.from_torch(pipe.model).to_basic_model()  # Using BASIC configuration

max_length = 512  # Assign desired max length of generation
model(torch.randint(1, 100, (1, max_length)))  # Dummy forward pass to finalize the transformation

# accelerator and eval_args are set up as in the bigcode-evaluation-harness
evaluator = Evaluator(accelerator, model, pipe.tokenizer, eval_args)

eval_results = evaluator.evaluate(task)  # Assign desired task, e.g. "humaneval"
```
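
The harness reports pass@1 on HumanEval, as listed in the metadata above. For reference, the standard unbiased pass@k estimator from the HumanEval paper can be sketched as below; the example numbers are illustrative, though the reported BASELINE value of 0.3049 is consistent with 50 of HumanEval's 164 problems solved at one sample per problem.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, passes the tests."""
    if n - c < k:
        return 1.0  # not enough incorrect samples to fill all k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 10 samples per problem and c = 3 correct, pass@1 reduces to c/n
score = pass_at_k(10, 3, 1)
```

Per-problem scores are averaged over the benchmark to produce the single reported value.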