
Quantization by Richard Erkhov.

  • Github
  • Discord
  • Request more models

patent-evol-merge - bnb 8bits
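
This repo holds an 8-bit bitsandbytes quantization of the merged model. Below is a minimal loading sketch with transformers; the repo id is a hypothetical placeholder (not stated in the card), and bitsandbytes plus accelerate are assumed to be installed.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/patent-evol-merge-8bits"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The bitsandbytes quantization config usually ships inside the checkpoint's
# config.json, so loading needs no extra quantization arguments.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "A patent claim describes"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))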

Original model description:

base_model: []
library_name: transformers
tags:
- mergekit
- merge

best_merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the task arithmetic merge method, with /root/evol_merge_storage/input_models/Llama-2-7B-fp16_227878287 as the base model.
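
Task arithmetic builds the merged weights as the base weights plus a weighted sum of "task vectors", the per-model deltas from the base. A minimal sketch of the idea on a single tensor (illustrative code, not mergekit's implementation; the weights are the layer 8-16 values from the configuration below):

import torch

def task_arithmetic(base: torch.Tensor,
                    finetuned: list[torch.Tensor],
                    weights: list[float]) -> torch.Tensor:
    # merged = base + sum_i w_i * (theta_i - base)
    merged = base.clone()
    for theta, w in zip(finetuned, weights):
        merged += w * (theta - base)  # add the scaled task vector
    return merged

base = torch.zeros(4)                        # toy base tensor
models = [torch.randn(4) for _ in range(3)]  # toy fine-tuned tensors
weights = [0.4744102649337617, 0.00040582065232951103, 1.1436607369426315]
print(task_arithmetic(base, models, weights))

With normalize: 0.0 in the configuration, the weights are applied as-is rather than rescaled to sum to one, which is why a weight above 1.0 (as in the 8-16 slice) is possible.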

Models Merged

The following models were included in the merge:

  • /root/evol_merge_storage/input_models/Patent-Instruct-7b_60368649
  • /root/evol_merge_storage/input_models/Orca-2-7b_2312263870
  • /root/evol_merge_storage/input_models/Barcenas-Orca-2-7b_1478912867

Configuration

The following YAML configuration was used to produce this model:

base_model: /root/evol_merge_storage/input_models/Llama-2-7B-fp16_227878287
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 8]
    model: /root/evol_merge_storage/input_models/Patent-Instruct-7b_60368649
    parameters:
      weight: 0.12964183139810131
  - layer_range: [0, 8]
    model: /root/evol_merge_storage/input_models/Barcenas-Orca-2-7b_1478912867
    parameters:
      weight: 0.6876744008045087
  - layer_range: [0, 8]
    model: /root/evol_merge_storage/input_models/Orca-2-7b_2312263870
    parameters:
      weight: 0.04984086375373306
  - layer_range: [0, 8]
    model: /root/evol_merge_storage/input_models/Llama-2-7B-fp16_227878287
- sources:
  - layer_range: [8, 16]
    model: /root/evol_merge_storage/input_models/Patent-Instruct-7b_60368649
    parameters:
      weight: 0.4744102649337617
  - layer_range: [8, 16]
    model: /root/evol_merge_storage/input_models/Barcenas-Orca-2-7b_1478912867
    parameters:
      weight: 0.00040582065232951103
  - layer_range: [8, 16]
    model: /root/evol_merge_storage/input_models/Orca-2-7b_2312263870
    parameters:
      weight: 1.1436607369426315
  - layer_range: [8, 16]
    model: /root/evol_merge_storage/input_models/Llama-2-7B-fp16_227878287
- sources:
  - layer_range: [16, 24]
    model: /root/evol_merge_storage/input_models/Patent-Instruct-7b_60368649
    parameters:
      weight: 0.3615157971780197
  - layer_range: [16, 24]
    model: /root/evol_merge_storage/input_models/Barcenas-Orca-2-7b_1478912867
    parameters:
      weight: 0.11547324542144169
  - layer_range: [16, 24]
    model: /root/evol_merge_storage/input_models/Orca-2-7b_2312263870
    parameters:
      weight: 0.773494346001556
  - layer_range: [16, 24]
    model: /root/evol_merge_storage/input_models/Llama-2-7B-fp16_227878287
- sources:
  - layer_range: [24, 32]
    model: /root/evol_merge_storage/input_models/Patent-Instruct-7b_60368649
    parameters:
      weight: 0.027506217667945004
  - layer_range: [24, 32]
    model: /root/evol_merge_storage/input_models/Barcenas-Orca-2-7b_1478912867
    parameters:
      weight: 0.4112043376249425
  - layer_range: [24, 32]
    model: /root/evol_merge_storage/input_models/Orca-2-7b_2312263870
    parameters:
      weight: 0.48967743702922145
  - layer_range: [24, 32]
    model: /root/evol_merge_storage/input_models/Llama-2-7B-fp16_227878287
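
Saved as config.yaml, the configuration above can be re-run with mergekit, either through the mergekit-yaml CLI (mergekit-yaml config.yaml ./merged-model) or its Python API. A hedged sketch of the latter, assuming the input model paths exist locally; the output path is a placeholder:

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",              # placeholder output directory
    options=MergeOptions(cuda=True),
)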