
# BigWeave v6 90B

A Goliath-120b style frankenmerge of Xwin-LM-70b-v0.1 and Euryale-1.3-70b. The goal is to find other merge combinations that work well.

The version number is only for me to keep track of the merges; only results that seem to work reasonably well are kept and published.

## Prompting Format

Vicuna and Alpaca.
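As a rough illustration, the two named formats typically look like the sketch below. The system prompts and exact spacing are assumptions (they vary between fine-tunes), so verify against the upstream Xwin-LM and Euryale model cards before relying on them:

```python
# Hypothetical illustration of the Vicuna and Alpaca prompt templates.
# Exact system prompts and whitespace vary between fine-tunes.

def vicuna_prompt(user_message: str) -> str:
    """Vicuna-style prompt: a system line followed by USER/ASSISTANT turns."""
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    return f"{system} USER: {user_message} ASSISTANT:"

def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style prompt: '### Instruction' / '### Response' sections."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(vicuna_prompt("Hello!"))
print(alpaca_prompt("Summarize this paragraph."))
```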

## Merge process

The models used in the merge are Xwin-LM-70b-v0.1 and Euryale-1.3-70b.

The layer mix:

- range 0–12: Xwin
- range 9–14: Euryale
- range 12–62: Xwin
- range 54–71: Euryale
- range 62–80: Xwin
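Assuming the ranges above are half-open (start inclusive, end exclusive), the slices can be tallied to sanity-check the reported 87.8B parameter count. The Llama-2-70B dimensions below are the standard ones, but the half-open interpretation is an assumption:

```python
# Sketch: tally the frankenmerge slices and estimate the parameter count.
# Assumes half-open ranges [start, end) and Llama-2-70B architecture
# (80 layers, hidden 8192, intermediate 28672, GQA with 8 KV heads of
# dim 128, vocab 32000). A sanity check, not an exact count.

slices = [
    ("Xwin",     0, 12),
    ("Euryale",  9, 14),
    ("Xwin",    12, 62),
    ("Euryale", 54, 71),
    ("Xwin",    62, 80),
]

total_layers = sum(end - start for _, start, end in slices)

hidden, inter, kv_dim, vocab = 8192, 28672, 8 * 128, 32000
attn = 2 * hidden * hidden + 2 * hidden * kv_dim  # q/o + k/v projections
mlp = 3 * hidden * inter                          # gate, up, down
norms = 2 * hidden                                # two RMSNorms per layer
per_layer = attn + mlp + norms

# Layers plus input embedding, LM head, and final norm.
params = total_layers * per_layer + 2 * vocab * hidden + hidden
print(total_layers)            # 102
print(round(params / 1e9, 1))  # 87.8
```

The estimate lands on the published 87.8B figure, which supports the half-open reading of the ranges.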

## Acknowledgements

@Xwin-LM for creating Xwin.

@Sao10K for creating Euryale.

@alpindale for creating the original Goliath.

@chargoddard for developing mergekit.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 67.47 |
| AI2 Reasoning Challenge (25-shot) | 65.36 |
| HellaSwag (10-shot) | 87.21 |
| MMLU (5-shot) | 68.04 |
| TruthfulQA (0-shot) | 57.96 |
| Winogrande (5-shot) | 81.69 |
| GSM8k (5-shot) | 44.58 |
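The reported average is simply the mean of the six benchmark scores, which can be verified directly:

```python
# Verify that the reported leaderboard average is the mean of the six scores.
scores = {
    "ARC (25-shot)":       65.36,
    "HellaSwag (10-shot)": 87.21,
    "MMLU (5-shot)":       68.04,
    "TruthfulQA (0-shot)": 57.96,
    "Winogrande (5-shot)": 81.69,
    "GSM8k (5-shot)":      44.58,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 67.47
```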
Model size: 87.8B params (Safetensors, FP16)
