---
license: apache-2.0
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---

SynapseLLM:

SynapseLLM, developed by WebraftAI, is a series of large language models designed to build robust, generalized, and decentralized information systems. This repository houses the SynapseLLM finetune of Mistral. The model was finetuned on a custom dataset, limited in scope, focused on code and general question-answering scenarios. This adaptation demonstrates the model's versatility and applicability within specific domains.

Model Details

SynapseLLM:

  • Parameters: 7B
  • Learning rate: 2e-4
  • Adapter used: QLoRA
  • Precision: float16
  • Batch size: 32
  • Maximum gradient norm: 0.3
  • Optimizer: paged_adamw_32bit
  • Warmup Ratio: 0.03
  • Steps trained: 100
  • Epochs trained: 1
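
The hyperparameters above map directly onto a standard QLoRA finetuning setup. Below is a minimal sketch using transformers, peft, and trl (pre-1.0 SFTTrainer API); the dataset file, LoRA rank, and target modules are illustrative assumptions, not values taken from this card.

```python
# Hedged sketch of the listed training configuration; not the exact script used.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base_model = "mistralai/Mistral-7B-v0.1"

# 4-bit quantization for QLoRA (NF4 is the common choice; not stated in the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # Precision: float16
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA adapter config; r, alpha, and target_modules are assumptions.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Values taken directly from the Model Details list above.
args = TrainingArguments(
    output_dir="synapsellm-7b-mistral",
    per_device_train_batch_size=32,  # Batch size: 32
    learning_rate=2e-4,              # Learning rate: 2e-4
    max_grad_norm=0.3,               # Maximum gradient norm: 0.3
    optim="paged_adamw_32bit",       # Optimizer: paged_adamw_32bit
    warmup_ratio=0.03,               # Warmup ratio: 0.03
    max_steps=100,                   # Steps trained: 100 (~1 epoch per the card)
    fp16=True,
)

# Placeholder dataset path; the actual 140k-row dataset is not published here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumes a "text" column in the dataset
)
trainer.train()
```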

Model Description

This is a 7B-parameter, decoder-only, transformer-based model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 140k rows: 73k code and 67k general Q/A (generated via GPT-4). The trained adapters have been merged into the base model, so you can load it directly through transformers.
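
Because the adapters are already merged, loading follows the standard transformers pattern. A minimal sketch (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WebraftAI/synapsellm-7b-mistral-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card ships FP16 tensors
    device_map="auto",
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```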

  • Developed by: WebraftAI
  • Funded by: Webraft Cloud
  • Shared by: WebraftAI
  • Model type: Decoder-only Transformer
  • Language(s): English only
  • License: Apache 2.0
  • Finetuned from model: Mistral-7B-v0.1