Dataset columns:

| Column | Type | Values / range |
| ------ | ---- | -------------- |
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | n/a |
| tags | listlengths | 1–1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25–25 |
| arxiv | listlengths | 0–201 |
| languages | listlengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | listlengths | 0–722 |
| processed_texts | listlengths | 1–723 |
text-generation
|
transformers
|
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Solar 10.7B Instruct v1.0 - GPTQ
- Model creator: [upstage](https://huggingface.co/upstage)
- Original model: [Solar 10.7B Instruct v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
<!-- description start -->
# Description
This repo contains GPTQ model files for [upstage's Solar 10.7B Instruct v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF)
* [upstage's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines
```
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Larger group sizes use less VRAM but give lower quantisation accuracy; "None" (no grouping) uses the least VRAM and has the lowest accuracy.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
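
To make the table concrete, the sketch below shows roughly how the Bits, GS, Act Order and Damp % columns correspond to fields in AutoGPTQ's `BaseQuantizeConfig`. It is an illustration only, not the exact script used to produce these files:

```python
# Rough mapping of the table's parameters onto an AutoGPTQ quantisation config.
# Illustration only - not the actual quantisation script used for these quants.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,            # "Bits" column
    group_size=128,    # "GS" column; -1 corresponds to the "None" rows
    desc_act=True,     # "Act Order" column (also known as desc_act)
    damp_percent=0.1,  # "Damp %" column
)
```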
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `SOLAR-10.7B-Instruct-v1.0-GPTQ`:
```shell
mkdir SOLAR-10.7B-Instruct-v1.0-GPTQ
huggingface-cli download TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ --local-dir SOLAR-10.7B-Instruct-v1.0-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir SOLAR-10.7B-Instruct-v1.0-GPTQ
huggingface-cli download TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir SOLAR-10.7B-Instruct-v1.0-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list this as the default option, is that the files are then hidden away in a cache folder, making it harder to know where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir SOLAR-10.7B-Instruct-v1.0-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ --local-dir SOLAR-10.7B-Instruct-v1.0-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
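
If you prefer to script the download rather than use the CLI, the same options are available from Python via `huggingface_hub.snapshot_download`. A minimal sketch, using the example repo and branch from above:

```python
# Python equivalent of the huggingface-cli commands above (minimal sketch).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # omit to download the main branch
    local_dir="SOLAR-10.7B-Instruct-v1.0-GPTQ",
    local_dir_use_symlinks=False,            # same effect as --local-dir-use-symlinks False
)
```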
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, since it has to store the model files twice (every byte is stored both in the intended target folder and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SOLAR-10.7B-Instruct-v1.0-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json` (a short Python sketch for inspecting that file follows this list).
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
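
If you are curious which GPTQ parameters a given branch was made with, you can fetch and print its `quantize_config.json` directly. A small sketch using `huggingface_hub` (the branch name is just an example):

```python
# Inspect the quantisation parameters baked into a branch (sketch).
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    "TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ",
    "quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",
)
with open(path) as f:
    print(json.load(f))
```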
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''### User:
{prompt}
### Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''### User:
{prompt}
### Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
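
As an illustration of the direct AutoGPTQ route, loading might look roughly like the sketch below. This is an assumption based on AutoGPTQ's `from_quantized` API rather than a tested recipe for this repo; the Transformers example above remains the recommended path.

```python
# Rough sketch: loading the GPTQ files directly with AutoGPTQ instead of Transformers.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=None,  # read from the repo's quantize_config.json
)
```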
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: upstage's Solar 10.7B Instruct v1.0
# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**
**(This model is a version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned for single-turn conversation. A detailed description will be added.)**
# **Introduction**
We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It is compact yet remarkably powerful, and demonstrates state-of-the-art performance among models with fewer than 30B parameters.
We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.
Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table ([link to be updated soon]).
Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements. [[link to be updated soon]]
# **Usage Instructions**
This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat.
### **Version**
Make sure you have the correct version of the transformers library installed:
```sh
pip install transformers==4.35.2
```
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
"Upstage/SOLAR-10.7B-Instruct-v1.0",
device_map="auto",
torch_dtype=torch.float16,
)
```
### **Conducting Single-Turn Conversation**
```python
conversation = [ {'role': 'user', 'content': 'Hello?'} ]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
Below is an example of the output.
```
<s> ### User:
Hello?
### Assistant:
Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
```
### **The Upstage AI Team** ###
Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai
### **Contact Us** ###
For any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [contact@upstage.ai](mailto:contact@upstage.ai).
|
{"license": "apache-2.0", "model_name": "Solar 10.7B Instruct v1.0", "base_model": "upstage/SOLAR-10.7B-Instruct-v1.0", "inference": false, "model_creator": "upstage", "model_type": "solar", "prompt_template": "### User:\n{prompt}\n\n### Assistant:\n", "quantized_by": "TheBloke"}
|
dmanary-pronavigator/SOLAR-10.7B-Instruct-v1.0-gptq-8bit-1g-actorder_True
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null |
2024-04-12T16:46:59+00:00
|
[] |
[] |
text-generation
|
transformers
|
# RogerMerge-12B-MoE
RogerMerge-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [allknowingroger/PercivalMelodias-7B-slerp](https://huggingface.co/allknowingroger/PercivalMelodias-7B-slerp)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiverseEx26-7B-slerp
experts:
- source_model: allknowingroger/MultiverseEx26-7B-slerp
positive_prompts: ["what"]
- source_model: allknowingroger/PercivalMelodias-7B-slerp
positive_prompts: ["why "]
```
## 💻 Usage
```python
# Install dependencies first (e.g. in a notebook): pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/RogerMerge-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "allknowingroger/PercivalMelodias-7B-slerp"], "base_model": ["allknowingroger/MultiverseEx26-7B-slerp", "allknowingroger/PercivalMelodias-7B-slerp"]}
|
allknowingroger/RogerMerge-12B-MoE
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"allknowingroger/PercivalMelodias-7B-slerp",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:allknowingroger/PercivalMelodias-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T16:47:15+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #allknowingroger/PercivalMelodias-7B-slerp #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-allknowingroger/PercivalMelodias-7B-slerp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# RogerMerge-12B-MoE
RogerMerge-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* allknowingroger/MultiverseEx26-7B-slerp
* allknowingroger/PercivalMelodias-7B-slerp
## Configuration
## Usage
|
[
"# RogerMerge-12B-MoE\n\nRogerMerge-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* allknowingroger/PercivalMelodias-7B-slerp",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #allknowingroger/PercivalMelodias-7B-slerp #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-allknowingroger/PercivalMelodias-7B-slerp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# RogerMerge-12B-MoE\n\nRogerMerge-12B-MoE is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* allknowingroger/PercivalMelodias-7B-slerp",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
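A minimal loading sketch, assuming this checkpoint loads with the standard 🤗 Transformers causal-LM classes (the repository id comes from this listing; the prompt and generation settings are illustrative):

```python
# Minimal loading sketch (standard causal-LM loading is an assumption for this checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skitzobaka/SFT_Finetuned_Gemma_2B"  # repository id from this listing

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires accelerate

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```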
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
skitzobaka/SFT_Finetuned_Gemma_2B
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T16:49:03+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
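A minimal loading sketch; the task head is not documented in this card, so `AutoModelForCausalLM` is an assumption based on the repository name, and the prompt is illustrative:

```python
# Minimal loading sketch (causal-LM head assumed from the repository name).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Anaphase21/bloom_for_zulu"  # repository id from this listing

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Sawubona", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```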
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Anaphase21/bloom_for_zulu
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T16:49:07+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# NeuralSynthesis-7b-v0.4-slerp
NeuralSynthesis-7b-v0.4-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/MultiverseEx26-7B-slerp
layer_range: [0, 32]
- model: Kukedlc/NeuralSynthesis-7B-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuralSynthesis-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install runtime dependencies.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuralSynthesis-7b-v0.4-slerp"

# Format the request with the model's chat template.
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline in fp16, placed automatically on the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "Kukedlc/NeuralSynthesis-7B-v0.1"], "base_model": ["allknowingroger/MultiverseEx26-7B-slerp", "Kukedlc/NeuralSynthesis-7B-v0.1"]}
|
Kukedlc/NeuralSynthesis-7b-v0.4-slerp
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"Kukedlc/NeuralSynthesis-7B-v0.1",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:Kukedlc/NeuralSynthesis-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T16:49:09+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #Kukedlc/NeuralSynthesis-7B-v0.1 #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-Kukedlc/NeuralSynthesis-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# NeuralSynthesis-7b-v0.4-slerp
NeuralSynthesis-7b-v0.4-slerp is a merge of the following models using LazyMergekit:
* allknowingroger/MultiverseEx26-7B-slerp
* Kukedlc/NeuralSynthesis-7B-v0.1
## Configuration
## Usage
|
[
"# NeuralSynthesis-7b-v0.4-slerp\n\nNeuralSynthesis-7b-v0.4-slerp is a merge of the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* Kukedlc/NeuralSynthesis-7B-v0.1",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #Kukedlc/NeuralSynthesis-7B-v0.1 #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-Kukedlc/NeuralSynthesis-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NeuralSynthesis-7b-v0.4-slerp\n\nNeuralSynthesis-7b-v0.4-slerp is a merge of the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* Kukedlc/NeuralSynthesis-7B-v0.1",
"## Configuration",
"## Usage"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-intro-disc-PLOS
This model was trained from scratch on an unknown dataset.
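A minimal inference sketch, assuming the standard 🤗 Transformers summarization pipeline applies to this LED-based checkpoint (the input text and length settings are illustrative):

```python
# Minimal inference sketch (standard summarization pipeline usage is an assumption for this checkpoint).
from transformers import pipeline

summarizer = pipeline("summarization", model="dtorber/BioNLP-intro-disc-PLOS")

article = "Paste the introduction/discussion text of a PLOS article here."
print(summarizer(article, max_length=150, do_sample=False)[0]["summary_text"])
```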
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "BioNLP-intro-disc-PLOS", "results": []}]}
|
dtorber/BioNLP-intro-disc-PLOS
| null |
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T16:50:01+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# BioNLP-intro-disc-PLOS
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"# BioNLP-intro-disc-PLOS\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# BioNLP-intro-disc-PLOS\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# violence-audio-Recognition-666
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- Accuracy: 0.9646
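A minimal inference sketch, assuming the standard 🤗 Transformers audio-classification pipeline applies to this fine-tuned wav2vec2 checkpoint (the audio file path is illustrative):

```python
# Minimal inference sketch (standard audio-classification pipeline usage is an assumption for this checkpoint).
from transformers import pipeline

classifier = pipeline("audio-classification", model="Hemg/violence-audio-Recognition-666")

# Path to a local audio file; the pipeline's feature extractor handles resampling to 16 kHz.
print(classifier("example.wav", top_k=2))
```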
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 8
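For reference, a hedged sketch of the hyperparameters above expressed as 🤗 `TrainingArguments`; the output directory name and anything not listed above are assumptions:

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# Values come from the list above; the output_dir name is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-violence",        # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,          # effective train batch size: 16 * 4 = 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    num_train_epochs=8,
)
```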
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4755 | 0.99 | 61 | 0.3227 | 0.8715 |
| 0.2665 | 1.99 | 123 | 0.2088 | 0.9322 |
| 0.1808 | 3.0 | 185 | 0.1783 | 0.9474 |
| 0.1505 | 4.0 | 247 | 0.1528 | 0.9504 |
| 0.1158 | 4.99 | 308 | 0.1260 | 0.9615 |
| 0.0928 | 5.99 | 370 | 0.1302 | 0.9656 |
| 0.0792 | 7.0 | 432 | 0.1327 | 0.9626 |
| 0.0707 | 7.9 | 488 | 0.1258 | 0.9646 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["audiofolder"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "violence-audio-Recognition-666", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "audiofolder", "type": "audiofolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9645748987854251, "name": "Accuracy"}]}]}]}
|
Hemg/violence-audio-Recognition-666
| null |
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T16:53:31+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-audiofolder #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
violence-audio-Recognition-666
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the audiofolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1258
* Accuracy: 0.9646
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.01
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-audiofolder #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
This model uses 16 transformer layers.
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/Mixtral_AI_MiniTron
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
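A minimal inference sketch, assuming the Unsloth `FastLanguageModel` loader used in the standard Unsloth notebooks applies to this checkpoint (sequence length, 4-bit loading, and the prompt are assumptions); plain 🤗 Transformers causal-LM loading should work as well:

```python
# Hedged sketch: loading for inference with Unsloth's FastLanguageModel
# (mirrors the usual Unsloth notebook pattern; 4-bit loading is optional and assumed here).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_MiniTron_3.75b",
    max_seq_length=2048,   # assumed context length
    load_in_4bit=True,     # optional 4-bit loading via bitsandbytes
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("What is knowledge distillation?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```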
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "LeroyDyer/Mixtral_AI_MiniTron"}
|
LeroyDyer/Mixtral_AI_MiniTron_3.75b
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:LeroyDyer/Mixtral_AI_MiniTron",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T16:53:33+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-LeroyDyer/Mixtral_AI_MiniTron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
16 layers
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-LeroyDyer/Mixtral_AI_MiniTron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
 browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Dhara3078/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
|
Dhara3078/ppo-Huggy
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null |
2024-04-12T16:53:46+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how works ML-Agents:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: Dhara3078/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
[
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Dhara3078/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Dhara3078/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-ner
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1892
- Overall Precision: 0.4664
- Overall Recall: 0.5553
- Overall F1: 0.5070
- Overall Accuracy: 0.9572
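A minimal inference sketch, assuming the standard 🤗 Transformers token-classification pipeline applies to this BioGPT-based checkpoint (the example sentence and aggregation strategy are illustrative):

```python
# Minimal inference sketch (standard token-classification pipeline usage is an assumption for this checkpoint).
from transformers import pipeline

ner = pipeline("token-classification", model="ttlanhhh/biogpt-ner", aggregation_strategy="simple")

print(ner("The patient was treated with metformin for type 2 diabetes."))
```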
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2307 | 1.0 | 1358 | 0.1703 | 0.3849 | 0.3888 | 0.3869 | 0.9497 |
| 0.1415 | 2.0 | 2716 | 0.1589 | 0.3761 | 0.5286 | 0.4395 | 0.9490 |
| 0.0932 | 3.0 | 4074 | 0.1515 | 0.4580 | 0.5197 | 0.4869 | 0.9560 |
| 0.0763 | 4.0 | 5432 | 0.1763 | 0.4885 | 0.5146 | 0.5012 | 0.9583 |
| 0.0586 | 5.0 | 6790 | 0.1892 | 0.4664 | 0.5553 | 0.5070 | 0.9572 |
### Framework versions
- Transformers 4.39.3
- Pytorch 1.12.1+cu113
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/biogpt", "model-index": [{"name": "biogpt-ner", "results": []}]}
|
ttlanhhh/biogpt-ner
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:00:20+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
biogpt-ner
==========
This model is a fine-tuned version of microsoft/biogpt on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1892
* Overall Precision: 0.4664
* Overall Recall: 0.5553
* Overall F1: 0.5070
* Overall Accuracy: 0.9572
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 1.12.1+cu113
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.1+cu113\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.1+cu113\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
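A minimal loading sketch, assuming standard 🤗 Transformers text-generation loading for this Gemma-based checkpoint (the dtype, prompt, and generation settings are illustrative):

```python
# Minimal loading sketch (standard text-generation pipeline usage is an assumption for this checkpoint).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="riyadule/toxic_stack_gemma",  # repository id from this listing
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Write one sentence about online toxicity detection.", max_new_tokens=64)[0]["generated_text"])
```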
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
riyadule/toxic_stack_gemma
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:01:23+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
HenryCai1129/adapter-emo
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:02:07+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image
|
diffusers
|
# LoRA DreamBooth - squaadinc/1712941448370x493394559352216000
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
A photo of TOK
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"

# fp16-friendly SDXL VAE used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to(device)

# This is where you load your trained LoRA weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
    "squaadinc/1712941448370x493394559352216000",
    weight_name=specific_safetensors,
    # use_auth_token=True
)

prompt = "A majestic A photo of TOK jumping from a big stone at night"
image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    cross_attention_kwargs={"scale": lora_scale},  # scales the LoRA contribution
).images[0]
```
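Two details in the snippet above matter most in practice: the trigger keyword `TOK` from the concept prompt must appear in your prompt for the learned concept to activate, and `lora_scale` (passed through `cross_attention_kwargs`) controls how strongly the LoRA weights influence the base SDXL model, with values closer to 1.0 giving a stronger effect.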
|
{"tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora"], "datasets": ["jamine23/jamine23juliavzs"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of TOK", "inference": false}
|
squaadinc/1712941448370x493394559352216000
| null |
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:jamine23/jamine23juliavzs",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null |
2024-04-12T17:04:34+00:00
|
[] |
[] |
TAGS
#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #dataset-jamine23/jamine23juliavzs #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
# LoRA DreamBooth - squaadinc/1712941448370x493394559352216000
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
To just use the base model, you can run:
|
[
"# LoRA DreamBooth - squaadinc/1712941448370x493394559352216000\nThese are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. \nThe weights were trained on the concept prompt: \n \nUse this keyword to trigger your custom model in your prompts. \nLoRA for the text encoder was enabled: False.\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Usage\nMake sure to upgrade diffusers to >= 0.19.0:\n\nIn addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:\n\nTo just use the base model, you can run:"
] |
[
"TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #dataset-jamine23/jamine23juliavzs #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# LoRA DreamBooth - squaadinc/1712941448370x493394559352216000\nThese are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. \nThe weights were trained on the concept prompt: \n \nUse this keyword to trigger your custom model in your prompts. \nLoRA for the text encoder was enabled: False.\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Usage\nMake sure to upgrade diffusers to >= 0.19.0:\n\nIn addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:\n\nTo just use the base model, you can run:"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
abhayesian/BobzillaV11
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:04:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-bass-classifier5
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the bass_design_encoded dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0292
- Accuracy: 0.9982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
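As a hedged illustration only (the original training script is not included in this card), the settings above map roughly onto `transformers` `TrainingArguments`; the `output_dir` below is an assumption:

```python
from transformers import TrainingArguments

# Rough equivalent of the listed hyperparameters; Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are the library defaults, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="distilhubert-bass-classifier5",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```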
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4595 | 1.0 | 1914 | 0.7017 | 0.9218 |
| 0.8718 | 2.0 | 3828 | 0.4075 | 0.9733 |
| 0.0 | 3.0 | 5742 | 0.2594 | 0.9841 |
| 0.0 | 4.0 | 7656 | 0.1175 | 0.9918 |
| 0.0 | 5.0 | 9570 | 0.0862 | 0.9965 |
| 0.0 | 6.0 | 11484 | 0.0947 | 0.9956 |
| 0.6718 | 7.0 | 13398 | 0.3438 | 0.9877 |
| 0.0021 | 8.0 | 15312 | 0.0936 | 0.9953 |
| 0.0 | 9.0 | 17226 | 0.0909 | 0.9956 |
| 0.0 | 10.0 | 19140 | 0.0292 | 0.9982 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
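A minimal usage sketch (not part of the original card, and assuming a local audio file path) for loading the fine-tuned checkpoint with the audio-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="TheDuyx/distilhubert-bass-classifier5",
)

# "bass_sample.wav" is a placeholder path to a local audio file
print(classifier("bass_sample.wav"))
```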
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["TheDuyx/augmented_bass_sounds"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-bass-classifier5", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "bass_design_encoded", "type": "TheDuyx/augmented_bass_sounds"}, "metrics": [{"type": "accuracy", "value": 0.9982363315696648, "name": "Accuracy"}]}]}]}
|
TheDuyx/distilhubert-bass-classifier5
| null |
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:TheDuyx/augmented_bass_sounds",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-12T17:06:57+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #hubert #audio-classification #generated_from_trainer #dataset-TheDuyx/augmented_bass_sounds #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
distilhubert-bass-classifier5
=============================
This model is a fine-tuned version of ntu-spml/distilhubert on the bass\_design\_encoded dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0292
* Accuracy: 0.9982
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.2
* Pytorch 2.2.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #hubert #audio-classification #generated_from_trainer #dataset-TheDuyx/augmented_bass_sounds #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
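Since this section is left as a placeholder, the following is a hedged sketch only: it assumes a standard T5 sequence-to-sequence checkpoint fine-tuned for WMT14 English-to-German translation, as the repository name `Reyansh4/NMT_T5_wmt14_en_to_de` and its `t5` / `text2text-generation` tags suggest, and it uses the conventional T5 task prefix, which may differ from the actual fine-tuning format.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Reyansh4/NMT_T5_wmt14_en_to_de"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Conventional T5 translation prefix (assumed; the fine-tuning prompt format is not documented)
text = "translate English to German: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```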
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "metrics": ["bleu"]}
|
Reyansh4/NMT_T5_wmt14_en_to_de
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:07:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
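As this section is left as a placeholder, the following is a hedged sketch only: it assumes the checkpoint is a standard Llama-style causal language model, as the repository's `llama` / `text-generation` / `conversational` tags suggest.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "OwOpeepeepoopoo/ummm5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain completion shown here; the repo is tagged "conversational", so a chat template may also apply.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```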
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OwOpeepeepoopoo/ummm5
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:08:22+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
bharathirajan89/bharathi_mistral_7b_pulse_tmp_3000_rows_summary_v1
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:08:30+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
Model Card for Fantastica-7b-Instruct-0.2-Italian
# 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹
Fantastica-7b-Instruct-0.2-Italian is an Italian-speaking, instruction-finetuned large language model. 🇮🇹
# COLAB NOTEBOOK (load in 4bit):
https://colab.research.google.com/drive/1txMLI6-rvo2lBmBazsP3-5WgYxVWiKzt?usp=sharing
# Fantastica-7b-Instruct-0.2-Italian's peculiar features:
- Mistral-7B-Instruct-v0.2 as base.
- generation of texts in Italian in the style of 4 great authors of Italian literature: Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.
- generation of film and novel plots in Italian.
- generation of descriptions of paintings in Italian.
- Trained on: selected entries from Wikipedia, a corpus of Italian books from liberliber (https://liberliber.it/), and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.
- Training took 70 hours with Google Colab A100.
- First version of a series of models dedicated to Italian literature.
# Notes on sources:
- the model makes use of literary works no longer covered by copyright, largely derived from the liberliber.it corpus.
- the model uses entries selected from the free Wikipedia encyclopedia.
# PROMPT FORMAT
Your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token \<s\>; the following instructions should not.
\<s\>[INST]{instruction}[/INST]
For few-shot prompts, use:
\<s\>[INST]{instruction}[/INST]{response}[INST]{instruction2}[/INST]
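As an illustration (not part of the original card or notebook), a small Python helper that assembles a prompt in this format from a list of instruction/response turns might look like the sketch below; the example instructions reuse prompts listed further down, and the partial response string is just a placeholder:

```python
def build_prompt(turns):
    """Assemble a prompt from (instruction, response) pairs.

    Only the very first instruction is preceded by the <s> marker, matching the
    format described above. Pass None as the response of the last turn to let
    the model generate the completion.
    """
    prompt = "<s>"
    for instruction, response in turns:
        prompt += f"[INST]{instruction}[/INST]"
        if response is not None:
            prompt += response
    return prompt

# One solved turn followed by the instruction we want the model to complete.
few_shot_prompt = build_prompt([
    ("@Scribis @Italiano. Raccontami la trama di un romanzo.", "### Titolo: ..."),  # placeholder response
    ("@Scribis @Italiano. Descrivi un celebre dipinto.", None),
])
```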
# Python
```python
# For faster text generation: !pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the merged model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("scribis/Fantastica-7b-Instruct-0.2-Italian_merged")
# With accelerate installed, the model can be placed on GPU directly:
# model = AutoModelForCausalLM.from_pretrained("scribis/Fantastica-7b-Instruct-0.2-Italian_merged", device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained("scribis/Fantastica-7b-Instruct-0.2-Italian_merged")

# Text-generation pipeline with the sampling settings suggested for this model.
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=768, temperature=0.75, repetition_penalty=1.2, do_sample=True)

# Prompt in the [INST] ... [/INST] format described above.
prompt = "<s>[INST]@Scribis @Italiano. Raccontami la trama di un romanzo storico.[/INST]### Titolo: Il figlio dell'imperatore. ### Introduzione: Il figlio dell'imperatore (The Son of the Emperor) è un romanzo storico del 1915 scritto da Charles Giblin, ambientato nell\'Antica Roma. ### Trama."
result = pipe(prompt)
print(result)
```
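The Colab notebook linked above loads the model in 4-bit. A minimal sketch of such a load with bitsandbytes is shown below; the quantization settings here are illustrative assumptions, not necessarily the notebook's exact configuration:

```python
# Requires: pip install bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit settings (nf4 quantization, fp16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "scribis/Fantastica-7b-Instruct-0.2-Italian_merged",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("scribis/Fantastica-7b-Instruct-0.2-Italian_merged")
```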
# Examples:
# INSTRUCTION
@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.
# RESPONSE
# temperature:0.6 (Style imitation: low. Content consistency: medium)
Titolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare...
# temperature: 0.7 (Style imitation: medium; Content consistency: low)
Titolo: Viaggio nel regno dominato da mostri e spietati signori\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c
# TRY SETTING DIFFERENT PARAMETERS!
# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:
# Prompt to generate texts in the style of Carlo Collodi
@Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate sentences in the style of Luigi Pirandello
@Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.
# Prompt to generate texts in the style of Luigi Pirandello
@Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate texts in the style of Carlo Dossi
@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate texts in the style of Alessandro Manzoni
@Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate Movie plots (Wikipedia style)
@Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.
@Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.
@Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.
@Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.
# Prompt to generate plots of novels (Wikipedia style)
@Scribis @Italiano. Raccontami la trama di un romanzo.
@Scribis @Italiano. Riassumi la storia di un libro.
@Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.
@Scribis @Italiano. Scrivi la trama di un romanzo.
# Prompt to generate painting descriptions (Wikipedia style)
@Scribis @Italiano. Il tuo compito è descrivere un dipinto.
@Scribis @Italiano. Descrivi un dipinto.
@Scribis @Italiano. Descrivi un celebre dipinto.
@Scribis @Italiano. Descrivi un famoso dipinto.
# Prompt to generate a Greek mythology topic. (Wikipedia style)
@Scribis @Italiano. Parlami di un argomento di mitologia greca.
@Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.
More information is coming soon to help you make the best use of the model.
Democratize AI!
## Limitations
- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
## Model Author:
Fabio Martines <[email protected]>
|
{"language": ["it"], "license": "apache-2.0", "tags": ["Italian", "Mistral", "finetuning", "Text Generation"], "datasets": ["scribis/Wikipedia_it_Trame_Romanzi", "scribis/Corpus-Frasi-da-Opere-Letterarie", "scribis/Wikipedia-it-Trame-di-Film", "scribis/Wikipedia-it-Descrizioni-di-Dipinti", "scribis/Wikipedia-it-Mitologia-Greca"], "pipeline_tag": "text-generation"}
|
scribis/Fantastica-7b-Instruct-0.2-Italian_merged
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Italian",
"Mistral",
"finetuning",
"Text Generation",
"conversational",
"it",
"dataset:scribis/Wikipedia_it_Trame_Romanzi",
"dataset:scribis/Corpus-Frasi-da-Opere-Letterarie",
"dataset:scribis/Wikipedia-it-Trame-di-Film",
"dataset:scribis/Wikipedia-it-Descrizioni-di-Dipinti",
"dataset:scribis/Wikipedia-it-Mitologia-Greca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:09:40+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #safetensors #mistral #text-generation #Italian #Mistral #finetuning #Text Generation #conversational #it #dataset-scribis/Wikipedia_it_Trame_Romanzi #dataset-scribis/Corpus-Frasi-da-Opere-Letterarie #dataset-scribis/Wikipedia-it-Trame-di-Film #dataset-scribis/Wikipedia-it-Descrizioni-di-Dipinti #dataset-scribis/Wikipedia-it-Mitologia-Greca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Model Card for Fantastica-7b-Instruct-0.2-Italian
# 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹
Fantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. 🇮🇹
# COLAB NOTEBOOK (load in 4bit):
URL
# Fantastica-7b-Instruct-0.2-Italian's peculiar features:
- Mistral-7B-Instruct-v0.2 v.02 as base.
- generation of texts in Italian according to the style of 4 great authors of Italian literature. Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.
- generation of film and novel plots in Italian.
- generation of descriptions of paintings in Italian.
- Trained on: selected entries from Wikipedia, corpus of Italian books from liberliber (URL and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi e Luigi Pirandello.
- Training took 70 hours with Google Colab A100.
- First version of a series of models dedicated to Italian literature.
# Notes on sources:
- the model makes use of literary works no longer covered by copyright, largely derived from URL corpus.
- the model uses entries selected from the free Wikipedia encyclopedia.
# PROMPT FORMAT
Your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence \<s\>. The next instructions should not.
\<s\>[INST]{instruction}[/INST]
for few-shots prompt use:
\<s\>[INST]{instruction}[/INST]{response}[INST]{instruction2}[/INST]
# Python
# Examples:
# INSTRUCTION
@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.
# RESPONSE
# temperature:0.6 (Style imitation: low. Content consistency: medium)
Titolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare...
# temperature: 0.7 (Style imitation: medium; Content consistency: low)
Titolo: Viaggio nel regno dominato da mostri e spietati signori\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c
# TRY SETTING DIFFERENT PARAMETERS!
# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:
# Prompt to generate texts in the style of Carlo Collodi
@Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate sentences in the style of Luigi Pirandello
@Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.
# Prompt to generate texts in the style of Luigi Pirandello
@Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate texts in the style of Carlo Dossi
@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate texts in the style of Alessandro Manzoni
@Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate Movie plots (Wikipedia style)
@Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.
@Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.
@Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.
@Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.
# Prompt to generate plots of novels (Wikipedia style)
@Scribis @Italiano. Raccontami la trama di un romanzo.
@Scribis @Italiano. Riassumi la storia di un libro.
@Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.
@Scribis @Italiano. Scrivi la trama di un romanzo.
# Prompt to generate painting descriptions (Wikipedia style)
@Scribis @Italiano. Il tuo compito è descrivere un dipinto.
@Scribis @Italiano. Descrivi un dipinto.
@Scribis @Italiano. Descrivi un celebre dipinto.
@Scribis @Italiano. Descrivi un famoso dipinto.
# Prompt to generate a Greek mythology topic. (Wikipedia style)
@Scribis @Italiano. Parlami di un argomento di mitologia greca.
@Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.
More information coming soon to help you make the best use of the model
Democratize AI!
## Limitations
- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
## Model Author:
Fabio Martines <scribisonline@URL>
|
[
"# 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹 \n\nFantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. 🇮🇹",
"# COLAB NOTEBOOK (load in 4bit):\n\nURL",
"# Fantastica-7b-Instruct-0.2-Italian's peculiar features:\n\n- Mistral-7B-Instruct-v0.2 v.02 as base.\n- generation of texts in Italian according to the style of 4 great authors of Italian literature. Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.\n- generation of film and novel plots in Italian.\n- generation of descriptions of paintings in Italian.\n- Trained on: selected entries from Wikipedia, corpus of Italian books from liberliber (URL and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi e Luigi Pirandello.\n- Training took 70 hours with Google Colab A100.\n- First version of a series of models dedicated to Italian literature.",
"# Notes on sources:\n- the model makes use of literary works no longer covered by copyright, largely derived from URL corpus.\n- the model uses entries selected from the free Wikipedia encyclopedia.",
"# PROMPT FORMAT\n\nYour prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence \\<s\\>. The next instructions should not.\n\n\\<s\\>[INST]{instruction}[/INST]\n\nfor few-shots prompt use:\n\n\\<s\\>[INST]{instruction}[/INST]{response}[INST]{instruction2}[/INST]",
"# Python",
"# Examples:",
"# INSTRUCTION\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.",
"# RESPONSE",
"# temperature:0.6 (Style imitation: low. Content consistency: medium)\n\nTitolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare...",
"# temperature: 0.7 (Style imitation: medium; Content consistency: low)\n\nTitolo: Viaggio nel regno dominato da mostri e spietati signori\\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c",
"# TRY SETTING DIFFERENT PARAMETERS!",
"# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:",
"# Prompt to generate texts in the style of Carlo Collodi\n\n@Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate sentences in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.",
"# Prompt to generate texts in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate texts in the style of Carlo Dossi\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate texts in the style of Alessandro Manzoni\n\n@Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate Movie plots (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.\n@Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.\n@Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.\n@Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.",
"# Prompt to generate plots of novels (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un romanzo.\n@Scribis @Italiano. Riassumi la storia di un libro.\n@Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.\n@Scribis @Italiano. Scrivi la trama di un romanzo.",
"# Prompt to generate painting descriptions (Wikipedia style)\n\t\t \n@Scribis @Italiano. Il tuo compito è descrivere un dipinto.\n@Scribis @Italiano. Descrivi un dipinto.\n@Scribis @Italiano. Descrivi un celebre dipinto.\n@Scribis @Italiano. Descrivi un famoso dipinto.",
"# Prompt to generate a Greek mythology topic. (Wikipedia style)\t\t\n\t\t\t\t\n@Scribis @Italiano. Parlami di un argomento di mitologia greca.\n@Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.\t\t\n\nMore information coming soon to help you make the best use of the model\n\nDemocratize AI!",
"## Limitations\n\n- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.\n- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.",
"## Model Author:\nFabio Martines <scribisonline@URL>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #Italian #Mistral #finetuning #Text Generation #conversational #it #dataset-scribis/Wikipedia_it_Trame_Romanzi #dataset-scribis/Corpus-Frasi-da-Opere-Letterarie #dataset-scribis/Wikipedia-it-Trame-di-Film #dataset-scribis/Wikipedia-it-Descrizioni-di-Dipinti #dataset-scribis/Wikipedia-it-Mitologia-Greca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹 \n\nFantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. 🇮🇹",
"# COLAB NOTEBOOK (load in 4bit):\n\nURL",
"# Fantastica-7b-Instruct-0.2-Italian's peculiar features:\n\n- Mistral-7B-Instruct-v0.2 v.02 as base.\n- generation of texts in Italian according to the style of 4 great authors of Italian literature. Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.\n- generation of film and novel plots in Italian.\n- generation of descriptions of paintings in Italian.\n- Trained on: selected entries from Wikipedia, corpus of Italian books from liberliber (URL and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi e Luigi Pirandello.\n- Training took 70 hours with Google Colab A100.\n- First version of a series of models dedicated to Italian literature.",
"# Notes on sources:\n- the model makes use of literary works no longer covered by copyright, largely derived from URL corpus.\n- the model uses entries selected from the free Wikipedia encyclopedia.",
"# PROMPT FORMAT\n\nYour prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence \\<s\\>. The next instructions should not.\n\n\\<s\\>[INST]{instruction}[/INST]\n\nfor few-shots prompt use:\n\n\\<s\\>[INST]{instruction}[/INST]{response}[INST]{instruction2}[/INST]",
"# Python",
"# Examples:",
"# INSTRUCTION\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.",
"# RESPONSE",
"# temperature:0.6 (Style imitation: low. Content consistency: medium)\n\nTitolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare...",
"# temperature: 0.7 (Style imitation: medium; Content consistency: low)\n\nTitolo: Viaggio nel regno dominato da mostri e spietati signori\\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c",
"# TRY SETTING DIFFERENT PARAMETERS!",
"# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:",
"# Prompt to generate texts in the style of Carlo Collodi\n\n@Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate sentences in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.",
"# Prompt to generate texts in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate texts in the style of Carlo Dossi\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate texts in the style of Alessandro Manzoni\n\n@Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.",
"# Prompt to generate Movie plots (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.\n@Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.\n@Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.\n@Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.",
"# Prompt to generate plots of novels (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un romanzo.\n@Scribis @Italiano. Riassumi la storia di un libro.\n@Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.\n@Scribis @Italiano. Scrivi la trama di un romanzo.",
"# Prompt to generate painting descriptions (Wikipedia style)\n\t\t \n@Scribis @Italiano. Il tuo compito è descrivere un dipinto.\n@Scribis @Italiano. Descrivi un dipinto.\n@Scribis @Italiano. Descrivi un celebre dipinto.\n@Scribis @Italiano. Descrivi un famoso dipinto.",
"# Prompt to generate a Greek mythology topic. (Wikipedia style)\t\t\n\t\t\t\t\n@Scribis @Italiano. Parlami di un argomento di mitologia greca.\n@Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.\t\t\n\nMore information coming soon to help you make the best use of the model\n\nDemocratize AI!",
"## Limitations\n\n- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.\n- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.",
"## Model Author:\nFabio Martines <scribisonline@URL>"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
aekang12/zephyr-7b-beta-Agent-Instruct
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:10:47+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
trl
|
# Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
Description: an experiment with SFT and a new tokenizer configuration for the Mixtral chat template.
It achieves the following results on the evaluation set:
{'eval_loss': 1.02373468875885, 'eval_runtime': 12.0105, 'eval_samples_per_second': 3.83, 'eval_steps_per_second': 0.999, 'epoch': 2.97}
## Intended uses & limitations
This model has not been trained to avoid specific instructions.
## Training procedure
Finetuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:
```
---------------------
System_prompt:
Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
{instructions_formatted}
{context_statement}
Lista de requisitos:
- Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
- Nunca traga informações do seu próprio conhecimento.
- Repito é crucial que você responda usando apenas informações do contexto.
- Nunca mencione o contexto fornecido.
- Nunca mencione a pergunta fornecida.
- Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
- Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.
---------------------
Question:
{question}
---------------------
Response:
{answer}
---------------------
```
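For illustration only (this helper is not part of the released training code), a dataset row could be rendered into the template above roughly as follows; the placeholder names mirror the template, but the actual column names of Weni/wenigpt-agent-1.4.0 may differ:

```python
# Hypothetical rendering helper; field names follow the template placeholders above.
PROMPT_TEMPLATE = (
    "---------------------\n"
    "System_prompt:\n"
    "Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. "
    "O adjetivo que mais define a sua personalidade é {adjective} "
    "e você se comporta da seguinte forma:\n"
    "{instructions_formatted}\n"
    "{context_statement}\n"
    # ... the "Lista de requisitos" block from the template above continues here verbatim ...
    "---------------------\n"
    "Question:\n"
    "{question}\n"
    "---------------------\n"
    "Response:\n"
    "{answer}\n"
    "---------------------"
)

def format_example(example: dict) -> str:
    """Render one training example into the SFT prompt."""
    return PROMPT_TEMPLATE.format(**example)
```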
### Training hyperparameters
The following hyperparameters were used during training (a rough sketch of wiring them into TRL's SFTTrainer follows the list):
- learning_rate: 0.0002
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 4
- num_gpus: 4
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 78
- quantization_type: bitsandbytes
- LoRA:
  - bits: 4
  - use_exllama: True
  - device_map: auto
  - use_cache: False
  - lora_r: 16
  - lora_alpha: 32
  - lora_dropout: 0.05
  - bias: none
  - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
  - task_type: CAUSAL_LM
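A rough sketch of how these settings could be wired together with TRL's SFTTrainer, PEFT and bitsandbytes is shown below. It is illustrative only: the actual training script is not published in this card, and values marked in comments are assumptions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)

# "quantization_type: bitsandbytes" with "bits: 4" from the list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype is an assumption, not stated in the card
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto", use_cache=False
)

# LoRA settings copied from the list above.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="wenigpt-agents-mixtral-sft",  # illustrative output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    max_steps=78,
    optim="adamw_torch",  # AdamW, as listed above
)

# Dataset id taken from the card; access may be restricted.
dataset = load_dataset("Weni/wenigpt-agent-1.4.0")

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    dataset_text_field="text",  # assumes rows carry the rendered prompt (e.g. via the rendering sketch above)
    max_seq_length=2048,        # illustrative; not stated in the card
    peft_config=peft_config,
    tokenizer=tokenizer,
)
trainer.train()
```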
### Training results
### Framework versions
- transformers==4.38.2
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.22.2
- seqeval==1.2.2
- optimum==1.18.1
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.6
- trl==0.8.1
- accelerate==0.29.2
- coloredlogs==15.0.1
- traitlets==5.14.2
- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.4/autoawq-0.2.4+cu118-cp310-cp310-linux_x86_64.whl
### Hardware
- Cloud provider: runpod.io
|
{"language": ["pt"], "license": "mit", "library_name": "trl", "tags": ["SFT", "WeniGPT"], "base_model": "mistralai/Mixtral-8x7B-Instruct-v0.1", "model-index": [{"name": "Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT", "results": []}]}
|
Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT
| null |
[
"trl",
"safetensors",
"SFT",
"WeniGPT",
"pt",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:mit",
"region:us"
] | null |
2024-04-12T17:10:52+00:00
|
[] |
[
"pt"
] |
TAGS
#trl #safetensors #SFT #WeniGPT #pt #base_model-mistralai/Mixtral-8x7B-Instruct-v0.1 #license-mit #region-us
|
# Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1] on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for Weni.
Description: Experiment with SFT and a new tokenizer configuration for chat template of mixtral
It achieves the following results on the evaluation set:
{'eval_loss': 1.02373468875885, 'eval_runtime': 12.0105, 'eval_samples_per_second': 3.83, 'eval_steps_per_second': 0.999, 'epoch': 2.97}
## Intended uses & limitations
This model has not been trained to avoid specific intructions.
## Training procedure
Finetuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- gradient_accumulation_steps: 4
- num_gpus: 4
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 78
- quantization_type: bitsandbytes
- LoRA: ("\n - bits: 4\n - use_exllama: True\n - device_map: auto\n - use_cache: False\n - lora_r: 16\n - lora_alpha: 32\n - lora_dropout: 0.05\n - bias: none\n - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']\n - task_type: CAUSAL_LM",)
### Training results
### Framework versions
- transformers==4.38.2
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.22.2
- seqeval==1.2.2
- optimum==1.18.1
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.6
- trl==0.8.1
- accelerate==0.29.2
- coloredlogs==15.0.1
- traitlets==5.14.2
- autoawq@URL
### Hardware
- Cloud provided: URL
|
[
"# Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT\n\nThis model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1] on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for Weni.\nDescription: Experiment with SFT and a new tokenizer configuration for chat template of mixtral\n\nIt achieves the following results on the evaluation set:\n{'eval_loss': 1.02373468875885, 'eval_runtime': 12.0105, 'eval_samples_per_second': 3.83, 'eval_steps_per_second': 0.999, 'epoch': 2.97}",
"## Intended uses & limitations\n\nThis model has not been trained to avoid specific intructions.",
"## Training procedure\n\nFinetuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- per_device_train_batch_size: 1\n- per_device_eval_batch_size: 1\n- gradient_accumulation_steps: 4\n- num_gpus: 4\n- total_train_batch_size: 16\n- optimizer: AdamW\n- lr_scheduler_type: cosine\n- num_steps: 78\n- quantization_type: bitsandbytes\n- LoRA: (\"\\n - bits: 4\\n - use_exllama: True\\n - device_map: auto\\n - use_cache: False\\n - lora_r: 16\\n - lora_alpha: 32\\n - lora_dropout: 0.05\\n - bias: none\\n - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']\\n - task_type: CAUSAL_LM\",)",
"### Training results",
"### Framework versions\n\n- transformers==4.38.2\n- datasets==2.18.0\n- peft==0.10.0\n- safetensors==0.4.2\n- evaluate==0.4.1\n- bitsandbytes==0.43\n- huggingface_hub==0.22.2\n- seqeval==1.2.2\n- optimum==1.18.1\n- auto-gptq==0.7.1\n- gpustat==1.1.1\n- deepspeed==0.14.0\n- wandb==0.16.6\n- trl==0.8.1\n- accelerate==0.29.2\n- coloredlogs==15.0.1\n- traitlets==5.14.2\n- autoawq@URL",
"### Hardware\n- Cloud provided: URL"
] |
[
"TAGS\n#trl #safetensors #SFT #WeniGPT #pt #base_model-mistralai/Mixtral-8x7B-Instruct-v0.1 #license-mit #region-us \n",
"# Weni/WeniGPT-Agents-Mixtral-1.0.5-SFT\n\nThis model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1] on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for Weni.\nDescription: Experiment with SFT and a new tokenizer configuration for chat template of mixtral\n\nIt achieves the following results on the evaluation set:\n{'eval_loss': 1.02373468875885, 'eval_runtime': 12.0105, 'eval_samples_per_second': 3.83, 'eval_steps_per_second': 0.999, 'epoch': 2.97}",
"## Intended uses & limitations\n\nThis model has not been trained to avoid specific intructions.",
"## Training procedure\n\nFinetuning was done on the model mistralai/Mixtral-8x7B-Instruct-v0.1 with the following prompt:",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- per_device_train_batch_size: 1\n- per_device_eval_batch_size: 1\n- gradient_accumulation_steps: 4\n- num_gpus: 4\n- total_train_batch_size: 16\n- optimizer: AdamW\n- lr_scheduler_type: cosine\n- num_steps: 78\n- quantization_type: bitsandbytes\n- LoRA: (\"\\n - bits: 4\\n - use_exllama: True\\n - device_map: auto\\n - use_cache: False\\n - lora_r: 16\\n - lora_alpha: 32\\n - lora_dropout: 0.05\\n - bias: none\\n - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']\\n - task_type: CAUSAL_LM\",)",
"### Training results",
"### Framework versions\n\n- transformers==4.38.2\n- datasets==2.18.0\n- peft==0.10.0\n- safetensors==0.4.2\n- evaluate==0.4.1\n- bitsandbytes==0.43\n- huggingface_hub==0.22.2\n- seqeval==1.2.2\n- optimum==1.18.1\n- auto-gptq==0.7.1\n- gpustat==1.1.1\n- deepspeed==0.14.0\n- wandb==0.16.6\n- trl==0.8.1\n- accelerate==0.29.2\n- coloredlogs==15.0.1\n- traitlets==5.14.2\n- autoawq@URL",
"### Hardware\n- Cloud provided: URL"
] |
object-detection
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Pollen grain object detection model using a DETR ResNet-50 backbone.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
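Since the official snippet is still marked as missing, the following is a minimal, hedged sketch of how a DETR checkpoint on the Hub is typically loaded with `transformers`; the repository id is taken from this card's metadata, while the image path and the 0.5 score threshold are assumptions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo_id = "Charliesgt/pollen_detr_resnet50_benchmark"  # from the card metadata

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForObjectDetection.from_pretrained(repo_id)

image = Image.open("pollen_sample.jpg")  # assumed local image

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs into (score, label, box) triples above the threshold
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {[round(v, 1) for v in box.tolist()]}")
```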
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
| Metric            | Type | IoU       | Area   | MaxDets | Score |
|-------------------|------|-----------|--------|---------|-------|
| Average Precision | AP   | 0.50:0.95 | all    | 100     | 0.551 |
| Average Precision | AP   | 0.50      | all    | 100     | 0.747 |
| Average Precision | AP   | 0.75      | all    | 100     | 0.632 |
| Average Precision | AP   | 0.50:0.95 | small  | 100     | 0.253 |
| Average Precision | AP   | 0.50:0.95 | medium | 100     | 0.667 |
| Average Precision | AP   | 0.50:0.95 | large  | 100     | 0.787 |
| Average Recall    | AR   | 0.50:0.95 | all    | 1       | 0.330 |
| Average Recall    | AR   | 0.50:0.95 | all    | 10      | 0.595 |
| Average Recall    | AR   | 0.50:0.95 | all    | 100     | 0.626 |
| Average Recall    | AR   | 0.50:0.95 | small  | 100     | 0.316 |
| Average Recall    | AR   | 0.50:0.95 | medium | 100     | 0.742 |
| Average Recall    | AR   | 0.50:0.95 | large  | 100     | 0.865 |
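The AP/AR rows above follow the standard COCO detection protocol (IoU sweep 0.50:0.95, per-area and per-maxDets breakdowns). As a hedged illustration only, not the script used to produce this table, metrics in the same format can be computed with `torchmetrics`; the prediction and ground-truth boxes below are placeholders.

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# Placeholder predictions/targets; in practice these come from running the
# detector over the evaluation split (boxes in xyxy pixel coordinates).
preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 110.0, 120.0]]),
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([0]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 18.0, 108.0, 122.0]]),
    "labels": torch.tensor([0]),
}]

metric = MeanAveragePrecision(iou_type="bbox")
metric.update(preds, targets)
results = metric.compute()

# map -> AP@[0.50:0.95], map_50 -> AP@0.50, mar_100 -> AR@maxDets=100, etc.
print({k: float(v) for k, v in results.items() if v.numel() == 1})
```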
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results

#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Charliesgt/pollen_detr_resnet50_benchmark
| null |
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:11:20+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us
|
Model Card for Model ID
=======================
Model Details
-------------
### Model Description
Pollen Grain Object Detection model using DETR Resnet50 backbone
* Developed by:
* Funded by [optional]:
* Shared by [optional]:
* Model type:
* Language(s) (NLP):
* License:
* Finetuned from model [optional]:
### Model Sources [optional]
* Repository:
* Paper [optional]:
* Demo [optional]:
Uses
----
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
Bias, Risks, and Limitations
----------------------------
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model.
Training Details
----------------
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
* Training regime:
#### Speeds, Sizes, Times [optional]
Evaluation
----------
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
!Sample
#### Summary
Model Examination [optional]
----------------------------
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
* Hardware Type:
* Hours used:
* Cloud Provider:
* Compute Region:
* Carbon Emitted:
Technical Specifications [optional]
-----------------------------------
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
Glossary [optional]
-------------------
More Information [optional]
---------------------------
Model Card Authors [optional]
-----------------------------
Model Card Contact
------------------
|
[
"### Model Description\n\n\nPollen Grain Object Detection model using DETR Resnet50 backbone\n\n\n* Developed by:\n* Funded by [optional]:\n* Shared by [optional]:\n* Model type:\n* Language(s) (NLP):\n* License:\n* Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n* Repository:\n* Paper [optional]:\n* Demo [optional]:\n\n\nUses\n----",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use\n\n\nBias, Risks, and Limitations\n----------------------------",
"### Recommendations\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code below to get started with the model.\n\n\nTraining Details\n----------------",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n\n* Training regime:",
"#### Speeds, Sizes, Times [optional]\n\n\nEvaluation\n----------",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results\n\n\n!Sample",
"#### Summary\n\n\nModel Examination [optional]\n----------------------------\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type:\n* Hours used:\n* Cloud Provider:\n* Compute Region:\n* Carbon Emitted:\n\n\nTechnical Specifications [optional]\n-----------------------------------",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n[optional]\n\n\nBibTeX:\n\n\nAPA:\n\n\nGlossary [optional]\n-------------------\n\n\nMore Information [optional]\n---------------------------\n\n\nModel Card Authors [optional]\n-----------------------------\n\n\nModel Card Contact\n------------------"
] |
[
"TAGS\n#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"### Model Description\n\n\nPollen Grain Object Detection model using DETR Resnet50 backbone\n\n\n* Developed by:\n* Funded by [optional]:\n* Shared by [optional]:\n* Model type:\n* Language(s) (NLP):\n* License:\n* Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n* Repository:\n* Paper [optional]:\n* Demo [optional]:\n\n\nUses\n----",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use\n\n\nBias, Risks, and Limitations\n----------------------------",
"### Recommendations\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code below to get started with the model.\n\n\nTraining Details\n----------------",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n\n* Training regime:",
"#### Speeds, Sizes, Times [optional]\n\n\nEvaluation\n----------",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results\n\n\n!Sample",
"#### Summary\n\n\nModel Examination [optional]\n----------------------------\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n\n* Hardware Type:\n* Hours used:\n* Cloud Provider:\n* Compute Region:\n* Carbon Emitted:\n\n\nTechnical Specifications [optional]\n-----------------------------------",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n[optional]\n\n\nBibTeX:\n\n\nAPA:\n\n\nGlossary [optional]\n-------------------\n\n\nMore Information [optional]\n---------------------------\n\n\nModel Card Authors [optional]\n-----------------------------\n\n\nModel Card Contact\n------------------"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbert-pii-detection-model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1572
- Precision: 0.7413
- Recall: 0.8012
- F1: 0.7701
- Accuracy: 0.9433
- Classification Report: {'B-ACCOUNTNAME': {'precision': 0.9833333333333333, 'recall': 0.9962476547842402, 'f1-score': 0.9897483690587139, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.9695238095238096, 'recall': 0.960377358490566, 'f1-score': 0.9649289099526066, 'support': 530.0}, 'B-AGE': {'precision': 0.9133663366336634, 'recall': 0.8163716814159292, 'f1-score': 0.8621495327102804, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.9108910891089109, 'recall': 0.6917293233082706, 'f1-score': 0.7863247863247863, 'support': 266.0}, 'B-BIC': {'precision': 0.8987341772151899, 'recall': 0.9342105263157895, 'f1-score': 0.9161290322580645, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.9475890985324947, 'recall': 0.9847494553376906, 'f1-score': 0.9658119658119658, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.9186991869918699, 'recall': 0.6457142857142857, 'f1-score': 0.7583892617449665, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.9163987138263665, 'recall': 0.9693877551020408, 'f1-score': 0.9421487603305785, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.827054794520548, 'recall': 0.9165085388994307, 'f1-score': 0.8694869486948695, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.5887096774193549, 'recall': 0.6293103448275862, 'f1-score': 0.6083333333333333, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.6754385964912281, 'recall': 0.4350282485875706, 'f1-score': 0.5292096219931272, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.5, 'recall': 0.08387096774193549, 'f1-score': 0.143646408839779, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.8604651162790697, 'recall': 0.8306122448979592, 'f1-score': 0.8452751817237798, 'support': 490.0}, 'B-DATE': {'precision': 0.7462817147856518, 'recall': 0.9530726256983241, 'f1-score': 0.8370951913640824, 'support': 895.0}, 'B-DOB': {'precision': 0.7791666666666667, 'recall': 0.3887733887733888, 'f1-score': 0.5187239944521498, 'support': 481.0}, 'B-EMAIL': {'precision': 0.9718482252141983, 'recall': 0.9742331288343559, 'f1-score': 0.9730392156862745, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9938271604938271, 'recall': 0.9969040247678018, 'f1-score': 0.9953632148377125, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.8881578947368421, 'recall': 0.7670454545454546, 'f1-score': 0.823170731707317, 'support': 176.0}, 'B-GENDER': {'precision': 0.9295154185022027, 'recall': 0.8940677966101694, 'f1-score': 0.9114470842332614, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.9221556886227545, 'recall': 0.9112426035502958, 'f1-score': 0.9166666666666666, 'support': 169.0}, 'B-IBAN': {'precision': 0.9788359788359788, 'recall': 0.9814323607427056, 'f1-score': 0.9801324503311258, 'support': 377.0}, 'B-IP': {'precision': 0.46153846153846156, 'recall': 0.06217616580310881, 'f1-score': 0.1095890410958904, 'support': 386.0}, 'B-IPV4': {'precision': 0.7432239657631954, 'recall': 0.9774859287054409, 'f1-score': 0.8444084278768234, 'support': 533.0}, 'B-IPV6': {'precision': 0.6947368421052632, 'recall': 0.927710843373494, 'f1-score': 0.7944969905417025, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.7265774378585086, 'recall': 0.6896551724137931, 'f1-score': 0.707635009310987, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.9020866773675762, 'recall': 0.9525423728813559, 'f1-score': 0.9266281945589447, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.8084358523725835, 'recall': 0.8550185873605948, 'f1-score': 0.8310749774164409, 'support': 538.0}, 'B-LITECOINADDRESS': {'precision': 0.9659863945578231, 
'recall': 0.8765432098765432, 'f1-score': 0.919093851132686, 'support': 162.0}, 'B-LOCATION': {'precision': 0.8701642819289878, 'recall': 0.8556539864512767, 'f1-score': 0.8628481345244351, 'support': 5757.0}, 'B-MAC': {'precision': 0.9497716894977168, 'recall': 0.9541284403669725, 'f1-score': 0.9519450800915332, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.8656716417910447, 'recall': 0.7360406091370558, 'f1-score': 0.7956104252400549, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 1.0, 'recall': 0.9970059880239521, 'f1-score': 0.9985007496251874, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.8532110091743119, 'recall': 0.8942307692307693, 'f1-score': 0.8732394366197183, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.7381056307289393, 'recall': 0.7174374204497243, 'f1-score': 0.7276247848537005, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9702970297029703, 'recall': 0.962671905697446, 'f1-score': 0.9664694280078896, 'support': 509.0}, 'B-PERSON': {'precision': 0.8583856935717739, 'recall': 0.8355025874235534, 'f1-score': 0.8467895740623014, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9868073878627969, 'recall': 0.9842105263157894, 'f1-score': 0.9855072463768116, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.9647058823529412, 'recall': 0.9800796812749004, 'f1-score': 0.9723320158102767, 'support': 502.0}, 'B-PIN': {'precision': 0.7631578947368421, 'recall': 0.7837837837837838, 'f1-score': 0.7733333333333333, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8800521512385919, 'recall': 0.9811046511627907, 'f1-score': 0.9278350515463918, 'support': 688.0}, 'B-SEX': {'precision': 0.9587242026266416, 'recall': 0.9826923076923076, 'f1-score': 0.9705603038936372, 'support': 520.0}, 'B-SSN': {'precision': 0.9507042253521126, 'recall': 0.9529411764705882, 'f1-score': 0.9518213866039953, 'support': 425.0}, 'B-TIME': {'precision': 0.9319727891156463, 'recall': 0.9432013769363167, 'f1-score': 0.9375534644995723, 'support': 581.0}, 'B-URL': {'precision': 0.9930555555555556, 'recall': 0.9982547993019197, 'f1-score': 0.9956483899042646, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9951690821256038, 'recall': 0.9927710843373494, 'f1-score': 0.9939686369119421, 'support': 415.0}, 'B-USERNAME': {'precision': 0.9203539823008849, 'recall': 0.9171075837742504, 'f1-score': 0.9187279151943463, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9704142011834319, 'recall': 0.9879518072289156, 'f1-score': 0.9791044776119403, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.9585492227979274, 'recall': 0.9438775510204082, 'f1-score': 0.9511568123393316, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9683972911963883, 'recall': 0.9907621247113164, 'f1-score': 0.9794520547945206, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.9874560963371801, 'recall': 0.9902717208990272, 'f1-score': 0.9888619043631187, 'support': 5962.0}, 'I-AGE': {'precision': 0.8314606741573034, 'recall': 0.9844789356984479, 'f1-score': 0.9015228426395939, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.9181771103055412, 'recall': 0.9363612358067072, 'f1-score': 0.9271800235324879, 'support': 3787.0}, 'I-BIC': {'precision': 0.8971631205673759, 'recall': 0.9270795163063393, 'f1-score': 0.9118760136961614, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.9611269208878771, 'recall': 0.9860158234315243, 'f1-score': 0.9734123038346807, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.9364548494983278, 'recall': 0.813953488372093, 'f1-score': 0.8709175738724728, 
'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.9808612440191388, 'recall': 0.9927360774818402, 'f1-score': 0.98676293622142, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.8299468791500664, 'recall': 0.9182339112547752, 'f1-score': 0.8718610491071429, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.6748278500382555, 'recall': 0.740865182696346, 'f1-score': 0.7063063063063063, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.637987012987013, 'recall': 0.637987012987013, 'f1-score': 0.637987012987013, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.46060606060606063, 'recall': 0.32374866879659214, 'f1-score': 0.3802376485303315, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.8227513227513228, 'recall': 0.7369668246445498, 'f1-score': 0.7775, 'support': 422.0}, 'I-DATE': {'precision': 0.7552550542241264, 'recall': 0.9241480996068152, 'f1-score': 0.8312090179031901, 'support': 6104.0}, 'I-DOB': {'precision': 0.7493784186971656, 'recall': 0.45666666666666667, 'f1-score': 0.5675014121634344, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.9976601747987062, 'recall': 0.9993106776039153, 'f1-score': 0.9984847441283835, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9971705872676427, 'recall': 0.999637203166227, 'f1-score': 0.9984023717368031, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.8764478764478765, 'recall': 0.961864406779661, 'f1-score': 0.9171717171717172, 'support': 236.0}, 'I-GENDER': {'precision': 0.9548311076197957, 'recall': 0.9806373537716822, 'f1-score': 0.9675621890547264, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9332477535301669, 'recall': 0.9706275033377837, 'f1-score': 0.9515706806282722, 'support': 749.0}, 'I-IBAN': {'precision': 0.9920039365235577, 'recall': 0.9861807508866333, 'f1-score': 0.9890837728443518, 'support': 16354.0}, 'I-IP': {'precision': 0.49273447820343463, 'recall': 0.3739473332442187, 'f1-score': 0.42520044077972413, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7541926209870627, 'recall': 0.98375, 'f1-score': 0.8538106861947383, 'support': 3200.0}, 'I-IPV6': {'precision': 0.7565486700049205, 'recall': 0.8201186031188228, 'f1-score': 0.7870520927431497, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.8258928571428571, 'recall': 0.8564814814814815, 'f1-score': 0.8409090909090909, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.9352864013992129, 'recall': 0.9958100558659218, 'f1-score': 0.9645997745208568, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.928082191780822, 'recall': 0.9492119089316988, 'f1-score': 0.9385281385281385, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9491078669910786, 'recall': 0.882125694902478, 'f1-score': 0.9143917566049714, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.935253227408143, 'recall': 0.9305404604288113, 'f1-score': 0.9328908919815759, 'support': 20242.0}, 'I-MAC': {'precision': 0.9934322549258088, 'recall': 0.9946419873356064, 'f1-score': 0.9940367530728976, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.8802466464733881, 'recall': 0.7585531835555142, 'f1-score': 0.8148815782885184, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9985329176600036, 'recall': 1.0, 'f1-score': 0.9992659203523583, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.856851510148587, 'recall': 0.8855766335441132, 'f1-score': 0.8709772951628826, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9827474485663372, 'recall': 0.9809200420405854, 'f1-score': 0.9818328950030346, 'support': 12369.0}, 'I-PERSON': {'precision': 0.8926155035762614, 
'recall': 0.9168528170761976, 'f1-score': 0.9045718343658937, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.9931120851596744, 'recall': 0.9997478567826525, 'f1-score': 0.9964189231639128, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9784533648170012, 'recall': 0.9929609105885877, 'f1-score': 0.9856537575262023, 'support': 6677.0}, 'I-PIN': {'precision': 0.7491694352159468, 'recall': 0.8541666666666666, 'f1-score': 0.7982300884955752, 'support': 528.0}, 'I-PREFIX': {'precision': 0.8575924468922108, 'recall': 0.9936189608021878, 'f1-score': 0.9206081081081081, 'support': 1097.0}, 'I-SSN': {'precision': 0.9718875502008032, 'recall': 0.9855609033691225, 'f1-score': 0.9786764705882353, 'support': 5402.0}, 'I-TIME': {'precision': 0.9683184402924452, 'recall': 0.9892116182572614, 'f1-score': 0.9786535303776683, 'support': 1205.0}, 'I-URL': {'precision': 0.9914529914529915, 'recall': 0.9998631823778903, 'f1-score': 0.9956403269754769, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.998585829284212, 'recall': 0.9999636891793755, 'f1-score': 0.9992742842628543, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.944710770214126, 'recall': 0.9668029435813573, 'f1-score': 0.9556291925967834, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9745319615449486, 'recall': 0.9946634532621794, 'f1-score': 0.9844948032032714, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.9772364217252396, 'recall': 0.9788, 'f1-score': 0.978017585931255, 'support': 2500.0}, 'O': {'precision': 0.9892495017985387, 'recall': 0.9837872366945033, 'f1-score': 0.986510808226626, 'support': 320920.0}, 'accuracy': 0.9433351828810348, 'macro avg': {'precision': 0.8793777747202005, 'recall': 0.8718829567729829, 'f1-score': 0.8691194147844159, 'support': 713688.0}, 'weighted avg': {'precision': 0.9419843792518461, 'recall': 0.9433351828810348, 'f1-score': 0.941698874575059, 'support': 713688.0}}
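The per-tag classification report above, including the 'accuracy', 'macro avg' and 'weighted avg' entries, follows the format of scikit-learn's `classification_report` with `output_dict=True`. A hedged sketch of how such a report can be produced from flattened BIO tag sequences (not the exact evaluation code) is:

```python
from sklearn.metrics import classification_report

# Placeholder gold/predicted BIO tags, flattened across all evaluation tokens;
# in practice these come from decoding the model's per-token predictions.
y_true = ["O", "B-EMAIL", "I-EMAIL", "O", "B-PHONENUMBER"]
y_pred = ["O", "B-EMAIL", "I-EMAIL", "O", "O"]

report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
print(report["weighted avg"])  # precision / recall / f1-score / support
```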
## Model description
More information needed
## Intended uses & limitations
More information needed
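While this section is still marked as needing more information, the obvious intended use is tagging PII spans in free text. A hedged example with the standard `transformers` token-classification pipeline is shown below; the model path is assumed from the card title and may need to be replaced with the real Hub id or a local checkpoint directory.

```python
from transformers import pipeline

# Model path assumed from the card title; replace with the actual Hub id or local dir.
pii_tagger = pipeline(
    "token-classification",
    model="dbert-pii-detection-model",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

text = "Hi, I'm Jane Doe, my email is jane.doe@example.com and my phone is 555-0100."
for entity in pii_tagger(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```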
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
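For reference, a minimal sketch of how the hyperparameters listed above map onto `transformers` `TrainingArguments` and `Trainer` follows. It is an illustration under assumptions, not the exact script: the label count and the tiny in-memory dataset are placeholders so the snippet stays self-contained.

```python
from datasets import Dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

base_model = "distilbert/distilbert-base-uncased"
num_labels = 95  # assumed; set to the size of the actual B-/I-/O label list

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(base_model, num_labels=num_labels)

# Tiny dummy pre-tokenized dataset so the sketch is self-contained;
# real training uses word-aligned labels over the full corpus.
train_dataset = Dataset.from_dict({
    "input_ids": [[101, 7632, 102]],
    "attention_mask": [[1, 1, 1]],
    "labels": [[-100, 0, -100]],
})

training_args = TrainingArguments(
    output_dir="dbert-pii-detection-model",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "mixed_precision_training: Native AMP" (requires a GPU)
    # Adam betas/epsilon listed above are the TrainingArguments defaults.
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a real validation split
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```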
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Classification Report |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:----------------------:|
| 0.6243 | 1.0 | 1609 | 0.4623 | 0.3800 | 0.4252 | 0.4014 | 0.8613 | {'B-ACCOUNTNAME': {'precision': 0.9016736401673641, 'recall': 0.8086303939962477, 'f1-score': 0.8526211671612265, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.573943661971831, 'recall': 0.9226415094339623, 'f1-score': 0.7076700434153401, 'support': 530.0}, 'B-AGE': {'precision': 0.9655172413793104, 'recall': 0.18584070796460178, 'f1-score': 0.3116883116883117, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 266.0}, 'B-BIC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.7135416666666666, 'recall': 0.8954248366013072, 'f1-score': 0.7942028985507247, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.46622889305816134, 'recall': 0.9430740037950665, 'f1-score': 0.6239799121155053, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.47297297297297297, 'recall': 0.21428571428571427, 'f1-score': 0.2949438202247191, 'support': 490.0}, 'B-DATE': {'precision': 0.565410199556541, 'recall': 0.8547486033519553, 'f1-score': 0.6806049822064056, 'support': 895.0}, 'B-DOB': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 481.0}, 'B-EMAIL': {'precision': 0.7891231964483907, 'recall': 0.8723926380368098, 'f1-score': 0.8286713286713286, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9806451612903225, 'recall': 0.9411764705882353, 'f1-score': 0.9605055292259084, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 176.0}, 'B-GENDER': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 169.0}, 'B-IBAN': {'precision': 0.8633333333333333, 'recall': 0.6870026525198939, 'f1-score': 0.7651403249630724, 'support': 377.0}, 'B-IP': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 386.0}, 'B-IPV4': {'precision': 0.6032719836400818, 'recall': 0.5534709193245778, 'f1-score': 0.5772994129158513, 'support': 533.0}, 'B-IPV6': {'precision': 0.648729446935725, 'recall': 0.8714859437751004, 'f1-score': 0.7437874892887746, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 538.0}, 'B-LITECOINADDRESS': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 162.0}, 'B-LOCATION': {'precision': 0.5921890714046262, 'recall': 0.6136876845579294, 'f1-score': 0.6027467371833148, 'support': 5757.0}, 'B-MAC': {'precision': 0.9090909090909091, 'recall': 0.13761467889908258, 'f1-score': 0.23904382470119523, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 1.0, 'recall': 0.10179640718562874, 'f1-score': 0.18478260869565216, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 208.0}, 
'B-ORGANIZATION': {'precision': 0.6696165191740413, 'recall': 0.19261773440814595, 'f1-score': 0.29917627677100495, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.6061884669479606, 'recall': 0.8467583497053045, 'f1-score': 0.7065573770491803, 'support': 509.0}, 'B-PERSON': {'precision': 0.46653702803435426, 'recall': 0.45146620668025716, 'f1-score': 0.45887790883009244, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9689119170984456, 'recall': 0.4921052631578947, 'f1-score': 0.6527050610820244, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.6161290322580645, 'recall': 0.7609561752988048, 'f1-score': 0.6809269162210339, 'support': 502.0}, 'B-PIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8307291666666666, 'recall': 0.4636627906976744, 'f1-score': 0.5951492537313433, 'support': 688.0}, 'B-SEX': {'precision': 0.7757847533632287, 'recall': 0.6653846153846154, 'f1-score': 0.7163561076604554, 'support': 520.0}, 'B-SSN': {'precision': 0.9795918367346939, 'recall': 0.22588235294117648, 'f1-score': 0.367112810707457, 'support': 425.0}, 'B-TIME': {'precision': 0.7577639751552795, 'recall': 0.4199655765920826, 'f1-score': 0.540420819490587, 'support': 581.0}, 'B-URL': {'precision': 0.967687074829932, 'recall': 0.9930191972076788, 'f1-score': 0.9801894918173988, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.8994708994708994, 'recall': 0.8192771084337349, 'f1-score': 0.8575031525851198, 'support': 415.0}, 'B-USERNAME': {'precision': 0.627906976744186, 'recall': 0.047619047619047616, 'f1-score': 0.08852459016393442, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.8038095238095239, 'recall': 0.9745958429561201, 'f1-score': 0.8810020876826722, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.8037922506183017, 'recall': 0.9812143575981215, 'f1-score': 0.8836858006042296, 'support': 5962.0}, 'I-AGE': {'precision': 0.612184249628529, 'recall': 0.9135254988913526, 'f1-score': 0.7330960854092526, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.5548321464903357, 'recall': 0.7200950620543967, 'f1-score': 0.6267524706963916, 'support': 3787.0}, 'I-BIC': {'precision': 0.7792672028596962, 'recall': 0.6390619274459509, 'f1-score': 0.7022347493456815, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.8603819834541926, 'recall': 0.983738650629142, 'f1-score': 0.917934538322187, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.7272727272727273, 'recall': 0.5326876513317191, 'f1-score': 0.6149545772187281, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.5504077244512962, 'recall': 0.9966940934469586, 'f1-score': 0.7091816732442957, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.4798206278026906, 'recall': 0.04493910121797564, 'f1-score': 0.08218125960061444, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 422.0}, 'I-DATE': {'precision': 0.6246747614917606, 'recall': 0.9439711664482306, 'f1-score': 0.7518267223382046, 'support': 6104.0}, 'I-DOB': {'precision': 0.559322033898305, 'recall': 0.02, 'f1-score': 0.03861907548273844, 
'support': 3300.0}, 'I-EMAIL': {'precision': 0.987352156969987, 'recall': 0.9955194044254497, 'f1-score': 0.9914189606645157, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9947819237963966, 'recall': 0.9997361477572559, 'f1-score': 0.9972528828280502, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 236.0}, 'I-GENDER': {'precision': 0.9163346613545816, 'recall': 0.27833803953206937, 'f1-score': 0.42698019801980197, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 749.0}, 'I-IBAN': {'precision': 0.9779240898528273, 'recall': 0.9263788675553382, 'f1-score': 0.9514538717578346, 'support': 16354.0}, 'I-IP': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 14962.0}, 'I-IPV4': {'precision': 0.6198944694156733, 'recall': 0.99125, 'f1-score': 0.7627750390765902, 'support': 3200.0}, 'I-IPV6': {'precision': 0.6920097484550439, 'recall': 0.9978350224341879, 'f1-score': 0.8172485127269457, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.7324561403508771, 'recall': 0.07774674115456238, 'f1-score': 0.14057239057239057, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9023499523658304, 'recall': 0.5354753604070479, 'f1-score': 0.6721069126603986, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.7736853310623802, 'recall': 0.7181108586108093, 'f1-score': 0.7448629259543941, 'support': 20242.0}, 'I-MAC': {'precision': 0.9398008798332947, 'recall': 0.9885533365806137, 'f1-score': 0.9635608308605341, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 1.0, 'recall': 0.012771511140113732, 'f1-score': 0.025220913107511046, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9002489626556016, 'recall': 0.9961432506887052, 'f1-score': 0.9457715780296425, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.6546114971572963, 'recall': 0.416039345578641, 'f1-score': 0.5087450138079166, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.8728904847396768, 'recall': 0.9826986821893443, 'f1-score': 0.9245455236936183, 'support': 12369.0}, 'I-PERSON': {'precision': 0.5489927504760632, 'recall': 0.8157359146190122, 'f1-score': 0.6562961779623787, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.9453817210826556, 'recall': 0.9819717599596571, 'f1-score': 0.9633294168573372, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9101089588377724, 'recall': 0.9007039089411413, 'f1-score': 0.9053820097854723, 'support': 6677.0}, 'I-PIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 528.0}, 'I-PREFIX': {'precision': 0.6593984962406015, 'recall': 0.7994530537830447, 'f1-score': 0.722702925422332, 'support': 1097.0}, 'I-SSN': {'precision': 0.9083463338533542, 'recall': 0.8622732321362458, 'f1-score': 0.884710351377018, 'support': 5402.0}, 'I-TIME': {'precision': 0.8597560975609756, 'recall': 0.7020746887966804, 'f1-score': 0.7729556875285518, 'support': 1205.0}, 'I-URL': {'precision': 0.9723705285638014, 'recall': 0.9967163770693666, 'f1-score': 0.984392946422539, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.9912046716170427, 'recall': 0.998474945533769, 'f1-score': 0.9948265258131037, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.6820336766388753, 'recall': 0.6822567457072772, 'f1-score': 0.6821451929365598, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.8922569498979112, 'recall': 
0.9779652263728696, 'f1-score': 0.9331471747700394, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.5564168819982773, 'recall': 0.7752, 'f1-score': 0.647835534013037, 'support': 2500.0}, 'O': {'precision': 0.9453196047435655, 'recall': 0.9652499065187585, 'f1-score': 0.9551808031378063, 'support': 320920.0}, 'accuracy': 0.8613063411462712, 'macro avg': {'precision': 0.5200387497441232, 'recall': 0.46428613959952064, 'f1-score': 0.4529271315457713, 'support': 713688.0}, 'weighted avg': {'precision': 0.8430070225232931, 'recall': 0.8613063411462712, 'f1-score': 0.8348072158443501, 'support': 713688.0}} |
| 0.3168 | 2.0 | 3218 | 0.2627 | 0.5706 | 0.6327 | 0.6001 | 0.9121 | {'B-ACCOUNTNAME': {'precision': 0.9548736462093863, 'recall': 0.9924953095684803, 'f1-score': 0.9733210671573137, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.9012567324955116, 'recall': 0.9471698113207547, 'f1-score': 0.9236430542778289, 'support': 530.0}, 'B-AGE': {'precision': 0.8801089918256131, 'recall': 0.7146017699115044, 'f1-score': 0.7887667887667887, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.6666666666666666, 'recall': 0.022556390977443608, 'f1-score': 0.04363636363636364, 'support': 266.0}, 'B-BIC': {'precision': 0.9833333333333333, 'recall': 0.3881578947368421, 'f1-score': 0.5566037735849056, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.8361266294227188, 'recall': 0.9782135076252724, 'f1-score': 0.9016064257028112, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.8244274809160306, 'recall': 0.7346938775510204, 'f1-score': 0.7769784172661871, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.7198211624441133, 'recall': 0.9165085388994307, 'f1-score': 0.8063439065108514, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.463768115942029, 'recall': 0.1839080459770115, 'f1-score': 0.26337448559670784, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.4864479315263909, 'recall': 0.6959183673469388, 'f1-score': 0.5726280436607892, 'support': 490.0}, 'B-DATE': {'precision': 0.6565579984836998, 'recall': 0.9675977653631285, 'f1-score': 0.7822944896115628, 'support': 895.0}, 'B-DOB': {'precision': 0.6923076923076923, 'recall': 0.037422037422037424, 'f1-score': 0.07100591715976332, 'support': 481.0}, 'B-EMAIL': {'precision': 0.9091967403958091, 'recall': 0.9582822085889571, 'f1-score': 0.9330943847072879, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9876160990712074, 'recall': 0.9876160990712074, 'f1-score': 0.9876160990712074, 'support': 323.0}, 'B-EYECOLOR': {'precision': 1.0, 'recall': 0.13068181818181818, 'f1-score': 0.23115577889447236, 'support': 176.0}, 'B-GENDER': {'precision': 0.7881040892193308, 'recall': 0.4491525423728814, 'f1-score': 0.5721997300944669, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 169.0}, 'B-IBAN': {'precision': 0.9421052631578948, 'recall': 0.9496021220159151, 'f1-score': 0.9458388375165125, 'support': 377.0}, 'B-IP': {'precision': 0.14285714285714285, 'recall': 0.0025906735751295338, 'f1-score': 0.005089058524173028, 'support': 386.0}, 'B-IPV4': {'precision': 0.6737967914438503, 'recall': 0.9455909943714822, 'f1-score': 0.7868852459016393, 'support': 533.0}, 'B-IPV6': {'precision': 0.6642754662840746, 'recall': 0.929718875502008, 'f1-score': 0.7748953974895397, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.3955637707948244, 'recall': 0.38838475499092556, 'f1-score': 0.39194139194139194, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.6927899686520376, 'recall': 0.7491525423728813, 'f1-score': 0.7198697068403909, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.6944444444444444, 'recall': 0.13940520446096655, 'f1-score': 0.23219814241486067, 'support': 538.0}, 'B-LITECOINADDRESS': {'precision': 0.9868421052631579, 'recall': 0.46296296296296297, 'f1-score': 0.6302521008403361, 'support': 162.0}, 'B-LOCATION': {'precision': 
0.6558498253783469, 'recall': 0.7828730241445198, 'f1-score': 0.7137540581201995, 'support': 5757.0}, 'B-MAC': {'precision': 0.9295774647887324, 'recall': 0.908256880733945, 'f1-score': 0.9187935034802784, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.8103448275862069, 'recall': 0.5964467005076142, 'f1-score': 0.6871345029239766, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 0.9536231884057971, 'recall': 0.9850299401197605, 'f1-score': 0.9690721649484536, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.9193548387096774, 'recall': 0.27403846153846156, 'f1-score': 0.4222222222222222, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.6084592145015105, 'recall': 0.42723801442511666, 'f1-score': 0.5019940179461615, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9256360078277887, 'recall': 0.9292730844793713, 'f1-score': 0.9274509803921569, 'support': 509.0}, 'B-PERSON': {'precision': 0.7883310719131614, 'recall': 0.6377607025246982, 'f1-score': 0.7050970873786407, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9758713136729222, 'recall': 0.9578947368421052, 'f1-score': 0.9667994687915007, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.9230769230769231, 'recall': 0.9322709163346613, 'f1-score': 0.9276511397423192, 'support': 502.0}, 'B-PIN': {'precision': 0.8936170212765957, 'recall': 0.28378378378378377, 'f1-score': 0.4307692307692308, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8677685950413223, 'recall': 0.9156976744186046, 'f1-score': 0.8910891089108911, 'support': 688.0}, 'B-SEX': {'precision': 0.8274706867671692, 'recall': 0.95, 'f1-score': 0.8845120859444942, 'support': 520.0}, 'B-SSN': {'precision': 0.9158415841584159, 'recall': 0.8705882352941177, 'f1-score': 0.8926417370325693, 'support': 425.0}, 'B-TIME': {'precision': 0.7867298578199052, 'recall': 0.8571428571428571, 'f1-score': 0.8204283360790774, 'support': 581.0}, 'B-URL': {'precision': 0.9913194444444444, 'recall': 0.9965095986038395, 'f1-score': 0.9939077458659704, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9951573849878934, 'recall': 0.9903614457831326, 'f1-score': 0.9927536231884058, 'support': 415.0}, 'B-USERNAME': {'precision': 0.8150943396226416, 'recall': 0.7619047619047619, 'f1-score': 0.7876025524156791, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9470588235294117, 'recall': 0.9698795180722891, 'f1-score': 0.9583333333333334, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.7848837209302325, 'recall': 0.6887755102040817, 'f1-score': 0.7336956521739131, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9470198675496688, 'recall': 0.9907621247113164, 'f1-score': 0.9683972911963883, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.966644758462044, 'recall': 0.9867494129486749, 'f1-score': 0.976593625498008, 'support': 5962.0}, 'I-AGE': {'precision': 0.7512605042016807, 'recall': 0.991130820399113, 'f1-score': 0.8546845124282982, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.7029923170238577, 'recall': 0.9181410087140216, 'f1-score': 0.7962899347303333, 'support': 3787.0}, 'I-BIC': {'precision': 0.8182717678100264, 'recall': 0.9091242213264932, 'f1-score': 0.8613088005554591, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.8743351063829787, 'recall': 0.9981607450442297, 'f1-score': 0.9321537140286544, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.8, 'recall': 0.627906976744186, 'f1-score': 0.7035830618892508, 'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.9035591274397244, 'recall': 0.9527845036319612, 'f1-score': 
0.9275191514437242, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.7646555935515389, 'recall': 0.9199235968263297, 'f1-score': 0.8351340536214485, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.5147891755821271, 'recall': 0.6871062578748425, 'f1-score': 0.5885950710559453, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.7126436781609196, 'recall': 0.10064935064935066, 'f1-score': 0.1763869132290185, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.2653061224489796, 'recall': 0.013844515441959531, 'f1-score': 0.02631578947368421, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.7424242424242424, 'recall': 0.11611374407582939, 'f1-score': 0.20081967213114754, 'support': 422.0}, 'I-DATE': {'precision': 0.670254512837762, 'recall': 0.9793577981651376, 'f1-score': 0.7958463689010185, 'support': 6104.0}, 'I-DOB': {'precision': 0.7672727272727272, 'recall': 0.12787878787878787, 'f1-score': 0.21922077922077923, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.9958043881972626, 'recall': 0.9980009650513545, 'f1-score': 0.9969014666391242, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9989651488850314, 'recall': 0.9869722955145118, 'f1-score': 0.9929325104519212, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.7888888888888889, 'recall': 0.902542372881356, 'f1-score': 0.841897233201581, 'support': 236.0}, 'I-GENDER': {'precision': 0.8748519542045006, 'recall': 0.8939088342073417, 'f1-score': 0.8842777334397446, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9301745635910225, 'recall': 0.4979973297730307, 'f1-score': 0.648695652173913, 'support': 749.0}, 'I-IBAN': {'precision': 0.9675725190839695, 'recall': 0.9688149688149689, 'f1-score': 0.9681933453512176, 'support': 16354.0}, 'I-IP': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7210561949898443, 'recall': 0.9984375, 'f1-score': 0.8373738697418425, 'support': 3200.0}, 'I-IPV6': {'precision': 0.6941894018887723, 'recall': 0.9963603275705186, 'f1-score': 0.8182696643690008, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.7811435062476335, 'recall': 0.9604283054003724, 'f1-score': 0.861557736479432, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.853515625, 'recall': 0.7653239929947461, 'f1-score': 0.8070175438596491, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9973118279569892, 'recall': 0.5593140488080656, 'f1-score': 0.716691820102626, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.7498403575989783, 'recall': 0.9281691532457267, 'f1-score': 0.8295288975230695, 'support': 20242.0}, 'I-MAC': {'precision': 0.9886005335920446, 'recall': 0.9926936190940088, 'f1-score': 0.9906428484627536, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.8558747380101097, 'recall': 0.6471520462384637, 'f1-score': 0.737020915171462, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9889171511627907, 'recall': 0.9996326905417815, 'f1-score': 0.9942460498675678, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.7886456908344733, 'recall': 0.6943691659138813, 'f1-score': 0.7385108086469175, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9737436240488335, 'recall': 0.9414665696499313, 'f1-score': 0.9573331141072016, 'support': 12369.0}, 'I-PERSON': {'precision': 0.8398266164404768, 'recall': 0.8078927773641102, 'f1-score': 0.8235502479506123, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.9921697398332913, 'recall': 0.9904185577407968, 'f1-score': 
0.9912933753943217, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9707251163488966, 'recall': 0.9683989815785532, 'f1-score': 0.9695606537711801, 'support': 6677.0}, 'I-PIN': {'precision': 0.6933638443935927, 'recall': 0.5738636363636364, 'f1-score': 0.627979274611399, 'support': 528.0}, 'I-PREFIX': {'precision': 0.824750192159877, 'recall': 0.9781221513217867, 'f1-score': 0.8949124270225187, 'support': 1097.0}, 'I-SSN': {'precision': 0.9306327298799068, 'recall': 0.9611255090707146, 'f1-score': 0.9456333667243421, 'support': 5402.0}, 'I-TIME': {'precision': 0.9040247678018576, 'recall': 0.9692946058091286, 'f1-score': 0.935522627152583, 'support': 1205.0}, 'I-URL': {'precision': 0.9870252736856332, 'recall': 0.9991790942673416, 'f1-score': 0.9930649986401958, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.9974982777999347, 'recall': 0.9989832970225128, 'f1-score': 0.9982402351191016, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.9018691588785047, 'recall': 0.8837285363859362, 'f1-score': 0.8927066986041133, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9731127197518097, 'recall': 0.9719400929592013, 'f1-score': 0.9725260528808888, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.8837119753561802, 'recall': 0.918, 'f1-score': 0.9005297233666862, 'support': 2500.0}, 'O': {'precision': 0.9771261455694223, 'recall': 0.9734419793094853, 'f1-score': 0.9752805831759361, 'support': 320920.0}, 'accuracy': 0.9120652161728935, 'macro avg': {'precision': 0.7714180691665777, 'recall': 0.7098286024352655, 'f1-score': 0.7073861197077229, 'support': 713688.0}, 'weighted avg': {'precision': 0.898265319056747, 'recall': 0.9120652161728935, 'f1-score': 0.8989643032526424, 'support': 713688.0}} |
| 0.212 | 3.0 | 4827 | 0.1970 | 0.6644 | 0.7339 | 0.6974 | 0.9313 | {'B-ACCOUNTNAME': {'precision': 0.9706959706959707, 'recall': 0.9943714821763602, 'f1-score': 0.9823911028730306, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.9530956848030019, 'recall': 0.9584905660377359, 'f1-score': 0.955785512699906, 'support': 530.0}, 'B-AGE': {'precision': 0.8531645569620253, 'recall': 0.745575221238938, 'f1-score': 0.7957497048406139, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.8662420382165605, 'recall': 0.5112781954887218, 'f1-score': 0.6430260047281324, 'support': 266.0}, 'B-BIC': {'precision': 0.8048780487804879, 'recall': 0.868421052631579, 'f1-score': 0.8354430379746836, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.9284253578732107, 'recall': 0.9891067538126361, 'f1-score': 0.9578059071729957, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.9101123595505618, 'recall': 0.46285714285714286, 'f1-score': 0.6136363636363636, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.8403614457831325, 'recall': 0.9489795918367347, 'f1-score': 0.8913738019169329, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.8303571428571429, 'recall': 0.8823529411764706, 'f1-score': 0.8555657773689053, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.47164948453608246, 'recall': 0.5258620689655172, 'f1-score': 0.49728260869565216, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.6086956521739131, 'recall': 0.07909604519774012, 'f1-score': 0.14, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.703971119133574, 'recall': 0.7959183673469388, 'f1-score': 0.7471264367816092, 'support': 490.0}, 'B-DATE': {'precision': 0.6906645569620253, 'recall': 0.9754189944134078, 'f1-score': 0.8087077350625289, 'support': 895.0}, 'B-DOB': {'precision': 0.7727272727272727, 'recall': 0.21205821205821207, 'f1-score': 0.33278955954323003, 'support': 481.0}, 'B-EMAIL': {'precision': 0.94377990430622, 'recall': 0.9680981595092024, 'f1-score': 0.9557843731072078, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9937888198757764, 'recall': 0.9907120743034056, 'f1-score': 0.9922480620155039, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.8505747126436781, 'recall': 0.42045454545454547, 'f1-score': 0.5627376425855514, 'support': 176.0}, 'B-GENDER': {'precision': 0.8281622911694511, 'recall': 0.7351694915254238, 'f1-score': 0.7789001122334456, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.8064516129032258, 'recall': 0.2958579881656805, 'f1-score': 0.4329004329004329, 'support': 169.0}, 'B-IBAN': {'precision': 0.9655172413793104, 'recall': 0.9655172413793104, 'f1-score': 0.9655172413793104, 'support': 377.0}, 'B-IP': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 386.0}, 'B-IPV4': {'precision': 0.723463687150838, 'recall': 0.9718574108818011, 'f1-score': 0.8294635708566853, 'support': 533.0}, 'B-IPV6': {'precision': 0.6747503566333809, 'recall': 0.9497991967871486, 'f1-score': 0.7889908256880734, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.5817490494296578, 'recall': 0.5553539019963702, 'f1-score': 0.5682451253481894, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.7937685459940653, 'recall': 0.9067796610169492, 'f1-score': 0.8465189873417721, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.6797235023041475, 'recall': 0.5483271375464684, 'f1-score': 0.6069958847736625, 'support': 538.0}, 'B-LITECOINADDRESS': {'precision': 0.9848484848484849, 'recall': 0.8024691358024691, 'f1-score': 
0.8843537414965986, 'support': 162.0}, 'B-LOCATION': {'precision': 0.787662121715996, 'recall': 0.8228243877019281, 'f1-score': 0.8048594002208819, 'support': 5757.0}, 'B-MAC': {'precision': 0.9318181818181818, 'recall': 0.9403669724770642, 'f1-score': 0.9360730593607306, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.8199445983379502, 'recall': 0.751269035532995, 'f1-score': 0.7841059602649006, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 0.9910179640718563, 'recall': 0.9910179640718563, 'f1-score': 0.9910179640718563, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.7745098039215687, 'recall': 0.7596153846153846, 'f1-score': 0.7669902912621359, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.6804651162790698, 'recall': 0.6207042851081884, 'f1-score': 0.6492123363656535, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9485148514851485, 'recall': 0.9410609037328095, 'f1-score': 0.9447731755424064, 'support': 509.0}, 'B-PERSON': {'precision': 0.7832101870775036, 'recall': 0.7812450995766034, 'f1-score': 0.7822264091694143, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9712041884816754, 'recall': 0.9763157894736842, 'f1-score': 0.973753280839895, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.94921875, 'recall': 0.9681274900398407, 'f1-score': 0.9585798816568047, 'support': 502.0}, 'B-PIN': {'precision': 0.7426470588235294, 'recall': 0.6824324324324325, 'f1-score': 0.7112676056338029, 'support': 148.0}, 'B-PREFIX': {'precision': 0.874025974025974, 'recall': 0.9781976744186046, 'f1-score': 0.9231824417009602, 'support': 688.0}, 'B-SEX': {'precision': 0.9177330895795247, 'recall': 0.9653846153846154, 'f1-score': 0.9409559512652296, 'support': 520.0}, 'B-SSN': {'precision': 0.9414634146341463, 'recall': 0.908235294117647, 'f1-score': 0.9245508982035928, 'support': 425.0}, 'B-TIME': {'precision': 0.8648208469055375, 'recall': 0.9139414802065404, 'f1-score': 0.8887029288702929, 'support': 581.0}, 'B-URL': {'precision': 0.9930434782608696, 'recall': 0.9965095986038395, 'f1-score': 0.9947735191637631, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9951456310679612, 'recall': 0.9879518072289156, 'f1-score': 0.9915356711003628, 'support': 415.0}, 'B-USERNAME': {'precision': 0.8631578947368421, 'recall': 0.8677248677248677, 'f1-score': 0.8654353562005277, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9479768786127167, 'recall': 0.9879518072289156, 'f1-score': 0.967551622418879, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.8924731182795699, 'recall': 0.8469387755102041, 'f1-score': 0.8691099476439791, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9608062709966405, 'recall': 0.9907621247113164, 'f1-score': 0.9755542922114838, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.9786988110964333, 'recall': 0.9941294867494129, 'f1-score': 0.9863538026293892, 'support': 5962.0}, 'I-AGE': {'precision': 0.7705479452054794, 'recall': 0.9977827050997783, 'f1-score': 0.8695652173913043, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.8427299703264095, 'recall': 0.8999207816213362, 'f1-score': 0.8703869237645256, 'support': 3787.0}, 'I-BIC': {'precision': 0.7780133252574197, 'recall': 0.9413704653719311, 'f1-score': 0.8519316862875145, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.9502316753196003, 'recall': 0.9938691501474324, 'f1-score': 0.9715606672469641, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.897887323943662, 'recall': 0.7412790697674418, 'f1-score': 0.8121019108280255, 'support': 344.0}, 'I-CREDITCARDISSUER': 
{'precision': 0.939150401836969, 'recall': 0.9903147699757869, 'f1-score': 0.9640542133176193, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.8434155929242193, 'recall': 0.8511607405230679, 'f1-score': 0.8472704669274927, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.5812307692307692, 'recall': 0.7933641327173456, 'f1-score': 0.6709287870715681, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.5480225988700564, 'recall': 0.4724025974025974, 'f1-score': 0.5074106364428945, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.3525641025641026, 'recall': 0.11714589989350373, 'f1-score': 0.17585931254996004, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.7457044673539519, 'recall': 0.514218009478673, 'f1-score': 0.6086956521739131, 'support': 422.0}, 'I-DATE': {'precision': 0.6920248218800276, 'recall': 0.9865661861074705, 'f1-score': 0.8134540051330542, 'support': 6104.0}, 'I-DOB': {'precision': 0.8482676224611708, 'recall': 0.21515151515151515, 'f1-score': 0.34324389654338894, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.9958072719774554, 'recall': 0.9986902874474392, 'f1-score': 0.9972466960352423, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9985827290705339, 'recall': 0.9992414248021109, 'f1-score': 0.9989119683481701, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.8106060606060606, 'recall': 0.9067796610169492, 'f1-score': 0.856, 'support': 236.0}, 'I-GENDER': {'precision': 0.8952164009111617, 'recall': 0.9511899959661154, 'f1-score': 0.9223547819284178, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9008498583569405, 'recall': 0.8491321762349799, 'f1-score': 0.8742268041237113, 'support': 749.0}, 'I-IBAN': {'precision': 0.994522100491122, 'recall': 0.965818759936407, 'f1-score': 0.9799602928403027, 'support': 16354.0}, 'I-IP': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7448098903662235, 'recall': 0.9978125, 'f1-score': 0.8529451048484039, 'support': 3200.0}, 'I-IPV6': {'precision': 0.6945915898240795, 'recall': 0.9997489881083116, 'f1-score': 0.8196902654867256, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.8275862068965517, 'recall': 0.5555555555555556, 'f1-score': 0.6648199445983379, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.8740585774058578, 'recall': 0.9725325884543762, 'f1-score': 0.9206698986337594, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.8292282430213465, 'recall': 0.8844133099824869, 'f1-score': 0.8559322033898306, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9634786333940237, 'recall': 0.8476396871761048, 'f1-score': 0.9018546365914787, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.9011455032902754, 'recall': 0.9132990811184666, 'f1-score': 0.9071815884387958, 'support': 20242.0}, 'I-MAC': {'precision': 0.9953488372093023, 'recall': 0.9902581587920117, 'f1-score': 0.9927969722866561, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.795629939562994, 'recall': 0.7977067213573227, 'f1-score': 0.7966669770040034, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9950657894736842, 'recall': 1.0, 'f1-score': 0.9975267930750206, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.8087948517940717, 'recall': 0.8325805480277025, 'f1-score': 0.8205153568425738, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9758956579164603, 'recall': 0.9557765381194923, 'f1-score': 0.965731323775681, 'support': 12369.0}, 'I-PERSON': {'precision': 0.8404404217598208, 'recall': 0.89421692727724, 'f1-score': 0.8664951057023978, 'support': 20145.0}, 
'I-PHONEIMEI': {'precision': 0.988411214953271, 'recall': 1.0, 'f1-score': 0.9941718368114307, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9794856548238442, 'recall': 0.9868204283360791, 'f1-score': 0.9831393613846613, 'support': 6677.0}, 'I-PIN': {'precision': 0.7114427860696517, 'recall': 0.8125, 'f1-score': 0.7586206896551724, 'support': 528.0}, 'I-PREFIX': {'precision': 0.8421461897356143, 'recall': 0.9872379216043756, 'f1-score': 0.9089383130507763, 'support': 1097.0}, 'I-SSN': {'precision': 0.9705936748659145, 'recall': 0.9714920399851906, 'f1-score': 0.9710426496438154, 'support': 5402.0}, 'I-TIME': {'precision': 0.9503257328990228, 'recall': 0.9684647302904564, 'f1-score': 0.9593094944512947, 'support': 1205.0}, 'I-URL': {'precision': 0.9833176375622226, 'recall': 1.0, 'f1-score': 0.9915886582553249, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.998042769119246, 'recall': 0.9998547567175018, 'f1-score': 0.9989479412298204, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.921511154535092, 'recall': 0.9254292722812756, 'f1-score': 0.9234660574412533, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9531275788083842, 'recall': 0.9941470132552935, 'f1-score': 0.9732052578361982, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.9395239953179867, 'recall': 0.9632, 'f1-score': 0.9512146948449536, 'support': 2500.0}, 'O': {'precision': 0.9868025619112791, 'recall': 0.9750779010345257, 'f1-score': 0.9809051968020163, 'support': 320920.0}, 'accuracy': 0.9312738899911446, 'macro avg': {'precision': 0.826287971168476, 'recall': 0.8120426616041074, 'f1-score': 0.8062362693668675, 'support': 713688.0}, 'weighted avg': {'precision': 0.9172739569827054, 'recall': 0.9312738899911446, 'f1-score': 0.9211758912357131, 'support': 713688.0}} |
| 0.1683 | 4.0 | 6436 | 0.1721 | 0.7294 | 0.7739 | 0.7510 | 0.9376 | {'B-ACCOUNTNAME': {'precision': 0.9851576994434137, 'recall': 0.9962476547842402, 'f1-score': 0.9906716417910447, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.967680608365019, 'recall': 0.960377358490566, 'f1-score': 0.9640151515151515, 'support': 530.0}, 'B-AGE': {'precision': 0.931129476584022, 'recall': 0.7477876106194691, 'f1-score': 0.8294478527607362, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.9111111111111111, 'recall': 0.6165413533834586, 'f1-score': 0.7354260089686099, 'support': 266.0}, 'B-BIC': {'precision': 0.8493975903614458, 'recall': 0.9276315789473685, 'f1-score': 0.8867924528301887, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.9494736842105264, 'recall': 0.9825708061002179, 'f1-score': 0.9657387580299786, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.9494949494949495, 'recall': 0.5371428571428571, 'f1-score': 0.6861313868613139, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.9064516129032258, 'recall': 0.95578231292517, 'f1-score': 0.9304635761589404, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.8, 'recall': 0.9411764705882353, 'f1-score': 0.8648648648648649, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.5605633802816902, 'recall': 0.5718390804597702, 'f1-score': 0.566145092460882, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.7105263157894737, 'recall': 0.3050847457627119, 'f1-score': 0.4268774703557312, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.375, 'recall': 0.01935483870967742, 'f1-score': 0.03680981595092025, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.8546255506607929, 'recall': 0.7918367346938775, 'f1-score': 0.8220338983050848, 'support': 490.0}, 'B-DATE': {'precision': 0.7255892255892256, 'recall': 0.9631284916201117, 'f1-score': 0.8276524243879021, 'support': 895.0}, 'B-DOB': {'precision': 0.8181818181818182, 'recall': 0.3180873180873181, 'f1-score': 0.45808383233532934, 'support': 481.0}, 'B-EMAIL': {'precision': 0.9682926829268292, 'recall': 0.9742331288343559, 'f1-score': 0.9712538226299694, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9938271604938271, 'recall': 0.9969040247678018, 'f1-score': 0.9953632148377125, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.8416666666666667, 'recall': 0.5738636363636364, 'f1-score': 0.6824324324324325, 'support': 176.0}, 'B-GENDER': {'precision': 0.9184149184149184, 'recall': 0.8347457627118644, 'f1-score': 0.8745837957824639, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.9019607843137255, 'recall': 0.8165680473372781, 'f1-score': 0.8571428571428571, 'support': 169.0}, 'B-IBAN': {'precision': 0.9757412398921833, 'recall': 0.9602122015915119, 'f1-score': 0.9679144385026738, 'support': 377.0}, 'B-IP': {'precision': 0.5833333333333334, 'recall': 0.018134715025906734, 'f1-score': 0.035175879396984924, 'support': 386.0}, 'B-IPV4': {'precision': 0.7467994310099573, 'recall': 0.9849906191369606, 'f1-score': 0.8495145631067961, 'support': 533.0}, 'B-IPV6': {'precision': 0.685131195335277, 'recall': 0.9437751004016064, 'f1-score': 0.793918918918919, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.7174887892376681, 'recall': 0.5807622504537205, 'f1-score': 0.641925777331996, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.9003267973856209, 'recall': 0.9338983050847458, 'f1-score': 0.9168053244592346, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.7481751824817519, 'recall': 0.7620817843866171, 'f1-score': 0.7550644567219152, 'support': 538.0}, 'B-LITECOINADDRESS': 
{'precision': 0.9586206896551724, 'recall': 0.8580246913580247, 'f1-score': 0.9055374592833876, 'support': 162.0}, 'B-LOCATION': {'precision': 0.8343960020678959, 'recall': 0.8410630536737884, 'f1-score': 0.8377162629757785, 'support': 5757.0}, 'B-MAC': {'precision': 0.9534883720930233, 'recall': 0.9403669724770642, 'f1-score': 0.9468822170900693, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.8929765886287625, 'recall': 0.6776649746192893, 'f1-score': 0.7705627705627706, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 1.0, 'recall': 0.9970059880239521, 'f1-score': 0.9985007496251874, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.8301886792452831, 'recall': 0.8461538461538461, 'f1-score': 0.8380952380952381, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.6836692719045661, 'recall': 0.7051336444633008, 'f1-score': 0.6942355889724311, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9566929133858267, 'recall': 0.9548133595284872, 'f1-score': 0.9557522123893806, 'support': 509.0}, 'B-PERSON': {'precision': 0.863457927974057, 'recall': 0.7933197428257801, 'f1-score': 0.8269042170644001, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9893899204244032, 'recall': 0.9815789473684211, 'f1-score': 0.9854689564068693, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.9757085020242915, 'recall': 0.9601593625498008, 'f1-score': 0.9678714859437751, 'support': 502.0}, 'B-PIN': {'precision': 0.7651515151515151, 'recall': 0.6824324324324325, 'f1-score': 0.7214285714285714, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8882978723404256, 'recall': 0.9709302325581395, 'f1-score': 0.9277777777777778, 'support': 688.0}, 'B-SEX': {'precision': 0.9459962756052142, 'recall': 0.9769230769230769, 'f1-score': 0.9612109744560076, 'support': 520.0}, 'B-SSN': {'precision': 0.9649122807017544, 'recall': 0.9058823529411765, 'f1-score': 0.9344660194174758, 'support': 425.0}, 'B-TIME': {'precision': 0.9230769230769231, 'recall': 0.9294320137693631, 'f1-score': 0.9262435677530018, 'support': 581.0}, 'B-URL': {'precision': 0.9930555555555556, 'recall': 0.9982547993019197, 'f1-score': 0.9956483899042646, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9975786924939467, 'recall': 0.9927710843373494, 'f1-score': 0.9951690821256038, 'support': 415.0}, 'B-USERNAME': {'precision': 0.9316888045540797, 'recall': 0.8659611992945326, 'f1-score': 0.8976234003656307, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9111111111111111, 'recall': 0.9879518072289156, 'f1-score': 0.9479768786127167, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.9238578680203046, 'recall': 0.9285714285714286, 'f1-score': 0.926208651399491, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9673055242390078, 'recall': 0.9907621247113164, 'f1-score': 0.9788933257273246, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.9841930116472546, 'recall': 0.9921167393492116, 'f1-score': 0.9881389909789509, 'support': 5962.0}, 'I-AGE': {'precision': 0.8068592057761733, 'recall': 0.991130820399113, 'f1-score': 0.8895522388059701, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.8951387121404938, 'recall': 0.9287034592025349, 'f1-score': 0.9116122343182996, 'support': 3787.0}, 'I-BIC': {'precision': 0.8106228264306039, 'recall': 0.9395382924148039, 'f1-score': 0.8703326544467074, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.9639191742985899, 'recall': 0.967915219104896, 'f1-score': 0.9659130637454842, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.9212328767123288, 'recall': 0.7819767441860465, 
'f1-score': 0.8459119496855346, 'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.9760479041916168, 'recall': 0.986682808716707, 'f1-score': 0.9813365442504516, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.8006708491210635, 'recall': 0.9469585659712019, 'f1-score': 0.8676921005688129, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.6248785228377065, 'recall': 0.8101637967240655, 'f1-score': 0.7055596196049744, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.6521739130434783, 'recall': 0.560064935064935, 'f1-score': 0.6026200873362445, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.5, 'recall': 0.13099041533546327, 'f1-score': 0.20759493670886076, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.8102409638554217, 'recall': 0.6374407582938388, 'f1-score': 0.713527851458886, 'support': 422.0}, 'I-DATE': {'precision': 0.7221130221130221, 'recall': 0.9629750982961992, 'f1-score': 0.8253299634934007, 'support': 6104.0}, 'I-DOB': {'precision': 0.8082089552238806, 'recall': 0.3281818181818182, 'f1-score': 0.4668103448275862, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.996767982395819, 'recall': 0.9991728131246984, 'f1-score': 0.9979689490171779, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9967763157894737, 'recall': 0.9994063324538258, 'f1-score': 0.9980895915678525, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.8246268656716418, 'recall': 0.9364406779661016, 'f1-score': 0.876984126984127, 'support': 236.0}, 'I-GENDER': {'precision': 0.9495631453534551, 'recall': 0.9645018152480839, 'f1-score': 0.9569741845107064, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9162303664921466, 'recall': 0.9345794392523364, 'f1-score': 0.9253139458030403, 'support': 749.0}, 'I-IBAN': {'precision': 0.9875510962467484, 'recall': 0.9749908279320044, 'f1-score': 0.9812307692307692, 'support': 16354.0}, 'I-IP': {'precision': 0.3125, 'recall': 0.000334179922470258, 'f1-score': 0.0006676458806249165, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7474275023386342, 'recall': 0.99875, 'f1-score': 0.8550026752273944, 'support': 3200.0}, 'I-IPV6': {'precision': 0.6950114616308263, 'recall': 0.9988704464874023, 'f1-score': 0.8196869045779906, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.8402061855670103, 'recall': 0.7546296296296297, 'f1-score': 0.7951219512195122, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.932396839332748, 'recall': 0.9888268156424581, 'f1-score': 0.9597830998644374, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.9480286738351255, 'recall': 0.9264448336252189, 'f1-score': 0.937112488928255, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.8935382863649303, 'recall': 0.8873080184679167, 'f1-score': 0.8904122541603631, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.8993537350313628, 'recall': 0.9349866613970952, 'f1-score': 0.9168241050234947, 'support': 20242.0}, 'I-MAC': {'precision': 0.9914965986394558, 'recall': 0.9939113492450073, 'f1-score': 0.9927025054731209, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.9146597505248858, 'recall': 0.6904073832385569, 'f1-score': 0.7868678283042924, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9988992845349477, 'recall': 1.0, 'f1-score': 0.9994493392070485, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.7874757794609829, 'recall': 0.8974204556860383, 'f1-score': 0.8388610029553877, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9740154283394235, 'recall': 0.9697631174710971, 'f1-score': 0.971884621617242, 'support': 12369.0}, 'I-PERSON': 
{'precision': 0.8961850526634078, 'recall': 0.8827500620501365, 'f1-score': 0.8894168250475143, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.992862509391435, 'recall': 0.9996217851739788, 'f1-score': 0.9962306822465133, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9845042876485632, 'recall': 0.9800808746443014, 'f1-score': 0.9822876013209246, 'support': 6677.0}, 'I-PIN': {'precision': 0.7775467775467776, 'recall': 0.7083333333333334, 'f1-score': 0.7413280475718533, 'support': 528.0}, 'I-PREFIX': {'precision': 0.8614020950846092, 'recall': 0.9744758432087511, 'f1-score': 0.9144568006843456, 'support': 1097.0}, 'I-SSN': {'precision': 0.9774127310061602, 'recall': 0.9692706405035172, 'f1-score': 0.9733246584255042, 'support': 5402.0}, 'I-TIME': {'precision': 0.9649551752241239, 'recall': 0.9825726141078838, 'f1-score': 0.9736842105263158, 'support': 1205.0}, 'I-URL': {'precision': 0.9944187312823305, 'recall': 0.999452729511561, 'f1-score': 0.9969293756397134, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.9988385176581612, 'recall': 0.9992374727668846, 'f1-score': 0.999037955382912, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.9533948030176027, 'recall': 0.9300081766148814, 'f1-score': 0.9415562913907285, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9313757300454251, 'recall': 0.9882940265105871, 'f1-score': 0.9589910632255909, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.9545983701979045, 'recall': 0.984, 'f1-score': 0.9690762261177861, 'support': 2500.0}, 'O': {'precision': 0.9875282096288867, 'recall': 0.9817431135485479, 'f1-score': 0.9846271641977623, 'support': 320920.0}, 'accuracy': 0.9376226025938506, 'macro avg': {'precision': 0.869305376000613, 'recall': 0.8420562196160797, 'f1-score': 0.8414341155555606, 'support': 713688.0}, 'weighted avg': {'precision': 0.9303649774455068, 'recall': 0.9376226025938506, 'f1-score': 0.9277568423694396, 'support': 713688.0}} |
| 0.1398 | 5.0 | 8045 | 0.1575 | 0.7381 | 0.7963 | 0.7661 | 0.9430 | {'B-ACCOUNTNAME': {'precision': 0.9851576994434137, 'recall': 0.9962476547842402, 'f1-score': 0.9906716417910447, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.9694072657743786, 'recall': 0.9566037735849057, 'f1-score': 0.9629629629629629, 'support': 530.0}, 'B-AGE': {'precision': 0.9201030927835051, 'recall': 0.7898230088495575, 'f1-score': 0.85, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.8904761904761904, 'recall': 0.7030075187969925, 'f1-score': 0.7857142857142857, 'support': 266.0}, 'B-BIC': {'precision': 0.9050632911392406, 'recall': 0.9407894736842105, 'f1-score': 0.9225806451612903, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.9458333333333333, 'recall': 0.9891067538126361, 'f1-score': 0.966986155484558, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.9391304347826087, 'recall': 0.6171428571428571, 'f1-score': 0.7448275862068966, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.9073482428115016, 'recall': 0.9659863945578231, 'f1-score': 0.9357495881383855, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.8383658969804618, 'recall': 0.8956356736242884, 'f1-score': 0.8660550458715597, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.5738161559888579, 'recall': 0.5919540229885057, 'f1-score': 0.5827439886845828, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.6923076923076923, 'recall': 0.3559322033898305, 'f1-score': 0.4701492537313433, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.5454545454545454, 'recall': 0.03870967741935484, 'f1-score': 0.07228915662650602, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.8405797101449275, 'recall': 0.8285714285714286, 'f1-score': 0.8345323741007195, 'support': 490.0}, 'B-DATE': {'precision': 0.7480314960629921, 'recall': 0.9553072625698324, 'f1-score': 0.8390578999018645, 'support': 895.0}, 'B-DOB': {'precision': 0.7872340425531915, 'recall': 0.38461538461538464, 'f1-score': 0.5167597765363129, 'support': 481.0}, 'B-EMAIL': {'precision': 0.9695493300852619, 'recall': 0.9766871165644172, 'f1-score': 0.9731051344743277, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9969040247678018, 'recall': 0.9969040247678018, 'f1-score': 0.9969040247678018, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.8533333333333334, 'recall': 0.7272727272727273, 'f1-score': 0.7852760736196319, 'support': 176.0}, 'B-GENDER': {'precision': 0.9242761692650334, 'recall': 0.8792372881355932, 'f1-score': 0.9011943539630836, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.9197530864197531, 'recall': 0.8816568047337278, 'f1-score': 0.9003021148036254, 'support': 169.0}, 'B-IBAN': {'precision': 0.9787798408488063, 'recall': 0.9787798408488063, 'f1-score': 0.9787798408488063, 'support': 377.0}, 'B-IP': {'precision': 0.4594594594594595, 'recall': 0.04404145077720207, 'f1-score': 0.08037825059101655, 'support': 386.0}, 'B-IPV4': {'precision': 0.7464183381088825, 'recall': 0.9774859287054409, 'f1-score': 0.8464662875710804, 'support': 533.0}, 'B-IPV6': {'precision': 0.6929955290611028, 'recall': 0.9337349397590361, 'f1-score': 0.795551753635586, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.7391304347826086, 'recall': 0.647912885662432, 'f1-score': 0.690522243713733, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.9102773246329527, 'recall': 0.9457627118644067, 'f1-score': 0.9276807980049875, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.7982300884955752, 'recall': 0.8382899628252788, 'f1-score': 0.8177697189483227, 'support': 538.0}, 
'B-LITECOINADDRESS': {'precision': 0.9724137931034482, 'recall': 0.8703703703703703, 'f1-score': 0.9185667752442996, 'support': 162.0}, 'B-LOCATION': {'precision': 0.8737446197991392, 'recall': 0.84627410109432, 'f1-score': 0.8597899938233478, 'support': 5757.0}, 'B-MAC': {'precision': 0.9490740740740741, 'recall': 0.9403669724770642, 'f1-score': 0.9447004608294931, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.840782122905028, 'recall': 0.7639593908629442, 'f1-score': 0.800531914893617, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 1.0, 'recall': 0.9970059880239521, 'f1-score': 0.9985007496251874, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.8440366972477065, 'recall': 0.8846153846153846, 'f1-score': 0.863849765258216, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.7189097103918228, 'recall': 0.7161646160373356, 'f1-score': 0.7175345377258235, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9722222222222222, 'recall': 0.962671905697446, 'f1-score': 0.9674234945705824, 'support': 509.0}, 'B-PERSON': {'precision': 0.8415040456925273, 'recall': 0.8317390622549788, 'f1-score': 0.8365930599369085, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9868073878627969, 'recall': 0.9842105263157894, 'f1-score': 0.9855072463768116, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.9684418145956607, 'recall': 0.9780876494023905, 'f1-score': 0.9732408325074331, 'support': 502.0}, 'B-PIN': {'precision': 0.73125, 'recall': 0.7905405405405406, 'f1-score': 0.7597402597402597, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8871181938911022, 'recall': 0.9709302325581395, 'f1-score': 0.9271339347675226, 'support': 688.0}, 'B-SEX': {'precision': 0.9551401869158879, 'recall': 0.9826923076923076, 'f1-score': 0.9687203791469194, 'support': 520.0}, 'B-SSN': {'precision': 0.9528301886792453, 'recall': 0.9505882352941176, 'f1-score': 0.9517078916372202, 'support': 425.0}, 'B-TIME': {'precision': 0.9272419627749577, 'recall': 0.9432013769363167, 'f1-score': 0.9351535836177475, 'support': 581.0}, 'B-URL': {'precision': 0.9930555555555556, 'recall': 0.9982547993019197, 'f1-score': 0.9956483899042646, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9951690821256038, 'recall': 0.9927710843373494, 'f1-score': 0.9939686369119421, 'support': 415.0}, 'B-USERNAME': {'precision': 0.9230769230769231, 'recall': 0.91005291005291, 'f1-score': 0.91651865008881, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9647058823529412, 'recall': 0.9879518072289156, 'f1-score': 0.9761904761904762, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.968421052631579, 'recall': 0.9387755102040817, 'f1-score': 0.9533678756476683, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9727891156462585, 'recall': 0.9907621247113164, 'f1-score': 0.9816933638443935, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.9899244332493703, 'recall': 0.9887621603488762, 'f1-score': 0.9893429554418058, 'support': 5962.0}, 'I-AGE': {'precision': 0.8240740740740741, 'recall': 0.9866962305986696, 'f1-score': 0.8980827447023209, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.9198533263488737, 'recall': 0.9273831528914708, 'f1-score': 0.923602892833662, 'support': 3787.0}, 'I-BIC': {'precision': 0.895774647887324, 'recall': 0.9322096005862953, 'f1-score': 0.9136290177769797, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.9602942429492459, 'recall': 0.9870960207865005, 'f1-score': 0.9735106964959258, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.9358108108108109, 'recall': 0.8052325581395349, 
'f1-score': 0.865625, 'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.9819711538461539, 'recall': 0.9891041162227603, 'f1-score': 0.985524728588661, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.8494111048794167, 'recall': 0.8900969732588893, 'f1-score': 0.8692782321710432, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.6404494382022472, 'recall': 0.8378832423351533, 'f1-score': 0.7259825327510917, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.6697247706422018, 'recall': 0.5925324675324676, 'f1-score': 0.6287683031869078, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.6375838926174496, 'recall': 0.10117145899893504, 'f1-score': 0.17463235294117646, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.7989690721649485, 'recall': 0.7345971563981043, 'f1-score': 0.7654320987654321, 'support': 422.0}, 'I-DATE': {'precision': 0.7301051051051051, 'recall': 0.9559305373525557, 'f1-score': 0.827894438138479, 'support': 6104.0}, 'I-DOB': {'precision': 0.790994623655914, 'recall': 0.3566666666666667, 'f1-score': 0.49164578111946533, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.9975913564104328, 'recall': 0.9992417453643069, 'f1-score': 0.9984158688614918, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9983532589006356, 'recall': 0.999769129287599, 'f1-score': 0.9990606924509335, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.8648648648648649, 'recall': 0.9491525423728814, 'f1-score': 0.9050505050505051, 'support': 236.0}, 'I-GENDER': {'precision': 0.9537436299490396, 'recall': 0.9814441306978621, 'f1-score': 0.9673956262425447, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9461942257217848, 'recall': 0.9626168224299065, 'f1-score': 0.9543348775645268, 'support': 749.0}, 'I-IBAN': {'precision': 0.9889188196400147, 'recall': 0.9877094288858995, 'f1-score': 0.9883137542829172, 'support': 16354.0}, 'I-IP': {'precision': 0.5290163259706407, 'recall': 0.25771955620906295, 'f1-score': 0.3465911644420476, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7541846006695361, 'recall': 0.985625, 'f1-score': 0.854510972636142, 'support': 3200.0}, 'I-IPV6': {'precision': 0.7371911101901258, 'recall': 0.8929748046813718, 'f1-score': 0.8076393717099195, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.8333333333333334, 'recall': 0.8101851851851852, 'f1-score': 0.8215962441314554, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.9379947229551451, 'recall': 0.9930167597765364, 'f1-score': 0.9647218453188603, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.9163822525597269, 'recall': 0.9404553415061296, 'f1-score': 0.9282627484874676, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9502909053791977, 'recall': 0.8772260435315179, 'f1-score': 0.912297893189613, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.9360191942417275, 'recall': 0.925106214800909, 'f1-score': 0.930530709600477, 'support': 20242.0}, 'I-MAC': {'precision': 0.9927078269324259, 'recall': 0.9946419873356064, 'f1-score': 0.9936739659367396, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.8489172737251771, 'recall': 0.7930455859047264, 'f1-score': 0.8200308463466358, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9972527472527473, 'recall': 1.0, 'f1-score': 0.9986244841815681, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.8537132819361765, 'recall': 0.8780487804878049, 'f1-score': 0.8657100445324097, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9835999025736787, 'recall': 0.9794647910097826, 'f1-score': 0.9815279915741716, 'support': 12369.0}, 
'I-PERSON': {'precision': 0.8619437293122472, 'recall': 0.9307024075452967, 'f1-score': 0.895004415590615, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.9936114242765878, 'recall': 1.0, 'f1-score': 0.9967954759660698, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9774069698759599, 'recall': 0.9913134641305976, 'f1-score': 0.9843111011971151, 'support': 6677.0}, 'I-PIN': {'precision': 0.6990740740740741, 'recall': 0.8579545454545454, 'f1-score': 0.7704081632653061, 'support': 528.0}, 'I-PREFIX': {'precision': 0.8605577689243028, 'recall': 0.9845031905195989, 'f1-score': 0.9183673469387755, 'support': 1097.0}, 'I-SSN': {'precision': 0.9766028002947679, 'recall': 0.9813032210292484, 'f1-score': 0.9789473684210527, 'support': 5402.0}, 'I-TIME': {'precision': 0.9691558441558441, 'recall': 0.9908713692946058, 'f1-score': 0.9798933114485022, 'support': 1205.0}, 'I-URL': {'precision': 0.9929338225302351, 'recall': 0.9997263647557806, 'f1-score': 0.9963185164985001, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.9984048145596925, 'recall': 0.9999636891793755, 'f1-score': 0.9991836438510241, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.9526885192959793, 'recall': 0.9648405560098119, 'f1-score': 0.9587260318492038, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9743546482200102, 'recall': 0.9941470132552935, 'f1-score': 0.9841513292433538, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.9772273272073512, 'recall': 0.9784, 'f1-score': 0.9778133120127923, 'support': 2500.0}, 'O': {'precision': 0.9894668122983441, 'recall': 0.9823507416178486, 'f1-score': 0.985895936403495, 'support': 320920.0}, 'accuracy': 0.9429666745132327, 'macro avg': {'precision': 0.8798398901632298, 'recall': 0.8633480886388905, 'f1-score': 0.8606121111819207, 'support': 713688.0}, 'weighted avg': {'precision': 0.9409699641727243, 'recall': 0.9429666745132327, 'f1-score': 0.93937824960725, 'support': 713688.0}} |
| 0.1224 | 6.0 | 9654 | 0.1572 | 0.7413 | 0.8012 | 0.7701 | 0.9433 | {'B-ACCOUNTNAME': {'precision': 0.9833333333333333, 'recall': 0.9962476547842402, 'f1-score': 0.9897483690587139, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.9695238095238096, 'recall': 0.960377358490566, 'f1-score': 0.9649289099526066, 'support': 530.0}, 'B-AGE': {'precision': 0.9133663366336634, 'recall': 0.8163716814159292, 'f1-score': 0.8621495327102804, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.9108910891089109, 'recall': 0.6917293233082706, 'f1-score': 0.7863247863247863, 'support': 266.0}, 'B-BIC': {'precision': 0.8987341772151899, 'recall': 0.9342105263157895, 'f1-score': 0.9161290322580645, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.9475890985324947, 'recall': 0.9847494553376906, 'f1-score': 0.9658119658119658, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.9186991869918699, 'recall': 0.6457142857142857, 'f1-score': 0.7583892617449665, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.9163987138263665, 'recall': 0.9693877551020408, 'f1-score': 0.9421487603305785, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.827054794520548, 'recall': 0.9165085388994307, 'f1-score': 0.8694869486948695, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.5887096774193549, 'recall': 0.6293103448275862, 'f1-score': 0.6083333333333333, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.6754385964912281, 'recall': 0.4350282485875706, 'f1-score': 0.5292096219931272, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.5, 'recall': 0.08387096774193549, 'f1-score': 0.143646408839779, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.8604651162790697, 'recall': 0.8306122448979592, 'f1-score': 0.8452751817237798, 'support': 490.0}, 'B-DATE': {'precision': 0.7462817147856518, 'recall': 0.9530726256983241, 'f1-score': 0.8370951913640824, 'support': 895.0}, 'B-DOB': {'precision': 0.7791666666666667, 'recall': 0.3887733887733888, 'f1-score': 0.5187239944521498, 'support': 481.0}, 'B-EMAIL': {'precision': 0.9718482252141983, 'recall': 0.9742331288343559, 'f1-score': 0.9730392156862745, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9938271604938271, 'recall': 0.9969040247678018, 'f1-score': 0.9953632148377125, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.8881578947368421, 'recall': 0.7670454545454546, 'f1-score': 0.823170731707317, 'support': 176.0}, 'B-GENDER': {'precision': 0.9295154185022027, 'recall': 0.8940677966101694, 'f1-score': 0.9114470842332614, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.9221556886227545, 'recall': 0.9112426035502958, 'f1-score': 0.9166666666666666, 'support': 169.0}, 'B-IBAN': {'precision': 0.9788359788359788, 'recall': 0.9814323607427056, 'f1-score': 0.9801324503311258, 'support': 377.0}, 'B-IP': {'precision': 0.46153846153846156, 'recall': 0.06217616580310881, 'f1-score': 0.1095890410958904, 'support': 386.0}, 'B-IPV4': {'precision': 0.7432239657631954, 'recall': 0.9774859287054409, 'f1-score': 0.8444084278768234, 'support': 533.0}, 'B-IPV6': {'precision': 0.6947368421052632, 'recall': 0.927710843373494, 'f1-score': 0.7944969905417025, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.7265774378585086, 'recall': 0.6896551724137931, 'f1-score': 0.707635009310987, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.9020866773675762, 'recall': 0.9525423728813559, 'f1-score': 0.9266281945589447, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.8084358523725835, 'recall': 0.8550185873605948, 'f1-score': 0.8310749774164409, 'support': 538.0}, 
'B-LITECOINADDRESS': {'precision': 0.9659863945578231, 'recall': 0.8765432098765432, 'f1-score': 0.919093851132686, 'support': 162.0}, 'B-LOCATION': {'precision': 0.8701642819289878, 'recall': 0.8556539864512767, 'f1-score': 0.8628481345244351, 'support': 5757.0}, 'B-MAC': {'precision': 0.9497716894977168, 'recall': 0.9541284403669725, 'f1-score': 0.9519450800915332, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.8656716417910447, 'recall': 0.7360406091370558, 'f1-score': 0.7956104252400549, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 1.0, 'recall': 0.9970059880239521, 'f1-score': 0.9985007496251874, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.8532110091743119, 'recall': 0.8942307692307693, 'f1-score': 0.8732394366197183, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.7381056307289393, 'recall': 0.7174374204497243, 'f1-score': 0.7276247848537005, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9702970297029703, 'recall': 0.962671905697446, 'f1-score': 0.9664694280078896, 'support': 509.0}, 'B-PERSON': {'precision': 0.8583856935717739, 'recall': 0.8355025874235534, 'f1-score': 0.8467895740623014, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9868073878627969, 'recall': 0.9842105263157894, 'f1-score': 0.9855072463768116, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.9647058823529412, 'recall': 0.9800796812749004, 'f1-score': 0.9723320158102767, 'support': 502.0}, 'B-PIN': {'precision': 0.7631578947368421, 'recall': 0.7837837837837838, 'f1-score': 0.7733333333333333, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8800521512385919, 'recall': 0.9811046511627907, 'f1-score': 0.9278350515463918, 'support': 688.0}, 'B-SEX': {'precision': 0.9587242026266416, 'recall': 0.9826923076923076, 'f1-score': 0.9705603038936372, 'support': 520.0}, 'B-SSN': {'precision': 0.9507042253521126, 'recall': 0.9529411764705882, 'f1-score': 0.9518213866039953, 'support': 425.0}, 'B-TIME': {'precision': 0.9319727891156463, 'recall': 0.9432013769363167, 'f1-score': 0.9375534644995723, 'support': 581.0}, 'B-URL': {'precision': 0.9930555555555556, 'recall': 0.9982547993019197, 'f1-score': 0.9956483899042646, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9951690821256038, 'recall': 0.9927710843373494, 'f1-score': 0.9939686369119421, 'support': 415.0}, 'B-USERNAME': {'precision': 0.9203539823008849, 'recall': 0.9171075837742504, 'f1-score': 0.9187279151943463, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9704142011834319, 'recall': 0.9879518072289156, 'f1-score': 0.9791044776119403, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.9585492227979274, 'recall': 0.9438775510204082, 'f1-score': 0.9511568123393316, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9683972911963883, 'recall': 0.9907621247113164, 'f1-score': 0.9794520547945206, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.9874560963371801, 'recall': 0.9902717208990272, 'f1-score': 0.9888619043631187, 'support': 5962.0}, 'I-AGE': {'precision': 0.8314606741573034, 'recall': 0.9844789356984479, 'f1-score': 0.9015228426395939, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.9181771103055412, 'recall': 0.9363612358067072, 'f1-score': 0.9271800235324879, 'support': 3787.0}, 'I-BIC': {'precision': 0.8971631205673759, 'recall': 0.9270795163063393, 'f1-score': 0.9118760136961614, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.9611269208878771, 'recall': 0.9860158234315243, 'f1-score': 0.9734123038346807, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.9364548494983278, 'recall': 
0.813953488372093, 'f1-score': 0.8709175738724728, 'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.9808612440191388, 'recall': 0.9927360774818402, 'f1-score': 0.98676293622142, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.8299468791500664, 'recall': 0.9182339112547752, 'f1-score': 0.8718610491071429, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.6748278500382555, 'recall': 0.740865182696346, 'f1-score': 0.7063063063063063, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.637987012987013, 'recall': 0.637987012987013, 'f1-score': 0.637987012987013, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.46060606060606063, 'recall': 0.32374866879659214, 'f1-score': 0.3802376485303315, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.8227513227513228, 'recall': 0.7369668246445498, 'f1-score': 0.7775, 'support': 422.0}, 'I-DATE': {'precision': 0.7552550542241264, 'recall': 0.9241480996068152, 'f1-score': 0.8312090179031901, 'support': 6104.0}, 'I-DOB': {'precision': 0.7493784186971656, 'recall': 0.45666666666666667, 'f1-score': 0.5675014121634344, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.9976601747987062, 'recall': 0.9993106776039153, 'f1-score': 0.9984847441283835, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9971705872676427, 'recall': 0.999637203166227, 'f1-score': 0.9984023717368031, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.8764478764478765, 'recall': 0.961864406779661, 'f1-score': 0.9171717171717172, 'support': 236.0}, 'I-GENDER': {'precision': 0.9548311076197957, 'recall': 0.9806373537716822, 'f1-score': 0.9675621890547264, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9332477535301669, 'recall': 0.9706275033377837, 'f1-score': 0.9515706806282722, 'support': 749.0}, 'I-IBAN': {'precision': 0.9920039365235577, 'recall': 0.9861807508866333, 'f1-score': 0.9890837728443518, 'support': 16354.0}, 'I-IP': {'precision': 0.49273447820343463, 'recall': 0.3739473332442187, 'f1-score': 0.42520044077972413, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7541926209870627, 'recall': 0.98375, 'f1-score': 0.8538106861947383, 'support': 3200.0}, 'I-IPV6': {'precision': 0.7565486700049205, 'recall': 0.8201186031188228, 'f1-score': 0.7870520927431497, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.8258928571428571, 'recall': 0.8564814814814815, 'f1-score': 0.8409090909090909, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.9352864013992129, 'recall': 0.9958100558659218, 'f1-score': 0.9645997745208568, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.928082191780822, 'recall': 0.9492119089316988, 'f1-score': 0.9385281385281385, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9491078669910786, 'recall': 0.882125694902478, 'f1-score': 0.9143917566049714, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.935253227408143, 'recall': 0.9305404604288113, 'f1-score': 0.9328908919815759, 'support': 20242.0}, 'I-MAC': {'precision': 0.9934322549258088, 'recall': 0.9946419873356064, 'f1-score': 0.9940367530728976, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.8802466464733881, 'recall': 0.7585531835555142, 'f1-score': 0.8148815782885184, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9985329176600036, 'recall': 1.0, 'f1-score': 0.9992659203523583, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.856851510148587, 'recall': 0.8855766335441132, 'f1-score': 0.8709772951628826, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9827474485663372, 'recall': 0.9809200420405854, 'f1-score': 0.9818328950030346, 'support': 
12369.0}, 'I-PERSON': {'precision': 0.8926155035762614, 'recall': 0.9168528170761976, 'f1-score': 0.9045718343658937, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.9931120851596744, 'recall': 0.9997478567826525, 'f1-score': 0.9964189231639128, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9784533648170012, 'recall': 0.9929609105885877, 'f1-score': 0.9856537575262023, 'support': 6677.0}, 'I-PIN': {'precision': 0.7491694352159468, 'recall': 0.8541666666666666, 'f1-score': 0.7982300884955752, 'support': 528.0}, 'I-PREFIX': {'precision': 0.8575924468922108, 'recall': 0.9936189608021878, 'f1-score': 0.9206081081081081, 'support': 1097.0}, 'I-SSN': {'precision': 0.9718875502008032, 'recall': 0.9855609033691225, 'f1-score': 0.9786764705882353, 'support': 5402.0}, 'I-TIME': {'precision': 0.9683184402924452, 'recall': 0.9892116182572614, 'f1-score': 0.9786535303776683, 'support': 1205.0}, 'I-URL': {'precision': 0.9914529914529915, 'recall': 0.9998631823778903, 'f1-score': 0.9956403269754769, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.998585829284212, 'recall': 0.9999636891793755, 'f1-score': 0.9992742842628543, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.944710770214126, 'recall': 0.9668029435813573, 'f1-score': 0.9556291925967834, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9745319615449486, 'recall': 0.9946634532621794, 'f1-score': 0.9844948032032714, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.9772364217252396, 'recall': 0.9788, 'f1-score': 0.978017585931255, 'support': 2500.0}, 'O': {'precision': 0.9892495017985387, 'recall': 0.9837872366945033, 'f1-score': 0.986510808226626, 'support': 320920.0}, 'accuracy': 0.9433351828810348, 'macro avg': {'precision': 0.8793777747202005, 'recall': 0.8718829567729829, 'f1-score': 0.8691194147844159, 'support': 713688.0}, 'weighted avg': {'precision': 0.9419843792518461, 'recall': 0.9433351828810348, 'f1-score': 0.941698874575059, 'support': 713688.0}} |
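The per-label entries in the "Classification Report" column above appear to follow the nested dictionary layout produced by scikit-learn's `classification_report` with `output_dict=True` (keys `precision`, `recall`, `f1-score`, `support`, plus `accuracy`, `macro avg`, and `weighted avg`) computed over the flattened BIO tags of the evaluation split. Below is a minimal sketch of how such a report can be generated; the tag lists are illustrative placeholders, not the card's actual predictions.

```python
from sklearn.metrics import classification_report

# Illustrative placeholder data: flattened gold and predicted BIO tags
# for an evaluation split (not the card's actual labels).
y_true = ["O", "B-EMAIL", "I-EMAIL", "O", "B-PERSON", "I-PERSON"]
y_pred = ["O", "B-EMAIL", "I-EMAIL", "O", "B-PERSON", "O"]

# output_dict=True yields the nested structure seen in the table above:
# {'LABEL': {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...},
#  'accuracy': ..., 'macro avg': {...}, 'weighted avg': {...}}
report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)

for label, scores in report.items():
    print(label, scores)
```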
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
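
A minimal inference sketch with the Transformers token-classification pipeline is shown below, assuming the checkpoint is published under the repository id listed for this card (`omshikhare/dbert-pii-detection-model`); the aggregation strategy and the example sentence are assumptions for illustration, not part of the original card.

```python
from transformers import pipeline

# Usage sketch: load the fine-tuned DistilBERT PII tagger from the Hub.
# aggregation_strategy="simple" merges B-/I- sub-token predictions
# into whole entity spans with an "entity_group" label.
pii_tagger = pipeline(
    "token-classification",
    model="omshikhare/dbert-pii-detection-model",
    aggregation_strategy="simple",
)

text = "Contact Jane Doe at jane.doe@example.com or +1 555 0100."
for entity in pii_tagger(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```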
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "dbert-pii-detection-model", "results": []}]}
|
omshikhare/dbert-pii-detection-model
| null |
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:12:42+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
dbert-pii-detection-model
=========================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unnamed dataset (the card does not specify the fine-tuning data).
It achieves the following results on the evaluation set:
* Loss: 0.1572
* Precision: 0.7413
* Recall: 0.8012
* F1: 0.7701
* Accuracy: 0.9433
* Classification Report: {'B-ACCOUNTNAME': {'precision': 0.9833333333333333, 'recall': 0.9962476547842402, 'f1-score': 0.9897483690587139, 'support': 533.0}, 'B-ACCOUNTNUMBER': {'precision': 0.9695238095238096, 'recall': 0.960377358490566, 'f1-score': 0.9649289099526066, 'support': 530.0}, 'B-AGE': {'precision': 0.9133663366336634, 'recall': 0.8163716814159292, 'f1-score': 0.8621495327102804, 'support': 452.0}, 'B-AMOUNT': {'precision': 0.9108910891089109, 'recall': 0.6917293233082706, 'f1-score': 0.7863247863247863, 'support': 266.0}, 'B-BIC': {'precision': 0.8987341772151899, 'recall': 0.9342105263157895, 'f1-score': 0.9161290322580645, 'support': 152.0}, 'B-BITCOINADDRESS': {'precision': 0.9475890985324947, 'recall': 0.9847494553376906, 'f1-score': 0.9658119658119658, 'support': 459.0}, 'B-CREDITCARDCVV': {'precision': 0.9186991869918699, 'recall': 0.6457142857142857, 'f1-score': 0.7583892617449665, 'support': 175.0}, 'B-CREDITCARDISSUER': {'precision': 0.9163987138263665, 'recall': 0.9693877551020408, 'f1-score': 0.9421487603305785, 'support': 294.0}, 'B-CREDITCARDNUMBER': {'precision': 0.827054794520548, 'recall': 0.9165085388994307, 'f1-score': 0.8694869486948695, 'support': 527.0}, 'B-CURRENCY': {'precision': 0.5887096774193549, 'recall': 0.6293103448275862, 'f1-score': 0.6083333333333333, 'support': 348.0}, 'B-CURRENCYCODE': {'precision': 0.6754385964912281, 'recall': 0.4350282485875706, 'f1-score': 0.5292096219931272, 'support': 177.0}, 'B-CURRENCYNAME': {'precision': 0.5, 'recall': 0.08387096774193549, 'f1-score': 0.143646408839779, 'support': 155.0}, 'B-CURRENCYSYMBOL': {'precision': 0.8604651162790697, 'recall': 0.8306122448979592, 'f1-score': 0.8452751817237798, 'support': 490.0}, 'B-DATE': {'precision': 0.7462817147856518, 'recall': 0.9530726256983241, 'f1-score': 0.8370951913640824, 'support': 895.0}, 'B-DOB': {'precision': 0.7791666666666667, 'recall': 0.3887733887733888, 'f1-score': 0.5187239944521498, 'support': 481.0}, 'B-EMAIL': {'precision': 0.9718482252141983, 'recall': 0.9742331288343559, 'f1-score': 0.9730392156862745, 'support': 815.0}, 'B-ETHEREUMADDRESS': {'precision': 0.9938271604938271, 'recall': 0.9969040247678018, 'f1-score': 0.9953632148377125, 'support': 323.0}, 'B-EYECOLOR': {'precision': 0.8881578947368421, 'recall': 0.7670454545454546, 'f1-score': 0.823170731707317, 'support': 176.0}, 'B-GENDER': {'precision': 0.9295154185022027, 'recall': 0.8940677966101694, 'f1-score': 0.9114470842332614, 'support': 472.0}, 'B-HEIGHT': {'precision': 0.9221556886227545, 'recall': 0.9112426035502958, 'f1-score': 0.9166666666666666, 'support': 169.0}, 'B-IBAN': {'precision': 0.9788359788359788, 'recall': 0.9814323607427056, 'f1-score': 0.9801324503311258, 'support': 377.0}, 'B-IP': {'precision': 0.46153846153846156, 'recall': 0.06217616580310881, 'f1-score': 0.1095890410958904, 'support': 386.0}, 'B-IPV4': {'precision': 0.7432239657631954, 'recall': 0.9774859287054409, 'f1-score': 0.8444084278768234, 'support': 533.0}, 'B-IPV6': {'precision': 0.6947368421052632, 'recall': 0.927710843373494, 'f1-score': 0.7944969905417025, 'support': 498.0}, 'B-JOBAREA': {'precision': 0.7265774378585086, 'recall': 0.6896551724137931, 'f1-score': 0.707635009310987, 'support': 551.0}, 'B-JOBTITLE': {'precision': 0.9020866773675762, 'recall': 0.9525423728813559, 'f1-score': 0.9266281945589447, 'support': 590.0}, 'B-JOBTYPE': {'precision': 0.8084358523725835, 'recall': 0.8550185873605948, 'f1-score': 0.8310749774164409, 'support': 538.0}, 'B-LITECOINADDRESS': {'precision': 0.9659863945578231, 
'recall': 0.8765432098765432, 'f1-score': 0.919093851132686, 'support': 162.0}, 'B-LOCATION': {'precision': 0.8701642819289878, 'recall': 0.8556539864512767, 'f1-score': 0.8628481345244351, 'support': 5757.0}, 'B-MAC': {'precision': 0.9497716894977168, 'recall': 0.9541284403669725, 'f1-score': 0.9519450800915332, 'support': 218.0}, 'B-MASKEDNUMBER': {'precision': 0.8656716417910447, 'recall': 0.7360406091370558, 'f1-score': 0.7956104252400549, 'support': 394.0}, 'B-NEARBYGPSCOORDINATE': {'precision': 1.0, 'recall': 0.9970059880239521, 'f1-score': 0.9985007496251874, 'support': 334.0}, 'B-ORDINALDIRECTION': {'precision': 0.8532110091743119, 'recall': 0.8942307692307693, 'f1-score': 0.8732394366197183, 'support': 208.0}, 'B-ORGANIZATION': {'precision': 0.7381056307289393, 'recall': 0.7174374204497243, 'f1-score': 0.7276247848537005, 'support': 2357.0}, 'B-PASSWORD': {'precision': 0.9702970297029703, 'recall': 0.962671905697446, 'f1-score': 0.9664694280078896, 'support': 509.0}, 'B-PERSON': {'precision': 0.8583856935717739, 'recall': 0.8355025874235534, 'f1-score': 0.8467895740623014, 'support': 6377.0}, 'B-PHONEIMEI': {'precision': 0.9868073878627969, 'recall': 0.9842105263157894, 'f1-score': 0.9855072463768116, 'support': 380.0}, 'B-PHONENUMBER': {'precision': 0.9647058823529412, 'recall': 0.9800796812749004, 'f1-score': 0.9723320158102767, 'support': 502.0}, 'B-PIN': {'precision': 0.7631578947368421, 'recall': 0.7837837837837838, 'f1-score': 0.7733333333333333, 'support': 148.0}, 'B-PREFIX': {'precision': 0.8800521512385919, 'recall': 0.9811046511627907, 'f1-score': 0.9278350515463918, 'support': 688.0}, 'B-SEX': {'precision': 0.9587242026266416, 'recall': 0.9826923076923076, 'f1-score': 0.9705603038936372, 'support': 520.0}, 'B-SSN': {'precision': 0.9507042253521126, 'recall': 0.9529411764705882, 'f1-score': 0.9518213866039953, 'support': 425.0}, 'B-TIME': {'precision': 0.9319727891156463, 'recall': 0.9432013769363167, 'f1-score': 0.9375534644995723, 'support': 581.0}, 'B-URL': {'precision': 0.9930555555555556, 'recall': 0.9982547993019197, 'f1-score': 0.9956483899042646, 'support': 573.0}, 'B-USERAGENT': {'precision': 0.9951690821256038, 'recall': 0.9927710843373494, 'f1-score': 0.9939686369119421, 'support': 415.0}, 'B-USERNAME': {'precision': 0.9203539823008849, 'recall': 0.9171075837742504, 'f1-score': 0.9187279151943463, 'support': 567.0}, 'B-VEHICLEVIN': {'precision': 0.9704142011834319, 'recall': 0.9879518072289156, 'f1-score': 0.9791044776119403, 'support': 166.0}, 'B-VEHICLEVRM': {'precision': 0.9585492227979274, 'recall': 0.9438775510204082, 'f1-score': 0.9511568123393316, 'support': 196.0}, 'I-ACCOUNTNAME': {'precision': 0.9683972911963883, 'recall': 0.9907621247113164, 'f1-score': 0.9794520547945206, 'support': 866.0}, 'I-ACCOUNTNUMBER': {'precision': 0.9874560963371801, 'recall': 0.9902717208990272, 'f1-score': 0.9888619043631187, 'support': 5962.0}, 'I-AGE': {'precision': 0.8314606741573034, 'recall': 0.9844789356984479, 'f1-score': 0.9015228426395939, 'support': 451.0}, 'I-AMOUNT': {'precision': 0.9181771103055412, 'recall': 0.9363612358067072, 'f1-score': 0.9271800235324879, 'support': 3787.0}, 'I-BIC': {'precision': 0.8971631205673759, 'recall': 0.9270795163063393, 'f1-score': 0.9118760136961614, 'support': 2729.0}, 'I-BITCOINADDRESS': {'precision': 0.9611269208878771, 'recall': 0.9860158234315243, 'f1-score': 0.9734123038346807, 'support': 34253.0}, 'I-CREDITCARDCVV': {'precision': 0.9364548494983278, 'recall': 0.813953488372093, 'f1-score': 0.8709175738724728, 
'support': 344.0}, 'I-CREDITCARDISSUER': {'precision': 0.9808612440191388, 'recall': 0.9927360774818402, 'f1-score': 0.98676293622142, 'support': 826.0}, 'I-CREDITCARDNUMBER': {'precision': 0.8299468791500664, 'recall': 0.9182339112547752, 'f1-score': 0.8718610491071429, 'support': 13612.0}, 'I-CURRENCY': {'precision': 0.6748278500382555, 'recall': 0.740865182696346, 'f1-score': 0.7063063063063063, 'support': 2381.0}, 'I-CURRENCYCODE': {'precision': 0.637987012987013, 'recall': 0.637987012987013, 'f1-score': 0.637987012987013, 'support': 616.0}, 'I-CURRENCYNAME': {'precision': 0.46060606060606063, 'recall': 0.32374866879659214, 'f1-score': 0.3802376485303315, 'support': 939.0}, 'I-CURRENCYSYMBOL': {'precision': 0.8227513227513228, 'recall': 0.7369668246445498, 'f1-score': 0.7775, 'support': 422.0}, 'I-DATE': {'precision': 0.7552550542241264, 'recall': 0.9241480996068152, 'f1-score': 0.8312090179031901, 'support': 6104.0}, 'I-DOB': {'precision': 0.7493784186971656, 'recall': 0.45666666666666667, 'f1-score': 0.5675014121634344, 'support': 3300.0}, 'I-EMAIL': {'precision': 0.9976601747987062, 'recall': 0.9993106776039153, 'f1-score': 0.9984847441283835, 'support': 14507.0}, 'I-ETHEREUMADDRESS': {'precision': 0.9971705872676427, 'recall': 0.999637203166227, 'f1-score': 0.9984023717368031, 'support': 30320.0}, 'I-EYECOLOR': {'precision': 0.8764478764478765, 'recall': 0.961864406779661, 'f1-score': 0.9171717171717172, 'support': 236.0}, 'I-GENDER': {'precision': 0.9548311076197957, 'recall': 0.9806373537716822, 'f1-score': 0.9675621890547264, 'support': 2479.0}, 'I-HEIGHT': {'precision': 0.9332477535301669, 'recall': 0.9706275033377837, 'f1-score': 0.9515706806282722, 'support': 749.0}, 'I-IBAN': {'precision': 0.9920039365235577, 'recall': 0.9861807508866333, 'f1-score': 0.9890837728443518, 'support': 16354.0}, 'I-IP': {'precision': 0.49273447820343463, 'recall': 0.3739473332442187, 'f1-score': 0.42520044077972413, 'support': 14962.0}, 'I-IPV4': {'precision': 0.7541926209870627, 'recall': 0.98375, 'f1-score': 0.8538106861947383, 'support': 3200.0}, 'I-IPV6': {'precision': 0.7565486700049205, 'recall': 0.8201186031188228, 'f1-score': 0.7870520927431497, 'support': 31871.0}, 'I-JOBAREA': {'precision': 0.8258928571428571, 'recall': 0.8564814814814815, 'f1-score': 0.8409090909090909, 'support': 216.0}, 'I-JOBTITLE': {'precision': 0.9352864013992129, 'recall': 0.9958100558659218, 'f1-score': 0.9645997745208568, 'support': 2148.0}, 'I-JOBTYPE': {'precision': 0.928082191780822, 'recall': 0.9492119089316988, 'f1-score': 0.9385281385281385, 'support': 571.0}, 'I-LITECOINADDRESS': {'precision': 0.9491078669910786, 'recall': 0.882125694902478, 'f1-score': 0.9143917566049714, 'support': 10613.0}, 'I-LOCATION': {'precision': 0.935253227408143, 'recall': 0.9305404604288113, 'f1-score': 0.9328908919815759, 'support': 20242.0}, 'I-MAC': {'precision': 0.9934322549258088, 'recall': 0.9946419873356064, 'f1-score': 0.9940367530728976, 'support': 4106.0}, 'I-MASKEDNUMBER': {'precision': 0.8802466464733881, 'recall': 0.7585531835555142, 'f1-score': 0.8148815782885184, 'support': 10727.0}, 'I-NEARBYGPSCOORDINATE': {'precision': 0.9985329176600036, 'recall': 1.0, 'f1-score': 0.9992659203523583, 'support': 5445.0}, 'I-ORGANIZATION': {'precision': 0.856851510148587, 'recall': 0.8855766335441132, 'f1-score': 0.8709772951628826, 'support': 9963.0}, 'I-PASSWORD': {'precision': 0.9827474485663372, 'recall': 0.9809200420405854, 'f1-score': 0.9818328950030346, 'support': 12369.0}, 'I-PERSON': {'precision': 0.8926155035762614, 
'recall': 0.9168528170761976, 'f1-score': 0.9045718343658937, 'support': 20145.0}, 'I-PHONEIMEI': {'precision': 0.9931120851596744, 'recall': 0.9997478567826525, 'f1-score': 0.9964189231639128, 'support': 7932.0}, 'I-PHONENUMBER': {'precision': 0.9784533648170012, 'recall': 0.9929609105885877, 'f1-score': 0.9856537575262023, 'support': 6677.0}, 'I-PIN': {'precision': 0.7491694352159468, 'recall': 0.8541666666666666, 'f1-score': 0.7982300884955752, 'support': 528.0}, 'I-PREFIX': {'precision': 0.8575924468922108, 'recall': 0.9936189608021878, 'f1-score': 0.9206081081081081, 'support': 1097.0}, 'I-SSN': {'precision': 0.9718875502008032, 'recall': 0.9855609033691225, 'f1-score': 0.9786764705882353, 'support': 5402.0}, 'I-TIME': {'precision': 0.9683184402924452, 'recall': 0.9892116182572614, 'f1-score': 0.9786535303776683, 'support': 1205.0}, 'I-URL': {'precision': 0.9914529914529915, 'recall': 0.9998631823778903, 'f1-score': 0.9956403269754769, 'support': 7309.0}, 'I-USERAGENT': {'precision': 0.998585829284212, 'recall': 0.9999636891793755, 'f1-score': 0.9992742842628543, 'support': 27540.0}, 'I-USERNAME': {'precision': 0.944710770214126, 'recall': 0.9668029435813573, 'f1-score': 0.9556291925967834, 'support': 6115.0}, 'I-VEHICLEVIN': {'precision': 0.9745319615449486, 'recall': 0.9946634532621794, 'f1-score': 0.9844948032032714, 'support': 5809.0}, 'I-VEHICLEVRM': {'precision': 0.9772364217252396, 'recall': 0.9788, 'f1-score': 0.978017585931255, 'support': 2500.0}, 'O': {'precision': 0.9892495017985387, 'recall': 0.9837872366945033, 'f1-score': 0.986510808226626, 'support': 320920.0}, 'accuracy': 0.9433351828810348, 'macro avg': {'precision': 0.8793777747202005, 'recall': 0.8718829567729829, 'f1-score': 0.8691194147844159, 'support': 713688.0}, 'weighted avg': {'precision': 0.9419843792518461, 'recall': 0.9433351828810348, 'f1-score': 0.941698874575059, 'support': 713688.0}}
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
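A minimal usage sketch for a token-classification checkpoint like this one (based on the labels in the report above); the repository id below is a placeholder, not this model's actual name:
```python
# Minimal sketch: "your-username/pii-distilbert" is a placeholder repo id,
# not the real checkpoint name. Aggregation merges B-/I- pieces into entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/pii-distilbert",
    aggregation_strategy="simple",
)
print(ner("My name is Jane Doe and my email is jane.doe@example.com."))
```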
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 6
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - shljessie/tactile_img2img_LoRA
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a tactile graphic using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
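Until the authors add their own snippet, a minimal sketch might look like the following; it assumes the repository hosts a full Stable Diffusion checkpoint (the repo is tagged `StableDiffusionPipeline`) and reuses the training instance prompt `a tactile graphic`:

```python
# Minimal sketch, not an official example: assumes the repo loads as a plain
# StableDiffusionPipeline and uses the instance prompt from training.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "shljessie/tactile_img2img_LoRA", torch_dtype=torch.float16
).to("cuda")

image = pipe("a tactile graphic", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("tactile_graphic.png")
```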
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "diffusers-training", "if", "if-diffusers"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "a tactile graphic"}
|
shljessie/tactile_img2img_LoRA
| null |
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"if",
"if-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-12T17:13:50+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #if #if-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - shljessie/tactile_img2img_LoRA
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a tactile graphic using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
[
"# DreamBooth - shljessie/tactile_img2img_LoRA\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a tactile graphic using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
[
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #if #if-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - shljessie/tactile_img2img_LoRA\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a tactile graphic using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-uz-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3824
- eval_wer: 0.4664
- eval_runtime: 936.4524
- eval_samples_per_second: 13.157
- eval_steps_per_second: 1.646
- epoch: 0.85
- step: 1600
## Model description
More information needed
## Intended uses & limitations
More information needed
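A minimal inference sketch, assuming the checkpoint is used through the standard ASR pipeline; the audio file path is a placeholder and 16 kHz mono input is assumed:
```python
# Minimal sketch: transcribe a local Uzbek audio file with the ASR pipeline.
# The file path is a placeholder; 16 kHz mono audio is assumed.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="zohirjonsharipov/uz_asr_model")
print(asr("uzbek_sample.wav")["text"])
```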
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_13_0"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-large-xls-r-300m-uz-colab", "results": []}]}
|
zohirjonsharipov/uz_asr_model
| null |
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:14:42+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-uz-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3824
- eval_wer: 0.4664
- eval_runtime: 936.4524
- eval_samples_per_second: 13.157
- eval_steps_per_second: 1.646
- epoch: 0.85
- step: 1600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# wav2vec2-large-xls-r-300m-uz-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_13_0 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.3824\n- eval_wer: 0.4664\n- eval_runtime: 936.4524\n- eval_samples_per_second: 13.157\n- eval_steps_per_second: 1.646\n- epoch: 0.85\n- step: 1600",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.1.0+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-uz-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_13_0 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.3824\n- eval_wer: 0.4664\n- eval_runtime: 936.4524\n- eval_samples_per_second: 13.157\n- eval_steps_per_second: 1.646\n- epoch: 0.85\n- step: 1600",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.1.0+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Wukong-0.1-Mistral-7B-v0.2
Join Our Discord! https://discord.gg/cognitivecomputations

Wukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the original fantastic Mistral-7B-v0.2 model by the Mistral team.
This model was trained on the teknium OpenHermes-2.5 dataset, code datasets from Multimodal Art Projection https://m-a-p.ai, and the Dolphin dataset from Cognitive Computations https://erichartford.com/dolphin 🐬
This model was trained for 3 epochs on four RTX 4090s.
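A minimal text-generation sketch; the prompt format and sampling settings below are assumptions, not taken from the training setup:
```python
# Minimal sketch: plain causal generation; adjust the prompt format to
# whatever chat template the final model expects.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RESMPDEV/Wukong-0.1-Mistral-7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Write a haiku about the Monkey King.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```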
# Example Outputs
TBD
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
{"license": "apache-2.0", "datasets": ["teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "m-a-p/Code-Feedback"], "pipeline_tag": "text-generation"}
|
RESMPDEV/Wukong-0.1-Mistral-7B-v0.2
| null |
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:15:47+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #mistral #text-generation #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-m-a-p/Code-Feedback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Wukong-0.1-Mistral-7B-v0.2
Join Our Discord! URL
!image/jpeg
Wukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the original fantastic Mistral-7B-v0.2 model by the Mistral team.
This model was trained on the teknium OpenHeremes-2.5 dataset, code datasets from Multimodal Art Projection URL, and the Dolphin dataset from Cognitive Computations URL
This model was trained for 3 epochs over 4 4090's.
# Example Outputs
TBD
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
|
[
"# Wukong-0.1-Mistral-7B-v0.2\n\nJoin Our Discord! URL \n\n!image/jpeg\n\nWukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the original fantastic Mistral-7B-v0.2 model by the Mistral team.\n\nThis model was trained on the teknium OpenHeremes-2.5 dataset, code datasets from Multimodal Art Projection URL, and the Dolphin dataset from Cognitive Computations URL \n\nThis model was trained for 3 epochs over 4 4090's.",
"# Example Outputs\n\nTBD\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>"
] |
[
"TAGS\n#transformers #pytorch #safetensors #mistral #text-generation #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-m-a-p/Code-Feedback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Wukong-0.1-Mistral-7B-v0.2\n\nJoin Our Discord! URL \n\n!image/jpeg\n\nWukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the original fantastic Mistral-7B-v0.2 model by the Mistral team.\n\nThis model was trained on the teknium OpenHeremes-2.5 dataset, code datasets from Multimodal Art Projection URL, and the Dolphin dataset from Cognitive Computations URL \n\nThis model was trained for 3 epochs over 4 4090's.",
"# Example Outputs\n\nTBD\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>"
] |
text-to-image
|
diffusers
|
# AutoTrain SDXL LoRA DreamBooth - Suiren00/dreambooth_stablediffusion_test_kz
<Gallery />
## Model description
These are Suiren00/dreambooth_stablediffusion_test_kz LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a photo of CHAN wearing clothes to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Suiren00/dreambooth_stablediffusion_test_kz/tree/main) them in the Files & versions tab.
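A minimal loading sketch, assuming the adapter applies on top of the SDXL base model with the standard diffusers LoRA loader and the trigger prompt above:
```python
# Minimal sketch: load the SDXL base model, attach this LoRA adapter,
# and generate with the trigger prompt from this card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Suiren00/dreambooth_stablediffusion_test_kz")

image = pipe(prompt="a photo of CHAN wearing clothes").images[0]
image.save("chan.png")
```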
|
{"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of CHAN wearing clothes"}
|
Suiren00/dreambooth_stablediffusion_test_kz
| null |
[
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null |
2024-04-12T17:17:04+00:00
|
[] |
[] |
TAGS
#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# AutoTrain SDXL LoRA DreamBooth - Suiren00/dreambooth_stablediffusion_test_kz
<Gallery />
## Model description
These are Suiren00/dreambooth_stablediffusion_test_kz LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a photo of CHAN wearing clothes to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
|
[
"# AutoTrain SDXL LoRA DreamBooth - Suiren00/dreambooth_stablediffusion_test_kz\n\n<Gallery />",
"## Model description\n\nThese are Suiren00/dreambooth_stablediffusion_test_kz LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use a photo of CHAN wearing clothes to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# AutoTrain SDXL LoRA DreamBooth - Suiren00/dreambooth_stablediffusion_test_kz\n\n<Gallery />",
"## Model description\n\nThese are Suiren00/dreambooth_stablediffusion_test_kz LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use a photo of CHAN wearing clothes to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nuovo_amazon_kindle_sentiment_analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
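A minimal inference sketch, assuming the fine-tuned checkpoint is used as an ordinary text-classification model (the label meanings are not documented here):
```python
# Minimal sketch: score a Kindle-style review with the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="denise227/nuovo_amazon_kindle_sentiment_analysis")
print(clf("I could not put this book down; a wonderful read."))
```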
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "nuovo_amazon_kindle_sentiment_analysis", "results": []}]}
|
denise227/nuovo_amazon_kindle_sentiment_analysis
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:18:17+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# nuovo_amazon_kindle_sentiment_analysis
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# nuovo_amazon_kindle_sentiment_analysis\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# nuovo_amazon_kindle_sentiment_analysis\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuning-dolphin-mistral-with-webglm-qa-with-lora_1
This model is a fine-tuned version of [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2999
## Model description
More information needed
## Intended uses & limitations
More information needed
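A minimal loading sketch, assuming the adapter is applied to its base model with PEFT; the prompt and generation settings are illustrative only:
```python
# Minimal sketch: attach the LoRA adapter to the base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "cognitivecomputations/dolphin-2.8-mistral-7b-v02"
adapter_id = "Gunslinger3D/fine-tuning-dolphin-mistral-with-webglm-qa-with-lora_1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("What is retrieval-augmented question answering?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```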
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 60
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7558 | 0.16 | 10 | 1.4842 |
| 1.4966 | 0.32 | 20 | 1.3367 |
| 1.2328 | 0.48 | 30 | 1.1282 |
| 0.9873 | 0.64 | 40 | 1.0817 |
| 0.9661 | 0.8 | 50 | 0.9967 |
| 0.8808 | 0.96 | 60 | 0.8844 |
| 0.7455 | 1.13 | 70 | 0.7337 |
| 0.6018 | 1.29 | 80 | 0.6164 |
| 0.4899 | 1.45 | 90 | 0.5440 |
| 0.4402 | 1.61 | 100 | 0.4971 |
| 0.4154 | 1.77 | 110 | 0.4555 |
| 0.4025 | 1.93 | 120 | 0.4238 |
| 0.3992 | 2.09 | 130 | 0.4007 |
| 0.3585 | 2.25 | 140 | 0.3862 |
| 0.3369 | 2.41 | 150 | 0.3666 |
| 0.3328 | 2.57 | 160 | 0.3537 |
| 0.3216 | 2.73 | 170 | 0.3423 |
| 0.2859 | 2.89 | 180 | 0.3303 |
| 0.2967 | 3.05 | 190 | 0.3211 |
| 0.2933 | 3.22 | 200 | 0.3114 |
| 0.2716 | 3.38 | 210 | 0.3097 |
| 0.255 | 3.54 | 220 | 0.3053 |
| 0.2731 | 3.7 | 230 | 0.2990 |
| 0.2729 | 3.86 | 240 | 0.2972 |
| 0.2701 | 4.02 | 250 | 0.3030 |
| 0.2558 | 4.18 | 260 | 0.3042 |
| 0.2612 | 4.34 | 270 | 0.3301 |
| 0.3048 | 4.5 | 280 | 0.4564 |
| 0.5437 | 4.66 | 290 | 0.7938 |
| 1.5888 | 4.82 | 300 | 1.5418 |
| 0.6588 | 4.98 | 310 | 0.4630 |
| 0.5345 | 5.14 | 320 | 0.9088 |
| 1.1475 | 5.31 | 330 | 1.6381 |
| 1.6442 | 5.47 | 340 | 2.0495 |
| 2.2517 | 5.63 | 350 | 1.7558 |
| 0.9492 | 5.79 | 360 | 0.5187 |
| 0.3727 | 5.95 | 370 | 0.3763 |
| 0.3139 | 6.11 | 380 | 0.3376 |
| 0.2896 | 6.27 | 390 | 0.3195 |
| 0.283 | 6.43 | 400 | 0.3106 |
| 0.2646 | 6.59 | 410 | 0.3105 |
| 0.2674 | 6.75 | 420 | 0.3256 |
| 0.3482 | 6.91 | 430 | 0.4016 |
| 0.4193 | 7.07 | 440 | 0.6300 |
| 0.7397 | 7.23 | 450 | 1.0617 |
| 1.1954 | 7.4 | 460 | 1.6157 |
| 1.6177 | 7.56 | 470 | 1.8019 |
| 1.2996 | 7.72 | 480 | 0.9151 |
| 0.6605 | 7.88 | 490 | 0.5433 |
| 0.416 | 8.04 | 500 | 0.4012 |
| 0.3412 | 8.2 | 510 | 0.3685 |
| 0.3322 | 8.36 | 520 | 0.3928 |
| 0.3516 | 8.52 | 530 | 0.3641 |
| 0.3406 | 8.68 | 540 | 0.4061 |
| 0.3772 | 8.84 | 550 | 0.4145 |
| 0.3695 | 9.0 | 560 | 0.5453 |
| 0.5824 | 9.16 | 570 | 0.7332 |
| 0.5139 | 9.32 | 580 | 0.4839 |
| 0.3798 | 9.49 | 590 | 0.3758 |
| 0.319 | 9.65 | 600 | 0.3438 |
| 0.3082 | 9.81 | 610 | 0.3301 |
| 0.3017 | 9.97 | 620 | 0.3225 |
| 0.2862 | 10.13 | 630 | 0.3156 |
| 0.2586 | 10.29 | 640 | 0.3109 |
| 0.2878 | 10.45 | 650 | 0.3082 |
| 0.2766 | 10.61 | 660 | 0.3056 |
| 0.2834 | 10.77 | 670 | 0.3042 |
| 0.2513 | 10.93 | 680 | 0.3020 |
| 0.2762 | 11.09 | 690 | 0.3007 |
| 0.28 | 11.25 | 700 | 0.2999 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "model-index": [{"name": "fine-tuning-dolphin-mistral-with-webglm-qa-with-lora_1", "results": []}]}
|
Gunslinger3D/fine-tuning-dolphin-mistral-with-webglm-qa-with-lora_1
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T17:18:22+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #license-apache-2.0 #region-us
|
fine-tuning-dolphin-mistral-with-webglm-qa-with-lora\_1
=======================================================
This model is a fine-tuned version of cognitivecomputations/dolphin-2.8-mistral-7b-v02 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2999
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 5
* total\_train\_batch\_size: 10
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 60
* training\_steps: 700
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.0.0
* Datasets 2.15.0
* Tokenizers 0.15.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 60\n* training\\_steps: 700\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.0.0\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 60\n* training\\_steps: 700\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.0.0\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
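Since the card does not yet state a task or architecture, the following is only a generic loading sketch with Auto classes:
```python
# Generic sketch only: the task and architecture are not documented, so this
# just loads the checkpoint and prints what it contains.
from transformers import AutoConfig, AutoModel

repo_id = "artixjain/diff_instr_model_2"
config = AutoConfig.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
print(config.architectures)
```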
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
artixjain/diff_instr_model_2
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:20:46+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Misted v2 7B
This is another version of [misted-7b](https://huggingface.co/walmart-the-bag/misted-7b). This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.
##### Quantizations
- [gguf](https://huggingface.co/Walmart-the-bag/Misted-v2-7B-gguf) or [imatrix](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF)
- [hqq](https://huggingface.co/HQQHouse/Misted-v2-7B-hqq)
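A minimal transformers sketch for the unquantized weights; the prompt below is an assumption, not a required format:
```python
# Minimal sketch: load the merged model and generate a short coding answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Walmart-the-bag/Misted-v2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```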
|
{"language": ["en", "es"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code", "mistral", "merge", "slerp"]}
|
Walmart-the-bag/Misted-v2-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"merge",
"slerp",
"conversational",
"en",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:21:29+00:00
|
[] |
[
"en",
"es"
] |
TAGS
#transformers #safetensors #mistral #text-generation #code #merge #slerp #conversational #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Misted v2 7B
This is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.
##### Quantizations
- gguf or imatrix
- hqq
|
[
"# Misted v2 7B\nThis is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.",
"##### Quantizations\n- gguf or imatrix\n- hqq"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #code #merge #slerp #conversational #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Misted v2 7B\nThis is another version of misted-7b. This creation was designed to tackle coding, provide instructions, solve riddles, and fulfill a variety of purposes. It was developed using the slerp approach, which involved combining several mistral models with misted-7b.",
"##### Quantizations\n- gguf or imatrix\n- hqq"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"region:us"
] | null |
2024-04-12T17:26:01+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #region-us
|
# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #region-us \n",
"# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ppoyaa/Lumina-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
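As a rough sketch (file name and settings are assumptions), a single-file quant from the table below can also be run with `llama-cpp-python`:
```python
# Rough sketch: run a downloaded single-file quant locally with llama-cpp-python.
# The file name assumes the Q4_K_M quant from this repo has already been fetched.
from llama_cpp import Llama

llm = Llama(model_path="Lumina-3.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a mixture-of-experts model is.", max_tokens=128)
print(out["choices"][0]["text"])
```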
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q5_K_S.gguf) | Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lumina-3-GGUF/resolve/main/Lumina-3.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit"], "base_model": "Ppoyaa/Lumina-3", "quantized_by": "mradermacher"}
|
mradermacher/Lumina-3-GGUF
| null |
[
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Ppoyaa/Lumina-3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:27:32+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #en #base_model-Ppoyaa/Lumina-3 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #en #base_model-Ppoyaa/Lumina-3 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1538
- Precision: 0.0968
- Recall: 0.0968
- F1: 0.0968
- Accuracy: 0.9655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hedged `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
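For reference, a hedged `TrainingArguments` equivalent of the values above; the output directory is a placeholder, not taken from the original card.

```python
# Hedged sketch: the hyperparameters above expressed as transformers TrainingArguments.
# output_dir is a placeholder; everything else mirrors the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```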
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2834 | 1.0 | 679 | 0.1527 | 0.0357 | 0.0484 | 0.0411 | 0.9554 |
| 0.1906 | 2.0 | 1358 | 0.1326 | 0.0759 | 0.0968 | 0.0851 | 0.9633 |
| 0.123 | 3.0 | 2037 | 0.1538 | 0.0968 | 0.0968 | 0.0968 | 0.9655 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": []}]}
|
shubhanmathur/bert-finetuned-ner
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:28:04+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
bert-finetuned-ner
==================
This model is a fine-tuned version of [](URL on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1538
* Precision: 0.0968
* Recall: 0.0968
* F1: 0.0968
* Accuracy: 0.9655
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.0
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MLMA_Lab_8
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1458
- Precision: 0.4383
- Recall: 0.5324
- F1: 0.4808
- Accuracy: 0.9569
## Model description
More information needed
## Intended uses & limitations
More information needed
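In the absence of usage guidance, here is a minimal hedged inference sketch; it assumes the checkpoint loads with the standard token-classification classes, and the example sentence is a placeholder.

```python
# Hedged sketch: token-classification inference with this fine-tuned BioGPT checkpoint.
# Assumes standard AutoModelForTokenClassification loading works for this repo.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "rupav02gmail/MLMA_Lab_8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("The patient was treated with metformin for type 2 diabetes."))  # placeholder sentence
```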
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3184 | 1.0 | 679 | 0.1776 | 0.2907 | 0.4587 | 0.3558 | 0.9438 |
| 0.1706 | 2.0 | 1358 | 0.1540 | 0.3742 | 0.5197 | 0.4351 | 0.9510 |
| 0.0973 | 3.0 | 2037 | 0.1458 | 0.4383 | 0.5324 | 0.4808 | 0.9569 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "MLMA_Lab_8", "results": []}]}
|
rupav02gmail/MLMA_Lab_8
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:28:07+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
MLMA\_Lab\_8
============
This model is a fine-tuned version of microsoft/biogpt on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1458
* Precision: 0.4383
* Recall: 0.5324
* F1: 0.4808
* Accuracy: 0.9569
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/A2_adapter_seq_bn_classification_C_5` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/A2_adapter_seq_bn_classification_C_5", source="hf", set_active=True)
```
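As a hedged follow-up (not part of the original card), inference with the loaded classification head might look like this; the tokenizer choice and example sentence are assumptions.

```python
# Hedged continuation of the snippet above: score one example with the adapter's classification head.
# Assumes the roberta-base tokenizer and that the head returns standard logits.
import torch
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
model.load_adapter("BigTMiami/A2_adapter_seq_bn_classification_C_5", source="hf", set_active=True)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("This review was really helpful to me.", return_tensors="pt")  # placeholder text
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1).item())
```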
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
BigTMiami/A2_adapter_seq_bn_classification_C_5
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-12T17:28:45+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/A2_adapter_seq_bn_classification_C_5' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/A2_adapter_seq_bn_classification_C_5' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/A2_adapter_seq_bn_classification_C_5' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** cackerman
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
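A hedged loading sketch (not from the original card): it assumes the repository's weights load directly through Unsloth's `FastLanguageModel`, that a CUDA GPU is available, and that the sequence length and prompt are placeholders.

```python
# Hedged sketch: loading this fine-tune with Unsloth for 4-bit inference.
# max_seq_length, the prompt, and max_new_tokens are assumptions, not from the original card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="cackerman/rewrites_gem7unsloth_4bit_ft_full",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster generation

inputs = tokenizer("Rewrite more formally: the results were kind of okay.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```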
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-it-bnb-4bit"}
|
cackerman/rewrites_gem7unsloth_4bit_ft_full
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:29:40+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: cackerman
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: cackerman\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: cackerman\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Model Card for Colossus 120b
Colossus 120b is a fine-tune of alpindale/goliath-120b.

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
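In place of the missing snippet, a minimal hedged sketch (assuming standard `transformers` causal-LM loading; a 120B model realistically needs several GPUs or quantization, and the dtype, device map, and prompt below are assumptions):

```python
# Hedged sketch only: generic transformers loading for this Llama-architecture 120B model.
# torch_dtype, device_map, and the prompt are assumptions; expect to need multi-GPU sharding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/colossus_120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Explain plate tectonics in two sentences.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```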
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers"}
|
ibivibiv/colossus_120b
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:30:45+00:00
|
[
"1910.09700"
] |
[
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #en #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Colossus 120b
Colossus 120b is a fine-tune of alpindale/goliath-120b.
!img
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Colossus 120b\n\nColussus 120b is a finetuning of alpindale/goliath-120b. \n\n!img",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #en #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Colossus 120b\n\nColussus 120b is a finetuning of alpindale/goliath-120b. \n\n!img",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
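In place of the missing snippet, a hedged `pipeline`-based sketch; the prompt and generation length are placeholders, and no particular prompt template is assumed.

```python
# Hedged sketch: text-generation pipeline for this gemma-architecture checkpoint.
# The example prompt and max_new_tokens are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="ashnaz/suggest_doctors_symptoms", device_map="auto")
result = generator("I have a persistent cough and mild fever. Which specialist should I see?", max_new_tokens=64)
print(result[0]["generated_text"])
```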
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ashnaz/suggest_doctors_symptoms
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:33:36+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: epfl-llm/meditron-7b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
datasets:
- path: humantrue/packed_translations
type: sharegpt
conversation: llama-2
val_set_size: 0.05
output_dir: models/packed
dataset_prepared_path: models/packed/last_run_prepared
load_in_8bit: false
load_in_4bit: false
strict: false
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug: true
# deepspeed: zero3.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# models/packed
This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
## Model description
More information needed
## Intended uses & limitations
More information needed
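The axolotl config above sets `conversation: llama-2`, so prompts presumably need the Llama-2 chat format; a hedged sketch follows, in which the system message, user turn, and generation settings are placeholders.

```python
# Hedged sketch: building a Llama-2-style chat prompt, matching `conversation: llama-2` above.
# The system text, user turn, and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "humantrue/packed-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "[INST] <<SYS>>\nYou are a careful translation assistant.\n<</SYS>>\n\n"
    "Translate to French: The patient should take the medication twice a day. [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```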
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.684 | 0.0 | 1 | 1.5973 |
| 0.7318 | 0.25 | 5841 | 0.8356 |
| 0.6904 | 0.5 | 11682 | 0.8056 |
| 0.7642 | 0.75 | 17523 | 0.7865 |
| 0.5814 | 1.0 | 23364 | 0.7731 |
| 0.7904 | 1.23 | 29205 | 0.7810 |
| 0.4428 | 1.48 | 35046 | 0.7798 |
| 0.6604 | 1.73 | 40887 | 0.7796 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0
|
{"license": "llama2", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "models/packed", "results": []}]}
|
humantrue/packed-7b
| null |
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:epfl-llm/meditron-7b",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:35:08+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #llama #text-generation #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
models/packed
=============
This model is a fine-tuned version of epfl-llm/meditron-7b on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7796
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.37.0
* Pytorch 2.0.1+cu118
* Datasets 2.18.0
* Tokenizers 0.15.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.0\n* Pytorch 2.0.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.0\n* Pytorch 2.0.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: pdejong/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
|
pdejong/ppo-SnowballTarget
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | null |
2024-04-12T17:35:51+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
|
# ppo Agent playing SnowballTarget
This is a trained model of a ppo agent playing SnowballTarget
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: pdejong/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
[
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pdejong/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n",
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: pdejong/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
text-to-image
|
diffusers
|
# skin-hands-eyes-xl
<Gallery />
## Model description
This is a combined LoRA for both women and men, covering skin, hands, and eyes. It's not perfect; hands in particular can look good right away or may need some post-processing. By PolyhedronAI.
## Trigger words
You should use `perfect eyes` to trigger the image generation.
You should use `skin blemish` to trigger the image generation.
You should use `detailed skin` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/skin-hands-eyes-xl/tree/main) them in the Files & versions tab.
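A hedged loading sketch (not part of the original card); it assumes the weights load via `diffusers`' `load_lora_weights`, and the dtype, prompt wording, and step count are placeholders.

```python
# Hedged sketch: applying this LoRA on top of SDXL base with diffusers.
# Weight-file resolution, float16, and the prompt are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MarkBW/skin-hands-eyes-xl")

image = pipe(
    "portrait photo of a man, perfect eyes, detailed skin, skin blemish",  # trigger words from above
    num_inference_steps=30,
).images[0]
image.save("skin_hands_eyes_example.png")
```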
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "RAW photo, full body shot of a man: 30 year old warrior, wearing shiny metal armor, full sharp, detailed face, blue eyes, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3", "parameters": {"negative_prompt": "BadDream, drawing, painting, digital art, helmet, nude, nsfw, large breasts"}, "output": {"url": "images/04.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "perfect eyes, skin blemish, detailed skin"}
|
MarkBW/skin-hands-eyes-xl
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null |
2024-04-12T17:36:27+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
# skin-hands-eyes-xl
<Gallery />
## Model description
This is a combined LoRA for both women and men, covering skin, hands, and eyes. It's not perfect; hands in particular can look good right away or may need some post-processing. By PolyhedronAI.
## Trigger words
You should use 'perfect eyes' to trigger the image generation.
You should use 'skin blemish' to trigger the image generation.
You should use 'detailed skin' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
|
[
"# skin-hands-eyes-xl\n\n<Gallery />",
"## Model description \n\nThis is a combined LoRA for both woman and men: for skin, hands, and eyes. It's not perfect; especially hands can look good right away or require some post-processing...by PolyhedronAI",
"## Trigger words\n\nYou should use 'perfect eyes' to trigger the image generation.\n\nYou should use 'skin blemish' to trigger the image generation.\n\nYou should use 'detailed skin' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# skin-hands-eyes-xl\n\n<Gallery />",
"## Model description \n\nThis is a combined LoRA for both woman and men: for skin, hands, and eyes. It's not perfect; especially hands can look good right away or require some post-processing...by PolyhedronAI",
"## Trigger words\n\nYou should use 'perfect eyes' to trigger the image generation.\n\nYou should use 'skin blemish' to trigger the image generation.\n\nYou should use 'detailed skin' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TrOCR-SIN-DeiT-Handwritten
This model is a fine-tuned version of [kavg/TrOCR-SIN-DeiT](https://huggingface.co/kavg/TrOCR-SIN-DeiT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9839
- Cer: 0.5253
## Model description
More information needed
## Intended uses & limitations
More information needed
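In lieu of the missing usage notes, a minimal hedged inference sketch; it assumes the processor files ship with this repo and that inputs are single-line handwritten text images (the image path is a placeholder).

```python
# Hedged sketch: handwritten text recognition with this TrOCR checkpoint.
# Assumes processor files are in the same repo; the image path is a placeholder.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "kavg/TrOCR-SIN-DeiT-Handwritten"
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("handwritten_line.png").convert("RGB")  # placeholder path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```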
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2915 | 3.45 | 100 | 1.8613 | 0.6450 |
| 0.061 | 6.9 | 200 | 1.8118 | 0.5707 |
| 0.0363 | 10.34 | 300 | 2.3998 | 0.6420 |
| 0.0202 | 13.79 | 400 | 2.4144 | 0.6353 |
| 0.0329 | 17.24 | 500 | 2.4393 | 0.6577 |
| 0.0364 | 20.69 | 600 | 1.9231 | 0.5679 |
| 0.004 | 24.14 | 700 | 2.4344 | 0.5866 |
| 0.0167 | 27.59 | 800 | 3.0998 | 0.5744 |
| 0.0269 | 31.03 | 900 | 2.6785 | 0.5804 |
| 0.0151 | 34.48 | 1000 | 2.2443 | 0.5916 |
| 0.0008 | 37.93 | 1100 | 2.1480 | 0.5684 |
| 0.0067 | 41.38 | 1200 | 2.3553 | 0.5625 |
| 0.0198 | 44.83 | 1300 | 2.1915 | 0.5492 |
| 0.0002 | 48.28 | 1400 | 2.0370 | 0.5620 |
| 0.001 | 51.72 | 1500 | 2.4303 | 0.6056 |
| 0.1666 | 55.17 | 1600 | 2.3324 | 0.5627 |
| 0.0001 | 58.62 | 1700 | 2.8753 | 0.5582 |
| 0.0 | 62.07 | 1800 | 2.5749 | 0.5355 |
| 0.0002 | 65.52 | 1900 | 2.8105 | 0.5572 |
| 0.0 | 68.97 | 2000 | 2.5275 | 0.5462 |
| 0.1231 | 72.41 | 2100 | 2.7452 | 0.5477 |
| 0.0 | 75.86 | 2200 | 2.4278 | 0.5403 |
| 0.0 | 79.31 | 2300 | 3.0099 | 0.5487 |
| 0.0 | 82.76 | 2400 | 3.1290 | 0.5467 |
| 0.0 | 86.21 | 2500 | 2.7705 | 0.5263 |
| 0.0 | 89.66 | 2600 | 2.7828 | 0.5275 |
| 0.0 | 93.1 | 2700 | 3.2488 | 0.5345 |
| 0.0 | 96.55 | 2800 | 3.1309 | 0.5273 |
| 0.0 | 100.0 | 2900 | 2.9839 | 0.5253 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
|
{"tags": ["generated_from_trainer"], "base_model": "kavg/TrOCR-SIN-DeiT", "model-index": [{"name": "TrOCR-SIN-DeiT-Handwritten", "results": []}]}
|
kavg/TrOCR-SIN-DeiT-Handwritten
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:kavg/TrOCR-SIN-DeiT",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-12T17:36:47+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #vision-encoder-decoder #generated_from_trainer #base_model-kavg/TrOCR-SIN-DeiT #endpoints_compatible #has_space #region-us
|
TrOCR-SIN-DeiT-Handwritten
==========================
This model is a fine-tuned version of kavg/TrOCR-SIN-DeiT on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9839
* Cer: 0.5253
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #safetensors #vision-encoder-decoder #generated_from_trainer #base_model-kavg/TrOCR-SIN-DeiT #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
aekang12/zephyr_ogft
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:37:16+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
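No official snippet is provided yet. As a stopgap, here is a minimal loading sketch based only on this repository's tags (Mistral architecture, text generation); the prompt format is an assumption and may not match the chat template used for fine-tuning.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trubnik1967/zephyr-7b-beta-Agent-Instruct_v2"  # repository id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The expected prompt/chat format is not documented; plain text is used here as a placeholder.
inputs = tokenizer("Explain what an instruct-tuned model is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```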
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Trubnik1967/zephyr-7b-beta-Agent-Instruct_v2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:38:30+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning
|
stable-baselines3
|
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga i-pj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga i-pj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga i-pj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
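Beyond the RL Zoo command line shown above, the checkpoint can also be fetched programmatically. The sketch below is a hedged illustration using the `huggingface_sb3` helper; the file name follows the usual RL Zoo naming convention and is an assumption, so check the repository's file list if it differs.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the saved agent from the Hub (file name is assumed, per RL Zoo convention).
checkpoint = load_from_hub(
    repo_id="i-pj/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Load the policy; actually running it needs an Atari env built with the same wrappers
# (AtariWrapper, 4-frame stacking) listed in the hyperparameters above.
model = DQN.load(checkpoint)
print(model.policy)
```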
|
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "577.50 +/- 100.28", "name": "mean_reward", "verified": false}]}]}]}
|
i-pj/dqn-SpaceInvadersNoFrameskip-v4
| null |
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-12T17:39:32+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# DQN Agent playing SpaceInvadersNoFrameskip-v4
This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4
using the stable-baselines3 library
and the RL Zoo.
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: URL
SB3: URL
SB3 Contrib: URL
Install the RL Zoo (with SB3 and SB3-Contrib):
If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:
## Training (with the RL Zoo)
## Hyperparameters
# Environment Arguments
|
[
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
[
"TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ckpts
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2980
- Accuracy: 0.9545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
- mixed_precision_training: Native AMP
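For orientation, these settings map onto a standard `transformers` `TrainingArguments` configuration. The block below is a hypothetical reconstruction, not the authors' actual training script; the `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="ckpts",                # matches the model name, otherwise arbitrary
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,     # effective train batch size 4 * 2 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15.0,
    fp16=True,                         # "Native AMP" mixed precision (requires a CUDA device)
)
```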
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1628 | 1.0 | 223 | 0.7126 | 0.7727 |
| 0.6562 | 2.0 | 446 | 0.5069 | 0.8485 |
| 0.4199 | 3.0 | 669 | 0.3570 | 0.8990 |
| 0.325 | 4.0 | 892 | 0.2092 | 0.9394 |
| 0.2217 | 5.0 | 1115 | 0.2392 | 0.9444 |
| 0.1831 | 6.0 | 1338 | 0.2754 | 0.9293 |
| 0.1598 | 7.0 | 1561 | 0.3294 | 0.9343 |
| 0.1676 | 8.0 | 1784 | 0.2669 | 0.9495 |
| 0.1597 | 9.0 | 2007 | 0.3438 | 0.9293 |
| 0.1132 | 10.0 | 2230 | 0.3159 | 0.9444 |
| 0.1224 | 11.0 | 2453 | 0.2980 | 0.9545 |
| 0.095 | 12.0 | 2676 | 0.2970 | 0.9444 |
| 0.1087 | 13.0 | 2899 | 0.3449 | 0.9343 |
| 0.1254 | 14.0 | 3122 | 0.3198 | 0.9444 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/hubert-base-ls960", "model-index": [{"name": "ckpts", "results": []}]}
|
Gizachew/ckpts
| null |
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-12T17:40:20+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #hubert #generated_from_trainer #base_model-facebook/hubert-base-ls960 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
ckpts
=====
This model is a fine-tuned version of facebook/hubert-base-ls960 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2980
* Accuracy: 0.9545
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #hubert #generated_from_trainer #base_model-facebook/hubert-base-ls960 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# confused-gemma
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 593
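Given the card's tags (trl, sft, peft), this run fine-tunes a LoRA adapter on google/gemma-2b. The sketch below shows how such a setup is typically assembled; the LoRA rank, alpha, dropout and target modules are assumptions not stated in this card, and dataset handling is omitted.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Illustrative LoRA settings; not taken from this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hyperparameters mirrored from the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="confused-gemma",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,     # effective train batch size 2 * 4 = 8
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=593,
)
```

In a TRL-based run these objects, together with the generator dataset, would then be handed to `trl.SFTTrainer` to drive the training loop.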
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8658 | 0.82 | 100 | 2.5334 |
| 2.447 | 1.64 | 200 | 2.3105 |
| 2.2783 | 2.46 | 300 | 2.2104 |
| 2.2211 | 3.28 | 400 | 2.1636 |
| 2.1796 | 4.1 | 500 | 2.1404 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "confused-gemma", "results": []}]}
|
utkarshsingh99/confused-gemma
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:40:39+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
confused-gemma
==============
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1404
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 593
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GemmaSheep-2B-LORA-TUNED
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Hardware
This model was trained using an Intel(R) Data Center GPU Max 1100.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 593
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8445 | 0.82 | 100 | 2.5144 |
| 2.4282 | 1.64 | 200 | 2.2987 |
| 2.2735 | 2.46 | 300 | 2.2079 |
| 2.2193 | 3.28 | 400 | 2.1661 |
| 2.19 | 4.1 | 500 | 2.1445 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "ipex", "GPU Max 1100"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "GemmaSheep-2B-LORA-TUNED", "results": []}]}
|
eduardo-alvarez/GemmaSheep-2B-LORA-TUNED
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"ipex",
"GPU Max 1100",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:43:09+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
GemmaSheep-2B-LORA-TUNED
========================
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1445
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training Hardware
-----------------
This model was trained using Intel(R) Data Center GPU Max 1100
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 593
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dallema
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Hardware
This model was trained using an Intel Data Center GPU Max 1100.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 296
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8348 | 1.64 | 100 | 2.5300 |
| 2.4651 | 3.28 | 200 | 2.3748 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "ipex", "GPU MAX 1100"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "dallema", "results": []}]}
|
ThejasElandassery/dallema
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"ipex",
"GPU MAX 1100",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:43:11+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU MAX 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
dallema
=======
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3748
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training Hardware
-----------------
This model was trained using Intel Data Center GPU Max 1100
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 296
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 296",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU MAX 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 296",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
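For readers who want to reproduce this kind of zero-shot sweep, the sketch below uses the Python API of the EleutherAI evaluation harness (`lm_eval.simple_evaluate`). This is an assumption about tooling: the library referred to above as `llm_eval` may be a different package, the exact scripts live in the GreenBitAI repository, and loading the low-bit checkpoints may additionally require the `green-bit-llm` toolkit rather than a plain `transformers` backend.

```python
import lm_eval

# Task names follow the harness' conventions and approximate the columns in the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5",
    tasks=["openbookqa", "arc_easy", "winogrande", "hellaswag", "arc_challenge", "piqa", "boolq"],
    num_fewshot=0,
)
print(results["results"])
```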
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:43:12+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma2B-LORAfied
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Hardware
This model was trained using:
- GPU: Intel(R) Data Center GPU Max 1100
- CPU: Intel(R) Xeon(R) Platinum 8480+
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1480
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.927 | 1.64 | 100 | 2.5783 |
| 2.4568 | 3.28 | 200 | 2.2983 |
| 2.2609 | 4.92 | 300 | 2.1769 |
| 2.1671 | 6.56 | 400 | 2.1051 |
| 2.1065 | 8.2 | 500 | 2.0739 |
| 2.0844 | 9.84 | 600 | 2.0567 |
| 2.0643 | 11.48 | 700 | 2.0455 |
| 2.0511 | 13.11 | 800 | 2.0374 |
| 2.0435 | 14.75 | 900 | 2.0318 |
| 2.0304 | 16.39 | 1000 | 2.0276 |
| 2.0245 | 18.03 | 1100 | 2.0248 |
| 2.0247 | 19.67 | 1200 | 2.0228 |
| 2.0096 | 21.31 | 1300 | 2.0212 |
| 2.0183 | 22.95 | 1400 | 2.0206 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "ipex", "GPU Max 1100"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "Gemma2B-LORAfied", "results": []}]}
|
migaraa/Gemma2B-LORAfied
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"ipex",
"GPU Max 1100",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:43:13+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
Gemma2B-LORAfied
================
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0206
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training Hardware
-----------------
This model was trained using:
GPU: Intel(R) Data Center GPU Max 1100
CPU: Intel(R) Xeon(R) Platinum 8480+
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 1480
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1480",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1480",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
adapter-transformers
|
# Adapter `jgrc3/houlsby_adapter_classification_noPre` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("jgrc3/houlsby_adapter_classification_noPre", source="hf", set_active=True)
```
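With the adapter active, inference is an ordinary forward pass through the classification head. The continuation below is an illustrative sketch only; the label mapping is not documented in this card.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("This product review was extremely helpful to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` comes from the snippet above

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # index into the helpfulness labels used at training time
```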
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
jgrc3/houlsby_adapter_classification_noPre
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-12T17:44:12+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'jgrc3/houlsby_adapter_classification_noPre' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'jgrc3/houlsby_adapter_classification_noPre' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'jgrc3/houlsby_adapter_classification_noPre' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MLMA_Lab_8_GPT_model
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1507
- Precision: 0.4388
- Recall: 0.5464
- F1: 0.4867
- Accuracy: 0.9562
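The card does not include a usage example. Purely for orientation, a hedged token-classification call via the `transformers` pipeline might look like the following; the entity label set and intended input domain are not documented here.

```python
from transformers import pipeline

# Illustrative only: the label set of this fine-tuned BioGPT tagger is not documented.
tagger = pipeline(
    "token-classification",
    model="rupav02gmail/MLMA_Lab_8_GPT_model",
    aggregation_strategy="simple",
)
print(tagger("The patient was treated with metformin for type 2 diabetes."))
```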
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2973 | 1.0 | 679 | 0.1688 | 0.3128 | 0.4447 | 0.3673 | 0.9464 |
| 0.1679 | 2.0 | 1358 | 0.1500 | 0.4156 | 0.5070 | 0.4568 | 0.9555 |
| 0.0985 | 3.0 | 2037 | 0.1507 | 0.4388 | 0.5464 | 0.4867 | 0.9562 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "MLMA_Lab_8_GPT_model", "results": []}]}
|
rupav02gmail/MLMA_Lab_8_GPT_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:47:38+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
MLMA\_Lab\_8\_GPT\_model
========================
This model is a fine-tuned version of microsoft/biogpt on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1507
* Precision: 0.4388
* Recall: 0.5464
* F1: 0.4867
* Accuracy: 0.9562
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdp-paf-classifier-limited
This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1412
- Accuracy: 0.9561
- F1: 0.9534
- Precision: 0.9779
- Recall: 0.9301
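No usage snippet is provided; a minimal, hedged example of querying the classifier (repo id taken from this entry's metadata, input text purely illustrative) might look like:

```python
from transformers import pipeline

# Hedged usage sketch; labels come from the uploaded checkpoint's config.
classifier = pipeline("text-classification", model="alex-miller/cdp-paf-classifier-limited")
print(classifier("Emergency cash transfers for flood-affected households."))
```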
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.6888 | 1.0 | 50 | 0.6419 | 0.5431 | 0.6676 | 0.7079 | 0.4406 |
| 0.6657 | 2.0 | 100 | 0.7534 | 0.7224 | 0.6400 | 0.7917 | 0.6643 |
| 0.6329 | 3.0 | 150 | 0.7905 | 0.7704 | 0.6033 | 0.8189 | 0.7273 |
| 0.5912 | 4.0 | 200 | 0.8007 | 0.7790 | 0.5523 | 0.8387 | 0.7273 |
| 0.5312 | 5.0 | 250 | 0.8209 | 0.7969 | 0.4900 | 0.8814 | 0.7273 |
| 0.4667 | 6.0 | 300 | 0.8649 | 0.8507 | 0.4189 | 0.912 | 0.7972 |
| 0.3954 | 7.0 | 350 | 0.8682 | 0.8539 | 0.3610 | 0.9194 | 0.7972 |
| 0.342 | 8.0 | 400 | 0.8851 | 0.8712 | 0.3203 | 0.9504 | 0.8042 |
| 0.3019 | 9.0 | 450 | 0.9020 | 0.8922 | 0.2851 | 0.9524 | 0.8392 |
| 0.2695 | 10.0 | 500 | 0.9020 | 0.8922 | 0.2642 | 0.9524 | 0.8392 |
| 0.2409 | 11.0 | 550 | 0.9088 | 0.8996 | 0.2444 | 0.9603 | 0.8462 |
| 0.2226 | 12.0 | 600 | 0.9155 | 0.9077 | 0.2310 | 0.9609 | 0.8601 |
| 0.2134 | 13.0 | 650 | 0.9189 | 0.9118 | 0.2175 | 0.9612 | 0.8671 |
| 0.1908 | 14.0 | 700 | 0.9257 | 0.9185 | 0.2148 | 0.9764 | 0.8671 |
| 0.178 | 15.0 | 750 | 0.9291 | 0.9225 | 0.2047 | 0.9766 | 0.8741 |
| 0.1721 | 16.0 | 800 | 0.9358 | 0.9304 | 0.1975 | 0.9769 | 0.8881 |
| 0.1662 | 17.0 | 850 | 0.9392 | 0.9343 | 0.1924 | 0.9771 | 0.8951 |
| 0.1646 | 18.0 | 900 | 0.9358 | 0.9304 | 0.1925 | 0.9769 | 0.8881 |
| 0.1606 | 19.0 | 950 | 0.9392 | 0.9343 | 0.1899 | 0.9771 | 0.8951 |
| 0.1543 | 20.0 | 1000 | 0.9392 | 0.9343 | 0.1895 | 0.9771 | 0.8951 |
| 0.1605 | 21.0 | 1050 | 0.9392 | 0.9343 | 0.1868 | 0.9771 | 0.8951 |
| 0.1482 | 22.0 | 1100 | 0.9426 | 0.9382 | 0.1765 | 0.9773 | 0.9021 |
| 0.1409 | 23.0 | 1150 | 0.9426 | 0.9382 | 0.1737 | 0.9773 | 0.9021 |
| 0.1353 | 24.0 | 1200 | 0.9459 | 0.9420 | 0.1699 | 0.9774 | 0.9091 |
| 0.1315 | 25.0 | 1250 | 0.9459 | 0.9420 | 0.1682 | 0.9774 | 0.9091 |
| 0.1265 | 26.0 | 1300 | 0.9459 | 0.9420 | 0.1655 | 0.9774 | 0.9091 |
| 0.1234 | 27.0 | 1350 | 0.9459 | 0.9420 | 0.1658 | 0.9774 | 0.9091 |
| 0.1206 | 28.0 | 1400 | 0.9459 | 0.9420 | 0.1646 | 0.9774 | 0.9091 |
| 0.1201 | 29.0 | 1450 | 0.9459 | 0.9420 | 0.1634 | 0.9774 | 0.9091 |
| 0.1195 | 30.0 | 1500 | 0.9459 | 0.9420 | 0.1631 | 0.9774 | 0.9091 |
| 0.1182 | 31.0 | 1550 | 0.9527 | 0.9496 | 0.1597 | 0.9778 | 0.9231 |
| 0.1163 | 32.0 | 1600 | 0.9527 | 0.9496 | 0.1602 | 0.9778 | 0.9231 |
| 0.1063 | 33.0 | 1650 | 0.9527 | 0.9496 | 0.1568 | 0.9778 | 0.9231 |
| 0.1086 | 34.0 | 1700 | 0.9527 | 0.9496 | 0.1534 | 0.9778 | 0.9231 |
| 0.1071 | 35.0 | 1750 | 0.9527 | 0.9496 | 0.1535 | 0.9778 | 0.9231 |
| 0.1041 | 36.0 | 1800 | 0.9527 | 0.9496 | 0.1533 | 0.9778 | 0.9231 |
| 0.1026 | 37.0 | 1850 | 0.9527 | 0.9496 | 0.1533 | 0.9778 | 0.9231 |
| 0.0976 | 38.0 | 1900 | 0.9527 | 0.9496 | 0.1529 | 0.9778 | 0.9231 |
| 0.1004 | 39.0 | 1950 | 0.9527 | 0.9496 | 0.1526 | 0.9778 | 0.9231 |
| 0.0984 | 40.0 | 2000 | 0.9527 | 0.9496 | 0.1530 | 0.9778 | 0.9231 |
| 0.0975 | 41.0 | 2050 | 0.9527 | 0.9496 | 0.1512 | 0.9778 | 0.9231 |
| 0.0973 | 42.0 | 2100 | 0.9527 | 0.9496 | 0.1522 | 0.9778 | 0.9231 |
| 0.0931 | 43.0 | 2150 | 0.9527 | 0.9496 | 0.1504 | 0.9778 | 0.9231 |
| 0.09 | 44.0 | 2200 | 0.9527 | 0.9496 | 0.1507 | 0.9778 | 0.9231 |
| 0.0907 | 45.0 | 2250 | 0.9527 | 0.9496 | 0.1506 | 0.9778 | 0.9231 |
| 0.0879 | 46.0 | 2300 | 0.9527 | 0.9496 | 0.1493 | 0.9778 | 0.9231 |
| 0.0862 | 47.0 | 2350 | 0.9527 | 0.9496 | 0.1465 | 0.9778 | 0.9231 |
| 0.0851 | 48.0 | 2400 | 0.9561 | 0.9534 | 0.1453 | 0.9779 | 0.9301 |
| 0.0875 | 49.0 | 2450 | 0.9527 | 0.9496 | 0.1453 | 0.9778 | 0.9231 |
| 0.08 | 50.0 | 2500 | 0.9527 | 0.9496 | 0.1456 | 0.9778 | 0.9231 |
| 0.0857        | 51.0  | 2550 | 0.9527   | 0.9496 | 0.1457          | 0.9778    | 0.9231 |
| 0.0778        | 52.0  | 2600 | 0.9561   | 0.9534 | 0.1447          | 0.9779    | 0.9301 |
| 0.0778        | 53.0  | 2650 | 0.9561   | 0.9534 | 0.1447          | 0.9779    | 0.9301 |
| 0.0769        | 54.0  | 2700 | 0.9561   | 0.9534 | 0.1437          | 0.9779    | 0.9301 |
| 0.0747        | 55.0  | 2750 | 0.9561   | 0.9534 | 0.1434          | 0.9779    | 0.9301 |
| 0.0772        | 56.0  | 2800 | 0.9561   | 0.9534 | 0.1414          | 0.9779    | 0.9301 |
| 0.0725        | 57.0  | 2850 | 0.9561   | 0.9534 | 0.1412          | 0.9779    | 0.9301 |
| 0.0737        | 58.0  | 2900 | 0.9561   | 0.9534 | 0.1419          | 0.9779    | 0.9301 |
| 0.0737        | 59.0  | 2950 | 0.9561   | 0.9534 | 0.1419          | 0.9779    | 0.9301 |
| 0.0703        | 60.0  | 3000 | 0.9561   | 0.9534 | 0.1416          | 0.9779    | 0.9301 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "alex-miller/ODABert", "model-index": [{"name": "cdp-paf-classifier-limited", "results": []}]}
|
alex-miller/cdp-paf-classifier-limited
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:alex-miller/ODABert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:47:39+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
cdp-paf-classifier-limited
==========================
This model is a fine-tuned version of alex-miller/ODABert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1412
* Accuracy: 0.9561
* F1: 0.9534
* Precision: 0.9779
* Recall: 0.9301
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 60
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.0.1
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 60",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 60",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_CyberTron_Coder`](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Coder) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Coder) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF --model mixtral_ai_cybertron_coder.Q5_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF --model mixtral_ai_cybertron_coder.Q5_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cybertron_coder.Q5_K_S.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "Mixtral_AI_CyberTron"}
|
LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Mixtral_AI_CyberTron",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:47:42+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-Mixtral_AI_CyberTron #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Coder' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Coder' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-Mixtral_AI_CyberTron #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q5_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Coder' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_model
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
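No usage example is given; a hedged sketch of attaching this adapter to its GPTQ base (assumes a GPTQ-capable transformers stack such as optimum/auto-gptq is installed) might look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: load the quantized base model, then apply this trained adapter.
base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "aparnaanand/output_model")
```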
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "output_model", "results": []}]}
|
aparnaanand/output_model
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T17:47:45+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us
|
# output_model
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# output_model\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 250",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us \n",
"# output_model\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 250",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
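In the absence of an official snippet, a minimal hedged sketch (base model and adapter id taken from this entry's metadata; prompt purely illustrative) could be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch only; the card leaves intended usage unspecified.
base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "mille055/duke_chatbot0412_adapter2")

inputs = tokenizer("Tell me about Duke University.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```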
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
|
mille055/duke_chatbot0412_adapter2
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | null |
2024-04-12T17:48:42+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
[
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-dolly-tuned
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Hardware
This model was trained using an Intel(R) Data Center GPU Max 1100.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1480
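The settings above map roughly onto the following `TrainingArguments` (a hedged reconstruction, not the authors' exact training script):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters; output_dir is illustrative.
args = TrainingArguments(
    output_dir="gemma-2b-dolly-tuned",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 2 x 8 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=1480,
    seed=42,
)
```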
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9204 | 1.64 | 100 | 2.5693 |
| 2.4476 | 3.28 | 200 | 2.2830 |
| 2.2496 | 4.92 | 300 | 2.1630 |
| 2.141 | 6.56 | 400 | 2.0984 |
| 2.1131 | 8.2 | 500 | 2.0701 |
| 2.0799 | 9.84 | 600 | 2.0543 |
| 2.0593 | 11.48 | 700 | 2.0434 |
| 2.0529 | 13.11 | 800 | 2.0357 |
| 2.0433 | 14.75 | 900 | 2.0305 |
| 2.0322 | 16.39 | 1000 | 2.0264 |
| 2.0197 | 18.03 | 1100 | 2.0234 |
| 2.0204 | 19.67 | 1200 | 2.0214 |
| 2.0153 | 21.31 | 1300 | 2.0199 |
| 2.0101 | 22.95 | 1400 | 2.0192 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "ipex", "GPU Max 1100"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-2b-dolly-tuned", "results": []}]}
|
Vinhduyle/gemma-2b-dolly-tuned
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"ipex",
"GPU Max 1100",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:51:04+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
gemma-2b-dolly-tuned
====================
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0192
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training Hardware
-----------------
This model was trained using Intel(R) Data Center GPU Max 1100
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 1480
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1480",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1480",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
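For reference, zero-shot scores like those in the table above are typically produced with an lm-evaluation-harness run along these lines (a hedged illustration only; the loader for GreenBit low-bit checkpoints and exact task names may differ, see the GitHub page above):

```bash
# Hedged illustration; GreenBit checkpoints may require their own model loader,
# and task names can vary between harness versions.
lm_eval --model hf \
  --model_args pretrained=GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0 \
  --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
  --num_fewshot 0 \
  --batch_size 8
```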
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:51:11+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GreenBit LLMs
=============
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
|
[
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GemmaDoll-2b-dolly-LORA-Tune
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 593
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8667 | 0.82 | 100 | 2.5298 |
| 2.4504 | 1.64 | 200 | 2.3277 |
| 2.2982 | 2.46 | 300 | 2.2235 |
| 2.2202 | 3.28 | 400 | 2.1739 |
| 2.2001 | 4.1 | 500 | 2.1517 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "ipex", "GPU Max 1100"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "GemmaDoll-2b-dolly-LORA-Tune", "results": []}]}
|
swathijn/GemmaDoll-2b-dolly-LORA-Tune
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"ipex",
"GPU Max 1100",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:51:15+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
GemmaDoll-2b-dolly-LORA-Tune
============================
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1517
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 593
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GemmaSheep-2B-LORA-TUNED
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 593
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8581 | 0.82 | 100 | 2.5287 |
| 2.4457 | 1.64 | 200 | 2.3240 |
| 2.2896 | 2.46 | 300 | 2.2217 |
| 2.2211 | 3.28 | 400 | 2.1753 |
| 2.2061 | 4.1 | 500 | 2.1504 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "GemmaSheep-2B-LORA-TUNED", "results": []}]}
|
quasar1256/gemmalearnnew
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-12T17:51:40+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
GemmaSheep-2B-LORA-TUNED
========================
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1504
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 593
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.0.1a0+URL
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.0.1a0+URL\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5`](https://huggingface.co/GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"region:us"
] | null |
2024-04-12T17:52:05+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #region-us
|
# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #region-us \n",
"# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.5']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null |
transformers
|
# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_CyberTron_Coder`](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Coder) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Coder) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF --model mixtral_ai_cybertron_coder.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF --model mixtral_ai_cybertron_coder.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cybertron_coder.Q4_K_S.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "Mixtral_AI_CyberTron"}
|
LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Mixtral_AI_CyberTron",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:54:35+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-Mixtral_AI_CyberTron #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Coder' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Coder' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-Mixtral_AI_CyberTron #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_CyberTron_Coder-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Coder' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0`](https://huggingface.co/GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"region:us"
] | null |
2024-04-12T17:56:26+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #region-us
|
# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #region-us \n",
"# GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: utrgvseniorproject/medtext-txt
type: completion
dataset_prepared_path: /home/ethensanchez01/med-llm/last_run_prepared
val_set_size: 0.05
output_dir: ./med-lora/Llama2-Medtext-txt-lora-epochs-2-lr-0001
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
wandb_project: Llama2-Medtext-Lora
wandb_entity: utrgvmedai
wandb_watch:
wandb_name: Llama2-Medtext-txt-lora-epochs-2-lr-0001
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: True # make sure you have this on True
group_by_length: false
bf16: true
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
eval_sample_packing:
save_steps: 800
debug:
deepspeed: /home/ethensanchez01/src/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
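For context, a config like the one above is launched with axolotl's CLI roughly as follows (a hedged sketch; `config.yaml` is a stand-in for whatever file holds these settings):

```bash
# Hedged sketch for axolotl 0.4.0; "config.yaml" is a placeholder filename.
accelerate launch -m axolotl.cli.train config.yaml
```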
# med-lora/Llama2-Medtext-txt-lora-epochs-2-lr-0001
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.532 | 0.19 | 1 | 1.4208 |
| 1.5994 | 0.38 | 2 | 1.4210 |
| 1.6281 | 0.76 | 4 | 1.4198 |
| 1.6221 | 1.05 | 6 | 1.4168 |
| 1.5891 | 1.43 | 8 | 1.4136 |
| 1.582 | 1.81 | 10 | 1.4128 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.0
|
{"tags": ["generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "med-lora/Llama2-Medtext-txt-lora-epochs-2-lr-0001", "results": []}]}
|
ethensanchez/Llama2-Medtext-txt-lora-epochs-2-lr-0001
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:56:47+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #llama #text-generation #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
See axolotl config
axolotl version: '0.4.0'
med-lora/Llama2-Medtext-txt-lora-epochs-2-lr-0001
=================================================
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4128
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.0.dev0
* Pytorch 2.0.1+cu117
* Datasets 2.17.0
* Tokenizers 0.15.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.0.dev0\n* Pytorch 2.0.1+cu117\n* Datasets 2.17.0\n* Tokenizers 0.15.0"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.0.dev0\n* Pytorch 2.0.1+cu117\n* Datasets 2.17.0\n* Tokenizers 0.15.0"
] |
null |
transformers
|
# LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_MiniTron_II`](https://huggingface.co/LeroyDyer/Mixtral_AI_MiniTron_II) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_MiniTron_II) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF --model mixtral_ai_minitron_ii.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF --model mixtral_ai_minitron_ii.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_minitron_ii.Q4_K_S.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "LeroyDyer/Mixtral_AI_MiniTron"}
|
LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:LeroyDyer/Mixtral_AI_MiniTron",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T17:57:20+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-LeroyDyer/Mixtral_AI_MiniTron #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_MiniTron_II' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_MiniTron_II' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-LeroyDyer/Mixtral_AI_MiniTron #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_MiniTron_II-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_MiniTron_II' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T17:59:11+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
artixjain/diff_instr_model_3
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:00:18+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5", source="hf", set_active=True)
```
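The card stops at loading the adapter. The sketch below carries it through one classification forward pass; the example sentence, the `torch.no_grad()` wrapper, and the assumptions that the bundled prediction head returns standard `logits` and that the class index maps onto the dataset's helpfulness labels are illustrative additions, not taken from the card.

```python
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.load_adapter(
    "BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5",
    source="hf",
    set_active=True,
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Classify a single (hypothetical) review text with the adapter's prediction head.
inputs = tokenizer("This review was really helpful when choosing the product.", return_tensors="pt")
model.eval()
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()  # assumes the head exposes `logits`
print(predicted_class)
```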
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
|
BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null |
2024-04-12T18:00:36+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/A3_adapter_seq_bn_classification_from_pretraining_P_5_C_5' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:01:55+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
This is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Dhara3078/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
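`load_from_hub` is referenced above but not defined in this card; a minimal sketch follows, assuming the checkpoint is a pickled dict with `env_id` and `qtable` keys (the format used by the Hugging Face Deep RL course) and a gymnasium-style environment API. The greedy rollout at the end is likewise illustrative.

```python
import pickle

import gymnasium as gym  # assumption: the checkpoint targets the gymnasium API
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled checkpoint from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Dhara3078/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # extra attribute noted in the card

# Greedy rollout with the learned Q-table (assumes a "qtable" key).
qtable = model["qtable"]
state, _ = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(qtable[state].argmax())
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```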
|
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.70", "name": "mean_reward", "verified": false}]}]}]}
|
Dhara3078/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-12T18:02:16+00:00
|
[] |
[] |
TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1 .
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null |
gguf
|
Author of this model: Microsoft, 2024. License: MIT.
Link to the original card: https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1
Prompt template: ChatML (according to llama.cpp's `server`)? Mistral (according to `tokenizer_config.json`)? Alpaca (according to text-generation-webui)? All three seem to work.
Context length: ?
According to their [paper on arXiv](https://arxiv.org/abs/2404.07965), rho-math-7b-v0.1 is a continued pretraining on Mistral-7B, while their 1B model is a continued pretraining on TinyLlama-1.1B.
# My experience
Unlike [the non-interpreter variant](https://huggingface.co/microsoft/rho-math-7b-v0.1) of this model, this `-interpreter` variant always answers with Python code, example:
Input: `Width of circle is 3cm, what is its area?`
with settings `{
presence_penalty: 0,
frequency_penalty: 0,
top_p: 1,
min_p: 0,
top_k: 0,
temperature: 0.8
}`
outputs (using unquantized gguf):
<pre>
```python
from sympy import pi, Rational
def circle_area():
"""Width of circle is 3cm, what is its area?"""
r = Rational(3, 2) # Radius of the circle
area = pi * r**2 # Area of the circle
return area
result = circle_area()
print(result)
```
```output
27*pi/4
```
The area of the circle is $\boxed{\frac{27\pi}{4}}$ square cm.
</pre>
??? It should have been `9*pi/4`. Am I using this model wrong? Same result with temperature=0.0,top_k=1.
In comparison, the Q6_K of the non-interpreter variant of this model with temp=0.0 outputs:
```
The area of a circle is given by the formula A = πr^2, where r is the radius of the circle. Since the width of the circle is 3cm, the radius is half of that, or 1.5cm. Substituting this value into the formula, we get:
A = π(1.5)^2
A = π(2.25)
A = 6.9887654321
So, the area of the circle is approximately 6.99 cm^2.
```
(close to the expected 7.06 which `deepseek-math-7b-rl.Q8_0.gguf` outputs)
# imatrix
I created imatrix with
```
./imatrix --mlock --verbosity 2 -m /tmp/rho-math-7b-interpreter-v0.1.f32.gguf -f ~/Downloads/groups_merged_forkOfArzeth.txt -c 32768 -o rho-math-7b-interpreter-v0.1.f32.ctx32768imatrix.dat
```
which took 1665 seconds (28 minutes) on my GTX 1660 Super and used only 1 thread on Ryzen 2600 downclocked to 3000MHz. `imatrix` consumed 35685 MiB of RAM (3200MHz) and 3158 MiB of VRAM.
# quantize
Quantized with llama.cpp b2661 (2024-04-12), compiled with `LLAMA_CUDA_FORCE_MMQ=1` (full cmd: `make -j6 LLAMA_CUDA_FORCE_MMQ=1 LLAMA_CUDA=1 LLAMA_FAST=1 LLAMA_OPENBLAS=1 LLAMA_BLAS_VENDOR=OpenBLAS`) for a big speed up (GTX 1660 Super doesn't have tensor cores, so it's better to use MMQ than nothing).
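The exact quantize invocation isn't shown in the card; with llama.cpp b2661 it would look roughly like the following (the output filename and quantization type are illustrative):

```bash
# Quantize the f32 GGUF, feeding in the imatrix produced above.
./quantize --imatrix rho-math-7b-interpreter-v0.1.f32.ctx32768imatrix.dat \
  /tmp/rho-math-7b-interpreter-v0.1.f32.gguf \
  rho-math-7b-interpreter-v0.1.Q4_K_M.gguf \
  Q4_K_M
```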
|
{"license": "mit", "library_name": "gguf", "tags": ["math"], "model_name": "rho-math-7b-interpreter-v0.1", "base_model": "microsoft/rho-math-7b-interpreter-v0.1", "model_creator": "Microsoft", "model_type": "mistral", "quantized_by": "arzeth"}
|
arzeth/rho-math-7b-interpreter-v0.1.imatrix-GGUF
| null |
[
"gguf",
"math",
"arxiv:2404.07965",
"base_model:microsoft/rho-math-7b-interpreter-v0.1",
"license:mit",
"region:us"
] | null |
2024-04-12T18:04:34+00:00
|
[
"2404.07965"
] |
[] |
TAGS
#gguf #math #arxiv-2404.07965 #base_model-microsoft/rho-math-7b-interpreter-v0.1 #license-mit #region-us
|
Author of this model: Microsoft, 2024. License: MIT.
Link to the original card: URL
Prompt template: ChatML (according to URL's 'server')? Mistral (according to 'tokenizer_config.json')? Alpaca (according to text-generation-webui)? All three seem to work.
Context length: ?
According to their paper on arXiv, rho-math-7b-v0.1 is a continued pretraining on Mistral-7B, while their 1B model is a continued pretraining on TinyLlama-1.1B.
# My experience
Unlike the non-interpreter variant of this model, this '-interpreter' variant always answers with Python code, example:
Input: 'Width of circle is 3cm, what is its area?'
with settings '{
presence_penalty: 0,
frequency_penalty: 0,
top_p: 1,
min_p: 0,
top_k: 0,
temperature: 0.8
}'
outputs (using unquantized gguf):
<pre>
The area of the circle is $\boxed{\frac{27\pi}{4}}$ square cm.
</pre>
??? It should have been '9*pi/4'. Am I using this model wrong? Same result with temperature=0.0,top_k=1.
In comparison, the Q6_K of the non-interpreter variant of this model with temp=0.0 outputs:
(close to the expected 7.06 which 'deepseek-math-7b-rl.Q8_0.gguf' outputs)
# imatrix
I created imatrix with
which took 1665 seconds (28 minutes) on my GTX 1660 Super and used only 1 thread on Ryzen 2600 downclocked to 3000MHz. 'imatrix' consumed 35685 MiB of RAM (3200MHz) and 3158 MiB of VRAM.
# quantize
Quantized with URL b2661 (2024-04-12), compiled with 'LLAMA_CUDA_FORCE_MMQ=1' (full cmd: 'make -j6 LLAMA_CUDA_FORCE_MMQ=1 LLAMA_CUDA=1 LLAMA_FAST=1 LLAMA_OPENBLAS=1 LLAMA_BLAS_VENDOR=OpenBLAS') for a big speed up (GTX 1660 Super doesn't have tensor cores, so it's better to use MMQ than nothing).
|
[
"# My experience\n\nUnlike the non-interpreter variant of this model, this '-interpreter' variant always answers with Python code, example:\n\nInput: 'Width of circle is 3cm, what is its area?'\n\nwith settings '{\n presence_penalty: 0,\n frequency_penalty: 0,\n top_p: 1,\n min_p: 0,\n top_k: 0,\n temperature: 0.8\n}'\n\noutputs (using unquantized gguf):\n\n<pre>\n\n\nThe area of the circle is $\\boxed{\\frac{27\\pi}{4}}$ square cm.\n</pre>\n\n??? It should have been '9*pi/4'. Am I using this model wrong? Same result with temperature=0.0,top_k=1.\n\nIn comparison, the Q6_K of the non-interpreter variant of this model with temp=0.0 outputs:\n\n(close to the expected 7.06 which 'deepseek-math-7b-rl.Q8_0.gguf' outputs)",
"# imatrix\n\nI created imatrix with\n\nwhich took 1665 seconds (28 minutes) on my GTX 1660 Super and used only 1 thread on Ryzen 2600 downclocked to 3000MHz. 'imatrix' consumed 35685 MiB of RAM (3200MHz) and 3158 MiB of VRAM.",
"# quantize\n\nQuantized with URL b2661 (2024-04-12), compiled with 'LLAMA_CUDA_FORCE_MMQ=1' (full cmd: 'make -j6 LLAMA_CUDA_FORCE_MMQ=1 LLAMA_CUDA=1 LLAMA_FAST=1 LLAMA_OPENBLAS=1 LLAMA_BLAS_VENDOR=OpenBLAS') for a big speed up (GTX 1660 Super doesn't have tensor cores, so it's better to use MMQ than nothing)."
] |
[
"TAGS\n#gguf #math #arxiv-2404.07965 #base_model-microsoft/rho-math-7b-interpreter-v0.1 #license-mit #region-us \n",
"# My experience\n\nUnlike the non-interpreter variant of this model, this '-interpreter' variant always answers with Python code, example:\n\nInput: 'Width of circle is 3cm, what is its area?'\n\nwith settings '{\n presence_penalty: 0,\n frequency_penalty: 0,\n top_p: 1,\n min_p: 0,\n top_k: 0,\n temperature: 0.8\n}'\n\noutputs (using unquantized gguf):\n\n<pre>\n\n\nThe area of the circle is $\\boxed{\\frac{27\\pi}{4}}$ square cm.\n</pre>\n\n??? It should have been '9*pi/4'. Am I using this model wrong? Same result with temperature=0.0,top_k=1.\n\nIn comparison, the Q6_K of the non-interpreter variant of this model with temp=0.0 outputs:\n\n(close to the expected 7.06 which 'deepseek-math-7b-rl.Q8_0.gguf' outputs)",
"# imatrix\n\nI created imatrix with\n\nwhich took 1665 seconds (28 minutes) on my GTX 1660 Super and used only 1 thread on Ryzen 2600 downclocked to 3000MHz. 'imatrix' consumed 35685 MiB of RAM (3200MHz) and 3158 MiB of VRAM.",
"# quantize\n\nQuantized with URL b2661 (2024-04-12), compiled with 'LLAMA_CUDA_FORCE_MMQ=1' (full cmd: 'make -j6 LLAMA_CUDA_FORCE_MMQ=1 LLAMA_CUDA=1 LLAMA_FAST=1 LLAMA_OPENBLAS=1 LLAMA_BLAS_VENDOR=OpenBLAS') for a big speed up (GTX 1660 Super doesn't have tensor cores, so it's better to use MMQ than nothing)."
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Dhara3078/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
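Continuing from the snippet above (so `model` and `env` are already defined), a quick way to estimate the mean reward reported in the metadata; the `qtable` key and gymnasium-style step returns are assumptions, not from the card:

```python
import numpy as np

qtable = model["qtable"]  # assumed key, as used by the Deep RL course checkpoints
episode_rewards = []
for episode in range(100):
    state, _ = env.reset(seed=episode)
    done, total = False, 0.0
    while not done:
        state, reward, terminated, truncated, _ = env.step(int(np.argmax(qtable[state])))
        total += reward
        done = terminated or truncated
    episode_rewards.append(total)

print(f"mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```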
|
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.70", "name": "mean_reward", "verified": false}]}]}]}
|
Dhara3078/taxi
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-12T18:05:55+00:00
|
[] |
[] |
TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
|
[
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Dhara3078/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.70", "name": "mean_reward", "verified": false}]}]}]}
|
Dhara3078/taxi-v3
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-12T18:06:26+00:00
|
[] |
[] |
TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
|
[
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Grayx/unstable_2137
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:10:30+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
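The card leaves this section as a placeholder. Given the `llama` / `text-generation` tags in the repo metadata, a generic 🤗 Transformers sketch (not provided by the model authors; prompt format and precision are assumptions) would be:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ed001/datascience-coder-6.7b-v0.2"  # repo id taken from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumption: half precision on GPU
    device_map="auto",           # requires the accelerate package
)

prompt = "Write a pandas one-liner that counts missing values per column."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```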
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ed001/datascience-coder-6.7b-v0.2
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:12:01+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 1B
<!-- Provide a quick summary of what the model is/does. -->
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
This model has been converted from [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) for the
Hugging Face Transformers format.
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion | 16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 |
We are releasing many checkpoints for these models, for every 1000 training steps. These have not
yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-1B/reports/OLMo-1B--Vmlldzo2NzY1Njk1
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
Quickly get inference running with the following:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```
Or you can make inference slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to data types and CUDA handling, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
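A minimal sketch of that quantized path (assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; illustrative only, not official OLMo documentation):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantized loading as described above
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# Pass the input ids to CUDA explicitly, as recommended above
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```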
### Fine-tuning
This model does not directly support our fine-tuning processes. Model fine-tuning can be done
from the final checkpoint or many intermediate checkpoints of
[allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B).
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Core model results for the 7B model are found below.
| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
| --------------------------------- | -------- | ---------- | --------- | ------ | ------- |
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
| **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |
And for the 1B model:
| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- |
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
## Model Details
### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
### Architecture
OLMo 7B architecture with peer models for comparison.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |
### Hyperparameters
AdamW optimizer parameters are shown below.
| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |
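As an illustration only (not the OLMo training code), the 1B row above corresponds to a PyTorch AdamW configuration like the following, applied here to a stand-in module:
```python
import torch

# Illustrative sketch of the 1B AdamW settings from the table above;
# the Linear layer is a stand-in for the real OLMo 1B parameters
model = torch.nn.Linear(2048, 2048)
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4.0e-4,            # peak learning rate
    betas=(0.9, 0.95),
    eps=1.0e-5,
    weight_decay=0.1,
)
```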
Optimizer settings comparison with peer models.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
## Environmental Impact
OLMo 7B variants were trained on either MI250X GPUs at the LUMI supercomputer or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |
## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, these models are relatively easy to prompt into generating harmful or otherwise sensitive content.
Such content can also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology.
In addition, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.
## Citation
**BibTeX:**
```
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
```
**APA:**
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
## Model Card Contact
For errors in this model card, contact Nathan, Akshita or Shane, `{nathanl, akshitab, shanea} at allenai dot org`.
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["allenai/dolma"]}
|
allenai/OLMo-1B-hf
| null |
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"en",
"dataset:allenai/dolma",
"arxiv:2402.00838",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:13:34+00:00
|
[
"2402.00838",
"2302.13971"
] |
[
"en"
] |
TAGS
#transformers #safetensors #olmo #text-generation #en #dataset-allenai/dolma #arxiv-2402.00838 #arxiv-2302.13971 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
<img src="URL alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Model Card for OLMo 1B
======================
OLMo is a series of Open Language Models designed to enable the science of language models.
The OLMo models are trained on the Dolma dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
This model has been converted from allenai/OLMo-1B for the
Hugging Face Transformers format.
Model Details
-------------
The core models released in this batch are the following:
We are releasing many checkpoints for these models, for every 1000 training steps. These have not
yet been converted into Hugging Face Transformers format, but are available in allenai/OLMo-1B.
### Model Description
* Developed by: Allen Institute for AI (AI2)
* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
* Model type: a Transformer style autoregressive language model.
* Language(s) (NLP): English
* License: The code and model are released under Apache 2.0.
* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'
* Date cutoff: Feb./March 2023 based on Dolma dataset version.
### Model Sources
* Project Page: URL
* Repositories:
+ Core repo (training, inference, fine-tuning etc.): URL
+ Evaluation code: URL
+ Further fine-tuning code: URL
* Paper: Link
* Technical blog post: URL
* W&B Logs: URL
Uses
----
### Inference
Quickly get inference running with the following:
Alternatively, with the pipeline abstraction:
Or, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\_pretrained("allenai/OLMo-1B-hf", torch\_dtype=torch.float16, load\_in\_8bit=True)' (requires 'bitsandbytes').
The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\_ids.to('cuda')' to avoid potential issues.
### Fine-tuning
This model does not directly support our fine-tuning processes. Model fine-tuning can be done
from the final checkpoint or many intermediate checkpoints of
allenai/OLMo-1B.
Evaluation
----------
Core model results for the 7B model are found below.
And for the 1B model:
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.
Model Details
-------------
### Data
For training data details, please see the Dolma documentation.
### Architecture
OLMo 7B architecture with peer models for comparison.
### Hyperparameters
AdamW optimizer parameters are shown below.
Optimizer settings comparison with peer models.
Environmental Impact
--------------------
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
Bias, Risks, and Limitations
----------------------------
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.
BibTeX:
APA:
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
Model Card Contact
------------------
For errors in this model card, contact Nathan, Akshita or Shane, '{nathanl, akshitab, shanea} at allenai dot org'.
|
[
"### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Feb./March 2023 based on Dolma dataset version.",
"### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: URL\n\n\nUses\n----",
"### Inference\n\n\nQuickly get inference running with the following:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-1B-hf\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.",
"### Fine-tuning\n\n\nThis model does not directly support our fine-tuning processes. Model fine-tuning can be done\nfrom the final checkpoint or many intermediate checkpoints of\nallenai/OLMo-1B.\n\n\nEvaluation\n----------\n\n\nCore model results for the 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------",
"### Data\n\n\nFor training data details, please see the Dolma documentation.",
"### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.",
"### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan, Akshita or Shane, '{nathanl, akshitab, shanea} at allenai dot org'."
] |
[
"TAGS\n#transformers #safetensors #olmo #text-generation #en #dataset-allenai/dolma #arxiv-2402.00838 #arxiv-2302.13971 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Feb./March 2023 based on Dolma dataset version.",
"### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: URL\n\n\nUses\n----",
"### Inference\n\n\nQuickly get inference running with the following:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-1B-hf\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.",
"### Fine-tuning\n\n\nThis model does not directly support our fine-tuning processes. Model fine-tuning can be done\nfrom the final checkpoint or many intermediate checkpoints of\nallenai/OLMo-1B.\n\n\nEvaluation\n----------\n\n\nCore model results for the 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------",
"### Data\n\n\nFor training data details, please see the Dolma documentation.",
"### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.",
"### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan, Akshita or Shane, '{nathanl, akshitab, shanea} at allenai dot org'."
] |
null | null |
---
license: apache-2.0
---

import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from datasets import Dataset
from transformers import DataCollatorForLanguageModeling, GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
# Additional information about the Aditya-L1 mission
additional_info = """
Aditya-L1 is a satellite dedicated to the comprehensive study of the Sun. It carries 7 distinct payloads, all developed indigenously: five by ISRO and two by Indian academic institutes in collaboration with ISRO.
Aditya in Sanskrit means the Sun. L1 here refers to Lagrange Point 1 of the Sun-Earth system. For common understanding, L1 is a location in space where the gravitational forces of two celestial bodies, such as the Sun and Earth, are in equilibrium. This allows an object placed there to remain relatively stable with respect to both celestial bodies.
Following its scheduled launch on September 2, 2023, Aditya-L1 stays in Earth-bound orbits for 16 days, during which it undergoes 5 manoeuvres to gain the necessary velocity for its journey. Subsequently, Aditya-L1 undergoes a Trans-Lagrangian1 insertion manoeuvre, marking the beginning of its 110-day trajectory to the destination around the L1 Lagrange point. Upon arrival at the L1 point, another manoeuvre binds Aditya-L1 to an orbit around L1, a balanced gravitational location between the Earth and the Sun. The satellite spends its whole mission life orbiting around L1 in an irregularly shaped orbit in a plane roughly perpendicular to the line joining the Earth and the Sun.
The strategic placement at the L1 Lagrange point ensures that Aditya-L1 can maintain a constant, uninterrupted view of the Sun. This location also allows the satellite to access solar radiation and magnetic storms before they are influenced by Earth's magnetic field and atmosphere. Additionally, the L1 point’s gravitational stability minimizes the need for frequent orbital maintenance efforts, optimizing the satellite's operational efficiency.
Quick Facts: Aditya-L1 will stay approximately 1.5 million km away from Earth, directed towards the Sun, which is about 1% of the Earth-Sun distance. The Sun is a giant sphere of gas and Aditya-L1 would study the outer atmosphere of the Sun. Aditya-L1 will neither land on the Sun nor approach the Sun any closer.
"""
# Step 1: Prepare the Dataset
# Assuming the ISRO Aditya L1V1 space mission dataset is available online, we'll download it
data_url = "URL_TO_ISRO_DATASET"
response = requests.get(data_url)
data = response.text.split("\n") # Assuming data is newline separated, adjust accordingly
# Combining the additional information with the dataset as a list of documents
# (the TF-IDF vectorizer and the deduplication loop below expect a list, not a single string)
data_with_info = [additional_info] + data
# Step 2: Data Deduplication
# Using TF-IDF and cosine similarity for deduplication
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(data_with_info)
cosine_sim = cosine_similarity(tfidf_matrix, tfidf_matrix)
# Grouping similar data using cosine similarity threshold
threshold = 0.95
groups = []
visited = set()
for i in range(len(data_with_info)):
if i not in visited:
group = [i]
visited.add(i)
for j in range(i+1, len(data_with_info)):
if cosine_sim[i][j] > threshold:
group.append(j)
visited.add(j)
groups.append(group)
# Selecting representative from each group
deduplicated_data = [data_with_info[group[0]] for group in groups]
# Step 3: Publish Dataset on Hugging Face
# Creating Dataset object using the deduplicated data
dataset_dict = {"text": deduplicated_data}
dataset = Dataset.from_dict(dataset_dict)
# Uploading dataset to Hugging Face
# Follow Hugging Face documentation for uploading dataset
# (https://huggingface.co/docs/datasets/upload_dataset)
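# For example (a sketch, not part of the original script; assumes you have run
# `huggingface-cli login` and replace the placeholder repo id with one you control):
# dataset.push_to_hub("your-username/aditya-l1-dedup")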
# Step 4: Fine-Tune GPT-2 Model
# Load the pre-trained GPT-2 tokenizer and model
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token; reuse EOS for padding
model = GPT2LMHeadModel.from_pretrained("gpt2")
# Tokenize the dataset
tokenized_dataset = dataset.map(lambda examples: tokenizer(examples["text"], truncation=True, padding=True), batched=True)
# Define training arguments (output_dir is a required argument of TrainingArguments)
training_args = TrainingArguments(
    output_dir='./results',
    per_device_train_batch_size=4,
    num_train_epochs=3,
    logging_dir='./logs',
    overwrite_output_dir=True,
)
# Define Trainer
# The data collator builds language-modeling labels from the input ids;
# the mapped dataset is a single split, so it is passed directly
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
    data_collator=data_collator,
)
# Fine-tune the model
trainer.train()
# Step 5: Save the Fine-Tuned Model
model.save_pretrained("fine_tuned_gpt2_model")
tokenizer.save_pretrained("fine_tuned_gpt2_model")
|
{}
|
Amitleo10/Assignment
| null |
[
"region:us"
] | null |
2024-04-12T18:16:34+00:00
|
[] |
[] |
TAGS
#region-us
|
---
license: apache-2.0
---import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from datasets import Dataset
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
# Additional information about the Aditya-L1 mission
additional_info = """
Aditya-L1 is a satellite dedicated to the comprehensive study of the Sun. It has 7 distinct payloads developed, all developed indigenously. Five by ISRO and two by Indian academic institutes in collaboration with ISRO.
Aditya in Sanskrit means the Sun. L1 here refers to Lagrange Point 1 of the Sun-Earth system. For common understanding, L1 is a location in space where the gravitational forces of two celestial bodies, such as the Sun and Earth, are in equilibrium. This allows an object placed there to remain relatively stable with respect to both celestial bodies.
Following its scheduled launch on September 2, 2023, Aditya-L1 stays Earth-bound orbits for 16 days, during which it undergoes 5 maneuvres to gain the necessary velocity for its journey. Subsequently, Aditya-L1 undergoes a Trans-Lagrangian1 insertion maneuvre, marking the beginning of its 110-day trajectory to the destination around the L1 Lagrange point. Upon arrival at the L1 point, another maneuvre binds Aditya-L1 to an orbit around L1, a balanced gravitational location between the Earth and the Sun. The satellite spends its whole mission life orbiting around L1 in an irregularly shaped orbit in a plane roughly perpendicular to the line joining the Earth and the Sun.
The strategic placement at the L1 Lagrange point ensures that Aditya-L1 can maintain a constant, uninterrupted view of the Sun. This location also allows the satellite to access solar radiation and magnetic storms before they are influenced by Earth's magnetic field and atmosphere. Additionally, the L1 point’s gravitational stability minimizes the need for frequent orbital maintenance efforts, optimizing the satellite's operational efficiency.
Quick Facts: Aditya-L1 will stay approximately 1.5 million km away from Earth, directed towards the Sun, which is about 1% of the Earth-Sun distance. The Sun is a giant sphere of gas and Aditya-L1 would study the outer atmosphere of the Sun. Aditya-L1 will neither land on the Sun nor approach the Sun any closer.
"""
# Step 1: Prepare the Dataset
# Assuming the ISRO Aditya L1V1 space mission dataset is available online, we'll download it
data_url = "URL_TO_ISRO_DATASET"
response = URL(data_url)
data = URL("\n") # Assuming data is newline separated, adjust accordingly
# Concatenating the additional information with the dataset
data_with_info = additional_info + "\n" + "\n".join(data)
# Step 2: Data Deduplication
# Using TF-IDF and cosine similarity for deduplication
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(data_with_info)
cosine_sim = cosine_similarity(tfidf_matrix, tfidf_matrix)
# Grouping similar data using cosine similarity threshold
threshold = 0.95
groups = []
visited = set()
for i in range(len(data_with_info)):
if i not in visited:
group = [i]
URL(i)
for j in range(i+1, len(data_with_info)):
if cosine_sim[i][j] > threshold:
URL(j)
URL(j)
URL(group)
# Selecting representative from each group
deduplicated_data = [data_with_info[group[0]] for group in groups]
# Step 3: Publish Dataset on Hugging Face
# Creating Dataset object using the deduplicated data
dataset_dict = {"text": deduplicated_data}
dataset = Dataset.from_dict(dataset_dict)
# Uploading dataset to Hugging Face
# Follow Hugging Face documentation for uploading dataset
# (URL
# Step 4: Fine-Tune GPT-2 Model
# Load the pre-trained GPT-2 model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
# Tokenize the dataset
tokenized_dataset = URL(lambda examples: tokenizer(examples["text"], truncation=True, padding=True), batched=True)
# Define training arguments
training_args = TrainingArguments(
per_device_train_batch_size=4,
num_train_epochs=3,
logging_dir='./logs',
overwrite_output_dir=True,
)
# Define Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset["train"],
)
# Fine-tune the model
URL()
# Step 5: Save the Fine-Tuned Model
model.save_pretrained("fine_tuned_gpt2_model")
tokenizer.save_pretrained("fine_tuned_gpt2_model")
|
[
"# Additional information about the Aditya-L1 mission\nadditional_info = \"\"\"\nAditya-L1 is a satellite dedicated to the comprehensive study of the Sun. It has 7 distinct payloads developed, all developed indigenously. Five by ISRO and two by Indian academic institutes in collaboration with ISRO.\n\nAditya in Sanskrit means the Sun. L1 here refers to Lagrange Point 1 of the Sun-Earth system. For common understanding, L1 is a location in space where the gravitational forces of two celestial bodies, such as the Sun and Earth, are in equilibrium. This allows an object placed there to remain relatively stable with respect to both celestial bodies.\n\nFollowing its scheduled launch on September 2, 2023, Aditya-L1 stays Earth-bound orbits for 16 days, during which it undergoes 5 maneuvres to gain the necessary velocity for its journey. Subsequently, Aditya-L1 undergoes a Trans-Lagrangian1 insertion maneuvre, marking the beginning of its 110-day trajectory to the destination around the L1 Lagrange point. Upon arrival at the L1 point, another maneuvre binds Aditya-L1 to an orbit around L1, a balanced gravitational location between the Earth and the Sun. The satellite spends its whole mission life orbiting around L1 in an irregularly shaped orbit in a plane roughly perpendicular to the line joining the Earth and the Sun.\n\nThe strategic placement at the L1 Lagrange point ensures that Aditya-L1 can maintain a constant, uninterrupted view of the Sun. This location also allows the satellite to access solar radiation and magnetic storms before they are influenced by Earth's magnetic field and atmosphere. Additionally, the L1 point’s gravitational stability minimizes the need for frequent orbital maintenance efforts, optimizing the satellite's operational efficiency.\n\nQuick Facts: Aditya-L1 will stay approximately 1.5 million km away from Earth, directed towards the Sun, which is about 1% of the Earth-Sun distance. The Sun is a giant sphere of gas and Aditya-L1 would study the outer atmosphere of the Sun. Aditya-L1 will neither land on the Sun nor approach the Sun any closer.\n\"\"\"",
"# Step 1: Prepare the Dataset",
"# Assuming the ISRO Aditya L1V1 space mission dataset is available online, we'll download it\ndata_url = \"URL_TO_ISRO_DATASET\"\nresponse = URL(data_url)\ndata = URL(\"\\n\") # Assuming data is newline separated, adjust accordingly",
"# Concatenating the additional information with the dataset\ndata_with_info = additional_info + \"\\n\" + \"\\n\".join(data)",
"# Step 2: Data Deduplication",
"# Using TF-IDF and cosine similarity for deduplication\nvectorizer = TfidfVectorizer()\ntfidf_matrix = vectorizer.fit_transform(data_with_info)\ncosine_sim = cosine_similarity(tfidf_matrix, tfidf_matrix)",
"# Grouping similar data using cosine similarity threshold\nthreshold = 0.95\ngroups = []\nvisited = set()\nfor i in range(len(data_with_info)):\n if i not in visited:\n group = [i]\n URL(i)\n for j in range(i+1, len(data_with_info)):\n if cosine_sim[i][j] > threshold:\n URL(j)\n URL(j)\n URL(group)",
"# Selecting representative from each group\ndeduplicated_data = [data_with_info[group[0]] for group in groups]",
"# Step 3: Publish Dataset on Hugging Face",
"# Creating Dataset object using the deduplicated data\ndataset_dict = {\"text\": deduplicated_data}\ndataset = Dataset.from_dict(dataset_dict)",
"# Uploading dataset to Hugging Face",
"# Follow Hugging Face documentation for uploading dataset",
"# (URL",
"# Step 4: Fine-Tune GPT-2 Model",
"# Load the pre-trained GPT-2 model and tokenizer\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")",
"# Tokenize the dataset\ntokenized_dataset = URL(lambda examples: tokenizer(examples[\"text\"], truncation=True, padding=True), batched=True)",
"# Define training arguments\ntraining_args = TrainingArguments(\n per_device_train_batch_size=4,\n num_train_epochs=3,\n logging_dir='./logs',\n overwrite_output_dir=True,\n)",
"# Define Trainer\ntrainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=tokenized_dataset[\"train\"],\n)",
"# Fine-tune the model\nURL()",
"# Step 5: Save the Fine-Tuned Model\nmodel.save_pretrained(\"fine_tuned_gpt2_model\")\ntokenizer.save_pretrained(\"fine_tuned_gpt2_model\")"
] |
[
"TAGS\n#region-us \n",
"# Additional information about the Aditya-L1 mission\nadditional_info = \"\"\"\nAditya-L1 is a satellite dedicated to the comprehensive study of the Sun. It has 7 distinct payloads developed, all developed indigenously. Five by ISRO and two by Indian academic institutes in collaboration with ISRO.\n\nAditya in Sanskrit means the Sun. L1 here refers to Lagrange Point 1 of the Sun-Earth system. For common understanding, L1 is a location in space where the gravitational forces of two celestial bodies, such as the Sun and Earth, are in equilibrium. This allows an object placed there to remain relatively stable with respect to both celestial bodies.\n\nFollowing its scheduled launch on September 2, 2023, Aditya-L1 stays Earth-bound orbits for 16 days, during which it undergoes 5 maneuvres to gain the necessary velocity for its journey. Subsequently, Aditya-L1 undergoes a Trans-Lagrangian1 insertion maneuvre, marking the beginning of its 110-day trajectory to the destination around the L1 Lagrange point. Upon arrival at the L1 point, another maneuvre binds Aditya-L1 to an orbit around L1, a balanced gravitational location between the Earth and the Sun. The satellite spends its whole mission life orbiting around L1 in an irregularly shaped orbit in a plane roughly perpendicular to the line joining the Earth and the Sun.\n\nThe strategic placement at the L1 Lagrange point ensures that Aditya-L1 can maintain a constant, uninterrupted view of the Sun. This location also allows the satellite to access solar radiation and magnetic storms before they are influenced by Earth's magnetic field and atmosphere. Additionally, the L1 point’s gravitational stability minimizes the need for frequent orbital maintenance efforts, optimizing the satellite's operational efficiency.\n\nQuick Facts: Aditya-L1 will stay approximately 1.5 million km away from Earth, directed towards the Sun, which is about 1% of the Earth-Sun distance. The Sun is a giant sphere of gas and Aditya-L1 would study the outer atmosphere of the Sun. Aditya-L1 will neither land on the Sun nor approach the Sun any closer.\n\"\"\"",
"# Step 1: Prepare the Dataset",
"# Assuming the ISRO Aditya L1V1 space mission dataset is available online, we'll download it\ndata_url = \"URL_TO_ISRO_DATASET\"\nresponse = URL(data_url)\ndata = URL(\"\\n\") # Assuming data is newline separated, adjust accordingly",
"# Concatenating the additional information with the dataset\ndata_with_info = additional_info + \"\\n\" + \"\\n\".join(data)",
"# Step 2: Data Deduplication",
"# Using TF-IDF and cosine similarity for deduplication\nvectorizer = TfidfVectorizer()\ntfidf_matrix = vectorizer.fit_transform(data_with_info)\ncosine_sim = cosine_similarity(tfidf_matrix, tfidf_matrix)",
"# Grouping similar data using cosine similarity threshold\nthreshold = 0.95\ngroups = []\nvisited = set()\nfor i in range(len(data_with_info)):\n if i not in visited:\n group = [i]\n URL(i)\n for j in range(i+1, len(data_with_info)):\n if cosine_sim[i][j] > threshold:\n URL(j)\n URL(j)\n URL(group)",
"# Selecting representative from each group\ndeduplicated_data = [data_with_info[group[0]] for group in groups]",
"# Step 3: Publish Dataset on Hugging Face",
"# Creating Dataset object using the deduplicated data\ndataset_dict = {\"text\": deduplicated_data}\ndataset = Dataset.from_dict(dataset_dict)",
"# Uploading dataset to Hugging Face",
"# Follow Hugging Face documentation for uploading dataset",
"# (URL",
"# Step 4: Fine-Tune GPT-2 Model",
"# Load the pre-trained GPT-2 model and tokenizer\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")",
"# Tokenize the dataset\ntokenized_dataset = URL(lambda examples: tokenizer(examples[\"text\"], truncation=True, padding=True), batched=True)",
"# Define training arguments\ntraining_args = TrainingArguments(\n per_device_train_batch_size=4,\n num_train_epochs=3,\n logging_dir='./logs',\n overwrite_output_dir=True,\n)",
"# Define Trainer\ntrainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=tokenized_dataset[\"train\"],\n)",
"# Fine-tune the model\nURL()",
"# Step 5: Save the Fine-Tuned Model\nmodel.save_pretrained(\"fine_tuned_gpt2_model\")\ntokenizer.save_pretrained(\"fine_tuned_gpt2_model\")"
] |
null |
mlx
|
# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
{"tags": ["mlx"]}
|
GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2-mlx
| null |
[
"mlx",
"safetensors",
"qwen2",
"region:us"
] | null |
2024-04-12T18:18:08+00:00
|
[] |
[] |
TAGS
#mlx #safetensors #qwen2 #region-us
|
# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2-mlx
This quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2']().
Refer to the original model card for more details on the model.
## Use with mlx
|
[
"# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
[
"TAGS\n#mlx #safetensors #qwen2 #region-us \n",
"# GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/Qwen-1.5-1.8B-Chat-layer-mix-bpw-2.2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0471
- Rouge1: 35.9205
- Rouge2: 22.7367
- Rougel: 32.7559
- Rougelsum: 32.5835
## Model description
More information needed
## Intended uses & limitations
More information needed
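A minimal inference sketch (hedged; it assumes the checkpoint is hosted under the repo id given in this card's metadata, `JohnDoe70/mt5-small-finetuned-amazon-en-es`, and the review text is an arbitrary example):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="JohnDoe70/mt5-small-finetuned-amazon-en-es")

review = "I bought this e-reader for my daughter and she loves it. The battery lasts for weeks and it arrived quickly."
print(summarizer(review, max_length=30)[0]["summary_text"])
```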
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
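Expressed as code, these settings correspond roughly to the following hedged reconstruction (`output_dir` and any options not listed above are placeholders):

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the listed hyperparameters; unlisted options are placeholders
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-amazon-en-es",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```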
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.3604 | 1.0 | 97 | 3.7707 | 37.8594 | 24.1486 | 33.9059 | 34.3919 |
| 4.7306 | 2.0 | 194 | 2.9997 | 35.0984 | 22.3056 | 31.7269 | 32.1465 |
| 3.5847 | 3.0 | 291 | 2.1937 | 29.695 | 17.4061 | 26.5237 | 26.9075 |
| 2.9164 | 4.0 | 388 | 1.7267 | 31.2343 | 18.1163 | 28.5381 | 28.5461 |
| 2.6073 | 5.0 | 485 | 1.4625 | 30.7086 | 17.9701 | 28.4382 | 28.3002 |
| 2.3667 | 6.0 | 582 | 1.2163 | 33.4671 | 20.5523 | 30.5306 | 30.2693 |
| 2.2182 | 7.0 | 679 | 1.1063 | 34.406 | 21.7252 | 31.6548 | 31.4595 |
| 2.1638 | 8.0 | 776 | 1.0471 | 35.9205 | 22.7367 | 32.7559 | 32.5835 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/mt5-small", "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
JohnDoe70/mt5-small-finetuned-amazon-en-es
| null |
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:18:13+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #mt5 #text2text-generation #summarization #generated_from_trainer #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-small-finetuned-amazon-en-es
================================
This model is a fine-tuned version of google/mt5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0471
* Rouge1: 35.9205
* Rouge2: 22.7367
* Rougel: 32.7559
* Rougelsum: 32.5835
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #mt5 #text2text-generation #summarization #generated_from_trainer #base_model-google/mt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="samzapo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
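The snippet above assumes a `load_from_hub` helper and a `gym` import (e.g. `import gymnasium as gym`) that this card does not define. One possible sketch of the helper (an assumption, not the author's code; it presumes the repo stores a pickled dict with keys such as `env_id`):

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```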
|
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
samzapo/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-12T18:20:58+00:00
|
[] |
[] |
TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1 .
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** cackerman
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
|
cackerman/rewrites_mistral7unsloth_4bit_ft_full
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:23:36+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: cackerman
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: cackerman\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: cackerman\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
artixjain/diff_instr_model_4
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:24:57+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
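For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. A minimal sketch of the idea (illustrative only; mergekit's actual implementation handles additional edge cases and dtype management):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of equal shape."""
    a, b = v0.ravel(), v1.ravel()
    cos_omega = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if np.sin(omega) < eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    w0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    w1 = np.sin(t * omega) / np.sin(omega)
    return (w0 * a + w1 * b).reshape(v0.shape)
```

With the V-shaped `t` schedule above, layers near the input and output stay close to the Hermes base (`t ≈ 0`) while the middle layers lean towards WizardMath (`t ≈ 1`).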
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["WizardLM/WizardMath-7B-V1.1", "NousResearch/Hermes-2-Pro-Mistral-7B"]}
|
mergekit-community/mergekit-slerp-nfoezyj
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:26:27+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* WizardLM/WizardMath-7B-V1.1
* NousResearch/Hermes-2-Pro-Mistral-7B
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
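In the absence of an official snippet, a minimal sketch, assuming the checkpoint loads with the standard text-generation pipeline (the model id is taken from this repository; everything else is a placeholder):

```python
# pip install transformers accelerate
from transformers import pipeline

generator = pipeline("text-generation", model="skitzobaka/gemma_sft_model", device_map="auto")
print(generator("Explain supervised fine-tuning in one sentence.", max_new_tokens=64)[0]["generated_text"])
```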
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
skitzobaka/gemma_sft_model
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:26:53+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
This model is a test of training on Intel Habana Gaudi-2 chips: it was trained across 8x Gaudi-2 chips with DDP and DeepSpeed on the OpenAssistant dataset.
# Prompt Format
This model uses ChatML as the prompt format.
```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
```
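Because the format above is plain ChatML, the prompt can be assembled with a few lines of Python (a sketch; if the tokenizer ships a chat template, `tokenizer.apply_chat_template` achieves the same thing):

```python
def build_chatml_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts, e.g. system/user turns."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"   # leave the assistant turn open for generation

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant for Python which outputs in Markdown format."},
    {"role": "user", "content": "Write a function to calculate the Fibonacci sequence"},
])
```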
|
{"license": "apache-2.0", "datasets": ["timdettmers/openassistant-guanaco"]}
|
ndavidson/Phi-2-openassistant
| null |
[
"safetensors",
"optimum_habana",
"dataset:timdettmers/openassistant-guanaco",
"license:apache-2.0",
"region:us"
] | null |
2024-04-12T18:27:38+00:00
|
[] |
[] |
TAGS
#safetensors #optimum_habana #dataset-timdettmers/openassistant-guanaco #license-apache-2.0 #region-us
|
This model is a test of training on Intel Habana Gaudi-2 chips: it was trained across 8x Gaudi-2 chips with DDP and DeepSpeed on the OpenAssistant dataset.
# Prompt Format
This model uses ChatML as the prompt format.
|
[
"# Prompt Format\n\nThis model uses ChatML as the prompt format."
] |
[
"TAGS\n#safetensors #optimum_habana #dataset-timdettmers/openassistant-guanaco #license-apache-2.0 #region-us \n",
"# Prompt Format\n\nThis model uses ChatML as the prompt format."
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
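These settings map directly onto `Seq2SeqTrainingArguments`; a sketch of an equivalent configuration (the output directory is a placeholder, and the Adam betas/epsilon listed above are the Trainer defaults):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-xsum",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                              # "Native AMP" mixed precision
)
```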
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.8067 | 1.0 | 12753 | 3.5393 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "base_model": "t5-small", "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
|
edithram23/t5-small-finetuned-xsum
| null |
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:28:52+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #dataset-xsum #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-xsum
=======================
This model is a fine-tuned version of t5-small on the xsum dataset.
It achieves the following results on the evaluation set:
* Loss: 3.5393
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #dataset-xsum #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="samzapo/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
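Continuing from the snippet above, the loaded Q-table can be rolled out greedily (a sketch assuming the pickled dict stores the table under `"qtable"`, as in the Deep RL course notebooks, and a Gymnasium-style step API):

```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))     # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("episode return:", total_reward)
```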
|
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
|
samzapo/Taxi-v3
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-12T18:28:59+00:00
|
[] |
[] |
TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
|
[
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
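A minimal inference sketch, assuming the checkpoint works with the standard question-answering pipeline (the question and context below are made up for illustration):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="noushsuon/distilbert-finetuned-squad")
result = qa(
    question="Which base model was fine-tuned?",
    context="distilbert-finetuned-squad is a fine-tuned version of distilbert-base-cased.",
)
print(result["answer"], result["score"])
```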
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-cased", "model-index": [{"name": "distilbert-finetuned-squad", "results": []}]}
|
noushsuon/distilbert-finetuned-squad
| null |
[
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:31:23+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-finetuned-squad
This model is a fine-tuned version of distilbert-base-cased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.0
|
[
"# distilbert-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-cased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.0"
] |
[
"TAGS\n#transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-cased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdp-aa-classifier-synth-limited
This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2051
- Accuracy: 0.9071
- F1: 0.9065
- Precision: 0.9130
- Recall: 0.9
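As a quick sanity check, the reported F1 is consistent with the reported precision and recall (F1 is their harmonic mean):

```python
precision, recall = 0.9130, 0.9
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9065, matching the value above
```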
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6913 | 1.0 | 48 | 0.6872 | 0.6143 | 0.6197 | 0.6111 | 0.6286 |
| 0.678 | 2.0 | 96 | 0.6755 | 0.6857 | 0.6812 | 0.6912 | 0.6714 |
| 0.6628 | 3.0 | 144 | 0.6602 | 0.6571 | 0.6522 | 0.6618 | 0.6429 |
| 0.6426 | 4.0 | 192 | 0.6382 | 0.6857 | 0.6765 | 0.6970 | 0.6571 |
| 0.6129 | 5.0 | 240 | 0.6048 | 0.7 | 0.7 | 0.7 | 0.7 |
| 0.582 | 6.0 | 288 | 0.5709 | 0.7071 | 0.7050 | 0.7101 | 0.7 |
| 0.5495 | 7.0 | 336 | 0.5393 | 0.7143 | 0.7101 | 0.7206 | 0.7 |
| 0.5191 | 8.0 | 384 | 0.5051 | 0.7429 | 0.75 | 0.7297 | 0.7714 |
| 0.4867 | 9.0 | 432 | 0.4742 | 0.7571 | 0.7606 | 0.75 | 0.7714 |
| 0.4576 | 10.0 | 480 | 0.4472 | 0.8 | 0.7941 | 0.8182 | 0.7714 |
| 0.4304 | 11.0 | 528 | 0.4177 | 0.7786 | 0.7801 | 0.7746 | 0.7857 |
| 0.401 | 12.0 | 576 | 0.3912 | 0.8071 | 0.8029 | 0.8209 | 0.7857 |
| 0.3746 | 13.0 | 624 | 0.3674 | 0.8214 | 0.8201 | 0.8261 | 0.8143 |
| 0.3547 | 14.0 | 672 | 0.3478 | 0.8214 | 0.8201 | 0.8261 | 0.8143 |
| 0.3304 | 15.0 | 720 | 0.3277 | 0.8214 | 0.8201 | 0.8261 | 0.8143 |
| 0.3149 | 16.0 | 768 | 0.3057 | 0.8429 | 0.8451 | 0.8333 | 0.8571 |
| 0.2892 | 17.0 | 816 | 0.2916 | 0.8429 | 0.8451 | 0.8333 | 0.8571 |
| 0.272 | 18.0 | 864 | 0.2817 | 0.8571 | 0.8611 | 0.8378 | 0.8857 |
| 0.2524 | 19.0 | 912 | 0.2693 | 0.8643 | 0.8671 | 0.8493 | 0.8857 |
| 0.2412 | 20.0 | 960 | 0.2596 | 0.8714 | 0.8732 | 0.8611 | 0.8857 |
| 0.2285 | 21.0 | 1008 | 0.2524 | 0.8786 | 0.8811 | 0.8630 | 0.9 |
| 0.2265 | 22.0 | 1056 | 0.2435 | 0.8786 | 0.8811 | 0.8630 | 0.9 |
| 0.2025 | 23.0 | 1104 | 0.2382 | 0.8786 | 0.8811 | 0.8630 | 0.9 |
| 0.1975 | 24.0 | 1152 | 0.2323 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1917 | 25.0 | 1200 | 0.2273 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1791 | 26.0 | 1248 | 0.2234 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1715 | 27.0 | 1296 | 0.2215 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1622 | 28.0 | 1344 | 0.2169 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1618 | 29.0 | 1392 | 0.2141 | 0.9143 | 0.9143 | 0.9143 | 0.9143 |
| 0.1554 | 30.0 | 1440 | 0.2126 | 0.9143 | 0.9143 | 0.9143 | 0.9143 |
| 0.1515 | 31.0 | 1488 | 0.2111 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1433 | 32.0 | 1536 | 0.2087 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1455 | 33.0 | 1584 | 0.2081 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1436 | 34.0 | 1632 | 0.2073 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1413 | 35.0 | 1680 | 0.2063 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1377 | 36.0 | 1728 | 0.2058 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1392 | 37.0 | 1776 | 0.2053 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1383 | 38.0 | 1824 | 0.2053 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1334 | 39.0 | 1872 | 0.2051 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
| 0.1272 | 40.0 | 1920 | 0.2051 | 0.9071 | 0.9065 | 0.9130 | 0.9 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "alex-miller/ODABert", "model-index": [{"name": "cdp-aa-classifier-synth-limited", "results": []}]}
|
alex-miller/cdp-aa-classifier-synth-limited
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:alex-miller/ODABert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:33:59+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
cdp-aa-classifier-synth-limited
===============================
This model is a fine-tuned version of alex-miller/ODABert on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2051
* Accuracy: 0.9071
* F1: 0.9065
* Precision: 0.9130
* Recall: 0.9
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 40
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.0.1
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 40",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-alex-miller/ODABert #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 40",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1536
- Accuracy: 0.8203
- Precision: 0.8212
- Recall: 0.8203
- Precision Macro: 0.7660
- Recall Macro: 0.7548
- Macro Fpr: 0.0156
- Weighted Fpr: 0.0150
- Weighted Specificity: 0.9766
- Macro Specificity: 0.9867
- Weighted Sensitivity: 0.8242
- Macro Sensitivity: 0.7548
- F1 Micro: 0.8242
- F1 Macro: 0.7566
- F1 Weighted: 0.8221
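Macro specificity and sensitivity are less commonly reported than precision and recall; one standard way to compute them from a multiclass confusion matrix is sketched below (illustrative only; the exact aggregation used for this card may differ):

```python
import numpy as np

def macro_specificity_sensitivity(cm: np.ndarray):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)      # per-class recall
    specificity = tn / (tn + fp)      # per-class true-negative rate
    return specificity.mean(), sensitivity.mean()
```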
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.1096 | 1.0 | 643 | 0.6748 | 0.7978 | 0.7855 | 0.7978 | 0.6239 | 0.6340 | 0.0188 | 0.0178 | 0.9702 | 0.9845 | 0.7978 | 0.6340 | 0.7978 | 0.6134 | 0.7840 |
| 0.6187 | 2.0 | 1286 | 0.6449 | 0.8110 | 0.8196 | 0.8110 | 0.7806 | 0.7327 | 0.0169 | 0.0164 | 0.9755 | 0.9858 | 0.8110 | 0.7327 | 0.8110 | 0.7268 | 0.8090 |
| 0.4747 | 3.0 | 1929 | 0.8151 | 0.8149 | 0.8192 | 0.8149 | 0.7659 | 0.7390 | 0.0166 | 0.0160 | 0.9761 | 0.9861 | 0.8149 | 0.7390 | 0.8149 | 0.7370 | 0.8125 |
| 0.2645 | 4.0 | 2572 | 0.9345 | 0.8218 | 0.8198 | 0.8218 | 0.7446 | 0.7413 | 0.0158 | 0.0152 | 0.9774 | 0.9866 | 0.8218 | 0.7413 | 0.8218 | 0.7385 | 0.8189 |
| 0.1901 | 5.0 | 3215 | 1.0929 | 0.8195 | 0.8242 | 0.8195 | 0.8264 | 0.7432 | 0.0161 | 0.0155 | 0.9750 | 0.9863 | 0.8195 | 0.7432 | 0.8195 | 0.7595 | 0.8166 |
| 0.1131 | 6.0 | 3858 | 1.1536 | 0.8203 | 0.8212 | 0.8203 | 0.7968 | 0.7786 | 0.0159 | 0.0154 | 0.9766 | 0.9865 | 0.8203 | 0.7786 | 0.8203 | 0.7840 | 0.8197 |
| 0.063 | 7.0 | 4501 | 1.3218 | 0.8118 | 0.8184 | 0.8118 | 0.7518 | 0.7526 | 0.0166 | 0.0163 | 0.9773 | 0.9859 | 0.8118 | 0.7526 | 0.8118 | 0.7495 | 0.8136 |
| 0.0264 | 8.0 | 5144 | 1.3863 | 0.8257 | 0.8262 | 0.8257 | 0.7784 | 0.7768 | 0.0155 | 0.0149 | 0.9768 | 0.9868 | 0.8257 | 0.7768 | 0.8257 | 0.7730 | 0.8247 |
| 0.03 | 9.0 | 5787 | 1.5542 | 0.8079 | 0.8167 | 0.8079 | 0.7639 | 0.7653 | 0.0172 | 0.0167 | 0.9744 | 0.9855 | 0.8079 | 0.7653 | 0.8079 | 0.7595 | 0.8096 |
| 0.0149 | 10.0 | 6430 | 1.5835 | 0.8141 | 0.8155 | 0.8141 | 0.7545 | 0.7361 | 0.0168 | 0.0160 | 0.9730 | 0.9858 | 0.8141 | 0.7361 | 0.8141 | 0.7412 | 0.8127 |
| 0.005 | 11.0 | 7073 | 1.5325 | 0.8242 | 0.8250 | 0.8242 | 0.7805 | 0.7812 | 0.0156 | 0.0150 | 0.9758 | 0.9867 | 0.8242 | 0.7812 | 0.8242 | 0.7681 | 0.8226 |
| 0.003 | 12.0 | 7716 | 1.5714 | 0.8288 | 0.8299 | 0.8288 | 0.7701 | 0.7679 | 0.0152 | 0.0145 | 0.9765 | 0.9870 | 0.8288 | 0.7679 | 0.8288 | 0.7626 | 0.8276 |
| 0.0033 | 13.0 | 8359 | 1.5511 | 0.8249 | 0.8219 | 0.8249 | 0.7676 | 0.7598 | 0.0156 | 0.0149 | 0.9760 | 0.9867 | 0.8249 | 0.7598 | 0.8249 | 0.7608 | 0.8225 |
| 0.0018 | 14.0 | 9002 | 1.5510 | 0.8249 | 0.8225 | 0.8249 | 0.7686 | 0.7554 | 0.0155 | 0.0149 | 0.9767 | 0.9868 | 0.8249 | 0.7554 | 0.8249 | 0.7572 | 0.8224 |
| 0.0008 | 15.0 | 9645 | 1.5469 | 0.8242 | 0.8220 | 0.8242 | 0.7660 | 0.7548 | 0.0156 | 0.0150 | 0.9766 | 0.9867 | 0.8242 | 0.7548 | 0.8242 | 0.7566 | 0.8221 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
{"license": "cc-by-sa-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "nlpaueb/legal-bert-base-uncased", "model-index": [{"name": "legal-bert-base-uncased", "results": []}]}
|
xshubhamx/legal-bert-base-uncased
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:35:14+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-nlpaueb/legal-bert-base-uncased #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
legal-bert-base-uncased
=======================
This model is a fine-tuned version of nlpaueb/legal-bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1536
* Accuracy: 0.8203
* Precision: 0.8212
* Recall: 0.8203
* Precision Macro: 0.7660
* Recall Macro: 0.7548
* Macro Fpr: 0.0156
* Weighted Fpr: 0.0150
* Weighted Specificity: 0.9766
* Macro Specificity: 0.9867
* Weighted Sensitivity: 0.8242
* Macro Sensitivity: 0.7548
* F1 Micro: 0.8242
* F1 Macro: 0.7566
* F1 Weighted: 0.8221
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.1.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-nlpaueb/legal-bert-base-uncased #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 141B-A35B
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs.
> [!NOTE]
> This model was trained collaboratively between Argilla, KAIST, and Hugging Face
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English.
- **License:** Apache 2.0
- **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized
## Performance
Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.
| Model | MT Bench | IFEval | BBH | AGIEval |
|-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:|
| [zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 |
| [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 |
| [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 |
## Intended uses & limitations
The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install 'transformers>=4.39.3'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "You are Zephyr, a helpful assistant.",
},
{"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`) are also unknown; however, it likely included a mix of web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
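The run itself was produced with the alignment-handbook recipes linked above; for orientation, a rough sketch of the same setup with TRL's `ORPOTrainer` (argument names reflect trl ≈ 0.8.x, the column mapping of the preference dataset is assumed, and loading an 8x22B MoE obviously requires multi-node hardware):

```python
# pip install "trl>=0.8.2" transformers datasets
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# ORPO trains on (prompt, chosen, rejected) preference pairs.
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

args = ORPOConfig(
    output_dir="zephyr-orpo-141b-sketch",   # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    num_train_epochs=3,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    bf16=True,
)
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```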
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
## Citation
If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:
```
@misc{hong2024orpo,
title={ORPO: Monolithic Preference Optimization without Reference Model},
author={Jiwoo Hong and Noah Lee and James Thorne},
year={2024},
eprint={2403.07691},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
You may also wish to cite the creators of this model:
```
@misc{zephyr_141b,
author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall},
title = {Zephyr 141B A35B},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}}
}
```
|
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
|
blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"arxiv:2403.07691",
"arxiv:2311.07911",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-12T18:37:25+00:00
|
[
"2403.07691",
"2311.07911"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
<img src="URL" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Model Card for Zephyr 141B-A35B
===============================
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of mistral-community/Mixtral-8x22B-v0.1 that was trained using a novel alignment algorithm called Odds Ratio Preference Optimization (ORPO) with 7k instances for 1.3 hours on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the 'argilla/distilabel-capybara-dpo-7k-binarized' preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs.
>
> [!NOTE]
> This model was trained collaboratively between Argilla, KAIST, and Hugging Face
>
>
>
Model Details
-------------
### Model Description
* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.
* Language(s) (NLP): Primarily English.
* License: Apache 2.0
* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1
### Model Sources
* Repository: URL
* Dataset: URL
Performance
-----------
Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.
Intended uses & limitations
---------------------------
The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:
Bias, Risks, and Limitations
----------------------------
Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model ('mistral-community/Mixtral-8x22B-v0.1') are also unknown; however, it likely included a mix of web data and technical sources like books and code. See the Falcon 180B model card for an example of this.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 32
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: inverse\_sqrt
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:
You may also wish to cite the creators of this model:
|
[
"### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1",
"### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1",
"### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_precision: 0.4837
- eval_recall: 0.5349
- eval_f1: 0.5080
- eval_accuracy: 0.9438
- eval_loss: 0.1802
- eval_runtime: 14.3435
- eval_samples_per_second: 65.605
- eval_steps_per_second: 1.046
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
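No usage snippet is given in the card, so here is a hedged sketch of how this checkpoint could be tried for token classification. The repo id `SweetZiyi/results` comes from this dump's metadata; the example sentence and the assumption that the pushed checkpoint loads with a token-classification head and a meaningful label map are mine.
```python
from transformers import pipeline

# Hedged sketch: assumes "SweetZiyi/results" exposes a token-classification head.
ner = pipeline(
    "token-classification",
    model="SweetZiyi/results",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)
print(ner("The patient was treated with metformin for type 2 diabetes."))
```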
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/biogpt", "model-index": [{"name": "results", "results": []}]}
|
SweetZiyi/results
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:37:37+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# results
This model is a fine-tuned version of microsoft/biogpt on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_precision: 0.4837
- eval_recall: 0.5349
- eval_f1: 0.5080
- eval_accuracy: 0.9438
- eval_loss: 0.1802
- eval_runtime: 14.3435
- eval_samples_per_second: 65.605
- eval_steps_per_second: 1.046
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results\n\nThis model is a fine-tuned version of microsoft/biogpt on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_precision: 0.4837\n- eval_recall: 0.5349\n- eval_f1: 0.5080\n- eval_accuracy: 0.9438\n- eval_loss: 0.1802\n- eval_runtime: 14.3435\n- eval_samples_per_second: 65.605\n- eval_steps_per_second: 1.046\n- step: 0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# results\n\nThis model is a fine-tuned version of microsoft/biogpt on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_precision: 0.4837\n- eval_recall: 0.5349\n- eval_f1: 0.5080\n- eval_accuracy: 0.9438\n- eval_loss: 0.1802\n- eval_runtime: 14.3435\n- eval_samples_per_second: 65.605\n- eval_steps_per_second: 1.046\n- step: 0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/saucam/Orpomis-Prime-7B
<!-- provided-files -->
Weighted/imatrix quants are not available from me at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
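As a concrete illustration (not part of the original card), here is a minimal llama-cpp-python sketch. The local filename, context size, and GPU-offload setting are assumptions, and `create_chat_completion` relies on whatever chat template is embedded in the GGUF metadata, if any.
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Orpomis-Prime-7B.Q4_K_M.gguf",  # assumed local path to a downloaded quant
    n_ctx=4096,                                  # assumed context window
    n_gpu_layers=-1,                             # offload all layers if built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```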
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orpomis-Prime-7B-GGUF/resolve/main/Orpomis-Prime-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit"], "base_model": "saucam/Orpomis-Prime-7B", "quantized_by": "mradermacher"}
|
mradermacher/Orpomis-Prime-7B-GGUF
| null |
[
"transformers",
"gguf",
"merge",
"mergekit",
"en",
"base_model:saucam/Orpomis-Prime-7B",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:40:08+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #merge #mergekit #en #base_model-saucam/Orpomis-Prime-7B #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #merge #mergekit #en #base_model-saucam/Orpomis-Prime-7B #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
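Since the card leaves this section empty, here is a minimal, hedged sketch of pulling sentence embeddings from this BERT-style feature-extraction checkpoint. The repo id comes from this dump's metadata; mean pooling over the last hidden state is an assumption, as the card does not document a pooling strategy.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Juniplayground/juniper-mxbai-embed-large-v1-v10"  # id taken from this dump's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(["What does this model embed?"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (batch, seq_len, hidden_dim)

# Assumed pooling: attention-mask-weighted mean over tokens.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```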
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Juniplayground/juniper-mxbai-embed-large-v1-v10
| null |
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:40:30+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Psoriasis-Project-M-beit-base-patch16-224-pt22k-ft22k
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1695
- Accuracy: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
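No inference example is provided; the sketch below shows one hedged way to query the fine-tuned classifier. The repo id is taken from this dump's metadata, the image path is a placeholder, and the label names depend on the (undocumented) training dataset.
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="ahmedesmail16/Psoriasis-Project-M-beit-base-patch16-224-pt22k-ft22k",
)
# "skin_sample.jpg" is a placeholder path; any local image file or image URL works.
print(clf("skin_sample.jpg", top_k=3))
```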
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 0.3253 | 0.8958 |
| 0.0442 | 2.0 | 13 | 0.2781 | 0.9167 |
| 0.0442 | 2.92 | 19 | 0.2423 | 0.9167 |
| 0.0205 | 4.0 | 26 | 0.2234 | 0.9375 |
| 0.0073 | 4.92 | 32 | 0.1979 | 0.9583 |
| 0.0073 | 6.0 | 39 | 0.1701 | 0.9792 |
| 0.0055 | 6.92 | 45 | 0.1599 | 0.9792 |
| 0.0071 | 8.0 | 52 | 0.1710 | 0.9792 |
| 0.0071 | 8.92 | 58 | 0.1704 | 0.9792 |
| 0.003 | 9.23 | 60 | 0.1695 | 0.9792 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/beit-base-patch16-224-pt22k-ft22k", "model-index": [{"name": "Psoriasis-Project-M-beit-base-patch16-224-pt22k-ft22k", "results": []}]}
|
ahmedesmail16/Psoriasis-Project-M-beit-base-patch16-224-pt22k-ft22k
| null |
[
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224-pt22k-ft22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:41:13+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #beit #image-classification #generated_from_trainer #base_model-microsoft/beit-base-patch16-224-pt22k-ft22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Psoriasis-Project-M-beit-base-patch16-224-pt22k-ft22k
=====================================================
This model is a fine-tuned version of microsoft/beit-base-patch16-224-pt22k-ft22k on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1695
* Accuracy: 0.9792
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #beit #image-classification #generated_from_trainer #base_model-microsoft/beit-base-patch16-224-pt22k-ft22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
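The card leaves this section empty, so here is a hedged sketch of loading the 4-bit GPTQ checkpoint through Transformers. The repo id is taken from this dump's metadata, and the snippet assumes `optimum` plus a GPTQ kernel package (e.g. `auto-gptq`) are installed and a CUDA device is available.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EdBerg/opt-125m-gptq-4bit"  # id taken from this dump's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers dispatches to its GPTQ integration based on the quantization config
# stored in the repo; this assumes optimum + auto-gptq are installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```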
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
EdBerg/opt-125m-gptq-4bit
| null |
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-12T18:43:18+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #opt #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #opt #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|

# Orpomis-Prime-7B-dare
Orpomis-Prime-7B-dare is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):
* [kaist-ai/mistral-orpo-beta](https://huggingface.co/kaist-ai/mistral-orpo-beta)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
## 🧩 Configuration
```yaml
name: Orpomis-Prime-7B-dare
models:
- model: kaist-ai/mistral-orpo-beta
parameters:
density: 0.5
weight: 0.6
# No parameters necessary for base model
- model: NousResearch/Hermes-2-Pro-Mistral-7B
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: kaist-ai/mistral-orpo-beta
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Orpomis-Prime-7B-dare"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Sample Run
```
<|user|>
Beth bakes 4, or 2 dozen batches of cookies in a week. If these cookies are shared amongst 16 people equally, how many cookies does each person consume?</s>
<|assistant|>
If Beth bakes 4, or 2 dozen batches of cookies in a week, then she bakes a total of 2 x 4 = 8 dozen cookies.
If these cookies are shared amongst 16 people equally, each person would consume 8 dozen cookies / 16 people = 0.5 dozen cookies.
Since there are 12 cookies in a dozen, each person would consume 0.5 dozen cookies * 12 cookies/dozen = 6 cookies.
So, each person consumes 6 cookies.
```
|
{"tags": ["merge", "mergekit", "kaist-ai/mistral-orpo-beta", "NousResearch/Hermes-2-Pro-Mistral-7B"], "base_model": ["kaist-ai/mistral-orpo-beta", "NousResearch/Hermes-2-Pro-Mistral-7B"]}
|
saucam/Orpomis-Prime-7B-dare
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"kaist-ai/mistral-orpo-beta",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"conversational",
"base_model:kaist-ai/mistral-orpo-beta",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-12T18:44:22+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #kaist-ai/mistral-orpo-beta #NousResearch/Hermes-2-Pro-Mistral-7B #conversational #base_model-kaist-ai/mistral-orpo-beta #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
 on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7015
- Accuracy: 0.8456
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
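No usage example is included; below is a hedged sketch of scoring a sentence pair with this fine-tuned MRPC-style classifier. The repo id comes from this dump's metadata, the example sentences are mine, and the returned labels may be the generic `LABEL_0`/`LABEL_1` if no `id2label` mapping was saved with the checkpoint.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-jdgalvan",
)
# Sentence pairs are passed as a dict with "text" and "text_pair" keys.
print(clf({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Quarterly profits at the company reached an all-time high.",
}))
```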
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.528 | 1.09 | 500 | 0.4319 | 0.8358 | 0.8827 |
| 0.338 | 2.18 | 1000 | 0.7015 | 0.8456 | 0.8901 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilroberta-base", "model-index": [{"name": "platzi-distilroberta-base-mrpc-glue-jdgalvan", "results": []}]}
|
platzi/platzi-distilroberta-base-mrpc-glue-jdgalvan
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-12T18:47:05+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
platzi-distilroberta-base-mrpc-glue-jdgalvan
============================================
This model is a fine-tuned version of distilroberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7015
* Accuracy: 0.8456
* F1: 0.8901
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |