pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0-18.3M) | metadata (stringlengths 2-1.07B) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25)
---|---|---|---|---|---|---|---|---
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.01_4iters_bs256_nodpo_only4w_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_1](https://huggingface.co/ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
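As a quick consistency check, the effective batch sizes follow from the per-device settings:
```
total_train_batch_size = train_batch_size × num_devices × gradient_accumulation_steps
                       = 8 × 8 × 4 = 256
total_eval_batch_size  = eval_batch_size × num_devices
                       = 8 × 8 = 64
```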
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_1", "model-index": [{"name": "0.01_4iters_bs256_nodpo_only4w_iter_2", "results": []}]} | ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:01:57+00:00 |
null | null | {} | RyotaKadoya1993/MoE_Test_V2 | null | [
"region:us"
]
| null | 2024-04-27T14:02:41+00:00 |
|
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"} | cgihlstorf/NEW_finetuned_Mistral-7B32_1_0.0003_sequential | null | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2024-04-27T14:02:47+00:00 |
text-generation | transformers | # Experto-4X8B-untrained
Experto-4X8B-untrained is a merge of the following models using [mergoo](https://github.com/Leeroo-AI/mergoo/tree/main):
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
* [dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2)
## 🧩 Configuration
```json
```
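The configuration block above is empty in the source. As a rough illustration only, a mergoo composition config for fully fine-tuned Llama experts typically takes the following shape; the key names follow the mergoo README, and every value here is an assumption rather than the settings actually used for this merge:
```python
import torch
from mergoo.compose_experts import ComposeExperts

# Hypothetical composition config; the expert list mirrors the models above.
config = {
    "model_type": "llama",
    "num_experts_per_tok": 2,
    "experts": [
        {"expert_name": "base_expert", "model_id": "meta-llama/Meta-Llama-3-8B"},
        {"expert_name": "expert_1", "model_id": "cognitivecomputations/dolphin-2.9-llama3-8b"},
        {"expert_name": "expert_2", "model_id": "abacusai/Llama-3-Smaug-8B"},
        {"expert_name": "expert_3", "model_id": "Weyaxi/Einstein-v6.1-Llama3-8B"},
        {"expert_name": "expert_4", "model_id": "dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2"},
    ],
    # Newly initialized router layers; these are what still need training.
    "router_layers": ["gate_proj", "up_proj", "down_proj"],
}

composer = ComposeExperts(config, torch_dtype=torch.float16)
composer.compose()
composer.save_checkpoint("Experto-4X8B-untrained")
```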
WARNING: The router layers of this model are untrained; further training is required before use. | {} | saucam/Experto-4X8B-untrained | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:02:53+00:00 |
null | transformers |
# hus960/Llama-3-NeuralPaca-8b-Q4_K_M-GGUF
This model was converted to GGUF format from [`NeuralNovel/Llama-3-NeuralPaca-8b`](https://huggingface.co/NeuralNovel/Llama-3-NeuralPaca-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NeuralNovel/Llama-3-NeuralPaca-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/Llama-3-NeuralPaca-8b-Q4_K_M-GGUF --model llama-3-neuralpaca-8b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/Llama-3-NeuralPaca-8b-Q4_K_M-GGUF --model llama-3-neuralpaca-8b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-neuralpaca-8b.Q4_K_M.gguf -n 128
```
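For programmatic use, the same GGUF file can also be loaded through the llama-cpp-python bindings. A minimal sketch, assuming `llama-cpp-python` is installed and the `.gguf` file has been downloaded locally:
```python
from llama_cpp import Llama

# Local path is an assumption; download llama-3-neuralpaca-8b.Q4_K_M.gguf
# from this repository first.
llm = Llama(model_path="llama-3-neuralpaca-8b.Q4_K_M.gguf", n_ctx=2048)

output = llm("The meaning to life and the universe is", max_tokens=128)
print(output["choices"][0]["text"])
```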
| {"language": ["en"], "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-repo"], "datasets": ["tatsu-lab/alpaca"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "thumbnail": "https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/njn9I-gHjyq0lMyjF0lZF.jpeg"} | hus960/Llama-3-NeuralPaca-8b-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:tatsu-lab/alpaca",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:03:30+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** flyjin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | flyjin/lora_llama-3-8b-bnb-4bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:04:10+00:00 |
null | null | {"license": "mit"} | ArianH/test | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T14:04:48+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="shabboo96/session2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
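Once loaded, the agent can be rolled out greedily over its Q-table. A minimal sketch, assuming the pickled dictionary stores the table under a `"qtable"` key (as in the Deep RL course template) and the classic `gym` step API; with `gymnasium`, `reset` returns `(obs, info)` and `step` returns a five-tuple:
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0
while not done:
    # Greedy policy: take the highest-valued action for the current state.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```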
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "session2", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | shabboo96/session2 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:05:13+00:00 |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - mrtuandao/textual_inversion_corgi
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
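Until the snippet above is filled in, the standard diffusers textual-inversion flow can serve as a sketch; the placeholder token below is an assumption (check the repository files for the actual learned token):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the learned textual-inversion embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("mrtuandao/textual_inversion_corgi")

# "<corgi>" is a hypothetical placeholder token for the learned concept.
image = pipe("a photo of a <corgi> on the beach").images[0]
image.save("corgi.png")
```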
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true} | mrtuandao/textual_inversion_corgi | null | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| null | 2024-04-27T14:06:44+00:00 |
null | null | {} | HachiML/Llama2-mu-1.4M-lr0.0002 | null | [
"region:us"
]
| null | 2024-04-27T14:07:31+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | saransh03sharma/mintrec2-llama-2-13b-200 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:07:57+00:00 |
null | null | {} | HachiML/Llama2-mu-1.4M-lr0.0005 | null | [
"region:us"
]
| null | 2024-04-27T14:08:25+00:00 |
|
text-to-image | diffusers | {"license": "unknown"} | misri/anima_pencil-XL-v3.1.0 | null | [
"diffusers",
"safetensors",
"license:unknown",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| null | 2024-04-27T14:08:29+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="FitTechMike/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | FitTechMike/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:08:51+00:00 |
text-generation | transformers | {} | RyotaKadoya1993/MoE_Test_V3 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:09:30+00:00 |
|
null | null | {} | HachiML/Llama2-mu-1.4M-lr0.002 | null | [
"region:us"
]
| null | 2024-04-27T14:10:14+00:00 |
|
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-korean-summarization_JU_FineTune_AIHUB_Law
This model is a fine-tuned version of [eenzeenee/t5-small-korean-summarization](https://huggingface.co/eenzeenee/t5-small-korean-summarization) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"tags": ["summarization", "generated_from_trainer"], "base_model": "eenzeenee/t5-small-korean-summarization", "model-index": [{"name": "t5-small-korean-summarization_JU_FineTune_AIHUB_Law", "results": []}]} | dealing08/t5-small-korean-summarization_JU_FineTune_AIHUB_Law | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:eenzeenee/t5-small-korean-summarization",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:10:59+00:00 |
null | null | {} | HachiML/Llama2-mu-1.4M-lr0.005 | null | [
"region:us"
]
| null | 2024-04-27T14:11:09+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-large-model
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5285
- Train Accuracy: 0.6667
- Validation Loss: 0.6526
- Validation Accuracy: 1.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6872 | 0.6667 | 0.5984 | 1.0 | 0 |
| 0.5637 | 0.6667 | 0.6125 | 1.0 | 1 |
| 0.5285 | 0.6667 | 0.6526 | 1.0 | 2 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-large-uncased", "model-index": [{"name": "bert-large-model", "results": []}]} | Diluzx/bert-large-model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:11:27+00:00 |
null | null | {} | FitTechMike/q-FrozenLake-4x4-noSlippery | null | [
"region:us"
]
| null | 2024-04-27T14:11:43+00:00 |
|
token-classification | transformers | # OALZ/1788/Q1/NER
A named entity recognition (NER) system was trained on text extracted from the _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted are available at the [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1); the text was extracted using the strategy of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
## Annotations
Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub as [`oalz-1788-q1-ner-annotations`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations).
There are two versions of the dataset
- [`5a-generate-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5a-generate-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano
- [`5b-merge-documents`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5b-merge-documents) does not retain this split. The text was merged into one long text and annotation indices were adapted.
Note that both these directories contain three equivalent datasets each:
- a Huggingface/Arrow dataset, <sup>*</sup>
- a CSV, <sup>*</sup> and
- a JSONL file.
<sup>*</sup> The former two should be used together with `text.csv` to catch the context of the annotation. The latter JSONL file contains the full text.
The following categories were included in the annotation process:
| Tag | Label | Count | Total Length | Median Annotation Length | Mean Annotation Length | SD |
|:--------|:--------------|------:|-------------:|-------------------------:|-----------------------:|------:|
| `EVENT` | Event | 294 | 6,090 | 18 | 20.71 | 13.24 |
| `LOC` | Location | 2,449 | 24,417 | 9 | 9.97 | 6.21 |
| `MISC` | Miscellaneous | 2,585 | 50,654 | 14 | 19.60 | 19.63 |
| `ORG` | Organisation | 2,479 | 34,693 | 11 | 13.99 | 9.33 |
| `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
| `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
## NER models
Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, you would want to categorise the whole passage "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:
```json
{
"text": "Universität Salzburg",
"label": [[0, 20, "ORG"], [12, 20, "LOC"]]
}
```
To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create historical embeddings. Therefore, it is necessary to use that in order to use these NER models.
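Putting these two points together, a minimal inference sketch might look like the following; the `aggregation_strategy` choice is an assumption, while the model and tokeniser IDs are the ones linked in this card:
```python
from transformers import pipeline

text = "Universität Salzburg"
entities = []

# Run the passage through each label-specific classifier and pool the
# (possibly overlapping) entity spans.
for label in ["event", "loc", "misc", "org", "per", "time"]:
    ner = pipeline(
        "token-classification",
        model=f"LelViLamp/oalz-1788-q1-ner-{label}",
        tokenizer="dbmdz/bert-base-historic-multilingual-cased",
        aggregation_strategy="simple",
    )
    entities.extend(ner(text))

print(entities)
```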
The models' performance measures are as follows:
| Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
|:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
| [`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event) | 1 | `1393` | .021957 | .665233 | .343066 | .351528 | .995700 |
| [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc) | 1 | `1393` | .033602 | .829535 | .803648 | .814146 | .990999 |
| [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc) | 2 | `2786` | .123994 | .739221 | .503677 | .571298 | .968697 |
| [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org) | 1 | `1393` | .062769 | .744259 | .709738 | .726212 | .980288 |
| [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
| [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
## Acknowledgements
The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).
| {"language": ["de", "la", "fr", "en"], "tags": ["historical"], "task_categories": ["token-classification"], "pretty_name": "Annotations and models for named entity recognition on Oberdeutsche Allgemeine Litteraturzeitung of the first quarter of 1788"} | LelViLamp/oalz-1788-q1-ner-event | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"historical",
"de",
"la",
"fr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:12:20+00:00 |
text-generation | transformers |
# Saiga – Llama 3 8B – AdaQRound
Based on [Saiga Llama 3 8B](https://huggingface.co/IlyaGusev/saiga_llama3_8b).
Quantized with AdaQRound, which is a combination of [AdaRound](https://arxiv.org/abs/2004.10568) and [AdaQuant](https://arxiv.org/abs/2006.10518); the code implementation is based on [OmniQuant](https://github.com/OpenGVLab/OmniQuant).
## Evaluation
### PPL (↓)
| | wiki |
| ------------- | ----- |
| FP | 7.862 |
| **Quantized** | 8.272 |
### Accuracy on English Benchmarks, % (↑)
| | piqa | arc_easy | arc_challenge | boolq | hellaswag | winogrande | mmlu_humanities | mmlu_social_sciences | mmlu_stem | mmlu_other |
| ------------- | ---- | -------- | ------------- | ----- | --------- | ---------- | --------------- | -------------------- | --------- | ---------- |
| FP | 78.5 | 82.2 | 50.4 | 82.7 | 58.1 | 72.4 | 65.5 | 72.6 | 53.8 | 68.4 |
| **Quantized** | 78.2 | 81.6 | 49.9 | 81.9 | 57.2 | 71.7 | 63.7 | 69.5 | 51.6 | 66.9 |
### Accuracy on Russian Benchmarks, % (↑)
| | danetqa | terra | rwsd | muserc | rucos | lidirus | parus | rcb | russe | rucola |
| ------------- | ------- | ----- | ---- | ------ | ----- | ------- | ----- | ---- | ----- | ------ |
| FP | 74.9 | 52.1 | 51.5 | 55.9 | 58.1 | 59.5 | 69.0 | 34.1 | 38.8 | 67.5 |
| **Quantized** | 66.7 | 50.8 | 48.0 | 56.2 | 52.6 | 59.7 | 70.0 | 33.6 | 37.0 | 67.5 |
### Summary
| | Avg acc diff on Eng, % (↑) | Avg acc diff on Rus, % (↑) | Occupied disk space, % (↓) |
| ------------- | -------------------------- | -------------------------- | -------------------------- |
| FP | 0 | 0 | 100 |
| **Quantized** | \-1.2 | \-1.9 | 35.7 |
## Examples
### Imports and Model Loading
<details>
<summary>Expand</summary>
```python
import gc
import auto_gptq.nn_modules.qlinear.qlinear_cuda as qlinear_cuda
import auto_gptq.nn_modules.qlinear.qlinear_triton as qlinear_triton
import torch
from accelerate import (
init_empty_weights,
infer_auto_device_map,
load_checkpoint_in_model,
)
from tqdm import tqdm
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
pipeline,
)
def get_named_linears(model):
return {
name: module for name, module in model.named_modules()
if isinstance(module, torch.nn.Linear)
}
def set_module(model, name, module):
parent = model
levels = name.split('.')
for i in range(len(levels) - 1):
cur_name = levels[i]
if cur_name.isdigit():
parent = parent[int(cur_name)]
else:
parent = getattr(parent, cur_name)
setattr(parent, levels[-1], module)
def load_model(model_path):
# Based on: https://github.com/OpenGVLab/OmniQuant/blob/main/runing_quantized_mixtral_7bx8.ipynb
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
if not hasattr(config, 'quantization_config'):
raise AttributeError(
f'No quantization info found in model config "{model_path}"'
f' (`quantization_config` section is missing).'
)
wbits = config.quantization_config['bits']
group_size = config.quantization_config['group_size']
# We are going to init an ordinary model and then manually replace all Linears with QuantLinears
del config.quantization_config
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config=config, torch_dtype=torch.float16, trust_remote_code=True)
layers = model.model.layers
for i in tqdm(range(len(layers))):
layer = layers[i]
named_linears = get_named_linears(layer)
for name, module in named_linears.items():
params = (
wbits, group_size,
module.in_features, module.out_features,
module.bias is not None
)
if wbits in [2, 4]:
q_linear = qlinear_triton.QuantLinear(*params)
elif wbits == 3:
q_linear = qlinear_cuda.QuantLinear(*params)
else:
raise NotImplementedError("Only 2, 3 and 4 bits are supported.")
q_linear.to(next(layer.parameters()).device)
set_module(layer, name, q_linear)
torch.cuda.empty_cache()
gc.collect()
model.tie_weights()
device_map = infer_auto_device_map(model)
print("Loading pre-computed quantized weights...")
load_checkpoint_in_model(
model, checkpoint=model_path,
device_map=device_map, offload_state_dict=True,
)
print("Model loaded successfully!")
return model
```
</details>
### Inference
```python
model_path = "compressa-ai/Saiga-Llama-3-8B-AdaQRound"
model = load_model(model_path).cuda()
tokenizer = AutoTokenizer.from_pretrained(
model_path, use_fast=False, trust_remote_code=True
)
system_message = "Ты — дружелюбный чат-бот, который всегда отвечает как пират."
user_message = "Куда мы направляемся, капитан?"
messages = [
{"role": "system", "content": system_message},
{"role": "user", "content": user_message},
]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.cuda() for k, v in inputs.items()}
outputs = model.generate(
**inputs, max_new_tokens=512,
do_sample=True, temperature=0.7, top_p=0.95,
)
response = tokenizer.decode(outputs[0])
continuation = response.removeprefix(prompt).removesuffix(tokenizer.eos_token)
print(f'Prompt:\n{prompt}')
print(f'Continuation:\n{continuation}\n')
```
### Inference Using Pipeline
```python
pipe = pipeline(
"text-generation",
model=model, tokenizer=tokenizer,
max_new_tokens=512, do_sample=True,
temperature=0.7, top_p=0.95,
device=0,
)
prompt = pipe.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt)
response = outputs[0]["generated_text"]
continuation = response.removeprefix(prompt)
print(f'Prompt:\n{prompt}')
print(f'Continuation:\n{continuation}\n')
```
| {"language": ["ru"], "license": "other", "tags": ["saiga", "llama3", "adaround", "adaquant", "omniquant", "gptq", "triton"], "base_model": "IlyaGusev/saiga_llama3_8b", "model_type": "llama", "pipeline_tag": "text-generation", "quantized_by": "Compressa", "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license"} | compressa-ai/Saiga-Llama-3-8B-AdaQRound | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"saiga",
"llama3",
"adaround",
"adaquant",
"omniquant",
"gptq",
"triton",
"conversational",
"ru",
"arxiv:2004.10568",
"arxiv:2006.10518",
"base_model:IlyaGusev/saiga_llama3_8b",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-27T14:12:24+00:00 |
null | null | {} | HachiML/Llama2-mu-1.4M-lr0.02 | null | [
"region:us"
]
| null | 2024-04-27T14:12:57+00:00 |
|
token-classification | transformers | # OALZ/1788/Q1/NER
A named entity recognition (NER) system was trained on text extracted from the _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted are available at the [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1); the text was extracted using the strategy of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
## Annotations
Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub as [`oalz-1788-q1-ner-annotations`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations).
There are two versions of the dataset
- [`5a-generate-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5a-generate-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano
- [`5b-merge-documents`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5b-merge-documents) does not retain this split. The text was merged into one long text and annotation indices were adapted.
Note that both these directories contain three equivalent datasets each:
- a Huggingface/Arrow dataset, <sup>*</sup>
- a CSV, <sup>*</sup> and
- a JSONL file.
<sup>*</sup> The former two should be used together with `text.csv` to catch the context of the annotation. The latter JSONL file contains the full text.
The following categories were included in the annotation process:
| Tag | Label | Count | Total Length | Median Annotation Length | Mean Annotation Length | SD |
|:--------|:--------------|------:|-------------:|-------------------------:|-----------------------:|------:|
| `EVENT` | Event | 294 | 6,090 | 18 | 20.71 | 13.24 |
| `LOC` | Location | 2,449 | 24,417 | 9 | 9.97 | 6.21 |
| `MISC` | Miscellaneous | 2,585 | 50,654 | 14 | 19.60 | 19.63 |
| `ORG` | Organisation | 2,479 | 34,693 | 11 | 13.99 | 9.33 |
| `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
| `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
## NER models
Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, you would want to categorise the whole passage "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:
```json
{
"text": "Universität Salzburg",
"label": [[0, 20, "ORG"], [12, 20, "LOC"]]
}
```
To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create historical embeddings. Therefore, it is necessary to use that in order to use these NER models.
The models' performance measures are as follows:
| Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
|:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
| [`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event) | 1 | `1393` | .021957 | .665233 | .343066 | .351528 | .995700 |
| [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc) | 1 | `1393` | .033602 | .829535 | .803648 | .814146 | .990999 |
| [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc) | 2 | `2786` | .123994 | .739221 | .503677 | .571298 | .968697 |
| [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org) | 1 | `1393` | .062769 | .744259 | .709738 | .726212 | .980288 |
| [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
| [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
## Acknowledgements
The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).
| {"language": ["de", "la", "fr", "en"], "tags": ["historical"], "task_categories": ["token-classification"], "pretty_name": "Annotations and models for named entity recognition on Oberdeutsche Allgemeine Litteraturzeitung of the first quarter of 1788"} | LelViLamp/oalz-1788-q1-ner-loc | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"historical",
"de",
"la",
"fr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:13:12+00:00 |
null | null | {} | HachiML/Llama2-mu-1.4M-lr0.05 | null | [
"region:us"
]
| null | 2024-04-27T14:13:53+00:00 |
|
token-classification | transformers | # OALZ/1788/Q1/NER
A named entity recognition (NER) system was trained on text extracted from the _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted are available at the [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1); the text was extracted using the strategy of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
## Annotations
Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub as [`oalz-1788-q1-ner-annotations`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations).
There are two versions of the dataset
- [`5a-generate-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5a-generate-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano
- [`5b-merge-documents`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5b-merge-documents) does not retain this split. The text was merged into one long text and annotation indices were adapted.
Note that both these directories contain three equivalent datasets each:
- a Huggingface/Arrow dataset, <sup>*</sup>
- a CSV, <sup>*</sup> and
- a JSONL file.
<sup>*</sup> The former two should be used together with `text.csv` to catch the context of the annotation. The latter JSONL file contains the full text.
The following categories were included in the annotation process:
| Tag | Label | Count | Total Length | Median Annotation Length | Mean Annotation Length | SD |
|:--------|:--------------|------:|-------------:|-------------------------:|-----------------------:|------:|
| `EVENT` | Event | 294 | 6,090 | 18 | 20.71 | 13.24 |
| `LOC` | Location | 2,449 | 24,417 | 9 | 9.97 | 6.21 |
| `MISC` | Miscellaneous | 2,585 | 50,654 | 14 | 19.60 | 19.63 |
| `ORG` | Organisation | 2,479 | 34,693 | 11 | 13.99 | 9.33 |
| `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
| `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
## NER models
Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, you would want to categorise the whole passage "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:
```json
{
"text": "Universität Salzburg",
"label": [[0, 20, "ORG"], [12, 20, "LOC"]]
}
```
To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create historical embeddings. Therefore, it is necessary to use that in order to use these NER models.
The models' performance measures are as follows:
| Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
|:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
| [`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event) | 1 | `1393` | .021957 | .665233 | .343066 | .351528 | .995700 |
| [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc) | 1 | `1393` | .033602 | .829535 | .803648 | .814146 | .990999 |
| [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc) | 2 | `2786` | .123994 | .739221 | .503677 | .571298 | .968697 |
| [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org) | 1 | `1393` | .062769 | .744259 | .709738 | .726212 | .980288 |
| [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
| [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
## Acknowledgements
The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).
| {"language": ["de", "la", "fr", "en"], "tags": ["historical"], "task_categories": ["token-classification"], "pretty_name": "Annotations and models for named entity recognition on Oberdeutsche Allgemeine Litteraturzeitung of the first quarter of 1788"} | LelViLamp/oalz-1788-q1-ner-misc | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"historical",
"de",
"la",
"fr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:13:59+00:00 |
text-generation | transformers |
# Introduction
MetaAligner-UltraFeedback-1.1B is part of the <em>MetaAligner</em> project, the first policy-agnostic and generalizable method for multi-objective preference alignment of large language models. This model is fine-tuned from the TinyLLaMA-1.1B foundation model on the dynamic multi-objective dataset built from the openbmb/UltraFeedback dataset. UltraFeedback-MetaAligner is trained to align the responses of another general AI assistant to a single-turn query; the queries include professional questions on topics such as programming languages and history, so the aligned responses are usually more complex. The model is expected to perform multi-objective alignment efficiently, without tuning the policy models or accessing their parameters. <em>MetaAligner</em> also performs zero-shot preference alignment for unseen objectives. To our knowledge, this work marks the first attempt at generalizable multi-objective preference alignment. Experimental results show that MetaAligner can simultaneously perform effective alignment for multiple unseen objectives while maintaining performance on aligned objectives.
# Dataset
This model is trained based on the following released dataset:
# Usage
With the Hugging Face Transformers library, you can use the MetaAligner-UltraFeedback-1.1B model in your Python project. Here is a simple example of how to load the model:
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('MetaAligner/MetaAligner-UltraFeedback-1.1B', padding_side='left')
model = LlamaForCausalLM.from_pretrained('MetaAligner/MetaAligner-UltraFeedback-1.1B', device_map='auto', torch_dtype=torch.bfloat16)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
In this example, LlamaTokenizer loads the tokenizer and LlamaForCausalLM loads the model. The `device_map='auto'` argument automatically places the model on the GPU if one is available.
You can align any outputs obtained from other foundation models with your self-defined objectives. For example, we have the following query from UltraFeedback,
unaligned output examples from LLaMA2-7B,
and target objectives:
```python
all_aspects = {'instruction_following': 'Instruction following: the response should follow the instructions of the query',
'honesty': 'Honesty: the response should not tell lies',
'truthfulness': 'Truthfulness: the response should actively making known all the full truth of a matter',
'helpfulness': 'Helpfulness: the response should provide useful resources and suggestions to the user',
'speci': 'Specificity: the response should refer to facts and details and avoid vague arguments.',
'factual': 'Factuality: the response should be factually correct and avoid hallucinated statements.',
'read': 'Readability: the response should be easy to read and understand, not too technical for laymen.',
'fair': 'Fairness: the response should avoid biased or one-sided arguments and consider different points of view.',
'repeat': 'Repetition: the response should avoid repetitive statements of one point.',
'len': 'Length: the response should be concise and avoid redundant content.'}
aspects = ['instruction_following', 'honesty', 'truthfulness', 'helpfulness']
question = "What is the square root of banana?"
answer = "The square root of banana is not a well-defined mathematical concept, as bananas are not objects that can be squared. Bananas are a type of fruit that grow on plants and have a specific shape and size. They do not have any mathematical properties or values that can be used in mathematical operations. Therefore, the square root of banana is not a valid or meaningful mathematical concept. It is important to be aware of the limitations of mathematical language and symbols, and to use them correctly and consistently in order to avoid confusion or misinterpretation."
```
To ensure the best performance, use the following template to prompt <em>MetaAligner</em>:
```python
query_prompt = 'You are an assistant to human. You will be provided with a query and an answer. Consider the query, ' \
'then edit the answer to improve it considering these aspects: {aspects} | ' \
'Query: {question} | Answer: {answer} | Edit: '
aspects = [all_aspects[i] for i in aspects]
aligner_queries = [query_prompt.format(aspects='; '.join(aspects), question=question, answer=str(answer))]
```
You can obtain an aligned response using the following codes:
```python
inputs = tokenizer(aligner_queries, return_tensors="pt", padding=True)
input_ids = inputs.input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=1024)
truc_ids = generate_ids[0][len(input_ids[0]):]
response = tokenizer.decode(truc_ids, skip_special_tokens=True, spaces_between_special_tokens=False)
print(response)
```
Running the code above once with MetaAligner-UltraFeedback-1.1B produced the following response:
```
The square root of a number is the reciprocal of that number. In this case, the square root of a banana is not a valid mathematical concept. Bananas are not a mathematical quantity, and therefore, there is no square root of a banana.
```
## License
MetaAligner-UltraFeedback-1.1B is licensed under MIT. For more details, please see the MIT file. | {"language": ["en"], "license": "mit", "tags": ["Human Preference Alignment", "large language models"], "datasets": ["openbmb/UltraFeedback"]} | MetaAligner/MetaAligner-UltraFeedback-1.1B | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"Human Preference Alignment",
"large language models",
"conversational",
"en",
"dataset:openbmb/UltraFeedback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:14:18+00:00 |
token-classification | transformers | # OALZ/1788/Q1/NER
A named entity recognition (NER) system was trained on text extracted from the _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted are available at the [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1); the text was extracted using the strategy of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
## Annotations
Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub as [`oalz-1788-q1-ner-annotations`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations).
There are two versions of the dataset
- [`5a-generate-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5a-generate-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano
- [`5b-merge-documents`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5b-merge-documents) does not retain this split. The text was merged into one long text and annotation indices were adapted.
Note that both these directories contain three equivalent datasets each:
- a Huggingface/Arrow dataset, <sup>*</sup>
- a CSV, <sup>*</sup> and
- a JSONL file.
<sup>*</sup> The former two should be used together with `text.csv` to catch the context of the annotation. The latter JSONL file contains the full text.
The following categories were included in the annotation process:
| Tag | Label | Count | Total Length | Median Annotation Length | Mean Annotation Length | SD |
|:--------|:--------------|------:|-------------:|-------------------------:|-----------------------:|------:|
| `EVENT` | Event | 294 | 6,090 | 18 | 20.71 | 13.24 |
| `LOC` | Location | 2,449 | 24,417 | 9 | 9.97 | 6.21 |
| `MISC` | Miscellaneous | 2,585 | 50,654 | 14 | 19.60 | 19.63 |
| `ORG` | Organisation | 2,479 | 34,693 | 11 | 13.99 | 9.33 |
| `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
| `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
## NER models
Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, you would want to categorise the whole passage "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:
```json
{
"text": "Universität Salzburg",
"label": [[0, 20, "ORG"], [12, 20, "LOC"]]
}
```
To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create the historical embeddings. Therefore, that tokeniser must also be used when applying these NER models, as in the sketch below.
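A minimal usage sketch, assuming the 🤗 `pipeline` API and a simple concatenation of the per-label results; the project's actual post-processing may differ (see the training repository linked above):
```python
# Run a passage through all six label-specific classifiers and collect the
# (possibly overlapping) entity spans. aggregation_strategy is an illustrative choice.
from transformers import pipeline

TOKENIZER = "dbmdz/bert-base-historic-multilingual-cased"
LABELS = ["event", "loc", "misc", "org", "per", "time"]

text = "Universität Salzburg"
entities = []
for label in LABELS:
    ner = pipeline(
        "token-classification",
        model=f"LelViLamp/oalz-1788-q1-ner-{label}",
        tokenizer=TOKENIZER,
        aggregation_strategy="simple",
    )
    entities.extend(ner(text))
print(entities)
```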
The models' performance measures are as follows:
| Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
|:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
| [`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event) | 1 | `1393` | .021957 | .665233 | .343066 | .351528 | .995700 |
| [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc) | 1 | `1393` | .033602 | .829535 | .803648 | .814146 | .990999 |
| [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc)   | 2              |     `2786` |         .123994 |   .739221 | .503677 |       .571298 |  .968697 |
| [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org) | 1 | `1393` | .062769 | .744259 | .709738 | .726212 | .980288 |
| [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
| [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
## Acknowledgements
The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).
| {"language": ["de", "la", "fr", "en"], "tags": ["historical"], "task_categories": ["token-classification"], "pretty_name": "Annotations and models for named entity recognition on Oberdeutsche Allgemeine Litteraturzeitung of the first quarter of 1788"} | LelViLamp/oalz-1788-q1-ner-org | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"historical",
"de",
"la",
"fr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:14:53+00:00 |
null | transformers | {} | barannalp/test | null | [
"transformers",
"gguf",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:15:19+00:00 |
|
text-generation | transformers |
# Introduction
MetaAligner-UltraFeedback-7B is part of the <em>MetaAligner</em> project, the first policy-agnostic and generalizable method for multi-objective preference alignment of large
language models. This model is finetuned based on the Meta LLaMA2-7B foundation model and
the dynamic multi-objective dataset built from the openbmb/UltraFeedback dataset. UltraFeedback-MetaAligner is trained to align the responses of another general AI assistant to
a single-turn query; the queries include professional questions on topics such as programming languages and
history, and the aligned responses are usually more complicated.
The model is expected to perform multi-objective alignment
efficiently, without tuning the policy models or accessing their parameters. <em>MetaAligner</em> also performs zero-shot preference alignment
for unseen objectives. To our knowledge, this work marks the first attempt at generalizable multi-
objective preference alignment. Experimental results show that MetaAligner can simultaneously perform effective alignment for multiple unseen objectives
while maintaining performance on aligned objectives.
# Dataset
This model is trained on the dynamic multi-objective dataset built from [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback).
# Usage
With the Hugging Face Transformers library, you can use the MetaAligner-UltraFeedback-7B model in your Python project. Here is a simple example of how to load the model:
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('MetaAligner/MetaAligner-UltraFeedback-7B', padding_side='left')
model = LlamaForCausalLM.from_pretrained('MetaAligner/MetaAligner-UltraFeedback-7B', device_map='auto', torch_dtype=torch.bfloat16)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
In this example, LlamaTokenizer is used to load the tokenizer, and LlamaForCausalLM is used to load the model. The `device_map='auto'` argument is used to automatically
use the GPU if it's available.
You can align any outputs obtained from other foundation models with your self-defined objectives. For example, we have the following query from UltraFeedback,
unaligned output examples from LLaMA2-7B,
and target objectives:
```python
all_aspects = {'instruction_following': 'Instruction following: the response should follow the instructions of the query',
'honesty': 'Honesty: the response should not tell lies',
'truthfulness': 'Truthfulness: the response should actively making known all the full truth of a matter',
'helpfulness': 'Helpfulness: the response should provide useful resources and suggestions to the user',
'speci': 'Specificity: the response should refer to facts and details and avoid vague arguments.',
'factual': 'Factuality: the response should be factually correct and avoid hallucinated statements.',
'read': 'Readability: the response should be easy to read and understand, not too technical for laymen.',
'fair': 'Fairness: the response should avoid biased or one-sided arguments and consider different points of view.',
'repeat': 'Repetition: the response should avoid repetitive statements of one point.',
'len': 'Length: the response should be concise and avoid redundant content.'}
aspects = ['instruction_following', 'honesty', 'truthfulness', 'helpfulness']
question = "What is the square root of banana?"
answer = "The square root of banana is not a well-defined mathematical concept, as bananas are not objects that can be squared. Bananas are a type of fruit that grow on plants and have a specific shape and size. They do not have any mathematical properties or values that can be used in mathematical operations. Therefore, the square root of banana is not a valid or meaningful mathematical concept. It is important to be aware of the limitations of mathematical language and symbols, and to use them correctly and consistently in order to avoid confusion or misinterpretation."
```
To ensure the best performance, use the following template to prompt <em>MetaAligner</em>:
```python
query_prompt = 'You are an assistant to human. You will be provided with a query and an answer. Consider the query, ' \
'then edit the answer to improve it considering these aspects: {aspects} | ' \
'Query: {question} | Answer: {answer} | Edit: '
aspects = [all_aspects[i] for i in aspects]
aligner_queries = [query_prompt.format(aspects='; '.join(aspects), question=question, answer=str(answer))]
```
You can obtain an aligned response using the following code:
```python
inputs = tokenizer(aligner_queries, return_tensors="pt", padding=True)
input_ids = inputs.input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=1024)
truc_ids = generate_ids[0][len(input_ids[0]):]
response = tokenizer.decode(truc_ids, skip_special_tokens=True, spaces_between_special_tokens=False)
print(response)
```
One inference of MetaAligner-UltraFeedback-7B on the above code produced the following response:
```
The square root of banana is not a well-defined mathematical concept, as bananas are not objects that can be squared. The square root operation is typically applied to non-negative numbers, and it returns a value that, when multiplied by itself, gives the original number. In other words, the square root of a number is a number that, when multiplied by itself, gives the original number. However, if you're asking for a playful or creative answer, you could consider the "square root of banana" to be a humorous way of saying "a lot of bananas" or "several bananas." After all, bananas are typically curved, so you wouldn't be able to square them in the traditional sense. But if you want to represent a large quantity of bananas, you could use the term "square root of banana" as a whimsical or lighthearted way to express it.
```
## License
MetaAligner-UltraFeedback-7B is licensed under MIT. For more details, please see the MIT license. | {"language": ["en"], "license": "mit", "tags": ["Human Preference Alignment", "large language models"], "datasets": ["openbmb/UltraFeedback"]} | MetaAligner/MetaAligner-UltraFeedback-7B | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"Human Preference Alignment",
"large language models",
"conversational",
"en",
"dataset:openbmb/UltraFeedback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:15:30+00:00 |
token-classification | transformers | # OALZ/1788/Q1/NER
A named entity recognition (NER) system was trained on text extracted from the _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted can be found at the [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1); the extraction strategy is that of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
## Annotations
Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub as [`oalz-1788-q1-ner-annotations`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations).
There are two versions of the dataset:
- [`5a-generate-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5a-generate-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano
- [`5b-merge-documents`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations/tree/main/5b-merge-documents) does not retain this split. The text was merged into one long text and annotation indices were adapted.
Note that both these directories contain three equivalent datasets each:
- a Huggingface/Arrow dataset, <sup>*</sup>
- a CSV, <sup>*</sup> and
- a JSONL file.
<sup>*</sup> The former two should be used together with `text.csv` to recover the context of each annotation; the JSONL file already contains the full text.
The following categories were included in the annotation process:
| Tag | Label | Count | Total Length | Median Annotation Length | Mean Annotation Length | SD |
|:--------|:--------------|------:|-------------:|-------------------------:|-----------------------:|------:|
| `EVENT` | Event | 294 | 6,090 | 18 | 20.71 | 13.24 |
| `LOC` | Location | 2,449 | 24,417 | 9 | 9.97 | 6.21 |
| `MISC` | Miscellaneous | 2,585 | 50,654 | 14 | 19.60 | 19.63 |
| `ORG` | Organisation | 2,479 | 34,693 | 11 | 13.99 | 9.33 |
| `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
| `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
## NER models
Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, you would want to categorise the whole passage "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:
```json
{
"text": "Universität Salzburg",
"label": [[0, 20, "ORG"], [12, 20, "LOC"]]
}
```
To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create the historical embeddings. Therefore, that tokeniser must also be used when applying these NER models, as in the sketch below.
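A minimal usage sketch, assuming the 🤗 `pipeline` API and a simple concatenation of the per-label results; the project's actual post-processing may differ (see the training repository linked above):
```python
# Run a passage through all six label-specific classifiers and collect the
# (possibly overlapping) entity spans. aggregation_strategy is an illustrative choice.
from transformers import pipeline

TOKENIZER = "dbmdz/bert-base-historic-multilingual-cased"
LABELS = ["event", "loc", "misc", "org", "per", "time"]

text = "Universität Salzburg"
entities = []
for label in LABELS:
    ner = pipeline(
        "token-classification",
        model=f"LelViLamp/oalz-1788-q1-ner-{label}",
        tokenizer=TOKENIZER,
        aggregation_strategy="simple",
    )
    entities.extend(ner(text))
print(entities)
```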
The models' performance measures are as follows:
| Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
|:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
| [`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event) | 1 | `1393` | .021957 | .665233 | .343066 | .351528 | .995700 |
| [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc) | 1 | `1393` | .033602 | .829535 | .803648 | .814146 | .990999 |
| [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc)   | 2              |     `2786` |         .123994 |   .739221 | .503677 |       .571298 |  .968697 |
| [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org) | 1 | `1393` | .062769 | .744259 | .709738 | .726212 | .980288 |
| [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
| [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
## Acknowledgements
The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).
| {"language": ["de", "la", "fr", "en"], "tags": ["historical"], "task_categories": ["token-classification"], "pretty_name": "Annotations and models for named entity recognition on Oberdeutsche Allgemeine Litteraturzeitung of the first quarter of 1788"} | LelViLamp/oalz-1788-q1-ner-time | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"historical",
"de",
"la",
"fr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:15:48+00:00 |
text-generation | transformers |
# Introduction
MetaAligner-UltraFeedback-13B is part of the <em>MetaAligner</em> project, the first policy-agnostic and generalizable method for multi-objective preference alignment of large
language models. This model is finetuned based on the Meta LLaMA2-13B foundation model and
the dynamic multi-objective dataset built from the openbmb/UltraFeedback dataset. UltraFeedback-MetaAligner is trained to align the responses of another general AI assistant to
a single-turn query; the queries include professional questions on topics such as programming languages and
history, and the aligned responses are usually more complicated.
The model is expected to perform multi-objective alignment
efficiently, without tuning the policy models or accessing their parameters. <em>MetaAligner</em> also performs zero-shot preference alignment
for unseen objectives. To our knowledge, this work marks the first attempt at generalizable multi-
objective preference alignment. Experimental results show that MetaAligner can simultaneously perform effective alignment for multiple unseen objectives
while maintaining performance on aligned objectives.
# Dataset
This model is trained on the dynamic multi-objective dataset built from [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback).
# Usage
With the Hugging Face Transformers library, you can use the MetaAligner-UltraFeedback-13B model in your Python project. Here is a simple example of how to load the model:
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('MetaAligner/MetaAligner-UltraFeedback-13B', padding_side='left')
model = LlamaForCausalLM.from_pretrained('MetaAligner/MetaAligner-UltraFeedback-13B', device_map='auto', torch_dtype=torch.bfloat16)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
In this example, LlamaTokenizer is used to load the tokenizer, and LlamaForCausalLM is used to load the model. The `device_map='auto'` argument is used to automatically
use the GPU if it's available.
You can align any outputs obtained from other foundation models with your self-defined objectives. For example, we have the following query from UltraFeedback,
unaligned output examples from LLaMA2-7B,
and target objectives:
```python
all_aspects = {'instruction_following': 'Instruction following: the response should follow the instructions of the query',
'honesty': 'Honesty: the response should not tell lies',
'truthfulness': 'Truthfulness: the response should actively making known all the full truth of a matter',
'helpfulness': 'Helpfulness: the response should provide useful resources and suggestions to the user',
'speci': 'Specificity: the response should refer to facts and details and avoid vague arguments.',
'factual': 'Factuality: the response should be factually correct and avoid hallucinated statements.',
'read': 'Readability: the response should be easy to read and understand, not too technical for laymen.',
'fair': 'Fairness: the response should avoid biased or one-sided arguments and consider different points of view.',
'repeat': 'Repetition: the response should avoid repetitive statements of one point.',
'len': 'Length: the response should be concise and avoid redundant content.'}
aspects = ['instruction_following', 'honesty', 'truthfulness', 'helpfulness']
question = "What is the square root of banana?"
answer = "The square root of banana is not a well-defined mathematical concept, as bananas are not objects that can be squared. Bananas are a type of fruit that grow on plants and have a specific shape and size. They do not have any mathematical properties or values that can be used in mathematical operations. Therefore, the square root of banana is not a valid or meaningful mathematical concept. It is important to be aware of the limitations of mathematical language and symbols, and to use them correctly and consistently in order to avoid confusion or misinterpretation."
```
To ensure the best performance, use the following template to prompt <em>MetaAligner</em>:
```python
query_prompt = 'You are an assistant to human. You will be provided with a query and an answer. Consider the query, ' \
'then edit the answer to improve it considering these aspects: {aspects} | ' \
'Query: {question} | Answer: {answer} | Edit: '
aspects = [all_aspects[i] for i in aspects]
aligner_queries = [query_prompt.format(aspects='; '.join(aspects), question=question, answer=str(answer))]
```
You can obtain an aligned response using the following code:
```python
inputs = tokenizer(aligner_queries, return_tensors="pt", padding=True)
input_ids = inputs.input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=1024)
truc_ids = generate_ids[0][len(input_ids[0]):]
response = tokenizer.decode(truc_ids, skip_special_tokens=True, spaces_between_special_tokens=False)
print(response)
```
One inference of MetaAligner-UltraFeedback-13B on the above code produced the following response:
```
The square root of banana is not a well-defined mathematical concept, as bananas are not objects that can be squared. The square root operation is typically applied to non-negative numbers, and it returns a value that, when multiplied by itself, gives the original number. In other words, the square root of a number is a number that, when multiplied by itself, gives the original number. However, if you're asking for a playful or creative answer, you could consider the "square root of banana" to be a humorous way of saying "a lot of bananas" or "several bananas." After all, bananas are typically curved, so you wouldn't be able to square them in the traditional sense. But if you want to represent a large quantity of bananas, you could use the term "square root of banana" as a whimsical or lighthearted way to express it.
```
## License
MetaAligner-UltraFeedback-13B is licensed under MIT. For more details, please see the MIT license. | {"language": ["en"], "license": "mit", "tags": ["Human Preference Alignment", "large language models"], "datasets": ["openbmb/UltraFeedback"]} | MetaAligner/MetaAligner-UltraFeedback-13B | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"Human Preference Alignment",
"large language models",
"conversational",
"en",
"dataset:openbmb/UltraFeedback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:16:09+00:00 |
null | null | {"license": "mit"} | hurtbadly/face2gan | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T14:16:15+00:00 |
|
null | null | {} | HachiML/Llama2-mu-11.4M-lr0.0002 | null | [
"region:us"
]
| null | 2024-04-27T14:16:37+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.1_4iters_bs256_nodpo_only4w_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.1_4iters_bs256_nodpo_only4w_iter_1", "results": []}]} | ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:17:01+00:00 |
null | null | {} | HachiML/Llama2-mu-11.4M-lr0.0005 | null | [
"region:us"
]
| null | 2024-04-27T14:17:34+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kaist-mistral-orpo-OHP-15k-Mathcode-2epoch-ohp-15k-strat-1-1epoch
This model is a fine-tuned version of [orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode-2epoch](https://huggingface.co/orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode-2epoch) on the orpo-explorers/OHP-15k-Stratified-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2.post303
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer", "trl", "orpo", "generated_from_trainer"], "datasets": ["orpo-explorers/OHP-15k-Stratified-1"], "base_model": "orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode-2epoch", "model-index": [{"name": "kaist-mistral-orpo-OHP-15k-Mathcode-2epoch-ohp-15k-strat-1-1epoch", "results": []}]} | orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode-2epoch-ohp-15k-strat-1-1epoch | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:orpo-explorers/OHP-15k-Stratified-1",
"base_model:orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode-2epoch",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:18:00+00:00 |
null | null | {} | HachiML/Llama2-mu-11.4M-lr0.002 | null | [
"region:us"
]
| null | 2024-04-27T14:19:25+00:00 |
|
text-generation | transformers | # Prodigy SM Base v0.1
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/4p2zaOWu6kTS3fcbevHef.png" width="70%" height="70%">
In our latest endeavour, we performed continued pre-training of a large language model (Mistral-7b-v0.1) to understand and generate text in new languages, including **Serbian**, **Bosnian** and **Croatian**, using an innovative approach.
Rather than depending only on extensive datasets in the target language, our method utilizes a more compact set of both synthetic and human-curated data along with some mixture of CC Web data, which is implemented in two strategic phases:
1. Establishing a comprehensive demonstration of all grammatical and orthographic rules pertinent to the language.
2. Supplying a diverse array of examples that not only reinforce these rules but also integrate a wide range of linguistic nuances.
While our approach is uniquely tailored to our objectives, we have drawn some inspiration from recent advancements in language model training. Specifically, the conceptual strategies discussed in the paper [ADAPTING LARGE LANGUAGE MODELS VIA READING COMPREHENSION](https://arxiv.org/pdf/2309.09530.pdf) provided valuable insights, though our methods diverge significantly in practice. By adopting this inspired approach, we aim to efficiently teach the model new languages with a balanced blend of accuracy and linguistic diversity.
So... Did it work?!
# **Yes!**
See the benchmark results, or even better, download the model and try it yourself. As you know by now, there's no better benchmark than a quick 'try it yourself' vibe check. :)
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/C9m_OjnYEpQo43VCrwz4A.png" width="100%" height="100%">
Here, we demonstrate the results of a benchmark that is not frequently performed, yet is equally important: how adapting the model for a new language impacted its original English-only performance.
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/IPY0myfQI-Ne5x6b11glz.png" width="100%" height="100%">
*All evals are performed in a zero-shot manner.
*Also bear in mind that, unlike Prodigy SM Base, the llama-2-7b, llama-3-8b and mistral-7b models were not trained on extensive Serbian-language datasets; these benchmarks demonstrate that primarily English models can be adapted to other languages.
So, as you can see, we successfully improved the original model's performance for Serbian language use cases while retaining or even slightly improving its performance for English language.
### Training results
Training results of continued pre-training of [mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/5xeJ-vfWk4RhJNC7t5I0g.png" width="70%" height="70%">
<img src="https://cdn-uploads.huggingface.co/production/uploads/617bbeec14572ebe9e6ea83f/R4R8ai8LaN3WlYCOenUyb.png" width="70%" height="70%">
As a last experimental step, we merged the produced model with **Mistral-7B-v0.1** and two earlier checkpoints from **prodigy-sm-base** using the [Model Stock](https://arxiv.org/abs/2403.19522) method.
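The exact merge configuration is not published; purely as an illustration, a Model Stock merge with [mergekit](https://github.com/arcee-ai/mergekit) could be set up along these lines (the checkpoint paths below are placeholder assumptions):
```python
# Hypothetical Model Stock merge config for mergekit.
# The intermediate checkpoint paths are placeholders; only
# Mistral-7B-v0.1 is named in the text above.
merge_config = """\
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: ./prodigy-sm-base-checkpoint-a
  - model: ./prodigy-sm-base-checkpoint-b
  - model: ./prodigy-sm-base-final
dtype: bfloat16
"""
with open("model_stock.yml", "w") as f:
    f.write(merge_config)
# Then run the mergekit CLI:
#   mergekit-yaml model_stock.yml ./prodigy-sm-base-v0.1
```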
# Notes
As this is a base model, there is no chat template and no strict chat-following capability. This model is the best candidate for further pre-training on the Serbian language, as there is a lot more room for improvement (you can hit the sweet spot), or for the next step in the pipeline, such as some form of chat or instruction tuning.
If you want a model that is already instruction-tuned, we did that too; check **Prodigy SM Instruct v0.1**.
# Prodigy SM Instruct v0.1
🚀[prodigy-sm-instruct]() **COMING SOON**
And stay tuned for:
[prodigy-sm-base (llama-3)]() **COMING SOON**
[prodigy-sm-instruct (llama-3)]() **COMING SOON**
📢 Also we are excited to announce that [iskon.ai](https://Iskon.ai) will soon launch an API platform featuring advanced **Prodigy** series of models, advanced AI tools and much more! 🚀
# Thanks
- [gordicaleksa/serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval) and his community for curating the translation and adaptation of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), which we used to perform the benchmarks.
- [jondurbin](https://huggingface.co/jondurbin) for amazing airoboros framework
- [teknium](https://huggingface.co/teknium) for various insights shared on discord and twitter aka x.com
- [Eric](https://twitter.com/erhartford) for various insights shared on discord and twitter aka x.com
- [mergekit](https://github.com/arcee-ai/mergekit) for model merging tools
*Huge thanks to Redmond.ai for generous DGX cloud credits* [redmond.ai](https://redmond.ai) | {"language": ["en", "sr", "hr", "bs"], "license": "apache-2.0"} | draganjovanovich/prodigy-sm-base-v0.1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"sr",
"hr",
"bs",
"arxiv:2309.09530",
"arxiv:2403.19522",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:20:14+00:00 |
null | null | {} | HachiML/Llama2-mu-11.4M-lr0.005 | null | [
"region:us"
]
| null | 2024-04-27T14:20:20+00:00 |
|
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold1
This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window16-256](https://huggingface.co/microsoft/swinv2-small-patch4-window16-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0946
- Accuracy: 0.6719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2461 | 1.0 | 1848 | 1.2545 | 0.5845 |
| 1.0586 | 2.0 | 3696 | 1.1261 | 0.6239 |
| 1.0191 | 3.0 | 5544 | 1.0421 | 0.6423 |
| 0.8211 | 4.0 | 7392 | 1.0182 | 0.6619 |
| 0.8792 | 5.0 | 9240 | 1.0078 | 0.6678 |
| 0.6359 | 6.0 | 11088 | 1.0642 | 0.6621 |
| 0.7961 | 7.0 | 12936 | 1.0719 | 0.6665 |
| 0.4952 | 8.0 | 14784 | 1.1068 | 0.6627 |
| 0.5376 | 9.0 | 16632 | 1.0907 | 0.6714 |
| 0.4435 | 10.0 | 18480 | 1.0946 | 0.6719 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-small-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold1", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6719131614654003, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold1 | null | [
"transformers",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-small-patch4-window16-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:21:11+00:00 |
null | null | {} | HachiML/Llama2-mu-11.4M-lr0.02 | null | [
"region:us"
]
| null | 2024-04-27T14:22:10+00:00 |
|
null | null | {} | HachiML/Llama2-mu-11.4M-lr0.05 | null | [
"region:us"
]
| null | 2024-04-27T14:23:05+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="saousan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
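The `load_from_hub` helper comes from the Hugging Face Deep RL course notebooks; it is not part of a published package. A minimal sketch of what it does, assuming the Q-table was pushed to the Hub as a pickle (as in the course):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table plus env metadata) and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```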
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | saousan/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:23:06+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Astowny/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | Astowny/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:24:24+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="SarahDhrifa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | SarahDhrifa/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:24:38+00:00 |
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.0001 | null | [
"region:us"
]
| null | 2024-04-27T14:24:57+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="toure32/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | toure32/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:25:07+00:00 |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Unclad3610/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | Unclad3610/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| null | 2024-04-27T14:26:11+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/wgxfn2k | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:26:14+00:00 |
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.0002 | null | [
"region:us"
]
| null | 2024-04-27T14:26:32+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="FitTechMike/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]} | FitTechMike/Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:26:33+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Yann2310/CrazyTaxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "CrazyTaxi", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | Yann2310/CrazyTaxi | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:27:24+00:00 |
null | null |
# hus960/FrankenLlama-3-12B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`mlabonne/FrankenLlama-3-12B-Instruct`](https://huggingface.co/mlabonne/FrankenLlama-3-12B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mlabonne/FrankenLlama-3-12B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/FrankenLlama-3-12B-Instruct-Q4_K_M-GGUF --model frankenllama-3-12b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/FrankenLlama-3-12B-Instruct-Q4_K_M-GGUF --model frankenllama-3-12b-instruct.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m frankenllama-3-12b-instruct.Q4_K_M.gguf -n 128
```
| {"license": "other", "tags": ["merge", "mergekit", "lazymergekit", "llama-cpp", "gguf-my-repo"], "base_model": ["meta-llama/Meta-Llama-3-8B-Instruct", "meta-llama/Meta-Llama-3-8B-Instruct"]} | hus960/FrankenLlama-3-12B-Instruct-Q4_K_M-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
]
| null | 2024-04-27T14:27:27+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="saousan/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | saousan/taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:27:29+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Astowny/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]} | Astowny/taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:27:30+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="SarahDhrifa/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | SarahDhrifa/taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:27:36+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="brunel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | brunel/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:27:47+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="toure32/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.74", "name": "mean_reward", "verified": false}]}]}]} | toure32/Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:27:51+00:00 |
null | null | {"license": "mit"} | liruiw/hpt-huge | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T14:27:52+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.0005 | null | [
"region:us"
]
| null | 2024-04-27T14:28:06+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="SamirLahouar/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | SamirLahouar/Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:29:01+00:00 |
text-generation | transformers |
# OpenAI GPT-2 Samsum
## Model description
This model has been trained on the SAMSum dataset. The SAMSum dataset contains approximately 16,000 conversational dialogues accompanied by summaries. These conversations were created and written by linguists fluent in English. The linguists were instructed to create conversations reflecting the topic distribution of real-life messenger conversations, similar to their daily written exchanges. The style and tone vary: conversations can be informal, semi-formal, or formal, and may include slang terms, expressions, and spelling errors. The conversations were then annotated with summaries, which are expected to be concise, third-person summaries of what people were talking about during the conversation. The SAMSum dataset was prepared by the Samsung Research Institute Poland and is distributed for research purposes.
## Training
This GPT-2 model was trained for approximately 1 hour on an L4 GPU.
## Training Results

**Authors**
- **Developed by:** Anezatra
- **Model type:** GPT2
- **Contacts:** https://github.com/anezatra | {"language": ["en"], "datasets": ["samsum"], "pipeline_tag": "text-generation"} | anezatra/gpt2-samsum-124M | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:29:14+00:00 |
text2text-generation | transformers | {} | farukonder/devpath-trained-summarization | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:29:32+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.001 | null | [
"region:us"
]
| null | 2024-04-27T14:29:48+00:00 |
|
null | null | {"license": "openrail"} | martinvdv05/Narrator_Stanley_Parable | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T14:29:54+00:00 |
|
null | null | {"license": "mit"} | suchirsalhan/FR-CamBabyTokeniser-BabyBERTa | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T14:30:04+00:00 |
|
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ljw20180420/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
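The pipeline can also generate several images per call (a small usage sketch; the batch size and file names are arbitrary choices):
```python
# Generate a batch of butterflies and save them to disk.
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")
```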
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | ljw20180420/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2024-04-27T14:30:20+00:00 |
text-generation | transformers |
# Llama-3-Ko-OpenOrca
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Original model: [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
Dataset: [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)
### Training details
Training: fine-tuned for 4 epochs with 8-bit LoRA using Axolotl.
- sequence_len: 4096
- bf16
Training time: 2× A6000, 6 hours
### Evaluation
- 0-shot kobest
| Tasks |n-shot| Metric |Value | |Stderr|
|----------------|-----:|--------|-----:|---|------|
|kobest_boolq | 0|acc |0.5021|± |0.0133|
|kobest_copa | 0|acc |0.6920|± |0.0146|
|kobest_hellaswag| 0|acc |0.4520|± |0.0223|
|kobest_sentineg | 0|acc |0.7330|± |0.0222|
|kobest_wic | 0|acc |0.4881|± |0.0141|
- 5-shot kobest
| Tasks |n-shot| Metric |Value | |Stderr|
|----------------|-----:|--------|-----:|---|------|
|kobest_boolq | 5|acc |0.7123|± |0.0121|
|kobest_copa | 5|acc |0.7620|± |0.0135|
|kobest_hellaswag| 5|acc |0.4780|± |0.0224|
|kobest_sentineg | 5|acc |0.9446|± |0.0115|
|kobest_wic | 5|acc |0.6103|± |0.0137|
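### Usage
A minimal inference sketch with 🤗 Transformers (a sketch only; it assumes the tokenizer ships a chat template for this fine-tune, so fall back to plain prompting if it does not):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "werty1248/Llama-3-Ko-8B-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]  # "What is the capital of Korea?"
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```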
### License:
[https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) | {"license": "llama3", "library_name": "transformers", "datasets": ["kyujinpy/OpenOrca-KO"], "pipeline_tag": "text-generation"} | werty1248/Llama-3-Ko-8B-OpenOrca | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:kyujinpy/OpenOrca-KO",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:30:51+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your environment version

# `load_from_hub` is the download helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="brunel/taxi-v4", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
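To sanity-check the reported mean reward, you can evaluate the greedy policy over many episodes (a sketch; it assumes the pickled dict exposes a `qtable` key as in the course notebooks):
```python
import numpy as np

rewards = []
for _ in range(100):
    state, info = env.reset()
    done, total = False, 0
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action
        state, reward, terminated, truncated, info = env.step(action)
        total += reward
        done = terminated or truncated
    rewards.append(total)
print(f"mean_reward = {np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```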
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-v4", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | brunel/taxi-v4 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| null | 2024-04-27T14:30:55+00:00 |
null | diffusers | {} | shao918516/sd-class-butterflies-64 | null | [
"diffusers",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2024-04-27T14:30:59+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.002 | null | [
"region:us"
]
| null | 2024-04-27T14:31:23+00:00 |
|
null | null | {"license": "openrail"} | GoldoBasic/teamspeak-female | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T14:31:30+00:00 |
|
null | null | {} | kpaddy/repo_name | null | [
"region:us"
]
| null | 2024-04-27T14:32:05+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.005 | null | [
"region:us"
]
| null | 2024-04-27T14:33:00+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** ramixpe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
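A minimal inference sketch with 🤗 Transformers (the prompt below is a made-up example; match whatever template was used during SFT):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ramixpe/llama3-8b-SP_IOSXR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Generate an IOS XR configuration for a Loopback0 interface."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```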
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | ramixpe/llama3-8b-SP_IOSXR | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:33:48+00:00 |
null | transformers |
# hus960/Llama-3-13B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Replete-AI/Llama-3-13B-Instruct`](https://huggingface.co/Replete-AI/Llama-3-13B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Replete-AI/Llama-3-13B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/Llama-3-13B-Instruct-Q4_K_M-GGUF --model llama-3-13b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/Llama-3-13B-Instruct-Q4_K_M-GGUF --model llama-3-13b-instruct.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-13b-instruct.Q4_K_M.gguf -n 128
```
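The GGUF file can also be used from Python via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (a sketch; `pip install llama-cpp-python huggingface-hub` is assumed):
```python
from llama_cpp import Llama

# Downloads the GGUF from the Hub on first use, then runs a completion.
llm = Llama.from_pretrained(
    repo_id="hus960/Llama-3-13B-Instruct-Q4_K_M-GGUF",
    filename="llama-3-13b-instruct.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```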
| {"license": "other", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": [], "license_name": "llama-3", "license_link": "https://llama.meta.com/llama3/license/"} | hus960/Llama-3-13B-Instruct-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:33:57+00:00 |
null | null | {} | thodsapon/qwen1.5-llm | null | [
"gguf",
"region:us"
]
| null | 2024-04-27T14:34:23+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.01 | null | [
"region:us"
]
| null | 2024-04-27T14:34:36+00:00 |
|
null | null | {} | Greko89/Maluma | null | [
"region:us"
]
| null | 2024-04-27T14:35:37+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.02 | null | [
"region:us"
]
| null | 2024-04-27T14:36:11+00:00 |
|
text2text-generation | transformers | Question:
- Encoder: ViT5-base
- Max length: 32
- Pre-processing: lowercase, remove special characters
Image:
- Encoder: ViT-base
- Pre-processing: none
OCR:
- Text detection: PaddleOCR
- Text recognition: VietOCR
- Threshold: 0.8
- Max length: 128
- Post-processing: group by layout, divide=4
Answer:
- Max length: 56
Results:
- Dev:
  - CIDEr: 3.4616
  - BLEU: 0.4689 | {} | truong-xuan-linh/VQA-vit5 | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:36:32+00:00 |
null | null | {} | CyrexPro/mt5-base-finetuned-cnn_dailymail | null | [
"region:us"
]
| null | 2024-04-27T14:37:17+00:00 |
|
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.05 | null | [
"region:us"
]
| null | 2024-04-27T14:37:46+00:00 |
|
null | null | {} | shovonjamali/custom_language_tortoise | null | [
"region:us"
]
| null | 2024-04-27T14:39:00+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/pyuiupv | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/s7o418a | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/lk7t58u | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/ofpiarc | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:02+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/xym9qa3 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:02+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/qcewa7f | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:02+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/ktjj0jh | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:02+00:00 |
null | null | {} | HachiML/Llama2-mu-98.4M-lr0.1 | null | [
"region:us"
]
| null | 2024-04-27T14:39:21+00:00 |
|
text-generation | transformers |
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Model type:** Mamba
- **Language(s) (NLP):** Japanese
- **Tokenizer:** ku-nlp/gpt2-large-japanese-char
- **License:** Model: Apache 2.0, Tokenizer: CC-BY-SA
## Run the model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "misdelivery/mamba-char-japanese-790m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_prompt = "夏目漱石について教えてください。"  # "Please tell me about Natsume Soseki."
# Instruction template used in this card's example.
prompt = f"以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n{input_prompt}\n\n### 応答:\n"

with torch.no_grad():
    input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    output_ids = model.generate(
        input_ids.to(model.device),
        max_length=512,
        do_sample=True,
        temperature=0.6,
        repetition_penalty=1.2,
    )

result = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
print(result)
```
GPUs were kindly lent to us for the 240-hour [LOCAL AI HACKATHON #001].
We would like to express our deep gratitude to everyone involved:
Metadata Lab Co., Ltd. (メタデータラボ株式会社)
[AI Voice-Making Technology Research Group (AI声づくり技術研究会)]
Server owner: Yanagi (やなぎ) (@Yanagi_1112)
[Local LLM Study Group (ローカルLLMに向き合う会)]
Server owner: saldra (サルドラ) (@sald_ra)
Witness (@i_witnessed_it)
Team members:
hayashi,
kzms,
chatblanc,
Ryunosuke Ikeda | {"library_name": "transformers", "tags": []} | misdelivery/mamba-char-japanese-790m | null | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:39:46+00:00 |
text2text-generation | transformers | {} | anhmanucian19032002/vit5-base-finetuned-VN | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:39:59+00:00 |
|
null | null | {} | Imohsinali/dummy | null | [
"region:us"
]
| null | 2024-04-27T14:41:06+00:00 |
|
text2text-generation | transformers | {} | neildlf/cnn_dailymail_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:41:54+00:00 |
|
text-generation | transformers |
**This is a quantized version of HF's [StarChat2 15B v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) (see below).**
**Quantization done with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ/).**
<img src="https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/resolve/main/model_logo.png" alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for StarChat2 15B
StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b) that was trained with SFT and DPO on a mix of synthetic datasets.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English and 600+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground
## Performance
StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911), as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite (commit `988959cb905df4baa050f82b4d499d46e8b537f2`) and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.
| Model | MT Bench | IFEval | HumanEval |
|-------------------------------------------------------------------------------------------------|---------:|-------:|----------:|
| [starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) | 7.66 | 35.12 | 71.34 |
| [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) | 4.17 | 14.23 | 80.48 |
| [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | 6.80 | 43.44 | 50.60 |
## Intended uses & limitations
The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground) to test its coding capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install 'transformers @ git+https://github.com/huggingface/transformers.git@831bc25d8fdb85768402f772cf65cc3d7872b211'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/starchat2-15b-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "You are StarChat2, an expert programming assistant",
},
{"role": "user", "content": "Write a simple website in HTML. When a user clicks the button, it shows a random Chuck Norris joke."},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
stop_sequence="<|im_end|>",
)
print(outputs[0]["generated_text"][-1]["content"])
```
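Since this repository hosts the AWQ-quantized weights, you can also load them directly with 🤗 Transformers (requires `autoawq`; a sketch using this repository's id):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stelterlab/starchat2-15b-v0.1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```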
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder2 dataset](https://huggingface.co/datasets/bigcode/the-stack-v2)
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat2 15B was fine-tuned from the base model [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoder2-15b#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://huggingface.co/papers/2402.19173).
## Training details
This model is a fine-tuned version of [starchat2-15b-sft-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1) on the HuggingFaceH4/ultrafeedback_binarized and the HuggingFaceH4/orca_dpo_pairs datasets. Check out the recipe in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) for more details.
It achieves the following results on the evaluation set:
- Loss: 0.4347
- Rewards/chosen: -0.9461
- Rewards/rejected: -2.7745
- Rewards/accuracies: 0.7658
- Rewards/margins: 1.8284
- Logps/rejected: -322.1934
- Logps/chosen: -316.1898
- Logits/rejected: -2.3817
- Logits/chosen: -2.3005
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.717 | 0.17 | 100 | 0.6006 | -0.0924 | -0.2899 | 0.6329 | 0.1975 | -272.5022 | -299.1165 | -2.5313 | -2.4191 |
| 0.6273 | 0.35 | 200 | 0.5160 | -0.3994 | -0.9461 | 0.6930 | 0.5467 | -285.6261 | -305.2568 | -2.5281 | -2.4278 |
| 0.5538 | 0.52 | 300 | 0.4781 | -0.6589 | -1.5892 | 0.7247 | 0.9302 | -298.4870 | -310.4470 | -2.4996 | -2.4110 |
| 0.5056 | 0.7 | 400 | 0.4594 | -0.8283 | -2.1332 | 0.7437 | 1.3050 | -309.3687 | -313.8344 | -2.4472 | -2.3644 |
| 0.4983 | 0.87 | 500 | 0.4512 | -0.7758 | -2.2806 | 0.7468 | 1.5049 | -312.3167 | -312.7843 | -2.4223 | -2.3404 |
| 0.4662 | 1.04 | 600 | 0.4431 | -0.7839 | -2.4016 | 0.7658 | 1.6177 | -314.7355 | -312.9465 | -2.4049 | -2.3215 |
| 0.4411 | 1.22 | 700 | 0.4415 | -1.0090 | -2.7582 | 0.7690 | 1.7492 | -321.8679 | -317.4481 | -2.3840 | -2.3016 |
| 0.471 | 1.39 | 800 | 0.4368 | -0.9617 | -2.7445 | 0.7690 | 1.7828 | -321.5930 | -316.5019 | -2.3809 | -2.2991 |
| 0.4485 | 1.57 | 900 | 0.4351 | -0.9490 | -2.7594 | 0.7722 | 1.8103 | -321.8916 | -316.2497 | -2.3815 | -2.3004 |
| 0.4411 | 1.74 | 1000 | 0.4348 | -0.9293 | -2.7469 | 0.7658 | 1.8176 | -321.6409 | -315.8547 | -2.3823 | -2.3011 |
| 0.4499 | 1.92 | 1100 | 0.4348 | -0.9482 | -2.7767 | 0.7658 | 1.8285 | -322.2369 | -316.2320 | -2.3828 | -2.3012 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| {"license": "bigcode-openrail-m", "datasets": ["HuggingFaceH4/ultrafeedback_binarized", "HuggingFaceH4/orca_dpo_pairs"], "base_model": "HuggingFaceH4/starchat2-15b-sft-v0.1", "model-index": [{"name": "starchat2-15b-v0.1", "results": []}]} | stelterlab/starchat2-15b-v0.1-AWQ | null | [
"transformers",
"safetensors",
"starcoder2",
"text-generation",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/orca_dpo_pairs",
"arxiv:2311.07911",
"arxiv:2402.19173",
"base_model:HuggingFaceH4/starchat2-15b-sft-v0.1",
"license:bigcode-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-27T14:42:44+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | erfanzar/Xerxes-8B-Instruct-v0.4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T14:42:46+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EmnaFazaa/donut-financial-document-classification | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T14:44:17+00:00 |