modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
samuel-moreira/hr-resume-8b-v2.0
|
samuel-moreira
| 2024-06-26T18:29:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:29:25Z |
Entry not found
|
axgroup/TVR-Ranking
|
axgroup
| 2024-07-02T08:26:12Z | 0 | 0 | null |
[
"en",
"license:cc",
"region:us"
] | null | 2024-06-26T18:30:26Z |
---
license: cc
language:
- en
---
# Video Moment Retrieval in Practical Settings: A Dataset of Ranked Moments for Imprecise Queries
The benchmark and dataset for the paper "Video Moment Retrieval in Practical Settings: A Dataset of Ranked Moments for Imprecise Queries" are coming soon.
We recommend cloning the code, data, and feature files from the Hugging Face repository at [TVR-Ranking](https://huggingface.co/axgroup/TVR-Ranking).

## Getting started
### 1. Install prerequisites
The Python packages we used are listed below. In general, the most recent versions work well.
```shell
conda create --name tvr_ranking python=3.11
conda activate tvr_ranking
pip install torch  # 2.2.1+cu121
pip install tensorboard
pip install h5py pandas tqdm easydict pyyaml
```
### 2. Download the full dataset
For the full dataset, please download it from Hugging Face [TVR-Ranking](https://huggingface.co/axgroup/TVR-Ranking). \
The detailed introduction and raw annotations are available at [Dataset Introduction](data/TVR_Ranking/readme.md).
```
TVR_Ranking/
-val.json
-test.json
-train_top01.json
-train_top20.json
-train_top40.json
-video_corpus.json
```
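As a quick sanity check after downloading, the annotation files can be loaded with the standard `json` module. This is a minimal sketch, assuming the files sit under a `data/TVR_Ranking` root as in the layout above; the record schema itself is documented in the Dataset Introduction.
```python
import json

# Assumed root path; adjust to wherever the dataset was downloaded.
root = "data/TVR_Ranking"
for split in ["val", "test", "train_top20"]:
    with open(f"{root}/{split}.json") as f:
        annotations = json.load(f)
    # Works whether the top-level object is a list or a dict of records.
    print(split, type(annotations).__name__, len(annotations))
```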
### 3. Download features
For the query BERT features, you can download them from Hugging Face [TVR-Ranking](https://huggingface.co/axgroup/TVR-Ranking). \
For the video and subtitle features, please request them at [TVR](https://tvr.cs.unc.edu/).
```shell
tar -xf tvr_feature_release.tar.gz -C data/TVR_Ranking/feature
```
### 4. Training
```shell
# modify the data path first
sh run_top20.sh
```
## Baseline
(ToDo: running the new version...) \
The baseline performance, measured by $NDCG@20$, is shown in the table below; a reference sketch of the metric follows the table.
The top $N$ moments, ranked by query-caption similarity, form the pseudo training set.
| Model | $N$ | IoU = 0.3, val | IoU = 0.3, test | IoU = 0.5, val | IoU = 0.5, test | IoU = 0.7, val | IoU = 0.7, test |
|----------------|-----|----------------|-----------------|----------------|-----------------|----------------|-----------------|
| **XML** | 1 | 0.1050 | 0.1047 | 0.0767 | 0.0751 | 0.0287 | 0.0314 |
| | 20 | 0.1948 | 0.1964 | 0.1417 | 0.1434 | 0.0519 | 0.0583 |
| | 40 | 0.2101 | 0.2110 | 0.1525 | 0.1533 | 0.0613 | 0.0617 |
| **CONQUER** | 1 | 0.0979 | 0.0830 | 0.0817 | 0.0686 | 0.0547 | 0.0479 |
| | 20 | 0.2007 | 0.1935 | 0.1844 | 0.1803 | 0.1391 | 0.1341 |
| | 40 | 0.2094 | 0.1943 | 0.1930 | 0.1825 | 0.1481 | 0.1334 |
| **ReLoCLNet** | 1 | 0.1306 | 0.1299 | 0.1169 | 0.1154 | 0.0738 | 0.0789 |
| | 20 | 0.3264 | 0.3214 | 0.3007 | 0.2956 | 0.2074 | 0.2084 |
| | 40 | 0.3479 | 0.3473 | 0.3221 | 0.3217 | 0.2218 | 0.2275 |
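For reference, $NDCG@K$ is the discounted cumulative gain of the top $K$ ranked moments normalized by that of an ideal ordering. Below is a minimal sketch of the metric itself; how a predicted moment is graded as relevant under each IoU threshold follows the paper's protocol and is assumed to happen upstream.
```python
import math

def dcg(relevances):
    # Discounted cumulative gain: lower ranks are discounted logarithmically.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(pred_relevances, all_relevances, k=20):
    # Normalize by the DCG of an ideal (descending) ordering of relevances.
    ideal = dcg(sorted(all_relevances, reverse=True)[:k])
    return dcg(pred_relevances[:k]) / ideal if ideal > 0 else 0.0
```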
### 5. Inference
[ToDo] The checkpoints can all be accessed from Hugging Face [TVR-Ranking](https://huggingface.co/axgroup/TVR-Ranking).
## Citation
If you find this project helpful to your research, please cite our work.
```
```
|
habulaj/19006110673
|
habulaj
| 2024-06-26T18:33:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:33:02Z |
Entry not found
|
ddwadadw3r34r3/Edp445
|
ddwadadw3r34r3
| 2024-06-26T18:34:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:33:50Z |
Entry not found
|
pandoradox/testmodel
|
pandoradox
| 2024-06-28T14:17:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"region:us"
] | null | 2024-06-26T18:34:44Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
google/paligemma-3b-pt-224-keras
|
google
| 2024-06-26T20:46:31Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"image-text-to-text",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2024-06-26T18:34:55Z |
---
library_name: keras-nlp
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: >-
To access PaliGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
pipeline_tag: image-text-to-text
---
PaliGemma is a set of multi-modal large language models published by Google and based on the Gemma model. Both pre-trained and instruction-tuned models are available. See the model card below for benchmarks, data sources, and intended use cases.
## Links
* [PaliGemma API Documentation](https://keras.io/api/keras_nlp/models/pali_gemma/)
* [KerasNLP Beginner Guide](https://keras.io/guides/keras_nlp/getting_started/)
* [KerasNLP Model Publishing Guide](https://keras.io/guides/keras_nlp/upload/)
## Installation
Keras and KerasNLP can be installed with:
```
pip install -U -q keras-nlp
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|-------------------------------------------------------------|
| [paligemma-3b-224-mix-keras](https://huggingface.co/google/paligemma-3b-224-mix-keras) | 2.92B | image size 224, mix fine tuned, text sequence length is 256 |
| [paligemma-3b-448-mix-keras](https://huggingface.co/google/paligemma-3b-448-mix-keras) | 2.92B | image size 448, mix fine tuned, text sequence length is 512 |
| [**paligemma-3b-224-keras**](https://huggingface.co/google/paligemma-3b-224-keras) | 2.92B | image size 224, pre trained, text sequence length is 128 |
| [paligemma-3b-448-keras](https://huggingface.co/google/paligemma-3b-448-keras) | 2.92B | image size 448, pre trained, text sequence length is 512 |
| [paligemma-3b-896-keras](https://huggingface.co/google/paligemma-3b-896-keras) | 2.93B | image size 896, pre trained, text sequence length is 512 |
## Prompts
The PaliGemma `"mix"` models can handle a number of prompting structures out of the box. It is important to stick exactly to these prompts, including the trailing newline. `{lang}` can be a language code such as `"en"` or `"fr"`. Support for languages other than English will vary depending on the prompt type.
* `"cap {lang}\n"`: very raw short caption (from WebLI-alt).
* `"caption {lang}\n"`: coco-like short captions.
* `"describe {lang}\n"`: somewhat longer more descriptive captions.
* `"ocr\n"`: optical character recognition.
* `"answer en {question}\n"`: question answering about the image contents.
* `"question {lang} {answer}\n"`: question generation for a given answer.
* `"detect {thing} ; {thing}\n"`: count objects in a scene.
Non-`"mix"` presets should be fine-tuned for a specific task.
```
!pip install -U -q keras-nlp
```
Pick a backend of your choice
```
import os
os.environ["KERAS_BACKEND"] = "jax"
```
Now we can load the PaliGemma "causal language model" from its Hugging Face preset. A causal language model is simply an LLM that is ready for generation: it is trained with a causal mask and generates one token at a time in a recurrent loop.
```
import keras
import keras_nlp

# Run the model in bfloat16 to reduce memory use.
keras.config.set_floatx("bfloat16")
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset(
    "hf://google/paligemma-3b-224-keras"
)
```
A helper function that reads an image from a given URL:
```
import io

import numpy as np
import PIL.Image
import requests

def read_image(url):
    # Download the image bytes and decode them into a numpy array.
    contents = io.BytesIO(requests.get(url).content)
    image = PIL.Image.open(contents)
    image = np.array(image)
    # Remove the alpha channel if necessary.
    if image.shape[2] == 4:
        image = image[:, :, :3]
    return image
```
```
image_url = 'https://storage.googleapis.com/keras-cv/models/paligemma/cow_beach_1.png'
image = read_image(image_url)
```
Call `generate()` with a single image and prompt. The text prompt has to end with `\n`.
```
prompt = 'answer en where is the cow standing?\n'
output = pali_gemma_lm.generate(
    inputs={
        "images": image,
        "prompts": prompt,
    }
)
print(output)
```
Call `generate()` with batched images and prompts.
```
prompts = [
    'answer en where is the cow standing?\n',
    'answer en what color is the cow?\n',
    'describe en\n',
    'detect cow\n',
    'segment cow\n',
]
images = [image, image, image, image, image]
outputs = pali_gemma_lm.generate(
    inputs={
        "images": images,
        "prompts": prompts,
    }
)
for output in outputs:
    print(output)
```
Call `fit()` on a single batch
```
import numpy as np

image = np.random.uniform(-1, 1, size=(224, 224, 3))
x = {
    "images": [image, image],
    "prompts": ["answer en Where is the cow standing?\n", "caption en\n"],
}
y = {
    "responses": ["beach", "A brown cow standing on a beach next to the ocean."],
}
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset("hf://google/paligemma-3b-224-keras")
pali_gemma_lm.fit(x=x, y=y, batch_size=2)
```
|
daccuong2002/CosineSimilary
|
daccuong2002
| 2024-06-26T18:36:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:36:09Z |
Entry not found
|
habulaj/78559234709
|
habulaj
| 2024-06-26T18:36:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:36:34Z |
Entry not found
|
C0ttontheBunny/Smilingfrens
|
C0ttontheBunny
| 2024-06-26T18:56:36Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T18:37:34Z |
---
license: openrail
---
|
mohammedalaa/mhmd
|
mohammedalaa
| 2024-06-26T18:38:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T18:38:28Z |
---
license: apache-2.0
---
|
iamalexcaspian/LunaLoud-TLH
|
iamalexcaspian
| 2024-06-26T18:42:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:39:17Z |
Entry not found
|
habulaj/257578228312
|
habulaj
| 2024-06-26T18:40:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:40:28Z |
Entry not found
|
gaur3009/gpt2_model
|
gaur3009
| 2024-06-26T18:41:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:41:33Z |
Entry not found
|
google/paligemma-3b-pt-448-keras
|
google
| 2024-06-26T21:03:48Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"image-text-to-text",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2024-06-26T18:41:58Z |
---
license: gemma
library_name: keras-nlp
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: >-
To access PaliGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: image-text-to-text
---
PaliGemma is a set of multi-modal large language models published by Google and based on the Gemma model. Both pre-trained and instruction-tuned models are available. See the model card below for benchmarks, data sources, and intended use cases.
## Links
* [PaliGemma API Documentation](https://keras.io/api/keras_nlp/models/pali_gemma/)
* [KerasNLP Beginner Guide](https://keras.io/guides/keras_nlp/getting_started/)
* [KerasNLP Model Publishing Guide](https://keras.io/guides/keras_nlp/upload/)
## Installation
Keras and KerasNLP can be installed with:
```
pip install -U -q keras-nlp
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|-------------------------------------------------------------|
| [paligemma-3b-224-mix-keras](https://huggingface.co/google/paligemma-3b-224-mix-keras) | 2.92B | image size 224, mix fine tuned, text sequence length is 256 |
| [paligemma-3b-448-mix-keras](https://huggingface.co/google/paligemma-3b-448-mix-keras) | 2.92B | image size 448, mix fine tuned, text sequence length is 512 |
| [paligemma-3b-224-keras](https://huggingface.co/google/paligemma-3b-224-keras) | 2.92B | image size 224, pre trained, text sequence length is 128 |
| [**paligemma-3b-448-keras**](https://huggingface.co/google/paligemma-3b-448-keras) | 2.92B | image size 448, pre trained, text sequence length is 512 |
| [paligemma-3b-896-keras](https://huggingface.co/google/paligemma-3b-896-keras) | 2.93B | image size 896, pre trained, text sequence length is 512 |
## Prompts
The PaliGemma `"mix"` models can handle a number of prompting structures out of the box. It is important to stick exactly to these prompts, including the trailing newline. `{lang}` can be a language code such as `"en"` or `"fr"`. Support for languages other than English will vary depending on the prompt type.
* `"cap {lang}\n"`: very raw short caption (from WebLI-alt).
* `"caption {lang}\n"`: coco-like short captions.
* `"describe {lang}\n"`: somewhat longer more descriptive captions.
* `"ocr\n"`: optical character recognition.
* `"answer en {question}\n"`: question answering about the image contents.
* `"question {lang} {answer}\n"`: question generation for a given answer.
* `"detect {thing} ; {thing}\n"`: count objects in a scene.
Non-`"mix"` presets should be fine-tuned for a specific task.
```
!pip install -U -q keras-nlp
```
Pick a backend of your choice
```
import os
os.environ["KERAS_BACKEND"] = "jax"
```
Now we can load the PaliGemma "causal language model" from its Hugging Face preset. A causal language model is simply an LLM that is ready for generation: it is trained with a causal mask and generates one token at a time in a recurrent loop.
```
import keras
import keras_nlp

# Run the model in bfloat16 to reduce memory use.
keras.config.set_floatx("bfloat16")
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset(
    "hf://google/paligemma-3b-448-keras"
)
```
A helper function that reads an image from a given URL:
```
import io

import numpy as np
import PIL.Image
import requests

def read_image(url):
    # Download the image bytes and decode them into a numpy array.
    contents = io.BytesIO(requests.get(url).content)
    image = PIL.Image.open(contents)
    image = np.array(image)
    # Remove the alpha channel if necessary.
    if image.shape[2] == 4:
        image = image[:, :, :3]
    return image
```
```
image_url = 'https://storage.googleapis.com/keras-cv/models/paligemma/cow_beach_1.png'
image = read_image(image_url)
```
Call `generate()` with a single image and prompt. The text prompt has to end with `\n`.
```
prompt = 'answer en where is the cow standing?\n'
output = pali_gemma_lm.generate(
    inputs={
        "images": image,
        "prompts": prompt,
    }
)
print(output)
```
Call `generate()` with batched images and prompts.
```
prompts = [
    'answer en where is the cow standing?\n',
    'answer en what color is the cow?\n',
    'describe en\n',
    'detect cow\n',
    'segment cow\n',
]
images = [image, image, image, image, image]
outputs = pali_gemma_lm.generate(
    inputs={
        "images": images,
        "prompts": prompts,
    }
)
for output in outputs:
    print(output)
```
Call `fit()` on a single batch
```
import numpy as np

image = np.random.uniform(-1, 1, size=(224, 224, 3))
x = {
    "images": [image, image],
    "prompts": ["answer en Where is the cow standing?\n", "caption en\n"],
}
y = {
    "responses": ["beach", "A brown cow standing on a beach next to the ocean."],
}
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset("hf://google/paligemma-3b-448-keras")
pali_gemma_lm.fit(x=x, y=y, batch_size=2)
```
|
Hiezen/llama-3-8b-chat-doctor
|
Hiezen
| 2024-06-26T18:42:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:42:21Z |
Entry not found
|
Sakjay/Thai-1Epoch
|
Sakjay
| 2024-06-26T18:42:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:42:22Z |
Entry not found
|
google/paligemma-3b-pt-896-keras
|
google
| 2024-06-26T21:04:14Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"image-text-to-text",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2024-06-26T18:44:37Z |
---
library_name: keras-nlp
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: >-
To access PaliGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
pipeline_tag: image-text-to-text
---
PaliGemma is a set of multi-modal large language models published by Google and based on the Gemma model. Both pre-trained and instruction-tuned models are available. See the model card below for benchmarks, data sources, and intended use cases.
## Links
* [PaliGemma API Documentation](https://keras.io/api/keras_nlp/models/pali_gemma/)
* [KerasNLP Beginner Guide](https://keras.io/guides/keras_nlp/getting_started/)
* [KerasNLP Model Publishing Guide](https://keras.io/guides/keras_nlp/upload/)
## Installation
Keras and KerasNLP can be installed with:
```
pip install -U -q keras-nlp
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|-------------------------------------------------------------|
| [paligemma-3b-224-mix-keras](https://huggingface.co/google/paligemma-3b-224-mix-keras) | 2.92B | image size 224, mix fine tuned, text sequence length is 256 |
| [paligemma-3b-448-mix-keras](https://huggingface.co/google/paligemma-3b-448-mix-keras) | 2.92B | image size 448, mix fine tuned, text sequence length is 512 |
| [paligemma-3b-224-keras](https://huggingface.co/google/paligemma-3b-224-keras) | 2.92B | image size 224, pre trained, text sequence length is 128 |
| [paligemma-3b-448-keras](https://huggingface.co/google/paligemma-3b-448-keras) | 2.92B | image size 448, pre trained, text sequence length is 512 |
| [**paligemma-3b-896-keras**](https://huggingface.co/google/paligemma-3b-896-keras) | 2.93B | image size 896, pre trained, text sequence length is 512 |
## Prompts
The PaliGemma `"mix"` models can handle a number of prompting structures out of the box. It is important to stick exactly to these prompts, including the trailing newline. `{lang}` can be a language code such as `"en"` or `"fr"`. Support for languages other than English will vary depending on the prompt type.
* `"cap {lang}\n"`: very raw short caption (from WebLI-alt).
* `"caption {lang}\n"`: coco-like short captions.
* `"describe {lang}\n"`: somewhat longer more descriptive captions.
* `"ocr\n"`: optical character recognition.
* `"answer en {question}\n"`: question answering about the image contents.
* `"question {lang} {answer}\n"`: question generation for a given answer.
* `"detect {thing} ; {thing}\n"`: count objects in a scene.
Non-`"mix"` presets should be fine-tuned for a specific task.
```
!pip install -U -q keras-nlp
```
Pick a backend of your choice
```
import os
os.environ["KERAS_BACKEND"] = "jax"
```
Now we can load the PaliGemma "causal language model" from its Hugging Face preset. A causal language model is simply an LLM that is ready for generation: it is trained with a causal mask and generates one token at a time in a recurrent loop.
```
import keras
import keras_nlp

# Run the model in bfloat16 to reduce memory use.
keras.config.set_floatx("bfloat16")
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset(
    "hf://google/paligemma-3b-896-keras"
)
```
A helper function that reads an image from a given URL:
```
import io

import numpy as np
import PIL.Image
import requests

def read_image(url):
    # Download the image bytes and decode them into a numpy array.
    contents = io.BytesIO(requests.get(url).content)
    image = PIL.Image.open(contents)
    image = np.array(image)
    # Remove the alpha channel if necessary.
    if image.shape[2] == 4:
        image = image[:, :, :3]
    return image
```
```
image_url = 'https://storage.googleapis.com/keras-cv/models/paligemma/cow_beach_1.png'
image = read_image(image_url)
```
Call `generate()` with a single image and prompt. The text prompt has to end with `\n`.
```
prompt = 'answer en where is the cow standing?\n'
output = pali_gemma_lm.generate(
    inputs={
        "images": image,
        "prompts": prompt,
    }
)
print(output)
```
Call `generate()` with batched images and prompts.
```
prompts = [
    'answer en where is the cow standing?\n',
    'answer en what color is the cow?\n',
    'describe en\n',
    'detect cow\n',
    'segment cow\n',
]
images = [image, image, image, image, image]
outputs = pali_gemma_lm.generate(
    inputs={
        "images": images,
        "prompts": prompts,
    }
)
for output in outputs:
    print(output)
```
Call `fit()` on a single batch
```
import numpy as np

image = np.random.uniform(-1, 1, size=(224, 224, 3))
x = {
    "images": [image, image],
    "prompts": ["answer en Where is the cow standing?\n", "caption en\n"],
}
y = {
    "responses": ["beach", "A brown cow standing on a beach next to the ocean."],
}
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset("hf://google/paligemma-3b-896-keras")
pali_gemma_lm.fit(x=x, y=y, batch_size=2)
```
|
albertoravasini/justlearn
|
albertoravasini
| 2024-06-26T18:45:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T18:45:20Z |
---
license: apache-2.0
---
|
habulaj/219688191830
|
habulaj
| 2024-06-26T18:46:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:46:51Z |
Entry not found
|
google/paligemma-3b-mix-224-keras
|
google
| 2024-06-26T21:02:11Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"image-text-to-text",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2024-06-26T18:47:06Z |
---
library_name: keras-nlp
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: >-
To access PaliGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
pipeline_tag: image-text-to-text
---
PaliGemma is a set of multi-modal large language models published by Google and based on the Gemma model. Both pre-trained and instruction-tuned models are available. See the model card below for benchmarks, data sources, and intended use cases.
## Links
* [PaliGemma API Documentation](https://keras.io/api/keras_nlp/models/pali_gemma/)
* [KerasNLP Beginner Guide](https://keras.io/guides/keras_nlp/getting_started/)
* [KerasNLP Model Publishing Guide](https://keras.io/guides/keras_nlp/upload/)
## Installation
Keras and KerasNLP can be installed with:
```
pip install -U -q keras-nlp
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|-------------------------------------------------------------|
| [**paligemma-3b-224-mix-keras**](https://huggingface.co/google/paligemma-3b-224-mix-keras) | 2.92B | image size 224, mix fine tuned, text sequence length is 256 |
| [paligemma-3b-448-mix-keras](https://huggingface.co/google/paligemma-3b-448-mix-keras) | 2.92B | image size 448, mix fine tuned, text sequence length is 512 |
| [paligemma-3b-224-keras](https://huggingface.co/google/paligemma-3b-224-keras) | 2.92B | image size 224, pre trained, text sequence length is 128 |
| [paligemma-3b-448-keras](https://huggingface.co/google/paligemma-3b-448-keras) | 2.92B | image size 448, pre trained, text sequence length is 512 |
| [paligemma-3b-896-keras](https://huggingface.co/google/paligemma-3b-896-keras) | 2.93B | image size 896, pre trained, text sequence length is 512 |
## Prompts
The PaliGemma `"mix"` models can handle a number of prompting structures out of the box. It is important to stick exactly to these prompts, including the trailing newline. `{lang}` can be a language code such as `"en"` or `"fr"`. Support for languages other than English will vary depending on the prompt type.
* `"cap {lang}\n"`: very raw short caption (from WebLI-alt).
* `"caption {lang}\n"`: coco-like short captions.
* `"describe {lang}\n"`: somewhat longer more descriptive captions.
* `"ocr\n"`: optical character recognition.
* `"answer en {question}\n"`: question answering about the image contents.
* `"question {lang} {answer}\n"`: question generation for a given answer.
* `"detect {thing} ; {thing}\n"`: count objects in a scene.
Non-`"mix"` presets should be fine-tuned for a specific task.
```
!pip install -U -q keras-nlp
```
Pick a backend of your choice
```
import os
os.environ["KERAS_BACKEND"] = "jax"
```
Now we can load the PaliGemma "causal language model" from its Hugging Face preset. A causal language model is simply an LLM that is ready for generation: it is trained with a causal mask and generates one token at a time in a recurrent loop.
```
import keras
import keras_nlp

# Run the model in bfloat16 to reduce memory use.
keras.config.set_floatx("bfloat16")
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset(
    "hf://google/paligemma-3b-224-mix-keras"
)
```
A helper function that reads an image from a given URL:
```
import io

import numpy as np
import PIL.Image
import requests

def read_image(url):
    # Download the image bytes and decode them into a numpy array.
    contents = io.BytesIO(requests.get(url).content)
    image = PIL.Image.open(contents)
    image = np.array(image)
    # Remove the alpha channel if necessary.
    if image.shape[2] == 4:
        image = image[:, :, :3]
    return image
```
```
image_url = 'https://storage.googleapis.com/keras-cv/models/paligemma/cow_beach_1.png'
image = read_image(image_url)
```
Call `generate()` with a single image and prompt. The text prompt has to end with `\n`.
```
prompt = 'answer en where is the cow standing?\n'
output = pali_gemma_lm.generate(
    inputs={
        "images": image,
        "prompts": prompt,
    }
)
print(output)
```
Call `generate()` with batched images and prompts.
```
prompts = [
    'answer en where is the cow standing?\n',
    'answer en what color is the cow?\n',
    'describe en\n',
    'detect cow\n',
    'segment cow\n',
]
images = [image, image, image, image, image]
outputs = pali_gemma_lm.generate(
    inputs={
        "images": images,
        "prompts": prompts,
    }
)
for output in outputs:
    print(output)
```
Call `fit()` on a single batch
```
import numpy as np

image = np.random.uniform(-1, 1, size=(224, 224, 3))
x = {
    "images": [image, image],
    "prompts": ["answer en Where is the cow standing?\n", "caption en\n"],
}
y = {
    "responses": ["beach", "A brown cow standing on a beach next to the ocean."],
}
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset("hf://google/paligemma-3b-224-mix-keras")
pali_gemma_lm.fit(x=x, y=y, batch_size=2)
```
|
habulaj/2542025133
|
habulaj
| 2024-06-26T18:47:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:47:46Z |
Entry not found
|
google/paligemma-3b-mix-448-keras
|
google
| 2024-06-26T21:03:08Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"image-text-to-text",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2024-06-26T18:49:50Z |
---
library_name: keras-nlp
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: >-
To access PaliGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
pipeline_tag: image-text-to-text
---
PaliGemma is a set of multi-modal large language models published by Google and based on the Gemma model. Both pre-trained and instruction-tuned models are available. See the model card below for benchmarks, data sources, and intended use cases.
## Links
* [PaliGemma API Documentation](https://keras.io/api/keras_nlp/models/pali_gemma/)
* [KerasNLP Beginner Guide](https://keras.io/guides/keras_nlp/getting_started/)
* [KerasNLP Model Publishing Guide](https://keras.io/guides/keras_nlp/upload/)
## Installation
Keras and KerasNLP can be installed with:
```
pip install -U -q keras-nlp
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-----------------------|------------|-------------------------------------------------------------|
| [paligemma-3b-224-mix-keras](https://huggingface.co/google/paligemma-3b-224-mix-keras) | 2.92B | image size 224, mix fine tuned, text sequence length is 256 |
| [**paligemma-3b-448-mix-keras**](https://huggingface.co/google/paligemma-3b-448-mix-keras) | 2.92B | image size 448, mix fine tuned, text sequence length is 512 |
| [paligemma-3b-224-keras](https://huggingface.co/google/paligemma-3b-224-keras) | 2.92B | image size 224, pre trained, text sequence length is 128 |
| [paligemma-3b-448-keras](https://huggingface.co/google/paligemma-3b-448-keras) | 2.92B | image size 448, pre trained, text sequence length is 512 |
| [paligemma-3b-896-keras](https://huggingface.co/google/paligemma-3b-896-keras) | 2.93B | image size 896, pre trained, text sequence length is 512 |
## Prompts
The PaliGemma `"mix"` models can handle a number of prompting structures out of the box. It is important to stick exactly to these prompts, including the trailing newline. `{lang}` can be a language code such as `"en"` or `"fr"`. Support for languages other than English will vary depending on the prompt type.
* `"cap {lang}\n"`: very raw short caption (from WebLI-alt).
* `"caption {lang}\n"`: coco-like short captions.
* `"describe {lang}\n"`: somewhat longer more descriptive captions.
* `"ocr\n"`: optical character recognition.
* `"answer en {question}\n"`: question answering about the image contents.
* `"question {lang} {answer}\n"`: question generation for a given answer.
* `"detect {thing} ; {thing}\n"`: count objects in a scene.
Non-`"mix"` presets should be fine-tuned for a specific task.
```
!pip install -U -q keras-nlp
```
Pick a backend of your choice
```
import os
os.environ["KERAS_BACKEND"] = "jax"
```
Now we can load the PaliGemma "causal language model" from its Hugging Face preset. A causal language model is simply an LLM that is ready for generation: it is trained with a causal mask and generates one token at a time in a recurrent loop.
```
import keras
import keras_nlp

# Run the model in bfloat16 to reduce memory use.
keras.config.set_floatx("bfloat16")
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset(
    "hf://google/paligemma-3b-448-mix-keras"
)
```
A helper function that reads an image from a given URL:
```
import io

import numpy as np
import PIL.Image
import requests

def read_image(url):
    # Download the image bytes and decode them into a numpy array.
    contents = io.BytesIO(requests.get(url).content)
    image = PIL.Image.open(contents)
    image = np.array(image)
    # Remove the alpha channel if necessary.
    if image.shape[2] == 4:
        image = image[:, :, :3]
    return image
```
```
image_url = 'https://storage.googleapis.com/keras-cv/models/paligemma/cow_beach_1.png'
image = read_image(image_url)
```
Call `generate()` with a single image and prompt. The text prompt has to end with `\n`.
```
prompt = 'answer en where is the cow standing?\n'
output = pali_gemma_lm.generate(
    inputs={
        "images": image,
        "prompts": prompt,
    }
)
print(output)
```
Call `generate()` with batched images and prompts.
```
prompts = [
    'answer en where is the cow standing?\n',
    'answer en what color is the cow?\n',
    'describe en\n',
    'detect cow\n',
    'segment cow\n',
]
images = [image, image, image, image, image]
outputs = pali_gemma_lm.generate(
    inputs={
        "images": images,
        "prompts": prompts,
    }
)
for output in outputs:
    print(output)
```
Call `fit()` on a single batch
```
import numpy as np

image = np.random.uniform(-1, 1, size=(224, 224, 3))
x = {
    "images": [image, image],
    "prompts": ["answer en Where is the cow standing?\n", "caption en\n"],
}
y = {
    "responses": ["beach", "A brown cow standing on a beach next to the ocean."],
}
pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset("hf://google/paligemma-3b-448-mix-keras")
pali_gemma_lm.fit(x=x, y=y, batch_size=2)
```
|
mabrouk/dummy
|
mabrouk
| 2024-06-26T18:50:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T18:50:05Z |
---
license: apache-2.0
---
|
habulaj/450295442644
|
habulaj
| 2024-06-26T18:50:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:50:47Z |
Entry not found
|
eskayML/SD-0.1
|
eskayML
| 2024-06-26T19:59:02Z | 0 | 0 | null |
[
"tensorboard",
"art",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T18:52:55Z |
---
license: apache-2.0
datasets:
- huggan/smithsonian_butterflies_subset
tags:
- art
---
|
habulaj/709410270
|
habulaj
| 2024-06-26T18:56:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:56:41Z |
Entry not found
|
HowToSD/face_unblur
|
HowToSD
| 2024-06-28T20:07:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:57:23Z |
Entry not found
|
habulaj/2333723023
|
habulaj
| 2024-06-26T18:57:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T18:57:24Z |
Entry not found
|
Sakjay/Thai-Updated-Parameters
|
Sakjay
| 2024-06-26T20:45:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T18:58:46Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HyperdustProtocol/ImHyperAGI-cog-llama2-7b-5908
|
HyperdustProtocol
| 2024-06-26T19:07:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T19:06:51Z |
---
base_model: unsloth/llama-2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** HyperdustProtocol
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Silentcosmo/payal
|
Silentcosmo
| 2024-06-26T19:07:24Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T19:07:24Z |
---
license: mit
---
|
ana2025/robitos
|
ana2025
| 2024-06-26T19:10:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:10:11Z |
Entry not found
|
sontq/naschain
|
sontq
| 2024-07-02T15:08:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:14:04Z |
Entry not found
|
miqueiascoutinho/firts
|
miqueiascoutinho
| 2024-06-26T19:14:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:14:22Z |
Entry not found
|
habulaj/12116595730
|
habulaj
| 2024-06-26T19:14:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:14:55Z |
Entry not found
|
Rishabh5inghj/Mistral_ex
|
Rishabh5inghj
| 2024-06-26T19:16:04Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T19:16:04Z |
---
license: apache-2.0
---
|
habulaj/243388214614
|
habulaj
| 2024-06-26T19:18:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:18:17Z |
Entry not found
|
henriquepxl/vit-base-patch16-224-pokemon
|
henriquepxl
| 2024-06-26T19:19:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:19:37Z |
Entry not found
|
google/codegemma-1.1-2b-keras
|
google
| 2024-06-26T20:35:30Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"text-generation",
"license:gemma",
"region:us"
] |
text-generation
| 2024-06-26T19:19:45Z |
---
library_name: keras-nlp
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
---
# CodeGemma
**Google Model Page**: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)
This model card corresponds to the latest 2B base version of the CodeGemma 1.1 model, for use in Keras.
Keras models can be used with JAX, PyTorch or TensorFlow as numerical backends.
JAX, with its support for SPMD model parallelism, is recommended for large models.
For more information: [distributed training with Keras and JAX](https://keras.io/guides/distribution/).
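For example, the backend can be selected via the `KERAS_BACKEND` environment variable before Keras is imported (a minimal sketch):
```python
import os

# Must be set before keras / keras_nlp are imported.
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp
```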
You can find other models in the CodeGemma family here:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [**codegemma-1.1-2b-keras**](https://huggingface.co/google/codegemma-1.1-2b-keras) | |
| 7B | [codegemma-7b-keras](https://huggingface.co/google/codegemma-7b-keras) | [codegemma-1.1-7b-it-keras](https://huggingface.co/google/codegemma-1.1-7b-it-keras) |
For more information about the model, visit https://huggingface.co/google/codegemma-1.1-2b.
Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
Terms of Use
: [Terms](https://ai.google.dev/gemma/terms)
Authors
: Google
## Loading the model
```python
import keras_nlp
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://google/codegemma-1.1-2b-keras")
```
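Once loaded, text can be generated with `generate()`; a short usage sketch with an illustrative prompt:
```python
# The prompt below is only an example of a code-completion input.
output = gemma_lm.generate("def fibonacci(n):", max_length=64)
print(output)
```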
|
SamSzamocki/dummy-model_2
|
SamSzamocki
| 2024-06-26T19:20:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:20:06Z |
Entry not found
|
habulaj/1172215565
|
habulaj
| 2024-06-26T19:20:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:20:45Z |
Entry not found
|
google/codegemma-1.1-7b-it-keras
|
google
| 2024-06-26T20:35:07Z | 0 | 0 |
keras-nlp
|
[
"keras-nlp",
"text-generation",
"license:gemma",
"region:us"
] |
text-generation
| 2024-06-26T19:22:33Z |
---
library_name: keras-nlp
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
---
# CodeGemma
**Google Model Page**: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)
This model card corresponds to the latest 7B instruct version of the CodeGemma 1.1 model, for use in Keras.
Keras models can be used with JAX, PyTorch or TensorFlow as numerical backends.
JAX, with its support for SPMD model parallelism, is recommended for large models.
For more information: [distributed training with Keras and JAX](https://keras.io/guides/distribution/).
You can find other models in the CodeGemma family here:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [codegemma-1.1-2b-keras](https://huggingface.co/google/codegemma-1.1-2b-keras) | |
| 7B | [codegemma-7b-keras](https://huggingface.co/google/codegemma-7b-keras) | [**codegemma-1.1-7b-it-keras**](https://huggingface.co/google/codegemma-1.1-7b-it-keras) |
For more information about the model, visit https://huggingface.co/google/codegemma-1.1-7b-it.
Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
Terms of Use
: [Terms](https://ai.google.dev/gemma/terms)
Authors
: Google
## Loading the model
```python
import keras_nlp
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://google/codegemma-1.1-7b-it-keras")
```
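As an instruction-tuned model, prompts are expected to follow the Gemma conversation format. A hedged sketch (the turn markers follow the published Gemma prompt template; the request itself is illustrative):
```python
# Gemma-style turn markers; the user request here is only an example.
prompt = (
    "<start_of_turn>user\n"
    "Write a Python function that checks whether a number is prime.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
print(gemma_lm.generate(prompt, max_length=256))
```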
|
Anytram/llama-3-8b-Instruct-bnb-4bit-medical
|
Anytram
| 2024-06-26T19:23:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T19:23:23Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** Anytram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/440089408194
|
habulaj
| 2024-06-26T19:26:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:26:07Z |
Entry not found
|
TioPanda/pandev-blocks-v3
|
TioPanda
| 2024-06-26T19:26:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T19:26:33Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** TioPanda
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
habulaj/2719826935
|
habulaj
| 2024-06-26T19:28:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:28:47Z |
Entry not found
|
k4d3/hotdogwolf
|
k4d3
| 2024-06-26T19:42:05Z | 0 | 0 | null |
[
"art",
"not-for-all-audiences",
"en",
"dataset:k4d3/furry",
"license:wtfpl",
"region:us"
] | null | 2024-06-26T19:29:34Z |
---
license: wtfpl
datasets:
- k4d3/furry
language:
- en
tags:
- art
- not-for-all-audiences
---
|
Stephen96/apple-gan-generator
|
Stephen96
| 2024-06-27T21:45:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:32:28Z |
ML4B-Team-7
# 🍎🍏The Applegenerator🍏🍎
Welcome to [The Applegenerator](https://ml4b-team-7-applegenerator.streamlit.app/)!
This app allows you to generate images of apples using a GAN and view recently created ones in the history section.
Dataset: Mihai Oltean, [Fruits-360 dataset](https://www.kaggle.com/datasets/moltean/fruits), 2017-.
---
Thank you for using The Applegenerator! We hope you enjoy it.
|
amrdiab/Yaseen
|
amrdiab
| 2024-06-26T19:32:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:32:53Z |
Entry not found
|
Flamenco43/200k-DDP-run5
|
Flamenco43
| 2024-06-26T19:34:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:34:20Z |
Entry not found
|
arjan-hada/esm2_t33_650M_UR50D-finetuned-Ab14H-v0
|
arjan-hada
| 2024-06-26T19:34:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:34:22Z |
Entry not found
|
dammyogt/Dammyogt_finetuned_voxpopuli_nl
|
dammyogt
| 2024-06-26T19:34:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:34:49Z |
Entry not found
|
habulaj/106076115139
|
habulaj
| 2024-06-26T19:37:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:37:47Z |
Entry not found
|
musicforte/images
|
musicforte
| 2024-06-26T19:40:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:40:39Z |
Entry not found
|
habulaj/144684121079
|
habulaj
| 2024-06-26T19:40:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:40:45Z |
Entry not found
|
Tuhaishi/FinTwitBERT-sentiment
|
Tuhaishi
| 2024-06-26T19:44:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:44:51Z |
Entry not found
|
RicardoMorim/q-FrozenLake-v1-4x4-noSlippery
|
RicardoMorim
| 2024-06-26T19:46:33Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T19:46:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # `import gym` on older setups

# `load_from_hub` downloads and unpickles the model from the Hub (sketch below)
model = load_from_hub(repo_id="RicardoMorim/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
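The snippet assumes a `load_from_hub` helper that is not defined on this card; a minimal sketch using `huggingface_hub.hf_hub_download` (the helper name and signature follow the Deep RL Course utilities) might look like:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table model from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```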
|
RicardoMorim/Taxi_Driver_AI
|
RicardoMorim
| 2024-06-26T19:50:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T19:49:16Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_Driver_AI
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # `import gym` on older setups

# `load_from_hub` downloads and unpickles the model from the Hub
model = load_from_hub(repo_id="RicardoMorim/Taxi_Driver_AI", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
felipesampaio2010/JamesMarshallOsUnder
|
felipesampaio2010
| 2024-06-26T19:59:55Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T19:57:37Z |
---
license: openrail
---
|
intracta/Meta-Llama-3-8B-Instruct
|
intracta
| 2024-06-26T19:57:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:57:42Z |
Entry not found
|
Maxi00/test
|
Maxi00
| 2024-06-26T20:09:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T19:59:09Z |
Entry not found
|
alex2204/prepaabiertadurango
|
alex2204
| 2024-06-26T19:59:12Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T19:59:12Z |
---
license: apache-2.0
---
|
XueyingJia/zerogen
|
XueyingJia
| 2024-06-26T19:59:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T19:59:29Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** XueyingJia
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kathleenge/qa-model-1
|
kathleenge
| 2024-06-26T20:02:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T20:02:05Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** kathleenge
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
leeloli/wendy-wish-you-well
|
leeloli
| 2024-06-26T20:06:23Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T20:04:49Z |
---
license: openrail
---
|
Grayx/john_paul_van_damme_38
|
Grayx
| 2024-06-26T20:06:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:06:40Z |
Entry not found
|
Grayx/john_paul_van_damme_39
|
Grayx
| 2024-06-26T20:07:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:07:18Z |
Entry not found
|
Grayx/john_paul_van_damme_40
|
Grayx
| 2024-06-26T20:08:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:08:20Z |
Entry not found
|
BatoolZ/llama-2-7b-hf-small-shards
|
BatoolZ
| 2024-06-26T20:09:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:09:56Z |
Entry not found
|
UlrikKoren/PIIMask-NOR
|
UlrikKoren
| 2024-06-26T21:05:02Z | 0 | 1 | null |
[
"tensorboard",
"safetensors",
"no",
"license:gemma",
"region:us"
] | null | 2024-06-26T20:12:22Z |
---
license: gemma
language:
- 'no'
---
# PIIMask-NOR Model
The PIIMask-NOR model is a specialized language model fine-tuned for Personally Identifiable Information (PII) redaction in Norwegian Bokmål. It is based on the "google/gemma-1.1-2b-it" model and trained to identify and redact various types of PII in text while maintaining the grammatical structure of sentences.
## Model Description
- **Model Name:** PIIMask-NOR
- **Base Model:** [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
- **Quantization:** 4-bit quantization using NF4 with double quantization and float16 compute dtype.
- **Training Steps:** Model checkpoints are available after 1, 2, 3, and 4 epochs of training.
## Methodology
The PIIMask-NOR model was fine-tuned on the ai4privacy/pii-masking-65k dataset, machine-translated into Norwegian Bokmål. The training process ran for several epochs to improve the model's ability to accurately redact PII from text, and the quantization configuration was applied to make the model more efficient for deployment.
## Usage
### Installation
To use the PIIMask-NOR model, you need the `transformers` and `datasets` libraries, plus `accelerate` and `bitsandbytes` for the 4-bit quantized loading shown below. Install them with pip:
```bash
pip install transformers datasets accelerate bitsandbytes
```
### Code Example
Here is a code example to load and use the PIIMask-NOR model for PII redaction:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Quantization configuration: 4-bit NF4 with double quantization, float16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# System instructions for PII redaction
system_instructions = """Erstatt følgende typer personopplysninger i teksten nedenfor med '[REDACTED]': [FIRST_NAME_x], [CITY_x], [COUNTRY_x]. Sørg for at hver type informasjon erstattes på en måte som opprettholder den grammatiske strukturen i setningen. Du skal kun returnere den nye teksten med de relevante erstatningene utført, uten den opprinnelige teksten eller noen tilleggsannotasjoner.
Input:"""
example_prompt = "Jeg heter Clara og bor i Bergen, Norge."

# Load the adapter for a given checkpoint step (e.g. 579 for the epoch-1 checkpoint)
def load_model(repo, step):
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        device_map="cuda:0",
        trust_remote_code=True,
        quantization_config=bnb_config,
        adapter_kwargs={"subfolder": f"checkpoint-{step}"},
        attn_implementation="flash_attention_2",  # requires the flash-attn package; drop this argument if it is not installed
    )
    return model

# Initialize tokenizer and model
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it", use_fast=True)

# Build the prompt from the chat template, then tokenize it
# (Gemma chat templates may reject a "system" role; if so, prepend the
# instructions to the user message instead.)
chat = [
    {"role": "system", "content": system_instructions},
    {"role": "user", "content": example_prompt},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")

model = load_model("UlrikKoren/PIIMask-NOR", step=579)  # epoch-1 checkpoint
outputs = model.generate(input_ids=inputs["input_ids"].to(device), max_new_tokens=2048)
decoded_outputs = [tokenizer.decode(output, skip_special_tokens=False) for output in outputs]
print(decoded_outputs[0])
```
### Checkpoints
The model checkpoints for the different training epochs live in subfolders of the repository (a loading sketch follows this list):
- **Epoch 1:** `UlrikKoren/PIIMask-NOR`, subfolder `checkpoint-579`
- **Epoch 2:** `UlrikKoren/PIIMask-NOR`, subfolder `checkpoint-1159`
- **Epoch 3:** `UlrikKoren/PIIMask-NOR`, subfolder `checkpoint-1739`
- **Epoch 4:** `UlrikKoren/PIIMask-NOR`, subfolder `checkpoint-2316`
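With the `load_model` helper from the usage example, a specific epoch is selected by passing its checkpoint step (a minimal sketch, assuming the subfolder names listed above):
```python
# Load the epoch-4 checkpoint of PIIMask-NOR (subfolder "checkpoint-2316")
model_epoch4 = load_model("UlrikKoren/PIIMask-NOR", step=2316)
```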
## Compliance with Gemma Terms of Use
This model is a derivative of the "google/gemma-1.1-2b-it" model and complies with the Gemma Terms of Use:
- **Distribution:** Any distribution of this model or its derivatives must include the use restrictions specified in the Gemma Terms of Use and provide notice to subsequent users.
- **Notices:** The model is distributed with the following notice: “Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms”.
- **Modifications:** Any modified files carry prominent notices stating the modifications made.
- **Prohibited Uses:** The use of this model is subject to the restrictions outlined in the Gemma Prohibited Use Policy.
- **Trademarks:** This distribution does not grant any rights to use Google’s trademarks, trade names, or logos.
## License
The PIIMask-NOR model is distributed under the same terms as the base model. For more details, please refer to the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
|
UlrikKoren/PIIMask-EN
|
UlrikKoren
| 2024-06-26T21:06:47Z | 0 | 0 | null |
[
"safetensors",
"en",
"license:gemma",
"region:us"
] | null | 2024-06-26T20:13:10Z |
---
license: gemma
language:
- en
---
# PIIMask-EN Model
The PIIMask-EN model is a specialized language model fine-tuned for Personally Identifiable Information (PII) redaction. It is based on the "google/gemma-1.1-2b-it" model and trained to identify and redact various types of PII in text while maintaining the grammatical structure of sentences.
## Model Description
- **Model Name:** PIIMask-EN
- **Base Model:** [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
- **Fine-tuning Dataset:** [ai4privacy/pii-masking-65k](https://huggingface.co/datasets/ai4privacy/pii-masking-65k) (specifically `english_balanced_10k.jsonl` subset)
- **Quantization:** 4-bit quantization using NF4 with double quantization and float16 compute dtype.
- **Training Steps:** Model checkpoints are available after 1, 2, 3, and 4 epochs of training.
## Methodology
The PIIMask-EN model was fine-tuned using the ai4privacy/pii-masking-65k dataset, which contains various text entries annotated with different types of PII. The training process involved several epochs to improve the model's ability to accurately redact PII from text. The quantization configuration was applied to make the model more efficient for deployment.
## Usage
### Installation
To use the PIIMask-EN model, you need the `transformers` and `datasets` libraries, plus `accelerate` and `bitsandbytes` for the 4-bit quantized loading shown below. Install them with pip:
```bash
pip install transformers datasets accelerate bitsandbytes
```
### Code Example
Here is a code example to load and use the PIIMask-EN model for PII redaction:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Quantization configuration: 4-bit NF4 with double quantization, float16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# System instructions for PII redaction
system_instructions = """Replace the following types of personal information in the text below with '[REDACTED]': [FIRST_NAME_x], [CITY_x], [STATE_x]. Ensure that each type of information is replaced in a way that maintains the grammatical structure of the sentence. You should only return the new text with the relevant replacements made, without the original text or any additional annotations.
Input:"""
example_prompt = "My name is Clara and I live in Berkeley, California."

# Load the adapter for a given checkpoint step (e.g. 579 for the epoch-1 checkpoint)
def load_model(repo, step):
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        device_map="cuda:0",
        trust_remote_code=True,
        quantization_config=bnb_config,
        adapter_kwargs={"subfolder": f"checkpoint-{step}"},
        attn_implementation="flash_attention_2",  # requires the flash-attn package; drop this argument if it is not installed
    )
    return model

# Initialize tokenizer and model
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it", use_fast=True)

# Build the prompt from the chat template, then tokenize it
# (Gemma chat templates may reject a "system" role; if so, prepend the
# instructions to the user message instead.)
chat = [
    {"role": "system", "content": system_instructions},
    {"role": "user", "content": example_prompt},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")

model = load_model("UlrikKoren/PIIMask-EN", step=579)  # epoch-1 checkpoint
outputs = model.generate(input_ids=inputs["input_ids"].to(device), max_new_tokens=2048)
decoded_outputs = [tokenizer.decode(output, skip_special_tokens=False) for output in outputs]
print(decoded_outputs[0])
```
### Checkpoints
The model checkpoints for the different training epochs live in subfolders of the repository (a loading sketch follows this list):
- **Epoch 1:** `UlrikKoren/PIIMask-EN`, subfolder `checkpoint-579`
- **Epoch 2:** `UlrikKoren/PIIMask-EN`, subfolder `checkpoint-1159`
- **Epoch 3:** `UlrikKoren/PIIMask-EN`, subfolder `checkpoint-1739`
- **Epoch 4:** `UlrikKoren/PIIMask-EN`, subfolder `checkpoint-2316`
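As above, pass the checkpoint step to `load_model` to pick an epoch (a minimal sketch, assuming the subfolder names listed here):
```python
# Load the epoch-2 checkpoint of PIIMask-EN (subfolder "checkpoint-1159")
model_epoch2 = load_model("UlrikKoren/PIIMask-EN", step=1159)
```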
## Compliance with Gemma Terms of Use
This model is a derivative of the "google/gemma-1.1-2b-it" model and complies with the Gemma Terms of Use:
- **Distribution:** Any distribution of this model or its derivatives must include the use restrictions specified in the Gemma Terms of Use and provide notice to subsequent users.
- **Notices:** The model is distributed with the following notice: “Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms”.
- **Modifications:** Any modified files carry prominent notices stating the modifications made.
- **Prohibited Uses:** The use of this model is subject to the restrictions outlined in the Gemma Prohibited Use Policy.
- **Trademarks:** This distribution does not grant any rights to use Google’s trademarks, trade names, or logos.
## License
The PIIMask-EN model is distributed under the same terms as the base model. For more details, please refer to the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
|
storm23/segformer-b0-finetuned-segments-sidewalk-2
|
storm23
| 2024-06-26T20:15:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:15:31Z |
Entry not found
|
XueyingJia/zerogen_mnli
|
XueyingJia
| 2024-06-26T20:17:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T20:17:04Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** XueyingJia
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
souleater-04/q-FrozenLake-v1-4x4-noSlippery
|
souleater-04
| 2024-06-26T20:19:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T20:19:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # `import gym` on older setups

# `load_from_hub` downloads and unpickles the model from the Hub
model = load_from_hub(repo_id="souleater-04/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JEFFERSONMUSIC/MJHISTORYERADE
|
JEFFERSONMUSIC
| 2024-06-26T20:23:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-26T20:22:38Z |
---
license: apache-2.0
---
|
souleater-04/q-learning-taxi
|
souleater-04
| 2024-06-26T20:24:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-26T20:24:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # `import gym` on older setups

# `load_from_hub` downloads and unpickles the model from the Hub
model = load_from_hub(repo_id="souleater-04/q-learning-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Muhammed164/checkpoints
|
Muhammed164
| 2024-06-26T20:25:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:25:00Z |
Entry not found
|
yj373/pokemon
|
yj373
| 2024-06-26T20:25:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:25:05Z |
Entry not found
|
habulaj/3484931733
|
habulaj
| 2024-06-26T20:25:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:25:24Z |
Entry not found
|
TitanRTX/Anti_Recaptcha-MBB
|
TitanRTX
| 2024-06-26T23:20:18Z | 0 | 0 | null |
[
"code",
"text-classification",
"en",
"dataset:TitanRTX/mbbank_recaptcha",
"license:mit",
"region:us"
] |
text-classification
| 2024-06-26T20:25:56Z |
---
license: mit
datasets:
- TitanRTX/mbbank_recaptcha
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- code
---
|
malicy256256/voices3
|
malicy256256
| 2024-06-26T20:51:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:26:42Z |
Entry not found
|
habulaj/1601423894
|
habulaj
| 2024-06-26T20:33:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:33:36Z |
Entry not found
|
ErickGonCruz/Testing
|
ErickGonCruz
| 2024-06-26T20:35:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:35:21Z |
Entry not found
|
Xelanizul/Soccer_player_01
|
Xelanizul
| 2024-06-26T20:35:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:35:30Z |
Entry not found
|
Zuncoe/emo
|
Zuncoe
| 2024-06-26T20:38:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:38:49Z |
Entry not found
|
Bruh110/Ken_AND_FriendsAI
|
Bruh110
| 2024-06-26T20:44:08Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-06-26T20:43:04Z |
---
license: openrail
---
|
habulaj/80299169
|
habulaj
| 2024-06-26T20:44:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:44:42Z |
Entry not found
|
alex2020xx/dildo
|
alex2020xx
| 2024-06-26T20:45:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:45:34Z |
Entry not found
|
habulaj/873033579
|
habulaj
| 2024-06-26T20:47:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:47:00Z |
Entry not found
|
eagle0504/sample_ysa_data_v1
|
eagle0504
| 2024-06-26T20:49:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:49:00Z |
Entry not found
|
habulaj/42903447752
|
habulaj
| 2024-06-26T20:56:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:56:03Z |
Entry not found
|
habulaj/5659243193
|
habulaj
| 2024-06-26T20:56:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:56:35Z |
Entry not found
|
samiasghar/text-summarization
|
samiasghar
| 2024-06-26T20:59:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:59:02Z |
Entry not found
|
habulaj/229213307897
|
habulaj
| 2024-06-26T20:59:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T20:59:20Z |
Entry not found
|
hamzaish/peft-model-vllm
|
hamzaish
| 2024-06-26T21:01:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T21:01:50Z |
Entry not found
|
wassemgtk/mergekit-passthrough-pbpdltu
|
wassemgtk
| 2024-06-26T21:02:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T21:02:25Z |
Entry not found
|
pranay-ar/gmflow
|
pranay-ar
| 2024-06-26T21:03:47Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-06-26T21:03:19Z |
---
license: mit
---
|
haytoox/testemunhaa
|
haytoox
| 2024-06-26T21:07:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-26T21:07:21Z |
Entry not found
|
chreh/active-passive-sft
|
chreh
| 2024-06-26T21:16:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T21:08:23Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** chreh
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
valerielucro/mistral_gsm8k_beta_0.4_epoch1
|
valerielucro
| 2024-06-26T21:09:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T21:08:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|