modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
manavisrani07/acmss4 | manavisrani07 | 2024-06-27T10:49:51Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2024-06-27T10:49:50Z | ---
license: mit
---
|
Likich/falcon-finetune-qualcoding-10 | Likich | 2024-06-27T10:52:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-27T10:51:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Masallah/ponyDiffusionV6XL_v6StartWithThisOne | Masallah | 2024-06-27T10:57:14Z | 0 | 0 | null | ["region:us"] | null | 2024-06-27T10:53:55Z | Entry not found |
mohdmahdi/acmsummerschool24 | mohdmahdi | 2024-06-27T10:54:18Z | 0 | 0 | null | ["region:us"] | null | 2024-06-27T10:54:18Z | Entry not found |
Likich/falcon-finetune-qualcoding-5 | Likich | 2024-06-27T10:55:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-27T10:55:36Z | ---
library_name: transformers
tags: []
---
(Auto-generated 🤗 Transformers model card, identical to the template shown in full above; every field is "[More Information Needed]".) |
russfischer/test_bug_temporary | russfischer | 2024-06-27T10:55:48Z | 0 | 0 | null | ["region:us"] | null | 2024-06-27T10:55:48Z | Entry not found |
leduyit/opt-6.7b-lora | leduyit | 2024-06-27T10:57:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-27T10:56:55Z | ---
library_name: transformers
tags: []
---
(Auto-generated 🤗 Transformers model card, identical to the template shown in full above; every field is "[More Information Needed]".) |
IslemTouati/my_awesome_food_model | IslemTouati | 2024-06-27T10:59:43Z | 0 | 0 | null | ["region:us"] | null | 2024-06-27T10:59:43Z | Entry not found |
Likich/mistral-finetune-qualcoding-50 | Likich | 2024-06-27T11:00:34Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-27T11:00:24Z | ---
library_name: transformers
tags: []
---
(Auto-generated 🤗 Transformers model card, identical to the template shown in full above; every field is "[More Information Needed]".) |
XxLOLxX/donald_duck | XxLOLxX | 2024-06-27T11:00:53Z | 0 | 0 | null | ["tensorboard", "region:us"] | null | 2024-06-27T11:00:29Z | Entry not found |
howarudo/paligemma-3b-pt-224-vqa-continue | howarudo | 2024-06-27T11:01:10Z | 0 | 0 | null | ["region:us"] | null | 2024-06-27T11:01:10Z | Entry not found |
habibkhan22cp/llama2-openassistant | habibkhan22cp | 2024-06-27T11:03:40Z | 0 | 0 | null | ["license:llama2", "region:us"] | null | 2024-06-27T11:03:40Z | ---
license: llama2
---
|
briaai/Image-Prompt-BETA | briaai | 2024-07-02T07:29:31Z | 0 | 0 | null | ["text-to-image", "legal liability", "commercial use", "ip-adapter", "arxiv:2308.06721", "license:other", "region:us"] | text-to-image | 2024-06-27T11:05:54Z | ---
license: other
license_name: bria-2.3
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
inference: false
tags:
- text-to-image
- legal liability
- commercial use
- ip-adapter
extra_gated_description: >-
BRIA 2.3 IP-Adapter requires access to BRIA 2.3
Text-to-Image model
extra_gated_heading: Fill in this form to get access
extra_gated_fields:
Name:
type: text
Company/Org name:
type: text
Org Type (Early/Growth Startup, Enterprise, Academy):
type: text
Role:
type: text
Country:
type: text
Email:
type: text
By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below:
type: checkbox
---
# BRIA 2.3 Image-Prompt Beta
BRIA 2.3 Image-Prompt-Beta enables the generation of high-quality images guided by an input image, alongside (or instead of) a textual prompt. This makes it possible to create images inspired by the content or style of an existing image, which is useful for producing image variations or for transferring an image's style or content. This module uses the [IP-Adapter](https://huggingface.co/papers/2308.06721) architecture and is trained on the foundation of [BRIA 2.3 Text-to-Image](https://huggingface.co/briaai/BRIA-2.3).
This adapter can be combined with other adapters trained on our foundation model, such as [ControlNet-Depth](briaai/BRIA-2.3-ControlNet-Depth) or [ControlNet-Canny](briaai/BRIA-2.3-ControlNet-Canny).
Like [BRIA 2.3](https://huggingface.co/briaai/BRIA-2.3), this adapter was trained from scratch exclusively on licensed data from our data partners. It is therefore safe for commercial use and provides full legal liability coverage for copyright and privacy infringement, as well as harmful-content mitigation. That is, our dataset does not contain copyrighted material such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
#### Image Variations:

#### Style Transfer (textual prompt: "Paris, high quality"):

### Model Description
- **Developed by:** BRIA AI
- **Model type:** [IP-Adapter](https://huggingface.co/docs/diffusers/using-diffusers/ip_adapter) for Latent diffusion
- **License:** [Commercial licensing terms & conditions.](https://bria.ai/customer-general-terms-and-conditions)
- **Model Description:** IP-Adapter for BRIA 2.3 Text-to-Image model. The model generates images guided by an image prompt.
- **Resources for more information:** [BRIA AI](https://bria.ai/)
Bria AI licenses the foundation model on which this model was trained, with full legal liability coverage. Our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
For more information, please visit our [website](https://bria.ai/).
### Get Access
Interested in BRIA 2.3? Purchase is required to license and access BRIA 2.3, ensuring royalty management with our data partners and full liability coverage for commercial use.
Are you a startup or a student? We encourage you to apply for our [Startup Program](https://pages.bria.ai/the-visual-generative-ai-platform-for-builders-startups-plan?_gl=1*cqrl81*_ga*MTIxMDI2NzI5OC4xNjk5NTQ3MDAz*_ga_WRN60H46X4*MTcwOTM5OTMzNC4yNzguMC4xNzA5Mzk5MzM0LjYwLjAuMA..) to request access. This program is designed to support emerging businesses and academic pursuits with our cutting-edge technology.
Contact us today to unlock the potential of BRIA 2.3! By submitting the form above, you agree to BRIA’s [Privacy policy](https://bria.ai/privacy-policy/) and [Terms & conditions](https://bria.ai/terms-and-conditions/).
### Code example using Diffusers
```
pip install diffusers
```
```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

# Load the BRIA 2.3 text-to-image foundation model; force_zeros_for_empty_prompt must stay False
pipeline = AutoPipelineForText2Image.from_pretrained("briaai/BRIA-2.3", torch_dtype=torch.float16, force_zeros_for_empty_prompt=False).to("cuda")
# Attach the image-prompt (IP-Adapter) weights so an image can be used as a prompt
pipeline.load_ip_adapter("briaai/Image-Prompt-BETA", subfolder='models', weight_name="ip_adapter_bria.bin")
```
## Create variations of the input image
```py
pipeline.set_ip_adapter_scale(1.0)
image = load_image("examples/example1.jpg")
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
prompt="high quality",
ip_adapter_image=image.resize((224, 224)),
num_inference_steps=50,
generator=generator,
height=1024, width=1024
).images
images[0]
```
## Use both image and textual prompt as inputs
```py
textual_prompt = "Paris, high quality"
pipeline.set_ip_adapter_scale(0.8)
image = load_image("examples/example2.jpg")
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
prompt=textual_prompt,
ip_adapter_image=image.resize((224, 224)),
num_inference_steps=50,
generator=generator,
height=1024, width=1024,
guidance_scale=7
).images
images[0]
```
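As noted in the model description above, this image-prompt adapter can be combined with other adapters trained on the same foundation model, such as ControlNet-Depth. The snippet below is only a sketch of that combination: it assumes the `briaai/BRIA-2.3-ControlNet-Depth` checkpoint loads as a diffusers `ControlNetModel`, that BRIA 2.3 follows the SDXL pipeline layout, and that a pre-computed depth map is available at the hypothetical path `examples/depth.png`.
```py
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load a BRIA ControlNet (depth) together with the BRIA 2.3 foundation model
controlnet = ControlNetModel.from_pretrained("briaai/BRIA-2.3-ControlNet-Depth", torch_dtype=torch.float16)
pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
    "briaai/BRIA-2.3",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    force_zeros_for_empty_prompt=False,
).to("cuda")

# Attach the image-prompt adapter on top of the ControlNet pipeline
pipeline.load_ip_adapter("briaai/Image-Prompt-BETA", subfolder="models", weight_name="ip_adapter_bria.bin")
pipeline.set_ip_adapter_scale(0.8)

style_image = load_image("examples/example2.jpg")   # image prompt (style/content reference)
depth_map = load_image("examples/depth.png")        # hypothetical pre-computed depth map

images = pipeline(
    prompt="high quality",
    image=depth_map,                                 # ControlNet conditioning input
    ip_adapter_image=style_image.resize((224, 224)),
    num_inference_steps=50,
    height=1024, width=1024,
).images
images[0]
```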
### Some tips for using our text-to-image model at inference:
1. You must set `pipe.force_zeros_for_empty_prompt = False`.
2. For image variations, you can try an empty prompt. You can also add a negative prompt.
3. We support multiple aspect ratios, but the overall resolution should consist of approximately `1024*1024=1M` pixels, for example:
`(1024,1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960,1088)`
4. Change the scale of the IP-Adapter with the `set_ip_adapter_scale()` method (range 0-1). The higher the scale, the closer the output will be to the input image.
5. Resize the input image to a square; otherwise the CLIP image embedder will perform a center crop (see the short sketch after this list).
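As a small illustration of tips 3-5, the sketch below is our own addition rather than part of the BRIA release: it letterboxes an arbitrary photo to a square before it reaches the CLIP image encoder, sets the adapter scale, and generates at one of the supported ~1M-pixel resolutions. It reuses the `pipeline` object loaded in the snippets above.
```py
from PIL import Image, ImageOps
from diffusers.utils import load_image

def to_square(img: Image.Image, size: int = 224) -> Image.Image:
    """Pad the shorter side so nothing is cropped by CLIP preprocessing, then resize."""
    side = max(img.size)
    padded = ImageOps.pad(img, (side, side), color=(0, 0, 0))  # letterbox instead of center-crop
    return padded.resize((size, size))

reference = to_square(load_image("examples/example1.jpg"))

pipeline.set_ip_adapter_scale(0.9)  # closer to 1.0 -> output follows the reference image more closely
images = pipeline(
    prompt="high quality",
    ip_adapter_image=reference,
    num_inference_steps=50,
    height=1280, width=768,  # roughly 1M pixels, one of the supported non-square resolutions
).images
images[0]
```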
|
codewizardUV/llama-5-epochs | codewizardUV | 2024-06-27T11:06:48Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-27T11:06:47Z | ---
library_name: transformers
tags: []
---
(Auto-generated 🤗 Transformers model card, identical to the template shown in full above; every field is "[More Information Needed]".) |
nattawatWe/sd-class-butterflies-64 | nattawatWe | 2024-06-27T11:08:21Z | 0 | 0 | null | ["region:us"] | null | 2024-06-27T11:08:21Z | Entry not found |
IlyaGusev/saiga_llama3_8b_sft_m11_d7_lora | IlyaGusev | 2024-06-27T11:12:22Z | 0 | 0 | null | ["safetensors", "region:us"] | null | 2024-06-27T11:08:23Z | Entry not found |
Mirgan/fine-tune-blip-test-4bits | Mirgan | 2024-06-27T11:08:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-27T11:08:36Z | ---
library_name: transformers
tags: []
---
(Auto-generated 🤗 Transformers model card, identical to the template shown in full above; every field is "[More Information Needed]".) |
mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1 | mustozsarac | 2024-06-27T11:09:15Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:62964", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/multi-qa-mpnet-base-dot-v1", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-06-27T11:08:59Z | ---
base_model: sentence-transformers/multi-qa-mpnet-base-dot-v1
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:62964
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Google, Fransa Rekabet Kurumu'nun verdiği cezayı temyize götürecek
sentences:
- Google France'ın Yönetim Kurulu Başkanı Sebastien Missoffe, Rekabet Kurumunun
ceza kararının bazı unsurlarına katılmadıklarını ve ”eser sahibinin haklarına
komşu hakları uygulamak için gösterdikleri çabaya karşın ceza miktarını fazla
bulduklarını” belirtti. Missoffe, Fransa'da imzalanan anlaşmalara sadık kaldıklarını
ve eser sahibinin haklarına komşu hakları tanıdıklarını ifade etti. Rekabet Kurumunun
geçen ay kendisinden istediği ”yayınevleri ve basın ajanslarına telif hakkına
tabi içeriklerinin kullanımı için ücret teklifi sunması” ihtarına uyduklarını
aktaran Missoffe, ”Teklifimizi 1200'den fazla gazete yayımcısına götürdük, anlaşmalarımızın
belli yönlerini değiştirdik” dedi. Google'ın başvurusu Paris Temyiz Mahkemesinde
incelenecek.
- 'Anadolu Efes, ligde 15 kezle en fazla şampiyonluk yaşayan takım unvanına sahip.
Adını 2011-2012 sezonunda Anadolu Efes olarak değiştiren Efes Pilsen, lig tarihinde
ilk şampiyonluğunu 1978-1979 sezonunda kazandı. Lacivert-beyazlılar, ligde 2001-2002,
2002-2003, 2003-2004 ve 2004-2005 sezonlarında üst üste 4 şampiyonluk yaşadı.
Anadolu Efes, Süper Lig''de son olarak 2020-2021 sezonunu şampiyon tamamladı.
Fenerbahçe''nin 10 şampiyonluğu var Fenerbahçe''nin Basketbol Süper Ligi''nde
10 şampiyonluğu bulunuyor. İlk şampiyonluğunu 1990-1991 sezonunda yaşayan sarı-lacivertli
takım, daha sonra 16 yıl şampiyon olamadı. 2006-2007 sezonunda Ülker ile sponsorluk
anlaşması imzalayan ve Fenerbahçe Ülker olarak ligde mücadele eden sarı-lacivertliler,
16 yıl aradan sonra şampiyon olarak, kulübün 100. yılında hasretine son verdi.
Fenerbahçe Ülker, 2007-2008 sezonunda da şampiyonluğa ulaşıp tarihinde ilk kez
üst üste iki kez zirveye çıktı. Efes Pilsen''e 2009-2010 sezonunda play-off final
serisinde 4-2 üstünlük kurarak şampiyon olan sarı-lacivertliler, sonraki iki şampiyonluğunu
Galatasaray''a karşı elde etti. Fenerbahçe Ülker, 2010-2011 sezonunda play-off
finalinde rakibini 4-2 ile geçerek kupayı kaldırdı. Sarı-lacivertliler, 2013-2014
sezonunda da 3-3 eşitliğin olduğu play-off final serisinde Ülker Spor ve Etkinlik
Salonu''ndaki son maça sarı-kırmızılı takım çıkmayınca, 6. kez şampiyon olarak
rakibini şampiyonluk sayısında geride bırakmayı başardı. Fenerbahçe, 2015-2016
sezonunda Anadolu Efes''i geçerek 7, 2016-2017 sezonunda da Beşiktaş Sompo Japan''a
play-off finallerinde üstünlük sağlayarak 8. kez şampiyonluğunu elde etti. Sarı-lacivertli
takım, 2017-2018 sezonunda play-off finallerinde TOFAŞ''ı geçerek 9. şampiyonluğuna
ulaştı. Fenerbahçe, geçen sezon da play-off finallerinde Anadolu Efes''e üstünlük
sağlayarak 10. şampiyonluğunu kazandı. Galatasaray 5 kez şampiyon oldu Galatasaray,
Basketbol Süper Ligi''nde 5 kez şampiyonluk kupasını müzesine götürdü. Sarı-kırmızılı
takım, ilk şampiyonluk kupasını 1968-1969 sezonunda kazandı. Ligde play-off sistemine
geçildikten sonra 1984-1985 sezonunda finalde Fenerbahçe''ye ve 1985-1986 sezonunda
da Efes Pilsen''e 2-1''lik üstünlük kurarak üst üste iki kez şampiyonluk kupasını
müzesine götüren sarı-kırmızılılar, 1989-1990 sezonunda da play-off finalinde
Paşabahçe''yi 3-1 ile geçerek şampiyonluk yaşadı. Ligde 2012-2013 sezonunda play-off
finalinde Banvit''e 4-1 üstünlük kuran sarı-kırmızılılar, 23 yıllık şampiyonluk
hasretine son verdi ve 5. şampiyonluğunu kazandı. Galatasaray, 2013-2014 sezonunda
play-off final serisinde yönetim kurulunun kararıyla son maça çıkmadı. Türkiye
Basketbol Federasyonu Yönetim Kurulu, play-off final serisinin 7. maçını 20-0
Fenerbahçe Ülker lehine tescil ederek, sarı-lacivertli takımı şampiyon ilan etti.
Beşiktaş''ın 2 şampiyonluğu var Beşiktaş, ligde iki kez şampiyonluğa ulaştı. Siyah-beyazlılar,
1974-1975 sezonunda 57 puanı bulunan Galatasaray''ın önünde 60 puanla ligi ilk
sırada tamamlayarak, ilk şampiyonluğunu elde etti. Beşiktaş, 2011-2012 sezonunda
ise play-off final serisinde Anadolu Efes''e 4-2 üstünlük kurup, 37 yıl sonra
mutlu sona ulaşarak ligde ikinci kez şampiyon oldu. Eczacıbaşı da 8 kez şampiyonluk
sevinci yaşadı Basketbol şubesini yıllar önce kapatan Eczacıbaşı, ligde 8 şampiyonluk
kazandı. Ligde İTÜ 5, Ülkerspor da 4 kez kupayı müzesine götürdü. İlk şampiyon
Altınordu 1966-1967 sezonuyla başlayan Deplasmanlı Erkekler Basketbol Birinci
Ligi''nde ilk şampiyonluğu Altınordu kazandı. Ligde 1983-1984 sezonunda play-off
sistemine geçildi ve bu tarihten itibaren lig şampiyonu, play-off maçlarının ardından
belirlendi. Pınar Karşıyaka ve TOFAŞ''ın ikişer, Muhafızgücü''nün de ligde bir
şampiyonluğu bulunuyor. 2019-2020 sezonunu tamamlanamadı Basketbol Süper Lig''inde
2019-2020 sezonu yeni tip koronavirüs salgını nedeniyle tamamlanamadı. Salgın
nedeniyle lige 23. hafta maçlarının ardından 19 Mart''ta ara verilirken, Türkiye
Basketbol Federasyonu Yönetim Kurulu 11 Mayıs''ta sezonu şampiyon ilan edilmeden
ve küme düşme olmadan sonlandırma kararı aldı. Şampiyonlar Basketbol Süper Ligi''nde
şampiyon olan takımlar şöyle:'
- Markaların ve perakendecilerin sınır ötesi bir e-ticaret şirketi kurmalarına yardımcı
olan Flow, New Enterprise Associates liderliği üstlendiği B Serisi yatırım turunda
37 milyon dolar yatırım aldığını açıkladı. Yatırım turuna katılan diğer isimler
American Express Ventures, Latitude Ventures, Liza Landsman oldu. 37 milyon dolarlık
yeni yatırımla birlikte Flow'un aldığı toplam yatırım miktarı 55 milyon dolara
yükseldi. Şirket, 37 milyon doları Flow'ın satış ve pazarlama ekibini genişletmek
ve ürünü geliştirmek için kullanacağını açıkladı. Flow CEO'su Rob Keve, sosyal
medya ve dijital pazarlamanın büyüsü sayesinde, tüketiciye yönelik birçok markanın
globaldeki tüketicilere ulaştığını söyledi. Bununla birlikte, bu tüketiciler için
gerçek alışveriş deneyiminde nakliye genellikle yavaş ya da pahalı olarak karşımıza
çıkıyor. Ayrıca site yerel ödeme hizmetleri ile bütünleşmekte başarısız olabiliyor.
Flow ise hem e-ticaret sitesi hem de tüketici için bu sorunları ortadan kaldırmayı
hedefliyor. Flow, mevcut e-ticaret platformlarının en üstünde yer alıyor. Böylece
alışveriş deneyimi yerel fiyatlandırma ve ödeme seçenekleriyle otomatik olarak
konumlarına uyarlanıyor. Ayrıca, Flow'un taşıyıcılarla olan ilişkileri sayesinde,
uluslararası nakliye zamanında ve uygun fiyatlı hale getiriliyor. Bir işletme,
halihazırda uluslararası denizcilik fırsatları ve dağıtım merkezlerine sahip olsa
bile lojistik yönetimi için Flow'u kullanabiliyor. 2015 yılında kurulan şirketin
müşterileri arasında MZ Wallace ve Charles & Colvard gibi çok kanallı işletmelerin
yanı sıra MVMT Watches gibi online markalar da bulunuyor. Flow, müşterilerinin
yıldan yıla yüzde 200 oranında arttığının altını çiziyor.
- source_sentence: Arama
sentences:
- Zorunlu Çerezler Bu çerez, insanlarla botları ayırt etmek için kullanılır. Bu,
web sitelerinin kullanımı hakkında geçerli raporlar hazırlamak için kullanılmakta
olup web sitesi için faydalıdır. İşlevsel Çerezler Kullanıcının web sitesinin
seçtiği dil sürümünü hatırlar. Performans/Analitik Çerezler Ziyaretçinin web sitesini
nasıl kullandığına ilişkin istatistiksel veriler oluşturmak için kullanılan benzersiz
bir kimliği kaydeder. Google Analytics tarafından talep oranını kısmak için kullanılır.
Kabul et Reddet Reklam/Pazarlama Çerezleri Bu çerez, Alexa Analytics'e gönderilen
tüketici davranışları hakkında bilgi toplamak için kullanılır. (Alexa Analytics
bir Amazon şirketidir.)
- 'Taipei Intel Developer Forum‘da tanıtılan UrbanMax isimli ürün, Intel’in dizüstü
ve netbook arasında gelip giden bir tasarım şeklini gösteriyor. UrbanMax, 11,1
inç (28 cm) köşegene sahip dokunmatik ekranıyla birlikte Windows Vista işletim
sistemi çalıştıran hafif bir dizüstü bilgisayar. Aslında ilk bakışta bir tablet;
fakat alt tarafından çıkan klavye, düz bir yüzeye yerleştirildiğinde açılarak
bir dizüstü bilgisayara dönüşüyor. Tabii, etkin olarak kullanabiliyorsanız, her
zaman bir ekran klavyeniz var. {pagebreak::İçinde Neler Var?} İçinde Ne Var? UrbanMax
isimli prototip, içinde MacBook Air’larda kullanılan ufaltılmış bir Core 2 Duo
işlemci barındırıyor. 1366×768 piksellik ekran yeterince keskin bir görüntü üretecek
kadar piksele sahip durumda. Ayrıca bu küçük makinede HD video oynatabilmek de
mümkün olacak deniliyor. Enerji tasarrufu açısından bir de SSD maliyetini bu ürüne
eklememiz gerekiyor. Bunların haricinde içinde Intel’in en son geliştirdiği ve
her türlü kablosuz bağlantı teknolojisini de destekleyen bir kart olduğu düşüncesi
çok yanlış değil. :: UrbanMax, dev bir iPhone’a benziyor mu? Bilgi için: Intel
Yazan: Berkin Bozdoğan'
- By Euronews Fransa'da aşırı sağcı lider Marine Le Pen Paris'i ziyaret eden Mısır
Devlet Başkanı Abdulfettah El-Sisi'nin Fransa'nın sağlam bir müttefiği olmalı
dedi. REKLAM Fransa'da aşırı sağcı lider Marine Le Pen Paris'i ziyaret eden Mısır
Devlet Başkanı Abdulfettah El-Sisi'nin Fransa'nın sağlam bir müttefiği olmalı
dedi. Sosyal medya hesabından paylaşımda bulunan Le Pen ”Bölgede istikrarın çıpası
olan ve ülkesinde Müslüman Kardeşleri bastıran Mısır Devlet Başkanı El-Sisi Fransa'nın
güçlü bir müttefiği olmak zorunda özellikle Türkiye'nin Libya konusundaki provokasyonları
ve terörizmle mücadele konusunda,” ifadelerini kullandı. Bu açıklamada ayrılıkçılık
ve radikal İslam'la mücadele için hazırlanan tasarının bakanlar konseyine sunulmasından
birkaç gün önce geldi. Le Pen tasarının doğru yönde atılmış adımlar içerdiğini
fakat kanunun herkesi hedef almak yerine İslamcılığa daha fazla odaklanması gerektiğini
belirtmişti. Paris ziyaretinde Fransa Cumhurbaşkanı Emmanuel Macron'la bir araya
gelecek olan El-Sisi, Libya ve terörizmle mücadele dahil bir çok bölgesel konuyu
ele alacak. Fransız aşırı sağ parti Ulusal Birlik lideri Marine Le Pen sosyal
medya hesabından bir çağrı yaparak, ülkedeki Ülkü Ocaklarının yanı sıra Milli
Görüş vakıflarının da kapatılması gerektiğini söylemişti.
- source_sentence: Manş Denizi'nde bir teknenin batması sonucu en az 31 düzensiz göçmen
öldü
sentences:
- BİST hisse verileri 15 dakika gecikmelidir. BİST isim ve logosu ”Koruma Marka
Belgesi” altında korunmakta olup izinsiz kullanılamaz, iktibas edilemez, değiştirilemez.
BİST ismi altında açıklanan tüm bilgilerin telif hakları tamamen BİST'e ait olup,
tekrar yayınlanamaz. Veriler tarafından sağlanmaktadır. www.sozcu.com.tr internet
sitesinde yayınlanan yazı, haber ve fotoğrafların her türlü telif hakkı Mega Ajans
ve Rek. Tic. A.Ş'ye aittir. İzin alınmadan, kaynak gösterilerek dahi iktibas edilemez.
Copyright © 2023 - Tüm hakları saklıdır. Mega Ajans ve Rek. Tic. A.Ş.
- By euronews Fransa'dan Manş Denizi üzerinden İngiltere'ye geçmeye çalışan ve düzensiz
göçmenleri taşıyan teknenin batması sonucu en az 31 göçmen hayatını kaybetti.
REKLAM Fransa'dan Manş Denizi üzerinden İngiltere'ye geçmeye çalışan ve düzensiz
göçmenleri taşıyan teknenin batması sonucu en az 31 göçmen hayatını kaybetti.
Fransa İçişleri Bakanlığı'ndan yapılan açıklamada, düzensiz göçmenlerin Fransa'nın
Calais kentinden Manş Denizi üzerinden tekneyle İngiltere'ye ulaşmaya çalıştığı
belirtildi. Cumhurbaşkanı Emmanuel Macron, göçmenlerin ölüm haberi üzerine yaptığı
açıklamada, ilgili bakanların acil olarak toplanmasını istedi ve ”Fransa, Manş
Denizi'nin mezarlığa dönüşmesine izin vermeyecek.” dedi. Manş Denizi'nde yaşanan
insani dramın sorumlularının derhal bulunacağı sözünü veren Macron, AB Sınır Koruma
Ajansı'nın (Frontex) Manş Denizi'nde sınır güvenliğinin korunması konusunda imkanlarının
artırılmasını istedi. Başbakan Jean Castex ise ilgili 8 bakanın katılımıyla yarın
konuyla ilgili acil bir toplantı düzenleneceğini duyurdu. İçişleri Bakanı Gerald
Darmanin, göçmenlerin yasa dışı bir şekilde denize açılmalarını sağladıklarından
şüphelenilen 4 kişinin gözaltına alındığı duyurdu. İngiltere Başbakanı Johnson
acil toplantı düzenledi İngiltere Başbakanı Boris Johnson ise ilgili bakanlarıyla
bu akşam acil bir toplantı düzenleyerek, Manş Denizi'nde yaşanan trajediyi görüştü.
Johnson daha sonra basına yaptığı açıklamada üstü kapalı Fransa'yı suçlayarak,
”Bazı ortaklarımızı, özellikle de Fransızları son gelişmelerle ilgili duruma ayak
uydurmaya ikna etmekte zorlandık, ancak bu konuda tüm ülkelerin karşı karşıya
olduğu zorlukları anlıyorum.” dedi. Bir balıkçının Manş Denizi'nde cesetler görmesi
üzerine yetkililere haber verdiği ifade edilen açıklamada, düzensiz göçmenleri
taşıyan teknenin battığı, yapılan aramanın ardından ilk belirlemelere göre 5 düzensiz
göçmenin de bilincini kaybettiği kaydedildi. İçişleri Bakanı Gerald Darmanin,
Twitter hesabından yaptığı açıklamada, yaşanan bu dram nedeniyle üzüntü duyduğunu
belirtti. Düzensiz göçmenlerin tekneyle İngiltere'ye geçişini sağlamaya çalışanların
suçlu olduğunu ifade eden Darmanin, Calais kentine gideceği bilgisini paylaştı.
Calais'de bulunan ve kötü şartlar içinde yaşam mücadelesi veren çok sayıda düzensiz
göçmen İngiltere'ye gitmeye çalışıyor. İngiltere'ye bu ay içinde yaklaşık 2 bin
göçmenin geleceği tahminine karşın sadece ilk 11 günde 3 bin 780 kişi Fransa üzerinden
ülkeye giriş yaptı. Fransa'nın kuzeyindeki Grand-Synthe kentinde yol kenarlarında
barınan yaklaşık 1500 düzensiz göçmen, 16 Kasım'da polisin düzenlendiği operasyonla
barınma merkezlerine taşınmıştı.
- Zorunlu Çerezler Bu çerez, insanlarla botları ayırt etmek için kullanılır. Bu,
web sitelerinin kullanımı hakkında geçerli raporlar hazırlamak için kullanılmakta
olup web sitesi için faydalıdır. İşlevsel Çerezler Kullanıcının web sitesinin
seçtiği dil sürümünü hatırlar. Performans/Analitik Çerezler Ziyaretçinin web sitesini
nasıl kullandığına ilişkin istatistiksel veriler oluşturmak için kullanılan benzersiz
bir kimliği kaydeder. Google Analytics tarafından talep oranını kısmak için kullanılır.
Kabul et Reddet Reklam/Pazarlama Çerezleri Bu çerez, Alexa Analytics'e gönderilen
tüketici davranışları hakkında bilgi toplamak için kullanılır. (Alexa Analytics
bir Amazon şirketidir.)
- source_sentence: Gafele facute de catre autoritatile comuniste
sentences:
- Totul s-a întâmplat sâmbătă după-amiază, când mama celor doi a sunat la 112 și
a spus că fiica ei de 5 ani a fost violată de băiatul de 13. Când au ajuns polițiștii
la locuința familiei din localitatea Ivănești, băiatul de 13 ani deja fugise de
acasă. Potrivit unor surse judiciare, fata a fost dusă la Institutul de Medicină
Legală pentru consult, iar politiștii l-au căutat pe fratele ei. După câteva ore,
băiatul a fost găsit ascuns într-o casă părăsită din localitate. El a fost dus
la un centru al DGASPC Vaslui, unde a fost audiat de către polițiști în prezența
unui psiholog. Sora lui este acum acasă, în grija familiei. DGASCPC Vaslui a început
propria anchetă în acest caz, urmând ca în cursul zilei de luni să aibă loc mai
multe discuții cu reprezentanții familiei.
- 'Costinești, cunoscută ca stațiunea tineretului de pe litoralul românesc, este
o destinație de vacanță vibrantă și plină de viață, ideală pentru cei care doresc
să se bucure de soare, mare și distracție. Situată la aproximativ 30 de kilometri
sud de Constanța, locul atrage anual mii de turiști dornici de petreceri, activități
recreative și relaxare pe plajă. Ce prețuri la cazare sunt în iulie și august
2024? Câți bani trebuie să scoți din buzunar pentru o vacanță la Costinești Pe
un forum dedicat vacanțelor la Costinești, o româncă a dorit să știe dacă va găsi
cazare pentru perioada 20….23 August, 11 adulți și 10 copii. Iată ce detalii a
primit: „Mai avem disponibilitate pentru urmatoarele perioade * 15-21 iulie, 6
nopti, 1000 lei/noapte * 17-20 august, 3 nopti, 1000 lei/noapte * 26-31 august,
5 nopti, 800lei/noapte * 1-6 septembrie, 5 nopti, 800 lei/noapte”. De banii ăștia
ai aer conditionat în toate camerele, TV, WIFI, terasă amenajată, foisor, grătar,
bucătărie complet utilată (frigider, cuptor electric, cuptor cu microunde, plită,
cafetieră, prăjitor de pâine), parcare. Sunt acceptate vouchere de vacanță. Plus
că te afli la doar 10 minute de plajă! Altcineva are disponibile camere duble
matrimoniale și triple pentru 19….3 1august (160 RON cameră dubla matrimonială/
200 triplă). Atmosfera stațiunii Costinești Stațiunea Costinești este renumită
pentru atmosfera sa prietenoasă. Plajele sale sunt unele dintre cele mai populare
de pe litoralul românesc. Cu nisip fin și ape clare, acestea oferă condiții excelente
pentru plajă și înot. Pentru cei care preferă un loc mai liniștit, există și plaje
retrase în apropiere, unde se pot bucura de soare într-un cadru intim. Printre
atracțiile principale ale stațiunii se numără Epava Evanghelia, o navă grecească
eșuată pe țărm în anii ’60, care a devenit un simbol al Costineștiului. Ambarcațiunea
este un loc popular pentru fotografii și explorări. Activități recreative În Costinești
ai la dispoziție o gamă largă de activități recreative pentru toate gusturile.
Sporturile nautice, cum ar fi windsurfing, kitesurfing și jetskiing, sunt foarte
populare. De asemenea, poți închiria biciclete sau scutere pentru a explora stațiunea
și împrejurimile. Pentru turiștii cu buget limitat, camping-urile sunt o alegere
bună, oferind o experiență autentică de vacanță pe litoral. În ceea ce privește
gastronomia, stațiunea este plină de restaurante și terase care servesc preparate
tradiționale românești, fructe de mare proaspete și mâncăruri internaționale.
Costinești are toate ingredientele necesare pentru o vacanță de neuitat. Indiferent
dacă ești în căutare de petreceri până în zori sau de relaxare pe plajă, această
stațiune promite o experiență memorabilă pentru toți cei care o vizitează.'
- 'Ziua de 16 decembrie, inainte de ora 17:00. De dimineata, Securitatea Judetului
Timis isi desfasoara activitatea normal. ALEX MIHAI STOENESCU ADUNAREA. Coloane
de timisoreni au inceput sa manifesteze impotriva regimului Ceausescu Asa cum
am aratat, in jurul orei 8:30, manifestantul Simion Cherlea vine sa discute cu
Radu Tinu, apoi maiorul va lucra impreuna cu locotenent-colonelul Kope R. la intocmirea
raportului referitor la agentul maghiar Varga Ladislau, cel care luase contact
cu Ambasada Ungariei. Filajul prezinta si el raportul - liniste. ”In dimineata
de 16 decembrie 1989, la ora 8:00 - declara in proces Ion Florea, fost secretar
al Comitetului Judetean Timis al PCR - , am fost chemat de Radu Balan, o data
cu mine venind si ceilalti secretari: Bolog Vasile, Avram Teodorea, Boiboreanu
Viorica si Lazureanu Aurel. Balan Radu ne-a informat cu privire la cazul Tokes
si anume ca in Piata Maria s-au adunat trei-patru sute de persoane care-si exprimau
opozitia fata de masura de evacuare a pastorului ce urma a fi luata.” ”Primul
secretar ne-a informat ca pentru dezorganizarea acelei manifestatii urmeaza a
fi infiltrati alti patru-cinci sute de oameni cu diferite responsabilitati pe
linia muncii de partid sau de sindicat. Ne-a mai precizat ca deja printre acesti
demonstranti se gasesc lucratori din aparatul Inspectoratului Judetean al Ministerului
de Interne.” Avand in vedere ca la ora 8:00 nu era nimeni in fata casei parohiale,
decizia lui Balan pare de neinteles. Ea capata un inteles - determinat si de natura
masurii care urma a fi luata - doar daca in noaptea de 15 spre 16 decembrie Emil
Bobu l-a informat de Nicolae Ceausescu asupra convorbirii sale cu Balan, iar Ceausescu
l-a sunat tot in aceeasi noapte pe Balan la Timisoara si i-a cerut sa ia masura
neinspirata, dar tipica lui Ceausescu, de a aduce muncitori care sa-i ia la bataie
pe manifestantii din fata casei lui Tokes. Si Balan, si Bobu au ascuns in proces
aceasta convorbire de noapte cu Ceausescu, pentru ca ordinul lui Ceausescu se
afla la originea evenimentelor, iar ei l-au executat, tot ca niste oameni lipsiti
de judecata. Sa ne uitam la text si sa vedem ce contine. Isi poate cineva imagina
”infiltrarea” a 400-500 de cadre ale partidului printre 200-300 de manifestanti?
Este hilar. Ce fel de ”infiltrare” poate fi aceea a unui grup masiv de oameni
care depasesc ca numar numarul manifestantilor? Nu, este clar ca au fost trimisi
la bataie, iar aceasta masura este tipica unui singur om - Nicolae Ceausescu.
Sa ne amintim ca in timpul teleconferintei din 21 decembrie 1989, Ceausescu va
invoca trecutul sau de membru al grupelor de soc organizate de sovietici pe strazile
Bucurestilor in anii 1944-1947, pe care populatia i-a identificat sub numele de
”mardeiasi”: ”Reamintesc numai ca in Bucuresti functionau inainte asemenea grupe
si nu indraznea nimeni, nici un huligan nu indraznea sa ridice capul pe bulevardele
Capitalei. Este adevarat, am fost atunci criticati ca sunt prea aspre - este demult
- , dar le-am spus ca trebuie sa fie si mai aspre cu cei care incalca legile”.
Toti biografii, cu exceptia celor platiti de el, confirma prezenta lui Nicolae
Ceausescu in grupurile de batausi de pe strazile Bucurestilor, dar si la conducerea
unor astfel de echipe in perioada colectivizarii fortate. Dupa ora 9:00 incep
sa se adune mai intai cei patru enoriasi de serviciu, apoi aproximativ zece curiosi.
La un moment dat, strada se goleste subit. Nimeni nu intelege din ce cauza, si
ofiterii de Securitate coboara pentru a investiga. Pe Bulevardul 6 Martie trecuse
o masina de vidanja care avea scapari tehnologice si lasase pe caldaram o dara
de materii fecale urat mirositoare. Persoanele din fata casei lui Tokes se imprastiasera
pentru a scapa de mirosul insuportabil. Se indeplinea una din constatarile celebre
ale lui Petre Tutea: ”Toate revolutiile se umplu pana la urma de cacat si sange”.
Informat asupra acelui incident, Radu Tinu cere aprobarea ca vidanja sa fie oprita
si pusa sa mai treaca o data. Colonelul Sima, seful Securitatii Timis, considera
propunerea neserioasa si nu o aproba. Maiorul Tinu propune atunci ca in intersectia
strazilor Treboniu Laurian si Timotei Cipariu sa fie amplasat un militean care
sa dirijeze circulatia camioanelor grele pe Timotei Cipariu astfel incat sa nu
se poata aduna o multime care sa ocupe strada. Colonelul Sima respinge si aceasta
propunere pe motiv ca, dimpotriva, camioanele ar putea fi blocate acolo si va
fi foarte greu sa le deblocheze apoi. Pentru acest moment al zilei, dar si pentru
intervalul foarte mare dintre orele 10:00 si 17:30, Comisia senatoriala are o
versiune din pacate prea schematica. Se afirma ca la ora 10:00 Nicolae Ceausescu
a luat legatura cu primul secretar Radu Balan si a dispus ”masuri concrete printre
care si evacuarea imediata si neconditionata a pastorului la noul loc de munca.
Dictatorul devenise nervos”. Nu avem stenograma convorbirii telefonice, dar din
ceea ce se intamplase in 15 decembrie si din reconstituirea facuta din relatarile
revolutionarilor si ale ofiterilor Securitatii, un interes al lui Ceausescu pentru
situatia de la Timisoara este perfect plauzibil. Bobu fusese sunat noaptea de
Balan si apoi acesta l-a informat pe seful statului acasa. Ceausescu a sunat si
probabil ca Balan l-a informat asupra faptului ca evacuarea ar putea fi impiedicata
de prezenta unui grup de oameni in fata casei lui Tokes. Iarasi in tonul cunoscut,
Ceausescu a ordonat efectuarea evacuarii imediat, adica in ziua de 16 decembrie,
cum era prevazut in hotararea judecatoreasca, fara sa tina cont ca legea obliga
la executarea sentintei in prima zi lucratoare. Este de asemenea posibil ca Balan
sa-l fi informat ca duminica Laszlo Tokes avea slujba si ar fi putut profita de
ocazie pentru a incita lumea la nesupunere civica. Nu ne trebuie prea multe investigatii
ca sa ne imaginam ca Ceausescu a cerut sa se ia masuri ”pe linie de partid”, de
”influentare obsteasca”, adica, altfel spus, muncitorii, oamenii muncii din Timisoara
sa ia atitudine si sa intervina pentru executarea sentintei judecatoresti, dar
mai ales pentru a-i lua la bataie pe cei stransi in fata casei lui Tokes. Acesta
era patentul gandirii lui Ceausescu, primitiv, ramas la anii 1945-1948, cand facuse
parte din acel grup de soc instruit de sovietici pentru incaierarile cu ”fascistii”
din centrul Bucurestilor. REPRESIUNEA. Fortele de ordine au imprastiat multimea
cu jeturi de apa Subliniem insa ca ordinul dat de Balan subalternilor sai a fost
la ora 8:00, in timp ce discutia invocata de Comisia senatoriala a avut loc la
ora 10:00, ceea ce demonstreaza existenta unei alte convorbiri, de noapte. Oricum,
dupa convorbirea de dimineata cu Nicolae Ceausescu, primul secretar Radu Balan
hotareste sa nu execute ordinul secretarului general: ”Sambata, 16 decembrie 1989,
la ora 10:00, m-a sunat Ceausescu Nicolae, interesandu-se de situatia privitoare
la pastorul amintit. I-am expus-o asa cum era in realitate, sustinand ca nu se
poate trece la evacuare, deoarece hotararea judecatoreasca nu era inca executabila.
El mi-a ordonat sa trec de indata la evacuare, lucru pe care insa nu l-am facut,
tocmai in ideea de a nu da nastere la conflicte”. Ceausescu ii cerea sa execute
un ordin ilegal. Este primul ordin ilegal dat de Ceausescu in acea perioada. Urmatoarele
ordine ilegale date de el vor fi mult mai sangeroase. Comisia senatoriala arata
ca ”in aceeasi zi, din ordinul ministrului de Interne, in toate unitatile subordonate
acestui minister, a fost introdusa situatia numarul 2 prevazuta de Ordinul 2030
din 15.05.1972”. Nu se precizeaza ora la care s-a dat ordinul. Ea este importanta,
pentru ca lipsa acestui amanunt din concluziile Comisiei senatoriale permite confuzia,
inducand ideea ca in dupa-amiaza de 16 decembrie situatia era deosebit de grava.
Ora declararii Situatiei nr. 2 o aflam de la generalul Grigorie Ghita, in timpul
audierii sale din 1994: ”In 16 decembrie, ora 20:00, m-a sunat Vlad la domiciliu
sa ma prezint la Comandament, la Baneasa, in spatele Institutului de Meteorologie
si Hidrologie (unde este acum Comandamentul Trupelor de Jandarmi). Am fost informat
de situatia de la Timisoara. Am luat legatura cu gen. Bunoaica, comandantul brigazii
de la Timisoara, i-am dat misiunea sa nu permita intrarea sau iesirea din biserica
reformata din Timisoara. La ora 22:00 s-a ordonat la MI aplicarea Ordinului 230/1973,
deci Situatia nr. 2, stare de alerta a efectivelor. Am transmis ordinul in teritoriu”.
Insa revolutionarii insisi, precum si fostii ofiteri de Securitate arata ca in
dimineata zilei de 16 decembrie numarul persoanelor stranse in fata casei lui
Tokes era redus. Revolutionara Veronica Balaj are o amintire romantica despre
prima jumatate a acelei zile: ”Pana la pranz nu s-a aratat vreun semn ca ziua
aceea ar fi putut fi deosebita. Era o sambata de sfarsit de an. Un 16 decembrie
ca oricare altul. Asa parea. Si nici nu banuiam altfel. Atata ca vremea se incalzise
peste asteptari. Soarele se hlizea in ciuda calendarului in care scria mijloc
de decembrie”. Temperatura maxima la Timisoara in ziua de 16 decembrie 1989 va
fi de 16 grade C. Avem temeiuri sa credem ca incepand cu ora 11:00, in sediul
Comitetului Judetean de Partid s-a dezvoltat un conflict personal intre primul
secretar Radu Balan si fostul prim-secretar Ilie Matei. Balan va arata in proces
ca ”tot in aceeasi zi, la ora 11:00, la Comitetul Judetean de partid si-a facut
aparitia Matei Ilie, secretar al CC al PCR, care era invoit pentru a-si rezolva
unele probleme familiale. L-am pus pe acesta in tema cu situatia creata in jurul
pastorului mentionat, cu ordinul dat de Ceausescu de a se trece la evacuare, la
care Matei Ilie a fost de parere ca trebuie sa se puna in executare hotararea
de evacuare”. S-a nascut o contradictie intre cei doi lideri comunisti locali,
provenita din faptul ca Ilie Matei era fostul prim-secretar al judetului Timisoara,
iar Balan ii luase locul doar de o luna si jumatate. Conform unor surse din Primaria
Timisoarei, Matei cunostea foarte bine cazul Tokes, fusese implicat in emiterea
ordinului de evacuare si il considera pe Balan inca nefamiliarizat cu situatia
judetului si cu a lui Tokes in particular. Radu Balan nu dorea sa-si inceapa conducerea
judetului cu acte de violenta si, in plus, fiind prizonierul unei imagini clasice,
larg raspandite, ca Timisoara este un oras civilizat, a mizat pe reactia civilizata
a cetatenilor. Pentru Ilie Matei insa, proaspat promovat secretar in CC, ordinul
lui Ceausescu era litera de lege. Faptul ca Balan a refuzat sa execute acest ordin
a generat starea de conflict si poate si un telefon la Bucuresti. Avem astfel
ipoteza unei diferente majore de opinie tocmai la nivelul deciziei superioare
pe plan local. Mai tarziu, in seara zilei de 17 decembrie, Balan ii va declara
revolutionarului Ioan Savu: ”Am vorbit cu oamenii, Savule. Am vorbit, dar n-am
putut face mai mult pentru ca sosise Matei, pentru ca venisera generalii”. Sa
ne lamurim asupra problemei evacuarii lui Laszlo Tokes. In primul rand sa reamintim
ca acesta ocupa ilegal apartamentul din cladirea situata in Strada Timotei Cipariu.
In al doilea rand, legalitatea actiunii de evacuare era stabilita prin Codul de
Procedura Civila. Potrivit prevederilor art. 385 din Codul de Procedura Civila,
in vigoare la acea data, ”nici o executare nu se va putea face inainte de ora
8 dimineata si dupa ora 6 seara”. Dar art. 386, in vigoare la acea data, prevedea
ca ”executarea silita nu se va putea face in zilele nelucratoare, potrivit legii,
afara de cazurile urgente in care executarea poate fi incuviintata de presedintele
instantei de executare”. Asadar, in caz de urgenta, Tokes putea fi evacuat in
orice zi, intre ora 8:00 si 18:00, cu incuviintarea presedintelui Tribunalului
Timis, daca intervenea un ”caz de urgenta”. Intamplarile din noaptea de 15 spre
16 decembrie nu intruneau conditiile cazului de urgenta, astfel ca ordinul de
evacuare fortata dat de Ceausescu era ilegal. In jurul orei 12:00, in dreptul
imobilului in care locuia Tokes erau stranse aproximativ 30 de persoane. De aceasta
data, procentul curiosilor, al celor care poate incercau o solidarizare muta cu
pastorul Tokes, este dominant; dintre cele aproximativ 30 de persoane lipsesc
indivizii lumii interlope din ziua precedenta. La ora 13:00, maiorul Radu Tinu
il suna la Bucuresti pe seful Directiei I din DSS, colonelul Ratiu, si ii raporteaza,
printre altele: ”Nu e bine ce se intampla. E o balbaiala la partid, habar n-au
ce sa faca”. Este vorba, fara indoiala, de conflictul Balan-Matei din sediul CJP.
La cateva minute dupa ora 14:00, strazile din apropiere se anima si tramvaiele
devin ceva mai aglomerate. Pentru cei care nu sunt familiarizati cu Piata Maria
din Timisoara, trebuie precizat ca aceasta este un nod important de circulatie,
locul unde angajatii intreprinderilor de la periferie schimba tramvaiele venite
din trei directii diferite, pentru tramvaiele care circula spre centru. In Piata
Maria, in mod normal la ore de varf, se adunau pentru a lua alte mijloace de transport
multe sute de persoane. Asadar, posibilitatea de a vedea ce se intampla cativa
pasi mai incolo, la parohiala, era maxima, iar sansele ca un calator curios sa
intarzie pentru a afla ce se intampla treceau mult peste 50%. Era sambata si programul
de lucru al intreprinderilor timisorene se incheia la ora 14:00. In jurul orei
16:00, in Strada Timotei Cipariu apare un grup compact, de aproximativ 60-70 de
persoane, care se opresc in dreptul casei lui Tokes, ocupand si o parte din carosabil.
Securitatea isi trimite rapid oamenii printre cei veniti, iar Radu Tinu se duce
personal pentru a afla ce se intampla. De la ziaristul Teodor (Doru) Burza, venit
in scurt timp si el la fata locului, afla ca sunt sindicalisti trimisi de primarul
Petre Mot sa impiedice adunarea manifestantilor si, la nevoie, sa-i imprastie.
Prin aceasta decizie stupida, autoritatile locale constituie ele insele un grup
masiv de peste 100 de persoane in fata casei lui Tokes, trezind curiozitatea trecatorilor.
Multi dintre ei se vor opri si apoi vor ramane pe loc pentru a vedea ce se mai
intampla. Altii vor stationa un timp, se vor duce acasa sa manance si sa-si rezolve
unele probleme casnice, insa hotarati sa revina pe seara. Sindicalistii - persoane
cu functii pe linie de sindicat din mai multe intreprinderi timisorene - devin
cu timpul agitati, lasati acolo fara nici o conducere, impiedicati sa se duca
acasa dupa terminarea programului, preocupati ca si restul persoanelor prezente
de lipsurile zilnice, enervati ca pierd timpul intr-un loc lipsit de interes.
Li se spusese ca in Timotei Cipariu este un grup violent de iredentisti, de unguri
care vor sa impiedice punerea in aplicare a unei hotarari judecatoresti. Nu se
intampla nimic din toate astea. In aceasta manevra tipica mentalitatilor comuniste
care dominau gandirea activistilor de partid, de jos si pana la Bobu si Ceausescu,
trebuie identificat continutul discutiilor telefonice de noapte si dimineata dintre
Ceausescu si Balan. LOCUL. Sindicalistii trebuia sa sparga ”mitingul” de la casa
parohiala a lui Tökes Informatiile obtinute de Comisia senatoriala despre convorbirile
telefonice la inalt nivel politic intre Timisoara si Bucuresti provin exclusiv
de la factori politici locali. Era de asteptat ca in declaratiile lor ulterioare,
date in procese sau in fata Comisiei, sa ascunda gafa monumentala pe care au facut-o,
dovada a ingustimii gandirii lor si a incapacitatii de a conduce o structura,
de a gestiona o situatie oarecare. Le era extrem de greu sa recunoasca faptul
ca sunt autorii primei aglomerari importante de oameni din Timotei Cipariu si
mai ales ca sindicalistii pe care i-au trimis acolo, ”reprezentantii clasei muncitoare”,
”forta inaintata a partidului” etc., pactizasera cu micul grup de enoriasi si
simpatizanti de acolo, satui de propaganda, de minciuna, de conditiile mizerabile
de trai si de salarii diminuate. Teza unei multimi de 1.000 de persoane prezente
in dimineata sau dupa-amiaza zilei de 16 decembrie nu este realista. Nici autorii
cei mai entuziasti si inclinati spre exagerari nu confirma aceste cifre, nici
jumatate, nici macar un sfert. In dupa-amiaza de 16 decembrie, Emil Bobu va lua
si alte masuri, asa cum aflam din declaratia adjunctului sau, Nicolae Mihalache:
”La data de 16 decembrie 1989, la ora 16:30, din ordin, m-am prezentat la Bobu
Emil care, de fata cu Constantin Radu, mi-a spus urmatoarele: ”Vei pleca la Timisoara.
In cursul acestei nopti va trebui sa fie evacuat un pastor, problema de care se
vor ocupa organele Ministerului de Interne. Cumpanasu Ion va discuta cu pastorul
si ii va preciza noua parohie. Toate indicatiile necesare au fost transmise si
primului secetar Radu Balan. Tu nu vei avea alta sarcina decat aceea de a ma informa
cu evolutia situatiei de la Timisoara””. Avem posibilitatea acum sa incercam o
reconstituire a evenimentelor si din punctul de vedere al Securitatii, punct de
vedere care a lipsit din analizele anterioare. In primul rand, trebuie subliniat
ca supravegherea adunarilor de oameni in fata casei lui Tokes reprezenta doar
un aspect, o parte a activitatii Securitatii Timis, care era mult mai complexa.
Din punct de vedere strict profesional, adunarea oamenilor acolo ii deranja pe
securisti in indeplinirea misiunii lor, atat prin faptul ca le mobiliza fortele
pentru a depista prezenta unor eventuali instigatori din afara sau din interior,
cat si prin faptul ca perturba operatiile de supraveghere asupra lui Tokes. Este
clar ca in momentul in care s-au implicat autoritatile locale, Securitatea s-a
retras in interiorul misiunilor sale stricte de urmarire informativa. Si sa nu
uitam ca avea in zona cateva coloane de ”turisti” sovietici care tot declarau
ca se duc in Iugoslavia sa petreaca Craciunul, dar nu mai paraseau imprejurimile
Timisoarei. Ei nu se cazau la hoteluri si dormeau peste noapte in masini. Dormitul
peste noapte in masina, la jumatatea lui decembrie, presupune fie o rezistenta
fizica iesita din comun, fie folosirea intensa, pe durata intregii nopti, a sistemului
de incalzire al autoturismului, fapt care produce un consum de combustibil foarte
greu de recuperat. Iarasi nu trebuie sa uitam ca nu se gasea benzina si ca la
statiile de alimentare erau cozi imense, zi si noapte. Este imposibil sa neglijam
aceste detalii ale unor intamplari nefiresti si ilogice. Coloanele de turisti
se aflau la doar cativa kilometri de Iugoslavia si totusi nu treceau granita.
In cursul zilei, unul sau doua autoturisme din coloana plecau in recunoastere
prin oras, oprindu-se in apropierea unor locuri care vor deveni ”aprinse” incepand
cu seara zilei de 16 decembrie - in fata Consiliului Judetean, in dreptul aleii
ce ducea la Opera, pe strazile din vecinatatea casei lui Tokes. In dimineata aceleiasi
zile, doua tiruri sovietice vor stationa pe interzis in apropierea unor unitati
militare din Timisoara, ingreunand accesul. Vor fi indepartate de Militie. Mirko
Atanaskovici, consulul iugoslav de la Timisoara, care va fi acuzat in timpul proceselor
revolutiei ca s-a implicat in revolta si se va apara dupa aceea ca nu a depasit
”cu nimic ceea ce e prevazut in Conventia Internationala privind relatiile diplomatice
internationale”, facea in saptamana 10-16 decembrie trei si chiar cinci deplasari
pe zi in Iugoslavia si inapoi. El nu stie sau nu vrea sa spuna ca urmarirea sa
nu se reducea la un granicer care ii numara iesirile si intrarile pe la granita,
ci era supravegheat pe teritoriul Iugoslaviei si interceptat de la Belgrad, astfel
ca Directia de Contrainformatii a Securitatii cunostea unde si in ce masura incalca
prevederile Conventiei Internationale. In plus, el isi activase propria retea
de informatii, intre care unii agenti ai sai au fost identificati la casa parohiala.
Precizam ca in orasul Timisoara se mai aflau cateva obiective ale supravegherii
operative, asemanatoare lui Tokes.'
- source_sentence: AKP'li vekilin traktör açıklamasına tepki
sentences:
- Cumhuriyet Halk Partisi (CHP) Niğde Milletvekili Ömer Fethi Gürer, “Türkiye’de
AK Parti’den önce traktör yoktu” diyen AK Parti Grup Başkanvekili ve Ankara Milletvekili
Leyla Şahin Usta’ya tepki gösterdi. Usta’nın Meclis Genel Kurulu’ndaki konuşmasını,
“İnkarcılığın bu kadara da pes” diyerek eleştiren CHP Milletvekili ve TBMM Tarım,
Orman ve Köyişleri Komisyon Üyesi Ömer Fethi Gürer, Osmanlı Döneminde bile 4 traktörün
olduğunu anımsattı. “ATATÜRK’ÜN TRAKTÖR ÜZERİNDEKİ FOTOĞRAFLARINA İYİ BAKIN” 1923
yılında Cumhuriyet kurulduğunda, Büyük Önder Mustafa Kemal Atatürk’ün ilk talimatlarından
birinin de tarımda makineleşmenin gerçekleşmesi yönünde olduğunu hatırlatan Milletvekili
Gürer, “Bu nedenle 221 traktör ithal edildi. Atatürk’ün de üzerinde olduğu traktör
fotoğrafları arşivlere girildiğinde görülebilir” dedi. ATATÜRK ASKERE GİDEN ÇOCUKLARA
TRAKTÖR EĞİTİMİ VERİLMESİNİ İSTEMİŞTİ CHP Milletvekili Ömer Fethi Gürer, Atatürk’ün
askere giden köylü çocuklarına traktör kursu verilerek, ileride traktör sayısı
artacağı için gençlerin köylerine döndüklerinde, traktör kullanıyor olmalarının
sağlanmasını istediğini de ifade etti. 1980’LERDE TRAKTÖR ÜRETİMI HIZLA ARTTI
Türkiye’de 1944 yılında 956 traktörün bulunduğuna işaret eden CHP Milletvekili
Ömer Fethi Gürer, “1960 yılında ülkemizde 42 bin 136 traktör vardı ki, Türkiye
o dönemde traktör üretimine de başlamıştı. 1980’lere kadar traktör üretimi hızla
arttı. 1980’lerden sonra traktör fabrikalarından birinde ben de genel müdür olarak
görev yaptım. Ama AK Parti Grup Başkanvekilinin sözlerini duyunca, insana ‘Bu
kadar da olmaz’ dedirtiyor. Sanayi ve Teknoloji Bakanı da bu konuşma olurken Mecliste
genel kurulunda idi. En azından Bakan bir düzeltme yapmalı idi. Grup Başkanvekili
Usta bu sözlerinden sonra da konuştu ancak bir düzeltme yapmadı. Görünen o ki
sözlerini düzeltme yerine hala öyle sanıyor. Cumhuriyet tarihi bilmemek de böyle
bir şey” diye konuştu. 2000 YILINDA 1 MİLYONA YAKIN TRAKTÖR VARDI “Türkiye’de
traktörün olmadığını iddia etmenin, iddia sahibinin ülkemizin dününün sanayide
gelişmelerini de bilmediğini gösterir” diyen Milletvekili Gürer, “Çünkü 2000 yılına
gelindiğinde ülkemizde traktör sayısı 941 bin 843 adetti. 1 tane değil, 5 tane
değil, 10 tane değil, neredeyse 1 milyon traktör vardı ülkemizde” dedi. Gürer
bir traktör fabrikasında 1980 sonrası yönetici olarak çalıştığını da ifade ederek
1960’lardan sonra ülkede üretimi yapılan traktörler ile Türkiye’nin önemli bir
aşamaya geldiğini ifade etti. Gürer, “AKP’den önce bir şey yoktu masalının iş
yaptığı sanısı bundan sonra da benzer açıklamaların olmasını olası kılıyor. Cumhuriyet
tarihini bilmeyenler sanayi ununun, şekerin, bezin ithal olduğunu ve ülkemizde
Cumhuriyetin ilk yıllarında yapılan fabrikalarla üretildiğini öğrenmeyenler, fabrika
yapan fabrika olduğu gibi tiyatro salonlarına dahi sahip şeker fabrikalarını yapmak
değil satmaktan anlayanların, ülkenin dünü-bugünü arasında kamuda sata sata bitiremedikleri
varlıkların nasıl oluştuğunu ve halen dünya Endüstri 5.0 geçmişken Endüstri 3.5’te
debelendiğini göstermemek için her türlü ifadeyi rahatlıkla kullanabiliyorlar”
diye konuştu.
- Meteoroloji Genel Müdürlüğü tarafından yapılan uyarının ardından Tekirdağ'da etkili
olan kar yağışı İstanbul'un sınırına dayandı. Meteoroloji Genel Müdürlüğü tarafından
yapılan uyarıların ardından Tekirdağ'da bu sabah saatlerinde kar yağışı etkili
oldu. Kar yağışı şehrin yüksek kesimlerini ve evin çatılarına beyaz örtü ile kaplarken,
sabah işe çıkmak için arabalarına binen sürücülerde yollarda ilerlemekte güçlük
çekti. Etkisini sürdüren kar yağışı Tekirdağ sınırındaki İstanbul'un kapısına
dayandı. (İHA)
- 'Milli Eğitim Bakanı Tekin, 24 Kasım Öğretmenler Günü dolayısıyla sosyal medya
hesabından bir mesaj yayımladı. Mesajında, 100 yıldır var olan şanlı Cumhuriyet''in
ilelebet payidar kalmasında öğretmenlerin her zaman en önemli görevi üstlendiğini
belirten Tekin, ”Aziz milletimiz, en çetin ve en mihver zamanlarda dahi görevini
özveriyle ifa eden, vatan evlatlarının yarınları için canı gönülden çalışarak
daha müreffeh bir geleceği tahayyül eden meslektaşlarımın omuzlarında yükselecek.”
değerlendirmesinde bulundu. 100 yıldır var olan şanlı Cumhuriyetimizin ilelebet
payidar kalmasında her zaman en önemli görevi üstlenen kıymetli Öğretmenim! Aziz
milletimiz, en çetin ve en mihver zamanlarda dahi görevini özveriyle ifa eden,
vatan evlatlarının yarınları için canıgönülden çalışarak daha… pic.twitter.com/074mzguYYn
— Yusuf Tekin (@Yusuf__Tekin) November 22, 2023 Bakan Tekin, şunları kaydetti:
”Türkiye Yüzyılı''nın mimarları olmanız, evlatlarımızın ülkemize faydalı bir nesil
olarak yetişmesinde sarf ettiğiniz emek ve maarif davamıza ruh katan vakur duruşunuz
için sizlere minnettarım. Uhdenize emanet edilen öğrencilerinizi, bir anne, bir
baba şefkatiyle benimseyip her daim onları düşündüğünüzü biliyorum. O sebepledir
ki ülkemizin tüm başarısı, Sayende Öğretmenim.” Tekin, Bakanlık tarafından hazırlanan
”Sayende” adlı kısa filmi de paylaştı. Sanatçılar filmde gönüllü olarak yer aldı
Milli Eğitim Bakanlığının 24 Kasım Öğretmenler Günü kutlamaları içi hazırladığı
”Sayende” adlı kısa filmde Gülen Karaman, Ziya Kürküt, Zuhal Yalçın, Sefa Zengin,
Gülçin Gülrek ve Özge İnce rol aldı. Arzu Balkan''ın seslendirdiği filmde müzik,
edebiyat, tiyatro ve sporla ilgilenen dört öğrencinin sorunlarını, kendi evlatlarının
sorunlarıymış gibi benimseyen öğretmenlerin, onların hayatlarına dokunuşu konu
edildi. Dört öğrencinin farklı alanlarda çalışma yaparken yaşadıkları zorluklar
ekrana yansıtılırken öğretmenlerinin bu sorunlara çözüm bulmak için düşüncelerine
yer verildi. Öğretmenler odasındaki mutlu finalde öğrenciler, onlara yol gösteren
öğretmenleri ile buluşup Öğretmenler Günü''nü kutladı. Tüm sanatçıların gönüllü
olarak rol aldığı projenin çekimleri Kabataş Lisesi, Maçka Mesleki ve Teknik Anadolu
Lisesi ile Nişantaşı Anadolu Lisesi''nde gerçekleştirildi.'
---
# SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) <!-- at revision 3af7c6da5b3e1bea796ef6c97fe237538cbe6e7f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Dot Product
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
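The listing above shows an MPNet backbone followed by CLS-token pooling. As a quick sanity check, the loaded model can be inspected programmatically; the snippet below is a minimal sketch (module indices follow the listing above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")

# Module 0: the MPNet transformer and its sequence-length setting
print(model.max_seq_length)             # 512

# Module 1: the pooling layer; CLS-token pooling is enabled
pooling = model[1]
print(pooling.pooling_mode_cls_token)   # True
print(pooling.word_embedding_dimension) # 768
```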
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")
# Run inference
sentences = [
"AKP'li vekilin traktör açıklamasına tepki",
'Cumhuriyet Halk Partisi (CHP) Niğde Milletvekili Ömer Fethi Gürer, “Türkiye’de AK Parti’den önce traktör yoktu” diyen AK Parti Grup Başkanvekili ve Ankara Milletvekili Leyla Şahin Usta’ya tepki gösterdi. Usta’nın Meclis Genel Kurulu’ndaki konuşmasını, “İnkarcılığın bu kadara da pes” diyerek eleştiren CHP Milletvekili ve TBMM Tarım, Orman ve Köyişleri Komisyon Üyesi Ömer Fethi Gürer, Osmanlı Döneminde bile 4 traktörün olduğunu anımsattı. “ATATÜRK’ÜN TRAKTÖR ÜZERİNDEKİ FOTOĞRAFLARINA İYİ BAKIN” 1923 yılında Cumhuriyet kurulduğunda, Büyük Önder Mustafa Kemal Atatürk’ün ilk talimatlarından birinin de tarımda makineleşmenin gerçekleşmesi yönünde olduğunu hatırlatan Milletvekili Gürer, “Bu nedenle 221 traktör ithal edildi. Atatürk’ün de üzerinde olduğu traktör fotoğrafları arşivlere girildiğinde görülebilir” dedi. ATATÜRK ASKERE GİDEN ÇOCUKLARA TRAKTÖR EĞİTİMİ VERİLMESİNİ İSTEMİŞTİ CHP Milletvekili Ömer Fethi Gürer, Atatürk’ün askere giden köylü çocuklarına traktör kursu verilerek, ileride traktör sayısı artacağı için gençlerin köylerine döndüklerinde, traktör kullanıyor olmalarının sağlanmasını istediğini de ifade etti. 1980’LERDE TRAKTÖR ÜRETİMI HIZLA ARTTI Türkiye’de 1944 yılında 956 traktörün bulunduğuna işaret eden CHP Milletvekili Ömer Fethi Gürer, “1960 yılında ülkemizde 42 bin 136 traktör vardı ki, Türkiye o dönemde traktör üretimine de başlamıştı. 1980’lere kadar traktör üretimi hızla arttı. 1980’lerden sonra traktör fabrikalarından birinde ben de genel müdür olarak görev yaptım. Ama AK Parti Grup Başkanvekilinin sözlerini duyunca, insana ‘Bu kadar da olmaz’ dedirtiyor. Sanayi ve Teknoloji Bakanı da bu konuşma olurken Mecliste genel kurulunda idi. En azından Bakan bir düzeltme yapmalı idi. Grup Başkanvekili Usta bu sözlerinden sonra da konuştu ancak bir düzeltme yapmadı. Görünen o ki sözlerini düzeltme yerine hala öyle sanıyor. Cumhuriyet tarihi bilmemek de böyle bir şey” diye konuştu. 2000 YILINDA 1 MİLYONA YAKIN TRAKTÖR VARDI “Türkiye’de traktörün olmadığını iddia etmenin, iddia sahibinin ülkemizin dününün sanayide gelişmelerini de bilmediğini gösterir” diyen Milletvekili Gürer, “Çünkü 2000 yılına gelindiğinde ülkemizde traktör sayısı 941 bin 843 adetti. 1 tane değil, 5 tane değil, 10 tane değil, neredeyse 1 milyon traktör vardı ülkemizde” dedi. Gürer bir traktör fabrikasında 1980 sonrası yönetici olarak çalıştığını da ifade ederek 1960’lardan sonra ülkede üretimi yapılan traktörler ile Türkiye’nin önemli bir aşamaya geldiğini ifade etti. Gürer, “AKP’den önce bir şey yoktu masalının iş yaptığı sanısı bundan sonra da benzer açıklamaların olmasını olası kılıyor. Cumhuriyet tarihini bilmeyenler sanayi ununun, şekerin, bezin ithal olduğunu ve ülkemizde Cumhuriyetin ilk yıllarında yapılan fabrikalarla üretildiğini öğrenmeyenler, fabrika yapan fabrika olduğu gibi tiyatro salonlarına dahi sahip şeker fabrikalarını yapmak değil satmaktan anlayanların, ülkenin dünü-bugünü arasında kamuda sata sata bitiremedikleri varlıkların nasıl oluştuğunu ve halen dünya Endüstri 5.0 geçmişken Endüstri 3.5’te debelendiğini göstermemek için her türlü ifadeyi rahatlıkla kullanabiliyorlar” diye konuştu.',
"Milli Eğitim Bakanı Tekin, 24 Kasım Öğretmenler Günü dolayısıyla sosyal medya hesabından bir mesaj yayımladı. Mesajında, 100 yıldır var olan şanlı Cumhuriyet'in ilelebet payidar kalmasında öğretmenlerin her zaman en önemli görevi üstlendiğini belirten Tekin, ”Aziz milletimiz, en çetin ve en mihver zamanlarda dahi görevini özveriyle ifa eden, vatan evlatlarının yarınları için canı gönülden çalışarak daha müreffeh bir geleceği tahayyül eden meslektaşlarımın omuzlarında yükselecek.” değerlendirmesinde bulundu. 100 yıldır var olan şanlı Cumhuriyetimizin ilelebet payidar kalmasında her zaman en önemli görevi üstlenen kıymetli Öğretmenim! Aziz milletimiz, en çetin ve en mihver zamanlarda dahi görevini özveriyle ifa eden, vatan evlatlarının yarınları için canıgönülden çalışarak daha… pic.twitter.com/074mzguYYn — Yusuf Tekin (@Yusuf__Tekin) November 22, 2023 Bakan Tekin, şunları kaydetti: ”Türkiye Yüzyılı'nın mimarları olmanız, evlatlarımızın ülkemize faydalı bir nesil olarak yetişmesinde sarf ettiğiniz emek ve maarif davamıza ruh katan vakur duruşunuz için sizlere minnettarım. Uhdenize emanet edilen öğrencilerinizi, bir anne, bir baba şefkatiyle benimseyip her daim onları düşündüğünüzü biliyorum. O sebepledir ki ülkemizin tüm başarısı, Sayende Öğretmenim.” Tekin, Bakanlık tarafından hazırlanan ”Sayende” adlı kısa filmi de paylaştı. Sanatçılar filmde gönüllü olarak yer aldı Milli Eğitim Bakanlığının 24 Kasım Öğretmenler Günü kutlamaları içi hazırladığı ”Sayende” adlı kısa filmde Gülen Karaman, Ziya Kürküt, Zuhal Yalçın, Sefa Zengin, Gülçin Gülrek ve Özge İnce rol aldı. Arzu Balkan'ın seslendirdiği filmde müzik, edebiyat, tiyatro ve sporla ilgilenen dört öğrencinin sorunlarını, kendi evlatlarının sorunlarıymış gibi benimseyen öğretmenlerin, onların hayatlarına dokunuşu konu edildi. Dört öğrencinin farklı alanlarda çalışma yaparken yaşadıkları zorluklar ekrana yansıtılırken öğretmenlerinin bu sorunlara çözüm bulmak için düşüncelerine yer verildi. Öğretmenler odasındaki mutlu finalde öğrenciler, onlara yol gösteren öğretmenleri ile buluşup Öğretmenler Günü'nü kutladı. Tüm sanatçıların gönüllü olarak rol aldığı projenin çekimleri Kabataş Lisesi, Maçka Mesleki ve Teknik Anadolu Lisesi ile Nişantaşı Anadolu Lisesi'nde gerçekleştirildi.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
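Because the base model was trained for dot-product retrieval, the model also fits a query-vs-corpus semantic search setup. The sketch below is illustrative only; the query and the two corpus strings are placeholders loosely based on the training samples, not a verbatim excerpt of the dataset:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")

# Placeholder query and corpus, for illustration only
query = "Ankara için fırtına ve kuvvetli yağış uyarısı"
corpus = [
    "Meteoroloji Genel Müdürlüğü kuvvetli yağış ve fırtına uyarısı yaptı.",
    "Filenin Efeleri, Avrupa Altın Ligi'nde Belarus ile karşılaşacak.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank corpus entries by dot-product score (the base model's similarity function)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2, score_function=util.dot_score)
print(hits[0])  # list of {'corpus_id': ..., 'score': ...} dicts, best match first
```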
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 62,964 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 25.67 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 439.19 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
  |:-----------------------------------------------------------|:---------------------------------------------------------------|:----------------|
| <code>“Case pentru generali”</code> | <code>O teribilă afacere, pusă la cale la vîrful Armatei, a fost dezvăluită de Jurnalul Naţional în decembrie 1999. Mărimile oştirii au pus mîna pe locuinţe de lux, din cota MApN, chiar în centrul Capitalei, deşi mai aveau şi alte date de stat şi Armată. Operaţiunea, dovedită de cotidianul nostru, a fost anchetată de Parchetul Militar şi, după o primă spălare pe mîini, dosarul a fost redeschis la cinci ani de la articolele noastre. MApN • Afacerea care a ştirbit imaginea morală a armatei O teribilă afacere, pusă la cale la vîrful Armatei, a fost dezvăluită de Jurnalul Naţional în decembrie 1999. Mărimile oştirii au pus mîna pe locuinţe de lux, din cota MApN, chiar în centrul Capitalei, deşi mai aveau şi alte case date de stat şi Armată. Operaţiunea, dovedită de cotidianul nostru, a fost anchetată de Parchetul Militar şi, după o primă spălare pe mîini, dosarul a fost redeschis la cinci ani de la articolele noastre. Trecuseră, deja, cîteva luni de cînd o anonimă care a ajuns pe masa procurorilor militari, conduşi atunci de generalul Dan Voinea, dădea frisoane. Mai mulţi ofiţeri din Garnizoana Bucureşti erau scandalizaţi de faptul că un grup de generali, care-i avea în frunte pe şeful Statului Major General de la acea vreme, generalul Constantin Degeratu, obţinuse al doilea şi chiar al treilea rînd de apartamente din cota MApN, deşi 20.000 de cadre militare trăiau, atunci, în condiţii mizere. Locuinţele de serviciu vizate erau în Centrul Civic. Faptele sesizate erau descrise foarte explicit şi trimiteau direct la dovezi. Anonima fusese scrisă de cîţiva ofiţeri ce au avut acces la documente şi care se temeau foarte tare de represalii. Ca de obicei, nu se întîmpla nimic. Dezvăluirile din decembrie 1999 din Jurnalul Naţional despre acest subiect, publicate într-un serial cu nouă episoade, au impulsionat investigaţiile, atît la Parchetul Militar, cît şi în interiorul ministerului. Abia atunci, Victor Babiuc, ministrul Apărării, a ordonat verbal “verificarea aspectelor semnalate în ziarul Jurnalul Naţional din 16-20 decembrie 1999, referitoare la «repartizarea şi vînzarea locuinţelor de serviciu unor colonei şi generali cu funcţii importante»“ de către o comisie din Inspectoratul General al Ministerului Apărării Naţionale. De altfel, cînd a demisionat Victor Babiuc a recunoscut într-un interviu acordat nouă: “În legătură cu Afacerea «Case pentru generali», într-adevăr, sînt nereguli”. RAPORTUL. La începutul anului 2000, an electoral, nimeni nu se grăbea să găsească eventuali vinovaţi. În toamna acelui an, cotidianul nostru a oferit şi dovada afacerii. Comisia Inspectoratului MApN confirma rezultatele articolelor noastre şi arăta şi adevărata amploare a operaţiunii: peste 60 de generali şi colonei implicaţi. Raportul Inspectoratului MApN a fost dosit la cel mai înalt nivel, dar am reuşit să intrăm în posesia lui şi, astfel, să-l publicăm. Atunci, în urma articolelor, la adresa subsemnatului au fost făcute presiuni foarte mari. Inclusiv insinuări adresate vecinilor că aş fi traficant de droguri, iar locuinţa mi-a fost spartă demonstrativ. Nu lipsea nimic. Investigaţiile făcute de ofiţerii Inspectoratului MApN spuneau că locuinţe de serviciu din cota Armatei au fost luate la preţ de nimic, cu mult sub cel al pieţei, de generali şi colonei cu funcţiile cele mai mari în Ministerul Apărării Naţionale, deşi aceştia nu aveau dreptul, conform legilor, şi dăduseră declaraţii în fals la notariate. 
“Apreciem că repartizarea unor locuinţe de serviciu unor ofiţeri care deţin sau au deţinut şi înstrăinat copiilor locuinţe proprietate personală reprezintă încălcarea legislaţiei în vigoare”, era una dintre concluziile raportului. Mai mult, unii îşi cumpăraseră în rate, deşi nici acest lucru nu era permis de lege, potrivit ofiţerilor Inspectoratului MApN. Pe lista Inspectoratului se afla şi viitorul şef al Statului Major General, generalul Eugen Bădălan. PUNCT. Pentru declanşarea şi derularea afacerii s-au făcut presiuni şi asupra ofiţerilor care aveau atribuţii de verificare a legalităţii repartiţiei din Comenduirea Garnizoanei Bucureşti, care au priceput imediat: “A aprobat ministrul!”. Deloc surprinzătoare a fost viteza cu care în noiembrie 2000 dosarul “de la Parchetul Militar” a şi fost închis. Nimeni nu era vinovat. “Actele premergătoare administrate în cauză nu au confirmat învinuirile tendenţioase aduse unor cadre militare, cu funcţii de conducere, din structurile MApN. De asemenea, nu este sarcina urmăririi penale de a lua poziţii critice faţă de anumite iniţiative legislative ale ministerului, ori de a interpreta corectitudinea sau moralitatea unuia sau altuia din actele normative ce au stat la baza procesului de vînzare a locuinţelor de serviciu din administrarea MapN”, spunea rezoluţia semnată de procurorul militar, col. Dumitru Carp. DE LA CAPĂT. La bilanţul Serviciului de Telecomunicaţii Speciale pe anul 2001, şeful serviciului, Tudor Tănase, a prezentat ce a găsit Curtea de Conturi în “ograda” pe care tocmai o prelua. Astfel, Curtea de Conturi atrăgea atenţia asupra achiziţiei de către STS tocmai a apartamentului de patru camere din Cluj-Napoca pe care generalul Constantin Degeratu, fostul şef al Statului Major General, o deţinea. E taman dovada că generalul Degeratu n-avea cum să-şi cumpere locuinţa de serviciu în 1999. Conducerii din momentul achiziţiei a STS i se imputa faptul că suma de 42.000 de dolari, cu care s-a plătit apartamentul, depăşea cotaţia pieţei de la acea vreme, iar legislaţia privind achiziţionarea de imobile pentru STS a fost încălcată. De atunci şi pînă în 2005 nu s-a mai întîmplat nimic. De pe poziţia de consilier de stat la Administraţia Prezidenţială, generalul Constantin Degeratu ne declara că nimic n-a fost în neregulă. “Nu a existat nici o ilegalitate. La mutarea mea de la Cluj aici, în Capitală, am stat un timp, provizoriu, cu familia, la un cămin de garnizoană. Apoi, la un moment dat, s-a pus chiar problema trecerii mele în rezervă, că, dacă nu, mă mut definitiv în Bucureşti. Mai întîi mi s-a oferit o altă locuinţă, care, chiar dacă se afla într-o poziţie centrală, nu i-a plăcut soţiei mele. A doua, cea în care locuim şi acum, i-a plăcut, chiar dacă avea unele probleme. Ne-am mutat, iar apoi, ani la rînd, a tot trebuit să facem reparaţii pentru că, practic, ori de cate ori ploua, se produceau infiltraţii. Asta este locuinţa cu pricina. Ştiu că la un moment dat a existat o cercetare a Parchetului, dar, concret, nimeni nu a fost acuzat de vreo încălcare a legii”, spunea Degeratu în 2005. Locuinţa de care se plîngea generalul este un apartament duplex pe B-dul Unirii. Poveste fără sfîrşit În 2005, accesul la Dosarul “Case pentru generali”, era deja închis. Generalul Samoilă Joarză, şeful Secţiei Parchetelor Militare, mirat că ne-aducem aminte de o aşa anchetă, ne-a spus că nu poate să ne permită accesul la el fiindcă are documente secrete. Mai mult, ne-a mai zis că, dacă tot întrebăm de el, îl va reciti pentru a vedea cum s-au pus soluţiile. 
La scurt timp, şeful procurorilor militari a decis infirmarea soluţiei de NUP date în anul 2000. Fapt care demonstrează că şi generalului i s-a părut ceva în neregulă în cazul respectiv. Joarză ne-a declarat atunci că Jurnalul Naţional a avut mai multe informaţii decît procurorii militari. Din 2005 şi pînă astăzi au mai trecut încă trei ani. Dosarul a fost repartizat la procurori militari care erau în prag de trecere în rezervă şi ancheta a continuat, normal, cu sincope. Un alt procuror, o nouă familiarizare cu cazul şi tot aşa. Cert este că la nouă ani de cînd am publicat primul articol nimeni nu a fost găsit vinovat, nici măcar moral, pentru afacerea cu locuinţele de serviciu ale Armatei. Şi nimeni, după toate probabilităţile, nici n-o să fie. Citiţi şi: Monarhistul a devenit republican în trei zile Epopeea ”Mineriadei” din ianuarie 1999 Reportaj fără vestă antiglonţ Bebe Carabină s-a înţepat la Ghimpaţi Prindeţi bestia! La 20 august 1996, Jurnalul Naţional a stîrnit un iureş politic preluînd un interviu acordat de Emil Constantinescu revistei Micro Magazin, revistă de limbă română ce apărea în Statele Unite ale Americii. Interviul fusese luat în timpul unei vizite pe care candidatul de atunci al CDR la Preşedinţie o făcuse în comunităţile româneşti Los Angeles, Chicago şi New York.Aprilie 1998, primăvara în care Armata, M.I. şi serviciile secrete au fost implicate într-un scandal imens, care a ricoşat şi în clasa politică: Ţigareta II. Atunci, Jurnalul Naţional a relatat zi de zi amănuntele acestei afaceri extrem de încîlcite. Aflaţi, de multe ori, cu un pas înaintea anchetatorilor, reporterii noştri au dezvăluit informaţii spectaculoase pe care autorităţile le-ar fi dorit ascunse pentru totdeauna.Ianuarie 1999, luna în care România s-a aflat în pragul dezastrului. Ieşiţi din bezna galeriilor, minerii din Valea Jiului s-au răzvrătit împotriva Guvernului şi au fost la un pas de a arunca ţara în haos. Au fost zile şi nopţi dramatice, în cursul cărora reporterii Jurnalului Naţional s-au aflat în ”linia întîi” a evenimentelor, martori ai dezastrului de la Costeşti, dar şi ai ”Păcii de la Cozia”.”De nouă ani, Iugoslavia nu mai are pace. Se strecoară printre războaie, dar nu vrea să recunoască. Nu vrea să se lase doborîtă. Îmbină pacea şi războiul aşa de bine, încît nu mai ştii să faci diferenţa”.”Ghimpaţi, un sătuc liniştit din Giurgiu, a intrat în istorie cu tot cu vajnicii săi paznici comunali, care au reuşit performanţa de a-l prinde pe unul dintre cei mai căutaţi bandiţi din România. Ce n-a putut face Poliţia atîta amar de vreme s-a întîmplat sîmbătă noaptea datorită sănătosului spirit al ţăranului român.””Jurnalul Naţional oferă recompensă 5 milioane de lei pentru informaţiile ce vor duce la prinderea şoferului criminal” – anunţa ziarul de luni, 17 iulie 2000, anul VIII, nr. 2178. Campania a avut succes. Bestiile care au accidentat un copil pe o stradă din Bucureşti, apoi l-au răpit şi l-au lăsat să moară pe un teren viran în zona Vitan au fost identificate cu ajutorul martorilor.</code> | <code>1.0</code> |
| <code>Filenin Efeleri'nin rakibi Belarus</code> | <code>CEV Avrupa Altın Ligi'nde ilk etap karşılaşmalarında 3'te 3 yapan ”Filenin Efeleri”, 4-6 Haziran 2021 tarihlerinde Portekiz'de oynanacak ikinci etap karşılaşmaları öncesinde son antrenmanını İstanbul'da yaptı. TVF Burhan Felek Vestel Voleybol Salonu'nda başantrenör Nedim Özbey yönetiminde yapılan antrenmanda milliler, hücum ve savunma üzerine taktik uygulamalar çalıştı. Milli kafile, ikinci etap karşılaşmalarını oynamak üzere bugün Portekiz'e hareket edecek. C Grubu'nda Belarus, Çekya ve Portekiz ile mücadele edecek milli takımın maçları TRT Spor ve TRT Spor Yıldız'da yayınlanacak. 4 Haziran Cuma: 18.00 Türkiye-Belarus 5 Haziran Cumartesi: 20.00 Portekiz-Türkiye 6 Haziran Pazar: 17.00 Çekya-Türkiye Statü İki ayrı turnuva şeklinde düzenlenecek Avrupa Altın Ligi'nin sonunda gruplarını ilk sırada tamamlayan 3 takım ve final grubuna ev sahipliği yapacak ülke (Belçika), Dörtlü Final oynamaya hak kazanacak.</code> | <code>1.0</code> |
| <code>Ankara için fırtına ve kuvvetli yağış uyarısı</code> | <code>Meteoroloji Genel Müdürlüğü tarafından yapılan son değerlendirmelere göre, yarın kuvvetli yağış beklenen Ankara'da rüzgarın da güney (lodos) yönlerden fırtına (50-70 km/saat), yer yer kuvvetli fırtına şeklinde esmesi bekleniyor. Ankara Valiliği, yarından itibaren beklenen sağanak ve kuvvetli fırtına nedeniyle dikkatli ve tedbirli olunması uyarısında bulundu. Valilikten yapılan açıklamada, Meteoroloji Genel Müdürlüğünden alınan son verilere göre, yarından itibaren Balkanlar üzerinden gelecek yağışlı havanın etkisiyle Ankara genelinde sağanak ve yer yer gök gürültülü sağanak beklendiği, yağışların cumartesi, pazar ve pazartesi yer yer kuvvetli olacağının tahmin edildiği belirtildi. Açıklamada, ”Cumartesi günü rüzgarın güney yönlerden fırtına, yer yer kısa süreli kuvvetli fırtına, yüksek kesimlerde tam fırtına şeklinde eseceği ve mevsim normallerinin üzerinde olması beklenen hava sıcaklıklarının pazar gününden itibaren yağışlarla beraber hissedilir derecede azalarak mevsim normalleri civarına düşmesi bekleniyor. Rüzgar ve fırtına sebebiyle ulaşımda aksamalar, çatı uçması, ağaç ve direk devrilmesi, soba ve doğal gaz kaynaklı zehirlenmeler gibi olumsuzluklara karşı dikkatli ve tedbirli olunmalıdır.” uyarısına yer verildi. AFAD, SMS ile uyardı Afet ve Acil Durum Yönetimi Başkanlığı (AFAD) ise cep telefonlarına gönderdiği SMS'te, ”Meteorolojiye göre yarın Ankara'da kuvvetli lodos ve yağış bekleniyor. Baca zehirlenmesi, ulaşımda aksamalar ve çatı uçmasına karşı dikkatli olun.” uyarısı yaptı.</code> | <code>1.0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
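As a rough illustration (not taken from the original training script), a loss with these parameters could be constructed as follows; the base model name below is only a placeholder, not the checkpoint this card describes:
```python
# Sketch only: MultipleNegativesRankingLoss with the parameters listed above.
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder base model
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,                   # "scale": 20.0
    similarity_fct=util.cos_sim,  # "similarity_fct": "cos_sim"
)
```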
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
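The non-default hyperparameters above map roughly onto the following `SentenceTransformerTrainingArguments`; this is a sketch under assumptions (the `output_dir`, datasets, model, and trainer wiring are not shown in this card):
```python
# Hedged reconstruction of the non-default training arguments listed above.
from sentence_transformers.training_args import (
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumed, not stated in the card
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```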
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.1270 | 500 | 0.3574 |
| 0.2541 | 1000 | 0.3181 |
| 0.3811 | 1500 | 0.2846 |
| 0.5081 | 2000 | 0.2585 |
| 0.6352 | 2500 | 0.2455 |
| 0.7622 | 3000 | 0.235 |
| 0.8892 | 3500 | 0.2324 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
yemen2016/memobert3_1_NCS | yemen2016 | 2024-06-27T11:11:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:11:37Z | Entry not found |
habulaj/323210289491 | habulaj | 2024-06-27T11:14:03Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:13:56Z | Entry not found |
cccornflake/absa_v2_sentiment | cccornflake | 2024-06-27T11:25:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-27T11:16:25Z | ---
license: apache-2.0
---
|
francisronca/Llama-2-7b-chat-finetuned | francisronca | 2024-06-27T11:18:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:18:09Z | Entry not found |
tfshaman/SymPy-Mistral-tokenizer | tfshaman | 2024-06-27T11:18:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:18:26Z | Entry not found |
antex100/trained_model.h5 | antex100 | 2024-06-27T11:20:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-27T11:20:21Z | ---
license: apache-2.0
---
|
Nex432/Tails-SAD2 | Nex432 | 2024-06-27T13:54:36Z | 0 | 1 | null | [
"license:cc-by-4.0",
"region:us"
]
| null | 2024-06-27T11:21:48Z | ---
license: cc-by-4.0
--- |
rajparmar/mistral-7B-v0.2-bf16-sharded-finetuned-tpicap-emails | rajparmar | 2024-06-27T11:22:20Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:ybelkada/Mistral-7B-v0.1-bf16-sharded",
"region:us"
]
| null | 2024-06-27T11:22:09Z | ---
base_model: ybelkada/Mistral-7B-v0.1-bf16-sharded
tags:
- generated_from_trainer
model-index:
- name: mistral-7B-v0.2-bf16-sharded-finetuned-tpicap-emails
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7B-v0.2-bf16-sharded-finetuned-tpicap-emails
This model is a fine-tuned version of [ybelkada/Mistral-7B-v0.1-bf16-sharded](https://huggingface.co/ybelkada/Mistral-7B-v0.1-bf16-sharded) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
- mixed_precision_training: Native AMP
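As a sketch only, these values correspond to a `transformers.TrainingArguments` configuration along the following lines; the dataset, tokenizer, and any PEFT/LoRA setup used for the fine-tune are not documented in this card and are omitted, and `output_dir` is assumed:
```python
# Hedged reconstruction of the training arguments from the values above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7B-v0.2-bf16-sharded-finetuned-tpicap-emails",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # gives the total train batch size of 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=200,
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)
```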
### Framework versions
- Transformers 4.35.0
- Pytorch 2.3.0+cu121
- Datasets 2.13.0
- Tokenizers 0.14.1
|
bezzam/digicam-celeba-unet4M-trainable-inv-unet4M_wave | bezzam | 2024-06-27T11:23:24Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-27T11:22:56Z | ---
license: mit
---
|
samad321kk/saman24 | samad321kk | 2024-06-27T11:29:48Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-27T11:23:42Z | ---
license: openrail
---
|
starreza/hhh | starreza | 2024-06-27T11:24:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-27T11:24:04Z | ---
license: apache-2.0
---
|
SemihDurmaz/whisper-small-tr5 | SemihDurmaz | 2024-06-27T11:26:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:26:13Z | Entry not found |
ericJoung/chatglm3-6b-VE | ericJoung | 2024-06-27T11:27:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-27T11:27:39Z | ---
license: apache-2.0
---
|
vittoriomaniezzo/testtransformers | vittoriomaniezzo | 2024-06-27T11:28:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:28:09Z | Entry not found |
bubasword/1 | bubasword | 2024-06-27T11:31:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:31:34Z | Entry not found |
ACEGameAI/Gary-Jiang_ohwx-man | ACEGameAI | 2024-06-27T12:09:32Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:32:46Z | Entry not found |
samad321kk/dfg | samad321kk | 2024-06-27T11:33:11Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-27T11:33:11Z | ---
license: openrail
---
|
youpennBadgley/joe | youpennBadgley | 2024-06-27T11:33:45Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:33:45Z | Entry not found |
AXTUN/23123213221322132213213 | AXTUN | 2024-06-27T11:37:51Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:37:51Z | Entry not found |
Hitesh17/REINFORCE-PixelCopter | Hitesh17 | 2024-06-27T11:41:01Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-27T11:38:20Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: REINFORCE-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.30 +/- 29.47
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Grayx/john_paul_van_damme_42 | Grayx | 2024-06-27T11:38:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:38:48Z | Entry not found |
ABDALLALSWAITI/3d-icon-sdxl-dora | ABDALLALSWAITI | 2024-06-27T11:38:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:38:56Z | Entry not found |
WHU-Sigma/HyperSIGMA | WHU-Sigma | 2024-06-30T06:55:41Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-27T11:39:27Z | ---
license: apache-2.0
---
|
Grayx/john_paul_van_damme_43 | Grayx | 2024-06-27T11:42:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:42:20Z | Entry not found |
DokiQueen/Self-suck | DokiQueen | 2024-06-27T11:46:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:42:28Z | Entry not found |
f4c/finetuned_model_airplane | f4c | 2024-06-27T11:42:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:42:28Z | Entry not found |
habulaj/1417114058 | habulaj | 2024-06-27T11:43:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:43:28Z | Entry not found |
Likich/mistral-finetune-qualcoding-500no | Likich | 2024-06-27T11:45:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T11:45:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danaaubakirova/mplugdocowl1.5-Omni | danaaubakirova | 2024-06-27T11:50:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mplugdocowl",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-06-27T11:47:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RoschildRui/AES2_deberta | RoschildRui | 2024-07-01T08:44:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-06-27T11:49:44Z | ---
license: apache-2.0
---
|
knetai/phi_3_finetune_merged | knetai | 2024-06-27T11:49:47Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T11:49:47Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** knetai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
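Since this repo appears to contain a merged checkpoint, it can presumably be loaded with plain `transformers` as well; the following is an untested sketch, not an official example:
```python
# Minimal sketch: load the merged checkpoint as a standard causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("knetai/phi_3_finetune_merged")
model = AutoModelForCausalLM.from_pretrained("knetai/phi_3_finetune_merged")

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```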
|
1nhye/rough | 1nhye | 2024-06-27T12:04:09Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:50:00Z | Entry not found |
Likich/mistral-finetune-qualcoding-20no | Likich | 2024-06-27T11:50:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T11:50:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cortexso/gpt-4o | cortexso | 2024-06-27T11:51:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:51:16Z | Entry not found |
ABDALLALSWAITI/3diconsdxldora | ABDALLALSWAITI | 2024-06-27T11:52:03Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:52:03Z | Entry not found |
OXES72/mixtral | OXES72 | 2024-06-27T11:52:09Z | 0 | 0 | null | [
"license:unknown",
"region:us"
]
| null | 2024-06-27T11:52:09Z | ---
license: unknown
---
|
emerie/classifier-tokenizer | emerie | 2024-06-27T11:56:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-27T11:53:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kurtarici1/nott | kurtarici1 | 2024-06-27T11:54:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:54:21Z | Entry not found |
Likich/mistral-finetune-qualcoding-10 | Likich | 2024-06-27T11:55:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T11:55:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rrct/Person2_LoRA | rrct | 2024-06-27T12:06:51Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-06-27T11:56:16Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of TOK person
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - rrct/Person2_LoRA
<Gallery />
## Model description
These are rrct/Person2_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/rrct/Person2_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
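Until the official snippet above is filled in, a hypothetical `diffusers`-based sketch might look like the following (repo id taken from this card; the fp16 VAE fix mentioned above may also need to be loaded for stable fp16 inference):
```python
# Hypothetical usage sketch, not the card's official example.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rrct/Person2_LoRA")  # this card's LoRA weights

image = pipe("a photo of TOK person", num_inference_steps=30).images[0]
image.save("tok_person.png")
```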
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
rajkumaralma/Emoji | rajkumaralma | 2024-06-27T11:56:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
]
| text-to-image | 2024-06-27T11:56:46Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'Emoji '
output:
url: images/out-0.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Emoji
license: mit
---
# Emoji
<Gallery />
## Model description
Emoji
## Trigger words
You should use `Emoji` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/rajkumaralma/Emoji/tree/main) them in the Files & versions tab.
|
ShapeKapseln33/PharmaFlexw3 | ShapeKapseln33 | 2024-06-27T11:59:52Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T11:58:24Z | Pharma Flex XR South Korea Reviews - The most popular nutritional supplement Pharma Flex Rx is intended to maintain and promote joint health. The manufacturer says it features all natural ingredients with no fillers.
**[Click here to buy now from official website of Pharma Flex XR](https://capsules24x7.com/pharma-flex-kr)**
## PharmaFlex Rx – for joints – fake – effects – where to buy
There are several measures that fake PharmaFlex Rx for joint effects determine the lifestyle. This includes the world of feelings, social life and even psychology. LABEL STICKER PROBLEM IN FRANCE No matter how we test external variables, we are only bodies we can protect.
If you can run as well as where to buy PharmaFlex Rx for fake joints walk around as you please, if you can do the tasks you value and also feel unpleasant while doing these things, this life should be.
Bone and joint system; bones, PharmaFlex Rx effects where to buy cartilage, joint, tendon, and connective tissue materials.
The combination of all these pieces encourages us to enjoy life with kicks by ensuring that we can move openly and also silently. The first moment we want to move any arm or leg, a corresponding caution signal will be sent.
Muscles stimulated by the conversion of incoming signals with the help of bones. In order for the desired movement to be performed in a very good way, the joints need to function properly.
Therefore, having healthy and balanced and durable bones and joint structures also have important preventive measures so that we can act as we wish. Joints; where
## PharmaFlex Rx – for joints – how to use – pharmacy – Ingredients
2 or more united bones, connective cells, tendons and even muscle tissue located separately or from each other how to use PharmaFlex Rx for pharmacy joints and also offers a system. There are three types of joints in our body.
The joint, which interconnects the PharmaFlex Rx Materials for wearable joints with each other, is visible in the head, the so-called non-deformable joint.
COVERED BY EXPENSES Joints between PharmaFlex Rx pharmacies Spinal materials and also ribs, which actually have limited wheelchairs, are called semi-play joints. Joints with the ability to move such as hips, knees and shoulders are called play joints.
**[Click here to buy now from official website of Pharma Flex XR](https://capsules24x7.com/pharma-flex-kr)**
## Joints are covered with unique cells called cartilage material.
Cartilaginous tissue plays an important role in the muscles and also the skeletal system, offering the bones to fold over one another while protecting from damage the locations where the bones collaborate without rubbing.
MOVEMENT IS NECESSARY FOR HEALTHY BONES If you want your joints to be healthier, the first and most important thing you need to do is move. Many people with joint disorders choose not to move to avoid worsening pain.
However this is not correct! Especially those with diseases such as arthritis (usually older) are unable to move due to broad social ideas. Researchers who have explored this have proven that this belief is far from over the top!
According to the results of the survey, those who suffer from arthritis or who have actually played sports in a previous life have healthier joints than those who don't!
Because the task strengthens the muscle tissue, it helps the joint skeleton. YOUR JOINING IS CRUCIAL
In particular, those with musculoskeletal system problems must be related to professionals when doing sports activities, and ideally, they must do sports activities with specific skills.
Making and repeating forced activities can also lead to the development of existing discomfort.
Quick turns, turns or sudden reflexes can be counted among preventive movements. Extending a healthy and balanced life through what you have. GOOD NUTRITION Some health problems stem from wrong eating habits.
For muscle mass and a healthy skeletal system; cartilage material, muscle mass, and even bone.
These are green leafy vegetables, zucchini, olive oil, citrus and even turkey meat! Vitamin C, K, Vitamins found in environmentally friendly leafy vegetables, magnesium, iron and calcium are very beneficial to improve your joints.
Turkey meat contains healthy protein without fat, which is very popular in our society, but which has a reparative effect on cartilage material and muscle tissue structure. Individuals with similar problems must add turkey meat to the diet checklist.
VERY CONSIDERING CYLONUS The musculoskeletal system is like the supporting pillars of the body. Therefore, the weight they have to carry is a great value in terms of damage.
If you are taking medications such as pain relievers as pain relievers to help deal with persistent bone or joint pain, and also you want to know about safer options, you are in the right place.
## Summary
If you feel pain in your back, shoulders, knees, or elsewhere, there are a number of natural remedies that can help ease the symptoms of joint discomfort, which include stiffness and difficulty walking. Some studies show that about one-third of all adults experience joint pain every month.
**[Click here to buy now from official website of Pharma Flex XR](https://capsules24x7.com/pharma-flex-kr)**
|
Likich/mistral-finetune-qualcoding-5 | Likich | 2024-06-27T12:00:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:00:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AyushSar45/Fine_Tuned_SatarcoderV2 | AyushSar45 | 2024-06-27T12:00:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:00:42Z | Entry not found |
yusufdemrr/ReinforcePixelCopter | yusufdemrr | 2024-06-27T12:03:59Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-06-27T12:02:33Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforcePixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 26.10 +/- 25.29
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bezzam/digicam-celeba-unet4M-unrolled-admm5-unet4M | bezzam | 2024-06-27T12:06:13Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-06-27T12:05:49Z | ---
license: mit
---
|
Masallah/SDVN6-RealXL | Masallah | 2024-06-27T12:14:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:07:29Z | Entry not found |
shng2025/xlm-roberta-base-finetuned-panx-all | shng2025 | 2024-06-28T16:45:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-06-27T12:07:53Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1767
- F1: 0.8508
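A minimal usage sketch, assuming the checkpoint is loaded through the standard 🤗 Transformers token-classification pipeline (the example sentence is illustrative, and the exact entity labels depend on the PAN-X label set used during fine-tuning):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an NER pipeline; "simple" aggregation
# merges word pieces back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="shng2025/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean works for Google in Mountain View, California."))
```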
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
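Expressed as 🤗 `TrainingArguments`, the configuration above looks roughly like this; the output directory and the rest of the `Trainer` wiring (datasets, collator, metrics) are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```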
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.295 | 1.0 | 835 | 0.1898 | 0.8188 |
| 0.1556 | 2.0 | 1670 | 0.1714 | 0.8372 |
| 0.1025 | 3.0 | 2505 | 0.1767 | 0.8508 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Kabiru/NPKRecommender | Kabiru | 2024-06-27T14:15:04Z | 0 | 2 | null | [
"joblib",
"random-forest",
"regression",
"agriculture",
"soil-nutrients",
"dataset:custom",
"license:mit",
"region:us"
]
| null | 2024-06-27T12:08:43Z | ---
license: mit
datasets:
- custom
metrics:
- mean_squared_error
- mean_absolute_error
- r2_score
model_name: Random Forest Regressor for Crop Nutrient Prediction
tags:
- random-forest
- regression
- agriculture
- soil-nutrients
---
# Random Forest Regressor for Crop Nutrient Prediction
## Overview
This model predicts the nutrient needs (Nitrogen, Phosphorus, Potassium) for various crops based on features like crop type, target yield, field size, and soil properties. It is trained using a Random Forest Regressor.
## Training Data
The model was trained on a custom dataset containing the following features:
- Crop Name
- Target Yield
- Field Size
- pH (water)
- Organic Carbon
- Total Nitrogen
- Phosphorus (M3)
- Potassium (exch.)
- Soil moisture
The target variables are:
- Nitrogen (N) Need
- Phosphorus (P2O5) Need
- Potassium (K2O) Need
## Model Training
The model was trained using a Random Forest Regressor. Below are the steps taken for training:
1. Data preprocessing: handling missing values, scaling numerical features, and one-hot encoding categorical features.
2. Splitting the dataset into training and testing sets.
3. Training the Random Forest model on the training set.
4. Evaluating the model on the test set.
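A rough sketch of those steps with scikit-learn, assuming a `ColumnTransformer` for the preprocessing and a multi-output `RandomForestRegressor`; the file name and column groupings below are assumptions, not the original training script:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ['Target Yield', 'Field Size', 'pH (water)', 'Organic Carbon',
                'Total Nitrogen', 'Phosphorus (M3)', 'Potassium (exch.)', 'Soil moisture']
categorical_cols = ['Crop Name']
target_cols = ['Nitrogen (N) Need', 'Phosphorus (P2O5) Need', 'Potassium (K2O) Need']

df = pd.read_csv('crop_nutrients.csv')  # hypothetical path to the custom dataset
X, y = df[numeric_cols + categorical_cols], df[target_cols]

preprocess = ColumnTransformer([
    ('num', StandardScaler(), numeric_cols),
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_cols),
])
rf_model = Pipeline([
    ('prep', preprocess),
    ('rf', RandomForestRegressor(random_state=42)),  # supports multi-output targets natively
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rf_model.fit(X_train, y_train)
```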
## Evaluation Metrics
The model was evaluated using the following metrics:
- Mean Squared Error (MSE)
- Mean Absolute Error (MAE)
- R-squared (R2) Score
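Under the same assumptions as the sketch above, the reported metrics can be computed with scikit-learn's helpers on the held-out split:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# rf_model, X_test and y_test continue from the training sketch above.
y_pred = rf_model.predict(X_test)
print("MSE:", mean_squared_error(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))
print("R2: ", r2_score(y_test, y_pred))
```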
## How to Use
### Input Format
The model expects input data in JSON format with the following fields:
- "Crop Name": String
- "Target Yield": Numeric
- "Field Size": Numeric
- "pH (water)": Numeric
- "Organic Carbon": Numeric
- "Total Nitrogen": Numeric
- "Phosphorus (M3)": Numeric
- "Potassium (exch.)": Numeric
- "Soil moisture": Numeric
### Preprocessing Steps
1. Load your input data.
2. Ensure all required fields are present and in the expected format.
3. Handle any missing values if necessary.
4. Scale numerical features based on the training data.
5. One-hot encode categorical features (if applicable).
### Inference Procedure
#### Example Code:
```python
import joblib  # sklearn.externals.joblib was removed from scikit-learn; use the standalone joblib package
import pandas as pd

# Load the trained model
model = joblib.load('ModelV2.joblib')

# Example input data
new_data = {
    'Crop Name': 'apple',
    'Target Yield': 1200.0,
    'Field Size': 1.0,
    'pH (water)': 5.76,
    'Organic Carbon': 12.9,
    'Total Nitrogen': 1.1,
    'Phosphorus (M3)': 1.2,
    'Potassium (exch.)': 1.7,
    'Soil moisture': 11.4
}

# Preprocess the input data
input_df = pd.DataFrame([new_data])

# One-hot encode the crop column, then align the columns with those seen during
# training. The original snippet referenced an undefined `X`; `feature_names_in_`
# is available when the model was fitted on a pandas DataFrame.
input_df = pd.get_dummies(input_df, columns=['Crop Name'])
for col in model.feature_names_in_:
    if col not in input_df.columns:
        input_df[col] = 0
input_df = input_df[model.feature_names_in_]

# Make predictions
predictions = model.predict(input_df)
print("Predicted nutrient needs:")
print(predictions)
```
|
habulaj/155490133094 | habulaj | 2024-06-27T12:09:02Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:08:56Z | Entry not found |
Antrap151/Dragon1214213 | Antrap151 | 2024-06-27T12:11:01Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:11:01Z | Entry not found |
Likich/llama3-finetune-qualcoding-500no | Likich | 2024-06-27T12:12:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:12:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
1nhye/casual | 1nhye | 2024-06-27T12:26:14Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:13:06Z | Entry not found |
erayyapagci/berturk-onnx | erayyapagci | 2024-06-27T12:18:49Z | 0 | 0 | null | [
"onnx",
"region:us"
]
| null | 2024-06-27T12:15:40Z | Entry not found |
emerie/tokenizer | emerie | 2024-06-27T12:17:55Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:17:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
newih/korean | newih | 2024-06-27T12:32:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:18:59Z | Entry not found |
VKapseln475/Nexalyn785 | VKapseln475 | 2024-06-27T12:33:35Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:20:32Z | # Nexalyn Norway Experiences and Reviews - Dosage and Intake, Nexalyn Official Price, Buy 2024
Nexalyn Norge is a powerful dietary supplement developed specifically for men who want to raise their testosterone levels naturally. The formula is made from a blend of natural ingredients, including herbs and extracts known for their ability to support hormonal balance and promote male health. Nexalyn stands out on the market because of its scientifically backed components, which are both safe and effective for daily use.
## **[Click here to buy now from the official Nexalyn website](https://ketogummies24x7.com/nexalyn-no)**
## Key ingredients and their benefits for sexual health:
Key ingredients play an important role in the effectiveness of any dietary supplement, and the Nexalyn testosterone booster formula is no exception. Let us take a closer look at some of the key ingredients in this powerful formula and how they may benefit your sexual health.
Horny Goat Weed, also known as Epimedium, has been used for centuries in traditional Chinese medicine to improve libido and treat erectile dysfunction. It contains icariin, a compound that may help increase blood flow to the penis, resulting in stronger and longer-lasting erections.
Tongkat Ali Root Extract is another potent ingredient with aphrodisiac properties. It may work by increasing testosterone levels, which can lead to greater stamina, improved muscle mass and enhanced sexual performance.
Saw Palmetto is associated with prostate health, but it also plays a role in supporting overall sexual well-being. By inhibiting the conversion of testosterone into dihydrotestosterone (DHT), saw palmetto may help maintain a healthy hormone balance and support optimal sexual function.
## What is the price of Nexalyn Testo Booster?
A single bottle of the Nexalyn male enhancement formula costs just USD 69.95 in Australia. You can check the official Nexalyn Testosterone Enhancer website to find the cost in your country. We list the price for only a few countries here. Check below:
Nexalyn price in South Africa: R925 per bottle
Nexalyn price in the United Kingdom: £44.95 per bottle
Nexalyn price in Spain: €44.95 per bottle
Nexalyn price in Canada: CAD 64.95 per bottle
Nexalyn price in Singapore: USD 49.95 per bottle
Nexalyn price in the UAE: USD 49.95 per bottle
Nexalyn price in New Zealand: NZ$79.95 per bottle
## How does this product help you increase your energy levels?
If you lack energy and stamina, Nexalyn testo booster capsules in South Africa may be the solution you have been looking for. This powerful supplement contains ingredients that may help boost energy levels and revitalize your body.
It may work by raising testosterone levels, which can lead to improved energy and vitality. It may also work by enhancing libido and sexual performance. The product contains compounds that can increase nitric oxide production, which may help improve blood flow throughout the body, including to the muscles. This improved circulation can lead to higher energy levels.
It may support hormonal balance while providing a natural source of antioxidants that can promote overall well-being. The Nexalyn testosterone booster plays an important role in supporting healthy hormone levels in the body as well as promoting healthy prostate function, both of which can contribute to increased energy levels.
By adding this product to your daily routine, you may experience a noticeable improvement in your overall energy levels throughout the day. Whether it is handling tasks at work or enjoying intimate moments with your partner, this supplement can give you the extra boost you need to perform with power.
## **[Click here to buy now from the official Nexalyn website](https://ketogummies24x7.com/nexalyn-no)** |
hansa15100/model_a10_r16_epoch50_opeimage | hansa15100 | 2024-06-27T12:22:30Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-06-27T12:22:01Z | Entry not found |
hansa15100/model_a10_r16_epoch30_opeimage | hansa15100 | 2024-06-28T09:50:14Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-06-27T12:22:51Z | Entry not found |
froyoiscool13/klee | froyoiscool13 | 2024-06-27T12:24:19Z | 0 | 0 | null | [
"license:unknown",
"region:us"
]
| null | 2024-06-27T12:24:19Z | ---
license: unknown
---
|
MarzottiAlessia/FirstModel100-1500 | MarzottiAlessia | 2024-06-27T12:24:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:24:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Likich/falcon-finetune-qualcoding-500no | Likich | 2024-06-27T12:25:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:25:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
InderV94/gemma_continued_finetuned | InderV94 | 2024-06-27T12:27:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:26:34Z | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** InderV94
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
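A minimal loading sketch, assuming the checkpoint is loaded back through Unsloth's `FastLanguageModel` API; the sequence length, device and prompt below are assumptions:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit, matching the 4-bit Gemma base it was trained from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="InderV94/gemma_continued_finetuned",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Continue this text: ", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```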
|
starnet/18-star21-06-27 | starnet | 2024-06-27T12:34:12Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| null | 2024-06-27T12:27:07Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
jointriple/brand_classification_2_20240627_model_1 | jointriple | 2024-06-27T12:28:06Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:eu"
]
| null | 2024-06-27T12:28:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/8454265240 | habulaj | 2024-06-27T12:28:24Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:28:20Z | Entry not found |
detek/FT_synth_llama-3-8b-instruct-bnb-4bit_LORA | detek | 2024-06-27T12:31:23Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-27T12:30:31Z | Entry not found |
trustvare/pst-to-pdf-converter | trustvare | 2024-06-27T12:31:25Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:30:58Z | TrustVare PST to PDF Converter is an excellent tool for this job. It converts PST files to PDF, preserving attachments and message attributes such as From, Subject, and Date, and its simple GUI suits both technical and non-technical users. To save Outlook emails in PDF format, the software lets you add either a single PST file or multiple PST files and folders at once. It also offers advanced options: "Save in the Same Folder" keeps the output in the same location as the source files, the folder hierarchy is maintained throughout the conversion, and all attachments are converted to PDF as well. Use these options according to your preferences. The tool supports all versions of Windows, Outlook, and Adobe, and a free demo pack lets every user convert an initial few PST files to PDF.
Visit Here - https://www.trustvare.com/pst/pdf/ |
jayoohwang/alpham_round2 | jayoohwang | 2024-06-27T17:42:21Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-27T12:31:15Z | Entry not found |
bsmani/paligemma-3b-pt-224-caption | bsmani | 2024-06-27T12:31:24Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:31:24Z | Entry not found |
detek/FT_synth_llama-3-8b-instruct-bnb-4bit_LORA_merged | detek | 2024-06-27T12:33:29Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-27T12:32:34Z | Entry not found |
MinhhMinhh/Jimin-by-MinhMinh | MinhhMinhh | 2024-06-27T12:36:22Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-06-27T12:33:05Z | ---
license: openrail
---
|
detek/FT_synth_llama-3-8b-instruct-bnb-4bit_merged_16bit | detek | 2024-06-27T13:04:36Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2024-06-27T12:34:45Z | Entry not found |
jointriple/brand_classification_1_20240627_tokenizer_1 | jointriple | 2024-06-27T12:35:41Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:eu"
]
| null | 2024-06-27T12:35:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adityakorade/documental_llm | adityakorade | 2024-06-27T12:37:55Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:35:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hansa15100/model_noqlora_r16_epoch10_openimage | hansa15100 | 2024-06-27T12:43:49Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-06-27T12:36:15Z | Entry not found |
Supersonic001/DynAIA2 | Supersonic001 | 2024-06-27T12:37:14Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:37:14Z | Entry not found |
Cyanex/D.r.e.a.m_Forge | Cyanex | 2024-06-27T12:38:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-06-27T12:38:05Z | Entry not found |
Timeset/timeset-icp | Timeset | 2024-06-27T12:38:50Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:38:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adityakorade/new_documental_llm | adityakorade | 2024-06-27T12:39:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:38:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
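No usage snippet is provided in this card. A minimal sketch, assuming the repository loads with the standard 🤗 transformers auto classes; the causal-LM head is an assumption, since the card does not state the model's architecture or task:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Repository id taken from this card; the causal-LM head is assumed, not documented.
model_id = "adityakorade/new_documental_llm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt only; the intended input format is not described in the card.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```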
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Likich/tinyllama-finetune-qualcoding-500no | Likich | 2024-06-27T12:40:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-27T12:40:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
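No usage snippet is provided in this card. A minimal sketch, assuming the repository contains full model weights loadable through the transformers pipeline API; the "text-generation" task is inferred from the TinyLlama base implied by the repository name, not stated in the card:

```python
from transformers import pipeline

# Repository id taken from this card; the text-generation task is an assumption.
generator = pipeline(
    "text-generation",
    model="Likich/tinyllama-finetune-qualcoding-500no",
)

# Illustrative prompt only; the expected prompt format for qualitative coding
# is not documented in the card.
result = generator("Assign a qualitative code to the following quote:", max_new_tokens=40)
print(result[0]["generated_text"])
```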
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Longtianmu/simple_word_embedding_model_for_depression_detection | Longtianmu | 2024-06-27T18:40:20Z | 0 | 0 | null | [
"zh",
"license:cc-by-4.0",
"region:us"
]
| null | 2024-06-27T12:40:54Z | ---
license: cc-by-4.0
language:
- zh
--- |