---
license: deepfloyd-if-license
datasets:
- microsoft/orca-agentinstruct-1M-v1
- OpenCoder-LLM/opc-sft-stage1
- fka/awesome-chatgpt-prompts
- HuggingFaceTB/smoltalk
- alpindale/two-million-bluesky-posts
- bluesky-community/one-million-bluesky-posts
- dijihax/Dataset
- internlm/Lean-Workbook
- PleIAs/common_corpus
- O1-OPEN/OpenO1-SFT
- allenai/tulu-3-sft-mixture
- OpenCoder-LLM/RefineCode-code-corpus-meta
- OpenCoder-LLM/opc-fineweb-code-corpus
- iamtarun/python_code_instructions_18k_alpaca
- codeparrot/github-code
- nenad1002/quantum_science_research_dataset
- quantumiracle-git/robotinder-data
- open-llm-leaderboard-old/details_quantumaikr__KoreanLM-hf
- chemora/EntanglementDetectionDataSet
- glaiveai/glaive-function-calling-v2
- Salesforce/xlam-function-calling-60k
- NousResearch/hermes-function-calling-v1
- Younes-Abdeahad-Software-Requirements/FNFC-Functional_Non-Functional_Classification
- cgoosen/prompt_injection_password_or_secret
- google/frames-benchmark
- Kaeyze/computer-science-synthetic-dataset
- gretelai/gretel-text-to-python-fintech-en-v1
- Vezora/Tested-143k-Python-Alpaca
- Nan-Do/instructional_code-search-net-python
- hackaprompt/hackaprompt-dataset
- hackercupai/hackercup
- OpenPipe/hacker-news
- open-phi/programming_books_llama
- kanhatakeyama/wizardlm8x22b-logical-math-coding-sft
- datatune/LogiCoT
- kanhatakeyama/LogicalDatasetsByMixtral8x22b
- dongyu0205/working-memory-capacity-of-ChatGPT
- memorylost731/linux_man_pages_library
- mmathys/openai-moderation-api-evaluation
- BAAI/IndustryCorpus2_current_affairs_government_administration
- bigcode/admin
- HuggingFaceFW/admin
- HuggingFaceFW/fineweb-edu
- HuggingFaceFV/finevideo
- lmms-lab/LLaVA-Video-178K
- Wild-Heart/Disney-VideoGeneration-Dataset
- DL3DV/DL3DV-ALL-video
- laion/laion-high-resolution
- joey234/mmlu-high_school_computer_science-neg
- sentence-transformers/embedding-training-data
- Cohere/wikipedia-22-12-en-embeddings
- philschmid/finanical-rag-embedding-dataset
- jwaters8978/web_scraper_dataset
- jwaters8978/web_scraper_dataset_2
- ammarnasr/the-stack-java-clean
- angie-chen55/javascript-github-code
- anjandash/java-8m-methods-v2
- Vikhrmodels/physics_big
- k-mktr/improved-flux-prompts-photoreal-portrait
- jacobcd52/physics-papers
- zeroshot/arxiv-biology
- joey234/mmlu-college_biology-neg
- cmcmaster/rheumatology-biologics-dataset
- HAERAE-HUB/QARV-KOEN-10M-Entangled
- Qutiba/LinuxCommands_Virsh_KVM_Docker_2
- MattCoddity/docker_ps
- adeocybersecurity/DockerCommand
- JetBrains-Research/lca-codegen-huge
- chaofengc/IQA-PyTorch-Datasets
- open-source-metrics/pytorch-image-models-dependents
- nodchip/tanuki-.nnue-pytorch-2024-07-30.1
- karpathy/fineweb-edu-100B-gpt2-token-shards
- google/code_x_glue_cc_code_completion_token
- edbeeching/gia-dataset-tokenized-2024-2
- OpenDILabCommunity/MasterMind
- LinkSoul/Chinese-LLaVA-Vision-Instructions
- hoang-quoc-trung/fusion-image-to-latex-datasets
- OpenCoder-LLM/opc-fineweb-math-corpus
- xinlai/Math-Step-DPO-10K
- hendrycks/competition_math
- meta-math/MetaMathQA
- microsoft/BiomedParseData
- Twenty1/aws-lambda-developer-guide-docs
- developer0hye/korocr
- DeveloperOats/DBPedia_Classes
- developerZoyal/full_drugs_data
- LangChainHub-Prompts/LLM_Bash
- OS-Copilot/OS-Atlas-data
- nvidia/OpenMathInstruct-2
- nvidia/HelpSteer2
language:
- en
- es
- it
- ar
- id
- zh
- ja
base_model:
- Dijitaal/DijiHax.Spooky.Pi
- Qwen/Qwen2.5-Coder-32B-Instruct
- ayjays132/Quantum-NeuralAdaptiveLearningSystem
- neuralmagic/Sparse-Llama-3.1-8B-2of4
- bigscience/bloom
- bigcode/starcoder
- bigcode/starcoder2-3b
- wolfram/Athene-V2-Chat-4.65bpw-h6-exl2
- wolfram/Mistral-Large-Instruct-2411-2.75bpw-h6-exl2
- stabilityai/stable-diffusion-3.5-large
- openai/whisper-large-v3-turbo
- black-forest-labs/FLUX.1-dev
- black-forest-labs/FLUX.1-Fill-dev
- black-forest-labs/FLUX.1-Redux-dev
- si-pbc/hertz-dev
- InstantX/FLUX.1-dev-IP-Adapter
- unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF
- Qwen/Qwen2.5-Coder-32B-Instruct-GGUF
- Qwen/QwQ-32B-Preview
- tencent/HunyuanVideo
- tencent/HunyuanVideo-PromptRewrite
- AIDC-AI/Marco-o1
- DevQuasar/AIDC-AI.Marco-o1-GGUF
- Lightricks/LTX-Video
- NexaAIDev/OmniVLM-968M
- ali-vilab/In-Context-LoRA
- udev4096/docker-commands
- philomath-1209/programming-language-identification
- vivecccccc/phi-2_kqa-program
- Qwen/Qwen2.5-Coder-7B-Instruct
- featherless-ai-quants/Qwen-Qwen2.5-Coder-32B-Instruct-GGUF
- aws-neuron/optimum-neuron-cache
- nm-testing/TinyLlama-1.1B-compressed-tensors-kv-cache-scheme
- vuiseng9/ov-gpt2-fp32-no-cache
- RichardErkhov/vuiseng9_-_ov-gpt2-fp32-no-cache-gguf
- ntc-ai/SDXL-LoRA-slider.eye-catching
- stabilityai/stable-diffusion-3.5-large-turbo
- nvidia/NV-Embed-v2
- jinaai/jina-embeddings-v3
- nvidia/MM-Embed
- nomic-ai/nomic-embed-text-v1.5
- nomic-ai/nomic-embed-text-v1.5-GGUF
- stabilityai/stablecode-completion-alpha-3b-4k
- stabilityai/stablecode-completion-alpha-3b
- tensorblock/stablecode-completion-alpha-3b-4k-GGUF
- RaniAimlTest/multi-user-chat-open-llama-7b-v2-open-instruct-completions-only
- Iker/Llama-3-Instruct-Neurona-8b
- NeuroWhAI/ko-gemma-2-9b-it-fn
- Nexusflow/Athene-V2-Chat
- comfyanonymous/flux_text_encoders
- city96/t5-v1_1-xxl-encoder-gguf
- mlabonne/NeuralDaredevil-8B-abliterated
- Sao10K/I_am_alive_yay
metrics:
- code_eval
- competition_math
- confusion_matrix
- codeparrot/apps_metric
- bertscore
- BucketHeadP65/confusion_matrix
- precision
- perplexity
- phonemetransformers/segmentation_scores
- Aledade/extraction_evaluation
- wiki_split
- berkatil/map
- spearmanr
- ter
- chrf
- He-Xingwei/sari_metric
- KaliSurfKukt/brier_score
- LottieW/accents_unplugged_eval
- DaliaCaRo/accents_unplugged_eval
- ecody726/bertscore
- Yeshwant123/mcc
- ola13/precision_at_k
- Ikala-allen/relation_extraction
- charcut_mt
- pearsonr
- poseval
- Pipatpong/perplexity
- NCSOFT/harim_plus
- gorkaartola/metric_for_tp_fp_samples
- giulio98/code_eval_outputs
- f1
new_version: Dijitaal/DijiHax.Spooky.Pi
library_name: adapter-transformers
tags:
- chemistry
- biology
- code
- merge
- climate
- medical
- text-generation-inference
- legal
- music
- art
- moe
- finance
- not-for-all-audiences
pipeline_tag: video-text-to-text
---
|
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model card is a base template for new models. It was generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
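Until this card names its actual weights, here is a minimal sketch assuming a `transformers`-compatible causal language model. The model id is an assumption taken from the `base_model` list in the metadata (Qwen/Qwen2.5-Coder-32B-Instruct is used as a stand-in); swap in this repository's id once it is documented.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# ASSUMPTION: placeholder id from the base_model metadata, not this repo's weights.
MODEL_ID = "Qwen/Qwen2.5-Coder-32B-Instruct"

def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model; device_map='auto' spreads large models across GPUs."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

def generate(tokenizer, model, prompt: str, max_new_tokens: int = 256) -> str:
    """Run greedy generation and return only the newly generated text."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Slice off the prompt tokens so only the completion is decoded.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Note that a 32B model needs tens of GB of GPU memory; the GGUF variants listed in the metadata are the usual route for CPU or low-VRAM inference.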
|
|
|
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]
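The metadata lists several classification metrics (`precision`, `f1`, `confusion_matrix`) without describing how they would be computed. As an illustration only, not this model's actual evaluation code, here is a pure-Python sketch of those three for a binary task:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Return (TP, FP, FN, TN) for a binary classification run."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def precision_f1(y_true, y_pred):
    """Precision and F1 from the confusion counts, guarding against empty denominators."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, f1

# Tiny worked example: both precision and F1 come out to 2/3 here.
p, f = precision_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In practice the Hugging Face `evaluate` library (`evaluate.load("f1")`, etc.) would be the idiomatic way to compute the metric ids listed in the metadata.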
|
|
|
### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
|
|
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
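Once the fields above are filled in, the calculator's estimate reduces to a simple product: power draw × hours × grid carbon intensity, optionally scaled by the data center's PUE. A sketch, where the 1.58 PUE and the use of per-GPU TDP as the power figure are both assumptions to be replaced with measured values:

```python
def estimate_co2(gpu_power_w, hours, carbon_intensity_kg_per_kwh, pue=1.58, n_gpus=1):
    """Estimate training emissions in kg CO2eq.

    gpu_power_w: per-GPU power draw in watts (TDP is a rough upper bound).
    carbon_intensity_kg_per_kwh: grid carbon intensity for the compute region.
    pue: data-center power usage effectiveness (1.58 assumed as a typical average).
    """
    energy_kwh = gpu_power_w / 1000 * hours * n_gpus * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Hypothetical run: 8 GPUs at 400 W for 24 h on a 0.432 kg/kWh grid -> ~52.4 kg CO2eq.
example_kg = estimate_co2(400, 24, 0.432, n_gpus=8)
```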
|
|
|
## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]