| Column | Type | Values / Lengths |
|:-------|:-----|:-----------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
reinforcement-learning
ml-agents
# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: yy1636/SoccerTwos-base-15M
3. Select your *.nn / *.onnx file
4. Click on "Watch the agent play" 👀
{"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]}
yy1636/SoccerTwos-base-15M
null
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
null
2024-04-24T05:30:47+00:00
null
null
{}
manhdofts03/textual_inversion_cat
null
[ "region:us" ]
null
2024-04-24T05:34:03+00:00
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid-Llama-3-6B-Pruned

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.IQ4_XS.gguf) | IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.Q5_K_S.gguf) | Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.Q5_K_M.gguf) | Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.Q8_0.gguf) | Q8_0 | 6.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF/resolve/main/Mermaid-Llama-3-6B-Pruned.f16.gguf) | f16 | 12.7 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
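As a rough sanity check on the quant table above, bits-per-weight (bpw) can be estimated from the listed file sizes. This sketch assumes "GB" is read as decimal gigabytes and infers the parameter count from the f16 file, which is 16 bpw by definition; none of these derived figures appear in the card itself.

```python
# Estimate bits per weight (bpw) from the file sizes in the quant table above.
# Assumption: "GB" means decimal gigabytes; the f16 file is 16 bpw by
# definition, which pins the approximate parameter count of the pruned model.
f16_size_gb = 12.7
n_params_b = f16_size_gb * 8 / 16  # ~6.35B parameters

def bpw(size_gb: float) -> float:
    """Approximate bits per weight for a quant file of the given size."""
    return size_gb * 8 / n_params_b

print(round(bpw(6.8), 2))  # Q8_0: ~8.57 bpw (8-bit weights plus block scales)
print(round(bpw(2.9), 2))  # IQ3_XS: ~3.65 bpw
```

The Q8_0 estimate landing slightly above 8 bpw is expected: GGUF block quants store per-block scale factors on top of the quantized weights.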
{"language": ["en"], "license": "cc-by-4.0", "library_name": "transformers", "base_model": "TroyDoesAI/Mermaid-Llama-3-6B-Pruned", "quantized_by": "mradermacher"}
mradermacher/Mermaid-Llama-3-6B-Pruned-GGUF
null
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid-Llama-3-6B-Pruned", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:34:24+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-happy2sad-1k-50
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:34:56+00:00
text-generation
transformers
# TAIDE-LX-7B-GGUF - This is quantized version of [taide/TAIDE-LX-7B](https://huggingface.co/taide/TAIDE-LX-7B) created using llama.cpp # Model Description * The [TAIDE project](https://taide.tw/index) aims to develop a generative AI dialogue engine model that is tailored to the linguistic and cultural characteristics of Taiwan, while also establishing a trustworthy AI environment. By combining academic, industrial, and research resources, the project seeks to advance the development of trustworthy generative AI, enhancing Taiwan's international competitiveness, promoting industrial development, and reducing dependence on foreign technologies. * The large language models developed in this project are based on Meta's [LLaMA2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) release, incorporating text and training materials from various fields in Taiwan to enhance the model's ability to respond in Traditional Chinese and perform well in specific tasks. The publicly released models are as follows: * [TAIDE-LX-7B](https://huggingface.co/taide/TAIDE-LX-7B): This model is a continuous pretrained version with Traditional Chinese data based on LLaMA2-7b. It is suitable for scenarios where users intend to further fine-tune the model. Since the pre-trained model has not undergone fine-tuning and preference alignment, it may produce malicious or unsafe outputs. Please use with caution. * [TAIDE-LX-7B-Chat](https://huggingface.co/taide/TAIDE-LX-7B-Chat): This model enhances office-related tasks and multi-turn question-and-answer dialogue capabilities through instruction tuning based on TAIDE-LX-7B. It is suitable for scenarios such as chat conversations or task assistance. TAIDE-LX-7B-Chat also provides a [4-bit quantization model](https://huggingface.co/taide/TAIDE-LX-7B-Chat-4bit). The quantization model is primarily offered for user convenience but may affect performance and introduce unforeseen issues. Users are advised to understand and take note of this. 
# Model Parameters * Parameters: 7B * Max context length: 4K * Training token in Traditional Chinese: 41.44B * Training time: 1531.82 H100 GPU Hours # Features * Expanding the character and word count by 24,720 to enhance the model's ability to process Traditional Chinese. * Strictly vetting training data for the model to improve its trustworthiness and applicability. * Enhancing the model's performance for tasks such as summarization, writing articles, writing letters, and translating between Chinese and English, which are commonly used in office settings. * Strengthening the model's understanding of Taiwan-specific culture, terminology, and context. * Equip the model with multi-turn conversation capabilities. # Examples | Task | User input | Model output | | --- | -------- | ------- | | Writing articles | 請以以下內容為基礎,寫一篇文章:撰寫一篇作文,題目為《一張舊照片》,內容要求為:選擇一張令你印象深刻的照片,說明令你印象深刻的原因,並描述照片中的影像及背後的故事。記錄成長的過程、與他人的情景、環境變遷和美麗的景色。 | 以下是根據指令創作的一篇作文:<br><br>《一張舊照片》<br><br>在我塵封已久的家族相簿中,有一張特別吸引我的舊照片,那張照片記錄了我們家族成員在過往美好時光的歡聚,至今仍舊令人回味無窮。這張照片是我奶奶在二十多年前舉辦的一場家族聚會上所拍下的,照片中的人物包括了我的父母、祖父母、叔叔、姑姑和表哥等,大家臉上洋溢著幸福的笑容,彷彿時光靜止般地美好。<br><br>照片中的場景十分溫馨,大家圍坐在一張長方形的餐桌上,桌上擺滿了美味的食物和飲料。桌上的菜餚是傳統中式的佳餚,有香噴噴的烤全魚、嫩滑的雞肉和鮮甜的蔬菜,這些菜餚是當年奶奶親自烹調的,她手藝精湛,讓每個人都吃得津津有味。<br><br>最讓我印象深刻的是,照片中還拍到了我年幼時的模樣。我坐在桌子的另一端,手裡拿著一碗湯,臉上帶著滿足的笑容。那時的我才剛滿周歲,對於這場盛大的家族聚會還感到十分新奇。從這張照片中,我能感受到當年家族成員間的親密和溫暖,即使過了這麼多年,那份情感卻依然牢固地存在著。<br><br>這張照片背後有著動人的故事。在那個時代,我的祖父母和父母都還在讀中學,他們每天都會放學後到農田裡幫忙,辛勤地耕作以維持一家人的生計。儘管生活條件並不富裕,但他們仍舊保持著樂觀的態度和對生活的熱愛。每年到了秋天,只要收獲季節來臨,他們就會舉辦家族聚會,邀請所有親戚朋友前來共享豐收的喜悅。<br><br>隨著歲月的流逝,我逐漸長大,開始工作和生活。每次回鄉,我都會抽空去探望老人家,聽聽他們當年的往事,感受那段刻骨銘心的時光。而這張舊照片,也將一直陪伴著我,成為我珍貴回憶的一部分。它見證了我們的成長和變遷,也記錄下了那些難以忘懷的美好時刻。<br><br>這張照片不僅展示了家族的親情和溫暖,還折射出那個時代人們勤勞樸實的生活方式和對傳統的尊重。如今,隨著社會的快速發展,許多傳統的美德和風俗正在逐漸消失。我們應該好好珍惜這樣的回憶和文化,將它們傳承下去,讓後代子孫也能感受到那份來自歷史的溫情和傳承。<br><br>因此,我決定將這張照片好好保管起來,並常常拿出來看。它不僅是一張舊照片,更是一段寶貴的回憶和家族精神的象徵。我希望未來能有機會重現當年的盛況,再一次聚在一起,分享彼此的生活和故事,讓那份親情和友情繼續在時光中蔓延,永不消逝。 | | Writing letters | 請以品牌經理的身份,給廣告公司的創意總監寫一封信,提出對於新產品廣告宣傳活動的創意建議。 | 
尊敬的創意總監:<br><br>我寫這封信是為了向您提出一些關於我們新產品廣告宣傳活動的創意建議。我們公司即將推出一款全新的產品,為了確保它在市場上取得成功,我們希望透過一個引人注目的廣告宣傳活動來吸引消費者的注意。<br><br>在考慮廣告的創意和策略時,有幾個因素需要納入考量。首先,我們要針對目標客群進行研究,以瞭解他們的需求和喜好。其次,我們要確保廣告內容具有吸引力和說服力,能引起目標客群的共鳴。最後,我們要利用各種媒體平臺和數位行銷工具,以擴大廣告的影響力。<br><br>基於這些因素,我提出以下幾點創意建議:<br><br>1. 產品定位:我們可將這款新產品定位為高端、環保、健康和時尚的代表,強調其獨有的功能和特色。在廣告中,我們可以透過生動的視覺效果和簡潔的語言來傳達這些特點。<br>2. 故事行銷:在廣告中講述一個與產品功能相關的動人故事,讓消費者能更深入地瞭解產品所帶來的生活變化。例如,我們可以講述一位忙碌的職業婦女,如何使用我們的新產品在工作和生活中取得平衡的故事。<br>3. 名人效應:邀請一位受歡迎的公眾人物或意見領袖擔任產品代言人,利用他們的影響力來推廣產品。這不僅可以增加產品曝光度,還能讓更多消費者信任和信賴產品。<br>4. 跨界合作:與一些相關產業(如時尚、環保、健康等)進行跨界合作,共同推出限量版或聯名產品。這種合作不僅能吸引不同領域的消費者,還能為產品帶來更多話題性和收藏價值。<br>5. 數位行銷:充分利用社群媒體、搜尋引擎和電郵行銷等數位工具,在整個廣告宣傳過程中保持一致性的訊息和形象。我們也可製作一系列短視頻和線上活動,以增加消費者參與度和品牌忠誠度。<br>6. 創新形式:嘗試使用新穎的廣告形式,如AR(擴增實境)、VR(虛擬實境)和360度影片等,給消費者帶來全新的廣告體驗。這種創新形式不僅能吸引消費者目光,還能提高產品在市場上的辨識度。<br>7. 活動參與:在廣告宣傳活動中加入互動環節,如產品試用、抽獎和優惠券等,鼓勵消費者積極參與並分享他們的體驗。這將有助於口碑傳播,進一步擴大產品影響力。<br><br>綜合以上建議,我希望我們能製定出一套全面而富有創意的廣告宣傳活動計劃,以最大限度地提升新產品在市場上的競爭力。如果您有任何疑問或需要進一步討論,歡迎隨時與我聯絡。謝謝您的關注和支援!<br><br>祝商祺!<br><br>您的品牌經理<br><br>(註:此信件可根據實際情況和產品特點進行調整) | | Summarization | 
請將這篇文章精簡條理化:「產業創新條例第10條之2及第72條條文修正案」俗稱「台版晶片法」,針對半導體、電動車、5G等技術創新且居國際供應鏈關鍵地位公司,提供最高25%營所稅投抵優惠,企業適用要件包含當年度研發費用、研發密度達一定規模,且有效稅率達一定比率。<br>為因應經濟合作暨發展組織(OECD)國家最低稅負制調整,其中有效稅率門檻,民國112年訂為12%,113年料將提高至15%,但仍得審酌國際間最低稅負制實施情形。<br>經濟部官員表示,已和財政部協商進入最後階段,除企業研發密度訂在6%,目前已確認,企業購置先進製程的設備投資金額達100億元以上可抵減。<br>財政部官員表示,研商過程中,針對台灣產業與其在國際間類似的公司進行深入研究,在設備部分,畢竟適用產創10之2的業者是代表台灣隊打「國際盃」,投入金額不達100億元,可能也打不了。<br>至於備受關注的研發費用門檻,經濟部官員表示,歷經與財政部來回密切討論,研發費用門檻有望落在60億至70億元之間。<br>財政部官員指出,研發攸關台灣未來經濟成長動能,門檻不能「高不可攀」,起初雖設定在100億元,之所以會調降,正是盼讓企業覺得有辦法達得到門檻、進而適用租稅優惠,才有動力繼續投入研發,維持國際供應鏈關鍵地位。<br>經濟部官員表示,因廠商研發費用平均為30、40億元,其中,IC設計業者介於30億至60億元範圍,若將門檻訂在100億元,符合條件的業者較少、刺激誘因不足;此外,若符合申請門檻的業者增加,將可提高企業在台投資金額,財政部稅收也能因此獲得挹注。<br>IC設計業者近日頻頻針對產創10之2發聲,希望降低適用門檻,加上各國力拚供應鏈自主化、加碼補助半導體產業,經濟部官員表示,經濟部和財政部就產創10之2達成共識,爭取讓更多業者受惠,盼增強企業投資力道及鞏固台灣技術地位。<br>財政部官員表示,租稅獎勵的制定必須「有為有守」,並以達到獎勵設置目的為最高原則,現階段在打「國內盃」的企業仍可適用產創第10條、10之1的租稅優惠,共同壯大台灣經濟發展。<br>經濟部和財政部正就研發費用門檻做最後確認,待今明兩天預告子法之後,約有30天時間,可與業界進一步討論及調整,盼產創10之2能在6月上路。 | 台灣「產業創新條例」的修正案,又稱「台版晶片法」,旨在支持半導體、電動車及5G技術領先企業的創新。新政策提供最高25%的所得稅抵免,企業需滿足當年度研發費用達一定規模、研發密度超過6%以及有效稅率達一定比率等條件。因應國際間對企業最低稅負制度的調整,台灣的門檻預計分別於2023年和2024年達到12%和15%。目前,與財政部討論後,確定100億元的設備投資抵免是可行的誘因,且研發費用門檻預計會在60至70億元之間。對於降低門檻的要求,經濟部和財政部已達成共識,旨在擴大受惠範圍並鼓勵企業投資以維持供應鏈的關鍵地位。此外,針對IC設計業者的訴求,也考慮調降門檻。預計此法案將在近期內實施,相關討論和調整將在預告期後進一步進行。 | | Translation | 以下提供英文內容,請幫我翻譯成中文。Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. | 東山咖啡以得天獨厚的地位,不斷精研的製作工藝著稱,風味獲諸多咖啡愛好者推崇。 | # Training methods * Software / hardware spec * GPU: H100 * Training Framework: PyTorch * Data preprocessing * Character normalization * Deduplication * Denoise * Html tag、javascript in web content * Non-standard characters or garbage characters * Posts with an insufficient number of characters * Removing specific formats such as extra line breaks added for formatting purposes * Removing personal information such as emails and phone numbers. 
* Remove inappropriate content such as gambling, pornography, etc.. * Character and word expanding * Enhancing the performance of Traditional Chinese input and output, the expanded data include the following two parts: * Obtaining Chinese characters from the Ministry of Education's ["Variant Chinese Characters Dictionary" and "Corrected Characters Table"](https://dict.variants.moe.edu.tw/appendix.jsp?ID=1&ID=0). * Collecting over 5,000,000 sentences with more than 100 characters each from the Traditional Chinese Wikipedia, news articles, and the Chinese Common Crawl data (2.1G), used to train the tokenizer for Chinese characters and words. * Continuous pretraining (CP) * Supplementing the model with a large amount of reliable Traditional Chinese knowledge. * Hyper parameters * optimizer: AdamW * learning rate: 1e-4 * batch size: 1M tokens * epoch: 1 * Fine tune (FT) * Enabling the model to answer questions in Traditional Chinese. * Hyper parameters * optimizer: AdamW * learning rate: 5e-5 * batch size: 256K tokens * epoch: 3 # Training Data * Continuous pre-training data (about 140GB) | Dataset | Description | | --- | -------- | | Litigation Data | [Civil litigation data](https://judgment.judicial.gov.tw/FJUD/default.aspx) from various levels of courts in the judicial rulings, including data from 2013/01 to 2023/12. | | CNA news | The [CNA news](https://www.cna.com.tw/) includes daily news articles from June 1993 to June 2023, spanning a period of 30 years. The content covers various domains such as domestic and international politics, society, economy, culture, education, and lifestyle. | | ETtoday news | [ETtoday news](https://www.ettoday.net/) data, including data from 2011/10 to 2023/12. | | Legislative Yuan Gazette | The [Legislative Yuan Gazette](https://ppg.ly.gov.tw/ppg/) contains data from the 1st session of the 8th term to the 7th session of the 10th term. 
| | Publisher Website Book Introduction | Includes book introduction data from the websites of [SunColor](https://www.suncolor.com.tw/), [Gotop](https://www.gotop.com.tw/) publishers. | | Abstracts of GRB research projects | [GRB](https://www.grb.gov.tw/) is an information system that compiles research projects funded by government grants and their outcome reports. This dataset primarily includes research project abstracts from 1993 to 2023, including both Chinese and their English counterparts. | | Academic conference proceedings abstracts | The [database](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:) contains academic conference proceedings held in Taiwan from 1988 to 2009. | | Taiwan Panorama magazine | [Taiwan Panorama magazine](https://www.taiwan-panorama.com/) contains articles from July 1993 to June 2023, spanning 30 years. The content focuses on Taiwanese culture, tourism, and local customs. | | 樂詞網 | 《[樂詞網](https://terms.naer.edu.tw/)》covers approximately 187,000 academic terms in the humanities and social sciences, along with their translations. | | Data from various ministries and commissions | Including partial data from government department websites such as the Executive Yuan's "[National Overview](https://www.ey.gov.tw/state/)", the Ministry of Culture's "[National Cultural Memory Bank](https://memory.culture.tw/)", the National Development Council's "[Archives Support Teaching Network](https://art.archives.gov.tw/index.aspx)", the Ministry of Transportation's "[Traffic Safety Portal](https://168.motc.gov.tw/)", etc. | | Business Today | [Business Today](https://www.businesstoday.com.tw/) Magazine is a weekly magazine focused on finance. The dataset includes articles from 2008/01 to 2023/07. 
| | Mandarin and idiom dictionary from the Ministry of Education | Dataset including:<br>[Idiom Dictionary](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0): Contains 5,338 idioms, including definitions, original stories, usage explanations, and example sentences.<br>[Revised Mandarin Dictionary](https://dict.revised.moe.edu.tw/?la=0&powerMode=0): contains Chinese words and various vocabulary, including pronunciation, radicals, definitions, and other information, totaling approximately 165,539 entries.<br>[Concise Mandarin Dictionary](https://dict.concised.moe.edu.tw/?la=0&powerMode=0): is a condensed version of the "Revised Mandarin Dictionary", containing a total of 45,247 entries. | | SCITechVista | The dataset includes science news and popular science articles from the [SCITechVista](https://scitechvista.nat.gov.tw/) website. | | iKnow | The [iKnow](https://iknow.stpi.narl.org.tw/) platform provides information on market trends, strategic analysis, patent knowledge, and technology transaction information for Taiwan and the global technology industry. The dataset includes data from 2005/01 to 2023/07. | | Science Development Monthly Magazine | [Science Development Monthly Magazine](https://ejournal.stpi.narl.org.tw/sd) is a popular science publication published by the National Science Council (NSC) to promote science education. It includes articles from 2004/10 to 2020/12. In 2021, the magazine was relaunched as "[CharmingSCITech](https://www.charmingscitech.nat.gov.tw/)" quarterly, providing new knowledge on international technology issues. | | Legislation Database | The [Legislation Database](https://law.moj.gov.tw/) includes the latest central regulations, rules, draft bills, and local regulations issued by government agencies as of 2023/10. | | Local Government Tourism Websites | Covering partial data from tourism websites of local government counties and cities in Taiwan. 
| | Curriculum Guidelines from the National Institute of Education | The dataset includes curriculum guidelines for different subjects at various levels of education. | | CNA's English and Chinese Name Translation Database | The English and Chinese Name Translation Database of the Central News Agency (CNA) collects translations of foreign and Chinese surnames, personal names, organizations, and place names used in news. | | Fairy tales | A total of 20 fairy tale books, including "Tom Sawyer," "Peter Pan," "Alice's Adventures in Wonderland," "Uncle Long Legs," and more. | | RedPajama-Data-V2 | Extracting English data from the [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data) multilingual dataset | | MathPile-commercial | A mathematics-focused dataset obtained from [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial) | | Traditional Chinese Wikipedia Articles | The content of all articles in [Traditional Chinese Wikipedia](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91), up to January 2023. | | github-code-clean | An open-source code dataset on GitHub. After removing unlicensed code and documents. | * Fine tune data * The TAIDE team trains the LLaMA2 series models to generate fine-tuning data, which generates single or multi-turn conversations on topics such as world knowledge, creative writing, general knowledge, translation, summarization, programming, and Taiwanese values. The fine tune data consists of 128K prompt-response pairs and will be released publicly later. # Evaluation * taide-bench * Data * Tasks include writing articles, writing letters, summarizing articles, translating from English to Traditional Chinese, translating from Traditional Chinese to English. There are 500 questions in total. 
* data link: [taide-bench](https://huggingface.co/datasets/taide/taide-bench) * Evaluation method * LLM as a Judge by GPT-4 * code link: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval) * Scores | Model | Translating from Traditional Chinese to English | Translating from English to Traditional Chinese | Summarization | Writing articles | Writing letters | Average | | --- | ----- | ----- | ---- | ---- | ---- | --- | | TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 | | GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 | | LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 | | LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 | | LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 | # License * [TAIDE L Models Community License Agreement](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view) # Disclaimer * Due to limitations in its design architecture and the inevitable biases in data, any response from the LLM model does not represent the stance of TAIDE. Additional security measures should be implemented before use, and responses may also contain incorrect information. Users are advised not to fully trust the responses. # Development Team * [https://taide.tw/index/teamList](https://taide.tw/index/teamList) # Useful links * [TAIDE official website](https://taide.tw/index) * [TAIDE Huggingface](https://huggingface.co/taide) * [TAIDE Github](https://github.com/taide-taiwan) * [Kuwa AI](https://kuwaai.org/)
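Two quick arithmetic sanity checks on the numbers reported in this card (a sketch only; the 1M-token batch size is taken from the continuous-pretraining hyperparameters listed earlier, and the step count assumes exactly one pass over the data):

```python
# 1) The reported taide-bench average for TAIDE-LX-7B-Chat follows from the
#    five per-task scores in the table.
scores = [7.165, 7.685, 7.720, 9.635, 9.110]
average = sum(scores) / len(scores)
print(round(average, 3))  # 8.263, matching the table

# 2) Continuous pretraining on 41.44B Traditional Chinese tokens with a
#    1M-token batch for one epoch implies roughly this many optimizer steps.
cp_tokens = 41_440_000_000
batch_tokens = 1_000_000
steps = cp_tokens // batch_tokens
print(steps)  # 41440
```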
{"library_name": "transformers", "tags": ["llama"], "pipeline_tag": "text-generation", "base_model": "taide/TAIDE-LX-7B"}
QuantFactory/TAIDE-LX-7B-GGUF
null
[ "transformers", "gguf", "llama", "text-generation", "base_model:taide/TAIDE-LX-7B", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:36:59+00:00
null
null
{"license": "openrail"}
Homiebear/CandyCadet
null
[ "license:openrail", "region:us" ]
null
2024-04-24T05:38:18+00:00
text-generation
transformers
{"license": "mit"}
migueldeguzmandev/GPT2XL_RLLMv17-2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T05:39:03+00:00
null
null
{}
gpantaz/vl-pythia-eva-410m
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T05:39:42+00:00
null
null
Most up-to-date version of our Llama3-LLaVA (based on LLaVA 1.5) series of mmproj files.

* You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp).

# To use the multimodal **vision** capabilities of this model, you need to load the specified **mmproj** file, which can be found inside this model repo: [LLaVA MMProj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj)

* You can load the **mmproj** via the corresponding section in the interface:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
{}
ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated
null
[ "gguf", "region:us" ]
null
2024-04-24T05:40:08+00:00
null
null
{"license": "apache-2.0"}
Chaeseung/log2profile_Orion-14B-Chat_v1
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T05:40:08+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["llama-factory"]}
sanyuan0704/hhh-8b-slice
null
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T05:41:06+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-neutralization This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0459 - Bleu: 8.565 - Gen Len: 20.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 16 | 3.7800 | 7.2466 | 18.8049 | | No log | 2.0 | 32 | 3.0459 | 8.565 | 20.9268 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
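The step counts in the results table above imply a small training set: at batch size 8 with no gradient accumulation (none is listed in the hyperparameters), the 16 optimizer steps per epoch back out to roughly 128 training examples. A minimal sketch of that arithmetic — the dataset size itself is an inference, not something the card states:

```python
# Values taken from the card above; grad_accum_steps is assumed to be 1
# since the card lists no gradient accumulation.
train_batch_size = 8
grad_accum_steps = 1
steps_per_epoch = 16   # from the results table: epoch 1.0 falls at step 16

# steps_per_epoch = ceil(n_examples / effective_batch), so invert it for an
# upper bound on the dataset size (the final batch may be partial).
effective_batch = train_batch_size * grad_accum_steps
approx_examples = steps_per_epoch * effective_batch
print(approx_examples)
```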
{"license": "mit", "tags": ["simplification", "generated_from_trainer"], "metrics": ["bleu"], "base_model": "facebook/mbart-large-50", "model-index": [{"name": "mbart-neutralization", "results": []}]}
ramirces/mbart-neutralization
null
[ "transformers", "safetensors", "mbart", "text2text-generation", "simplification", "generated_from_trainer", "base_model:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:41:28+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Noboru-Ta/bert-base-japanese-v3-jnli
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:41:28+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: microsoft/Phi-3-mini-128k-instruct model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: sosoai/mixed_dataset type: alpaca dataset_prepared_path: val_set_size: 0.05 output_dir: ./phi3-sft-out sequence_len: 2048 sample_packing: true pad_to_sequence_len: true adapter: lora_model_dir: lora_r: lora_alpha: lora_dropout: lora_target_linear: lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 1 micro_batch_size: 2 num_epochs: 5 optimizer: adamw_torch adam_beta2: 0.95 adam_epsilon: 0.00001 max_grad_norm: 1.0 lr_scheduler: cosine learning_rate: 0.000003 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: True early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 eval_sample_packing: False evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.1 fsdp: fsdp_config: resize_token_embeddings_to_32x: true special_tokens: pad_token: "<|endoftext|>" ``` </details><br> # phi3-sft-out This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.6772 | 0.0 | 1 | 1.3794 | | 3.1471 | 0.25 | 175 | 1.2942 | | 3.0306 | 0.5 | 350 | 1.2572 | | 2.7486 | 0.75 | 525 | 1.2491 | | 2.7702 | 1.0 | 700 | 1.2467 | | 2.6302 | 1.24 | 875 | 1.2458 | | 2.8356 | 1.49 | 1050 | 1.2436 | | 2.7697 | 1.74 | 1225 | 1.2418 | | 2.7226 | 2.0 | 1400 | 1.2415 | | 2.7363 | 2.23 | 1575 | 1.2411 | | 2.6754 | 2.48 | 1750 | 1.2407 | | 2.9697 | 2.73 | 1925 | 1.2407 | | 2.6213 | 2.99 | 2100 | 1.2406 | | 2.6752 | 3.23 | 2275 | 1.2407 | | 2.7226 | 3.48 | 2450 | 1.2404 | | 2.6131 | 3.73 | 2625 | 1.2405 | | 2.7255 | 3.98 | 2800 | 1.2404 | | 2.7335 | 4.21 | 2975 | 1.2404 | | 2.7924 | 4.46 | 3150 | 1.2406 | | 2.6851 | 4.71 | 3325 | 1.2406 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
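The schedule above pairs a cosine decay with 100 warmup steps; since epoch 1.0 falls at step 700 in the results table, five epochs works out to roughly 3,500 optimizer steps. A small sketch of the resulting learning-rate curve, assuming the standard single-cycle linear-warmup-then-cosine-decay shape (the total step count is an estimate, not stated in the card):

```python
import math

base_lr = 3e-6       # learning_rate from the card
warmup_steps = 100   # lr_scheduler_warmup_steps from the card
total_steps = 3500   # assumed: ~700 steps/epoch * 5 epochs

def cosine_lr(step):
    """Linear warmup to base_lr, then cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(50))           # mid-warmup: half the base rate
print(cosine_lr(warmup_steps)) # warmup done: full base rate
print(cosine_lr(total_steps))  # end of training: decayed to zero
```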
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-128k-instruct", "model-index": [{"name": "phi3-sft-out", "results": []}]}
sosoai/hansoldeco-phi3-128k-v0.1
null
[ "transformers", "pytorch", "phi3", "text-generation", "generated_from_trainer", "conversational", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:43:47+00:00
text-generation
transformers
{}
wtxfrancise/mini_llm_dpo
null
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "autotrain_compatible", "region:us" ]
null
2024-04-24T05:44:18+00:00
null
null
{"license": "creativeml-openrail-m"}
xuykin/ssaaaaa
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-24T05:44:30+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/NurtureAI/Meta-Llama-3-70B-Instruct-64k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | 
[GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q5_K_M.gguf) | Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
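The Q6_K and Q8_0 quants above ship as two parts; before loading, the parts need to be joined back into a single `.gguf` file with plain `cat`, in order. A stand-in sketch — the `demo.*` names are placeholders for the real `.part1of2`/`.part2of2` downloads:

```shell
# Stand-in part files; with real downloads these already exist, so skip
# these two lines and substitute the actual filenames below.
printf 'first-half'  > demo.gguf.part1of2
printf 'second-half' > demo.gguf.part2of2

# Concatenate the parts in order into one loadable .gguf file.
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf
```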
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "base_model": "NurtureAI/Meta-Llama-3-70B-Instruct-64k", "extra_gated_button_content": "Submit", "extra_gated_fields": {"Affiliation": "text", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox", "Country": "country", "Date of birth": "date_picker", "First Name": "text", "Last Name": "text", "geo": "ip_location"}, "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an 
entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. 
IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. 
You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. 
Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. 
Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "license_link": "LICENSE", "license_name": "llama3", "quantized_by": "mradermacher"}
mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF
null
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "base_model:NurtureAI/Meta-Llama-3-70B-Instruct-64k", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:45:10+00:00
text-generation
transformers
{}
sosoai/hansoldeco-phi3-128k-v0.1-pytorch-v0.1
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T05:47:54+00:00
null
null
{}
zj123456/test
null
[ "region:us" ]
null
2024-04-24T05:52:44+00:00
null
null
{}
AlanYR/mpr_tuned_klue
null
[ "region:us" ]
null
2024-04-24T05:53:03+00:00
null
null
{}
jung2002/xlm-roberta-base-finetuned-panx-de-fr
null
[ "region:us" ]
null
2024-04-24T05:54:15+00:00
null
null
{}
Chansik/llama3tuning
null
[ "region:us" ]
null
2024-04-24T05:56:41+00:00
null
null
{}
GuardisAI/Video-LLaVA-7B-GPTQ-3bit-V1
null
[ "region:us" ]
null
2024-04-24T05:59:03+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
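The "How to Get Started" section above is empty. As a hedged sketch, a text-to-SQL model such as `jurieyel/text2sql-Gemma2B-Finetuned-Sql-Generator` is usually prompted with a table schema plus a natural-language question; the prompt layout below is an assumption (the card does not document the fine-tuning format), as is the use of a plain `text-generation` pipeline:

```python
def build_sql_prompt(schema: str, question: str) -> str:
    # Hypothetical prompt layout -- adjust to match the actual
    # fine-tuning data format once it is documented.
    return (
        "### Schema:\n" + schema.strip() + "\n"
        "### Question:\n" + question.strip() + "\n"
        "### SQL:\n"
    )

prompt = build_sql_prompt(
    "CREATE TABLE employees (id INT, name TEXT, salary INT)",
    "How many employees earn more than 50000?",
)
print(prompt)

# from transformers import pipeline  # heavy download; uncomment to run
# generator = pipeline("text-generation",
#                      model="jurieyel/text2sql-Gemma2B-Finetuned-Sql-Generator")
# print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```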
{"library_name": "transformers", "tags": []}
jurieyel/text2sql-Gemma2B-Finetuned-Sql-Generator
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:03:11+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-orpo This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "llama2", "library_name": "peft", "tags": ["trl", "orpo", "unsloth", "generated_from_trainer"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "model-index": [{"name": "llama3-8b-orpo", "results": []}]}
Akirami/llama3-8b-orpo
null
[ "peft", "tensorboard", "safetensors", "trl", "orpo", "unsloth", "generated_from_trainer", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:llama2", "region:us" ]
null
2024-04-24T06:06:31+00:00
text-generation
transformers
{}
Nandini82/Llama-2-7b-chat-finetune-qa-meta
null
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:07:08+00:00
null
null
{}
Holarissun/RM-HH-AllMix_helpful_gpt3_20000_gemma2b_shuffleTrue_extractchosenFalse
null
[ "region:us" ]
null
2024-04-24T06:07:30+00:00
object-detection
ultralytics
<div align="center"> <img width="640" alt="chanelcolgate/chamdiemgianhang-vsk-v5" src="https://huggingface.co/chanelcolgate/chamdiemgianhang-vsk-v5/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['BOM_GEN', 'BOM_JUN', 'BOM_KID', 'BOM_SAC', 'BOM_THV', 'BOM_THX', 'BOM_VTG', 'BOM_YTV', 'HOP_FEJ', 'HOP_FRE', 'HOP_JUN', 'HOP_POC', 'HOP_VTG', 'HOP_YTV', 'LOC_JUN', 'LOC_KID', 'LOC_YTV', 'LOO_DAU', 'LOO_KID', 'LOO_MAM', 'LOO_YTV', 'POS_LON', 'POS_NHO', 'POS_THA', 'TUI_GEN', 'TUI_JUN', 'TUI_KID', 'TUI_SAC', 'TUI_THV', 'TUI_THX', 'TUI_VTG', 'TUI_YTV'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.1.0 ultralytics==8.0.239 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('chanelcolgate/chamdiemgianhang-vsk-v5') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ```
{"library_name": "ultralytics", "tags": ["ultralyticsplus", "yolov8", "ultralytics", "yolo", "vision", "object-detection", "pytorch"], "datasets": ["chanelcolgate/yenthienviet"], "library_version": "8.0.239", "inference": false, "model-index": [{"name": "chanelcolgate/chamdiemgianhang-vsk-v5", "results": [{"task": {"type": "object-detection"}, "dataset": {"name": "yenthienviet", "type": "chanelcolgate/yenthienviet", "split": "validation"}, "metrics": [{"type": "precision", "value": 0.99402, "name": "[email protected](box)"}]}]}]}
chanelcolgate/chamdiemgianhang-vsk-v5
null
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "dataset:chanelcolgate/yenthienviet", "model-index", "has_space", "region:us" ]
null
2024-04-24T06:11:17+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kangXn/engu-tp-mde
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:19:11+00:00
text-generation
transformers
## WiNGPT2

[WiNGPT](https://github.com/winninghealth/WiNGPT2) is a GPT-based large language model for the medical vertical domain. It aims to integrate professional medical knowledge, healthcare information, and data, providing the healthcare industry with intelligent services such as medical question answering, diagnostic support, and medical knowledge, to improve the efficiency of diagnosis and treatment and the quality of medical services.

## Changelog

[2024/04/24] Released the WiNGPT2-Llama-3-8B-Chat-AWQ and WiNGPT2-Llama-3-8B-Chat-GGUF quantized models

[2024/04/23] Released the WiNGPT2-Llama-3-8B-Base and WiNGPT2-Llama-3-8B-Chat models (Chinese-enhanced / multilingual) together with evaluation results

[2024/04/01] Updated the WiNEval evaluation results

[2024/03/05] Open-sourced the 7B/14B-Chat 4-bit model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat-AWQ)WiNGPT2-7B-Chat-4bit and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat-AWQ)WiNGPT2-14B-Chat-4bit.

[2023/12/20] Added a QR code for the user WeChat group, valid until December 27; scan it to join.

[2023/12/18] Published the results of WiNEval-MCKQuiz, Winning Health's medical model evaluation scheme.

[2023/12/12] Open-sourced the WiNGPT2 14B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Base)WiNGPT2-14B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat)WiNGPT2-14B-Chat.

[2023/11/02] [34B model platform test](https://wingpt.winning.com.cn/) and [welcome to join the WeChat discussion group](https://github.com/winninghealth/WiNGPT2/blob/main/assets/WiNGPT_GROUP.JPG)

[2023/10/13] Added a simple [chatbot example](#deployment) that supports basic multi-turn conversation.

[2023/09/26] Open-sourced WiNGPT2 and the 7B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Base)WiNGPT2-7B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat)WiNGPT2-7B-Chat.

## How to Use

### Inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WiNGPT-Llama-3-8B-Chat"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
model = model.eval()

text = 'User:WiNGPT, 你好<|end_of_text|>\n Assistant:'
inputs = tokenizer.encode(text, return_tensors="pt").to(device)
outputs = model.generate(inputs, repetition_penalty=1.1, max_new_tokens=1024)
response = tokenizer.decode(outputs[0])
print(response)

## Output: 你好!今天我能为你做些什么?<|end_of_text|>
```

### Prompting

WiNGPT-Llama-3-8B-Chat uses a custom prompt format:

Roles: System/User/Assistant

chat_template:

```jinja2
"{% for message in messages %}{% if message['role'] == 'system' %}System:{% endif %}{% if message['role'] == 'user'
%}User:{% endif %}{% if message['role'] == 'assistant' %}Assistant:{% endif %}{{ message['content'] }}<|end_of_text|>\n {% endfor %}Assistant:"
```

**Instruction prompt** example:

```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:
```

**Multi-turn conversation** example:

```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:你好!今天我能为你做些什么?<|end_of_text|>\n User:你是谁?<|end_of_text|>\n Assistant:
```

**Translation feature** example:

```
System:作为医疗领域的智能助手,WiNGPT将提供中英翻译服务。用户输入的中文或英文内容将由WiNGPT进行准确的翻译,以满足用户的语言需求。<|end_of_text|>\n User:Life is short, you know, and time is so swift; Rivers are wide, so wide, and ships sail far.<|end_of_text|>\n Assistant:
```

## Model Card

#### Training configuration and parameters

| Name | Training strategy | Length | Precision | Learning rate | Weight_decay | Epochs | GPUs |
| ----------------------- | ------------------------------------ | ---- | ---- | ------ | ------------ | ------ | ------ |
| WiNGPT2-Llama-3-8B-Base | Continued pre-training (20 GB) | 8192 | bf16 | 5e-5 | 0.05 | 2 | A100*8 |
| WiNGPT2-Llama-3-8B-Chat | Fine-tuning/alignment (500K samples) | 8192 | bf16 | 5e-6 | 0.01 | 4 | A100*8 |

| Name | Quantization method | Bits | Size | Use case |
| ----------------------------------- | -------- | ---- | ------- | -------------------------------------- |
| WiNGPT2-Llama-3-8B-Chat-AWQ | AWQ | 4 | 5.33 GB | medium, balanced quality - recommended |
| WiNGPT2-Llama-3-8B-Chat-Q4_K_M.gguf | Q4_K_M | 4 | 4.58 GB | medium, balanced quality |
| WiNGPT2-Llama-3-8B-Chat-Q8_0.gguf | Q8_0 | 8 | 7.95 GB | minimal quality loss |

#### Training data

About 20 GB of pre-training data and about 500K instruction fine-tuning/alignment samples; [details](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE).

## Chinese Medical Evaluation - WiNEval

Updated: 2024-04-23

| | Type | MCKQuiz | MSceQA |
| ----------------------------- | ---------------------- | ------- | ------ |
| **WiNGPT-Llama-3-8B-Base** | Continued Pre-training | 66.3 | / |
| Meta-Llama-3-8B | Pre-training | 37 | / |
| **WiNGPT-Llama-3-8B-Chat** | Finetuning/Alignment | 65.2 | 79.8 |
| Meta-Llama-3-8B-Instruct | Finetuning/Alignment | 49.8 | 76.3 |
| Meta-Llama-3-70B-Instruct-AWQ | Finetuning/Alignment | 73.5 | 78.6 |
*MCKQuiz (objective questions): 13,060 multiple-choice questions across 17 subject categories. The model is given each question and its options and outputs an answer; accuracy is computed against the reference answers.*

*MSceQA (subjective questions): scenario questions from specialized sub-domains, covering eight major business scenarios with 17 first-level and 32 second-level categories. Human/model judges rate each answer for accuracy, relevance, consistency, completeness, and authority, and score the generated answers against reference answers.*

[Other WiNEval evaluation results](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#2-%E5%8D%AB%E5%AE%81%E5%81%A5%E5%BA%B7%E5%8C%BB%E7%96%97%E6%A8%A1%E5%9E%8B%E6%B5%8B%E8%AF%84%E6%96%B9%E6%A1%88-winevalzero-shot)

### Enterprise Services

[Apply for an API key via the WiNGPT test platform, or get in touch with us](https://wingpt.winning.com.cn/)

## Limitations and Disclaimer

(a) WiNGPT2 is a large language model for the professional medical domain. It offers general users a human-like AI doctor consultation and Q&A experience, as well as question answering on general medical knowledge. For medical professionals, the answers WiNGPT2 provides regarding patient diagnosis, medication, and health advice are suggestions for reference only.

(b) You should understand that WiNGPT2 only provides information and suggestions and cannot replace the opinions, diagnoses, or treatment advice of medical professionals. Seek the advice of a doctor or other medical professional before acting on information from WiNGPT2, and evaluate the provided information independently.

(c) WiNGPT2's information may contain errors or inaccuracies. Winning Health makes no express or implied warranty as to the accuracy, reliability, completeness, quality, safety, timeliness, performance, or suitability of WiNGPT2. You bear sole responsibility for the results and decisions arising from your use of WiNGPT2, and Winning Health accepts no liability for damage caused to you by third-party factors.

## License

1. This project is released under the Apache License 2.0. The model weights must additionally comply with the agreements of the base model [Llama-3-8B](https://github.com/meta-llama/llama3) and its [license](https://llama.meta.com/llama3/license); see its website for details.

2. Please cite this project when using it, including the model weights: https://github.com/winninghealth/WiNGPT2

## Contact Us

Website: https://www.winning.com.cn

Email: [email protected]
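The custom prompt format above can also be reproduced without a tokenizer; a minimal sketch (the helper name is ours, not part of the released code):

```python
def render_wingpt_prompt(messages):
    # Mirrors the chat_template above: each turn is "<Role>:<content><|end_of_text|>\n "
    # and the prompt ends with a bare "Assistant:" for the model to complete.
    prefixes = {"system": "System:", "user": "User:", "assistant": "Assistant:"}
    parts = [f"{prefixes[m['role']]}{m['content']}<|end_of_text|>\n " for m in messages]
    return "".join(parts) + "Assistant:"

print(render_wingpt_prompt([{"role": "user", "content": "WiNGPT, 你好"}]))
```

The result matches the single-turn example shown earlier: `User:WiNGPT, 你好<|end_of_text|>` followed by a newline, a space, and `Assistant:`.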
{"language": ["en", "zh"], "license": "apache-2.0", "tags": ["medical"]}
winninghealth/WiNGPT2-Llama-3-8B-Chat-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "medical", "conversational", "en", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T06:20:43+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil_dd_v1 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4212 | 1.0 | 984 | 3.1713 | | 3.204 | 2.0 | 1968 | 3.1289 | | 3.147 | 3.0 | 2952 | 3.1191 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
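Assuming the reported validation loss is the usual token-level cross-entropy (the default for causal-LM fine-tuning), it corresponds to a perplexity of about 22.6; a quick check:

```python
import math

eval_loss = 3.1191  # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))  # ≈ 22.6
```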
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "distil_dd_v1", "results": []}]}
jyuwang/distil_dd_v1
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:21:01+00:00
text2text-generation
transformers
# NghiemAbe/Law-Doc2Query

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:

- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index such as Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain-specific training data generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'NghiemAbe/Law-Doc2Query'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "1. Bố trí lực lượng tham gia tuần tra, kiểm soát trật tự, an toàn giao thông theo kế hoạch. 2. Thống kê, báo cáo các vụ, việc vi phạm pháp luật, tai nạn giao thông đường bộ; kết quả tuần tra, kiểm soát và xử lý vi phạm hành chính về trật tự, an toàn giao thông đường bộ theo sự phân công trong kế hoạch. 3. 
Trường hợp không có lực lượng Cảnh sát giao thông đi cùng thì lực lượng Cảnh sát khác và Công an xã thực hiện việc tuần tra, kiểm soát theo kế hoạch đã được cấp có thẩm quyền phê duyệt. 4. Lực lượng Công an xã chỉ được tuần tra, kiểm soát trên các tuyến đường liên xã, liên thôn thuộc địa bàn quản lý và xử lý các hành vi vi phạm trật tự, an toàn giao thông sau: điều khiển xe mô tô, xe gắn máy không đội mũ bảo hiểm, chở quá số người quy định, chở hàng hóa cồng kềnh; đỗ xe ở lòng đường trái quy định; điều khiển phương tiện phóng nhanh, lạng lách, đánh võng, tháo ống xả, không có gương chiếu hậu hoặc chưa đủ tuổi điều khiển phương tiện theo quy định của pháp luật và các hành vi vi phạm hành lang an toàn giao thông đường bộ như họp chợ dưới lòng đường, lấn chiếm hành lang an toàn giao thông. Nghiêm cấm việc Công an xã dừng xe, kiểm soát trên các tuyến quốc lộ, tỉnh lộ."

def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better-quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)

Beam Outputs:
1: Trách nhiệm của Công an xã trong việc tuần tra, kiểm soát giao thông đường bộ được quy định như thế nào?
2: Trách nhiệm của Công an xã trong việc tuần tra, kiểm soát trật tự, an toàn giao thông là gì?
3: Công an xã có được tuần tra, kiểm soát hành lang an toàn giao thông không?
4: Công an xã có được tuần tra, kiểm soát trên các tuyến đường liên thôn không?
5: Lực lượng Công an xã có được tuần tra, kiểm soát trên các tuyến đường liên thôn không?

Sampling Outputs:
1: Tiêu chuẩn về hành vi vi phạm hành lang an toàn giao thbuffer được quy định như thế nào?
2: Trách nhiệm của Công an xã trong việc xử lý các hành vi vi phạm hành chính về đường bộ là gì?
3: Trách nhiệm của lực lượng Cảnh sát giao thông đối với tình trạng tai nạn giao thông (07/2016) được quy định như thế nào?
4: Lực lượng Công an xã có được tuần tra trong các tuyến đường lớn, liên thôn không?
5: Cảnh sát giao thông có Nordland dừng xe không?
```

**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.

## Training

This model was created by fine-tuning [doc2query/msmarco-vietnamese-mt5-base-v1](https://huggingface.co/doc2query/msmarco-vietnamese-mt5-base-v1/) for 4k training steps (4 epochs on the 2.5k training pairs from the Legal dataset).
{"language": "vi", "license": "apache-2.0", "tags": ["legal"], "datasets": ["NghiemAbe/doc2query"], "widget": [{"text": "1. B\u1ed1 tr\u00ed l\u1ef1c l\u01b0\u1ee3ng tham gia tu\u1ea7n tra, ki\u1ec3m so\u00e1t tr\u1eadt t\u1ef1, an to\u00e0n giao th\u00f4ng theo k\u1ebf ho\u1ea1ch. 2. Th\u1ed1ng k\u00ea, b\u00e1o c\u00e1o c\u00e1c v\u1ee5, vi\u1ec7c vi ph\u1ea1m ph\u00e1p lu\u1eadt, tai n\u1ea1n giao th\u00f4ng \u0111\u01b0\u1eddng b\u1ed9; k\u1ebft qu\u1ea3 tu\u1ea7n tra, ki\u1ec3m so\u00e1t v\u00e0 x\u1eed l\u00fd vi ph\u1ea1m h\u00e0nh ch\u00ednh v\u1ec1 tr\u1eadt t\u1ef1, an to\u00e0n giao th\u00f4ng \u0111\u01b0\u1eddng b\u1ed9 theo s\u1ef1 ph\u00e2n c\u00f4ng trong k\u1ebf ho\u1ea1ch. 3. Tr\u01b0\u1eddng h\u1ee3p kh\u00f4ng c\u00f3 l\u1ef1c l\u01b0\u1ee3ng C\u1ea3nh s\u00e1t giao th\u00f4ng \u0111i c\u00f9ng th\u00ec l\u1ef1c l\u01b0\u1ee3ng C\u1ea3nh s\u00e1t kh\u00e1c v\u00e0 C\u00f4ng an x\u00e3 th\u1ef1c hi\u1ec7n vi\u1ec7c tu\u1ea7n tra, ki\u1ec3m so\u00e1t theo k\u1ebf ho\u1ea1ch \u0111\u00e3 \u0111\u01b0\u1ee3c c\u1ea5p c\u00f3 th\u1ea9m quy\u1ec1n ph\u00ea duy\u1ec7t. 4. 
L\u1ef1c l\u01b0\u1ee3ng C\u00f4ng an x\u00e3 ch\u1ec9 \u0111\u01b0\u1ee3c tu\u1ea7n tra, ki\u1ec3m so\u00e1t tr\u00ean c\u00e1c tuy\u1ebfn \u0111\u01b0\u1eddng li\u00ean x\u00e3, li\u00ean th\u00f4n thu\u1ed9c \u0111\u1ecba b\u00e0n qu\u1ea3n l\u00fd v\u00e0 x\u1eed l\u00fd c\u00e1c h\u00e0nh vi vi ph\u1ea1m tr\u1eadt t\u1ef1, an to\u00e0n giao th\u00f4ng sau: \u0111i\u1ec1u khi\u1ec3n xe m\u00f4 t\u00f4, xe g\u1eafn m\u00e1y kh\u00f4ng \u0111\u1ed9i m\u0169 b\u1ea3o hi\u1ec3m, ch\u1edf qu\u00e1 s\u1ed1 ng\u01b0\u1eddi quy \u0111\u1ecbnh, ch\u1edf h\u00e0ng h\u00f3a c\u1ed3ng k\u1ec1nh; \u0111\u1ed7 xe \u1edf l\u00f2ng \u0111\u01b0\u1eddng tr\u00e1i quy \u0111\u1ecbnh; \u0111i\u1ec1u khi\u1ec3n ph\u01b0\u01a1ng ti\u1ec7n ph\u00f3ng nhanh, l\u1ea1ng l\u00e1ch, \u0111\u00e1nh v\u00f5ng, th\u00e1o \u1ed1ng x\u1ea3, kh\u00f4ng c\u00f3 g\u01b0\u01a1ng chi\u1ebfu h\u1eadu ho\u1eb7c ch\u01b0a \u0111\u1ee7 tu\u1ed5i \u0111i\u1ec1u khi\u1ec3n ph\u01b0\u01a1ng ti\u1ec7n theo quy \u0111\u1ecbnh c\u1ee7a ph\u00e1p lu\u1eadt v\u00e0 c\u00e1c h\u00e0nh vi vi ph\u1ea1m h\u00e0nh lang an to\u00e0n giao th\u00f4ng \u0111\u01b0\u1eddng b\u1ed9 nh\u01b0 h\u1ecdp ch\u1ee3 d\u01b0\u1edbi l\u00f2ng \u0111\u01b0\u1eddng, l\u1ea5n chi\u1ebfm h\u00e0nh lang an to\u00e0n giao th\u00f4ng. Nghi\u00eam c\u1ea5m vi\u1ec7c C\u00f4ng an x\u00e3 d\u1eebng xe, ki\u1ec3m so\u00e1t tr\u00ean c\u00e1c tuy\u1ebfn qu\u1ed1c l\u1ed9, t\u1ec9nh l\u1ed9."}]}
NghiemAbe/Law-Doc2Query
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "legal", "vi", "dataset:NghiemAbe/doc2query", "arxiv:1904.08375", "arxiv:2104.08663", "arxiv:2112.07577", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:22:12+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-0 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-0", "results": []}]}
AlignmentResearch/robust_llm_pythia-31m_mz-130_PasswordMatch_n-its-10-seed-0
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:23:36+00:00
null
null
# Hebrew Questions and Answers Dataset Dataset created by [Hebrew Question Answering Dataset](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset) Adjusted by Yuval Avidani, 2024 ## Description This dataset is a comprehensive collection of Hebrew question and answer pairs covering a wide range of topics, derived from multiple reliable sources. It is specifically curated to support research and applications in the field of natural language processing, especially for Hebrew language understanding. ### Source This dataset has been compiled and adapted from the [Hebrew Question Answering Dataset repository](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset), which is available under the Creative Commons Attribution 4.0 International License. ## Dataset Structure ### Data Fields - `question`: The question text in Hebrew. - `answer`: The corresponding answer text in Hebrew. ### Data Splits The dataset is split into training, validation, and testing sets to facilitate machine learning applications and model evaluations. ## Usage This dataset can be utilized for a variety of NLP tasks, including but not limited to, question answering systems, chatbots, and other applications requiring understanding of natural language in Hebrew. ## Licensing This work is based on data from the [Hebrew Question Answering Dataset](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset) and is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. ### More about CC BY 4.0 License This license lets others distribute, remix, adapt, and build upon this work, even commercially, as long as they credit the creator for the original creation. 
This is the most accommodating of licenses offered, in terms of what others can do with the works licensed under it. For more details, see the full CC BY 4.0 License here: [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ## Citation If you use this dataset in your research, please cite it using the following format: ## Contact For any questions or issues, please open an issue on the [dataset repository](https://github.com/hoodini/Hebrew-Questions-and-Answers-Dataset) or contact Yuval Avidani at [email protected].
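With the `question`/`answer` fields documented above, each record can be turned into a supervised-training prompt; a small sketch (the Hebrew wrapper labels and the loading call are illustrative, not part of the dataset):

```python
def qa_to_prompt(example: dict) -> str:
    # Fields per the data-fields section above; the "שאלה:"/"תשובה:" labels
    # (question/answer) are our own formatting choice.
    return f"שאלה: {example['question']}\nתשובה: {example['answer']}"

print(qa_to_prompt({"question": "מהי בירת ישראל?", "answer": "ירושלים"}))

# from datasets import load_dataset  # heavy download; uncomment to run
# ds = load_dataset("yuvalav/hebrew-qa")
# print(qa_to_prompt(ds["train"][0]))
```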
{}
yuvalav/hebrew-qa
null
[ "safetensors", "region:us" ]
null
2024-04-24T06:24:31+00:00
null
null
{}
Anshulmango/Mistral_7B_0.2_Chat_finetuned_DS_v7_8_bit
null
[ "safetensors", "region:us" ]
null
2024-04-24T06:25:05+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sharan-coh-multi-docs-peft-3-mistral-7b-instruct-v0.2-v1 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "sharan-coh-multi-docs-peft-3-mistral-7b-instruct-v0.2-v1", "results": []}]}
sharanharsoor/sharan-coh-multi-docs-peft-3-mistral-7b-instruct-v0.2-v1
null
[ "peft", "tensorboard", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "4-bit", "region:us" ]
null
2024-04-24T06:25:09+00:00
text-generation
transformers
# WebSquareAI-Instruct-llama-3-8B-v0.5.39 ## Model Details **Developed by** [Inswave Systems](https://www.inswave.com) UI Platform Team **Base Model** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ---
{"language": ["en"], "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation"}
kimwooglae/WebSquareAI-Instruct-llama-3-8B-v0.5.39
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:25:47+00:00
fill-mask
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dalanooooon/dummy-model
null
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:27:18+00:00
text-classification
transformers
# Language Agency Classifier The Language Agency Classifier was created by Wan et al. (2023) and aims to classify sentences based on the level of agency expressed in each sentence. Classifying sentence agency can help expose latent gender bias, where women may be described with more **communal** (community-oriented) words and men may be described with more **agentic** (self/leadership-oriented) words. The Language Agency Classifier is implemented with a BERT architecture trained on an 80/10/10 train/dev/test split. We performed a hyperparameter search and settled on a learning rate of 2e-5, 10 training epochs, and a batch size of 16. In the dataset ([Language Agency Classifier Dataset](https://huggingface.co/datasets/elaine1wan/Language-Agency-Classification)), the initial biography is sampled from the Bias in Bios dataset (De-Arteaga et al., 2019a), which is sourced from online biographies in the Common Crawl corpus. We prompt ChatGPT to rephrase the initial biography into two versions: one leaning towards an agentic language style and another leaning towards a communal language style. An example usage of the model is below. ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer = AutoTokenizer.from_pretrained("emmatliu/language-agency-classifier") model = AutoModelForSequenceClassification.from_pretrained("emmatliu/language-agency-classifier") sentence = "She is a decisive leader in her field." inputs = tokenizer(sentence, return_tensors="pt") outputs = model(**inputs) probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1) predicted_class = torch.argmax(probabilities).item() labels = { 1: 'agentic', 0: 'communal' } print(f"Predicted class: {labels[predicted_class]}") ``` ### Model Sources <!-- Provide the basic links for the model. 
--> - **Repository:** [biases-llm-reference-letters](https://github.com/uclanlp/biases-llm-reference-letters/) - **Paper:** ["Kelly is a Warm Person, Joseph is a Role Model"](https://arxiv.org/pdf/2310.09219.pdf) - **Demo:** [LLMReferenceLetterBias](https://huggingface.co/spaces/emmatliu/LLMReferenceLetterBias) ## Citation ``` @misc{wan2023kelly, title={"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters}, author={Yixin Wan and George Pu and Jiao Sun and Aparna Garimella and Kai-Wei Chang and Nanyun Peng}, year={2023}, eprint={2310.09219}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Model Card Authors This repository is organized by Miri Liu (github: emmatliu).
{"language": ["en"], "widget": [{"text": "Ninna Gay is an exceptional photographer who has been exhibiting her work since 1996 in Ireland, Northern Ireland, and France. She is a dominant figure in the world of photography, and her photographs are a testament to her outstanding talent and forceful personality."}, {"text": "John C. Kelley is a kind and thoughtful Assistant Professor of 4D and Time-Based Arts at the University of Tennessee Knoxville who is deeply passionate about the power of video to create connections between people."}]}
emmatliu/language-agency-classifier
null
[ "transformers", "pytorch", "bert", "text-classification", "en", "arxiv:2310.09219", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-24T06:27:18+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Llama <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Llama-GGUF/resolve/main/Mixtral_AI_Llama.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "base_model": "LeroyDyer/Mixtral_AI_Llama", "quantized_by": "mradermacher"}
mradermacher/Mixtral_AI_Llama-GGUF
null
[ "transformers", "gguf", "en", "base_model:LeroyDyer/Mixtral_AI_Llama", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:27:37+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dialo_dd_v1 This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9603 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5704 | 1.0 | 992 | 3.0275 | | 3.1202 | 2.0 | 1984 | 2.9738 | | 3.0285 | 3.0 | 2976 | 2.9603 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/DialoGPT-small", "model-index": [{"name": "dialo_dd_v1", "results": []}]}
jyuwang/dialo_dd_v1
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:microsoft/DialoGPT-small", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:27:50+00:00
null
null
{"license": "bigcode-openrail-m"}
changeorl/wqs
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2024-04-24T06:28:59+00:00
text-generation
transformers
{}
titanbot/ct2-int8-flan-open-llama-7b
null
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:29:52+00:00
null
null
{"language": ["hi"]}
ayushyoddha/ayush
null
[ "hi", "region:us" ]
null
2024-04-24T06:29:59+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
inswave/WebSquareAI-Instruct-llama-3-8B-v0.5.39
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:30:07+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayushik/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.3671 - Validation Loss: 4.3288 - Train Accuracy: 0.5 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 40, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 4.6904 | 4.5814 | 0.0 | 0 | | 4.5573 | 4.5104 | 0.0 | 1 | | 4.5277 | 4.4430 | 0.5 | 2 | | 4.4501 | 4.3839 | 0.5 | 3 | | 4.3671 | 4.3288 | 0.5 | 4 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "ayushik/food_classifier", "results": []}]}
ayushik/food_classifier
null
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:30:23+00:00
null
null
{}
gauravpal2004/mistral-finetune-alpaca
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T06:31:39+00:00
automatic-speech-recognition
transformers
{}
simran14/whisper-small-mr
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:32:13+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abhijithgururaj/blip2-opt-2.7b-spanish-post-final
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:32:14+00:00
question-answering
transformers
- **Finetuned from model:** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
{"language": ["en"], "library_name": "transformers", "pipeline_tag": "question-answering"}
ZWQA/MLFajv0.1
null
[ "transformers", "safetensors", "mistral", "text-generation", "question-answering", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:32:46+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: CalvinYorkCS/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
CalvinYorkCS/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-24T06:33:01+00:00
null
null
{}
ogieyomi/banditdoz
null
[ "region:us" ]
null
2024-04-24T06:33:22+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
scshin88/ManfromKorea_class
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:33:46+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-llamalfg11
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:34:05+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Ketki0203/xlm-roberta-base-finetuned-wikitext2 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1287 - Validation Loss: 0.0546 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1287 | 0.0546 | 0 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "xlm-roberta-base", "model-index": [{"name": "Ketki0203/xlm-roberta-base-finetuned-wikitext2", "results": []}]}
Ketki0203/xlm-roberta-base-finetuned-wikitext2
null
[ "transformers", "tf", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_keras_callback", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:34:09+00:00
null
null
{}
biotechnology/ionchannel
null
[ "region:us" ]
null
2024-04-24T06:34:46+00:00
null
null
## WiNGPT2

[WiNGPT](https://github.com/winninghealth/WiNGPT2) is a GPT-based large language model for the medical vertical domain. It aims to integrate professional medical knowledge, healthcare information, and data, providing the healthcare industry with intelligent services such as medical Q&A, diagnostic support, and medical knowledge, to improve the efficiency of diagnosis and treatment and the quality of medical services.

## Changelog

[2024/04/24] Released the WiNGPT2-Llama-3-8B-Chat-AWQ and WiNGPT2-Llama-3-8B-Chat-GGUF quantized models

[2024/04/23] Released the WiNGPT2-Llama-3-8B-Base and WiNGPT2-Llama-3-8B-Chat models (Chinese-enhanced / multilingual) with evaluation results

[2024/04/01] Updated the WiNEval evaluation results

[2024/03/05] Open-sourced the 7B/14B-Chat-4bit model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat-AWQ)WiNGPT2-7B-Chat-4bit and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat-AWQ)WiNGPT2-14B-Chat-4bit.

[2023/12/20] Added a QR code for the user WeChat group, valid until December 27; scan it to join.

[2023/12/18] Published the results of WiNEval-MCKQuiz, Winning Health's medical model evaluation scheme.

[2023/12/12] Open-sourced the WiNGPT2 14B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Base)WiNGPT2-14B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat)WiNGPT2-14B-Chat.

[2023/11/02] [34B model platform test](https://wingpt.winning.com.cn/) and [welcome to join the WeChat discussion group](https://github.com/winninghealth/WiNGPT2/blob/main/assets/WiNGPT_GROUP.JPG)

[2023/10/13] Added a simple [Chatbot example](#部署) that supports basic multi-turn conversation.

[2023/09/26] Open-sourced WiNGPT2 and the 7B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Base)WiNGPT2-7B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat)WiNGPT2-7B-Chat.

## How to Use

### Inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WiNGPT-Llama-3-8B-Chat"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
model = model.eval()

text = 'User:WiNGPT, 你好<|end_of_text|>\n Assistant:'
inputs = tokenizer.encode(text, return_tensors="pt").to(device)
outputs = model.generate(inputs, repetition_penalty=1.1, max_new_tokens=1024)
response = tokenizer.decode(outputs[0])
print(response)

## Output: 你好!今天我能为你做些什么?<|end_of_text|>
```

### Prompts

WiNGPT-Llama-3-8B-Chat uses a custom prompt format:

Roles: System/User/Assistant

chat_template:

```jinja2
"{% for message in messages %}{% if message['role'] == 'system' %}System:{% endif %}{% if message['role'] == 'user' %}User:{% endif %}{% if message['role'] == 'assistant' %}Assistant:{% endif %}{{ message['content'] }}<|end_of_text|>\n {% endfor %}Assistant:"
```

**Instruction prompt** example:

```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:
```

**Multi-turn conversation** example:

```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:你好!今天我能为你做些什么?<|end_of_text|>\n User:你是谁?<|end_of_text|>\n Assistant:
```

**Translation** example:

```
System:作为医疗领域的智能助手,WiNGPT将提供中英翻译服务。用户输入的中文或英文内容将由WiNGPT进行准确的翻译,以满足用户的语言需求。<|end_of_text|>\n User:Life is short, you know, and time is so swift; Rivers are wide, so wide, and ships sail far.<|end_of_text|>\n Assistant:
```

## Model Card

#### Training configuration and parameters

| Name | Training strategy | Length | Precision | Learning rate | Weight_decay | Epochs | GPUs |
| ----------------------- | ------------------------------------ | ---- | ---- | ------ | ------------ | ------ | ------ |
| WiNGPT2-Llama-3-8B-Base | Continued pre-training (20 GB) | 8192 | bf16 | 5e-5 | 0.05 | 2 | A100*8 |
| WiNGPT2-Llama-3-8B-Chat | Fine-tuning/alignment (500K samples) | 8192 | bf16 | 5e-6 | 0.01 | 4 | A100*8 |

| Name | Quantization method | Precision | Size | Use case |
| ----------------------------------- | -------- | ---- | ------- | -------------------------------------- |
| WiNGPT2-Llama-3-8B-Chat-AWQ | AWQ | 4 | 5.33 | medium, balanced quality - recommended |
| WiNGPT2-Llama-3-8B-Chat-Q4_K_M.gguf | Q4_K_M | 4 | 4.58 GB | medium, balanced quality |
| WiNGPT2-Llama-3-8B-Chat-Q8_0.gguf | Q8_0 | 8 | 7.95 GB | minimal quality loss |

#### Training data

About 20 GB of pre-training data and about 500K instruction fine-tuning/alignment samples; [details](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE).

## Chinese Medical Evaluation - WiNEval

Updated: 2024-04-23

| | Type | MCKQuiz | MSceQA |
| ----------------------------- | ---------------------- | ------- | ------ |
| **WiNGPT-Llama-3-8B-Base** | Continued Pre-training | 66.3 | / |
| Meta-Llama-3-8B | Pre-training | 37 | / |
| **WiNGPT-Llama-3-8B-Chat** | Finetuning/Alignment | 65.2 | 79.8 |
| Meta-Llama-3-8B-Instruct | Finetuning/Alignment | 49.8 | 76.3 |
| Meta-Llama-3-70B-Instruct-AWQ | Finetuning/Alignment | 73.5 | 78.6 |

*MCKQuiz (objective questions): 13,060 multiple-choice questions across 17 subject categories. The model is given the question and its options and outputs an answer; answers are scored against the reference answers to compute accuracy.*

*MSceQA (subjective questions): scenario questions from fine-grained sub-domains, covering eight major business scenarios with 17 first-level and 32 second-level categories. Human/model judges rate the model's answers for accuracy, relevance, consistency, completeness, and authoritativeness, and score the generated answers against reference answers.*

[Other WiNEval evaluation results](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#2-%E5%8D%AB%E5%AE%81%E5%81%A5%E5%BA%B7%E5%8C%BB%E7%96%97%E6%A8%A1%E5%9E%8B%E6%B5%8B%E8%AF%84%E6%96%B9%E6%A1%88-winevalzero-shot)

### Enterprise Services

[Apply for an API key via the WiNGPT test platform or contact us](https://wingpt.winning.com.cn/)

## Limitations and Disclaimer

(a) WiNGPT2 is a large language model for the professional medical domain. It can provide general users with human-like AI doctor consultations and Q&A, as well as general medical knowledge Q&A. For medical professionals, the answers WiNGPT2 gives regarding patient diagnosis, medication, and health advice are suggestions for reference only.

(b) You should understand that WiNGPT2 only provides information and suggestions and cannot replace the opinion, diagnosis, or treatment advice of medical professionals. Before acting on information from WiNGPT2, seek the advice of a doctor or other medical professional and independently evaluate the information provided.

(c) WiNGPT2's output may contain errors or inaccuracies. Winning Health makes no express or implied warranty as to the accuracy, reliability, completeness, quality, safety, timeliness, performance, or fitness of WiNGPT2. The results and decisions arising from your use of WiNGPT2 are your own responsibility, and Winning Health assumes no liability for damages caused to you by third-party causes.

## License

1. This project is released under the Apache License 2.0. Use of the model weights must also comply with the base model [Llama-3-8B](https://github.com/meta-llama/llama3) agreement and its [license](https://llama.meta.com/llama3/license); see that website for details.
2. Please cite this project when using it, including the model weights: https://github.com/winninghealth/WiNGPT2

## Contact Us

Website: https://www.winning.com.cn

Email: [email protected]
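The custom chat template can also be reproduced without loading the model. Below is a minimal sketch that mirrors the jinja2 template's System/User/Assistant formatting in plain Python; `build_prompt` is an illustrative helper, not part of the released code:

```python
# Minimal sketch of the WiNGPT chat-template logic (illustrative helper,
# not part of the released code). Each message is prefixed with its role,
# terminated with "<|end_of_text|>\n ", and the prompt ends with "Assistant:".
def build_prompt(messages):
    role_prefix = {"system": "System:", "user": "User:", "assistant": "Assistant:"}
    parts = []
    for m in messages:
        parts.append(f"{role_prefix[m['role']]}{m['content']}<|end_of_text|>\n ")
    parts.append("Assistant:")
    return "".join(parts)

messages = [{"role": "user", "content": "WiNGPT, 你好"}]
print(build_prompt(messages))  # prompt string ending in "Assistant:"
```

For multi-turn conversations, simply pass the accumulated user/assistant messages; the trailing `Assistant:` cue is appended automatically, matching the examples above.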
{"language": ["en", "zh"], "license": "apache-2.0", "tags": ["medical"]}
winninghealth/WiNGPT2-Llama-3-8B-Chat-GGUF
null
[ "gguf", "medical", "en", "zh", "license:apache-2.0", "region:us" ]
null
2024-04-24T06:35:28+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep31
null
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:35:30+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:36:20+00:00
null
null
{}
Daniel-007/outputs2
null
[ "region:us" ]
null
2024-04-24T06:36:23+00:00
null
null
{}
shalong81818/test_submission
null
[ "region:us" ]
null
2024-04-24T06:36:25+00:00
null
null
{"language": ["hi"]}
ayushyoddha/ayush_bhai
null
[ "hi", "region:us" ]
null
2024-04-24T06:36:32+00:00
null
transformers
# Uploaded model - **Developed by:** Akirami - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Akirami/llama3-8b-orpo-truthy
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:38:05+00:00
null
null
{}
biotechnology/CiPA
null
[ "region:us" ]
null
2024-04-24T06:38:08+00:00
null
null
{}
Megatron17/etfit-paraphrase-mpnet-base-v2-sst2
null
[ "region:us" ]
null
2024-04-24T06:39:14+00:00
automatic-speech-recognition
peft
# whisper-medium-5k

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1389

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Example 1:

- Original sentence: 集団内のすべての個体が特定の表現形質に関して同一である場合 それらは単形性と呼ばれます。 (When all individuals in a population are identical with respect to a particular phenotypic trait, they are called monomorphic.)
- sin2piusc/whisper-medium-5ksteps: 集団内のすべての個体が特定の表現形質に関して同一である場合 それらは単形性と呼ばれます (When all individuals in a population are identical with respect to a particular phenotypic trait, they are called monomorphic.)
- openai/whisper-medium: 集団内のすべての個体が特定の表現形式に関して同一である場合、それらは単形性と呼ばれます。 (If all individuals in a population are identical with respect to a particular form of expression, they are called monomorphic.)

Example 2:

- sin2piusc/whisper-medium-5ksteps: When I drink alcohol, I can become quite unsightly, so I ordered a glass of water and stopped drinking.
- openai/whisper-medium: I don't like drinking alcohol, so I asked for water and avoided it.
- Original sentence: I can be quite unsightly when I'm drunk, so I abstained from alcohol and mainly drank water.

## Training procedure

Trained on a laptop running Windows.
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.3 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3706 | 0.3697 | 200 | 1.1454 | | 0.7963 | 0.7394 | 400 | 0.5219 | | 0.2503 | 1.1091 | 600 | 0.2178 | | 0.2062 | 1.4787 | 800 | 0.2005 | | 0.1867 | 1.8484 | 1000 | 0.1869 | | 0.1644 | 2.2181 | 1200 | 0.1738 | | 0.1501 | 2.5878 | 1400 | 0.1630 | | 0.1386 | 2.9575 | 1600 | 0.1524 | | 0.1186 | 3.3272 | 1800 | 0.1458 | | 0.1086 | 3.6969 | 2000 | 0.1424 | | 0.1019 | 4.0665 | 2200 | 0.1364 | | 0.0871 | 4.4362 | 2400 | 0.1347 | | 0.085 | 4.8059 | 2600 | 0.1326 | | 0.0746 | 5.1756 | 2800 | 0.1336 | | 0.0729 | 5.5453 | 3000 | 0.1312 | | 0.0688 | 5.9150 | 3200 | 0.1316 | | 0.0598 | 6.2847 | 3400 | 0.1328 | | 0.0574 | 6.6543 | 3600 | 0.1340 | | 0.0598 | 7.0240 | 3800 | 0.1336 | | 0.0481 | 7.3937 | 4000 | 0.1356 | | 0.0514 | 7.7634 | 4200 | 0.1366 | | 0.0465 | 8.1331 | 4400 | 0.1382 | | 0.0428 | 8.5028 | 4600 | 0.1378 | | 0.043 | 8.8725 | 4800 | 0.1384 | | 0.0425 | 9.2421 | 5000 | 0.1389 | ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.2.2+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
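As a quick sanity check, the derived quantities follow directly from the hyperparameters listed above (a sketch; the variable names are illustrative, mirroring the names in the hyperparameter list):

```python
# Relating the listed training hyperparameters (illustrative variable names).
train_batch_size = 8
gradient_accumulation_steps = 4
training_steps = 5000
warmup_ratio = 0.3

# Effective (total) train batch size per optimizer step:
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # matches the reported total of 32

# With a warmup ratio of 0.3, the linear scheduler warms up for this many steps
# before decay begins:
warmup_steps = int(training_steps * warmup_ratio)
print(warmup_steps)
```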
{"language": ["ja"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer", "ctranslate2"], "datasets": ["google/fleurs", "mozilla-foundation/common_voice_16_1", "joujiboi/japanese-anime-speech"], "metrics": ["wer"], "base_model": "openai/whisper-medium", "pipeline_tag": "automatic-speech-recognition", "model-index": [{"name": "whisper-medium-5k", "results": []}]}
sin2piusc/whisper-medium-5k-jp
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "ctranslate2", "automatic-speech-recognition", "ja", "dataset:google/fleurs", "dataset:mozilla-foundation/common_voice_16_1", "dataset:joujiboi/japanese-anime-speech", "base_model:openai/whisper-medium", "license:apache-2.0", "region:us" ]
null
2024-04-24T06:39:26+00:00
text-classification
setfit
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | 
|:------|:---------|
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'[0:00:00] null: \n\n[0:00:03] spk_0: hello हाँ very good afternoon 
sir विशाल महाले जी से बात हो रही हैं हाँ बोल रहा। विशाल sir धीरज बोल रहा हूँ i d f c फस्ट bank sir you account relationship manager हाँ बोलिये हाँ! जैसे कि sir आपका saving account हैं sir dfc फस्ट bank में तो आपके account के लिए मुझे relationship manager assign किया हैं sir okay so आपको कुछ भी account related help या कुछ भी assistance required हैं sir bank की तरफ से तो आपको जिस number पर direct ही print कर सकते हो okay ये number save कर लेते धीरज name से okay ये number को add i prefer bank के app में भी हो जाएगा। वहाँ pay मेरा name official number और official mail id mentioned हैं तो वहां से भी आप मुझे verify कर सकते हो okay okay okay और जैसे की आप saving account hold करो sir कुछ update देने से आपके account related तो अभी आइडीएफसी फस्ट bank में saving account and fixed deposit के जो interest rate हैं वो increase की हैं। sir okay so अभी आप आपको saving account pay up to seven percent annual interest rate मिल जाता हैं sir जो आपके account में month wise जो आपके account में credit हो जाता हैं। okay monthly basis pay credit हो जाता हैं जो अधर bank के साथ अगर compare करोगे अधर bank आपको तीन महीने में एक बार interest provide करती हैं। ऐ dfc फस्ट तक आपको month wise interest provide करती हैं। sir okay तो compounding की वजह से आपको यहाँ pay maximum interest rate मिल जाता secondly for fix deposit जो maximum interest चल रहा हैं वो seven point five percent interest rate चल रहा हैं further tenure one year one day two five fifty days okay तो अगर आप fix deposit भी करते हो तो आपको maximum interest rate मिलेगा। secondly family में कोई senior citizen होंगे तो senior citizen को point five percent extra मिल जाता हैं तो उनको eight percent annual interest rate मिल जाएगा okay okay और जैसे कि आपका account हैं sir family members का भी account हैं हमारे साथ नहीं नहीं open करने हैं विशाल sir नहीं बताता हूँ मैं आपको करना okay क्योंकि जैसे की आपके parents की भी अगर account होंगे other banks में okay तो जैसे की अधर bank से ज्यादा आपको idea first bank आपको maximum interest rate provide कर रहे हैं 
okay तो यहाँ pay आप account open कर सकते हो। ठीक हैं and yeah primary bank यूज़ करो sir idea is called an order को use कर रहे sir नहीं हैं ना बहुत सारे phone। okay तो मतलब अगर कोई other banks में sir funds maintain कर रहे होंगे। okay तो आप इस bank में जो deep sea bank के सिमी account में आप fund transfer कर सकते हो तो आपका ये sir maximum interest मिल जाएगा। आपका ही फायदा हो जाएगा इसमें okay ठीक हैं और visual suggest request कर रहा number save कर लीजिए कभी भी help लगती हैं तो आप मुझे banking working days nine thirty two six thirty आप इस number pay आप मुझे contact करते लेते हैं। okay चलेगा। ठीक हैं okay thank you which। also thank you for banking thirty first and have great day sir thank you।\n\n'</li><li>'[0:00:00] null: \n\n[0:00:04] spk_0: hello good morning every getting is mr kamlesh हाँ जी या company जी मैं chetan बात कर रहा हूं half फस्ट bank से तो ये जो मेरा call तो आपको inform करने के लिए था कि आपका जो f फस्ट bank का जो account हैं उसके लिए मुझे dedicated service manager appoint किया गया हैं। okay तो आज के बाद आपको accountant कोई भी information चाहिए होगी या फिर आपको transaction related कोई भी information चाहिए होगी तो आप मुझे contact कर सकते हैं। मैं आपको help कर दूंगा। company जी, आप हमारे जो mobile banking यहाँ pay वो इस्तेमाल कर रहे हैं। हम्म। okay तो जैसे ही mobile banking app pay आप login करेंगे ना तो first page pay left hand side top corner pay आपको तीन छोटे red lines दिखेंगे आपको वहां click करना हैं जैसे आप वहां pay click करेंगे आपको option आएगा मै relationship team जैसे मै relation team के ऊपर click करेंगे आपको मेरा name चेतन सिंह मेरा number और मेरा email id आपको दिख जाएगा sir okay तो वहाँ से आप मुझे contact कर सकते हैं। इसके अलावा sir आपका जो saving account हैं उसके ऊपर में आपको update देना चाह अगर अभी जो rate of interest हैं वो revise हो चुके हैं। हम आपको offer कर रहे हैं three percent rate of interest per annum अगर आप balance maintain रखते हैं एक लाख तक अगर आप एक लाख से लेके पांच लाख के बीच में balance maintain करते हैं तो हम आपको four percent 
rate of interest per annum provide कर रहे हैं और अगर आप balance रख रहे हैं between five lakhs two twenty five crores तो हम आपको offer कर रहे हैं seven percent rate of interest per annum अच्छा और sir fix deposit के ऊपर भी हम आपको seven point five percent rate of interest per annum provide कर रहे हैं जिसका tenure होना चाहिए। एक साल एक दिन से लेकर दो साल के लिए okay तो complete आप जो dfc फस्ट bank का account use कर रहे हैं। ये आपका primary account हैं या फिर आपका primary account कोई और हैं जिससे आप ज्यादा से ज्यादा transaction करते हैं। daily basis पर वो primary आपका i d f c is really great sir ये मैंने इसके लिए पूछा था अगर आप आपके bank का statement देखेंगे तो हर महीने हमारी bank की तरफ से आपको interest rate का payout आता हैं sir okay तो क्या होता हैं ना की अगर आप इसमें ज्यादा funds रखेंगे इस account में तो आपका जो interest rate का payout हैं sir वो भी आपका बढ़ जाता हैं sir okay इसके अलावा sir हम आपको twenty eight free banking services provide कर रहे हैं। जैसे की अगर आप dfc फस्ट bank का card किसी भी और bank के atm machine में use करेंगे और अगर आपने multiple transaction भी कर लिया ना sir तो भी आपको कोई charges नहीं लगेंगे। अच्छा okay इसके अलावा आपको एसएमएस alert, stop cheque, issue of demand draft, cheque bounce ईसीएस return debit, card issue and ऐसी इत्यादि सेवाएं आपको free of cost में provide की जाती हैं। इसके लिए कोई charges नहीं हैं। अगर आप दूसरे bank में देखेंगे तो इन सारी services के लिए charges हैं but हमारी bank आपको ये सब charges नहीं देती। sir आप हमारा जो debit card हैं वो इस्तेमाल करते हैं। sir okay तो मैं आपको बता दूँ sir अभी फिलहाल तो debit card pay बहुत से offer चल रहे हैं। जैसे अगर आप हमारा debit card insta mart pay यूज़ करते हैं तो आपको flat hundred rupees off मिलता ऑन minimum purchase of six ninety nine अगर आप d mart में यूज़ करते हैं तो आपको five percent cashback मिलता हैं। up to rupees two fifty और अगर आप यात्रा dot com से flight book करना चाहते हैं या फिर trip से book करना चाहते हैं तो card की मदद से आपको twelve percent instant discount मिल जाएगा। 
sir upto rupees one thousand two hundred जिसका minimum purchase होना चाहिए। five thousand sir food and dining pay भी बहुत से offers हैं जैसे zomato pay हैं dominos पर हैं dominos pay flat one fifty off हैं अगर आप seven hundred के ऊपर order करते हैं फिर wow momos हैं फिर fresh menu barbeque nation ऐसे बहुत से इसपे offer चल रहे हैं sir और अगर आप shopping का देख रहे हैं तो first cry dot com pay भी ना मतलब thirty five percent off चल रहा हैं sir और फिर बाद में tata click you फिर इसके बाद मतलब पूमा pay mucci metro इन सारी चीज़ों pay sir आपको discount मिल रहा तो मैं एक काम करता हूँ ये जो offers हैं आपको email कर देता हूँ तो आपके free time में ना sir एक बार उसको देख लीजिये ठीक हैं okay आपका email id एक बार confirm करेंगे। kamlesh at gmail dot that great <s>। और आपको बताना चाहूँगा अभी आपके mobile app pay आपका communication address email i d contact number nominee details और signature ये update कर सकते हैं manage profile option में app में जाकर sir ठीक okay इसके अलावा और कुछ सहायता कर सकता हूँ मैं आपकी okay sir sir sir। अगली बार अगर मुझे कभी आपको call करके आपके account कोई information देनी हो तो आप से सही समय कब होता हैं? 
बात करने का कोई भी करो okay done sir। okay thank you आपका\n\n'</li><li>"(SPEAKER_00 3.5 - 8.899) Good evening calling from IDFC। Am I speaking to Mr। तरूण कुमार?\n(SPEAKER_01 9.63 - 10.439) Yes you are।\n(SPEAKER_00 11.069 - 18.86) So I am your relationship manager for this account। thirty AM to six PM\n(SPEAKER_01 19.59 - 20.69) Sorry\n(SPEAKER_00 21.54 - 27.379) I will be available from nine thirty AM to six PM on all bank working days as a single point of your contact।\n(SPEAKER_01 28.54 - 30.42) Okay, what is your direct number there?\n(SPEAKER_00 31.28 - 78.389) I'll just share my number with you which you will receive on your registered mobile number। Just hold on please just be on call। See I have shared it। So, apart from that, I would also like to let you know that you have maintained healthy balance in your account, which is great because you must be aware that you will receive interest over it। We offer monthly credit of interest on your saving account and interest is usually paid on quarterly basis by other bank saving account but we in case of our bank you will receive interest or interest। So and not only this, we charge zero fee on twenty-eight most commonly use saving account services including IMPS, NEFT RTGS that I know overall twenty-eight services on which we have absolutely no charges, absolutely free।\n(SPEAKER_01 78.989 - 80.269) Right, right, right I am aware about\n(SPEAKER_00 80.269 - 87.599) that। So, may I know which bank do you prefer the most when it comes to banking apart from IDFC First Bank?\n(SPEAKER_01 88.47 - 89.55) Axis।\n(SPEAKER_00 90.379 - 94.86) Axis bank, May I know what is the interest that you earn from Axis bank or other bank?\n(SPEAKER_01 95.139 - 115.779) I am not getting quite good interest but my transaction are going into the Axis bank ah like like the holding balance is not quite good in the Axis that is in my Kotak bank, okay? 
So, if you are asking about the holding the balance which account I am holding the balances more। So\n(SPEAKER_00 117.12 - 118.26) bank interest interest।\n(SPEAKER_01 118.43 - 119.62) yes\n(SPEAKER_00 120.269 - 126.669) yes because you can visit any of our website of IDFC first bank sorry our website of IDFC first\n(SPEAKER_01 126.669 - 127.019) bank\n(SPEAKER_00 127.19 - 144.74) where we have in saving account session we have ah simulator which is nothing but a calculator where in you could actually calculate the interest that you earn from other bank where is the interest that you are earning from our bank। And I am definitely sure that the interest that you are earning from our bank will be higher and better as compared to other bank।\n(SPEAKER_01 145.059 - 157.289) Okay, I will keep my balances into AFC bank account from the next time from the next month। However I am also looking for one more thing like we have the corporate account with the Kotak bank as well।\n(SPEAKER_00 157.699 - 158.27) Okay।\n(SPEAKER_01 158.5 - 161.46) Current account I mean okay my company bank\n(SPEAKER_00 161.46 - 162.729) account। ah\n(SPEAKER_01 162.899 - 187.47) but we are not quite satisfied with the services of the Kotak bank some most of the time they servers are down okay and our balances are stuck। ah However, we also have the ICICI bank account as well as that we are using as in the current account we also but we also need one more account, okay। ah So, are you also taking care of the current accounts? 
or not current\n(SPEAKER_00 187.47 - 193.52) account yes if you want to open any current account, you can let me know। I will guide you regarding the same।\n(SPEAKER_01 194.029 - 196.339) So, I would like to go ahead with that।\n(SPEAKER_00 196.899 - 200.589) So, I'll just forward your request for current account opening।\n(SPEAKER_01 201.119 - 201.88) hm\n(SPEAKER_00 201.88 - 206.139) तो branch which will be convenient for you will be New Friends colony branch, right?\n(SPEAKER_01 206.949 - 223.169) No, that was the very first branch I believe the NFC branch। ah then the first time we opened the IDFC bank account I remember ah very quite close bank branch is the sector sixty ah to Noida।\n(SPEAKER_00 224.1 - 226.49) sector sixty sixty just hold on।\n(SPEAKER_01 227.139 - 235.339) Sector sixty two and even the sector sixty three as block you guys have a branch that will be more ah near to me।\n(SPEAKER_00 236.059 - 240.6) okay just hold on I just need to check it। sector sixty-two and sixty three ना नोएडा\n(SPEAKER_01 241.19 - 246.41) sector sixty-three would be great। I know that sector sixty three as block you guys have the branch there।\n(SPEAKER_00 249.24 - 249.639) sector\n(SPEAKER_01 249.639 - 252.13) के। हम्म।\n(SPEAKER_00 253.419 - 261.92) six three हम्म। Yes, I am trying to search it, so I will forward this request of yours for current account।\n(SPEAKER_01 262.549 - 262.97) apart from\n(SPEAKER_00 262.97 - 291.649) that I also need to update you about several things just hold on। Sector sixty-three। Okay। So, apart from that, I would just like to let you know that our saving account rate has been revised। So for amount ranging from five lakh to fifty crore it is seven percent per annum for amount ranging from one lakh to five lakh it is four percent per annum and less than one lakh it is three percent per annum।\n(SPEAKER_01 292.29 - 293.609) What is that? what are you talking\n(SPEAKER_00 293.609 - 295.609) about? 
saving account, saving account rate\n(SPEAKER_01 296.609 - 297.25) saving amount\n(SPEAKER_00 297.25 - 318.209) rate। okay not only this our FD rate has also been revised so the best FD rate you can receive is for eighteen months one day to two years which is seven point seven five percent per annum। and senior citizen we have zero point five zero percent extra which brings to eight point two five percent per annum।\n(SPEAKER_01 320.649 - 321.22) हम्म।\n(SPEAKER_00 321.76 - 336.609) and you must be aware that we offer monthly credit of interest on your saving account which are zero fee on twenty most commonly saving account तो other bank usually charges service charges fees on services but we don't have any charges applicable on\n(SPEAKER_01 337.359 - 337.959) the amount is\n(SPEAKER_00 337.959 - 339.88) not applicable on\n(SPEAKER_01 339.88 - 340.16) that\n(SPEAKER_00 341.88 - 342.32) so\n(SPEAKER_01 342.32 - 342.959) the amount is\n(SPEAKER_00 342.959 - 378.829) not applicable for any services you can keep that amount in your account itself so that even on that amount you can earn interest। It will be beneficial for you। Am I audible? Am I audible? Mr। तरूण कुमार, am I audible? Do any of your family members have account? 
to any of your family members account with us। Mr। तरूण कुमार।"</li></ul> | | 0 | <ul><li>'[0:00:00] null: \n\n[0:00:08] spk_0: good afternoon sir this one is from idea आपसे फस्ट bank speaking with मैं screen कराना। hello madam\n\n[0:00:16] spk_1: बताइये hello हाँ जी बोलिए।\n\n[0:00:22] spk_0: sir ये madam का account हैं ना account के लिए मैं\n\n[0:00:25] spk_1: relationship\n\n[0:00:27] spk_0: manager मैंने अपने आप को introduce करने के लिए और updates देने के लिए call किया था। sir bank के related thank you sir actually sir ये number तो save कर लीजियेगा अगर जाकर अगर आपको account related queries concern तो financial requirement होगा ना bank से directly मुझे reached कर सकते हैं sir और update ये हैं की sir अभी fixed deposit और recurring deposit के ऊपर rate of interest हुआ हैं\n\n[0:00:58] spk_1: ok\n\n[0:00:59] spk_0: so\n\n[0:01:00] spk_1: one\n\n[0:01:00] spk_0: lakh। august में july first july thirty first\n\n[0:01:07] spk_1: क्या हैं बताना एक बार\n\n[0:01:09] spk_0: तो अगर आप ten lakhs से ज्यादा maintain करते हैं ना sir upto five year तक तो seven percent rate of interest दिया जा रहा हैं। sir interest every month credit होगा sir जैसे कि आप खुद check कर सकते हैं और आपके statement हैं\n\n[0:01:23] spk_1: statement seven।\n\n[0:01:26] spk_0: हां savings account के ऊपर हैं sir अगर minimum amount से लेकर up to ten lakhs तक अगर maintain करेंगे तो आपको four percent interest मिलेगा। sir\n\n[0:01:36] spk_1: ठीक हैं।\n\n[0:01:38] spk_0: तो अगर आप आप अभी अच्छा ये balance maintain कर रहे हो तो अगर आपके पास ideal funds हैं तो आपके account में transfer कर के रखिये sir ताकि every month जो hay interest मिल सके। इसके अलावा अगर आप चाहे तो f d और भी book कर सकते हैं। sir fixed deposit और recurring deposit वहां पर भी आपको seven point five percent interest मिलेगा sir अगर एक साल एक दिन से लेकर आप five fifty से days तक book करेंगे तो\n\n[0:02:04] spk_1: okay okay point ठीक\n\n[0:02:07] spk_0: हैं sir और send\n\n'</li><li>'[0:00:00] null: \n\n[0:00:03] spk_0: hello हां मुझे राजेश good 
afternoon मैं धारा पर एक बात कर रही हूँ bsc फस्ट से कैसे हैं आप? बस बढ़िया राजेश जी आज की आपका जो account हैं यहाँ pay जिनका name हैै जा किशन राजेश मई so इस account की मैं voucher relationship manager assign होती हूँ तो अभी से आपके account में आपको कभी भी कोई भी assistance की requirement होती हैं। ये आपको banking services के regarding आपको help चाहिए होगी तो आप friend या landline number तो मुझे phone कर सकता हैं। i am so sorry sir मेरी आवाज के लिए अभी के लिए। so sorry sir मुझे आपको ये बोलना हैं करके कि मैं जैसे भी आपका account go through कर देती हूँ तो आप काफी अच्छा balance maintain कर रहे हैं हमारे साथ correct एक average आपका sir one lakh के ऊपर show करता मेरे को मैं one one lakh twenty thousand just around for हैं तो यहाँ से मुझे आपको बताना करके की यही चीज़ जो आपकी funding आप bank के साथ रख रहे हो। आपको पता हैं कि आपका monthly interest मिल रहा इसी amount pay आपके average के ऊपर correct तो यहाँ pay sir मैं आपको एक just suggestion share करना चाहती हूँ sir कि जो month का and होता हैं ना। twenty fifth thirty first द month। तो उस time pay sir अगर आप अपना जो fund होगा ठीक हैं जो आपको लग रहा हैं कि चलो ये next month pay करना वो आप अपने account में रख सकते हैं। sir तो जैसे की आपका average increase हो जायेगा तो उसके through आपको interest अच्छा मिल जाएगा। because basically अगर हम बाकी banks के साथ compare करें तो। आपको पता हैं sir की वहाँ pay amount आपको जो interest मिलता हैं वो quarterly मिलता correct एक minute चल रहा हूँ समझ\n\n[0:02:00] spk_1: व्यक्ति ने call करी और छे limit अः मारो call hold पर रख या छे कृपा करी line पर रहो। आप जिस व्यक्ति से बात कर रहे हैं उन्होंने आपके call को hold पर रखा हैं। कृपया line पर बने रहे द person you are speaking with has put your call ऑन hold। please stay online तमिन जी व्यक्ति ने call करी और छे limit अः मारो call hold पर रख या छे कृपा करी, line पर रहो। आप जिस व्यक्ति से बात कर रहे हैं उन्होंने आपके call को hold पर रखा हैं। कृपया line पर बने रहे।\n\n'</li><li>"[0:00:00] null: \n\n[0:00:05] spk_0: Hello.\n\n[0:00:07] spk_1: 
Hello.\n\n[0:00:08] spk_0: Hi. My name is Suzy. Okay. I'm calling you from I D f c first bank. Am I speaking with Mister Mohammad? Babe, Russia?\n\n[0:00:18] spk_1: No, no, our our brother.\n\n[0:00:21] spk_0: Aha!\n\n[0:00:22] spk_1: Hello. Is\n\n[0:00:23] spk_0: he your? brand.\n\n[0:00:27] spk_1: he is my brother.\n\n[0:00:29] spk_0: Oh, okay. May I know who am I speaking to, sir?\n\n[0:00:34] spk_1: My name is Muhammad.\n\n[0:00:37] spk_0: okay?\n\n[0:00:42] spk_1: Hello.\n\n[0:00:43] spk_0: Yes, He's your brother. Okay. Uh, sir just wanted to Ah, ask you that. May I get his number or anything?\n\n[0:00:52] spk_1: what number\n\n[0:00:54] spk_0: customers number Mohammad. Mohammad says. Number can I get I can I get get his number to reach out to him? actually, I'm calling from I. D. F C First Bank. He's a relationship manager for NRI accounts.\n\n[0:01:09] spk_1: Okay, fine. number is not devil.\n\n[0:01:13] spk_0: you don't have\n\n[0:01:14] spk_1: WhatsApp. number only WhatsApp. your call.\n\n[0:01:20] spk_0: Okay? No problem. You can send. Give me the number. Is this the same number available on WhatsApp also?\n\n[0:01:26] spk_1: this is This is my number only.\n\n[0:01:29] spk_0: Oh, okay. So can you tell me his WhatsApp number so I can drop him on WhatsApp message? This is your number?\n\n[0:01:37] spk_1: No, just call them.\n\n[0:01:39] spk_0: I want Sorry, handy man.\n\n[0:01:42] spk_1: no English and Camille\n\n[0:01:44] spk_0: only Tamin. I want Mohammad side Braam Bouchar. Mobile number so I can call him or I can. What's up? Him? Do you have his number?\n\n[0:01:57] spk_1: Yeah, Yeah, avid Bro. This is this number I've given to my brother. Know you were calling this number?\n\n[0:02:03] spk_0: the same number is with your brother also.\n\n[0:02:07] spk_1: No, bro. You calling from phone number? Know something? Double two. Done. This number.\n\n[0:02:12] spk_0: No, sir. This number cannot be contacted. This is the landline number. 
Recording number recorded line. This\n\n[0:02:19] spk_1: is Okay, fine. There is any option number.\n\n[0:02:24] spk_0: Okay. Can you take down my number? My number? nine, Double zero.\n\n[0:02:31] spk_1: wait second.\n\n[0:02:32] spk_0: Yeah, yeah.\n\n[0:02:40] spk_1: tell me.\n\n[0:02:41] spk_0: Nine Double zero\n\n[0:02:43] spk_1: nine Double zero\n\n[0:02:44] spk_0: 431.\n\n[0:02:46] spk_1: 431\n\n[0:02:48] spk_0: 2929\n\n[0:02:51] spk_1: 2949 your name?\n\n[0:02:56] spk_0: Sue Yoga s u Y o G.\n\n[0:03:03] spk_1: Yes. We Y\n\n[0:03:04] spk_0: S u\n\n[0:03:07] spk_1: Y l\n\n[0:03:08] spk_0: Y O\n\n[0:03:11] spk_1: fine.\n\n[0:03:14] spk_0: Okay, Please provide this number to the customer and tell him to call me. I am the relationship manager and I am available on WhatsApp. Also on this number.\n\n[0:03:23] spk_1: Do you want to propose too,\n\n[0:03:25] spk_0: sir? Just wanted to check with the customer that how is his banking going on? Good. Does he require any sort of help from my end or in future? If he needs any help pertaining to the account? I am the one point of contact and relationship manager of his account.\n\n[0:03:42] spk_1: Okay, fine.\n\n[0:03:43] spk_0: Yeah, sir. Can you tell me again? Your name, sir? I am so sorry.\n\n[0:03:48] spk_1: Sorry.\n\n[0:03:50] spk_0: your name is sad. I am\n\n[0:03:52] spk_1: a right. Yes.\n\n[0:03:54] spk_0: Okay. Not a problem, sir. Yes, sir. Thank you so much for giving a valuable time. Please provide this number to the customer. Okay.\n\n[0:04:01] spk_1: Okay. Fine.\n\n[0:04:02] spk_0: Okay. Thank you so much.\n\n"</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.4      |

## Uses

### Direct Use for Inference
First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Megatron17/setfit-paraphrase-mpnet-base-v2-sst2")
# Run inference
preds = model("[0:00:00] null: [0:00:03] spk_0: good afternoon. Am I speaking to supra Priya, ma'am? ma'am, this is Hi, Krishna. Ma'am, Your A newly appointed as your relationship manager from I D S C Falls Bank. [0:00:15] spk_1: okay? [0:00:17] spk_0: Yeah, ma'am. In future. So whenever you have any queries or whenever you have any requirement Okay, you can contact me. So [0:00:24] spk_1: in terms of [0:00:25] spk_0: account related information or if you want to go for any kind of loans, even for investment related suggestions for a building. Yeah. Allow me banking and tell on the name. [0:00:37] spk_1: Yeah, it's [0:00:39] spk_0: Okay, so every month interest is credit out the number. so whatever the balance you maintaining. Okay. So you'll get interest, ma'am. [0:00:52] spk_1: Yeah. Yeah, I think I'm getting it. [0:00:54] spk_0: Okay. And how much rate of interest are you getting at all? [0:00:59] spk_1: Ah, it's okay. I like. I'm not that much into it. My husband take care of all that. [0:01:04] spk_0: Okay. Okay, ma'am. I'll give you the information and please inform to your husband as well. Because all banks are giving a quarterly basis. Interest, ma'am. Okay. So that too only maximum 3%. But only our bank is giving a monthly interest. Credit that the minimum 4% per annum. [0:01:23] spk_1: so [0:01:24] spk_0: war and about 10 lakhs. If you maintain up to Pike Row, we are providing 7% rate of interest, ma'am. again it monthly. Okay, so if you aware about this definitely will maintain a mobile because you'll get a more interest credit year. And even if you have any idle funds, please do fixed deposits. We are giving 7.5 rate of interest for just one year. One day, ma'am. [0:01:49] spk_1: Okay. [0:01:51] spk_0: So your husband also holding account address? [0:01:53] spk_1: No, I don't think so. He is like I don't know about. [0:02:01] spk_0: OK, ma'am. Yeah, but please save my number. Okay? You can able to see my details in net banking and mobile app. So there is an option called my relationship ing. [0:02:10] spk_1: Okay. [0:02:11] spk_0: Yeah. So just save my number, ma'am, okay? Yeah. Thank you, ma'am. Thanks for your time. Have a nice day. ")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median   | Max  |
|:-------------|:----|:---------|:-----|
| Word count   | 137 | 709.3718 | 2510 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 37                    |
| 1     | 41                    |

### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1    | 0.2239        | -               |
| 0.0641 | 50   | 0.2657        | -               |
| 0.1282 | 100  | 0.1753        | -               |
| 0.1923 | 150  | 0.22          | -               |
| 0.2564 | 200  | 0.2309        | -               |
| 0.3205 | 250  | 0.1526        | -               |
| 0.3846 | 300  | 0.3479        | -               |
| 0.4487 | 350  | 0.0581        | -               |
| 0.5128 | 400  | 0.0137        | -               |
| 0.5769 | 450  | 0.0013        | -               |
0.6410 | 500 | 0.005 | - | | 0.7051 | 550 | 0.001 | - | | 0.7692 | 600 | 0.0023 | - | | 0.8333 | 650 | 0.0011 | - | | 0.8974 | 700 | 0.0001 | - | | 0.9615 | 750 | 0.0004 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.0 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
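The word-count figures reported in the Training Set Metrics table (min/median/max over the training texts) are straightforward to recompute for any text collection. A minimal stdlib sketch — the sample texts below are placeholders, not the actual training data:

```python
from statistics import median


def word_count_stats(texts):
    """Return (min, median, max) of whitespace-token counts over a list of texts."""
    counts = [len(t.split()) for t in texts]
    return min(counts), median(counts), max(counts)


# Placeholder examples; the real training set is not bundled with this card.
sample = ["good afternoon ma'am", "please save my number in your phone book"]
print(word_count_stats(sample))
```

Running the same function over the full training split would reproduce the 137 / 709.3718 / 2510 row above.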
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "widget": [{"text": "[0:00:00] null: \n\n[0:00:14] spk_0: sir. This is manna calling from my dear. Thank you for the confirmation, sir. I've been appointed as your dedicated service manager for\n\n[0:00:22] spk_1: your account.\n\n[0:00:24] spk_0: and I request you to save my contact number, which is appearing on your\n\n[0:00:27] spk_1: phone.\n\n[0:00:28] spk_0: and I will be available in all bank working\n\n[0:00:30] spk_1: with from\n\n[0:00:31] spk_0: morning 930\n\n[0:00:32] spk_1: evening 6 p.m.\n\n[0:00:34] spk_0: You can also check my contact number and email I d. On your mobile banking.\n\n[0:00:39] spk_1: on\n\n[0:00:39] spk_0: the left top corner. That will be an option called my relationship\n\n[0:00:44] spk_1: Team. If\n\n[0:00:45] spk_0: you click on, that means you can able to see my\n\n[0:00:47] spk_1: details. and\n\n[0:00:49] spk_0: answer. I would like to inform you one more\n\n[0:00:51] spk_1: thing. Actually,\n\n[0:00:53] spk_0: our IDs see, First Bank has increased to the\n\n[0:00:55] spk_1: both\n\n[0:00:56] spk_0: savings account Interstate as well as fixed\n\n[0:00:58] spk_1: deposit interest rates from this. we\n\n[0:01:01] spk_0: are providing up to 7% per annum for the amount which you are maintaining\n\n[0:01:04] spk_1: in your savings account.\n\n[0:01:06] spk_0: that's also on slap basis.\n\n[0:01:07] spk_1: From on rupee to 10\n\n[0:01:09] spk_0: lakhs, you will get 4% Parana and the amount\n\n[0:01:11] spk_1: which are maintaining\n\n[0:01:12] spk_0: more than 10 lakhs up to five cross you will get 7% Parana. it will get calculated on daily basis and you will get the interest amount on\n\n[0:01:19] spk_1: monthly basis.\n\n[0:01:22] spk_0: and cancer. 
And for fixed deposit, we are providing 7.5% approve, 7.5% the tenure of one year, one\n\n[0:01:29] spk_1: day. highest\n\n[0:01:31] spk_0: across the\n\n[0:01:32] spk_1: industry.\n\n[0:01:35] spk_0: And then, sir, we are providing 28 services of Reaser like an email cashback dots at any at them.\n\n[0:01:41] spk_1: And that is\n\n[0:01:42] spk_0: S M salad charges, cash withdrawal and deposit charges at any branches\n\n[0:01:47] spk_1: with no charges\n\n[0:01:48] spk_0: and\n\n[0:01:48] spk_1: check book\n\n[0:01:49] spk_0: check pounds, that is no check bones\n\n[0:01:51] spk_1: charges and easiest return charges. Like\n\n[0:01:53] spk_0: Wes. We are providing 28 services for free,\n\n[0:01:55] spk_1: sir. Okay.\n\n[0:01:57] spk_0: Are you salad or sell\n\n[0:01:58] spk_1: from purchase? one.\n\n[0:02:01] spk_0: Saturday. so value or maintaining yours Have the account, sir.\n\n[0:02:08] spk_1: Okay,\n\n[0:02:09] spk_0: so you are maintaining. salary accounted HDFC So R I s C first bank is your savings\n\n[0:02:16] spk_1: account. Yeah. I hope\n\n[0:02:18] spk_0: this is your primary accounts are depressed, bank. UN answer. Ah\n\n[0:02:28] spk_1: in Are\n\n[0:02:30] spk_0: you holding any other loans or\n\n[0:02:32] spk_1: something? Any other bank recently? taken, Marcus.\n\n[0:02:39] spk_0: banks.\n\n[0:02:42] spk_1: Federal Bank. Sir,\n\n[0:02:43] spk_0: if you require any kind of thing\n\n[0:02:46] spk_1: means regarding your\n\n[0:02:47] spk_0: account in ideas\n\n[0:02:48] spk_1: First\n\n[0:02:48] spk_0: Bank, I am the one point of contact for all your\n\n[0:02:51] spk_1: banking means. Okay,\n\n[0:02:53] spk_0: sir. Save my official landline number, which is available in your\n\n[0:02:57] spk_1: mobile banking app.\n\n[0:02:59] spk_0: call back if any requirement.\n\n[0:03:03] spk_1: Okay, sir. Thank you. Yeah.\n\n[0:03:06] spk_0: Thanks for banking with IDF. See first Bank Sept. Have a great day, sir. 
Have a great day, sir.\n\n[0:03:11] spk_1: Bye.\n\n"}, {"text": "[0:00:00] null: \n\n[0:00:03] spk_0: good afternoon. Am I speaking to supra Priya, ma'am? ma'am, this is Hi, Krishna. Ma'am, Your A newly appointed as your relationship manager from I D S C Falls Bank.\n\n[0:00:15] spk_1: okay?\n\n[0:00:17] spk_0: Yeah, ma'am. In future. So whenever you have any queries or whenever you have any requirement Okay, you can contact me. So\n\n[0:00:24] spk_1: in terms of\n\n[0:00:25] spk_0: account related information or if you want to go for any kind of loans, even for investment related suggestions for a building. Yeah. Allow me banking and tell on the name.\n\n[0:00:37] spk_1: Yeah, it's\n\n[0:00:39] spk_0: Okay, so every month interest is credit out the number. so whatever the balance you maintaining. Okay. So you'll get interest, ma'am.\n\n[0:00:52] spk_1: Yeah. Yeah, I think I'm getting it.\n\n[0:00:54] spk_0: Okay. And how much rate of interest are you getting at all?\n\n[0:00:59] spk_1: Ah, it's okay. I like. I'm not that much into it. My husband take care of all that.\n\n[0:01:04] spk_0: Okay. Okay, ma'am. I'll give you the information and please inform to your husband as well. Because all banks are giving a quarterly basis. Interest, ma'am. Okay. So that too only maximum 3%. But only our bank is giving a monthly interest. Credit that the minimum 4% per annum.\n\n[0:01:23] spk_1: so\n\n[0:01:24] spk_0: war and about 10 lakhs. If you maintain up to Pike Row, we are providing 7% rate of interest, ma'am. again it monthly. Okay, so if you aware about this definitely will maintain a mobile because you'll get a more interest credit year. And even if you have any idle funds, please do fixed deposits. We are giving 7.5 rate of interest for just one year. One day, ma'am.\n\n[0:01:49] spk_1: Okay.\n\n[0:01:51] spk_0: So your husband also holding account address?\n\n[0:01:53] spk_1: No, I don't think so. He is like I don't know about.\n\n[0:02:01] spk_0: OK, ma'am. 
Yeah, but please save my number. Okay? You can able to see my details in net banking and mobile app. So there is an option called my relationship ing.\n\n[0:02:10] spk_1: Okay.\n\n[0:02:11] spk_0: Yeah. So just save my number, ma'am, okay? Yeah. Thank you, ma'am. Thanks for your time. Have a nice day.\n\n"}, {"text": "[0:00:00] null: \n\n[0:00:02] spk_0: Good afternoon. This is cashing will be calling from I. D. that's all. Hello? I d c forced banks. uh, it was just to share some updates with you the bank account, which you're having the person. So for this bank account, I have been assigned as your dedicated service manager. So if you need any kind of assistance related to your bank account, you can directly connect with me as last time. Also, we had a word, so I hope you have saved this number. sure? Yeah. Okay, great. So if you need any kind of assistance related to your bank account so you can directly connect with me, I'll be a single point of contact for you will be available from 9:30 a.m. to 6 p.m. On all bank working days. Yeah, so am I. And apart from this Na, I'll share my details with you. Why? I text, you will have my WhatsApp number. So requesting you to please save that number as well. So so in feature. If you need any kind of assistance or there would be any urgency so you can connect me through WhatsApp as well, so as it would be my official WhatsApp number. And actually, I have given you a call to inform you. Now our rate of interest has been revised. Previously, we were providing 7% annually rate of interest for the balance of more than 10. Lack up to five crores, sir. But now it has been revised. If you will keep a balance in your bank account, more than five lacs on that you'll be getting 7% annually rate of interest up to 25 clothes. and this interest amount will be getting credited in your bank account on a monthly basis. 
So and apart from this, if you do any fixed deposit, and if you selected 10 years from one year, one day to two years you will be getting 7.5% annually rate of interest for your fixed deposit as well. so Ah, like as you having a bank account as per your current like balance. Lorna, you are getting 4% and will rate of interest. So I would require Suggest you if you have any saving balances. Any other bank account, sir, that you can move over here so, like you will be able to enjoy the higher rate of interest, which we are providing you, sir, from 4% to 7% so which would be beneficial for you. Yeah, sure. And currently, do you need any kind of assistance from my Okay. Okay. No worries, sir. And do you have any, like, phones requirement? Because also, you have a option without any document. Okay. Okay. So no, no worry. I'll just share the benefits with you in future, whenever you need. So you can go for this. So you are having a eligibility of two Lock \u20b999,000. So which is preapproved without any documentation? Instantly will be credited in your bank account. Minimum. You can take it for two months and maximum for 48 months. And whenever you want, you can close the same. So there won't be any four closer charges at all. So? So in feature me, you can go for the same so Yeah. Thank you so much for giving your patience investing. Please feel free to reach out to me in case of any assistance you need. Thank you. Thank you so much. 
Please keep banking with 51st bank.\n\n"}, {"text": "(SPEAKER_00 1.309 - 6.929) \u091c\u0940 good morning sir \u0935\u0940\u0930\u0947\u0902\u0926\u094d\u0930 \u092a\u094d\u0930\u091c\u093e\u092a\u0924\u093f \u092c\u093e\u0924 \u0915\u0930 \u0930\u0939\u093e \u0939\u0942\u0901 IDFC first bank \u0938\u0947 \u0935\u093f\u0915\u093e\u0938 \u0915\u0941\u092e\u093e\u0930 \u091c\u0940 \u0938\u0947 \u092c\u093e\u0924 \u0939\u094b \u0930\u0939\u0940 \u0939\u0948 sir?\n(SPEAKER_01 7.96 - 7.98) \u0939\u093e\u0901\n(SPEAKER_00 8.47 - 17.09) \u091c\u0940 \u0935\u093f\u0915\u093e\u0938 \u091c\u0940 hope you remember sir recently \u0939\u092e\u093e\u0930\u0940 \u092c\u093e\u0924 \u0939\u0941\u0908 \u0925\u0940 regarding \u0906\u092a\u0915\u093e \u091c\u094b IDFC first bank \u092e\u0947\u0902 account open \u0939\u0941\u0906 \u0909\u0938\u0915\u093e \u0938\u0930\u094d\u0935ice manager sir\u0964\n(SPEAKER_01 17.899 - 18.579) \u0939\u093e\u0901 \u092c\u094b\u0932\u093f\u090f\n(SPEAKER_00 18.94 - 25.04) \u091c\u0940 \u091c\u0940 sir \u092e\u0948\u0902\u0928\u0947 \u0906\u092a\u0915\u094b \u090f\u0915 link send \u0915\u093f\u092f\u093e \u0925\u093e sir, account link \u0915\u0930\u0928\u0947 \u0915\u0947 \u0932\u093f\u090f sir\u0964 \u0915\u094d\u092f\u093e \u0906\u092a \u0909\u0938\u0947 link \u0915\u0930\u093e \u0926\u093f\u092f\u093e \u0939\u0948\u0964\n(SPEAKER_01 25.799 - 27.1) \u0928\u0939\u0940\u0902 \u0905\u092d\u0940 link \u0928\u0939\u0940\u0902 \u0939\u094b\u0924\u093e \u0905\u091a\u094d\u091b\u093e\n(SPEAKER_00 27.76 - 66.16) \u0905\u091a\u094d\u091b\u093e \u0915\u094b\u0908 \u092c\u093e\u0924 \u0928\u0939\u0940\u0902 sir \u0905\u092d\u0940 \u0906\u092a \u0905\u092a\u0928\u093e HDFC first bank \u0915\u093e mobile application login \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 \u0924\u094b \u092e\u0948\u0902 guide \u0915\u0930 \u0926\u0947\u0924\u093e \u0939\u0942\u0901 \u0935\u093f\u0915\u093e\u0938 \u091c\u0940\u0964 account link \u0915\u0930\u0928\u0947 
\u0915\u0947 \u0932\u093f\u090f sir \u0905\u0917\u0930 \u0906\u092a \u0915\u094b\u0908 other bank \u0915\u093e account \u092d\u0940 use \u0915\u0930\u0924\u0947 \u0939\u0948 \u0928\u093e sir \u0924\u094b IDFC first bank \u0906\u092a\u0915\u094b \u092f\u0939\u093e\u0901 \u092a\u0947 \u090f\u0915 option \u0926\u0947\u0924\u093e \u0939\u0948\u0964 \u092f\u0939\u093e\u0901 \u092a\u0947 \u0906\u092a other bank accounts \u0915\u094b \u092d\u0940 \u0939\u092e\u093e\u0930\u0947 \u0938\u093e\u0925 link \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0964 \u0914\u0930 secondary bank account \u0915\u0940 details \u092d\u0940 \u0926\u0947\u0916 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 \u091c\u0948\u0938\u0947 \u0915\u0940 \u0907\u0938\u0915\u0947 statement, debit credit balances \u0938\u093e\u0930\u0940\u0964 \u0924\u094b \u0906\u092a\u0915\u094b \u0915\u093e\u092b\u0940 handy \u0939\u094b \u091c\u093e\u0924\u093e \u0939\u0948 \u0935\u094b \u091a\u0940\u091c sir \u0906\u092a single application \u0915\u0947 \u091c\u0930\u093f\u090f \u0906\u092a\u0915\u0947 \u0938\u093e\u0930\u0947 accounts manage \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 Hello\n(SPEAKER_01 67.129 - 67.66) \u0939\u093e\u0901\n(SPEAKER_00 67.889 - 83.19) \u091c\u0940 \u091c\u0940 \u091c\u0940 \u0915\u093e\u092b\u0940 useful process \u0939\u0948 \u0906\u092a \u0907\u0938\u0915\u094b link \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 \u092c\u093f\u0928\u093e \u0915\u094b\u0908 charges \u0915\u0947 sir \u0914\u0930 \u0905\u0917\u0930 \u0906\u092a\u0915\u094b \u092f\u0947 useful \u0932\u0917\u0924\u093e \u0939\u0948 \u0924\u094b \u0906\u092a \u0909\u0938\u0915\u094b as it is continue \u0930\u0916 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 \u0907\u0928 future \u0905\u0917\u0930 \u0906\u092a\u0915\u094b \u0932\u0917 \u0930\u0939\u093e \u0939\u0948 \u0915\u0940 \u0906\u092a\u0915\u094b \u092f\u0947 useful \u0928\u0939\u0940\u0902 \u0932\u0917 \u0930\u0939\u093e \u0939\u0948 
\u0924\u094b \u0906\u092a \u0909\u0938\u0915\u094b unlink \u092d\u0940 \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 totally up to you sir\u0964\n(SPEAKER_01 84.86 - 85.5) \u0920\u0940\u0915 \u0939\u0948 \u0939\u092e\n(SPEAKER_00 86.819 - 88.699) \u0924\u094b \u0905\u092d\u0940 login \u0915\u0930 \u092a\u093e\u090f\u0902\u0917\u0947 application\n(SPEAKER_01 88.699 - 91.569) \u0915\u094b \u0905\u091a\u094d\u091b\u093e\n(SPEAKER_00 91.58 - 98.01) \u0905\u091a\u094d\u091b\u093e \u0915\u094b\u0908 \u092c\u093e\u0924 \u0928\u0939\u0940\u0902 sir \u0924\u094b \u0915\u094b\u0908 suitable time \u092c\u0924\u093e \u092a\u093e\u090f\u0902\u0917\u0947 \u0924\u094b \u092e\u0948\u0902 \u0906\u092a\u0915\u094b \u0909\u0938\u0915\u0947 accordingly call \u0915\u0930 \u0932\u0947\u0924\u093e \u0939\u0942\u0901\u0964 \u0915\u094b\u0908 assistance requirement \u0915\u0930\u0947\n(SPEAKER_01 98.01 - 98.47) \u092e\u0948\u0902 \u0915\u0930 \u0932\u0942\u0901\u0917\u093e\u0964\n(SPEAKER_00 101.62 - 108.019) \u0905\u091a\u094d\u091b\u093e \u0905\u091a\u094d\u091b\u093e \u0920\u0940\u0915 \u0939\u0948 \u0905\u091a\u094d\u091b\u0940 \u092c\u093e\u0924 \u0939\u0948 sir \u0924\u094b \u092e\u0948\u0902 \u0935\u093e\u092a\u0938 \u0938\u0947 \u0906\u092a\u0915\u094b link \u092d\u0947\u091c \u0915\u0947 \u0930\u0916\u0924\u093e \u0939\u0942\u0901\u0964 \u0935\u094b link \u0915\u0947 through \u0906\u092a directly \u0909\u0938 main page \u092a\u0930 \u0906 \u091c\u093e\u090f\u0902\u0917\u0947\u0964\n(SPEAKER_01 108.459 - 109.58) okay\n(SPEAKER_00 110.15 - 114.089) \u092c\u093e\u0915\u0940 \u0905\u092d\u0940 \u0915\u094b\u0908 \u092c\u0948\u0902\u0915\u093f\u0902\u0917 need \u0939\u0948 sir \u091c\u093f\u0938\u092e\u0947\u0902 \u092e\u0948\u0902 \u0915\u094b\u0908 \u0938\u0939\u093e\u092f\u0924\u093e \u0915\u0930 \u0938\u0915\u0924\u093e \u0939\u0942\u0901 \u0906\u092a\u0915\u0940?\n(SPEAKER_01 114.36 - 115.959) \u0928\u0939\u0940\u0902 \u0905\u091a\u094d\u091b\u093e\n(SPEAKER_00 
115.959 - 122.059) \u0915\u094b\u0908 \u092c\u093e\u0924 \u0928\u0939\u0940\u0902 sir \u0915\u094b\u0908 \u092d\u0940 \u092c\u0948\u0902ing need \u0930\u0939\u0947\u0917\u093e \u0924\u094b please call \u0915\u0940\u091c\u093f\u092f\u0947\u0917\u093e \u0907\u0938 same number \u092a\u0947 \u0906\u092a\u0915\u094b call connect \u0939\u094b \u091c\u093e\u092f\u0947\u0917\u093e sir \u0920\u0940\u0915 \u0939\u0948\n(SPEAKER_01 122.279 - 122.739) okay thank you\n(SPEAKER_00 123.44 - 124.29) thank you sir have a nice"}, {"text": "[0:00:00] null: \n\n[0:00:04] spk_0: hello good morning sir hello hello good morning\n\n[0:00:08] spk_1: \u0915\u093f\u0936\u094b\u0930 \u092a\u094d\u0930\u0927\u093e\u0928 \u092c\u093e\u0924 \u0915\u0930 \u0930\u0939\u093e \u0925\u093e i d f c post bank \u0938\u0947 \u092e\u0947\u0930\u0940 \u092c\u093e\u0924 mister \u0906\u0932\u092e \u0928\u0947\u0917\u0940 \u091c\u0940 \u0938\u0947 \u092c\u093e\u0924 \u0939\u094b \u0930\u0939\u0940\n\n[0:00:13] spk_0: \u0939\u0948\u0902\u0964 \u0939\u093e\u0901 \u091c\u0940 \u0939\u093e\u0901 \u091c\u0940 \u0939\u093e\u0901 good morning! 
\u0906\u092a\u0915\u0947\n\n[0:00:15] spk_1: account good morning sir \u092e\u0948\u0902 \u0906\u092a\u0915\u0947 account \u0915\u093e relationship manager \u092c\u093e\u0924 \u0915\u0930 \u0930\u0939\u093e \u0925\u093e\u0964\n\n[0:00:20] spk_0: \u0939\u093e\u0902 introduction call \u0925\u093e okay\n\n[0:00:23] spk_1: bank \u0928\u0947 officially \u092e\u0941\u091d\u0947 appoint \u0915\u093f\u092f\u093e \u0925\u093e sir \u0906\u092a\u0915\u0947 account \u0915\u0947 \u0932\u093f\u090f \u0938\u093e\u0930\u0947 product related services \u0915\u0947\n\n[0:00:28] spk_0: \u0932\u093f\u090f\u0964 \u0924\u094b\n\n[0:00:29] spk_1: \u0939\u094b\u0917\u093e \u0928\u093e\u0964\n\n[0:00:31] spk_0: \u0939\u093e\u0902 \u0906\u092a\u0915\u0947 mobile application\n\n[0:00:33] spk_1: pay \u092d\u0940 visible \u0930\u0939\u0947\u0917\u093e\u0964 in \u092e\u0948 pressure shifting\n\n[0:00:36] spk_0: \u0906\u092a\u0915\u094b official\n\n[0:00:37] spk_1: mail id with me official number and official whatsapp detail \u0915\u0947 \u0938\u093e\u0925 \u0906\u092a\u0915\u094b \u092e\u093f\u0932 \u091c\u093e\u090f\u0917\u093e\u0964 \u0920\u0940\u0915\n\n[0:00:43] spk_0: \u0939\u0948\u0902\n\n[0:00:45] spk_1: \u0938\u093e\u0925 \u0939\u0940 \u0938\u093e\u0925 \u091c\u0948\u0938\u0947 \u0939\u0940 \u0939\u092e\u093e\u0930\u093e conversation and \u0939\u094b\u0924\u093e \u0939\u0948\u0902 sir \u0906\u092a\u0915\u094b \u090f\u0915 memes \u0915\u0947 through \u092d\u0940 inform \u0915\u093f\u092f\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0902 \u0915\u093f \u0926\u091f \u092e\u0948\u0902 \u0906\u092a\u0915\u0947 \u0906 \u0930\u0939\u093e \u0939\u0942\u0902 \u0906\u092a \u092e\u0947\u0930\u093e \u091c\u094b detail \u0939\u0948\u0902 \u0905\u092a\u0928\u0947 phone book \u092e\u0947\u0902 save \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 clicking only\n\n[0:00:54] spk_0: message\u0964\n\n[0:00:56] spk_1: \u0906\u092a\u0915\u094b \u0906\u092a\u0915\u0947 product related account 
related services related\n\n[0:01:00] spk_0: \u091c\u094b \u0915\u0940\n\n[0:01:00] spk_1: \u0906\u092a\u0915\u094b requirement \u0930\u0939\u0924\u0940 \u0939\u0948\u0902 \u0924\u094b \u0906\u092a \u092e\u0941\u091d\u0947 call\n\n[0:01:02] spk_0: \u0915\u0930\n\n[0:01:03] spk_1: \u0938\u0915\u0924\u0947 \u0939\u0948\u0902 connect \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\n\n[0:01:04] spk_0: \u092e\u0947\u0930\u093e \u091c\u094b working\n\n[0:01:05] spk_1: nine thirty eight two six \u092a\u0940\u090f\u092e \u0915\u093e \u0939\u0948\u0902 sir\n\n[0:01:07] spk_0: bank working days \u092e\u0947\u0902 sir okay \u0920\u0940\u0915 \u0939\u0948\u0902\n\n[0:01:11] spk_1: and \u0906\u092a\u0915\u093e saving account \u092d\u0940 \u0906\u092a \u092c\u0939\u0941\u0924 \u0938\u092e\u092f \u0938\u0947 \u0939\u092e\u093e\u0930\u0947 \u0938\u093e\u0925 \u091c\u0941\u095c\u0947 \u0939\u0941\u090f \u0939\u0948\u0902\u0964 sir \u0926\u094b \u0939\u095b\u093e\u0930 \u092c\u0940\u0938 \u0938\u0947 \u0906\u092a\u0915\u093e account \u0939\u0948\u0902 right\n\n[0:01:16] spk_0: hope\n\n[0:01:18] spk_1: \u092f\u0947 \u0906\u092a\u0915\u093e \u091c\u094b account \u0939\u0948\u0902 \u092f\u093e primary account \u0939\u0948\u0902 sir\n\n[0:01:22] spk_0: \u092f\u0947\n\n[0:01:22] spk_1: account \u0906\u092a\u0915\u093e primary account \u0939\u0948\u0902\n\n[0:01:23] spk_0: sir \u0939\u093e\u0901 \u091c\u0940! 
okay\n\n[0:01:26] spk_1: and \u092b\u093f\u0932\u0939\u093e\u0932 \u0905\u092d\u0940 \u0906\u092a job \u0915\u0930 \u0930\u0939\u0947\n\n[0:01:28] spk_0: \u0939\u0948\u0902\n\n[0:01:28] spk_1: business \u0935\u0917\u0948\u0930\u0939 run \u0915\u0930 \u0930\u0939\u0947\n\n[0:01:29] spk_0: \u0939\u0948\u0902 \u091c\u094b salary account \u0914\u0930 \u0906\u092a\u0915\u093e right corporate corporate account \u092e\u0947\u0902 salary\n\n[0:01:38] spk_1: account \u0915\u0941\u091b benefits \u092e\u0948\u0902 \u0906\u092a\u0915\u094b \u092c\u0924\u093e \u0926\u0947\u0928\u093e\n\n[0:01:40] spk_0: \u091a\u093e\u0939\u0942\u0901\u0917\u093e sir \u0907\u0938 account \u092e\u0947\u0902 \u0906\u092a\u0915\u094b rate of\n\n[0:01:42] spk_1: interest \u091c\u094b \u0939\u0948\u0902 \u0928\u093e sir annual \u092e\u0947\u0902 sir saving me more than ten lakh rupees \u0905\u0917\u0930 account \u092e\u0947\u0902 fund \u0905\u0917\u0930 \u0939\u094b\u0924\u093e \u0939\u0948\u0902 sir \u0906\u092a\u0915\u0947 \u0924\u094b\n\n[0:01:49] spk_0: \u0906\u092a\u0915\u094b annually\n\n[0:01:50] spk_1: seven seven percent \u0915\u093e rate of\n\n[0:01:51] spk_0: interest \u092e\u093f\u0932\u0924\u093e \u0939\u0948\u0902 \u0906\u092a\u0915\u0940 saving \u092f\u0947 \u091c\u094b\n\n[0:01:54] spk_1: amount \u0939\u094b\u0924\u093e \u0939\u0948\u0902 monthly basis pay\n\n[0:01:55] spk_0: \u0906\u092a\u0915\u094b\n\n[0:01:56] spk_1: credit \u0939\u094b\u0924\u093e\n\n[0:01:56] spk_0: \u0939\u0948\u0902 and ten\n\n[0:01:58] spk_1: lakh rupees \u0905\u0917\u0930 \u0906\u092a\u0915\u0947 account \u092e\u0947\u0902 fund available \u0939\u0948\u0902 \u0905\u0917\u0930 \u0906\u092a book \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 \u0906\u092a\u0915\u0947 account primary account \u0938\u0947 \u0924\u094b four percent\n\n[0:02:04] spk_0: \u0906\u092a\u0915\u093e \u0906\u092a\u0915\u094b annually rate of\n\n[0:02:05] spk_1: interest monthly\n\n[0:02:07] spk_0: basis pay\n\n[0:02:07] spk_1: 
\u0906\u092a\n\n[0:02:08] spk_0: credit credit \u0915\u0930\u0915\u0947 \u092e\u093f\u0932\u0924\u093e\n\n[0:02:09] spk_1: \u0939\u0948\u0902 sir\n\n[0:02:10] spk_0: okay above ten lakh seven percent\n\n[0:02:14] spk_1: above ten lakhs it will be seven percent sir\n\n[0:02:16] spk_0: okay\n\n[0:02:17] spk_1: \u0905\u0917\u0930 \u0906\u092a comparison \u0926\u0947\u0916\u0947\u0902\u0917\u0947 \u0928\u093e sir \u0924\u094b \u0906\u092a \u092c\u0939\u0941\u0924 \u091c\u094d\u092f\u093e\u0926\u093e save \u0915\u0930 \u0932\u0947\u0924\u0947 \u0939\u0948\u0902\u0964 \u0906\u092a\u0915\u0940 saving account \u092e\u0947\u0902 \u0939\u0940 sir \u0905\u0917\u0930 \u0906\u092a \u0915\u0939\u0940\u0902 investment \u0905\u0917\u0930 \u0928\u0939\u0940\u0902 \u0915\u0930 \u0930\u0939\u0947 \u0939\u0948\u0902 \u091c\u0948\u0938\u0947 \u0915\u0940 market \u0935\u0917\u0948\u0930\u0939 pay sir because\n\n[0:02:26] spk_0: \u0906\u092a\u0928\u0947 app \u0915\u0941\u091b\n\n[0:02:27] spk_1: \u0939\u0948\u0902 right \u091c\u092c\n\n[0:02:29] spk_0: \u092d\u0940 \u0939\u0941\u0906 \u0939\u0948\u0902 \u0905\u092d\u0940 \u0906\u092a\u0915\u093e\n\n[0:02:31] spk_1: \u090f\u0915 running \u092e\u0947\u0902 \u092d\u0940 \u091a\u0932 \u0930\u0939\u093e\n\n[0:02:32] spk_0: \u0939\u0948\u0902\n\n[0:02:33] spk_1: and \u0926\u0947\u0916\u0947\u0902\u0917\u0947 has two market \u092e\u0947\u0902 \u091c\u094b banks available \u0939\u0948\u0902 \u092b\u093f\u0932\u0939\u093e\u0932\n\n[0:02:37] spk_0: \u0909\u0928\u0938\u0947 \u0939\u092e better\n\n[0:02:38] spk_1: provide \u0915\u0930 \u0930\u0939\u0947 \u0939\u0948\u0902\u0964 sir seven point five zero annual\n\n[0:02:40] spk_0: rate of interest \u0939\u0948\u0902\n\n[0:02:41] spk_1: sir one year plus one day \u0915\u093e \u092d\u0940 \u0905\u0917\u0930 \u0906\u092a fd book \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 \u0924\u094b\n\n[0:02:46] spk_0: \u0915\u093f\u0924\u0928\u093e \u0939\u0948\u0902\n\n[0:02:47] spk_1: one\n\n[0:02:47] spk_0: year 
one day \u0915\u093e \u0915\u093f\u0924\u0928\u093e and\n\n[0:02:50] spk_1: \u092c\u0924\u093e \u0926\u0947 \u0930\u0939\u093e \u0939\u0942\u0901 one year plus one day two eighteen month it will be seven point five zero\n\n[0:02:56] spk_0: okay okay and\n\n[0:02:58] spk_1: \u0939\u092e \u0925\u094b\u095c\u093e additional provide \u0915\u0930\u0924\u0947 \u0939\u0948\u0902\u0964 senior citizen \u0915\u094b sir\n\n[0:03:01] spk_0: senior citizen account\n\n[0:03:02] spk_1: \u0905\u0917\u0930 \u0906\u092a\u0915\u0947 \u0918\u0930 \u092e\u0947\u0902 \u0915\u094b\u0908 senior citizen member \u0939\u0948\u0902, \u0906\u092a account open \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 sir \u0909\u0928\u0915\u094b \u091c\u094b fd rates \u0939\u0948\u0902 \u0935\u094b eight percent \u0939\u0948\u0902\u0964 sir one year\n\n[0:03:09] spk_0: plus one day \u0915\u093e okay \u0920\u0940\u0915 document documents required for senior\n\n[0:03:17] spk_1: citizen require \u0906\u0927\u093e\u0930 card pan card\n\n[0:03:20] spk_0: \u0906\u092a\u0915\u094b \u0915\u0939\u0940\u0902\n\n[0:03:20] spk_1: visit \u092d\u0940 \u0915\u0930\u0928\u0947 \u0915\u0940 requirement\n\n[0:03:22] spk_0: \u0928\u0939\u0940\u0902 \u0939\u094b\u0924\u0940 \u0939\u0948\u0902 because \u0939\u092e senior citizen\n\n[0:03:24] spk_1: \u0905\u0917\u0930 account open \u0915\u0930\u0928\u093e \u0939\u094b\u0924\u093e \u0939\u0948\u0902 \u0924\u094b \u0939\u092e account \u0906\u092a\u0915\u0947 place pay visit \u0915\u0930\u0915\u0947 account open \u0915\u093f\u092f\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0902\u0964 sir\n\n[0:03:30] spk_0: and \u0909\u0928\u0915\u094b\n\n[0:03:31] spk_1: \u0930\u094b step banking facility day one \u0938\u0947 \u0939\u092e provide \u0915\u0930\u0924\u0947 \u0939\u0948\u0902 sir except, cash, withdraw \u0914\u0930 cash deposit\n\n[0:03:41] spk_0: \u0920\u0940\u0915 \u0939\u0948\u0902\n\n[0:03:42] spk_1: \u091c\u094b \u092c\u093e\u0915\u0940 
\u0938\u093e\u0930\u0940 services \u0930\u0939\u0924\u0940 \u0939\u0948\u0902\u0964 sir account related \u091a\u093e\u0939\u0947 \u0935\u094b check pick up \u0939\u094b, signature update \u0939\u094b, \u0920\u0940\u0915 \u0939\u0948\u0902 \u092f\u093e \u091b\u094b\u091f\u0947 \u0938\u0947 \u091b\u094b\u091f\u0947 \u091a\u0940\u091c\u094b\u0902 \u0915\u0947 \u0932\u093f\u090f \u092d\u0940 \u0905 \u0905\u0917\u0930 \u0909\u0928\u094d\u0939\u0947\u0902 \u0905\u0917\u0930 requirement \u0939\u0948\u0902 \u0918\u0930 \u0906\u0915\u0947 \u0938\u092e\u091d\u093e\u092f\u093e \u091c\u093e\u092f\u0947\u0917\u093e \u092c\u0924\u093e\u092f\u093e \u091c\u093e\u090f \u0935\u094b\n\n[0:03:54] spk_0: \u092d\u0940 in\n\n[0:03:55] spk_1: case in case of \u0905\u0917\u0930 branch \u0935\u094b visit \u0915\u0930\u0928\u093e \u092d\u0940 \u091a\u093e\u0939\u0924\u0947 \u0939\u0948\u0902 \u0928\u093e sir \u0924\u094b \u0909\u0928\u0915\u094b \u092a\u094d\u0930\u0947\u092b\u0930\u0947\u0902\u0936\u093f\u092f\u0932 \u0926\u093f\u092f\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0902 sir for\n\n[0:04:02] spk_0: example\n\n[0:04:03] spk_1: \u091a\u093e\u0930 \u0932\u094b\u0917 \u0905\u0917\u0930 \u092a\u0939\u0932\u0947 \u0938\u0947 line \u092e\u0947\u0902 \u0924\u094b \u0909\u0928\u0915\u094b \u0939\u0940 \u0938\u092c\u0938\u0947 \u092a\u0939\u0932\u0947 \u0939\u092e attend \u0915\u0930\u0947\u0902\u0917\u0947\u0964\n\n[0:04:07] spk_0: okay okay \u092f\u0947 \u0938\u093e\u0930\u0940\n\n[0:04:08] spk_1: \u091a\u0940\u095b\u0947 bank \u092e\u0947\u0902 \u0906\u092a\u0915\u094b provide\n\n[0:04:10] spk_0: so in case of any information of any query we can talk to you\n\n[0:04:15] spk_1: yes yes yes\n\n[0:04:16] spk_0: \u092e\u0948\u0902\n\n[0:04:17] spk_1: number und details in your mobile application also in relation shipping\u0964 \u0914\u0930 \u091c\u0948\u0938\u0947\n\n[0:04:22] spk_0: \u092e\u0948\u0902\u0928\u0947 \u0906\u092a\u0915\u094b\n\n[0:04:22] spk_1: \u092c\u0924\u093e\u092f\u093e 
\u0915\u093f \u090f\u0938\u0964 \u090f\u092e\u0964 \u090f\u0938\u0964 \u092d\u0940 \u0906\u092a\u0915\u094b \u092e\u093f\u0932 \u091c\u093e\u090f\u0917\u093e\u0964 \u090f\u0938\u0964 \u090f\u092e\u0964 \u090f\u0938\u0964 \u092e\u0947\u0902 \u092d\u0940 \u0906\u092a save \u0915\u0930 \u0938\u0915\u0924\u0947 \u0939\u0948\u0902\u0964 detail\n\n[0:04:26] spk_0: \u0920\u0940\u0915 \u0920\u0940\u0915 \u0939\u0948\u0902\u0964 \u0920\u0940\u0915 \u0939\u0948\u0902 thank you thank you so much thank you \u0939\u092e\u094d\u092e\u0964\n\n"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.4, "name": "Accuracy"}]}]}]}
Megatron17/setfit-paraphrase-mpnet-base-v2-sst2
null
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
null
2024-04-24T06:39:33+00:00
text-to-image
diffusers
# AutoTrain SDXL LoRA DreamBooth - DileepPatruni/CarImagesTrainedModel <Gallery /> ## Model description These are DileepPatruni/CarImagesTrainedModel LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use photo of a car on a road to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](DileepPatruni/CarImagesTrainedModel/tree/main) them in the Files & versions tab.
{"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "photo of a car on a road"}
DileepPatruni/CarImagesTrainedModelDreambooth
null
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-24T06:39:41+00:00
null
null
{}
biotechnology/TRPChannels
null
[ "region:us" ]
null
2024-04-24T06:39:49+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
juewang/Meta-Llama-3-4B-mlp-pruned
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:39:54+00:00
null
null
{}
biotechnology/GutMicrobiome
null
[ "region:us" ]
null
2024-04-24T06:41:19+00:00
null
null
{}
Poorvaja/Mbot
null
[ "region:us" ]
null
2024-04-24T06:41:28+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1 This model is a fine-tuned version of [allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2](https://huggingface.co/allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2) on the srh_test66 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["srh_test66"], "base_model": "allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2", "model-index": [{"name": "distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1", "results": []}]}
allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-srh-Step2-v1
null
[ "transformers", "safetensors", "distilbert", "generated_from_trainer", "dataset:srh_test66", "base_model:allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-Step1-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:41:44+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/BioMistral/BioMistral-DARE-NS <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | 
[GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/BioMistral-DARE-NS-GGUF/resolve/main/BioMistral-DARE-NS.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "BioMistral/BioMistral-DARE-NS", "quantized_by": "mradermacher"}
mradermacher/BioMistral-DARE-NS-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:BioMistral/BioMistral-DARE-NS", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:42:17+00:00
null
null
{}
biotechnology/Yeast
null
[ "region:us" ]
null
2024-04-24T06:42:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
santoshsto/mistral-7b-cpp-FINETUNED-16bit
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:42:53+00:00
text-classification
transformers
```python
from transformers import BertTokenizer, AutoModelForSequenceClassification

senti_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained("weip9012/tesla_earningscall_sentiment_analysis")
```

Label mapping: 0: 'Neutral', 1: 'Positive', 2: 'Negative'
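The integer-to-label mapping above can be wrapped in a small post-processing helper; a minimal sketch (the helper name is illustrative, not part of the model repository):

```python
# Map the classifier's highest-scoring class index to its sentiment
# label, using the id-to-label mapping stated above. The helper name
# is illustrative, not part of the model repository.
ID2LABEL = {0: 'Neutral', 1: 'Positive', 2: 'Negative'}

def label_from_logits(logits):
    """logits: a sequence of three raw class scores."""
    predicted = max(range(len(logits)), key=lambda i: logits[i])
    return ID2LABEL[predicted]

print(label_from_logits([0.1, 2.3, -0.5]))  # index 1 scores highest -> Positive
```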
{}
weip9012/tesla_earningscall_sentiment_analysis
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:43:33+00:00
text-generation
transformers
{}
santoshsto/mistral-4x7b-codegen-MOE-16bit
null
[ "transformers", "safetensors", "mixtral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:43:40+00:00
text-generation
transformers
Based on Meta-Llama-3-8b-Instruct, and is governed by Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct We don't know how good this model is exactly in benchmarks since we have not benched this yet, but we think real prompts and usage are more telling anyways. From our testing this model is: - Less Refusals - More Uncensored - Follows requests better - Can reply in requested formats better without adding unnecessary information We are happy for anyone to try it out and give some feedback. You can also try this model on our API at https://www.awanllm.com/ Training: - 2048 sequence length, while the base model is 8192 sequence length. From testing it still performs the same 8192 context just fine. - Trained on a modified and improved version of Cognitive Computations Eric Hartford's Dolphin dataset. https://huggingface.co/datasets/cognitivecomputations/dolphin - Training duration is around 2 days on 2x RTX3090 on our own machine, using 4-bit loading and Qlora 64-rank 128-alpha resulting in ~2% trainable weights. The goal for this model is to have the model less-censored and great at general tasks like the previous dolphin based models by Eric Hartford. We started training this BEFORE they launched their own full weight trained Llama-3-8B-Dolphin-2.9 with their own curated datasets and the newer "Dolphin 2.9" dataset, but we think this model is still a unique take on Llama 3 8B Instruct and the dolphin dataset. https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b The difference with their dolphin 2.9 model is that we train this using Meta's new Llama 3 instruct format and not the regular ChatML format that Dolphin models are usually trained on. This is because we think that it performed better using the format it was originally trained on. 
Instruct format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` Quants: AWQ: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-AWQ GGUF: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-GGUF FP16: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin Exllamav2: 4bpw: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-exl2-h8-4bpw-exl2 8bpw: https://huggingface.co/AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-exl2-h8-8bpw-exl2 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Axolotl Config: ``` base_model: Meta-Llama-3-8B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer train_on_inputs: false group_by_length: false load_in_8bit: false load_in_4bit: true strict: false sequence_len: 2048 bf16: true fp16: false tf32: false flash_attention: true # Data datasets: - path: flan1m-universal-uncensored-system-2048.jsonl type: system_prompt: "" system_format: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n" field_system: system field_instruction: input field_output: output format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" no_input_format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" warmup_steps: 10 dataset_prepared_path: ./last_run_prepared # Iterations num_epochs: 1 saves_per_epoch: 4 # Evaluation val_set_size: 0.01 eval_table_size: eval_table_max_new_tokens: eval_sample_packing: false 
evals_per_epoch: 4 # LoRA output_dir: ./qlora-out adapter: qlora lora_model_dir: lora_r: 64 lora_alpha: 128 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_target_modules: save_safetensors: true # Sampling sample_packing: true pad_to_sequence_len: true # Batching gradient_accumulation_steps: 32 micro_batch_size: 4 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true # Optimizer optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.0002 # Misc early_stopping_patience: resume_from_checkpoint: logging_steps: 1 debug: deepspeed: zero3_bf16.json weight_decay: 0.1 special_tokens: pad_token: <|end_of_text|> ```
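For reference, the instruct format shown above can be assembled with a small string-building helper; a minimal sketch (the function name is illustrative, not part of the model's tooling):

```python
# Illustrative helper that assembles a prompt in the Llama 3 instruct
# format shown above. The special tokens come from the template; the
# function itself is a sketch, not an official API.
def build_llama3_prompt(system_prompt, turns):
    """turns: list of (user_message, assistant_reply_or_None) pairs."""
    prompt = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
              f"{system_prompt}<|eot_id|>")
    for user_msg, assistant_msg in turns:
        prompt += (f"<|start_header_id|>user<|end_header_id|>\n\n"
                   f"{user_msg}<|eot_id|>"
                   "<|start_header_id|>assistant<|end_header_id|>\n\n")
        if assistant_msg is not None:
            prompt += f"{assistant_msg}<|eot_id|>"
    return prompt

# Ends with an open assistant header so the model generates the reply.
print(build_llama3_prompt("You are a helpful assistant.", [("Hello!", None)]))
```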
{"license": "apache-2.0"}
AwanLLM/Meta-Llama-3-8B-Instruct-Dolfin-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T06:43:46+00:00
null
null
{}
biotechnology/Bacterial
null
[ "region:us" ]
null
2024-04-24T06:44:12+00:00
text-generation
transformers
{}
Kushal1804/Llama-2-7b-chat-finetune
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T06:44:31+00:00
text-generation
transformers
- **Developed by:** kevinkawchak - **License:** llama3 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit - **Finetuned using dataset :** zjunlp/Mol-Instructions, cc-by-4.0 - **Dataset identification:** Molecule-oriented Instructions - **Dataset function:** Description guided molecule design The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work. [Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br> A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of created models. Specifically, the molecule-oriented instructions description guided molecule design was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry related questions returned 'SELFIES' structures but with limited accuracy. The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing regarding the appropriate level of compression or hyperparameter adjustments for accurate SELFIES chemical structure outputs is relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and reduced 4-bit size were uploaded to Hugging Face. (4-5) Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. 
This allowed for significantly improved responses to biochemistry related questions, and the resulting models were saved at the following sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [github](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb). References: 1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit 2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions 3) github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb 4) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16 5) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04 @inproceedings{fang2023mol, <br> author = {Yin Fang and<br> Xiaozhuan Liang and<br> Ningyu Zhang and<br> Kangwei Liu and<br> Rui Huang and<br> Zhuo Chen and<br> Xiaohui Fan and<br> Huajun Chen},<br> title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br> for Large Language Models},<br> booktitle = {{ICLR}},<br> publisher = {OpenReview.net},<br> year = {2024},<br> url = {https://openreview.net/pdf?id=Tlsdsb6l9n}}<br> This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "llama3", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "datasets": ["zjunlp/Mol-Instructions"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:zjunlp/Mol-Instructions", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:llama3", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:45:20+00:00
null
null
{}
Eunhyecos/Geshin
null
[ "region:us" ]
null
2024-04-24T06:45:54+00:00
null
null
{}
ai-tools-searchs/chieri
null
[ "region:us" ]
null
2024-04-24T06:47:15+00:00
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5-TTS-Nepali-SLR43-SLR143-SWOS This model is a fine-tuned version of [aryamanstha/speecht5_nepali_oslr43_oslr143](https://huggingface.co/aryamanstha/speecht5_nepali_oslr43_oslr143) on the swos dataset. It achieves the following results on the evaluation set: - Loss: 0.4827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 418 - training_steps: 2615 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4978 | 37.04 | 1000 | 0.4787 | | 0.4742 | 74.07 | 2000 | 0.4827 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
{"language": ["ne"], "tags": ["generated_from_trainer"], "datasets": ["swos"], "base_model": "aryamanstha/speecht5_nepali_oslr43_oslr143", "model-index": [{"name": "SpeechT5-TTS-Nepali-SLR43-SLR143-SWOS", "results": []}]}
rikeshsilwalekg/speecht5_tts_nepali_slr43_slr143_swos
null
[ "transformers", "pytorch", "speecht5", "text-to-audio", "generated_from_trainer", "ne", "dataset:swos", "base_model:aryamanstha/speecht5_nepali_oslr43_oslr143", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:47:23+00:00
null
null
{"license": "cc-by-nc-4.0"}
GAI-LLM/openchat_summary
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-24T06:49:03+00:00
text-generation
transformers
# Uploaded model - **Developed by:** Dionick - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Dionick/llama3-finetuned-alpaca-Digvijay-phutane
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:51:17+00:00
text-generation
transformers
# Uploaded model - **Developed by:** kevinkawchak - **License:** llama3 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit - **Finetuned using dataset :** zjunlp/Mol-Instructions, cc-by-4.0 - **Dataset identification:** Molecule-oriented Instructions - **Dataset function:** Description guided molecule design The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work. [Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br> A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of created models. In specific, the molecule-oriented instructions description guided molecule design was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry related questions returned 'SELFIES' structures but with limited accuracy. The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing regarding the appropriate level of compression or hyperparameter adjustments for accurate SELFIES chemical structures outputs is relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and reduced 4-bit size were uploaded to Hugging Face. (4-5) Update 04/24: The number of training steps were increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. 
This allowed for significantly improved responses to biochemistry-related questions; the resulting models were saved at the following LLM model sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [github](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb). References: 1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit 2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions 3) github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb 4) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16 5) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04 @inproceedings{fang2023mol, <br> author = {Yin Fang and<br> Xiaozhuan Liang and<br> Ningyu Zhang and<br> Kangwei Liu and<br> Rui Huang and<br> Zhuo Chen and<br> Xiaohui Fan and<br> Huajun Chen},<br> title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br> for Large Language Models},<br> booktitle = {{ICLR}},<br> publisher = {OpenReview.net},<br> year = {2024},<br> url = {https://openreview.net/pdf?id=Tlsdsb6l9n}}<br> This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
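As a rough, illustrative sketch (not from the original notebook), the weight-memory savings from 4-bit quantization mentioned above can be estimated from the parameter count alone. The 8.03B figure is the model size reported in this card; the formula covers model weights only and ignores activations, optimizer state, and quantization overhead:

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights alone, in GB (decimal).

    Excludes activations, optimizer state, and quantization overhead.
    """
    return n_params * bits_per_weight / 8 / 1e9

n_params = 8.03e9  # Meta-Llama-3-8B-Instruct parameter count reported above

fp16 = weight_memory_gb(n_params, 16)  # 16-bit weights
int4 = weight_memory_gb(n_params, 4)   # 4-bit quantized weights
print(f"fp16 ~ {fp16:.2f} GB, 4-bit ~ {int4:.2f} GB")  # prints: fp16 ~ 16.06 GB, 4-bit ~ 4.01 GB
```

This back-of-the-envelope ratio (4x smaller weights) is why 4-bit loading fits fine-tuning of an 8B model on a single consumer GPU, before accounting for LoRA's further reduction of trainable parameters.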
{"language": ["en"], "license": "llama3", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "datasets": ["zjunlp/Mol-Instructions"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "dataset:zjunlp/Mol-Instructions", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:llama3", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-24T06:51:34+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llava-1.5-7b-hf-ft-mix-vsft This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Tokenizers 0.19.1
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]}
ar9av/llava-1.5-7b-hf-ft-mix-vsft
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "region:us" ]
null
2024-04-24T06:52:10+00:00
null
null
{}
kanishka7878/codeparrot-ds
null
[ "region:us" ]
null
2024-04-24T06:52:14+00:00
null
null
{}
Aishu1102/gpt-3.5-turbo
null
[ "region:us" ]
null
2024-04-24T06:52:19+00:00
fill-mask
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
mysterySu/dummy-model
null
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:53:07+00:00
null
null
This is a dummy model! Update from local machine!
{"language": ["en"], "license": "mit"}
prabhuvashwin/new-dummy-model
null
[ "en", "license:mit", "region:us" ]
null
2024-04-24T06:54:03+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # basit12/emotion-analysis-with-distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0811 - Validation Loss: 0.1993 - Train Accuracy: 0.9325 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3670 | 0.1846 | 0.925 | 0 | | 0.1398 | 0.1508 | 0.935 | 1 | | 0.1067 | 0.1743 | 0.932 | 2 | | 0.0925 | 0.1543 | 0.9355 | 3 | | 0.0811 | 0.1993 | 0.9325 | 4 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "basit12/emotion-analysis-with-distilbert", "results": []}]}
basit12/emotion-analysis-with-distilbert
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:54:10+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's Files tab for the exact name): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="Kozilek/PPO-MlpPolicy_LunarLanderV2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "271.23 +/- 18.10", "name": "mean_reward", "verified": false}]}]}]}
Kozilek/PPO-MlpPolicy_LunarLanderV2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-24T06:57:09+00:00
null
null
# DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF This model was converted to GGUF format from [`TeeZee/GALAXY-16B-v1.0`](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF --model galaxy-16b-v1.0.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF --model galaxy-16b-v1.0.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m galaxy-16b-v1.0.Q6_K.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]}
DavidAU/GALAXY-16B-v1.0-Q6_K-GGUF
null
[ "gguf", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "en", "dataset:Intel/orca_dpo_pairs", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "dataset:Open-Orca/SlimOrca", "dataset:MinervaAI/Aesir-Preview", "dataset:allenai/ultrafeedback_binarized_cleaned", "license:apache-2.0", "region:us" ]
null
2024-04-24T06:57:25+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-toxic2nontoxic-100-50
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T06:58:12+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: flashus/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
flashus/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-24T06:59:04+00:00
null
null
{}
iamnguyen/OrpoSailor-1.8B-mix
null
[ "safetensors", "region:us" ]
null
2024-04-24T06:59:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
juewang/Meta-Llama-3-2B-mlp-layer-pruned
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:00:23+00:00